Closed Bug 806298 Opened 12 years ago Closed 12 years ago

IonMonkey: 3x slower when creating new array on constructing

Categories

(Core :: JavaScript Engine, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla19

People

(Reporter: h4writer, Assigned: h4writer)

References

(Blocks 1 open bug)

Details

(Whiteboard: [ion:t])

Attachments

(1 file)

Testcase:

function test() {
    new Array();
}
for (var i = 0; i < 10000000; i++) {
    new test();
}

Ion: 2.866s
JM:  0.812s

Funny note: when changing "new test()" to "test()", i.e. a plain function call, IonMonkey is 3 times faster than JM:

Ion: 0.182s
JM:  0.521s
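For reference, a minimal sketch of how the two variants above could be timed in the JS shell. The Date.now()/print harness is an assumption; the report only gives the totals:

function test() {
    new Array();
}

// Constructing call: the slow case in Ion reported above (2.866s vs 0.812s in JM).
var t0 = Date.now();
for (var i = 0; i < 10000000; i++)
    new test();
print("new test(): " + (Date.now() - t0) + " ms");

// Plain call: the same body invoked without |new|, where Ion is ~3x faster than JM.
var t1 = Date.now();
for (var i = 0; i < 10000000; i++)
    test();
print("test():     " + (Date.now() - t1) + " ms");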
Isn't the main difference between "new test()" and "test()" that the former creates twice as many objects?
This sounds like a potential inlining issue. You might want to check the spew produced with IONFLAGS=scripts,mir,inline.
Running perf on new.js (variant that calls new once) and newnew.js (variant that calls new twice) shows some interesting patterns:

(new)
93.36%  js.ion.opt  perf-17432.map     [.] 0x00007f22e4fe5019
 3.39%  js.ion.opt  js                 [.] js_NewStringCopyN(JSContext*, unsigned short const*, unsigned long)
 1.23%  js.ion.opt  [kernel.kallsyms]  [k] unmap_vmas

(newnew)
15.43%  js.ion.opt  js                 [.] UncachedInlineCall(js::VMFrame&, js::InitialFrameFlags, void**, bool*, unsigned int)
 8.41%  js.ion.opt  js                 [.] js::NewObjectWithType(JSContext*, JS::Handle<js::types::TypeObject*>, JSObject*, js::gc::AllocKind)
 7.97%  js.ion.opt  js                 [.] _ZL8EnterIonP9JSContextPN2js10StackFrameEPv.part.149
 7.62%  js.ion.opt  js                 [.] js::ion::CanEnter(JSContext*, JS::Handle<JSScript*>, js::StackFrame*, bool)
 5.51%  js.ion.opt  js                 [.] js::mjit::CallCompiler::update()
 4.87%  js.ion.opt  js                 [.] js_CreateThisForFunctionWithProto(JSContext*, JS::Handle<JSObject*>, JSObject*)
 4.64%  js.ion.opt  perf-17336.map     [.] 0x00007f138a40211e
 4.57%  js.ion.opt  js                 [.] js_CreateThisForFunction(JSContext*, JS::Handle<JSObject*>, bool)
 4.47%  js.ion.opt  js                 [.] js::types::TypeMonitorResult(JSContext*, JS::Handle<JSScript*>, unsigned char*, JS::Value const&)
 3.81%  js.ion.opt  js                 [.] JSCompartment::getNewType(JSContext*, js::TaggedProto, JSFunction*, bool)
 3.06%  js.ion.opt  js                 [.] JSScript::ensureRanAnalysis(JSContext*)
 2.88%  js.ion.opt  js                 [.] js::baseops::GetProperty(JSContext*, JS::Handle<JSObject*>, JS::Handle<JSObject*>, JS::Handle<long>, JS::Mut
 2.68%  js.ion.opt  js                 [.] js::mjit::stubs::UncachedNewHelper(js::VMFrame&, unsigned int, js::mjit::stubs::UncachedCallResult&)
 2.60%  js.ion.opt  js                 [.] js::gc::FinalizeArenas(js::FreeOp*, js::gc::ArenaHeader**, js::gc::ArenaList&, js::gc::AllocKind, js::SliceB
 2.41%  js.ion.opt  js                 [.] js::detail::HashTable<js::ReadBarriered<js::types::TypeObject> const, js::HashSet<js::ReadBarriered<js::type
 2.20%  js.ion.opt  js                 [.] js::RunScript(JSContext*, JS::Handle<JSScript*>, js::StackFrame*)
 2.16%  js.ion.opt  libc-2.13.so       [.] __memcpy_ssse3

So evidently, we are rather bad at running this in Ion at all (it looks like we compile the function 7 times). Since the type information should never change, I can't see any reason that we'd ever need to recompile this function.
Oh yes, and it looks like we never compile the outer loop in Ion in either case.
There are a few bugs like this and I assume they all have the same problem, somehow. crypto.js invalidates :243 and :431 dozens of times. I'm investigating this too.
Blocks: IonSpeed
Summary: Ionmonkey: 3x slower when creating new array on constructing → IonMonkey: 3x slower when creating new array on constructing
Evidence points to:
- IM creates the same asm code for both function versions
- IM doesn't compile the loop because of JSOP_DEFFUN
- An Ion stub isn't created, because we are calling into a constructing function
- But IM deletes the JM stubs
- Without "new Array", ShouldJaegerCompileCallee is true, because the callee is short-running
- With "new Array", ShouldJaegerCompileCallee is false, because of "hasFunctionCalls" (the sketch below spells out the two cases)

=> Without "new Array" the call runs in JM.
=> With "new Array" the call is slow because it runs through the interpreter.

I'll think about/create a solution Monday ;)
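A minimal sketch of the with/without "new Array" contrast described above. The empty-body variant is an assumption of what the "without" case looks like; the ShouldJaegerCompileCallee behavior is as described in the comments, not something this snippet measures:

// With "new Array" in the body: the callee contains a function call, so
// (per the analysis above) ShouldJaegerCompileCallee comes back false and
// the constructing call falls through to the interpreter.
function withArray() {
    new Array();
}

// Without "new Array": assumed otherwise-empty constructor body. The callee
// is short-running, ShouldJaegerCompileCallee comes back true, and the
// constructing call stays on the JM fast path.
function withoutArray() {
}

for (var i = 0; i < 10000000; i++) {
    new withArray();    // slow: JM -> Interpreter -> Ion
    new withoutArray(); // fast: handled in JM
}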
Assignee: general → hv1989
JM -> Ion constructor calls don't get inlined, but in the prologue we still check whether to compile in IM. When this happens the JM chunk is destroyed and we disable JM compilation for that function. Therefore the slow path JM -> Interpreter -> Ion gets used in this case.

For now, disabling the recompile check fixes this problem. No win/loss on SS and V8.

In the future:
- We should consider also inlining JM -> Ion constructor calls. I have no idea why we disable them or how hard it would be to also create this fast path. It should be a win on this benchmark, just like the non-constructing example.
- Or implement JSOP_DEFFUN, so we would start using IM for the loop and the construction would be an IM-to-IM call.
Attachment #680559 - Flags: review?(dvander)
Comment on attachment 680559 [details] [diff] [review]
Disable recompile check for constructing call

Review of attachment 680559 [details] [diff] [review]:
-----------------------------------------------------------------

I think the reason we don't is that IonMonkey requires the caller to create |this|, whereas JM expects it to be created on the callee side. I never got around to implementing the code to create |this| in the IC.
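To make the |this|-creation question concrete: a constructing call roughly does the following, and the comment above is about which side of the JM/Ion call boundary performs the object-creation step. This is a rough JS-level desugaring for illustration, not engine code; the construct() helper is hypothetical:

// Rough JS-level equivalent of |new f(...args)|. Engine internals differ;
// the point is only that someone has to allocate |this| before the body runs.
function construct(f, args) {
    // Step 1: create |this| from the callee's prototype -- the step that
    // IonMonkey expects the caller to do and JM expects the callee to do.
    var thisObj = Object.create(f.prototype);
    // Step 2: run the constructor body with the fresh |this|.
    var ret = f.apply(thisObj, args);
    // Step 3: a constructor returning a non-object yields the fresh |this|.
    return (typeof ret === "object" && ret !== null) ? ret : thisObj;
}

// Usage: behaves like |new Array(4)| for this simple case.
var a = construct(Array, [4]);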
Attachment #680559 - Flags: review?(dvander) → review+
For reference:
Bug 764310: Compile JSOP_DEFFUN
Bug 811218: Inlining JM -> new IM()

I also forgot to include the numbers with the patch applied:
JM: 0.680s
IM: 0.675s

https://hg.mozilla.org/integration/mozilla-inbound/rev/470acc9eaa36
Follow-up bug to unregress kraken-dft: the first patch unintentionally also disabled the JM -> OSR IM fastpath.

https://hg.mozilla.org/integration/mozilla-inbound/rev/93461546766e
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla19