JIT increases GC time in Speedometer3?
Categories
(Core :: JavaScript Engine: JIT, task, P2)
People
(Reporter: smaug, Unassigned)
References
(Depends on 2 open bugs, Blocks 2 open bugs)
Details
(Whiteboard: [sp3])
I was looking at why Window objects of already run tests stayed in the CC graph for a long time. It seems that there is a JS List object which has references to those globals through baseshape_global edges. That List lives in the parent page. (Is it used internally for async iteration or what?)
The List is apparently collected at the end of the test.
From Matrix
jonco: ICs have GC pointers e.g. to shapes to guard against, and ideally these should really be weak pointers but at the moment they are not. My guess is that is where these are coming from
And jandem suggested trying with javascript.options.blinterp = false
With that I didn't see the constantly increasing GC times, but I ran sp3 with the profiler only once with that pref.
Note, we run almost all of the GC/CC slices outside the measured time in sp3, but not quite all. Especially closer to the end of the test one starts to see ALLOC_TRIGGER slices from the JS engine happening during the measured time.
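To make jonco's point concrete, here is a rough sketch of the difference between a strong and a weak shape edge in an IC stub. The types and struct names below are simplified stand-ins, not the actual SpiderMonkey declarations.

// Illustrative only: Shape, GCPtr and WeakHeapPtr stand in for the real
// SpiderMonkey types; this is not the actual stub layout.
struct Shape;                                         // placeholder for js::Shape
template <typename T> struct GCPtr { T ptr; };        // stand-in for a strong GC edge
template <typename T> struct WeakHeapPtr { T ptr; };  // stand-in for a weak GC edge

struct GuardShapeStubStrong {
  // Strong edge: keeps the Shape (and, through its base shape's global
  // edge, the whole Window/global) alive as long as the stub is alive.
  GCPtr<Shape*> shape;
};

struct GuardShapeStubWeak {
  // Weak edge: the GC may clear this once the Shape is otherwise dead, so
  // the IC stops keeping the old global alive; the stub must then be
  // treated as stale and discarded or re-specialized.
  WeakHeapPtr<Shape*> shape;
};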
Reporter
Comment 1•2 years ago
Just to try how https://hg.mozilla.org/try/rev/0140a62198cd8e7c01ecb8e0372b6abe48880c7e would change the numbers.
That is something jandem suggested.
https://treeherder.mozilla.org/perfherder/compare?originalProject=try&originalRevision=87c987cc3b41f52a6ec9c50ab8b9ea584807b87f&newProject=try&newRevision=f1dc13161a03c14b9fa93b140786de384d2d0ef3
Reporter
Comment 2•2 years ago
That doesn't seem to help, or at least not much.
Reporter
Comment 3•2 years ago
Hmm, maybe the List is coming from BaselineCacheIRCompiler.
If it is https://searchfox.org/mozilla-central/rev/841c6d76db55d110d1fd3f8891d0ed1632be7736/js/src/jit/BaselineCacheIRCompiler.cpp#2169-2177, perhaps that code could store the shapes in global/realm-specific lists, and then, when GCing, shouldPreserveJITCode could check whether the DOM has marked the Realm as being destroyed and, if so, the GC would clear the list.
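A minimal sketch of that idea; Realm is a placeholder type here and both helpers are invented names, not existing SpiderMonkey or Gecko APIs.

// Sketch only; helper names are made up for illustration.
struct Realm;                             // placeholder for JS::Realm
bool domSaysRealmIsDead(Realm* realm);    // hypothetical embedding callback
void dropFoldedShapeLists(Realm* realm);  // hypothetical: clear per-realm shape lists

static bool ShouldPreserveJITCodeForRealm(Realm* realm) {
  // If the DOM has already flagged this realm's global as going away, there
  // is no reason to keep its JIT code or the shapes its ICs have accumulated.
  if (domSaysRealmIsDead(realm)) {
    dropFoldedShapeLists(realm);  // drop the ListObjects of IC-guarded shapes
    return false;                 // let the GC discard the realm's JIT code
  }
  return true;
}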
Comment 4•2 years ago
That list is the set of shapes that have been seen at a given IC. Based on the description above, it sounds like there is code in the parent page that runs for every test. Each time, it stores the shape of an object belonging to the current global. Because those shapes have an edge to the global, they keep it alive as long as the IC is alive.
In other words, jonco's guess quoted in the description is pretty much right. The only complication is that the shapes in this particular case are in a ListObject, not in the IC stub itself, because of stub folding (bug 1671228). If we make shape pointers in IC stubs weak, we should also make sure to tweak how we trace this case.
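For illustration, the tracing tweak could amount to a weak sweep over the folded-shape list that drops entries whose shape did not survive the GC. All names below are invented; this is not the actual stub-folding code.

#include <cstddef>

// Sketch only: ShapeListEntry and shapeSurvivedGC are invented names.
struct Shape;
struct ShapeListEntry { Shape* shape; };
bool shapeSurvivedGC(Shape* shape);  // hypothetical "is this shape still live?" check

static void sweepFoldedShapeList(ShapeListEntry* entries, size_t* count) {
  size_t out = 0;
  for (size_t i = 0; i < *count; i++) {
    // Keep an entry only if its shape survived; dead shapes are dropped so
    // the IC no longer holds the shape's global via the baseshape_global edge.
    if (shapeSurvivedGC(entries[i].shape)) {
      entries[out++] = entries[i];
    }
  }
  *count = out;
}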
Comment 5•2 years ago
From Matrix: we could potentially throw away JIT code for all realms in a compartment based on a counter of realms in that compartment that the DOM thinks should be dead.
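A rough sketch of how that could look, with invented names throughout: the DOM bumps a per-compartment counter when it decides a realm's global should be dead, and the GC discards the compartment's JIT code whenever the counter is non-zero.

#include <cstddef>

// Sketch only: none of these names exist in the tree.
struct CompartmentGCHints {
  size_t realmsReportedDeadByDOM = 0;  // bumped by the embedding (the DOM)
};

static bool shouldDiscardJITCodeForCompartment(const CompartmentGCHints& hints) {
  // If any realm in the compartment is known to be dead, throw the JIT code
  // away so ICs and their folded shape lists stop keeping globals alive.
  return hints.realmsReportedDeadByDOM > 0;
}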
Comment 6•1 year ago
(In reply to Olli Pettay [:smaug][bugs@pettay.fi] from comment #1)
> Just to try how https://hg.mozilla.org/try/rev/0140a62198cd8e7c01ecb8e0372b6abe48880c7e would change the numbers.
> That is something jandem suggested.
Because this didn't help much, there might also be a GC scheduling issue. In other words, making the GC edges from Baseline ICs weak probably won't help much on its own, given that discarding JIT code on every GC also didn't affect this.
Comment 7•1 year ago
(In reply to Olli Pettay [:smaug][bugs@pettay.fi] from comment #0)
I noticed that there are about 50 realms alive at the end of an sp3 run, which is much more than would be expected. I tried running with Ion disabled and this reduced the number of realms to around 20. This is still more than expected. Testing with my patches in bug 1837620 did not make any difference here.
GC and CC are being triggered repeatedly, so something is not working properly. In fact GC is happening very frequently and is not freeing up much memory. It looks to me like we are triggering this too aggressively.