Closed
Bug 1450514
Opened 7 years ago
Closed 7 years ago
Cycle collector showing up in large numbers of content hangs after Jan. 30 2018
Categories: Core :: XPCOM (enhancement)
RESOLVED
WORKSFORME
People
(Reporter: dthayer, Unassigned)
Details
(Whiteboard: [bhr][bhr-html])
See https://arewesmoothyet.com/?mode=track&trackedStat=All%20Hangs
Something that landed around Jan. 30 is causing a large number of Cycle Collector hangs to be reported through the BackgroundHangMonitor. Andrew, I recall you mentioning in bug 1447791 that ghost windows were regressed recently and could be causing cycle collector hangs. Should we expect these CC hangs to go away with bug 1447871?
Flags: needinfo?(continuation)
Updated•7 years ago
Whiteboard: [bhr][bhr-html]
Comment 1•7 years ago
Bug 1438211 is what caused that ghost window regression, and that did not land until March. Looking at Telemetry Evolution ( https://mzl.la/2E4uVDt ), the number of ghost windows doesn't look like it changed around January 30, so it would have to be something else causing this. CYCLE_COLLECTOR_MAX_PAUSE definitely got worse, though! ( https://mzl.la/2uLvhiY ) (I filed bug 1450729 about not getting any telemetry alerts for this.)
I don't remember off hand what might have landed around then that could have caused hangs to get worse.
Flags: needinfo?(continuation)
Comment 2•7 years ago
IIRC, BHR records much longer hangs than we see in the CYCLE_COLLECTOR_MAX_PAUSE telemetry (which are around 30 to 70ms). Is that right? I think Olli might have landed some patches earlier this year to do more CC work when idle or something, but I don't know if it was on January 30, and that should not have increased the number of 1-second+ pauses.
Reporter
Comment 3•7 years ago
(In reply to Andrew McCreight [:mccr8] from comment #2)
> IIRC, BHR records much longer hangs than we see in the
> CYCLE_COLLECTOR_MAX_PAUSE telemetry (which are around 30 to 70ms). Is that
> right? I think Olli might have landed some patches earlier this year to do
> more CC work when idle or something, but I don't know if it was on January
> 30, and that should not have increased the number of 1-second+ pauses.
BHR records anything >128ms, and looking at the distribution, most of these are <256ms and almost all of them are <512ms.
Hmm, I don't see anything from Olli that looks like a cause in the push logs.
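The thresholds described in this comment (hangs >128ms, mostly <256ms, almost all <512ms) amount to power-of-two duration buckets. A minimal sketch of that bucketing, assuming the 128ms reporting floor and doubling bucket bounds stated above; the helper name is hypothetical and this is not BHR's actual Gecko implementation:

```python
def bhr_bucket(duration_ms):
    """Classify a hang duration into power-of-two buckets,
    mirroring the thresholds described in the thread.
    Hypothetical helper, not BHR's real (C++) code."""
    if duration_ms <= 128:
        return None  # below the reporting threshold
    bound = 256
    while duration_ms > bound:
        bound *= 2
    return bound  # upper bound of the bucket, in ms
```

Under this sketch, a 200ms hang lands in the 256ms bucket and a 400ms hang in the 512ms bucket, matching the distribution the comment describes.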
Comment 4•7 years ago
Max idle time is 50ms, so that shouldn't affect long pauses.
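The point here is that idle-time CC work runs against a small time budget, so it cannot by itself produce the multi-hundred-millisecond pauses BHR reports. A minimal sketch of budgeted slicing under that assumption; the function and the 50ms constant are illustrative, not the cycle collector's actual implementation:

```python
import time

MAX_IDLE_BUDGET_MS = 50  # idle budget cap mentioned in the comment

def run_idle_cc_slice(work_items, budget_ms=MAX_IDLE_BUDGET_MS):
    """Process work until the time budget is exhausted.
    Hypothetical sketch; the real cycle collector slices its
    graph traversal in C++ inside Gecko."""
    deadline = time.monotonic() + budget_ms / 1000.0
    processed = 0
    while work_items and time.monotonic() < deadline:
        work_items.pop()  # stand-in for one unit of CC work
        processed += 1
    return processed  # items handled in this slice
```

Because each slice bails out at the deadline, a single idle slice stays near 50ms even when the backlog is large; only non-idle, unbudgeted work could account for >128ms hangs.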
Comment 5•7 years ago
(In reply to Doug Thayer [:dthayer] from comment #3)
> Hmm, I don't see anything from Olli that looks like a cause in the push logs.
Could you link the push logs you are looking at? Thanks.
Flags: needinfo?(dothayer)
Reporter
Comment 6•7 years ago
I was just looking at this: https://hg.mozilla.org/mozilla-central/pushloghtml?startdate=2018-01-29&enddate=2018-01-31
It's possible that the range should go out to 2018-02-01, but I don't think so. I'm investigating an issue right now where data seems to show up in the graphs on arewesmoothyet a day later than it should, even though it uses build IDs as the x axis.
Flags: needinfo?(dothayer)
Reporter
Comment 7•7 years ago
This appears to have declined steeply after around 2018-04-16.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WORKSFORME