Closed Bug 1205782
Opened 9 years ago • Closed 2 years ago
Recording allocations causes huge spike in GC activity
Categories: DevTools :: Performance Tools (Profiler/Timeline), defect, P3
Tracking: Not tracked
Status: RESOLVED • Resolution: INVALID
People: Reporter: fitzgen, Unassigned

Description • 9 years ago
This was made super apparent by fixing bug 1204169 and bug 1204584.
When recording a profile without allocations, I'm seeing nursery collections take maybe 1-5% of time and full GCs negligible amounts.
When recording the same actions with allocations, I'm seeing about 30% of time in nursery collections and full GCs taking significant amounts of time (need to profile again because I forget the exact full GC numbers).
We need to investigate and profile more to determine the root cause. I may spin off blocking bugs for each issue identified.
Initial ideas, thoughts, and suspicions:
* SavedFrame stacks are objects and therefore capturing them causes GC pressure. Intuitively, this should be pretty minimal because of the tail sharing. One idea: we know that these objects will probably be around for a while, therefore it might make sense for them to skip the nursery entirely and be allocated in the tenured heap.
* When draining the allocations log, we create an object for each entry in the log plus an array object containing them all. The only way to get around this would be to create a binary format and just hand over a typed array, which we send across the RDP as bulk data and the client decodes (see the sketch after this list). This is doable but fairly heavyweight, so let's not look into it until we have pinpointed these allocations as problematic.
* After draining the logs, the devtools server allocates a few arrays whose length is the total number of allocations recorded (not just the ones drained right now). It then fills those arrays in the frame cache code, using the iteration protocol while doing so. If the iteration protocol's objects aren't optimized away by escape analysis, that is a few more allocations per entry in the log.
* Then we send the resulting packet through the protocol.js stuff. This actually does (a simplified version of) JSON.stringify(JSON.parse(packet)) so there is at least another allocation per entry. Probably more within the protocol.js layer.
* Then we go through the actual transport and connection and all that, where there are probably tons of allocations as well.
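To make the drain-format point above concrete, here is a rough sketch (not the actual server code; the entry fields and function names are illustrative) of the current object-per-entry approach versus the hypothetical typed-array encoding:

```js
// Illustrative only: field names and shapes are simplified, not the
// real Debugger.Memory allocations-log entries.

// Current style: one plain object per log entry plus the array that
// holds them, so every drain allocates O(n) short-lived objects that
// the nursery then has to collect.
function drainAsObjects(log) {
  const packet = [];
  for (const entry of log) {
    packet.push({
      timestamp: entry.timestamp,
      frameIndex: entry.frameIndex,
      size: entry.size,
    });
  }
  return packet; // later re-serialized again by protocol.js
}

// Hypothetical binary alternative: pack the same fields into one typed
// array and ship it as RDP bulk data; the only allocation is the
// backing buffer, and the client decodes it on its side.
function drainAsTypedArray(log) {
  const FIELDS = 3;
  const buffer = new Float64Array(log.length * FIELDS);
  log.forEach((entry, i) => {
    buffer[i * FIELDS + 0] = entry.timestamp;
    buffer[i * FIELDS + 1] = entry.frameIndex;
    buffer[i * FIELDS + 2] = entry.size;
  });
  return buffer;
}
```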
I know we have talked about just building up the allocations on the server before sending them out (right now they are always streamed ASAP IIRC) but haven't actually changed anything AFAIK. This would probably help with the last two bullet points.
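For reference, a minimal sketch of what server-side batching could look like (the class and method names here are made up and do not reflect the real memory actor API):

```js
// Hypothetical sketch only. Instead of emitting a packet per drain
// interval, accumulate entries server-side and flush once, so the
// protocol.js and transport allocations happen once per recording
// rather than once per drain.
class AllocationBatcher {
  constructor(send) {
    this.send = send;   // e.g. a function that emits one RDP packet
    this.pending = [];
  }
  // Streaming (the current behavior) would call this.send(entries)
  // here on every drain.
  onDrain(entries) {
    this.pending.push(...entries);
  }
  // Batching: flush everything when the recording stops.
  stopRecording() {
    this.send(this.pending);
    this.pending = [];
  }
}
```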
Anyways, as I said above, more investigation and profiling is needed.
Comment 2 (Reporter) • 9 years ago
Got rid of about half of the extra overhead in bug 1241311. Leaving the remaining work (investigation of devtools RDP server allocations and bug 1196862) to others.
Assignee: nfitzgerald → nobody
Status: ASSIGNED → NEW
Comment 3 • 9 years ago
Triaging. Filter on ADRENOCORTICOTROPIC (yes).
Priority: -- → P3
Summary: recording allocations causes huge spike in GC activity → Recording allocations causes huge spike in GC activity
Updated • 6 years ago
Product: Firefox → DevTools
Updated • 2 years ago
Severity: normal → S3
Updated • 2 years ago
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INVALID