Closed
Bug 1384178
Opened 7 years ago
Closed 7 years ago
assemblePing can be slow, mostly due to cloneInto
Categories
(Toolkit :: Telemetry, enhancement, P2)
Toolkit
Telemetry
Tracking
RESOLVED
WORKSFORME
Performance Impact: low

| | Tracking | Status |
|---|---|---|
| firefox57 | --- | fix-optional |
People
(Reporter: mstange, Unassigned)
Details
(Keywords: perf)
Here's a profile of 37ms of jank due to assemblePing: https://perfht.ml/2eKVcQg
There's a GC in the reading part of the cloneInto, and the writing part seems to be slow due to CrossCompartmentWrappers.
Comment 1•7 years ago
This is from here:
https://dxr.mozilla.org/mozilla-central/rev/7d2e89fb92331d7e4296391213c1e63db628e046/toolkit/components/telemetry/TelemetryController.jsm#410
We actually had issues before where the payload object (passed in from external / not-core-telemetry call-sites) was modified after being passed into assemblePing() and before being sent out to our servers.
We need to keep making sure this doesn't happen.
Markus, do you know of any alternative approaches here (e.g. more efficient cloning, or freezing this kind of object)?
Or do you know who might?
Flags: needinfo?(mstange)
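(Not from the bug itself, but to illustrate the "freezing" alternative floated above: a minimal sketch of a hypothetical `deepFreeze` helper. Freezing would avoid the clone entirely, at the cost of also making the caller's own object immutable.)

```javascript
// Hypothetical sketch, not Telemetry code: recursively freeze a payload
// so callers cannot mutate it after handing it to assemblePing().
function deepFreeze(obj) {
  for (const value of Object.values(obj)) {
    if (value && typeof value === "object" && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const payload = { histograms: { a: [1, 2] }, info: { reason: "daily" } };
deepFreeze(payload);
try {
  payload.info.reason = "shutdown"; // throws in strict mode, silently ignored otherwise
} catch (e) {
  // expected under "use strict"
}
console.log(payload.info.reason); // still "daily"
```

The trade-off versus cloning: no copy and no GC pressure, but the restriction is visible to the external call-sites that own the payload object.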
Comment 3•7 years ago
Most of the time spent there is in proxy and GC overhead.
Using StructuredCloneHolder would definitely help with that, assuming the whole object tree you're cloning is in the same compartment: it enters the object's compartment before cloning, so it generally doesn't need to create wrappers. It would help even more if you don't need to deserialize the cloned data immediately.
Flags: needinfo?(kmaglione+bmo)
Comment 4•7 years ago
(In reply to Kris Maglione [:kmag] from comment #3)
> Using StructuredCloneHolder would definitely help with that, assuming the
> whole object tree you're cloning is in the same compartment, since it enters
> the object compartment before cloning, so generally doesn't need to create
> wrappers. It would help even more if you don't actually need to immediately
> deserialize the cloned data.
Without larger changes, we can only say that access to the cloned data (for stringification) will often happen nearly immediately, but it can also take up to a few minutes.
There is definitely no write access anymore though.
Do you know how we could improve things from our JS code specifically?
Flags: needinfo?(kmaglione+bmo)
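(Again just an illustration, not the actual fix: since the only later access mentioned above is stringification, the "serialize now, deserialize later" shape that StructuredCloneHolder provides can be sketched in plain JS with a JSON snapshot. `PayloadSnapshot` is a hypothetical helper; the privileged StructuredCloneHolder API would additionally handle non-JSON structured-clone data.)

```javascript
// Hypothetical sketch: snapshot the payload once at assemble time.
// Later caller mutations are invisible, and the common access path
// (stringification at send time) does not re-walk the object graph
// or touch any cross-compartment wrappers.
class PayloadSnapshot {
  constructor(payload) {
    // Serialize eagerly: this is the only traversal of the live object.
    this._json = JSON.stringify(payload);
  }
  // Cheap common path: the ping body is already a string.
  toJSON() {
    return this._json;
  }
  // Rare path: materialize a fresh, detached copy if object access is needed.
  deserialize() {
    return JSON.parse(this._json);
  }
}

const payload = { histograms: { a: 1 } };
const snap = new PayloadSnapshot(payload);
payload.histograms.a = 999; // later mutation does not affect the snapshot
console.log(snap.deserialize().histograms.a); // 1
```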
Updated•7 years ago
Whiteboard: [qf]
Comment 5•7 years ago
Georg, does this bug affect users on the release channel by default, and what is the impact? It wasn't obvious to us whether this should be a P1, given where we currently are in the 57 release timeline...
Flags: needinfo?(gfritzsche)
Comment 6•7 years ago
Release ping sizes are substantially smaller. We also did a number of other optimizations for Telemetry.
I think this is not critical for the 57 release.
Flags: needinfo?(gfritzsche)
Updated•7 years ago
Priority: -- → P2
Updated•7 years ago
Whiteboard: [qf] → [qf:p3]
Updated•7 years ago
status-firefox57: --- → fix-optional
Comment 7•7 years ago
With the recent improvements, we should collect fresh performance data here.
Maybe this is good enough now, or the new data will show different functions to optimize.
Priority: P2 → P1
Comment 8•7 years ago
Since bug 1186409, components share a common global. Does that change the landscape for cloned data?
Updated•7 years ago
Priority: P1 → P2
Comment 9•7 years ago
We improved various things around the Telemetry core in the QF context.
Without further data, I'm closing this bug.
Please reopen or file a new bug if this is still an issue.
Status: NEW → RESOLVED
Closed: 7 years ago
Flags: needinfo?(kmaglione+bmo)
Resolution: --- → WORKSFORME
Updated•3 years ago
Performance Impact: --- → P3
Whiteboard: [qf:p3]