OMT brotli decompression
Categories
(Core :: Networking: HTTP, enhancement, P3)
Performance Impact: high
People
(Reporter: ehsan.akhgari, Unassigned)
References
(Depends on 1 open bug, Blocks 1 open bug)
Details
(4 keywords, Whiteboard: [necko-triaged])
Attachments
(1 obsolete file)
Reporter
Comment 1•8 years ago
Reporter
Comment 2•8 years ago
Updated•8 years ago
Comment 3•8 years ago
Comment 4•8 years ago
Reporter
Comment 5•7 years ago
Comment 6•7 years ago
Comment 7•7 years ago
Updated•6 years ago
Comment 10•6 years ago
(In reply to Selena Deckelmann :selenamarie :selena use ni? pronoun: she from comment #9)
> Junior - is this something you could investigate?

Given Comment 4 and Comment 6, IMO this could be a P3 for now. It could go into my backlog, but I'm not familiar with the pb-Http setup.
Comment 11•6 years ago
The QF triage team has reviewed this bug, and it seems like a page-load issue.
Comment 12•6 years ago
I think moving decompression off the main thread would help quite a bit.
Here's a more recent profile with 175ms spent inside nsHTTPCompressConv::OnDataAvailable: http://bit.ly/2ShNR8P
This is from loading https://www.facebook.com/DwayneJohnson.
Generally, the main thread is always busy during page load; the more work we can move off of it, the better. And things that are already asynchronous from the main thread's perspective (such as network requests) are a particularly attractive target.
Updated•6 years ago
Comment 13•6 years ago
Hi :dragana,
What all is required to accomplish this? Is it sufficient to read the data from the HTTP stream into a buffer and then pass that off to a thread to do the decoding?
Comment 14•6 years ago
(In reply to Barret Rennie [:brennie] from comment #13)
> Hi :dragana,
> What all is required to accomplish this? Is it sufficient to read the data from the HTTP stream into a buffer and then pass that off to a thread to do the decoding?
We have a way to deliver OnDataAvailable off the main thread, but all listeners must agree.
For listeners that require OnDataAvailable to be delivered on the main thread but whose responses are encoded, we can do the following:
- deliver OnDataAvailable to the decoder, i.e. nsHTTPCompressConv, off the main thread;
- have the decoder dispatch the decoded data to the main thread.
We need to be careful to properly cancel data delivery from the decoder if the channel gets canceled.
I think this should work.
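A minimal sketch of that hand-off, assuming a hypothetical helper name (DeliverDecodedToMainThread) and simplified error handling; the real nsHTTPCompressConv patch tracks offsets and cancellation state itself:

    // Hypothetical sketch of the scheme above, not the actual patch:
    // decompress on a background thread, then hand the decoded bytes
    // back to a main-thread-only listener via a runnable.
    #include "nsCOMPtr.h"
    #include "nsIRequest.h"
    #include "nsIStreamListener.h"
    #include "nsStringStream.h"  // NS_NewCStringInputStream
    #include "nsThreadUtils.h"   // NS_DispatchToMainThread, NS_NewRunnableFunction

    nsresult DeliverDecodedToMainThread(nsCOMPtr<nsIStreamListener> aListener,
                                        nsCOMPtr<nsIRequest> aRequest,
                                        nsCString&& aDecoded,
                                        uint64_t aOffset) {
      return NS_DispatchToMainThread(NS_NewRunnableFunction(
          "DeliverDecodedToMainThread",
          [listener = std::move(aListener), request = std::move(aRequest),
           decoded = std::move(aDecoded), offset = aOffset]() mutable {
            // If the channel was canceled in the meantime, drop the data
            // instead of delivering it (the cancellation concern above).
            nsresult status = NS_OK;
            request->GetStatus(&status);
            if (NS_FAILED(status)) {
              return;
            }
            // Wrap the decoded bytes in an input stream for OnDataAvailable.
            uint32_t count = decoded.Length();
            nsCOMPtr<nsIInputStream> stream;
            if (NS_SUCCEEDED(NS_NewCStringInputStream(getter_AddRefs(stream),
                                                      std::move(decoded)))) {
              listener->OnDataAvailable(request, stream, offset, count);
            }
          }));
    }

The point of the design is that the compressed stream is consumed off the main thread and only the already-decoded bytes cross back, so the main thread never pays for the inflate/Brotli work itself.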
Comment 15•6 years ago
Comment 16•5 years ago
I ran try jobs for the patch I posted above (and some variants); here are the results:
Always move off-main-thread, all decompression
This is the current version of the patch posted for review. If the listener chain for nsHTTPCompressConv doesn't support off-main-thread OnDataAvailable, nsHTTPCompressConv::OnDataAvailable will dispatch a task to call its listener on the main thread. (A sketch of the listener-chain check appears after these results.)
Regresses the following:
- raptor-tp6-imgur-firefox opt 4.06%
- raptor-tp6-instagram-firefox opt 27.92%
- raptor-tp6-linkedin-firefox opt 2%
- raptor-tp6-sheets-firefox opt 3%
- raptor-tp6-yahoo-news-firefox opt 2.62%
Only move off-main-thread when entire listener chain supports, all decompression
Regresses the following:
- raptor-tp6-imgur-firefox opt 5.18%
- raptor-tp6-instagram-firefox opt 28.44%
Improves the following:
- raptor-tp6-wikipedia-firefox opt 2.28%
- raptor-tp6-google-firefox opt 3.10%
Always move off-main-thread, only Brotli
Like the patch posted for review, this always re-targets off the main thread and dispatches back to the main thread when the listener doesn't support re-targeting.
Regresses the following:
- raptor-tp6-facebook-firefox-cold opt 33.20%
Improves the following:
- raptor-tp6-google-firefox opt 3.15%
- raptor-tp6-reddit-firefox opt 3.48%
- raptor-tp6-youtube-firefox opt 3.10%
Only move off-main-thread when entire listener chain supports, only Brotli
Regresses the following:
- raptor-tp6-facebook-firefox-cold opt 49.73%
- raptor-tp6-imgur-firefox opt 4.44%
- raptor-tp6-instagram-firefox opt 30.12%
- raptor-tp6-linkedin-firefox opt 2.21%
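For reference, the "entire listener chain supports" condition above corresponds to Gecko's nsIThreadRetargetableStreamListener interface. Here is a minimal sketch of such a check, with a hypothetical helper name (the patch itself is not reproduced in this bug):

    #include "nsCOMPtr.h"
    #include "nsIStreamListener.h"
    #include "nsIThreadRetargetableStreamListener.h"

    // Returns true only if every listener downstream agrees to receive
    // OnDataAvailable off the main thread; CheckListenerChain() is
    // expected to fail if any listener in the chain is main-thread-only.
    static bool ListenerChainSupportsOMT(nsIStreamListener* aListener) {
      nsCOMPtr<nsIThreadRetargetableStreamListener> retargetable =
          do_QueryInterface(aListener);
      return retargetable && NS_SUCCEEDED(retargetable->CheckListenerChain());
    }

Under the "only when the entire listener chain supports" variants, a check like this gates re-targeting entirely; under the "always" variants, a failed check instead falls back to bouncing the decoded data to the main thread, as described in comment 14.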
Updated•4 years ago
Updated•4 years ago
Comment 17•4 years ago
I'm going to get this compiling again and re-evaluate it with our latest performance tests, which include features like visual metrics.
Comment 18•4 years ago
The current Nightly still shows about 65ms in the content process during Facebook page load on a MacBook Pro.
https://share.firefox.dev/3eyoWLl
Comment 19•4 years ago
I've gotten the patches compiling again.
Nazim was kind enough to capture new profiles with these patches:
Baseline, 109ms of Brotli decompression on the content process main thread:
https://share.firefox.dev/3ny3zhd
With Brotli moved OMT, now only 4ms of Brotli decompression on the content process main thread:
https://share.firefox.dev/3sXGHZx
Awaiting visual metric runs on try.
Comment 20•4 years ago
We have some interesting results from visual metrics.
I only rebuilt two of the variations from Comment 16:
Always move off-main-thread, all decompression
While this regresses many sites, there are also numerous improvements of up to 50% in visual metrics.
Always move off-main-thread, only Brotli decompression
This regresses more sites than it improves, which is interesting.
Comment 21•4 years ago
I think a good next step would be to compare these variations on a larger set of pages, ideally using live sites.
Updated•3 years ago
Comment 22•2 years ago
The Performance Priority Calculator has determined this bug's performance priority to be P1.
Platforms: macOS
Impact on site: Causes noticeable jank
Page load impact: Some
[x] Affects major website
[x] Able to reproduce locally
Comment 23•2 years ago
While this would be useful right now, my understanding is that it will no longer be needed once we can move the network calls off the content process main thread (bug 1639433).
Updated•2 years ago
Comment 24•2 years ago
The severity field for this bug is set to S3. However, the Performance Impact field flags this bug as having a high impact on performance.
:jesup, could you consider increasing the severity of this performance-impacting bug? Alternatively, if you think the performance impact is lower than previously assessed, could you request a re-triage from the performance team by setting the Performance Impact flag to "?"?
For more information, please visit the auto_nag documentation.
Updated•1 year ago