1.87 - 10.87% raptor-tp6m-cnn-geckoview-cold fcp (android-hw-g5-7-0-arm7-api-16) regression on push 3af7f65785fe9e7ddd3d33afcede92f7ce21e111 (Thu November 14 2019)
Categories
(Core :: DOM: Core & HTML, defect)
Tracking
| | Tracking | Status |
| --- | --- | --- |
| firefox71 | --- | unaffected |
People
(Reporter: alexandrui, Unassigned)
References
(Regression)
Details
(Keywords: perf, perf-alert, regression)
Raptor has detected a Firefox performance regression from push 3af7f65785fe9e7ddd3d33afcede92f7ce21e111.
Since you authored one of the patches included in that push, we need your help to address this regression.
Regressions:
11% raptor-tp6m-cnn-geckoview-cold fcp android-hw-g5-7-0-arm7-api-16 pgo 6,647.67 -> 7,370.33
3% raptor-tp6m-cnn-geckoview-cold fcp android-hw-g5-7-0-arm7-api-16 pgo 7,288.62 -> 7,472.00
2% raptor-tp6m-cnn-geckoview-cold fcp android-hw-g5-7-0-arm7-api-16 pgo 7,345.21 -> 7,482.50
You can find links to graphs and comparison views for each of the above tests at: https://treeherder.mozilla.org/perf.html#/alerts?id=24465
On the page above you can see an alert for each affected platform as well as a link to a graph showing the history of scores for this test. There is also a link to a Treeherder page showing the Raptor jobs in a pushlog format.
To learn more about the regressing test(s) or reproducing them, please see: https://wiki.mozilla.org/TestEngineering/Performance/Raptor
*** Please let us know your plans within 3 business days, or the offending patch(es) will be backed out! ***
Our wiki page outlines the common responses and expectations: https://wiki.mozilla.org/TestEngineering/Performance/Talos/RegressionBugsHandling
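For reference, the percentages quoted in the regression list are just the relative change between the before and after fcp values, which is where the "1.87 - 10.87%" range in the summary comes from. A minimal sketch in plain Python (not part of the Raptor tooling) reproducing the arithmetic:

```python
# Sketch only (plain Python, not part of the Raptor tooling): shows how the
# alert percentages follow from the before/after fcp values quoted above.
alerts = [
    (6647.67, 7370.33),  # 11% alert
    (7288.62, 7472.00),  #  3% alert
    (7345.21, 7482.50),  #  2% alert
]

for before, after in alerts:
    delta_pct = (after - before) / before * 100.0
    # Prints 10.87%, 2.52% and 1.87% -- the "1.87 - 10.87%" range in the summary.
    print(f"{before:.2f} ms -> {after:.2f} ms: +{delta_pct:.2f}%")
```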
Reporter
Updated•5 years ago
Comment 1•5 years ago
So, that patch was only fixing the tests (because without it they would crash).
So the regression was introduced by one of the patches between https://hg.mozilla.org/integration/autoland/rev/3a1f314c3022 and https://hg.mozilla.org/integration/autoland/rev/3af7f65785fe9e7ddd3d33afcede92f7ce21e111.
That includes two possible culprits: bug 1589447 and bug 1590167.
Given the size of bug 1589447, I suspect bug 1590167 is the real culprit here, but it's worth double-checking... Jonathan, there was something about the mapped_hyph integration that was slower on Android, right? That may explain this regression.
Comment 2•5 years ago
Looking at the perfherder graph there, it seems quite noisy, making it hard to be sure if/when there's a small regression. However, I can imagine it's possible that bug 1590167 could have an effect. I don't see any actual use of hyphens:auto on CNN (though I'm not sure exactly what page(s) are used in the test), but the precompiled hyphenation tables are somewhat larger than the old pattern files, and the increase in package size may be causing decompression and startup to be fractionally slower.
If that's the case, I'd say this is a WontFix, as the tradeoff is expected: we're trading a slightly larger package size (and possibly initial decompression time) against the elimination of per-content-process resource loading time and memory footprint for hyphenation.
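To illustrate the tradeoff described above (this is not Gecko's actual implementation, which lives in the Rust crate mapped_hyph; the file name below is made up for the example): a precompiled, flat hyphenation table can be memory-mapped read-only, so its pages are backed by the file and shared by every content process, rather than each process parsing the pattern files into its own heap at load time.

```python
# Illustrative sketch only -- not Gecko's hyphenation code. It shows the
# general idea from comment 2: mmap a precompiled table read-only so the OS
# page cache shares it across content processes, instead of every process
# paying the parse/allocation cost for the old pattern files.
import mmap

def open_precompiled_table(path):
    """Map a precompiled hyphenation table read-only; no per-process parsing."""
    with open(path, "rb") as f:
        # ACCESS_READ mappings are file-backed, so identical physical pages
        # can be shared between processes that map the same file.
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# "hyph_en_us.compiled" is a hypothetical file name used only for this example.
# table = open_precompiled_table("hyph_en_us.compiled")
```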
Comment 3•5 years ago
That works for me.
Reporter
Comment 4•5 years ago
There is a small regression there, but the noise in the area and some outlier datapoints that look completely out of place still make finding the culprit pretty difficult. If you add the failed and superseded jobs into the process of finding the culprit, it becomes even harder to identify with 100% accuracy.
Updated•3 years ago