Bug 1225101 - [e10s] SYSTEM_FONT_FALLBACK regressed
Status: RESOLVED WORKSFORME
Opened 9 years ago; closed 9 years ago
Component: Firefox :: General (defect)
Reporter: rvitillo; Assignee: Unassigned
Blocks: 1 open bug
Description
The histogram regressed in e10s. See the provided URL and grep for the metric's name.
Comment 1•9 years ago
Hey John, this probe regressed under e10s. Do you feel this is something we should investigate before rolling e10s out?
Flags: needinfo?(jdaggett)
Comment 2•9 years ago
I think you really need to provide more context here. "Regressed" in the sense that the telemetry stats for this measure show up as slower under e10s? Is this data aggregated across platforms? Is the distribution of platform samples equalized or different for the e10s and non-e10s sets?
System font fallback times are *very* much dependent upon the platform services utilized, so you need to separate out this measure per platform.
Off the top of my head, I would guess that having multiple processes hitting the same system service concurrently results in a minor loss in time due to contention for font-related resources. But that's a SOMPWAG (*).
For system font fallback times, we have two measures, SYSTEM_FONT_FALLBACK in µs and SYSTEM_FONT_FALLBACK_FIRST in ms. Since SYSTEM_FONT_FALLBACK is measured in microseconds, I doubt there's much that's statistically significant here. If you look at SYSTEM_FONT_FALLBACK_SCRIPT, the distribution of fallback scripts is different for e10s vs. non-e10s. You would need to account for that difference and do the analysis per-platform before you can determine whether there's anything actionable here or not.
(*) seat of my pants wild-ass guess
Flags: needinfo?(jdaggett)
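To make the per-platform breakdown suggested above concrete, here is a minimal sketch of how one might split SYSTEM_FONT_FALLBACK by platform for the e10s and non-e10s cohorts. The DataFrame layout, the `platform` and `e10s` columns, and the bucket-per-column schema are assumptions for illustration only, not the layout used in the linked notebook.

# Minimal sketch: per-platform comparison of SYSTEM_FONT_FALLBACK between
# e10s and non-e10s cohorts. Assumes `df` has one row per submission with a
# 'platform' column, a boolean 'e10s' column, and one column per histogram
# bucket named after the bucket's lower bound in microseconds. This schema
# is an assumption for illustration, not the actual telemetry format.
import pandas as pd

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    bucket_cols = [c for c in df.columns if c not in ("platform", "e10s")]
    rows = []
    for (platform, e10s), group in df.groupby(["platform", "e10s"]):
        counts = group[bucket_cols].sum()
        total = counts.sum()
        # Approximate the mean fallback time by weighting each bucket's
        # lower bound (in microseconds) by its share of the samples.
        mean_us = sum(float(bucket) * n for bucket, n in counts.items()) / total
        rows.append({"platform": platform, "e10s": e10s, "mean_us": mean_us})
    # One row per platform, with e10s and non-e10s means side by side.
    return pd.DataFrame(rows).pivot(index="platform", columns="e10s", values="mean_us")

# Example usage (df would come from the aggregated telemetry submissions):
# print(summarize(df))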
Comment 3•9 years ago
(In reply to John Daggett (:jtd) from comment #2)
> I think you really need to provide more context here. "Regressed" in the
> sense that the telemetry stats for this measure show up as slower under
> e10s? Is this data aggregated across platforms? Is the distribution of
> platform samples equalized or different for the e10s and non-e10s sets?
Slightly. If you open this link -
http://nbviewer.ipython.org/github/vitillo/e10s_analyses/blob/master/aurora/e10s_all_histograms_experiment.ipynb
and search for SYSTEM_FONT_FALLBACK, you'll see the regression. It's minor: a slight shift from values of 0 and 1 out to a range of 2-20.
> System font fallback times are *very* much dependent upon the platform
> services utilized, so you need to separate out this measure per platform.
Roberto, can we do this?
> Off the top of my head, I would guess that having multiple processes hitting
> the same system service concurrently results in a minor loss in time due to
> contention for font-related resources. But that's a SOMPWAG (*).
We're currently testing with a single content process, so I don't think this is the issue.
> For system font fallback times, we have two measures, SYSTEM_FONT_FALLBACK
> in µs and SYSTEM_FONT_FALLBACK_FIRST in ms. Since SYSTEM_FONT_FALLBACK is
> measured in microseconds, I doubt there's much that's statistically
> significant here. If you look at SYSTEM_FONT_FALLBACK_SCRIPT, the
> distribution of fallback scripts is different for e10s vs. non-e10s. You
> would need to account for that difference and do the analysis per-platform
> before you can determine whether there's anything actionable here or not.
Interesting, I'm tempted to resolve as not significant then. SYSTEM_FONT_FALLBACK_FIRST and SYSTEM_FONT_FALLBACK_SCRIPT appear to improve with e10s.
Updated•9 years ago
Flags: needinfo?(rvitillo)
Reporter
Comment 4•9 years ago
(In reply to Jim Mathies [:jimm] from comment #3)
> (In reply to John Daggett (:jtd) from comment #2)
> > I think you really need to provide more context here. "Regressed" in the
> > sense that the telemetry stats for this measure show up as slower under
> > e10s? Is this data aggregated across platforms? Is the distribution of
> > platform samples equalized or different for the e10s and non-e10s sets?
This is the result of an A/B test on all Aurora users, so the samples are equalized among platforms.
> > System font fallback times are *very* much dependent upon the platform
> > services utilized, so you need to separate out this measure per platform.
>
> Roberto, can we do this?
We could, but...
> Interesting, I'm tempted to resolve as not significant then.
> SYSTEM_FONT_FALLBACK_FIRST and SYSTEM_FONT_FALLBACK_SCRIPT appear to improve
> with e10s.
as this seems to be non-significant, I would be inclined to close it.
Flags: needinfo?(rvitillo)
Comment 5•9 years ago
(In reply to Jim Mathies [:jimm] from comment #3)
> Interesting, I'm tempted to resolve as not significant then.
> SYSTEM_FONT_FALLBACK_FIRST and SYSTEM_FONT_FALLBACK_SCRIPT appear to improve
> with e10s.
The SYSTEM_FONT_FALLBACK_SCRIPT metric is an enumeration; it reflects the script of the character for which fallback occurs. So the horizontal scale is meaningless other than allowing you to distinguish which scripts have the highest occurrence of fallback (e.g. Arabic/CJK/Greek/Hebrew/etc.).
Yeah, I don't see anything that looks significant here. If SYSTEM_FONT_FALLBACK_FIRST suddenly regresses then we need to figure out what's going on.
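To illustrate what accounting for the different script distributions could look like for an enumerated histogram such as SYSTEM_FONT_FALLBACK_SCRIPT, here is a small sketch that normalizes each cohort's per-bucket counts to proportions before comparing them. The counts below are made-up placeholders, and the mapping from bucket index to script name (defined in the Gecko source) is not reproduced here.

# Sketch: compare the SYSTEM_FONT_FALLBACK_SCRIPT enumeration between cohorts.
# Because it is an enumerated histogram, only the relative share of each
# bucket (script) is meaningful, so normalize each cohort to proportions
# before comparing. The counts are placeholder data, not real telemetry.
def normalize(counts):
    """Convert raw per-bucket counts into proportions summing to 1."""
    total = sum(counts.values())
    return {bucket: n / total for bucket, n in counts.items()}

e10s_counts = {0: 120, 1: 40, 2: 15}        # placeholder data
non_e10s_counts = {0: 200, 1: 90, 2: 10}    # placeholder data

e10s_share = normalize(e10s_counts)
non_e10s_share = normalize(non_e10s_counts)

# Report the per-bucket difference in share between the two cohorts.
for bucket in sorted(set(e10s_share) | set(non_e10s_share)):
    delta = e10s_share.get(bucket, 0) - non_e10s_share.get(bucket, 0)
    print(f"bucket {bucket}: e10s - non-e10s share = {delta:+.3f}")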
Comment 6•9 years ago
Great, thanks for the help, guys. Resolving this.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WORKSFORME