Closed Bug 1228561 Opened 9 years ago Closed 8 years ago

[checkerboard-experiment] Tweak our display port size and strategy for more modern devices

Categories: Core :: Panning and Zooming, defect
Platform: ARM, Gonk (Firefox OS)
Severity: normal
Priority: Not set
Status: RESOLVED INCOMPLETE
Reporter: cwiiis
Assignee: Unassigned
Keywords: perf
Whiteboard: [gfx-noted]

This is specifically about B2G, and possibly Android too, though I'm not sure whether they share values.

IIRC, the last time we tweaked display port values, we were targeting the Alcatel One Touch Fire C, a slow, single-core device with a 320x480 screen. Our current dogfood device is a fast, quad-core device with a 1280x720 screen (the Sony Xperia Z3[C]).

Our displayport values are currently large. This makes sense at 320x480 on a slow device: a relatively larger displayport has a low memory impact at such a low resolution, and because rendering is expensive and slow, we want to cache as much as possible. On a fast 1280x720 device, it does not make sense. The memory impact of a large displayport is much greater, and although rendering is much faster, memory bandwidth does not scale to the same extent, so we end up caching rendering that would be quicker to draw closer to when it's needed. The consequence is that we spend more time painting off-screen content, which blocks work we want to do immediately (such as responding to touch events and animating content that is visible).

In Gaia, we work around this a lot by adding overflow:hidden during animations, but this is a hack, and an expensive one at that: changing overflow like this triggers whole-layer invalidations. Although the compositor should only be drawing what is visible on screen, many async animations run at a lower frame rate when overflow:hidden is omitted (and so the whole displayport is visible).

I suggest two things to remedy this:

1. Stop redistributing unused displayport space to the other axis (e.g. if a page is not horizontally scrollable, that area is currently redistributed to the vertical axis). Assuming the values we've picked are sane for a page that can scroll in all directions, it doesn't make sense to increase them when doing so carries a performance penalty.

2. Dramatically reduce our displayport scale values, up to (but backing off from) the point where checkerboarding becomes more noticeable. I believe this will have to be done manually: we don't have any good automated tests for this, and there are nuances that would be better tweaked by assessing the situation interactively.

kats, what do you think of this? Any objection to either of these suggestions?
Flags: needinfo?(bugmail.mozilla)
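For reference, here is a rough sketch of what suggestion 1 is about. This is not the actual Gecko code; the struct, function name, and arithmetic are invented purely to illustrate what "redistributing unused displayport space to the other axis" means in practice:

// Illustrative sketch only (hypothetical names and arithmetic, not the real
// APZ implementation). When one axis cannot scroll, the displayport budget
// that axis would have used gets handed to the other axis instead.
struct Size {
  float width;
  float height;
};

Size CalculateDisplayPortSize(const Size& aCompositionSize,
                              bool aScrollableX, bool aScrollableY,
                              float aXMultiplier, float aYMultiplier,
                              bool aRedistributeExcess)
{
  float w = aCompositionSize.width * aXMultiplier;
  float h = aCompositionSize.height * aYMultiplier;

  if (aRedistributeExcess) {
    if (!aScrollableX && aScrollableY) {
      // Collapse the horizontal axis to the composition width and hand the
      // saved area to the vertical axis: the total painted area stays
      // roughly constant, but the vertical extent is inflated well past its
      // own multiplier.
      float savedArea = (w - aCompositionSize.width) * h;
      w = aCompositionSize.width;
      h += savedArea / w;
    } else if (!aScrollableY && aScrollableX) {
      float savedArea = (h - aCompositionSize.height) * w;
      h = aCompositionSize.height;
      w += savedArea / h;
    }
  }
  return Size{w, h};
}

Suggestion 1 amounts to turning the aRedistributeExcess behaviour off, so a page that can only scroll vertically gets the plain aYMultiplier-sized displayport rather than an inflated one.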
So I have no objections to tweaking our displayport multipliers. I'm less inclined to flip the pref for redistributing unused displayport space, but if it's a significant win then sure. However, I would really like to wait until we have reliable checkerboarding data (bug 1226826 lays out my rough plan for this) before we start fiddling with these parameters, so that we can be sure we're not making checkerboarding worse.
Depends on: 1226826
Flags: needinfo?(bugmail.mozilla)
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #1)
> So I have no objections to tweaking our displayport multipliers. I'm less
> inclined to flip the pref for redistributing unused displayport space, but
> if it's a significant win then sure.
>
> However, I would really like to wait until we have reliable checkerboarding
> data (bug 1226826 lays out my rough plan for this) before we start fiddling
> with these parameters, so that we can be sure we're not making
> checkerboarding worse.

This sounds good, but I should hope that we base this decision on interactive performance too, and not just checkerboarding. Otherwise we'd just make the displayport as big as memory would allow, surely?
Agreed. I don't know if we have good metrics on that, but as you said in comment 0, we want to go "Up to (but backing off from) the point where checkerboarding becomes more noticeable." - and I know we can have good data on that at least, so I would like to make sure we have that in place first. It would also be nice to make the displayport multipliers more heuristic based. Mason took a step in that direction recently by increasing the displayport for systems with > 4GB of memory. We could probably add more heuristics based on screen size and so on.
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #3)
> Agreed. I don't know if we have good metrics on that, but as you said in
> comment 0, we want to go "Up to (but backing off from) the point where
> checkerboarding becomes more noticeable." - and I know we can have good
> data on that at least, so I would like to make sure we have that in place
> first.
>
> It would also be nice to make the displayport multipliers more heuristic
> based. Mason took a step in that direction recently by increasing the
> displayport for systems with > 4GB of memory. We could probably add more
> heuristics based on screen size and so on.

Heuristics would be great, though I think that particular heuristic is not a good idea. More memory doesn't mean more memory bandwidth or a faster CPU, and increasing the displayport size increases pressure on both of those (which I think we're seeing problems with on B2G and Fennec). Ideally we'd have asynchronous updates and OMTP; in that situation it'd be a great idea. But with neither, every increase in displayport size is a trade against response time and animation performance.
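To make the kind of heuristic under discussion concrete, here is a hypothetical sketch. The function name, thresholds, and values are made up, and keying off screen resolution (rather than total memory) simply reflects the bandwidth concern above:

#include <cstdint>

// Hypothetical screen-size-based displayport multiplier. The thresholds and
// return values are invented for illustration: the higher the resolution,
// the more each unit of displayport costs in paint time and memory
// bandwidth, so the multiplier shrinks as the pixel count grows.
float SuggestedSkateMultiplier(uint32_t aScreenWidth, uint32_t aScreenHeight)
{
  uint64_t pixels = uint64_t(aScreenWidth) * aScreenHeight;
  if (pixels <= 320u * 480u) {
    return 3.0f;   // small, slow devices: cache aggressively, buffers are tiny
  }
  if (pixels <= 1280u * 720u) {
    return 1.5f;   // 720p-class devices: keep the displayport modest
  }
  return 1.25f;    // 1080p and up: favour responsiveness over cached area
}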
Summary: Tweak our display port size and strategy for more modern devices → [checkerboard-experiment] Tweak our display port size and strategy for more modern devices
Btw if you have suggestions on new multipliers
Keywords: perf
Whiteboard: [gfx-noted]
(whoops, submitted too early) ... feel free to suggest them. The checkerboarding telemetry code is landed, but the telemetry dashboard doesn't seem to show B2G data, so we'll be flying blind for now. If we can find the data we can tell whether our changes made any difference.
The only thing that worries me with optimising our displayport values now is that, as far as I know, we don't have any telemetry for response time (and do we still run tests for fluidity of composition?). Also, do we have paint-speed tests that run while a displayport is in effect? That would also be good.

If we optimise for checkerboarding then, for the most part, bigger is better. On the other hand, the displayport right now is so large that to do any kind of user interaction with elements and maintain anything approaching 60Hz and sub-100ms response, you have to disable overflow and/or be *extremely* careful.

In the absence of better testing and tools, I would suggest that we define an amount of checkerboarding we deem acceptable on a particular platform (or platforms), and have our displayport be as small as possible while still fulfilling that minimum requirement. I'd also suggest that a displayport larger than 4x the screen size (so 2x on each axis) is too large, and that common, weaker GPUs, especially on mobile, just don't have the bandwidth to handle any more than that, if even that.

I'd also strongly recommend at least doing suggestion 1 in comment #0 - it makes sense from a bandwidth perspective to redistribute displayport, but the trade-off isn't linear and we end up with huge display lists that really slow down painting (and, consequently, everything else).
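As a back-of-envelope illustration of the "4x the screen size" bound (assuming a 1280x720 screen, 32-bit RGBA buffers, and a hypothetical worst case of repainting the whole displayport every frame; the numbers are illustrative, not measured):

#include <cstdint>
#include <cstdio>

int main()
{
  const uint64_t screenPixels = 1280ull * 720;           // 921,600 px
  const uint64_t displayPortPixels = screenPixels * 4;   // 2x on each axis
  const uint64_t bytes = displayPortPixels * 4;          // RGBA8888
  // Roughly 14.7 MB of layer content to keep painted; repainting all of it
  // at 60Hz would cost on the order of 880 MB/s of paint and upload
  // bandwidth before the compositor does any work at all.
  printf("displayport buffer: %.1f MB, worst-case repaint: %.1f MB/s\n",
         bytes / 1e6, bytes * 60 / 1e6);
  return 0;
}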
(In reply to Chris Lord [:cwiiis] from comment #7)
> The only thing that worries me with optimising our displayport values now
> is that, as far as I know, we don't have any telemetry for response time
> (and do we still run tests for fluidity of composition?).

We have a probe added in bug 1221697 for composite time which should help here. We don't have a probe for frame uniformity, although we have a mochitest for it somewhere.

> Also, do we have paint-speed tests that run while a displayport is in
> effect? That would also be good.

Not really, although if painting takes too long that will impact checkerboarding.

> If we optimise for checkerboarding then, for the most part, bigger is
> better.

This is not really true. Bigger often means more checkerboarding rather than less, because we can paint less frequently. Bug 1208636 increased the displayport size on high-memory desktop systems and we had to undo that because the user experience was worse.

> On the other hand, the displayport right now is so large that to do any
> kind of user interaction with elements and maintain anything approaching
> 60Hz and sub-100ms response, you have to disable overflow and/or be
> *extremely* careful.
>
> In the absence of better testing and tools, I would suggest that we define
> an amount of checkerboarding we deem acceptable on a particular platform
> (or platforms), and have our displayport be as small as possible while
> still fulfilling that minimum requirement. I'd also suggest that a
> displayport larger than 4x the screen size (so 2x on each axis) is too
> large, and that common, weaker GPUs, especially on mobile, just don't have
> the bandwidth to handle any more than that, if even that.

This sounds reasonable.

> I'd also strongly recommend at least doing suggestion 1 in comment #0 - it
> makes sense from a bandwidth perspective to redistribute displayport, but
> the trade-off isn't linear and we end up with huge display lists that
> really slow down painting (and, consequently, everything else).

Also reasonable. I'm fine with fiddling with any of these prefs for B2G. And it turns out we can actually get some B2G telemetry data, we just have to do it more manually - see https://github.com/mozilla/telemetry-dashboard/issues/223#issuecomment-185799226
Since this is specifically about B2G, I'm going to close it.
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → INCOMPLETE