Closed Bug 1613260 Opened 5 years ago Closed 4 years ago

Support per-task scale for local space rasterization.

Categories: Core :: Graphics: WebRender, enhancement, P3

RESOLVED FIXED
mozilla75
Tracking Status
firefox75 --- fixed
firefox76 --- fixed
firefox77 --- fixed
firefox78 --- fixed

People

(Reporter: gw, Assigned: bpeers)

References

(Regressed 1 open bug)

Details

Attachments

(13 files)

When an off-screen surface is rasterized, we select either screen-space or local-space rasterization mode. We prefer local-space mode for a number of reasons, but currently fall back to screen-space mode in situations that WR doesn't handle.

In order to use local-space rasterization for all cases, we need to support a scaling factor for a given render task, so that we can draw it at an arbitrary scale.

We already have some support for doing this (the global device-pixel scale) so conceptually this isn't too tricky to render to a given scale. However, we also need to be aware of this scale when the surface is used as an input (e.g. filters, mix-blend-mode readbacks etc) which will complicate things a bit.
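To make the idea concrete, here is a minimal sketch of how a per-task factor could compose with the existing device-pixel scale, and why consumers of the surface must use the scaled size. The function names are hypothetical, purely for illustration; they are not WebRender's actual API.

```rust
// Hypothetical sketch: the effective scale for a render task is the
// surface's global device-pixel scale composed with a per-task factor.
fn effective_scale(device_pixel_scale: f32, task_scale: f32) -> f32 {
    device_pixel_scale * task_scale
}

// When the surface is later used as an input (filters, readbacks, ...),
// UVs and sizes must be computed against the scaled task size, not the
// unscaled local size.
fn task_size(local_size: (f32, f32), scale: f32) -> (f32, f32) {
    ((local_size.0 * scale).ceil(), (local_size.1 * scale).ceil())
}
```

For example, a 100x50 local rect rendered at an effective scale of 0.5 occupies a 50x25 region in the target, and anything sampling it needs to know that.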

Blocks: 1613262
Priority: -- → P3

Hi Glenn, do you know of a small test case (like a wrench Yaml file) that shows the limitation where it falls back to screen-space? That would help to understand the problem and to iterate faster when fixing it. Thanks!

I don't know of any specific existing tests for this, but the info below should (hopefully!) be enough to work out how to create such a test case:

The existing code to adjust raster roots is [1]. This basically says to fall back to screen raster mode if the size of the surface that would be created is going to be larger than the maximum texture size we want to draw to.
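That fallback check amounts to something like the following simplified sketch. The constant value and function name here are illustrative stand-ins, not the actual code at [1]:

```rust
// Illustrative value; the real constant lives in picture.rs.
const MAX_SURFACE_SIZE: f32 = 4096.0;

/// Returns true if the estimated surface is small enough to establish
/// a raster root; otherwise the picture falls back to screen-space
/// rasterization.
fn can_establish_raster_root(surface_width: f32, surface_height: f32) -> bool {
    surface_width <= MAX_SURFACE_SIZE && surface_height <= MAX_SURFACE_SIZE
}
```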

So, I think you should be able to create such a test case by having a rectangle prim that is inside a stacking context where the stacking context contains (a) a filter such as opacity, to force creation of an off-screen surface and (b) the transform has a very small scale factor on it, such that the source would need to be very large in order to draw with that small scale value.

Note: There is an optimization where we skip off-screen surfaces for simple opacity cases (e.g. a single rectangle or image), so you will need to ensure you don't hit that fast path in this case.

[1] https://searchfox.org/mozilla-central/rev/3a0a8e2762821c6afc1d235b3eb3dde63ad3b01a/gfx/wr/webrender/src/picture.rs#4872

(nevermind all this)

Assignee: nobody → bpeers
Attached image tilecacheref2.png (deleted) —

I think I have a reasonable repro case to start plumbing: if I use this yaml file...

root:
  items:
    - type: "stacking-context"
      perspective: 100
      perspective-origin: 100 100
      items:
        - type: "stacking-context"
          filters: blur(2)
          transform: rotate-x(15)
          items:
            - image: checkerboard(2, 8, 64);
              bounds: [40, 40, 516, 516]
        - image: checkerboard(2, 32, 2);
          bounds: [700, 40, 68, 68]

... and artificially lower MAX_SURFACE_SIZE: f32 = 256.0 (so the nested picture no longer establishes a raster root), then it's interesting to see what happens in RenderDoc:

  • first we do the perspective warp into a render target;
  • then we ping-pong 2 passes to blur the result of step 1;
  • and that goes into a tile cache.

The order of operations is thus different from what we get with regular "establishes a raster root":

  • first blit into a (larger) rendertarget, without perspective;
  • ping-pong 2 passes to blur this flat output of step 1;
  • apply the perspective warp when rasterizing into the tilecache.

The latter seems more correct, judging by what the yaml's order of operations is presumably meant to be.

Thus I think I can start the plumbing by using the order of operations as a test case: lower MAX_SURFACE_SIZE and hack away until it manages to do the latter set of renderpasses?

Yup, that sounds correct, and a good plan.

You might find it simpler to debug by using a filter that doesn't involve intermediate passes (e.g. invert or sepia) rather than blur. I don't think that will affect your plan above, but it might make debugging simpler by involving fewer passes / targets to keep track of?

Attached image blurtest.png (deleted) —

Good idea, that does streamline the println spam a bit!

I suspect that the current implementation might not always work... If the surface_rect is sized just under MAX_SURFACE_SIZE but the picture has a blur, then we might ask for a RenderTaskLocation::Dynamic with a device_rect that's larger than MAX_SURFACE_SIZE; which I think is what we're trying to avoid?

Not that it probably matters much visually (as long as the backend clamps requests to MAX_SURFACE_SIZE as opposed to blowing up)...
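The concern can be made concrete: a blur pass inflates the device rect by a radius-dependent margin after the surface size has already been checked, so a surface sized just under the limit can still overflow it. A rough sketch (the inflation formula here is hypothetical; WebRender's actual margin depends on the blur's standard deviation):

```rust
/// Inflate a size on both sides by the blur radius, roughly as a blur
/// pass does. With a hypothetical 3px margin per side, a 128-wide
/// surface grows to 134 wide.
fn inflate_for_blur(size: f32, blur_radius: f32) -> f32 {
    size + 2.0 * blur_radius
}
```

So a 254-wide surface that passes a 256 limit before inflation can exceed it after.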

Other than that, I'm getting suspiciously good results with (in post_update)

if raster_config.establishes_raster_root
    && (surface_rect.size.width > MAX_SURFACE_SIZE
        || surface_rect.size.height > MAX_SURFACE_SIZE) {
    //raster_config.establishes_raster_root = false;
    //state.are_raster_roots_assigned = false;
    raster_config.task_scale =
        // round_out will grow by 1 integer pixel if origin is on a
        // fractional position, so keep that margin for error with -1:
        (MAX_SURFACE_SIZE as f32 - 1.0) / surface_rect.size.width.max(surface_rect.size.height);
}

and on the take_context side:

let device_pixel_scale = frame_state
         .surfaces[raster_config.surface_index.0]
         .device_pixel_scale * Scale::new(raster_config.task_scale);

Since everything below that (blurs, UV calculation) is already set up to deal with arbitrary device_pixel_scale, it... seems to work?
"Work" as in "manages to stay below artificial 128x128 limit while still establishing a root". Except for blur turning that back into 134x134.

Of course even if that works, it's only the first step -- but I'm just wondering if this is too easy. What am I missing? :)
Cheers.

Attached file late_res_change_report.pdf (deleted) —

I changed the code a bit to adjust the requested raster size at the very last moment -- after blur and inflation has already been taken into account. That is, in each filter branch, just before we calculate UVs, check if the rectangle would be too big and shrink it if necessary. Which then means re-adjusting blur and inflation as well.
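A minimal sketch of that late adjustment, with hypothetical names (the real change lives in the per-filter branches): if the already-inflated device size exceeds the limit, derive a shrink factor and apply it to both the rect and the blur radius so the blur stays proportionally correct.

```rust
/// If `size` (already inflated for blur) exceeds `max`, return the
/// factor by which both the device rect and the blur radius should be
/// shrunk; 1.0 means no adjustment is needed.
fn late_shrink_factor(size: f32, max: f32) -> f32 {
    if size > max { max / size } else { 1.0 }
}
```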

I've set up a bunch of test cases and compared the original with forcing MAX_SURFACE_SIZE == 128. Most cases look good; just for the mix blend I'm not sure if things are blurrier than they should be.

The PDF has the diff and all the test cases. Maybe we can sync up again next week to see if this approach makes sense? Thanks!

Yep, it sounds like that's mostly on the right track, building off that initial work I mentioned which supports a per-task device-pixel scale. I'm assuming it's one of those tasks where the first 80% is done, now for the next 80% :)

That is, I suspect even though the basic path is fairly straightforward, there might be a heap of edge cases / significant complexity for things like mix-blend-mode, background-blend-mode, getting blurs exactly right as you mentioned above etc. It's possible that those may not end up being a massive problem though.

A try run with the max surface size set quite low might help identify tests that are obviously wrong and need extra work, although as we discussed, the fuzziness many tests would experience might make it difficult to see which tests are clearly incorrect.

As part of this work, it's probably also worth adding a significant number of wrench tests that exercise the local scale raster functionality with various filter chains and other picture effects (such as complex clips, mix-blend-mode etc).

Attached file reftesting.pdf (deleted) —

Hi Glenn, thanks for taking a look at the code and making sure it's moving along the right direction :)

I ran the reftests with a really low max surface size and had a closer look at the six that started failing. They all seem to be minor issues, as expected from upscaling low-res textures. In the browser, the acid tests also look as expected, and a bit of browsing seems fine too.

Eventually I did find an ad that was broken beyond just being a bit blurry. It took some time to untangle the CSS and iframe spaghetti, but I have it pared down to the CSS below. Animated opacity fades in some text, which looks fine, but when the fade finishes it pops to something at the wrong scale... interesting, not sure what's going on there, I'll dig a bit :) Cheers.

	.gwd-page-content {
		perspective: 1400px;
		-webkit-perspective: 1400px;
		-moz-perspective: 1400px;
		transform-style: preserve-3d;
		-webkit-transform-style: preserve-3d;
		-moz-transform-style: preserve-3d;
		position: absolute;
	}

	.gwd-p-1ake {
		position: absolute;
		transform-style: preserve-3d;
	}

	@-webkit-keyframes gwd-gen-1jetgwdanimation_gwd-keyframes {
		  0% { opacity: 0; }
		100% { opacity: 1; }
	}

	#page1.gwd-play-animation .gwd-gen-1jetgwdanimation {
		-webkit-animation: 1s linear 0s 1 normal forwards
			gwd-gen-1jetgwdanimation_gwd-keyframes;
	}

	.gwd-p-vfid {
		font-size: 40px;
	}

	.gwd-gen-1mv0gwdanimation {
		-webkit-animation: 1s linear 0s 1 normal forwards
			gwd-gen-1mv0gwdanimation_gwd-keyframes;
	}

	.gwd-page-1wdp {
		height: 200px;
		width: 300px;
	}

Attached image at_ad_bug_fade_pop.gif (deleted) —

Repro case. The characters are positioned correctly, but are scaled up, so, bad UVs I'm guessing.

Attached image reftest_A.png (deleted) —

gfx/wr/wrench/reftests/transforms/raster_root_A_8192.yaml

Attached image reftest_B.png (deleted) —

gfx/wr/wrench/reftests/transforms/raster_root_B_8192.yaml

Attached image reftest_C.png (deleted) —

gfx/wr/wrench/reftests/text/raster_root_C_8192.yaml

The above tests were run without --angle, which turns out to support gpu_supports_advanced_blend.
Enabling --angle removes that support, which exercises a different code path, and that path fails the test in very obvious ways (bad UVs).

So I'm still working on this, fixing up the edge cases. The good news is that the tests do cover the edge cases we were worrying about, and do catch the breakage, so our confidence can go up a little (once they pass :) ).

Attached image alpha_problem.png (deleted) —

Got the UVs figured out, still chasing after a (final?) bit of strangeness with the alpha values not matching up. There's a CopySubresource that looks suspicious.

Attached image coordinates_ref.png (deleted) —

I've made the tests more interesting, adding another checkerboard in the background, to make sure the readback of the background isn't being shifted or scaled.

I think I also found all (or enough :) ) places where different "device scales" were being jumbled together, I hope to have a patch on Monday. The new tests now pass with and without --angle (Windows).

Attached image updated_test_C.png (deleted) —

Updated reftest raster_root_C.

Adding leave-open due to a debug toggle that needs to be removed once nothing has blown up for a while.

Keywords: leave-open
Pushed by bpeers@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/12389e959ebb Support per-task scale for local space rasterization r=gw,aosmond
Pushed by kgupta@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/438e07547213 Increase allowed fuzz on new tests so they pass in AppVeyor CI. r=Bert
Regressions: 1634162

Should we close this?

Flags: needinfo?(bpeers)

Yes, thanks for the reminder.

Status: NEW → RESOLVED
Closed: 4 years ago
Flags: needinfo?(bpeers)
Resolution: --- → FIXED
Target Milestone: --- → mozilla75