Open Bug 1161818 Opened 10 years ago Updated 2 years ago

Drawing on large canvases is slow, 2X slower than Chrome.

Categories

(Core :: Graphics: Canvas2D, defect, P3)

Unspecified
macOS
defect


People

(Reporter: mbx, Unassigned)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

(Whiteboard: [gfx-noted])

Attachments

(1 file)

Attached file canvasSize.html (deleted) —
Canvas performance is poor when drawing on large canvases. On OSX, using both Skia and CG, CopyableCanvasLayer dominates execution time. Is the copy operation of the Canvas buffer absolutely necessary? Perf is a lot better when gfx.canvas.azure.accelerated is true.
Depends on: 1161636
Blocks: shumway-m3
Jet is following up with the gfx team.
Assignee: nobody → bugs
Blocks: shumway-m5
No longer blocks: shumway-m3
Current results (FPS) with Chrome, Nightly 49 (pref false), and Nightly 49 (pref true):

Size | Chrome | Nightly 49 (false) | Nightly 49 (true)
1024 | 60 fps | 59 fps             | 60 fps
2048 | 60 fps | 58 fps             | 60 fps
3072 | 60 fps | 29 fps             | 60 fps
4096 | 60 fps | 18 fps             | 58 fps
5120 | 27 fps | 11 fps             | 13 fps [2]
6144 | 11 fps | FAIL [1]           | FAIL [1]

1. Nightly 49 with the pref set to false failed soon after processing 5120, showing a black content window and printing "###!!! [Child][DispatchAsyncMessage] Error: (msgtype=0xFFFB,name=???) Payload error: message could not be deserialized. [GFX1-]: Failed to create a valid ShmemTextureHost" to stdout. The same failure occurred with the pref set to true, although it seemed to make it further into the test.
2. Nightly 49 with the pref set to true started printing "[GFX1-]: Failed to create a SkiaGL DrawTarget, falling back to software" to stdout soon after processing 5120, although the test visibly continued.
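A rough back-of-the-envelope estimate (my own arithmetic, not from the measurements above) shows why frame rate collapses with canvas size if the whole surface is copied every frame: the copy bandwidth required grows with the square of the side length.

```javascript
// Estimate the memory bandwidth consumed by copying a full square canvas
// surface every frame, assuming 4 bytes per pixel (RGBA8).
function copyBandwidthGBps(size, fps) {
  const bytesPerFrame = size * size * 4; // RGBA8 surface
  return (bytesPerFrame * fps) / 1e9;    // GB/s needed to sustain `fps`
}

// Sizes mirror the test canvases above: ~0.25 GB/s at 1024 vs ~4 GB/s at 4096.
for (const size of [1024, 2048, 3072, 4096, 5120, 6144]) {
  console.log(size, copyBandwidthGBps(size, 60).toFixed(2), "GB/s at 60 fps");
}
```

At 6144 a single RGBA frame is ~151 MB, which also helps explain hitting allocation limits, not just the slowdown.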
Whiteboard: [gfx-noted]
Version: 40 Branch → Trunk
I can reproduce the perf issue on Windows and Mac. On Windows it was 3x or more slower than Chrome.

The creation failure at '6144' should be due to the "gfx.max-alloc-size" limit; there is a related bug 1282074. SkiaGL has an additional surface-size limitation [1], so it falls back to software.

In the current canvas implementation, Gecko always copies the whole surface to the backbuffer when updating [2]. I think this might be the problem, but it is imposed by the current architecture. I'll think about how to improve it.

[1] https://dxr.mozilla.org/mozilla-central/source/gfx/2d/DrawTargetSkia.h#148
[2] https://dxr.mozilla.org/mozilla-central/source/gfx/layers/CopyableCanvasLayer.cpp#88
(In reply to Ethan Lin[:ethlin] from comment #3)
> I can reproduce the perf issue on windows and mac. I got 3x or more slower
> than chrome on windows. The creation failure in '6144' should be due to the
> limitation of "gfx.max-alloc-size". There is a related bug 1282074. SkiaGL
> has another limitation of surface size [1], so it falls back to software.
> For current canvas implementation, gecko always copies the whole surface to
> the backbuffer when updating[2]. I think this might be the problem, but it's
> limited by the current architecture. I'll think how to improve it.
>
> [1] https://dxr.mozilla.org/mozilla-central/source/gfx/2d/DrawTargetSkia.h#148
> [2] https://dxr.mozilla.org/mozilla-central/source/gfx/layers/CopyableCanvasLayer.cpp#88

As described in bug 1161636 comment 1, those steps should reduce the number of copies. I also found a memory leak in this test case; I'll file another bug to fix it.
Blocks: 1290072
There is ongoing work to remove unnecessary canvas copies. It started with a large architectural change in bug 1167235 and is tracked more generally in bug 1290072. This work is not enabled by default everywhere yet, but will be soon as we work through the various regressions. My expectation is that we'll be in a much better place once these optimizations are enabled (currently behind the pref layers.shared-buffer-provider.enabled).

That said, there are different ways to use canvas, and some will trigger the optimizations while others won't. For example, we will be able to detect that the first drawing operation in a frame covers the entire canvas and skip a copy of the full canvas. I mention this because it makes it harder to create minimal test cases that reflect what real web content does: it is not obvious that a clearRect can optimize away a full copy.

I don't want to add more complexity (CanvasRenderingContext2D is already quite a mess) for an artificial test case that does not reflect real-world uses of canvas 2D. So it would be good to mention examples of real sites that are impacted, in addition to the test cases, to be sure the latter reflect the real problems we are trying to solve. In this case, for example, it would be interesting to know whether many sites create very large canvases and render animations in them over the content of previous frames (without clearing first).
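A hypothetical sketch of the detection described above (the names and structure are mine, not Gecko's): if a frame's first drawing operation overwrites every pixel of the canvas, the previous frame's contents can never show through, so the copy of the old surface into the backbuffer can be skipped.

```javascript
// Hypothetical helper: can the copy of the previous canvas contents be
// skipped? Yes, if the frame's first op covers the full canvas and
// overwrites pixels rather than blending with them. A clearRect (or an
// opaque fillRect) covering the whole canvas qualifies.
function canSkipFrontToBackCopy(firstOp, canvasWidth, canvasHeight) {
  const coversCanvas =
    firstOp.x <= 0 && firstOp.y <= 0 &&
    firstOp.x + firstOp.width >= canvasWidth &&
    firstOp.y + firstOp.height >= canvasHeight;
  const overwritesPixels =
    firstOp.type === "clearRect" ||
    (firstOp.type === "fillRect" && firstOp.opaque);
  return coversCanvas && overwritesPixels;
}
```

This also illustrates the comment's point about test cases: a benchmark that never issues a full-canvas clearRect would miss this optimization entirely, even though real content that clears each frame would benefit.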
Assignee: bugs → svoisen

I notice a big performance difference with https://neal.fun/deep-sea/

Hi all,

We're also hitting this getImageData / drawImage performance problem. Here is a demo to reproduce it:
https://demo.scichart.com/javascript-multi-pane-stock-charts

Try using the mouse wheel in this chart demo on Firefox vs. Chrome. The performance difference is HUGE!

What we're doing: we have one WebGL canvas which we draw to, and we use getImageData() / drawImage() to read pixels back from WebGL and write them into an HTML5 canvas. This gets around the browser's limit on the number of WebGL canvases.

The only workaround we have is to render directly to the WebGL canvas, but that limits the number of charts we can have on screen.

The performance difference between Firefox and Chrome is massive, so much so that we're recommending our users use Chrome.

Any help appreciated. Can also supply further info if requested.
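For a sense of scale (my own estimate, not measured from the demo): each getImageData() call on a full-size canvas is a synchronous readback of the entire RGBA buffer, which stalls the GPU pipeline before copying the bytes. A rough sketch of the payload per readback:

```javascript
// Estimate the bytes moved by one getImageData() readback of a full
// canvas, assuming 4 bytes per pixel (RGBA). At 1920x1080 this is
// ~8.3 MB, paid synchronously on every redraw (e.g. every wheel event
// in the demo above).
function readbackBytes(width, height) {
  return width * height * 4;
}

console.log(readbackBytes(1920, 1080), "bytes per readback");
```

The payload itself is modest; the cost is dominated by the forced GPU/CPU synchronization, which is why readback-per-frame patterns tend to be slow on every browser, just to different degrees.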

The bug assignee hasn't logged into Bugzilla in the last 7 months.
:lsalzman, could you have a look please?
For more information, please visit auto_nag documentation.

Assignee: sean → nobody
Flags: needinfo?(lsalzman)
Flags: needinfo?(lsalzman)
Severity: normal → S3