Closed
Bug 1039150
Opened 10 years ago
Closed 10 years ago
FontSizeUtils keeps alive a CanvasRenderingContext and the backing GLContext, using 1.5MB+ of memory
Categories
(Firefox OS Graveyard :: Gaia, defect, P1)
Tracking
(blocking-b2g:2.0+, firefox31 wontfix, firefox32 fixed, firefox33 fixed, b2g-v1.4 unaffected, b2g-v2.0 fixed, b2g-v2.1 fixed)
People
(Reporter: khuey, Assigned: mattwoodrow)
References
Details
(Keywords: memory-footprint, regression, Whiteboard: [c=regression p= s=2014.07.18.t u=2.0] [MemShrink])
Attachments
(1 file)
patch (deleted), roc: review+
FontSizeUtils uses canvases to do some font metrics stuff. But it uses the context in a way which causes us to create the backing GLContext, which is quite expensive.
The call at https://github.com/mozilla-b2g/gaia/blame/master/shared/js/font_size_utils.js#L45 triggers http://hg.mozilla.org/mozilla-central/annotate/7883d8e9f210/content/html/content/src/HTMLCanvasElement.cpp#l805 when we get the context, which in turn calls EnsureTarget() at http://hg.mozilla.org/mozilla-central/annotate/7883d8e9f210/content/canvas/src/CanvasRenderingContext2D.cpp#l1092 and creates the context.
I'm not entirely sure whether this is a Gaia bug or a Gecko bug, but it costs us at least 1.5MB per process that uses FontSizeUtils (which is all of them, AFAICT).
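For context, here is a minimal sketch of the cached-canvas measurement pattern involved (illustrative only; the variable names and the context attribute are assumptions, not a copy of font_size_utils.js):

// Illustrative sketch, not the actual font_size_utils.js code.
// A module-level canvas and 2d context are created once and kept alive so
// that ctx.font only needs to be set when the font style changes.
var _measureCanvas = document.createElement('canvas');
// Passing context attributes here (the "set opaque" call discussed in
// comment 2) is the kind of call that can force Gecko to EnsureTarget()
// and allocate the backing context up front. The attribute is an assumption.
var _measureCtx = _measureCanvas.getContext('2d', { alpha: false });

function getTextWidth(text, fontStyle) {
  _measureCtx.font = fontStyle;              // e.g. '16px sans-serif'
  return _measureCtx.measureText(text).width;
}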
Reporter
Comment 1 • 10 years ago
I'm going to lob this grenade at mhenretty. Gregor can redirect if needed.
Flags: needinfo?(mhenretty)
Assignee
Comment 2 • 10 years ago
I think this is a Gecko bug. The Gaia call to set opaque is unnecessary, but it shouldn't have negative effects.
Assignee: nobody → matt.woodrow
Attachment #8456651 - Flags: review?(roc)
Attachment #8456651 - Flags: review?(roc) → review+
Assignee
Comment 3 • 10 years ago
Comment 4 • 10 years ago
The reason we cache the canvas for font measurement was a performance optimization: re-setting ctx.font was apparently slow after the first time. See https://bugzilla.mozilla.org/show_bug.cgi?id=967440#c39 for more information.
If the Gecko fix doesn't give us what we need, we can look into skipping this optimization on low-memory devices, as sketched below.
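One possible shape for that (a sketch only; LOW_MEMORY_DEVICE is a hypothetical flag, not an existing Gaia API) would be to keep the cached context on normal devices but recreate the canvas per call on low-memory ones:

// Sketch: keep the cached-context fast path where memory allows, but avoid
// holding a canvas (and its backing context) alive on low-memory devices.
// LOW_MEMORY_DEVICE is hypothetical; Gaia would need to derive it from
// device configuration.
var LOW_MEMORY_DEVICE = false;
var _cachedCtx = null;

function getMeasureContext() {
  if (_cachedCtx) {
    return _cachedCtx;
  }
  var ctx = document.createElement('canvas').getContext('2d');
  if (!LOW_MEMORY_DEVICE) {
    _cachedCtx = ctx;   // cache only when memory is not constrained
  }
  return ctx;
}

function measureTextWidth(text, fontStyle) {
  var ctx = getMeasureContext();
  ctx.font = fontStyle;
  return ctx.measureText(text).width;
}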
Flags: needinfo?(mhenretty)
Comment 5 • 10 years ago
Adding Mike here to see if this showed up in automation or if we need new tests for this.
Flags: needinfo?(mlee)
Comment 6 • 10 years ago
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Comment 7 • 10 years ago
status-firefox31: --- → wontfix
status-firefox32: --- → fixed
status-firefox33: --- → fixed
Target Milestone: --- → 2.0 S6 (18july)
Reporter
Comment 8 • 10 years ago
Yep, that fixed it.
Comment 9 • 10 years ago
Hub, do any of our existing make test-perf memory tests capture this kind of increase in memory usage? If so, which ones? If not, what kind of test do we need to capture this?
Severity: normal → blocker
Flags: needinfo?(mlee) → needinfo?(hub)
Priority: -- → P1
Whiteboard: [MemShrink] → [c=regression p= s=2014.07.18.t u=2.0] [MemShrink]
Comment 10 • 10 years ago
At the moment we don't have any tests for that.
I'm not sure of the actual testing scenario, but we should be able to collect memory statistics before and after specific tests.
Flags: needinfo?(hub)
Comment 11 • 10 years ago
(In reply to Gregor Wagner [:gwagner] from comment #5)
> Adding Mike here to see if this showed up in automation or if we need new
> tests for this.
Kyle, can you describe a test scenario we can use to create automation to detect this type of memory issue? If you don't think automation is needed here, please say so.
Flags: needinfo?(khuey)
Reporter
Comment 12 • 10 years ago
I would expect this to show up when just starting apps. It certainly did in either the Usage or the Settings app.
Flags: needinfo?(khuey)
Comment 13 • 10 years ago
(In reply to Mike Lee [:mlee] Paris: 2014.07.21 - 25 from comment #11)
> (In reply to Gregor Wagner [:gwagner] from comment #5)
> > Adding Mike here to see if this showed up in automation or if we need new
> > tests for this.
>
> Kyle, can you describe a test scenario we can use to create automation to
> detect this type of memory issue? If you don't think automation is needed
> here please say so.
Mike, the issue in question here would show up when starting most apps with any sort of workload. For instance, if you did a memory profile at the time "above-the-fold" rendering was complete in the Contacts app, you would have seen a ~1.5 MB increase after the regressing patch landed. The perf timing events you already have in some of your apps would also be a good point at which to take a memory profile, and you could add failure triggers based on what we should not exceed at those events.
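A sketch of what such a failure trigger could look like (the memory helper and the budget value are hypothetical stand-ins for whatever the test harness actually exposes; the event name is one of the app launch events from the responsiveness guidelines):

// Sketch of a memory assertion tied to an app-launch timing event.
// getUSSMemoryKB() is a hypothetical harness helper; a real test would pull
// the number from the device's memory reports instead.
var MEMORY_BUDGET_KB = 20 * 1024;  // illustrative per-app budget

window.addEventListener('moz-app-visually-complete', function onComplete() {
  window.removeEventListener('moz-app-visually-complete', onComplete);
  var usedKB = getUSSMemoryKB();
  if (usedKB > MEMORY_BUDGET_KB) {
    throw new Error('Memory regression: ' + usedKB + ' KB used at ' +
                    'visually-complete; budget is ' + MEMORY_BUDGET_KB + ' KB');
  }
});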
Comment 14 • 10 years ago
Thanks Michael.
Wander,
Please file a bug and work with Hub and Eli to create a new memory performance test using our new App Launch Events [1] as described by Michael in comment 13. It may help to look at the apps currently implementing [2] these new events.
Thanks,
Mike
[1] https://developer.mozilla.org/en-US/Apps/Build/Performance/Firefox_OS_app_responsiveness_guidelines#Implementation
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=996038
Flags: needinfo?(wcosta)
Updated • 10 years ago
Comment 15 • 10 years ago
(In reply to Mike Lee [:mlee] Paris: 2014.07.21 - 25 from comment #14)
> Thanks Michael.
>
> Wander,
>
> Please file a bug and work with Hub and Eli to create a new memory
> performance test using our new App Launch Events [1] as described by Michael
> in comment 13. It may help to look at the apps currently implementing [2]
> these new events.
>
Created Bug 1044297.
Flags: needinfo?(wcosta)