Closed Bug 1370412 Opened 7 years ago Closed 7 years ago

Excessive CPU usage by the content process at lumi.ava01.com

Categories

(Core :: Graphics, enhancement, P3)

Unspecified
macOS
enhancement

Tracking


RESOLVED FIXED
mozilla57
Performance Impact low
Tracking Status
firefox57 --- fixed

People

(Reporter: mantaroh, Assigned: aosmond)

References


Details

(Keywords: perf, Whiteboard: [QRC][QRC_Analyzed] [gfx-noted])

Attachments

(12 files, 19 obsolete files)

(deleted), patch, tnikkel: review+
(deleted), patch, tnikkel: review+
(deleted), patch, tnikkel: review+
(deleted), patch, tnikkel: review+
(deleted), patch, aosmond: review+
(deleted), patch, tnikkel: review+
(deleted), patch, tnikkel: review+
(deleted), patch, tnikkel: review+
(deleted), patch, aosmond: review+
(deleted), patch, tnikkel: review+
(deleted), patch, aosmond: review+
(deleted), patch, aosmond: review+
Some tweets on Twitter reported excessive CPU usage by Firefox. If we open this site [1], Firefox's CPU usage is higher than in other UAs. Nightly's CPU usage is around 300% on my macOS machine (on Windows 10 it is around 95%).

[1] http://lumi.ava01.com/ (a VOCALOID product page)

On macOS, the CPU usage of Chromium and Safari TP is around 100%, and on Windows, Edge is around 15%.

Furthermore, this site uses transform(scale) on the top image, but we don't render it correctly.

Profiler result is here: https://perfht.ml/2rYaJSa

It looks like most of the time is spent painting, but I'm not sure why this happens.
This site seems to cause 100% CPU usage in the process even after closing the window in Nightly 55. about:memory no longer showed the window.
QRC_NeedAnalysis of profile in comment 0
Whiteboard: [QRC][QRC_NeedAnalysis]
Keywords: perf
OS: Unspecified → Mac OS X
Priority: -- → P3
Whiteboard: [QRC][QRC_NeedAnalysis] → [QRC][QRC_NeedAnalysis] [gfx-noted]
This site uses two large sprite images: https://lumi.ava01.com/assets/sprite/src-lumi-s742fd468da.png and https://lumi.ava01.com/assets/sprite/src-s2323c79e1b.png.

There are two problems here: (1) we're painting these images on the main thread, and (2) our downscale-during-decode job queue gets backed up.

These images are used as CSS background images on <span> elements, which are wrapped in <div> elements with animated transforms. We don't create ImageLayers for them because they're clipped background images, so we paint them on the main thread.

However, more importantly, even after I've closed the tab, our image decoding threads are busy with this call stack:

nsThread::ProcessNextEvent(bool, bool*)
mozilla::image::DecodePoolWorker::Run()
mozilla::image::DecodedSurfaceProvider::Run()
mozilla::image::Decoder::Decode(mozilla::image::IResumable*)
mozilla::image::nsPNGDecoder::DoDecode(mozilla::image::SourceBufferIterator&, mozilla::image::IResumable*)
mozilla::Maybe<[...]>([...])
mozilla::image::nsPNGDecoder::ReadPNGData(char const*, unsigned long)
MOZ_PNG_process_data
MOZ_PNG_push_read_IDAT
MOZ_PNG_proc_IDAT_data

It looks like the decode pool gets swamped and never recovers.
Flags: needinfo?(aosmond)
I cut off that stack prematurely; here's the rest of it:

MOZ_PNG_push_proc_row
mozilla::image::nsPNGDecoder::WriteRow(unsigned char*)
mozilla::image::DownscalingFilter<mozilla::image::SurfaceSink>::DoAdvanceRow()
void skia::ConvolveHorizontally<true>(unsigned char const*, skia::ConvolutionFilter1D const&, unsigned char*)
It is constantly requesting a new downscale-on-decode at a different size, and we don't attempt to throttle it in any way. We should probably accept a substitute when there are too many outstanding decodes.
Also, even slowing down the rate of outstanding decodes doesn't help with the memory pressure we are going to encounter. I easily reached 1GB of decoded surfaces on this page for a single image at various sizes. That might not be the best use of resources ;).
Began work on a "factor of 2" algorithm to minimize the number of decodes and CPU usage in this sort of situation (a rough sketch of the size selection follows below):

1) Two preferences will control whether we go into "factor of 2" mode:
   a) We must exceed the given maximum number of decoded surfaces for a particular image, and
   b) We must exceed the given maximum estimated memory required for the aggregate surfaces.

When not in "factor of 2" mode:

2) Proceed as we do today.

When in "factor of 2" mode:

3) The next time we need to decode to a particular size, we will instead decode to a "factor of 2" size of the largest native size of the image, whichever is the "best match" (as determined by the same logic as SurfaceCache::LookupBestMatch) for the requested size.

4) When a decode completes, we will attempt to compact the surface cache to eliminate all previously decoded surfaces which are not "factor of 2" sizes. They can only be removed when their "best match factor of 2" size is already in the cache.
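To make the intent of step 3 concrete, here is a minimal sketch, assuming a simple IntSize struct and a surface-count threshold. ShouldUseFactor2Mode and SuggestedFactor2Size are hypothetical stand-ins, not the actual ImageSurfaceCache code, and the real patch picks the best match using the same comparison logic as SurfaceCache::LookupBestMatch rather than this simple halving loop:

#include <cstddef>
#include <cstdint>

struct IntSize { int32_t width = 0; int32_t height = 0; };

// Hypothetical stand-in for the pref-driven check: enter "factor of 2" mode
// once a single image has accumulated too many decoded surfaces.
bool ShouldUseFactor2Mode(std::size_t aDecodedSurfaces, std::size_t aThresholdSurfaces)
{
  return aDecodedSurfaces > aThresholdSurfaces;
}

// Hypothetical stand-in for step 3: starting from the largest native size,
// repeatedly halve until halving again would drop below the requested size.
// E.g. a 2048x2048 native size with a 300x300 request yields 512x512.
IntSize SuggestedFactor2Size(const IntSize& aNativeSize, const IntSize& aRequested)
{
  IntSize candidate = aNativeSize;
  while (candidate.width / 2 >= aRequested.width &&
         candidate.height / 2 >= aRequested.height &&
         candidate.width > 1 && candidate.height > 1) {
    candidate.width /= 2;
    candidate.height /= 2;
  }
  return candidate;
}

The point of the scheme is that many nearby requested sizes collapse onto a single decoded surface, bounding both the decode work and the memory used per image.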
Assignee: nobody → aosmond
Flags: needinfo?(aosmond)
Fixed a bug where I did not call SurfaceCacheImpl::StopTracking for pruned surfaces. This caused all sorts of troublesome issues.
Attachment #8877264 - Attachment is obsolete: true
Attachment #8877260 - Attachment is obsolete: true
Attachment #8877261 - Attachment is obsolete: true
Attachment #8877705 - Attachment is obsolete: true
Fix a build warning/error.

try: https://treeherder.mozilla.org/#/jobs?repo=try&revision=b2a379ee254504191abfd21a6572e38037184a98

Note this appears to fix several other related bugs from my testing (and can trigger the factor of 2 mode). Namely bug 1329377, bug 1243446 and bug 1157559.
Attachment #8880166 - Attachment is obsolete: true
(In reply to Andrew Osmond [:aosmond] from comment #28)
> Created attachment 8880173 [details] [diff] [review]
> Part 9. Add factor of 2 state to the SurfaceCache memory reports., v3
>
> Fix a build warning/error.
>
> try: https://treeherder.mozilla.org/#/jobs?repo=try&revision=b2a379ee254504191abfd21a6572e38037184a98
>
> Note this appears to fix several other related bugs from my testing (and can
> trigger the factor of 2 mode). Namely bug 1329377, bug 1243446 and bug 1157559.

None of the failures come across as related. I doubt talos tests are affected, but who knows, maybe it worked some magic:

https://treeherder.mozilla.org/#/jobs?repo=try&revision=fda27cb3a80dd782c9efd5c5ee15285d4b9dd0b3
https://treeherder.mozilla.org/perf.html#/comparechooser?newProject=try&newRevision=fda27cb3a80dd782c9efd5c5ee15285d4b9dd0b3
Whiteboard: [QRC][QRC_NeedAnalysis] [gfx-noted] → [QRC][QRC_Analyzed] [gfx-noted][qf]
Whiteboard: [QRC][QRC_Analyzed] [gfx-noted][qf] → [QRC][QRC_Analyzed] [gfx-noted][qf:p3]
Attachment #8877256 - Flags: review?(tnikkel)
Attachment #8877259 - Flags: review?(tnikkel)
Attachment #8880156 - Flags: review?(tnikkel)
Attachment #8880160 - Flags: review?(tnikkel)
Attachment #8880161 - Flags: review?(tnikkel)
Attachment #8880162 - Flags: review?(tnikkel)
Attachment #8880163 - Flags: review?(tnikkel)
Attachment #8880164 - Flags: review?(tnikkel)
Attachment #8880165 - Flags: review?(tnikkel)
Attachment #8880167 - Flags: review?(tnikkel)
Attachment #8880173 - Flags: review?(tnikkel)
Attachment #8896225 - Flags: review?(tnikkel) → review+
Attachment #8880156 - Flags: review?(tnikkel) → review+
Attachment #8877259 - Flags: review?(tnikkel) → review+
Comment on attachment 8877256 [details] [diff] [review]
Part 2. Give image::LookupResult an optional preferred size to decode at when the surface is not found., v1

> enum class MatchType : uint8_t
> {
>   NOT_FOUND,    // No matching surface and no placeholder.
>   PENDING,      // Found a matching placeholder, but no surface.
>   EXACT,        // Found a surface that matches exactly.
>   SUBSTITUTE_BECAUSE_NOT_FOUND,  // No exact match, but found a similar one.
>-  SUBSTITUTE_BECAUSE_PENDING     // Found a similar surface and a placeholder
>+  SUBSTITUTE_BECAUSE_PENDING,    // Found a similar surface and a placeholder
>                                  // for an exact match.
>+  SUBSTITUTE_BECAUSE_BEST        // No exact match, but this is the best we
>+                                 // are able to decode.
> };

Can you explain here why we would hit this? That is probably more useful to callers trying to determine how to handle this return value.

Have you checked the places that use the current values in the enum to see if they need to be changed because of this new value? Say because they want to check all substitute return values, so they should be modified to include this new kind of substitution.

>+  gfx::IntSize mDecodeSize;

What does this field mean?
Comment on attachment 8880156 [details] [diff] [review]
Part 1. Add preference to control "factor of 2" mode for ImageSurfaceCache., v2

Can you explain what native sizes means in the pref file?
Comment on attachment 8880160 [details] [diff] [review]
Part 4. Add ImageSurfaceCache::MaybeSetFactor2Mode and state., v2

>+ * The cache may also enter "factor of 2" mode which occurs when the number of
>+ * surfaces in the cache exceeds the "image.cache.factor2.threshold-surfaces"
>+ * pref and the aggregate size of the surfaces exceeds the
>+ * "image.cache.factor2.threshold-kb" pref. When in "factor of 2" mode, the

I don't see a image.cache.factor2.threshold-kb pref in any of the other patches.
Comment on attachment 8880160 [details] [diff] [review]
Part 4. Add ImageSurfaceCache::MaybeSetFactor2Mode and state., v2

>+  // Determine how many native surfaces this image has. Zero means we either
>+  // don't know, or we don't want to limit the number of surfaces for this
>+  // image arbitrarily.

Why does 0 native sizes mean that? Why would we not want to limit the number of surfaces for one image?
Comment on attachment 8880161 [details] [diff] [review]
Part 5. Add ImageSurfaceCache factor of 2 mode size calculations., v2

Do we actually need to store an array for this? Can't we just compute the values on demand when we need them?
(In reply to Timothy Nikkel (:tnikkel) from comment #31)
> Comment on attachment 8877256 [details] [diff] [review]
> Part 2. Give image::LookupResult an optional preferred size to decode at
> when the surface is not found., v1
>
> > enum class MatchType : uint8_t
> > {
> >   NOT_FOUND,    // No matching surface and no placeholder.
> >   PENDING,      // Found a matching placeholder, but no surface.
> >   EXACT,        // Found a surface that matches exactly.
> >   SUBSTITUTE_BECAUSE_NOT_FOUND,  // No exact match, but found a similar one.
> >-  SUBSTITUTE_BECAUSE_PENDING     // Found a similar surface and a placeholder
> >+  SUBSTITUTE_BECAUSE_PENDING,    // Found a similar surface and a placeholder
> >                                  // for an exact match.
> >+  SUBSTITUTE_BECAUSE_BEST        // No exact match, but this is the best we
> >+                                 // are able to decode.
> > };
>
> Can you explain here why we would hit this? That is probably more useful to
> callers trying to determine how to handle this return value.

Yes, will update.

> Have you checked the places that use the current values in the enum to see
> if they need to be changed because of this new value? Say because they want
> to check all substitute return values, so they should be modified to include
> this new kind of substitution.

Yes, I did. There are not many users of LookupBestMatch.

> >+  gfx::IntSize mDecodeSize;
>
> What does this field mean?

mDecodeSize is the size the SurfaceCache is suggesting the caller use when decoding (e.g. downscale-on-decode) due to returning NOT_FOUND or SUBSTITUTE_BECAUSE_NOT_FOUND.

(In reply to Timothy Nikkel (:tnikkel) from comment #32)
> Comment on attachment 8880156 [details] [diff] [review]
> Part 1. Add preference to control "factor of 2" mode for ImageSurfaceCache., v2
>
> Can you explain what native sizes means in the pref file?

Yes, will update.

(In reply to Timothy Nikkel (:tnikkel) from comment #33)
> Comment on attachment 8880160 [details] [diff] [review]
> Part 4. Add ImageSurfaceCache::MaybeSetFactor2Mode and state., v2
>
> >+ * The cache may also enter "factor of 2" mode which occurs when the number of
> >+ * surfaces in the cache exceeds the "image.cache.factor2.threshold-surfaces"
> >+ * pref and the aggregate size of the surfaces exceeds the
> >+ * "image.cache.factor2.threshold-kb" pref. When in "factor of 2" mode, the
>
> I don't see a image.cache.factor2.threshold-kb pref in any of the other
> patches.

Ah yes, I decided to skip the memory usage thresholding as I deemed the surface thresholding sufficient. I will update the comments.

(In reply to Timothy Nikkel (:tnikkel) from comment #34)
> Comment on attachment 8880160 [details] [diff] [review]
> Part 4. Add ImageSurfaceCache::MaybeSetFactor2Mode and state., v2
>
> >+  // Determine how many native surfaces this image has. Zero means we either
> >+  // don't know, or we don't want to limit the number of surfaces for this
> >+  // image arbitrarily.
>
> Why does 0 native sizes mean that? Why would we not want to limit the number
> of surfaces for one image?

I should make the comments clearer. It is referring mainly to VectorImage, which always returns 0. I think the quality difference with SVG would be much more apparent to the user, so I sort of punted on solving it. I don't have any evidence today that content authors are requesting many, many different sizes of an SVG unreasonably.

(In reply to Timothy Nikkel (:tnikkel) from comment #35)
> Comment on attachment 8880161 [details] [diff] [review]
> Part 5. Add ImageSurfaceCache factor of 2 mode size calculations., v2
>
> Do we actually need to store an array for this? Can't we just compute the
> values on demand when we need them?

We can, but it will need to recompute them on each lookup because typically the caller doesn't request exactly a factor-of-2 size, and so it needs the set to determine whether to return SUBSTITUTE_BECAUSE_NOT_FOUND vs. SUBSTITUTE_BECAUSE_BEST/PENDING.
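For context, here is a rough sketch of how a caller might react to the lookup result described above. The LookupResult shape, the StartDecode helper, and the per-MatchType handling are simplified assumptions based on this comment, not the actual imagelib callers:

#include <cstdint>

struct IntSize { int32_t width = 0; int32_t height = 0; };

enum class MatchType : uint8_t {
  NOT_FOUND,
  PENDING,
  EXACT,
  SUBSTITUTE_BECAUSE_NOT_FOUND,
  SUBSTITUTE_BECAUSE_PENDING,
  SUBSTITUTE_BECAUSE_BEST
};

// Simplified stand-in for image::LookupResult as described in this comment.
struct LookupResult {
  MatchType mMatchType = MatchType::NOT_FOUND;
  IntSize mSuggestedSize;  // size the cache suggests decoding at on a miss
};

// Stub standing in for kicking off a downscale-on-decode at the given size.
void StartDecode(const IntSize& /*aSize*/)
{
}

void OnLookup(const LookupResult& aResult)
{
  switch (aResult.mMatchType) {
    case MatchType::NOT_FOUND:
    case MatchType::SUBSTITUTE_BECAUSE_NOT_FOUND:
      // The cache had nothing usable (or only a weaker substitute), so start
      // a decode at the size it suggests; in factor-of-2 mode this will be a
      // factor-of-2 size of the largest native size rather than the raw
      // requested size.
      StartDecode(aResult.mSuggestedSize);
      break;
    default:
      // EXACT, PENDING, SUBSTITUTE_BECAUSE_PENDING, SUBSTITUTE_BECAUSE_BEST:
      // draw or wait on what the cache already has; no new decode is needed.
      break;
  }
}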
(In reply to Andrew Osmond [:aosmond] from comment #36)
> > >+  gfx::IntSize mDecodeSize;
> >
> > What does this field mean?
>
> mDecodeSize is the size the SurfaceCache is suggesting the caller use when
> decoding (e.g. downscale-on-decode) due to returning NOT_FOUND or
> SUBSTITUTE_BECAUSE_NOT_FOUND.

Can you include comments to describe this? Also the name could be clearer. mSuggestedSize?

> (In reply to Timothy Nikkel (:tnikkel) from comment #34)
> > Comment on attachment 8880160 [details] [diff] [review]
> > Part 4. Add ImageSurfaceCache::MaybeSetFactor2Mode and state., v2
> >
> > >+  // Determine how many native surfaces this image has. Zero means we either
> > >+  // don't know, or we don't want to limit the number of surfaces for this
> > >+  // image arbitrarily.
> >
> > Why does 0 native sizes mean that? Why would we not want to limit the number
> > of surfaces for one image?
>
> I should make the comments clearer. It is referring mainly to VectorImage,
> which always returns 0. I think the quality difference with SVG would be
> much more apparent to the user so I sort of punted on solving it. I don't
> have any evidence today that content authors are requesting many many
> different sizes of an SVG unreasonably.

Wasn't there a bug recently where exactly this was happening? I'm fairly sure I remember such.

> (In reply to Timothy Nikkel (:tnikkel) from comment #35)
> > Comment on attachment 8880161 [details] [diff] [review]
> > Part 5. Add ImageSurfaceCache factor of 2 mode size calculations., v2
> >
> > Do we actually need to store an array for this? Can't we just compute the
> > values on demand when we need them?
>
> We can. But it will need to recompute them on each lookup because typically
> the caller doesn't request exactly a factor-of-2 size, and so it needs the
> set to determine whether to return SUBSTITUTE_BECAUSE_NOT_FOUND vs
> SUBSTITUTE_BECAUSE_BEST/PENDING.

Dividing by 2 a dozen times or so seems cheap compared to saving an array the whole time, no?
(In reply to Timothy Nikkel (:tnikkel) from comment #37)
> (In reply to Andrew Osmond [:aosmond] from comment #36)
> Can you include comments to describe this? Also the name could be clearer.
> mSuggestedSize?

Agreed.

> > (In reply to Timothy Nikkel (:tnikkel) from comment #34)
> Wasn't there a bug recently where exactly this was happening? I'm fairly
> sure I remember such.

With SVGs specifically? There were a number of bugs with similar problems (see comment 28) which I tested this patch set against to ensure I fixed them all.

> > (In reply to Timothy Nikkel (:tnikkel) from comment #35)
> Dividing by 2 a dozen times or so seems cheap compared to saving an
> array the whole time, no?

I guess so. I will need to iterate and compare, so it is at most 31 + native sizes... I can easily fit that in an array on the stack. We really only need to iterate *once* (that isn't true right now, but I will correct that flaw now that I remember it), and not even through the whole thing if we compare native sizes first, followed by decreasing factors of the largest size...
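As an illustration of the trade-off being discussed (an assumption-level sketch, not the patch itself): checking whether a size is a factor-of-2 size of the largest native size can be done on the fly by repeated halving, at most about 31 steps for a 32-bit dimension. The IsFactor2Size name and IntSize struct here are hypothetical:

#include <cstdint>

struct IntSize { int32_t width = 0; int32_t height = 0; };

// Walk the factor-of-2 sizes of the largest (native) size on the fly, without
// storing them, and report whether aSize is one of them. Recomputing this per
// lookup costs at most ~31 halvings for a 32-bit dimension.
bool IsFactor2Size(IntSize aLargest, const IntSize& aSize)
{
  while (aLargest.width > 0 && aLargest.height > 0) {
    if (aLargest.width == aSize.width && aLargest.height == aSize.height) {
      return true;
    }
    if (aLargest.width == 1 && aLargest.height == 1) {
      break;
    }
    aLargest.width /= 2;
    aLargest.height /= 2;
  }
  return false;
}

Whether this beats keeping a small array on the stack, as suggested above, is mostly a readability question rather than a performance one.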
(In reply to Andrew Osmond [:aosmond] from comment #38)
> With SVGs specifically? There were a number of bugs with similar problems
> (see comment 28) which I tested this patch set against to ensure I fixed
> them all.

Yeah, svg specifically.
(In reply to Timothy Nikkel (:tnikkel) from comment #39)
> (In reply to Andrew Osmond [:aosmond] from comment #38)
> > With SVGs specifically? There were a number of bugs with similar problems
> > (see comment 28) which I tested this patch set against to ensure I fixed
> > them all.
>
> Yeah, svg specifically.

Okay. I think SVGs would need a different approach. They don't necessarily have a "native" size bound to them, and even if they did, they can upscale easily. Depending on the image itself, callers might also request sizes where the width/height ratio is not the same. The CPU usage generating the raster images was the limiting factor in the listed bugs.

If the SVG problem were merely memory consumption, we could discard the least recently used surfaces, or something similar, the next time a new one is inserted. If it is CPU, I think the only way around it would be some sort of dynamic/relative acceptable scaling (e.g. if a surface has the same width/height ratio, we could force it to prefer a downscale as long as its area doesn't stray too far...).

I'd want to see the bug first, though, and I think it could be considered a separate/followup bug to what we are solving here. With some rejigging I imagine it could reuse a lot of the same machinery.
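For illustration only, here is one way to read the "same width/height ratio, area doesn't stray too far" idea floated above. The function name, the aMaxAreaRatio parameter, and the tolerance itself are hypothetical and not part of any patch in this bug:

#include <cstdint>

struct IntSize { int32_t width = 0; int32_t height = 0; };

// Hypothetical check: accept an existing surface as a downscale source only
// if it has the same aspect ratio as the request and its area is within some
// tolerance (e.g. 4x) of the requested area.
bool CouldReuseForDownscale(const IntSize& aExisting, const IntSize& aRequested,
                            double aMaxAreaRatio)
{
  // Same aspect ratio, cross-multiplied to avoid floating point division.
  if (int64_t(aExisting.width) * aRequested.height !=
      int64_t(aExisting.height) * aRequested.width) {
    return false;
  }
  // Must be at least as large as the request, but not wastefully larger.
  double existingArea = double(aExisting.width) * aExisting.height;
  double requestedArea = double(aRequested.width) * aRequested.height;
  return existingArea >= requestedArea &&
         existingArea <= requestedArea * aMaxAreaRatio;
}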
Update based on review feedback.
Attachment #8880156 - Attachment is obsolete: true
Attachment #8898341 - Flags: review+
Attachment #8880160 - Attachment is obsolete: true
Attachment #8880160 - Flags: review?(tnikkel)
Attachment #8898344 - Flags: review?(tnikkel)
Attachment #8880161 - Attachment is obsolete: true
Attachment #8880161 - Flags: review?(tnikkel)
Attachment #8898345 - Flags: review?(tnikkel)
Attachment #8880162 - Attachment is obsolete: true
Attachment #8880162 - Flags: review?(tnikkel)
Attachment #8898346 - Flags: review?(tnikkel)
Attachment #8898344 - Flags: review?(tnikkel) → review+
Attachment #8898345 - Flags: review?(tnikkel) → review+
Comment on attachment 8898346 [details] [diff] [review]
Part 6. ImageSurfaceCache::LookupBestMatch should enter factor of 2 mode on cache misses., v3

>-  Pair<already_AddRefed<CachedSurface>, MatchType>
>+  Tuple<already_AddRefed<CachedSurface>, MatchType, IntSize>
>   LookupBestMatch(const SurfaceKey& aIdealKey)

Can you document what the fields in the return tuple mean?
Attachment #8898346 - Flags: review?(tnikkel) → review+
Attachment #8880163 - Flags: review?(tnikkel) → review+
Comment on attachment 8880164 [details] [diff] [review]
Part 8a. Add "explicit" flag to ISurfaceProvider state to indicate when the caller won't accept substitutes., v1

"Explicit" is very generic. What about "specific" or "no substitutes"?

>diff --git a/image/DecoderFlags.h b/image/DecoderFlags.h
>index c4a4df0e59fe..bc54d8f91039 100644
>--- a/image/DecoderFlags.h
>+++ b/image/DecoderFlags.h
>@@ -18,17 +18,18 @@ namespace image {
>  * instead either influence which surfaces are generated at all or the tune the
>  * decoder's behavior for a particular scenario.
>  */
> enum class DecoderFlags : uint8_t
> {
>   FIRST_FRAME_ONLY   = 1 << 0,
>   IS_REDECODE        = 1 << 1,
>   IMAGE_IS_TRANSIENT = 1 << 2,
>-  ASYNC_NOTIFY       = 1 << 3
>+  ASYNC_NOTIFY       = 1 << 3,
>+  IS_EXPLICIT        = 1 << 4
> };

Document the new flag, whatever name it ends up with.
Attachment #8880173 - Flags: review?(tnikkel) → review+
Attachment #8880167 - Flags: review?(tnikkel) → review+
Comment on attachment 8880165 [details] [diff] [review]
Part 8. Add ImageSurfaceCache::Prune to discard surfaces which are not needed in factor of 2 mode., v3

>  /**
>+   * Attempts to remove cache entries (including placeholders) associated with
>+   * the given image from the cache, assuming there is an equivalent entry that
>+   * it is able substitute that entry with.
>+   *
>+   * @param aImageKey  The image whose cache which should be pruned.
>+   */
>+  static void PruneImage(const ImageKey aImageKey);

You should mention this only applies to factor of 2 mode images.
Attachment #8880165 - Flags: review?(tnikkel) → review+
Changed from Explicit to Substitutable (along with the inverted meaning).
Attachment #8880164 - Attachment is obsolete: true
Attachment #8880164 - Flags: review?(tnikkel)
Attachment #8903239 - Flags: review?(tnikkel)
Update based on name changes in part 8a.
Attachment #8880165 - Attachment is obsolete: true
Attachment #8903240 - Flags: review+
Comment on attachment 8903239 [details] [diff] [review]
Part 8a. Add "substitutable" flag to ISurfaceProvider state to indicate when the caller won't accept substitutes., v2

Having the flag for the non-usual case of "not substitutable" seems better, and it's only one more letter.
Attachment #8903239 - Flags: review?(tnikkel) → review+
Attachment #8898343 - Flags: review?(tnikkel) → review+
(In reply to Timothy Nikkel (:tnikkel) from comment #54)
> Comment on attachment 8903239 [details] [diff] [review]
> Part 8a. Add "substitutable" flag to ISurfaceProvider state to indicate when
> the caller won't accept substitutes., v2
>
> Having the flag for the non-usual case of "not substitutable" seems better,
> and it's only one more letter.

I renamed this to CANNOT_SUBSTITUTE.
Status: NEW → ASSIGNED
Pushed by aosmond@gmail.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/83c49f7c9c99
Part 0. Add imgIContainer::GetNativeSizesLength to determine a ceiling on the maximum number of expected, unique surfaces. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/b9e5d359e10e
Part 1. Add preference to control "factor of 2" mode for ImageSurfaceCache. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/87b213841ad3
Part 2. Give image::LookupResult an optional preferred size to decode at when the surface is not found. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/4f0c916b3960
Part 3. Break out ImageSurfaceCache::CompareArea from LookupBestMatch for reuse. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/fcc167260154
Part 4. Add ImageSurfaceCache::MaybeSetFactor2Mode and state. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/536e6a1d3b4f
Part 5. Add ImageSurfaceCache factor of 2 mode size calculations. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/9d1a5ce9c333
Part 6. ImageSurfaceCache::LookupBestMatch should enter factor of 2 mode on cache misses. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/71f26daac945
Part 7. ImageSurfaceCache::Lookup should enter factor of 2 mode on cache misses. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/c856f5132643
Part 8a. Add "substitutable" flag to ISurfaceProvider state to indicate when the caller won't accept substitutes. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/85cdb40c2621
Part 8b. Add ImageSurfaceCache::Prune to discard surfaces which are not needed in factor of 2 mode. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/5cd1c5cd9b01
Part 9. Add factor of 2 state to the SurfaceCache memory reports. r=tnikkel
https://hg.mozilla.org/integration/mozilla-inbound/rev/e273a191e75d
Part 10. Add SurfaceCache gtests for factor of 2 mode. r=tnikkel
I did a partial backout of just part 10. The reasoning being that only two checks failed in the whole gtest -- the functionality still appeared to work, but why it decided to decode an extra surface on Windows specifically is very puzzling. There is *no* platform-specific code, afaik, that could be affected.
(In reply to Andrew Osmond [:aosmond] from comment #30)
> Created attachment 8896225 [details] [diff] [review]
> Part 0. Add imgIContainer::GetNativeSizesLength to determine a ceiling on
> the maximum number of expected, unique surfaces., v2
>
> Recent try: https://treeherder.mozilla.org/#/jobs?repo=try&revision=c64046493c529697616654b92d1c77ce0816664b

In my try here, the same gtests pass on Windows (save Windows 7 debug which has an unrelated failure...).
Backout by aosmond@gmail.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/8b1930f2a188
Backed out changeset e273a191e75d for Windows gtest failures.
Depends on: 1397235
Depends on: 1401524
Depends on: 1415085
Performance Impact: --- → P3
Whiteboard: [QRC][QRC_Analyzed] [gfx-noted][qf:p3] → [QRC][QRC_Analyzed] [gfx-noted]