Closed Bug 810105 Opened 12 years ago Closed 12 years ago

[gallery] reproducible crash while browsing gallery, perhaps due to dark matter

Categories

(Firefox OS Graveyard :: Gaia::Gallery, defect, P1)

x86
Gonk (Firefox OS)
defect

Tracking

(blocking-basecamp:+)

RESOLVED WORKSFORME
B2G C2 (20nov-10dec)
blocking-basecamp +

People

(Reporter: marcia, Assigned: djf)

References

(Blocks 1 open bug)

Details

(Keywords: crash, unagi, Whiteboard: [MemShrink:P1])

Attachments

(6 files, 1 obsolete file)

Unagi, seen while running the smoketest.

Gaia: 7ffd1c81db16bf0c6c3285a46259411d603109ca
Gecko: f059d3668a1acdeca0fac3c489205edcbe3c8779

STR:
1. Swipe through pictures in the gallery.
2. Eventually the app crashes - it happens every time.

Will attach a log in a moment. I have a DCIM directory on my SD card with pictures, as well as some JPGs, but not as many as described in Bug 809782.
Attached file logcat when crash occurs (deleted) —
Assignee: nobody → dale
blocking-basecamp: --- → +
Priority: -- → P1
Doug: Is this the same issue as Bug 796082?
Isn't this an OOM? Marcia, can you run b2g-ps (repeatedly) while reproducing this?
Assignee: dale → dflanagan
Marcia, what kind of pictures do you have on your sdcard? Are they from the Gaia camera app? We recently made the camera take much larger pictures than it used to, so if you're using the gallery with pictures you took in the last few days, that might be what is causing the problem.
I can reproduce this. Starting with no photos on the sd card, I take 4 photos with the camera (at the new larger 1200x1600 resolution) and then quit the camera and open the gallery. Panning back and forth among the 4 pictures causes a crash, which I assume is an OOM.
Installing the latest kernel update (#3) onto my unagi makes this harder to reproduce, but it is still reproducible. The gallery app keeps three photos open at a time so it can pan smoothly between them; that is all it is doing. Gecko doesn't seem to release the image memory quickly enough. And the images I'm trying to display are just 1200x1600 - less than 2 megapixels - so it's not like each one takes up that much memory. It will be pretty sad if we have to switch to a smaller image size in the camera because Gecko can't manage image memory efficiently.
This command is very helpful: watch -n 1 'adb shell b2g-ps'

When I run that and pan slowly through the images, I see the app's memory usage go up by 8mb (2 megapixels times 4 bytes per pixel) each time I pan to a new image. Then, if I wait 5 to 10 seconds (until the next GC, I think) I see it go back down by 8mb. I can do this indefinitely if I wait for the GC before panning again. But if I pan faster, memory usage increases more quickly, and when it reaches ~150mb the app is killed. Sometimes I see memory jump up by more than 8mb and then come down one step after 1 second and the rest of the way after 5 to 10 seconds, so some memory does get freed quickly.

The fact that memory is eventually being collected when I pan slowly enough says to me that there is not a memory leak in the gallery app, and that this OOM is just a symptom of the long-standing suckiness of imagelib memory management. I don't get why Gecko can't detect memory pressure and force a GC before the OS detects memory pressure and kills the Gecko process.

Cc'ing cjones because he knows about apps getting killed, and jlebar because (IIRC) he understands the imagelib issues. Meanwhile, the only workaround I can think of is to downgrade the camera to 1 megapixel and take 1200x800 photos. Cc'ing daleharvey.
> And the images I'm trying to display are just 1200x1600. Less than 2 megapixels, so
> its not like each one takes up that much memory.

1200 * 1600 * 4 bytes (argb) = 7.3mb. That's quite large for a device with less than 100mb of usable memory available to Firefox. Plus we have to keep a copy of the compressed images in memory, and you may be keeping more than three compressed images in memory.

> I don't get why gecko can't detect memory pressure and force a GC before the OS detects
> memory pressure and kills the gecko process.

That's a good question. Does your build have low-memory notifications (bug 800166)?

Can someone please link me to code which demonstrates how scrolling in the gallery app works? In particular, how do we "get rid of" images which are far away from the current viewport? Do we remove the <img> tags from the DOM, or something else?
Oh, we probably don't drop decoded images immediately when the element is removed from the dom; this is required e.g. for bug 791731.

This kind of thing is so incredibly hard to fix without bug 689623; that's why that bug has been a MemShrink:P1 for 10 months now.

We can try to put a limit on how much out-of-dom image data we keep around. That's going to be a somewhat invasive change to imagelib, because imagelib does not currently distinguish between out-of-dom and other "unlocked" images (e.g. ones in background tabs).

Kyle, any other ideas here?
This is a link to GitHub, to a workaround that reduces the camera resolution to 1024x768. I've already landed it, with r+ from daleharvey.
Attachment #680248 - Flags: review+
(In reply to Justin Lebar [:jlebar] from comment #9)
> Oh, we probably don't drop decoded images immediately when the element is
> removed from the dom; this is required e.g. for bug 791731.
>
> This kind of thing is so incredibly hard to fix without bug 689623; that's
> why that bug has been a MemShrink:P1 for 10 months now.
>
> We can try to put a limit on how much out-of-dom image data we keep around.
> That's going to be a somewhat invasive change to imagelib, because imagelib
> does not currently distinguish between out-of-dom and other "unlocked"
> images (e.g. ones in background tabs).
>
> Kyle, any other ideas here?

Without knowing much about what the gallery does, if you trigger another load in the <img> when removing it from the DOM, that should cause us to drop the image data immediately.
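[Editorial note: a minimal sketch of the pattern Kyle suggests. The helper name and the placeholder file are hypothetical, not the gallery's actual code.]

// Hypothetical helper: start a new load in the <img> before removing it,
// so imagelib can drop the previously decoded image data right away.
function removeAndDiscard(img) {
  img.src = 'blank.png';            // hypothetical tiny placeholder shipped with the app
  img.parentNode.removeChild(img);  // now safe to detach without pinning decoded data
}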
Marcia, if you erase the photos on your sdcard and take new ones at the new size, I hope you will not see the crash anymore. If so, maybe we can remove some of the keywords and other high-priority stuff, but let's keep the bug open to try to address the underlying issues.
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #11)
> (In reply to Justin Lebar [:jlebar] from comment #9)
> > Oh, we probably don't drop decoded images immediately when the element is
> > removed from the dom; this is required e.g. for bug 791731.

I've got 3 img elements: the one displayed on the screen, plus one to the left and one to the right, ready to handle panning. I never remove these images from the DOM. I just change their locations and their src properties.

> > This kind of thing is so incredibly hard to fix without bug 689623; that's
> > why that bug has been a MemShrink:P1 for 10 months now.
> >
> > We can try to put a limit on how much out-of-dom image data we keep around.
> > That's going to be a somewhat invasive change to imagelib, because imagelib
> > does not currently distinguish between out-of-dom and other "unlocked"
> > images (e.g. ones in background tabs).
> >
> > Kyle, any other ideas here?
>
> Without knowing much about what the gallery does, if you trigger another
> load in the <img> when removing it from the DOM that should cause us to drop
> the image data immediately.

Does this happen if I just set src without removing it from the DOM? It used to be that I created a new img element when the user panned and removed the old one from the DOM, but I changed that long, long ago to fix an earlier version of the OOM.
Yeah, setting the src should cause the image data to be freed immediately (well, as soon as the load completes, really).
(In reply to Justin Lebar [:jlebar] from comment #8)
> > And the images I'm trying to display are just 1200x1600. Less than 2 megapixels, so
> > its not like each one takes up that much memory.
>
> 1200 * 1600 * 4 bytes (argb) = 7.3mb. That's quite large for a device with
> less than 100mb of usable memory available to Firefox.

But small compared to the 5mp resolution that the camera is capable of. Presumably when running Android, the camera can handle full-resolution images. Or maybe it gets tricky with screen-sized thumbnail images or something.

> Plus we have to keep
> a copy of the compressed images in memory, and you may be keeping more than
> three compressed images in memory.

I could check on that, but I don't think I'm doing it. It's all blobs and blob URLs. Maybe they're being retained in a cache or something.

> > I don't get why gecko can't detect memory pressure and force a GC before the OS detects
> > memory pressure and kills the gecko process.
>
> That's a good question. Does your build have low-memory notifications (bug
> 800166)?

I have the most recent over-the-air update I've been offered.

> Can someone please link me to code which demonstrates how scrolling in the
> gallery app works? In particular, how do we "get rid of" images which are
> far away from the current viewport? Do we remove the <img> tags from the
> DOM, or something else?

As explained in the comment above, the img tags never get removed; they just have their position animated and their src changed. See https://github.com/mozilla-b2g/gaia/blob/master/apps/gallery/js/gallery.js#L1094
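[Editorial note: a simplified sketch of the three-<img> sliding window described above. Variable names are invented; the real logic lives at the gallery.js link.]

// Three <img> elements that stay in the DOM; panning repositions them
// and re-points src at a new blob URL instead of creating new elements.
var frames = [leftImg, currentImg, rightImg];

function panRight(nextBlobUrl) {
  var recycled = frames.shift();                // the now-offscreen left frame
  recycled.src = nextBlobUrl;                   // new load; old decoded data can be freed
  recycled.style.left = (2 * window.innerWidth) + 'px';  // slide into the right slot
  frames.push(recycled);                        // it becomes the new right-hand frame
}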
> I have the most recent over-the-air update I've been offered.

I need a gecko rev. Let's figure out how to get that.
Justin, Marcia gives her gecko version in comment 1. It might not be the same as mine, but she was certainly experiencing this bug.
So if we can't get this fixed in Gecko, I'll see if I can get the camera to include thumbnails that are at least as large as the phone's screen. Then, when the user pans between images, I could just display the thumbnail instead of the full image, and only switch to the full-size image when the user wants to zoom in. That would be a big win. Still, though, it seems like there is something wrong with how memory is being managed here. Like the gallery process isn't getting a chance to gc before it is killed.
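[Editorial note: a hedged sketch of that thumbnail-first idea. previewBlobUrl and fullSizeBlobUrl are placeholders, and "zoom" is simplified to a click.]

// Pan with the screen-sized preview; decode the full image only on demand.
img.src = previewBlobUrl;                  // cheap: roughly screen-sized thumbnail
img.addEventListener('click', function () {
  img.src = fullSizeBlobUrl;               // expensive: full image, decoded only on zoom
});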
> Still, though, it seems like there is something wrong with how memory is being managed
> here. Like the gallery process isn't getting a chance to gc before it is killed.

There's definitely a lot wrong with how memory is managed here. But we can't guarantee anyone a chance to GC before they're killed, unfortunately.

Marcia's build is from 5 days ago, which should have the low-mem notifications. So maybe we need to tweak those.

https://github.com/mozilla/releases-mozilla-aurora/commit/f059d3668a1acdeca0fac3c489205edcbe3c8779

(By the way, since we have five different Gecko repositories, links are much more helpful than bare csets.)
I played with the gallery app. With the default photos in Gaia, I wasn't able to make the app crash. I also couldn't get a high level of decoded image data; it was ~1.5mb after swiping through all the images on the phone.

├───2.95 MB (11.49%) -- images
│   ├──2.95 MB (11.49%) -- content
│   │  ├──2.06 MB (08.00%) -- used
│   │  │  ├──1.39 MB (05.42%) ── uncompressed-heap
│   │  │  ├──0.66 MB (02.57%) ── raw
│   │  │  └──0.00 MB (00.00%) ── uncompressed-nonheap
│   │  └──0.90 MB (03.49%) -- unused
│   │     ├──0.90 MB (03.49%) ── raw
│   │     └──0.00 MB (00.00%) ++ (2 tiny)

However, I also saw 15mb of heap-unclassified in the gallery app:

25.71 MB (100.0%) -- explicit
├──15.38 MB (59.82%) ── heap-unclassified

Perhaps that's what is causing us to OOM, not decoded images. Sounds like we need some DMD action here.
Whiteboard: [MemShrink]
Summary: [gallery] reproducible crash while browsing gallery → [gallery] reproducible crash while browsing gallery, perhaps due to dark matter
Justin, no, it doesn't crash with the default photos; it's been working fine with them. The thing that triggered this crash is that the camera resolution went from 640x480 to 1600x1200. (Most of the images in gaia/media-samples/ are quite small.) And now I've just reduced the camera resolution to 1024x768, so you may not be able to trigger the crash. Change the definition of MAX_IMAGE_RES in apps/camera/js/camera.js to get big images again, and I'd guess you'll be able to crash it. I don't know what dark matter is in this context, but with the 2mp images and the watch command in comment 7, it seemed pretty clear to me that the OOM was caused pretty directly by panning back and forth.
I'm all set up to run DMD here, except the gallery app on my Linux desktop doesn't have any photos. :( Any ideas here?
Justin, Gecko uses xdg_pictures_dir on desktop. Try adding pictures to ~/Pictures. (Also see https://wiki.archlinux.org/index.php/Xdg-user-dirs-update)
Milestoning for C2 (deadline of 12/10), as this meets the criteria of "remaining P1 bugs not already milestoned for C1".
Target Milestone: --- → B2G C2 (20nov-10dec)
Huh, I got the gallery to crash on my desktop machine. That doesn't sound like an OOM...
> No, it doesn't crash with the default photos.

Right. But what I expected to happen was that I'd see a large amount of decoded image data, indicating a bug in imagelib which could cause an OOM when the images were larger. I didn't see that, which indicates that imagelib may not be at fault here.

I'm also unable to reproduce the high heap-unclassified (or high decoded-image-data) on desktop. That further suggests that imagelib may not be at fault here, and that the issue may instead be with a device-specific library allocating too much memory.

On the other hand, I was able to get the gallery app to crash on desktop, and that likely isn't an OOM. I'll see if I can get a stack for that crash. Otherwise, we may need to try out a heap profiler on the device.
Whiteboard: [MemShrink] → [MemShrink:P1]
Attached file Heap profile of main process (deleted) —
A heap profile! This is still quite ugly -- the output doesn't have lines/functions outside libxul, and many of the stacks look more than a bit fishy. But many of the stacks also look quite interesting, so maybe we can divine something interesting from this profile. Unfortunately I can't calculate heap-unclassified for this profile, since the about:memory data is completely messed up (it's reporting 25GB of memory usage). So I don't even know if we're hitting the high heap-unclassified issue in this profile.
Attached file Heap profile of gallery app process (obsolete) (deleted) —
This is the one to look at (the heap-unclassified manifests in the gallery app). This does not look particularly mysterious. Highlights from the attached profile follow below. Please take these stacks with a grain of salt.

>3,840,000B * 2 allocs = 7,680,000B
> gfxImageSurface /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxImageSurface.cpp:110
> nsRefPtr<gfxASurface>::assign_with_AddRef(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:846
> nsRefPtr<gfxASurface>::operator=(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:932
> ?? /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxAndroidPlatform.cpp:58
> gfxPlatform::OptimizeImage(gfxImageSurface*, gfxASurface::gfxImageFormat) /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxPlatform.cpp:464
> nsRefPtr<gfxASurface>::assign_assuming_AddRef(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:861
> operator=<gfxASurface> /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:941
> ?? /home/jlebar/code/moz/ff-git2/src/image/src/imgFrame.cpp:337
> mozilla::image::RasterImage::DecodingComplete() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsError.h:155
> nsCOMPtr<imgIDecoderObserver>::get() const /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsCOMPtr.h:764
> nsCOMPtr<imgIDecoderObserver>::operator imgIDecoderObserver*() const /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsCOMPtr.h:777
> ?? /home/jlebar/code/moz/ff-git2/src/image/src/Decoder.cpp:273
> mozilla::image::nsJPEGDecoder::NotifyDone() /home/jlebar/code/moz/ff-git2/src/image/decoders/nsJPEGDecoder.cpp:548

>3,840,000B in one alloc
> gfxImageSurface /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxImageSurface.cpp:110
> nsRefPtr<gfxASurface>::assign_with_AddRef(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:846
> nsRefPtr<gfxASurface>::operator=(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:932
> ?? /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxAndroidPlatform.cpp:58
> gfxPlatform::OptimizeImage(gfxImageSurface*, gfxASurface::gfxImageFormat) /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxPlatform.cpp:464
> nsRefPtr<gfxASurface>::assign_assuming_AddRef(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:861
> operator=<gfxASurface> /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:941
> ?? /home/jlebar/code/moz/ff-git2/src/image/src/imgFrame.cpp:337
> mozilla::image::RasterImage::DecodingComplete() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsError.h:155

This plus the one above is probably the decoded image data.

>57,600B * 14 allocs = 806,400B
> gfxImageSurface /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxImageSurface.cpp:110
> nsRefPtr<gfxASurface>::assign_with_AddRef(gfxASurface*) /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsAutoPtr.h:846
> nsDisplayImage::GetContainer() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsError.h:155

These are probably the gallery thumbnails.

Again, I don't know that these are contributing to heap-unclassified (because these builds mess up about:memory), but I've never seen the gallery app report 12mb for decoded images.
Attachment #682959 - Attachment is obsolete: true
Attached file Heap profile of gallery app process (deleted) —
(Correct full profile of gallery app.)
njn wanted to have a look at this. It appears that some reporters are wrapping around to -1. Maybe malloc_usable_size is returning -1?
This report is busted, but the image reporters seem to be working properly. I see 11mb of decoded images in this memory report, which corresponds nicely to the heap profile.

├─────12,732,765 B (00.15%) -- images
│     ├──12,732,333 B (00.15%) -- content
│     │  ├──12,732,333 B (00.15%) -- used
│     │  │  ├──11,541,056 B (00.13%) ── uncompressed-heap
│     │  │  ├───1,191,277 B (00.01%) ── raw

We're probably seeing different behavior here now that we have image locking enabled.
> We're probably seeing different behavior here now that we have image locking enabled.

I've never been able to reproduce this crash (even after loading my sdcard full of pictures taken at high resolution). Can someone with a recent build test whether this crash still reproduces? It's possible that enabling image locking (bug 807143) fixed this problem.
A lot of these stack traces appear rotated. For example, if you take the 2nd one from the main process and rotate it upwards by 11, you get this, which makes a lot more sense:

ft_alloc /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftsystem.c:74
ft_mem_qalloc /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftutil.c:76
FT_Stream_EnterFrame /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftstream.c:267
FT_Stream_ExtractFrame /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftstream.c:200
tt_face_load_kern /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/sfnt/ttkern.c:68
sfnt_load_face /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/sfnt/sfobjs.c:748
tt_face_init /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/truetype/ttobjs.c:537
open_face /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftobjs.c:1153
FT_Open_Face /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftobjs.c:2080
FT_New_Face /home/jlebar/code/moz/ff-git2/src/modules/freetype2/src/base/ftobjs.c:1215
gfxFT2FontList::AppendFacesFromFontFile(nsCString&, bool, FontNameCache*) /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxFT2FontList.cpp:797
~nsACString_internal /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsTSubstring.h:85
~nsCString /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsTString.h:22
?? /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxFT2FontList.cpp:1164
gfxFT2FontList::FindFonts() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsTHashtable.h:131
gfxFT2FontList::InitFontList() /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxFT2FontList.cpp:1224
gfxAndroidPlatform::CreatePlatformFontList() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsError.h:155
gfxPlatformFontList::Init() /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxPlatformFontList.h:93
?? /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxPlatform.cpp:313
gfxPlatform::GetPlatform() /home/jlebar/code/moz/ff-git2/src/gfx/thebes/gfxPlatform.cpp:241
nsThread::ProcessNextEvent(bool, bool*) /home/jlebar/code/moz/ff-git2/src/xpcom/threads/nsThread.cpp:627
.
.
mozilla::ipc::MessagePump::Run(base::MessagePump::Delegate*) /home/jlebar/code/moz/ff-git2/src/ipc/glue/MessagePump.cpp:83
MessageLoop::RunInternal() /home/jlebar/code/moz/ff-git2/src/ipc/chromium/src/base/message_loop.cc:216
~AutoRunState /home/jlebar/code/moz/ff-git2/src/ipc/chromium/src/base/message_loop.cc:502
?? /home/jlebar/code/moz/ff-git2/src/ipc/chromium/src/base/message_loop.cc:182
nsBaseAppShell::Run() /home/jlebar/code/moz/ff-git2/src/widget/xpwidgets/nsBaseAppShell.cpp:165
nsAppStartup::Run() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsError.h:155
XREMain::XRE_mainRun() /home/jlebar/code/moz/B2G/objdir-gecko/dist/include/nsError.h:155
XREMain::XRE_main(int, char**, nsXREAppData const*) /home/jlebar/code/moz/ff-git2/src/toolkit/xre/nsAppRunner.cpp:3898
XRE_main /home/jlebar/code/moz/ff-git2/src/toolkit/xre/nsAppRunner.cpp:4084

It looks like there is at least a couple of MiB worth of stuff under gfxFT2FontList::AppendFacesFromFontFile(). Do we have a lot of fonts?
> It looks like there is at least a couple of MiB worth of stuff under
> gfxFT2FontList::AppendFacesFromFontFile(). Do we have a lot of fonts?

I just filed bug 812957 about this.
Component: Gaia → Gaia::Gallery
This bug has been called out as likely having risk to non-B2G platforms. Given that, marking as P1 and moving into the C2 milestone. We should prioritize landing this on mozilla-beta as soon as possible, to prevent late-breaking regressions to other platforms.
Justin, perhaps part of the reason you can't reproduce this is that I very quickly reduced the camera resolution as a workaround. That workaround was in https://github.com/mozilla-b2g/gaia/pull/6336

If you edit apps/camera/js/camera.js to change this line:

MAX_IMAGE_RES: 1024 * 768, // was: 1600*1200

you'll get bigger images and might be able to reproduce the crash. I have no idea what image locking is, but as you say, it could be that changing that fixed the crash.

I'm going to be working over in https://bugzilla.mozilla.org/show_bug.cgi?id=809782 on three different memory-related improvements to the gallery app. One of them will involve using the embedded preview image instead of the full-size image until the user zooms in, which should dramatically reduce the memory usage of the casual image-to-image browsing that caused the crash here.

Also, I'd like someone else to take this bug. I've landed a workaround already and am working on a more permanent solution in #809782. This started as a Gaia bug but has turned into a Gecko bug. I don't know anything about image locking, "dark matter" or "slim-fast" and can't contribute here anymore. Justin or Nicholas, will one of you pick this up?
> Perhaps part of the reason you can't reproduce this is that I very quickly reduced the
> camera resolution as a workaround.

I've been trying to reproduce with this line changed, as you suggested earlier.

> Also, I'd like someone else to take this bug.

I would really appreciate your continued assistance here; I don't even know if this bug still happens. I'm not asking you to debug the dark-matter problem; I just need some help because I cannot reproduce the issue. Are you willing to continue helping with that?
This bug is not reproducible in the gallery any more (the gallery now displays preview images instead of fullscreen images), so I'm removing the smoketest and reproducible keywords. If it were just me working on this, I'd close the bug now because I've done all I can on the Gaia side.

Justin, if you want to keep investigating, you might want a custom test app to do it. You could write a simple app with 4-6 images of 2-3 megapixels each and a single <img> element. Every second, have the app display a new image in the <img> element, and just keep looping while monitoring memory usage. If that doesn't crash, increase the image size and/or reduce the delay between swapping images. And if you still can't get it to crash, you could try turning image locking off again and see if that crashes it. (Months ago I think I wrote a test app kind of like this and may have attached it to a bugzilla bug. I can't find it anymore, but maybe your bugzilla searching skills are better than mine. I want to say I called it something like "thrash".)

On second thought, maybe you could reproduce this bug with the gallery app... You'll just want to load the sdcard with large images that do not have embedded previews. PNG screenshots of your desktop might work, for example.

Is there a pref you can set to test with image locking on and off, or is that something you'd need to rebuild Gecko to test?

Justin: if you don't plan to keep investigating, please close the bug.
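[Editorial note: a hypothetical core loop for the test app described above. File names and the element id are made up.]

// Cycle a handful of large images through one <img> once per second,
// while watching memory with: watch -n 1 'adb shell b2g-ps'
var urls = ['photo1.jpg', 'photo2.jpg', 'photo3.jpg', 'photo4.jpg'];
var img = document.getElementById('viewer');
var i = 0;
setInterval(function () {
  img.src = urls[i++ % urls.length];   // each swap triggers a fresh decode
}, 1000);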
> Is there a pref you can set to test with image locking on and off, or is that something
> you'd need to rebuild gecko to test?

It's a pref; see bug 807143.

FYI, all prefs are defined in .js files, in lines which start with "pref". You can therefore in general find prefs by doing something like

$ git grep 'pref.*locking' -- '*.js'

from your gecko clone.
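[Editorial note: for example, assuming the pref the grep turns up is image.mem.allow_locking_in_content_processes (confirm the name against bug 807143), it could be flipped from a profile's prefs file:]

// Hypothetical override; verify the pref name via the grep above.
user_pref("image.mem.allow_locking_in_content_processes", false);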
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WORKSFORME