Closed Bug 856670 Opened 12 years ago Closed 4 years ago

OOM crash in [@ js::RemapWrapper]

Categories

(Core :: JavaScript Engine, defect)

Version: 22 Branch
Type: defect
Priority: Not set
Severity: critical

Tracking


RESOLVED INCOMPLETE
Tracking Status
firefox21 --- unaffected
firefox22 + verified
firefox23 + verified
firefox27 - wontfix
firefox28 + wontfix
firefox29 + wontfix
firefox30 + wontfix
firefox31 --- wontfix
firefox32 --- wontfix
firefox33 --- wontfix
firefox47 --- wontfix
firefox48 --- wontfix
firefox49 --- wontfix
firefox-esr45 --- wontfix
firefox50 --- wontfix
firefox51 --- wontfix
firefox52 --- wontfix
firefox-esr52 --- wontfix
firefox53 --- wontfix
firefox54 --- wontfix

People

(Reporter: tracy, Unassigned)

References

Details

(Keywords: crash, regression)

Crash Data

Attachments

(1 file)

This bug was filed from the Socorro interface and is report bp-7cbd5b26-b166-4b64-a063-723972130401.
=============================================================
Might this be related to bug 854604?

https://crash-stats.mozilla.com/report/list?signature=js::RemapWrapper%28JSContext*,%20JSObject*,%20JSObject*%29

The mac crash doesn't have much of a stack - https://crash-stats.mozilla.com/report/index/d1097121-8dcf-429f-9471-6fc982130401 has a bit more info:

Frame  Module  Signature  Source
0  mozjs.dll  js::RemapWrapper  js/src/jswrapper.cpp:1105
1  mozjs.dll  js::Vector<js::WrapperValue,8,js::TempAllocPolicy>::convertToHeapStorage  obj-firefox/dist/include/js/Vector.h:644
2  mozjs.dll  js::RemapAllWrappersForObject  js/src/jswrapper.cpp:1143
3  mozjs.dll  JS_RefreshCrossCompartmentWrappers  js/src/jsapi.cpp:1729
4  xul.dll  nsGlobalWindow::SetNewDocument  dom/base/nsGlobalWindow.cpp:1832
5  xul.dll  nsCOMPtr<nsIDocument>::nsCOMPtr<nsIDocument>  obj-firefox/dist/include/nsCOMPtr.h:556
6  xul.dll  DocumentViewerImpl::Init  layout/base/nsDocumentViewer.cpp:686
7  xul.dll  nsDocShell::SetupNewViewer  docshell/base/nsDocShell.cpp:8107
8  xul.dll  nsDocShell::Embed  docshell/base/nsDocShell.cpp:6159
9  xul.dll  nsDocShell::CreateContentViewer  docshell/base/nsDocShell.cpp:7894
10  xul.dll  nsDSURIContentListener::DoContent  docshell/base/nsDSURIContentListener.cpp:122
11  xul.dll  nsDocumentOpenInfo::DispatchContent  uriloader/base/nsURILoader.cpp:360
12  xul.dll  nsJARChannel::OnStartRequest  modules/libjar/nsJARChannel.cpp:947
13  xul.dll  nsInputStreamPump::OnStateStart  netwerk/base/src/nsInputStreamPump.cpp:417
14  xul.dll  nsInputStreamPump::OnInputStreamReady  netwerk/base/src/nsInputStreamPump.cpp:368
15  xul.dll  nsInputStreamReadyEvent::Run  xpcom/io/nsStreamUtils.cpp:82
16  xul.dll  nsThread::ProcessNextEvent  xpcom/threads/nsThread.cpp:627
17  xul.dll  NS_ProcessNextEvent_P  obj-firefox/xpcom/build/nsThreadUtils.cpp:238
18  FWPUCLNT.DLL  FwpmIPsecTunnelDeleteByKey0
19  xul.dll  nsRunnableMethodImpl<tag_nsresult  obj-firefox/dist/include/nsThreadUtils.h:367
20  xul.dll  nsThread::ProcessNextEvent  xpcom/threads/nsThread.cpp:627
21  xul.dll  NS_ProcessNextEvent_P  obj-firefox/xpcom/build/nsThreadUtils.cpp:238
22  xul.dll  nsXULWindow::CreateNewContentWindow  xpfe/appshell/src/nsXULWindow.cpp:1805
23  xul.dll  nsXULWindow::CreateNewWindow  xpfe/appshell/src/nsXULWindow.cpp:1730
24  xul.dll  nsAppStartup::CreateChromeWindow2  toolkit/components/startup/nsAppStartup.cpp:697
25  xul.dll  nsWindowWatcher::OpenWindowInternal  embedding/components/windowwatcher/src/nsWindowWatcher.cpp:741
26  xul.dll  nsWindowWatcher::OpenWindow2  embedding/components/windowwatcher/src/nsWindowWatcher.cpp:472
27  xul.dll  nsGlobalWindow::OpenInternal  dom/base/nsGlobalWindow.cpp:9376
28  xul.dll  nsGlobalWindow::OpenNoNavigate  dom/base/nsGlobalWindow.cpp:5945
29  xul.dll  nsDocShell::InternalLoad  docshell/base/nsDocShell.cpp:8585
30  RapportTanzan19.DLL  RapportTanzan19.DLL@0x8ca2
31  @0x70ff061e

Here are some correlations:

js::RemapWrapper(JSContext*, JSObject*, JSObject*)|EXCEPTION_BREAKPOINT (45 crashes)
18% (8/45) vs. 2% (2316/136572) {2D3F3651-74B9-4795-BDEC-6DA2F431CB62}
16% (7/45) vs. 2% (3345/136572) {CAFEEFAC-0016-0000-0035-ABCDEFFEDCBA}
13% (6/45) vs. 2% (2723/136572) {BBDA0591-3099-440a-AA10-41764D9DB4DB}
13% (6/45) vs. 2% (2978/136572) {CAFEEFAC-0016-0000-0033-ABCDEFFEDCBA}
13% (6/45) vs. 3% (4039/136572) {EB9394A3-4AD6-4918-9537-31A1FD8E8EDF}
9% (4/45) vs. 1% (1268/136572) gophoto@gophoto.it
9% (4/45) vs. 1% (1532/136572) personas@christopher.beard (Personas, https://addons.mozilla.org/addon/10900)
11% (5/45) vs. 4% (5585/136572) {b9db16a4-6edc-47ec-a1f4-b86292ed211d} (Video DownloadHelper, https://addons.mozilla.org/addon/3006)
9% (4/45) vs. 2% (2947/136572) {CAFEEFAC-0016-0000-0037-ABCDEFFEDCBA}
9% (4/45) vs. 3% (4359/136572) {37964A3C-4EE8-47b1-8321-34DE2C39BA4D}
7% (3/45) vs. 1% (1800/136572) {23fcfd51-4958-4f00-80a3-ae97e717ed8b}
7% (3/45) vs. 2% (2199/136572) jqs@sun.com (Java Quick Starter, http://java.sun.com/javase/downloads/)

Norton is the top addon correlation, but hard to tell if that is really the issue. There is also a comment in a Mac crash report which mentions the Feedly extension.
Assignee: nobody → general
Component: XUL → JavaScript Engine
It's currently #6 top browser crasher in 22.0a1 and #2 in 23.0a1. It started spiking in 22.0a1/20130325 and exploded in 22.0a1/20130331. The regression ranges are:
* spike: http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=0a10eca0c521&tochange=3acbf951b3b1
* explosion: http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=8693d1d4c86d&tochange=1932c6f78248
Keywords: regression, topcrash
Version: unspecified → 22 Branch
Hm, so this is from those MOZ_CRASHes that billm put into the brain transplant code to just die if we fail along the way, as a security precaution. Here, it looks like cx->wrap() is failing, which is interesting. From the looks of it, either the prewrap hook (xpc::WrapperFactory::PrepareForWrapping) or the wrap hook (xpc::WrapperFactory::Rewrap) is failing. Unfortunately, those functions are big enough that I can't really posit too much of a guess as to where the problem lies. Some STR would be helpful here.
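To make the failure mode concrete, here is a rough, non-compilable sketch of the fail-fast pattern described above; the real code lives in js/src/jswrapper.cpp (see the annotate links later in this bug), and everything around the wrap()/MOZ_CRASH() pair here is illustrative only:

// Illustrative sketch of the brain-transplant remap path: abort the process
// if re-wrapping fails, rather than leaving a dangling cross-compartment
// wrapper behind (the security precaution added in bug 809295).
void RemapWrapperSketch(JSContext* cx, JSObject* wobj /* existing wrapper */,
                        JSObject* newTarget)
{
    // The wrapper's own compartment must re-wrap the new target before the
    // wrapper can be repointed at it.
    JSCompartment* wcompartment = js::GetObjectCompartment(wobj);

    JSObject* tobj = newTarget;

    // wrap() runs the prewrap hook (xpc::WrapperFactory::PrepareForWrapping)
    // and the wrap hook (xpc::WrapperFactory::Rewrap); either can fail, e.g.
    // on OOM or when the native stack check at the top of wrap trips.
    if (!wcompartment->wrap(cx, &tobj, wobj))
        MOZ_CRASH();   // die rather than continue with a half-transplanted wrapper

    // ...on success, wobj is repointed at the freshly wrapped tobj...
}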
Keywords: steps-wanted
Comments talk about websites with images like tumblr.
(In reply to Bobby Holley (:bholley) from comment #3) > Hm, so this is from those MOZ_CRASHes that billm put into the brain > transplant code to just die if we fail along the way, as a security > precaution. In which bug was that done?
It looks like the MOZ_CRASHes were added to RemapWrapper in bug 809295.
(which landed in 19)
It looks like something is exhausting the C stack here. We do stack overflow checks in the wrapping code, so that's probably what's failing.
More precisely, in all of the dumps that I looked at, the C stack has >1024 frames or else it's incomplete. Here's an example with a big stack: https://crash-stats.mozilla.com/report/index/8d335fea-8f4d-4a90-916a-15a442130403 Bobby pointed out that this is probably related to bug 856138.
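For context, the kind of native stack check mentioned above works by comparing the address of a local variable against a precomputed limit (roughly: stack base minus quota). Below is a tiny self-contained sketch of that idea; the helper names and the 4 KiB-per-frame padding are made up for illustration, and the real machinery is the JS_CHECK_*_RECURSION / js::GetNativeStackLimit code discussed further down:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical illustration of a native stack check. Stacks grow downward on
// the platforms discussed in this bug, so "too deep" means the address of a
// local has dropped below (stack base - quota).
static uintptr_t gStackLimit = 0;

void InitStackLimitForThisThread(std::size_t quotaBytes)
{
    int base;  // address of a local in the outermost frame approximates the stack base
    gStackLimit = reinterpret_cast<uintptr_t>(&base) - quotaBytes;
}

bool CheckStackSketch()
{
    int probe;
    return reinterpret_cast<uintptr_t>(&probe) > gStackLimit;  // true == still have room
}

int Recurse(int depth)
{
    if (!CheckStackSketch()) {
        std::printf("stack check failed at depth %d\n", depth);
        return depth;  // mirrors wrap() returning false, which upstream turns into MOZ_CRASH
    }
    volatile char pad[4096];  // burn some stack per frame so the demo trips quickly
    pad[0] = 0;
    int r = Recurse(depth + 1);
    return r + pad[0];  // use pad after the call to keep the frame (defeats tail-call optimization)
}

int main()
{
    InitStackLimitForThisThread(512 * 1024);  // 512 KiB quota, cf. the hardcoded value quoted below
    std::printf("gave up at depth %d\n", Recurse(0));
    return 0;
}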
David's crash shows up differently when we don't catch it in Breakpad and it winds up in the Apple Crash Reporter; it's a stack overflow there. That may explain why the stacks on crash-stats are kind of crazy.
I crashed the browser 3 times in a row visiting that site and only got 1 Firefox crash report. Apple caught the crash and gave me the following report: http://pastebin.mozilla.org/2275848
The stack overflow is from nsThread::Shutdown somehow recursing. I have no idea how that happens.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → DUPLICATE
These crashes occur on all platforms while bug 856138 is a Cocoa bug (Mac only). How can it be a duplicate?
Flags: needinfo?(benjamin)
I based the duplicate on comments 10-13. Those are definitely bug 856138, and bug 856138 is not necessarily Mac-only but probably shows up much more frequently on Mac.

The crash signature tracked here appears to be perhaps several different issues:

* http://hg.mozilla.org/releases/mozilla-release/annotate/6ffe3e9da8a8/js/src/jswrapper.cpp#l1105 is a non-recursive crash because compartment->wrap failed. There are several others like this.
* https://crash-stats.mozilla.com/report/index/b88c7f09-8c32-4ee5-9ce8-77e0d2130405 is ultimately due to infinite recursion from nsDocShell::EnsureContentViewer.
* https://crash-stats.mozilla.com/report/index/c50911d3-08fc-4c8c-912e-6a46e2130405 is probably out-of-stack due to bug 856138.
* https://crash-stats.mozilla.com/report/index/a16694b7-99f6-4ff6-a3ea-ef30d2130405 appears to be recursion in nsGlobalWindow::OpenNoNavigate...

So many of these may be out-of-stack crashes, but with a lot of different causes. I wonder what makes JSCompartment::wrap particularly sensitive to stack growth?
Status: RESOLVED → REOPENED
Flags: needinfo?(benjamin)
Resolution: DUPLICATE → ---
<bholley> bsmedberg: we do a native stack check
<bholley> bsmedberg: at the top of wrap
<bholley> bsmedberg: JS_CHECK_CHROME_RECURSION or somesuch
<bsmedberg> bholley: does it use the actual native stack size, or is it hard-configured?
<bholley> bsmedberg: http://mxr.mozilla.org/mozilla-central/source/js/src/jsfriendapi.h#614
<bholley> bsmedberg: looks like it calls into js::GetNativeStackLimit
<bholley> bsmedberg: Waldo would know more
<bsmedberg> 128 * sizeof(size_t) * 1024
<bsmedberg> that's a nicely hardcoded number

http://hg.mozilla.org/mozilla-central/annotate/55f9e3e3dae7/js/xpconnect/src/XPCJSRuntime.cpp#l2705

Since that hardcoded 512k is well below the default stack size of 1MB on Windows, I suspect that we're not actually running out of stack space. In some of these cases we will eventually crash due to infinite recursion anyway, but in at least some of the cases here we're just "really deep" in normal stack calls, and we should probably bump the default JS_SetNativeStackQuota to 1MB on Windows at least. I'm not sure what the default stack size is on *nix.
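As a sanity check on the arithmetic in that IRC excerpt, here is a tiny standalone snippet showing what the hardcoded formula works out to on 32-bit vs. 64-bit builds, next to the 1 MB default Windows thread stack mentioned above (numbers only; this is not the XPConnect code itself):

#include <cstdint>
#include <cstdio>

int main()
{
    // The quoted quota is 128 * sizeof(size_t) * 1024; sizeof(size_t) is 4 on
    // 32-bit builds and 8 on 64-bit builds.
    const size_t quota32 = 128 * sizeof(uint32_t) * 1024;   // 512 KiB
    const size_t quota64 = 128 * sizeof(uint64_t) * 1024;   // 1024 KiB
    const size_t winDefaultStack = 1024 * 1024;             // 1 MiB default Windows thread stack

    std::printf("32-bit quota: %zu KiB\n", quota32 / 1024);
    std::printf("64-bit quota: %zu KiB\n", quota64 / 1024);
    std::printf("Windows default stack: %zu KiB\n", winDefaultStack / 1024);
    return 0;
}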
Why do we need so much stack space? All the stacks I've seen look really repetitive. Could we restructure them to use loops?
Which ones? Some are obvious infinite-recursions that we need to diagnose and fix anyway. But https://crash-stats.mozilla.com/report/index/d1097121-8dcf-429f-9471-6fc982130401 probably isn't, and although the crash-stats report is busted, it's clear that Trusteer Rapport is on the stack and may be responsible for some or a lot of the stack usage. https://crash-stats.mozilla.com/report/index/9cc45c3e-9762-461b-92d5-0343d2130405 is similar with Norton toolbar. And as noted, yes we should at least fix some of these in bug 856138 by using a loop, but I'm not sure that there are other generic solutions that are obvious from these crash reports. Making the JS stack size match the native stack size seems like an obvious ameliorating factor, though.
Depends on: 856138
OK, I was mainly thinking about bug 856138. That sounds reasonable.
FWIW, I was hitting this loading http://bit.ly/ZGT6Mj
Bill, are you investigating this from the JS side of things? What are the next steps here?
Assignee: general → wmccloskey
I just hit this when I was approving stuff on bill.com, hit next on the bill window too fast and Aurora crashed. https://crash-stats.mozilla.com/report/index/bp-82c6d313-54c3-4243-9694-821bf2130411
Crashes have almost completely stopped since 23.0a1/20130405. The working range is: http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=c232bec6974d&tochange=55f9e3e3dae7
(In reply to Scoobidiver from comment #25) > Crashes have almost completely stopped since 23.0a1/20130405. Any candidate fix to uplift to Aurora?
For 22 this is #2 on the top crash list. It's dropping fast down the 23 top crash list (#5 -> #35).
I looked through the comment 25 "fix" range and I did not see any obvious bug summaries. Possible candidates would include:
* reducing the size of a thread pool, especially the thread pool for indexeddb or DOM workers
* significantly reducing the size of stack frames here, although I don't know how we'd figure that out
* increasing the stack limit in the JS engine, which is what I believe billm is going to do in this bug and it doesn't look done yet.

So perhaps KaiRo you could ask in tomorrow's engineering meeting if people are aware of anything in the range that might be responsible for this going away?
This came up during the dev meeting. We'd really like to know the answer to comment 28 (which remained unanswered in the meeting).
Nothing looks obvious to me. I usually don't find anything in these regression ranges.
Are STR our only option moving forward here? I don't want to send QA on a wild goose chase, so is there anything we can land speculatively or to probe the issue further?
Flags: needinfo?(wmccloskey)
The obvious step forward here is to increase the JS stack size limit to match the native stack size.
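For reference, the knob being discussed is JS_SetNativeStackQuota (named in the IRC excerpt above). Below is a hedged sketch of what "match the native stack size" could look like at runtime setup; the numbers are illustrative, the exact signature varies by SpiderMonkey version, and this is not the patch attached below:

#include "jsapi.h"

// Sketch only: raise the JS engine's native stack quota toward the real
// thread stack size, keeping some headroom so the over-recursion error can
// still be reported before the OS stack actually runs out.
void ConfigureStackQuota(JSRuntime* rt)
{
#if defined(XP_WIN)
    // Default Windows thread stack is 1 MB; leave ~128 KB of headroom.
    const size_t kStackQuota = 1024 * 1024 - 128 * 1024;
#else
    const size_t kStackQuota = 128 * sizeof(size_t) * 1024;  // the old hardcoded value
#endif
    JS_SetNativeStackQuota(rt, kStackQuota);
}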
Attached patch patch to increase stack size (deleted) — — Splinter Review
This makes the stack bigger on the platforms where we crash. I'm pretty skeptical that it's going to work--the stack in bug 856138 had over 27,000 frames on it, so a 1MB stack on Windows doesn't seem like enough. I guess it's worth a shot, though.

This bug used to be reproducible (bug 856138 and Dolske's comment), so it seems like we have a pretty good chance at getting STR. However, even without STR, it seems like we could do more to investigate this. From bug 856138 comment 4, it sounds like we have a really gigantic thread pool. The Windows crash reports all seem to have ~2400 threads; for some reason the Mac reports only have a normal number. Could we come up with a diagnostic patch to crash when we start creating too many threads? If we got a stack there, it might tell us who's creating them.
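On the diagnostic idea in the last paragraph, here is a minimal sketch of what "crash when we start creating too many threads" could look like; the counter, the threshold, and the hook points are all hypothetical, the point is just that the resulting crash stack would show who is spawning the threads:

#include <atomic>
#include "mozilla/Assertions.h"  // MOZ_CRASH

// Hypothetical diagnostic: count the threads we create and abort loudly once
// the count becomes absurd, so the crash report captures the caller.
static std::atomic<int> gLiveThreadCount(0);
static const int kThreadCountCrashThreshold = 1000;  // hypothetical threshold

void NoteThreadCreated()
{
    if (++gLiveThreadCount > kThreadCountCrashThreshold) {
        // Deliberate crash: the stack here points at whoever is responsible
        // for the runaway thread creation seen in the Windows crash reports.
        MOZ_CRASH();
    }
}

void NoteThreadDestroyed()
{
    --gLiveThreadCount;
}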
Attachment #741568 - Flags: review?(benjamin)
Flags: needinfo?(wmccloskey)
Attachment #741568 - Flags: review?(benjamin) → review+
(In reply to Bill McCloskey (:billm) from comment #33) > This bug used to be reproducible (bug 856138 and Dolske's comment). FWIW, the URL in comment 21 no longer crashes for me.
(In reply to Justin Dolske [:Dolske] from comment #34) > (In reply to Bill McCloskey (:billm) from comment #33) > > > This bug used to be reproducible (bug 856138 and Dolske's comment). > > FWIW, the URL in comment 21 no longer crashes for me. Would you mind checking Aurora? That's where we're getting crashes right now.
Comment on attachment 741568 [details] [diff] [review]
patch to increase stack size

We're hoping this patch fixes a crash that's only on Aurora, so we need to land there.

[Approval Request Comment]
Bug caused by (feature/regressing bug #): unknown
User impact if declined:
Testing completed (on m-c, etc.): On m-c
Risk to taking this patch (and alternatives if risky): Very low. May take longer for stack overflow errors to be reported?
String or IDL/UUID changes made by this patch: None
Attachment #741568 - Flags: approval-mozilla-aurora?
Note: the crash is not only on Aurora, it's just more prevalent on Aurora. I grabbed some data from 18-April.

Thread counts for all crashes (excluding crashes with 0 or 1 threads because that's bogus):
Mean: 32
Stddev: 40
Total with more than 154 threads: 538 (0.1%)

Thread counts for bug 856670:
Mean: 581
Stddev: 996.597997709
Total with more than 154 threads: 57 (23.8%)

It's clear that about 25% of this crash (at least) involves huge numbers of threads, probably combined with the stupid nsThreadPool::Shutdown behavior in bug 856138.

Apparently some of this may have been caused by bug 855923, which just landed on -central and is not in the candidate fix range. However, if I'm reading bug 716140 correctly, multithreaded image decoding did land for 22/aurora, and so that could be the cause, or a cause, of that 25%. Bug 855923 seems like a low-risk candidate fix for Aurora.

It is very difficult to classify the rest of the crashes in this bug because stackwalk is broken most of the time. I don't know whether that is because of JIT code or other problems, but I'm going to load a few minidumps directly to see if there is better dump info.
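For transparency about how figures like the mean/stddev/threshold numbers above are derived, here is a small self-contained sketch of the computation over a list of per-crash thread counts; the input values are made up, the real data comes from Socorro:

#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical per-crash thread counts (one entry per crash report).
    std::vector<double> threads = {30, 28, 35, 2400, 33, 1800, 31, 29};

    double sum = 0, sumSq = 0, over154 = 0;
    for (double t : threads) {
        sum += t;
        sumSq += t * t;
        if (t > 154)
            ++over154;
    }
    const double n = threads.size();
    const double mean = sum / n;
    const double stddev = std::sqrt(sumSq / n - mean * mean);  // population stddev

    std::printf("mean=%.1f stddev=%.1f over-154=%.1f%%\n",
                mean, stddev, 100.0 * over154 / n);
    return 0;
}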
Comment on attachment 741568 [details] [diff] [review]
patch to increase stack size

It's a #2 top-crasher on Aurora. Approving the patch to see if we can speculatively lower the crash volume.
Attachment #741568 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
It seems to have almost completely stopped after 22.0a2/20130422. The working range is: http://hg.mozilla.org/releases/mozilla-aurora/pushloghtml?fromchange=5ab3070be609&tochange=e71fde65f5e9 It has been fixed by the patch in bug 854803. Should we back out the Aurora patch?
(In reply to Scoobidiver from comment #42) > It has been fixed by the patch of bug 854803. At least that would also match the trunk range where this crash signature dropped sharply. You said in comment #23 that it stopped on trunk with the 4/5 build, and bug 854803 landed on m-c on 4/4, so that matches nicely.
Flags: needinfo?(bbajaj)
Benjamin, I'm not sure about the needinfo request here for me. Can you please clarify what exactly is needed from me here?
Flags: needinfo?(bbajaj)
Bhavana, to answer scoobidiver's question in comment 42.
Flags: needinfo?(bbajaj)
Remaining crashes have a low volume: #106 in 23.0a1.
Assignee: wmccloskey → general
Bill, it looks like the patch in bug 854803 fixed the crashes here, so do you think we can back out the patch that was landed speculatively in this bug? If so, we should do it ASAP before Fx22 beta 2 goes to build (Tuesday).
Flags: needinfo?(bbajaj)
I don't think there is a reason to back out this patch. It's not especially risky, and it's a good long-term fix in any case.
Keywords: verifyme
I wasn't able to reproduce the crash with any of the suggestions from comment 10, comment 21, or comment 24. In Socorro, there seem to be a lot of crashes reported within the last month, on Fx 23 beta 3, beta 4, beta 5, beta 6, and also in 24.0a2 and 25.0a1: https://crash-stats.mozilla.com/report/list?product=Firefox&query_search=signature&query_type=contains&reason_type=contains&date=2013-07-19&range_value=28&range_unit=days&hang_type=any&process_type=any&signature=js%3A%3ARemapWrapper%28JSContext%2A%2C+JSObject%2A%2C+JSObject%2A%29 Does anyone have any thoughts/suggestions?
Flags: needinfo?
QA Contact: manuela.muntean
(In reply to Manuela Muntean [:Manuela] [QA] from comment #49) > Does anyone have any thoughts/suggestions? The patch was intended to lower the crash volume (was #6 crasher in 22.0). Did it?
Flags: needinfo?
(In reply to Scoobidiver from comment #50)
> (In reply to Manuela Muntean [:Manuela] [QA] from comment #49)
> > Does anyone have any thoughts/suggestions?
> The patch was intended to lower the crash volume (was #6 crasher in 22.0).
> Did it?

I think the crash volume is lower, but there are still 27 crashes with Firefox 23 beta 6. Let's see what happens with beta 7.
(In reply to Manuela Muntean [:Manuela] [QA] from comment #51) > I think the crash volume is lower Clearly as it's #110 browser crasher in 22.0 and #173 in 23.0b6.
While checking again in Socorro, the most recent crashes on 24.0a2 and 25.0a1 are dated 2013-07-17, one day before Fx 23 beta 7 appeared (build ID: 20130718163513). There aren't any crashes with 23.0b7. I'm marking this as verified fixed, since there aren't any crashes after 2013-07-17. https://crash-stats.mozilla.com/report/list?product=Firefox&query_search=signature&query_type=contains&reason_type=contains&date=2013-07-19&range_value=28&range_unit=days&hang_type=any&process_type=any&signature=js%3A%3ARemapWrapper%28JSContext%2A%2C+JSObject%2A%2C+JSObject%2A%29
(In reply to Manuela Muntean [:Manuela] [QA] from comment #53) > I'm marking this as verified fixed, due to the fact that there aren't any > crashes after 2013-07-17. With a cutoff date set to July, you won't see recent crashes. What about those crash reports: bp-6fa988c2-3a2f-4461-b454-3da4a2130723 or https://crash-stats.mozilla.com/report/list?version=Firefox:23.0b7&signature=js::RemapWrapper%28JSContext*,%20JSObject*,%20JSObject*%29?
Thanks Scoobidiver! When I checked before, the crashes on 23.0b7 weren't listed.....weird. Seems like this issue is still reproducing.....
Those are crashes on this line:

bobbyholley@143946  936    if (!wcompartment->wrap(cx, &tobj, wobj))
wmccloskey@112849   937        MOZ_CRASH();
This is #3 topcrash on FF27 release right now (while 10% throttled). Bholley - any changes lately that might have brought this back to the fore?
Flags: needinfo?(bobbyholley)
Keywords: topcrash
I think this is probably related to bug 969441.
Flags: needinfo?(bobbyholley)
There's nothing at this point that we can verify might help in the 27.0.1 dot release we're doing (driven by stability bug 934509), so not tracking this issue for FF27.
Since this is a critical issue though, and might become a high-volume crash on later versions, we will track it for those and see if we can get more eyes on this.
Naveed, can you get someone assigned to look into this and bug 969441? This looks like it's been a recent regression (as of FF27) so getting the regression range and a possible backout/fix should be more likely sooner than later.
Flags: needinfo?(nihsanullah)
Me: bp-ea57beb7-48bc-4174-8ed6-235f42140217, 2014-02-14 nightly build. I had several links open, one being http://hiltongardeninn3.hilton.com/en/hotels/pennsylvania/hilton-garden-inn-philadelphia-center-city-PHLGIGI/attractions/index.html which is cited in the crash report. I was bebopping around areas of that site just before the crash, then clicked https://www.chorusamerica.org/about/press-room in Thunderbird.
Hey Chris, we're going to have to wontfix this for FF28 but I hear you're working with the JS folks on prioritizing bugs - would be great to see some attention on this for FF29.
Flags: needinfo?(cpeterson)
bholley says his fix for related bug 969441 will probably fix this crash too. The patch is very low risk and should be uplifted to Aurora 29 if it works.
Flags: needinfo?(nihsanullah)
Flags: needinfo?(cpeterson)
The bug 969441 checkin on 2014-03-07 does not seem to have helped me: bp-bdae48ce-d621-458f-863e-62cb12140313. Looking at crash stats for nightly builds of the past 3 weeks, I don't see the ranking having changed substantially since the checkin.
Wayne, 29 beta4 will have a new patch in bug 969441 (https://hg.mozilla.org/releases/mozilla-beta/rev/26d605510c95). Once this beta is released (probably today), could you check if this bug still occurs? Thanks
Flags: needinfo?(vseerror)
(In reply to Sylvestre Ledru [:sylvestre] from comment #70)
> Wayne, 29 beta4 will have a new patch in bug 969441
> (https://hg.mozilla.org/releases/mozilla-beta/rev/26d605510c95). Once this
> beta is released (probably today), could you check if this bug still occurs?
> Thanks

I was not successful in several attempts at reproducing the crash per comment 66 with nightly builds from 03-31 and 04-02, so I am unable to confirm a change in crash behavior based on the checkin from bug 969441.
Flags: needinfo?(vseerror)
Kairo - this is now at #14 topcrash, it seems to come and go in severity, do we have anything else to work with (dlls, addons)?
Flags: needinfo?(kairo)
Correlations for Firefox 29.0.1 Windows NT:
 88% (770/879) vs. 27% (27660/103425) WindowsCodecs.dll
100% (878/879) vs. 42% (43598/103425) cscapi.dll
100% (878/879) vs. 43% (44314/103425) linkinfo.dll
 97% (849/879) vs. 40% (41266/103425) mf.dll
 96% (848/879) vs. 40% (41219/103425) mfreadwrite.dll
100% (878/879) vs. 44% (45652/103425) ntshrui.dll
 97% (853/879) vs. 42% (42959/103425) mfplat.dll
 97% (849/879) vs. 42% (42978/103425) dxva2.dll
 93% (820/879) vs. 39% (39849/103425) ksuser.dll
 92% (809/879) vs. 38% (39384/103425) dhcpcsvc6.DLL

Correlations for Firefox 30.0b Windows NT:
 99% (205/207) vs. 40% (13329/33284) cscapi.dll
 99% (205/207) vs. 41% (13665/33284) linkinfo.dll
 82% (169/207) vs. 24% (8087/33284) WindowsCodecs.dll
100% (206/207) vs. 42% (14134/33284) ntshrui.dll
 89% (184/207) vs. 32% (10702/33284) dhcpcsvc6.DLL
 92% (191/207) vs. 36% (12011/33284) mfreadwrite.dll
 92% (191/207) vs. 36% (12021/33284) mf.dll
 92% (191/207) vs. 37% (12226/33284) mfplat.dll
 92% (191/207) vs. 38% (12707/33284) dxva2.dll
 89% (184/207) vs. 35% (11783/33284) ksuser.dll
 87% (180/207) vs. 34% (11301/33284) slc.dll
 92% (191/207) vs. 40% (13424/33284) avrt.dll
 89% (184/207) vs. 39% (12886/33284) dhcpcsvc.dll

URLs don't say much:
288 https://www.facebook.com/
233 about:blank
 87 about:newtab
 27 https://twitter.com/
[...]

Crash addresses are all over the place, but the majority seems to end in 0x9203. The signature generally happens across all platforms (Win/Mac/Android), products (Firefox, Thunderbird, SeaMonkey) and architectures (x86, x86_64, arm). Comments imply that a number of those people are crashing a lot, but data says most installations crash only once with this.

See https://crash-stats.mozilla.com/report/list?signature=js%3A%3ARemapWrapper%28JSContext%2A%2C+JSObject%2A%2C+JSObject%2A%29 for more data.
Flags: needinfo?(kairo)
(In reply to Lukas Blakk [:lsblakk] from comment #73) > Kairo - this is now at #14 topcrash, it seems to come and go in severity, do > we have anything else to work with (dlls, addons)? Also, I don't see it come and go, from all I see, it's pretty stable between releases and hovering around #10-#15.
Leaving this open and marked affected for 31/32 but with nothing to go on, this is unfortunately a wontfix for 30.
It happened to me now on FF30 b9 bp-eb8efa4b-36ae-45af-81d0-84d542140604
(In reply to Fernando Hartmann from comment #77) > It happened to me now on FF30 b9 bp-eb8efa4b-36ae-45af-81d0-84d542140604 Is there any information you can share about what you were doing before this crash occurred?
(In reply to Anthony Hughes, QA Mentor (:ashughes) from comment #78)
> Is there any information you can share about what you were doing before this
> crash occurred?

I was using FF as always and everything appeared normal. Then I left my computer for a while, and when I returned the FF crash report window was in the middle of my screen. Unfortunately I don't have more information to help.
I crash almost daily on this in Nightly; it happens on the first start after updating. I'll try to reduce the profile and see if I can find an explicit cause.
I'm tagging this unactionable since we still have no leads. Present rank is #17 on Release, #40 on Beta, #47 on Aurora, and #70 on Nightly.
Whiteboard: [leave open] → [leave open][unactionable]
The decline between release (30) and beta (31) was visible between 30 and 31 on beta as well and could possibly be related to GGC.
I hit this locally, and have a patch that fixes the crash I encountered. It's related to document.domain though, so it won't handle the case in comment 77. I'll file a dependent bug and we can see if it has any impact on the crash volume here.
Depends on: 1040181
Stats for the last week:
* Firefox 33 has 30 crashes, ranked 62nd
* Firefox 32 has 46 crashes, ranked 27th
* Firefox 31 has 1425 crashes, ranked 21st

This will likely remain an issue once we release Firefox 31 but no longer seems to be a topcrash in Aurora or Nightly. We should continue to track this for the time being.
(In reply to Anthony Hughes, QA Mentor (:ashughes) from comment #84) > This will likely remain an issue once we release Firefox 31 but no longer > seems to be a topcrash in Aurora or Nightly. We should continue to track > this for the time being. IIRC, this increased in volume when we disabled GGC, so I think at least some of those crashes change signature when GGC is on.
Assignee: general → nobody
My SO's Firefox beta 32.0 crashed like this: bp-506dcc86-66d4-4842-9adb-878842140801 01/08/2014 12:24 p.m. Commenting so we are notified if this gets fixed.
Currently the #3 topcrash for Firefox 34.0b7 with 464/28000 crashes. Affected builds: 2014102715, 2014103017, 2014110314, 2014110522, 2014110620. These look like the buildids for various beta releases. For beta 4, each platform had a different buildid because of a glitch in the build process. Most of the crash signatures for beta 4 were on the win32 build, 20141027152126.
(In reply to Robert Kaiser (:kairo@mozilla.com) - on vacation or slow to reply until the end of June from comment #74)
>
> Comments imply that a number of those people are crashing a lot, but data
> says most installations crash only once with this.
>
> See
> https://crash-stats.mozilla.com/report/list?signature=js%3A%3ARemapWrapper%28JSContext%2A%2C+JSObject%2A%2C+JSObject%2A%29
> for more data.

I am one of those people who have Firefox crashing a lot. It may be caused partially by my extra-huge session (the number of tabs easily exceeds 100, maybe even 1000).

A feature request, by the way: currently, "switch to tab" means that I have no way of going back to the previous tab I was on. It would be a useful feature, after "switch to tab" is done, to have somewhere a way to "switch back to where I was before switch-to-tab was done".

Lately, instead of crashing, Firefox just freezes, using 25% CPU and with growing memory usage (in normal usage it's 1+GB; sometimes 2+GB; when freezing, it may reach 3+GB). I kill Firefox, then in Restore Session remove one or two tabs, and ten to twenty minutes later I have a functioning Firefox back.

I am on the ESR channel (the one with versions jumping by 7), currently on 31.7.0.

Here is the list of the latest crashes I had during the last year, as my about:crashes page says:
bp-a8640f01-419d-42b2-bd9e-fc4672150602 02/06/2015 13:26
bp-f0074ea7-43cc-41bb-b45e-840412150518 18/05/2015 12:17
bp-4365ee88-5082-4cba-83be-f20062150509 10/05/2015 08:40
bp-21bf2110-4580-4e43-a7aa-12aae2150507 08/05/2015 09:59
bp-639abbcb-9629-46c4-b7be-d12b32150423 23/04/2015 16:09
1fcbfadc-3d3f-4bcf-8da7-30226b8f2214 14/04/2015 22:53
bp-1f0d83b2-e287-454a-b525-c76bf2150413 13/04/2015 15:45
bp-bcf0ef6f-d61f-43ff-a1d0-ce4842150413 13/04/2015 11:11
bp-a9228812-3537-4fdd-8b76-edc8f2150413 13/04/2015 11:02
bp-1d4ced23-42cd-4aae-acde-cfc602150413 13/04/2015 10:56
63bd69da-2a53-4e38-b710-01d2c8cefa5f 13/04/2015 09:22
bp-43381a3e-0708-4b7d-8f5a-2da922150407 08/04/2015 08:48
bp-976b291b-1eff-4c48-a007-65a372150407 08/04/2015 08:46
2514a9ac-8a41-4c56-8d70-a8de5b6943c9 08/04/2015 08:36
bp-cf3d2e8a-a810-4a89-9a37-b979b2150313 13/03/2015 12:09
bp-1ec70c9a-0da5-410b-a753-394902150313 13/03/2015 12:08
bp-95f2019a-20df-468d-a345-8fd082150313 13/03/2015 11:44
bp-64a6b9e8-49ae-4b93-9a1c-7e5452150313 13/03/2015 11:24
bp-b8e098d8-9630-4b6e-bb8a-940252150313 13/03/2015 11:21
bp-90a54268-7f06-424e-b0f7-b00c82150312 13/03/2015 09:18
bp-71960ac4-1176-44a9-bd45-0db902150312 13/03/2015 09:17
bp-5d217893-0fb6-4455-b1e3-d3cec2150312 13/03/2015 09:16
bp-9f3fb472-cfe2-47ea-a1ca-a4e4b2150310 11/03/2015 10:46
5f0452a1-6e5a-4e30-8125-ebd5bc9156bf 06/03/2015 13:06
c48cf347-2460-492c-a4f5-0f58179dd0bc 27/02/2015 17:04
bp-b5a41ecf-bbea-4145-bebc-cc85e2150224 24/02/2015 14:30
bp-51363a48-554d-4969-827c-ad6732150224 24/02/2015 14:24
bp-908d415d-f124-41a4-92f1-a5faa2150224 24/02/2015 14:22
bp-f008ef84-b7bd-4b58-b8f7-ba3712150224 24/02/2015 14:16
bp-d226c3db-9131-4b99-8312-e48eb2150223 23/02/2015 16:31
bp-15c60500-74b6-4703-83c8-a66772150223 23/02/2015 16:29
11d6993e-3882-455b-92a5-49bcb2b0d63e 23/02/2015 10:50
bp-2104e76c-0a2b-424e-bf66-57a442150210 10/02/2015 11:17
0629590f-f7e7-488b-bb29-c5c2ed769e78 28/01/2015 10:23
9baa75a4-4dd9-49d0-a295-ca611990bc09 17/01/2015 08:46
77429118-b3f9-411c-bdba-4f7bf3088af9 16/01/2015 18:34
4aaf18af-c263-44f7-bd5a-c5632442aa8d 11/01/2015 09:56
78732ba9-8d1b-492d-b2f5-64d192cd35b2 06/01/2015 09:20
(In reply to Wikiwide from comment #88)
> I (am one of those people who) have Firefox crashing a lot. It may be caused
> partially by my extra-huge session (number of tabs easily exceeds 100, maybe
> even 1000).

I don't think it's reasonable to have 100+ tabs open, much less 1000, especially on an x86 system whose memory limit, even with 4GB installed, is about 3.1 GB usable if *only* FF is open. No amount of code efficiency can make FF stable in such a case. This isn't an outlier or extreme case; you're asking FF to do more than it is capable of doing with these memory limits.

That being said, one of the crashing modules (IPSLdr32.dll 14.1.2.8, appears to belong to Norton 360) is likely being starved of memory and just giving up. Your whole system is likely being starved of memory and you, not FF, are creating the perfect storm for this crashing behavior.

You may want to get hold of a 64-bit OS, considering you have an i7-3770K, and move to 64-bit FF 39.0 when it is released. Even some of the newer 32-bit FF versions have better memory handling, but in your extreme case nothing but a 64-bit OS will help you IMHO.
@Arthur K.: sorry, about the memory usage in Firefox, can you take a look here please? https://bugzilla.mozilla.org/show_bug.cgi?id=1152973 There are attachments. Thanks.
Crash Signature: [@ js::RemapWrapper(JSContext*, JSObject*, JSObject*)] → [@ js::RemapWrapper(JSContext*, JSObject*, JSObject*)] [@ js::RemapWrapper]
Crash volume for signature 'js::RemapWrapper':
- nightly (version 50): 17 crashes from 2016-06-06.
- aurora (version 49): 139 crashes from 2016-06-07.
- beta (version 48): 1980 crashes from 2016-06-06.
- release (version 47): 9908 crashes from 2016-05-31.
- esr (version 45): 504 crashes from 2016-04-07.

Crash volume on the last weeks:
             Week N-1  Week N-2  Week N-3  Week N-4  Week N-5  Week N-6  Week N-7
- nightly           2         7         0         4         1         2         1
- aurora           23        20        26        28        17        14         5
- beta            348       288       264       287       312       301        81
- release        1622      1556      1408      1481      1550      1363       445
- esr              66        52        63        58        57        43        33

Affected platforms: Windows, Mac OS X
The following search indicates that this is OOM-related:
https://crash-stats.mozilla.com/search/?signature=%3Djs%3A%3ARemapWrapper&_sort=-date&_facets=signature&_facets=contains_memory_report&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform#facet-signature

Of the 2666 occurrences of crashes with the signature "[@ js::RemapWrapper]" in the past 7 days, 1709 of them have a ContainsMemoryReport=1 field, which indicates that memory was low near the time of crash. (See the "Contains memory report" facet in the search output.)
(In reply to Nicholas Nethercote [:njn] from comment #92) > Of the 2666 occurrences of crashes with the signature "[@ js::RemapWrapper]" > in the past 7 days, 1709 of them have a ContainsMemoryReport=1 field, which > indicates that memory was low near the time of crash. (See the "Contains > memory report" facet in the search output.) Out of curiosity, what is the over all rate of having that field set? Maybe most people just are low on memory.
That said, it does look like this is an OOM crash. I looked at a few crashes, and they were happening on this MOZ_CRASH():

    if (!wcompartment->wrap(cx, &tobj, wobj))
        MOZ_CRASH();

We could annotate these crashes to confirm that more easily. I looked at about a dozen reports and they were all in SetNewDocument().
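On "we could annotate these crashes to confirm that more easily": one low-effort option (a sketch, assuming a MOZ_CRASH variant that accepts an explanation string; the message text is hypothetical) is to make the abort carry the reason, so Socorro can separate the OOM-ish wrap failure from other causes:

    // Sketch: replace the bare abort with an annotated one so the crash report
    // says why wrap() failed. The message wording is hypothetical.
    if (!wcompartment->wrap(cx, &tobj, wobj))
        MOZ_CRASH("RemapWrapper: compartment->wrap failed (likely OOM)");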
(In reply to Andrew McCreight [:mccr8] from comment #93) > (In reply to Nicholas Nethercote [:njn] from comment #92) > > Of the 2666 occurrences of crashes with the signature "[@ js::RemapWrapper]" > > in the past 7 days, 1709 of them have a ContainsMemoryReport=1 field, which > > indicates that memory was low near the time of crash. (See the "Contains > > memory report" facet in the search output.) > > Out of curiosity, what is the over all rate of having that field set? Maybe > most people just are low on memory. 4.9% of all crash reports from the past week have ContainsMemoryReport=1. Of crash reports with this signature, the fraction is 64.1%. So it's very suggestive.
I'm guessing there is still nothing we can do here?
(In reply to David Bolter [:davidb] from comment #96) > I'm guessing there is still nothing we can do here? This is just an OOM crash.
Summary: Firefox crash [@ js::RemapWrapper] → OOM crash in [@ js::RemapWrapper]
Crash volume for signature 'js::RemapWrapper':
- nightly (version 54): 0 crashes from 2017-01-23.
- aurora (version 53): 1 crash from 2017-01-23.
- beta (version 52): 19 crashes from 2017-01-23.
- release (version 51): 63 crashes from 2017-01-16.
- esr (version 45): 2341 crashes from 2016-08-03.

Crash volume on the last weeks (Week N is from 01-30 to 02-05):
            W. N-1  W. N-2  W. N-3  W. N-4  W. N-5  W. N-6  W. N-7
- nightly        0
- aurora         0
- beta           9
- release       27       0
- esr          120     109     120     105      79      91     131

Affected platforms: Windows, Mac OS X

Crash rank on the last 7 days:
            Browser  Content  Plugin
- nightly
- aurora      #272
- beta        #576     #589
- release     #508     #491
- esr         #130
(In reply to Andrew McCreight [:mccr8] from comment #97)
> (In reply to David Bolter [:davidb] from comment #96)
> > I'm guessing there is still nothing we can do here?
>
> This is just an OOM crash.

Indeed. Breaking these crashes down by total virtual memory:

1  2147352576 (~2 GB)  574  76.43 %
2  4294836224 (~4 GB)  168  22.37 %
Keywords: topcrash

Half the crashes of the last 6 months are on version 60.x; the other half are on later 6x versions. What's left is six scattered version 8x crashes, and no version 7x crashes at all.

This would seem to be not actionable.

Status: REOPENED → RESOLVED
Closed: 12 years ago4 years ago
Resolution: --- → INCOMPLETE