Bug 865960: JS OOM should throw instead of silently stopping execution
Opened 12 years ago, closed 12 years ago
Categories: Core :: JavaScript Engine (defect)
Status: RESOLVED FIXED
Target Milestone: mozilla23
People: Reporter: vlad, Assignee: luke
Attachments (2 files, 1 obsolete file)
- (deleted), text/html
- (deleted), patch (billm: review+)
Description
The attached testcase just allocates a 1 GB ArrayBuffer. Upon reload, it should throw an out-of-memory exception (on a 32-bit build; you might need another reload if you're very lucky). Instead, JS execution just silently fails. This is bad.
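A minimal, hypothetical sketch of the kind of testcase described (the actual attachment isn't reproduced here):

    // Hypothetical stand-in for the attached testcase: request a 1 GiB ArrayBuffer.
    // On a 32-bit build the underlying allocation can fail; before this fix, that
    // failure silently stopped the script instead of surfacing an out-of-memory error.
    var buf = new ArrayBuffer(1024 * 1024 * 1024);
    // Only reached if the allocation succeeded.
    console.log("allocated " + buf.byteLength + " bytes");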
Comment 1 • 12 years ago (Assignee)
Can reproduce on 64-bit by just making AllocateArrayBufferContents fail when nbytes > 1GB.
Assignee: general → luke
Comment 2 • 12 years ago (Assignee)
Well it's easy to see why nothing is thrown. Despite this comment in js_ReportOutOfMemory:
/*
* We clear a pending exception, if any, now so the hook can replace the
* out-of-memory error by a script-catchable exception.
*/
NS_ScriptErrorReporter returns early without throwing, claiming:
463 // We don't want to report exceptions too eagerly, but warnings in the
464 // absence of werror are swallowed whole, so report those now.
Note: js_ReportOutOfMemory is a completely custom error reporting path; normally an error is reported via ReportError which sets an exception and only calls the error reporter if an exception isn't set.
Anyone know how this is *supposed* to work?
Comment 3 • 12 years ago (Assignee)
My initial thought for this was that the fix would be to make a small change in the DOM error reporter so that at least we'd get an error console message.
But that still doesn't fix the fundamental problem we're seeing: we have a real problem on 32-bit with the BF cache keeping pages alive that hold large allocations. vlad, bz, others: what if we had a scheme whereby:
1. on large ArrayBuffer allocations (and other places with the same large-allocation problem), if we fail, we call some "large allocation failure handler" that is a callback into the DOM
2. from this handler, we drop everything in the BF Cache in the whole browser
3. after calling the large allocation failure handler, we retry the allocation. If this fails, then print an error message (or, even better, attempt to throw, so that the page can give the user the advice to close other tabs)
On the JS engine side, this doesn't seem too complicated. What about on the DOM "flush the BF cache" side?
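A rough, script-level illustration of the proposed flow (the real handler would be a C++ callback registered by the DOM; every name below is hypothetical, not an actual engine or DOM API):

    // Hypothetical illustration of the allocate / drop-BF-cache / retry scheme above.
    function tryAllocate(nbytes) {
      try {
        return new ArrayBuffer(nbytes);
      } catch (e) {
        return null;                      // stand-in for a failed low-level allocation
      }
    }

    function allocateLarge(nbytes, largeAllocFailureHandler) {
      var buf = tryAllocate(nbytes);
      if (buf === null) {
        largeAllocFailureHandler();       // steps 1-2: call back into the DOM, drop the BF cache
        buf = tryAllocate(nbytes);        // step 3: retry the allocation once
        if (buf === null) {
          // Still failing: report an error, or better, throw so the page can
          // advise the user to close other tabs.
          throw new Error("out of memory");
        }
      }
      return buf;
    }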
Comment 4 • 12 years ago
(In reply to Luke Wagner [:luke] from comment #3)
> My initial thought for this was that the fix would be to make a small change
> in the DOM error reporter so that at least we'd get an error console message.
Split this change out into a separate bug? We have to eagerly report OOM exactly because it's uncatchable.
> On the JS engine side, this doesn't seem too complicated. What about on the
> DOM "flush the BF cache" side?
Don't we already do this for the memory pressure notification?
Comment 5 • 12 years ago (Reporter)
> But that still doesn't fix the fundamental problem we're seeing: we have a
> real problem on 32-bit with the BF cache keeping pages alive that hold large
> allocations.
It's not just bfcache, but bfcache is one area where it'll come up (just plain "reload" is another, where bfcache isn't involved). I filed bug 865959 yesterday with this same testcase for the docshell side of things. Can you stick your proposal over there so we can discuss?
Comment 6 • 12 years ago
> Don't we already do this for the memory pressure notification?
Yes, but I don't know that we send those on large-allocation failures in JS...
Comment 7 • 12 years ago (Assignee)
With this patch, we always throw an exception from js_ReportOutOfMemory. Instead of using ReportError, which can itself OOM while allocating error and report objects, the patch simply uses a static "out of memory" atom. Since this is the first time OOM throws at all, and the previous behavior was to silently abort execution, I don't think we need to be too worried about the change in behavior.
One jsreftest was removed because it failed on try server and, if you look at the skip-if directive on the first line, it seems like it is more trouble than it is worth.
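At the script level, the change amounts to something like this minimal sketch, assuming the allocation below actually fails at the engine level (and assuming the thrown value is catchable by page script, which comment 20 later questions):

    var result;
    try {
      result = new ArrayBuffer(1024 * 1024 * 1024);
    } catch (e) {
      // With the patch, the failure shows up here as an "out of memory" exception
      // (or, if uncaught, is reported on the console as an unhandled exception)
      // instead of silently ending the script.
      result = String(e);
    }
    console.log(result);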
Attachment #743815 - Flags: review?(wmccloskey)
Comment 8 • 12 years ago
You can probably remove most of the "silentfail" annotations on js tests, replacing them with try/catch or a new "oom" annotation that expects a specific uncaught exception.
http://mxr.mozilla.org/mozilla-central/search?string=silentfail&find=js/src/tests
How did regress-336409-2.js fail on try? What makes it different from the other OOM tests?
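A hedged sketch of the try/catch variant suggested above, with illustrative values (the jsreftest harness details and any new "oom" annotation aren't shown here):

    // Illustrative only: a test that tolerates either outcome of a huge allocation
    // instead of relying on "silentfail".
    var actual;
    try {
      var big = new ArrayBuffer(0x40000000);  // 1 GiB; may well succeed on 64-bit
      actual = "allocated";
    } catch (e) {
      actual = String(e);                     // expected to contain "out of memory" on failure
    }
    print(actual);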
Updated • 12 years ago
Summary: ArrayBuffer allocations fail silently sometimes → JS OOM should throw instead of silently stopping execution
Comment 9 • 12 years ago
Comment on attachment 743815 [details] [diff] [review] (throw on fail)
This looks nice. Can you just explain a few things, since I don't understand this code very well?
What does this patch do in the browser when we OOM? Is something written to the error console? I'm trying to understand NS_ScriptErrorReporter, but I don't know what half the stuff in there does.
Also, do we have to check JS_IsRunning before doing cx->setPendingException? That's what ReportError does, and it seems like the right thing as far as I can tell.
Also, I'm a little worried about what will happen if even small mallocs fail. We won't allocate during js_ReportOutOfMemory, but we'll still need to handle the exception later, assuming it's uncaught. It looks like that path can allocate, and I'm a bit worried that we'll end up in an infinite loop if those allocations fail. However, I guess we'll be in the !JS_IsRunning() situation then, so as long as we can handle that without allocating, I guess we'll be okay. The browser will still screw up, I imagine, but at least we can say we did our best.
Comment 10 • 12 years ago (Assignee)
(In reply to Bill McCloskey (:billm) from comment #9)
> What does this patch do in the browser when we OOM? Is something written to
> the error console? I'm trying to understand NS_ScriptErrorReporter, but I
> don't know what half the stuff in there does.
Nothing is written to the error console on throw. If the exception fails to be caught, "unhandled exception: out of memory" will be printed to the console (like any uncaught exception).
> Also, do we have to check JS_IsRunning before doing cx->setPendingException?
> That's what ReportError does, and it seems like the right thing as far as I
> can tell.
Oh yes, of course, good catch.
> Also, I'm a little worried about what will happen if even small mallocs
[...]
You're right, checking JS_IsRunning should fix that. In the not-running case, we won't return early from NS_ScriptErrorReporter (JS_DescribeScriptedCaller will return false), so we should even get a real error message on the console.
Comment 11 • 12 years ago (Assignee)
Attachment #743815 - Attachment is obsolete: true
Attachment #743815 - Flags: review?(wmccloskey)
Attachment #744138 - Flags: review?(wmccloskey)
Comment 12 • 12 years ago (Assignee)
(In reply to Jesse Ruderman from comment #8)
> How did regress-336409-2.js fail on try? What makes it different from the
> other OOM tests?
Previously it silently failed; now it throws "out of memory", which isn't textually matched by the test's "expected" predicate.
Attachment #744138 - Flags: review?(wmccloskey) → review+
Comment 13 • 12 years ago (Assignee)
Comment 14 • 12 years ago
So was the failure in regress-336409-2.js supposed to be fixed before you pushed? Because it's sure as heck failing on Linux32. Backed out.
https://hg.mozilla.org/integration/mozilla-inbound/rev/e4e0678bf69c
Comment 15 • 12 years ago (Assignee)
That's odd, because I removed that test in the patch...
https://hg.mozilla.org/integration/mozilla-inbound/rev/e0c3de95b861#l2.2
Comment 16 • 12 years ago
Sorry, it's regress-336410-2.js that's failing here.
Comment 17 • 12 years ago (Assignee)
Oops, different numbers. I thought I try'd this. Anyhow, it's another test doing the same thing with the same insane skip-if constraints. I'll just fix them by adding "|out of memory" to the expect.
Comment 18 • 12 years ago (Assignee)
Try shows green, let's have another go:
https://hg.mozilla.org/integration/mozilla-inbound/rev/15850ed77719
Comment 19 • 12 years ago
Status: NEW → RESOLVED
Closed: 12 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Target Milestone: --- → mozilla23
Comment 20 • 11 years ago
(In reply to Luke Wagner [:luke] from comment #2)
> Anyone know how this is *supposed* to work?
Oops. I do. The error reporter is supposed to report the error. (But not exceptions. It's supposed to check JS_IsExceptionPending.)
Throwing the string "out of memory" doesn't sit well. It seems to me there are two categories of OOM:
* we're really out of memory
* you asked for too much memory
In the former case, if there are platforms where that can happen, terminating the script (what we used to do) seems much better than throwing an exception that user code can catch. I don't think the catch/finally machinery will work reliably when we're out of GC heap.
In the latter case, we're not really OOM. We can still allocate an Error object. So we should throw InternalError("can't allocate that much memory"). This is kind of like js_ReportAllocationOverflow. I would put Vlad's original scenario (asking for a 1GB ArrayBuffer) in this category.
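A small sketch of what the second category could look like from page script, assuming the engine throws a catchable InternalError for over-large requests (InternalError is SpiderMonkey-specific, and the exact message is illustrative):

    try {
      var huge = new ArrayBuffer(1024 * 1024 * 1024);
    } catch (e) {
      if (typeof InternalError !== "undefined" && e instanceof InternalError) {
        // e.g. "can't allocate that much memory", similar to js_ReportAllocationOverflow
        console.log("request was too large: " + e.message);
      } else {
        throw e;  // a genuine OOM or some other error: don't swallow it
      }
    }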