Closed Bug 1413218 Opened 7 years ago Closed 7 years ago

Malloc allocation GC threshold grows too aggressively

Categories

(Core :: JavaScript: GC, enhancement, P3)

Tracking


RESOLVED FIXED
mozilla58
Tracking Status
firefox58 --- fixed

People

(Reporter: jonco, Assigned: jonco)

References

Details

Attachments

(1 file)

There are a couple of bugs reporting OOMs when loading large PDFs, and this seems to be because the malloc threshold can grow without limit, resulting in very large heap sizes. For example, bug 1412794 reports Firefox getting killed by the OS on a 4GB system due to excessive memory use. We still want this threshold to be dynamic so that we don't GC constantly while allocating large amounts of memory, but we could let it grow more slowly and put a maximum limit on it.
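For anyone unfamiliar with the mechanism, here is a minimal sketch of the kind of dynamic malloc trigger described above, assuming an invented 128 MB starting threshold (an illustration only, not the actual SpiderMonkey code): each malloc-triggered GC scales the trigger up so the next allocation burst doesn't immediately re-trigger a collection, and with a factor of 2 and no cap, five triggers already push the threshold to 4 GB.

#include <cstddef>
#include <cstdio>

int main() {
    // Hypothetical starting trigger; the real initial value comes from the
    // GC parameters and is not taken from this bug.
    size_t threshold = 128 * 1024 * 1024;
    const double growthFactor = 2.0;  // the pre-patch growth factor
    for (int trigger = 1; trigger <= 5; trigger++) {
        // Pretend malloc'd bytes just hit the trigger: GC, then grow it.
        threshold = static_cast<size_t>(threshold * growthFactor);
        printf("threshold after trigger %d: %zu MB\n", trigger,
               threshold / (1024 * 1024));
    }
    return 0;
}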
Priority: -- → P3
Here's a patch to reduce the growth factor to 1.5 (from 2) and impose a 1GB limit on the malloc bytes threshold.
Attachment #8923917 - Flags: review?(sphink)
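A minimal sketch of the capped growth described in comment 1 (not the actual patch; the constant names are invented for illustration):

#include <algorithm>
#include <cstddef>

// Illustrative constants matching the description above: grow by 1.5x
// instead of 2x, and never let the trigger exceed 1 GB.
static const size_t MaxMallocThresholdBytes = size_t(1) << 30;
static const double MallocThresholdGrowFactor = 1.5;

// Called after a malloc-triggered GC: keep the threshold dynamic so heavy
// allocation doesn't GC constantly, but clamp it so it can't grow unbounded.
size_t growMallocThreshold(size_t threshold) {
    size_t grown = static_cast<size_t>(threshold * MallocThresholdGrowFactor);
    return std::min(grown, MaxMallocThresholdBytes);
}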
Comment on attachment 8923917 [details] [diff] [review]
bug1413218-limit-malloc-threshold-growth

Review of attachment 8923917 [details] [diff] [review]:
-----------------------------------------------------------------

The thresholds are a little arbitrary, and it got me thinking about the ramp-up case where you really are allocating a bunch of memory and we're going to do log_1.5(n) steps to get there. But sure, why not give this a try?
Attachment #8923917 - Flags: review?(sphink) → review+
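To put rough numbers on the ramp-up concern in comment 2, assuming the same hypothetical 128 MB starting trigger as the sketch above: reaching a 1 GB working set takes about 3 malloc-triggered GCs with a growth factor of 2, but 6 with a factor of 1.5.

#include <cstdio>

// Count how many growth steps a threshold needs to climb from `start` to
// `target` at the given multiplicative factor.
int stepsToReach(double start, double target, double factor) {
    int steps = 0;
    for (double t = start; t < target; t *= factor)
        steps++;
    return steps;
}

int main() {
    const double MB = 1024.0 * 1024.0;
    printf("factor 2.0: %d trigger GCs\n", stepsToReach(128 * MB, 1024 * MB, 2.0));
    printf("factor 1.5: %d trigger GCs\n", stepsToReach(128 * MB, 1024 * MB, 1.5));
    return 0;
}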
Pushed by jcoppeard@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/8fb7879b388f
Make the malloc threshold grow a little slower r=sfink
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla58
(In reply to Steve Fink [:sfink] [:s:] from comment #2)
> The thresholds are a little arbitrary, and it got me thinking about the
> ramp-up case where you really are allocating a bunch of memory and we're
> going to do log_1.5(n) steps to get there. But sure, why not give this a try?

Not sure how useful it is, but we have a slightly more convoluted method in nsTArray [1] to help avoid heap churn: exponential growth up to 8MB, but also a sane strategy for large arrays (grow by 1.125). Might be worth a follow-up investigation.

[1] http://searchfox.org/mozilla-central/rev/423b2522c48e1d654e30ffc337164d677f934ec3/xpcom/ds/nsTArray-inl.h#148-166
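For reference, a paraphrased sketch of the nsTArray heuristic mentioned above; the 8 MB cutoff and 1.125 factor come from the comment, while the real code at [1] also rounds requests up to allocator bucket sizes:

#include <algorithm>
#include <cstddef>

// Doubling keeps small arrays cheap to grow; past the cutoff, growing by
// only ~12.5% bounds the slack memory while still amortizing reallocations.
size_t nextCapacity(size_t current, size_t requested) {
    const size_t bigCutoffBytes = 8 * 1024 * 1024;  // 8 MB
    size_t grown = current < bigCutoffBytes ? current * 2             // exponential
                                            : current + current / 8;  // ~1.125x
    return std::max(grown, requested);
}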
No longer blocks: 1341093
Depends on: 1341093