Bug 1156316 - Add memory reporter for IonBuilder
Opened 10 years ago
Closed 7 years ago
Categories
Core :: JavaScript Engine: JIT, defect, P5
Tracking
RESOLVED DUPLICATE of bug 1448563

Tracking | Status
---|---
firefox40 | affected
People
(Reporter: erahm, Assigned: n.nethercote)
Details
(Whiteboard: [MemShrink:P2])
Attachments
(1 file: deleted, application/x-gzip)

Description
Under certain loads I'm seeing up to 50 MiB of heap-unclassified in j2me.js apps that DMD indicates originates from IonBuilder; we should report this.
Allocation stacks look roughly like this:
> replace_malloc memory/replace/dmd/DMD.cpp:1146 (libdmd.so+0x3d84) 0xb6f41d84
> js::detail::BumpChunk::new_(unsigned int) js/src/ds/LifoAlloc.cpp:24 (libxul.so+0x1243a06) 0xb5c5ea06
> js::LifoAlloc::ensureUnusedApproximate(unsigned int) js/src/ds/LifoAlloc.h:296 (libxul.so+0x1457d30) 0xb5e72d30
> js::jit::LiveRangeAllocator<js::jit::LinearScanVirtualRegister, true>::buildLivenessInfo() js/src/jit/LiveRangeAllocator.cpp:610 (libxul.so+0x129e684) 0xb5cb9684
> js::jit::LinearScanAllocator::go() js/src/jit/LinearScan.cpp:1291 (libxul.so+0x129d408) 0xb5cb8408
> js::jit::GenerateLIR(js::jit::MIRGenerator*) js/src/jit/Ion.cpp:1670 (libxul.so+0x1265fae) 0xb5c80fae
> js::jit::CompileBackEnd(js::jit::MIRGenerator*) js/src/jit/Ion.cpp:1759 (libxul.so+0x126612c) 0xb5c8112c
> js::jit::IonBuilder::setBackgroundCodegen(js::jit::CodeGenerator*) js/src/jit/IonBuilder.h:844 (libxul.so+0x1392e14) 0xb5dade14
> js::HelperThread::threadLoop() js/src/vm/HelperThreads.cpp:1210 (libxul.so+0x1393510) 0xb5dae510
Attached is a full DMD report.
Assignee
Comment 1 • 10 years ago
The interesting thing here is that IonBuilder overallocates so that it always has a certain amount of memory ballast to fall back on if an OOM occurs. That's what ensureUnusedApproximate() is doing. IIRC the ballast size is something like 12 or 16 KiB.
I've seen this allocation stack a lot when doing *cumulative* heap profiling, but I always assumed these allocations were short-lived and there were few alive at once.
In this case, 50 MiB means either a single enormous IonBuilder or lots of small ones, and both cases would be surprising. If it's the latter, a lot of the allocated space will actually be unused ballast.
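For concreteness, here is a minimal self-contained sketch of the ballast pattern. It is not SpiderMonkey's actual LifoAlloc; every name, size, and the (leaky) growth strategy are made up for illustration:

```cpp
// Toy bump allocator illustrating the ballast pattern: reserve extra space
// up front so that later small allocations are guaranteed to succeed.
#include <cstddef>
#include <cstdio>
#include <cstdlib>

class BumpAlloc {
  char* cur_ = nullptr;
  char* end_ = nullptr;

 public:
  // Comment 1 recalls the real ballast being roughly 12-16 KiB.
  static constexpr size_t BallastSize = 16 * 1024;

  // Grow so that at least n unused bytes are available. May overshoot;
  // this over-allocation is what DMD attributes to
  // ensureUnusedApproximate() in the stacks above.
  bool ensureUnusedApproximate(size_t n) {
    if (static_cast<size_t>(end_ - cur_) >= n) return true;
    size_t chunk = n > 4096 ? n : 4096;
    char* p = static_cast<char*>(malloc(chunk));
    if (!p) return false;  // the one place OOM is actually surfaced
    cur_ = p;
    end_ = p + chunk;  // toy code leaks the old chunk; the real thing chains chunks
    return true;
  }

  // Safe only after a successful ensureUnusedApproximate(>= n): the
  // reserved ballast guarantees this cannot run out of space.
  void* allocInfallible(size_t n) {
    void* p = cur_;
    cur_ += n;
    return p;
  }
};

int main() {
  BumpAlloc a;
  if (!a.ensureUnusedApproximate(BumpAlloc::BallastSize)) return 1;
  void* node = a.allocInfallible(64);  // cannot fail: ballast was reserved
  printf("allocated at %p\n", node);
  return 0;
}
```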
Updated • 10 years ago
Whiteboard: [MemShrink]
Assignee
Updated • 9 years ago
Assignee: nobody → n.nethercote
Whiteboard: [MemShrink] → [MemShrink:P2]
Assignee
Comment 2 • 9 years ago
I started looking at this and I'm a bit puzzled. This kind of allocation comes from TempAllocator, and as far as I can tell all TempAllocator objects are stack-allocated. If Firefox were a single-threaded program, we should never see allocations like this showing up in DMD, because if the reporters are running then Ion wouldn't be running, and so there wouldn't be any TempAllocator objects alive.
Of course, Firefox is multi-threaded, so I figure that must be relevant here -- e.g. some Ion compilation threads are alive while the memory reporter thread is running. Assuming that's correct, I don't see how to write a reporter for these, because the memory reporter thread (which is the main thread) can't see what's on the Ion threads' stacks. Well, I suppose there could be some global variable holding all the currently live TempAllocators, and each Ion thread would be required to register and unregister its TempAllocator objects (with appropriate locking), and the memory reporter could query that... though that sounds annoying.
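A hedged sketch of that registry idea, with RAII registration so each compilation thread adds and removes its allocator automatically. All names here are hypothetical stand-ins, not existing SpiderMonkey classes:

```cpp
// Hypothetical registry of live per-compilation allocators. The stub type
// stands in for js::jit::TempAllocator; none of these names exist in-tree.
#include <mutex>
#include <set>

struct TempAllocatorStub {};

std::mutex gLiveLock;
std::set<TempAllocatorStub*> gLiveTempAllocators;

// Each Ion compilation thread keeps one of these on its stack for the
// lifetime of its TempAllocator, so registration can never be forgotten.
class AutoRegisterTempAllocator {
  TempAllocatorStub* alloc_;

 public:
  explicit AutoRegisterTempAllocator(TempAllocatorStub* a) : alloc_(a) {
    std::lock_guard<std::mutex> g(gLiveLock);
    gLiveTempAllocators.insert(alloc_);
  }
  ~AutoRegisterTempAllocator() {
    std::lock_guard<std::mutex> g(gLiveLock);
    gLiveTempAllocators.erase(alloc_);
  }
};

// The memory reporter (on the main thread) would take gLiveLock and walk
// gLiveTempAllocators, so it never races with a compilation thread.
```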
Jan, does this make sense to you? Am I misunderstanding anything about how TempAllocator works?
Flags: needinfo?(jdemooij)
Comment 3 • 9 years ago
(In reply to Nicholas Nethercote [:njn] from comment #2)
> This kind of allocation
> comes from TempAllocator, and as far as I can tell all TempAllocator objects
> are stack-allocated.
In IonCompile we heap-allocate one (alloc->new_<TempAllocator>..).
> Of course, Firefox is multi-threaded so I figure that must be relevant here
> -- e.g. some Ion compilation threads are alive while the memory reporter
> thread is running.
Yes, exactly. That's why we heap allocate the LifoAlloc/TempAllocator there: we pass it to the compilation thread.
> Well, I suppose there
> could be some global variable holding all the currently live TempAllocators,
> and each Ion thread would be required to add and remove TempAllocator
> objects (with appropriate locking) as appropriate, and the memory reporter
> could query that... though that sounds annoying.
Each off-thread compilation has an associated "builder" (IonBuilder instance), and from each builder you can get the TempAllocator as builder->alloc(). See MarkOffThreadNurseryObjects::trace in Ion.cpp for an example of how you can find all off-thread compilations/builders.
However, the problem is that if the compilation is in progress, the helper thread will also be accessing this LifoAlloc/TempAllocator, so we either need to lock or access the data we care about in a thread-safe way.
(FWIW, this TempAllocator is used for all allocations for a single Ion compilation. It'd be interesting to see how much memory each compilation pass allocates, maybe it's worth having a separate allocator that's freed after each compilation pass...)
Flags: needinfo?(jdemooij)
Comment 4 • 8 years ago
LifoAlloc has a memory reporter, so we would have to hook up the reporter of the LifoAlloc associated with each IonBuilder instance in order to report the memory used by Ion compilations.
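Comment 4's suggestion might look roughly like the sketch below. Everything here is stubbed out so it stands alone; the real types would be js::jit::IonBuilder and js::LifoAlloc (whose sizeOfExcludingThis() is the measurement hook memory reporters use elsewhere in Gecko), walked under the helper-thread lock per comment 3:

```cpp
// Stub types so the shape of the reporter is self-contained; none of these
// names exist in-tree.
#include <cstddef>
#include <mutex>
#include <vector>

using MallocSizeOf = size_t (*)(const void*);  // mirrors mozilla::MallocSizeOf

struct LifoAllocStub {
  std::vector<void*> chunks;  // heap blocks this allocator owns
  size_t sizeOfExcludingThis(MallocSizeOf mallocSizeOf) const {
    size_t n = 0;
    for (void* c : chunks) n += mallocSizeOf(c);
    return n;
  }
};

struct IonBuilderStub {
  LifoAllocStub lifo;  // comment 4: one LifoAlloc per Ion compilation
};

std::mutex gBuilderLock;                     // comment 3: helper threads also
std::vector<IonBuilderStub*> gLiveBuilders;  // touch this, so we must lock

// Sum the heap blocks of every in-flight compilation; the result would be
// reported under a path like "explicit/js/ion-builders" (hypothetical)
// instead of landing in heap-unclassified.
size_t IonBuilderMemory(MallocSizeOf mallocSizeOf) {
  std::lock_guard<std::mutex> g(gBuilderLock);
  size_t total = 0;
  for (IonBuilderStub* b : gLiveBuilders)
    total += b->lifo.sizeOfExcludingThis(mallocSizeOf);
  return total;
}
```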
Priority: -- → P5
Updated • 7 years ago
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → DUPLICATE