Bug 646575: Creating Sandbox objects leads to memory leaks
Opened 14 years ago
Closed 7 years ago
Categories: Core :: XPConnect (defect)
Tracking
Status: RESOLVED WONTFIX
People
(Reporter: ochameau, Assigned: peterv)
References
Details
(Keywords: memory-leak, Whiteboard: [MemShrink:P2])
Attachments (2 files, 2 obsolete files)
patch (deleted)
text/javascript (deleted)
We are facing various leaks in the jetpack project, and we were able to reduce one of them to Sandboxes.
Simply creating empty sandboxes, without doing anything with them, seems to lead to memory leaks, measured at around 20 KB per sandbox:
let principal = Cc["@mozilla.org/systemprincipal;1"].
                createInstance(Ci.nsIPrincipal);
let instances = 0;
function loop() {
  Cu.Sandbox(principal);
  if (instances++ < 2000)
    loop();
  else
    dump("end\n");
}
loop();
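The measurement script attached to this bug is deleted, so as a hint at how the numbers in the comments below could have been collected, here is a minimal sketch of sampling memory reporters from xpcshell. The reporter paths and the enumerateReporters()/amount interface are assumptions about the API of that era (the interface has changed several times since); Cc/Ci/Cu are the usual xpcshell globals.

// Hypothetical sampling helper (the real test attachment is deleted).
// Assumes the nsIMemoryReporterManager enumeration API of that era:
// reporters expose .path and .amount (older builds used .memoryUsed).
function sampleReporter(path) {
  let mgr = Cc["@mozilla.org/memory-reporter-manager;1"]
              .getService(Ci.nsIMemoryReporterManager);
  let e = mgr.enumerateReporters();
  while (e.hasMoreElements()) {
    let r = e.getNext().QueryInterface(Ci.nsIMemoryReporter);
    if (r.path == path)
      dump(path + ": " + r.amount + "\n");
  }
}
// e.g. sample before and after the loop above:
sampleReporter("js-gc-heap");
sampleReporter("resident");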
Reporter
Comment 1 • 14 years ago
If we execute this script in xulrunner 1.9.2.16 release, we get these numbers:
2472478
Go leak test ...
end
3269158
2471990
So no leaks at all, we even free more memory!
But if we execute it in the xulrunner 2.0 release or in today's nightly, we get this:
3690774
3690774
Go leak test ...
end
34190994
28971022
28905134
28905134
5836302
4787598
4787598
4787598
4787598
4787598
4787598
=> total leak: 1096824, per sandbox: 548
And if we do the same but by creating 1000 sandboxes instead of 2000:
3690774
3690774
Go leak test ...
end
18763578
16358798
16293294
16293294
5807502
5807502
5807502
5807502
4758926
4758926
4758926
4758926
4758926
4758926
=> total leak: 1068152, per/sandbox: 1068
But wait! Sometimes it doesn't leak that much; sometimes I get this:
3690774
3690774
Go leak test ...
end
16572154
16359182
16293294
16293294
5807502
4758926
4758926
4758926
3710350
3710350
=> total leak: 19576, per/sandbox: 19
Comment 2 • 14 years ago
This is hurting us badly. Running a bunch of Jetpack-based add-ons causes memory usage to more than double in my case.
Reporter
Comment 3 • 14 years ago
As jetpack makes heavy use of sandboxes, I've built another test that executes the same function, but within one sandbox.
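(The test attachment itself is deleted; the snippet below is only a guess at what a "same loop, but inside one sandbox" test might look like, assuming Cu.evalInSandbox and the same xpcshell globals as in the description. The names are illustrative, not the original attachment.)

// Hypothetical reconstruction of the nested-sandbox test.
let principal = Cc["@mozilla.org/systemprincipal;1"].
                createInstance(Ci.nsIPrincipal);
let outer = Cu.Sandbox(principal);
// Expose what the inner loop needs on the outer sandbox's global.
outer.principal = principal;
outer.Cu = Cu;
// Run the sandbox-creating loop from the description, but evaluated inside
// the outer sandbox.
Cu.evalInSandbox(
  "var instances = 0;" +
  "(function loop() {" +
  "  Cu.Sandbox(principal);" +
  "  if (instances++ < 2000) loop();" +
  "})();",
  outer);
dump("end\n");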
Here are the memory results for xulrunner 1.9.2.16:
2472998
2473030
Go leak test ...
end
undefined
3586886
2472662
2472662
And for xulrunner 2.0 release:
3691302
3691302
Go leak test ...
end
undefined
33671010
29210058
29210154
29210154
5092746
5092746
But this time the results are more stable, so it never appears to free more memory.
Comment 4 • 14 years ago
Updated • 14 years ago
Comment 5 • 14 years ago
I can't tell from comment #3 - Alex, are you saying it's better or worse than it first seemed?
Reporter
Comment 6 • 14 years ago
(In reply to comment #5)
> I can't tell from comment #3 - Alex, are you saying it's better or worse than
> it first seemed?
It's neither worse nor better. At least it's easier for the platform team to debug this leak using a sandbox in a sandbox, as the leak happens every time you run it. (Whereas when you use just one level of sandbox, memory is sometimes freed :o)
Then, per your comment 2, you should check memory usage again now that bugs 607601, 649334 and 642306 are fixed and landed!
We were leaking every content document matched by any page-mod or tab.attach (bug 607601)!!!
And we were leaking all Worker/symbiont objects (bug 649334).
We still leak API objects that use unload.when and are destroyed before Firefox exits. I had forgotten about them; I'm reopening the last bug to fix this across the whole SDK.
Updated • 13 years ago
Whiteboard: [MemShrink:P1]
Comment 7 • 13 years ago
Alexandre: is this something that we can work around in the jetpack code?
Assignee
Updated • 13 years ago
Assignee: nobody → peterv
Reporter
Comment 8 • 13 years ago
(In reply to comment #7)
> Alexandre: is this something that we can work around in the jetpack code?
For now, I don't know of any way to avoid this leak in jetpack code. There may be some way not to leak, since the tests suggest that this *may* not leak when we have only one level of sandboxes. But in jetpack's case we use sandboxes heavily, so it is going to be hard to dig into that uncertain fix.
This leak is not huge; according to my previous tests, it leaks less than 1 KB per sandbox. The problem is that we create one sandbox per content document per add-on using the page-mod API.
But I have to say that jetpack used to leak all content documents and all related API objects, so this leak is way smaller, and we are now down to tracking complex leaks like this one.
From what I know, Blake and Peter started digging into this bug in platform code, but they are lacking efficient tools to track it. DumpHeap seemed to crash or return wrong results? So I decided to refresh Atul's memory component to help figure out this leak (work in progress).
Updated • 13 years ago
Whiteboard: [MemShrink:P1] → [MemShrink:P2]
Comment 9 • 13 years ago
I did some investigating on this issue, and the more I looked into it, the more I got the feeling that this is not really a memory leak but more like memory fragmentation. I used the js-gc-heap and resident memory reporters in my test, and I also turned on refcnt/memory logging (with XPCOM_MEM_BLOAT_LOG and XPCOM_MEM_LEAK_LOG). Furthermore, I tried my luck with dumpHeap, and I modified the example to shut down at the end so I could use the memory report on exit.
So for me, with 1000 sandboxes, the numbers were:
---------------------------------------------
js-gc-heap: 2097152 resident: 21815296
js-gc-heap: 2097152 resident: 21827584
Go leak test ...
end
js-gc-heap: 23068672 resident: 45441024
js-gc-heap: 2097152 resident: 24428544
js-gc-heap: 2097152 resident: 24428544
js-gc-heap: 2097152 resident: 24428544
js-gc-heap: 2097152 resident: 24428544
js-gc-heap: 2097152 resident: 24428544
delete sandbox
js-gc-heap: 2097152 resident: 24346624
js-gc-heap: 2097152 resident: 24346624
js-gc-heap: 2097152 resident: 24346624
for 2000:
---------
js-gc-heap: 2097152 resident: 21807104
js-gc-heap: 2097152 resident: 21819392
Go leak test ...
end
js-gc-heap: 44040192 resident: 67915776
js-gc-heap: 2097152 resident: 25894912
js-gc-heap: 2097152 resident: 25894912
js-gc-heap: 2097152 resident: 25894912
js-gc-heap: 2097152 resident: 25894912
js-gc-heap: 2097152 resident: 25894912
delete sandbox
js-gc-heap: 2097152 resident: 26013696
js-gc-heap: 2097152 resident: 26013696
js-gc-heap: 2097152 resident: 26013696
(Note that the memory reporter interface has changed in the meantime, since Firefox 7; what Alex measured, I think, was the resident memory usage, and I think the numbers are lower than before...)
The js-gc-heap shrank back to its original size during the tests, but the resident memory (which is an indicator of the overall memory footprint of the process) did not.
The refcnt/memory logging also could not detect any leaking refcounted objects, so based on these two facts and some code investigation, I'm fairly positive that no object is leaking.
The resident memory change was quite suspicious to me, because it was not linear in the number of sandboxes created in the test... and not very big either. I can very well imagine that simple memory fragmentation could cause something like this (based on my earlier experience with V8; I'm not too familiar with SpiderMonkey yet). As a comparison, I changed the test so that instead of creating sandboxes inside sandboxes, it does only this simple loop:
for (var i = 0; i < 100000; i++) {
  let s = [{a: 'blah' + i}, {a: 'blah2' + i}, {a: 'blah3' + i}, {a: 'blah4' + i}];
}
The results are quite similar: resident memory started from 21778432, and after the loop it shrank back only to 23920640. Would that be a leak too? Personally I doubt it, but I don't know the engine that well; I'm very new on this project.
Two more notes. One is that if the js-gc-heap memory usage during the test went above certain thresholds, it did not shrink back to its original size but to a fixed, larger size (an optimisation, I assume, judging by the fixed numbers). The other is that with a sandbox inside a sandbox, a lot less object creation produced the same effect as in this trivial example. So either the memory compartment or the custom GC behaviour of the sandbox results in higher memory fragmentation, or there really is some small low-level leak there; but it will be extremely difficult to catch with this fragmentation-based effect around, which is probably much bigger than the leak itself (IF there is any)...
Finally, I also tried to use TraceMalloc just to be sure about my theory on this issue, but it looks like it is not supported in xpcshell...
Any ideas, critiques, or hints are highly welcome; at the moment, based on these results, I have the feeling that I should work on some other issues instead.
Comment 10 • 13 years ago
> Any ideas, critics, hints are highly welcome at the moment I have the feeling based on the results
> that I should work on some other issues instead.
One question is: is the growth (due to fragmentation or whatever) unbounded, or bounded at 5 MB?
If it's just 5 MB, that's kind of lost in the noise. Just running about:memory uses on the order of 5 MB of RAM.
It would also be helpful to see about:memory?verbose before and after your testcase. That may show us where the memory is going. (You should also GC/CC before collecting resident numbers in about:memory.)
On its face, it sounds like this isn't necessarily a big issue.
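(Not part of the original comment: a minimal sketch of the GC/CC suggestion above for an xpcshell test, assuming Cu.forceGC()/Cu.forceCC() are available in the build under test.)

// Force a garbage collection and a cycle collection before sampling,
// so transient garbage doesn't inflate the resident number.
function settleHeap() {
  Cu.forceGC();
  Cu.forceCC();
}
settleHeap();
// ...then sample "resident", e.g. with the reporter enumeration sketched
// under the description above.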
Reporter
Comment 11 • 13 years ago
In jetpack we commonly create sandboxes that have a short life,
and last time I checked, it seemed that such behavior introduced a leak.
I may be wrong, but two xpcshell tests tend to highlight something wrong around sandboxes.
Now, a lot of stuff has changed in mozilla-central, so it may have been fixed or changed.
In these xpcshell tests I'm not using about:memory. Instead I use the memory reporter components directly, so that the measurements aren't polluted by about:memory itself.
I'll try to run these tests again and see if our APIs are still leaking.
Comment 12 • 13 years ago
Alexandre, any chance you could re-run the tests now?
Comment 13 • 13 years ago
Bump?
Reporter
Comment 14 • 13 years ago
Sorry guys for the delay. I still think there is something wrong.
But it is now clearly intermittent (whereas it was a constant leak at the time of the initial report).
With this updated script, which prints js-gc-heap, it seems to leak intermittently. Here are two log dumps using today's nightly:
## leaking
2097152
2097152
2097152
Go leak test ...
end
97517568
97517568
97517568
97517568
97517568
5242880
5242880
...
5242880
5242880
delete sandbox
5242880
5242880
...
# no-leak
2097152
2097152
2097152
Run leak test ...
end
97517568
97517568
97517568
97517568
97517568
5242880
4194304
4194304
4194304
4194304
4194304
4194304
delete sandbox
4194304
4194304
4194304
4194304
3145728
3145728
3145728
3145728
3145728
2097152
2097152
...
And I can confirm this behavior when using the Jetpack API. We end up seeing an intermittent leak when creating/destroying API instances that use sandboxes internally.
Attachment #523082 - Attachment is obsolete: true
Attachment #523094 - Attachment is obsolete: true
Comment 16 • 11 years ago
(Clearing a 5-month-old needinfo request, due to lack of response.)
Reporter
Comment 17 • 11 years ago
My feeling was that it was leaking consistently at the time of the original report; now we may just be seeing some artifact of the GC allocating an extra chunk of memory in some runs, which doesn't necessarily mean there is a leak.
I'm no longer contributing to jetpack, which is the main project using sandboxes.
As with many other topics, looking at the memory consumption of SDK modules would be interesting...
but I clearly don't have enough cycles to keep looking at that.
Even if there isn't a leak, I'm convinced that SDK modules, which use one sandbox, global, and compartment per module, have a pretty heavy memory footprint. I'm not sure anyone really tracks the actual memory usage of these modules.
Flags: needinfo?(poirot.alex)
Comment 18 • 11 years ago
It is my understanding that Jetpack is not meant for add-ons anymore. I also seem to remember that the one sandbox/global/compartment per module was designed as a safety/security feature for interaction between untrusted add-ons. Does it still make sense?
Reporter
Comment 19 • 11 years ago
Jetpack is still meant for add-ons. There is just a shift of focus being made toward devtools APIs.
Comment 20 • 7 years ago
Given that jetpack is no longer a thing and we haven't seen other reports about sandboxes leaking, let's go ahead and close this.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX