Closed Bug 531270 Opened 15 years ago Closed 15 years ago

Weak references hashtable doesn't shrink if lots of references are created and deleted immediately

Categories

(Tamarin Graveyard :: Garbage Collection (mmGC), defect, P2)

x86
macOS
defect

Tracking

(Not tracked)

VERIFIED FIXED
flash10.1

People

(Reporter: achicu, Assigned: lhansen)

Details

Attachments

(1 file)

User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-us) AppleWebKit/531.9 (KHTML, like Gecko) Version/4.0.3 Safari/531.9
Build Identifier:

GC::ClearWeakRef calls weakRefs.remove(item, /*rehash=*/!collecting). The problem is that most of the time ClearWeakRef is called while the garbage collector is running, so the table never shrinks at all. Even when it does get a chance to shrink, it simply halves the size, so the new size doesn't take the actual occupancy into account. numDeleted then goes back to 0, and the shrink condition (numDeleted * 10 >= tableSize) can never become true again.

Reproducible: Always

Steps to Reproduce:

Run the following AS3.

package
{
    import avmplus.System;

    public class MethodClosureFactory
    {
        public function MethodClosureFactory()
        {
        }

        public function doSomething():void
        {
        }

        public function getClosure():Function
        {
            return doSomething;
        }
    }
}

import avmplus.System;

function collector():void
{
    for (var j:int = 0; j < 1; j++)
    {
        var dictionary:Array = [];
        System.write(" j = " + j + "\n");
        for (var i:int = 0; i < 1000000; i++)
        {
            dictionary[i] = new MethodClosureFactory().getClosure();
        }
    }
}

collector();

System.write("type exit to quit.\n");
while (true)
{
    System.forceFullCollection();
    System.write("Total memory: " + System.totalMemory + "\n");
    if (System.readLine() == "exit")
        break;
}

Actual Results:

The memory increases and doesn't come back down.

Expected Results:

The memory should decrease.
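To make the failure mode concrete, here is a minimal, self-contained C++ sketch of the logic described above. The struct and field names are simplified stand-ins for illustration, not the actual MMgc source; only the shrink condition (numDeleted * 10 >= tableSize) and the rehash flag come from the report.

    #include <cstdint>
    #include <cstdio>

    // Simplified stand-in for the weak-ref table's bookkeeping.
    struct TableState {
        uint32_t tableSize  = 1u << 20;  // slots, grown during the allocation burst
        uint32_t numEntries = 1000000;   // live weak refs
        uint32_t numDeleted = 0;         // tombstones since the last rehash

        void removeEntry(bool rehash) {
            if (numEntries > 0)
                numEntries--;
            numDeleted++;
            // The shrink check is only reached when a rehash is requested.
            // ClearWeakRef passes rehash=false while the collector runs, so
            // tombstones accumulate and this branch is never taken.
            if (rehash && numDeleted * 10 >= tableSize) {
                tableSize /= 2;   // halves regardless of occupancy
                numDeleted = 0;
            }
        }
    };

    int main() {
        TableState t;
        // Simulate finalization clearing every weak ref with rehash=false.
        for (int i = 0; i < 1000000; i++)
            t.removeEntry(false);
        // All slots are now tombstones, yet tableSize is unchanged.
        std::printf("size=%u live=%u tombstones=%u\n",
                    t.tableSize, t.numEntries, t.numDeleted);
        return 0;
    }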
I've updated the test to better show the bug:

package
{
    import avmplus.System;

    public class MethodClosureFactory
    {
        public function MethodClosureFactory()
        {
        }

        public function doSomething():void
        {
        }

        public function getClosure():Function
        {
            return doSomething;
        }
    }
}

import avmplus.System;

function collector():void
{
    System.write("Start\n");
    var dictionary:Array = [];
    for (var i:int = 0; i < 1000000; i++)
    {
        dictionary[i] = new MethodClosureFactory().getClosure();
    }
    System.write("End\n");
}

System.write("Total memory: " + System.totalMemory + "\n");
System.write("Type exit to quit.\n");
while (true)
{
    collector();
    System.forceFullCollection();
    System.forceFullCollection();
    System.forceFullCollection();
    System.forceFullCollection();
    System.write("Total memory: " + System.totalMemory + "\n");
    if (System.readLine() == "exit")
        break;
}
Attached patch fix (deleted)
* removed rehashIfNeeded because "grow" can fail silently
* changed the shrink condition in "grow" to match the one in "remove"
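For readers without the patch at hand, here is a sketch of the direction such a fix takes. This is an assumed shape, not the actual diff in attachment 414736, and sizeForOccupancy is a hypothetical helper:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical helper: size a rehashed table from live occupancy rather
    // than blindly halving, so a mostly-empty table can shrink several steps
    // at once. The minimum size and load factor here are assumptions.
    static uint32_t sizeForOccupancy(uint32_t numEntries) {
        uint32_t size = 16;               // assumed minimum table size
        while (size < numEntries * 2)     // keep the table at most ~50% full
            size *= 2;
        return size;
    }

    int main() {
        // With 3 live entries left in a ~million-slot table, an occupancy-based
        // rehash drops straight to the minimum instead of merely halving to 2^19.
        std::printf("%u\n", sizeForOccupancy(3));  // prints 16
        return 0;
    }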
Attachment #414736 - Flags: superreview?(lhansen)
Attachment #414736 - Flags: review?(treilly)
Impacts AIR 2.0 (lots of weak refs).
Status: UNCONFIRMED → NEW
Component: Virtual Machine → Garbage Collection (mmGC)
Ever confirmed: true
Priority: -- → P2
QA Contact: vm → gc
Target Milestone: --- → flash10.1
I'm a little nervous about the proposed fix because I'm not sure what the invariant is that the current code might be maintaining. (The 'collecting' flag isn't true during GC in general, it's true during the finalization pass, and indeed most weak-ref clearing occurs during finalization, so it's easy to see why removing the flag makes shrinking kick in. But during finalization the GC is in a vulnerable state; we allow allocation and deallocation but we're not 100% confident that all the bugs have been ironed out.) Waiting to see if Tommy has any more insights on that.
(Indeed for something like the weak-ref table the most reasonable strategy might be for the GC to check the table occupancy explicitly at the end of each GC cycle, given that the normal case is that many weak refs will be deleted during some GCs.)
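A minimal sketch of that strategy, under the assumption that the table exposes its size and tombstone count; the type and function names below are illustrative, not the actual MMgc interfaces:

    #include <cstdint>

    // Illustrative view of the weak-ref table; not the MMgc API.
    struct WeakRefTableView {
        uint32_t tableSize;   // slots
        uint32_t numDeleted;  // tombstones accumulated during finalization
        void rehash() { /* rebuild without tombstones, sized to occupancy */ }
    };

    // Run once per GC cycle, after finalization has finished and the heap is
    // consistent again, so the rehash happens outside the vulnerable window.
    void maybeShrinkWeakRefTable(WeakRefTableView& weakRefs) {
        if (weakRefs.numDeleted * 10 >= weakRefs.tableSize)
            weakRefs.rehash();
    }

Deferring the check to the end of the cycle means it runs exactly once per GC rather than on every removal, and outside the vulnerable window described above.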
Attachment #414736 - Flags: review?(treilly) → review+
Comment on attachment 414736: fix

Alright then. I will integrate.
Attachment #414736 - Flags: superreview?(lhansen) → superreview+
Assignee: nobody → lhansen
Status: NEW → ASSIGNED
redux changeset: 3230:257f4c419ce6
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Flags: in-testsuite?
When running the test case above with the latest avmshell (redux r3839), memory continues to climb by ~200-400 KB per cycle. Is this expected? It seems like the dictionary in the collector function should be fully collected after every loop.
Now that I look at the test case I'm not sure why it's relevant; it uses an Array for a dictionary, and array entries aren't weak, nor are the environment references from a closure weak (to my knowledge). (The weakref code in the GC was most definitely not shrinking the table properly, but maybe what we need is a better test case?)

Anyway, Chris is right: the memory consumption in the benchmark above should stabilize after some time. Since full collections are forced here, "some time" should be immediately, more or less. If memory usage is still rising 200-400 KB per cycle, that could be fragmentation caused by allocating array storage, since much more than 400 KB is allocated per iteration - clearly the bulk of it is collected. If fragmentation plays a part, stabilization would take longer, and could in principle never happen if the underlying allocator is particularly inept at selecting blocks.
Memory does not stabilize over time - new bug 548158 filed.
Removing in-testsuite? as it will be covered by bug 548158.
Flags: in-testsuite?
Status: RESOLVED → VERIFIED