Closed Bug 722946 Opened 13 years ago Closed 13 years ago

GC: allow for inline re-keying of hash tables during iteration

Categories: Core :: JavaScript Engine (defect, severity normal)

Tracking: RESOLVED FIXED in mozilla14

People: (Reporter: terrence, Assigned: terrence)

Attachments: 1 file, 7 obsolete files

Because of the Map/Set builtins (and possibly cross compartment wrappers), it is possible to have unconstrained client objects as the key of a HashMap/HashSet. For generational and compacting GC, we will need to move these objects in memory. This means we will need to remove and re-insert these keys in a new location... during GC. Because we cannot safely allocate memory for this, we will probably need to do some interesting algorithm work inside the HashTable. For the moment, code that uses Map & Set in the wild is sufficiently rare that crashing the browser when we run out of memory in this one function with this one very specific edge case is "safe." However, this would be good to fix in the medium term.
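To make the problem concrete, here is a minimal illustration (not from the bug: std::unordered_map and the stub object type stand in for js::HashMap and a real GC thing). A table keyed on object addresses files each entry under the key's current address, so once a moving collector relocates the object, lookups with the new address miss:

    // Hypothetical illustration only: std::unordered_map and JSObjectStub are
    // stand-ins, not SpiderMonkey types.
    #include <unordered_map>

    struct JSObjectStub { int slot; };   // pretend GC-managed object

    int main() {
        JSObjectStub fromSpace{1};
        std::unordered_map<JSObjectStub *, int> map;
        map[&fromSpace] = 42;

        JSObjectStub toSpace = fromSpace;   // the collector "moves" the object
        // The entry is still filed under the old address, so the table can no
        // longer find it by the object's new location. This is why keys must be
        // removed and re-inserted (rekeyed) in place during GC, and why that
        // step must not allocate.
        return map.count(&toSpace);         // 0: stale key, lookup misses
    }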
Attached patch v0: An implementation of Luke's rekeying algorithm (obsolete) (deleted) — Splinter Review
With some appropriate HashTable cleanups, since I was touching the code.
Assignee: general → terrence
Status: NEW → ASSIGNED
Attachment #594335 - Flags: review?(luke)
Blocks: 726687
Comment on attachment 594335 [details] [diff] [review]
v0: An implementation of Luke's rekeying algorithm

Review of attachment 594335 [details] [diff] [review]:
-----------------------------------------------------------------

First of all, sorry for the delay! Two high-level comments:

1. I'm a little worried about adding sizeLog2/sizeMask as members since this bloats sizeof(HashTable), and there are a few cases where we may make a lot of hash tables with a small enough number of elements that sizeof(HashTable) matters. The primary case that comes to mind is KidsHash, which can be per-shape (although most shapes won't have one). That doesn't mean we can't, but we'd want to do a measured trade-off (speedup vs. size increase), preferably in a separate bug/patch.

2. Thinking about it more carefully, it may be faster to not use 'key == NewKeyFunction::rekey(key)' to determine whether a 'trial' element is in the right place. Rather, we could just have a current index 'i' and anything less than i is either empty or in the right place and anything above i gets evicted, unconditionally. That would (1) avoid the first pass to clear collision bits (2) avoid repeatedly calling NewKeyFunction::rekey(key) for a given element and (3) afford a simpler inductive hypothesis. This may have been what you were originally thinking, so I'm sorry if this 'key == rekey(key)' idea was misleading.

::: js/public/HashTable.h
@@ +294,3 @@
> #endif
> + mutable DebugOnly<bool> entered;
> + DebugOnly<uint64_t> mutationCount;

Since ReentrancyGuard is defined always, you can nix the #ifdef DEBUG entirely.

@@ +791,5 @@
> + *
> + * Key must be convertable to Lookup to use rekey.
> + */
> + Lookup curOldKey = HashPolicy::getKey(cur->t);
> + Lookup curNewKey = NewKeyFunction::rekey(curOldKey);

s/Lookup/Key/

@@ +794,5 @@
> + Lookup curOldKey = HashPolicy::getKey(cur->t);
> + Lookup curNewKey = NewKeyFunction::rekey(curOldKey);
> + if (curOldKey == curNewKey) {
> + continue;
> + }

No braces around single-line then branch.

@@ +806,5 @@
> + if (trial->isFree()) {
> + /* Note: doing 1-way copy instead of swap saves ~20%. */
> + trial->t = cur->t;
> + updateKey(trial, curKeyHash, curNewKey);
> + cur->setFree();

The comment is good for me, but probably we can remove it from the final version. So, to get faster, it seems like we'd want to avoid copying the whole Entry::t just to overwrite the key portion. Could you make a single, compound:

  moveWithNewKey(dst, src, newKey, newKeyHash)

?

@@ +813,5 @@
> +
> + /* If we happen to hash to the same location. */
> + if (trial == cur) {
> + updateKey(trial, curKeyHash, curNewKey);
> + break;

IIUC, this case can only happen on the first iteration or after changing to a new key ('curNewKey = trialNewKey' below). There is another issue where we recompute 'h2' when the original lookup computes it once before the collision loop. Both of these suggest breaking out a second inner loop that handles collisions. For this loop, trial != cur and h2 would be constant.

@@ +820,5 @@
> + /*
> + * Re-key the trial location. Either the call will not be
> + * effectful - the trial location is alreay in the right spot
> + * - or it will be effectful and is thus not in the right
> + * spot: we have found a new location for current and can swap

This use of 'effectful' is a bit confusing. How about "If trial's current key matches its new key, it does not need to be moved (or it has already been moved) so put it on the collision path for curNewKey."

@@ +826,5 @@
> + */
> + const Lookup trialOldKey = HashPolicy::getKey(trial->t);
> + const Lookup trialNewKey = NewKeyFunction::rekey(trialOldKey);
> + HashNumber trialKeyHash = prepareHash(trialNewKey);
> + if (trial->matchHash(trialKeyHash) && trialOldKey == trialNewKey) {

Given that we expect key comparison to be cheap, perhaps we should remove the matchHash call and push the prepareHash(trialNewKey) down to the 'curKeyHash' assignment line below?

@@ +1200,5 @@
>
> + /*
> + * The |rekey<class>()| function performs inline rekeying of a hash table,
> + * specifically, without allocating any new memory other than the stack
> + * space required for the rekey function itself. This functionality is

You can probably leave out the "other than the stack space required for the rekey function itself" part. The reader probably expects this :)

@@ +1202,5 @@
> + * The |rekey<class>()| function performs inline rekeying of a hash table,
> + * specifically, without allocating any new memory other than the stack
> + * space required for the rekey function itself. This functionality is
> + * intended to allow a moving garbage collector to move keys in a hash
> + * table during collection, potentially when highly memory constrained.

Also, this could probably be left out of the abstract specification. Grep will take them to use cases.

@@ +1212,5 @@
> + * effectful on the first call of the old key, since it may be called
> + * again repeatedly on the rekeyed value. Because this is generally used
> + * in places where allocating is very bad, you must be able to make this
> + * decision based on some natural property of the system. Finally, Lookup
> + * must be implicitly convertable to Key.

I think this para could be expressed a bit more succinctly as:

  The rekey function must be injective and pure. That is:
   - k1 != k2 must imply rekey(k1) != rekey(k2)
   - rekey(k) must equal rekey(k)

@@ +1227,5 @@
> + * HM odds;
> + * odds.putNew(1, 'a');
> + * odds.putNew(3, 'b');
> + * // Note: rekey<OddToEven> will break if we uncomment the next line.
> + * // odds.putNew(4, '-');

Given this comment, this doesn't seem like a great example. How about just:

  struct Inc { unsigned rekey(unsigned k) { return k + 1; } };
Attachment #594335 - Flags: review?(luke)
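As a side note on the injectivity requirement raised above, a small sketch (illustrative only; the real NewKeyFunction interface is whatever the patch defines): the rekey function must be pure and injective over the keys actually present, otherwise two live entries would land on the same key after the move.

    // Illustrative functors, not from the patch.
    struct Inc {
        // Pure and injective: distinct keys always map to distinct keys,
        // so rekeying with Inc can never merge two live entries.
        static unsigned rekey(unsigned k) { return k + 1; }
    };

    struct Halve {
        // NOT injective: rekey(2) == rekey(3) == 1, so two entries could be
        // mapped onto the same key -- exactly the disaster the comments in
        // the patch warn about, and why the requirement is stated up front.
        static unsigned rekey(unsigned k) { return k / 2; }
    };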
(In reply to Luke Wagner [:luke] from comment #2)
> 1. I'm a little worried about adding sizeLog2/sizeMask as members since this
> ...
> increase), preferably in a separate bug/patch.

Folding those into this patch was bad judgement. I've taken them out.

> 2. Thinking about it more carefully, it may be faster to not use 'key ==
> NewKeyFunction::rekey(key)' to determine whether a 'trial' element is in the
> right place. Rather, we could just have a current index 'i' and anything
> less than i is either empty or in the right place and anything above i gets
> evicted, unconditionally. That would (1) avoid the first pass to clear
> collision bits (2) avoid repeatedly calling NewKeyFunction::rekey(key) for a
> given element and (3) afford a simpler inductive hypothesis. This may have
> been what you were originally thinking, so I'm sorry if this 'key ==
> rekey(key)' idea was misleading.

Yes, that sounds more or less exactly like what I had in mind. That said, I do think what we have here is ultimately simpler and more elegant. I think there are some thorny edge cases in the algorithm I was thinking of. In particular, consider what happens if we have a high collision volume in the case where we aren't actually moving anything. But then again, maybe you are actually thinking of something else. :-P

> ::: js/public/HashTable.h
> @@ +294,3 @@
> > #endif
> > + mutable DebugOnly<bool> entered;
> > + DebugOnly<uint64_t> mutationCount;
>
> Since ReentrancyGuard is defined always, you can nix the #ifdef DEBUG
> entirely.

I wondered what that was about.

> @@ +806,5 @@
> > + if (trial->isFree()) {
> > + /* Note: doing 1-way copy instead of swap saves ~20%. */
> > + trial->t = cur->t;
> > + updateKey(trial, curKeyHash, curNewKey);
> > + cur->setFree();
>
> The comment is good for me, but probably we can remove it from the final
> version. So, to get faster, it seems like we'd want to avoid copying the whole
> Entry::t just to overwrite the key portion. Could you make a single,
> compound:
> moveWithNewKey(dst, src, newKey, newKeyHash)
> ?

I would dearly like to. Sadly, typename T is opaque to Entry and HashTable: they only know about T::T(&T), operator=(&T), and HashPolicy::setKey. As far as I was able to tell, you cannot get at the Value part of T from either HashTable or Entry.

> @@ +813,5 @@
> > +
> > + /* If we happen to hash to the same location. */
> > + if (trial == cur) {
> > + updateKey(trial, curKeyHash, curNewKey);
> > + break;
>
> IIUC, this case can only happen on the first iteration or after changing to
> a new key ('curNewKey = trialNewKey' below). There is another issue where
> we recompute 'h2' when the original lookup computes it once before the
> collision loop. Both of these suggest breaking out a second inner loop that
> handles collisions. For this loop, trial != cur and h2 would be constant.

That is how I originally started writing it. The problem was, IIRC, that you need to jump from one loop to the other without going back through the toplevel loop: it copies over cur. I think you can indeed do it with just one more level of loop (or two labels :-), but I didn't consider the code and complexity expansion worth the savings. I'd be willing to be talked out of that, of course.

> @@ +826,5 @@
> > + */
> > + const Lookup trialOldKey = HashPolicy::getKey(trial->t);
> > + const Lookup trialNewKey = NewKeyFunction::rekey(trialOldKey);
> > + HashNumber trialKeyHash = prepareHash(trialNewKey);
> > + if (trial->matchHash(trialKeyHash) && trialOldKey == trialNewKey) {
>
> Given that we expect key comparison to be cheap, perhaps we should remove
> the matchHash call and push the prepareHash(trialNewKey) down to the
> 'curKeyHash' assignment line below?

Heh, I strongly considered doing this, but didn't because it's not the "right" way to do hash tables. :-)

> @@ +1227,5 @@
> > + * HM odds;
> > + * odds.putNew(1, 'a');
> > + * odds.putNew(3, 'b');
> > + * // Note: rekey<OddToEven> will break if we uncomment the next line.
> > + * // odds.putNew(4, '-');
>
> Given this comment, this doesn't seem like a great example. How about just:
>
> struct Inc { unsigned rekey(unsigned k) { return k + 1; } };

My thinking was that including an example of what you *must not* do would be more instructive than simply ignoring the potential for disaster. Consider it like a warning sign: a gun pointed at a foot with a big red X over it.
(In reply to Terrence Cole [:terrence] from comment #3)

Let's discuss the algorithm a bit more tomorrow.

> I would dearly like to. Sadly, typename T is opaque to Entry and HashTable:
> they only know about T::T(&T), operator=(&T), and HashPolicy::setKey.

I think you can just add the necessary support to the Map/Set's HashPolicy's (analogous to getKey).

> > > + * // Note: rekey<OddToEven> will break if we uncomment the next line.
> > > + * // odds.putNew(4, '-');
> >
> > Given this comment, this doesn't seem like a great example. How about just:
> >
> > struct Inc { unsigned rekey(unsigned k) { return k + 1; } };
>
> My thinking was that including an example of what you *must not* do would be
> more instructive than simply ignoring the potential for disaster. Consider
> it like a warning sign: a gun pointed at a foot with a big red X over it.

The rules for the NewHashFunction are simple enough: be injective and pure. I think it is more useful to have a succinct example of correct usage like all the other functions' comments in HashMap/Set.
(In reply to Luke Wagner [:luke] from comment #4)
> I think you can just add the necessary support to the Map/Set's HashPolicy's
> (analogous to getKey).

I was under the impression that to do that I would need to update a significant number of the Map/Set users. It's probably the case, however, that enough of the users stick to the DefaultHasher that it would not be prohibitively difficult. I'll give it a shot.

> > > > + * // Note: rekey<OddToEven> will break if we uncomment the next line.
> > > > + * // odds.putNew(4, '-');
> > >
> > > Given this comment, this doesn't seem like a great example. How about just:
> > >
> > > struct Inc { unsigned rekey(unsigned k) { return k + 1; } };
> >
> > My thinking was that including an example of what you *must not* do would be
> > more instructive than simply ignoring the potential for disaster. Consider
> > it like a warning sign: a gun pointed at a foot with a big red X over it.
>
> The rules for the NewHashFunction are simple enough: be injective and pure.
> I think it is more useful to have a succinct example of correct usage like
> all the other functions' comments in HashMap/Set.

Fair enough. Done.
Ah, updating the HashPolicy is even easier than I thought: there's just a wrapper in HashMap/HashSet.
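For readers following along, the wrapper in question looks roughly like this (reconstructed from the hunk quoted in the later review; exact member and typedef names may differ). The point is that HashMap/HashSet already interpose their own policy between the user's HashPolicy and HashTable, so the new getKey/setKey accessors can live there without touching individual Map/Set users:

    // Rough sketch, not the actual js/public/HashTable.h code.
    template <class Key, class Value>
    struct HashMapEntry
    {
        const Key key;   // const so ordinary users cannot corrupt the table
        Value value;
    };

    template <class Key, class Value, class HashPolicy>
    struct MapHashPolicy : HashPolicy
    {
        typedef Key KeyType;
        typedef Value ValueType;
        typedef HashMapEntry<Key, Value> Entry;

        static const Key &getKey(Entry &e) { return e.key; }
        static void setKey(Entry &e, const KeyType &k) { const_cast<Key &>(e.key) = k; }
        static const Value &getValue(Entry &e) { return e.value; }
        static void setValue(Entry &e, const ValueType &v) { e.value = v; }
    };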
Attached patch v1: With non-algorithm related review feedback. (obsolete) (deleted) — Splinter Review
I've applied all of the review feedback except for the algorithm questions, which we still need to discuss in more detail.
Attachment #594335 - Attachment is obsolete: true
Attached patch v2: With bug fixed. (obsolete) (deleted) — Splinter Review
This fixes the bug that we were discussing. Still need to rewrite the inner loop into two pieces.
Attachment #601335 - Attachment is obsolete: true
Attachment #601472 - Flags: review?(luke)
Because of bug 730933, it turns out that we frequently want to do a rekey that simultaneously removes unmarked elements from a map. I think it would be easiest to just add this functionality directly into this patch. It shouldn't be complicated: I think it just needs a check in the top-level for loop and one when we swap.
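A sketch of the kind of policy this implies, based on the shouldBeRemoved() hook that shows up in the v3 review below (the object type and the marking/forwarding helpers here are placeholders, not real GC API):

    // Illustrative only: JSObjectStub, isMarked(), and forwarded() are
    // stand-ins for the real GC hooks, which this bug does not define.
    struct JSObjectStub { bool marked; JSObjectStub *forwardingAddress; };

    static bool isMarked(JSObjectStub *obj) { return obj->marked; }
    static JSObjectStub *forwarded(JSObjectStub *obj) {
        return obj->forwardingAddress ? obj->forwardingAddress : obj;
    }

    struct SweepAndRekeyPolicy
    {
        // Drop entries whose key died during this GC.
        static bool shouldBeRemoved(JSObjectStub *key) { return !isMarked(key); }
        // Otherwise move the key to wherever the collector relocated it.
        static JSObjectStub *rekey(JSObjectStub *key) { return forwarded(key); }
    };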
This was roughly as trivial as I thought it would be, although there were (of course) subtleties. I've run the first 45,000 rand() sequences (and counting) against a table using 2**15 keys and not yet hit an error. I've been staring at the algorithm for a day now and I cannot, for the life of me, figure out how adding a third loop, as we discussed, would make this clearer. Maybe I'm just too close to my current implementation, but the problem I keep hitting is that we cannot split it into a separate find-the-right-spot-after-the-collision-chain step and an evict-until-we-find-an-empty-cell step: these parts depend on each other.
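The real test lives in the patch; purely to show the shape of this kind of model-based fuzzing, here is a self-contained analogue that checks a toy rekey against a rebuilt reference table for many seeds. The toy rekeyInPlace() uses C++17 node extraction on std::unordered_map and is not the allocation-free algorithm under review; the point is the checking loop, not the algorithm.

    #include <cassert>
    #include <cstdlib>
    #include <unordered_map>
    #include <utility>

    using Table = std::unordered_map<unsigned, unsigned>;

    // Toy stand-in for the rekey under test: move every node to key + offset.
    static void rekeyInPlace(Table &t, unsigned offset) {
        Table done;
        while (!t.empty()) {
            auto node = t.extract(t.begin());   // detach the node...
            node.key() += offset;               // ...rewrite its key in place...
            done.insert(std::move(node));       // ...and re-file it
        }
        t.swap(done);
    }

    int main() {
        // The bug's test used 2**15 keys and 45,000 rand() seeds; smaller
        // numbers here just keep the sketch quick.
        for (unsigned seed = 0; seed < 500; ++seed) {
            std::srand(seed);
            Table table;
            for (unsigned k = 0; k < (1u << 12); ++k)
                table[k] = unsigned(std::rand());

            unsigned offset = unsigned(std::rand()) % 1000 + 1;  // injective rekey
            Table expected;
            for (const auto &kv : table)
                expected[kv.first + offset] = kv.second;

            rekeyInPlace(table, offset);
            assert(table == expected);          // every entry moved, none lost
        }
        return 0;
    }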
Attachment #601472 - Attachment is obsolete: true
Attachment #601472 - Flags: review?(luke)
Attachment #601706 - Flags: review?(luke)
Comment on attachment 601706 [details] [diff] [review]
v3: with inline removal support and explicit fuzzing support in the test.

Review of attachment 601706 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for addressing my comments, this is looking good. I think I found one bug below, so I'll wait to see what you say.

::: js/public/HashTable.h
@@ +760,5 @@
> + JS_ASSERT(dst->isFree());
> + dst->keyHash = newKeyHash;
> + HashPolicy::setKey(dst->t, newKey);
> + HashPolicy::setValue(dst->t, HashPolicy::getValue(src->t));
> + src->setFree();

I think this needs to be remove(*src). The reason is that I think it is possible for src->hasCollision():
 - so we rehash elem i and place a collision bit on elem j
 - the check we used for elem j was (oldKey(j) == newKey(j))
 - if, when rekey gets to j we find that hash1(j) != j, we'll moveWithNewKey where src = j and we need to preserve the collision bit for the sake of i

Assuming I haven't missed something in the above logic, could you write a test-case (or even better, soup up the fuzzer to find the case)?

@@ +763,5 @@
> + HashPolicy::setValue(dst->t, HashPolicy::getValue(src->t));
> + src->setFree();
> + }
> +
> + template <class NewKeyFunction>

Given that it now provides several things, perhaps just the more general name RekeyPolicy? (Also symmetric with HashPolicy)

@@ +790,5 @@
> +
> + /* If we want to remove this element, do so. */
> + Key curOldKey = HashPolicy::getKey(cur->t);
> + if (NewKeyFunction::shouldBeRemoved(curOldKey)) {
> + remove(*cur);

I think you'll want to checkUnderloaded() at the end of rekey so that the table will shrink if you've removed enough of its elements.

@@ +842,5 @@
> + /* Collision: double hash. */
> + unsigned sizeLog2 = sHashBits - hashShift;
> + HashNumber h2 = hash2(curKeyHash, sizeLog2, hashShift);
> + HashNumber sizeMask = (HashNumber(1) << sizeLog2) - 1;
> + h1 = applyDoubleHash(h1, h2, sizeMask);

Could you factor the first 3 lines analogous to how you did applyDoubleHash? The best way I can see is to add a new 2-field struct DoubleHash and helper:

  DoubleHash hash2(HashNumber curKeyHash, uint32_t hashShift);

allowing you to write the above as just:

  h1 = applyDoubleHash(h1, hash2(curKeyHash, hashShift));

@@ +853,5 @@
> + * Swap trial and cur. We can (and do) leave behind some
> + * invalid hash bits safely, saving another ~5% overhaed.
> + */
> + Swap(trial->t, cur->t);
> + updateKey(trial, curKeyHash, curNewKey);

If perf matters here (the comment suggests 'yes'), then I'd check whether the compiler is being smart and eliminating the two dead stores to trial->t (keyHash and key). I bet it can, but if it isn't, then that suggests writing a swapWithNewKey that avoids the stores manually.

@@ +1040,4 @@
> static const Key &getKey(Entry &e) { return e.key; }
> + static const Value &getValue(Entry &e) { return e.value; }
> + static void setKey(Entry &e, const KeyType &k) { const_cast<Key &>(e.key) = k; }
> + static void setValue(Entry &e, const ValueType &v) { const_cast<Value &>(e.value) = v; }

I don't think you need a const cast for e.value.

@@ +1218,5 @@
> + * specifically, without allocating any new memory.
> + *
> + * The rekey function must be injective and pure. That is:
> + * - k1 != k2 must imply rekey(k1) != rekey(k2)
> + * - rekey(k) must equal rekey(k)

s/The rekey/Function::rekey/. Also you can unindent this para, methinks.
Attachment #601706 - Flags: review?(luke)
Attached patch v4: Bug squashed and other comments applied. (obsolete) (deleted) — Splinter Review
The DoubleHash struct is a great idea. I toyed for a bit with a similar simplification using an extra rval in the args; I pulled it out because it was only a line shorter and not at all clearer. This factoring, though, is well worth the effort.
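For reference, the factoring being discussed comes out to roughly the following (a reconstruction from the review comments above; the constants and the exact hash-mixing in hash2() are placeholders rather than the real HashTable.h definitions):

    // Reconstruction/sketch only -- see js/public/HashTable.h for the real code.
    #include <stdint.h>

    typedef uint32_t HashNumber;
    static const unsigned sHashBits = 32;

    struct DoubleHash
    {
        HashNumber h2;        // step size for the double-hash probe sequence
        HashNumber sizeMask;  // capacity - 1
    };

    static DoubleHash hash2(HashNumber curKeyHash, uint32_t hashShift)
    {
        unsigned sizeLog2 = sHashBits - hashShift;
        DoubleHash dh = { ((curKeyHash << sizeLog2) >> hashShift) | 1,   // placeholder mixing
                          (HashNumber(1) << sizeLog2) - 1 };
        return dh;
    }

    static HashNumber applyDoubleHash(HashNumber h1, const DoubleHash &dh)
    {
        return (h1 - dh.h2) & dh.sizeMask;
    }

    // Usage at the collision site, as suggested in the review:
    //   h1 = applyDoubleHash(h1, hash2(curKeyHash, hashShift));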
Attachment #601706 - Attachment is obsolete: true
Attachment #603495 - Flags: review?(luke)
Comment on attachment 603495 [details] [diff] [review]
v4: Bug squashed and other comments applied.

Review of attachment 603495 [details] [diff] [review]:
-----------------------------------------------------------------

So I found three tiny but potential bugs below; there really are a lot of corner cases in this algo. But I think I just realized a giant simplification! The following code would *almost* work right now (w/o any additions):

  for (HM::Enum e(hm); !e.empty(); e.popFront()) {
      Entry entry = e.front();
      if (keyChanged(entry.key)) {
          e.removeFront();
          hm.put(newKeyFor(entry.key), entry.value);
      }
  }

The way this breaks is that 'add' could resize the table, which would invalidate 'e'. But what if, instead, we just added an Enum::rekeyFront(newKey) function that essentially did remove+add (analogous to Enum::removeFront). Then ~Enum can checkOverloaded (it already does checkUnderloaded b/c of removeFront). The actual "algorithm" would just be the above for-loop and could be embedded directly in the gengc code w/o any of the RekeyPolicy runaround. Note: if checkOverloaded fails to grow, it just stays put, so no GC/OOM worry.

::: js/public/HashTable.h
@@ +780,5 @@
> + e0->keyHash = newKeyHash;
> + if (collision0)
> + e1->setCollision();
> + if (collision1)
> + e0->setCollision();

IIUC, we want to swap the keyHash's but we want the collision bits to stay put. In that case, by assigning e1->keyHash = e0->keyHash, I think this code propagates the collision bit from e0 to e1. Even better, it would be good to avoid branching:

  JS_ASSERT(!(newKeyHash & sCollBit));
  e1->keyHash = (e1->keyHash & sCollBit) | (e0->keyHash & ~sCollBit);
  e0->keyHash = (e0->keyHash & sCollBit) | newKeyHash;

@@ +842,5 @@
> + while (true) {
> + /* If the spot is empty, take it. */
> + /* BUT NOT REMOVED */
> + if (trial->isFree()) {
> + moveWithNewKey(trial, cur, curNewKey, curKeyHash);

Couldn't this work for removed if you preserved the collision bit? In that case, moveWithNewKey would do:

  dst->keyHash = (src->keyHash & ~sCollBit) | (dst->keyHash & sCollBit)

Also, I think moveWithNewKey needs to account for removedCount: if the dst was "removed", --removedCount, if the src hasCollision, ++removedCount.

@@ +865,5 @@
> + /*
> + * If trial's current key matches its new key, it does not need
> + * to be moved (or it has already been moved) so put it on the
> + * collision path for curNewKey.
> + */
> + const Lookup trialNewKey = RekeyPolicy::rekey(trialOldKey);

Lookup values are used to lookup, Key values are stored, so, even though the types are usually equivalent, to be clear I think you should use Key in all these places. Also, it matches the identifier names.
Attachment #603495 - Flags: review?(luke)
Attached patch v5: A New Hope. (obsolete) (deleted) — Splinter Review
Is this approximately what you had in mind?
Attachment #603495 - Attachment is obsolete: true
Attachment #603913 - Flags: feedback?(luke)
Comment on attachment 603913 [details] [diff] [review]
v5: A New Hope.

Review of attachment 603913 [details] [diff] [review]:
-----------------------------------------------------------------

Lookin good

::: js/public/HashTable.h
@@ +248,5 @@
> + * this operation until the next call to |popFront()|.
> + */
> + void rekeyFront(const Key &k) {
> + if (table.match(*this->cur, k))
> + return;

Is this necessary? Is it likely that the client will call with the same key? Are you thinking the client code would call rekeyFront unconditionally?

@@ +252,5 @@
> + return;
> + Entry e = *this->cur;
> + HashPolicy::setKey(e.t, k);
> + table.remove(*this->cur);
> + table.add(static_cast<Lookup>(k), e);

For this to work, Key == Lookup, so I think you can remove the static cast. Actually, the cast could hide a type error if the types were (accidentally) different.

@@ +253,5 @@
> + Entry e = *this->cur;
> + HashPolicy::setKey(e.t, k);
> + table.remove(*this->cur);
> + table.add(static_cast<Lookup>(k), e);
> + added = removed = true;

I can't see this making us underloaded, just overloaded...

@@ +621,5 @@
> (void) changeTableSize(-1);
> }
> }
>
> + void add(const Lookup &l, const Entry &e)

Could you put this next to the other add() impls?
Attachment #603913 - Flags: feedback?(luke) → feedback+
(In reply to Luke Wagner [:luke] from comment #15)
> ::: js/public/HashTable.h
> @@ +248,5 @@
> > + * this operation until the next call to |popFront()|.
> > + */
> > + void rekeyFront(const Key &k) {
> > + if (table.match(*this->cur, k))
> > + return;
>
> Is this necessary? Is it likely that the client will call with the same
> key? Are you thinking the client code would call rekeyFront unconditionally?

No, yes, yes, basically. There are enough examples in bug 726687 where I want to use this that I thought it would be worth moving these two "extra" lines into the common path. Even so, it may not be worth it, especially considering below.

> @@ +252,5 @@
> > + return;
> > + Entry e = *this->cur;
> > + HashPolicy::setKey(e.t, k);
> > + table.remove(*this->cur);
> > + table.add(static_cast<Lookup>(k), e);
>
> For this to work, Key == Lookup, so I think you can remove the static cast.
> Actually, the cast could hide a type error if the types were (accidentally)
> different.

I originally had rekeyFront take a Lookup and a Key separately. I'm torn on this because of above. I'd be willing to go either way. What do you think?

> @@ +621,5 @@
> > (void) changeTableSize(-1);
> > }
> > }
> >
> > + void add(const Lookup &l, const Entry &e)
>
> Could you put this next to the other add() impls?

Hurm. I did mess up on the ordering, now that I've looked more closely, but I don't think I messed it up that badly. As is, it (almost) exactly mirrors the existing code structure:

  HashTable {
    private:
      remove(Entry)
      checkUnderloaded()
      add(Entry)
      checkOverloaded()
    public:
      ...misc stuff...
      add(Ptr)
      remove(Ptr)
  }

I thought you might ask me to make add(Ptr) call add(Entry) like we do for remove, if anything.
(In reply to Terrence Cole [:terrence] from comment #16)
> I originally had rekeyFront take a Lookup and a Key separately. I'm torn on
> this because of above. I'd be willing to go either way. What do you think?

I think you were right to take a Key. My comment was just to exclusively mention Key in the function w/o any casts to Lookup. It is just a requirement of this method that Key == Lookup since otherwise we'd have to, as you said, take both values which would be pure annoyance for 100% of the use cases I'm sure you have for this :)
Attached patch v6: Fixed nits. (obsolete) (deleted) — Splinter Review
I tried rewriting a couple of the places where we need this and it turns out that doing the check outside of rekeyFront is going to be clearer usage. I think we can keep this from being a problem by moving a bunch of the simple rekeying into jsgcmark under something named like MarkHashMapKeys. I also added a pair of asserts to rekeyFront to ensure that we don't do something insane like skipping the check or passing in e->cur.
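Something like the following is presumably what a MarkHashMapKeys-style helper would look like (hypothetical: nothing of this shape is in the patch, the typedef names are guesses, and getForwarded stands in for whatever the moving GC exposes). It keeps the "did the key actually change?" check outside rekeyFront, as described above:

    // Hypothetical helper, sketched against the Enum interface added by this
    // bug (front/empty/popFront/rekeyFront); not part of the patch.
    template <class Map, class GetForwardedKey>
    static void MarkHashMapKeys(Map &map, GetForwardedKey getForwarded)
    {
        for (typename Map::Enum e(map); !e.empty(); e.popFront()) {
            typename Map::Key oldKey = e.front().key;
            typename Map::Key newKey = getForwarded(oldKey);
            if (newKey != oldKey)      // caller-side check, per the comment above
                e.rekeyFront(newKey);
        }
    }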
Attachment #603913 - Attachment is obsolete: true
Attachment #604236 - Flags: review?(luke)
Attached patch v6: Fixed nits. (deleted) — Splinter Review
Doh! Forgot to qref.
Attachment #604236 - Attachment is obsolete: true
Attachment #604236 - Flags: review?(luke)
Attachment #604241 - Flags: review?(luke)
Comment on attachment 604241 [details] [diff] [review]
v6: Fixed nits.

Awesome, looks great!

>+ void add(const Lookup &l, const Entry &e)

const Key &k
Attachment #604241 - Flags: review?(luke) → review+
This broke Windows builds, so I backed it out: https://hg.mozilla.org/integration/mozilla-inbound/rev/d067c50e01dc
Put Swap() in public/Utility.h, next to Move(), perhaps?
(In reply to Ms2ger from comment #23)

You're a bit late. Maybe open up a new bug and CC Luke?
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla14