Open
Bug 817446
Opened 12 years ago
Updated 2 years ago
Tons of time spent in js::ion::GetElementCache
Categories
(Core :: JavaScript Engine, defect)
Tracking
NEW
Flag | Tracking | Status
---|---|---
firefox20 | - | ---
People
(Reporter: kael, Unassigned)
References
(Blocks 1 open bug)
Details
(Whiteboard: [games:p?])
A jsfiddle by Notch, developer of Minecraft that does some raycasted rendering using canvas and putImageData went up on Hacker News today. On a whim, I decided to compare the performance in Nightly and Chrome.
The performance in Nightly is *dramatically* worse than that in Chrome. From profiling it, the problem appears to be that because he's using Array instead of typed arrays, some absurd amount of time is being spent in js::ion::GetElementCache for his array reads. I'm not able to tell from reading the code or looking at the profiles why this is so expensive, though:
http://people.mozilla.com/~bgirard/cleopatra/#report=a0b314e9da18533dd79046fd7f9c205fc15ec8f9&invertCallback=true&filter=%255B%257B%2522type%2522%253A%2522RangeSampleFilter%2522%252C%2522start%2522%253A7224%252C%2522end%2522%253A16872%257D%255D&selection=%255B%2522%28total%29%2522%252C%2522JSObject%253A%253AgetElement%28JSContext%2520*%252CJS%253A%253AHandle%253CJSObject%2520*%253E%252CJS%253A%253AHandle%253CJSObject%2520*%253E%252Cunsigned%2520int%252CJS%253A%253AMutableHandle%253CJS%253A%253AValue%253E%29%2522%252C%2522JSObject%253A%253AgetElement%28JSContext%2520*%252CJS%253A%253AHandle%253CJSObject%2520*%253E%252CJS%253A%253AHandle%253CJSObject%2520*%253E%252Cunsigned%2520int%252CJS%253A%253AMutableHandle%253CJS%253A%253AValue%253E%29%2522%255D
Changing the two Array() instances at the top to typed arrays makes the performance competitive, so I think this is the only real issue here. For comparison:
http://jsfiddle.net/uzMPU/1573/
My guess is that perhaps V8 is doing something really clever with the Arrays here that causes them to perform well, even if a typed array would be faster.
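The workaround mentioned above can be sketched as follows. This is a hypothetical reconstruction, not the fiddle's actual code: the variable name `map` and the element type are assumptions. The point is that a typed array is allocated dense and zero-filled up front, so element reads never fall back to sparse or slow-array paths.

```javascript
// Hypothetical sketch of the typed-array workaround; names and types
// are assumptions, not the fiddle's actual code.
const mapSize = 64 * 64 * 64; // 262144 voxels, as in the demo

// was (assumed): const map = new Array(mapSize);
const map = new Uint8Array(mapSize);

// Every element is readable immediately; no holes, no sparse storage.
console.log(map.length, map[0]); // 262144 0
```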
Comment 1•12 years ago
Is he ending up with slow arrays perhaps?
Comment 2•12 years ago
(In reply to Boris Zbarsky (:bz) from comment #1)
> Is he ending up with slow arrays perhaps?
The problem is that "map" is initialized in random order rather than with increasing indexes, so we use sparse arrays instead of dense arrays. Either we should handle such random initialization, or try to convert these arrays to dense arrays when we notice they are used in hot paths.
Changing the initialization order of the map by swapping the z-loop with the x-loop [1] makes the animation much smoother. Changing the size of the incremental GC slices can also prevent pauses caused by full GCs.
[1] http://jsfiddle.net/Fp89p/
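The loop swap described in comment 2 can be illustrated like this. The index expression and loop bounds are assumptions (a small stand-in for the demo's 64³ map), but the access pattern is the point: with the z-loop outermost, consecutive iterations write indices that jump by a large stride, so the engine sees a sparse fill; with the x-loop outermost, the index increases by 1 each iteration and the array grows densely.

```javascript
// Illustrative only: SIZE and the index expression are assumptions,
// standing in for the demo's 64*64*64 voxel map.
const SIZE = 8;

// Sparse-style fill: with z outermost, successive writes hit indices
// 0, SIZE*SIZE, 2*SIZE*SIZE, ... -- large strides, never consecutive.
const sparseFill = [];
for (let z = 0; z < SIZE; z++)
  for (let y = 0; y < SIZE; y++)
    for (let x = 0; x < SIZE; x++)
      sparseFill[x * SIZE * SIZE + y * SIZE + z] = x ^ y ^ z;

// Dense-style fill: with x outermost, the index increases by exactly 1
// per iteration, so the array can grow as dense storage.
const denseFill = [];
for (let x = 0; x < SIZE; x++)
  for (let y = 0; y < SIZE; y++)
    for (let z = 0; z < SIZE; z++)
      denseFill[x * SIZE * SIZE + y * SIZE + z] = x ^ y ^ z;
```

Both loops produce identical contents in the end; only the order of the writes differs, which is exactly what steers the engine toward sparse or dense element storage.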
Comment 3•12 years ago
That dependency may not be quite right. Even after bug 586842 there will still be sparse arrays (just not slow arrays) and this problem could still arise. Bug 586842 is more for fixing cases where the array has named properties in addition to indexed ones.
IIRC v8 is much more tolerant of sparse JS arrays than we are, e.g. eagerly allocating the memory for Array(N) for very large N. It should be simple for us to just do something similar (or exactly the same, why not).
Reporter
Comment 4•12 years ago
Yeah, I think it would be reasonable to say that if you go 'new Array(N)', it should probably not be sparse. On the other hand, if you just go 'new Array()' and then fill in random order, it seems fine to get a sparse array then.
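The distinction comment 4 draws has a small observable consequence in plain JavaScript: `new Array(N)` sets `length` but creates no actual elements (holes), which is what leaves the engine free to pick sparse storage for huge N.

```javascript
// new Array(N) sets length but creates only holes, not elements.
const preSized = new Array(262144);
console.log(preSized.length); // 262144
console.log(0 in preSized);   // false: index 0 is a hole, not the value undefined

// fill() creates real dense elements at every index.
const filled = new Array(262144).fill(0);
console.log(0 in filled);     // true
```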
Comment 5•12 years ago
Yeah, this is bug 799122. It's also a nice Kraken win, so we should get that fixed.
IIRC, V8 allocates eagerly for N < ~10k. This test creates an array of size 64 * 64 * 64 = 262144, though; that's huge..
Comment 7•12 years ago
So there are two issues here:
1) Sparse vs dense. new Array(10000000000) should probably not go dense... and yes, people do that on the web.
2) Slow vs fast. As soon as we go sparse, we go fully slow. I believe V8 has somewhat fast paths even for sparse arrays: it stores the sparse indices in some sort of interval set data structure, not the generic property hashtable.
Would really like to see this fixed, as we shouldn't have regular Arrays be such a big perf trap.
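The "somewhat fast path for sparse arrays" idea in point 2 can be sketched as follows. This is a speculative illustration, not V8's actual data structure: sparse elements are kept as sorted runs of `{ start, values }` instead of a generic property hashtable, so a cluster of nearby indices still gets contiguous storage.

```javascript
// Speculative sketch of an interval-set store for sparse elements;
// not V8's actual implementation. Each run holds a contiguous block
// of values starting at `start`.
class IntervalStore {
  constructor() {
    this.runs = []; // each run: { start, values: [] }, kept sorted by start
  }
  set(index, value) {
    // Reuse a run this index falls inside or directly extends.
    for (const run of this.runs) {
      const off = index - run.start;
      if (off >= 0 && off <= run.values.length) {
        run.values[off] = value;
        return;
      }
    }
    // Otherwise start a new run (adjacent runs are not merged here).
    this.runs.push({ start: index, values: [value] });
    this.runs.sort((a, b) => a.start - b.start);
  }
  get(index) {
    for (const run of this.runs) {
      const off = index - run.start;
      if (off >= 0 && off < run.values.length) return run.values[off];
    }
    return undefined; // a hole
  }
}

const store = new IntervalStore();
store.set(1000, 'a');
store.set(1001, 'b'); // extends the run starting at 1000
store.set(5, 'x');    // starts a new run
console.log(store.get(1000), store.get(1001), store.get(5)); // a b x
```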
tracking-firefox20: --- → ?
Whiteboard: [games:p1]
Comment 10•12 years ago
The tracking flag is not meant for project-specific bugs, only release issues.
Comment 11•12 years ago
(In reply to Jan de Mooij [:jandem] from comment #5)
> Yeah, this is bug 799122. Is also a nice Kraken win so we should get that
> fixed.
>
> IIRC, V8 allocates eagerly for N < ~10k. This test creates an array of size
> 64 * 64 * 64 = 262144 though, that's huge..
Bug 835102 should help this a lot. It isn't as good as allocating the dense elements right off the bat (we still create a lot of shapes initially, and we won't be able to access holes from optimized code due to the possibility of sparse indexes), but it works on arrays of any size and avoids needing a cutoff for the maximum size we will eagerly allocate.
I think bug 799122 would be good to do but eagerly allocating such large arrays seems a concern unless we can determine somehow (e.g. bytecode analysis) that the array will be mostly filled in (don't know if that case applies here).
Depends on: 835102
Did this get better?
Whiteboard: [games:p1] → [games:p2]
Updated•11 years ago
Blocks: gecko-games
Assignee
Updated•10 years ago
Assignee: general → nobody
Whiteboard: [games:p2] → [games:p?]
Updated•2 years ago
Severity: normal → S3