Bug 932767 (Core :: JavaScript Engine: JIT, defect, P5)
Status: NEW · Opened 11 years ago · Updated 2 years ago
Flag code paths tainted by unexpected NaNs to help eliminate canonicalizing floats loaded from typed arrays.
Reporter: dougc · Unassigned
References: Blocks 1 open bug
Floating point values read from typed arrays currently need to be canonicalized to limit the range of NaNs. See bug 584158 and also bug 584168.
Unfortunately this can have a significant performance impact to some numerical code, and in particular asm.js style code that makes heavy use of typed arrays.
The Odin compiler avoids this canonicalization, and the Chrome JIT also does not appear to need it.
The Ion JIT compiler could flag the data flows of floating point values read from typed arrays and eliminate the canonicalization when the values are only written back to typed arrays.
It might also be possible to move canonicalization out of a loop. For example, if a loop is reducing a collection of floating point values stored in a typed array then it would likely improve performance to canonicalize the reduced value at the end of the loop rather than canonicalizing each value read from the typed array.
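The loop-hoisting idea can be sketched as follows. This is an illustrative Python model, not Ion code; the canonical quiet-NaN bit pattern used here (0x7FF8000000000000) is an assumption about the engine's chosen pattern. Because IEEE 754 addition propagates NaN, any NaN element makes the whole reduction NaN, so canonicalizing once at the end is observationally equivalent to canonicalizing every load:

```python
import math
import struct

# Assumed canonical quiet-NaN bit pattern (illustrative).
CANONICAL_NAN_BITS = 0x7FF8000000000000

def canonicalize(d):
    """Replace any NaN with the single canonical bit pattern."""
    if math.isnan(d):
        return struct.unpack("<d", struct.pack("<Q", CANONICAL_NAN_BITS))[0]
    return d

def sum_canonicalize_each(values):
    # Naive lowering: canonicalize every element loaded from the typed array.
    total = 0.0
    for v in values:
        total += canonicalize(v)
    return total

def sum_canonicalize_once(values):
    # Hoisted lowering: NaN propagates through the reduction, so a single
    # canonicalization of the final result suffices.
    total = 0.0
    for v in values:
        total += v
    return canonicalize(total)
```

For non-NaN inputs the two lowerings compute the same sum; for inputs containing a NaN, both produce a NaN result, which the hoisted version canonicalizes exactly once.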
Comment 1 • 11 years ago
This sounds good. Note: it's very important that on all bail paths, any speculatively-not-canonicalized doubles get canonicalized.
To wit, V8 avoids the canonicalization by not using NaN-boxing to represent their values: their values are represented as pointers to doubles, which are GC-allocated. In the JIT, of course, they avoid the heap allocation and just pass around unboxed doubles by value.
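Why unrestricted NaN payloads are dangerous under NaN-boxing can be shown with a toy model. This is a hypothetical tagging scheme for illustration only, not SpiderMonkey's actual value layout: it treats bit patterns whose top 13 bits are all ones as boxed pointers, so a raw double read from a typed array could collide with that encoding.

```python
import struct

# Toy NaN-boxing scheme (hypothetical): patterns tagged with all-ones top bits
# carry a pointer in the low 48 bits; everything else is a plain double.
BOX_TAG = 0xFFF8 << 48

def box_pointer(ptr):
    """Encode a (toy) pointer as a NaN-space bit pattern."""
    return BOX_TAG | (ptr & 0xFFFFFFFFFFFF)

def is_boxed_pointer(bits):
    return (bits & BOX_TAG) == BOX_TAG

# A canonical NaN stays outside the boxed range, so it is never
# mistaken for a pointer.
CANONICAL_NAN_BITS = 0x7FF8000000000000

def bits_of(d):
    return struct.unpack("<Q", struct.pack("<d", d))[0]

def double_of(bits):
    return struct.unpack("<d", struct.pack("<Q", bits))[0]
```

A raw typed-array load can produce a double whose bits happen to match the boxed-pointer encoding (`double_of(box_pointer(0xDEADBEEF))` is a NaN as a double, yet its bits satisfy `is_boxed_pointer`), which is precisely what canonicalization rules out.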
Comment 2 • 11 years ago
That would definitely help some real-world JS frameworks, for instance gl-matrix, which performs a lot of typed-array loads through function calls.
For a specific benchmark I am looking at, there is up to a 50% speedup if I just comment out the canonicalizeFloat / canonicalizeDouble call in IonMacroAssembler::loadFromTypedArray.
Comment 3 • 11 years ago
(In reply to Benjamin Bouvier [:bbouvier] from comment #2)
> For a specific benchmark I am looking at, there is up to 50% speedup if I
> just comment the canonicalizeFloat / canonicalizeDouble call in
> IonMacroAssembler::loadFromTypedArray.
sunfish, can you think of a more efficient implementation of canonicalizeDouble on x86/x64 maybe?
Flags: needinfo?(sunfish)
Comment 4 • 11 years ago
You could try moving the NaN constant load out of line (such as with the OutOfLineCodeBase mechanism), so that you can make the non-NaN case a fallthrough. That probably won't completely fix it, but it might help.
Flags: needinfo?(sunfish)
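The code-layout idea suggested above can be modeled loosely in Python. This is only an analogy for the JIT's out-of-line mechanism (OutOfLineCodeBase): the rare NaN case, including materializing the canonical constant, is pushed into a separate cold function so the common case falls straight through. The canonical bit pattern is again an assumption:

```python
import math
import struct

CANONICAL_NAN_BITS = 0x7FF8000000000000  # assumed canonical quiet NaN

def _canonicalize_cold():
    # Cold path: materialize the canonical-NaN constant only when a NaN is
    # actually seen, mirroring moving the constant load out of line.
    return struct.unpack("<d", struct.pack("<Q", CANONICAL_NAN_BITS))[0]

def load_double(value):
    # Hot path: the common non-NaN case falls through with no constant load.
    if not math.isnan(value):
        return value
    return _canonicalize_cold()
```

In the JIT this does not remove the comparison, but it shortens the common path and improves code locality for the non-NaN case.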
Updated • 8 years ago
Priority: -- → P5
Updated • 2 years ago
Severity: normal → S3