Bug 640897 (Closed) - TI+JM: large ai-astar performance regression
Opened 14 years ago; Closed 14 years ago
Categories: Core :: JavaScript Engine (defect)
Status: RESOLVED FIXED
People: Reporter: jandem; Assignee: Unassigned
References: Blocks 1 open bug
Description
Yesterday we ran ai-astar in 800 ms; today it's 2191 ms (with -m -n). I think this is bug 639263, because ai-astar-data.js has this *insane* 50,000-line object initializer:
http://hg.mozilla.org/projects/kraken/file/e119421cb325/tests/kraken-1.1/ai-astar-data.js
stanford-crypto-aes has a similar regression (450 -> 570 ms). It also has a huge object literal (6,000 lines):
http://hg.mozilla.org/projects/kraken/file/e119421cb325/tests/kraken-1.1/stanford-crypto-aes-data.js
The good news is that Kraken's total score improved nicely (550 ms faster overall), mostly in the imaging-* benchmarks (~2000 ms faster).
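For readers without the Kraken sources at hand, the data file is essentially one enormous literal assigned to a single global that the benchmark then walks (with -m/-n being the method-JIT and type-inference shell flags). A minimal sketch of that shape follows; the names and layout are invented for illustration and are not the real contents of ai-astar-data.js:

// Hypothetical sketch of a huge generated data file: one literal spanning
// tens of thousands of lines, assigned to a single global. The real file
// is at the hg link above; these property names are made up.
var generatedMapData = {
  width: 500,
  height: 500,
  cells: [
    { x: 0, y: 0, walkable: true },
    { x: 1, y: 0, walkable: false },
    { x: 2, y: 0, walkable: true }
    // ... tens of thousands more entries in the real file ...
  ]
};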
Comment 1•14 years ago
Is that under the harness? I didn't think the initializer was counted in the test time in Kraken. (Still, we don't want to take a 3x hit on large-initializer behaviour, to be sure!)
Comment 2•14 years ago
I just sharked this and we're spending 40% of our time in the equality stub. This is bug 619592: better type information for that gigantic initializer (which is what bug 639263 provides) causes us to generate much worse code. Bug 639263 shouldn't affect the time to process the initializer much (measured overhead was 2-3% in that bug), but it changes the types, and that can affect performance unpredictably (this will improve over time).
Depends on: 619592
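To illustrate where equality-stub time typically comes from, here is a purely illustrative sketch (not the actual Kraken code) of the kind of comparison-heavy inner loop an A* search runs. When type inference can prove the operands are, say, always plain objects, the comparison compiles to a cheap inline check; with vaguer type information it falls back to a generic stub call on every iteration:

// Illustrative only: a hot membership check in an A* open list.
// The === below is executed constantly; whether it stays inline or
// goes through a generic equality stub depends on how precisely the
// engine knows the types of openList[i] and node.
function findInOpenList(openList, node) {
  for (var i = 0; i < openList.length; i++) {
    if (openList[i] === node)   // hot comparison
      return i;
  }
  return -1;
}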
Comment 3•14 years ago
JM+TI is now about 33% faster than JM+TM or TM on ai-astar (still slower than V8).
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED