Open
Bug 542144
Opened 15 years ago
Updated 2 years ago
package and ship already JIT'd chrome JavaScript
Categories
(Core :: JavaScript Engine, defect)
Core
JavaScript Engine
Tracking
NEW
People
(Reporter: dietrich, Unassigned)
References
(Blocks 1 open bug)
Details
(Whiteboard: [snappy:p3][ts])
Beltzner suggested this today. Can someone comment on the viability of this idea?
Reporter
Updated•15 years ago
OS: Windows 7 → All
Hardware: x86 → All
Whiteboard: [ts][tsnap]
Updated•15 years ago
Assignee: nobody → general
Component: JIT Compiler (NanoJIT) → JavaScript Engine
Product: Tamarin → Core
QA Contact: nanojit → general
For trace-compiled scripts, this would be extremely difficult. We bake all sorts of run-time addresses and goodies into traces that would need to be relocated. Once JägerMonkey is in gear it could be easier to cache whole method-compilation results.
Out of curiosity, how much time is actually spent compiling JS on load?
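The numbers asked for here could be gathered by instrumenting the engine's compile and execute phases separately. A minimal sketch of that kind of measurement, using Python's `compile()`/`exec()` purely as a stand-in for the engine's parse/compile and execution phases (this is an analogy, not SpiderMonkey API):

```python
import time

# Hypothetical stand-in: time the parse/compile phase separately from
# execution, as one would instrument js_Execute in the real engine.
source = "\n".join(f"def f{i}(x):\n    return x + {i}" for i in range(2000))

t0 = time.perf_counter()
code = compile(source, "<chrome>", "exec")   # parsing + bytecode emission
t1 = time.perf_counter()
exec(code, {})                               # execution proper
t2 = time.perf_counter()

compile_ms = (t1 - t0) * 1000
execute_ms = (t2 - t1) * 1000
print(f"compile: {compile_ms:.2f} ms, execute: {execute_ms:.2f} ms")
```

Comparing the two numbers across real chrome scripts would show how much of startup is actually spent in the front end.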
Comment 2•15 years ago
yeah, need numbers on how long things take now.
Comment 3•15 years ago
Discussed this with sayrer. I have been interested in this for a while. I think that, from a technical perspective, chrome can be compiled well to native code statically for the most part. We should see a speedup to JIT-level and beyond if we use a backend like LLVM. Latency should improve too. However, code size will definitely go up. We should measure by how much; I wouldn't be surprised by 10x or so. So the trade-off here is latency/speed vs. code size. How much does JS execution contribute to our frontend startup and execution time? We should Shark that. sayrer suggested estimating the total speedup by assuming we can make JS infinitely fast. I think that's a good idea. Any volunteers?
Reporter
Comment 4•15 years ago
See bug 522354 for some initial work into figuring out why and where js_Execute spends time.
Depends on: 522354
Comment 5•15 years ago
(In reply to comment #3)
> Discussed this with sayrer. I have been interested in this for a while. I think
> from a technical perspective, chrome can be compiled well to native code
> statically for the most part. We should see a speedup to JIT-level and beyond
> if we use a backend like LLVM.
Sounds complicated. We currently have two JS implementations, the interpreter and the tracer. Work is being done on a 3rd, JaegerMonkey. Would this be a fourth?
Comment 6•15 years ago
Life is complicated, especially if you are implementing JS in a browser, going against V8 and JSC, and with your cross-platform UI and widgets including HTML ones scripted using JS to boot.
We do not have separate JS implementations in the interpreter, tracer, and JM. A JS implementation consists of a compiler too, and the bytecode interpreter, tracing JIT, and inline/call-threaded JIT (JM) are all part of one implementation. They share common data structures, including the GC.
The JM plan, not at first but eventually and soon enough, is to replace the old bytecode interpreter. So that will help.
An ahead-of-time compiler for chrome JS we ship is another mouth to feed, and more properly speaking a second JS implementation. Still might be worth it, but comment 3 may be underestimating code bloat issues.
Comment 4 points to analysis we need to do to make sure we know what we are optimizing. But static analysis of JS, for type inference mainly -- and then traditional AOT optimizations -- could be good.
/be
Comment 7•13 years ago
We could compile the JS code in the browser into the intermediate representation used by our JS engine.
This would eliminate some compiler passes (parsing and some optimizations).
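The idea of shipping the engine's intermediate representation instead of source can be sketched as a serialize-once, load-many bytecode cache. A minimal illustration, using Python code objects and `marshal` purely as an analogy for SpiderMonkey bytecode (not the actual engine API):

```python
import marshal

# Build time: run the full front end once and serialize the result.
source = "def add(a, b):\n    return a + b\n"
code = compile(source, "<chrome>", "exec")   # parsing + optimization passes
blob = marshal.dumps(code)                    # what would ship in the package

# Startup time: deserialize instead of re-parsing.
cached = marshal.loads(blob)
ns = {}
exec(cached, ns)
assert ns["add"](2, 3) == 5
```

The startup-side load skips parsing entirely; the cost is that the serialized form is tied to one engine version, so the cache would need regeneration on update.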
Reporter
Updated•13 years ago
Whiteboard: [ts][tsnap] → [snappy]
Reporter
Updated•13 years ago
Whiteboard: [snappy] → [snappy][ts]
Updated•13 years ago
Whiteboard: [snappy][ts] → [snappy:p4][ts]
Updated•13 years ago
Whiteboard: [snappy:p4][ts] → [snappy:p3][ts]
The PyPy project used some interesting tricks with program specialization (look up Futamura projections). I was thinking: wouldn't it be simpler to construct an AOT JS compiler via this method, in Haskell, than to retrofit something already retrofitted for another purpose? GHC is already among the most performant high-level language compilers; piggybacking on it might actually be smarter than reusing JIT components, which factor in optimization cost.
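The first Futamura projection mentioned above says that specializing an interpreter with respect to a fixed program yields a compiled version of that program. A toy sketch of the idea (all names hypothetical; the closure stands in for generated native code):

```python
def interpret(program, x):
    """A tiny interpreter: program is a list of (op, operand) pairs."""
    acc = x
    for op, n in program:
        if op == "add":
            acc += n
        elif op == "mul":
            acc *= n
    return acc

def specialize(program):
    """First Futamura projection, hand-rolled: partially evaluate
    interpret() for a known program.  Instruction dispatch is resolved
    at specialization time, leaving a straight-line sequence of steps."""
    steps = []
    for op, n in program:
        if op == "add":
            steps.append(lambda acc, n=n: acc + n)
        elif op == "mul":
            steps.append(lambda acc, n=n: acc * n)
    def compiled(x):
        acc = x
        for step in steps:   # no more opcode tests here
            acc = step(acc)
        return acc
    return compiled

prog = [("add", 2), ("mul", 3)]
compiled = specialize(prog)
assert compiled(4) == interpret(prog, 4) == 18
```

PyPy automates this kind of specialization; doing it for full JS semantics is, of course, where the real work lies.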
Comment 10•12 years ago
How about a chrome JS-as-AST cache packed in one file (to reduce FS latency) that is generated on install and update? AFAICS this would be a pre-compiled omni-jar?
Tihomir: I suspect that adding and maintaining a second compiler written in a second programming language, even if that language is Haskell, would complicate things considerably.
Florian: I seem to remember that parsing is not an important cost. I suspect that a stored AST would also be much larger than the source code, hence increasing FS latency.
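The single-file cache proposed in comment 10 can be sketched as one archive of pre-compiled modules, so startup pays for one `open()` rather than one per script. A hedged illustration in the spirit of the omni-jar, again using Python's `compile`/`marshal` and `zipfile` as stand-ins (file names and layout are hypothetical):

```python
import io
import marshal
import zipfile

# Install/update time: compile every chrome script and pack the results
# into one archive.  (Python source stands in for chrome JS here.)
modules = {
    "browser.js": "def start():\n    return 'browser up'\n",
    "utils.js":   "def twice(x):\n    return 2 * x\n",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as cache:
    for name, src in modules.items():
        cache.writestr(name + "c", marshal.dumps(compile(src, name, "exec")))

# Startup time: one archive open, then cheap per-module loads.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as cache:
    ns = {}
    exec(marshal.loads(cache.read("utils.jsc")), ns)
assert ns["twice"](21) == 42
```

Whether this wins in practice depends on the size question Florian raises: the serialized form must not be so much larger than the source that it costs back the I/O it saves.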
Assignee
Updated•10 years ago
Assignee: general → nobody
Updated•2 years ago
Severity: normal → S3