Bug 445178 (decimal) - Implement ES 3.1 Decimal Support
Status: RESOLVED INVALID (opened 16 years ago, closed 7 years ago)
Product: Core :: JavaScript Engine (defect)
Reporter: jeresig; Assignee: rubys
There's been discussion about supporting Decimal in ECMAScript 3.1 and 4.0. Implementing a solution in SpiderMonkey would be one way to determine its feasibility.
Sam Ruby already has an implementation coming along with tests:
http://code.intertwingly.net/public/hg/js-decimal/
"I have both instance (BigDecimal) and static (decNumber) methods
implemented -- the latter based on a suggestion by Allen, and will be
rolled into the 3.1 spec post Oslo.
I still need to do some work on mapping exceptions, but in most cases
exceptions aren't raised, for example Decimal.divide(1,0) produces
Infinity. There also is some ES4 work to be done (e.g., infix operators
and decimal constants).
I have 81,000+ tests ready to be ported over for decNumber, and can
produce tests for the instance methods."
Flags: wanted1.9.1?
Reporter | Updated•16 years ago
Assignee: general → rubys
Updated•16 years ago
Flags: wanted1.9.1? → wanted1.9.1+
Note that decNumber is C but with C++ style line comments. This can be addressed via something like this:
sed -i "s://\(.*\):/*\1 */:" *.c
For building on Windows, see readme.txt and the online User's Guide. The issue is that MSVC (at least through 9.0) does not provide the standard integer types from stdint.h described in the ANSI C99 standard. A workaround may be to add -I. to the CFLAGS and have the makefile produce a workable stdint.h for that environment.
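For reference, a minimal stand-in stdint.h along those lines might look like the sketch below. This is only an illustration of the workaround, not part of the patch; decNumber may only need a subset of these typedefs, and the MSVC-specific types should be checked against the compiler actually in use.

/* stdint.h -- minimal stand-in for MSVC builds that lack the C99 header.
   Sketch only; add limits macros if the build turns out to need them. */
#ifndef DECNUMBER_STDINT_SHIM_H
#define DECNUMBER_STDINT_SHIM_H
typedef signed char      int8_t;
typedef short            int16_t;
typedef int              int32_t;
typedef __int64          int64_t;
typedef unsigned char    uint8_t;
typedef unsigned short   uint16_t;
typedef unsigned int     uint32_t;
typedef unsigned __int64 uint64_t;
#endif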
Over the weekend, I took a stab at implementing decimal literals and unary and infix operators. More details here: https://mail.mozilla.org/pipermail/es4-discuss/2008-July/003350.html
Pointers on how to approach test cases for this functionality, as well as an outline of the next steps in the process, would be most appreciated.
Status: NEW → ASSIGNED
Updated•16 years ago
Summary: Implement Decimal Support → Implement ES 3.1 Decimal Support
At Brendan's request, I ran the SunSpider benchmark against the codebase at this time (which includes the operator support):
http://svn.webkit.org/repository/webkit/trunk/SunSpider/
My methodology was to clone the latest head of mozilla-central into one directory, and in another directory do the same and add the contents of the js-decimal branch. In both directories, I simply did a "cd js/src; make -f Makefile.ref" with no special options. I then ran SunSpider three times per shell, alternating shells between runs. As runs got progressively faster, the best in each case was the third run. The full set of output can be found here:
http://intertwingly.net/stories/2008/07/29/
Totals reported on the best run of each:
js-decimal : 9549.0ms ± 0.0%
mozilla-central : 9548.8ms ± 0.1%
The hardware was an unremarkable mac-mini, chosen as it was unlikely to be running any other significant processes at the time.
If others could apply and reproduce these results, I would appreciate it.
Update: after updating to the latest tip (which shouldn't make a difference) and rebuilding using "make -f Makefile.ref BUILD_OPT=1 OPTIMIZER=-Os" (which definitely does), the new results tend to be in the range of 3082 to 3093 ms, with js-decimal sometimes beating mozilla-central and sometimes the reverse, without enough consistency in the results to draw any conclusions.
The one thing that does seem consistent enough to merit mention is that one microbenchmark, namely controlflow-recursive.js, generally exhibits a 5% performance regression with the js-decimal base. Profiling that one test seems warranted. Volunteers and/or pointers appreciated.
Comment 5•16 years ago
Try building the shell like so:
> make -f Makefile.ref BUILD_OPT=1 INTERP_OPTIMIZER=-O3
JS regression tests are detailed here:
<http://developer.mozilla.org/en/docs/Creating_JavaScript_tests>
Given the ongoing discussion in the ES committee, I think it would be best to pursue landing this on the trunk at first, with it off by default. Other JS features (like e4x) are similarly configurable.
(In reply to comment #5)
> Try building the shell like so:
> > make -f Makefile.ref BUILD_OPT=1 INTERP_OPTIMIZER=-O3
Done.
http://intertwingly.net/stories/2008/08/15/
Now what? I'm not trying to be difficult, but without an understanding of what the goals and parameters are, and without an understanding of how to do more in-depth profiling of a specific test, I feel like I'm flying blind.
> JS regression tests are detailed here:
> <http://developer.mozilla.org/en/docs/Creating_JavaScript_tests>
One thing not mentioned on that page is how to actually run the tests. I do, however, see a few shell scripts. When run, they tend to indicate that I might need to install a "Sisyphus framework". That, too, isn't mentioned on the page. I can continue to bumble around and I'm sure I will ultimately be able to figure it out, but even so I won't be confident that I'm running the right tests in the right way. Any pointers would be appreciated.
> Given the ongoing discussion in the ES committee, I think it would be best to
> pursue landing this on the trunk at first, with it off by default. Other JS
> features (like e4x) are similarly configurable.
Uhh, agreed in general. If someone can point out an example where this was done before, I'm good at following examples. I see how e4x can be compiled out; is that what you mean? And if so: forgive me, but I'm not sure that e4x is a good example. The way it was done left a lot of scarring in the code. Simply eliminating anything that makes any new functions or objects visible would go most of the way, and a change to JSVAL_IS_DECIMAL to always return false would optimize out most of the rest.
What I am trying to say, perhaps clumsily, is that I would prefer something a little more surgical.
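For illustration, a compile-time switch along those lines might look like the sketch below. JS_HAS_DECIMAL, js_DecimalClass, and the exact JSVAL_IS_DECIMAL signature are assumptions for the sketch, modeled loosely on how e4x is gated with JS_HAS_XML_SUPPORT; this is not code from the js-decimal branch.

/* Sketch of a surgical compile-time switch; names are illustrative only. */
#if JS_HAS_DECIMAL
#define JSVAL_IS_DECIMAL(cx, v)                                             \
    (!JSVAL_IS_PRIMITIVE(v) &&                                              \
     OBJ_GET_CLASS(cx, JSVAL_TO_OBJECT(v)) == &js_DecimalClass)
#else
/* With decimal compiled out, the predicate is constant false and the
   compiler can drop the dependent branches. */
#define JSVAL_IS_DECIMAL(cx, v) JS_FALSE
#endif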
Beyond that, I would like to talk about ways to get this into people's hands so that they can experiment with it. An about:config option to make decimal visible would be ideal (though that would argue against pursuing a compile time switch). Perhaps a separate set of downloads accompanying each beta would suffice.
Comment 7•16 years ago
It shouldn't be hard to show no performance regressions due to the patch, in all of SunSpider, jresig's Dromaeo, and tryserver-built Tp/Ts/Txulwinopen/... tests. The js tests and mochitests are also required to pass with no new regressions.
E4X left scars in its spec, too. The spec deviates from ES3 so much that the code must too (methods are not visible from the prototype; identifiers can be QName objects; etc.). I agree that Decimal can be done more surgically, and fewer ifdef'ed macros always beat more open-coded ifdefs in any hand ;-).
/be
Comment 8•16 years ago
(In reply to comment #6)
>
> > JS regression tests are detailed here:
> > <http://developer.mozilla.org/en/docs/Creating_JavaScript_tests>
>
> One thing not mentioned on that page is how to actually run the tests.
I'll add more information as soon as devmo is writable again.
For one-off testing, use jsDriver.pl.
./jsDriver.pl --help for options
Example: running all tests for the SpiderMonkey debug build (smdebug), excluding the lc2 and lc3 directories (LiveConnect), the tests listed in slow-n.tests (tests which may take a long time), and the tests listed in spidermonkey-n-1.9.1.tests (obsolete tests):
./jsDriver.pl -e smdebug -L lc2 lc3 slow-n.tests spidermonkey-n-1.9.1.tests -s /pathtoshellexecutable/js
To include only a specific set of directories or tests, use the -l option followed by a list of directories, relative paths to individual tests, or files containing lists of relative paths to tests, e.g.
./jsDriver.pl -e smdebug -l e4x slow-n.tests spidermonkey-n-1.9.1.tests -s /pathtoshellexecutable/js
Comment 9•16 years ago
(In reply to comment #6)
>
> Now what? I'm not trying to be difficult, without an understanding of what the
> goals and parameters are, and without a understanding of how to do a more
> in-depth profiling of a specific test, I feel like I'm flying blind.
Performance regressions of any kind are unacceptable. It is relatively easy to profile a single test with any profiling tool. I've used callgrind and Shark in the past. SunSpider provides simple --shark/--shark20 options to run whole-program profiles, including shell startup and script parsing. Mozilla has Shark features built in that can be activated for specific portions of JS code:
http://developer.mozilla.org/en/docs/Profiling_JavaScript_with_Shark
On Linux, I use callgrind:
> valgrind --tool=callgrind ./js -f controlflow-recursive.js
This will produce an output file named callgrind.out.NNNN, which is handy to view in kcachegrind.
> kcachegrind callgrind.out.NNNN
Once we know where the delta is, we can analyze further. If we find that it's simply spending more time in the interpreter loop, I am willing to test the patch in alternative compilers to see if they show similar problems. We've found the code generated by GCC to be very sensitive to seemingly innocuous changes.
> Beyond that, I would like to talk about ways to get this into people's hands so
> that they can experiment with it. An about:config option to make decimal
> visible would be ideal (though that would argue against pursuing a compile time
> switch).
If you want to try a runtime switch, you can try something like JSOPTION_DECIMAL. Such options are handled in
/dom/src/base/nsJSEnvironment.cpp
for browsers like Firefox and SeaMonkey. For the JS Shell, it needs to be handled in js.cpp.
http://mxr.mozilla.org/mozilla-central/search?string=JSOPTION
gives some examples of its use.
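As a rough illustration of that approach (an editorial sketch, not code from the patch or from nsJSEnvironment.cpp), gating the Decimal constructor on such an option bit might look like this. JSOPTION_DECIMAL's bit position and decimal_InitClass are assumptions for the sketch; JS_GetOptions, JS_SetOptions, and JS_BIT are existing SpiderMonkey API of this era.

#include "jsapi.h"

/* Hypothetical option bit; a real patch would define it in jsapi.h using
   the next free JS_BIT alongside the other JSOPTION_* bits. */
#define JSOPTION_DECIMAL JS_BIT(14)

/* Assumed to come from the decimal patch. */
extern JSObject *decimal_InitClass(JSContext *cx, JSObject *obj);

static JSBool
MaybeInitDecimal(JSContext *cx, JSObject *global)
{
    /* Embeddings opt in with JS_SetOptions(cx, JS_GetOptions(cx) |
       JSOPTION_DECIMAL); the shell would do the same from js.cpp. */
    if (!(JS_GetOptions(cx) & JSOPTION_DECIMAL))
        return JS_TRUE;
    return decimal_InitClass(cx, global) != NULL;
}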
Assignee | Comment 10•16 years ago
(In reply to comment #8)
>
> For one off testing use jsDriver.pl.
>
> ./jsDriver.pl --help for options
Thanks! I've now got a small but growing number of tests here:
http://code.intertwingly.net/public/hg/js-decimal/file/79ba2d59ded1/js/tests/decimal/
A snapshot of output from jsDriver as well as a concise summary of the tests themselves can be found here:
http://intertwingly.net/stories/2008/08/27/estest.html
Assignee | Comment 11•16 years ago
A brief update prompted by these kind words:
https://mail.mozilla.org/pipermail/es-discuss/2008-September/007529.html
After a brief period of wild fluctuation, we seem to be converging on the definition of the semantics. My implementation matches my understanding of the growing consensus, but needs some cleanup before this code gets pulled. Specifically, some of the function and macro names matched prior directions, and while some anachronisms are inevitable in mature code bases, new additions should be free of them. Additionally, while my approach has been to implement everything anybody could possibly ask for with the idea that I could trim later, the approach for Firefox should be the opposite: start small and add only what is needed.
My real reason for this update was to expose people who care about the implementation of this feature to the overall design approach. The growing consensus is based on the assumption that there will be a decimal primitive and a wrapper class. In jsval, there simply is no more room for new primitives, at least not if we continue to reserve an entire bit for integers. And since the performance of integers is more important than the performance of decimal, this means that both decimal "primitive" and wrappered values need to be implemented as JSObject, the implication being that a few select places end up saying things like IS_PRIMITIVE(foo) || IS_DECIMAL(foo). Perhaps we need separate names for implementation primitives vs. language primitives.
One upside to this approach is that it means that things like OP_ADD can continue to optimize for integers and strings, and only need to do an additional "if" check when either of the two operands is a JSObject -- clearly paths which are not performance critical in the first place.
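To illustrate the shape being described (an editorial sketch, not code from the branch; js_IsDecimal and js_AddDecimal are assumed names standing in for the patch's helpers), the addition path would keep its existing int and string fast paths and only consult decimal on the object path:

#include "jsapi.h"

/* Assumed helpers from the decimal patch; names are guesses for the sketch. */
extern JSBool js_IsDecimal(JSContext *cx, jsval v);
extern JSBool js_AddDecimal(JSContext *cx, jsval lval, jsval rval, jsval *vp);

/* Sketch of an OP_ADD-style branch shape with decimal held in a JSObject. */
static JSBool
sketch_Add(JSContext *cx, jsval lval, jsval rval, jsval *vp)
{
    if (JSVAL_IS_INT(lval) && JSVAL_IS_INT(rval)) {
        /* existing int fast path, unchanged */
    } else if (JSVAL_IS_STRING(lval) || JSVAL_IS_STRING(rval)) {
        /* existing string concatenation path, unchanged */
    } else if (js_IsDecimal(cx, lval) || js_IsDecimal(cx, rval)) {
        /* only reachable when an operand is a JSObject, i.e. already off
           the performance-critical track */
        return js_AddDecimal(cx, lval, rval, vp);
    }
    /* ... remaining ToPrimitive/ToNumber conversion paths elided ... */
    return JS_TRUE;
}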
Comment 12•16 years ago
(In reply to comment #11)
> My real reason for this update was to expose people who care about the
> implementation of this feature to the overall design approach. The growing
> consensus is based on the assumption that there will be a decimal primitive and
> a wrapper class. In jsval, there simply is no more room for new primitives, at
> least not if we continue to reserve an entire bit for integers.
Right. We could do as Tamarin does, and make int jsvals use only 29 bits. Then we would have room for decimal plus two more primitive types. We'd have to measure carefully to see whether losing two bits from int jsval matters (I suspect it does not but I'm ready to be wrong).
> And since
> performance of integers is more important than the performance of decimal,
The question is whether integers that fit in 31 bits (signed) but not 29 bits are noticeably important -- at all, according to common benchmarks -- for performance.
> this
> means that both decimal "primitive" and wrappered values need to be implemented
> as JSObject, the implication being that a few select places end up saying
> things like IS_PRIMITIVE(foo) || IS_DECIMAL(foo). Perhaps we need a separate
> names for implementation primitives vs language primitives.
Possibly. We might want to support more than two or three more "value types", so dignifying those with macrology could help.
> One upside to this approach is that it means that things like OP_ADD can
> continue to optimize for integers and strings, and only need to do an
> additional "if" check when either of the two operands is an JSObject -- clearly
> paths which are not performance critical in the first place.
That is a benefit. Suggest we go with your approach for now. If it starts to hurt more than naming conventions, holler.
/be
Assignee | Comment 13•16 years ago
(In reply to comment #12)
> (In reply to comment #11)
> > My real reason for this update was to expose people who care about the
> > implementation of this feature to the overall design approach. The growing
> > consensus is based on the assumption that there will be a decimal primitive and
> > a wrapper class. In jsval, there simply is no more room for new primitives, at
> > least not if we continue to reserve an entire bit for integers.
>
> Right. We could do as Tamarin does, and make int jsvals use only 29 bits. Then
> we would have room for decimal plus two more primitive types. We'd have to
> measure carefully to see whether losing two bits from int jsval matters (I
> suspect it does not but I'm ready to be wrong).
My concern isn't that the range of int29 vs. int31 is smaller, but that doing so breaks the following idiom:
if ((lval & rval) & JSVAL_INT)
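For readers following along: with the current encoding, JSVAL_INT is the low tag bit, so ANDing the two jsvals leaves that bit set only when both operands are ints, making the combined type check a single AND. A 29-bit int layout implies a multi-bit tag, so the test would become a full tag comparison per operand. A minimal sketch of the current idiom (SKETCH_JSVAL_INT stands in for the real jsapi.h macro):

#include "jsapi.h"

/* Sketch: why the single-bit int tag keeps the both-ints test to one AND. */
#define SKETCH_JSVAL_INT ((jsval)0x1)

static int
sketch_BothInt(jsval lval, jsval rval)
{
    /* Nonzero iff both values carry the int tag in their low bit. */
    return ((lval & rval) & SKETCH_JSVAL_INT) != 0;
}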
Comment 14•16 years ago
In JIT code we can keep ints 32-bit clean, even if the interpreter has a limited value range (we already do that for our int31s).
Comment 15•16 years ago
Traveling soon, will have time to help. I sent Sam a sketch (some might compile, even) of adding a DECIMAL jsval sub-tag built from the BOOLEAN tag. Pseudo-booleans above a small scalar value would be taken as aligned pointer high bits, and the low three jsval tag bits cleared, to form a jsdecimal pointer. Allocating jsdecimal requires yet another GC-thing type.
/be
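A rough editorial sketch of that encoding, under the 1.9-era tag layout (three tag bits, JSVAL_BOOLEAN tag) and assuming 8-byte-aligned jsdecimal GC-things. The SKETCH_* names and the pseudo-boolean cutoff are assumptions, not the actual sketch Brendan sent:

#include "jsapi.h"

/* jsdecimal would be a new GC-thing type holding the decNumber payload. */
typedef struct jsdecimal jsdecimal;

/* Boolean-tagged values whose untagged bits exceed the small range used by
   true/false and the pseudo-boolean sentinels are treated as decimals. */
#define SKETCH_PSEUDO_BOOLEAN_MAX 4

#define SKETCH_JSVAL_IS_DECIMAL(v)                                          \
    (JSVAL_TAG(v) == JSVAL_BOOLEAN &&                                       \
     ((jsuword)(v) >> JSVAL_TAGBITS) > SKETCH_PSEUDO_BOOLEAN_MAX)

/* Clear the low three tag bits to recover the aligned pointer. */
#define SKETCH_JSVAL_TO_DECIMAL(v)                                          \
    ((jsdecimal *)((jsuword)(v) & ~(jsuword)JSVAL_TAGMASK))

#define SKETCH_DECIMAL_TO_JSVAL(p)                                          \
    ((jsval)((jsuword)(p) | JSVAL_BOOLEAN))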
Comment 16•15 years ago
The last TC39 meeting left us with a Harmony agenda item that generalizes value types with operators so that decimal, rational, complex, etc. could all be added, either natively or hosted in the future language, possibly even with lexical sugar, but not hardcoded -- instead done via libraries.
This doesn't help solve the usability bug 5658, but neither does anything else on the table.
The bug where we hope to prototype value types with operators is bug 467473.
/be
Depends on: 467473
Comment 17•15 years ago
(In reply to comment #16)
> This doesn't help solve the usability bug 5658, but neither does anything else
> on the table.
Typo, argh: bug 5856.
/be
Comment 18•7 years ago
New JavaScript features should now be proposed to TC39 [1] before being implemented in SpiderMonkey; therefore, closing as INVALID.
[1] https://github.com/tc39/proposals/blob/master/CONTRIBUTING.md
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → INVALID