Closed Bug 156253 Opened 22 years ago Closed 16 years ago

Restore Speed to js-dtoa routines while maintaining memory safety

Categories: Core :: JavaScript Engine, defect
Severity: normal
Status: RESOLVED FIXED
People: Reporter: scole; Assigned: crowderbt
Keywords: helpwanted, js1.5, perf

Bug 14044 finally added memory safety to the JS_DtoA routines, but at the
expense of speed, since every allocation is now checked for validity.  

A way to fix this is to ensure, before we start, that any call to those routines
will not fail an allocation.  Since there is a finite number of possibilities for
the input arguments, there is a maximum limit to the allocations that the
routines will do: we need only determine what this limit is in order to
preallocate any RAM needed and then remove the allocation validity checks.

I propose that we figure this limit computationally: try every 64-bit double,
with all possible parameters, and see what's allocated.  

This is a big task, requiring a machine to run tests in isolation for probably a
month or more.  Any volunteers out there?

--scole
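
As an illustration of the kind of instrumentation such a survey would need,
here is a minimal counting-allocator sketch in C.  The function names are
hypothetical, not the actual JS_dtoa entry points; the idea is just to wrap
whatever allocator the routines call and record the per-conversion high-water
mark.

    #include <stdio.h>
    #include <stdlib.h>

    static size_t current_bytes, peak_bytes, worst_case;

    /* Replacement allocator: track the running total and the high-water
       mark for the conversion currently under test. */
    static void *counting_alloc(size_t n)
    {
        current_bytes += n;
        if (current_bytes > peak_bytes)
            peak_bytes = current_bytes;
        return malloc(n);
    }

    static void counting_free(void *p, size_t n)
    {
        current_bytes -= n;
        free(p);
    }

    /* Bracket each conversion under test with these two calls. */
    static void begin_conversion(void) { current_bytes = peak_bytes = 0; }

    static void end_conversion(void)
    {
        if (peak_bytes > worst_case)
            worst_case = peak_bytes;
    }

    static void report_worst_case(void)
    {
        printf("worst-case allocation seen: %zu bytes\n", worst_case);
    }
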
Blocks: 156155
Keywords: helpwanted, js1.5, perf
*** Bug 156252 has been marked as a duplicate of this bug. ***
Do you have a more precise estimate on the amount of time this will take?

Doing a quick computation by hand seems to put it at about 20,000,000 years.
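
(A rough back-of-the-envelope consistent with that figure, assuming on the
order of 3e4 instrumented conversions per second:

    2^64 doubles ~= 1.8e19
    1.8e19 / 3e4 per second ~= 6e14 seconds ~= 2e7 years

and that is per combination of conversion parameters.)
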
Hurm.  Yeah, 20,000,000 years does seem a bit long.  Ok, perhaps we morph
this into a study of the code to try to determine the maximum required memory
for the potential input values.  (Perhaps even reading and understanding David
Gay's paper that this code originates from.  Or contacting David Gay?)

--scole
Reassigning to Kenton; cc'ing jband -
Assignee: rogerl → khanson
Blocks: 71668
This is from bug 14044:

------- Additional Comment #35 From Brendan Eich  2002-07-12 17:00 -------

I would still like to see js1.5 wrap up soon: RC5 at the end of July, Final
release at the end of August?

I think that the js engine should not differ materially until 1.5 releases
between trunk and branch.  We can probably get all the fixes in, although we may
need to wait till 1.2/1.0.2 to get drivers' approval.

So it seems best to me that we fix bug 156253 on branch and trunk, soon.  scole,
do you agree?  What do others think of my release plan sketch-question, above?

/be

------------------------------

Well, it certainly would be nice to get this in soon, but I'm not sure I'm the
guy to do it.  My plate is filling up rapidly, and this is not a critical issue
for our products, unlike bug 14044's memory safety.

But I'll go ahead and reread Gay's paper and see if I can get him interested in
this discussion...

--scole
20 million years might be about right.  Although it is necessary to test all
possible formats, so maybe a billion years.  Then one must consider the
string-to-double direction.  I haven't examined the limits of the decimal
strings accepted by David Gay's code.  The current buffer sizes are fine-tuned
for numbers in the double-precision range, which is well behaved, i.e., there
is a maximum and minimum magnitude value for double-precision numbers.
However, decimal strings can have values like 1e+500000000, which might blow
the practical limits of David Gay's program.  I'll check this.

The absolute magnitude of the input exponent pins at 19999, i.e., any exponent
whose magnitude is greater than 19999 gets presented to the conversion routines
as 19999.  This works except for bizarre cases such as

	0.00<20000 zeros>0001E20000

Another problem with this approach is the multi-threaded nature of JavaScript.
How can we guarantee available memory and the size of the blocks available?
Might the overhead costs defeat the purpose of this solution?
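
To make the bizarre case above concrete, here is a small standalone C sketch
(not Mozilla code) that builds such a string and compares the C library's
strtod result with the value that naive exponent pinning would predict:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Build "0.00<20000 zeros>0001E20000".  The mantissa alone is
           1e-20006, so the true value is 1e-20006 * 1e20000 = 1e-6. */
        size_t zeros = 20000;
        char *s = malloc(4 + zeros + 4 + 6 + 1);
        if (!s)
            return 1;
        strcpy(s, "0.00");
        memset(s + 4, '0', zeros);
        strcpy(s + 4 + zeros, "0001E20000");

        double correct = strtod(s, NULL);  /* 1e-06 */
        double pinned  = 1e-7;             /* 10^(-20006 + 19999): what pinning
                                              E20000 down to E19999 would give */
        printf("strtod: %g  naive pinning: %g\n", correct, pinned);
        free(s);
        return 0;
    }

The 20006 leading fractional zeros cancel most of the written exponent, so
pinning E20000 down to E19999 before looking at the digits comes out a factor
of ten off.
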
Just got this response from David Gay.  Haven't checked any of his suggestions
yet; things are busy here right now and I won't get to it until next week. 
Interested parties may wish to check things out, though...  --scole

==============================================

Sorry to be slow replying to your message of 13 July; time is scarce.

> The JavaScript engine in mozilla uses the d-to-a code that you wrote back in
> 1991,

The most recent update was 7 Feb. 2002.
As always, the current version is in netlib's fp directory, e.g.,

	http://netlib.bell-labs.com/netlib/fp/dtoa.c.gz
or
	ftp://netlib.bell-labs.com/netlib/fp/dtoa.c.gz
or
	http://www.netlib.org/fp/dtoa.c

etc.  There's a variant for other IEEE precisions and IEEE-like
arithmetics: gdtoa.tgz in the same netlib directory.

By sending the 2-word E-mail message

	subscribe fp

to netlib@netlib.bell-labs.com, you can arrange to receive automatic
notifications of changes to things in netlib/fp.  Things do not
change there very often, but it's good to check occasionally for
bug fixes.

> The easy way out of this situation
> is to simply perform one allocation at the start of the routines to allocate
> the maximum memory that the routines could possibly use, and then not bother
> with checking the return values of the allocations throughout the body of
> the code.

Since Jan. 1998, dtoa.c has by default used a private memory pool that
(at 2304 bytes) is large enough for almost all conversions and avoids
any need for locks or semaphores during conversions in a
multi-threaded environment (except in the rare cases when the private
pool is too small).  Since one can present strtod with arbitrarily
long decimal strings, it is always possible to exceed any given pool
size when using dtoa.c.  (Of course, algorithms can be devised that
always get by with a fixed amount of memory.  In this regard, some
years ago there was discussion on David Hough's floating-point mailing
list about how much precision is really needed.)

Since you're concerned with efficiency, have you considered using
JNI to use the current dtoa.c directly?

-- dmg
I'm not going to get to this on the JS 1.5 schedule, I'm afraid.  Any other
volunteers to chase down dmg's suggestions?

--scole
Some of the changes that he has made seem to be matched by changes already in
the Mozilla tree.  Mozilla also uses a pool of memory.
Re: comment 9: The "private memory" stuff seems to be in the code, but it's
turned off via the Omit_Private_Memory define.  (With the comment: "Private
memory currently doesn't work with JS_THREADSAFE.")  

I wonder if we could turn it on now with the locking changes that went in
earlier this year.  That might be enough to make it work.

oyea (oyea.yu@163.com) noticed recently (in netscape.public.mozilla.jseng) that
if we do turn on private memory, we will need to adjust js_FinishDtoA to protect
against freeing chunks of RAM from the middle of the private memory area.

--scole
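
For reference, a simplified sketch of the pattern being discussed: carve small
blocks out of a static pool while it lasts, fall back to malloc otherwise, and
have the free path check whether a pointer lies inside the pool before handing
it to free().  This is only an illustration of the idea, not the actual dtoa.c
or js_FinishDtoA code, and the names are made up.

    #include <stdlib.h>

    #define POOL_BYTES 2304  /* same order of size as the pool dmg describes */

    static double pool[POOL_BYTES / sizeof(double)];
    static char *pool_next = (char *)pool;

    /* Hand out pool memory while it lasts; fall back to malloc otherwise. */
    static void *pool_alloc(size_t n)
    {
        char *end = (char *)pool + sizeof(pool);

        n = (n + sizeof(double) - 1) & ~(sizeof(double) - 1);  /* keep alignment */
        if ((size_t)(end - pool_next) >= n) {
            void *p = pool_next;
            pool_next += n;
            return p;
        }
        return malloc(n);
    }

    /* Only free() what actually came from malloc; blocks carved out of the
       static pool must never be passed to free(). */
    static void pool_free(void *p)
    {
        if ((char *)p >= (char *)pool && (char *)p < (char *)pool + sizeof(pool))
            return;
        free(p);
    }
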
Assignee: khanson → general
QA Contact: pschwartau → general
Should revisit this if/when bug 384244 gets some love.
Assignee: general → crowder
Depends on: 384244
Status: NEW → ASSIGNED
Fixed by the landing of bug 384244 (we don't OmitPrivateMemory anymore)
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED