Closed
Bug 551587
Opened 15 years ago
Closed 14 years ago
MathClass:_min() does not correctly handle -0
Categories
(Tamarin Graveyard :: Baseline JIT (CodegenLIR), defect, P3)
Tracking
(Not tracked)
RESOLVED
FIXED
Q3 11 - Serrano
People
(Reporter: cpeyer, Assigned: wmaddox)
References
Details
(Whiteboard: fixed-in-spicy,fixed-in-spicier)
Attachments
(4 files, 2 obsolete files)
(deleted), patch | rreitmai: review+ | edwsmith: superreview+
(deleted), patch | wmaddox: review+
(deleted), patch
(deleted), patch | cpeyer: review+
Testcase:
var nmin:Number = Math.min(-0, 0);
print(Math.atan2(nmin, nmin));
var umin = Math.min(-0, 0);
print(Math.atan2(umin, umin));
actual: 0, 0
expected: -3.14, -3.14
The negative zero is not preserved, or is mistakenly converted to an int.
Only happens in wordcode build.
Flags: in-testsuite+
Flags: flashplayer-triage+
Flags: flashplayer-qrb?
Flags: flashplayer-qrb? → flashplayer-qrb+
Target Milestone: --- → Future
Reporter
Comment 1•15 years ago
Also happens in jit. Not an injection.
Summary: Math.min does not preserve -0 with avmshell_wordcode → Math.min does not preserve -0 with avmshell jit or wordcode
Comment 2•15 years ago
But not with the ABC interpreter? Weird!
Comment 3•15 years ago
I'm investigating but this is probably a bug in the translator pipeline that converts -0 to 0 -- that pipeline is shared by the jit and wordcode interpreter but doesn't affect the ABC interpreter.
You can see this if you instead of using -0 obtain one computationally. This expression
1 / (-Number.MIN_VALUE / 2)
which divides 1 by -0, should evaluate to -Infinity. It does, in the wordcode interpreter. Ergo it's the handling of the literal -0 that's the problem, not Math.min, and it's not the JIT or wordcode interpreter per se, but the translation pipeline.
Hard to believe this is not an injection, actually -- MIR would have had to have the same problem. Possible, but not terribly plausible.
Pinging QRB with a plea for further investigation before deferring to "Future".
Flags: flashplayer-qrb+ → flashplayer-qrb?
Comment 4•15 years ago
Actually it's more insidious than that, it looks like a broken constant-folding optimization that kicks in for the JIT and wordcode interpreter. Here's a test case:
var n1 = (-Number.MIN_VALUE / 2)
var n2 = -0;
print(1/n1);
print(1/n2);
var n3 = Math.min(n2,0);
var n4 = Math.min(0,n2);
print(1/n3);
print(1/n4);
var n5 = Math.min(-0,0);
var n6 = Math.min(0,-0);
print(1/n5);
print(1/n6);
This prints -Infinity six times in the ABC interpreter as it should, but for the JIT and wordcode interpreter the fifth result printed is "Infinity". This suggests a broken optimization of Math.min in that case. (The optimization may be broken for the sixth test too but the correct result happens to be computed.)
Updated•15 years ago
Summary: Math.min does not preserve -0 with avmshell jit or wordcode → Constant folding of Math.min does not preserve -0 with avmshell jit or wordcode
Reporter
Comment 5•15 years ago
Verified not an injection - issue happens in Player 9_0_r159
Comment 6•15 years ago
(In reply to comment #5)
> Verified not an injection - issue happens in Player 9_0_r159
Oh foo. Now we have to version check.
Comment 7•15 years ago
If any of the other Math functions constant fold then we should worry about the same problem for them.
Comment 8•15 years ago
Absent more evidence, "Future" seems right for this bug.
Updated•15 years ago
Flags: flashplayer-needsversioning+
Updated•14 years ago
Blocks: interp_jit_semantics
Updated•14 years ago
Priority: -- → P3
Target Milestone: Future → flash10.2
Comment 9•14 years ago
Problem seems to be the early binding optimization to Math._min when the two args are known to be doubles. Putting printf's in MathUtils::min and _min, and running the tests:
-Infinity
-Infinity
min -0 0
min 0 -0
-Infinity
-Infinity
_min -0 0 <-- valid const-folded args
_min 0 -0
Infinity <-- incorrect result
-Infinity
This implies _min(-0, 0) is returning 0, but _min(0, -0) returns -0. The code is
double MathClass::_min(double x, double y) {
return (x < y || MathUtils::isNaN(x)) ? x : y;
}
-0 and 0 are not ordered by "<"; the bug seems to be that MathClass::_min doesn't handle negative zero the way MathUtils::min does, and the problem comes and goes with the optimization.
Updated•14 years ago
Component: Virtual Machine → JIT Compiler (NanoJIT)
Updated•14 years ago
Comment 10•14 years ago
On bug 588922, Werner proposed:
> A fix for Math:min/max negative zero problem - use LIR_led/LIR_ged for our test.
Assignee
Updated•14 years ago
Assignee: nobody → wmaddox
Target Milestone: flash10.x - Serrano → flash10.2.x-Spicy
Assignee
Updated•14 years ago
Summary: Constant folding of Math.min does not preserve -0 with avmshell jit or wordcode → MathClass:_min() does not correctly handle -0
Assignee
Comment 12•14 years ago
Patch against tr-spicy. Revised code for _min and _max (2 args) is based on the existing code for min and max (>= 2 args). Don't try to be too clever. It would be better to generate a call to a different function (either the revised one or a compatibility version) rather than checking bugCompatibility inside the helper. Better yet, the JIT should inline these cases, per Werner's suggestion. This patch, however, seems like the most expedient way to resolve another JIT/interpreter incompatibility in time to hit the Spicy dates so OSR can ship.
Please comment on updates needed to the acceptance test suite -- I'm not sure what we do about versioning there.
Attachment #494562 - Flags: superreview?(edwsmith)
Attachment #494562 - Flags: review?(rreitmai)
Reporter
Comment 13•14 years ago
I'm unable to reproduce buggy behavior with Math.max. Do you have a sample testcase for that?
Attachment #494802 - Flags: review?(wmaddox)
Reporter
Comment 14•14 years ago
(In reply to comment #13)
> Created attachment 494802 [details] [diff] [review]
> regression testcase
Note that you need the patch from Bug 616254 to run the test.
Assignee
Comment 15•14 years ago
(In reply to comment #13)
> Created attachment 494802 [details] [diff] [review]
> regression testcase
>
> I'm unable to reproduce buggy behavior with Math.max. Do you have a sample
> testcase for that?
I use this test. Note that you need to check all combinations of 0.0 and
-0.0 in each argument position in order to get complete coverage.
var min_zz = 1.0/Math.min(0.0, 0.0);
var max_zz = 1.0/Math.max(0.0, 0.0);
var min_zm = 1.0/Math.min(0.0, -0.0);
var max_zm = 1.0/Math.max(0.0, -0.0);
var min_mz = 1.0/Math.min(-0.0, 0.0);
var max_mz = 1.0/Math.max(-0.0, 0.0);
var min_mm = 1.0/Math.min(-0.0, -0.0);
var max_mm = 1.0/Math.max(-0.0, -0.0);
print("1.0/Math.min(0.0, 0.0) = " + min_zz);
print("1.0/Math.max(0.0, 0.0) = " + max_zz);
print("1.0/Math.min(0.0, -0.0) = " + min_zm);
print("1.0/Math.max(0.0, -0.0) = " + max_zm);
print("1.0/Math.min(-0.0, 0.0) = " + min_mz);
print("1.0/Math.max(-0.0, 0.0) = " + max_mz);
print("1.0/Math.min(-0.0, -0.0) = " + min_mm);
print("1.0/Math.max(-0.0, -0.0) = " + max_mm);
Assignee
Comment 16•14 years ago
Comment on attachment 494802 [details] [diff] [review]
regression testcase
You need to try -0.0 as both the left and the right argument.
Best to test all combinations: (0.0, 0.0), (0.0, -0.0), (-0.0, 0.0), (-0.0, -0.0)
Attachment #494802 - Flags: review?(wmaddox) → review-
Reporter
Comment 17•14 years ago
Now tests all max/min combos
Attachment #494802 - Attachment is obsolete: true
Attachment #494830 - Flags: review?(wmaddox)
Reporter
Comment 18•14 years ago
Attachment #494830 - Attachment is obsolete: true
Attachment #494831 - Flags: review?(wmaddox)
Attachment #494830 - Flags: review?(wmaddox)
Assignee
Updated•14 years ago
Attachment #494831 - Flags: review?(wmaddox) → review+
Comment 19•14 years ago
Just to clarify the earlier comment: "Verified not an injection - issue happens in Player 9_0_r159".
If this is the case, do we really want to fix it?
Assignee
Comment 20•14 years ago
(In reply to comment #19)
> Just to clarify earlier comment "Verified not an injection - issue happens in
> Player 9_0_r159" .
>
> If this is the case, do we really want to fix it?
Leaving it unfixed results in non-deterministic behavior under OSR, as you don't know what will be compiled and what will be interpreted in the face of user interaction or externally-provided data.
The long-standing nature of the bug is the justification for versioning it.
Comment 21•14 years ago
(In reply to comment #20)
> The long-standing nature of the bug is the justification for versioning it.
Right, but what I'm getting at is that the behaviour was never guaranteed earlier -- if your method happened to be compiled you got one behaviour, and if not you got another -- and that has been the case for quite a while, so do we really need to support this behaviour?
And, if we consider that the behaviour most often encountered was the JIT'd variant, then what about changing the interp? Yes, this is heresy (breaking the interp rather than fixing the jit), but the alternate solution being proposed, with version checking, definitely gets in the way of optimizing (i.e. in-lining) this code in jit'd methods.
Not sure I believe this line of reasoning, but just throwing it out there, as I think putting these version checks in our code has rather large-scale downstream effects.
Comment 22•14 years ago
Can we modify the fix, as follows:
* in the JIT, if the version is new, early bind to _min_fixed() or _max_fixed(). Otherwise, bind to _min_broken() or _max_broken(). Each individual function doesn't need a version check.
I think this will work because the version-checking logic sniffs the caller (via codeContext), and the JIT would be doing that check directly on the code that is calling Math.min.
Thoughts?
Assignee
Comment 23•14 years ago
(In reply to comment #21)
> (In reply to comment #20)
> > The long-standing nature of the bug is the justification for versioning it.
>
> Right, but what I'm getting at is that if the behaviour was not guaranteed
> earlier, meaning if your method happened to be compiled you got one behaviour,
> and if not, you got another, has been in existence for quite a while, so do we
> need to support this behaviour.
Prior to OSR, the bug would exhibit consistent, repeatable behavior under any
given compilation policy, and, in player, there was only one -- interpret
static initializers and compile everything else. With OSR, the policy is
variable, and may depend on user interaction, input data, etc., so any
application that depends on the manifestation or non-manifestation of the bug
at some particular point in the code will now no longer operate reliably.
(For the sake of accuracy, I note that the current compilation policy is not completely consistent across platforms, as JIT failure also causes interpretation, and the stack frame limit varies across platforms.)
> And, if we consider that the behaviour most often encountered was the JIT'd
> variant, then what about changing the interp? Yes this is heresy (breaking
> the interp rather than fixing the jit), but the alternate solution that is
> being proposed with version checking definitely gets in the way of optimizing
> (i.e. in-lining) this code in jit'd methods.
My presumption is that the JIT could generate different code depending on the
version. I know you questioned this (whether bug compatibility for a method
could be bound at the time the code is JIT'd), but Lars R+'d my patch to version
OSR itself in this manner without objection. I'd still like to get a
definitive reading on this, and understand exactly how
currentBugCompatibility is determined.
The out-of-line helper as modified is a bit more complex in order to handle -0
correctly, though I think it should be possible to eliminate one of the isNaN
tests (and, if so, this should be reflected back to the N-ary case as well).
The actual version check kicks in only when we've gotten to the point that we
know we've got a -0 argument to deal with.
I don't really care that much if we break -0 in the interpreter.
The fact that -0 and 0 do not compare "correctly" according to
normal ECMAscript relational and equality comparisons makes the difference
between them almost unobservable, and the distinction when computing
Min/Max therefore almost useless. In particular, an attempt to
compute Min/Max inline with a simple comparison will not give the
same result. I'm not sure what a hard-core numerical analyst would
say about this, but I really doubt that our users care.
Comment 24•14 years ago
(In reply to comment #22)
> I think this will work because the version-checking logic sniffs the caller (via
> codeContext), and the JIT would be doing that check directly on the code that
> is calling Math.min.
>
> Thoughts?
Yes that sounds like it should preserve our ability to optimize, as long as we can maintain this in the face of background compilation.
Assignee
Comment 25•14 years ago
(In reply to comment #22)
> Can we modify the fix, as follows:
>
> * in the JIT, if the version is new, early bind to _min_fixed() or
> _max_fixed(). Otherwise, bind to _min_broken() or _max_broken(). Each
> individual function doesn't need a version check.
I think this would work. I avoided getting into this because I don't understand how the existing early binding is specializing to _min and _max for the two-argument case, and thought that generating the code inline was the ultimate fix.
> I think this will work because the version-checking logic sniffs the caller (via
> codeContext), and the JIT would be doing that check directly on the code that
> is calling Math.min.
Can you look at the OSR versioning patch as well? I'm a bit unclear on how versioning is supposed to work. It seems to me that the version should be the one declared in the ABC or SWF that the executing or compiled bytecode came from, but Rick suggests it is more complicated than that, and indeed I've looked at some of the code manipulating CodeContext objects.
Assignee
Comment 26•14 years ago
(In reply to comment #24)
> (In reply to comment #22)
> Yes that sounds like it should preserve our ability to optimize, as long as we
> can maintain this in the face of background compilation.
Should we stash the value of currentBugCompatibility() in the MethodInfo?
A clear determination that this can be done, and what value should be stored and when, would resolve my questions with respect to versioning.
--Bill
Comment 27•14 years ago
I think we have to look at how that call works. Say the check is inside _min(), and it walks up the stack to the next non-builtin stackframe. Then we can do the check in the JIT if we are compiling a non-builtin function; otherwise, if we're compiling a builtin function that calls Math.min(), and the bug check is going to walk past the caller, then we can't do the check in the JIT.
In either case, I don't know what we get by stashing the value in MethodInfo, and i'm not sure which MethodInfo you meant -- Math.min's or the caller's.
Assignee
Comment 28•14 years ago
(In reply to comment #27)
> I think we have to look at how that call works. Say the check is inside
> _min(), and it walks up the stack to the next non-builtin stackframe. Then, we
> can do check in the JIT, if we are compiling a non-builtin function; otherwise,
> if we're compiling a builtin function that calls Math.min(), and the bug check
> is going to walk past the caller, then we can't do the check in the JIT.
When you say "builtin function" here, then, you are referring to platform code written in AS3, not native code such as Math.min() itself.
> In either case, I don't know what we get by stashing the value in MethodInfo,
> and i'm not sure which MethodInfo you meant -- Math.min's or the caller's.
I mean the Math.min(). One motivation for stashing the value is to keep it available for background compilation when it gets around to compiling the method. But the real interest to me is that doing so would settle the question of whether each method can be uniquely associated with a single BugCompatibility object that determines the behavior of that method everywhere it is called, as that is the necessary condition to "compile in" a compatibility fix at JIT time, rather than generating code to check the BugCompatibility when the method is called (and provide for either buggy or fixed behavior, selectable on a per-invocation basis).
Comment 29•14 years ago
(In reply to comment #28)
> (In reply to comment #27)
> When you say "builtin function" here, then, you are referring to platform code
> written in AS3, not native code such as Math.min() itself.
Correct.
> > In either case, I don't know what we get by stashing the value in MethodInfo,
> > and i'm not sure which MethodInfo you meant -- Math.min's or the caller's.
>
> I mean the Math.min(). One motivation for stashing the value is to keep it
> available for background compilation when it gets around to compiling the
> method. But the real interest to me is that doing so would settle the question
> of whether each method can be uniquely associated with a single
> BugCompatibility object that determines the behavior of that method everywhere
> it is called, as that is the necessary condition to "compile in" a
> compatibility fix at JIT time, rather than generating code to check the
> BugCompatibility when the the method is called (and provide for either buggy or
> fixed behavior, selectable on a per-invocation basis).
That's the crux. If the bug compatibility check doesn't care about the call stack, then we don't need to cache the value; it will be the same whenever the compiler decides to run.
If it does depend on the call stack, then we must have a runtime check, and caching the value in min()'s method info won't help anyway.
I believe, without checking, that the check will walk up the call stack to the first non-builtin call frame, and return an answer based on that frame's version. But time to check, I think, and double-check with Steven if unsure.
Comment 30•14 years ago
> I believe, without checking, that the check will walk up the call stack to the
> first non-builtin call frame, and return an answer based on that frame's
> version. But, time to check, I think, and double check with Steven if unsure.
That's correct, and the whole crux of the design. Caching a bugCompatibility in the MethodInfo would be absolutely wrong.
Comment 31•14 years ago
Okay, so:
* We're compiling a function that calls Math.min(). If min() walks up the call stack at runtime, it will get to this function first.
* If this function is not builtin, we can check bug compatibility at JIT time, and the answer is always the same no matter when we compile.
* Otherwise, we are compiling builtin code, and the bug compatibility must be checked at runtime. If we have hot builtin code calling Math.min, we should do something to make it not slow, but I really doubt it's a problem.
Sound right?
Assignee
Comment 32•14 years ago
(In reply to comment #30)
> > I believe, without checking, that the check will walk up the call stack to the
> > first non-builtin call frame, and return an answer based on that frame's
> > version. But, time to check, I think, and double check with Steven if unsure.
>
> That's correct, and the whole crux of the design. Caching a bugCompatibility in
> the MethodInfo would be absolutely wrong.
If I understand your and Edwin's comments, the bug compatibility for a method is inherited from the nearest non-builtin caller. Assuming that the builtin code never changed, this would indeed result in the builtin "magically" remaining compatible with the behavior expected by user code at the version level for which the user code was compiled. If the builtin code changes (presumably in a contract-preserving manner), however, only by explicitly accounting for the versioned behavior expected by the caller can the builtin code guarantee that it delivers on its contract.
I claim that a sound and reliable design would associate with each executing method two relevant bug compatibility objects -- one corresponding to the code itself, specifying the platform version to which the code was written, and one specifying the behavior expected by the caller, an implicit parameter equal to the code version of the caller. Inheritance can only be pairwise. Is the builtin code stable? Do we actually assess the effects of changes on behavior at earlier versions to determine the contract is still met? Or, as I fear, are we just playing fast and loose?
If Edwin's analysis is correct, I suppose we are only screwed for builtins, which will need to be treated by the legacy compilation policy and not subject to OSR. (See bug 539094. The issue is that OSR is itself a versionable change, and by its nature must be on or off for a method, and not switchable on a per-call basis.)
Assignee
Comment 33•14 years ago
> If I understand your and Edwin's comments, the bug compatibility for a method
^ builtin
> is inherited from the nearest non-builtin caller.
Comment 34•14 years ago
(In reply to comment #32)
Not entirely sure how this all works, but looking at CodeContext, upon which the bugcompat flags are piggybacked, it appears that it can be created in and passed in/through any swf. Meaning that swf A, calling swf B, calling the VM, will have the context of A.
Comment 35•14 years ago
(In reply to comment #34)
> (In reply to comment #32)
>
> Not entirely sure how this all works, but looking at CodeContext upon which the
> bugcompat flags are piggybacked, it appears that it can be created in and
> passed in/through any swf. Meaning that swf A, calling swf B , calling VM
> will have the context of A.
Not so. Lowest non-builtin applies, so would be SWF B in this case.
Comment 36•14 years ago
(In reply to comment #35)
> (In reply to comment #34)
> > (In reply to comment #32)
> >
> > Not entirely sure how this all works, but looking at CodeContext upon which the
> > bugcompat flags are piggybacked, it appears that it can be created in and
> > passed in/through any swf. Meaning that swf A, calling swf B , calling VM
> > will have the context of A.
>
> Not so. Lowest non-builtin applies, so would be SWF B in this case.
This works via MethodFrame being set on method entry, based on MethodEnv->AbcEnv->CodeContext, if I recall correctly.
Comment 37•14 years ago
(In reply to comment #36)
> (In reply to comment #35)
> > (In reply to comment #34)
> > > (In reply to comment #32)
> This works via MethodFrame being set on method entry, based on
> MethodEnv->AbcEnv->CodeContext, if I recall correctly.
I see it now: AvmCore::codeContext() uses MethodFrame unless CodeContext is explicitly overridden *or* we're in builtins.
The overridden case is a bit worrisome -- doesn't this mean that it could change from call to call at a given call-site?
Other than that, it looks like we're guaranteed the same bugcompat value at a given call-site (a valid assumption provided AbcEnv._scope does not change for a given MethodEnv).
Comment 38•14 years ago
(In reply to comment #23)
> I don't really care that much if we break -0 in the interpreter.
> The fact that -0 and 0 do not compare "correctly" according to
> normal ECMAscript relational and equality comparisons makes the difference
> between them almost unobservable, and the distinction when computing
> Min/Max therefore almost useless. In particular, an attempt to
> compute Min/Max inline with a simple comparison will not give the
> same result. I'm not sure what a hard-core numerical analyst would
> say about this, but I really doubt that our users care.
Not to pick on that comment in particular but it's a convenient locus at which to make a point: This is a language definition matter, not a VM implementation matter, so making a decision within the comment thread of this bug is out of the question.
That said...
The default assumption is that the AS3 language specification will specify ECMAScript semantics for max and min. As you say, few users care about whether -0 < 0 as far as max and min are concerned, but we do care about not breaking compatibility unless we really benefit from doing that. So the case for breaking compatibility really has to be made.
Survey of some mainstream languages and languages that care about numerics:
Java:
java.math.min and java.math.max have the same behavior as ECMAScript.
C#:
System.Math.Min and System.Math.Max leave it unspecified whether
-0 and 0 have a fixed relation.
C:
max and min are macros that use "<", so -0 and 0 are equivalent.
C++:
max and min are template functions that use "<" or a comparator function;
for our purposes, the "<" version leaves -0 and 0 as equivalent.
Common Lisp, Scheme:
Leave it unspecified but they also do not specify the representation
of floating-point numbers (or in Scheme's case, whether floating-point
numbers are even available.)
IEEE 754-1985:
I've mislaid my paper copy, but from what I can glean from the web, -0 and 0
always compare equal, and the behavior of Java is considered an outlier.
Corrections / augmentations are welcome.
Comment 39•14 years ago
(In reply to comment #38)
> C#:
> System.Math.Min and System.Math.Max leave it unspecified whether
> -0 and 0 have a fixed relation.
It may be "unspecified" but I'll bet you a dollar that the .NET runtime does something predictable (and thus it is de facto specified).
Comment 40•14 years ago
(In reply to comment #6)
> (In reply to comment #5)
> > Verified not an injection - issue happens in Player 9_0_r159
>
> Oh foo. Now we have to version check.
I want to challenge the assumption that we need a version check here.
(1) We don't have a version-check scheme that won't be a performance hit, nor does it seem likely that we'll be able to come up with one anytime soon, given that even a trivial check will be large relative to the inlined Math.min code.
(2) It's not clear that we have evidence that substantial content will break as a result of this; the difference between 0 and -0 make little difference in most code.
Comment 41•14 years ago
(In reply to comment #40)
> I want to challenge the assumption that we need a version check here.
More specifically, I'm proposing that we pick a behavior (per Comment 38) and force all versions to follow that, regardless of version.
Comment 42•14 years ago
Is anyone refuting my assertion that the JIT can do the bug-version check when user-code calls Math.min directly?
If yes: what's the counter-argument? Point to the bugcompat spec and explain.
If no, then only two performance problems are interesting:
1. builtin code calling Math.min(). Is there any, and further, any that would change behavior either way?
2. the cost of ordering -0 and 0
Doing anything besides just fixing this to match ECMA-262 requires going up a level and making a conscious change to the AS3 spec, as Lars outlined.
The easiest way to decouple the whole thing from OSR would be for OSR to just optimize Math.min one way or the other based on method->isStaticInit(). We haven't set a precedent for this strategy, but it's an option.
Comment 43•14 years ago
(In reply to comment #42)
> Is anyone refuting my assertion that the JIT can do the bug-version check when
> user-code calls Math.min directly?
No, it can definitely do so; however, the cost of doing so using the BugCompatibility mechanism will dwarf the cost of the actual inlined operation.
> 1. builtin code calling Math.min(). Is there any, and further, any that would
> change behavior either way?
A grep for Math.min in avmplus itself turns up no hits. In Flash/AIR glue code, ~44 hits turn up. Further investigation would be needed to determine if these are likely to be affected.
Assignee
Comment 44•14 years ago
For what it is worth, the proposed patch only pays the cost of the version check when we know that a -0 is involved. The more significant part of the cost is the check for -0 itself, which is essential to the fix, versioning aside. If both arguments are zero (+ or -), we end up doing a division.
If we retain Java/ECMAscript compatibility, it's unlikely we'll see much of a performance penalty in practice for the compatibility check on -0 (though it would be easy enough to construct a microbenchmark that would make it look bad). I'm arguing against the approach suggested in comment #22 and defending my proposed patch.
I doubt we've seen the last of this, and will find ourselves at some point faced with a more compelling case where versioning is both needed and prohibitively costly, but maybe we can dodge the bullet here.
Assignee
Comment 45•14 years ago
(In reply to comment #43)
> (In reply to comment #42)
> > Is anyone refuting my assertion that the JIT can do the bug-version check when
> > user-code calls Math.min directly?
>
> No, it can definitely do so; however, the cost of doing so using the
> BugCompatibility mechanism will dwarf the cost of the actual inlined operation.
I think Edwin was referring to the proposal in comment #31, that we could
check bugCompatibility *at JIT time* but only for *non-builtin* functions.
The possibility of override calls this into question, as Rick observed in comment #37. I'd like to hear Steven specifically address this issue.
Comment 46•14 years ago
(In reply to comment #45)
> I think Edwin was referring to the proposal in comment #31, that we could
> check bugCompatibility *at JIT time* but only for *non-builtin* functions.
Ahh --- yes, thanks, I misunderstood. Yes, I think that could totally be done safely.
> The possibility of override calls this into question, as Rick observed in
> comment #37. I'd like to hear Steven specifically address this issue.
No, actually it doesn't; even if there was a (wrong) override, non-builtin code always should be able to act as though it has its "native" version baked in.
Comment 47•14 years ago
> The overriden case is a bit worry some, doesn't this mean that it could change
> from call to call at a given call-site?
Can you elaborate? The only time the JIT even tries to do anything special with a call site to Math.min is when it's known that we're calling the real Math.min(). Math is a final class.
Assignee
Comment 48•14 years ago
(In reply to comment #46)
> > The possibility of override calls this into question, as Rick observed in
> > comment #37. I'd like to hear Steven specifically address this issue.
>
> No, actually it doesn't; even if there was a (wrong) override, non-builtin code
> always should be able to act as though it has its "native" version baked in.
This seems plausible. Then do you see any difficulty with rewriting currentBugCompatibility() in such a way that it is manifestly immune to override attempts?
Comment 49•14 years ago
(In reply to comment #47)
> > The overridden case is a bit worrisome -- doesn't this mean that it could change
> > from call to call at a given call-site?
>
I believe that at a given call site outside of builtins we should be able to optimize for a given version. For example, swf12 calling swf11 calling builtins: all methods in swf11 can be optimized to a specific version safely.
The builtins, on the other hand, cannot be, since swf11 calling the builtins expects a certain behaviour whereas swf12 calling into them expects something else.
We could get around this (if needed) by managing version-specific optimized variants of the builtins (i.e. multiple JIT'd method pointers), but I doubt we want to introduce this complexity unless absolutely necessary.
Comment 50•14 years ago
(In reply to comment #48)
> (In reply to comment #46)
>
> > > The possibility of override calls this into question, as Rick observed in
> > > comment #37. I'd like to hear Steven specifically address this issue.
> >
> > No, actually it doesn't; even if there was a (wrong) override, non-builtin code
> > always should be able to act as though it has its "native" version baked in.
>
> This seems plausible. Then do you see any difficulty with rewriting
> currentBugCompatibility()in such a way that it is manifestly immune to override
> attempts?
Actually I'd make a stronger statement and say the current behavior that allows this is a bug.
Comment 51•14 years ago
|
||
(In reply to comment #49)
> I believe that at a given call site outside of builtins we should be able to
> optimize for a given version. For example, swf12 calling swf11 calling
> builtins. All methods in swf11 can be optimized to a specific version safety.
Agreed!
> The builtins on the other hand cannot be, since swf11 calling the builtins
> expects a certain behaviour whereas swf12 calling into them expects something
> else.
Agreed!
> > This seems plausible. Then do you see any difficulty with rewriting
> > currentBugCompatibility()in such a way that it is manifestly immune to override
> > attempts?
>
> Actually I'd make a stronger statement and say the current behavior that allows
> this is a bug.
Agreed! Needs a bugzilla!
Comment 52•14 years ago
|
||
Comment 53•14 years ago
|
||
(In reply to comment #38)
> C:
> max and min are macros that use "<", so -0 and 0 are equivalent.
That's not correct: max and min are not defined in ISO C 1999, nor are they in fact in K&R (either edition). Apologies; I will blame over-long familiarity with sundry Unices and C compilers.
(Of course the canonical definition is ((a) < (b) ? (a) : (b)), so the first argument is always returned, ergo, the result isn't an arbitrary zero but the first zero.)
> C++:
> max and min are template functions that use "<" or a comparator function;
> for our purposes, the "<" version leaves -0 and 0 as equivalent.
And a note in the C++ spec says that if the arguments are equivalent then the first is returned, thus actually distinguishing between -0 and 0 without ordering them.
Comment 54•14 years ago
|
||
(In reply to comment #53)
>
> (Of course the canonical definition is ((a) < (b) ? (a) : (b)), so the first
> argument is always returned, ergo, the result isn't an arbitrary zero but the
> first zero.)
(Sigh.) Obviously if a and b are equivalent the /second/ argument is returned. The main point stands: it's predictable.
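The correction above can be checked directly. Under '<', -0.0 and 0.0 compare equal, so neither form orders the zeros; which zero comes back is fixed by argument position. A small C++ sketch:

```cpp
#include <algorithm>
#include <cmath>

// The canonical C-style macro from the discussion above.
#define MIN(a, b) ((a) < (b) ? (a) : (b))

// (a) < (b) is false for equal operands, so the macro yields its
// second argument: MIN(-0.0, 0.0) is +0.0, sign bit clear.
inline bool macro_min_yields_negative_zero() {
    return std::signbit(MIN(-0.0, 0.0));
}

// std::min returns its first argument when the two compare
// equivalent: std::min(-0.0, 0.0) is -0.0, sign bit set.
inline bool std_min_yields_negative_zero() {
    return std::signbit(std::min(-0.0, 0.0));
}
```

Both results are deterministic, as comment 54 says, but neither matches the ES3 rule that -0 orders below +0.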
Comment 55•14 years ago
|
||
Comment on attachment 494562 [details] [diff] [review]
Handle -0 correctly in _min and _max (patch for Spicy)
Doing the bug-check in the JIT when legal (user code calling Math.min) can be a follow-on optimization. It could also interact with bug 606561. Nothing jumps out as a reason to hold up this patch, though, and I buy the argument that -0 will be rare enough in practice that the bug check should not cost much.
Attachment #494562 -
Flags: superreview?(edwsmith) → superreview+
Comment 56•14 years ago
|
||
Comment on attachment 494562 [details] [diff] [review]
Handle -0 correctly in _min and _max (patch for Spicy)
r+, looks fine to me.
Attachment #494562 -
Flags: review?(rreitmai) → review+
Assignee | ||
Comment 57•14 years ago
|
||
Pushed to tr-spicy:
http://hg.mozilla.org/users/stejohns_adobe.com/tr-spicy/rev/080e6a7d7eca
Assignee | ||
Comment 58•14 years ago
|
||
Pushed to tr-spicy:
http://hg.mozilla.org/users/stejohns_adobe.com/tr-spicy/rev/e7bc0e9911cf
Assignee | ||
Comment 59•14 years ago
|
||
Assignee | ||
Comment 60•14 years ago
|
||
Comment on attachment 496708 [details] [diff] [review]
Fix botch in negative zero fix -- not always backward compatible
Hmm, shouldn't have let this slip through. Caught while performing regression-test hygiene.
Attachment #496708 -
Attachment description: Fix botch in negatize zero fix -- not always backward compatible → Fix botch in negative zero fix -- not always backward compatible
Assignee | ||
Comment 61•14 years ago
|
||
Pushed to tr-spicy:
http://hg.mozilla.org/users/stejohns_adobe.com/tr-spicy/rev/519ade712c1b
Assignee | ||
Updated•14 years ago
|
Whiteboard: fixed-in-spicy,fixed-in-spicier
Comment 62•14 years ago
|
||
changeset: 5831:46ab90e5764f
user: William Maddox <wmaddox@adobe.com>
summary: Bug 551587 - Handle -0 correctly in _min and _max (r=rreitmai,sr=edwsmith)
Regression test by cpeyer, reviewed by myself.
http://hg.mozilla.org/tamarin-redux/rev/46ab90e5764f
Comment 63•14 years ago
|
||
Comment on attachment 494831 [details] [diff] [review]
updated testcase
Chris,
this testcase does not pass differential testing comparing JIT vs Interp, since the output of each run will differ because different testcases are conditionally run. There are two options here:
1) mark this testcase to be skipped in differential testing
2) move the conditional testcases to their own file and mark that file to be skipped in differential testing
I recommend option #2 since it will ensure that most of the testcases remain covered by differential testing.
Comment 64•14 years ago
|
||
Move testcases that are conditional on shell runmode (jit vs interp) to a separate file so that they can be skipped during differential testing, which may run and compare output between -Ojit and -Dinterp
Attachment #508823 -
Flags: review?(cpeyer)
Reporter | ||
Comment 65•14 years ago
|
||
Comment on attachment 508823 [details] [diff] [review]
tweak testcase
Minor typo in testconfig: comment should be "based on run mode"
Attachment #508823 -
Flags: review?(cpeyer) → review+
Comment 66•14 years ago
|
||
changeset: 5861:3c2797dfc765
user: Brent Baker <brbaker@adobe.com>
summary: Bug 551587: Move testcases that are conditional based on shell runmode (jit vs interp) to a separate file so that it can be skipped during differential testing which could be running and comparing output between -Ojit and -Dinterp (r+cpeyer)
http://hg.mozilla.org/tamarin-redux/rev/3c2797dfc765
Assignee | ||
Comment 68•14 years ago
|
||
(In reply to comment #67)
> Can this be marked fixed?
I think so. There is the possibility of a somewhat more efficient fix: in principle, it should be possible to version user code at JIT time, but there are some unresolved questions that are unlikely to be sorted out in the short term.
Status: ASSIGNED → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED