Bug 473688 (Closed) — TM: provide a 2-level hash map (global, pc) -> tree
Opened 16 years ago · Closed 16 years ago
Categories: Core :: JavaScript Engine, defect, P3
Status: RESOLVED FIXED
Target Milestone: mozilla1.9.1b3
People: Reporter: gal; Assignee: graydon
Keywords: fixed1.9.1
Attachments: 1 file, 1 obsolete file
patch (deleted) — gal: review+
No description provided.
Comment 1 • 16 years ago (Reporter)
This blocks a blocker.
Assignee: general → gal
No longer blocks: 468988
Priority: -- → P3
Target Milestone: --- → mozilla1.9.1b3
Updated • 16 years ago (Assignee)
Assignee: gal → graydon
Comment 2 • 16 years ago (Assignee)
Here's a sketch of the code we discussed. Let me know if this is what you had in mind. It currently crashes, but I doubt it's a terribly deep bug. Gotta head home for the night.
Comment 3 • 16 years ago (Reporter)
Comment on attachment 358326
Sketch of solution
>diff -r 5728b9d90bcd js/src/jscntxt.h
>--- a/js/src/jscntxt.h Thu Jan 22 18:15:57 2009 -0800
>+++ b/js/src/jscntxt.h Thu Jan 22 19:52:30 2009 -0800
>@@ -108,6 +108,15 @@
> # define CLS(T) void*
> #endif
>
>+#define FRAGMENT_TABLE_SIZE 4096
>+struct JSFragmentBucket
>+{
>+ struct JSFragmentBucket* next;
>+ const void* ip;
>+ uint32 globalShape;
>+ CLS(nanojit::Fragment) fragment;
>+};
We currently have Fragment and TreeInfo. I was thinking we change that to a subclass VMFragment that derives from Fragment and carries the next pointer and globalShape, allocate the VMFragment in the lirbuf, and make the hash buckets hold VMFragment* instead of JSFragmentBucket*. What do you think? (Rough sketch of what I mean below.)
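Something along these lines, maybe (just a sketch from memory -- the constructor signature and field names here are guesses, not what would land in the patch):

    struct VMFragment : public nanojit::Fragment
    {
        VMFragment(const void* _ip, uint32 _globalShape)
          : Fragment(_ip), next(NULL), globalShape(_globalShape)
        {}
        VMFragment* next;        /* hash-chain link within a bucket */
        uint32 globalShape;      /* global shape this tree was recorded against */
    };

The trace monitor's table would then hold the chains directly, something like VMFragment* vmfragments[FRAGMENT_TABLE_SIZE], and lookup walks the chain comparing both ip and globalShape.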
>+
> /*
> * Trace monitor. Every JSThread (if JS_THREADSAFE) or JSRuntime (if not
> * JS_THREADSAFE) has an associated trace monitor that keeps track of loop
>@@ -129,6 +138,8 @@
> jsval *reservedDoublePool;
> jsval *reservedDoublePoolPtr;
>
>+ struct JSFragmentBucket* fragmentBuckets[FRAGMENT_TABLE_SIZE];
>+
> /*
> * reservedObjects is a linked list (via fslots[0]) of preallocated JSObjects.
> * The JIT uses this to ensure that leaving a trace tree can't fail.
>diff -r 5728b9d90bcd js/src/jstracer.cpp
>--- a/js/src/jstracer.cpp Thu Jan 22 18:15:57 2009 -0800
>+++ b/js/src/jstracer.cpp Thu Jan 22 19:52:30 2009 -0800
>@@ -223,7 +223,7 @@
>
> #ifdef JS_JIT_SPEW
> void
>-js_DumpPeerStability(Fragmento* frago, const void* ip);
>+js_DumpPeerStability(JSTraceMonitor* tm, const void* ip);
> #endif
>
> /* We really need a better way to configure the JIT. Shaver, where is my fancy JIT object? */
>@@ -243,7 +243,7 @@
> /* Blacklists the root peer fragment at a fragment's PC. This is so blacklisting stays at the
> top of the peer list and not scattered around. */
> void
>-js_BlacklistPC(Fragmento* frago, Fragment* frag);
>+js_BlacklistPC(JSTraceMonitor* tm, Fragment* frag);
>
> Tracker::Tracker()
> {
>@@ -365,43 +365,46 @@
> */
>
> #define ORACLE_MASK (ORACLE_SIZE - 1)
>+#define FRAGMENT_TABLE_MASK (FRAGMENT_TABLE_SIZE - 1)
>+#define HASH_SEED 5381
>
> static inline void
>-hash_accum(uintptr_t& h, uintptr_t i)
>-{
>- h = ((h << 5) + h + (ORACLE_MASK & i)) & ORACLE_MASK;
>+hash_accum(uintptr_t& h, uintptr_t i, uintptr_t mask)
>+{
>+ h = ((h << 5) + h + (mask & i)) & mask;
> }
>
> JS_REQUIRES_STACK static inline int
> stackSlotHash(JSContext* cx, unsigned slot)
> {
>- uintptr_t h = 5381;
>- hash_accum(h, uintptr_t(cx->fp->script));
>- hash_accum(h, uintptr_t(cx->fp->regs->pc));
>- hash_accum(h, uintptr_t(slot));
>+ uintptr_t h = HASH_SEED;
>+ hash_accum(h, uintptr_t(cx->fp->script), ORACLE_MASK);
>+ hash_accum(h, uintptr_t(cx->fp->regs->pc), ORACLE_MASK);
>+ hash_accum(h, uintptr_t(slot), ORACLE_MASK);
> return int(h);
> }
>
> JS_REQUIRES_STACK static inline int
> globalSlotHash(JSContext* cx, unsigned slot)
> {
>- uintptr_t h = 5381;
>+ uintptr_t h = HASH_SEED;
> JSStackFrame* fp = cx->fp;
>
> while (fp->down)
> fp = fp->down;
>
>- hash_accum(h, uintptr_t(fp->script));
>- hash_accum(h, uintptr_t(OBJ_SHAPE(JS_GetGlobalForObject(cx, fp->scopeChain))));
>- hash_accum(h, uintptr_t(slot));
>+ hash_accum(h, uintptr_t(fp->script), ORACLE_MASK);
>+ hash_accum(h, uintptr_t(OBJ_SHAPE(JS_GetGlobalForObject(cx, fp->scopeChain))),
shaver suggested handing in globalObj from the caller instead of recalculating it here (rough sketch of that after this function).
>+ ORACLE_MASK);
>+ hash_accum(h, uintptr_t(slot), ORACLE_MASK);
> return int(h);
> }
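Roughly like this, I'd guess (sketch of shaver's suggestion; the exact signature is made up):

    JS_REQUIRES_STACK static inline int
    globalSlotHash(JSContext* cx, JSObject* globalObj, unsigned slot)
    {
        uintptr_t h = HASH_SEED;
        JSStackFrame* fp = cx->fp;

        while (fp->down)
            fp = fp->down;

        hash_accum(h, uintptr_t(fp->script), ORACLE_MASK);
        hash_accum(h, uintptr_t(OBJ_SHAPE(globalObj)), ORACLE_MASK);
        hash_accum(h, uintptr_t(slot), ORACLE_MASK);
        return int(h);
    }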
>
> static inline size_t
> hitHash(const void* ip)
> {
>- uintptr_t h = 5381;
>- hash_accum(h, uintptr_t(ip));
>+ uintptr_t h = HASH_SEED;
>+ hash_accum(h, uintptr_t(ip), ORACLE_MASK);
> return size_t(h);
> }
>
>@@ -503,6 +506,77 @@
> _globalDontDemote.reset();
> }
>
>+static inline size_t
>+fragmentHash(const void *ip, uint32 globalShape)
>+{
>+ uintptr_t h = HASH_SEED;
>+ hash_accum(h, uintptr_t(ip), FRAGMENT_TABLE_MASK);
>+ hash_accum(h, uintptr_t(globalShape), FRAGMENT_TABLE_MASK);
>+ return size_t(h);
>+}
>+
>+static JSFragmentBucket*
>+getFragmentBucket(JSTraceMonitor* tm, const void *ip, uint32 globalShape)
>+{
>+ size_t h = fragmentHash(ip, globalShape);
>+ JSFragmentBucket* bucket = tm->fragmentBuckets[h];
>+ while (bucket &&
>+ ! (bucket->globalShape == globalShape &&
>+ bucket->ip == ip)) {
>+ bucket = bucket->next;
>+ }
>+ if (!bucket) {
>+ LirBufWriter writer(tm->lirbuf);
>+ bucket = (JSFragmentBucket*) writer.skip(sizeof(JSFragmentBucket));
check for OOM?
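Maybe something like this (sketch; whether the lirbuf signals OOM through a flag like outOMem() or by skip() returning NULL is a guess on my part):

    if (!bucket || tm->lirbuf->outOMem())
        return NULL;   /* callers would then have to tolerate a NULL bucket */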
>+ JS_ASSERT(bucket);
>+ bucket->ip = ip;
>+ bucket->globalShape = globalShape;
>+ bucket->fragment = NULL;
>+ bucket->next = tm->fragmentBuckets[h];
>+ tm->fragmentBuckets[h] = bucket;
>+ }
>+ JS_ASSERT(bucket);
>+ return bucket;
>+}
>+
>+// FIXME: remove the default parameters for globalShape when we're
>+// actually keying by it.
>+
>+static nanojit::Fragment*
>+getLoop(JSTraceMonitor* tm, const void *ip, uint32 globalShape = 0)
>+{
>+ return getFragmentBucket(tm, ip, globalShape)->fragment;
>+}
I like the idea. We can get the patch in like this and then in a separate patch enable the ip/shape lookup.
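The follow-up patch would presumably just have callers pass the real shape, e.g. (untested sketch):

    uint32 globalShape = OBJ_SHAPE(JS_GetGlobalForObject(cx, cx->fp->scopeChain));
    Fragment* f = getLoop(tm, cx->fp->regs->pc, globalShape);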
>+
>+static nanojit::Fragment*
>+getAnchor(JSTraceMonitor* tm, const void *ip, uint32 globalShape = 0)
>+{
>+ LirBufWriter writer(tm->lirbuf);
>+ char *fragmem = (char*) writer.skip(sizeof(Fragment));
Same here. This might fail. Maybe we should stop using the constructor? new (fragmem) looks disgusting.
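Maybe just bail before constructing, something like (sketch; assumes a NULL return is how skip() fails, which might not match the real lirbuf behaviour):

    char *fragmem = (char*) writer.skip(sizeof(Fragment));
    if (!fragmem)
        return NULL;   /* don't placement-new into a failed allocation */
    Fragment *f = new (fragmem) Fragment(ip);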
>+ Fragment *f = new (fragmem) Fragment(ip);
>+ JS_ASSERT(f);
>+
>+ JSFragmentBucket* bucket = getFragmentBucket(tm, ip, globalShape);
>+ Fragment *p = bucket->fragment;
>+
>+ if (p) {
>+ f->first = p;
>+ /* append at the end of the peer list */
>+ Fragment* next;
>+ while ((next = p->peer) != NULL)
>+ p = next;
>+ p->peer = f;
>+ } else {
>+ /* this is the first fragment */
>+ f->first = f;
>+ bucket->fragment = f;
>+ }
>+ f->anchor = f;
>+ f->root = f;
>+ f->kind = LoopTrace;
>+ return f;
>+}
>+
>
> #if defined(NJ_SOFTFLOAT)
> JS_DEFINE_CALLINFO_1(static, DOUBLE, i2f, INT32, 1, 1)
>@@ -2381,11 +2455,12 @@
>
> /* Compile the current fragment. */
> JS_REQUIRES_STACK void
>-TraceRecorder::compile(Fragmento* fragmento)
>-{
>+TraceRecorder::compile(JSTraceMonitor* tm)
>+{
>+ Fragmento* fragmento = tm->fragmento;
> if (treeInfo->maxNativeStackSlots >= MAX_NATIVE_STACK_SLOTS) {
> debug_only_v(printf("Trace rejected: excessive stack use.\n"));
>- js_BlacklistPC(fragmento, fragment);
>+ js_BlacklistPC(tm, fragment);
> return;
> }
> ++treeInfo->branchCount;
>@@ -2397,7 +2472,7 @@
> if (fragmento->assm()->error() == nanojit::OutOMem)
> return;
> if (fragmento->assm()->error() != nanojit::None) {
>- js_BlacklistPC(fragmento, fragment);
>+ js_BlacklistPC(tm, fragment);
> return;
> }
> if (anchor)
>@@ -2440,27 +2515,28 @@
>
> /* Complete and compile a trace and link it to the existing tree if appropriate. */
> JS_REQUIRES_STACK bool
>-TraceRecorder::closeLoop(Fragmento* fragmento, bool& demote)
>+TraceRecorder::closeLoop(JSTraceMonitor* tm, bool& demote)
> {
> bool stable;
> LIns* exitIns;
> Fragment* peer;
> VMSideExit* exit;
> Fragment* peer_root;
>+ Fragmento* fragmento = tm->fragmento;
>
> exitIns = snapshot(UNSTABLE_LOOP_EXIT);
> exit = (VMSideExit*)((GuardRecord*)exitIns->payload())->exit;
>
> if (callDepth != 0) {
> debug_only_v(printf("Stack depth mismatch, possible recursion\n");)
>- js_BlacklistPC(fragmento, fragment);
>+ js_BlacklistPC(tm, fragment);
> trashSelf = true;
> return false;
> }
>
> JS_ASSERT(exit->numStackSlots == treeInfo->stackSlots);
>
>- peer_root = fragmento->getLoop(fragment->root->ip);
>+ peer_root = getLoop(traceMonitor, fragment->root->ip);
> JS_ASSERT(peer_root != NULL);
> stable = deduceTypeStability(peer_root, &peer, demote);
>
>@@ -2514,11 +2590,11 @@
> ((TreeInfo*)peer->vmprivate)->dependentTrees.addUnique(fragment->root);
> }
>
>- compile(fragmento);
>+ compile(tm);
> } else {
> exit->target = fragment->root;
> fragment->lastIns = lir->insGuard(LIR_loop, lir->insImm(1), exitIns);
>- compile(fragmento);
>+ compile(tm);
> }
>
> if (fragmento->assm()->error() != nanojit::None)
>@@ -2610,29 +2686,29 @@
> }
> }
>
>- debug_only_v(js_DumpPeerStability(fragmento, peer_root->ip);)
>+ debug_only_v(js_DumpPeerStability(traceMonitor, peer_root->ip);)
> }
>
> /* Emit an always-exit guard and compile the tree (used for break statements. */
> JS_REQUIRES_STACK void
>-TraceRecorder::endLoop(Fragmento* fragmento)
>+TraceRecorder::endLoop(JSTraceMonitor* tm)
> {
> LIns* exitIns = snapshot(LOOP_EXIT);
>
> if (callDepth != 0) {
> debug_only_v(printf("Stack depth mismatch, possible recursion\n");)
>- js_BlacklistPC(fragmento, fragment);
>+ js_BlacklistPC(tm, fragment);
> trashSelf = true;
> return;
> }
>
> fragment->lastIns = lir->insGuard(LIR_x, lir->insImm(1), exitIns);
>- compile(fragmento);
>-
>- if (fragmento->assm()->error() != nanojit::None)
>- return;
>-
>- joinEdgesToEntry(fragmento, fragmento->getLoop(fragment->root->ip));
>+ compile(tm);
>+
>+ if (tm->fragmento->assm()->error() != nanojit::None)
>+ return;
>+
>+ joinEdgesToEntry(tm->fragmento, getLoop(tm, fragment->root->ip));
>
> debug_only_v(printf("recording completed at %s:%u@%u via endLoop\n",
> cx->fp->script->filename,
>@@ -3148,7 +3224,7 @@
> while (f->code() && f->peer)
> f = f->peer;
> if (f->code())
>- f = JS_TRACE_MONITOR(cx).fragmento->getAnchor(f->root->ip);
>+ f = getAnchor(&JS_TRACE_MONITOR(cx), f->root->ip);
>
> f->recordAttempts++;
> f->root = f;
>@@ -3174,7 +3250,7 @@
> since we are trying to stabilize something without properly connecting peer edges. */
> #ifdef DEBUG
> TreeInfo* ti_other;
>- for (Fragment* peer = tm->fragmento->getLoop(f->root->ip); peer != NULL; peer = peer->peer) {
>+ for (Fragment* peer = getLoop(tm, f->root->ip); peer != NULL; peer = peer->peer) {
> if (!peer->code() || peer == f)
> continue;
> ti_other = (TreeInfo*)peer->vmprivate;
>@@ -3348,7 +3424,7 @@
>
> bool demote;
> Fragment* f = r->getFragment();
>- r->closeLoop(fragmento, demote);
>+ r->closeLoop(tm, demote);
> js_DeleteRecorder(cx);
>
> /*
>@@ -3369,7 +3445,7 @@
> return false; /* we stay away from shared global objects */
> }
> #endif
>- Fragmento* fragmento = JS_TRACE_MONITOR(cx).fragmento;
>+ JSTraceMonitor* tm = &JS_TRACE_MONITOR(cx);
> /* Process deep abort requests. */
> if (r->wasDeepAborted()) {
> js_AbortRecording(cx, "deep abort requested");
>@@ -3379,10 +3455,9 @@
> if (r->isLoopHeader(cx))
> return js_CloseLoop(cx);
> /* does this branch go to an inner loop? */
>- Fragment* f = fragmento->getLoop(cx->fp->regs->pc);
>+ Fragment* f = getLoop(&JS_TRACE_MONITOR(cx), cx->fp->regs->pc);
> Fragment* peer_root = f;
> if (nesting_enabled && f) {
>- JSTraceMonitor* tm = &JS_TRACE_MONITOR(cx);
>
> /* Make sure inner tree call will not run into an out-of-memory condition. */
> if (tm->reservedDoublePoolPtr < (tm->reservedDoublePool + MAX_NATIVE_STACK_SLOTS) &&
>@@ -3416,7 +3491,7 @@
> AUDIT(noCompatInnerTrees);
> debug_only_v(printf("No compatible inner tree (%p).\n", f);)
>
>- Fragment* old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>+ Fragment* old = getLoop(tm, tm->recorder->getFragment()->root->ip);
> if (old == NULL)
> old = tm->recorder->getFragment();
> js_AbortRecording(cx, "No compatible inner tree");
>@@ -3424,7 +3499,7 @@
> return false;
> if (old->recordAttempts < MAX_MISMATCH)
> oracle.resetHits(old->ip);
>- f = empty ? empty : tm->fragmento->getAnchor(cx->fp->regs->pc);
>+ f = empty ? empty : getAnchor(tm, cx->fp->regs->pc);
> return js_RecordTree(cx, tm, f, old);
> }
>
>@@ -3448,13 +3523,13 @@
> return true;
> case UNSTABLE_LOOP_EXIT:
> /* abort recording so the inner loop can become type stable. */
>- old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>+ old = getLoop(tm, tm->recorder->getFragment()->root->ip);
> js_AbortRecording(cx, "Inner tree is trying to stabilize, abort outer recording");
> oracle.resetHits(old->ip);
> return js_AttemptToStabilizeTree(cx, lr, old);
> case BRANCH_EXIT:
> /* abort recording the outer tree, extend the inner tree */
>- old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>+ old = getLoop(tm, tm->recorder->getFragment()->root->ip);
> js_AbortRecording(cx, "Inner tree is trying to grow, abort outer recording");
> oracle.resetHits(old->ip);
> return js_AttemptToExtendTree(cx, lr, NULL, old);
>@@ -3943,11 +4018,9 @@
> return false;
> }
>
>- Fragmento* fragmento = tm->fragmento;
>- Fragment* f;
>- f = fragmento->getLoop(pc);
>+ Fragment* f = getLoop(tm, pc);
> if (!f)
>- f = fragmento->getAnchor(pc);
>+ f = getAnchor(tm, pc);
>
> /* If we have no code in the anchor and no peers, we definitively won't be able to
> activate any trees so, start compiling. */
>@@ -4033,7 +4106,7 @@
> jssrcnote* sn = js_GetSrcNote(cx->fp->script, pc);
> if (sn && SN_TYPE(sn) == SRC_BREAK) {
> AUDIT(breakLoopExits);
>- tr->endLoop(JS_TRACE_MONITOR(cx).fragmento);
>+ tr->endLoop(&JS_TRACE_MONITOR(cx));
> js_DeleteRecorder(cx);
> return JSMRS_STOP; /* done recording */
> }
>@@ -4042,7 +4115,7 @@
> /* An explicit return from callDepth 0 should end the loop, not abort it. */
> if (*pc == JSOP_RETURN && tr->callDepth == 0) {
> AUDIT(returnLoopExits);
>- tr->endLoop(JS_TRACE_MONITOR(cx).fragmento);
>+ tr->endLoop(&JS_TRACE_MONITOR(cx));
> js_DeleteRecorder(cx);
> return JSMRS_STOP; /* done recording */
> }
>@@ -4100,10 +4173,10 @@
>
> /* If used on a loop trace, blacklists the root peer instead of the given fragment. */
> void
>-js_BlacklistPC(Fragmento* frago, Fragment* frag)
>+js_BlacklistPC(JSTraceMonitor* tm, Fragment* frag)
> {
> if (frag->kind == LoopTrace)
>- frag = frago->getLoop(frag->ip);
>+ frag = getLoop(tm, frag->ip);
> oracle.blacklist(frag->ip);
> }
>
>@@ -4128,11 +4201,11 @@
> return;
> }
> JS_ASSERT(!f->vmprivate);
>- js_BlacklistPC(tm->fragmento, f);
>+ js_BlacklistPC(tm, f);
> Fragment* outer = tm->recorder->getOuterToBlacklist();
> /* Give outer two chances to stabilize before we start blacklisting. */
> if (outer != NULL && outer->recordAttempts >= 2)
>- js_BlacklistPC(tm->fragmento, outer);
>+ js_BlacklistPC(tm, outer);
> js_DeleteRecorder(cx);
> /* If this is the primary trace and we didn't succeed compiling, trash the TreeInfo object. */
> if (!f->code() && (f->root == f))
>@@ -4200,6 +4273,7 @@
> #endif
> tm->globalSlots = new (&gc) SlotList();
> tm->reservedDoublePoolPtr = tm->reservedDoublePool = new jsval[MAX_NATIVE_STACK_SLOTS];
>+ memset(tm->fragmentBuckets, 0, sizeof(tm->fragmentBuckets));
> }
> if (!tm->reFragmento) {
> Fragmento* fragmento = new (&gc) Fragmento(core, 20);
>@@ -4307,6 +4381,7 @@
> fragmento->labels = new (&gc) LabelMap(core, NULL);
> #endif
> tm->lirbuf->rewind();
>+ memset(tm->fragmentBuckets, 0, sizeof(tm->fragmentBuckets));
> }
> if (cx->fp) {
> tm->globalShape = OBJ_SHAPE(JS_GetGlobalForObject(cx, cx->fp->scopeChain));
>@@ -8774,14 +8849,14 @@
> #ifdef JS_JIT_SPEW
> /* Prints information about entry typemaps and unstable exits for all peers at a PC */
> void
>-js_DumpPeerStability(Fragmento* frago, const void* ip)
>+js_DumpPeerStability(JSTraceMonitor* tm, const void* ip)
> {
> Fragment* f;
> TreeInfo* ti;
> bool looped = false;
> unsigned length = 0;
>
>- for (f = frago->getLoop(ip); f != NULL; f = f->peer) {
>+ for (f = getLoop(tm, ip); f != NULL; f = f->peer) {
> if (!f->vmprivate)
> continue;
> printf("fragment %p:\nENTRY: ", f);
>diff -r 5728b9d90bcd js/src/jstracer.h
>--- a/js/src/jstracer.h Thu Jan 22 18:15:57 2009 -0800
>+++ b/js/src/jstracer.h Thu Jan 22 19:52:30 2009 -0800
>@@ -496,9 +496,9 @@
> JS_REQUIRES_STACK nanojit::LIns* snapshot(ExitType exitType);
> nanojit::Fragment* getFragment() const { return fragment; }
> JS_REQUIRES_STACK bool isLoopHeader(JSContext* cx) const;
>- JS_REQUIRES_STACK void compile(nanojit::Fragmento* fragmento);
>- JS_REQUIRES_STACK bool closeLoop(nanojit::Fragmento* fragmento, bool& demote);
>- JS_REQUIRES_STACK void endLoop(nanojit::Fragmento* fragmento);
>+ JS_REQUIRES_STACK void compile(JSTraceMonitor* tm);
>+ JS_REQUIRES_STACK bool closeLoop(JSTraceMonitor* tm, bool& demote);
>+ JS_REQUIRES_STACK void endLoop(JSTraceMonitor* tm);
> JS_REQUIRES_STACK void joinEdgesToEntry(nanojit::Fragmento* fragmento,
> nanojit::Fragment* peer_root);
> void blacklist() { fragment->blacklist(); }
>diff -r 5728b9d90bcd js/src/nanojit/avmplus.h
>--- a/js/src/nanojit/avmplus.h Thu Jan 22 18:15:57 2009 -0800
>+++ b/js/src/nanojit/avmplus.h Thu Jan 22 19:52:30 2009 -0800
>@@ -157,6 +157,14 @@
> {
> return calloc(1, size);
> }
>+
>+ inline void*
>+ operator new(size_t size, char* c)
>+ {
>+ // We use placement-new in LIR buffers sometimes.
>+ memset(c, 0, size);
>+ return c;
>+ }
Ugly like sin.
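If we want to keep this out of avmplus.h entirely, the call site could maybe use the standard placement form and do the zero-fill itself (sketch only; assumes nothing else relies on this operator):

    #include <new>   /* standard placement new */

    char* mem = (char*) writer.skip(sizeof(Fragment));
    memset(mem, 0, sizeof(Fragment));
    Fragment* f = ::new (mem) Fragment(ip);   /* ::new bypasses the class-scope operator new */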
>
> static void operator delete (void *gcObject)
> {
Comment 4 • 16 years ago (Assignee)
This variant incorporates the suggestions for OOM handling and for using a VMFragment subclass (removing the separate JSFragmentBucket type), and shrinks the hash table from 4096 buckets to 512. It ignores (for now) shaver's suggestion of calculating the global for a scope only once.
With these changes we seem to be ahead of trunk by a small margin: 1590ms -> 1580ms average on a Mac doing SunSpider, and 25s -> 22s on a Linux machine doing trace-test.
I think I'm getting happy with the patch. Any further suggestions?
Attachment #358326 - Attachment is obsolete: true
Attachment #358964 - Flags: review?(gal)
Comment 5 • 16 years ago (Reporter)
Looks good. I still hate the placement new. We can do the regexp JIT's fragment lookup in a separate patch.
Updated • 16 years ago (Reporter)
Attachment #358964 - Flags: review?(gal) → review+
Comment 6 • 16 years ago (Reporter)
This is also pretty high risk, so it needs a clean tree to go in.
Keywords: checkin-needed
Comment 7 • 16 years ago (Assignee)
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Comment 8 • 16 years ago
Flags: wanted1.9.1+
Keywords: checkin-needed
Comment 9 • 16 years ago
Keywords: fixed1.9.1
Updated • 16 years ago
Flags: in-testsuite+
Flags: in-litmus-
Updated • 16 years ago
Flags: in-testsuite+ → in-testsuite-