Closed Bug 1283449 Opened 8 years ago Closed 8 years ago

Intermittent aspect-ratio-1a.xhtml | assertion count 24 is more than expected 0 assertions

Categories

(Core :: Audio/Video: Playback, defect, P3)



RESOLVED FIXED

People

(Reporter: cbook, Assigned: bc)


Details

(Keywords: intermittent-failure)

Attachments

(1 file)

I suspect this might be an Autophone problem; I always see 24 unexpected assertions in the Mw job in the first test in the suite, but only on debug on try (see bug 1276296).

It's possible these assertions might not be from the test being run but from something happening earlier when Firefox starts, as the test flagged here is also the first test in the suite.
Rank: 35
Component: Audio/Video → Audio/Video: Playback
Priority: -- → P3
(In reply to Dan Minor [:dminor] from comment #1)
> I suspect this might be an Autophone problem; I always see 24 unexpected
> assertions in the Mw job in the first test in the suite, but only on debug
> on try (see bug 1276296).
>
> It's possible these assertions might not be from the test being run but from
> something happening earlier when Firefox starts, as the test flagged here is
> also the first test in the suite.

Bob, can you take a look at this? It seems this is now also frequent.
Flags: needinfo?(bob)
I should have added that the first test in the Mw suite is a no-op and doesn't actually test anything.
This is essentially bug 1283134, which is Rov. This is Rwv. A couple of

ASSERTION: QueryInterface needed: 'query_result.get() == mRawPtr', file /builds/slave/m-cen-and-api-15-d-00000000000/build/src/obj-firefox/dist/include/nsCOMPtr.h, line 415

with more

ASSERTION: Existing entry in StartupCache.: 'false', file /builds/slave/m-cen-and-api-15-d-00000000000/build/src/startupcache/StartupCache.cpp, line 358

When getting ready for Autophone Tier 2 I ran into several tests which failed due to assertions. I'll work up an asserts-if patch for both tests and do a try run to see if the assertions move around.
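For reference, asserts-if is a reftest manifest annotation; a line of the shape Bob describes would look roughly like this (hypothetical: the condition, count range, and reference file below are illustrative, not the actual patch):

asserts-if(Android,0-24) == aspect-ratio-1a.xhtml aspect-ratio-1a-ref.xhtml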
Attached patch bug-1283449-v1.patch (deleted) — Splinter Review
dminor is right that at least part of the issue is ASSERTIONs occurring prior to the first test. It seems that ASSERTIONs occurring between tests are also a problem.

This patch partially resolves the issue for my local devices by resetting the assertion count in TestRunner.runNextTest() just prior to calling TestRunner._makeIframe(). This alone was not sufficient, since there were cases where additional ASSERTIONs would occur intermittently, depending on some race condition, before the next test started. I added a second assertion count reset to TestRunner._makeIframe(), which "solved" the particular issues I was seeing but introduced later failures in other tests, which now unexpectedly pass since they no longer count the assertions which occurred between tests. The patch includes a similar assertion count reset for the browser chrome tests, which I haven't tested.

With this assertion count reset approach, we would no longer flag assertions as errors if they appeared prior to the first test or between tests. We would, however, be able to notice when assertions appeared after one test finished and before another started, which would let us assign the assertion to the appropriate test. It appears the unexpected passes are just due to us ignoring these between-test assertions anyway.

Perhaps a better location to reset the assertion counts would be SimpleTest.expectAssertions()? Thoughts?
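Below is a minimal, self-contained sketch of the delta-counting scheme described above (the function names here are hypothetical; the real harness tracks a similar counter via SpecialPowers.assertionCount() in TestRunner.js, and the actual change is in the attached patch). The key idea: re-baseline the counter just before each test starts, so startup and between-test assertions are not charged to the next test.

let totalAssertions = 0;     // stands in for the global assertion counter
let lastAssertionCount = 0;  // the harness's per-test baseline

// Called whenever an assertion fires anywhere in the run.
function assertionFired() {
  totalAssertions++;
}

// Mirrors the reset this patch adds just before _makeIframe():
// discard whatever fired during startup or between tests.
function startTest() {
  lastAssertionCount = totalAssertions;
}

// Mirrors the existing per-test accounting when a test finishes.
function finishTest(name, expectedMax) {
  const numAsserts = totalAssertions - lastAssertionCount;
  lastAssertionCount = totalAssertions;
  if (numAsserts > expectedMax) {
    console.log(name + " | assertion count " + numAsserts +
                " is more than expected " + expectedMax + " assertions");
  }
}

// Example: 24 assertions fire during startup, before the first test.
for (let i = 0; i < 24; i++) {
  assertionFired();
}
startTest();                             // baseline reset; the 24 are ignored
finishTest("aspect-ratio-1a.xhtml", 0);  // passes: 0 assertions during the test

For comparison, individual tests declare expected assertion ranges with SimpleTest.expectAssertions(min, max) at the top of the test, which is the API Bob mentions as a possible reset point.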
Flags: needinfo?(bob)
Attachment #8779346 - Flags: feedback?(gbrown)
Attachment #8779346 - Flags: feedback?(dminor)
Attachment #8779346 - Flags: feedback?(ahalberstadt)
Comment on attachment 8779346 [details] [diff] [review]
bug-1283449-v1.patch

Review of attachment 8779346 [details] [diff] [review]:
-----------------------------------------------------------------

This approach makes good sense to me, but I'm not familiar enough with the Mochitest harness to be sure this won't cause other problems down the road.
Attachment #8779346 - Flags: feedback?(dminor) → feedback+
Comment on attachment 8779346 [details] [diff] [review]
bug-1283449-v1.patch

Review of attachment 8779346 [details] [diff] [review]:
-----------------------------------------------------------------

I like the idea of distinguishing assertions prior to the first test - "startup assertions" - from other assertions. My initial reaction was that startup assertions should result in a test job failure, specifically noting that unexpected assertions were detected on startup. That seems like a well-defined, sometimes important condition that would sometimes be actionable. But if it isn't important or actionable, then we would need another mechanism for ignoring startup assertions. So maybe it's not worth the effort of failing the job; you probably have it right.

I think between-test assertions are less important to identify. Some (most?) of those are going to be a consequence of the previous test (or one before that!). We are already accustomed to the idea that an assertion is not always strongly associated with a test, and there's always off-main-thread code running and sometimes failing. Ideally, I'd like to see between-test assertions counted against the next (perhaps previous, if possible?) test: better to fail more tests more reproducibly than to fail intermittently and wonder why the assertion only happens sometimes.

All of this highlights the limitations of annotating tests to expect assertions. We need to make an effort to track down the root cause of assertions and remove assertions that cannot be resolved.
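In terms of the sketch in the earlier comment, the variant suggested here (hypothetical; not what the attached patch does) is simply to skip the baseline reset at test start, so anything that fired since the previous test's accounting is charged to the next test:

// Variant: charge between-test assertions to the next test, making the
// failure reproducible rather than intermittent.
function startTestWithCarryover() {
  // Intentionally no reset of lastAssertionCount here; assertions that
  // fired since the previous finishTest() count against this test.
}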
Attachment #8779346 - Flags: feedback?(gbrown) → feedback+
Comment on attachment 8779346 [details] [diff] [review]
bug-1283449-v1.patch

Review of attachment 8779346 [details] [diff] [review]:
-----------------------------------------------------------------

I'm not really sure if this is acceptable or not. That is, is the mochitest harness supposed to catch all assertions, or only assertions that happen during a test? Couldn't a test change state which then causes an assertion later on in the run, meaning that we *would* want to at least flag assertions that happen after the first test?

I think this is something we would need feedback from developers on before landing. Maybe a dev.platform post? Though I suspect the response from developers will be something along the lines of "We should figure out where the assertions happen and fix them."

Either way, it looks like this is more of a mochitest harness issue than anything else.
Attachment #8779346 - Flags: feedback?(ahalberstadt)
Depends on: 1295138
This should be fixed by bug 1295138. My recent try run failed to run autophone tests. I'll look into that and resolve appropriately.