Closed Bug 920191 · Opened 11 years ago · Closed 2 years ago
JS exceptions aren't being propagated to several of the test harness' onerror handlers
Categories: Testing :: General, defect
Tracking: (Not tracked)
Status: RESOLVED INACTIVE
People: (Reporter: emorley, Unassigned)
References: (Depends on 2 open bugs, Blocks 2 open bugs)
Details: (Whiteboard: [tracking])
Attachments: (2 files; both patches, deleted)
A recentish TBPL parser change has made JS exceptions during test runs more noticeable, since the regexp used for catching Python exceptions also catches many JS exceptions.
This has made it clear that we are failing to propagate JS exceptions in automation to the test harness' onerror handlers - since these runs show as passing. This is happening for at least mochitest, reftest/jsreftest, xpcshell.
There are both exceptions dumped straight to stdout, e.g. bug 910275, and those that are output via xpc::SystemErrorReporter() [1], e.g. bug 908426.
We should make uncaught exceptions fail the test run and make the output TBPL-compatible - i.e. more of the form:
TEST-UNEXPECTED-FAIL | testname | uncaught exception - FooError: bar is null at line 123
etc
[1] http://mxr.mozilla.org/mozilla-central/source/js/xpconnect/src/nsXPConnect.cpp#263
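The desired line format could be produced by a small formatter along these lines. This is only a sketch: the function name and the error fields (`name`, `message`, `lineNumber`) are illustrative, not the actual harness API.

```javascript
// Hypothetical sketch: turn an uncaught exception into a
// TBPL-compatible TEST-UNEXPECTED-FAIL line. The function name and
// the error fields are illustrative, not the real harness API.
function formatUncaughtException(testName, error) {
  const location =
    error.lineNumber !== undefined ? ` at line ${error.lineNumber}` : "";
  return (
    `TEST-UNEXPECTED-FAIL | ${testName} | ` +
    `uncaught exception - ${error.name}: ${error.message}${location}`
  );
}
```

Given a test name and an error like `FooError: bar is null` at line 123, this yields exactly the example line above.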
Reporter | Comment 1 • 11 years ago
Meant to say, I imagine we may need to initially have a way to whitelist/annotate tests as causing uncaught exceptions, otherwise we'll constantly be playing catch up and never be able to land the changes (there are still many open bugs).
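The annotation idea above might look something like this minimal sketch; the test paths and module shape are hypothetical, chosen only to illustrate the whitelist mechanism.

```javascript
// Hypothetical sketch of the proposed whitelist: tests already known
// to throw uncaught exceptions are annotated, so the stricter check
// can land without immediately turning the tree orange. The file
// names here are made up for illustration.
const KNOWN_UNCAUGHT_EXCEPTION_TESTS = new Set([
  "dom/tests/mochitest/test_example_a.html",
  "layout/base/tests/test_example_b.html",
]);

function shouldFailRun(testPath) {
  // Only unannotated tests turn an uncaught exception into an orange run.
  return !KNOWN_UNCAUGHT_EXCEPTION_TESTS.has(testPath);
}
```

Annotated tests would then be burned down over time by removing entries from the set.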
Reporter | Updated • 11 years ago
Reporter | Comment 2 • 11 years ago
Clint, I don't suppose you can find someone to drive this? Once we get a WIP patch that causes runs to fail, the sheriffs are happy to pick apart the Try runs and file the bugs we need to get the failures fixed so it can land (or at least come up with a list of tests that need annotating initially) :-)
Flags: needinfo?(ctalbert)
(In reply to Ed Morley [:edmorley UTC+1] from comment #2)
> Clint, I don't suppose you can find someone to drive this? Once we get a WIP
> patch that causes runs to fail, the sheriffs are happy to pick apart the Try
> runs and file the bugs we need to get the failures fixed so it can land (or
> at least come up with a list of tests that need annotating initially) :-)
I thought we'd discussed this on IRC and came up with a solution, so I was surprised to see this still in my queue. :-( Do we still need a driver on this? I thought we found one and Ted filed a bug, but I'm unfortunately not finding it...
Flags: needinfo?(ctalbert)
Reporter | Comment 4 • 11 years ago
(In reply to Clint Talbert ( :ctalbert ) from comment #3)
> I thought we'd discussed this on IRC and came up with a solution, so I was
> surprised to see this still in my queue. :-( Do we still need a driver on
> this? I thought we found one and Ted filed a bug, but I'm unfortunately not
> finding it...
I think that was the conversation in bug 914092 maybe, which is a slightly different issue?
Reporter | Updated • 11 years ago
Flags: needinfo?(ctalbert)
Reporter | Comment 5 • 11 years ago
Joel, don't suppose you'd know who would be best to look at this? (Given Clint's now away for a bit)
Thank you :-)
Flags: needinfo?(jmaher)
Comment 6 • 11 years ago
While I don't understand this issue fully, it appears that we are not reading the output of the console in the JavaScript MochiKit harness bits. If that is the case, I suspect we will have hundreds of errors, especially on debug.
If that is desired, the JavaScript (in-browser) portion of the harness would need to examine the console logs and search for errors. This would ideally happen on test cleanup. Possibly jhammel, wlach, ted, or dminor would be qualified candidates for this. I suspect we print the console log out to the file, so we would get errors printed twice if we turned those into log.error() calls. The right way to do this would be to have a common module that defines both the parsing of the console messages and the whitelist in the same file.
Flags: needinfo?(jmaher)
Flags: needinfo?(ctalbert)
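The common module described in comment 6 - scan the captured console output once, drop whitelisted messages, and report the rest - could be sketched like this. The patterns and function names are illustrative, not existing harness code.

```javascript
// Hypothetical sketch of the common module described above: scan the
// captured console output once, drop whitelisted messages, and return
// the rest so the harness can report each exactly once (avoiding the
// double-printing concern). Patterns and names are illustrative.
const WHITELISTED_PATTERNS = [
  /known harmless warning/,
];

function findConsoleErrors(consoleLines) {
  return consoleLines.filter(
    (line) =>
      /error|exception/i.test(line) &&
      !WHITELISTED_PATTERNS.some((re) => re.test(line))
  );
}
```

The harness could then call log.error() once per returned line instead of re-printing the whole console log.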
Comment 7 • 11 years ago
Seems like it should be the test harnesses' responsibility to catch exceptions - log parsing isn't going to be very reliable.
The SimpleTest-based harnesses are using an onerror handler (http://hg.mozilla.org/mozilla-central/annotate/cc4a3f3f899e/testing/mochitest/tests/SimpleTest/SimpleTest.js#l1148), and that misses some cases. A console observer should be able to catch them more reliably, I think.
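A minimal model of the onerror-based approach: install a hook on a window-like object and forward uncaught errors to the harness's failure reporter. This is a simulation so it can run outside a browser; it is not SimpleTest's actual code.

```javascript
// Simulated sketch of an onerror hook: forward uncaught errors to a
// failure reporter. Not SimpleTest's actual implementation; the
// "global" here stands in for a browser window object.
function installErrorHook(global, reportFailure) {
  global.onerror = function (message, file, line) {
    reportFailure(`uncaught exception - ${message} at ${file}:${line}`);
    // Returning false lets the default error handling also run.
    return false;
  };
}
```

As comment 7 notes, onerror misses some cases (e.g. errors reported only to the console), which is why a console observer is the more reliable complement.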
Reporter | Comment 8 • 11 years ago
This bug represents a not-insignificant gap in our test coverage - who is able to drive this? :-)
Severity: normal → major
Flags: needinfo?(gavin.sharp)
Reporter | Updated • 11 years ago
Flags: needinfo?(ctalbert)
Comment 9 • 11 years ago
Neil, could you look into this? I guess we'd want to start by adding more robust exception catching in the test harness (comment 7) and seeing how many new failures pop out...
Flags: needinfo?(gavin.sharp)
Flags: needinfo?(ctalbert)
Reporter | Comment 10 • 11 years ago
(In reply to :Gavin Sharp (email gavin@gavinsharp.com) from comment #9)
> Neil, could you look into this? I guess we'd want to start by adding more
> robust exception catching in the test harness (comment 7) and seeing how
> many new failures pop out...
Flags: needinfo?(enndeakin)
Comment 11 • 11 years ago
I noticed this Bugzilla entry from a discussion on the dev-platform mailing list.
(In reply to Ed Morley [:edmorley UTC+0] from comment #0)
> This has made it clear that we are failing to propagate JS exceptions in
> automation to the test harness' onerror handlers - since these runs show as
> passing. This is happening for at least mochitest, reftest/jsreftest,
> xpcshell.
>
Although I know Thunderbird is no longer supported by the Mozilla Foundation per se, and is now in community support mode, I can't help reporting that |make mozmill| of the Thunderbird test suite also does not catch exceptions very well.
I am manually counting these exceptions in my test runs (with a script, of course), and periodically report the top offenders in descending order of frequency in the test run.
So whatever comes out of this bugzilla entry, it would be great if the exceptions in |make mozmill| could be caught as well. (But obviously |make mozmill| uses the mozmill test harness, so it may need a different patch.)
It surely does not make sense to let a test pass when an exception is raised, not caught, and thrown all the way to the top level during its execution.
Until now I assumed this was accepted behaviour, since nobody seemed to be complaining about it but me (?), but obviously this was an oversight in the test harness. It finally makes sense to me.
TIA
Reporter | Comment 12 • 11 years ago
Neil, any news on this? :-)
Comment 13 • 11 years ago
Any progress on this matter?
Comment 14 • 11 years ago
If I did this for talos, we would fail almost all of our tests. I have an example of a console listener in talos:
http://hg.mozilla.org/build/talos/file/2bcf422011d1/talos/startup_test/startup_test.html#l24
This just dumps the data out; should I start with ts, and turn errors in the console output into failures?
Reporter | Comment 15 • 11 years ago
(In reply to Joel Maher (:jmaher) from comment #14)
> If I did this for talos, we would fail almost all of our tests. I have an
> example of a console listener in talos:
> http://hg.mozilla.org/build/talos/file/2bcf422011d1/talos/startup_test/
> startup_test.html#l24
Yeah, I fully expect we'll have orange runs on most suites (including non-Talos) until we fix a few things.
> This just dumps the data out, should I start with ts and if there are errors
> in the console output turn that into failure?
If you could, that would be great :-)
Reporter | Comment 17 • 11 years ago
(In reply to :Gavin Sharp (email gavin@gavinsharp.com) from comment #9)
> Neil, could you look into this? I guess we'd want to start by adding more
> robust exception catching in the test harness (comment 7) and seeing how
> many new failures pop out...
(In reply to Ed Morley [:edmorley UTC+0] from comment #12)
> Neil, any news on this? :-)
Hi Neil - no news since October, please could you give us an ETA? If you don't have time for this please let us know and I'll ask Gavin to find someone else to take a look.
Cheers!
Comment 18 • 11 years ago
I'll throw this in the desktop backlog and see what we come up with.
Blocks: fxdesktopbacklog
Flags: needinfo?(enndeakin)
Updated • 11 years ago
Whiteboard: [tracking]
Updated • 11 years ago
Whiteboard: [tracking] → [triage] [tracking]
Comment 19 • 11 years ago
This is a work in progress. A thousand or so failures occur when running tests.
https://tbpl.mozilla.org/?tree=Try&rev=7b04eca9b21a
Assignee: nobody → enndeakin
Status: NEW → ASSIGNED
Updated • 11 years ago
Updated • 11 years ago
No longer blocks: fxdesktopbacklog
Whiteboard: [triage] [tracking] → [tracking]
Reporter | Comment 21 • 11 years ago
Neil, I don't suppose you've had a chance to look at this any more? :-)
Flags: needinfo?(enndeakin)
Comment 22 • 11 years ago
I did investigate, but there are so many different kinds of failures revealed by this patch that fixing them would need more understanding of the tests in question.
I did fix a pile of warnings with bug 475981, though. There's also a bunch of other invalid-charset warnings that could probably be disabled or fixed in bulk.
Flags: needinfo?(enndeakin)
Comment 23 • 11 years ago
Attached are a few fixes I had started on for some tests, in case someone else wants to take a look. I don't recall the details of the issues, though.
Comment 24 • 10 years ago
Ed, in the case of b2g/gaia, would this bug involve trawling the gecko.log or logcat for any errors that occurred even in the case that the test itself passed?
Flags: needinfo?(emorley)
Reporter | Comment 25 • 10 years ago
(In reply to Zac C (:zac) from comment #24)
> Ed, in the case of b2g/gaia, would this bug involve trawling the gecko.log
> or logcat for any errors that occurred even in the case that the test itself
> passed?
I would hope the harness would catch it (along the lines of the first patch attached here) :-)
Flags: needinfo?(emorley)
Reporter | Updated • 10 years ago
Updated • 9 years ago
Assignee: enndeakin → nobody
Status: ASSIGNED → NEW
Updated • 2 years ago
Severity: major → S2
Updated • 2 years ago
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INACTIVE