add --skip-failures to web platform tests
Categories
(Testing :: web-platform-tests, enhancement)
Tracking
(Not tracked)
People
(Reporter: bc, Assigned: bc)
References
Details
Attachments
(2 obsolete files)
Since some web-platform tests are known to perma-fail or time out, and this is recorded in the expectation manifest files, it may be advantageous to skip tests that are known to fail, to reduce wasted effort and to improve load and run times.
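For context, these expectations live in per-test metadata files under testing/web-platform/meta/ in the tree. A minimal sketch of what one looks like (the test and subtest names here are made up for illustration):

```
[known-bad-test.html]
  expected: TIMEOUT

  [subtest that never passes]
    expected: FAIL
```

A skip-failures mode would treat entries like these as "don't run" rather than "run and expect a failure".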
I did a baseline try push with `mach try fuzzy --query web-platform` and compared it to a try push with the known failures skipped.

Total taskcluster run time:

baseline | skip failures
---|---
97,469.98 | 79,196.79

19% reduction
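That figure follows from the table: (97,469.98 − 79,196.79) / 97,469.98 ≈ 0.187, i.e. just under 19% of the total run time saved.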
This certainly seems like a good move.
Assignee
Comment 1•5 years ago
Depends on D52191
Assignee
Comment 2•5 years ago
Depends on D52192
Comment 3•5 years ago
The option to do this seems fine. Running it like this by default is very concerning. It totally breaks the wpt sync, which requires that we run all tests in case upstream changes affect the result. It also means that we will miss certain kinds of regressions (e.g. tests that start to crash). But most concerning of all, it means we miss the case where tests start to pass but authors don't update the metadata (which seems likely, since the tests won't run in CI). In that case the behaviour can go FAIL->PASS->FAIL and we have no way of detecting the regression, despite nominally having test coverage.
Comment 4•5 years ago
I believe we can run everything once per day, but otherwise run only the tests that are expected to pass.
Comment 5•5 years ago
This is almost identical to bug 1572820; please consider changing scope or using data from that bug to work on this.
As for tests that run infrequently, we can bisect easily; the sheriffs are familiar with that. If we plan on that type of workflow, what concerns do we have with missing a test with invalid metadata for a day?
Comment 6•5 years ago
This is almost identical to bug 1572820; please consider changing scope or using data from that bug to work on this.
As for tests that run infrequently, we can bisect easily; the sheriffs are familiar with that. If we plan on that type of workflow, what concerns do we have with missing a test status for a day?
Assignee
Comment 7•5 years ago
(In reply to Joel Maher ( :jmaher ) (UTC-4) from comment #5)
this is almost identical to bug 1572820, please consider changing scope or using data from that bug to work on this.
(In reply to Bob Clary [:bc:] from comment #1)
Created attachment 9107231 [details]
Bug 1594796 - add --skip-failures to web platform tests to allow skipping tests which are known to fail, r=jgraham.
Depends on D52191
I'll focus this bug solely on this patch. I've updated the patch to move the relevant parts to TestLoader.
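Conceptually, the TestLoader change amounts to filtering out tests whose expected result is already a failure. A minimal sketch of the idea, assuming a hypothetical expected() accessor on each test; the names and statuses here are illustrative assumptions, not the actual wptrunner API:

```python
# Sketch only: skip tests whose metadata already expects a failure.
# The expected() accessor and status set are illustrative assumptions,
# not the real wptrunner TestLoader interface.
SKIP_STATUSES = {"FAIL", "ERROR", "TIMEOUT", "CRASH"}

def filter_expected_failures(tests, skip_failures=False):
    """Yield only the tests we expect to pass when skip_failures is set."""
    for test in tests:
        if skip_failures and test.expected() in SKIP_STATUSES:
            continue  # known-bad test: don't spend run time on it
        yield test
```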
(In reply to Bob Clary [:bc:] from comment #2)
Created attachment 9107232 [details]
Bug 1594796 - set --skip-failures for web platform mozharness configs, r=jgraham.
Depends on D52192
I'll abandon this patch in favor of the as-yet-undecided approach.
Assignee
Comment 8•5 years ago
Actually, I think I am the wrong person to work on this since there isn't agreement on what should be done and it is turning into a time sink. I'll WONTFIX it and abandon the revisions.
Updated•5 years ago
Updated•5 years ago
Comment 9•5 years ago
FTR I think that it's fine to add the features from https://phabricator.services.mozilla.com/D52192 (i.e. a command line argument to skip given statuses) to wpt independent of the other work. Being able to do things like --skip-status=PASS to just check whether you fixed failing tests seems like a useful feature (although mostly for reftests, since it doesn't interact well with subtests).
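As a hypothetical invocation, assuming the proposed flag landed under that name (it is not guaranteed to exist in any given tree; the directory path is just an example):

```
./mach wpt --skip-status=PASS testing/web-platform/tests/dom
```

This would rerun only the tests not already expected to pass, which is handy when checking whether a fix turned failures into passes.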
Assignee
Updated•5 years ago