Closed Bug 833666 (tbpl-gaia-unit) Opened 12 years ago Closed 10 years ago

[Tracking Bug] Need Gaia Unit tests in buildbot automation

Categories: Release Engineering :: General
Type: defect
Priority: Not set
Severity: normal
Tracking: Not tracked
Status: RESOLVED FIXED
Reporter: jgriffin
Assignee: Unassigned
David Scravaglieri, manager of the Gaia team, would like us to get Gaia unit tests running in buildbot automation on b2g desktop builds. This is a distinct set of tests, separate from the Gaia UI tests already in progress in bug 802317. These tests utilize a Marionette JS client written by James Lal.

This bug will be used to track overall progress. Things needed, for which separate bugs will be filed as needed:

1 - Determine a target OS. We should use either linux64 or macosx64; we can't easily use linux32 since those test slaves are already overloaded. Vivien, do you have a preference between linux64 and macosx64? Eventually, do we want both OSes, and if so, can we focus on just one first? Would there be any benefit to running these tests on panda boards over desktop builds?

2 - Once we determine the target OS, begin building b2g desktop builds per-commit.

3 - Gaia unit tests also require a Gaia profile, which is not built as part of the b2g desktop build. We will need to build these profiles too, either as part of the b2g desktop build or as a separate step.

4 - Identify the dependencies for these tests and make sure they're installed on the target test slaves. Last time I looked, these required a particular version of node.js and some specific node packages. James, can you provide a list of requirements?

5 - Write a mozharness script to invoke the tests (a rough sketch of what this might look like follows this comment).

6 - Deal with any test failures. David has agreed to normal sheriffing rules for these once we get them into production; that is, test failures that can be pinned to a particular commit will be grounds for backout by the sheriffs.
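[Editorial note] Step 5 asks for a mozharness script. As a hedged illustration only, here is a minimal sketch of what such a script could look like. The class name, action names, repo URL, and runner invocation are hypothetical placeholders, not the script that eventually landed; only the BaseScript plumbing reflects the real mozharness API.

    # Minimal sketch of a mozharness script for step 5. Action names, paths,
    # and the runner invocation are hypothetical; mozharness maps the dashed
    # action names below to the underscored methods.
    from mozharness.base.script import BaseScript

    class GaiaUnitTest(BaseScript):
        def __init__(self):
            BaseScript.__init__(
                self,
                all_actions=['clone-gaia', 'install-deps', 'run-tests'],
                require_config_file=False,
            )

        def clone_gaia(self):
            # Pull the Gaia tree that carries the tests (and, ideally,
            # the Python runner as discussed in the comments below).
            self.run_command(
                ['git', 'clone', 'https://github.com/mozilla-b2g/gaia.git'],
                halt_on_failure=True)

        def install_deps(self):
            # Step 4: node.js plus whatever node packages the runner needs.
            self.run_command(['npm', 'install'], cwd='gaia',
                             halt_on_failure=True)

        def run_tests(self):
            # Steps 3 and 5: point the runner at a b2g desktop binary and a
            # separately built Gaia profile (paths are placeholders).
            self.run_command(
                ['python', 'runner.py',
                 '--binary', 'b2g/b2g-bin',
                 '--profile', 'gaia/profile'],
                halt_on_failure=True)

    if __name__ == '__main__':
        GaiaUnitTest().run()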
Gaia unit tests and Gaia integration tests are separate things. The unit tests have their own runner that runs strictly in the browser (cross-platform) and does not use Marionette directly. These tests do not run in a real application context; they are similar to what web developers in general use. The integration tests use the Marionette client described above.

We have a Python runner for the unit tests. I wrote it as a big hack, but it could be used as the basis of a runner for these: https://github.com/lightsofapollo/js-test-agent/tree/master/python/test_agent

IMO the biggest win would be to get the unit tests running first. We have a lot of activity here and a decent base of tests across major apps. The gaia-ui-tests are much further ahead and provide decent integration test coverage, whereas the JS integration tests only provide some (small) tests for email, calendar, and system.
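[Editorial note] To give a feel for the browser-based flow James describes, here is a rough sketch of a client driving a test-agent run over a websocket. The port, URL, and event names/shapes are assumptions for illustration only (the real protocol lives in the js-test-agent repo linked above); it uses the third-party websocket-client package.

    # Rough sketch of driving a test-agent run from Python. The port and the
    # JSON event names/shapes are ASSUMPTIONS; consult the js-test-agent repo
    # linked above for the actual protocol.
    import json
    from websocket import create_connection  # pip install websocket-client

    ws = create_connection('ws://localhost:8789/')  # assumed test-agent port
    # Ask the agent (running in the browser) to execute a test file;
    # the 'run tests' event name and payload shape are assumed.
    ws.send(json.dumps(['run tests',
                        {'tests': ['/test/unit/example_test.js']}]))
    try:
        while True:
            event, data = json.loads(ws.recv())
            print(event, data)              # per-test results streamed back
            if event == 'test runner end':  # assumed end-of-run event
                break
    finally:
        ws.close()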
Can we run these in the cloud? We're experimenting with 64-bit Ubuntu. On another note, can this question be answered (unless I missed it)? "Would there be any benefit to running these tests on panda boards over desktop builds?"
Thanks for the clarification, James. Could the Python runner for these be merged into the Gaia repo? It's a bit complicated using random GitHub repos in buildbot automation. I believe we could run these in the cloud on Ubuntu64 VMs, but we'd have to try them on one of the VMs and verify they work.
Hey Jonathan, that code can live wherever you like. If you remember, we had a brief discussion about this a few months back, and this runner was the result of a request for a Python runner for test-agent. It is far from perfect, but it is a good basis to build on. The node runner, which is integrated directly into our Makefile, is far more stable and better tested (we have engineers using those tools daily). Let me know how we can best help; there are a few more people across various timezones who are familiar with the unit tests.
James, thanks for the reminder. Both of the test runners have compiled module dependencies (various node modules for the node runner, and Twisted for the Python runner), and neither dumps output in a format that's compatible with TBPL. I think modifying the Python runner will be easier and will mesh a little more easily with mozharness, so I suggest we start with that and modify as needed. Armen, I don't think there's any reason why we couldn't run these on an Ubuntu64 VM. Can I get access to one to verify that I can run them there? Then we can begin working out the details of exactly how to implement all the necessary infrastructure.
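[Editorial note] "Compatible with TBPL" in practice means printing per-test result lines that TBPL's log parser can key on. A minimal sketch of the kind of output a modified runner would emit; the result-dict shape here is hypothetical, while the TEST-PASS / TEST-UNEXPECTED-FAIL line convention is what TBPL recognizes.

    # Minimal sketch of emitting TBPL-parseable result lines from the runner.
    # The result-dict shape is hypothetical.
    def emit_tbpl_line(result):
        name = result['name']  # e.g. 'calendar/test/unit/foo_test.js'
        if result['passed']:
            print('TEST-PASS | %s' % name)
        else:
            print('TEST-UNEXPECTED-FAIL | %s | %s' % (name, result['message']))

    emit_tbpl_line({'name': 'calendar/test/unit/foo_test.js', 'passed': True})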
(In reply to Jonathan Griffin (:jgriffin) from comment #5)
> Armen, I don't think there's any reason why we couldn't run these on an
> Ubuntu64 VM. Can I get access to one and verify that I can run them there?
> Then we can begin working out the details of exactly how we can implement
> all the necessary infrastructure.

rail is working on them. If you poke him at the end of the day, he will probably have it ready by then.
I've verified that the Ubuntu 64 VM's will work fine as a platform for running these tests. Will file some more specific bugs to get different pieces of this going.
Depends on: 837936
Depends on: 837938
Depends on: 837940
Depends on: 840268
Depends on: 841581
(In reply to Jonathan Griffin (:jgriffin) from comment #0)
> This will be used as a tracking bug to track overall progress.
>
> Things needed, for which separate bugs will be filed as needed:
>
> 1 - Determine a target OS. We should use either linux64 or macosx64; we
> can't easily use linux32 since those test slaves are already overloaded.
> Vivien, do you have a preference between linux64 and macosx64? Eventually,
> do we want both OS's, and if so, can we focus on just one first? Would
> there be any benefit to running these tests on panda boards over desktop
> builds?

We definitely should be using macosx64. My experience with linux64 was extremely suboptimal: there were platform-specific crashes and freezes, and I had difficulty getting people to help resolve the issues, since most of the devs are using b2g desktop on macosx and the problems didn't show up there. Macosx64 builds have been very stable and far less prone to crashing.
(In reply to Jonathan Griffin (:jgriffin) from comment #7)
> I've verified that the Ubuntu 64 VM's will work fine as a platform for
> running these tests. Will file some more specific bugs to get different
> pieces of this going.

Hm, if they work consistently, that would be great. I'm concerned about getting support if/when we run into the startup segfaulting I've seen before on my Ubuntu 12.04 machine and the AWS Ubuntu images we were using a while back in our homebrew automation.
We won't be able to scale :(
(In reply to Armen Zambrano G. [:armenzg] from comment #10)
> We won't be able to scale :(

Ah, okay. Since this will be on TBPL, hopefully people will be more easily swayed to fix platform issues in order to keep tests green.
Depends on: 855049
Depends on: 856133
Blocks: b2g-testing
No longer depends on: b2g-testing
Depends on: 860896
No longer depends on: 846384
Depends on: 861424
Depends on: 865379
Depends on: 868490
Depends on: 868552
We have our first run on cedar:

14:59:11 INFO - gaia-unit-tests INFO | Passed: 2332
14:59:11 INFO - gaia-unit-tests INFO | Failed: 12
14:59:11 INFO - gaia-unit-tests INFO | Todo: 0

It's red on TBPL due to an issue with the mozharness script, which I'll fix. I'll file a bug for the 12 failing tests for Gaia developers to fix.
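[Editorial note] The summary lines above are exactly what a harness or log watcher can key on to decide the job's color. A small sketch: the 'gaia-unit-tests INFO | Passed: N' line format is taken straight from the cedar log quoted above; everything else is illustrative.

    # Turn the summary lines quoted above into an exit status. The line
    # format comes from the cedar log; the rest is illustrative.
    import re
    import sys

    SUMMARY = re.compile(r'gaia-unit-tests INFO \| (Passed|Failed|Todo): (\d+)')

    def summarize(lines):
        counts = {}
        for line in lines:
            m = SUMMARY.search(line)
            if m:
                counts[m.group(1)] = int(m.group(2))
        return counts

    if __name__ == '__main__':
        counts = summarize(sys.stdin)
        print(counts)
        # Any failures -> non-zero exit, so the buildbot step goes red/orange
        # instead of green.
        sys.exit(1 if counts.get('Failed', 0) else 0)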
Depends on: 868643
Depends on: 868646
Depends on: 868647
Depends on: 868651
Depends on: 868652
Depends on: 868653
Depends on: 872321
Depends on: 859389
Depends on: 876263
Depends on: 876265
Depends on: 882760
Depends on: 886602
Depends on: 887356
Depends on: 887354
Depends on: 887451
Depends on: 887591
Depends on: 889055
Depends on: 889165
Depends on: 889657
Depends on: 890079
Depends on: 890083
Depends on: 891174
Depends on: 891973
Depends on: 892658
Depends on: 894072
Depends on: 894721
Depends on: 894964
Depends on: 895209
Depends on: 895210
Depends on: 895212
Depends on: 895978
Depends on: 896212
Depends on: 897558
Depends on: 898108
Depends on: 898512
Depends on: 854110
Alias: tbpl-gaia-unit
Depends on: 902045
Depends on: 902551
Depends on: 902641
Depends on: 902973
I think the tests are green enough on b2g-inbound to unhide, so I've done that; I'll unhide on other trees as well after I verify that all the changes that are required to green them have migrated to those trees.
Rehidden per: https://wiki.mozilla.org/Sheriffing/Job_Visibility_Policy#6.29_Outputs_failures_in_a_TBPL-starrable_format

Please can the remaining requirements there be met as well (e.g. documentation; does this work on trychooser, etc.)?
Ty :-)
Depends on: 903536
Product: mozilla.org → Release Engineering
Depends on: 904927
Depends on: 907669
Depends on: 907670
Depends on: 908288
Depends on: 908698
Depends on: 909968
Depends on: 911395
Depends on: 911946
Depends on: 914038
Depends on: 914040
Blocks: 917739
Depends on: 925279
Depends on: 924233
Depends on: 952302
Depends on: 929172
Depends on: 965604
Depends on: 965882
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Component: General Automation → General