Closed Bug 1129843 Opened 10 years ago Closed 10 years ago

Convert Mozmill update tests to Marionette

Categories

(Testing :: Firefox UI Tests, defect, P1)


Tracking

(Not tracked)

RESOLVED FIXED
mozilla39

People

(Reporter: whimboo, Assigned: whimboo)

References

Details

Attachments

(1 file)

(deleted), text/x-github-pull-request
chmanchester: review+
chmanchester: feedback+
chmanchester: feedback+
We have to convert the update tests from Mozmill to Marionette. The current tests can be found here: http://hg.mozilla.org/qa/mozmill-tests/file/default/firefox/tests/update

For now we are blocked on bug 599290, which should land first; it will supply all the new classes for handling windows, which can then be more easily converted to Python.
Depends on: 1129713
Depends on: 1123758
Depends on: 1133752
Depends on: 1133753
Priority: -- → P1
Whiteboard: [blocked]
Depends on: 1137388
Depends on: 1137396
Depends on: 1121139
Depends on: 1138946
Blocks: 813629
Depends on: 1130220
Depends on: 1141483
Depends on: 1141519
Depends on: 1141975
The current status of the work can be seen here: https://github.com/mozilla/firefox-ui-tests/pull/122

As of now I have the direct update test working, just with the same update channel and without taking any CLI options into account. I filed a couple of blocking bugs in the last two days, so keep in mind that we need those fixed before we can finalize our tests. My next steps are:

* Get the fallback update working
* Save the state of each update step and do a comparison at the end
* Take CLI options into account
* Back up the binary so direct and fallback can run right after each other
* ... let's see what else is needed
Depends on: 1141988
Fallback updates don't work at the moment because we need the update-wizard window converted via bug 1141988 first.
(In reply to Henrik Skupin (:whimboo) from comment #2)
> Fallback updates don't work at the moment because we need the update-wizard
> window converted via bug 1141988 first.

Are we planning to do this for Q1? Why was this only filed today?
Flags: needinfo?(hskupin)
Same as for the other bugs I have filed today for Marionette. I actually missed that it's needed for fallback, given that I haven't worked on those types of update tests for a while. But it's a simple thing to do, so it won't take that much time. I will do it tomorrow.
Flags: needinfo?(hskupin)
Attached file github_pull_request.txt (deleted) —
Chris, I would appreciate it if you could give me feedback on the harness changes. I did a bit of refactoring and would like to hear whether all of that makes sense.
Attachment #8576183 - Flags: feedback?(cmanchester)
Comment on attachment 8576183 [details]
github_pull_request.txt

I had some comments.
Attachment #8576183 - Flags: feedback?(cmanchester) → feedback+
So I hit a problem here. We have to do two types of updates: direct and fallback. For those we will have a single test file, which needs to be run twice. What I did for now is override the run_tests() method to prepare the necessary variables for the test, and call the base run_tests() twice: https://github.com/mozilla/firefox-ui-tests/pull/122/files#diff-b4613af246cc97e35cd821a697727ce2R104

What I see is a hang in wait_for_port, because Firefox is shut down after the first test run but not opened again. David, maybe this is the wrong way to implement the wanted behavior? Is there something else I would have to do, maybe more outside of that method? If none of that works we would have to duplicate the code and also create two different test files. Please let me know what would be better based on your experience so far.
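Reduced to the relevant part, this is roughly what the override looks like (a sketch with simplified names; the real code is in the PR linked above):

class UpdateTestRunner(BaseMarionetteTestRunner):

    def run_tests(self, tests):
        # First pass: direct update.
        self.test_vars = {'update_type': 'direct'}  # stand-in for the real vars
        BaseMarionetteTestRunner.run_tests(self, tests)

        # Second pass: fallback update. This is where the hang in
        # wait_for_port shows up - Firefox got shut down at the end of
        # the first pass, but no new instance gets launched.
        self.test_vars = {'update_type': 'fallback'}
        BaseMarionetteTestRunner.run_tests(self, tests)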
Flags: needinfo?(dburns)
:whimboo asked for my advice on IRC. If you look at: https://dxr.mozilla.org/mozilla-central/source/testing/marionette/client/marionette/runner/base.py#756

We only start Marionette (and therefore launch a Gecko instance) when self.marionette isn't set. But we clean up the instance at the end of run_tests() without unsetting self.marionette, which would explain the behavior.

A pattern I've seen is to make a trivial subclass of a test class that only overrides setUp, which has the effect of running the same test with a different configuration, which is what I think you want to achieve here. If you need to run only one or the other at certain times, you could even put these tests in different files and have separate manifests to reflect that.
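Roughly like this, as a sketch (UpdateTestCase, the update_type parameter, and the helper are hypothetical names, just to illustrate the pattern):

class TestDirectUpdate(UpdateTestCase):

    def setUp(self):
        UpdateTestCase.setUp(self, update_type='direct')

    def test_update(self):
        # The actual update steps live here and are shared by both
        # variants through inheritance.
        self.download_and_apply_update()  # hypothetical helper


class TestFallbackUpdate(TestDirectUpdate):
    # Trivial subclass: only setUp differs; the inherited test_update
    # runs unchanged with the fallback configuration.

    def setUp(self):
        UpdateTestCase.setUp(self, update_type='fallback')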
Blocks: 1138946
No longer depends on: 1138946
Blocks: 1142805
(In reply to Chris Manchester [:chmanchester] from comment #8)
> A pattern I've seen is to make a trivial subclass of a test class that only
> overrides setUp, which has the effect of running the same test with a
> different configuration, which is what I think you want to achieve here. If
> you need to run only one or the other at certain times, you could even put
> these tests in different files and have separate manifests to reflect that.

I think a combination of this and using manifests is probably the best approach here. We can always add code to make sure that start_session() starts the process if an appropriate flag is passed in.
Flags: needinfo?(dburns)
I pushed another update to the PR. We now have support for fallback updates too. The next step for me is to figure out how best to handle the test runner for those tests. I will check how to get the feedback from Chris and David implemented.
So here is what I think about the proposals... I don't think the logic for backing up/restoring the binary, modifying individual files of the binary, and special-casing which tests to run should be implemented in the subclassed testcase class. As it stands, this class is there to run a test, including its setup and teardown steps. It should not be overloaded with code like the above. Also, a subclassed testcase class should be reusable, so that a test outside of the update test runner can also use it for update-specific checks or tasks. That won't be manageable with the approach explained above.

In the following I will put together all the possibilities for how update tests can be run:

1. Direct and fallback update
   * Install the application
   * Create a duplicate of the application, run the direct updates, remove the duplicate
   * Create a duplicate of the application, run the fallback updates, remove the duplicate
   * Uninstall the application

2. Direct-only update
   * Install the application
   * Create a duplicate of the application, run the direct updates, remove the duplicate
   * Uninstall the application

3. Fallback-only update
   * Install the application
   * Create a duplicate of the application, run the fallback updates, remove the duplicate
   * Uninstall the application

For running the update tests we have several options, which can be passed in via the command line. Those need to be parsed by the runner and made available to the update testcase class, and that testcase class also has to work without those options specified. The runner has to take care of the binaries, i.e. duplicating the application and removing the duplicate again. Depending on the CLI options (see the examples above), it needs to run a specific set of tests, or the same test with different test vars. Therefore it needs the capability to execute run_tests() multiple times; see the sketch at the end of this comment.

Currently I don't know what happens with the result set of the first run, given that the second run would overwrite the test results. Maybe our runner class has to cache those results and combine them afterwards, or it is possible to send two different result sets. The latter is what we are currently doing.
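Here is a rough sketch of the runner flow I have in mind (the base class is assumed, and helper names like duplicate_application, self.binary, self.update_types, self.test_vars, and self.results are placeholders, not the final code):

import os
import shutil
import tempfile


class UpdateTestRunner(BaseMarionetteTestRunner):  # assumed base class

    def duplicate_application(self):
        # Copy the installed application so each pass starts from a
        # pristine, not-yet-updated binary.
        target = os.path.join(tempfile.mkdtemp(), 'application')
        shutil.copytree(os.path.dirname(self.binary), target)
        return target

    def run_tests(self, tests):
        results = []
        # Which passes to run comes from the parsed CLI options.
        for update_type in self.update_types:  # e.g. ['direct', 'fallback']
            app_copy = self.duplicate_application()
            try:
                self.test_vars['update_type'] = update_type
                BaseMarionetteTestRunner.run_tests(self, tests)
                # Cache the results of this pass so the next pass does
                # not overwrite them (see the open question above).
                results.append(self.results)
            finally:
                # Remove the duplicate again.
                shutil.rmtree(os.path.dirname(app_copy))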
So would it work if we removed this if statement and set self.tests to an empty array in run_tests(), so that tests are not duplicated before starting them?

https://dxr.mozilla.org/mozilla-central/source/testing/marionette/client/marionette/runner/base.py#761

If that is totally not the right approach, please let me know.
Flags: needinfo?(cmanchester)
(In reply to Henrik Skupin (:whimboo) from comment #12)
> So would it work if we removed this if statement and set self.tests to an
> empty array in run_tests(), so that tests are not duplicated before starting
> them?
>
> https://dxr.mozilla.org/mozilla-central/source/testing/marionette/client/
> marionette/runner/base.py#761
>
> If that is totally not the right approach, please let me know.

I'm reluctant to advocate that approach, because, as we've seen, the run_tests method does some bookkeeping that makes it hazardous to call multiple times; but I can see how potentially working with a new binary for each test would suggest it. If it really solves this for you, it's probably ok.

David mentioned in comment 9 the ability to start a new process at the beginning of a test when specified, which also sounds like it would solve this problem. That could probably be added if it seems necessary, but I'm not really sure of the correct approach at this point, given the time constraints for this.
Flags: needinfo?(cmanchester)
Chris, so you would still do the handling of the binary in the customized test class then? If I do that and move everything in there, I don't really see how we would be able to run both the direct and fallback tests once conditions have been added to the manifest file. This is mostly because of bug 1141200.

> [test_direct_update.py]
> direct = true
>
> [test_fallback_update.py]
> fallback = true
Whiteboard: [blocked]
Comment on attachment 8576183 [details]
github_pull_request.txt

Quick update here. I pushed another set of commits today. The only remaining piece now is the backup logic for the application. I don't expect many more changes. Chris, maybe you want to have a look over the code for a feedback round?
Attachment #8576183 - Flags: feedback?(cmanchester)
Comment on attachment 8576183 [details]
github_pull_request.txt

I gave a few pieces of feedback; I'll look at this more closely tomorrow.
Attachment #8576183 - Flags: feedback?(cmanchester) → feedback+
Blocks: 1145638
Comment on attachment 8576183 [details]
github_pull_request.txt

I pushed another commit with clean-up code which fixes nearly all TODO items in the PR. There is only one left for UpdateTestRunner, where I have to store the exceptions thrown by the individual test executions and, if possible, raise a combined one at the end. I'm not sure how to do that yet; maybe you have an idea.
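One direction I was thinking about, as a sketch (run_update_tests and the message format are made up, not what the PR does yet):

class UpdateTestRunner(BaseMarionetteTestRunner):  # assumed base class

    def run_tests(self, tests):
        failures = []
        for update_type in ('direct', 'fallback'):
            try:
                self.run_update_tests(tests, update_type)  # hypothetical
            except Exception as e:
                # Store the exception instead of aborting, so the
                # remaining passes still get a chance to run.
                failures.append('%s: %s' % (update_type, e))
        if failures:
            # Raise a single combined exception at the very end.
            raise Exception('Update tests failed:\n' + '\n'.join(failures))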
Attachment #8576183 - Flags: review?(cmanchester)
Comment on attachment 8576183 [details]
github_pull_request.txt

There are still a few things to work out. Before I review much more, can you post some instructions for getting a build and supplying the command line options that will let me run these tests locally?
Flags: needinfo?(hskupin)
Attachment #8576183 - Flags: review?(cmanchester)
Blocks: 1132701
Chris, what you want is simply a build of Firefox Nightly from the day before, which offers an update to today's build. Then you can use the `firefox-ui-update` CLI command to trigger an update. Without specifying anything, both a direct and a fallback update will be run. For specific limitations and checks see the --help option of the command.
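For example (check --help for the exact option names; --binary here is an assumption based on the usual Marionette runner convention):

firefox-ui-update --help
firefox-ui-update --binary /path/to/yesterdays/nightly/firefox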
Flags: needinfo?(hskupin)
Blocks: 604364
We talked over a few points today and I gave some more feedback. I'd like to see the commits again with updates from review, but we're close here.
Blocks: 1146847
Comment on attachment 8576183 [details]
github_pull_request.txt

Every review comment should be fixed now.
Attachment #8576183 - Flags: review?(cmanchester)
Comment on attachment 8576183 [details]
github_pull_request.txt

I had a few more comments in the PR, nothing that needs to block this. I would prefer that the changes to how prefs are handled not be included, and that we establish exactly what is needed in a follow-up, but as long as we don't regress setting e10s from the command line, this is fine.
Attachment #8576183 - Flags: review?(cmanchester) → review+
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 39
Blocks: 790538
Product: Mozilla QA → Testing