Closed Bug 794292 Opened 12 years ago Closed 12 years ago

configuration needs tests

Categories

(Testing :: Talos, defect)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: k0scist, Unassigned)

References

Details

(Whiteboard: [good first bug][mentor=jhammel][lang=py])

Attachments

(3 files, 7 obsolete files)

There is a huge amount of code in PerfConfigurator.py, and the configuration system is rather elaborate. However, we have no tests for it. We should compile a list of things that should be tested and test them. Note that the upstream configuration.py does have tests, though we don't (and IMHO shouldn't) import them into talos: http://k0s.org/mozilla/hg/configuration/file/56db0b2b90af/tests
Whiteboard: [good first bug][mentor=jhammel][lang=py]
I shall give it a try. Just one brief question:

"""
It is noted that the upstream configuration.py does have tests, though we don't (and IMHO shouldn't) import them into talos: http://k0s.org/mozilla/hg/configuration/file/56db0b2b90af/tests
"""

Does that mean we should or shouldn't use it as a base for writing the tests? If the latter is the case, have you got any other example code? By the way, where exactly does that code reside within talos? I couldn't find it. Thanks
(In reply to nebelhom from comment #1)
> Does that mean, we should or shouldn't use it as a base for writing the
> tests.

So I envision the talos configuration test testing Talos-specific code: namely, going through PerfConfigurator and making sure that what we get out is what we expect to get out.

> If the latter is the case, have you got any other example code?

Nope. If you want, I can work out a bunch of things we can check for -- some of them easy, some of them less easy.

> By the way, where exactly does that code reside within talos. I couldn't
> find it.

If by 'that code' you mean configuration.py, it is http://hg.mozilla.org/build/talos/file/tip/talos/configuration.py

> Thanks
I think it really is best if you give me a list of suggested tests. We can freely extend it depending on my success with it. I suggest starting with the easy and relatively quick stuff. Maybe I can even think of some things to test as I go along... I tried to get a general testing harness running, based on the three files in http://k0s.org/mozilla/hg/configuration/file/56db0b2b90af/tests , but alas it failed miserably ;) so I need to take it step by step.

By 'that code' I meant the files found in http://k0s.org/mozilla/hg/configuration/file/56db0b2b90af/tests . But I think I misunderstood something there yet again :P

Thanks for your time.
The general procedure is:

1. invoke PerfConfigurator programmatically given
   - input files
   - command line arguments
   - (as a special case, you will need to pass a bogus path to firefox; this doesn't have to actually exist as a file)
2. generate a (e.g.) YAML output file
3. read the YAML output file
4. ensure that the bits of configuration that you care about are what you expect them to be. This should be as programmatic as possible: if we change defaults, we don't want tests to fail.

----

Some specific things we will want to test:

- given a very minimal configuration, ensure that (some subset of) the defaults are what is specified in PerfConfigurator.py
- given a test specification (e.g. `-a ts:tsvg`; could be a lot more intricate, but we can build on this), ensure that the output tests are what is expected from PerfConfigurator.py and test.py
- given a starting configuration file and "command line arguments" (really, arguments to http://hg.mozilla.org/build/talos/file/b1d61df65e8a/talos/PerfConfigurator.py#l838 or http://hg.mozilla.org/build/talos/file/b1d61df65e8a/talos/PerfConfigurator.py#l512 ), ensure that the resulting configuration is what you'd expect:
  * non-specified items are the defaults
  * items specified in the file but not overridden are what they should be from the file
  * items specified from arguments should "win"
- similar to the above, but specifically for extending tests
- given two (or more) source configuration files, ensure that overriding works as above and that the configurations are merged together correctly

While I don't think we should cover everything here, even just a smoketest for each of these cases would be helpful. A sketch of the procedure follows the list.
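To make steps 1-4 concrete, here is a minimal sketch of such a test. The entry-point name (PerfConfigurator.main), the YAML().read call, and the 'browser_path' key are assumptions on my part; use whatever the patch actually exposes. The firefox path is bogus on purpose:

    import os
    import unittest

    from talos import PerfConfigurator       # assumed entry point
    from talos.configuration import YAML     # assumed YAML reader

    ffox_path = 'test-path-to-firefox'  # bogus path; the file need not exist

    class TestDefaults(unittest.TestCase):

        def test_defaults(self):
            outfile = 'test_defaults.yaml'
            # steps 1 and 2: invoke PerfConfigurator, generating a YAML file
            PerfConfigurator.main(['--develop', '-e', ffox_path, '-o', outfile])
            try:
                # step 3: read the YAML output file back in
                content = YAML().read(outfile)
                # step 4: check the bits we care about; compare against the
                # values in PerfConfigurator.py rather than hard-coded
                # literals so the test survives default changes
                self.assertEqual(content['browser_path'], ffox_path)
            finally:
                os.remove(outfile)  # don't leave generated files behind

    if __name__ == '__main__':
        unittest.main()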
Attached patch Litmus test (obsolete) (deleted) — Splinter Review
I attached a first test just to make sure I am going in the right direction with this. Let me know if this is what you had in mind, and I shall implement the rest in a similar fashion.
Comment on attachment 674883 [details] [diff] [review]
Litmus test

This looks like a great start. I am tempted to commit this immediately with a few cleanups. The indentation of the comments in the test file looks wrong; maybe one more indentation level is necessary? I would also recommend moving e.g. the path to firefox into a variable vs. hard-coding it.
Attached patch First Patch to this bug (obsolete) (deleted) — Splinter Review
Ok, here goes. Is that already it, by the way? Strange, I thought there would be more to do, like also testing against a phony test case to see if the correct error is raised, and the like. Anyhoo, my comments:

firefox-path variable: It's funny how in about every single patch there is one thing that I decide against and follow the template given, which turns out to be the right choice after all :P

comment indentation: You were spot on about that. I don't know why I did that. Maybe I thought it looked daring and street ;)

One other thing that annoys me in the code. When checking whether the test name given corresponds to the one in the file, I wrote the following line:

    self.assertEqual(content['tests'][0]['name'], 'ts')

"content['tests'][0]['name']" just looks untidy to me, but for the life of me, I couldn't think of a more elegant solution. Comments?
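The only alternative I came up with is unpacking into a named local first; a sketch, with 'content' being the dict read back from the generated YAML:

    # pull the first test entry out into a named local so the
    # assertion reads more cleanly
    first_test = content['tests'][0]
    self.assertEqual(first_test['name'], 'ts')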
Attachment #674883 - Attachment is obsolete: true
Attached patch First Patch to this bug (obsolete) (deleted) — Splinter Review
Oops, forgot to take out the big chunk of string at the top
Attachment #675007 - Attachment is obsolete: true
Attached patch First Patch to this bug (obsolete) (deleted) — Splinter Review
Removed two unused imports, thanks to pyflakes. Sorry, I should have done all this straight away instead of revising the patch twice before it even happened. Please still look at the comments of the initial attachment. Thanks
Attachment #675008 - Attachment is obsolete: true
Attachment #675134 - Flags: review?(jhammel)
Comment on attachment 675134 [details] [diff] [review]
First Patch to this bug

Very close, but there are a few nits I'd like fixed first:

+"""
+unit tests
+"""

Please add a better docstring, at least "tests for PerfConfigurator.py".

+ffox_path = 'test/path/to/firefox/'

Probably kill the trailing slash (even though it's a dummy path and doesn't matter).

Other than that, looks good.
Attached patch First revision of patch (obsolete) (deleted) — Splinter Review
Ok, here's the revision. I used PEP 257 as the guideline here ( http://www.python.org/dev/peps/pep-0257/#multi-line-docstrings ).

Why should we remove the trailing '/' in the ffox_path? It's not an escape sequence or anything...
Attachment #675134 - Attachment is obsolete: true
Attachment #675134 - Flags: review?(jhammel)
Attachment #675323 - Flags: review?(jhammel)
> Why should we remove the trailing '/' in the ffox_path? It's not an escape
> sequence or anything...

On any OS, the firefox path will not end in a '/'. I just want to avoid confusion that it could be a path to a directory vs. the executable. Though I'd also be fine just using e.g. `ffox_path = 'test-path-to-firefox'`.
Comment on attachment 675323 [details] [diff] [review]
First revision of patch

Looks good. The only thing is that the test should be in the tests directory, probably 'tests/test_PerfConfigurator.py' vs 'talos/unittest_PerfConfigurator.py'.
Attachment #675323 - Flags: review?(jhammel) → review-
I'll be out of town next week, so if you have a final form patch, nebelhom, please ask :jmaher for review unless you don't mind waiting.
Attached patch Second revision of patch (deleted) — Splinter Review
np. I moved it to ./talos/tests and adjusted the content and title accordingly.

@jmaher: I put you in as reviewer, as jhammel suggested. Thanks a lot!
Attachment #675323 - Attachment is obsolete: true
Attachment #675902 - Flags: review?(jmaher)
Comment on attachment 675902 [details] [diff] [review]
Second revision of patch

Review of attachment 675902 [details] [diff] [review]:
-----------------------------------------------------------------

Looking at the history of this bug and the tests we have now, this patch is looking good. Ideally we would have tests for all the other options as well, but this does a good job of testing options.

::: tests/test_PerfConfigurator.py
@@ +21,5 @@
> +from talos.configuration import YAML
> +
> +# globals
> +here = os.path.dirname(os.path.abspath(__file__))
> +ffox_path = 'test/path/to/firefox'

This assumes a Unix-style path; it would be inaccurate on Windows.
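If we wanted it platform-neutral, a sketch (though, as noted below, any placeholder string would do, since the file never has to exist):

    import os
    # build the dummy path with the host OS's separator
    # instead of hard-coding '/'
    ffox_path = os.path.join('test', 'path', 'to', 'firefox')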
Attachment #675902 - Flags: review?(jmaher) → review+
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
> > +ffox_path = 'test/path/to/firefox'
>
> This assumes a Unix-style path; it would be inaccurate on Windows.

True, but this is also only used because PerfConfigurator craps out if you don't give it anything. It could just as well be 'THIS-IS-NOT-THE-REAL-PATH-TO-FIREFOX'. PerfConfigurator doesn't actually check whether the given path exists, as this should be done at run time, not configuration time. Anyway, already committed, so we are done here ;)
Attached patch Add-on (obsolete) (deleted) — Splinter Review
Hi, I attached the other test that I wrote, testing for exceptions. It looks a little unwieldy because of the attempt to catch exceptions without assertRaises. Let me know if I should still adjust anything there. Thanks
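For reference, this is the assertRaises form I decided against. It verifies that ConfigurationError is raised for one bad input, but aborts the test on the first failing case instead of collecting every fault in one run (a sketch; the argument list is taken from the patch, and assertRaises as a context manager needs Python 2.7+):

    # stops at the first failure rather than recording it in a
    # faults list and moving on to the next bad input
    with self.assertRaises(ConfigurationError):
        example.parse_args(['--activeTests', 'badtest', '--develop',
                            '-e', ffox_path, '-o', 'test3.yaml'])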
Attachment #678925 - Flags: review?(jhammel)
We should also clean up test*.yaml following testing. I'll file a follow-up bug.
This shows a problem:

======================================================================
FAIL: test_errors (test_PerfConfigurator.PerfConfiguratorUnitTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jhammel/talos/tests/test_PerfConfigurator.py", line 151, in test_errors
    self.assertEqual(faults, [])
AssertionError: Lists differ: ['invalid --filter passed test... != []

First list contains 1 additional elements.
First extra element 0:
invalid --filter passed test

- ['invalid --filter passed test']
+ []

----------------------------------------------------------------------
Ran 22 tests in 68.232s

FAILED (failures=1)
<unittest.runner.TextTestResult run=22 errors=0 failures=1>

Should be filed
(In reply to Jeff Hammel [:jhammel] from comment #20)
> We should also clean up test*.yaml following testing. I'll file a follow-up
> bug.

Filed: https://bugzilla.mozilla.org/show_bug.cgi?id=809250
(In reply to Jeff Hammel [:jhammel] from comment #21)
> This shows a problem:
>
> FAIL: test_errors (test_PerfConfigurator.PerfConfiguratorUnitTest)
> [...]
> Should be filed

Filed: https://bugzilla.mozilla.org/show_bug.cgi?id=809262 ; should be fixed before we test for it
Depends on: 809262
Attached patch Second revision of add-on (obsolete) (deleted) — Splinter Review
Here is the next patch. I added a 'remove_test_yamls' method to remove all the yamls in question, leaving the directory as it was. I'll be away for the next few days, so there is no need to rush it. Thanks for filing that filter bug, btw.
Attachment #678925 - Attachment is obsolete: true
Attachment #678925 - Flags: review?(jhammel)
Attachment #679078 - Flags: review?(jhammel)
Comment on attachment 679078 [details] [diff] [review]
Second revision of add-on

A great start, but a few fixes are necessary.

The .yaml files should be cleaned up a la https://bugzilla.mozilla.org/show_bug.cgi?id=809250 . Likewise, they should probably be tempfiles vs. files scattered randomly in the CWD (see the sketch at the end of this comment).

+        try:
+            # invalid --activeTests input
+            options, args = example.parse_args(['--activeTests', 'badtest', '--develop',
+                                                '-e', ffox_path, "-o", "test3.yaml"])
+            faults.append("invalid --activeTest passed test")
+        except ConfigurationError:
+            pass
+        except:
+            faults.append("invalid --activeTest raised an error that is not ConfigurationError")

You repeat this pattern a lot, so make it a function, e.g.

    def assertConfigurationError(self, args, except_fault, non_raises_fault):
        try:
            options, args = example.parse_args(args)
        except ConfigurationError:
            return None
        except:
            return except_fault
        return non_raises_fault

+    def remove_test_yamls(self):

These should probably be removed after they are created. There's no reason to have more than one (could be done in the above function).
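On the tempfile point, the kind of thing I mean; a sketch that assumes, as the existing tests do, that parse_args writes the '-o' file:

    import os
    import tempfile

    # generate the output file in a temp location instead of the CWD,
    # and remove it even if an assertion fails
    fd, outfile = tempfile.mkstemp(suffix='.yaml')
    os.close(fd)
    try:
        options, args = example.parse_args(['--develop', '-e', ffox_path,
                                            '-o', outfile])
        # ... assertions against the generated file ...
    finally:
        os.remove(outfile)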
Attachment #679078 - Flags: review?(jhammel) → review-
Attached patch Next Revision (deleted) — Splinter Review
hmmm... yea, that was a schoolboy error. I was so fixated on changing the code that I wasn't thinking at all. I took your suggested template and made it more general to accommodate BaseException as well. I couldn't fit the last two try/except blocks into that template, as they concern a different function in PerfConfigurator, so I just left them as they were. I hope that is ok.
Attachment #679078 - Attachment is obsolete: true
Attachment #680122 - Flags: review?(jhammel)
hmmm... interesting what a small change in order can do... I fixed that particular failure, but on running the tests for filter, one of the tests stumbles over my newly added assert statement. Here is the readout:

..F
======================================================================
FAIL: test_parse (__main__.TestFilter)
test the filter name parse function
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_filter.py", line 64, in test_parse
    parsed = talos.filter.parse('foo:10.1,2,5.0,6.')
  File "/home/nebelhom/Mozilla-Dev/talos/talos/filter.py", line 144, in parse
    "--filter value not found in filters."
AssertionError: --filter value not found in filters.
----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)

Should I adjust that particular test as well, so that it expects an AssertionError? Should I include this example in test_PerfConfigurator.py as well? I am asking because I am not familiar with the initial thinking behind the filter.parse() function. Is it supposed to just blindly and mechanically turn 'filter:arg' into a tuple ('filter', arg) without any qualitative checks? If that is the case, any other use of filter.parse in talos may throw such an unexpected exception. As a consequence, I would remove the assert statement from filter.parse() and move it into PerfConfigurator.py (a sketch of the behavior I mean follows).

What do you guys think?
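To make the question concrete, the blind, mechanical behavior I mean would be something like this (a sketch only, not the actual talos.filter.parse):

    def parse(string):
        """Blindly split 'filter:arg1,arg2' into ('filter', [args]),
        with no check that the filter name actually exists."""
        name, _, argstr = string.partition(':')
        args = argstr.split(',') if argstr else []
        return name, args

    # parse('foo:10.1,2,5.0,6.') -> ('foo', ['10.1', '2', '5.0', '6.'])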
Ach rubbish! Wrong bug report. See bug 809262 to get some context.
Depends on: 811361
Attached patch unbitrot (deleted) — Splinter Review
Comment on attachment 680122 [details] [diff] [review]
Next Revision

wfm
Attachment #680122 - Flags: review?(jhammel) → review+