Closed Bug 579792 (Opened 14 years ago, Closed 14 years ago)

Allow Mozmill tests to change the global timeout

Categories: Testing Graveyard :: Mozmill
Type: defect
Priority: Not set
Severity: normal
Tracking: Not tracked
Status: RESOLVED WONTFIX
People: Reporter: whimboo; Assignee: Unassigned
Description (Reporter: whimboo):

With bug 504440 fixed we have a fixed value for the timeout after which the application gets killed. It would be very helpful if we could expose the global timeout in the persisted object, so that a Mozmill test can update the value itself. An example is our software update tests, which take a while until the update has been downloaded. Forcing the user to add another command line option is not the way I would like to see this feature work. Tests should only be able to set their own global timeout; for the next test it should be reset to the value given on the command line. I would propose persisted.timeout for this feature.
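A minimal sketch of how the proposed feature might have looked from inside a test (it was never implemented; the bug was resolved WONTFIX). setupModule and the persisted object are real Mozmill test conventions; the timeout property and its value are purely illustrative:

    var setupModule = function(module) {
      module.controller = mozmill.getBrowserController();

      // Proposed, never implemented: raise the global timeout for this
      // long-running software update test. The harness would reset it to
      // the command line value before the next test.
      persisted.timeout = 360000;  // milliseconds; illustrative value
    }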
Comment 1:

As the global timeout as per bug 504440 is a dead-man timeout, I don't think tests should be able to override it. There is a need for a per-test timeout: https://bugzilla.mozilla.org/show_bug.cgi?id=574871 . While most of the discussion has been about having a global per-test timeout, there has been some discussion (just no agreement) on allowing tests to override that timeout.

Ultimately, timeouts are a workaround for other issues. The defaults should be large enough to cover common use cases on common platforms, but there is no way to ensure that a timeout is large enough, either per test or across the harness. Machine characteristics and performance will differ. And again, timeouts should normally only be hit if something is wrong anyway, or if the user is working on a horribly slow machine.

Also, why don't you like having the timeout as a command line option?
Comment 2 (whimboo):

(In reply to comment #1)
> As the global timeout as per bug 504440 is a dead-man timeout, I don't think
> tests should be able to override it. There is a need for a per-test timeout:
> https://bugzilla.mozilla.org/show_bug.cgi?id=574871 . While most of the
> discussion has been about having a global per-test timeout, there has been
> some discussion (just no agreement) on allowing tests to override that
> timeout.

Even if we had the per-test timeout, I assume it could not be higher than the global timeout value. When exactly do we reset the state of the current global timeout? Does it happen when a test function has ended, or whenever we call a controller function?

> Ultimately, timeouts are a workaround for other issues. The defaults should
> be large enough to cover common use cases on common platforms, but there is
> no way to ensure that a timeout is large enough, either per test or across
> the harness. Machine characteristics and performance will differ. And again,
> timeouts should normally only be hit if something is wrong anyway, or if the
> user is working on a horribly slow machine.

This highly depends on my question above. If the timeout only covers a single test function we could be in trouble, while a timeout per test itself should be completely fine.

> Also, why don't you like having the timeout as a command line option?

Because people don't know how long tests can take before running into a timeout. Only the tests themselves know that.
Comment 3:

(In reply to comment #2)
> (In reply to comment #1)
> > As the global timeout as per bug 504440 is a dead-man timeout, I don't
> > think tests should be able to override it. There is a need for a per-test
> > timeout: https://bugzilla.mozilla.org/show_bug.cgi?id=574871 . While most
> > of the discussion has been about having a global per-test timeout, there
> > has been some discussion (just no agreement) on allowing tests to
> > override that timeout.
>
> Even if we had the per-test timeout, I assume it could not be higher than
> the global timeout value. When exactly do we reset the state of the current
> global timeout? Does it happen when a test function has ended, or whenever
> we call a controller function?

The counter is reset whenever communication occurs, in either direction, over the JSBridge. Since this happens on every step, a timeout will only occur if a single step takes longer than the value (60s by default), so a per-test timeout could very well be higher than the global timeout without inconsistency. In general, I would discourage going out of our way to make very long tests, and especially to make very long tests the norm.

> > Ultimately, timeouts are a workaround for other issues. The defaults
> > should be large enough to cover common use cases on common platforms, but
> > there is no way to ensure that a timeout is large enough, either per test
> > or across the harness. Machine characteristics and performance will
> > differ. And again, timeouts should normally only be hit if something is
> > wrong anyway, or if the user is working on a horribly slow machine.
>
> This highly depends on my question above. If the timeout only covers a
> single test function we could be in trouble, while a timeout per test
> itself should be completely fine.
>
> > Also, why don't you like having the timeout as a command line option?
>
> Because people don't know how long tests can take before running into a
> timeout. Only the tests themselves know that.
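A minimal sketch of the dead-man behavior described above, assuming the 60s default. This is illustrative JavaScript only; the actual countdown lives in the Python harness, and killApplication is a hypothetical stand-in for whatever the harness does when the timer fires:

    var DEADMAN_TIMEOUT = 60000;  // 60s default, overridable on the command line
    var timer = null;

    // Called for every message that crosses the JSBridge, in either direction.
    function resetDeadman() {
      if (timer)
        clearTimeout(timer);
      timer = setTimeout(function () {
        // No single step completed within the limit: kill the application.
        killApplication();  // hypothetical stand-in
      }, DEADMAN_TIMEOUT);
    }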
Comment 4 (whimboo):

As discussed on IRC, that would mean we have to replace the waitForEval("download==finished") call for the software updates with a custom loop which regularly checks for download progress updates instead.

Wontfix?
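A minimal sketch of that workaround: poll in short steps so each controller call crosses the JSBridge and resets the dead-man counter, instead of one long wait that could exceed the 60s limit. The download object and its state/percentComplete properties are hypothetical placeholders for the real software update API; controller.sleep is the real Mozmill call:

    var MAX_ITERATIONS = 120;
    var lastProgress = -1;

    for (var i = 0; i < MAX_ITERATIONS; i++) {
      if (download.state == "finished")
        break;

      // Fail early if the download made no progress during this step.
      if (download.percentComplete == lastProgress)
        throw new Error("Download stalled at " + download.percentComplete + "%");
      lastProgress = download.percentComplete;

      // Short step, well under the global timeout; the sleep call
      // communicates over the JSBridge and resets the counter.
      controller.sleep(5000);
    }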
Comment 5:

(In reply to comment #4)
> As discussed on IRC, that would mean we have to replace the
> waitForEval("download==finished") call for the software updates with a
> custom loop which regularly checks for download progress updates instead.
>
> Wontfix?

Correct, this sounds like something we don't need to fix in this bug. Let's address it with your workaround and revisit the issue in the per-test timeout bug. This bug is WONTFIX.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → WONTFIX
Product: Testing → Testing Graveyard