Closed Bug 795074 (opened 12 years ago, closed 12 years ago)

refactor sut_tools to be more flexible and optionally do parts

Categories

(Release Engineering :: General, defect)

Platform: ARM Android
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: jmaher, Assigned: jmaher)

References

Details

Attachments

(3 files, 1 obsolete file)

The sut_tools need to be callable as components (i.e. via main() rather than as raw scripts), and many of their tasks should be optional via flags passed to the function.
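Roughly, the shape being aimed for is something like the sketch below (hypothetical names, not the actual sut_tools code): each tool exposes a main() that takes flags for the optional steps and reports status through its return value.

# Hypothetical sketch of a "component-style" sut_tools script; the helper
# names (cleanupDevice, installApp) are illustrative, not the real API.

def cleanupDevice(deviceIP):
    # Placeholder for the real cleanup step; return 0 on success.
    return 0

def installApp(deviceIP):
    # Placeholder for the real install step; return 0 on success.
    return 0

def main(deviceIP, doCleanup=True, doInstall=True):
    # Run only the requested steps and report status via the return value,
    # so callers that import this module never get killed by sys.exit().
    if doCleanup and cleanupDevice(deviceIP) != 0:
        return 1
    if doInstall and installApp(deviceIP) != 0:
        return 1
    return 0

A harness that only wants the install step could then call main(ip, doCleanup=False) and act on the return code instead of shelling out to the whole script.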
Attached patch refactor many of the sut_tools (1.0) (obsolete) (deleted) — Splinter Review
one of the last patches
Assignee: nobody → jmaher
Status: NEW → ASSIGNED
Attachment #665599 - Flags: review?(bugspam.Callek)
Comment on attachment 665599 [details] [diff] [review]
refactor many of the sut_tools (1.0)

Review of attachment 665599 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, I just want your answers on installApp.py before I r+ -- and I know this patch relies on the mozdevice one ;-)

::: sut_tools/installApp.py
@@ +32,5 @@
> +    try:
> +        print dm._runCmds([{'cwd': 'exec su -c "logcat -d -v time *:W"'}])
> +    except devicemanager.DMError, e:
> +        setFlag(errorFile, "Remote Device Error: Exception hit while trying to run logcat: %s" % str(e))
> +        sys.exit(1)

nit: return 1

@@ +151,5 @@
>          sys.exit(1)
> 
> +    except devicemanager.AgentError, err:
> +        log.error("remoteDeviceError: while doing one time setup for installation: %s" % err)
> +        return None, None

Does this return None, None make sense with the sys.exit(1) right above it?

@@ +175,5 @@
> +    if not dm:
> +        sys.exit(1)
> +
> +    if installOneApp(dm, devRoot, path_to_main_apk, proxyFile, errorFile):
> +        sys.exit(1)

In fact, I see a large mix of sys.exit() vs return throughout this file... can we use sys.exit() only when we are __name__ == "__main__", if we're doing this?

::: sut_tools/sut_lib.py
@@ +451,4 @@
>      try:
>          dm._runCmds([{'cmd': 'settime %s' % s}])
>          return True
> +    except devicemanager.AgentError, e:

I'll take your word that this is right ;-)

@@ +475,5 @@
> +    if dm._sock:
> +        dm._sock.close()
> +    dm._sock = None
> +    dm.deviceRoot = None
> +    dm.retries = 0

...is this the "abort retry faster", or something else?

::: sut_tools/verify.py
@@ +269,1 @@
>      return False

NOTE: we don't yet ever try to fix the screen ;-)
Attachment #665599 - Flags: feedback+
@@ +475,5 @@
> +    if dm._sock:
> +        dm._sock.close()
> +    dm._sock = None
> +    dm.deviceRoot = None
> +    dm.retries = 0

...is this the "abort retry faster", or something else?

^ This code resets the retry limit so we actually attempt to reconnect after the first failure. Otherwise we sit in a loop trying to get the deviceRoot without ever trying to connect.

I can work on the sys.exit() and return values, good feedback!
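To spell out what the quoted hunk is doing (sketch only; the helper name resetConnection is made up, though the attribute names come from the hunk itself):

# Sketch mirroring the quoted sut_lib.py hunk, assuming a DeviceManagerSUT-like
# object with _sock, deviceRoot, and retries attributes.
def resetConnection(dm):
    # Drop the cached socket and state so the next call reconnects instead of
    # spinning on a dead connection until the retry limit is exhausted.
    if dm._sock:
        dm._sock.close()
    dm._sock = None
    dm.deviceRoot = None   # force deviceRoot to be re-queried after reconnect
    dm.retries = 0         # reset the retry counter so we actually retry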
updated to address return vs sys.exit.
Attachment #665599 - Attachment is obsolete: true
Attachment #665599 - Flags: review?(bugspam.Callek)
Attachment #666592 - Flags: review?(bugspam.Callek)
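For context, the return-vs-sys.exit cleanup being asked for boils down to this pattern (a minimal sketch with made-up names, not the actual 1.1 patch): helpers report failure through return values, and sys.exit() appears only under the __name__ == "__main__" guard.

# Hypothetical illustration; installOneApp and its argument are placeholders,
# not the real installApp.py signatures.
import sys

def installOneApp(apkPath):
    # Signal failure with a return code rather than calling sys.exit(),
    # so importing callers can decide how to handle it.
    if not apkPath:
        return 1
    return 0

def main(argv):
    if len(argv) < 2:
        return 2
    return installOneApp(argv[1])

if __name__ == "__main__":
    # The process exit code is decided only here, at the script boundary.
    sys.exit(main(sys.argv))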
Comment on attachment 666592 [details] [diff] [review]
refactor many of the sut_tools (1.1)

Review of attachment 666592 [details] [diff] [review]:
-----------------------------------------------------------------

Will add to staging later today, or early tomorrow.
Attachment #666592 - Flags: review?(bugspam.Callek) → review+
Blocks: 797697
Blocks: 797868
I suspect we'll need to adjust the smoketest script that Joel has waiting on review from me, but we noticed jobs failing in production (especially after try jobs get canceled) because we stopped checking for stalled xpcshell servers. This should fix it.
Attachment #668715 - Flags: review?(philringnalda)
Attachment #668715 - Flags: feedback?(jmaher)
Attachment #668715 - Attachment is patch: true
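For background on what that check covers, a "stalled xpcshell" check is conceptually something like the sketch below (hypothetical helper names and a pgrep-based approach for illustration; this is not the actual checkStalled code in sut_lib.py): find leftover xpcshell processes from a cancelled or crashed job and kill them so the next run starts clean.

# Hypothetical sketch of a stalled-xpcshell check.
import os
import signal
import subprocess

def findStalledXpcshell():
    # Look for leftover xpcshell processes still running on this host;
    # pgrep exits non-zero when nothing matches.
    try:
        out = subprocess.check_output(["pgrep", "-f", "xpcshell"])
    except subprocess.CalledProcessError:
        return []
    return [int(pid) for pid in out.decode().split()]

def killStalled(pids):
    # Kill any stalled processes so the next job starts from a clean slate.
    for pid in pids:
        os.kill(pid, signal.SIGKILL)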
Comment on attachment 668715 [details] [diff] [review]
[tools] regression fix - Actually checkStalled/watcherINI normally in prod

That ought to behave more nicely.
Attachment #668715 - Flags: review?(philringnalda) → review+
So after I pushed the regression fix, it turned out we took out ALL tegras from the pool ::sad face::

10/05/2012 20:03:03: INFO: /builds/tegra-173/error.flg Unable to identify if watcher.ini is current
10/05/2012 20:03:33: INFO: verifyDevice: failing to set watcher.ini
program finished with exit code 1
elapsedTime=206.671093
========= Finished Running verify.py failed (results: 5, elapsed: 3 mins, 26 secs) (at 2012-10-05 20:03:33.777404) =========

So I'm reverting the Boolean that sets it, and we'll work on fixing this code up no later than Monday, since we've now lost the ability to update watcher.ini.
Attachment #668725 - Flags: feedback?(jmaher)
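The longer-term fix will presumably be to make the watcher.ini step optional rather than dropping it entirely, roughly this shape (a hypothetical sketch; updateWatcherIni and setFlag here are stand-ins, not the real verify.py helpers):

# Hypothetical sketch: gate the watcher.ini step behind a flag and device
# type, so tegras keep the check and pandas skip it.

def updateWatcherIni(deviceName):
    # Placeholder: push/verify watcher.ini on the device; return success.
    return True

def setFlag(deviceName, message):
    # Placeholder: record an error.flg-style message for the device.
    print("%s: %s" % (deviceName, message))

def verifyDevice(deviceName, checkWatcherIni=True):
    # Callers that pass checkWatcherIni=False (or non-tegra devices) skip it.
    if checkWatcherIni and deviceName.startswith("tegra"):
        if not updateWatcherIni(deviceName):
            setFlag(deviceName, "Unable to identify if watcher.ini is current")
            return False
    return True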
Comment on attachment 668715 [details] [diff] [review]
[tools] regression fix - Actually checkStalled/watcherINI normally in prod

Review of attachment 668715 [details] [diff] [review]:
-----------------------------------------------------------------

Aye, my bad.
Attachment #668715 - Flags: feedback?(jmaher) → feedback+
Comment on attachment 668725 [details] [diff] [review]
[tools] regression fix to regression fix - Don't update watcher.ini

Review of attachment 668725 [details] [diff] [review]:
-----------------------------------------------------------------

Umm, you are flip-flopping a lot. Let's open a new bug to resolve this watcher.ini issue; on the panda board we can't look for it, and I tried a few things to make it work.
Attachment #668725 - Flags: feedback?(jmaher) → feedback+
Product: mozilla.org → Release Engineering
Component: General Automation → General