Bug 396574 - JavaScript Speed Tests
Status: RESOLVED FIXED (Opened 17 years ago; Closed 17 years ago)
Component: Core :: JavaScript Engine, defect, P3
Reporter: jeresig; Assigned: anodelman
Attachments: 6 files, 1 obsolete file
- (deleted), application/x-tar
- (deleted), application/x-tar
- (deleted), application/zip | jeresig: review+
- (deleted), patch | rcampbell: review+
- (deleted), patch | rcampbell: review+
- (deleted), patch | rcampbell: review+
This is a bug to track the creation of the JavaScript Speed Test Suite and its subsequent integration into the main test system.
You can see some of the sample output here:
http://ejohn.org/apps/speed/
Reporter - Comment 1 • 17 years ago
Ok, I've taken a first stab at generating some perf tests. The QA team will have to tell me how out-to-lunch I am, or not.
I've constructed a number of HTML files that each consist of the following: a pre tag (which will hold all of the formatted test output), a script tag that loads quit.js (if it exists), a script tag that loads the test runner, and a script tag that contains all of the module tests.
If you load the page in a browser it will (synchronously) run through the tests, appending the output to the pre as it goes; if a quit.js file is loaded, the browser will quit upon completion. If you load the page with ?delay on the end, it will run through the tests asynchronously (allowing any layperson to run them without it locking up the browser).
The output of a file will look something like this (based upon the test output description from another bug and with an addition of a "# of iterations" column):
http://ejohn.org/apps/speed/results/spidermonkey.txt
Now, it's important to note that it's NOT sufficient just to print out "X seconds were taken" as a result - this test runner is a full test suite capable of perceiving much finer-grained results than can occur in the normal perf process. I assume that a tweak would have to be made to the existing Talos performance analysis methodology (the runner would look at the output for a page, detect __start_report..., and load that data block as a substitute for the normal Talos output).
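To make that parsing tweak concrete, here is a minimal sketch of marker extraction on the harness side. This is not actual Talos code; the helper name and the exact way the page text would reach it are assumptions.

```javascript
// Hypothetical helper: pull the data block out of a page's text output
// by scanning for the __start_report / __end_report markers described
// above. The function name is an assumption, not a Talos API.
function extractReport(pageText) {
  const match = pageText.match(/__start_report([\s\S]*?)__end_report/);
  return match ? match[1] : null;
}

// A harness would feed this the dumped page text; here we simulate it:
console.log(extractReport("log noise __start_report123__end_report more noise"));
```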
Reporter - Comment 2 • 17 years ago
I'm not sure which format is easiest to integrate into Talos, but based upon Alice's comment here:
https://bugzilla.mozilla.org/show_bug.cgi?id=387148#c2
I've created a massive collection of single-html-file tests that each run an individual speed test, outputting the desired __start_reportNUMBER__end_report.
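On the page side, emitting that format might look like the sketch below. The median computation is illustrative only; the actual tests compute their own results, and the marker layout is taken from the description above.

```javascript
// Sketch of the reporting side: a single-file test page computes its
// result and emits it between the markers that the harness scans for.
// Taking the median of several runs is an illustrative choice, not
// necessarily what the real tests do.
function formatReport(timesMs) {
  const sorted = [...timesMs].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  return "__start_report" + median + "__end_report";
}

console.log(formatReport([14, 12, 13]));
```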
Reporter - Comment 3 • 17 years ago
This takes the previous single tests and adapts them to the output style needed for integration with Talos. Specifically, the MOZ_INSERT_CONTENT_HOOK comment was added, tpRecordTime is run (if it exists), and a new perf-single.manifest file was generated with a list of all the tests in it.
Attachment #282202 - Attachment is obsolete: true
Assignee - Comment 4 • 17 years ago
I've taken a look at the latest attempt and I think that we need to answer some questions.
This runs through runner.js - which is effectively the page cycler: starting one test, storing data upon test completion, and moving to the next test. I had assumed that "Talos integration" meant the JS tests would run with one of the Talos page cyclers (either tp2 or the new pageloader), which would then handle moving from one page to the next, along with data management and communication with Talos. The other tests that I've integrated end up looking like a folder full of html files paired with a basic text manifest file listing the urls to load into the browser.
If this is not the case then we need to consider if this is actually something that we want to run in talos. It might be a better candidate for hooking into buildbot to run on its own. If you were wanting to send the collected data to the graph server I can pull out the appropriate code from talos (which I've been meaning to do anyway to make things more generic).
It just seems like we are attempting to integrate a framework into a framework and, while I'm a big proponent of Talos, we don't have to consider it the only route to automating performance tests.
So, we need to figure out:
- what are we hoping to get out of the Talos integration?
- is this something that belongs in Talos, or would it succeed as a standalone project?
Reporter - Comment 5 • 17 years ago
(In reply to comment #4)
> - is this something that belongs in Talos, or would it succeed as a standalone
> project?
Having it be its own standalone project would be fine. I was just under the impression that integration with Talos was "The Right Way To Integrate Speed Tests." However, that doesn't seem to be the case here, and it should probably be branched off as its own suite (as you recommended).
> - what are we hoping to get out of the talos integration?
I was hoping to get good reporting on commits - so that if changes were made to the JS engine we could see those results (and see the differences between engine versions). It sounds like this is something that could be done once the "charting code" is extracted from Talos.
Ok - this is fine then; let's try to progress along those lines. Let me know when you'd like to try some more integration.
Comment 6 • 17 years ago
Running Talos through our existing buildbot automation is already a solved problem. That's one benefit to using it as an execution harness.
That said, if it makes more sense to run the JS speed tests on their own - and it sounds like it might be, and there's already a fairly complete system for invoking them - we might have an easier time just taking the suite as it is and wrapping it up in some buildbot code for execution. It could run after a Talos run on the current performance farm.
How long do these tests take to run? How much setup is involved?
Assignee - Comment 7 • 17 years ago
To send data to the graph server you can use a simple HTTP POST. You basically end up sending the collected data point by point. I can help you get this working with graph-stage for testing purposes, especially since it might give us a better idea of what the headaches are going to be if we don't take the integrate-with-talos route.
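A rough sketch of that point-by-point submission is below. The payload layout, field order, machine name, and endpoint URL are all assumptions for illustration; the real graph server protocol may differ.

```javascript
// Sketch of "sending the data collected point by point" over HTTP POST.
// The START/END framing, field layout, and names below are assumptions,
// not the actual graph server wire format.
function buildGraphPayloads(machine, testName, points) {
  // one payload per collected data point
  return points.map((value, index) =>
    `START\n${machine},${testName},${index},${value}\nEND`);
}

const payloads = buildGraphPayloads("talos-stage", "jss", [12.5, 13.1]);
console.log(payloads.length);

// Posting each payload might then look like this (Node 18+ global fetch;
// the URL is a placeholder):
// for (const body of payloads) {
//   await fetch("http://graph-stage.example/collect", { method: "POST", body });
// }
```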
Comment 8 • 17 years ago
We should add the perf keyword.
Assignee - Comment 9 • 17 years ago
I've made some alterations to the tests to make them work with talos.
1 - the tests are now set up to run onload; this allows time for tpRecordTime to be created, otherwise we end up calling it before it exists
2 - I've named the suite "jss" (for JavaScript Speed); we can rename it if we want, I just wanted something short for the waterfall
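The ordering issue in point 1 can be sketched as follows. The guard and fallback behavior are assumptions so the sketch runs outside a browser (globalThis stands in for window); only the tpRecordTime name comes from the comment above.

```javascript
// Sketch of the guard from point 1: only report through tpRecordTime if
// the Talos page loader has created it by the time the tests finish;
// otherwise fall back to a standalone result. The fallback string and
// the simulation below are assumptions for illustration.
function reportResult(elapsedMs) {
  const recorder = globalThis.tpRecordTime;
  if (typeof recorder === "function") {
    recorder(elapsedMs); // hand the timing to the page loader
    return "reported";
  }
  return "standalone: " + elapsedMs;
}

// Simulate the page loader injecting tpRecordTime before onload fires:
globalThis.tpRecordTime = (ms) => console.log("tpRecordTime got " + ms);
console.log(reportResult(1234));
```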
Assignee - Updated • 17 years ago
Attachment #297089 - Attachment is patch: false
Attachment #297089 - Attachment mime type: text/plain → application/zip
Reporter - Comment 10 • 17 years ago
Comment on attachment 297089 [details]
incorporate javascript speed tests into talos
This looks just fine to me - I'll see if I can't tweak the Makefile output for you to match this format; that way it'll be easier for you to dump in in the future.
Attachment #297089 - Flags: review?(jresig) → review+
Assignee - Comment 11 • 17 years ago
For brevity, here is just a summary of the check-in:
Checking in page_load_test/jss
/cvsroot/mozilla/testing/performance/talos/page_load_test/jss/*,v <-- *
initial revision: 1.1
done
All tests in the suite are now checked in to page_load_test/jss
Assignee - Comment 12 • 17 years ago
This patch pushes the tests to the staging machines that are currently up. I want them to run there for a few cycles before we attempt to push to all the production machines.
Attachment #298544 - Flags: review?(rcampbell)
Comment 13 • 17 years ago
Comment on attachment 298544 [details] [diff] [review]
push jss tests onto staging machines
*stamp*
Attachment #298544 - Flags: review?(rcampbell) → review+
Assignee - Comment 14 • 17 years ago
Checking in sample.config;
/cvsroot/mozilla/tools/buildbot-configs/testing/talos/perfmaster/configs/sample.config,v <-- sample.config
new revision: 1.8; previous revision: 1.7
done
Checking in sample.config.nochrome;
/cvsroot/mozilla/tools/buildbot-configs/testing/talos/perfmaster/configs/sample.config.nochrome,v <-- sample.config.nochrome
new revision: 1.2; previous revision: 1.1
done
Checking in sample.config.nogfx;
/cvsroot/mozilla/tools/buildbot-configs/testing/talos/perfmaster/configs/sample.config.nogfx,v <-- sample.config.nogfx
new revision: 1.8; previous revision: 1.7
done
Pushed to stage.
Assignee - Comment 15 • 17 years ago
We only need to cycle through the jss tests once since each individual test contained in a given page is run multiple times. As it is, we are only slowing down the machine cycle time for no benefit.
Attachment #298844 - Flags: review?(rcampbell)
Updated • 17 years ago
Attachment #298844 - Flags: review?(rcampbell) → review+
Assignee - Comment 16 • 17 years ago
Checking in sample.config;
/cvsroot/mozilla/tools/buildbot-configs/testing/talos/perfmaster/configs/sample.config,v <-- sample.config
new revision: 1.9; previous revision: 1.8
done
Only cycle through jss once.
Checking in sample.config.nochrome;
/cvsroot/mozilla/tools/buildbot-configs/testing/talos/perfmaster/configs/sample.config.nochrome,v <-- sample.config.nochrome
new revision: 1.3; previous revision: 1.2
done
Checking in sample.config.nogfx;
/cvsroot/mozilla/tools/buildbot-configs/testing/talos/perfmaster/configs/sample.config.nogfx,v <-- sample.config.nogfx
new revision: 1.9; previous revision: 1.8
done
Assignee - Comment 17 • 17 years ago
Seems to be running fine on stage. A warning, though: once this is pushed to production it will increase the cycle time of the Talos machines by somewhere over 45 minutes.
Attachment #299845 - Flags: review?(rcampbell)
Comment 18 • 17 years ago
What's the right number of machines here - bug 416251 has a request for 6 machines, but we are starting to run tight on the minis. Is 6 the right number, or some number less than that?
Assignee - Comment 19 • 17 years ago
We currently have three winxp machines reporting to stage. We are waiting on bug 419071 for these to get pushed to production.
Priority: -- → P3
Assignee - Comment 20 • 17 years ago
A trio of winxp talos machines are now reporting to the Firefox waterfall (qm-pxp-jss01/02/03). These machines run tjss and tsspider.
I believe that covers everything this bug was trying to accomplish; the tests are fully incorporated into Talos and now running in an automated fashion.
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
Updated • 17 years ago
Flags: in-testsuite-
Flags: in-litmus-
Comment 21 • 16 years ago
Comment on attachment 299845 [details] [diff] [review]
push jss tests to production
I'm assuming this was already done. Clearing my review queue.
Attachment #299845 - Flags: review?(rcampbell) → review+