Closed Bug 1316590 Opened 8 years ago Closed 1 year ago

Compare view should allow viewing subtests across all tests

Categories

(Tree Management :: Perfherder, enhancement, P3)


Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: ted, Unassigned)

References

(Blocks 2 open bugs)

Details

I've been using Perfherder's compare view to compare build times on try pushes for my sccache2 work, and it's super useful! In this case the overall build time metric is not always helpful by itself, because there are so many build steps and so much variability in them. I wind up diving into the subtests to compare just some parts, like the compile tier. One thing that could be better for my use case would be the ability to see subtests across all the test types, so I could compare the 'compile' subtest across all build types.

One of the specific comparisons I'm looking at is: https://treeherder.mozilla.org/perf.html#/compare?originalProject=try&originalRevision=4f7d1609a34d485119d81f1b3e93bb4294fbabc1&newProject=try&newRevision=139e47cbd01b681ee39bdd1338de7d3b83c77871&framework=2&filter=buildbot&showOnlyImportant=0

You can see that the "build times opt buildbot-r3.xlarge" linux32 result shows low confidence, but clicking through to the subtests shows that the compile and configure subtests have good data with high confidence: https://treeherder.mozilla.org/perf.html#/comparesubtest?originalProject=try&originalRevision=4f7d1609a34d485119d81f1b3e93bb4294fbabc1&newProject=try&newRevision=139e47cbd01b681ee39bdd1338de7d3b83c77871&originalSignature=bbdcb9c1b0ad3e0794eb156c6b2d8df9f255b7e7&newSignature=bbdcb9c1b0ad3e0794eb156c6b2d8df9f255b7e7&framework=2
Possibly a simple checkbox to "show all subtests"?
Priority: -- → P3

:igoldan it turns out this will be useful for the tp6 page load tests, could you look into what it would take to support showing all subtests, or even specific subtests, across all tests? Basically, we want to see how the 'loadtime' subtest is impacted by a try run without needing to click into each test.

Flags: needinfo?(igoldan)
Priority: P3 → P1
Depends on: 1509216
Flags: needinfo?(igoldan)

(In reply to Dave Hunt [:davehunt] [he/him] ⌚️UTC from comment #2)

:igoldan it turns out this will be useful for the tp6 page load tests, could you look into what it would take to support showing all subtests, or even specific subtests, across all tests? Basically, we want to see how the 'loadtime' subtest is impacted by a try run without needing to click into each test.

We first need to land bug 1509216. Once that's done, I'll follow up with an estimate and start working on this feature.

Assignee: nobody → igoldan
Priority: P1 → P3

FYI - the performance/summary endpoint is currently designed to query for either tests or subtests, not both (for performance reasons). If you want to see subtests for multiple tests, you'd need to modify the parent_signature param to accept multiple parent signatures (assuming this is performant - otherwise it'll be n queries, one for each test's subtests), and the query would need to occur after the initial queries on the compare view resolve, in order to get the parent signatures.
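As a rough sketch of what accepting multiple parent signatures might look like from the client side (the `/api/performance/summary/` path and the `buildSummaryQuery` helper here are assumptions for illustration, not actual Treeherder code):

```javascript
// Hypothetical sketch: building a performance/summary request that
// passes several parent signatures in one query. Today the endpoint
// accepts a single parent_signature, so this assumes it were extended
// to take a repeated parameter.
function buildSummaryQuery(baseUrl, parentSignatures, params = {}) {
  const query = new URLSearchParams(params);
  for (const sig of parentSignatures) {
    // Repeat the param once per parent test's signature.
    query.append('parent_signature', sig);
  }
  return `${baseUrl}/api/performance/summary/?${query.toString()}`;
}
```

For example, two signatures and `framework=2` would produce a single URL with `parent_signature` repeated twice, rather than n separate requests.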

Priority: P3 → P2
Priority: P2 → P3

(In reply to Joel Maher ( :jmaher ) (UTC-4) from comment #1)

Possibly a simple checkbox to "show all subtests"?

Yes, I like this idea.

FYI - the performance/summary endpoint is currently designed to query for either tests or subtests, not both (for performance reasons). If you want to see subtests for multiple tests, you'd need to modify the parent_signature param to accept multiple parent signatures (assuming this is performant - otherwise it'll be n queries, one for each test's subtests), and the query would need to occur after the initial queries on the compare view resolve, in order to get the parent signatures.

Indeed, these are the main aspects to consider when implementing this. Basically, we'd chain some promises that request from this API. The first promise would fetch the big tests. Then, if the "show all subtests" checkbox is checked, we'd request from the same API (with different params) in digestible batches until we've gone through all the big tests. Maybe 2-3 big tests at a time.

In terms of UX, showing a spinner under each big test would let the user know the UI is still loading.
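The promise chaining described above might look roughly like this (`fetchSummary` is a hypothetical helper wrapping the performance/summary API, and the data shapes and default batch size are illustrative only, mirroring the "2-3 big tests at a time" suggestion):

```javascript
// Sketch of fetching subtests for all parent ("big") tests in small
// batches, so the API only sees a few concurrent requests at a time.
// fetchSummary is a hypothetical async helper that returns the
// subtest results for one parent signature.
async function fetchAllSubtests(fetchSummary, parentTests, batchSize = 3) {
  const results = [];
  for (let i = 0; i < parentTests.length; i += batchSize) {
    const batch = parentTests.slice(i, i + batchSize);
    // Fire one request per test in the batch, and wait for the whole
    // batch to resolve before starting the next one.
    const responses = await Promise.all(
      batch.map((test) => fetchSummary({ parent_signature: test.signature })),
    );
    results.push(...responses.flat());
  }
  return results;
}
```

The UI could then render a spinner under each big test until its batch resolves.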

Type: defect → task


What would be preferable is for each test to have its own 'show subtests' button, rather than one checkbox that selects all tests. This would reduce the load on the database by only fetching what is explicitly needed.

(In reply to Sarah Clements [:sclements] from comment #7)

What would be preferable is for each test to have its own 'show subtests' button, rather than one checkbox that selects all tests. This would reduce the load on the database by only fetching what is explicitly needed.

One use case for this is to see how a change affects the page load metrics across multiple tests (sites). For example, using this compare view we might want to filter to 'tp6', hide incomparable results, and then show all (or maybe even specific) subtests across all tests.

A performance improvement for the loadtime subtest on amazon might cause a regression for reddit. Whilst the geomean can demonstrate this, our engineers are often focused on specific subtest metrics. By showing these subtests and promoting the compare view, we might reduce the chance of being surprised by regressions to our key metrics.

Would it make sense to implement it like the alerts view, where each test name/header has a 'show all subtests' checkbox and each platform of a test has its own checkbox? Subtests would then be fetched for all platforms of a test, or for selected platforms only.

And rather than trying to show subtests on the main compare view after the user selects 'show subtests' for specific tests, what about having a 'compare subtests' button that takes you to the compare subtests view? This might be cleaner than showing subtests on the main compare view page (if that was the original concept), and would eliminate the need to click into specific subtests for each platform/test. So this would essentially be a customizable subtests view.

(In reply to Sarah Clements [:sclements] from comment #9)

Would it make sense to implement it like the alerts view, where each test name/header has a 'show all subtests' checkbox and each platform of a test has its own checkbox? Subtests would then be fetched for all platforms of a test, or for selected platforms only.

Are you referring to the Graphs view's 'Include subtests' checkbox? The Alerts view doesn't have a 'show all subtests' feature or anything similar.

And rather than trying to show subtests on the main compare view after the user selects 'show subtests' for specific tests, what about having a 'compare subtests' button that takes you to the compare subtests view? This might be cleaner than showing subtests on the main compare view page (if that was the original concept), and would eliminate the need to click into specific subtests for each platform/test. So this would essentially be a customizable subtests view.

I like the idea of a customizable subtests view. Basically, developers inspecting the Compare view results would need to open at most 2 views: one for all tests and another for all subtests.

I see that the Compare subtests view (e.g.) already has a 'Show all tests and platforms' hyperlink at the top-left.

If we added a similar hyperlink named 'Show all associated subtests and platforms' to the Compare tests view (e.g.), that would be great!

This will likely imply some extra changes to the Compare subtests view, as it currently fetches the subtests for a single parent signature, which limits the results to a single platform (Windows, Linux, OSX or Android). There would also be UI/UX changes.

(In reply to Ionuț Goldan [:igoldan], Performance Sheriff from comment #10)

(In reply to Sarah Clements [:sclements] from comment #9)

Would it make sense to implement it like the alerts view, where each test name/header has a 'show all subtests' checkbox and each platform of a test has its own checkbox? Subtests would then be fetched for all platforms of a test, or for selected platforms only.

Are you referring to the Graphs view's 'Include subtests' checkbox? The Alerts view doesn't have a 'show all subtests' feature or anything similar.

I did mean the Alerts view, but it was poorly worded. :) For any alert summary, you can check that checkbox to select all alerts, or you can select individual alerts. I think this would be an optimal way to select only the subtests that are needed (in the compare view) to show in a customized subtests view. There'd then be a button/link named 'Show selected subtests' to take the user to that view.

We weren't able to get to this in Q3, and have already planned Q4 work. Ionut, do you think this should take priority over any of the Q4 work we have planned, or should we consider this for 2020?

No longer blocks: 1565516
Flags: needinfo?(igoldan)
Blocks: 1565516
No longer blocks: 1568462

(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #13)

We weren't able to get to this in Q3, and have already planned Q4 work. Ionut, do you think this should take priority over any of the Q4 work we have planned, or should we consider this for 2020?

We should consider this for 2020, as it requires quite a bit of effort to implement.

Flags: needinfo?(igoldan)
Assignee: igoldan → nobody
Type: task → enhancement

This is much older than 18 months, so I'm closing it as INCOMPLETE (but it has been recorded for our future roadmaps).

Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → INCOMPLETE

There has been a lot of interest in this, so I'd prefer to keep it open so we don't lose sight of it.

Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---

@beatrice: is this something we'll support in PerfCompare? If not, perhaps we should file a new ticket and close this with a link.

Flags: needinfo?(bacasandrei)

No, this was not previously discussed for PerfCompare. I've created a new ticket for it in Jira: https://mozilla-hub.atlassian.net/browse/PCF-279.

Flags: needinfo?(bacasandrei)

Closing as WONTFIX for Perfherder, but it's being tracked for PerfCompare.

Status: REOPENED → RESOLVED
Closed: 1 year ago
Resolution: --- → WONTFIX