Closed Bug 1230571 - Opened 9 years ago, Closed 8 years ago

4% Win7 tp5o private bytes regression on Mozilla-Inbound (v.45) on December 03, 2015 from push 65f787c9fd4e

Categories

(Core Graveyard :: Plug-ins, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(e10s?, firefox45 affected)

RESOLVED WONTFIX

People

(Reporter: jmaher, Assigned: dvander)

References

Details

(Keywords: perf, regression, Whiteboard: [talos_regression][e10s])

Talos has detected a Firefox performance regression from your commit 65f787c9fd4e5ed7013c32f26ae3f6dfcea88bd8 in bug 1217665.  We need you to address this regression.

This is a list of all known regressions and improvements related to your bug:
http://alertmanager.allizom.org:8080/alerts.html?rev=65f787c9fd4e5ed7013c32f26ae3f6dfcea88bd8&showAll=1

On the page above you can see the Talos alert for each affected platform, as well as a link to a graph showing the history of scores for this test. There is also a link to a Treeherder page showing the Talos jobs in a pushlog format.

To learn more about the regressing test, please see: https://wiki.mozilla.org/Buildbot/Talos/Tests#tp5

Reproducing and debugging the regression:
If you would like to re-run this Talos test on a potential fix, use try with the following syntax:
try: -b o -p win32 -u none -t tp5o  # add "mozharness: --spsProfile" to generate profile data

To run the test locally and do a more in-depth investigation, first set up a local Talos environment:
https://wiki.mozilla.org/Buildbot/Talos/Running#Running_locally_-_Source_Code

Then run the following command from the directory where you set up Talos:
talos --develop -e <path>/firefox -a tp5o

Making a decision:
Since you are the patch author, we need your feedback to help us handle this regression.
*** Please let us know your plans by Monday, or the offending patch will be backed out! ***

Our wiki page outlines the common responses and expectations:
https://wiki.mozilla.org/Buildbot/Talos/RegressionBugsHandling
Are we creating more devices than before?
This seems to affect Windows 7, both regular and e10s. The e10s alert didn't register for this revision because all Talos e10s tests were broken for about 10 pushes, which included this one. I am collecting more data to see what other tests are affected.

:dvander, can you take the lead here on determining why this is happening and what we should do?
Flags: needinfo?(dvander)
Not sure if I have the right person; the author of the patch is danderson@mozilla.com, so let's get the needinfo correct! Sadly, danderson@mozilla.com is not accepting needinfo requests.
Flags: needinfo?(dvander)
Seems as though :dvander is the right person! I believe I made this same mistake 5 months ago. :dvander, can you please comment on this issue and maybe sort out your Bugzilla/commit email address to avoid confusion in the future :)
Flags: needinfo?(dvander)
Since these patches almost entirely added unused code, I'm guessing the changes to DidComposite are to blame. I'll do some try pushes today to confirm.
Assignee: nobody → dvander
Status: NEW → ASSIGNED
Flags: needinfo?(dvander)
Cool. Let me know how I can help, e.g. if you need help analyzing the try pushes.
Hmm, ts_paint seems to be affected by 5% on Windows 7 as well. I suspect this is the last regression to be associated with this change, so it is good to know the full list :)
From digging into the try history, each of these try pushes builds on the previous one: part 11 was backed out, then the next push kept 11 backed out and also backed out part 10, and so forth.

This means that we can see the cumulative effect of backing these out. From what I can tell, part 11 was backed out first and part 1 last (which makes sense). After doing retriggers on the jobs:
https://treeherder.mozilla.org/#/jobs?repo=try&author=danderson@mozilla.com&filter-searchStr=tp5%20win&fromchange=3b2c7b147679&selectedJob=14457629

This leads me to believe that part 11 is the problem! Looking at the base revisions the try pushes are based on:
https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&revision=5ba77225c957&filter-searchStr=Windows%207%20opt%20Talos%20Performance%20Talos%20tp%20T%28tp%29&selectedJob=2832899

I did some retriggers; looking at the baseline on m-c (Win7 private bytes), we have ~214,000,000 bytes used.

Doing the same for the first push (the backout of part 11):
https://treeherder.mozilla.org/#/jobs?repo=try&revision=6a43db2f5881

and we end up with values more in the ~205,000,000 range.

As a note, we only collect this private bytes information on Windows 7, so this could affect other platforms as well; we just don't collect the memory numbers there.

I assume this information is useful; please let me know what else I can do to help out.
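
As a rough sanity check of those numbers (illustrative arithmetic only, using the approximate values quoted above), the delta lines up with the ~4% alert in the bug summary:

    // Rough regression estimate from the retrigger data above (sketch only).
    double withPatch   = 214000000.0;  // m-c baseline, Win7 private bytes, patch landed
    double withBackout = 205000000.0;  // try push with part 11 backed out
    double pctRegression = (withPatch - withBackout) / withBackout * 100.0;  // ~4.4%, close to the ~4% alert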
(In reply to Joel Maher (:jmaher) from comment #13)
>
> This leads me to believe that part 11 is the problem! Looking at the base
> revisions the try pushes are based on:
> https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&revision=5ba77225c957&filter-searchStr=Windows%207%20opt%20Talos%20Performance%20Talos%20tp%20T%28tp%29&selectedJob=2832899

Thanks, that makes sense. That patch started creating a D3D11 device on versions of Windows where it was previously never created (anything older than Windows 7 SP1 with the Platform Update). This regression is therefore expected: the Win7 Talos machines fall into that category and see the new device, while other versions of Windows see no change.

You mentioned a Linux tps regression as well - does anything in the above try runs point at a likely culprit?
Flags: needinfo?(jmaher)
Collecting more data; we care about tps on e10s. It is easy to see the regression on a graph:
https://treeherder.mozilla.org/perf.html#/graphs?series=[mozilla-central,637a7f061cf5e18c4a14cf10f342b19a345f8e3c,1]&series=[mozilla-inbound,637a7f061cf5e18c4a14cf10f342b19a345f8e3c,1]&series=[fx-team,637a7f061cf5e18c4a14cf10f342b19a345f8e3c,1]

I am looking for where the data shifts from one range to the other:
original: 100-110
new: 105-115
Most likely part 11 caused the tps regression for linux64 e10s:
https://treeherder.mozilla.org/perf.html#/graphs?series=[mozilla-central,637a7f061cf5e18c4a14cf10f342b19a345f8e3c,1]&series=[mozilla-inbound,637a7f061cf5e18c4a14cf10f342b19a345f8e3c,1]&series=[fx-team,637a7f061cf5e18c4a14cf10f342b19a345f8e3c,1]

Prior to this we had no data points above 110, but with part 11 added we have a few points >110. As the ranges overlap, it is hard to know with certainty. Overall, it does look like the culprit.

The question is: what can we do to fix this? Do we need to accept it because a fix is not realistic? Or maybe there is some simple fix to reduce it?
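
For a rough sense of the size of that shift (illustrative arithmetic only, using the approximate ranges quoted above; the overlap is exactly why it is hard to be certain):

    // Midpoint comparison of the quoted tps ranges (sketch only).
    double oldMid = (100.0 + 110.0) / 2.0;                  // 105
    double newMid = (105.0 + 115.0) / 2.0;                  // 110
    double pctShift = (newMid - oldMid) / oldMid * 100.0;   // ~4.8%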
Flags: needinfo?(jmaher) → needinfo?(dvander)
This is now on Aurora. I just did some retriggers on the two try runs; let's see what the compare looks like in half an hour or so:
https://treeherder.mozilla.org/perf.html#/compare?originalProject=try&originalRevision=f31fcfb4c2e8&newProject=try&newRevision=b4bb3a57d509&framework=1
Oh, this fixes the private bytes. The 'tp5o Modified Page List Bytes opt' result is showing a regression due to an outlier.
This is now on Beta.
Milan, any update?
Flags: needinfo?(milan)
In comment #14 I explained that this was expected.
Flags: needinfo?(milan)
On Windows 7 SP1+PU and higher we create a D3D11 content device on startup. Versions of Windows older than this did not, until this patch. Since Win7 Talos does not run on SP1+PU, it sees this change in behavior, whereas other versions of Windows see no change.

Making this lazily initialized is probably not worth the complexity for the benefit, unless we think Win7 pre-SP1 users will be very adversely affected.
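
For reference, a minimal sketch of what lazy initialization could look like; this is not the actual Gecko code, and the names here (EnsureContentDevice, sContentDevice) are hypothetical:

    #include <d3d11.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    static ComPtr<ID3D11Device> sContentDevice;

    // Create the content device on first use instead of at startup, so sessions
    // that never need it do not pay the private-bytes cost up front.
    ID3D11Device* EnsureContentDevice() {
      if (sContentDevice) {
        return sContentDevice.Get();
      }
      D3D_FEATURE_LEVEL level;
      HRESULT hr = D3D11CreateDevice(
          nullptr,                            // default adapter
          D3D_DRIVER_TYPE_HARDWARE,
          nullptr,                            // no software rasterizer module
          D3D11_CREATE_DEVICE_BGRA_SUPPORT,
          nullptr, 0,                         // default feature levels
          D3D11_SDK_VERSION,
          &sContentDevice, &level, nullptr);  // no immediate context needed here
      return SUCCEEDED(hr) ? sContentDevice.Get() : nullptr;
    }

The trade-off described above is exactly this: the lazy path avoids the startup allocation, at the cost of an initialization check (and possible first-use latency) on every caller.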
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX
Product: Core → Core Graveyard