Open
Bug 1298113
Opened 8 years ago
Updated 2 years ago
Revive tracking memory usage of DevTools
Categories
(DevTools :: General, defect, P3)
DevTools
General
Tracking
(Fission Milestone: Future)
NEW
People
(Reporter: intermittent-bug-filer, Unassigned)
References
(Blocks 1 open bug)
Details
(Keywords: intermittent-failure, leave-open, Whiteboard: dt-fission-future)
Attachments
(4 files, 1 obsolete file)
(deleted), text/x-review-board-request (jryans: review+, wlach: feedback+)
(deleted), patch
(deleted), patch
(deleted), patch (jryans: review+)
Updated•8 years ago
Component: Developer Tools: Memory → Developer Tools
Comment hidden (Intermittent Failures Robot)
It seems like we need to update the test or slim down memory usage. :ochameau, what do you think?
Flags: needinfo?(poirot.alex)
Priority: -- → P2
Comment 3•8 years ago
Personally, I think this test is not carrying its weight. We don't have any way to identify which changes are increasing the memory footprint, just some unlucky stiff who gets blamed when they land the straw that breaks the camel's back.
We should replace this with some kind of perfherder/DAMP thing that can actually track regressions down to the push granularity.
Comment 4•8 years ago
The test doesn't catch small regressions over time (less than ~5%), but neither does Talos. It definitely catches big mistakes, though.
Having said that, I also dislike the fact that it blames random platform developers who just happen to trip the limit. Unfortunately, I don't think DAMP could probe memory. I imagine it can only measure one precise probe, which is the time to load a toolbox under various conditions.
Brian, could you tell us if we can probe memory usage in DAMP, or would it require yet another Talos task?
Flags: needinfo?(poirot.alex) → needinfo?(bgrinstead)
Comment 5•8 years ago
Comment 6•8 years ago
If tweaking DAMP to measure memory is hard, just bump that threshold to something high enough to also work on Linux:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=5b0e358855e4&selectedJob=26667022
Comment 7•8 years ago
(In reply to Alexandre Poirot [:ochameau] from comment #4)
> The test doesn't catch small regressions over time (like <5%), which talos
> doesn't do either. But definitely catch big mistakes.
Talos does catch small regressions; we usually just choose not to act on them.
> Having said that I also dislike the fact that it blames random platform
> developers which just trigger the limit. Unfortunately, I don't think DAMP
> could probe the memory. I imagine it can only measure one precise probe
> which is the time to load a toolbox under various conditions.
>
> Brian, Could you tell us if we can probe the memory usage in DAMP or would
> it require yet another talos task?
There's an option that might be even easier. If you export a string in the logs called PERFHERDER_DATA you can get perfherder tracking on any kind of data from any test: http://wrla.ch/blog/2015/11/perfherder-onward/. Then rather than setting a limit that causes the test to fail, we can track this over time and get alerts as it changes. Here's an example test that does that: https://dxr.mozilla.org/mozilla-central/source/mobile/android/tests/browser/chrome/test_awsy_lite.html. I think this might be a good case for using this feature instead of talos.
If we decide we'd rather fold this into talos: we can add a probe for any kind of number so I don't see why not (assuming that we can measure the memory in an addon + e10s context). We would need to normalize it so that the measurement had the right kind of weighting along with other probes (which are in ms). So we would have a couple of choices:
1. Make the number very small relative to other probes. In this case it would never cause alerts but we can manually track it and graph it over time
2. Make the number about the same relative to other probes so that it would cause an alert if the number jumped up
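As a concrete illustration of the PERFHERDER_DATA mechanism described above (a sketch only: the framework, suite, and subtest names here are invented placeholders, and the real payload schema may carry more fields), a test could build and log the line like this:

```javascript
// Sketch: emit a PERFHERDER_DATA line from a test so Perfherder can scrape it
// from the log. All names and values below are illustrative, not real data.
const payload = {
  framework: { name: "awsy" },
  suites: [
    {
      name: "DevTools memory usage",
      value: 152, // e.g. total footprint in kB
      subtests: [{ name: "after DebuggerServer.init()", value: 152 }],
    },
  ],
};

// Perfherder looks for log lines that start with "PERFHERDER_DATA:".
const line = "PERFHERDER_DATA: " + JSON.stringify(payload);
console.log(line);
```

The key point is that only the log-line prefix matters; the harness producing it (mochitest, Talos, or anything else) is irrelevant as long as the line reaches the job log.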
Flags: needinfo?(bgrinstead)
Comment 8•8 years ago
(In reply to Brian Grinstead [:bgrins] from comment #7)
> There's an option that might be even easier. If you export a string in the
> logs called PERFHERDER_DATA you can get perfherder tracking on any kind of
> data from any test: http://wrla.ch/blog/2015/11/perfherder-onward/.
Oh! I missed that, I will give that a try. Definitely easier than working around Talos!
Comment 9•8 years ago
(In reply to Alexandre Poirot [:ochameau] from comment #8)
> (In reply to Brian Grinstead [:bgrins] from comment #7)
> > There's an option that might be even easier. If you export a string in the
> > logs called PERFHERDER_DATA you can get perfherder tracking on any kind of
> > data from any test: http://wrla.ch/blog/2015/11/perfherder-onward/.
>
> Oh! I missed that, I will give that a try. Definitely easier than working
> around Talos!
Yeah, if you do this use the "awsy" framework. We're already tracking are-we-slim-yet and android memory measurements with that framework, so this would be a natural fit:
https://treeherder.mozilla.org/perf.html#/alerts?status=0&framework=4&page=1
Comment 10•8 years ago
Hmm. I tried to emit PERFHERDER_DATA from xpcshell, but I don't think it will work on Treeherder, as Treeherder doesn't show per-test output for xpcshell tests, only the final test result:
10:28:23 INFO - TEST-PASS | xpcshell.ini:toolkit/mozapps/extensions/test/xpcshell/test_compatoverrides.js | took 11480ms
10:28:23 INFO - TEST-START | xpcshell.ini:toolkit/mozapps/extensions/test/xpcshell/test_corruptfile.js
10:28:29 INFO - TEST-PASS | xpcshell.ini:toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js | took 16194ms
...
I'll try to come up with a mochitest, but memory is much fuzzier there, as we load Firefox and various random things. Maybe mochitest-chrome would be more stable.
Comment 11•8 years ago
Does it work if you print out the strings with `do_print`?
Comment 12•8 years ago
As long as PERFHERDER_DATA is in the log file (see talos jobs for examples), Treeherder/Perfherder should pick it up.
Comment hidden (Intermittent Failures Robot)
Comment 14•8 years ago
Comment hidden (mozreview-request)
Comment 16•8 years ago
It wasn't easy to convert it to a mochitest because of the noise...
I had to use the horrible processNextEvent hack to get something stable locally; hopefully it will be stable on build machines as well!
Comment 17•8 years ago
xpcshell is a dead end, as test output is only shown when a test fails.
There is no equivalent of SimpleTest.requestCompleteLog() for xpcshell.
Comment 18•8 years ago
Comment hidden (mozreview-request)
Updated•8 years ago
Assignee: nobody → poirot.alex
Comment 20•8 years ago
Comment on attachment 8787184 [details]
Bug 1298113 - Convert devtools xpcshell memory assertion into a mochitest posting data to perfherder.
Here is a try build of it.
Do not hesitate to f?/r? any additional peer!
It seems to work fine. I'm wondering if we can see the intermediate values in Perfherder? It looks like we can only see the final one, which is not handy for this test, which tracks multiple steps.
Attachment #8787184 -
Flags: feedback?(wlachance)
Comment 21•8 years ago
Hmm. The try push just got results for many runs, and it looks unstable, going from 2M to 5M.
My forceGC+processNextEvent trick doesn't seem to be enough.
I suspect something in Firefox is running while the test executes and allocating a bunch of random things...
Comment hidden (Intermittent Failures Robot)
Comment 23•8 years ago
mozreview-review
Comment on attachment 8787184 [details]
Bug 1298113 - Convert devtools xpcshell memory assertion into a mochitest posting data to perfherder.
https://reviewboard.mozilla.org/r/76024/#review74566
Looks like a nice approach to me. Hopefully we can stabilize the values enough to make this workable.
Attachment #8787184 -
Flags: review?(jryans) → review+
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment 26•8 years ago
Comment 27•8 years ago
Comment on attachment 8786730 [details] [diff] [review]
Bump DebuggerServer.init memory threshold in test.
I'm unable to stabilize the memory probe in a mochitest, so I think I will just bump the existing xpcshell threshold for now...
Attachment #8786730 -
Flags: review?(jryans)
Comment 28•8 years ago
mozreview-review
Comment on attachment 8787184 [details]
Bug 1298113 - Convert devtools xpcshell memory assertion into a mochitest posting data to perfherder.
https://reviewboard.mozilla.org/r/76024/#review75066
::: devtools/server/tests/browser/browser_memory-footprint.js:77
(Diff revision 2)
> + let footprint = (gMgr.residentUnique - refMemory) / 1024;
> + let PERFHERDER_DATA = {
> + framework: { name: "awsy" },
> + suites: [{
> + subtests,
> + name: "Devtools memory usage", value: footprint
IMO it's a bit confusing the way that the subtests for this correspond to memory usage after different actions are taken. Unless there's a big benefit to measuring these, I'd probably just check memory usage after listTabs is called.
Comment 29•8 years ago
NI'ing Eric, who might have some insight on how to stabilize the memory numbers. Eric: see comment 21.
If we can't figure out how to make the numbers stable from mochitest, maybe we should consider adding the capability of outputting perfherder data from an xpcshell test. I suspect this won't be the only case where we might want to do that.
Flags: needinfo?(erahm)
Updated•8 years ago
Attachment #8787184 -
Flags: feedback?(wlachance) → feedback+
Comment 30•8 years ago
(In reply to William Lachance (:wlach) from comment #29)
> NI'ing Eric, who might have some insight on how to stabilize the memory
> numbers. Eric: see comment 21.
>
> If we can't figure out how to make the numbers stable from mochitest, maybe
> we should consider adding the capability of outputting perfherder data from
> an xpcshell test. I suspect this won't be the only case where we might want
> to do that.
If you want to try the mochitest route (we effectively do this w/AWSY) you probably want to do something like:
- Load test, wait 30s
- Force GC [1], maybe wait some more
- Get your baseline memory measurement
- Do your thing, wait 30s
- Force GC, maybe wait some more
- Get your memory-used-by-devtools measurement
If someone wants to implement it, I'd be okay with adding an additional AWSY test (as in, in the AWSY suite) for this.
[1] https://github.com/mozilla/areweslimyet/blob/master/benchtester/test_memory_usage.py#L190-L200
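The sequence described above can be sketched as a generic async function. This is only an illustration: the helper names (settle, forceGC, readResidentUnique, exercise) are invented placeholders that a real test would wire up to the harness (e.g. to nsIMemoryReporterManager's residentUnique and the GC-forcing loop linked in [1]).

```javascript
// Sketch of the AWSY-style measurement sequence from comment 30.
// Every helper here is an injected placeholder, not a real API.
async function measureFootprintKB({ settle, forceGC, readResidentUnique, exercise }) {
  await settle();                   // load the test page, wait ~30s
  await forceGC();                  // force GC/CC, maybe wait some more
  const baseline = readResidentUnique();
  await exercise();                 // do the DevTools work being measured
  await settle();                   // wait ~30s again so allocations settle
  await forceGC();
  const after = readResidentUnique();
  return (after - baseline) / 1024; // footprint in kB
}
```

The long waits plus forced GC on both sides of the measurement are what give the baseline and final numbers a chance to converge; the later comments in this bug suggest that in practice the mochitest environment still had too much background activity for this to stabilize.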
Flags: needinfo?(erahm)
Comment on attachment 8786730 [details] [diff] [review]
Bump DebuggerServer.init memory threshold in test.
Review of attachment 8786730 [details] [diff] [review]:
-----------------------------------------------------------------
Shouldn't we remove this test instead of bumping the level? It seems like we've agreed there's not much value in this existing test...?
Attachment #8786730 -
Flags: review?(jryans)
Comment 32•8 years ago
Comment 33•8 years ago
Comment 34•8 years ago
Looks unstable locally and reports negative numbers.
https://treeherder.mozilla.org/#/jobs?repo=try&revision=895a450190a3
Comment 35•8 years ago
Comment 36•8 years ago
Ok, here is just the removal of the xpcshell test.
Attachment #8788965 -
Flags: review?(jryans)
Updated•8 years ago
Attachment #8786730 -
Attachment is obsolete: true
Comment on attachment 8788965 [details] [diff] [review]
Remove devtools xpcshell test asserting for memory consumption
Review of attachment 8788965 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks! Please either file a new bug or mark this one leave-open so we can continue working out what the new approach should be (perfherder from mochitest, awsy, etc.).
Attachment #8788965 -
Flags: review?(jryans) → review+
Comment 38•8 years ago
Eric, see comment 33 and comment 34. Two new patches, with 30s pauses before and after, but it looks like there is still significant work happening after 30s. Or there is something special about DevTools... I end up with even more memory being freed. I tried various very hacky things without any luck.
xpcshell would be easier, as we better control what is being loaded, but changing the test harness output sounds just as complex :/
Flags: needinfo?(erahm)
Keywords: leave-open
Comment 39•8 years ago
https://hg.mozilla.org/integration/fx-team/rev/054d2453311b5adf511c5cfaa59e27db955405f7
Bug 1298113 - Remove devtools xpcshell test asserting for memory consumption. r=jryans
Comment 40•8 years ago
bugherder
Comment hidden (Intermittent Failures Robot)
Comment 42•8 years ago
(In reply to Alexandre Poirot [:ochameau] from comment #38)
> Eric, See comment 33 and comment 34. Two new patches, with 30s pauses before
> and after, but it looks like there is still significant things being done
> after 30s. Or there is something special about devtools... I end up with
> even more memory being freed. I tried various very hacky things without any
> luck.
>
> xpcshell would be easier as we better control what is being loaded, but it
> sounds as complex to change the test harness output :/
Sorry for the delay here, yes I think xpcshell is the best way to go if you want consistent numbers.
Flags: needinfo?(erahm)
Comment 43•8 years ago
(In reply to Eric Rahm [:erahm] from comment #42)
> (In reply to Alexandre Poirot [:ochameau] from comment #38)
> > Eric, See comment 33 and comment 34. Two new patches, with 30s pauses before
> > and after, but it looks like there is still significant things being done
> > after 30s. Or there is something special about devtools... I end up with
> > even more memory being freed. I tried various very hacky things without any
> > luck.
> >
> > xpcshell would be easier as we better control what is being loaded, but it
> > sounds as complex to change the test harness output :/
>
> Sorry for the delay here, yes I think xpcshell is the best way to go if you
> want consistent numbers.
I was expecting something from you about xpcshell support in Perfherder.
I don't have the bandwidth to look at that. Are you planning to support it anytime soon?
Or is there anyone I could ping about that?
The issue is that the xpcshell test suite doesn't print anything unless a test fails.
Comment 44•8 years ago
(In reply to Alexandre Poirot [:ochameau] from comment #43)
> (In reply to Eric Rahm [:erahm] from comment #42)
> > (In reply to Alexandre Poirot [:ochameau] from comment #38)
> > > Eric, See comment 33 and comment 34. Two new patches, with 30s pauses before
> > > and after, but it looks like there is still significant things being done
> > > after 30s. Or there is something special about devtools... I end up with
> > > even more memory being freed. I tried various very hacky things without any
> > > luck.
> > >
> > > xpcshell would be easier as we better control what is being loaded, but it
> > > sounds as complex to change the test harness output :/
> >
> > Sorry for the delay here, yes I think xpcshell is the best way to go if you
> > want consistent numbers.
>
> I was expecting something from you about xpcshell support on perfherder.
> I don't have the bandwidth to look at that. Are you planning to support it
> anytime soon?
> Or is there anyone I could ping about that?
> The issue being that xpcshell test suite doesn't print anything unless a
> test fail.
Ah okay, that's maybe a wlach question.
Flags: needinfo?(wlachance)
Comment 45•8 years ago
(In reply to Eric Rahm [:erahm] from comment #44)
> (In reply to Alexandre Poirot [:ochameau] from comment #43)
> > (In reply to Eric Rahm [:erahm] from comment #42)
> > > (In reply to Alexandre Poirot [:ochameau] from comment #38)
> > > > Eric, See comment 33 and comment 34. Two new patches, with 30s pauses before
> > > > and after, but it looks like there is still significant things being done
> > > > after 30s. Or there is something special about devtools... I end up with
> > > > even more memory being freed. I tried various very hacky things without any
> > > > luck.
> > > >
> > > > xpcshell would be easier as we better control what is being loaded, but it
> > > > sounds as complex to change the test harness output :/
> > >
> > > Sorry for the delay here, yes I think xpcshell is the best way to go if you
> > > want consistent numbers.
> >
> > I was expecting something from you about xpcshell support on perfherder.
> > I don't have the bandwidth to look at that. Are you planning to support it
> > anytime soon?
> > Or is there anyone I could ping about that?
> > The issue being that xpcshell test suite doesn't print anything unless a
> > test fail.
>
> Ah okay, that's maybe a wlach question.
Actually, I don't really know either. :( jgriffin, who on our team would best be able to answer how hard it would be to get xpcshell to print something to standard out when tests don't fail?
Flags: needinfo?(wlachance) → needinfo?(jgriffin)
Comment 46•8 years ago
Ahal, can you answer comment #45?
Flags: needinfo?(jgriffin) → needinfo?(ahalberstadt)
Comment 47•8 years ago
They do print stuff on test pass already. Adding additional logging to stdout should be as easy as calling self.log.info(...).
The problem with xpcshell is that each test runs in parallel in a separate process, so TEST-START/TEST-END messages don't come one after another. We could potentially create a new formatter that displays the output better if necessary.
Flags: needinfo?(ahalberstadt)
Alex, any interest in reviving this work? This would be a good path to tracking DevTools per-process memory usage if we can get it working.
For xpcshell, what about trying to force the test in sequential mode via the `run-sequentially`[1] manifest flag? Perhaps that will force the log to appear for Perfherder data?
[1]: https://searchfox.org/mozilla-central/search?q=sequentially&path=xpcshell
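For reference, the flag mentioned above is set per test in the xpcshell manifest. A sketch (the test filename and reason string here are illustrative, not the actual manifest entry):

```ini
[test_memory_footprint.js]
run-sequentially = Measures memory; tests running in parallel add too much noise.
```

The value is a free-form reason string; the harness only checks for the flag's presence when deciding to run the test outside the parallel pool.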
Flags: needinfo?(poirot.alex)
Blocks: 1439048
Comment 49•7 years ago
The xpcshell test was already using run-sequentially to reduce noise in the memory measurement.
It looks like it doesn't change xpcshell's stdout behavior:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=9bcc7295ec50cb90d7d216394793be061ce0d366&selectedJob=170326419
https://taskcluster-artifacts.net/BcBcISgSQdC6kem-qM5elg/0/public/logs/live_backing.log
(look for test_memory.js)
I tried various ways to print something to stdout, but nothing is displayed.
I think everything is filtered out by the Python harness running the xpcshell tests.
Flags: needinfo?(poirot.alex)
Comment 50•7 years ago
We could use DAMP to record memory consumption, but I'm afraid we would have the same issue as mochitests, where the measurement has a lot of variance.
Updated•7 years ago
Blocks: dt-fission
No longer blocks: 1439048
Updated•6 years ago
Product: Firefox → DevTools
Summary: Intermittent devtools/server/tests/unit/test_memory_footprint.js | init_server - [init_server : 26] Footprint after DebuggerServer.init() is 152 kB (should be less than 150 kB). → Revive tracking memory usage of DevTools
Depends on: 1442361
Updated•6 years ago
Whiteboard: dt-fission
Updated•5 years ago
Blocks: dt-fission-framework
Updated•5 years ago
No longer blocks: dt-fission-framework
Updated•5 years ago
Blocks: dt-fission-framework
Updated•5 years ago
No longer blocks: dt-fission
Updated•5 years ago
Priority: P2 → P3
Updated•5 years ago
Whiteboard: dt-fission → dt-fission-reserve
Comment 51•5 years ago
Tracking Fission DevTools bugs for Fission Nightly (M6)
Fission Milestone: --- → M6
Updated•5 years ago
Assignee: poirot.alex → nobody
Comment 52•4 years ago
dt-fission-reserve bugs do not need to block Fission Nightly (M6).
Fission Milestone: M6 → ---
Comment 53•4 years ago
Tracking dt-fission-reserve bugs for Fission MVP until we know whether they really need to block Fission or not.
Fission Milestone: --- → MVP
Comment 54•4 years ago
Moving old "dt-fission-reserve" bugs to "dt-fission-future" because they don't block Fission MVP.
Fission Milestone: MVP → Future
Whiteboard: dt-fission-reserve → dt-fission-future
Updated•2 years ago
Severity: normal → S3