Closed Bug 975702 (opened 11 years ago, closed 4 years ago)
[OS.File] Port OS.File to C++

Categories: Toolkit Graveyard :: OS.File (enhancement)
Tracking: Not tracked
Status: RESOLVED DUPLICATE of bug 1231711
People: Reporter: Yoric, Unassigned
Whiteboard: [MemShrink:P2][Async]

Attachments (2 files):
- about:memory report (deleted), text/plain
- JS allocation-logging patch (deleted)

Description:
The OS.File worker thread uses ~1.7 MB of memory (possibly a bit less now that bug 801376 has landed). We could reduce this to almost 0 by porting all of OS.File to C++.
We should take measurements once the JS team's work to make workers lighter is done.
Comment 2 • 11 years ago
Yoric, do you have measurements since bhackett landed bug 964057 and bug 964059?
Reporter | Comment 3 • 11 years ago
No, no measurements yet.
Reporter | Comment 4 • 11 years ago
Actually, on my Nightly, I see OS.File using 5.5 MB, which is not very good.
Updated • 11 years ago
Whiteboard: [MemShrink][Async] → [MemShrink:P2][Async]
Comment 5 • 11 years ago
(In reply to David Rajchenbach Teller [:Yoric] (please use "needinfo?") from comment #4)
> Created attachment 8390754 [details]
> about:memory
>
> Actually, on my Nightly, I see OS.File using 5.5Mb, which is not very good.
Can you give a breakdown of where that 5.5MB is going?
Flags: needinfo?(dteller)
Comment 6 • 11 years ago
(In reply to Nathan Froyd (:froydnj) from comment #5)
> Can you give a breakdown of where that 5.5MB is going?
The comment you are replying to is for an attachment that says where the memory is going. :)
Flags: needinfo?(dteller)
Comment 7 • 11 years ago
(In reply to Andrew McCreight [:mccr8] from comment #6)
> The comment you are replying to is for an attachment that says where the
> memory is going. :)
Doh, I was not paying attention. Thanks for pointing that out.
Comment 8 • 11 years ago
So, looking at the about:memory report, the following things seem suspicious to me:
│ │ │ │ ├──112,672 B (00.01%) -- string(length=32, copies=1006, "DirectoryIterator_prototype_next")
│ │ │ │ │ ├───80,480 B (00.01%) ── malloc-heap
│ │ │ │ │ └───32,192 B (00.00%) ── gc-heap
│ │ │ │ └───24,912 B (00.00%) -- string(length=4, copies=519, "stat")
│ │ │ │ ├──16,608 B (00.00%) ── gc-heap
│ │ │ │ └───8,304 B (00.00%) ── malloc-heap
These two strings together account for about half of the string memory. We shouldn't really have that many copies of them hanging around, though maybe the GC hasn't swept them up for us. Maybe we could make the main-thread-to-worker communication mechanism more efficient by not using strings.
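For illustration, a hedged sketch of one way to do that (the opcode table and all names below are hypothetical, not the actual OS.File protocol; only Scheduler.post comes from the existing code): agree on small integer opcodes once on both sides, and post integers instead of strings so structured clone never has to copy the method names.

  // Hypothetical opcodes, shared by the main thread and the worker.
  const OP_STAT = 0;
  const OP_DIRECTORY_ITERATOR_NEXT = 1;

  // Main thread: post a small integer instead of a fresh string.
  // ("path" is a stand-in argument.)
  Scheduler.post(OP_STAT, [path]);

  // Worker: dispatch on the integer it received.
  function dispatch(opcode, args) {
    switch (opcode) {
      case OP_STAT: return doStat(args);                    // stand-in
      case OP_DIRECTORY_ITERATOR_NEXT: return doNext(args); // stand-in
    }
  }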
│ │ │ ├──1,569,168 B (00.15%) ── unused-gc-things
The description of this, "Empty GC thing cells within non-empty arenas", makes it sound like this is all just wasted space.
│ │ ├──1,982,464 B (00.20%) ++ gc-heap
│ │ ├────679,680 B (00.07%) ++ runtime
Both of these are pretty big, especially if they are post-worker-improvements. :(
Comment 9 • 11 years ago
A GC heap dump would be useful for further analysis. I'm not sure if the button in about:memory will trigger a log for all processes, or if you should just run with MOZ_CC_LOG_ALL=1 MOZ_CC_LOG_THREAD=worker and find the relevant log. My heap log scripts, stringy.py and census.py, should give you an overview of what is going on, or you can jump right into looking for one of the strings with a zillion copies and seeing what holds it alive.
Comment 10 • 11 years ago
(The scripts live in the g/ directory of my heap graph repo, https://github.com/amccreight/heapgraph, because "gc" conflicted with some existing Python definition.)
Comment 11 • 11 years ago
It also wouldn't be too hard to modify the GC heap dumper to include more explicit information about wasted space in each arena, as there is already some logging about the size and allockind of an arena:
# arena allockind=17 size=32
Lemme file a bug about that. But figuring out why there are so many strings is probably the low-hanging fruit here.
Comment 12 • 11 years ago
> │ │ │ │ ├──112,672 B (00.01%) -- string(length=32, copies=1006,
> "DirectoryIterator_prototype_next")
This is from http://dxr.mozilla.org/mozilla-central/source/toolkit/components/osfile/modules/osfile_async_front.jsm#1234. The specific line looks like this:
> let promise = Scheduler.post("DirectoryIterator_prototype_next", [iterator]);
This is inside what looks like a hot function. Simply hoisting that string into a global variable should be enough to prevent it from being reinstantiated each time that function is called.
> │ │ │ │ └───24,912 B (00.00%) -- string(length=4, copies=519,
> "stat")
A similar story here: http://dxr.mozilla.org/mozilla-central/source/toolkit/components/osfile/modules/osfile_async_front.jsm#744.
Yoric, can you try this hoisting and see if it saves the ~136,000 bytes? Should be a decent win for two tiny changes. Comments explaining things would be good.
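For reference, a minimal sketch of the suggested hoisting (the constant name is illustrative; the call itself is the one quoted above):

  // Hoisted to module scope so the same string object is reused on every call.
  const DIRECTORY_ITERATOR_NEXT = "DirectoryIterator_prototype_next";

  // ...inside the hot function:
  let promise = Scheduler.post(DIRECTORY_ITERATOR_NEXT, [iterator]);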
Flags: needinfo?(dteller)
Comment 13 • 11 years ago
> │ │ │ ├──1,569,168 B (00.15%) ── unused-gc-things
>
> The description of this, "Empty GC thing cells within non-empty arenas",
> makes it sound like this is all just wasted space.
It's probably fragmentation due to allocating lots of short-lived objects.
I've attached a small patch that modifies the JS engine to print out the location (in JS code) of every JS object allocation. I've used it effectively to optimize pdf.js; it could be useful for OS.File too.
Here's how to use it:
- Apply patch, rebuild Firefox.
- Run Firefox, piping stderr to file.
- Once finished, do this:
> cat <file> | sort | uniq -c | sort -n -r
That will give you a frequency list of the allocation points. If you get lots of "<internal>" entries, one cause that I know of is structured cloning, i.e. when objects are passed from a worker to the main thread.
Some object allocations are necessary, but at least for pdf.js there were lots of cases where a temporary object was allocated every time around a hot loop. It was easy to hoist the creation of that object outside the loop and then reuse it on each iteration (e.g., in the case of arrays, by setting |length| to zero).
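A minimal sketch of that pattern (items, fill, and consume are hypothetical stand-ins):

  // Before: a fresh array is allocated on every iteration of a hot loop.
  for (let i = 0; i < items.length; i++) {
    let scratch = [];
    fill(scratch, items[i]);
    consume(scratch);
  }

  // After: one array, emptied and reused on each iteration.
  let scratch = [];
  for (let i = 0; i < items.length; i++) {
    scratch.length = 0; // empties the array without a new allocation
    fill(scratch, items[i]);
    consume(scratch);
  }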
Comment 14 • 11 years ago
Hmm, the patch works for pdf.js, but it causes deadlocks when I load other pages like nytimes.com. Not sure how to fix it, sorry.
Reporter | Comment 15 • 11 years ago
I'm afraid I won't have time to do these tests in the near future.
Flags: needinfo?(dteller)
Comment 16 • 4 years ago
Marking this as a dupe since all the work is happening on the other bug.
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → DUPLICATE
Updated • 3 years ago
Type: defect → enhancement
Updated • 1 year ago
Product: Toolkit → Toolkit Graveyard