Closed Bug 933398 Opened 11 years ago Closed 8 years ago
avoid full copies when loading/storing large objects
Categories: (Core :: Storage: IndexedDB, defect)
Status: RESOLVED DUPLICATE of bug 964561
People: (Reporter: luke, Unassigned)
References: (Blocks 1 open bug)
Whiteboard: [games:p1]
While investigating OOM crashes in the Epic Sanctuary demo, I found at least two sites in IDB where we are performing large allocations when storing a large object:
https://crash-stats.mozilla.com/report/index/daffa982-b8ce-41cb-934d-f7b6e2131030
https://crash-stats.mozilla.com/report/index/f9cea53b-0cb3-45fa-872b-ed4e32131030
For a bit of context: a typical beefy Emscripten game has a big ArrayBuffer that represents the heap and a big ArrayBuffer that represents the virtual file system. Emscripten also has an option to cache the file system array in IDB, which both the Epic Citadel and Sanctuary demos use. Citadel has a smaller file system, so it mostly runs fine on 32-bit, although we still get crash reports on some machines with the above signatures. Sanctuary has a much bigger file system, so it crashes 100% of the time at these sites. (We should make those allocations fallible, but that is a separate bug.)
Commenting out the IDB storage allows Sanctuary to run fine on 32-bit, so we've done that, but ideally, of course, we'd like IDB to be usable for this purpose on 32-bit devices. Fixing this might also generally help B2G apps, which are also highly memory-constrained.
The latter crash signature seems to come from holding the result of compressing the file. Perhaps, past a certain size threshold, we could compress and store files in chunks (say, 10MB) rather than the whole thing at once?
Snappy doesn't let us compress in chunks, but I guess we could split the file itself?
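Roughly, a minimal sketch of that chunked idea, splitting the buffer into fixed-size pieces and compressing each one independently with snappy (the 10MB chunk size and the CompressInChunks helper are just illustrative, not anything IDB does today):

#include <snappy.h>

#include <algorithm>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hypothetical chunk size; chosen so the largest transient allocation is
// bounded by one chunk's worst-case compressed size, not the whole file.
constexpr size_t kChunkSize = 10 * 1024 * 1024;  // 10MB

std::vector<std::string> CompressInChunks(const char* data, size_t length) {
  std::vector<std::string> chunks;
  for (size_t offset = 0; offset < length; offset += kChunkSize) {
    size_t chunkLen = std::min(kChunkSize, length - offset);
    std::string compressed;
    // snappy::Compress copies into `compressed`; only one chunk's worth of
    // compressed output is being produced at a time.
    snappy::Compress(data + offset, chunkLen, &compressed);
    chunks.push_back(std::move(compressed));
  }
  return chunks;
}

Each chunk would then be stored and decompressed independently, which would also give natural seek points into the cached file.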
Updated•11 years ago
Blocks: gecko-games
Comment 2•11 years ago
As soon as I have access to the Demos folder, I can upload new builds of Sanctuary with and without IDBFS enabled, so we can run them and see how they compare.
Comment 3•11 years ago
New builds are available at the Demos share.
Comment 4•9 years ago
I think this is now doable. SQLite supports writing data to blobs incrementally via sqlite3_blob_write(), and the snappy library provides Source and Sink objects for incremental compression/decompression.
However, the implementation looks rather complex.
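To make the SQLite half concrete, here is a rough sketch of incremental blob writing, assuming the row was pre-created with a zeroblob() placeholder of the final size (the table/column names and the WriteBlobIncrementally helper are placeholders, not the actual IndexedDB schema):

#include <sqlite3.h>

#include <algorithm>

// Stream `length` bytes into an existing blob in bounded slices instead of
// binding one huge contiguous buffer. Assumes row `rowid` already holds a
// zeroblob() of at least `length` bytes.
int WriteBlobIncrementally(sqlite3* db, sqlite3_int64 rowid,
                           const char* data, int length) {
  sqlite3_blob* blob = nullptr;
  int rc = sqlite3_blob_open(db, "main", "object_data", "data", rowid,
                             /* writable = */ 1, &blob);
  if (rc != SQLITE_OK) {
    return rc;
  }
  const int kSliceSize = 1 << 20;  // 1MB slices (illustrative)
  for (int offset = 0; offset < length && rc == SQLITE_OK;
       offset += kSliceSize) {
    int sliceLen = std::min(kSliceSize, length - offset);
    rc = sqlite3_blob_write(blob, data + offset, sliceLen, offset);
  }
  sqlite3_blob_close(blob);
  return rc;
}

The snappy Source/Sink side would feed these slices, so neither the uncompressed nor the compressed form would ever need to exist as one contiguous allocation.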
Comment 5•9 years ago
(In reply to Jan Varga [:janv] from comment #4)
> I think this is now doable. SQLite supports writing data to blobs
> incrementally via sqlite3_blob_write(), and the snappy library provides
> Source and Sink objects for incremental compression/decompression.
Note that incremental blob I/O does not work when using WITHOUT ROWID, which IndexedDB has been doing since bug 1131776 for the object_data table where the structured clone data lives. (See http://www.sqlite.org/withoutrowid.html section 2.0, point 6.)
Additionally, using WITHOUT ROWID potentially changes the size threshold at which structured clones should be kicked out into file storage (depends on bug 964561), because content can be stored in non-leaf nodes, which can badly impact search cost. (See http://www.sqlite.org/withoutrowid.html section 4.0.) Alternately, structured clone data of medium sizes could be stored in a normal rowid table, enabling incremental blob I/O and helping keep the search overhead down.
> However, implementation looks rather complex.
In this case, which sounds like it would trigger the kicked-out-to-a-separate-file logic, there could be other benefits to creating a wrapper helper for the external files. Specifically, if we want to support IndexedDB in private browsing for these large emscripten'ed game scenarios, keeping everything in memory becomes potentially intractable. If we instead switch to a mode where the contents of IndexedDB are encrypted with a key that lives in memory only for the lifetime of the private-browsing context, the helper becomes particularly useful.
For the files (and IndexedDB), the preferred mode of operation would be to include an IV with each file/db page (doable with -DSQLITE_HAS_CODEC to reserve space in SQLite pages), which necessitates a wrapper. Additionally, if we use an AEAD construction to also enable the use case of encrypting IndexedDB with WebCrypto keys, the wrapper would also need to store the extra authentication tag data. (And, depending on the crypto algorithm and implementation, it might need the file chunked for seeking/slicing efficiency too; AES-GCM, for example, does not support seeking.) (Also note that this would require the quota manager and friends to store the data in such a way that the paths and file names don't reveal so much information, etc.)
Whiteboard: [games] → [games:p1]
Comment 6•8 years ago
So bug 964561 is now fixed. Simply put, structured clones bigger than 1MB are now stored outside of the database, they are stored compressed, and the entire process doesn't require full copies: data is read directly from the structured clone data buffer list and streamed to the snappy output stream, which in turn is streamed to the file output stream.
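Conceptually, the write path now looks roughly like this sketch, using plain snappy Source/Sink types and stdio purely for illustration (SegmentListSource and FileSink are hypothetical stand-ins, not the actual Gecko classes):

#include <snappy.h>
#include <snappy-sinksource.h>

#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical Source over a list of (pointer, length) segments, standing in
// for the structured clone buffer list; data is consumed in place, no flat copy.
// Assumes every segment is non-empty.
class SegmentListSource : public snappy::Source {
 public:
  explicit SegmentListSource(std::vector<std::pair<const char*, size_t>> segments)
      : mSegments(std::move(segments)) {}

  size_t Available() const override {
    size_t total = 0;
    for (size_t i = mIndex; i < mSegments.size(); ++i) {
      total += mSegments[i].second;
    }
    return total - mOffset;
  }

  const char* Peek(size_t* len) override {
    if (mIndex >= mSegments.size()) {
      *len = 0;
      return nullptr;
    }
    *len = mSegments[mIndex].second - mOffset;
    return mSegments[mIndex].first + mOffset;
  }

  void Skip(size_t n) override {
    while (n > 0) {
      size_t remaining = mSegments[mIndex].second - mOffset;
      size_t step = n < remaining ? n : remaining;
      mOffset += step;
      n -= step;
      if (mOffset == mSegments[mIndex].second) {
        ++mIndex;
        mOffset = 0;
      }
    }
  }

 private:
  std::vector<std::pair<const char*, size_t>> mSegments;
  size_t mIndex = 0;
  size_t mOffset = 0;
};

// Hypothetical Sink that appends compressed bytes straight to a file
// (error handling omitted).
class FileSink : public snappy::Sink {
 public:
  explicit FileSink(FILE* file) : mFile(file) {}
  void Append(const char* bytes, size_t n) override { fwrite(bytes, 1, n, mFile); }

 private:
  FILE* mFile;
};

// Usage, where the segments come from the structured clone buffer list:
//   SegmentListSource source({{seg1, len1}, {seg2, len2}});
//   FileSink sink(outputFile);
//   snappy::Compress(&source, &sink);

The read path mirrors this, decompressing from the file back into the buffer list.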
Reporter
Comment 7•8 years ago
Great work! So if I postMessage() a huge ArrayBuffer, do we write directly from the ArrayBuffer's mutable internal array into the stream or is there first a copy made into the structured clone buffer and then we stream from *that* later?
Comment 8•8 years ago
(In reply to Luke Wagner [:luke] from comment #7)
> Great work! So if I postMessage() a huge ArrayBuffer, do we write directly
> from the ArrayBuffer's mutable internal array into the stream or is there
> first a copy made into the structured clone buffer and then we stream from
> *that* later?
Not sure if you meant to type "put" instead of "postMessage" here. postMessage's behavior is unchanged and buffer reuse depends on:
- the arraybuffer being put in the transferables list
- the serializing context not using StructuredCloneScope::DifferentProcess. MessagePort, for example, uses DifferentProcess even when everybody is known to be in the same process. This makes the ArrayBuffer !hasStealableContents, mandating that a new allocation be created even though the old allocation will also be deallocated.
If you meant "put", the changes in bug 964561 don't really enter into the serialization stage that you're talking about. The structured clone serialization still occurs synchronously when you call put(), and it still gets shipped to PBackground like it always did. The difference is how the serialized data makes its way to disk (and back again).
Where serializedSize is the size of the serialized structured clone produced by the add/put call, the changes in bug 964561 eliminate the following during the write:
- 1 serializedSize nsCString allocation for flatCloneData. (This "needs" to exist because the structured clone data may be stored in multiple segments and we present a single pointer+length to snappy::RawCompress.)
- 1 ~serializedSize allocation to store the worst-case compressed version of the data
- however many transient page allocations SQLite needs above the page cache; possibly none.
And on read:
- 1 <serializedSize allocation to store the raw compressed blob
- 1 serializedSize allocation to store the uncompressed data.
So basically 2*serializedSize peak allocations get eliminated in both cases.
It's possible that, since you're talking about ArrayBuffers rather than object graphs, things would be better served by just using a Blob, especially if we get https://github.com/w3c/FileAPI/issues/23 standardized and implemented in Gecko.
Comment 9•8 years ago
Yeah, exactly. It couldn't be written better.
Comment 10•8 years ago
But please note that structured clone data now uses a buffer list.
Reporter
Comment 11•8 years ago
Oops, yes, I did mean put() instead of postMessage().
Yes, this sounds like an enormous improvement. I was just trying to understand, from the JS engine's POV, whether the serialization was able to stream *directly* from the ArrayBuffer's mutable bytes or whether a single intermediate copy was made (for the SC buffer). It sounds like the latter *must* be the case if the streaming happens on a background thread, because of the mutability of the AB. I suppose even if we were to write the AB's buffer to the stream directly from the main thread, it would end up being mostly buffered in memory *anyway* (or block the main thread). So there's probably no more work to do here and I can dup this to the fix.
Totally agreed that Blob is the way to go when dealing with anything large. Since this bug was filed (almost 3 years ago, wow), we've realized the same. (As a general comment, it seems that most people, even those pretty familiar with the web platform, don't appreciate the meaningful difference between ABs and Blobs.)
Thanks again for all the hard work here.
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → DUPLICATE