Closed
Bug 1435588
Opened 7 years ago
Closed 7 years ago
Override `applyIncomingBatch` in the buffered bookmarks engine to insert all incoming records at once
Categories
(Firefox :: Sync, enhancement, P2)
Tracking
Status: RESOLVED FIXED
Target Milestone: Firefox 60
firefox60: fixed
People
(Reporter: lina, Assigned: tcsc, Mentored)
References
(Blocks 1 open bug)
Details
Attachments
(3 files)
The way we stage incoming records in the mirror isn't as efficient as it could be: we download everything into memory, decrypt, then loop over the records and insert them into the mirror one by one. This runs one transaction per record, which seems particularly unnecessary because we already have all the records in an array at that point.
As part of this, we may want to remove the shutdown blocker in https://searchfox.org/mozilla-central/rev/e06af9c36a73a27864302cd2f829e6200dee8541/toolkit/components/places/SyncedBookmarksMirror.jsm#247-248, to avoid shutdown hangs for large first syncs. This means that interrupting a large sync will cause us to download, decrypt, and try to store everything again on the next sync, though. (We could also experiment with smaller batches to find a sweet spot, as for history in bug 1416788.) But adding bookmarks to the mirror one at a time doesn't make a lot of sense.
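For context, the chunked approach discussed here can be sketched in plain JavaScript. This is a minimal sketch, not the landed patch: `chunkArray` stands in for `PlacesSyncUtils.chunkArray`, and `buf.store` stands in for the mirror's batch-insert method.

```javascript
// Split an array into fixed-size chunks. Stand-in for
// PlacesSyncUtils.chunkArray; the real helper may differ in detail.
function* chunkArray(array, chunkLength) {
  for (let i = 0; i < array.length; i += chunkLength) {
    yield array.slice(i, i + chunkLength);
  }
}

// Insert up to 500 records per `store` call (one transaction per
// chunk) instead of one transaction per record.
async function applyIncomingBatch(buf, records) {
  for (let chunk of chunkArray(records, 500)) {
    await buf.store(chunk);
  }
  return []; // Array of failed records.
}
```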
Updated•7 years ago
Priority: -- → P2
Updated•7 years ago
Assignee: nobody → tchiovoloni
Comment hidden (mozreview-request)
Reporter
Comment 2•7 years ago
mozreview-review
Comment on attachment 8952497 [details]
Bug 1435588 - (part 1) Avoid inserting records one at a time when using the sync bookmark buffer
https://reviewboard.mozilla.org/r/221700/#review228264
Nice and simple, thanks! I'd love to have a test for interrupting and recovering after applying some of the chunks, but feel free to land as-is if it gets too annoying to write.
::: services/sync/modules/engines/bookmarks.js:1125
(Diff revision 1)
> },
>
> + async applyIncomingBatch(records) {
> + let buf = await this.ensureOpenMirror();
> + for (let chunk of PlacesSyncUtils.chunkArray(records, 500)) {
> + await buf.store(chunk);
Should we add a `jankYielder` here, or get a bug on file to have the mirror use it? (Or both?)
::: services/sync/modules/engines/bookmarks.js:1128
(Diff revision 1)
> + let buf = await this.ensureOpenMirror();
> + for (let chunk of PlacesSyncUtils.chunkArray(records, 500)) {
> + await buf.store(chunk);
> + }
> + // Array of failed records.
> + return [];
This looks right. My first thought was: what if we have a large batch, and we're interrupted or fail after successfully applying some of the chunks? But we don't want to start from the very beginning on the next sync; it's fine to keep the records we already wrote, and the high water mark will return the correct timestamp.
If it's not too onerous, would you mind adding a test for this? If it is, don't worry about it.
Attachment #8952497 - Flags: review?(kit) → review+
Assignee
Comment 3•7 years ago
mozreview-review
Comment on attachment 8952497 [details]
Bug 1435588 - (part 1) Avoid inserting records one at a time when using the sync bookmark buffer
https://reviewboard.mozilla.org/r/221700/#review228268
::: services/sync/modules/engines/bookmarks.js:1125
(Diff revision 1)
> },
>
> + async applyIncomingBatch(records) {
> + let buf = await this.ensureOpenMirror();
> + for (let chunk of PlacesSyncUtils.chunkArray(records, 500)) {
> + await buf.store(chunk);
Filed a follow-up: bug 1440334. It seems like we'd really want to yield inside the store() loop rather than here anyway.
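For illustration, the yielding idea might look roughly like this. This is a sketch only: the real `Async.jankYielder` in Sync works on a time budget, while this stand-in simply awaits a zero-delay timer every few chunks, and `yieldEvery` is a hypothetical parameter.

```javascript
// Sketch of yielding back to the event loop between chunk writes, so
// a long store loop doesn't starve other tasks on the main thread.
async function storeChunksYielding(buf, chunks, yieldEvery = 10) {
  let count = 0;
  for (let chunk of chunks) {
    await buf.store(chunk);
    if (++count % yieldEvery === 0) {
      // Give pending event-loop tasks a chance to run.
      await new Promise(resolve => setTimeout(resolve, 0));
    }
  }
}
```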
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Reporter
Comment 7•7 years ago
mozreview-review
Comment on attachment 8953271 [details]
Bug 1435588 - (part 2) Request records oldest first with the bookmarks buffer
https://reviewboard.mozilla.org/r/222550/#review228678
Attachment #8953271 - Flags: review?(kit) → review+
Reporter
Comment 8•7 years ago
mozreview-review
Comment on attachment 8953272 [details]
Bug 1435588 - (part 3) Avoid applying the bookmarks buffer if we failed to store all records
https://reviewboard.mozilla.org/r/222552/#review228680
Awesome, thank you for fixing this, and for the test!
Attachment #8953272 - Flags: review?(kit) → review+
Pushed by tchiovoloni@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/44915b513676
(part 1) Avoid inserting records one at a time when using the sync bookmark buffer r=kitcambridge
https://hg.mozilla.org/integration/autoland/rev/bc0a9ed100f5
(part 2) Request records oldest first with the bookmarks buffer r=kitcambridge
https://hg.mozilla.org/integration/autoland/rev/a4b3c2b24e97
(part 3) Avoid applying the bookmarks buffer if we failed to store all records r=kitcambridge
Comment 10•7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/44915b513676
https://hg.mozilla.org/mozilla-central/rev/bc0a9ed100f5
https://hg.mozilla.org/mozilla-central/rev/a4b3c2b24e97
Status: NEW → RESOLVED
Closed: 7 years ago
status-firefox60: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 60