Closed Bug 814801 Opened 12 years ago Closed 4 years ago

[meta] Incremental two-phase bookmark sync for Android Sync

Categories

(Firefox for Android Graveyard :: Android Sync, defect, P5)

All
Android
defect

Tracking

(Not tracked)

RESOLVED INCOMPLETE

People

(Reporter: rnewman, Unassigned, Mentored)

References

Details

(Keywords: meta, Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] 2)

See Bug 769476 Comment 18, Bug 812348, Bug 730142.

Sync has an intrinsic design issue: records are applied as they are downloaded. That's fine when records are independent (like history items), but it's bad if records are interrelated (like a bookmark tree).

This design issue is the direct cause of the pages and pages of fragile reparenting and repositioning edge-case code that we maintain.

If a download is incomplete for any reason -- missing server data, interrupted connection, Android termination, client error, or a hard limit being hit -- then the client's local store can be left in an inconsistent state.

The client has a choice as it syncs: "fix up" structure *in the canonical local store* as it proceeds, or allow itself to end up in an inconsistent state. Neither option is correct.

We should implement a scheme which incrementally maintains a local structured replica of server state, only syncing locally when that replica is determined to be up-to-date and consistent. As an optimization, subtrees of the bookmark data can be synced when they are 'complete', even if the full tree has not yet been downloaded.

It's expensive -- both in terms of device storage and developer time -- but it will finally allow us to handle more data than we can process in a single five-minute Android sync, to incrementally process individual folders, and also to perform more accurate reconciling by holding an approximate server changelog on disk.

Syncing will be multi-phase: fetch and store server changes, then (directly or indirectly) compose and reconcile "delta streams" from both server and client. The output of this reconciling is a pair of consistent, reconciled changesets that can be applied to each repository.
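
As a very rough illustration of the shape of that reconciling step, assuming hypothetical Record and Changesets types and a naive timestamp comparison (none of this is existing Sync code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: reconcile a complete, buffered server replica
// against local state, producing one changeset for each side. A real
// implementation would also have to handle deletions, structural conflicts,
// and position changes.
class Reconciler {
    static class Record {
        final String guid;
        final long modified;   // last-modified timestamp, milliseconds
        Record(String guid, long modified) { this.guid = guid; this.modified = modified; }
    }

    static class Changesets {
        final List<Record> toApplyLocally = new ArrayList<>();   // server wins
        final List<Record> toUpload = new ArrayList<>();         // local wins, or local-only
    }

    static Changesets reconcile(Map<String, Record> serverReplica, Map<String, Record> local) {
        Changesets out = new Changesets();
        for (Record remote : serverReplica.values()) {
            Record mine = local.get(remote.guid);
            if (mine == null || remote.modified > mine.modified) {
                out.toApplyLocally.add(remote);
            } else if (mine.modified > remote.modified) {
                out.toUpload.add(mine);
            }
        }
        for (Record mine : local.values()) {
            if (!serverReplica.containsKey(mine.guid)) {
                out.toUpload.add(mine);   // record exists only locally; upload it
            }
        }
        return out;
    }
}
```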

This would all be much easier if we took a dependency on Clojure's persistent data structures; that's a natural design fit for this problem.

Note for future me or future followers in my footsteps: this is a very thorny problem. There are incremental kinda-fixes that will mitigate chunks of the current known issues (e.g., use straight-up download batching, disable upload until no remaining batches exist), but these will introduce consistency issues, have blind spots, and are otherwise Hacky As Hell®. I think it's time to do this right.
Blocks: 745408
Whiteboard: [sync:bookmarks]
Blocks: 655722
Blocks: 655728
Blocks: 647605
Additionally relevant: syncing bookmarks is really, really inefficient. Bug 655722 is one reason why.

"Instant sync" means that for some users, *every time they bookmark a page* their browser hangs for ten seconds while Sync hammers the places DB, perhaps triggering GCs and being generally awful.

Even if we don't do it "instantly", we're causing harmful stalls during browsing.

In the course of addressing the correctness issues here, we have an opportunity to be much more efficient about persisting state, which will reduce the amount of work we need to do to construct structured representations in memory.
Whiteboard: [sync:bookmarks] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale]
Product: Mozilla Services → Android Background Services
I think I just found a user who experienced a different variant of this underlying issue.

The symptom: bookmarks in a different order on the phone than on the desktop, but both stable (no change in two weeks).

Here's what I wrote:

---
• You set up your two clients close together in time.
• Your desktop had uploaded only some of its bookmarks. (With 3,000 bookmarks that's very likely.)
• Your phone downloaded and applied those. It corrected them as best it could to achieve consistency — gaps in numeric ordering due to a partial write, for example, would be fixed by compacting (see the sketch after this list).
• Your phone uploaded those corrected records and finished its first sync.
• Your desktop finished uploading, and fast-forwarded its last-sync timestamp accordingly.
• And so on.
---
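
For the curious, the compaction mentioned in the list is the trivial operation sketched here; the types are hypothetical, not the actual fixup code:

```java
import java.util.List;

// Toy example only: children arrive with positions like {0, 2, 5} after a
// partial write; compacting rewrites them as {0, 1, 2}, preserving relative order.
class PositionCompactor {
    static class Child {
        final String guid;
        int position;
        Child(String guid, int position) { this.guid = guid; this.position = position; }
    }

    static void compact(List<Child> childrenSortedByPosition) {
        for (int i = 0; i < childrenSortedByPosition.size(); i++) {
            childrenSortedByPosition.get(i).position = i;
        }
    }
}
```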

This is a special case of the generic "you're screwed" problem, with Sync's time-gap flaw actually saving his desktop bookmarks from propagated corruption.
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?]
Blocks: 1085128
Hey Richard,

This bug sounds like the beginnings of what you implemented on iOS for the bookmark syncing engine (buffer/mirror/local tables). We don't have this two-phase mechanism for history/logins/tabs/etc., though. Is this something you'd like to see implemented on iOS as well, or is it not an issue since problems are mostly related to bookmarks?
Flags: needinfo?(rnewman)
Speaking to iOS:

Tabs isn't necessary — each record stands alone, and we don't do multiple fetches to grab the records, so either the GET succeeds or it fails. And the records are very independent.

History is fairly robust from the perspective of individual records, so redownload is safe.

We already do a kind of two-phase application of logins. You'll see we don't batch-download, so we might see another client's partial writes, but we will never get a partial read. We don't upload anything until incoming records are successfully applied, and we don't apply until the download is successful. If we ever batch-download logins, or start to worry about interrupting a write (not a concern when all deployed clients use batch uploads!) then we should buffer them before application.
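
In sketch form, the ordering guarantee described above is just this (hypothetical interfaces, not the real iOS or Android code):

```java
import java.util.List;

// Illustrative ordering only: never upload local changes until the complete
// set of incoming records has been downloaded and applied successfully.
interface LoginServer {
    List<String> fetchAllSince(long timestamp);    // a single GET; throws on failure
    void uploadAll(List<String> records);
}

interface LoginStore {
    long lastSyncTimestamp();
    void applyAll(List<String> incoming);          // throws on failure
    List<String> pendingChanges();
    void advanceLastSyncTimestamp(long serverTimestamp);
}

class LoginSync {
    static void sync(LoginServer server, LoginStore local, long serverNow) {
        List<String> incoming = server.fetchAllSince(local.lastSyncTimestamp());
        local.applyAll(incoming);                   // a failure here aborts the sync...
        server.uploadAll(local.pendingChanges());   // ...so upload never runs on a partial apply
        local.advanceLastSyncTimestamp(serverNow);  // fast-forward only after both succeed
    }
}
```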

Desktop and Android stream incoming records, download in batches, and otherwise leave themselves vulnerable to interruption, so they would probably benefit from two-phase application of logins and form data as well as bookmarks.


Speaking more broadly: independent records are safer to partially apply. There can be cross-record inconsistencies -- you might see *some* history deletions, or *some* password changes -- but there's nothing structural to break, and the client should recover with re-download. "Eventual kinda consistency" is the name of the game.

Bookmarks are inherently structural, and so benefit the most from two-phase application.
Flags: needinfo?(rnewman)
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.1]
Assignee: nobody → dlim
Part 1: buffer incoming records, then apply contents of buffer if we think we have enough time. Similar to Bug 1201108.

This alone will allow us to remove (or significantly raise) the existing 5000 bookmark limit, and will remove some interruption-based errors: the meat of syncing will occur only when we're confident that we've downloaded everything.
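
A rough sketch of Part 1, with hypothetical types; in practice the buffer would need to be durable (e.g. a SQLite table) rather than in-memory:

```java
import java.util.List;

// Illustrative Part 1: stage every downloaded record in a buffer, and only
// touch the local bookmarks store once the download has completed and we
// believe there is enough time left in the Android sync window.
class BufferedBookmarkSync {
    interface RecordBuffer {                  // durable in practice, e.g. a SQLite table
        void store(String encryptedRecord);
        List<String> all();
        void clear();
    }

    interface LocalApplier {
        void applyAll(List<String> records);  // the expensive, structural work
    }

    private final RecordBuffer buffer;

    BufferedBookmarkSync(RecordBuffer buffer) {
        this.buffer = buffer;
    }

    void onRecordDownloaded(String record) {
        buffer.store(record);                 // cheap; no local DB work yet
    }

    void onDownloadComplete(long deadlineMillis, LocalApplier applier) {
        if (System.currentTimeMillis() >= deadlineMillis) {
            return;                           // keep the buffer; apply on the next sync instead
        }
        applier.applyAll(buffer.all());
        buffer.clear();
    }
}
```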


Part 2: add some amount of consistency checking to the buffer before application.

When we've downloaded records but not yet applied them, we have an opportunity to verify that incoming (and outgoing!) data is complete and safe to apply. This is still not a true multi-phase application, but adding heuristic checks (like on iOS[1]) will buy us yet more safety.
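
For instance, a couple of cheap structural checks over the buffer, loosely in the spirit of the iOS checks in [1]; all names here are hypothetical:

```java
import java.util.List;
import java.util.Map;

// Illustrative Part 2 heuristics: before applying, verify that the buffered
// tree is structurally plausible. These checks are cheap hints, not a proof
// of consistency.
class BufferValidator {
    static class Buffered {
        String parentId;
        List<String> children;   // non-null only for folders
    }

    static boolean looksConsistent(Map<String, Buffered> buffer) {
        for (Buffered record : buffer.values()) {
            // Every record's parent should already be in the buffer
            // (a real check would special-case the bookmark roots).
            if (record.parentId != null && !buffer.containsKey(record.parentId)) {
                return false;
            }
            // Every child a folder claims should actually have been downloaded.
            if (record.children != null) {
                for (String childGuid : record.children) {
                    if (!buffer.containsKey(childGuid)) {
                        return false;
                    }
                }
            }
        }
        return true;
    }
}
```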


Part 3: structured record application, similar to the current state of iOS. This is true multi-phase application: we completely compute a final merged state from the incoming records and local state, using that to atomically apply changes to remote, buffer, and local.

This is the hardest step, and so while it buys the most safety and confidence (and makes the storage layer itself very simple), it's also the end of the journey, if we ever get there.
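
Schematically, Part 3 might look like the sketch below: the merge is computed as a pure function first, and only then is the single result handed to each store. Names are hypothetical:

```java
// Illustrative Part 3: compute the merged tree up front, then hand the same
// result to each destination.
class StructuredApplier {
    interface TreeSnapshot {}    // read-only view of buffer, mirror, or local state
    interface MergedTree {}      // the fully computed result of a three-way merge

    interface Merger {
        MergedTree merge(TreeSnapshot incoming, TreeSnapshot mirror, TreeSnapshot local);
    }

    interface Destination {
        void apply(MergedTree merged);   // each store applies its changes transactionally
    }

    static void sync(Merger merger,
                     TreeSnapshot incoming, TreeSnapshot mirror, TreeSnapshot local,
                     Destination server, Destination bufferStore, Destination localStore) {
        // Pure computation first: no store is touched while the merge is in progress.
        MergedTree merged = merger.merge(incoming, mirror, local);
        // One possible ordering; a real implementation has to reason carefully about
        // what happens if it is interrupted between these steps.
        server.apply(merged);
        bufferStore.apply(merged);
        localStore.apply(merged);
    }
}
```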


Let me know if you'd like to discuss more background about this bug, or otherwise have questions.


[1] <https://github.com/mozilla/firefox-ios/blob/master/Storage/SQL/SQLiteBookmarksSyncing.swift#L679>
Mentor: rnewman
Status: NEW → ASSIGNED
Hardware: ARM → All
(In reply to Richard Newman [:rnewman] from comment #7)

Thanks for the outline, Richard!

How far do you think we can get with just Part 1 as a starting point? It seems to me that from the "source of problems" point of view, network partitioning is probably the top culprit. Moving the meat of syncing to be purely local is probably the biggest win here, and it should significantly reduce the number of problems in this scope (but certainly won't eliminate them). For bonus points, it's also the simplest (and the very first) step.
Flags: needinfo?(rnewman)
Thinking more about this, and chatting with nalexander, I think a good, tractable first step would be Part 1 by itself. If I say this quickly: we'll be adding a new "buffer repository" for storing encrypted records as we receive them from the remote source, and modifying the repository synchronization flow to be something like:

remote -> buffer, buffer -> local, local -> remote.

I think with some upfront thinking this should be doable in a way that all data types benefit from this, not just bookmarks.
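
Sketched generically, with hypothetical interfaces rather than anything in the current codebase, that flow might look like:

```java
import java.util.List;

// Illustrative only: a generic three-leg flow that any data type could use,
// not just bookmarks. Every interface here is hypothetical.
class ThreeLegFlow {
    interface Source<R> { List<R> fetchSince(long timestamp); }
    interface Sink<R>   { void store(List<R> records); }
    interface Store<R> extends Source<R>, Sink<R> {}

    static <R> void sync(Source<R> remote, Store<R> buffer, Store<R> local,
                         Sink<R> remoteUpload, long lastSync) {
        buffer.store(remote.fetchSince(lastSync));        // remote -> buffer
        local.store(buffer.fetchSince(0));                // buffer -> local, once the download is complete
        remoteUpload.store(local.fetchSince(lastSync));   // local -> remote
    }
}
```
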
Another consideration is doing the above work in a way that doesn't hinder future efforts around Part 2/3 outlined in Comment 7.
Network interruption is likely to be a big source of failures, but the codebase simply doesn't have enough telemetry to know for sure. We do know that there's a lot of server-side corruption from desktop Sync inadequacies, so I would be hesitant to say that it's okay to lose steam after part 1.

I'd turn this into a meta bug and file three parts, then work and measure. Buffering is necessary regardless of how much further we go. 

I'm not sure that I would buffer encrypted records. It's certainly simplest, but it makes further work more difficult. Having the buffer be fully unpacked requires a little thinking (what do you do if an incoming record is malformed?) but it makes validation and future record application tractable.
Flags: needinfo?(rnewman)
That's not to say that buffering encrypted records has no value; it does. But you'd still have to do some bookkeeping for when to drop stuff from the buffer, and you might be pushing yourself into a migration later. Think about it.
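
To make the "fully unpacked buffer" question concrete, here is one sketch of the incoming path, with one possible answer for malformed records (all names hypothetical):

```java
import java.util.List;

// Illustrative only: decrypt and parse records on the way into the buffer.
// One possible policy for malformed records: keep the raw envelope aside
// rather than failing the whole download or silently dropping data.
class UnpackingBuffer {
    interface Decryptor { String decrypt(String envelope) throws Exception; }
    interface Parser<T> { T parse(String cleartext) throws Exception; }

    static <T> void bufferIncoming(String envelope,
                                   Decryptor decryptor,
                                   Parser<T> parser,
                                   List<T> unpacked,
                                   List<String> malformed) {
        try {
            unpacked.add(parser.parse(decryptor.decrypt(envelope)));
        } catch (Exception e) {
            malformed.add(envelope);
        }
    }
}
```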
Depends on: 1291821
Summary: Incremental two-phase bookmark sync → [meta]Incremental two-phase bookmark sync
Depends on: 1291822
Depends on: 1291823
Keywords: meta
Summary: [meta]Incremental two-phase bookmark sync → [meta] Incremental two-phase bookmark sync for Android Sync
No longer depends on: 730142
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.1] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.2]
Assignee: dlim → nobody
Status: ASSIGNED → NEW
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.2] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS backlog]
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS backlog] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS]
Priority: -- → P3
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS] 2
Priority: P3 → P2
Assignee: nobody → gkruglov
Priority: P2 → P1
Depends on: 730142
Assignee: gkruglov → nobody
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS] 2 → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] 2
Mentor: gkruglov
Depends on: 1343726
Removing a P flag from a meta bug.
Priority: P1 → --
Priority: -- → P3
Product: Android Background Services → Firefox for Android
Re-triaging per https://bugzilla.mozilla.org/show_bug.cgi?id=1473195

Needinfo :susheel if you think this bug should be re-triaged.
Priority: P3 → P5
We have completed the launch of our new Firefox on Android. Development of the new versions uses GitHub for issue tracking. If the bug report still reproduces in a current version of [Firefox on Android nightly](https://play.google.com/store/apps/details?id=org.mozilla.fenix), an issue can be reported at the [Fenix GitHub project](https://github.com/mozilla-mobile/fenix/). If you want to discuss your report, please use [Mozilla's chat](https://wiki.mozilla.org/Matrix#Connect_to_Matrix) server https://chat.mozilla.org and join the [#fenix](https://chat.mozilla.org/#/room/#fenix:mozilla.org) channel.
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → INCOMPLETE
Product: Firefox for Android → Firefox for Android Graveyard