Bug 559923 (Closed): Client-side compression of data
Opened 15 years ago; closed 12 years ago
Categories: Firefox :: Sync (enhancement)
Status: RESOLVED DUPLICATE of bug 821009
Target Milestone: Future
Reporter: Tobbi; Assignee: Unassigned
I have tested Weave for the first time now with Fennec and a desktop build, and the first thing I noticed is that syncing takes a very long time.
Could we enable client-side compression of the user's data so that sending it doesn't take as long?
My suggestion for a compression algorithm would be LZMA with solid compression, assuming there are no licensing issues.
Comment 1•15 years ago
This may be useful for easing storage demands, but IIRC, the bulk of upload time is spent in encryption, so we wouldn't see a major improvement at this time.
Summary: Client-side compression for user's data before sending it to the server → Client-side compression of data
Target Milestone: --- → Future
Comment 2•14 years ago
Adding client-side compression would probably _increase_ sync time. Clients already produce a ton of garbage doing JSON parse -> decrypt -> JSON parse, and adding a decompression phase after decryption (compression has to happen before encryption, since ciphertext doesn't compress) would be even more costly.
We might want to revisit this if we end up syncing very large payloads (e.g., localStorage), but not for the current sync engines.
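(Illustrative sketch, not from this comment: random bytes stand in for ciphertext, since encrypted output is statistically close to random and gives a compressor nothing to work with. The record fields below are made up.)

    import json, os, zlib

    record = {
        "id": "abc123",
        "histUri": "https://example.com/some/long/path",
        "title": "Example page title",
        "visits": [{"date": 1300000000000000, "type": 1}] * 20,
    }
    plaintext = json.dumps(record).encode("utf-8")
    ciphertext_like = os.urandom(len(plaintext))   # stand-in for an encrypted payload

    # Plaintext JSON shrinks noticeably; the random "ciphertext" barely shrinks
    # (and often grows by a few bytes of framing).
    print(len(plaintext), len(zlib.compress(plaintext)))
    print(len(ciphertext_like), len(zlib.compress(ciphertext_like)))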
Comment 3•13 years ago
What about using HTTP body compression?
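(Sketch of what client-side HTTP body compression would look like; the endpoint URL is a placeholder, and whether the Sync server accepts Content-Encoding: gzip is not established in this bug.)

    import gzip, json, urllib.request

    body = json.dumps([{"id": "abc123", "payload": "BASE64-CIPHERTEXT"}]).encode("utf-8")
    compressed = gzip.compress(body)

    req = urllib.request.Request(
        "https://sync.example.com/1.1/user/storage/history",   # placeholder URL
        data=compressed,
        headers={"Content-Type": "application/json", "Content-Encoding": "gzip"},
        method="POST",
    )
    # urllib.request.urlopen(req)   # left commented out; the endpoint above is fictional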
Comment 4•13 years ago
I did some research recently on using gzip or bzip2 compression on Sync payloads.
I did this by grabbing payloads and encrypting them, with a sample size of ~200 records.
Relevant snippet:
* Average record size for history, 450 bytes. For bookmarks, similar.
* Raw records, with ciphertext, in base64 inside JSON, compressed about 12% with gzip and 4% with bzip2 (too small input for bzip2 to stretch its legs, I imagine).
* Decrypted payloads didn't compress much better. 200 history items had 45,282 bytes beforehand, 39,731 after: about 12%. Of course, this then gets base64ed, encrypted…
12% compression on the wire really isn't likely to be noticeable, and devices on constrained connections will likely also have constrained CPU.
As with storage space reduction: the win would come from dropping JSON-with-string-keys-and-base64, not from compression. Sending raw bytes in, e.g., protocol buffers over SPDY? That'd be a win.
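(Sketch of this style of measurement; the records below are synthetic rather than the corpus used above, and random bytes stand in for ciphertext, so the exact percentages will differ.)

    import base64, bz2, gzip, json, os

    def saved(data, compress):
        # Fraction of bytes removed by the given compressor.
        return 1 - len(compress(data)) / len(data)

    decrypted_parts, wire_parts = [], []
    for i in range(200):
        payload = json.dumps({
            "id": "rec%03d" % i,
            "histUri": "https://example.com/page/%d" % i,
            "title": "Some page title",
            "visits": [{"date": 1300000000000000 + i, "type": 1}],
        })
        # On-the-wire form: "ciphertext" (random stand-in) base64ed inside a JSON envelope.
        wire = json.dumps({
            "id": "rec%03d" % i,
            "payload": base64.b64encode(os.urandom(len(payload))).decode("ascii"),
        })
        decrypted_parts.append(payload)
        wire_parts.append(wire)

    decrypted = "\n".join(decrypted_parts).encode("utf-8")
    on_wire = "\n".join(wire_parts).encode("utf-8")

    for name, blob in (("decrypted JSON", decrypted), ("base64-in-JSON wire form", on_wire)):
        print(name,
              "gzip %.0f%%" % (100 * saved(blob, gzip.compress)),
              "bzip2 %.0f%%" % (100 * saved(blob, bz2.compress)))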
Comment 5•13 years ago
I agree that a binary payload would net the biggest win. And if we are transmitting binary data, I'd rather go all in and use something lower-level than SPDY, such as protocol buffers' built-in RPC or WebSockets. SPDY is still HTTP-inspired, which means we'd be wasting bytes on text headers, many of which could be enumerated as single-byte constants. Yes, SPDY compresses everything, including headers, so it wouldn't be that bad. But that extra layer of compression wouldn't help us much, since the now-binary payload likely won't compress well; we'd just be wasting cycles.
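(Sketch, not a proposed wire format: comparing a JSON-with-base64 envelope against a minimal length-prefixed binary frame for the same ciphertext, to show roughly where the bytes go.)

    import base64, json, os, struct

    ciphertext = os.urandom(450)        # roughly the average record size quoted in comment 4
    record_id = b"abcdef123456"

    json_form = json.dumps({
        "id": record_id.decode("ascii"),
        "payload": base64.b64encode(ciphertext).decode("ascii"),
    }).encode("utf-8")

    # Minimal length-prefixed frame: 1-byte id length, id, 4-byte payload length, raw bytes.
    binary_form = (struct.pack("!B", len(record_id)) + record_id
                   + struct.pack("!I", len(ciphertext)) + ciphertext)

    # The JSON envelope is roughly a third larger, almost entirely base64 expansion plus key names.
    print(len(json_form), len(binary_form))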
Comment 6•12 years ago
This will effectively be solved by Bug 821009, which simply makes records smaller, so I'm going to dupe to that.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → DUPLICATE
Updated•6 years ago
Component: Firefox Sync: Backend → Sync
Product: Cloud Services → Firefox