Closed Bug 671971 Opened 13 years ago Closed 9 years ago

Use telemetry to discover best HTTP cache compression level and max RAM cache object size.

Categories

Core :: Networking: Cache
Type: defect
Priority: Not set
Severity: normal
Tracking

Status: RESOLVED INCOMPLETE

People

Reporter: jduell.mcbugs
Assignee: Unassigned

References

Blocks: 1 open bug

Details

Bjarne tells me we allow a single object to consume up to 90% of the memory cache. I suspect we'd get a better hit rate (especially for Fennec, which only uses the RAM cache for now) if we lowered that to a smaller percentage. It should be fairly easy to add a telemetry call (for overall hit rate), browse the same set of pages with different maximum percentages, and see if there's an easy win here. (I'm going to assume that, for mobile especially, a higher hit rate is a reasonable overall metric: with high latency, getting more cache hits on smaller objects is a bigger win than the bandwidth saved by storing bigger things.)

Note: we could also use the prefs in bug 650995 to handle this, i.e. set a maximum absolute size rather than a percentage.
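For concreteness, here is a minimal C++ sketch of what the per-entry cap plus a hit-rate probe could look like. Everything here is illustrative, not existing Gecko code: the class, members, the 25% cap (just one candidate value to test), and the histogram ID HTTP_MEMORY_CACHE_HIT_RATE are assumptions; only mozilla::Telemetry::Accumulate is a real API.

#include <cstdint>
#include "mozilla/Telemetry.h"

// Illustrative only: not the real nsMemoryCacheDevice code.
class MemoryCacheSketch {
public:
    explicit MemoryCacheSketch(int64_t aSoftLimit)
        : mSoftLimit(aSoftLimit), mHits(0), mMisses(0) {}

    // Reject entries larger than 25% of the RAM cache instead of the
    // current 90%; 25% is just a candidate value for the experiment.
    bool CanStoreEntry(int64_t aEntrySize) const {
        return aEntrySize <= (mSoftLimit * 25) / 100;
    }

    // Count lookups and periodically report the overall hit rate (0-100).
    void RecordLookup(bool aHit) {
        aHit ? ++mHits : ++mMisses;
        uint32_t total = mHits + mMisses;
        if (total % 100 == 0) {
            // HTTP_MEMORY_CACHE_HIT_RATE is a hypothetical histogram ID;
            // a real probe would be declared with the other histograms.
            mozilla::Telemetry::Accumulate(
                mozilla::Telemetry::HTTP_MEMORY_CACHE_HIT_RATE,
                (mHits * 100) / total);
        }
    }

private:
    int64_t  mSoftLimit;      // total RAM cache capacity, in bytes
    uint32_t mHits, mMisses;  // lookup counters
};

Browsing the same page set with a few different cap values and comparing the reported hit rates would answer the question above.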
OK, I'm merging this bug with a follow-up from bug 648429, where we added compression to the HTTP cache but haven't dug deeply into measuring which compression levels (and/or algorithms) give the best performance. I'm envisioning this as a fast-and-furious data-gathering project. It would be awesome if we had some sort of good automated benchmark for cache performance, and/or wide-scale A/B testing in the wild, but we don't, so this is my plan for what seems reasonable now (please feel free to make suggestions):

Plan:

- Use Fennec to browse some set of pages (just browse for a while, add all URLs traversed to an HTML page, then use that page to repeat the run with different settings). The size of the history needs to be greater than the RAM cache, ideally at least 2x, and of course the set of pages should re-use items. In a perfect world this would be a set of unchanging URLs we could use again in 6 months. That may be hard, so I'm really happy with anything for now, even just browsing the NYTimes or whatever.

- Add the overall hit-rate telemetry, then try these runs with varying levels of max object size in the RAM cache.

- Once we've got a sensible max percentage (or absolute max size) for the RAM cache, add telemetry for compression:
  - size of compressed/uncompressed cache objects
  - time to compress
  - time to decompress

- Try runs with differing compression levels, report the results here, and we can haggle about which level has the best space/CPU tradeoff. (See the sketch after this comment.)

I'm agnostic about whether we should measure/optimize max object size before compression level. I'm guessing we can't measure compressed size before we do the "size > max" test?

Extra credit for 1) measuring desktop Firefox for compression level too, since it may have different tradeoffs (I think it's less worth separately testing the RAM cache percentage for desktop), and 2) testing Snappy. Snappy coverage would be great--it may work a lot better--but I'm guessing we may want to skip it for now unless it's very easy to add. I don't want this delaying other work (e.g. getting the disk cache to work on mobile).

Oh, and let's fix a couple of minor nits while we're at it: rename CACHE_COMPRESSION_LEVEL to CACHE_COMPRESSION_LEVEL_DEFAULT, and use it here instead of "1":

  , mCacheCompressionLevel(1)
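And a quick sketch of the compression side: the rename asked for above, plus probes for compressed vs. uncompressed size and compression time. The helper and both histogram IDs are assumptions for illustration; mozilla::Telemetry::Accumulate and mozilla::TimeStamp are real APIs, everything else is hypothetical.

#include <cstdint>
#include "mozilla/Telemetry.h"
#include "mozilla/TimeStamp.h"

// The nit fix: a named default instead of the literal 1, used in the
// constructor initializer list, e.g.
//   , mCacheCompressionLevel(CACHE_COMPRESSION_LEVEL_DEFAULT)
#define CACHE_COMPRESSION_LEVEL_DEFAULT 1

// Hypothetical helper, called after deflating one cache entry.
static void
ReportCompressionTelemetry(uint32_t aUncompressedBytes,
                           uint32_t aCompressedBytes,
                           mozilla::TimeStamp aCompressStart)
{
    using namespace mozilla;

    if (aUncompressedBytes == 0) {
        return;
    }

    // Compressed size as a percentage of the uncompressed size.
    Telemetry::Accumulate(
        Telemetry::HTTP_CACHE_COMPRESSION_RATIO,    // hypothetical histogram ID
        static_cast<uint32_t>((uint64_t(aCompressedBytes) * 100) /
                              aUncompressedBytes));

    // Wall-clock time spent compressing, in milliseconds.
    TimeDuration delta = TimeStamp::Now() - aCompressStart;
    Telemetry::Accumulate(
        Telemetry::HTTP_CACHE_COMPRESSION_TIME_MS,  // hypothetical histogram ID
        static_cast<uint32_t>(delta.ToMilliseconds()));
}

A matching probe around inflate would cover decompression time; comparing these histograms across zlib levels (and possibly Snappy) is the haggling step in the plan above.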
Assignee: nobody → gbrown
Summary: Look into reducing max % of memory cache consumed by any one object → Use telemetry to discover best HTTP cache compression level and max RAM cache object size.
I never seem to get around to this investigation...would someone else like to take over?
Assignee: gbrown → nobody
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → INCOMPLETE