Closed
Bug 665707
Opened 13 years ago
Closed 9 years ago
Need a Necko (per-document?) memory cache to which to attach arbitrary objects
Categories
(Core :: Networking, defect)
Tracking
RESOLVED
DUPLICATE
of bug 1231565
People
(Reporter: joe, Unassigned)
References
(Blocks 1 open bug)
Details
Right now there are lots of separate caches built on top of the Necko cache, like stylesheets, fonts and images. What I think most people would *really* like is the ability to attach arbitrary pieces of data to a somewhat souped-up Necko cache. I know Imagelib would prefer that.
This cache would follow HTTP semantics, would deal with redirects properly, and would allow its users to lock entries into the cache (for example, you can't actually free memory by removing images that are in use on a web page). It should also let users change the size of entries (for example, by discarding decoded images) and give hints to entries that they might be evicted if they don't reduce their size.
This cache might be per-document, but global is also fine.
Comment 1•13 years ago
Bjarne/Michal: implementing bug 663979 might make this easy to support?
Joe: Are there specific API changes to the caching IDLs you have in mind?
Blocks: http_cache
Reporter
Comment 2•13 years ago
It's probably easiest to describe via an IRC or phone conversation. What I want is basically the ability to attach an arbitrary object to an nsICacheEntryDescriptor. It doesn't have to be persisted across sessions, which is what I mean by "memory cache." This object would then be retrievable from the nsICachingChannel in OnStartRequest.
Comment 3•13 years ago
> This cache would follow HTTP semantics, would deal with redirects properly,
This is out of scope of the cache code. All of that logic lives in the HTTP protocol code.
> It's probably easiest to describe via an irc or phone conversation. What I
> want is basically the ability to attach an arbitrary object to an
> nsICacheEntryDescriptor. It doesn't have to be persisted across sessions,
> which is what I mean by "memory cache." This object would then be
> retrievable from the nsICachingChannel in OnStartRequest.
The current cache code doesn't allow attaching an object to a stream-based cache entry. You can store objects in a non-stream-based cache session, but you need to pair stream-based entries with non-stream-based objects yourself. For per-document caching you would create a non-stream-based session for every document (e.g. the client ID of the session could be a hash of the URL). It would be easy to find a particular object or enumerate all objects related to a resource. The tricky part would be keeping the non-stream-based objects in sync with the stream-based disk/memory entries, e.g. removing objects from memory when the stream-based entry on disk is evicted because the cache is full.
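A rough sketch of this pairing scheme, assuming illustrative types (none of these are the actual nsICache* interfaces): one side table of objects per document, keyed by a hash of the document URL, with an eviction hook for the sync problem described above.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Illustrative sketch, not the real Necko API: one non-stream-based
// "session" per document, whose client ID is derived from a hash of
// the document URL. Objects are keyed by resource URL, so finding or
// enumerating all objects for a document is a simple map walk.
using ObjectMap = std::map<std::string, std::shared_ptr<void>>;

class PerDocumentObjectCache {
public:
  // Client ID of the session, e.g. a hash of the document URL.
  static std::string ClientId(const std::string& documentUrl) {
    return "doc-" + std::to_string(std::hash<std::string>{}(documentUrl));
  }

  void Attach(const std::string& documentUrl, const std::string& resourceUrl,
              std::shared_ptr<void> object) {
    mSessions[ClientId(documentUrl)][resourceUrl] = std::move(object);
  }

  std::shared_ptr<void> Lookup(const std::string& documentUrl,
                               const std::string& resourceUrl) const {
    auto session = mSessions.find(ClientId(documentUrl));
    if (session == mSessions.end()) return nullptr;
    auto obj = session->second.find(resourceUrl);
    return obj == session->second.end() ? nullptr : obj->second;
  }

  // The tricky part noted above: when the stream-based disk entry is
  // evicted, the paired non-stream-based objects must be dropped too.
  void OnStreamEntryEvicted(const std::string& resourceUrl) {
    for (auto& session : mSessions) session.second.erase(resourceUrl);
  }

private:
  std::map<std::string, ObjectMap> mSessions;
};
```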
Comment 4•13 years ago
If I am understanding the request correctly, it is about providing a memoization mechanism for various functions f(x), g(x), h(x), where x is an HTTP response (body), and where the memoized data is garbage collected (only) when all uses of that HTTP response (body) are discarded. For example, a lot of sites use the resource https://ssl.google-analytics.com/ga.js, and we might want to share the AST for the JavaScript across all open documents, discarding this shared data when all the sites using the script are closed.
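This memoization model can be sketched as follows; the class and function names are hypothetical (not a Gecko API), with shared_ptr/weak_ptr refcounting standing in for "collected when all uses are discarded":

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical memoization table: caches f(responseBody) keyed by the
// response's identity, and drops the memoized value once the last
// consumer of it lets go (weak_ptr makes the table non-owning).
class MemoTable {
public:
  // Returns the cached derived value for `key`, computing it with `f`
  // on a miss. Callers hold the shared_ptr for as long as they use it.
  std::shared_ptr<std::string> Get(
      const std::string& key,
      const std::function<std::string(const std::string&)>& f,
      const std::string& body) {
    auto it = mEntries.find(key);
    if (it != mEntries.end()) {
      if (auto existing = it->second.lock()) {
        return existing;  // hit: share the already-derived data
      }
    }
    auto value = std::make_shared<std::string>(f(body));
    mEntries[key] = value;  // non-owning: table doesn't pin entries
    return value;
  }

private:
  std::map<std::string, std::weak_ptr<std::string>> mEntries;
};
```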
I think HTTP semantics would be too strict for many/most of these caches. For example, it is probably true that https://ssl.google-analytics.com/ga.js is the same as http://www.google-analytics.com/ga.js, but HTTP considers them to be different. Consequently, I am not sure the HTTP cache is the best place for this memoization. Instead, it might be better to make the HTTP cache a client of some new memoization service. It would help to talk more about the specific use cases in mind to find out what would be the best solution.
Comment 5•13 years ago
Inline scripts and inline stylesheets are another reason why we might not want to have this managed by the HTTP cache directly. For example, a site that uses Google Analytics probably has the exact same inline JS to load ga.js on every page. Maybe it would be useful to memoize the results of various functions (e.g. parsing) across these inline scripts too. Or, maybe the savings would be insubstantial.
Also, this sharing might be something that is done not just across active documents, but also for resources in the BFCache.
Comment 6•13 years ago
An example of the type of thing we're looking for is WebKit's CachedResource class. There are a bunch of subclasses that inherit from it:
- CachedImage
- CachedScript
- CachedFont
- CachedXSLStyleSheet
- CachedCSSStyleSheet
and each can hang object specific data off of it: the decoded image, the platform font, etc.
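A minimal sketch of that hierarchy's shape (the member names here are illustrative, not WebKit's actual fields): a base class owning the raw bytes, with subclasses hanging decoded data off of it and reporting their true size so the cache can ask them to shrink.

```cpp
#include <cstddef>
#include <vector>

// Sketch of a CachedResource-style hierarchy: the base class owns the
// raw response bytes; each subclass attaches its own decoded
// representation (decoded image, platform font, parsed sheet, ...).
class CachedResource {
public:
  explicit CachedResource(std::vector<unsigned char> bytes)
      : mBytes(std::move(bytes)) {}
  virtual ~CachedResource() = default;

  // Size including decoded data, so the cache can account for memory
  // use and hint entries to reduce it before evicting them.
  virtual size_t SizeInBytes() const { return mBytes.size(); }

  // Hint from the cache: drop decoded data or risk eviction.
  virtual void DiscardDecodedData() {}

protected:
  std::vector<unsigned char> mBytes;
};

class CachedImage : public CachedResource {
public:
  using CachedResource::CachedResource;

  size_t SizeInBytes() const override {
    return mBytes.size() + mDecodedPixels.size();
  }
  // Fake "decode": real code would produce RGBA pixels from the bytes.
  void Decode() { mDecodedPixels.assign(mBytes.size() * 4, 0); }
  void DiscardDecodedData() override { mDecodedPixels.clear(); }

private:
  std::vector<unsigned char> mDecodedPixels;  // subclass-specific data
};
```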
Comment 7•13 years ago
Brian, the main idea here is in fact to cache "compiled" representations without violating HTTP semantics. That is, the existence of the compiled-representation cache needs to be transparent to the web page.
Coalescing compiled representations across different resources _may_ be doable but is a more difficult problem. In particular, whether two resources have the same compiled representation depends not just on the bytes of the resource body but also on various metadata. In some cases that metadata includes the resource URI (e.g. this is the case for stylesheets).
In particular, the problem characterization in comment 4 is not quite right; f, g, and h are in general functions of the HTTP response (headers and body) _and_ of the HTTP request.
And no, the idea is not to GC the compiled representation when no one is looking at it anymore. The whole idea is to have that representation available across page transitions, and possibly across browser restarts.
I agree that inline scripts and stylesheets do not fit well into this model, but for those the situation is somewhat simpler: the compiled representation is just a function of the text and the document base URI. Furthermore, we can cache them for as long as we want, because there's no HTTP semantics involved: as long as the text and base URI match, we're good. That seems like a separate issue from what this bug is about. For external scripts/sheets we'd sort of like to avoid loading the text to start with if we can....
Comment 8•13 years ago
We were talking recently about splitting JSScript into two pieces: an immutable shareable part and a compartment-local mutable part. Boris, you probably have the best understanding of both: do you think this cache would be right for the immutable JSScript part? If so, when the cache hits in memory, it seems like this would make a *big* difference for overall page-load time and memory use. (Fun data point: I measured techcrunch spending 1.5 *seconds* compiling JSScript and spending about 1MB on 'script' per tab.)
Comment 9•13 years ago
Luke, the JSScript case is a bit of a pain. In theory, this sort of cache would be perfect for the immutable part if the immutable part were really shareable across pages. Unfortunately, COMPILE_N_GO is a wrench in those works.
But yes, I would love it if we resolved the COMPILE_N_GO issues and could cache compiled JSScripts (maybe with XDR to disk even).
Comment 10•13 years ago
Part of this mutable/immutable split would be to kill COMPILE_N_GO; we think we can do all the critical optimizations via the jit compiler.
Comment 11•13 years ago
In that case, the result would be _ideal_ for the sort of cache this bug is about. And yes, it would save multiple seconds on many sites. Last I profiled a gmail load, 50% of the pageload time was also compiling scripts.
Comment 12•13 years ago
On the subject of XDR: I think the ideal situation is that the cache would allow us to attach the immutable-script-part (in its in-memory non-XDR form) and cache hits would be given this in-memory structure and thus require zero processing. Only if the cache wanted to serialize to disk would it ask the element for a byte stream and only then would XDR encoding occur. Does this sound plausible?
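The contract being proposed could look roughly like this; the interface is hypothetical. In-memory hits hand back the live structure untouched, and serialization (where XDR encoding would run) happens only when the cache decides to write to disk.

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Hypothetical lazy-serialization contract: the cache stores the
// in-memory object directly; Serialize() (the XDR step) runs only on
// an explicit flush to disk, never on a memory hit.
class ScriptEntry {
public:
  explicit ScriptEntry(std::string immutablePart)
      : mImmutablePart(std::move(immutablePart)) {}

  mutable int encodeCount = 0;  // instrumentation: counts encodings

  std::vector<unsigned char> Serialize() const {
    ++encodeCount;  // real code would run XDR encoding here
    return std::vector<unsigned char>(mImmutablePart.begin(),
                                      mImmutablePart.end());
  }

private:
  std::string mImmutablePart;
};

class MemoryCache {
public:
  void Put(const std::string& url, std::shared_ptr<ScriptEntry> entry) {
    mEntries[url] = std::move(entry);
  }
  // Memory hit: return the live structure, zero processing.
  std::shared_ptr<ScriptEntry> Hit(const std::string& url) const {
    auto it = mEntries.find(url);
    return it == mEntries.end() ? nullptr : it->second;
  }
  // Disk flush: only now does the cache ask for a byte stream.
  std::vector<unsigned char> FlushToDisk(const std::string& url) const {
    return mEntries.at(url)->Serialize();
  }

private:
  std::map<std::string, std::shared_ptr<ScriptEntry>> mEntries;
};
```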
Comment 13•13 years ago
That is exactly my ideal world, yes.
Comment 14•13 years ago
Righteous. Anyone thinking about working on this?
Comment 15•13 years ago
I did some investigation into how this works in WebKit:
The CachedResource and CachedResourceLoader act completely separately from the network layer and its associated cache. There are a couple of implications from this:
1. These memory caches are per process.
2. The caching logic is duplicated in the network layer and in CachedResourceLoader (see CachedResourceLoader::determineRevalidationPolicy())
3. If a resource isn't in the local memory cache, it will contact the network layer, which can service it as needed. (In Chrome this will do the IPC, etc.)
Comment 16•13 years ago
Yes. #2 is suboptimal and sort of forced on them by the fact that they don't control the network layer at all. We do.
Comment 17•13 years ago
Necko lays the framework for some other JS changes that could win big on memory use. What's the progress like? Still in research?
Comment 18•13 years ago
Nobody is working on it and it seems like the most relevant people on the team aren't sold on the idea yet. But, both things can be changed.
Let's make sure we understand the issue clearly. First of all, I think "per document" in the summary of this bug is wrong and should be removed. Agreed?
It seems to me that in the long run, there are probably going to be components (like the JS engine) that are going to be able to tell the Necko *disk* cache how to store things in a better format (e.g. XDR for JS) than it currently does. I think this is something that we should pursue, and something we should keep in mind when designing how this should work, but actually implementing custom disk formats for entries is something we should defer until after we have something useful for sharing components in memory. Agreed? (We can resolve bugs like bug 679942 without custom disk formats.)
So, then what is left is the idea of sharing some kind of compiled in-memory representation of a resource (originally fetched over HTTP) across documents, in such a way that we can ensure that the compiled representation is not stale according to HTTP caching rules. I do not think that this is hard to do. But, also, I think most of this logic should live outside of Necko. And, definitely, we shouldn't add non-Necko dependencies on nsICache*. I think we will be able to add a very simple (easy to use, and easy to implement) conditional request API to Necko, along with some tuning hints like "do not store the response to this request in the memory cache," so that we could easily and efficiently implement a memoization service on top of (and outside of) Necko. I would like to talk to bz, Jeff, Luke, and/or Joe to make sure I am understanding exactly what they need before I make a more concrete suggestion for an API to implement.
(In reply to Jeff Muizelaar [:jrmuizel] from comment #15)
> The CachedResource and CachedResourceLoader act completely separately from
> the network layer and it's associated cache. There are a couple of
> implications from this:
>
> 1. These memory caches are per process.
One advantage of per-process caches is that those cached representations can be object graphs with pointers between the objects. A multi-process cache would be more limited in what the cached representations can look like. The second main advantage is that the compilers of the cached representations (the JS compiler, the font engine, etc.) are completely sandboxed, so that an error in any of them only affects the content process it is operating in. We should make sure we are happy with the security/performance tradeoff we make here.
> 2. The caching logic is duplicated in the network layer and in
> CachedResourceLoader (see
> CachedResourceLoader::determineRevalidationPolicy())
>
> 3. If a resource isn't in the local memory cache it will contact the network
> layer which can service it as needed. (In chrome this will do the IPC etc.)
I am thinking of doing something very similar to what WebKit does, except that #2 is eliminated and #3 becomes "if Necko says that our cached resource isn't stale, use it; otherwise get the new bits from Necko and compile them into a new cached representation." But, I wouldn't be surprised if we ended up doing even #2 in some cases for any multi-process product we build, for performance reasons. IMO, #2 is not that big of a deal, especially if you can just factor out that duplicated logic into a single implementation.
Comment 19•13 years ago
Right; there's no problem with _runtime_ duplication of the logic. I just don't want us having two copies of the _code_ around.
Comment 20•13 years ago
This would definitely help with downloadable fonts where we'd like to be able to cache activated font references (platform specific entities which are the result of sanitizing the font data and activation via OS calls).
Comment 21•13 years ago
It seems like you can already do something like this today, without modifying Necko at all. Look at nsICachingChannel::LOAD_ONLY_IF_MODIFIED:
/**
* This load flag controls what happens when a document would be loaded
* from the cache to satisfy a call to AsyncOpen. If this attribute is
* set to TRUE, then the document will not be loaded from the cache. A
* stream listener can check nsICachingChannel::isFromCache to determine
* if the AsyncOpen will actually result in data being streamed.
*
* If this flag has been set, and the request can be satisfied via the
* cache, then the OnDataAvailable events will be skipped. The listener
* will only see OnStartRequest followed by OnStopRequest.
*/
const unsigned long LOAD_ONLY_IF_MODIFIED = 1 << 31;
Thus, you can build a cache on top of Necko that works as follows:
1. Store your structured data in a URL -> object map.
2. When you need to refresh your cache, check to see if the URL is in the map. If it is, pass the LOAD_ONLY_IF_MODIFIED option to the channel that would normally load the resource; in your OnDataAvailable, replace the entry in the URL -> object map with the new compiled representation. If you never get an OnDataAvailable, then your cached object will not have been changed.
Now, this doesn't solve the problem of how to share this URL -> object map across multiple tabs. But, that isn't necessarily a problem to be solved by Necko, but by some kind of higher-level cache manager.
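The refresh logic in steps 1 and 2 can be sketched as follows. The listener and map types are simplified stand-ins, not the real nsIStreamListener/nsICachingChannel interfaces: the key property is that the compiled object is replaced only if OnDataAvailable actually fires.

```cpp
#include <map>
#include <memory>
#include <string>

// Simplified stand-in for a Necko stream listener driving a
// URL -> compiled-object map under LOAD_ONLY_IF_MODIFIED semantics:
// if no data is streamed, the cached compiled object stays put.
struct CompiledObject {
  std::string compiledFrom;  // e.g. the source the object was built from
};

using CompiledCache =
    std::map<std::string, std::shared_ptr<CompiledObject>>;

class RefreshListener {
public:
  RefreshListener(CompiledCache& cache, std::string url)
      : mCache(cache), mUrl(std::move(url)) {}

  // Called only when the resource was modified (new bits streamed).
  void OnDataAvailable(const std::string& newBody) {
    mPending = std::make_shared<CompiledObject>(CompiledObject{newBody});
  }

  // Always called; commit the replacement only if data arrived.
  void OnStopRequest() {
    if (mPending) {
      mCache[mUrl] = mPending;  // replace the stale compiled object
    }
    // else: no OnDataAvailable, so the cached object is still valid.
  }

private:
  CompiledCache& mCache;
  std::string mUrl;
  std::shared_ptr<CompiledObject> mPending;
};
```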
One thing that people might not realize: Necko does *NOT* cache HTTP resources in memory in the usual case. Instead, it relies on the OS's filesystem buffer cache to keep recently-used cached resources in memory. The only exception is cached resources that are marked no-store, which ARE stored in the memory cache. However, no-store is uncommon, especially for JS, CSS, and webfonts, so it might not be worth implementing a special optimization for that.
AFAICT, there *may* be one important limitation of the LOAD_ONLY_IF_MODIFIED flag: If we need to send a conditional request to the server and get a 304 Not Modified response, ideally we should not be calling OnDataAvailable when the LOAD_ONLY_IF_MODIFIED flag was passed, but it seems like we are. However, fixing that seems like it would be very simple.
Thoughts?
Comment 22•13 years ago
(In reply to Brian Smith (:bsmith) from comment #21)
> 1. Store your structured data in a URL -> object map.
> 2. When you need to refresh your cache, check to see if the URL is in the
> map. If it is, pass the LOAD_ONLY_IF_MODIFIED option to the channel that
> would normally load the resource; in your OnDataAvailable, replace the entry
> in the URL -> object map with the new compiled representation. If you never
> get an OnDataAvailable, then your cached object will not have been changed.
3. In OnStopRequest, pull the object out of the URL -> object map.
> One thing that people might not realize: Necko does *NOT* cache HTTP
> resources in memory in the usual case. Instead, it relies on the OS's
> filesystem buffer cache to keep recently-used cached resources in memory.
Besides making Necko handle LOAD_ONLY_IF_MODIFIED for 304 Not Modified correctly, it may also be worth using the LOAD_ONLY_IF_MODIFIED flag as a hint to Necko to use fadvise(FADV_NOREUSE) or similar when reading the resource out of the disk cache.
Comment 23•13 years ago
> Thus, you can build a cache on top of Necko that works as follows:
No, you actually can't. Imagelib tried to do this, in fact. It broke badly when redirects were involved...
Comment 24•13 years ago
(In reply to Boris Zbarsky (:bz) from comment #23)
> > Thus, you can build a cache on top of Necko that works as follows:
>
> No, you actually can't. Imagelib tried to do this, in fact. It broke badly
> when redirects were involved...
I guess you are talking about bug 552605. Are there other such cases that are problematic? Is there some reason we couldn't just fix LOAD_ONLY_IF_MODIFIED to do something sensible for redirects and then use the technique I described (which is, AFAICT, almost exactly what imagelib is/was doing)?
Comment 25•13 years ago
> Are there other such cases that are problematic?
I don't know. I haven't thought about it for a bit.
> Is there some reason we couldn't just fix LOAD_ONLY_IF_MODIFIED to do something sensible
> for redirects
I couldn't think of a way to make it do something sensible back when we were looking at bug 552605, but if someone else can figure it out, great.
Updated•9 years ago
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → DUPLICATE