Closed
Bug 175600
Opened 22 years ago
Closed 15 years ago
Only 8192 objects (entries) can be stored in disk cache.
Categories: Core :: Networking: Cache, defect
Tracking
RESOLVED FIXED
mozilla1.9.3a5

Tracking | Status
---|---
blocking2.0 | alpha5+
People
(Reporter: thorgal, Assigned: alfredkayser)
References
(Blocks 1 open bug)
Details
Attachments (4 files)
- patch (deleted), darin.moz: review+, Biesinger: superreview+
- image/jpeg (deleted)
- image/png (deleted)
- image/png (deleted)
Mozilla 2002100710/Linux-i686
I've configured a rather large disk cache: 160MB. However, it never seems to fill
up completely; instead it oscillates around the 100MB mark (a few MBs back and
forth). I've noticed that the "Number of entries:" value in "about:cache" never
exceeds 8192. It looks as if the cache can only store 8192 different objects.
Is this intended behavior?
Comment 1•22 years ago
Could it be a Linux OS limit of 8k files per directory? Why would an application
set its own limits rather than follow the system APIs?
Reporter
Comment 2•22 years ago
Not really. The filesystem I'm using can handle up to 2^31 (i.e. over two
billion) files per directory.
Comment 3•22 years ago
This is a limit of the current disk cache design. It is related to bug 110163.
We would like to make the size of the cache map file dynamic, and we have a
preliminary plan, but we've been working on more critical bugs. I'd really like
to fix it though, so I'll set the priority level to 2.
Status: NEW → ASSIGNED
OS: Linux → All
Priority: -- → P2
Target Milestone: --- → mozilla1.3beta
Comment 4•22 years ago
Using Mac Mozilla v1.1 for OS X (FreeBSD), any cache directory containing more
than 3,000 files makes access to cache files (like adding/deleting)
exponentially slower. Using the default cache size of 50MB, the cache on my
wife's profile finally had to purge and took an hour to complete (had about
8,000 files). If you change the cache design, I'd encourage the old-fashioned
Netscape UNIX tree structure that used two-letter hex subdirectory names to
keep directory contents low. For example, you had .../cache/aa, .../cache/ab
and .../cache/ac (and so on), where each subdirectory held only a small
(<1000?) number of files.
Comment 5•20 years ago
Does anyone know why this bug has its target milestone set to mozilla1.3beta? As
far as I know it's not fixed in Mozilla 1.7 or Firefox 0.9.
This bug is related to the cache design and it seems to me that it won't be fixed
until Mozilla 2.0... :(
Compared to IE, Mozilla's cache sucks.
Comment 6•20 years ago
Vladimir, could your approach in unified storing help in this area?
Comment 7•20 years ago
(In reply to comment #6)
> Vladimir, could your approach in unified storing help in this area?
That's for Darin to answer; the storage interface itself doesn't have any
similar limitation. It all depends on Darin's mozStorage-based cache
implementation.
Comment 8•20 years ago
The mozStorage-based cache impl doesn't have this bug.
http://lxr.mozilla.org/mozilla/source/netwerk/cache/src/nsDiskCacheDeviceSQL.cpp
(not yet part of the build)
Comment 9•20 years ago
(In reply to comment #8)
> the mozStorage based cache impl doesn't have this bug.
>
> http://lxr.mozilla.org/mozilla/source/netwerk/cache/src/nsDiskCacheDeviceSQL.cpp
>
> (not yet part of the build)
Hm... Does it mean this bug is fixed in the specified version of
nsDiskCacheDeviceSQL.cpp? And, therefore, will be fixed in the next Mozilla version?
Comment 10•20 years ago
> Hm... Does it mean this bug is fixed in the specified version of
> nsDiskCacheDeviceSQL.cpp? And, therefore, will be fixed in the next Mozilla
> version?
Yes, this code fixes the problem, but it probably won't be enabled in the
default builds of Mozilla until next year sometime when we start on 1.9 alpha.
I don't expect this to be enabled for 1.8. Our intent is to move most if not
all of Mozilla's profile data into a SQLite database, and we're only just
getting started.
Assignee
Comment 11•19 years ago
Since the fix, you can now raise the upper limit on the number of files in the
disk cache:
/netwerk/cache/src/nsDiskCacheMap.h, line 91 -- #define kMaxRecordCount 8192
Just change the number to a bigger power of two (2<<xx), such as 16384 or
32768..., compile, test and run...
Assignee
Comment 13•19 years ago
As nsDiskCacheMap can now grow and shrink dynamically, the size is not really limited, but a safety limit is always desirable.
Changing the max from 8192 to 16384 means that the _CACHE_MAP file can now grow to about 260KB (instead of 132KB). A bigger limit is not a problem disk-space-wise, but will make the current internal structure less efficient (buckets will become too big). OK, the explanation is now longer than the patch, so asking for a review...
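The 132KB/260KB figures above can be sanity-checked with quick arithmetic. This is a sketch under assumptions not stated in the bug: each map record is taken to be four 32-bit fields (16 bytes), and the header size of roughly 1KB is a guess, which is why the totals land near, rather than exactly on, the quoted numbers.

```python
# Back-of-the-envelope check of the quoted _CACHE_MAP sizes.
# Assumption: each record is four 32-bit fields (hash number,
# eviction rank, data location, meta location) = 16 bytes.
RECORD_SIZE = 4 * 4  # bytes per record (assumed)

def map_size_kb(max_records, header_bytes=1024):
    """Approximate _CACHE_MAP file size in KB for a given record count."""
    return (header_bytes + max_records * RECORD_SIZE) / 1024

print(map_size_kb(8192))   # 129.0 -> in the ballpark of the quoted 132KB
print(map_size_kb(16384))  # 257.0 -> in the ballpark of the quoted 260KB
```

The doubling of the file size when the record count doubles is the point: the map grows linearly with kMaxRecordCount.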
Attachment #208963 - Flags: review?(darin)
Comment 14•19 years ago
Comment on attachment 208963 [details] [diff] [review]
One line patch to increase max. disk cache size (in number of entries)
As the number of cache entries increases the number of hash collisions also increases. The cache doesn't deal well with hash collisions (it just dooms the older of the two entries). Perhaps if we want the cache to grow more, we should solve the hash collision problem too?
Attachment #208963 - Flags: review?(darin) → review+
Assignee
Comment 15•19 years ago
My experience so far is that there are very few hash collisions, so let's postpone that issue to the SQL version... ;-)
Comment 16•19 years ago
It's funny, I recall observing a hash collision on http://slashdot.org/. The unfortunate thing in that case was that the top-level HTML file was colliding with some small item on the page, and as a result the top-level HTML file was not being cached. I'm sure the moons had to be perfectly aligned for that one to happen, and for me to notice it, but nonetheless that is what I observed.
Comment 17•18 years ago
I would also like to see the UI allow me to set a disk cache size larger than 999KB. With modern disk drives in the hundreds of GB, this limit makes no sense.
Comment 18•18 years ago
John, there are at least three products you could mean, so please file that issue in the appropriate product.
Updated•18 years ago
Assignee: gordon → dcamp
Status: ASSIGNED → NEW
Comment 20•18 years ago
I don't think we want to block on this. It'd be nice to have a fix, and the posted one works modulo Darin's concerns...
Flags: blocking1.9? → blocking1.9-
Whiteboard: [wanted-1.9]
Updated•18 years ago
Attachment #208963 -
Flags: superreview?(dveditz)
Comment 21•18 years ago
Seems like growing the cache size without fixing the collision problem is not a great idea. CCing Rob Arnold in case he has cycles. Rob, are you or Biesi going to have a chance to check this out?
Comment 22•18 years ago
It seems to me that growing the cache size, even with the collision problem (I have filed bug 387545 on the issue), should be fine. The disk cache seems to use an LRU eviction policy, according to nsDiskCacheDevice.cpp line 505. If increasing the number of entries in the disk cache causes extra collisions, then those collisions cause the older items to be evicted from the cache, correct? But those are exactly the entries that would have been evicted anyway, because the cache currently has a limit of 8192 entries. It seems like increasing the number of cache entries can only turn cache misses into cache hits, and never a cache hit into a cache miss. In other words, it can do no harm. If the cache even sometimes evicted the newer entry on a collision, that would be a completely different scenario.
Am I reading the code incorrectly, or is there a flaw in my logic, or something that I'm not taking into account that makes the current patch not a great idea?
Comment 23•18 years ago
> If increasing the number of entries in the disk cache causes extra collisions,
> then those collisions cause the older items to be evicted from the cache,
> correct?
No. The item deleted is the item that collides, which may be very recently used. LRU eviction is only used when adding an entry that does not collide, but which pushes the total memory usage over the limit.
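The two eviction paths just described can be sketched with a toy model. This is an illustration only, not the real nsDiskCacheMap code: the deliberately weak 4-bit hash and the tiny capacity are hypothetical choices made to force collisions, but the policy mirrors the comment: a colliding entry dooms the existing one regardless of recency, while LRU-style eviction only runs when capacity is exceeded.

```python
from collections import OrderedDict

def tiny_hash(key):
    # Deliberately weak 4-bit hash so collisions are easy to trigger.
    return sum(map(ord, key)) % 16

class ToyDiskCache:
    """Toy model of the two eviction paths: collision dooms the old
    entry (even if recently used); LRU eviction only on overflow."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # hash -> key, oldest first

    def store(self, key):
        h = tiny_hash(key)
        if h in self.entries:                 # collision: doom the old entry
            del self.entries[h]
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # capacity full: evict oldest
        self.entries[h] = key

cache = ToyDiskCache(capacity=2)
cache.store("ad")
cache.store("da")   # "ad" and "da" hash alike: "ad" is doomed despite being recent
print(list(cache.entries.values()))  # ["da"]
```

The point of the model is that the collision path ignores recency entirely, which is why Darin's review worried about collisions rather than about LRU behavior.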
Comment 24•18 years ago
Yes, I understand that the object that collides might be very recently used, and thus could be evicted because of the collision. But that would have happened both with a small number for kMaxRecordCount and with a large number for kMaxRecordCount, right? In other words, increasing kMaxRecordCount does not cause evictions to occur that would not have otherwise occurred, correct? If so, that means that any evictions caused by *extra* collisions are evictions that would have occurred with a small number for kMaxRecordCount, so increasing kMaxRecordCount can cause no harm.
Updated•17 years ago
Flags: wanted1.9+
Whiteboard: [wanted-1.9]
Comment 25•17 years ago
(In reply to comment #23)
> No. The item deleted is the item that collides, which may be very recently
> used. LRU eviction is only used when adding an entry that does not collide,
> but which pushes the total memory usage over the limit.
Are collisions a real problem? If so, should we take a pass at improving them in a second bug?
Comment 26•17 years ago
Is there any sense in looking at this with the cache service in its current implementation? I am wondering if it would be pertinent to migrate the cache service to an SQLite database; this would solve this issue and other corruption issues when Firefox is closed abruptly.
Updated•16 years ago
Attachment #208963 - Flags: superreview?(dveditz) → superreview?(cbiesinger)
Updated•16 years ago
Attachment #208963 - Flags: superreview?(cbiesinger) → superreview+
Updated•16 years ago
Keywords: helpwanted → checkin-needed
Comment 27•16 years ago
I have compiled my own version of Firefox (from the 3.0.1 branch) with this patch and have experienced no issues at all; the cache grows nicely and there don't seem to be any collisions.
As items from the cache aren't evicted as often, Firefox seems to build up a better cache which, from purely unscientific surfing, certainly improves caching performance.
Comment 28•16 years ago
I have used this patch for years and found no problems.
Comment 29•16 years ago
I've noticed something quite strange with this patch after using it for a while:
after the _CACHE_MAP_ grows from 256KB to 512KB (there are about 12700 items in my cache), caching seems to stop.
Sites where images were always being cached are now never cached.
Is there a limit with the _CACHE_MAP_ file?
Does anybody know why this is happening?
Comment 30•16 years ago
If anybody wants a good test try http://favicoop.com/
Comment 31•16 years ago
Mark, beats me; I can't go above 7181 entries.
Comment 32•16 years ago
In fact, Firefox seems to stop caching once the cache has been filled. If you set the cache size to a very big value and use http://favicoop.com/
to fill the cache, caching won't happen again until the cache is cleared.
This certainly seems to be a separate bug - can anybody else reproduce?
Comment 33•16 years ago
Just to clarify on the above:
Clear your cache the old-fashioned way, by deleting all the files from the profile folder.
Now, in the GUI, Tools > Options > Offline Storage, enter a really big number here, like 2000 MB.
Surf to http://favicoop.com/ to fill up the cache entries; keep surfing until about:cache tells you that you've got 8192 items in the cache, or as close as it will go.
Now visit another site which is normally cacheable, but isn't currently cached.
Right-click an image, select Properties, and note how Firefox hasn't cached it.
Looks like the eviction of objects when space isn't constrained is broken.
Comment 34•16 years ago
clearing checkin-needed because I don't think that's actually what dcamp wants.
Keywords: checkin-needed
Updated•16 years ago
Flags: blocking1.9.1?
Comment 36•16 years ago
Please don't nominate random bugs for blocking. This didn't block 1.9, it's not going to block 1.9.1, especially not when we're in the endgame there.
Flags: blocking1.9.1? → blocking1.9.1-
Comment 37•16 years ago
But why are you not fixing this bug? After all, Firefox's disk caching mechanism is the most ancient among all the advanced browsers: it just caches 8192 entries. So why have you given an option to limit the disk cache size in MiB?
Isn't that broken?
All the other browsers limit their cache only by its size, not by entries. If you limit the number of entries in the Firefox cache, then you should give an option to limit the number of entries.
Hope this will be fixed in Firefox 3.5.1.
Comment 38•16 years ago
One thing more that I forgot:
You have begun migrating gradually from old storage technologies to new ones such as SQLite/JSON with the 1.9 branch. So why not upgrade the Firefox cache to SQLite, as you have done with the Places feature?
Updated•16 years ago
Flags: wanted1.9.2?
Assignee
Comment 39•16 years ago
There was an attempt to convert the cache code to SQLite, but that was much slower. Anyway, that is a different topic than this bug: the number of items in the cache.
Note, raising the limit of 8192 doesn't impact Firefox in normal settings, it
will only help those people that set a very large cache space.
Comment 40•16 years ago
(In reply to comment #39)
> Note, raising the limit of 8192 doesn't impact Firefox in normal settings, it
> will only help those people that set a very large cache space.
This bug must impact overall performance - if you don't set a large cache size, your cache gets blown away when you download something big, and then your cache isn't primed for all your favourite sites.
This bug was first reported in 2002; surely it's time to fix it?
Assignee
Comment 41•16 years ago
Bugzilla is not a discussion forum.
P.S. Downloading big items will not impact the cache that much; as soon as the target location of the download is known, the item is stored outside the cache.
Comment 42•16 years ago
That is not so. I downloaded a 60 MB file to my 'Downloads' folder, and when I tried to download that file again, it was saved within seconds to my specified location. So it's clear that Firefox had stored that 60 MB in its cache.
Comment 43•16 years ago
This screenshot shows the cache full with 8192 items, but not full to disk capacity.
Comment 44•16 years ago
This screenshot shows the properties of an image from a site which is normally cacheable. The image is not cached, and caching does not happen for any new content, although existing items are served from the cache. No more caching seems to occur, even though the cache is set big enough.
Comment 45•16 years ago
As you can see, the image is normally cacheable and the cache functions as normal when cleared.
Comment 46•16 years ago
Has anybody noticed that the problem I described in comment #29 has become more pronounced with Firefox 3.5 RC (1-3)? If you set your cache size to, say, 250MB, it grows and eventually stops caching until cleared.
I have attached 3 images documenting this issue.
Updated•15 years ago
QA Contact: tever → networking.cache
Target Milestone: mozilla1.3beta → ---
Comment 47•15 years ago
Oh my, now I understand why I have never seen any gain in performance from changing the cache size to 500MB. Now that I have checked, it stays at around 60 :(
Shouldn't this be an important performance bug for 3.6?
Updated•15 years ago
Keywords: checkin-needed
Comment 48•15 years ago
Hi, this bug has been open since 2002, and while other browsers like Chrome, Safari and even IE are improving their performance, Firefox still has this 8192 cache entry limit.
If we take 14 KB as an average cacheable HTML object size, only 115 MB will be used for the disk cache at maximum.
IE now uses 1GB by default for its disk cache. Why couldn't Firefox do the same, or even better?
Are there any plans to improve Firefox's disk cache with bigger default values, better cache replacement algorithms, etc.?
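The 115 MB ceiling follows from simple arithmetic; this is a sketch where the 14 KB average object size is the commenter's assumption, not a measured value:

```python
MAX_ENTRIES = 8192     # hard entry limit under discussion in this bug
AVG_OBJECT_KB = 14     # assumed average cacheable object size (commenter's figure)

max_cache_kb = MAX_ENTRIES * AVG_OBJECT_KB
print(max_cache_kb)          # 114688 KB
print(max_cache_kb / 1000)   # ~114.7, the quoted "115 MB" in decimal units
print(max_cache_kb / 1024)   # 112.0 MiB in binary units
```

Either way, the useful cache size is bounded by the entry count long before a user-configured 500MB or 1GB limit can be reached.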
Comment 49•15 years ago
Malbrouck, this bug is just waiting to be checked in, then this bug can be closed.
Comment 50•15 years ago
Not really; that patch wasn't checked in because it doesn't really fix the problems in our cache, of which there are many (hash collisions and fixed object count not the least). We need to overhaul our cache in a pretty bad way.
Comment 51•15 years ago
Vlad, you are probably right, but something has to be done to improve the current situation. If moving from 8K to 16K changes the amount of collisions from 1% to 4%, it will still mean that a lot more files are being served from the local cache.
The only reason this bug has 30 votes instead of 30K votes is that people believe the GUI, which lies to them when they "increase" the size of the cache.
In 2010, with the dozens of CSS, JS and image files used by the social sites, this should be a blocking bug.
Comment 52•15 years ago
So should this actually be checked in? Seems like we've been at this point before in comment 34.
Updated•15 years ago
Keywords: checkin-needed
Comment 53•15 years ago
So let's fix the hash (bug 290032), see if it reduces collisions, and if it does, then land this. I don't think we should wait for some larger cache overhaul to fix this.
Updated•15 years ago
Blocks: http_cache
Comment 54•15 years ago
Over 8 years and yet no one has a clue how to fix this? Maybe remove the entry limit but cap the maximum storage size at, say, 500MB, so that it won't cache anything after it reaches that limit, and users just need to manually remove the big files in the cache which take up that space (mainly YouTube vids, etc.).
Comment 55•15 years ago
(In reply to comment #53)
> So let's fix the hash (bug 290032), see if it reduces collisions, and if it
> does, then land this. I don't think we should wait for some larger cache
> overhaul to fix this.
Sorry, but I still fail to understand why it is even related. Will the proposed patch here have any negative effect? If not, then there is no reason not to land it. The other issues are clearly more complicated, so solve them later... even redesign the whole caching code, but for 3.7 please do something.
Should this bug wait until Android and Maemo users discover that they pay more for their cellular bandwidth because of this issue?
I have lately switched from an "unlimited" landline to a "limited" cellular-based internet connection, and this bug by itself makes me consider switching browsers.
Assignee
Comment 56•15 years ago
The problem is that with a larger cache, there will be more issues with the hashing.
Bug 559729 proposes to fix all three bugs:
1) we should see how well bug 175600 (8192 file max count in cache), bug 290032
(better hashing algorithm), and bug 193911 (double default cache size) play
together, and land them if they improve our hit rates.
So, it is a matter of doing all three, of which the hash part is the hard part.
My idea would be to start with the numbers, and develop a hash solution in parallel.
Comment 57•15 years ago
But will the benefits outweigh the problems we may find with collisions? Let's think of the end user.
5 minutes of using Google Maps fills the cache with thousands of PNG files. We do need a new cache system, but could we just land this for now, see how it goes on mozilla-central, let the QA teams and end users test the builds, and fix the collisions later?
Updated•15 years ago
Summary: Only 8192 objects can be stored in disk cache. → Only 8192 objects (entries) can be stored in disk cache.
Comment 58•15 years ago
Bug 290032 is fixed now, so the risk of collisions is reduced. I'm all for increasing the default cache size, but can't we land this change now, to immediately benefit users who are exhausting their cache entry count before they exhaust their cache storage? Indeed, I wonder if we should be growing it even more substantially - is 10-fold an unreasonable notion?
blocking2.0: --- → alpha5+
Priority: P2 → --
Hardware: x86 → All
Target Milestone: --- → mozilla1.9.3
Version: Trunk → unspecified
Updated•15 years ago
Assignee: dcamp → nobody
Comment 59•15 years ago
Patch landed and followup bug 569709 filed to figure out what actually makes sense for this limit.
http://hg.mozilla.org/mozilla-central/rev/6138ea8d53c3
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Updated•15 years ago
Assignee: nobody → alfredkayser
Target Milestone: mozilla1.9.3 → mozilla1.9.3a5
Comment 60•14 years ago
(In reply to comment #58)
> [...] Indeed, I wonder if we should be growing [the default cache size] even
> more substantially - is 10-fold an unreasonable notion?
The current rule of thumb is that the size of websites triples every 5 years.
So, if we take 2002 and 50MB as the starting point, we end up with the following formula:
Math.round(Math.pow(3, ((new Date()).getUTCFullYear() - 2002) / 5) * 50)
2010 = 290
2011 = 361
2012 = 450
2013 = 561
[...]
2020 = 2610
Looks pretty reasonable to me.
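The same rule of thumb can be written as a runnable sketch. This is a direct translation of the JavaScript formula above; the 50 MB / 2002 baseline and the triples-every-5-years factor are the commenter's assumptions, not measured data.

```python
def suggested_cache_mb(year, base_mb=50, base_year=2002, factor=3, period=5):
    """Default cache size (MB) suggested by the 'websites triple
    every 5 years' rule of thumb, starting from 50 MB in 2002."""
    return round(factor ** ((year - base_year) / period) * base_mb)

for y in (2010, 2011, 2012, 2013, 2020):
    print(y, suggested_cache_mb(y))
# Matches the table above: 290, 361, 450, 561, 2610
```

Plugging in the years listed in the comment reproduces the quoted values, so the table and the formula agree.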
Comment 61•14 years ago
I still have this bug.
Firefox 3.6.13
Mac OS X 10.6.6
about:cache
Disk Cache -> Number of entries:
My number of entries never exceeds 8192
(using between 7 and 15% of my 1GB maximum).
I always have (in about:config):
browser.cache.disk_cache_ssl : true
Comment 62•14 years ago
The fix for this isn't in Firefox 3.6. It is in Firefox 4 betas.
Updated•14 years ago
Flags: wanted1.9.2?