Closed Bug 441751 (CVE-2009-0358) Opened 17 years ago Closed 16 years ago

Directives not to cache pages ignored.

Categories

(Core :: General, defect, P1)

defect

Tracking

()

RESOLVED FIXED
mozilla1.9.1b3

People

(Reporter: paul.nel, Assigned: dcamp)

References

Details

(4 keywords, Whiteboard: [sg:want] post 1.8-branch)

Attachments

(5 files, 3 obsolete files)

User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9) Gecko/2008052906 Firefox/3.0

From your description, you're encountering a problem with disabling the client-side browser cache in Firefox. I've performed some tests on my side as well. Generally, the following code should be sufficient to disable client-side caching:

=======================
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Response.Cache.SetAllowResponseInBrowserHistory(False)
    Response.Cache.SetCacheability(HttpCacheability.NoCache)
    Response.Cache.SetNoStore()
    Response.Expires = 0
    Response.Write("<br/>" & DateTime.Now.ToLongTimeString())
End Sub
==============================

My local tests showed that it works correctly in IE7 and Firefox 2.0. However, it does not work in Firefox 3.0; I think this is due to Firefox 3's implementation. BTW, the cache-control HTTP headers are advisory: honouring them is not mandatory and depends on the browser, so I suggest you also submit this issue to the Firefox community to see whether this is by-design behaviour.

Reproducible: Always

Steps to Reproduce:
1. Load page with "no-cache" or "no-store" directive
2. Go forward to next page
3. Click the Back button on the browser; the page loads from cache.

Actual Results:
Page loads from cache, but should re-load from the server or show a message that the page has expired.
Why did you mark this bug security sensitive? Doesn't look to me like a security issue that needs to be kept confidential. Do you have a live site that can be used to test this?
Keywords: regression
Product: Firefox → Core
QA Contact: general → general
Hi, I marked it as sensitive because when a page is marked as no-cache, it is quite often to ensure that sensitive info is not cached and can't be displayed again after clicking the browser's Back button. Unfortunately I do not currently have a live site where this can be demonstrated.

Further, I should perhaps refine my original description. The cache directives are ignored when you page through different wizard steps of the Microsoft .NET framework wizard control. To reproduce, add the following code to an aspx page, add a wizard control (call it Wizard1) to the page, compile, and run. If you use the browser's Back button after advancing a step, you will see that the time printed in the wizard is the cached time; in IE the step shows as expired. Although these are wizard steps, each is in fact emitted as a new HTML page (each step has the same page name, though).

Partial Class _Default
    Inherits System.Web.UI.Page

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        Response.Cache.SetAllowResponseInBrowserHistory(False)
        Response.Cache.SetCacheability(HttpCacheability.NoCache)
        Response.Cache.SetNoStore()
        Response.Expires = 0
        Response.Write("<br/>" & DateTime.Now.ToLongTimeString())
    End Sub

    Protected Sub Wizard1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Wizard1.Load
        Wizard1.ActiveStep.Title = (DateTime.Now.ToLongTimeString())
    End Sub
End Class

Regards
Paul
(In reply to comment #2)
> I marked it as sensitive because when a page is marked as no-cache, quite often
> it is to ensure that sensitive info is not cached and can't be displayed again
> after clicking the browser's back button.

Ah, OK. Keeping this bug report hidden doesn't really protect anyone, though. In fact, opening it increases the number of people who can investigate further, and means that people who are affected (assuming it is valid) can know about it and take steps to address the problem (by purging caches manually, etc.).
Group: security
Hi, I guess you are right. It should not be marked as confidential. Cheers /Paul
Hello everyone. We have detected this same issue affecting one of our applications. Firefox 3 users (Firefox 2, Safari, and IE6/7 users aren't affected) can use the Back button to view a page which contains sensitive data, without Firefox trying to retrieve it again from the server, even if cache-control directives are sent both in the HTTP response and in the HTML code. Currently this is the response we send:

http://pastebin.com/m261596f6 (HTTP)
http://pastebin.com/m7d550841 (HTML)

We've also tried many other combinations of cache-control directives, without success.
I don't know if it's only my case, but it's related to forms using the POST method. I've yet to implement session data, but you can test at http://www.pikanya.net/testcache/ : it sends cache-control directives, yet you can use the Back button without the page being reloaded from the server. Firefox 2 will show a dialog warning you that going back will redo the action, to prevent accidental double posting or affecting data. Firefox 3 just silently uses the cached page, ignoring any cache-control directive used.
The problem also occurs with Ajax requests. Firebug shows that the Ajax request is sent, but the data certainly comes from the previous request. If you append e.g. a timestamp to the end of the link/request path, it works, because FF3 thinks this is a new link.
Flags: blocking1.9.0.2?
Whiteboard: [sg:moderate]
This bug could be very bad for us. I have these settings in httpd.conf in Apache:

<Files ~ "\.(xml|swf)$">
Header set Cache-Control "no-cache, must-revalidate"
</Files>

Since Firefox 3 (now on 3.0.1), they are ignored, and each time I publish a new SWF file the browser still gives me the old one. For me it's just a small inconvenience, but if thousands of our users start calling our support team it will not be fun here. We need our clients' browsers to check whether XML and SWF files have been modified. We encouraged them all to use Firefox. Disabling the cache would cause too much traffic on our servers. Is there any workaround?
Setting 'Cache-Control: no-store' seems to prevent FF3 from caching things. This is still a rather horrible bug, though: trying to actually do validated caching as HTTP intended (sending a Last-Modified, checking If-Modified-Since, and sending a 304 when the page is still valid) is completely broken by this. Duplicating it is as simple as could be: set up your webserver to send 'Cache-Control: max-age=0, must-revalidate', and use Tamper Data or Firebug to watch the browser happily re-use its cache without validating. The HTTP spec is extremely clear that this is not allowed.
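For readers unfamiliar with the validated-caching flow the comment above describes, here is a minimal server-side sketch of the Last-Modified / If-Modified-Since / 304 handshake. This is an illustrative model only, not Mozilla or Necko code; the function name and return shape are my own.

```python
from email.utils import parsedate_to_datetime

def respond(resource_last_modified, if_modified_since):
    """Decide between a 304 and a full 200 for a conditional GET.

    resource_last_modified: HTTP-date string for the stored resource.
    if_modified_since: the client's If-Modified-Since header, or None.
    With 'Cache-Control: max-age=0, must-revalidate', a compliant
    client is expected to send this conditional request on every reuse.
    """
    if if_modified_since is not None:
        try:
            client_time = parsedate_to_datetime(if_modified_since)
            server_time = parsedate_to_datetime(resource_last_modified)
        except (TypeError, ValueError):
            pass  # unparsable date: fall through to a full response
        else:
            if server_time <= client_time:
                # Entity unchanged: empty body, client re-uses its copy.
                return 304, {"Last-Modified": resource_last_modified}
    # No usable validator from the client, or the entity changed.
    return 200, {
        "Last-Modified": resource_last_modified,
        "Cache-Control": "max-age=0, must-revalidate",
    }
```

The bug reported here is that FF3 skips sending the conditional request at all on history traversal, so the server never gets the chance to answer 304 or 200.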
I've been able to reproduce this issue on many sites with sensitive data: just log in, fill in any form inside the session, and log out; if the last page was requested with POST, you'll be able to use the Back button to see it again. You are still unable to interact with it if the server has properly destroyed the session, but things like tax information from your government or your last brokerage/banking operation could be visible to unintended people with access to the computer, or even to malicious software that can read the cache.
(In reply to comment #0)
> Steps to Reproduce:
> 1. Load page with "no-cache" or "no-store" directive
> 2. Go forward to next page
> 3. Go back button on browser, page loads from cache.
> Actual Results:
> Page loads from cache, but should re-load from server or show message that page has expired.

To Paul Nel (bug opener): Do you use Fx with bfcache=On? (Fast Back & Forward; a non-zero browser.sessionhistory.max_total_viewers. Fx's default is browser.sessionhistory.max_total_viewers=-1, where -1 meant (a) unlimited in the initial bfcache implementation, or (b) up to an internal maximum on trunk; I don't know which applies to Fx 3.) What happens when browser.sessionhistory.max_total_viewers=0 is set?

AFAIK, bfcache is never the "cache" defined by HTTP, even though the string "cache" is used in its name, and "open a new page then Back with bfcache=on" is similar to "open a new tab and switch to it, then go back to the original tab by clicking it". I.e. bfcache changed the meaning/action of the Back button from the traditional one (Back == Reload) to something similar to "tab hiding + tab switching", without sufficient documentation and without sufficient explanation to users. This has produced many confusions, claims like yours, and bugs like those listed in meta Bug 415889. Good grief...
(In reply to comment #8)
> Since I Firefox 3 (now on 3.0.1), they are ignored,

Abe-san, do you mean "no problem with Fx 2, but the problem started to occur with Fx 3.0.0 and continues with Fx 3.0.1"?

For SWF, Ajax, etc. pages which are held in bfcache: as I wrote, data held in bfcache is similar to a (hidden) tab. I've seen bug report(s) for phenomena caused by continued execution in the "hidden tab":
- High CPU usage because of a heavy workload from script in the "hidden tab"
- Continuous requests to the server via Ajax from the "hidden tab"
Sorry, but I can't recall the bug numbers.
I've recalled Bug 327790 Comment #11 by Jesse Ruderman, which refers to the following document:
> http://developer.mozilla.org/en/docs/Using_Firefox_1.5_caching
> (The three major cases in which a page is not held in bfcache)
> the page uses an unload handler
> the page sets "cache-control: no-store"
> the page sets "cache-control: no-cache" and the site is HTTPS.
The simplest workaround for unwanted bfcache behavior seems to be <body onUnload="return;">.
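The three exclusion rules quoted above can be modelled as a small predicate. This is a sketch of the documented rules only, not Firefox's actual eligibility check (which has more conditions); the function name and signature are my own.

```python
def bfcache_eligible(has_unload_handler, cache_control, is_https):
    """Model the three documented cases in which Firefox 1.5+ declines
    to keep a page in the back/forward cache (bfcache)."""
    directives = {d.strip().lower() for d in cache_control.split(",") if d.strip()}
    if has_unload_handler:
        return False          # unload handler present: never bfcached
    if "no-store" in directives:
        return False          # no-store always opts out
    if "no-cache" in directives and is_https:
        return False          # no-cache opts out only over HTTPS
    return True
```

This also explains the `<body onUnload="return;">` workaround: a page with any unload handler, even an empty one, fails the first check.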
FYI, Bug 327790 Comment #11 also says:
> Cache-control has to be sent as an http header, not a <meta http-equiv>. See bug 202896.
So please exclude the case where "cache-control: no-store or no-cache" appears only in a <meta> tag.

(In reply to comment #0)
> =======================
> Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
>     Response.Cache.SetAllowResponseInBrowserHistory(False)
>     Response.Cache.SetCacheability(HttpCacheability.NoCache)
>     Response.Cache.SetNoStore()
>     Response.Expires = 0
>     Response.Write("<br/>" & DateTime.Now.ToLongTimeString())
> End Sub
> ==============================

Firefox won't read your program source held on the server; Fx reads only the HTTP headers that are really sent. To Paul Nel (bug opener): What are the really sent HTTP headers? Is there any conflicting or ambiguous specification in them? (e.g. Cache-Control: no-cache, private, max-age=100, max-age=50, ...) (The HTTP 1.1 spec is not so clear about conflicting/ambiguous specifications.)
I'm not sure this is the same thing that comment #10 is talking about, but if you go to http://marijn.haverbeke.nl/ffcache/ , and press the button a bunch of times, you can use your favourite http-debugger to verify that FF3 does not make a new request for every XHR call, while a "Cache-Control: max-age=0, must-revalidate" header *is* being sent. At least, that's what happens on my setup.
I disabled Firebug 1.2.0b6 and suddenly it works as expected. Maybe we blamed the wrong application?
Same here. I can't even begin to imagine why and how Firebug is preventing these requests, but it seems this is indeed not a FF problem.
Confirmed that this behavior only happens if firebug is enabled.
(In reply to comments #16, #17, #18)
When Firebug is used, the problem of Bug 398249 can occur if certain Firebug options are enabled. It can make the HTTP flow funny (it looks like a protocol violation).
To Abe, Marijn Haverbeke, ps@gft.com: Is the phenomenon of Bug 398249 involved in your problem?
No, that looks like a different problem to me.
(In reply to comment #7)
> The problem is also with Ajax requests. FireBug shows that ajax request is sent,
(snip)

To Lukasz: Can you reproduce your problem without Firebug?
To Paul Nel (bug opener): Are you using Firebug? If yes, can you reproduce your problem without it?
In reply to #19: I have not seen any JavaScript errors. I can only say I receive outdated XML and SWF files (read from cache instead of revalidated). When I deactivated Firebug, the problem was gone. When I switched it back on, the problem was immediately back.
(In reply to comment #19) Same as Abe here. It could be an interaction between FF3 and Firebug, but it seems to be firebug's fault only.
Can someone who can reproduce this easily find a regression range, using binary search through builds labelled "-trunk" at http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/ ? That would help a lot in figuring out why this behavior changed in Firefox 3.
Honza, you were doing some of the network panel bits... Is this a known problem?
(In reply to comment #6)
> http://www.pikanya.net/testcache/

Simplest "FORM POST to same URI" case, with
Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0

Test result with Fx 3.0.1, with only the LiveHTTPHeaders extension, on MS Win XP SP3:
1. Load test page => /testcache/ (say P-1; nothing in about:cache)
2. Submit (POST) => same /testcache/ (say P-2; nothing in about:cache)
3. Submit (POST) => same /testcache/ (say P-3; nothing in about:cache)
4. Submit (POST) => same /testcache/ (say P-4; nothing in about:cache)
5. Load new URI in the tab (say Q-1)
6. Back => nothing happens
7. Expand the Back/Forward button => P-1 to P-4 and Q-1 are listed
8. Click P-4/P-3/P-2 => nothing happens
9. Click P-1 => HTTP GET was issued (a reload was executed)

The above was observed with both browser.sessionhistory.max_total_viewers = -1 and 0. It looks to be a history.forward/back/go problem when POST is combined with no-cache (and/or no-store). Firebug has no relation to the problem in your test case; the phenomenon with Firebug should be analyzed independently if it is a different phenomenon.

Comment #0 sounds to me like the phenomenon at step 6 above, with GET/Fx 3.0.0 (bfcache=On/Off is still unclear; I guess bfcache=On), without Firebug, although Comment #0 is possibly Firebug's issue like the SWF/Ajax+Firebug cases reported by other commenters.

To ps@gft.com: I recommend you open a separate bug for ease of analysis, because a minimum/reliable test case is already available from you, and because POST involves different issues from GET in many cases.

By the way, your case returns the following header:
> Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0
"private" means "enable caching (for a single user)", and conflicts with "no-cache", which disables caching. Although HTTP 1.1 says the more restrictive directive is to be used, and Fx treats this as a "no-cache" request rather than a "private" request, inconsistent/ambiguous/duplicated headers/parameters should be carefully excluded from a test case.
If such headers/parameters are used in a bug report, we have to start with needless analysis of HTTP 1.1's unclear spec (sometimes nothing is described) for inconsistent/ambiguous/duplicated headers/parameters, and then do needless analysis of Fx's design/spec/code for them as well.
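The "most restrictive directive wins" reading mentioned above can be illustrated with a small resolver. This is my own hedged sketch of one plausible interpretation, not how Firefox actually parses Cache-Control; the ordering and function name are assumptions.

```python
# Order from least to most restrictive, for this sketch's purposes only.
RESTRICTIVENESS = ["public", "private", "no-cache", "no-store"]

def effective_storage_policy(cache_control):
    """Return the most restrictive storage directive in a Cache-Control
    header value such as
    'private, no-cache, no-store, must-revalidate, max-age=0'.
    Non-storage directives (must-revalidate, max-age, ...) are ignored."""
    found = [-1]
    for directive in cache_control.split(","):
        name = directive.split("=")[0].strip().lower()
        if name in RESTRICTIVENESS:
            found.append(RESTRICTIVENESS.index(name))
    best = max(found)
    return RESTRICTIVENESS[best] if best >= 0 else None
```

Under this reading, the header quoted in comment #26 collapses to "no-store", which is why mixing "private" with "no-cache, no-store" only muddies the test case without changing the outcome.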
Another confirmation that this is only happening for me with Firebug enabled.
I've spent an hour or two working through the various Minefield nightly builds for Firefox 3. According to my tests, this behaviour is first observable in the build 2007-07-09-04-trunk (http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/2007/07/2007-07-09-04-trunk/); it is not observable in the build 2007-07-08-04-trunk. The contents of 'Help -> About' is: "version 3.0a7pre (Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9a7pre) Gecko/2007070904 Minefield/3.0a7pre)".

To identify this I used a fresh Vista virtual machine with no other software installed. I then installed a Minefield build, tested, and uninstalled Minefield, many times over. Installation was done from the Windows executable for each build.

In the build dated 8th July 2007, clicking the Back button results in a Resend/Cancel dialogue which will resubmit the request, as expected. In the build dated 9th July 2007, clicking the Back button results in a cached page being displayed with no request being sent to the server. This is despite "cache-control: no-cache, no-store, must-revalidate, max-age=0" being set.

The Minefield install had no addons: no Firebug, no DOM Inspector, no Feedback Agent. Hope this helps! :-)
(In reply to comment #28) I've uploaded the pages I used for testing to http://www12.asphost4free.com/ffcachetest/
(In reply to comment #28)
> In the build dated 8th July 2007 clicking the back button results in a
> Resend/Cancel dialogue which will resubmit the request as expected.
> In the build dated 9th July 2007 clicking the back button results in a cached page being displayed with no request being sent to the server.
> This is despite "cache-control: no-cache, no-store, must-revalidate, max-age=0" being set.
> The Minefield install had no addons. No Firebug, no DOM Inspector, no Feedback Agent.

Jason Deabill, thanks a lot for finding the regression range of the POST case.
To Paul Nel (bug opener): Is the regression range the same for your original problem in Comment #0?
To ps@gft.com: Can you verify the regression range? (your POST case)
It's a critical issue, especially since you can browse back through secured pages posted via HTTPS. Please fix it ASAP.
(In reply to comment #29)
> I've uploaded the pages I used for testing to http://www12.asphost4free.com/ffcachetest/

Tested with your case (without Firebug, on MS Win XP SP3). I could reproduce the POST problem.
With Fx 3.0.1: no warning about POST data upon Back, and no HTTP GET upon Back.
With Fx 2.0.0.14: warning about POST data, then Cancel.

But I couldn't reproduce the problem in the GET case (no-cache, no-store, ...) with either Fx 2.0.0.14 or Fx 3.0.1; an HTTP GET was issued upon Back every time. (This result perhaps indicates Comment #0 is a problem with Firebug...)

To Jason Deabill: Could you reproduce the problem in the GET case (no HTTP GET upon Back) without Firebug?

FYI: Bug 327790 may be related to the problem in the POST case.
> You could reproduce problem with GET case(No HTTP GET upon Back) without
> Firebug?

I have never reproduced this problem with GET (with or without Firebug). From my testing this is only a problem with POST. Is anyone able to tell me: is there anything I can do to assist in moving this issue out of UNCONFIRMED status?
(In reply to comment #33)
> I have never reproduced this problem with GET (with or without Firebug).
> From my testing this is only a problem with POST.

Comment #0 is for the GET case, and the bug opener says it is a "Fx 3 only problem (no problem with Fx 2)". So Comment #0, the original problem of this bug, is apparently a different problem from yours in the POST case. Jason Deabill, please open a separate bug for the POST case. (Additional tests with an onUnload script, var x = Math.random(), etc. will be required.)
(In reply to comment #34) > Comment #0 is for GET case Sorry but are you sure? I can't see any mention in the original issue description specifying GET or POST being the problem. To Paul Nel: Are you able to confirm the scope of your original issue?
(In reply to comment #35)
> Sorry but are you sure? I can't see any mention in the original issue
> description specifying GET or POST being the problem.

Oh, sorry for my confusing comment. I'd like to say: if Comment #0 were for the POST case, then since it says this was a Fx 3 only problem (no problem with Fx 2), I can't imagine the bug opener filing this bug without mentioning that the "POST data exists" warning no longer appears. So I guessed, and said, that Comment #0 was for the GET case.

Since Comment #0, the original problem of this bug, is still unclear, I believe the following is appropriate: open separate bugs for (A) the GET-related problem with Firebug and (B) the POST-related problem without Firebug, and then either close this bug as a DUP of (A) or (B), or continue analysis of the Comment #0 case.
This problem seems to be somehow related to the browser configuration. I just tested, using Firefox 3.0.1 (as shipped in Ubuntu):
- my *usual* browser, with a horde of extensions installed, does not allow going back to pages protected with cache control (i.e. it behaves properly)
- a separate empty profile (just MOZ_NO_REMOTE=1 firefox -P and a new profile created) allows Back to such a page.
The same was confirmed on a Windows computer.
(It may also be a matter of some other non-default browser setting.) I tried for some time to narrow it down, but I can't catch the reason. Nevertheless, if somebody fails to reproduce the problem, it may make sense to try with a fresh profile.
(In reply to comment #37)
> This problem seems to be somehow related to the browser configuration.

To Marcin Kasperski (comment poster): What is your "this problem"? At least 3 kinds of issues are reported in this bug. Which? All of them?

(1) The original problem of Comment #0 by the bug opener. Comment #0 says it is a Fx 3 only problem (no problem with Fx 2). "GET case or POST case or both" is still unclear (I guess the GET case). "With Firebug or without Firebug" is still unclear. The really-sent HTTP headers are still unclear (the bug opener provided only his server-side application logic, not the HTTP headers his server really sends). Whether it occurs with bfcache=On only, bfcache=Off only, or both is still unclear (the bug opener still hasn't described his bfcache-related setup).

(2) The Firebug-only problem: (2-A) with Ajax, (2-B) with Flash (SWF). (Looks to be the GET case. I don't know whether it is bfcache-related or not.)

(3) The POST case. The "POST data exists" warning is issued by Fx 2, but there is no warning with Fx 3. The problem occurs with both bfcache=On and bfcache=Off. Firebug has no relation to this issue. The real HTTP header sent by the server is clear (a test page is already provided):
> Cache-Control: no-cache, no-store, must-revalidate, max-age=0
Note: with HTTP GET there is apparently no problem, according to test results.
(3) I have a POST-based application (initial GET, then POST, POST, POST, POST, ...). All responses are flagged with Cache-Control: no-cache (switching to Cache-Control: no-cache, no-store, must-revalidate, max-age=0 does not change the behaviour described). In Firefox 3 I can freely go Back to the previous page without being prompted to re-POST. It is configuration-dependent (as I said: the same Firefox, the same application; the old profile warns, a fresh profile allows Back). It is also dependent on a mysterious factor (two applications, using the same technology and returning the same headers: one allows Backs, the other does not). Firefox 3 only.
(In reply to comment #40) > It is configuration-dependant > (as I said, the same Firefox, the same application, old profile warns, fresh > profile makes Back). Are you able to attach the prefs.js file from each of your profiles to this issue? That may help move us forward.
Booom... I checked, just to make sure, and today both my profiles allow Back. I don't remember changing anything myself (except restarting Firefox a moment earlier...). Now I am curious whether the session history can matter. Of course I can send my prefs.js from the, say, simpler profile, but I doubt it will be of help in this case :-(
... found something. *The same* Firefox profile:
a) I open Firefox and open only the service I test. Firefox allows Backs.
b) I open 5 new tabs, loading some webpage in each of them (in my case chessbase.com, pilka.interia.pl, sport.pl, info.onet.pl and gazeta.pl). Whoa, since then Firefox behaves properly and asks to re-POST the data!!!
(And it keeps behaving properly even after I close those extra tabs.) If I remember correctly, the other tests where Firefox allowed Backs were also made on a newly started browser instance. Some specific behaviour when the memory cache is not yet filled?
The same behaviour was confirmed on a Windows machine (my tests were on Linux): Firefox (improperly) allows Backs; I open 5 new tabs with reasonably heavy pages, and Firefox starts asking whether to re-POST.
(In reply to comment #41)
> That may help move us forward.

To Jason Deabill: At least 4 people (you, ps@gft.com, Marcin Kasperski, and me) could reproduce the problem in the POST case. In order to "move us forward", please open a separate bug for the POST case only, with your minimal, simple, and very good test case, for ease of problem analysis, to make analysis efficient, and to avoid confusion. (Please note that the original problem of this bug, comment #0, is still unclear. Are you certain that comment #0 is the same case as your POST case?)
Attached file prefs.js (deleted) —
This is prefs.js from my smaller profile: a mostly-new, post-install Ubuntu Firefox. I verified that this profile shows the same behaviour: a newly started Firefox 3.0.1 allows Back in spite of the (no-)cache controls; when I open some tabs and visit a couple of resource-heavy pages, Firefox starts to ask whether to re-POST. (I won't attach prefs.js from the other profile, as it is 200 kB and contains some private data; both profiles behave exactly the same.)
Unconfirmed bugs won't get blocking. Let's get this confirmed then renominate for 1.9.0.3 (1.9.0.2 is almost frozen).
Flags: blocking1.9.0.2? → blocking1.9.0.2-
(In reply to comment #48)
> Unconfirmed bugs won't get blocking. Let's get this confirmed then renominate
> for 1.9.0.3 (1.9.0.2 is almost frozen).

Samuel Sidler: Are you able to advise on what needs to be done to get this bug confirmed? It is reproducible, the regression range in the nightly builds has been found, and there is a defined test case. What other artefacts/information are required? Thanks, Jason.
I can confirm on FF 3.0.1 that Firebug 1.2.0b13 is the direct cause. Caching works fine without Firebug enabled.
The Firebug cache bug, I believe, is a separate bug. It seems to require JavaScript to alter the browser history, and the page to be forcibly cached, for the bug to appear. Not that there couldn't be an interaction between the issue on this page and Firebug's issue, but this one is experienced without Firebug. For reference: http://code.google.com/p/fbug/issues/detail?id=1029
(In reply to comment #28) > I've spent an hour or two working through the various Minefield nightly builds > for Firefox 3. According to my tests this behaviour is first observable in the > build 2007-07-09-04-trunk > (http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/2007/07/2007-07-09-04-trunk/) > it is not observable in the build 2007-07-08-04-trunk. Great, thanks Jason. That gives a range of: http://bonsai.mozilla.org/cvsquery.cgi?date=explicit&mindate=2007-07-08+02%3A00&maxdate=2007-07-09+05%3A00 In that range, bug 373231 and bug 372969 landed, and they're the only things that look like they might have caused a bug like this. dcamp, any ideas?
I also have what seems to be the same problem, which only happens when Firebug is running. It seems to happen only in one particular case for me: when I set the src attribute of an image to an empty string and then set it to any other non-empty string. Here's an example (in PHP):

<?php
header('Content-Type: application/xhtml+xml; charset=UTF-8');
header('Cache-Control: no-cache, must-revalidate, proxy-revalidate, max-age=0');
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>Test cache</title>
</head>
<body>
<p><?php print rand(); ?></p>
<script type="text/javascript">
var testImage = new Image()
testImage.src = "smdflkmdfs"
testImage.src = ""
testImage.src = "werwer"
</script>
</body>
</html>

The page should show a different number every time I navigate to it, but it will not refresh until I clear the cache. If you comment out the testImage.src = "" line, it works normally.
It also works normally if you comment out the testImage.src = "werwer" line but leave the testImage.src = "" line. The testImage.src = "smdflkmdfs" line could be removed as it makes no difference.
I have been trying to find out why enabled Firebug (specifically the Console and Net panels) is the reason that pages with a *do not cache* directive are stored. I have narrowed the problem down to the point where I have a separate, simple FF extension which can be used to reproduce it. Please see my comment (with the test extension attached) here: http://code.google.com/p/fbug/issues/detail?id=1029#c46

I have been testing with: http://www.janodvarko.cz/firebug/tests/1029/Issue1029.php

The core problem, briefly (see my comment for more details), seems to be the openCacheEntry method from the nsICacheSession interface. If I use the method, the problem is there (i.e. the URL is stored in the cache even though it should not be); if not, the problem is gone. Could anybody please verify this theory? Perhaps the interface is just being used in a wrong way? Honza
It seems that the problem is setting cacheSession.doomEntriesIfExpired to false. Honza
This is definitely an issue. What can we do to move this to confirmed status? I have tested this with several combinations of HTTP headers and <head> meta tags using HTTPS, all with the same results: all POST requests are stored in the memory cache. Thanks, Tim Huffam
I found that the following site recaps this caching issue very well: http://blogs.imeta.co.uk/JDeabill/archive/2008/07/14/303.aspx

The caching issue is as follows:
1) A POST request returns a response with Cache-Control: no-cache, or using any other cache-control headers, including Pragma and Expires.
2) Move away from the POST response page via any method, e.g. go to www.google.com, etc.
3) Use the browser Back button to go back to the POST response page, which previously had no-cache specified.
4) With Firefox 3.0.x (including 3.0.3 as of 9/29/08), the POST response page is displayed from the cache, which is a security risk if the page contains credit card numbers or personal information. Other browsers display some kind of "Warning: Page has Expired" message (IE6, for example).
5) This erroneous POST response caching behavior occurs regardless of whether the form is submitted via HTTP or HTTPS.
6) There is a trick to get Firefox 3 to refresh from the site instead of from the cache: load a few other sites in the same window by opening new tabs. This "workaround" is intermittent, however.

This POST response caching issue is plaguing an application I am working on, with secured information shown back to the user from the browser cache history. If there is a way to speed up the process of getting this fixed, the more secure and caching-standard-compliant Firefox will be.
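For application authors affected in the meantime, the belt-and-braces header set implied by the steps above (and by the no-store observation in comment 9) can be sketched as follows. This is a sketch of common defensive practice under stated assumptions, not a guaranteed fix for the Firefox 3 regression itself; the function name is my own.

```python
def sensitive_response_headers():
    """Headers commonly sent to keep a sensitive POST response out of
    every cache layer: HTTP/1.1 caches (Cache-Control), HTTP/1.0
    caches (Pragma, Expires), and, per the documented bfcache rules,
    Firefox's back/forward cache (no-store)."""
    return {
        "Cache-Control": "no-cache, no-store, must-revalidate",
        "Pragma": "no-cache",   # HTTP/1.0 compatibility
        "Expires": "0",         # treated as already expired
    }
```

In the regression discussed here, even this full set does not reliably stop Firefox 3.0.x from showing a cached POST response on Back, which is why the bug matters.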
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: privacy
OS: Windows XP → All
Hardware: PC → All
(In reply to comment #58)
> I found that the following site recaps this caching issue very well:
> http://blogs.imeta.co.uk/JDeabill/archive/2008/07/14/303.aspx

I'd given up on this issue; pleased to see it's getting some interest again. :-) I've done a quick check and the test pages at http://www12.asphost4free.com/ffcachetest/ are still available. If there's any other information I can provide, please just ask. :-)
Not following no-cache or no-store directives for https is really bad. Comment 58 sums it up. This needs to block.
Flags: blocking1.9.1?
Flags: blocking1.9.0.6?
OK, this is definitely a regression from bug 373231. The change that caused this is this part in nsHttpChannel::CheckCache:

-    // If we were only granted read access, then assume the entry is valid.
-    // unless it is INHBIT_CACHING
-    if (mCacheAccess == nsICache::ACCESS_READ &&
-        !(mLoadFlags & INHIBIT_CACHING)) {
+    // Don't bother to validate LOAD_ONLY_FROM_CACHE items.
+    // Don't bother to validate items that are read-only,
+    // unless they are read-only because of INHIBIT_CACHING.
+    if (mLoadFlags & LOAD_ONLY_FROM_CACHE ||
+        (mCacheAccess == nsICache::ACCESS_READ &&
+         !(mLoadFlags & INHIBIT_CACHING))) {
         mCachedContentIsValid = PR_TRUE;
         return NS_OK;
     }

This changed the semantics of LOAD_ONLY_FROM_CACHE. Before this change, LOAD_ONLY_FROM_CACHE meant to do normal validation, but abort if network access was required; LOAD_FROM_CACHE meant (and still means) to skip validation.

Before this change, we got into CheckCache() with the following cache-related flags set on a history traversal: VALIDATE_NEVER|LOAD_ONLY_FROM_CACHE. Our mCacheAccess was NOT ACCESS_READ (another change the patch in bug 373231 made for LOAD_ONLY_FROM_CACHE), so we skipped that first return. Neither LOAD_FROM_CACHE nor VALIDATE_ALWAYS was set, so we ended up in the VALIDATE_NEVER case, which checked for no-store or SSL no-cache and forced validation, with a comment pointing to bug 112564. This bug is basically a regression of bug 112564.

Now we bail out without validation on that early return whose condition got changed, and hence don't hit the condition in Connect() that says to abort if validation is needed and LOAD_ONLY_FROM_CACHE is set. Note that docshell explicitly adds LOAD_ONLY_FROM_CACHE only to POST history loads, so it was most certainly depending on the old behavior.
What needs to happen is one of two things:

1) Restore the old meaning of LOAD_ONLY_FROM_CACHE without LOAD_FROM_CACHE (don't force read-only, don't force skip-validation, don't force reading from cache, only force not hitting the network).

2) Provide docshell with some other way of triggering the desired behavior.

I think #1 is the way to go. Offline code should set both flags if that's the behavior it wants (forced reading from cache _and_ forced not hitting the network) instead of changing the meaning of LOAD_ONLY_FROM_CACHE.
Blocks: 373231
Oh, and the logic in CheckCache seems really fragile in general; in particular, the dependency on the mCacheAccess mode is really weird.
Assignee: nobody → dcamp
Attached patch trunk patch (obsolete) (deleted) — Splinter Review
This basically reverts the changes from 373231 that were causing problems, and updates mozIsLocallyAvailable to use the pair of flags. It also adds some tests for the various flags.

The ACCESS_READ thing is ugly. I know that at least the offline loading was relying on it (we mark cache entries read-only to prevent validation and updating the entry), so I added a clause to that big-annoying-if that explicitly avoids validation for application cache loads, in case we want to try to remove that ACCESS_READ check in the future.
Attachment #349073 - Flags: superreview?(bzbarsky)
Attachment #349073 - Flags: review?(bzbarsky)
Attachment #349073 - Flags: superreview?(bzbarsky)
Attachment #349073 - Flags: review?(bzbarsky)
Comment on attachment 349073 [details] [diff] [review] trunk patch This causes a test regression, investigating.
Attached patch new trunk patch (obsolete) (deleted) — Splinter Review
test_redirect_caching was using the wrong flag. New version fixes that.
Attachment #349073 - Attachment is obsolete: true
Attachment #349113 - Flags: superreview?(bzbarsky)
Attachment #349113 - Flags: review?(bzbarsky)
Attached patch branch patch (deleted) — Splinter Review
Roughly the same patch as the trunk patch, except that we don't have mLoadedFromApplicationCache, so this DOES continue to rely on mCacheAccess == ACCESS_READ to avoid updating an offline cache. Note that the no-store tests assume the existence of a working memory cache. Bug 454878 causes this to fail often on the branch. So before we land this patch we'll need to either a) drop the no-cache parts of the test, b) get the fix for 454878 on the branch, or c) land the workaround in 454587.
(b) is probably Very Hard, so let's do (c) or (a). I'd probably prefer (c) in any case, but we can do (a) if that hasn't happened before this wants to land on branch.
Scratch that, looks like 454878 is included on the branch now, I was out of date.
Attachment #349113 - Flags: superreview?(bzbarsky)
Attachment #349113 - Flags: superreview+
Attachment #349113 - Flags: review?(bzbarsky)
Attachment #349113 - Flags: review+
Comment on attachment 349113 [details] [diff] [review]
new trunk patch

>+++ b/netwerk/protocol/http/src/nsHttpChannel.cpp Wed Nov 19 19:38:50 2008 -0800

>+        if (offline || mLoadFlags & INHIBIT_CACHING) {

Toss in parens around the '&' expression to disambiguate?

Other than that, looks good. I didn't review the test very carefully; let me know if you think I should.

That said, the test only tests http://. There's a behavior difference for https://: no-cache https should behave just like no-store as far as validation is concerned. Would be good to test that too.
(In reply to comment #70)
> That said, the test only tests http://. There's a behavior difference for
> https://: no-cache https should behave just like no-store as far as validation
> is concerned. Would be good to test that too.

Honza, how tough would it be to add the ssltunnel stuff you did for mochitest to an xpcshell unit test?
(In reply to comment #71)
> Honza, how tough would it be to add the ssltunnel stuff you did for mochitest
> to an xpcshell unit test?

xpcshell tests run the server as an instance of an XPCOM component. ssltunnel is an executable that has to be run with a configuration file as an argument. The configuration file tells ssltunnel on which port(s) to listen and which certificates to present on them, to which address and port to forward (to nsHttpServer), and where the cert db is located. It can also configure ssltunnel as an http(s) proxy, allowing additional customizations, or as just a pure SSL tunnel.

If you don't want more than to connect to e.g. https://localhost:4443/, then we have to create a new directory with a cert db and the ssltunnel config file dedicated to xpcshell tests. It would be static; no run-time changes needed. ssltunnel has to be run on demand, ideally from a test as the server is instantiated, using some helper function. I can imagine ssltunnel being turned into a component, which would be much more elegant and flexible, but that is a lot of work.

So, what I need for the simplest functionality is:
- decide on the location of the config files (e.g. tools\test-harness\xpcshell-simple installed to obj-dir\_tests\xpcshell-simple\sslsupport)
- create a server cert for 'localhost', self-signed or signed by the pgo CA, and check in a cert db containing it
- at run time, add this self-signed cert or the pgo CA to the profile cert db of the xpcshell test environment as trusted, or install an nsIBadCertListener2 on all channels to accept the untrusted certificate
- create a helper function to run the ssltunnel process as simply as possible
- make sure to kill the ssltunnel process at the end
Blocks: 466452
New patch adds a commented-out test for no-cache over ssl, pending a fix for bug 466524 (allow ssl in xpcshell tests).
Attachment #349113 - Attachment is obsolete: true
Attachment #349814 - Flags: approval1.9.1?
Attached patch branch patch, review comments fixed (obsolete) (deleted) — Splinter Review
Branch patch doesn't change the tests, because I wouldn't expect us to land the ssl-for-xpcshell stuff on the branch if it comes. Asking for approval1.9.0.5, but I'll take what I can get.
Attachment #349819 - Flags: approval1.9.0.5?
Attached patch fixed branch patch (deleted) — Splinter Review
Oops, cvs diff is significantly slower than hg diff, and it hadn't finished before I posted.
Attachment #349819 - Attachment is obsolete: true
Attachment #349820 - Flags: approval1.9.0.5?
Attachment #349819 - Flags: approval1.9.0.5?
Comment on attachment 349820 [details] [diff] [review] fixed branch patch We'll look for 1.9.0.6.
Attachment #349820 - Flags: approval1.9.0.5? → approval1.9.0.6?
As far as I can tell, this particular bug got Firefox 3 excluded from the online elections ("élections prud'homales") in France, causing quite a stir... FX3 was excluded because, I was told, it enabled other people who came to your desktop and hit Back to see whom you had voted for. I'm not positive about this, since I have not received confirmation from the different parties involved, but this bug is the closest thing I can find that could have led to FX3 being rejected. Glad to see this regression fixed!

More details (in French) at http://standblog.org/blog/post/2008/11/20/Firefox-3-et-les-elections-prud-homales
Attachment #349814 - Flags: approval1.9.1? → approval1.9.1+
Comment on attachment 349814 [details] [diff] [review] trunk patch, review comments fixed a191=beltzner
Cool to see this patch approved for Firefox 3.1. Any chance to also fix this privacy regression for 3.1.x?
There's a 3.0.x-aimed patch here, but review hasn't been requested yet, so I'm not sure if it's ready.
The branch patch is set to go review-wise. It just needs branch driver approval.
Flags: wanted1.9.0.x+
Flags: blocking1.9.0.6?
Flags: blocking1.9.0.6+
So this still needs trunk and 1.9.1 landing, right? I'm kinda sad that this didn't make b2; I'd thought that given the approval from Mike it would have. :(
Please land this on trunk and 1.9.1 asap so we can take this in 1.9.0.
Landed on trunk as http://hg.mozilla.org/mozilla-central/rev/e33f490c8764 , will land on 1.9.1 soon...
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Attached patch fix for offline cache updating (deleted) — Splinter Review
(In reply to comment #64)
> The ACCESS_READ thing is ugly. I know that at least the offline loading was
> relying on it (we mark cache entries read-only to prevent validation and
> updating the entry), so I added a clause to that big-annoying-if that
> explicitly avoids validation for application cache loads, in case we want to
> try to remove that ACCESS_READ check in the future.

... and this caused problems, because it prevents us from validating when updating the offline cache. This patch always forces validation (or at least bypasses the ACCESS_READ check) when we're updating the offline cache.
Attachment #352049 - Flags: superreview?(bzbarsky)
Attachment #352049 - Flags: review?(bzbarsky)
This was backed out for the unit test failures (fixed by the latest attachment)
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Attachment #352049 - Flags: superreview?(bzbarsky)
Attachment #352049 - Flags: superreview+
Attachment #352049 - Flags: review?(bzbarsky)
Attachment #352049 - Flags: review+
Comment on attachment 352049 [details] [diff] [review] fix for offline cache updating OK, I guess. I hate the complexity here. :(
Dave, do we need an updated 1.9.0 patch to include the fixes in the latest attachment? Also, we need it to land on 1.9.1 before taking it on 1.9.0.
(In reply to comment #89)
> Dave, do we need an updated 1.9.0 patch to include the fixes in the latest
> attachment? Also, we need it to land on 1.9.1 before taking it on 1.9.0.

The bug that the new attachment fixed was not present in the 1.9.0 patch, so we don't need a new patch there.
http://hg.mozilla.org/mozilla-central/rev/02931735f600 campd had laptop problems, so I offered to push the patch for him with him standing creepily over my shoulder. :)
Status: REOPENED → RESOLVED
Closed: 16 years ago16 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla1.9.1b3
Flags: in-testsuite+
Version: unspecified → Trunk
Whiteboard: [sg:moderate] → [sg:want]
Comment on attachment 349820 [details] [diff] [review] fixed branch patch Approved for 1.9.0.6, a=dveditz for release-drivers.
Attachment #349820 - Flags: approval1.9.0.6? → approval1.9.0.6+
(In reply to comment #67)
> Note that the no-store tests assume the existence of a working memory cache.
> Bug 454878 causes this to fail often on the branch. So before we land this
> patch we'll need to either a) drop the no-cache parts of the test, b) get the
> fix for 454878 on the branch, or c) land the workaround in 454587.

(In reply to comment #69)
> Scratch that, looks like 454878 is included on the branch now, I was out of
> date.

Apparently we backed out the nspr version bump that included this fix. I asked for approval on bug 454587, and am going to land with the no-store tests disabled for now, unless anyone objects.
Checking in dom/src/base/nsGlobalWindow.cpp;
/cvsroot/mozilla/dom/src/base/nsGlobalWindow.cpp,v  <--  nsGlobalWindow.cpp
new revision: 1.1017; previous revision: 1.1016
done
Checking in netwerk/protocol/http/src/nsHttpChannel.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.cpp,v  <--  nsHttpChannel.cpp
new revision: 1.335; previous revision: 1.334
done
RCS file: /cvsroot/mozilla/netwerk/test/unit/test_cacheflags.js,v
done
Checking in netwerk/test/unit/test_cacheflags.js;
/cvsroot/mozilla/netwerk/test/unit/test_cacheflags.js,v  <--  test_cacheflags.js
initial revision: 1.1
done
Keywords: fixed1.9.0.6
The tests are failing on mac unit test boxes, testing really simple max-age=0 caching that wasn't changed by this patch. If the unit tests aren't getting a profile dir on the test boxes, the nspr bug would cause this. The tests are passing on linux and windows, and on trunk they're passing on all three platforms. I talked with ss, and I'm going to remove the test for now. I filed bug 470530 to reland once either the new nspr or the workaround in 454587 hits the branch.
Depends on: 470530
no 1.8 issue ... setting wanted flags accordingly.
Flags: wanted1.8.1.x-
Flags: wanted1.8.0.x-
Verified for 1.9.0.6 with Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.6pre) Gecko/2009010604 GranParadiso/3.0.6pre. I used http://www12.asphost4free.com/ffcachetest/ and http://www.pikanya.net/testcache/. We're getting warnings when going back and forth when POST is used.
Flags: blocking1.9.1? → blocking1.9.1+
Priority: -- → P1
I've tested the latest Minefield build and indeed it shows _warnings_ and then, to my dismay, offers to resubmit the posted data. Other browsers, like IE7 and Chrome, simply show a "page expired" page, with no chance of carelessly resubmitting any data by hitting the "OK" button on a message box. The current FF behavior simply provides an interceptor message box as a hack fix for a huge problem, which is in my opinion not secure enough. Please honor the cache-control directives in the most secure way, as Firefox is positioned as a secure browser, which at this point it is not, as far as caching behavior is concerned.
Whether to resubmit POST data is, and must be, the user's choice.
Alias: CVE-2009-0358
Well that's your opinion I guess.
(In reply to comment #101) > Well that's your opinion I guess. And as testing shows, the general accepted behaviour by all browsers. Case closed.
Whiteboard: [sg:want] → [sg:want] post 1.8-branch
(In reply to comment #58)

Cache-Control directives are targeted at HTTP caches, and RFC 2616 explicitly states that caches and history lists are separate; see
http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.13
In particular, see the note at the end.

I'm concerned that while this text is confusing and indeed not very helpful at the moment, re-defining the semantics of existing cache directives in an ad hoc fashion is going to create more confusion and interoperability problems.

In other words, we need to document this better, especially if the way that the directives work is going to change. The HTTPbis Working Group in the IETF is chartered to revise RFC 2616, so this is a good opportunity to do that; for example, we could:

* Rewrite section 13.13 to explicitly define CC: no-store as applying to history lists (which is within HTTPbis' charter), or
* Define one or more new directives to explicitly control history lists (which isn't in HTTPbis' charter, but we can provide a forum for discussion and feedback), or
* Refine the text in other ways.

To make that happen, however, we need browser vendors (you) to actively participate in discussions; ideally, making proposals for text changes. You can see the current drafts at:
http://tools.ietf.org/wg/httpbis/
and participate on the list at:
http://lists.w3.org/Archives/Public/ietf-http-wg/

I've raised a HTTPbis ticket regarding this particular issue at:
http://trac.tools.ietf.org/wg/httpbis/trac/ticket/197

(I'll leave it up to you to decide whether to reopen this, open a new bug, etc.)

Mark Nottingham
IETF HTTPbis WG Chair / Part 6 (caching) Editor
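For reference, a response from an application like the one in comment 0 (ASP.NET with SetCacheability(NoCache) and SetNoStore()) carries headers along these lines; as the comment above explains, RFC 2616 as written only obliges caches, not history lists, to honor them. This is an illustrative fragment, not captured from any specific site:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache, no-store
Pragma: no-cache
Expires: 0
```

The fix in this bug makes Firefox treat no-store (and no-cache over https) as forcing revalidation even on history traversal, matching the pre-regression behavior from bug 112564.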