Closed Bug 618835 Opened 14 years ago Closed 13 years ago

Cannot login to phpmyadmin if I made a typo during my first login attempt

Categories

(Core :: Networking: Cache, defect)

defect
Not set
normal

Tracking


VERIFIED FIXED
mozilla7
Tracking Status
firefox5 - wontfix
firefox6 - fixed
firefox7 + fixed

People

(Reporter: libor, Assigned: bjarne)

References

Details

(Keywords: regression)

Attachments

(3 files, 5 obsolete files)

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; cs; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; cs; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)

When I try to log in to phpMyAdmin (cookie-based login configuration) and enter incorrect credentials, I cannot log in even with the correct credentials until I restart Firefox. It started with version 3.6; older versions work fine. Tested on Windows XP, 7 and Vista. Tried disabling everything in Firefox, but no change.

Reproducible: Always

Steps to Reproduce:
1. Log in to phpMyAdmin with incorrect credentials.
2. After you are told that access was denied, try again with the correct credentials.
3. You will again be told that access was denied. If you used a different username, the field value will revert to the incorrect one.

Actual Results:
Could not log in.

Expected Results:
Log in succeeds after correcting the username or password.
Experiencing the same thing with FF 3.6. Note that although Firefox still displays "#1045 - Access denied for user 'root'@'localhost' (using password: YES)", you are in fact logged in to phpMyAdmin after the second (correct) attempt, and a refresh will GET the correct page.

An odd feature of phpMyAdmin is that it returns a 302 Found response (redirect) with a "Last-Modified:" some months in the past, and a "Location:" value that may be equal to the URI of the POST request that produced the current page. I read RFC 2616 as implying that the browser should follow that redirect: GET http://x may give a different result from POST http://x (and in this case it does). I believe this is a bug in the Core Networking component (either HTTP or Cache), possibly caused by a recent security change to prevent redirect loops ("Not following 302 Location header following POST").
There's another example of this issue, related to phpMyAdmin: https://bugs.gentoo.org/show_bug.cgi?id=332207#add_comment
IMHO this is a subtle but annoying bug, which can lead phpMyAdmin users to believe there is a database fault or that they have forgotten their password, causing lengthy investigation, when all they need to do is remember Ctrl+R. RFC 2616 seems clear that the troublesome cached page should be removed, but a simple workaround for each affected application would be to make the page non-cacheable in the first place. This can be done by changing the session headers in phpmyadmin/libraries/session.inc.php as in the attached. phpMyAdmin may have its caching preferences the wrong way around anyway: session data could be "nocache" (as enforced by PHP), while the headers meant to prevent any caching in phpmyadmin/libraries/session.inc.php usually cannot override those from PHP, but could be useful to allow limited private caching.

Incidentally, sections 10.3.3-10.3.4 of the RFC say the redirect status code in response to a POST should be 303 See Other rather than the PHP default of 302, but even with 303 this bug still manifests in Firefox 3.6.

--- /var/www/phpmyadmin/libraries/common.lib.php.bak	2011-01-14 02:53:43.000000000 +0000
+++ /var/www/phpmyadmin/libraries/common.lib.php	2011-01-14 02:53:53.000000000 +0000
@@ -666,7 +666,7 @@
     if (PMA_IS_IIS && defined('PMA_COMING_FROM_COOKIE_LOGIN')) {
         header('Refresh: 0; ' . $uri);
     } else {
-        header('Location: ' . $uri);
+        header('Location: ' . $uri, TRUE, 303);
     }
   }
 }
After investigation, I believe the major issue here is that the browser seems to ignore RFC 2616 13.10: "Some HTTP methods MUST cause a cache to invalidate an entity. This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers (if present). These methods are: PUT, DELETE, POST". This was looked at in bug 327765, but the fix may have been incomplete. In particular, phpMyAdmin allows private caching of its authentication response (fetched using a query string), but that response should be removed from the cache before fetching, since its URI was in the Location header of the (302) response to a POST.

From looking at about:cache, what appears to be happening is that neither the Request-URI nor the Location is removed or marked invalid. Instead, the fetch count increases and no HTTP request is sent. This is in contrast to other browsers such as Galeon and Konqueror.

Most likely some other code was suppressing this bug: FF 3.5 (Gecko 1.9.1) similarly failed to send a request in reaction to the POST Location response, provided that the cached target had no "Expires" header. The bug manifests for phpMyAdmin in revision 26327 (bug 203271) in both the 1.9.2 and mozilla-central repositories, and 26319 appears good. So I can confirm the bug was exposed by the max-age condition added at http://hg.mozilla.org/mozilla-central/rev/5dd87bff4e11. Presumably, before that change, the Expires in the past made the query stale, so the URI was fetched again, thus masking the bug. I may revisit bug 327765 to think about a patch. Related tests and discussion are in bug 203271, bug 468594, bug 490095 and http://www.mnot.net/javascript/xmlhttprequest/cache.html.

The attached file is a simplified test case with a 303 Location redirect and 3-hour cache headers typical in PHP (sorry, not sure how to create the headers in JS). To use it, install it on Apache with PHP as cache-bug-test.php and point an affected browser at it. Click the submit button: the 303 redirects to the query URI and the time increments. Click a second time within the 3 hours and see whether any attempt is made to validate the cache. The expected result (seen in other browsers) is that the POST request invalidates the cache entry for the Location (or, failing that, the Request-URI) before the request is tested against the cache, so a GET is issued and the time on the page increments.

Two incidental observations: a max-age of 10s does not appear to provide any freshness to a cached file at all; and, strictly according to RFC 2616, a 302 response to a POST should not be followed automatically without user confirmation.

I don't have permission to edit bugs: could this bug please be marked as confirmed, and applying to core component Networking: Cache, all platforms (exists on Ubuntu 10.04 on x86), and Mozilla 1.9.2, 2.0 and trunk? Or should I file a new bug titled "POST not invalidating cache of Location", or reopen bug 327765?
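To make the RFC 2616 13.10 rule concrete, here is a minimal, self-contained C++ sketch of a toy cache behaving the way the spec describes. This is illustrative only -- the class, URIs and field names are made up and none of this is Necko code:

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

struct Response {
    int status = 200;
    std::optional<std::string> location;        // "Location" header, if any
    std::optional<std::string> contentLocation; // "Content-Location" header, if any
    std::string body;
};

class ToyCache {
    std::map<std::string, Response> entries;    // keyed by absolute URI
public:
    void Store(const std::string &uri, const Response &r) { entries[uri] = r; }
    bool Has(const std::string &uri) const { return entries.count(uri) != 0; }

    // RFC 2616 13.10: after a non-safe method (POST/PUT/DELETE), invalidate
    // the Request-URI and any Location / Content-Location targets.
    void InvalidateAfterUnsafeMethod(const std::string &requestURI,
                                     const Response &response) {
        entries.erase(requestURI);
        if (response.location)
            entries.erase(*response.location);
        if (response.contentLocation)
            entries.erase(*response.contentLocation);
    }
};

int main() {
    ToyCache cache;
    // The phpMyAdmin target page was cached by an earlier GET.
    cache.Store("http://example.test/index.php?token=abc",
                {200, std::nullopt, std::nullopt, "old page"});

    // The login POST answers 302 with a Location pointing at that same page.
    Response loginReply{302, std::string("http://example.test/index.php?token=abc"),
                        std::nullopt, ""};
    cache.InvalidateAfterUnsafeMethod("http://example.test/index.php", loginReply);

    // If the Location target were not erased above, the follow-up GET would
    // be answered from cache with the stale page (the bug); with it, the
    // browser has to revalidate or refetch.
    std::cout << (cache.Has("http://example.test/index.php?token=abc")
                      ? "still cached (bug)" : "invalidated (expected)") << std::endl;
    return 0;
}
```

In the buggy behaviour described above, the erase of the Location target never happens, so the second (correct) login is answered from cache with the old "Access denied" page.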
Version: unspecified → 3.6 Branch
OS: Windows XP → All
Hardware: x86 → All
Attached patch Patch to also invalidate Location headers (obsolete) (deleted) — Splinter Review
This issue applies to the 4.0 branch and trunk as well as 3.6. The attached attempt at a patch is against the trunk.

The cache invalidation from bug 327765 works correctly, except that it does not apply to the Location header. Once again, RFC 2616 section 13.10 states that a POST request (and the other non-safe requests, PUT and DELETE) "MUST cause a cache to invalidate an entity. This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers (if present)." I read this as meaning that if there is a Content-Location or Location header in the response, that is the relevant entry to invalidate, provided that, as the spec requires, the invalidation "MUST only be performed if the host part is the same as in the Request-URI". Failing that, this patch invalidates the GET request corresponding to the POST request as before.

Sections 9.5 and 10.3.4 state that the page fetched in response to a 303 (redirect) is "cacheable", but I interpret this as meaning that the cached copy is valid only for future GET requests. Possibly other browsers simply bypass the cache when following a redirect after a POST.

The patch has been tested against the original phpMyAdmin case.
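The reading of 13.10 described here boils down to the following selection logic. This is a hedged, stand-alone C++ sketch of that interpretation (hypothetical helper names, deliberately simplified host extraction; it is not the attached patch):

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Very rough "host part" extraction: everything between "://" and the next
// '/' (any :port is kept as part of the authority). Real code would use a
// proper URI parser.
static std::string HostOf(const std::string &uri) {
    const auto schemeEnd = uri.find("://");
    if (schemeEnd == std::string::npos)
        return "";
    const auto start = schemeEnd + 3;
    const auto slash = uri.find('/', start);
    return uri.substr(start, slash == std::string::npos ? std::string::npos
                                                        : slash - start);
}

// Which cache entries should a POST/PUT/DELETE response invalidate?
// Prefer the Location / Content-Location targets when their host matches the
// Request-URI (per 13.10); failing that, fall back to the Request-URI itself.
static std::vector<std::string>
UrisToInvalidate(const std::string &requestURI,
                 const std::optional<std::string> &location,
                 const std::optional<std::string> &contentLocation) {
    std::vector<std::string> keys;
    const std::string requestHost = HostOf(requestURI);
    for (const auto &hdr : {location, contentLocation}) {
        if (hdr && HostOf(*hdr) == requestHost)
            keys.push_back(*hdr);
    }
    if (keys.empty())
        keys.push_back(requestURI);
    return keys;
}

int main() {
    for (const auto &uri : UrisToInvalidate(
             "http://example.test/index.php",
             std::string("http://example.test/index.php?token=abc"),
             std::nullopt))
        std::cout << "invalidate: " << uri << std::endl;
    return 0;
}
```

Whether this either/or reading is right, or whether it is safer to simply invalidate all of the URIs involved, is discussed in the review comments below.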
Attachment #506227 - Flags: review?(networking.cache)
Attachment #506227 - Flags: approval2.0?
Attachment #506227 - Flags: approval1.9.2.14?
I've commented on bug 518734 and bug 623899 as possible duplicates of this one.
Comment on attachment 506227 [details] [diff] [review]
Patch to also invalidate Location headers

Should get this reviewed before requesting approval. Requesting review from a fictional tracking account (networking.cache@core.bugs) isn't going to work -- people check their own personal queues and will never see it. Please see http://www.mozilla.org/Modules/All#Necko for likely suspects, or hop on IRC (irc.mozilla.org:6667) and find someone in #developers.
Attachment #506227 - Flags: review?(networking.cache)
Attachment #506227 - Flags: review?
Attachment #506227 - Flags: approval2.0?
Attachment #506227 - Flags: approval1.9.2.14?
Comment on attachment 506227 [details] [diff] [review]
Patch to also invalidate Location headers

OK, thanks. Was just chancing it; whatever moves the process on. Will an xpcshell test be needed?
Attachment #506227 - Flags: review? → review?(cbiesinger)
Component: General → Networking: Cache
Product: Firefox → Core
QA Contact: general → networking.cache
Version: 3.6 Branch → Trunk
Status: UNCONFIRMED → NEW
Ever confirmed: true
Comment on attachment 506227 [details] [diff] [review]
Patch to also invalidate Location headers

Jason, can you take a look at this, given Christian's usual lagginess? If not, who can? I don't HTTP spec-lawyer well enough to tell what the right behavior here is....
Attachment #506227 - Flags: review?(jduell.mcbugs)
Comment on attachment 506227 [details] [diff] [review]
Patch to also invalidate Location headers

I can take a look at this, being the culprit behind the affected method.
Attachment #506227 - Flags: review?(bjarne)
So, to get this clear in my mind: this issue arises when a mutating operation (PUT, DELETE, etc.) responds with a 3xx (redirect). The fix for bug #327765 invalidates the URL of the mutating operation but fails to invalidate the redirect URL (very nice catch, btw :) ). Hence, the next time we do the mutating operation (e.g. retry the login), the redirect URL is found in the cache although it should have been invalidated.

RFC 2616 section 13.10 clearly states that URIs in Location or Content-Location headers MUST be invalidated. I'm a little unhappy about the words "This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers", which indicate that we must choose one of them, but I'd argue that it's safe to invalidate both (or all, actually). I'd be happy to get input on this, though.

Is my understanding correct? I'll try to come up with an xpcshell test as well, which may be simpler to read.
Comment on attachment 506227 [details] [diff] [review]
Patch to also invalidate Location headers

A general comment on this solution: IMO we should invalidate all URIs involved in the redirect, i.e. the Request-URI, the Location header and also the Content-Location header (if set, of course). You may want to introduce a method (parametrized with a string/URL) for this.

>+    if (location) {
>+        // code copied from nsHttpChannel::AsyncProcessRedirection
>+        nsCOMPtr<nsIURI> mResultingURI;
>+        // create a new URI using the location header and the current URL
>+        // as a base...
>+        nsCOMPtr<nsIIOService> ioService;
>+        rv = gHttpHandler->GetIOService(getter_AddRefs(ioService));
>+        if (NS_SUCCEEDED(rv)) {
>+            // the new uri should inherit the origin charset of the current uri
>+            nsCAutoString originCharset;
>+            rv = mURI->GetOriginCharset(originCharset);
>+            if (NS_FAILED(rv))
>+                originCharset.Truncate();
>+            rv = ioService->NewURI(nsDependentCString(location),
>+                                   originCharset.get(),
>+                                   mURI,
>+                                   getter_AddRefs(mResultingURI));

Nit: since I'm a huge fan of re-using code, I'd suggest putting what you copied in a method, parametrizing it as necessary and using it also in AsyncProcessRedirection() ;)

>+        // Strip any trailing #anchor before using it as the key
>+        const char *p = strchr(location, '#');
>+        if (p)
>+            tmpCacheKey.Append(location, p - location);
>+        else
>+            tmpCacheKey.Append(location);

Copied from GenerateCacheKey()? Note that GenerateCacheKey() also adds prefixes to the key in certain cases. I'd suggest using GenerateCacheKey() here as well (modify its signature if necessary) to make sure the keys stay in sync, and also to avoid possible future issues if we suddenly change the way the key is generated.
Attachment #506227 - Flags: review?(bjarne) → review-
(In reply to comment #12)
> I'll try to come up with an xpcshell-test as well,

I had a look at the test for bug #327765 and you can safely copy it to make an xpcshell test for this bug. You can simplify it by not checking multiple methods (e.g. POST is sufficient). You may also extend it to test redirecting in more than one step, i.e. the response to the POST is a 303, the response from the target URL is another 303, etc. (I'm not sure this would work the way we currently replace channels, though, since we probably lose the information about the mutating operation. But IMO this must be dealt with as well, and we can get back to how to implement it.)
Attached patch Possible starting-point for a test (obsolete) (deleted) — Splinter Review
I was looking at some other (related) things and ended up with this. Feel free to use it as a starting-point.
> IMO we should invalidate all URIs involved in the redirect, i.e. the
> Request-URI, the Location-header and also the Content-Location header

Sounds good to me. Where are we with the patch? Cedric, are you interested in finishing this? Otherwise, Bjarne, maybe you can pick it up?
I think Cedric has done a nice job so far and should be allowed to wrap up and land this. If he doesn't have the time or for some other reason doesn't want to do this, I can take it.
Attachment #506227 - Flags: review?(jduell.mcbugs)
Attachment #506227 - Flags: review?(cbiesinger)
Tracking firefox5 since duped bug 659569 was tracking.
Over to bjarne who's looking into driving this in.
Assignee: nobody → bjarne
Keywords: regression
First an observation: the Content-Location header does not seem to be supported (see bug #109553, then bug #238654, and bug #503078 for a more recent report). Furthermore, I find no code in necko where it is used.
According to the reporter, this happened in 3.6. It is not a new regression, so we are not tracking it for a specific Firefox version. Removing flags.
(In reply to comment #21)
> First an observation: The Content-Location header does not seem to be
> supported (see bug #109553 then #238654 and bug #503078 for a more recent
> report). Furthermore, I find no code in necko where it is used.

This should change. First, to support this cache invalidation, and second to support the case mentioned in HTTPbis where a non-GET response can be cached if the Content-Location matches the request URI.
As long as whatever is put in place doesn't fall down when the Content-Location is completely bogus, that's probably fine.
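As a small illustration of the HTTPbis condition mentioned above, and of why a bogus Content-Location value is harmless for this particular check, here is a hedged C++ sketch (hypothetical names, not Necko code):

```cpp
#include <optional>
#include <string>

// HTTPbis, as paraphrased in the comment above: a response to a non-GET
// request may only be treated as a cacheable representation of the request
// URI when its Content-Location equals that URI. A missing or non-matching
// (bogus) value simply means "do not cache", so garbage cannot do harm here.
static bool MayCacheNonGetResponse(const std::string &effectiveRequestURI,
                                   const std::optional<std::string> &contentLocation) {
    return contentLocation && *contentLocation == effectiveRequestURI;
}
```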
Attached patch Proposed fix with test (obsolete) (deleted) — Splinter Review
Patch based on Cedric's original proposal. It invalidates cached entries for the request-URI (like the original code did) as well as for URIs in Location and Content-Location headers in the response. Multiple instances of Location/Content-Location are not handled (if this is even an issue). Fixes the problem described in bug #659569, passes local testing - pushed to tryserver. Note that this patch only *invalidates* cached entries for the mentioned URIs - there is no code anywhere, afaics, which uses Content-Location for anything, fwiw.
Attachment #506227 - Attachment is obsolete: true
Attachment #508384 - Attachment is obsolete: true
Attachment #537654 - Flags: review?(bzbarsky)
Bjarne, the patch didn't get uploaded correctly.
Attached patch Proposed fix with test (obsolete) (deleted) — Splinter Review
Uploading again...
Attachment #537654 - Attachment is obsolete: true
Attachment #537744 - Flags: review?(bzbarsky)
Attachment #537654 - Flags: review?(bzbarsky)
(In reply to comment #25)
> - pushed to tryserver.

Tryserver looks reasonable. There is a leak in OSX/dbg/mochitest-oth, but I cannot reproduce it with the self-service API and the leaked objects do not seem to be related to the patch. Awaiting review...
Comment on attachment 537744 [details] [diff] [review]
Proposed fix with test

>+/*static*/void

Space before "void", please.

>+nsHttpChannel::AssembleCacheKey(const char *spec, PRUint32 postID, PRUint32
>+                                loadFlags, nsACString &key)

Please put the linebreak before the "PRUint32 loadFlags", not in the middle of it. Also, why did you rename cacheKey to key? Just keep it as cacheKey so you don't have to change all that existing code to the new name.

>+nsHttpChannel::CreateNewURI(const char *loc, nsIURI **newURI) {

Move the '{' to the next line, please.

>+    return ioService->NewURI(nsDependentCString(loc),
>+        originCharset.get(),
>+        mURI,
>+        newURI);

Fix the indentation.

>@@ -4977,47 +4988,97 @@ nsHttpChannel::MaybeInvalidateCacheEntry

There are two identical code blocks here. Why are those not factored out into a helper function?

>+    nsCOMPtr<nsIURI> mResultingURI;

resultingURI, please. This is not a member!

>+    // why this...?

What's this comment supposed to mean?

>+    // key for a GET-request to 'location' with current load-flags
>+    AssembleCacheKey(location, 0, mLoadFlags, tmpCacheKey);

If all the callers to AssembleCacheKey pass mLoadFlags, why does it need that argument to start with?

>+nsHttpChannel::DoInvalidateCacheEntry(nsACString &key) {

Move '{' to a separate line.

>+    nsresult rv = gHttpHandler->GetCacheSession(storagePolicy,
>                                                 getter_AddRefs(session));

Fix indent.

>+++ b/netwerk/protocol/http/nsHttpChannel.h	Mon Jun 06 23:33:13 2011 +0200

>+    // Ref RFC2616 13.10: "invalidation... MUST only be performed if
>+    // the host part is the same as in the Request-URI"

Should we check the port too? Or not worry about it?

r- because we really need to not have that duplicated code, and the various nits need addressing...
Attachment #537744 - Flags: review?(bzbarsky) → review-
(In reply to comment #29)
> >+    // why this...?
> What's this comment supposed to mean?

It means I don't understand why that code is there - do you have any insight? I'm happy to remove either code or comment or both....

> If all the callers to AssembleCacheKey pass mLoadFlags, why does it need
> that argument to start with?

I wanted AssembleCacheKey to be static (thus independent of instances). The initial thinking was that since it essentially represents how we assemble a cache-key, it might be useful to have it isolated. Not sure how relevant this is though - advice appreciated.

> Should we check the port too? Or not worry about it?

Neither RFC 2616 section 13.10 nor the HTTPbis spec section 2.5 says anything about the port. The restriction seems to be there to prevent DOS attacks - would checking the port improve this?
> It means I don't understand why that code is there

Which exact code?

> I wanted AssembleCacheKey to be static (thus independent of instances)

I wouldn't worry about it, if it always depends on mLoadFlags. Just make it a member method.

> would checking the port improve this?

Only if unrelated websites run on different ports...
(In reply to comment #31)
> Which exact code?

The if-statement immediately below the comment. I don't understand why we re-assign location like that. Input is appreciated.

> > I wanted AssembleCacheKey to be static (thus independent of instances)
> I wouldn't worry about it, if it always depends on mLoadFlags. Just make it
> a member method.

Fair enough.

> > would checking the port improve this?
> Only if unrelated websites run on different ports...

And we consider this a potential threat..?
> I don't understand why we re-assign location like that.

We need to use the absolute URI for the cache invalidation. There's no particular reason the variable involved needs to be |location|.

> And we consider this a potential threat..?

Well, that was my question. Do we? If we don't, that's fine.
The "attack" is simply that a website X could easily force invalidation for resources at website Y, causing increased load on website Y, without any cooperation on Y's part. The spec just says to check the "host part" and I think we should err on the side of invalidating too much rather than not enough, so I don't think we should check that the ports match. The risk is low, and websites should expect that, if they are running on the same hostname (but different port) as a potentially-malicious website, there are likely to be all kinds of problems.
(In reply to comment #33)
> > I don't understand why we re-assign location like that.
> We need to use the absolute URI for the cache invalidation. There's no
> particular reason the variable involved needs to be |location|.

What if the if-test fails? We still invalidate |location|...

> > And we consider this a potential threat..?
> Well, that was my question. Do we? If we don't, that's fine.

I have absolutely no idea... :) I'd say we drop the port, based on the fact that the specs don't mention it.
> What if the if-test fails?

How to handle that is a good question. I would be fine with not invalidating anything in that situation. Probably better to do that, in fact.
Attached patch Updated with comments from review (obsolete) (deleted) — Splinter Review
Nits and code-duplication should be wrapped up. Passes local testing - sent to tryserver.
Attachment #537744 - Attachment is obsolete: true
Attachment #538215 - Flags: review?(bzbarsky)
Grrr... if I qref after running the xpcshell tests, I get a diff in "default.sqlite" which litters the patch and makes "test_database_replaceOnStartup.js" fail in all builds... otherwise the tryserver run looks OK. Requesting review on the cleaned-up patch...
Attachment #538215 - Attachment is obsolete: true
Attachment #538244 - Flags: review?(bzbarsky)
Attachment #538215 - Flags: review?(bzbarsky)
Comment on attachment 538244 [details] [diff] [review]
Updated with comments from review, removed binary junk

r=me
Attachment #538244 - Flags: review?(bzbarsky) → review+
Requesting check-in
Keywords: checkin-needed
Status: NEW → RESOLVED
Closed: 13 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → mozilla7
FWIW, this has been a problem with HTTP auth logins as well. When I go to our phpMyAdmin instance and enter my username and password into the HTTP auth pop-up, if the password is wrong, Firefox will just hang. It will not pop the box up again prompting me for the correct password; it will just keep going as if it's waiting for a response. Hitting cancel and refreshing does nothing. I have to close Firefox, restart, and go back to the page to get the HTTP auth box again.
"Has been" - meaning that it works now? Or is this still a problem?
Still a problem, with Firefox 5.0
I never seem to be able to keep track of what goes where these days, but I don't believe this patch made it into FF5. Could you try a recent nightly?
> I never seem to be able to keep track of what goes where these days

This is why we set the target milestone field of the bug to the release that will have the fix! In this case, it's pretty clearly set to "mozilla7", which means Firefox 7, in September.
Clear as ink... ;) (Thx, btw)
I would still like drivers to consider taking this fix for 6. It's a regression from 4 that was noticed too late to fix in 5, and the workaround for this problem when one hits it is very non-obvious (need to clear the cache and try again). The patch has been on trunk for almost 2 weeks w/o any known problems, and it's an easy backout if anything shows up in further aurora/beta testing.
Attachment #538244 - Flags: approval-mozilla-aurora?
This bug, introduced in FF5, is breaking sites, not just causing a minor inconvenience. A release of FF5 fixing this should be put out as soon as possible.
> I would still like drivers to consider taking this fix for 6.

+1, in case a necko module peer's opinion matters (I'm not saying it does! :)
Attachment #538244 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Comment on attachment 538244 [details] [diff] [review]
Updated with comments from review, removed binary junk

Requesting check-in (on aurora)
Attachment #538244 - Flags: checkin?
Attachment #538244 - Flags: checkin? → checkin+
In my case listed above, I noticed that a coworker did not have the same issue as me (i.e., the form would pop back up for him when he entered a wrong password), and he had FF5 installed. So I uninstalled Firefox, completely deleted any Mozilla profile and folders from my application data folders etc., reinstalled, and all has been well since. Something must have gotten broken along the way of upgrading through the betas (back even from Beta 4) and just never resolved itself on subsequent upgrades. But the fresh install fixed it.
Mozilla/5.0 (Windows NT 6.1; rv:7.0) Gecko/20100101 Firefox/7.0

I've run the test case in this bug, and the bug is no longer present. I tried to log in to phpMyAdmin with incorrect credentials, then with the correct ones, and the login succeeded. I think this bug is fixed, since the issue no longer reproduces on FF7. Setting resolution to VERIFIED FIXED.
Status: RESOLVED → VERIFIED
Depends on: 779899