Closed Bug 479880 (CVE-2009-1836) Opened 16 years ago Closed 16 years ago

Non-200 responses to proxy CONNECT requests lead to attacks on HTTPS

Categories

(Core :: Networking: HTTP, defect, P2)


Tracking


RESOLVED FIXED
mozilla1.9.2a1

People

(Reporter: abarth-mozilla, Assigned: jduell.mcbugs)

References


Details

(4 keywords, Whiteboard: [sg:high])

Attachments

(5 files, 5 obsolete files)

I bet this bug is already on file, but I don't have access to all the security bugs, so I'm filing this one in the hopes that it gets duped against an existing bug.

Threat Model:
1) The user has a proxy server configured.
2) There is an active network attacker.
3) The user visits https://www.paypal.com/

Attack: The browser issues a CONNECT request to the proxy server. The active network attacker intercepts this request and replies with the following bytes:

HTTP/1.1 404 Not Found
Content-Type: text/html; charset=ISO-8859-1

<script>alert(document.cookie)</script>

The browser happily runs the network attacker's script in the https://www.paypal.com security origin, causing disaster.

Chromium received this bug report (and additional related bugs involving redirects) from Microsoft. My understanding is that some researchers from MSR are going to present these bugs at the upcoming Oakland Security Conference. We should coordinate so we fix this issue in the same way. More details are available at <http://code.google.com/p/chromium/issues/detail?id=7338>, which I can give you access to. Thanks, Adam
Attached file test case (deleted) —
Hum... This hasn't gotten duped yet. Here's a test case. Download this file and run the following command:

nc -l 8080 < resp

Now, configure Firefox to use localhost:8080 as your proxy. Visit https://www.paypal.com/. Notice that your cookie gets alerted. I have a copy of Shuo's paper. I'll ask him if I can share it with you.
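The attachment itself isn't reproduced in this record; going by comment 0, the resp file presumably contains the attack response verbatim, something like:

HTTP/1.1 404 Not Found
Content-Type: text/html; charset=ISO-8859-1

<script>alert(document.cookie)</script>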
Apparently I'm not allowed to share the paper until May 19, the day of the conference, because it also describes IE vulnerabilities. Shuo said he reported these issues a year ago, but they don't seem to be fixed in Firefox 3. I haven't tested the nightly. I think the cleanest fix is to treat any non-200 response to a CONNECT request as a network error and disregard any redirects or response body. The benefit of this approach is that it's easy to see that all these vulnerabilities are fixed. The cost of this approach is that the user doesn't get to see fancy error messages from proxies for HTTPS sites.
Whiteboard: [sg:high]
I don't see any obvious dups in core-security, at least not with proxy, ssl, https, connect in the summary. nsHttpConnection::OnHeadersAvailable calls nsHttpTransaction::SetSSLConnectFailed. That sets mSSLConnectFailed, which as far as I can tell is only looked at in nsHttpChannel::ProcessResponse in the case of a 407 response; looks to me like if we call trans->SSLConnectFailed() in ProcessResponse before doing anything else we can stop dealing with the response as though it were a normal response.
Summary: Non-200 responses to CONNECT requests lead to attacks on HTTPS → Non-200 responses to proxy CONNECT requests lead to attacks on HTTPS
FWIW, Chrome is likely to ship a fix for this issue in its next stable release. If you like, I can ask them to keep the issue description out of their release notes since it might lead others to test Firefox for this issue.
My understanding is that Shuo is planning to go public with this vulnerability on May 19.
Jason - how high does this bubble up on your priority list?
Attached patch proof-of-concept demo patch (obsolete) (deleted) — Splinter Review
I've verified that Waldo's suggestion in comment #3 works: I can cause the browser to display an error message instead of allowing the exploit by checking for SSLConnectFailed() early in nsHttpChannel::ProcessResponse(). But I'm not sure on some important details that we'd need to sort out before we can fix this:

1) What *exactly* is the behavior we want to have when we encounter this? Adam suggests treating this as a "network error" for non-200 responses in comment #2. Waldo suggests we barf on *all* requests by putting the check early on in the function. My hunch is that if SSLConnectFail has been called, we probably want to treat it as an error, regardless of HTTP response code--but I don't know HTTPS proxy/client protocols very well.

2) Assuming Waldo is correct, how early exactly do we want to do this check, and what's the right behavior? In my patch I've called Cancel() and returned with NS_ERROR_NET_RESET (the closest thing I could find in netwerk/base/public/nsNetError.h to a "network error"). I've done this early enough that if we encounter this failure, we don't notify the "http-on-examine-response" observers, or set cookies, etc. But I don't know this code at all, so I'm merely guessing that this is correct.

Can anyone point me at a spec for HTTPS proxying?
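Since the attached patch isn't reproduced here, a minimal sketch of the check being described, assuming it sits at the top of nsHttpChannel::ProcessResponse() and uses NS_ERROR_NET_RESET as the placeholder error (the actual attachment may differ):

nsresult
nsHttpChannel::ProcessResponse()
{
    // Sketch only: bail out before notifying "http-on-examine-response"
    // observers, setting cookies, or doing any other normal response
    // handling, if the CONNECT to the SSL proxy did not succeed
    // (mSSLConnectFailed is set in nsHttpConnection::OnHeadersAvailable).
    if (mTransaction->SSLConnectFailed()) {
        Cancel(NS_ERROR_NET_RESET);   // placeholder "network error" code
        return NS_ERROR_NET_RESET;
    }
    // ... normal status-code handling continues here ...
}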
I think we want to block all status codes except 407, which is "Proxy Authentication Required". It's essential that we not read unauthenticated bytes from the message payload, even on a 407, but we do want to get far enough in the proxy auth process to show the user the auth challenge dialog.
Jason, HTTPS proxy tunneling is originally specified in an expired Internet-Draft titled "Tunneling TCP based protocols through Web proxy servers". You can find the Internet-Draft by doing a Web search for that title. It is also specified in Section 5 (specifically Sections 5.2 and 5.3) of RFC 2817.
On the patch, I guess you want the mConnectionInfo->UsingSSL() guard that's in the current codepath that hits the SSL-failed check, but I haven't looked to see what the consequences are.

(In reply to comment #7)
> 1) What *exactly* is the behavior we want to have when we encounter this?
> Adam suggests treating this as a "network error" for non-200 responses in
> comment #2. Waldo suggests we barf on *all* requests by putting the check
> early on in the function.

To be clear, my suggestion was off-the-cuff and mostly motivated by the existing code structure and by a desire to fail generally rather than only in specific cases (potentially missing some along the way). I wasn't thinking much about the "right" response or what it should be so much as about ensuring a forceful abort of normal handling in all cases outside the clearly-correct ones. Putting up the proxy-auth dialogs for the 407 case and only using the response data in the 200 case where the handshake must succeed for the data to be used seems most correct on further thought. (The RFC says any 2xx response, but what would a 206 response to a CONNECT request even mean? We'll need to test 200, 203, 206, and at least one other 2xx response here to verify correct behavior in all cases, at least if the current code setup isn't substantially changed.)

> My hunch is that if SSLConnectFail has been called, we probably want to treat
> it as an error, regardless of HTTP response code--but I don't know HTTPS
> proxy/client protocols very well.

I was going to say treat them the same as some typical HTTPS error, maybe untrusted sender, but then the user can click through -- and given that one hopes non-malicious instances of this in the real world are rare, it would be better not to give users the gun in the first place.

> 2) Assuming Waldo is correct, how early exactly do we want to do this check,
> and what's the right behavior? In my patch I've called Cancel() and returned
> with NS_ERROR_NET_RESET (the closest thing I could find in
> netwerk/base/public/nsNetError.h to a "network error").

NS_ERROR_PROXY_CONNECTION_REFUSED ("The connection attempt to a proxy failed", so the name is a slight misnomer for the description) seems better here (we'd treat responses as though the proxy gave a malformed response), and I'm pretty sure that wouldn't allow a way to click through the error page.

> I've done this early enough that if we encounter this failure, we don't
> notify the "http-on-examine-response" observers, or set cookies, etc. But
> I don't know this code at all, so I'm merely guessing that this is correct.

Seems right, biesi may know more.

> Can anyone point me at a spec for HTTPS proxying?

http://curl.haxx.se/rfc/draft-luotonen-web-proxy-tunneling-01.txt
http://www.ietf.org/rfc/rfc2817.txt

are about the closest I could find when I was looking at this in connection with bug 428009 (they seem to be what comment 9 was suggesting). The RFCs for this aren't organized very coherently or discoverably as far as I can tell, unfortunately.
(In reply to comment #10) > > My hunch is that if SSLConnectFail has been called, we probably want to treat > > it as an error, regardless of HTTP response code--but I don't know HTTPS > > proxy/client protocols very well. > > I was going to say treat them the same as some typical HTTPS error, maybe > untrusted sender, but then the user can click through -- and given that one > hopes non-malicious instances of this in the real world are rare, it would be > better not to give users the gun in the first place. > > > 2) Assuming Waldo is correct, how early exactly do we want to do this check, > > and what's the right behavior? In my patch I've called Cancel() and returned > > with NS_ERROR_NET_RESET (the closest thing I could find in > > netwerk/base/public/nsNetError.h to a "network error"). > > NS_ERROR_PROXY_CONNECTION_REFUSED ("The connection attempt to a proxy failed", > so the name is a slight misnomer for the description) seems better here (we'd > treat responses as though the proxy gave a malformed response), and I'm pretty > sure that wouldn't allow a way to click through the error page. It won't. The overrides on cert errors are specific to a small set, and that's not one of them, so unless it was added explicitly, it gets a full stop. Having said that - whatever expedient solution we use for branches that are string frozen, it feels to me like the best thing to do for upcoming versions is just to introduce a new error, and strings, for this case. Hopefully we haven't used up all our error codes? I'd be happy to recommend strings for a mozilla-central patch, if you agree.
(In reply to comment #11) > It won't. The overrides on cert errors are specific to a small set, and that's > not one of them, so unless it was added explicitly, it gets a full stop. Mm, good. > Having said that - whatever expedient solution we use for branches that are > string frozen, it feels to me like the best thing to do for upcoming versions > is just to introduce a new error, and strings, for this case. Hopefully we > haven't used up all our error codes? Oh, right, too much in the mindset of not being able to change these things...new error is best for trunk, for sure. > I'd be happy to recommend strings for a mozilla-central patch, if you agree. Sounds good -- although of course we can hack in bogo-strings in the meantime, until final UI becomes the only holdup here.
Re: 2xx responses to CONNECT requests: I don't know what a non-200 2xx response to CONNECT means either. So in Chromium's network stack, I only consider a 200 response as successful tunnel setup, and I disallow any response body after the response headers. So far I haven't received any bug report that this is overly strict.

Please remember to test the URL https://no.such.host/ against a proxy. This will give you a vivid idea what we lose by handling any non-200 non-407 response as a network error. For example, the proxy I use returns a 404 response whose body contains an informative error message:

  While trying to retrieve the URL: no.such.host:443
  The following error was encountered:
    Unable to determine IP address from host name for
  The dnsserver returned:
    Name Error: The domain name does not exist.
  This means that:
    The cache was not able to resolve the hostname presented in the URL.
    Check if the address is correct.

In our testing we have seen our proxy return 403, 404, and 501 with a useful error page in the response body. It's too bad that we won't be able to display this error page to users for security reasons.
We could potentially show some built-in error pages to differentiate the different status codes. In principle, we could use heuristics (like grepping the error pages) to figure out if it's some common error like "host not found."
Attached patch Possibly correct patch (obsolete) (deleted) — Splinter Review
OK, so it looks to me like the existing strategy should work fine: by the time we get to the code in the patch we will have set SSLConnectFailed in OnHeadersAvailable() iff we do an SSL proxy connect and the status != 200. So we can and should bail out here if SSLConnectFailed is true. The only exception I've heard of is 407 responses, which we let drop through. I don't know the code well enough to say if my method of "bailing out" is the right one. I'm going to have biesi take a look, as he'll know. The last remaining issue is the error code to use for branches and the trunk. It sounds like we want a new error code for the trunk; anyone have a suggestion? For now I've stuck in NS_ERROR_PROXY_CONNECTION_REFUSED.
Attachment #371339 - Attachment is obsolete: true
Attachment #371776 - Flags: superreview?(cbiesinger)
Attachment #371776 - Flags: review?(cbiesinger)
You can't just let 407 through. You have to make sure we don't render the entity body, even if the user cancels the auth request, etc.
> In our testing we have seen our proxy return
> 403, 404, and 501 with a useful error page in
> the response body. It's too bad that we won't
> be able to display this error page to users
> for security reasons.

> You can't just let 407 through. You have to
> make sure we don't render the entity body,
> even if the user cancels the auth request, etc.

So, the more I think about this, isn't a more global solution to allow the browser to treat 403/404/407/501 responses with failed SSL "normally" (i.e. render content), but just not as if the HTML were coming from the final destination (i.e. paypal)? I don't know this code base very well yet, but surely we've got some way to let the code upstream know that the content is coming from the proxy server (or from some dummy "unsafe" URL that can't have cookies, etc.), and not the final site. Anyone know of the correct frob for this?
That's really dicey when you consider that the bogus content might be trying to mount a phishing attack. Better to block the evil content.
Jason, your proposal would be the ideal solution, but it may require a lot of work to get it right. (In fact, that's what I asked Adam to do originally.) The solution Adam and I suggested is a drastic measure, but it is much simpler to get right and keep secure. If you decide to proceed with this solution, make sure you set up a proxy that requires HTTP authentication, visit https://bugzilla.mozilla.org/, and cancel the proxy's auth dialog. The proxy's 407 response should not be displayed. I have such a proxy at work, so I can perform this test for you.
> surely we've got some way to let the code upstream know that the content is > coming from the proxy server Not that I can think of, and even if we did the url bar would still have the URI the user typed in, no?
Right, the URL bar would still have the URI the user typed in, but it would have the default white background (as opposed to yellow), and there would be no lock icon.
I think for the phishing situation that's really not sufficient, esp for the security-conscious user who clicks a bookmark they know points to https... they might not even notice the lack of other indicators, since they "know" their setup is "correct" in terms of not being phishable without screaming in your face warnings about cert mismatches.
I've attached a test case that verifies that if we let 407 responses through without further code changes (i.e. if we use the 371776 patch), we are vulnerable to the same cookie/etc attack, as the proxy server's "auth failed" page is treated by Firefox as if it came from paypal.com. I think we really want to keep these proxy server msgs if we can, so we should try to find a way to display them w/o treating them as from the target destination.
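The attached reply is presumably along these lines (a reconstruction, not the literal attachment): a 407 without a usable Proxy-Authenticate challenge, so no auth dialog appears and an unpatched browser renders the body in the target origin:

HTTP/1.1 407 Proxy Authentication Required
Content-Type: text/html; charset=ISO-8859-1

<script>alert(document.cookie)</script>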
Bug 488386 is a dup (I didn't mark it as such, will leave to a more seasoned hand to do that); this problem seems to be accumulating attention, although it could just be a complete coincidence. Worth ramping up the pace here since it's something other people are stumbling on?
Attached patch Partial patch for fix (obsolete) (deleted) — Splinter Review
I've got half a fix here (I'm still tracking down the correct way to handle 407 responses). But since there are some UI-ish decisions here, and I want to try to get this into 3.5b4 in the next day or two, I thought I'd run what I've got by you all.

So we're basically in the game here of trying to match up HTTP response codes from the proxy with the most helpful boilerplate Mozilla error page available. I've poked around a bit at squid's code, to see how it works, and RFC 2616, and the logic in the patch is what seems the most sensible to me. There are a couple of codes where I had to make a choice (500 as DNS failure vs. as internal server error; 504 as DNS vs. target failure), and I'm also guessing at what to do for 501 (Wan-Teh, you said you were seeing meaningful pages from proxies with 501: what kind of message?).

There's also the question of what the default page should be for other error codes. There's really nothing else I can see that's a valid response from a proxy, so I shoveled in NS_ERROR_UNSAFE_CONTENT_TYPE to give the user an "unsafe content" message, to indicate someone may be messing with them. I am not at all certain that's the best UI choice.

So: take a look at the patch, and any comments or opinions would be welcome.
Attachment #371776 - Attachment is obsolete: true
Attachment #371776 - Flags: superreview?(cbiesinger)
Attachment #371776 - Flags: review?(cbiesinger)
Attached patch Patch now handles all cases: needs review (obsolete) (deleted) — Splinter Review
OK, I think I've figured out how to handle the 407 case, too (it's fairly simple), so this patch is ready for review. Please see notes to previous patch, all of which are still valid issues/questions. I've verified that this code works both with the 407 test case attached to this bug (which doesn't pop up an auth window, as it lacks the correct headers), and with squid, which does pop up an auth window. We need to handle both cases, of course...
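In sketch form, the 407 handling described here amounts to something like the following inside nsHttpChannel::ProcessResponse (reconstructed from the discussion in this bug; ProcessAuthentication is the existing auth entry point, and the landed patch may differ in detail):

case 407:
    // Run proxy authentication as usual, so a proxy that sends a valid
    // Proxy-Authenticate challenge still gets its auth dialog...
    rv = ProcessAuthentication(httpStatus);
    if (NS_FAILED(rv) && mTransaction->SSLConnectFailed()) {
        // ...but if auth fails (bogus challenge, or the user cancels)
        // on a failed CONNECT, never fall through to rendering the
        // proxy's entity body in the target origin.
        return ProcessFailedSSLConnect(httpStatus);
    }
    break;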
Assignee: nobody → jduell.mcbugs
Attachment #372752 - Attachment is obsolete: true
Status: NEW → ASSIGNED
Attachment #373024 - Flags: ui-review?(johnath)
Attachment #373024 - Flags: superreview?(cbiesinger)
Attachment #373024 - Flags: review?(jwalden+bmo)
Comment on attachment 373024 [details] [diff] [review]
Patch now handles all cases: needs review

I don't know the netwerk code that well, so my review isn't authoritative. Just trying to help... Assuming that nsHttpChannel::ProcessResponse is the right place to handle SSL proxy connection failures, your patch looks correct. I have three nits.

1. I don't know what "fsck" means.

2. Add a "break" statement to the 'default' case in the switch statement you added.

3. In this LOG statement, you are missing the corresponding format specifiers for 'this' and 'httpStatus':

>+ LOG(("Cancelling failed SSL proxy connection\n", this, httpStatus));

Compare that with this LOG statement, which has the format specifiers:

> LOG(("nsHttpChannel::ProcessResponse [this=%x httpStatus=%u]\n",
> this, httpStatus));
OK, I've incorporated Wan-Teh's suggestions. I've also changed the default response (for HTTP codes that don't make sense) to "Proxy server refused connection" (the last patch returned NS_ERROR_UNSAFE_CONTENT_TYPE, which gave a page about the "file type being unsafe", which I now think is likely to just confuse people). Wan-Teh, could you tell me what sort of error page you've gotten from a proxy with a 501 status, as you mentioned in comment #13? Thanks.
Attachment #373024 - Attachment is obsolete: true
Attachment #373182 - Flags: ui-review?(johnath)
Attachment #373182 - Flags: superreview?(cbiesinger)
Attachment #373182 - Flags: review?(jwalden+bmo)
Attachment #373024 - Flags: ui-review?(johnath)
Attachment #373024 - Flags: superreview?(cbiesinger)
Attachment #373024 - Flags: review?(jwalden+bmo)
Attachment #373182 - Flags: ui-review?(johnath) → ui-review+
Comment on attachment 373182 [details] [diff] [review]
Incorporate Wan-Teh's changes, plus change default error page

>+ case 403: // HTTP/1.1: "Forbidden"
>+ case 407: // if user's auth failed
>+ case 501: // HTTP/1.1: "Not Implemented"
>+ // user sees boilerplate Mozilla "Proxy Refused Connection" page.
>+ rv = NS_ERROR_PROXY_CONNECTION_REFUSED;
>+ break;

Makes sense to me.

>+ // Squid replies with 404 if DNS fails.
>+ // - note: regular "page not found" 404 from target server handled over
>+ // SSL; different codepath.
>+ case 404: // HTTP/1.1: "Not Found"
>+ // RFC 2616 notes that "some deployed proxies are known to return 400 or
>+ // 500 when DNS lookups time out." (Squid uses 500 if it runs out of
>+ // sockets: so we have a conflict here).
>+ case 400: // HTTP/1.1 "Bad Request"
>+ case 500: // HTTP/1.1: "Internal Server Error"
>+ /* User sees: "Address Not Found: Firefox can't find the server at
>+ * www.foo.com."
>+ */
>+ rv = NS_ERROR_UNKNOWN_HOST;
>+ break;

I think I would have expected 500 to go up above, with "Proxy Refused Connection". If a proxy is 500'ing, whether it's a DNS thing or not on the proxy's end, I think it's probably more helpful to the user to say "The proxy doesn't like you. Take it up with your proxy admin." than to say "Maybe you typed in the address wrong." Without the RFC reference, I would have assumed the same about 400, tbh. With it there, though, not really committed either way.

>+ case 502: // HTTP/1.1: "Bad Gateway" (invalid resp from target server)
>+ // Squid returns 503 if target request fails for anything but DNS.
>+ case 503: // HTTP/1.1: "Service Unavailable"
>+ /* User sees: "Failed to Connect:
>+ * Firefox can't establish a connection to the server at
>+ * www.foo.com. Though the site seems valid, the browser
>+ * was unable to establish a connection.
>+ */
>+ rv = NS_ERROR_CONNECTION_REFUSED;

Fine by me.

>+ // RFC 2616 uses 504 for both DNS and target timeout, so not clear what to
>+ // do here: picking target timeout, as DNS covered by 400/404/500
>+ case 504: // HTTP/1.1: "Gateway Timeout"
>+ // user sees: "Network Timeout: The server at www.foo.com
>+ // is taking too long to respond."
>+ rv = NS_ERROR_NET_TIMEOUT;
>+ break;

Yup.

>+ // Assume we're being fsck'd with, and give message about scary content.
>+ default:
>+ rv = NS_ERROR_UNSAFE_CONTENT_TYPE;
>+ }

This message will be a little confusing, but in a scary way, so it's probably the best play given that we can't change strings. This is an improvement over the current state of affairs, so modulo getting your opinion on 500 above, and filing a follow-up to do this with proper strings on trunk, ui-r=me
> I think I would have expected 500 to go up above, with "Proxy Refused
> Connection". ... I would have assumed the same about 400...

Yeah, I'm basing the treatment of 400 & 500 purely on the comment in RFC 2616, and who knows if the "already deployed proxies" are still around 10 years later. That said, squid is using 404 for DNS failure, which is at least as dumb as 400/500. I really don't have a strong sense of what to do here, but at least 400/500 errors should be fairly infrequent.

>+ // Assume we're being fsck'd with, and give message about scary content.
>+ default:
>+ rv = NS_ERROR_UNSAFE_CONTENT_TYPE;

So note that I've changed this in the latest patch to NS_ERROR_PROXY_CONNECTION_REFUSED. See my reasoning above. Somehow blaming things on an "unsafe file type" seems likely to confuse people, so just a regular "proxy refused" error seems better to me.

> filing a follow-up to do this with proper strings on trunk

I'm not sure what other strings you've got in mind.
(In reply to comment #31)
> > filing a follow-up to do this with proper strings on trunk
>
> I'm not sure what other strings you've got in mind.

If we weren't in a string freeze, I think the way we would have dealt with this would have been to create some new errors, and strings for the error page, instead of recycling existing mostly-similar errors, no?
I suppose it is worth a look over the exact error text to see if we want to change the text. But they would only need to be fairly minor changes, i.e. instead of "firefox can't find the server at www.foo.com", "your proxy server can't find ...", etc. But it's probably worth doing. As long as I don't have to be the one who comes up with the actual text :)
Jason, we have seen the following response codes from proxies:

403: Squid seems to only allow SSL tunnels to certain ports. If I visit https://bugzilla.mozilla.org:80/, Squid responds with a 403 error page:

  ERROR
  The requested URL could not be retrieved
  While trying to retrieve the URL: bugzilla.mozilla.org:80
  The following error was encountered:
    * Access Denied.
  Access control configuration prevents your request from being allowed at
  this time. Please contact your service provider if you feel this is
  incorrect.

404: Squid returns 404 for host not found. Test case is https://no.such.host/

501: An internal proxy we use for testing purposes doesn't support HTTPS, so it responds with 501 (not implemented) to tunnel CONNECT requests. Although this proxy is not generally available, its use of 501 in this case seems reasonable.
OK, I think the current patch covers Wan-Teh's cases appropriately. Getting "proxy refused" from a server that doesn't implement SSL tunneling is close enough, in my opinion--there's no other string-frozen message that we've got that's any better.
(In reply to comment #34)
> 501: An internal proxy we use for testing purposes doesn't
> support HTTPS, so it responds with 501 (not implemented)
> to tunnel CONNECT requests. Although this proxy is not
> generally available, its use of 501 in this case seems
> reasonable.

501 is explicitly sanctioned for this purpose in RFC 2616 section 5.1.1:

  An origin server SHOULD return the status code [...] 501 (Not
  Implemented) if the method is unrecognized or not implemented by the
  origin server.
Comment on attachment 373182 [details] [diff] [review]
Incorporate Wan-Teh's changes, plus change default error page

>diff --git a/netwerk/protocol/http/src/nsHttpChannel.cpp b/netwerk/protocol/http/src/nsHttpChannel.cpp
> nsHttpChannel::ProcessResponse()
> {
>     nsresult rv;
>     PRUint32 httpStatus = mResponseHead->Status();
>
>     LOG(("nsHttpChannel::ProcessResponse [this=%x httpStatus=%u]\n",
>         this, httpStatus));
>
>+    if (mTransaction->SSLConnectFailed() && httpStatus != 407)
>+        return ProcessFailedSSLConnect(httpStatus);

Returning back to my first note in comment 10, expand this into:

>+    if (mTransaction->SSLConnectFailed() && httpStatus != 407) {
>+        NS_ABORT_IF_FALSE(mConnectionInfo->UsingSSL(),
>+                          "SSL connect failed but not using SSL?");
>+        return ProcessFailedSSLConnect(httpStatus);
>+    }

as a sanity check -- inspection shows the guard isn't necessary, but if we're going to rely on the invariant we should enforce it in debug builds to be safe. Looks good with that change.
Attachment #373182 - Flags: review?(jwalden+bmo) → review+
ProcessFailedSSLConnect is called in two places, and should only ever be called if SSL is being used, so I put the UsingSSL() check at the top of the function. Skipping UI review, since that hasn't changed from the last patch.
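Putting the review-quoted hunks together, the trunk logic ends up roughly as follows (a sketch assembled from the quotes in this bug, not the literal landed diff):

nsresult
nsHttpChannel::ProcessFailedSSLConnect(PRUint32 httpStatus)
{
    // Both callers only reach here for a CONNECT through an SSL proxy.
    NS_ABORT_IF_FALSE(mConnectionInfo->UsingSSL(),
                      "SSL connect failed but not using SSL?");
    nsresult rv;
    switch (httpStatus) {
    case 403: // HTTP/1.1: "Forbidden"
    case 407: // user's proxy auth failed
    case 501: // HTTP/1.1: "Not Implemented"
        // user sees the boilerplate "Proxy Refused Connection" page
        rv = NS_ERROR_PROXY_CONNECTION_REFUSED;
        break;
    case 400: // HTTP/1.1: "Bad Request"
    case 404: // Squid replies with 404 if DNS fails
    case 500: // HTTP/1.1: "Internal Server Error"
        // user sees "Address Not Found"
        rv = NS_ERROR_UNKNOWN_HOST;
        break;
    case 502: // HTTP/1.1: "Bad Gateway"
    case 503: // HTTP/1.1: "Service Unavailable"
        // user sees "Failed to Connect"
        rv = NS_ERROR_CONNECTION_REFUSED;
        break;
    case 504: // HTTP/1.1: "Gateway Timeout"
        // user sees "Network Timeout"
        rv = NS_ERROR_NET_TIMEOUT;
        break;
    default:
        // per the review, the default was changed from
        // NS_ERROR_UNSAFE_CONTENT_TYPE to a plain proxy error
        rv = NS_ERROR_PROXY_CONNECTION_REFUSED;
        break;
    }
    LOG(("Cancelling failed SSL proxy connection [this=%x httpStatus=%u]\n",
         this, httpStatus));
    Cancel(rv);
    return rv;
}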
Attachment #373182 - Attachment is obsolete: true
Attachment #373419 - Flags: superreview?(cbiesinger)
Attachment #373419 - Flags: review?(jwalden+bmo)
Attachment #373182 - Flags: superreview?(cbiesinger)
Attachment #373419 - Flags: review?(jwalden+bmo) → review+
Marking this a P2 blocker per discussion with beltzner. Sam (and others), this exploit will be made public on 5/16/2009, we should coordinate landing this in the relevant releases.
Flags: blocking1.9.1+
Flags: blocking1.9.0.10?
Priority: -- → P2
Target Milestone: --- → mozilla1.9.1
Thanks for the heads up. We probably need to block 1.9.0.10 on this. As it stands, we're targeting a May 19 ship date, but I'll talk with Dan about whether this means we need to fix it sooner.
Flags: wanted1.9.0.x+
Flags: blocking1.9.0.10?
Flags: blocking1.9.0.10+
(In reply to comment #39)
> Marking this a P2 blocker per discussion with beltzner. Sam (and others), this
> exploit will be made public on 5/16/2009, we should coordinate landing this in
> the relevant releases.

5/16 or 5/19? Previous comments from Adam say 5/19.
Here is what Shuo said:

"MSR has told them that the information will become public on May 19th"

Where, I think, "them" is the IE product team. Presumably MS will patch their version of this issue on 5/12, "patch Tuesday."
Yeah, 5/19, my bad.
FYI: if we ever want to get smarter about how we reply, it sounds like Bug 482874 is tackling some of the same issues for 404 pages.
Priority: P2 → --
Target Milestone: mozilla1.9.1 → ---
Priority: -- → P2
Target Milestone: --- → mozilla1.9.1
Attachment #373419 - Flags: superreview?(cbiesinger) → superreview?(bzbarsky)
This should have a unit test for 1.9.0 and 1.9.1. Can we get one?
This blocks the, now firedrill, 1.9.0.10 release.
Flags: blocking1.9.0.10+
Attachment #373419 - Flags: superreview?(bzbarsky) → superreview+
Comment on attachment 373419 [details] [diff] [review] Add sanity check to ensure SSL is being used Approved for 1.9.0.10 and 1.9.0.11. a=ss Please land on 1.9.0 as well as on GECKO190_20090406_RELBRANCH.
Attachment #373419 - Flags: approval1.9.0.11+
Attachment #373419 - Flags: approval1.9.0.10+
Checking in netwerk/protocol/http/src/nsHttpChannel.h;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.h,v <-- nsHttpChannel.h
new revision: 1.98; previous revision: 1.97
done
Checking in netwerk/protocol/http/src/nsHttpChannel.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.cpp,v <-- nsHttpChannel.cpp
new revision: 1.337; previous revision: 1.336
done
Checking in netwerk/protocol/http/src/nsHttpChannel.h;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.h,v <-- nsHttpChannel.h
new revision: 1.97.10.1; previous revision: 1.97
done
Checking in netwerk/protocol/http/src/nsHttpChannel.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.cpp,v <-- nsHttpChannel.cpp
new revision: 1.336.6.1; previous revision: 1.336
done
Pushed http://hg.mozilla.org/mozilla-central/rev/72fd3bd01cbc

This still needs tests, right?
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Flags: in-testsuite?
Resolution: --- → FIXED
> This should have a unit test for 1.9.0 and 1.9.1. Can we get one?

Not right away. Waldo tells me we'll need some httpd.js changes to support testing this. And it'll be a lot of work to test a fairly simple patch.
Testing this with the two manual tests, the behavior looks fixed with the original test but I notice no difference in behavior with the second test between 1.9.0.9 and 1.9.0.10. They act in the same manner (both getting a prompt for a username and password). This was tested on OS X 10.5.6 with Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.10) Gecko/2009042315 Firefox/3.0.10.
Al: for the second test, you need to cancel the username/password prompt and see what happens. With the fix, you should not see an error page from the proxy server.
Thanks, Wan-Teh. I tested that and you do not receive the evil error page with 1.9.0.10 (and do with 1.9.0.9).
Verified fixed in 1.9.0.11 with Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.11pre) Gecko/2009051111 GranParadiso/3.0.11pre using existing testcases.
We've got a bug that seems likely to be a result of this fix: https://bugzilla.mozilla.org/show_bug.cgi?id=491818

They've got a setup where their proxy redirects unauthorized users to a login page (rather than using the std 407 auth prompt). The issue: do we have a way to honor a redirect without parsing any page content that the redirect contains? That's what we need in order to make this style of redirect work w/o opening back up the security hole.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Blocks: 491818
> We've got a bug that seems likely to be a result of this fix:

That's the place to fix that issue, then. Please don't reopen bugs unless the patch is actually backed out.
Status: REOPENED → RESOLVED
Closed: 16 years ago16 years ago
Resolution: --- → FIXED
No longer blocks: 491818
Depends on: 491818
Depends on: 493699
Jason: Please attach a 1.8-branch version of this fix for Thunderbird2 and SeaMonkey.
Group: core-security
Flags: wanted1.8.1.x+
Flags: blocking1.8.1.next+
Attached patch Patch for 1.8 branch (deleted) — Splinter Review
Attachment #380256 - Flags: superreview?(bzbarsky)
Attachment #380256 - Flags: review?(bzbarsky)
Comment on attachment 380256 [details] [diff] [review] Patch for 1.8 branch Note: This doesn't include the fix for bug 491818, so redirects are not going to work.
We need to backport that fix too; your call on whether you prefer to do it separately or want to roll it in.
Comment on attachment 380256 [details] [diff] [review] Patch for 1.8 branch Assuming bz's review (please wait for that), approved for 1.8.1.22, a=dveditz for release-drivers
Attachment #380256 - Flags: approval1.8.1.next+
Attachment #380256 - Flags: superreview?(bzbarsky)
Attachment #380256 - Flags: superreview+
Attachment #380256 - Flags: review?(bzbarsky)
Attachment #380256 - Flags: review+
Patch checked into the 1.8(.1) branch

Checking in netwerk/protocol/http/src/nsHttpChannel.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.cpp,v <-- nsHttpChannel.cpp
new revision: 1.256.2.23; previous revision: 1.256.2.22
done
Checking in netwerk/protocol/http/src/nsHttpChannel.h;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.h,v <-- nsHttpChannel.h
new revision: 1.70.4.6; previous revision: 1.70.4.5
done
Keywords: fixed1.8.1.22
Is this behavior Firefox specific? When testing with 1.8.1 Seamonkey, I see no difference between last night's build and the last shipped Seamonkey (1.1.16). In both instances, the alert, which contains cookie information when this is unpatched, is empty.

Unfortunately, I just checked this in Firefox 3.0.10 and 3.0.11 again on OS X and, for some reason, the 3.0.11 build 2 is exhibiting the bug. With both of the testcases here, the alert is showing cookie information. I went back to the May 11 build that I verified with (Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.11pre) Gecko/2009051104 GranParadiso/3.0.11pre) and it *is* fixed there. Something reverted the fix. I'm not sure if something somehow reverted or
(Ignore the last line above. Bad editing.)
This is still fixed on the 1.9.1 nightly.
When I check in any 1.9.0 nightly from 5/11 through 6/4, the bug IS fixed. When I check in Firefox 3.0.10 release and 3.0.11 build 2 (the current build), the cookie information is shown in the alerts. I'm utterly baffled here.
With 3.0.10:

$ nc -l 8080 < resp
GET http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official HTTP/1.1
Host: en-us.start2.mozilla.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.10) Gecko/2009042315 Firefox/3.0.10
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Cookie: __utma=183859642.770291850.1244137527.1244137527.1244137527.1; __utmz=183859642.1244137527.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); s_vi=[CS]v1|4A280836000043CA-A02081B000000C1[CE]

$ nc -l 8080 < evil_407_SSL_proxy.reply
GET http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official HTTP/1.1
Host: en-us.start2.mozilla.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.10) Gecko/2009042315 Firefox/3.0.10
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Cookie: __utma=183859642.770291850.1244137527.1244137527.1244137527.1; __utmz=183859642.1244137527.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); s_vi=[CS]v1|4A280836000043CA-A02081B000000C1[CE]

With 3.0.11:

ABillings-Laptop:Desktop abillings$ nc -l 8080 < resp
GET http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official HTTP/1.1
Host: en-us.start2.mozilla.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Cookie: __utma=183859642.770291850.1244137527.1244137527.1244137527.1; __utmz=183859642.1244137527.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); s_vi=[CS]v1|4A280836000043CA-A02081B000000C1[CE]

ABillings-Laptop:Desktop abillings$ nc -l 8080 < evil_407_SSL_proxy.reply
GET http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official HTTP/1.1
Host: en-us.start2.mozilla.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Cookie: __utma=183859642.770291850.1244137527.1244137527.1244137527.1; __utmz=183859642.1244137527.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); s_vi=[CS]v1|4A280836000043CA-A02081B000000C1[CE]

With the nightly 1.9.0 build:

ABillings-Laptop:Desktop abillings$ nc -l 8080 < evil_407_SSL_proxy.reply
GET http://www.mozilla.org/projects/granparadiso/ HTTP/1.1
Host: www.mozilla.org
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.12pre) Gecko/2009060404 GranParadiso/3.0.12pre
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive

ABillings-Laptop:Desktop abillings$ nc -l 8080 < resp
GET http://www.mozilla.org/projects/granparadiso/ HTTP/1.1
Host: www.mozilla.org
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.12pre) Gecko/2009060404 GranParadiso/3.0.12pre
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
You're testing with HTTP, but this fix is only for HTTPS :)

Also note that the correct behavior is to see NO alert, so even seeing an empty alert is a test failure.

I just tested the 3.0.10 release (OSX) and the 3.0.11 build 2 for OSX from ftp://ftp.mozilla.org/pub/firefox/nightly/3.0.11-candidates/build2/ and it works fine, i.e. I get the boilerplate "proxy refused" page, with no alert popup of any kind.
doh, httpS. *sigh* I feel stupid now. Yes, it works correctly with https in 3.0.11 build 2:

ABillings-Laptop:Desktop abillings$ nc -l 8080 < resp
CONNECT www.paypal.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
Proxy-Connection: keep-alive
Host: www.paypal.com

ABillings-Laptop:Desktop abillings$ nc -l 8080 < evil_407_SSL_proxy.reply
CONNECT www.paypal.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
Proxy-Connection: keep-alive
Host: www.paypal.com
Verified for 1.8.1.22 with Seamonkey (Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.22pre) Gecko/20090604 SeaMonkey/1.1.17pre) as well.
Alias: CVE-2009-1836
Target Milestone: mozilla1.9.1 → mozilla1.9.2a1
For the record - CVE-2009-2061 was assigned to the redirect / III.B attack described in the paper:

  Mozilla Firefox before 3.0.10 processes a 3xx HTTP CONNECT response
  before a successful SSL handshake, which allows man-in-the-middle
  attackers to execute arbitrary web script, in an https site's context,
  by modifying this CONNECT response to specify a 302 redirect to an
  arbitrary https web site.

Also CVE-2009-2065 for III.C (is there a separate bug for that?).
Sorry for the late response, but I wasn't aware of this discussion before today.

This change will cause major headaches for users behind proxies and the administrators of those proxies. The proxy error messages often contain important information the user is supposed to see. Replacing these messages with general boilerplate messages is not a good idea. We have already seen this with MSIE for a long time (all protocols) and it's a very frequent problem cause from a usability perspective.

Error messages from proxies are not only the dull standard technical messages which have been partially quoted here, but also notifications when the user accesses content he is not allowed to access, terms of use, etc.

And even for the technical error messages such as "connection failed" or "host not found", the error message as such is important in tracking down the error cause. Doing a string search to identify these will not work out well, as the error messages are often localized or even replaced entirely with custom site-specific messages.

But I do understand the very valid concerns about phishing. I have however a proposal along the lines of Jason's comment earlier.

* Before rendering the response, pop up an error notification dialog, similar to the certificate error dialog, requiring user action before showing the content.

* When rendering the content, make sure to render the content in a different context blocking any access to cookies etc. I.e. render the content with a different internal requested-URI.

* When following any links from this content, DON'T use the original requested-URI in Referer. Using the internally generated dummy URI mentioned above is fine (assuming it does not contain user-specific details), or none at all.

* Try to block any active content / plugins from being used in rendering the response.

Regarding 407 responses, the auth dialog needs to be handled as usual, even for NTLM/Negotiate authentication. Which means the connection as such isn't considered broken. If the user cancels the auth dialog to render the error message, then start at the top of the list above.
I strongly agree. It is not resolved nor fixed in its current state. I have a proxy that provides detailed certificate status information in error messages and now Firefox just drops it completely.

(In reply to comment #75)
> Sorry for the late response, but I wasn't aware of this discussion before
> today.
>
> This change will cause major headaches for users behind proxies and the
> administrators of those proxies. The proxy error messages often contain
> important information the user is supposed to see. Replacing these messages
> with general boilerplate messages is not a good idea.
> [...]
(oops, sorry for overquoting, just pressed "submit" too early)
(In reply to comment #74)
> For the record - CVE-2009-2061 was assigned to the redirect / III.B attack
> described in the paper:
>
> Mozilla Firefox before 3.0.10 processes a 3xx HTTP CONNECT response
> before a successful SSL handshake, which allows man-in-the-middle
> attackers to execute arbitrary web script, in an https site's context,
> by modifying this CONNECT response to specify a 302 redirect to an
> arbitrary https web site.

It looks to me like this CVE is fixed by this bug. Shouldn't the CVE be updated?
I realize that I'm way late to the party here, but I want to echo what Henrik and Alex said - this fix is terrible for proxy admins (myself included). Our proxy returns a 403 forbidden for filtered SSL content and an error page about why the content is blocked, and Firefox just drops all of it on the floor. :(
To fix the unfortunate situation this bug created for proxy and captive portal users, Firefox should add handling of HTTP error 511 Network Authentication Required as soon as possible (see bug #728658).
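For context, the 511 mechanism looks roughly like this on the wire (example adapted from RFC 6585; login.example.net is the RFC's placeholder, not a real portal):

HTTP/1.1 511 Network Authentication Required
Content-Type: text/html

<html>
<head>
   <title>Network Authentication Required</title>
   <meta http-equiv="refresh" content="0; url=https://login.example.net/">
</head>
<body>
   <p>You need to <a href="https://login.example.net/">authenticate with
   the local network</a> in order to gain access.</p>
</body>
</html>

A distinct status code lets the browser recognize the interception and present the portal's message in a neutral, non-origin context instead of silently discarding it.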