Closed Bug 1158191 Opened 10 years ago Closed 4 years ago

On first connection to a new site (typed without protocol), try https first

Categories

(Firefox :: Address Bar, enhancement, P3)


Tracking


RESOLVED DUPLICATE of bug 1613063
Tracking Status
firefox40 --- affected

People

(Reporter: jruderman, Unassigned)

References

(Blocks 1 open bug)

Details

(Keywords: sec-want)

If I type a hostname that I do *not* have in history:
* Try https first
* Fall back to http if we can't connect securely.
* Race with http if the https connection stalls (>1s)

This would slightly speed up sites that redirect to https, and slightly slow down sites that don't support https at all. The automatic fallback means this wouldn't protect against active attacks on the first connection. But many attackers won't interfere with https connections because they don't know which connections have the fallback enabled; other users would notice. Also, having https in bookmarks/history will protect users on subsequent connections (even some typed-without-protocol: see bug 658707 comment 13).
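For concreteness, here is a minimal sketch of the connection logic proposed above (try https://, fall back to http:// on failure, race http:// if https:// stalls). This is an illustration only, not Firefox code; the fetch() helper, the exact stall window, and the error handling are assumptions.

import asyncio

HTTPS_STALL = 1.0  # seconds before racing the http:// attempt (the ">1s" above)

async def load_typed_host(host, fetch):
    """Try https:// first; race http:// if https:// stalls; fall back on failure."""
    https_task = asyncio.ensure_future(fetch(f"https://{host}/"))
    try:
        # If https answers within the stall window, use it.
        body = await asyncio.wait_for(asyncio.shield(https_task), HTTPS_STALL)
        return "https", body
    except asyncio.TimeoutError:
        # https is stalling: race it against a plain http attempt.
        http_task = asyncio.ensure_future(fetch(f"http://{host}/"))
        done, pending = await asyncio.wait(
            {https_task, http_task}, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:
            task.cancel()
        # Prefer https if both finish together; handling of a losing or failed
        # task is elided in this sketch.
        winner = https_task if https_task in done else done.pop()
        return ("https" if winner is https_task else "http"), winner.result()
    except Exception:
        # https failed outright (connection refused, TLS error, ...): fall back.
        return "http", await fetch(f"http://{host}/")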
Could starting both connections at once (a kind of parallel prefetch) and dropping HTTP if HTTPS works help performance?
(In reply to Jesse Ruderman from comment #0)
> But many attackers won't interfere with https connections because they don't know which connections have the fallback enabled

If we look up the domain www.foo.com and then open two connections to the same IP, one over plaintext http and one over https, that seems like a dead giveaway. Adding complexity to hide that doesn't seem worth it.
OS: Unspecified → All
Hardware: Unspecified → All
Bug 1002724 is the opposite of this, proposing, at minimum, an HTTPS attempt after a failed HTTP attempt. This route would be far preferable, in my opinion.
Related bug in Chromium: https://crbug.com/435298
How will you handle mixed content issues: https://www.dell.com/
Another thing to consider is that some websites show entirely different webpages for HTTP and HTTPS. An example is https://web.mit.edu/ and http://web.mit.edu/. If the default is HTTPS, how should browsers fallback in this case?
@lancess: Nothing special happens for mixed content (why would it?); you just get the default behavior. @fn84b: Yes, a server can show entirely different content for HTTP vs HTTPS. If someone really wants the HTTP page, they type HTTP. The fact that the browser takes an unqualified hostname and prepends HTTP:// is a historical artifact (and security vulnerability) that should simply be changed to try HTTPS:// first.
(In reply to lancess from comment #7)
> How will you handle mixed content issues: https://www.dell.com/

Website incompetence should not factor into this decision. If HTTPS connects securely, then it should be expected to work correctly. Comment 0 already proposes falling back to HTTP if HTTPS is available but a connection is not possible due to a security error, though this case should probably warn the user somehow.

(In reply to fn84b from comment #8)
> Another thing to consider is that some websites show entirely different webpages for HTTP and HTTPS.

We should not consider different HTTP & HTTPS served pages to be a legitimate use-case, unless one is on a non-standard port.
Mixed active content (https://www.dell.com) and 40x errors (https://web.mit.edu) could trigger the http fallback.
Having active mixed content trigger a fallback to HTTP is a pretty silly idea, and isn't practically implementable (since you'd have to load the page and execute its scripts completely to find it).
(In reply to Jesse Ruderman from comment #11)
> Mixed active content (https://www.dell.com) and 40x errors (https://web.mit.edu) could trigger the http fallback.

I support this.

(In reply to Eric from comment #12)
> Having active mixed content trigger a fallback to HTTP is a pretty silly idea, and isn't practically implementable (since you'd have to load the page and execute its scripts completely to find it).

How about displaying mixed content by default if users didn't type the protocol in the address bar?
(In reply to Dave Garrett from comment #10)
> Website incompetence should not factor into this decision. If HTTPS connects securely, then it should be expected to work correctly.

I do think we need to take the mixed content issue into consideration. It is fairly common for websites to have valid certificates but active mixed content on their homepages. If we try HTTPS by default and block the active mixed content, what should users do when they see a broken page? They could manually change the protocol to http or turn off mixed-content protection, but neither is secure, and we would effectively be training users to manually fall back to insecure practices. Automatic fallback or automatically displaying mixed content might be a good balance. But is this too complicated?

Also, I wonder why there are so many websites that support HTTPS perfectly but are reluctant to enforce it, i.e. do not redirect HTTP to HTTPS and do not enable HSTS, like http://hg.mozilla.org/, http://people.mozilla.org/, http://code.google.com/, http://en.wikipedia.org/, http://www.ietf.org/, etc. What are their concerns?
(In reply to Jesse Ruderman from comment #11)
> Mixed active content (https://www.dell.com) and 40x errors (https://web.mit.edu) could trigger the http fallback.

Falling back on HTTP error loading the page is expected, but falling back on mixed active content could actually be dangerous. Sites with 3rd party scripts loading up active content could choose to load securely or insecurely. (3rd party scripts loading arbitrary other scripts is dangerous enough as-is.) If a 3rd party script is able to trigger an active mixed content block itself, and the browser were to automatically fall back from HTTPS to HTTP on mixed active content, then this would effectively allow an attacker that has compromised a 3rd party script to downgrade the entire connection intentionally.

Falling back on anything determined upon parsing fetched HTML is a can of worms that I don't think should be opened. Falling back on HTTP or TLS errors is far simpler.
(In reply to fn84b from comment #14)
> If we try HTTPS by default and block the active mixed-content, what should users do when they see a broken page?

The active mixed content blocker UI should be updated to allow users to more easily override it if the connection was the result of an automatic upgrade to HTTPS when HTTP is available (but not when the protocol was explicitly provided). Not ideally safe, yeah, but deals with that issue well enough, I think. In the future, when HTTPS is far more expected than currently, things could be made stricter.
From a server perspective, this seems reasonable and good - as long as there is fallback if the cert doesn't match the hostname.
> as long as there is fallback if the cert doesn't match the hostname.

It sounds slightly more complicated to implement - what's the rationale for that? There's no reason to avoid using SNI on the server.
The EFF's "HTTPS Everywhere" add-on FAQ is in line with comment 8 and (I think) comment 3: <https://www.eff.org/https-everywhere/faq#faq-Why-use-a-whitelist-of-sites-that-support-HTTPS?-Why-can%27t-you-try-to-use-HTTPS-for-every-last-site,-and-only-fall-back-to-HTTP-if-it-isn%27t-available%3F>

------------
There is no guarantee that sites are going to give the same response via HTTPS that they give via HTTP. As of 2015, Forbes is a good example of this problem: compare these HTTP and HTTPS responses. <http://forbes.com/> <https://forbes.com/>

Also, it's not possible to test for HTTPS in real time without introducing security vulnerabilities (What should the extension do if the HTTPS connection attempt fails? Falling back to insecure HTTP isn't safe).
------------

If the downgrade-attack problem is solvable, then perhaps a blacklist would work for the disparate-sites problem. The HTTPS Everywhere developers might have data on just how common it is for hosts to behave like web.mit.edu and forbes.com do.
(In reply to Matthew Paul Thomas from comment #19)

It's not really a downgrade attack, per se. As-is, these attempts are going to HTTP. At worst, it's a prevention of an automatic upgrade. That's not the same thing as a downgrade, at least until we can get HTTPS to be the Internet-wide norm. The goal of the HTTPS Everywhere addon is to, quite obviously, try to get HTTPS everywhere. In the context of this discussion, we're aiming for "HTTPS anywhere".

> Forbes is a good example of this problem: compare these HTTP and HTTPS responses. <http://forbes.com/> <https://forbes.com/>

That is kinda pathetic, but not our problem. In no uncertain terms, they are serving that as their website. If they choose to serve a page on HTTPS from their primary domain, that is their page; full stop. It is irrelevant if it's not what they "wanted" it to be for the bulk of their traffic; they still did it.

A blacklist for this junk is a horrible idea. Sites are completely within their power to serve whatever they want, and Firefox should not make any attempt to pick and choose. HTTPS Everywhere has to tip-toe on eggshells with broken sites, but Firefox won't have to (as much), because it will have enough of an installation base for users and admins to know exactly what to expect and do. As long as Firefox falls back when it is completely broken, things will be fine. Bigger sites, like those mentioned, will likely fix their stuff quickly.
It's also important to remember that HTTPS Everywhere upgrades existing HTTP connections, whilst the proposal here does not.
(In reply to Dave Garrett from comment #20)

I called it a downgrade attack because that's the term used by one of the Chromium committers, who (I assume) knows much more about it than I do. <https://crbug.com/435298#c7>

> In no uncertain terms, they are serving that as their website. If they choose to serve a page on HTTPS from their primary domain, that is their page; full stop. It is irrelevant if it's not what they "wanted" it to be for the bulk of their traffic; they still did it.

That has several uncertain terms, most importantly "their website", since they have more than one. RFC 7230 says: "Resources made available via the 'https' scheme have no shared identity with the 'http' scheme even if their resource identifiers indicate the same authority ... They are distinct namespaces and are considered to be distinct origin servers." So you don't even have a *theoretical* right to expect, if <http://example.com:100/abc> and <https://example.com:100/abc> both exist (even with the same port number!), that they will return the same resource. And even if the RFC said the opposite, that wouldn't help the parent looking through a sheaf of university brochures, typing mit.edu into Firefox and wondering why the site is so uninformative.

This bug report is based on the idea that we can *practically* expect HTTP+HTTPS hosts to serve the same stuff over both, often enough to try HTTPS first. That depends partly on how many exceptions like forbes.com and mit.edu exist: (A) too few to worry about, (B) few enough that a blacklist would work, or (C) too many to be practical. If the answer is C, then relying on site admins to reconfigure their sites to placate Firefox is a risky proposition given Firefox's small user share (cf. the organizations telling people to drop Chrome because it just stopped allowing Java applets). It would help if other browsers made the same change at roughly the same time.
(In reply to Matthew Paul Thomas from comment #22)
> RFC 7230 says: "Resources made available via the 'https' scheme have no shared identity with the 'http' scheme even if their resource identifiers indicate the same authority ... They are distinct namespaces and are considered to be distinct origin servers."

I consider the HTTP/1.1 expectations to be sufficiently obsolete at this point. (note of course, that the new RFCs are merely updates of the specification without intending to change anything fundamentally) Whilst I do agree that having differences between content served on HTTP & HTTPS is not in violation of the specification, it is not unreasonable to favor one over the other when none is specified, either.

This issue is merely about dropping the assumption that an incomplete URI should be considered to be HTTP in all instances.

> So you don't even have a *theoretical* right to expect, if <http://example.com:100/abc> and <https://example.com:100/abc> both exist (even with the same port number!)

It is entirely reasonable to expect no attempts at falling back when non-default ports are specified. There's no way to know what protocol to assume. In fact, I think it might be a good idea to require that all URIs specifying ports other than port 80 or 443 should be considered invalid and not loaded if no scheme is provided. (currently, it appears that leaving out a scheme and specifying any port assumes HTTP, even if port 443 is given; these should both explicitly apply the corresponding scheme IFF unspecified)

> that wouldn't help the parent looking through a sheaf of university brochures, typing mit.edu into Firefox and wondering why the site is so uninformative.

I reject all "it will break the web" arguments. A lot of the web needs breaking, quite desperately. In this particular case, if MIT, of all places, can't configure their server correctly, then I think we have a problem warranting potential minor confusion to the few people that didn't get to the site via Google.

> It would help if other browsers made the same change at roughly the same time.

Always true; rarely done. I don't think the lack of coordination should prevent all progress, however. :/
(In reply to Dave Garrett from comment #23)
> In this particular case, if MIT, of all places, can't configure their server correctly, then I think we have a problem warranting potential minor confusion to the few people that didn't get to the site via Google.

(In reply to Matthew Paul Thomas from comment #22)
> that wouldn't help the parent looking through a sheaf of university brochures, typing mit.edu into Firefox and wondering why the site is so uninformative.

I sent an email to the MIT webmasters asking them to serve the same content over http and https. It would be encouraging if they are willing to fix it. If I have time I will scan the top 1 million sites and see how prevalent such misconfiguration is.
(In reply to Matthew Paul Thomas from comment #22)
> This bug report is based on the idea that we can *practically* expect HTTP+HTTPS hosts to serve the same stuff over both, often enough to try HTTPS first. That depends partly on how many exceptions like forbes.com and mit.edu exist: (A) too few to worry about, (B) few enough that a blacklist would work, or (C) too many to be practical. If the answer is C, then relying on site admins to reconfigure their sites to placate Firefox is a risky proposition given Firefox's small user share (cf. the organizations telling people to drop Chrome because it just stopped allowing Java applets). It would help if other browsers made the same change at roughly the same time.

Here is another example: http://arxiv.org and https://arxiv.org/. If you type https://arxiv.org/, it redirects to https://arxiv.org/help/ssl, which says that SSL support is limited.
(In reply to Hugo Osvaldo Barrera from comment #18)
> It sounds slightly more complicated to implement - What's the rationale for that? There's no reason to avoid using SNI on the server.

Because a given host can serve many sites for both HTTP and HTTPS, and a site served via HTTP may not be available on HTTPS. I.e., you can't assume that just because 443 is listen()ing, a site on 80 is available through it. This is common configuration on CDNs as well as my own cloud host (which serves redbot.org and mnot.net as HTTP + HTTPS (with HSTS), but isitrestful.com and a few others just as HTTP).

SNI doesn't help here, because server behaviour when there isn't a matching name isn't well-defined or consistent, and fixing that would require a lot of server-side changes (which AFAICT is *not* the intent of this bug).

I think we need a more precise proposal to fully evaluate the effects of this change; e.g., how does FF handle:
* Expired cert
* mismatch hostname
* bad cert chain
* HTTP 5xx errors
* HTTP 4xx errors
* dropped conn
* conn timeout
* TLS alerts
* insufficient crypto
* etc.

I think the answer to all of these is "fall back to HTTP." Mind you, that fallback could be a failure page with a message to the effect of "We assumed HTTPS and it didn't work; do you want to try HTTP?" -- but that's one for the UX people...
(In reply to Dave Garrett from comment #23)
> I consider the HTTP/1.1 expectations to be sufficiently obsolete at this point. (note of course, that the new RFCs are merely updates of the specification without intending to change anything fundamentally)

We'll keep that in mind, Dave :)

> Whilst I do agree that having differences between content served on HTTP & HTTPS is not in violation of the specification, it is not unreasonable to favor one over the other when none is specified, either.
>
> This issue is merely about dropping the assumption that an incomplete URI should be considered to be HTTP in all instances.

Exactly so, and an entirely reasonable thing to consider. The discussion about equivalence between http:// and https:// URLs here is a red herring; this proposal isn't trying to make them equivalent. OTOH, there *are* people (including TimBL) who are interested in talking about making them equivalent in some fashion; however, that's not in-scope for this bug.
> Here is another example: http://arxiv.org and https://arxiv.org/. If you type https://arxiv.org/, it redirects to https://arxiv.org/help/ssl, which says that SSL support is limited.

They provide no explanation of why they offer TLS support on a per-page basis. I don't see any point in supporting random configurations like this one (unless a valid justification can be provided). In any case, I see Firefox implementing this as an opt-in option at first, giving these sorts of websites time to implement a sane configuration before it becomes opt-out.

> This is common configuration on CDNs as well as my own cloud host (which serves redbot.org and mnot.net as HTTP + HTTPS (with HSTS), but isitrestful.com and a few others just as HTTP).

Why don't you serve both with HTTP+HTTPS?

> SNI doesn't help here, because server behaviour when there isn't a matching name isn't well-defined or consistent, and fixing that would require a lot of server-side changes (which AFAICT is *not* the intent of this bug).

It has already been suggested before that there would be a "fall back to http" behaviour when there's a certificate mismatch (which I guess would be the case in your scenario).

> I think we need a more precise proposal to fully evaluate the effects of this change; e.g., how does FF handle:

Here's what sounds OK to me (just so we can start discussing these details):

> * Expired cert
Show the expiration warning.
> * mismatch hostname
Fall back to plain-text (http).
> * bad cert chain
Fall back to plain-text (http). At least for now. I'd guess these scenarios are self-signed certs, mostly.
> * HTTP 5xx errors
This is valid content, show the 5xx. It's already a different network layer.
> * HTTP 4xx errors
Ditto.
> * dropped conn
Fall back to plain-text. No https support (apparently).
> * conn timeout
Fall back to plain-text. No https support (apparently).
> * TLS alerts
Which ones?
> * insufficient crypto
Show the error to the user.

> The discussion about equivalence between http:// and https:// URLs here is a red herring; this proposal isn't trying to make them equivalent.

Very well said. It's merely about changing which would be assumed to be the default.
(In reply to Hugo Osvaldo Barrera from comment #28)
> > This is common configuration on CDNs as well as my own cloud host (which serves redbot.org and mnot.net as HTTP + HTTPS (with HSTS), but isitrestful.com and a few others just as HTTP).
>
> Why don't you serve both with HTTP+HTTPS?

They're (very) low traffic sites and it costs money. I'll probably HTTPS them when Let's Encrypt launches.

I think I agree with your take on the various error conditions; a couple of notes below.

> > * TLS alerts
>
> Which ones?

Not sure; EKR, rsalz, etc. can probably say something more meaningful here.

> > * insufficient crypto
>
> Show the error to the user.

I'd like to understand how this would affect existing sites; this may make it harder to raise the bar for crypto in browsers.
(In reply to mnot from comment #26)
> I think we need a more precise proposal to fully evaluate the effects of this change; e.g., how does FF handle:
>
> * Expired cert
> * mismatch hostname
> * bad cert chain
> * HTTP 5xx errors
> * HTTP 4xx errors
> * dropped conn
> * conn timeout
> * TLS alerts
> * insufficient crypto
> * etc.
>
> I think the answer to all of these is "fall back to HTTP."

I'm inclined to think fallback should be on any connection error other than HSTS or HPKP, but there might be a few 4xx codes that should always be handled with an aborted connection attempt. For example, if a server says "414 Request-URI Too Long", attempting again in plaintext with just the 's' shaved off probably won't help.

> Mind you, that fallback could be a failure page with a message to the effect of "We assumed HTTPS and it didn't work; do you want to try HTTP?" -- but that's one for the UX people...

I think we shouldn't be adding any UI beyond a pref in about:config, at least for this specific bug. For upgrades of HTTP->HTTPS when the scheme was given, yes, that will need quite a bit of UX work. We would ~eventually~ want to warn users of a failed HTTPS assumption for an unspecified scheme, but we're nowhere near being able to attempt that yet.

(In reply to Hugo Osvaldo Barrera from comment #28)
> > * Expired cert
>
> Show the expiration warning.

God no. There are quite a few HTTP servers that technically support HTTPS, but have long since lapsed certificates. This is one of the more annoying barriers to adoption of TLS which ACME will hopefully cure us of. This is exactly the sort of server we can expect to need to fall back on.

> > * HTTP 5xx errors
>
> This is valid content, show the 5xx. It's already a different network layer.

No, this should fall back. It could be an indication that the server is simply not configured to serve content over HTTPS.

> > * TLS alerts
>
> Which ones?

Probably everything but inappropriate_fallback(86) (SCSV; RFC 7507).

> > * insufficient crypto
>
> Show the error to the user.

Probably not, but it depends on what we mean with this.
(In reply to Dave Garrett from comment #30)
> Probably everything but inappropriate_fallback(86) (SCSV; RFC 7507).

I think disabling insecure fallback breaks far fewer sites than this one, so we probably don't need to consider sending the SCSV when implementing this.
(In reply to mnot from comment #29)
> (In reply to Hugo Osvaldo Barrera from comment #28)
> > > This is common configuration on CDNs as well as my own cloud host (which serves redbot.org and mnot.net as HTTP + HTTPS (with HSTS), but isitrestful.com and a few others just as HTTP).
> >
> > Why don't you serve both with HTTP+HTTPS?
>
> They're (very) low traffic sites and it costs money.

What cost? IIRC, Google has some report that enabling TLS on all their servers had a <5% CPU load increase. I admit that this implies a cost, but surely 5% extra CPU usage is not prohibitively expensive.

(In reply to Dave Garrett from comment #30)
> (In reply to Hugo Osvaldo Barrera from comment #28)
> > > * Expired cert
> >
> > Show the expiration warning.
>
> God no. There are quite a few HTTP servers that technically support HTTPS, but have long since lapsed certificates. This is one of the more annoying barriers to adoption of TLS which ACME will hopefully cure us of. This is exactly the sort of server we can expect to need to fall back on.

Not much of a hassle to change a certificate, honestly. This sort of issue will make admins change them fast enough, hence adding to the goal of "increase HTTPS adoption".

> > > * HTTP 5xx errors
> >
> > This is valid content, show the 5xx. It's already a different network layer.
>
> No, this should fall back. It could be an indication that the server is simply not configured to serve content over HTTPS.

It most definitely is not: https://tools.ietf.org/html/rfc2616#section-10.5
(In reply to Hugo Osvaldo Barrera from comment #32)
> Not much of a hassle to change a certificate, honestly. This sort of issue will make admins change them fast enough, hence adding to the goal of "increase HTTPS adoption".

If certificates weren't a hassle, everyone would have one already. This is not the sort of issue that gets fixed quickly if they still expect HTTP. There are small servers with essentially abandoned HTTPS configurations with crap certs. We need to make sure things fall back here.

> > No, this should fall back. It could be an indication that the server is simply not configured to serve content over HTTPS.
>
> It most definitely is not: https://tools.ietf.org/html/rfc2616#section-10.5

I didn't say this "should" be an indication, but rather that it "could" be. Servers throw up 5xx errors all the time when they're running but not actually serving up any content. Again, this is something worth falling back on.

Essentially, I'm saying it should be pessimistic until we get an HTTP 2xx, then optimistic afterwards, proceeding exactly the same as every other HTTPS connection. The caveat of that being HSTS & HPKP.

An amendment to comment 0, by the way: attempting HTTPS first can notably speed up some sites which support HTTPS, not just due to redirects. I have stumbled onto sites which support SPDY but stick with HTTP normally. We will probably see some improperly configured HTTP/2 servers that do the same.
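To pull together the fallback policy the last few comments are converging on, here is a rough sketch. The classification labels are invented for illustration only, the error classification is assumed to happen upstream, and the exact set of cases is obviously still under debate above.

NEVER_FALL_BACK = {"hsts", "hpkp"}       # pinned/preloaded hosts must stay on https
POINTLESS_FALL_BACK = {"http_414"}       # shaving the 's' off won't shorten the URI

def should_fall_back_to_http(error_kind, scheme_was_typed):
    """Fall back only for auto-upgraded loads, never for explicit https:// URLs."""
    if scheme_was_typed:
        return False
    if error_kind in NEVER_FALL_BACK or error_kind in POINTLESS_FALL_BACK:
        return False
    # Everything else discussed above falls back: expired or mismatched cert, bad
    # chain, TLS alerts (other than inappropriate_fallback), dropped connection,
    # timeout, and 4xx/5xx responses seen before any 2xx ("pessimistic until we
    # get an HTTP 2xx").
    return True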
Temporarily enabling mixed content might be hard or impossible to get right. Imagine the following scenario:

1. You go to example.com
2. https://example.com with mixed content is displayed
3. You click a link to https://example.com/subpage. The browser has to remember that mixed content is still OK for https://example.com/subpage.
4. You send the link to somebody. In order to prevent mixed content warnings on the other side, Firefox has to downgrade the link to plain http. Is this what the user expects?

Or another one:

1. You go to example.com
2. https://example.com redirects you to https://www.example.com/
3. https://www.example.com/ redirects you to https://www.example.com/index.php
4. https://www.example.com/index.php has a mixed content issue. The browser should remember from step #1 that mixed content is OK.

I believe that an auto-mixed-content policy would be rather complex to do right, and doing so might break some reasonable user expectations.

On an opt-in mechanism: it would be rarely used. If one wants something like this, a redirect plus HSTS can do more… Theoretically, it could be used for testing before it becomes opt-out.

On an opt-out mechanism: the opt-out seems to be important. Imagine you want to migrate your website to HTTPS. You silently enable HTTPS first. (Now, users get warnings if they use HTTPS, because your site is self-signed.) Later, you get a valid certificate. But you still don't want to migrate to HTTPS, as your site might contain some incompatibilities (e.g. mixed content). We hopefully don't want to discourage the transition to HTTPS.

* Displaying a certificate warning after enabling HTTPS would be insane. Auto-accepting an invalid certificate is easy to get wrong in many ways (e.g. cookies). The best option seems to be HTTP fallback. (This is technically not a downgrade, just a failed upgrade.)
* When the HTTPS is not considered to be ready, there should be a way to disable the auto-upgrade, maybe through an HTTPS header. Moreover, the opt-out way should be known. (How to achieve it? Maybe by adding some note to the developer console about both HSTS and opt-out?)
Active mixed content must not be enabled on HTTPS, even temporarily. Doing so permanently poisons the HTTPS origin. I still think it would be okay to see active mixed content as a sign of a "failed upgrade". I'm less sure after reading comment 15, though. Maybe just fall back to HTTP for types of mixed content that, if loaded, would be able to top nav (so scripts and non-sandboxed iframes). That way, if an attacker takes over your HTTPS stylesheet server and makes the stylesheet redirect to HTTP, the main page stays HTTPS.
You could do something like fast-fallback-to-IPv4 here [1]. Open a connection to both :443 (try TLS 1.3, otherwise abort) and :80. If the DNSSEC/DANE chain stapling TLS extension is in use (bug 672600), and/or if there is an existing TLSA record in the DNS, immediately switch to https:// and remember this for the current session (= until Firefox gets closed). Respect HSTS, but ignore max-age=0 as long as there is TLSA in use.

I would prefer if you would just check for a TLSA RR in the DNS when I hit enter in the address bar and directly connect to the right port based on the reply. Fall back to :80 if there is no DNS reply within 500ms (?).

[1] https://wiki.terrax.net/wiki/Fast_fallback_to_https (wrong title here; it's more like fast fallback to http)
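A minimal sketch of the racing idea above, assuming plain TCP reachability of :443 vs :80 is a good-enough first signal; the DNSSEC/DANE (TLSA) lookup and HSTS handling are left out, and the 500ms budget is the one suggested in the comment. Illustration only, not Firefox code.

import asyncio

async def pick_scheme(host, timeout=0.5):
    """Probe :443 and :80 in parallel; prefer https if 443 answers in time."""
    async def probe(port):
        try:
            _, writer = await asyncio.open_connection(host, port)
            writer.close()
            await writer.wait_closed()
            return True
        except OSError:
            return False

    https_ok, _http_ok = await asyncio.gather(
        asyncio.wait_for(probe(443), timeout),
        asyncio.wait_for(probe(80), timeout),
        return_exceptions=True,  # a timeout surfaces here as an exception object
    )
    return "https" if https_ok is True else "http"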
mixed content: I'm of Eric's and Vít Šesták 'v6ak's opinion: FF should not do any fallback on mixed content issues. But FF could try to load the mixed content over HTTPS too, using similar mechanisms (so most sites will work as expected).

captive portals: Do we/you have to consider changes to the behavior in captive portals? I think it won't be possible to display websites in captive portals with hints on how to log in. A common configuration is to temporarily redirect everything to an instruction or login website.
You could do this in a more conservative way: if I type in "mozilla.org" without a protocol [for the first time], then speculatively connect to both http:// and https://. Currently Firefox would connect to http:// (and maybe get a 301 to https, and TLS needs some time). Directly show the https:// variant if it has a Strict-Transport-Security header with a max-age bigger than 0.

Maybe you want to wait until far more servers reply to a TLS 1.3 X25519 key_share sent by Firefox. Or, at first, check whether the connection gets refused on port 443 (define a very short time to wait for a timeout). The maximum waiting time defined for the timeout is the same we would wait for https to check for an HSTS header at all.

This should be behind a pref, at first. Telemetry should be collected from users who enable this pref: how many websites we normally connect to via http have an open port 443, are there certificate errors on this speculative connect, which TLS version and cipher could we get, what were the http and https reaction times, etc.?

The pref (I have no name for it) should affect http:// URLs and manually typed ones without any protocol:
0 (default): connect via http:// by default
1: speculatively connect via both http and https, and switch to https if HSTS greater than 0 was found
2: connect via https by default if no protocol was given, but via http:// if it was in the URL
3: always connect via https, regardless of whether a protocol was given or it's an http:// URL

Maybe set this pref to "1" in an experiment with Nightly users and test some different maximum waiting times. HSTS preloading already enforces "3" today for some domains.
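A sketch of how the four proposed pref values could map onto behavior. The pref name is invented here (the comment deliberately leaves it unnamed), and the mode-1 speculative HSTS check is elided.

from enum import IntEnum

class SchemelessHttpsMode(IntEnum):   # hypothetical pref; the comment leaves it unnamed
    HTTP_DEFAULT = 0           # connect via http:// by default (current behavior)
    SPECULATIVE_HSTS = 1       # race http+https, switch to https if HSTS max-age > 0
    HTTPS_IF_SCHEMELESS = 2    # https:// when no scheme typed, honor explicit http://
    HTTPS_ALWAYS = 3           # https:// even for explicit http:// URLs

def default_scheme(typed, mode):
    """Which scheme to start with for address-bar input, per the proposed pref."""
    has_scheme = typed.startswith(("http://", "https://"))
    if mode is SchemelessHttpsMode.HTTPS_ALWAYS:
        return "https"
    if has_scheme:
        return typed.split("://", 1)[0]
    if mode is SchemelessHttpsMode.HTTPS_IF_SCHEMELESS:
        return "https"
    return "http"  # modes 0 and 1 start on http; mode 1 may upgrade after the HSTS check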
Would you mind filing a new bug for your proposal(s), depending on Bug 1348275 and see-also-ing or blocking this bug? WRT Bug 1348275, check out https://hg.mozilla.org/mozilla-central/rev/071beab1c31e#l4.52 if this should be changed or amended.
Flags: needinfo?(jan)
My bug 1426934 was marked duplicate of this one, so I'm happy to continue here. Defaulting to http is a security issue. The only secure way is to use https and https only. Falling back to http automatically won't solve anything, active attackers could easily trigger such a fallback. If users really want insecure, they should request it explicitly, either by typing in http:// or by acknowledging it after the browser failed to load https. Most comments on this ticket were written before letsencrypt was launched. Most (> 50%) of the web is now using https: https://www.eff.org/deeplinks/2017/02/were-halfway-encrypting-entire-web It's time to change the default in the browsers. Certainly, user experience might suffer a bit from this change. But I think this is acceptable for the sake of security. Site operators are to blame if users are unhappy and those who still haven't deployed https will do so soon enough if users start complaining to them.
As much as it would be great if we could prevent downgrade attacks, it doesn't seem like that will ever be possible. We have alternate bugs that we would like to tackle instead over varying timescales:

Bug 1002724 - Fall back to HTTPS if HTTP isn't working.
Bug 1339928 - Make HTTPS the default protocol.

Bug 1002724 looks like something we could implement now without breaking the web; it actually will mean that sites that don't open port 80 work directly from the URL bar, which will help with new sites that choose to only support HTTPS. Bug 1339928 is very much the end game and likely will need the 90%+ adoption April is suggesting before we can consider changing that.

There may be other approaches we can look into, similar to the HSTS priming approaches we have experimented with, or rewriting users' history when we have seen enough redirects to HTTPS. Overall we want to support HTTPS as much as possible, but leaving bugs open that we don't plan to tackle isn't worthwhile. This bug also seems like what :April was suggesting in https://bugzilla.mozilla.org/show_bug.cgi?id=1002724#c23

I'm also happy to be wrong if there is an approach we can resolve the downgrade attacks with. Thanks!
Status: NEW → RESOLVED
Closed: 7 years ago
Flags: needinfo?(jan)
Resolution: --- → WONTFIX
As I said in the other bug, the risk involved with dropping back to HTTP is essentially the same risk as trying HTTP. Whereas it has a lot of benefits against passive eavesdroppers.

With this behavior: HTTPS gets blocked by MITM, fallback to HTTP gets MITM'd
Without this behavior: HTTP gets MITM'd

I definitely think that the ideal case is a world where HTTPS is the only option (with no fallback), just trying to figure out ways to make things at least mildly better than trying HTTP first.
(In reply to April King [:April] from comment #49)
> As I said in the other bug, the risk involved with dropping back to HTTP is essentially the same risk as trying HTTP. Whereas it has a lot of benefits against passive eavesdroppers.
>
> With this behavior: HTTPS gets blocked by MITM, fallback to HTTP gets MITM'd
> Without this behavior: HTTP gets MITM'd
>
> I definitely think that the ideal case is a world where HTTPS is the only option (with no fallback), just trying to figure out ways to make things at least mildly better than trying HTTP first.

So should we reopen this? :-)
Flags: needinfo?(april)
I am usually in favour of anything that helps with HTTPS adoption, but not this time. Those who are serious about HTTPS already have a redirect, which adds almost the same level of protection against passive attacks (OK, the attacker would see the first request and not just the domain) and the same level of protection against active attacks. Not much benefit there.

The problem is that it adds much complexity and potential issues* for little benefit. My more detailed reasoning against this feature is in comment 34.

*) Those can even discourage one from HTTPS.
https://bugzilla.mozilla.org/show_bug.cgi?id=1426934 hardly adds any complexity (aside from a new warning popup). It protects from an active MITM. It doesn't break anything, at least not in a technical sense. Sure, some users might get annoyed a bit, but that only helps with deploying https even more. Making https the default *without* an automatic fallback to http is the best way to address this security issue. It's the right thing to do!
:Gijs, it doesn't seem there is much in the way of consensus in the affirmative on the matter, so let's just leave it closed for now. Maybe once things get closed to pure-HTTPS, we can revisit our choices and assumptions.
Flags: needinfo?(april)
> the risk involved with dropping back to HTTP is essentially the same risk as trying HTTP.

I'm pretty concerned that we might incentivise MitM by making this move. Also, we know from the HSTS priming research that many middle boxes would make an upgrade followed by a downgrade a very bad experience too, due to timeouts.

Perhaps we could explore this technique if a few conditions are met:
- Explicitly use this process only for user-typed URLs
- The server must respond in x ms with an OK header (I think this should be something aggressive like 150ms)
- Always show the broken padlock on degraded content
- Explain to users that within a certain time frame (1 year?) we will show a full page prompt for downgraded content

However, mixed content on the HTTPS version is still a problem; we also might be addressing that this year.

> Making https the default *without* an automatic fallback to http is the best way to address this security issue. It's the right thing to do!

We should do this when we are at a higher % of adoption. Breaking on loading an HTTP site is still unfortunately too aggressive. We are continuing to further warn users about HTTP content this year, and we should do that before explicitly making the user click through a warning prompt to degrade to HTTP. We also should implement Bug 1002724 first before this. We really need to be on the home straight of deprecating HTTP before we do this, as users will ignore the warning if it happens too often.
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
Priority: -- → P3

Hi,

2 years later, is this still valid?

> Making https the default *without* an automatic fallback to http is the best way to address this security issue. It's the right thing to do!

> We should do this when we are at a higher % of adoption. Breaking on loading an HTTP site is still unfortunately too aggressive.

It is a little bit awkward to tell people that they have to put https:// before the domain, because Firefox still defaults to http://.
And no, we really do not want to have anything running on port 80, only for a redirect to port 443.

According to https://letsencrypt.org/stats/#percent-pageloads we're now at 80% page loads through https.
How much longer until this security flaw is finally fixed?

I think the proposed solution nowadays is https-only mode

Status: REOPENED → RESOLVED
Closed: 7 years ago → 4 years ago
Resolution: --- → DUPLICATE

> I think the proposed solution nowadays is https-only mode

I just installed firefox 83 and so far I'm really happy with that mode.
Thanks a lot!

Thanks for the work put into this guys, happy to see it live!
