Closed
Bug 460627
Opened 16 years ago
Closed 10 years ago
Can we heuristically distinguish likely MitMs from other self-signed certs?
Categories
(Core :: Security, defect)
RESOLVED
INCOMPLETE
People
(Reporter: johnath, Unassigned)
Description
A conversation among the security group recently diverged onto a thread about whether a useful heuristic could be developed to indicate more/less risky certificates. Since it doesn't describe a vulnerability per se, just an opportunity for improvement, I don't think it has to live in the security group.
Suggestions have included:
- Watching for failed certs on AUS/plugin blocklist pings (which should never happen)
- Lists of "high value targets", pulled by the browser like other site lists, with sites and corresponding cert fingerprints
and a whole list from boris (a rough scoring sketch follows the list):
1) Is the hostname you're accessing actually just an IP address (common for routers, I would think, right?). If so, downgrade attack likelihood a bit.
2) Is the hostname you're accessing a .com? If so, upgrade attack likelihood.
3) Is the hostname you're accessing not a FQDN? If so, downgrade attack likelihood.
4) Is the IP address you're accessing on your local subnet (or similar checks on domain names; the domain name check would be better if we can pull it off, since presumably the attacker controls DNS)? If so, downgrade attack likelihood. This might be hard to make work.
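As a rough illustration only, here is a minimal sketch (in Python) of how the signals above might be folded into a single score. The weights, the score values, and the example hosts are invented for the example and don't correspond to any existing Firefox code; the blocklist-ping canary idea would feed in as yet another (strong) signal.

# Heuristic sketch: combine the signals from boris's list into one score.
# Higher score = "more likely a genuine MitM"; all weights are illustrative.
import ipaddress

def mitm_risk_score(hostname, peer_ip, local_networks):
    score = 0

    # 1) Bare IP address instead of a hostname (routers, appliances):
    #    downgrade attack likelihood a bit.
    try:
        ipaddress.ip_address(hostname)
        score -= 2
    except ValueError:
        pass

    # 2) A .com hostname: upgrade attack likelihood.
    if hostname.endswith(".com"):
        score += 2

    # 3) Not a fully qualified domain name (no dots at all): downgrade.
    if "." not in hostname:
        score -= 2

    # 4) Peer address on a local subnet: downgrade (a weak signal, since a
    #    rogue access point controls DNS and addressing anyway).
    peer = ipaddress.ip_address(peer_ip)
    if any(peer in net for net in local_networks):
        score -= 1

    return score

# Example: a self-signed cert on a bare private IP scores low (likely a
# router), while one presented for a .com host scores high (likely a MitM).
nets = [ipaddress.ip_network("192.168.0.0/16")]
print(mitm_risk_score("192.168.1.1", "192.168.1.1", nets))      # -3
print(mitm_risk_score("www.example.com", "203.0.113.7", nets))  # +2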
This bug is related to, but not the same as, bug 431826 (improve cert error messages in Firefox) and bug 398721, and was indirectly spawned from discussion of bug 460374.
Comment 1•16 years ago
(In reply to comment #0)
> - Watching for failed certs on AUS/plugin blocklist pings (which should never
> happen)
How much would this cost us in performance?
> 2) Is the hostname you're accessing a .com? If so upgrade attack likelihood.
Why? Is https://english.leumi.co.il/ not a legitimate target?
> 3) Is the hostname you're accessing not a FQDN? If so, downgrade attack
> likelihood.
What about phishers? Or a combination of both phishing and MITM?
> 4) Is the IP address you're accessing on your local subnet (or similar checks
> on domain names; the domain name check would be better if we can pull it off,
> since presumably the attacker controls DNS)? If so, downgrade attack
> likelihood. This might be hard to make work.
ISPs sometimes assign local-subnet addresses to their users, and so do WiFi access points - the very network "they" could be using. "They" would already control DNS and could use local subnets too, so I don't think this is a really good indicator either.
Elsewhere I suggested requiring a change to some about:config settings in order to be able to override cert errors. I think this would be more appropriate than trying to accommodate self-signed certificates and appease the 0.1% of users who really need them. Or is everybody suddenly configuring routers?
Comment 2•16 years ago
One more thought:
(In reply to comment #0)
> - Watching for failed certs on AUS/plugin blocklist pings (which should never
> happen)
Wouldn't attackers learn this list and let those through?
> - Lists of "high value targets", pulled by the browser like other site lists,
> with sites and corresponding cert fingerprints
First of all, wasn't EV supposed to change all that? Also, where would this list come from? Built into the browser? If so, what happens when a certificate suddenly changes? If not, couldn't the attacker supply that list instead? It would have to be signed, I guess (not merely fetched over a secure connection, but an actually signed file).
Overall I don't believe this to be a particularly good idea. I think the question isn't whether there are more/less risky certificates, but whether self-signed certificates do any good, who really needs them, and how the browser can accommodate the needs of professionals while still protecting the average user.
Comment 3•16 years ago
I think that by "IP address" we mean an IP without a domain name, i.e. https://12.34.56.78.
In the case of a rogue WiFi access point, it's clear that it could trivially return an arbitrary local IP address, public or private.
With regard to EV certificates, I think the only fair approach is to incrementally send our list of ALL known EV certificates, similar to how we send data for the URL classifier. Anyone investing in EV deserves protection.
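To make the "incrementally shipped EV list" idea concrete, here is a minimal sketch under the assumption that the browser keeps a locally synced map from hostnames to expected certificate fingerprints. The data format, the example entry, and the update mechanism are hypothetical; only the general shape mirrors how URL-classifier data is pulled incrementally.

# Sketch: check a presented certificate against a locally synced list of
# known EV certificate fingerprints (hypothetical data, for illustration).
import hashlib

# host -> set of acceptable SHA-256 fingerprints, refreshed by incremental
# updates in the same spirit as the URL-classifier lists.
known_ev_fingerprints = {
    "www.example-bank.com": {"placeholder-sha256-fingerprint"},
}

def cert_matches_known_ev(hostname, der_cert_bytes):
    expected = known_ev_fingerprints.get(hostname)
    if expected is None:
        return None  # host not on the list; this check says nothing
    actual = hashlib.sha256(der_cert_bytes).hexdigest()
    return actual in expected

# A False result for a listed host is a strong MitM signal: the site is
# known to hold an EV cert, yet something else was presented.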
Comment 4•16 years ago
> Why? Is https://english.leumi.co.il/ not a legitimate target?
I didn't say you downgrade the risk of attacks for others. And I didn't say this is a final proposal; it was just brainstorming. This part could be localized. You did read the thread this started with, right?
> What about phishers? Or a combination of both phishing and MITM?
I don't see what that has to do with accessing non-FQDNs... It's very rare that a user would send financial info or some such to a non-FQDN.
> ISPs sometimes use and assign local subnet addresses for their users
Which is why hostname is a better check. Again, see the thread this came from.
> Wouldn't attackers learn this list and let those through?
Yes; this was also mentioned in the original thread. The claim was made that no matter what we end up with an arms race and this might be a good first mitigating step.
> i think that by ip address, we mean ip without a domain name, i.e.
For purposes of my item 1, precisely.
As far as high-value targets go, Emma just pointed out that if the issue is a rogue open wireless network the user is on, and they're at home, then there's a good chance the operator of the network also has access to the user's snail mail (and certainly the user's geographic location) and can thus aim the attacks at sites that are low-value from a global perspective but high-value in the relevant context. Think local banks.
Comment 5•16 years ago
Boris, neither is my response meant to be a final assessment about the idea, but rather a continuation of the brainstorming and raising of additional thoughts. I've followed most relevant threads as they evolved and partly also participated.
In particular I'd like to understand how necessary the proposed idea is, given that PKI is supposed to provide sufficient answers to exactly these problems. It's the browser's design that allows users to ignore the errors by all means. Bug 460374 is an excellent example: had the browser made it *impossible* to access those sites, with a clear explanation like "The site you are trying to visit uses an illegitimate certificate; this should not happen!", then PKI would have succeeded in its primary task. PKI in itself provides the capabilities to prevent such attacks. It's the browser that doesn't comply (and allows the errors to be ignored).
Now, obviously this is because "bad" certificates have been ignorable for a long time already, and Mozilla has taken great steps towards eliminating past shortcomings (of all browsers). I strive for, and would suggest, finding a solution which gives the professional user a way to accomplish his job, which sometimes involves dealing with all kinds of self-signed certificates (which, as a matter of fact, sometimes can't be avoided for now). But it should work in such a way that really only the professional and knowledgeable user is able to do it (like editing the about:config page), while otherwise clearly continuing to protect the average user, who doesn't have the same understanding nor can judge the situation (bug 460374).
Additionally, the designers here have to ask themselves whether it's right that 1% or less can hold the other 99% of Firefox users hostage for the convenience of continuing to use self-signed certificates. This is the real reason and nothing else! Those who represent less than 1% mostly have their own web sites and don't want to secure them with certificates issued by an authority. It's the same crowd which uses PGP for the same reasons. We must recognize that! All other arguments are excuses to keep the browser ignoring self-signed certificates. At least let's get real on this issue...
(Needless to say, I've worked very hard to provide a viable alternative to paid-for SSL certificates, but for some, even that isn't good enough.)
When EV certificates were being discussed, the argument was made that high-profile sites and brands would be better protected by the clear distinction provided in the UI. But if the browser keeps allowing users to click through all errors no matter what, that very effort has failed. EV remains as useless as regular certificates (which would have prevented the attack in bug 460374 just as well).
For those who really prefer to use certificates not issued by an included authority, there is a correct way of doing it: importing a CA root, explicitly trusting it, and using certificates issued from that root is IMO the right way to achieve independence from the authorities shipped with NSS.
Other errors which can happen in conjunction with SSL-secured sites may be treated differently, for example a partial domain mismatch (subdomains which don't match) or perhaps even expired certificates, to a certain extent.
I think that digital certificates already provide the solution to the problem you are trying to solve - you only have to apply it correctly! Starting a guessing game about which self-signed certificates are better or worse simply ignores the value legitimate certificates provide - the work done by the authorities and the work done by the NSS team. On top of that, it's a game you can't win either...
Sorry for the lengthy reply - I didn't mean to write that much :-)
Comment 6•16 years ago
Eddy, it's not an option to completely forbid self-signed certs. That's a *very* old discussion which keeps coming up again and again, and the answer is always the same: there are very good reasons for self-signed certs, even beyond professionals. Small-site webmail, for example. No, your SmartSSL is not an option, as my experience showed. Please don't keep beating that "no self-signed certs" horse.
To this bug: I personally don't think this is a good idea. I don't like heuristics per se; they are wrong more often than not, and create more problems than they solve.
I proposed something else: make a list of important sites which are, and must be, protected by proper EV certs signed by a proper CA at all times. E.g. all banks serving private customers (there are probably about 1000 or so), ebay (all country domains in use, 10-20), paypal and similar. Maybe only those on this list which signed up for EV certs.
If one of these gives a bad cert, we cry foul. (And hope that PayPal didn't screw up again.)
This will mean:
* These sites can't be attacked by self-signed certs or low-confidence DV certs (an improvement over current state).
* There's no arms race - I can't see how it could be easily circumvented (on SSL level).
* You protect the most valuable targets, making the attack much less financially interesting. For attackers, it's very much an effort/gain consideration.
Maybe there's a more scalable way to maintain these lists, e.g. fetching the list of EV sites from CAs, OCSP (using a known cert) or whatever.
The user should maybe also be able to easily add sites to the list, e.g. a "make sure that this thing always has a valid cert" button, and then draw that site's URL in an even deeper green than the EV sites. This would let the user see/notice at one glance whether he's on the site he keeps visiting (e.g. his bank or ebay or the server administration panel).
Maybe that (the very same button) would be a nice UI for self-signed certs, too - they'd be added to the cert store and the domain to the list of sites requiring a valid cert.
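A minimal sketch of the policy described in this comment, assuming a shipped set of high-value hosts plus a user-maintained set fed by the hypothetical "always require a valid cert" button; the names and set contents are made up for illustration.

# Sketch: hard-fail (no override UI) for pinned high-value sites.
HIGH_VALUE_HOSTS = {"www.paypal.com", "www.ebay.com"}  # shipped list (example)
user_pinned_hosts = set()  # filled by the hypothetical "pin this site" button

def allow_cert_override(hostname, cert_validation_ok):
    """Decide whether the cert-error page may offer an override at all."""
    if cert_validation_ok:
        return True  # no error page in the first place
    if hostname in HIGH_VALUE_HOSTS or hostname in user_pinned_hosts:
        return False  # cry foul: no click-through for pinned sites
    return True  # everyone else keeps the existing (difficult) override flow

def pin_site(hostname):
    # "Make sure that this thing always has a valid cert" button handler.
    user_pinned_hosts.add(hostname)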
Comment 7•16 years ago
(In reply to comment #6)
> Eddy, it's not an option to completely forbit self-signed certs. That's a
> *very* old discussion which keeps coming up again and again...
...apparently because that's how PKI is meant to be. Strict!
> and the answer is
> always the same: There are very good reasons for self-signed certs, even beyond
> professionals.
For every half-good reason in favor of self-signed certificates, I'll give you two good reasons against! Self-signed certificates are the very reason for all this mess and you can't deny it. Without them, the risks would be eliminated to a very, very high degree. Period.
We could then concentrate on improving the quality of the CAs instead.
> Small-site webmail for example.
Ben, in case you hadn't realized, we are approaching the year 2009, not 1999. Long gone are the times of little choice and high prices. Did you pay for your domain name? I bet you can afford about the same amount for a cert.
> No, your SmartSSL is not an
> option, as my experiences showed.
Perhaps you want to share your experience with the audience? ;-)
* Besides, its name is StartSSL and yes, the small-site webmail is a perfect candidate for StartCom's Class 1 certs, no doubt.
Comment 8•16 years ago
Eddy, it's *only* SSL/CA people that argue against self-signed certs. Frankly, I don't care what PKI is supposed to be. Many or most other security people (esp. independents) don't share the "PKI only" view. So, please don't keep beating that "no self-signed certs" horse in every remotely related thread, as it's highly distracting and disruptive, thanks. (Same goes for Nelson.)
Comment 9•16 years ago
Ben, don't presume to tell me what to do.
On topic: even if we could determine PERFECTLY whether an MITM attack was going on or not, it would not tell us whether the site actually belongs to the party it claims to represent. The mere absence of an MITM attacker between you and www.bankofamerica.ru doesn't make that server a genuine Bank of America server. Telling a user that a cert is valid merely because no MITM is detectable only benefits attackers, even if the MITM detection is perfect.
Comment 10•16 years ago
(In reply to comment #8)
> Eddy, it's *only* SSL/CA people that argue against self-signed certs.
Ben, it's a very small minority of users that advocates self-signed certificates and, in doing so, is willing to compromise the security of the vast majority of the millions of Firefox users.
> Frankly, I don't care what PKI is supposed to be.
I know! Your words are very clear and it's undeniable...
> So, please don't keep
> beating that "no self-signed certs" horse at any remotely related thread, as
> it's highly distracting and disruptive, thanks. (Same goes to Nelson.)
Dear Ben,
I present my opinion and make my arguments in a polite manner, and I listen to and respect yours. The combined knowledge of Nelson and myself goes somewhat beyond running an "openssl" command, and you are NOT going to shut my mouth, nor Nelson's! Instead I suggest that you make the arguments to support your case, as I do, even if you don't like to hear what I have to say and even when you don't agree with me!
Now, I advocate "finding a solution for the professional user without putting the average user at risk" (see comment 5) in order to prevent occurrences like bug 460374, instead of finding a "heuristic that might distinguish MitMs from other self-signed certs". This is what PKI is made for; that's why huge resources are invested in providing cryptography, and this is why the Mozilla Foundation makes a considerable effort to assure a certain quality of the authorities shipped with NSS. Disallowing self-signed certificates - blocking them unless the user performs an interaction an average user is unlikely to perform (like editing about:config or similar) - is one of the possibilities. Otherwise all these investments and efforts are in vain, as proved by bug 460374.
The current trend is clear: with the arrival of this generation of browsers, certificate errors are less and less tolerated. I expect competing browsers to follow, up to the point where self-signed certificates simply won't work anymore. This is 2009, not 1999, and the Internet is growing up!
Comment 11•16 years ago
Eddy, Nelson, I think what you're missing is that if we could detect MitM with a bit more reliability we could have a better stance for providing no UI whatsoever for overriding the error page in the detected MitM cases. As recent experience shows, even our existing pretty convoluted UI is not proof against a determined user who just thinks that Firefox is broken and presses on.
So the question is whether we can more reliably detect cases where the user is likely to be in trouble (as opposed to just possibly in trouble, as now) and make the messaging in those cases less ambiguous and the override more difficult (or impossible).
Now can we stop the sales-pitches and super-long blathering about unrelated things and try to address this bug as filed? To be honest, my gut reaction right now is to either move this discussion to a (small) private list or to a newsgroup where I'll be able to killfile users and threads that get in the way of getting somewhere.
Comment 12•16 years ago
(In reply to comment #11)
> Eddy, Nelson, I think what you're missing is that if we could detect MitM with
> a bit more reliability we could have a better stance for providing no UI
> whatsoever for overriding the error page in the detected MitM cases.
Boris, PKI is made to protect against MITM attacks; now you propose to protect PKI from itself? Or on top of PKI?
> As recent
> experience shows, even our existing pretty convoluted UI is not proof against a
> determined user who just thinks that Firefox is broken and presses on.
That's because Firefox allows that. Remove that option.
> Now can we stop the sales-pitches and super-long blathering about unrelated
> things and try to address this bug as filed?
Sorry about that, but I feel it's very much related, and you are trying to provide a solution to a problem for which a solution already exists. I'm very sorry if I failed to explain that correctly.
Comment 13•16 years ago
Eddy, you have explained your stance at length, both here and in .security in the past. No need to keep evangelizing it.
PKI protects against MITM attacks, sorta. In practice people don't use PKI in all cases. Maybe they should, but they don't. That's life.
> That's because Firefox allows that. Remove that option.
It's needed for people to be able to configure their home routers, for crying out loud, not to even mention the various other places where self-signed certificates make perfect sense in a controlled network.
If you can get the router makers to fix their use of SSL, that will be a start. In the meantime, simple removal, though possibly desirable, is a non-starter and we have to do something more complicated. Now can you please get out of the way of figuring out what that something might be, if anything, if you don't plan to do more than naysay the whole idea? You've made it clear you think it's a bad one, and that's been noted.
Comment 14•16 years ago
(In reply to comment #13)
> PKI protects against MITM attacks, sorta. In practice people don't use PKI in
> all cases. Maybe they should, but they don't. That's life.
This is absurd! You are talking about MITM attacks and self-signed certs (see title) and now this? What's the connection? This proposal is centered around certificates.
> If you can get the router makers to fix their use of SSL, that will be a start.
> In the meantime, simple removal, though possibly desirable, is a non-starter
I did NOT propose that! Instead I proposed preventing average users from clicking through! There is a big difference between what you want to understand and what I actually said! I believe the solution lies there, not in a complicated heuristic. Out!
Comment 15•16 years ago
I will present a different idea about detecting MITM in some cases and let you shoot it down if you can:
1) Have a "reliable" service in the cloud with which Firefox can communicate.
2) Upon encountering an invalid certificate, Firefox talks to the service in the cloud, sends it the certificate (the channel must be reliable, of course) and asks the service if it sees the same certificate as Firefox. The idea is to have the service retrieve the certificate independently.
3) If a different certificate is seen by the service then we've detected an MITM attack. (In which case we can be firm about not proceeding further with the connection.)
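A minimal sketch of the comparison step in this proposal, assuming a hypothetical notary endpoint that fetches the site's certificate itself and returns its SHA-256 fingerprint; the URL and response format are invented, and the channel to the notary would of course need to be authenticated independently (e.g. via a certificate shipped with the browser).

# Sketch: ask an independent "notary" service whether it sees the same
# certificate for this host that we do. A mismatch suggests a local MitM.
import hashlib
import json
import urllib.request

NOTARY_URL = "https://notary.example.org/lookup"  # hypothetical service

def notary_sees_same_cert(hostname, port, der_cert_bytes):
    local_fp = hashlib.sha256(der_cert_bytes).hexdigest()
    query = "%s?host=%s&port=%d" % (NOTARY_URL, hostname, port)
    with urllib.request.urlopen(query) as resp:
        remote_fp = json.load(resp)["sha256_fingerprint"]
    # Different fingerprints mean the notary retrieved a different cert than
    # we did, i.e. someone between us and the server is rewriting the TLS
    # handshake - treat as a detected MitM and refuse to proceed.
    return remote_fp == local_fp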
Reporter
Comment 16•16 years ago
I believe Ivan is describing something very much like the Perspectives project.
http://www.cs.cmu.edu/~perspectives/
Comment 17•16 years ago
The only reason that people are excited about the Perspectives project is that they believe it will create a world in which self-signed certs will be treated as just as valid as verified CA-issued certs when no MITM attack is detected. One or more browser plugins to do just that are under development. :(
The prospect of having self-signed certs be treated as valid automatically, just as CA-issued certs are (when the CA-issued certs have been validated), is very attractive to those who loathe CAs.
The detection of an MITM attack is a good reason to stop things cold and say "no, you can't go there". But the absence of a detectable MITM attack, by itself, is NOT a good reason to automatically treat a self-signed cert as valid.
This is for two reasons:
a) not all MITM attacks are detectable, and
b) the absence of an MITM attack does not mean that the cert really belongs to the party named as its subject. Even if there is no MITM, and all browsers in the world see the same cert for it, www.bankofamerica.ru probably doesn't really belong to Bank of America.
Comment 18•16 years ago
> The detection of an MITM attack is a good reason to stop things cold
> But the absence of a detectable MITM attack, by itself, is
> NOT a good reason to automatically treat a self-signed cert as valid.
Agreed.
Very simple case: the MITM attack is near the origin server (e.g. in the provider/ISP's network). Every vantage point on the Internet would see the same (and, if done right, always the same) but wrong cert. The server is not hijacked, just the network in front of it. Carnivore, the EU surveillance of communication metadata, and other government schemes sit at the provider, so it's not unrealistic.
> www.bankofamerica.ru probably doesn't really belong to Bank of America.
That's no argument, because CAs don't protect against that either. You can easily get a CA-issued cert for bankofamerica.ru, assuming it's your domain.
Comment 19•16 years ago
> That's no argument, because CAs don't protect against that either.
In some generic TLDs (gTLDs) they do.
Comment 20•16 years ago
In response to some of the comments so far:
1) Although there are some similarities between what I proposed and the Perspectives project, I did not propose to implement their suggestions. In particular, I don't believe in self-signed certificates. (Ever.) I do think that their idea of independent certificate verification (did they mention history tracking?) has merit.
2) While we can think of a few cases where this approach to MITM detection would not work, I think that misses the point. We should focus on the cases where it _would_ work: attacks against individual users. These attacks will likely be the most frequent (think insecure Wi-Fi networks), the affected users will be very vulnerable, and browsers must do their best to defend them. If someone starts intercepting _all_ communication to a host (one of the cases where the MITM detection would fail), that's going to be a very obvious attack that will quickly be discovered (if anyone cares).
Comment 21•10 years ago
There is no useful discussion here, and the discussion died 6 years ago, so I'll go ahead and mark this INCOMPLETE.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → INCOMPLETE