Closed Bug 1334485 Opened 8 years ago Closed 6 years ago

Tracking using intermediate CA caching

Categories

(Core :: Security: PSM, defect, P3)

51 Branch
defect

Tracking


RESOLVED FIXED

People

(Reporter: a, Unassigned, NeedInfo)

References

(Blocks 2 open bugs)

Details

(Keywords: privacy, Whiteboard: [psm-backlog][tor])

User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0
Build ID: 20160425115046

Steps to reproduce:

I realized that intermediate CA caching can be used to fingerprint which intermediate CAs are installed for a certain user/profile. To check whether a CA is installed for a user, try to load an image from a webserver which has a valid certificate but does not deliver the intermediate CA certificate. If the image loads (onload fires), the user has the intermediate CA installed and loading succeeds; if the image does not load (onerror fires), the intermediate CA is not installed.

Note that this allows fingerprinting users who switched to private mode, since they use the same cached certificates. It might also be possible to gain additional information about the user, such as geographic location or interests, based on which CAs are cached (I, for example, have various German academia certificates cached - thanks to DFN PKI creating sub-CAs for all universities - from which you might infer information about me). It would also be possible to use this in a "super-cookie" style by letting users visit a certain subset of hosts which use "rare" CAs (such as ones from tiny universities) and then, at a later stage, querying whether those CAs are cached to re-identify the user.

Actual results:

I have created a proof-of-concept exploit which shows a table of identified intermediates for more than 300 CAs at https://finprinca.0x90.eu/poc_PRIVATE/ to demonstrate the problem. I'd be interested to hear how many cached CAs you have in your daily profile.

Expected results:

I am not entirely sure what the best solution to this problem would be - completely disabling intermediate CA caching? Caching but not loading content when the complete chain is not delivered?
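For illustration, the probing technique described above could be sketched as follows. This is a hypothetical sketch, not code from the PoC; the function names and the idea of folding results into a bit string are assumptions made here for clarity.

```javascript
// Hypothetical probe: each test host serves a valid leaf certificate but
// omits its intermediate, so the image load only succeeds if that
// intermediate is already in the browser's cache.
function probeIntermediate(url) {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);   // chain completed from the cache
    img.onerror = () => resolve(false); // unknown issuer: not cached
    img.src = url;
  });
}

// Pure helper: fold the per-CA present/absent bits into an identifier
// string, one bit per probed intermediate CA.
function toIdentifier(results) {
  return results.map((cached) => (cached ? "1" : "0")).join("");
}
```

Probing, say, 24 such hosts would yield a 24-bit identifier; probing hundreds, as the PoC does, also reveals which specific CAs the user has encountered.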
I'm not sure anything could realistically be done about this, short of trying to ship a comprehensive set of intermediate CAs, which may or may not be practically feasible. Gerv?

(I'd move this to Core :: Security: PSM or CA policy or whatever, but I suspect that we'll want to open this up, in which case moving the bug would hinder rather than help. Feel free to open up and move if you're confident that's the right thing to do.)
Flags: needinfo?(gerv)
Shipping a comprehensive, up-to-date set of intermediates does not sound very feasible to me - during my research I identified 3366 intermediate CAs which chain to a Firefox-trusted root.
(In reply to Alexander Klink from comment #0)
> I realized that intermediate CA caching can be used to fingerprint which
> intermediate CAs are installed for a certain user/profile. To check if a CA
> is installed for a user, try to load an image from a webserver which has a
> valid certificate but is not delivering the intermediate CA certificate. If
> the image loads (onload fires), this means that the user has the
> intermediate CA installed and thus loading succeeds, if the image does not
> load (onerror fires), the intermediate CA is not installed.

Yes, this is technically possible. In practice, it's a fairly terrible way of fingerprinting/tracking, because each intermediate certificate provides only one bit of information (present/absent). You would therefore need 16-24 or so different SSL connections to different servers to build a useful identifier, which would probably be bad for the performance of your website. You'd also need to buy certificates from 16-24 different CAs, which would be a big hassle. Most of the "rare" CAs don't offer, or don't easily offer, certs to the general public. And then your tracking ID would get messed up as soon as the user visited some other secure websites and cached some of the intermediates.

> I am not entirely sure what the best solution to this problem would be -
> completely disabling intermediate CA caching? Caching but not loading
> content when the complete chain is not delivered?

There's no point in caching but not loading the content - you might as well not have a cache. We could eliminate the caching behaviour because it also creates seeming non-determinism in whether a site works or not (depending on what sites you visited previously). I suspect this cache is good for performance, but I'm not sure. rbarnes/keeler?

Gerv
(In reply to Alexander Klink from comment #3)
> Shipping a comprehensive, up-to-date set of intermediates does not sound
> very feasible to me - during my research I identified 3366 intermediate CAs
> which chain to a Firefox-trusted root.

There are actually around 2720 which are unconstrained, unexpired and unrevoked; see https://crt.sh/mozilla-disclosures . There is some talk of shipping them all, but I'm strongly sceptical of the value and, regardless, it's not happening soon.

Gerv
Flags: needinfo?(gerv)
(In reply to Gervase Markham [:gerv] from comment #4)
> of your website. You'd also need to buy certificates from 16-24 different
> CAs, which would be a big hassle. Most of the "rare" CAs don't offer, or
> don't easily offer certs to the general public.

True, but I would not necessarily need to buy those: I can load content from correctly-configured sites and use incorrectly-configured ones to check for the existence of the certificate in the cache. My PoC does exactly that, based on publicly available internet-wide SSL scans (scans.io/Project Sonar).
> > Shipping a comprehensive, up-to-date set of intermediates does not sound
> > very feasible to me - during my research I identified 3366 intermediate CAs
> > which chain to a Firefox-trusted root.
>
> There are actually around 2720 which are unconstrained, unexpired and
> unrevoked; see https://crt.sh/mozilla-disclosures .

Interesting site, thanks for the link. The 3366 do indeed contain constrained ones, and I did not check for revocation either.

> There is some talk of shipping them all but I'm strongly sceptical of the
> value and, regardless, it's not happening soon.

It seems to be too much of a moving target for me, too - new intermediates probably come and go at a relatively quick pace.
(In reply to Alexander Klink from comment #6)
> True, but I would not necessarily need to buy those, I can load content from
> correctly-configured sites and use incorrectly-configured ones to check for
> the existence of the certificate in the cache - my PoC does exactly that
> based on publicly available internet-wide SSL scans (scans.io/Project Sonar).

Hmm. Yes, OK, that might work. Perhaps we need a random cache eviction policy; that might disrupt any high-bit-depth identifier enough. But it probably wouldn't stop a site tagging one individual user with an intermediate from a really, really obscure CA. The Tor guys may want to turn this cache off...

Gerv
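The random-eviction idea could be sketched roughly as follows. This is hypothetical, not an actual Firefox mechanism; `randomlyEvict` and the injectable `rand` parameter are names invented here for illustration.

```javascript
// Sketch of a random cache eviction policy (hypothetical): each cached
// intermediate is independently dropped with probability p, so any
// multi-bit identifier built from cache state decays over time.
// `rand` is injectable so the behaviour can be tested deterministically.
function randomlyEvict(cache, p, rand = Math.random) {
  for (const entry of [...cache]) {
    if (rand() < p) cache.delete(entry); // drop this entry with probability p
  }
  return cache;
}
```

As the comment notes, decay like this degrades a many-bit identifier but does not stop tagging a single user with one very obscure intermediate, since that one bit may survive many eviction rounds.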
(In reply to Gervase Markham [:gerv] from comment #4)
> (In reply to Alexander Klink from comment #0)
> > I realized that intermediate CA caching can be used to fingerprint which
> > intermediate CAs are installed for a certain user/profile. [...]
>
> Yes, this is technically possible. In practice, it's a fairly terrible way
> of fingerprinting/tracking because each intermediate certificate provides
> only one bit of information (present/absent).

Without wanting to express any other opinions on the technical issues, I'll just note that part of the problem this bug identifies seems to be less about having a reliable tracking identifier for a user, and more about deriving information about the user from the presence/absence of certain certs (such as the ones the Belgian government chains personal ID certs to, or maybe <insert country> military-associated certs, or, in the case of comment #0, the German academic ones). I don't know if that's something we care about, but I wanted to flag it up separately from the tracking problem. :-)
I think we need more info from security code hackers about this cache, and its performance characteristics, and the consequences of turning it off. Gerv
Flags: needinfo?(dkeeler)
The intermediate CA cache is primarily a band-aid for misconfigured servers that don't send the correct intermediates (where "correct" means that if mozilla::pkix used only the certificates sent by the server, it would be able to find a trusted path to a built-in root certificate). If a user has visited a properly configured site and cached the intermediate(s) that a different, broken site relies on, the user will be able to visit the broken site rather than seeing the "unknown issuer" error page. Note that if the user visits the sites in the opposite order, they will see the unknown issuer error page on the broken site.

Another approach to this problem is to do AIA chasing (that is, fetching the URI in each certificate's Authority Information Access extension to attempt to find an issuer). Firefox currently does not do this, for performance and privacy reasons. Each fetch is slow (you would probably want a cache anyway, and so would still be vulnerable to this attack). Also, a CA (or even an untrusted party, since you can't verify the trustworthiness of the AIA field until after you've already fetched data from it) can use this to track users.
Flags: needinfo?(dkeeler)
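The order-dependent behaviour described in the previous comment can be illustrated with a toy model. This is an assumption-laden sketch, not the actual PSM/mozilla::pkix code; the `site` shape and function names are invented here.

```javascript
// Toy model of order-dependent intermediate caching: a "site" sends its
// leaf plus, if correctly configured, its intermediate. Visiting a
// correctly configured site caches the intermediate as a side effect,
// which can later rescue a broken site that relies on the same issuer.
function makeVerifier() {
  const cache = new Set(); // cached intermediates, keyed by issuer name
  return function visit(site) {
    const chainFound = site.sendsIntermediate || cache.has(site.intermediate);
    if (site.sendsIntermediate) cache.add(site.intermediate);
    return chainFound ? "ok" : "unknown issuer";
  };
}
```

Visiting the broken site first yields "unknown issuer"; visiting a correctly configured site that uses the same intermediate, then the broken site, yields "ok" for both. That order dependence is exactly the non-determinism (and the probe signal) discussed in this bug.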
(In reply to David Keeler [:keeler] (use needinfo?) from comment #11)
> The intermediate CA cache is primarily a band-aid for misconfigured servers
> that don't send the correct intermediates (where "correct" means if
> mozilla::pkix used only the certificates sent by the server, it would be
> able to find a trusted path to a built-in root certificate). If a user has
> visited a properly configured site and cached the intermediate(s) that a
> different, broken site relies on, rather than seeing the "unknown issuer"
> error page, the user will be able to visit the site.

Would it be possible not to use the cache for non-toplevel-document loads? That would limit the exploitability of this a lot, and it seems much less likely to affect real-world sites (it might still affect sites in iframes, which seems like an acceptable-ish trade-off, maybe?). While a malicious actor can of course open popups to external sites (if they find a way around popup blockers), they can't reliably detect that the popups load, and it would be much less feasible to bulk-examine sites or to do so without the user noticing.
(In reply to :Gijs from comment #12)
> Would it be possible not to use the cache for non-toplevel-document loads?

Or even better, not use it for non-toplevel loads _unless_ the toplevel load used the cache for the same cert.

Gerv
I agree that implementing AIA chasing is not the correct solution. The question is whether we can modify the behaviour of the cache to mitigate this issue, or whether we turn off the cache, or whether we decide to ignore the issue and leave it turned on.

Gerv
Flags: needinfo?(dkeeler)
(In reply to Gervase Markham [:gerv] from comment #13)
> (In reply to :Gijs from comment #12)
> > Would it be possible not to use the cache for non-toplevel-document loads?
>
> Or even better, not use it for non-toplevel loads _unless_ the toplevel load
> used the cache for the same cert.

Potentially, although it would be difficult to map toplevel vs. non-toplevel loads to specific certificate verification requests (particularly with HTTP/2 connection pooling).

(In reply to Gervase Markham [:gerv] from comment #14)
> I agree that implementing AIA chasing is not the correct solution. The
> question is whether we can modify the behaviour of the cache to mitigate
> this issue, or whether we turn off the cache, or whether we decide to ignore
> the issue and leave it turned on.

I think it would be hard to modify the behavior of the cache in a way that would both mitigate the issue and still be useful to users. Keep in mind it's already confusing when one profile connects just fine but another doesn't. If we modify the cache not to be used for non-toplevel loads, suddenly JS from a subdomain/CDN will silently break. I also don't think we can turn it off without first measuring how often it's useful to users (that is, we need a measure of how frequently users hit misconfigured servers where the cache allows them to connect successfully).

For context, this is about as powerful as the HSTS super-cookie attack. While it would be good to address both, I don't think either is so dire that we need to drop everything until we fix them.
Flags: needinfo?(dkeeler)
What are we doing about HSTS super cookies, if anything? Bug #?

Can we put in some telemetry to get some metrics on how much this cache is used?

Gerv
Group: firefox-core-security
Component: Untriaged → Security: PSM
Keywords: privacy
Product: Firefox → Core
Whiteboard: [fingerprinting]
(In reply to Gervase Markham [:gerv] from comment #16)
> What are we doing about HSTS super cookies, if anything? Bug #?

It's noted in the spec: https://tools.ietf.org/html/rfc6797#section-14.9
I don't think we have a specific plan or bug yet.

> Can we put in some telemetry to get some metrics on how much this cache is
> used?

I imagine that would be helpful. I filed bug 1336226.
Priority: -- → P3
Whiteboard: [fingerprinting] → [fingerprinting][psm-backlog]
(In reply to Gervase Markham [:gerv] from comment #16)
> What are we doing about HSTS super cookies, if anything? Bug #?

First-party isolation of the HSTS and HPKP caches is underway in bug 1323644. Tor Browser's current defense against cached-certificate-based tracking is to set "security.nocertdb" to true (a pref implemented in bug 629558). A better solution might be to apply first-party isolation to the certificate cache instead, similar to the HSTS/HPKP effort.
Whiteboard: [fingerprinting][psm-backlog] → [fingerprinting][psm-backlog][tor]
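The first-party-isolation idea from the comment above could be sketched as follows. This is hypothetical: bug 1323644 applies isolation to HSTS/HPKP, not to the certificate cache, and the names here are invented for illustration.

```javascript
// Sketch of first-party isolation applied to the intermediate cache:
// entries are keyed by (first party, intermediate), so a cache entry
// earned while visiting one top-level site is invisible to every other
// top-level site, defeating cross-site probing of the cache.
function makeIsolatedCache() {
  const entries = new Set();
  const key = (firstParty, intermediate) => `${firstParty}|${intermediate}`;
  return {
    add: (firstParty, intermediate) => entries.add(key(firstParty, intermediate)),
    has: (firstParty, intermediate) => entries.has(key(firstParty, intermediate)),
  };
}
```

A tracker's probe page would then only ever see intermediates cached under its own first party, which carry no cross-site information.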
As far as I know, the full certificate chain is required by the TLS protocol handshake. So why not make this a requirement for successful connections? Mozilla has taken many big steps to improve TLS recently, forcing webserver administrators to improve their setups. Of course this cannot be done without preparation, because it would break a lot of webservers, but it should be the long-term goal.

PS: Sending the full chain is probably not desired for performance reasons, but the habit of every major website of loading content from hundreds of domains is a problem of its own for security and privacy anyway. It would not hurt the user in the long term if every additional content source were a little more expensive for the website operator. It would force web developers to limit the number of external sources to the necessary level. (It would not limit the number of external files at all, only the number of domains.)
Hello Alexander! This might be completely unrelated, but commenting anyhow just in case.

I tried https://fiprinca.0x90.eu/poc_PRIVATE/ on Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0 ID:20170224030232 CSet: be661bae6cb9a53935c5b87744bf68879d9ebcc5

The result was: "Testing 326 different intermediate CAs (326 images created). 0 results still pending. 46 cached intermediate CAs identified."

Shortly after, Nightly locked up completely and I had to crash it with https://ci.appveyor.com/api/buildjobs/oe0b55ardbgfkcuc/artifacts/x64/Release/crashfirefox.exe ; the resulting crash report was https://crash-stats.mozilla.com/report/index/bp-211d8099-c73d-49c1-a020-158b92170224

Have you seen your PoC consistently locking up or crashing Nightly?

Thanks!
Alex
Flags: needinfo?(a)
If we understand this correctly, this is not a fingerprinting issue but a first-party isolation and super-cookie problem.
Whiteboard: [fingerprinting][psm-backlog][tor] → [psm-backlog][tor]
Summary: Fingerprinting using intermediate CA caching → Tracking using intermediate CA caching

Dana, is this fixed with intermediate preloading?

Flags: needinfo?(dkeeler)

Should be, yes.

Flags: needinfo?(dkeeler)
Status: UNCONFIRMED → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED