Open Bug 660749 (CVE-2011-0082) Opened 13 years ago Updated 1 year ago

Firefox doesn't (re)validate certificates when loading a HTTPS page from the cache

Categories: Core :: Networking: Cache, defect, P5
Tracking Status: firefox-esr10 ---
People: (Reporter: huzaifas, Unassigned)
References: Depends on 1 open bug, Blocks 1 open bug
Keywords: perf, sec-moderate
Whiteboard: [ETA:2012-03-28][psm-cert-errors][workaround comment 6][STR comment 19][necko-would-take]
Attachments: (1 file)
User-Agent:       Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.16) Gecko/20110322 Fedora/3.6.16-1.fc14 Firefox/3.6.16
Build Identifier: Mozilla/5.0 (X11; Linux i686; rv:2.0.1) Gecko/20100101 Firefox/4.0.1

A Debian bug report indicated that Firefox 4.0.x handled the
validation/revalidation of SSL certificates improperly.  If a user were to
visit a site with an untrusted certificate, Firefox would correctly display the
warning about the untrusted connection.  If a user were to confirm the security
exception for a single session (not check off the "permanently store this
exception"), then restart the browser and re-load the page, the contents of the
page would be displayed from the Firefox cache.  Upon reloading the page, the
security warning would appear, but it would incorrectly indicate that the site
provides a valid, verified certificate, and there would be no way to confirm
the exception.

This is not the case in Firefox 3.6.17: when the browser is restarted and the
page is revisited, the untrusted connection warning comes up immediately,
without showing the contents of the page, and you are allowed to confirm the
exception.


Reproducible: Always

Steps to Reproduce:
1) Visit a site with a self-signed certificate (such as https://kitenet.net/)
and click "I Understand The Risks", click "Add Exception", uncheck "Permanently
store this exception", click "Confirm Security Exception".  The site's contents
will be displayed.

2) Exit the browser.

3) Start Firefox again and visit the page you visited in step 1.  The browser
will show the contents of the page, even though its certificate should no
longer be considered valid.

4) Refresh the page.  The untrusted connection warning will display again. 
Click "I Understand The Risks", click "Add Exception".  Firefox will indicate
that "This site provides valid, verified identification" and does not allow you
to confirm the security exception.





References:
[1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=627552
[2] https://bugzilla.redhat.com/show_bug.cgi?id=709165

Note: I am able to successfully reproduce this with Firefox 4.0.1 on a fresh Fedora 15 install
Version: unspecified → 4.0 Branch
My guess is that this is a consequence of bug 531801. If you set browser.cache.disk_cache_ssl to false, are you still able to reproduce with Firefox 4?
Haven't confirmed this, but it almost certainly lives in PSM
Component: Security → Security: PSM
Product: Firefox → Core
QA Contact: firefox → psm
Version: 4.0 Branch → Trunk
Whiteboard: [sg:high?]
Alias: CVE-2011-0082
Status: UNCONFIRMED → NEW
Ever confirmed: true
Group: core-security
I'm not too concerned we're reading the content from the cache even though we should no longer be trusting the certificate -- the certificate was trusted at the time we saw that content. I could go either way on that, but if we're not hitting the network then it's analogous to the case when a certificate was revoked after the content was cached.

The state where we can't re-add the exception because it's trusted and not at the same time is a bug that needs to be fixed. If just that one dialog is confused it may not be that bad, but if we're confused at a deeper level it could result in loading untrusted content in some contexts.
Blocks: 531801
Keywords: regression
Whiteboard: [sg:high?] → [sg:low?]
(In reply to comment #0)
> 3) Start Firefox again and visit the page you visited in step 1.  The browser
> will show the contents of the page, even though its certificate should no
> longer be considered valid.

I think the people who implemented caching in Firefox deliberately made it work like this. I think it's acceptable, if all we do is display an exact copy of what was displayed earlier.


> 4) Refresh the page.  The untrusted connection warning will display again. 
> Click "I Understand The Risks", click "Add Exception".  Firefox will indicate
> that "This site provides valid, verified identification" and does not allow
> you
> to confirm the security exception.


I confirm the bug.
Workaround to add the exception: Clear the cache.

Go to: 
  Tools
  Clear recent history
  select "everything"
  keep "Cache" checked, uncheck everything else
  clear now

After that, you should be able to add the exception.
Can you confirm the workaround helps you?
(In reply to Huzaifa Sidhpurwala from comment #0)
> 4) Refresh the page.  The untrusted connection warning will display again. 
> Click "I Understand The Risks", click "Add Exception".  Firefox will indicate
> that "This site provides valid, verified identification" and does not allow
> you
> to confirm the security exception.

My company makes products that use web-interfaces to configure embedded computers, and we frequently encounter this bug.  I would love to see it get addressed.

Using FF 6.0.2 I have confirmed that setting browser.cache.disk_cache_ssl to false will prevent this from happening.  If that is set to true, then clearing the cache will also work around this bug.

(I posted a writeup of how to reproduce the problem in bug 659736, which covers the same issue.)

As a test I added a flag LOAD_BYPASS_CACHE to ignore the cache in mozilla-release/security/manager/pki/resources/content/exceptionDialog.js.  This worked, too.  Any chance of getting this, or some other workaround, added into the code base?

  ...
  var req = new XMLHttpRequest();
  try {
    if (uri) {
      req.open('GET', uri.prePath, false);
      req.channel.notificationCallbacks = new badCertListener();
      // Added: bypass the cache so the server cert is actually re-fetched.
      req.channel.loadFlags |= Components.interfaces.nsIRequest.LOAD_BYPASS_CACHE;
      req.send(null);
    }
  ...
I know the cause and I think I have an idea for a fix. I *think* we can fix this issue within PSM without making major changes to the cache or to the way Necko loads pages out of the cache. That solution involves changing the implementation of nsNSSSocketInfo::Read() to check that a verification has been done, and to fail if it hasn't. However, that solution is pretty wasteful; we have to do all the work of locating the entry in the cache and reading its metadata out of the cache, but then we never use that response.

IMO, what we *really* should do is this:

(1) Change the cache entry writing logic so that it stores a SHA-384 hash of the certificate, instead of the serialized nsNSSSocketInfo and nsSSLStatus instances. Also, do not write the cache entry if there were any SSL errors (including any cert errors or cert overrides).

(2) Change the cache entry reading logic so that it dooms any cache entry for an HTTPS page that lacks this SHA-384 hash, and so that it verifies that a cert with that hash has already been verified in the current session.

(3) Remove the nsISerializable implementations for nsNSSSocketInfo and nsSSLStatus.

(4) Change Necko so that it starts the HTTPS connection without checking the cache first, if the resource is HTTPS and we haven't validated a certificate for the host in the current session.

(All of this is approximately what IE does, BTW.)

This latter implementation would be better for security *AND* use less disk space. Right now, every cache entry writes a serialized nsNSSSocketInfo, which includes a serialized copy of the cert and a serialized nsSSLStatus, which also includes a serialized copy of the cert. (That is, the same cert is written to disk twice, AFAICT.) This latter implementation would reduce the SSL-specific entry overhead from ~3,000-10,000 bytes per entry to <100 bytes per entry.

I will meet with Bjarne and Michal (and Patrick) to coordinate this work.
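As an illustration of steps (1) and (2) above, here is a minimal Python sketch; `CacheEntry`, `session_verified`, and the other names are hypothetical stand-ins, not real Necko/PSM code.

```python
import hashlib

# Hypothetical sketch of the proposed scheme: step (1) stores only a
# SHA-384 hash of the server certificate, and writes no cert hash for
# loads that had SSL/cert errors (including cert overrides); step (2)
# dooms, on read, any entry lacking the hash or whose cert has not
# been verified in the current session.

def cert_fingerprint(cert_der):
    """SHA-384 hash of the DER-encoded certificate (hex digest)."""
    return hashlib.sha384(cert_der).hexdigest()

class CacheEntry:
    def __init__(self, body, cert_der, had_ssl_error):
        self.body = body
        self.cert_hash = None
        # Step 1: an erroneous load never records a cert hash, so the
        # entry can never pass the read-side check below.
        if not had_ssl_error and cert_der is not None:
            self.cert_hash = cert_fingerprint(cert_der)

# Hashes of certs verified during the current browser session.
session_verified = set()

def read_entry(entry):
    # Step 2: returning None means "doom the entry, reload from network".
    if entry.cert_hash is None or entry.cert_hash not in session_verified:
        return None
    return entry.body
```

A SHA-384 hex digest is under 100 bytes, consistent with the per-entry overhead estimate above.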
Component: Security: PSM → Networking: HTTP
OS: Linux → All
QA Contact: psm → networking.http
Summary: Firefox doesn't (re)validate certificates when loading HTTPS page → Firefox doesn't (re)validate certificates when loading a HTTPS page from the cache
Whiteboard: [sg:low?] → [sg:moderate?]
Brian, if you are going to work on this bug, then please take it (change Assigned To), I originally wanted to work on this, but it seems you are a step forward.
Brian, let me know if there's anything I can do to help.  I'm happy to run some tests after this gets reworked.

Matt
Whiteboard: [sg:moderate?] → [sg:moderate?][secr:imelven]
bsterne and I both looked at the proposal in comment 8 and it sounds good to us. i've asked other secteam members to take a look and comment in the bug if they have further questions/concerns.
The proposal in comment 8 doesn't work for WYCIWYG entries. For WYCIWYG, we need to serialize the nsIAssociatedContentSecurity information, at least, in addition to the entire server certificate chain. Also, we cannot refuse to load a WYCIWYG resource out of the cache just because we haven't previously validated the cert yet. That means that we will not be able to avoid validating the cert during cache loads.

So, here is an amendment to the proposal in comment 8 for WYCIWYG entries only (comment 8 would still be applicable for non-WYCIWYG entries):

----

(1) Store the nsIAssociatedContentSecurity information *and* the full cert chain in the cache entry.

(2) Validate the certificate chain in the cache entry before returning it from the cache to Necko. This must happen off the socket transport thread. Ideally it would be asynchronous with the cache thread too, so that the cache thread doesn't get blocked.

----

Additionally, there are other uses of the nsISerializable implementation of nsNSSSocketInfo and nsSSLStatus--they are serialized for e10s. See bug 568502 and bug 568504. In order to avoid unnecessary re-validations of certs for the e10s case, the value of isExtendedValidation() must be serialized/deserialized in nsSSLStatus.

The straightforward way to fix this would break compatibility with previously-cached entries (both WYCIWYG and non-WYCIWYG), but I think we can do so without changing the cache version (i.e. we don't have to blow away the entire cache for this transition). The consequence of this is that session restore for HTTPS pages wouldn't work during the upgrade restart.

If we aren't happy with that, then we can make the fix more complicated, and separate the serialization/deserialization logic used for the cache from the serialization/deserialization logic used for e10s. And/or, ideally, we would remove the need to serialize/deserialize the security info for e10s at all--that is, keep it all in the chrome process and never copy it to the child process.
The above analysis has assumed incorrectly that revalidation should only happen when loading from the disk cache. However, we should also revalidate when loading from the memory cache if the cert validation might be stale. For example, if the browser has been running for three days, a memory cache entry might have a cert that was valid three days ago but which expired yesterday. However, we don't want to do re-validation excessively (for every cache load), because performance would be horrible. Instead, we will have to implement a cache of SSL cert validation results that allows us to avoid revalidation for a period of time that we think is reasonable w.r.t. staleness. Wan-Teh said Chrome also does this, and they invalidate SSL cert validation cache entries after they are 30 minutes old. I think a policy like that is reasonable.

This means the cert validation logic should happen in nsCacheService::SearchCacheDevices or higher in the call stack.
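A minimal sketch of such a TTL-based validation cache (all names hypothetical; real code would live near the cert verifier, not in Python). The 30-minute TTL matches the Chrome policy cited above.

```python
import time

TTL_SECONDS = 30 * 60  # 30 minutes, per the policy described above

# cert hash -> timestamp of last successful validation
_validation_cache = {}

def record_validation(cert_hash, now=None):
    """Remember that this cert validated successfully at time `now`."""
    _validation_cache[cert_hash] = time.time() if now is None else now

def is_validation_fresh(cert_hash, now=None):
    """A cache load may skip revalidation only while the result is fresh."""
    now = time.time() if now is None else now
    validated_at = _validation_cache.get(cert_hash)
    return validated_at is not None and (now - validated_at) < TTL_SECONDS
```

A memory-cache load of a long-running session would then hit `is_validation_fresh` first, and fall back to full revalidation once the entry ages past the TTL.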
Assignee: nobody → bsmith
What should happen when the server returns a 304 Not Modified response and the cached document has different security properties (e.g. different cert) from the connection? 

Let's assume that the client previously cached a document from a malicious server, e.g. by adding a cert override. Then, we don't really want to use the cached document. Similarly, if the connection returning the 304 is from a malicious server, but the cached document was from a good server, then we would not want to use the cached document either, because it might contain sensitive information that could be retrieved by the attacker that sent the 304 response.

I propose that, when we look up the validators for a cached entry, we compare the cert to the cert that we will use for the connection; if they match, carry on; if they are different certs, then don't provide those validators in our (conditional) request.

However, a malicious server could return a 304 response even to an unconditional request. That means we would also have to ensure that we do, or have already done, the "same cert" check for whatever resource we look up from the cache in response to the 304.
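The validator-comparison rule proposed above can be sketched like this (hypothetical names; real Necko tracks validators on the channel, not in a dict):

```python
# Hypothetical sketch of the "same cert" rule for conditional requests:
# only attach If-None-Match / If-Modified-Since validators when the cached
# entry's certificate matches the certificate on the current connection,
# so a 304 from a different server cannot revive a mismatched cached body.

def build_request_headers(cached_cert_hash, conn_cert_hash, etag, last_modified):
    headers = {}
    if cached_cert_hash is not None and cached_cert_hash == conn_cert_hash:
        if etag:
            headers["If-None-Match"] = etag
        if last_modified:
            headers["If-Modified-Since"] = last_modified
    # With no validators the request is unconditional, so a legitimate
    # server replies 200 with a fresh body instead of 304.
    return headers
```

As the comment notes, this alone is insufficient: a malicious server can answer 304 even to an unconditional request, so the same-cert check would also have to run on whatever entry is served in response to the 304.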
Re: comment 16. We may be able to do the above checks for a 304 somewhat sensibly, but then a 304 would work differently from a 301/302/303 redirect to the same server, which seems wrong. It seems like, if we want to apply "same cert" logic, we must do so consistently in many places, including just regular document navigation or document/subdocument structures, or not at all. And, the Citibank problem makes me think "not at all" is probably the right answer.
Whiteboard: [sg:moderate?][secr:imelven] → [sg:moderate][secr:imelven]
Summary: Firefox doesn't (re)validate certificates when loading a HTTPS page from the cache → Firefox doesn't (re)validate certificates when loading a HTTPS page from the cache, unable to add exception
Whiteboard: [sg:moderate][secr:imelven] → [sg:moderate][secr:imelven][psm-cert-errors]
bug 524500 comment 16 has good steps to reproduce.

This is a major usability issue.
Whiteboard: [sg:moderate][secr:imelven][psm-cert-errors] → [sg:moderate][secr:imelven][psm-cert-errors][workaround comment 6][STR comment 19]
Blocks: 688822
Priority: -- → P1
More duplicates:
bug 457573
bug 659736 (and its duplicate bug 654846)
I would really like to ask the people who introduced this regression to come and help with this bug.
Kai, why do you think this is a regression? When did we ever re-validate SSL certificates from cached documents (in the memory cache or from disk cache)? I tried to search back in hg history to see if some such logic got removed, but I couldn't find any such removed logic.
The regression was caused by enabling caching for https pages.
Searching bugzilla further reveals even more duplicates:
Bug 712280
Bug 697972
Bug 683454
Bug 682263
Bug 512343

Those have been filed for branches 3.5, 6, 7 and 10, so this bug has been separately filed for almost every version :-(

This bug is really painful when doing work with embedded devices, where self-signed certificates get automatically regenerated after firmware updates.
I nominate to land any fix for this bug on the ESR branch. Adding tracking keyword.
Hannu, thanks for reporting the duplicates.
(In reply to Kai Engert (:kaie) from comment #26)
> The regression was caused by enabling caching for https pages.

I think that this may have been made *worse* by enabling HTTPS pages to be cached on disk. However, I think that it probably existed previously in a milder form, when we allowed HTTPS pages to be cached in memory.

(In reply to Kai Engert (:kaie) from comment #28)
> I nominate to land any fix for this bug on the ESR branch. Adding tracking
> keyword.

The proper fix for this will be too much for ESR. It will require the SSL thread removal that is in the release after ESR, multiple changes to the HTTP cache, as-yet-unfinished in-memory certificate validation result caching, etc.

I am not sure that this is an issue that qualifies for ESR at all. If we need a solution for ESR, I suggest that we make that solution be "disable all caching of HTTPS pages," for ESR only. And/or, document the workaround of clearing the caches.
(In reply to Brian Smith (:bsmith) from comment #35)
> (In reply to Kai Engert (:kaie) from comment #28)
> > I nominate to land any fix for this bug on the ESR branch. Adding tracking
> > keyword.
> 
> The proper fix for this will be too much for ESR. It will require the SSL
> thread removal that is in the release after ESR, multiple changes to the
> HTTP cache, as-yet-unfinished in-memory certificate validation result
> caching, etc.
From ESR corporate usability perspective, the worst part of this bug is the inability to add a new exception (as the "add exception" dialog thinks that there already exists a valid exception). Firefox should enable the user to replace the old exception with a new one, if he so wishes. Hopefully at least the dialog might be fixed rather soon, also for ESR.

One more duplicate: bug 637944
(In reply to Brian Smith (:bsmith) from comment #35)
> I am not sure that this is an issue that qualifies for ESR at all. If we
> need a solution for ESR, I suggest that we make that solution be "disable
> all caching of HTTPS pages," for ESR only. And/or, document the workaround
> of clearing the caches.

https://wiki.mozilla.org/Release_Management/ESR_Landing_Process

After reading the above, this does not meet the requirement for ESR. I will let release-drivers have the final say though.
I've only nominated for ESR.
I propose that we wait until we actually have a fix before we make a decision whether such a fix is appropriate or inappropriate for ESR.
This does not meet the criteria for ESR as Brian notes in Comment#37. We are only considering fixes that are regressions in FF10, major stability issues, or security issues.
In bug 406187 comment 0 Christian wrote:

> In particular, the case of a certificate that expired between
> caching and reusing the cache should be handled properly
> (i.e. not show a warning).

I think this is already what happens, and is partially the opposite behavior of what is being requested in this bug.

I think that if there is any cert error, then the cache entry should be doomed and the resource reloaded from the network.
The difficulties in adding an exception are now tracked in bug 659736. It seems very likely that fixing this bug will fully fix bug 659736, but I am not sure yet.
Summary: Firefox doesn't (re)validate certificates when loading a HTTPS page from the cache, unable to add exception → Firefox doesn't (re)validate certificates when loading a HTTPS page from the cache
Keywords: regression
Whiteboard: [sg:moderate][secr:imelven][psm-cert-errors][workaround comment 6][STR comment 19] → [ETA:2012-03-28][sg:moderate][secr:imelven][psm-cert-errors][workaround comment 6][STR comment 19]
Removing myself as secr. Curtis, please re-assign to a security assurance member (if needed, probably discuss with bsmith first)
Whiteboard: [ETA:2012-03-28][sg:moderate][secr:imelven][psm-cert-errors][workaround comment 6][STR comment 19] → [ETA:2012-03-28][sg:moderate][psm-cert-errors][workaround comment 6][STR comment 19]
Not seeing a need to sec review this, if you want to initiate a review let me know.
No longer blocks: 659736
Blocks: 659736
Kaie, I've been bitten by what seems to be this bug (see duplicate). The symptoms include an error page that says that the website provides no identity information (using another browser shows that the website *does* have a proper certificate) and no way at all to resolve it. I can't even add an exception to say "please let me see the website".

The other bug report (from me) mentions that this started happening in Firefox 16, so there's potential for a regression here.
Brian - can you provide a summary of what is left to do here?
Whiteboard: [ETA:2012-03-28][sg:moderate][psm-cert-errors][workaround comment 6][STR comment 19] → [ETA:2012-03-28][psm-cert-errors][workaround comment 6][STR comment 19]
Assignee: brian → nobody
This is a bug in the HTTP cache. The HTTP cache needs to validate the certificate for the cached entry. PSM provides an API for doing this: SSLServerCertVerificationJob::Dispatch. You can see how this is used in the function AuthCertificateHook in security/manager/ssl/src/SSLServerCertVerification.cpp:

    socketInfo->SetCertVerificationWaiting();
    SECStatus rv = SSLServerCertVerificationJob::Dispatch(
                     certVerifier, static_cast<const void*>(fd), socketInfo,
                     serverCert, stapledOCSPResponse, providerFlags, now);

The HTTP cache needs to do something similar. Note that currently the HTTP cache doesn't store all the information that is needed to re-validate the cached entry.
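A rough simulation of that flow, with stand-ins for the real verification job (the actual SSLServerCertVerificationJob::Dispatch runs asynchronously on a cert-verification thread pool; every name below is illustrative):

```python
# Hypothetical sketch: before handing a cached HTTPS entry to the
# consumer, dispatch a verification of the stored certificate and only
# deliver the entry once verification succeeds; otherwise doom it.

def verify_cert(cert):
    # Placeholder policy standing in for real chain building/OCSP checks.
    return cert.get("valid", False)

def dispatch_verification(cert, on_done):
    # Stand-in for the asynchronous verification job; the real one runs
    # off the socket transport thread and calls back with the result.
    on_done(verify_cert(cert))

def serve_cached_entry(entry, deliver, doom):
    cert = entry.get("cert")
    if cert is None:
        # Entry lacks the information needed to re-validate (the comment
        # above notes the cache doesn't currently store enough).
        doom(entry)
        return
    def on_done(ok):
        deliver(entry) if ok else doom(entry)
    dispatch_verification(cert, on_done)
```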
Component: Networking: HTTP → Networking: Cache
Assignee: nobody → valentin.gosu
Status: NEW → ASSIGNED
Judging by the attached patch, and the behaviour I've noticed, the problem is in nsNSSCertificate.cpp
I have changed the Write method so that it always saves ev_status_unknown as the cached status.
It turns out that after the certificate and status are read from cache, rechecking the status immediately will cause mCachedEVStatus to be set to ev_status_invalid. Later calls would return a valid status, but because mCachedEVStatus is set to a definite state, later checks aren't even performed.
This causes certificates loaded from cache to be missing the green EV mark, and even a hard refresh doesn't restore it (browser restart sometimes works).
In the patch I added a mCachedEVStatusIsStale attribute, which forces revalidation for certificates loaded from cache. The behaviour I'm observing is that the first load doesn't have the secure EV mark, but a refresh will cause a revalidation, and the tag appears.
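The stale-flag idea can be modeled with a small sketch (the class and logic here are purely illustrative; only the attribute names echo the patch description):

```python
from enum import Enum

class EVStatus(Enum):
    UNKNOWN = 0
    INVALID = 1
    VALID = 2

class CertSketch:
    """Illustrative model of the cached-EV-status problem: once the
    status is cached as a definite value, later checks are skipped;
    a stale flag forces one revalidation for certs read from cache."""

    def __init__(self, recheck):
        self._recheck = recheck          # callable returning an EVStatus
        self._cached = EVStatus.UNKNOWN
        self._stale = False

    def read_from_cache(self):
        # As in the patch: deserialization marks the status stale.
        self._stale = True

    def get_ev_status(self):
        if self._cached is EVStatus.UNKNOWN or self._stale:
            self._cached = self._recheck()
            self._stale = False
        return self._cached
```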

Also, I was unable to use SSLServerCertVerificationJob::Dispatch, even in nsNSSCertificate.cpp (it doesn't seem to be a public API).

I don't think my fix is optimal, or even safe for that matter, but it does illustrate that we can/should fix the issue in the PSM code, rather than in the cache code.

Brian, do you think I'm interpreting the behaviour correctly, or is there something I missed?
Assignee: valentin.gosu → nobody
Blocks: 1040086
Flags: needinfo?(brian)
Dana, could you also weigh in on comment 52? Bug 1040086 seems pretty urgent, and maybe we can fix this before it moves from beta.
Flags: needinfo?(dkeeler)
So, when for some reason (still to be determined!) we store the cached EV state as "unknown" in the cache entry, the cached certificate fails the OCSP check, since CertVerifier::FLAG_LOCAL_ONLY is set when nsNSSCertificate::GetIsExtendedValidation gets called and we don't yet have any OCSP response in the local cache:


 	xul.dll!mozilla::psm::NSSCertDBTrustDomain::CheckRevocation(MustBeCA, {...}, {...}, 0x00000000, 0x003fe684) Line 451	C++

  if (mOCSPFetching == LocalOnlyOCSPForEV) {
    if (cachedResponseResult != Success) {
      return cachedResponseResult;
    }
>   return Result::ERROR_OCSP_UNKNOWN_CERT;
  }

 	xul.dll!mozilla::pkix::PathBuildingStep::Check({...}, 0x00000000, false) Line 192	C++
 	xul.dll!mozilla::psm::NSSCertDBTrustDomain::FindIssuer({...}, {...}, {...}) Line 138	C++
 	xul.dll!mozilla::pkix::BuildForward({...}, {...}, {...}, keyCertSign, id_kp_serverAuth, {...}, 0x00000000, 0x00000001) Line 274	C++
 	xul.dll!mozilla::pkix::PathBuildingStep::Check({...}, 0x00000000, false) Line 177	C++
 	xul.dll!mozilla::psm::NSSCertDBTrustDomain::FindIssuer({...}, {...}, {...}) Line 138	C++
 	xul.dll!mozilla::pkix::BuildForward({...}, {...}, {...}, digitalSignature, id_kp_serverAuth, {...}, 0x00000000, 0x00000000) Line 274	C++
 	xul.dll!mozilla::pkix::BuildCertChain({...}, {...}, {...}, MustBeEndEntity, digitalSignature, id_kp_serverAuth, {...}, 0x00000000) Line 320	C++
 	xul.dll!mozilla::psm::BuildCertChainForOneKeyUsage({...}, {...}, {...}, digitalSignature, keyEncipherment, keyAgreement, id_kp_serverAuth, {...}, 0x00000000) Line 167	C++
 	xul.dll!mozilla::psm::CertVerifier::VerifyCert(0x18903810, 0x0000000000000002, {...}, 0x00000000, 0x00000000, 0x00000003, 0x00000000, 0x00000000, 0x003fec18) Line 286	C++
 	xul.dll!nsNSSCertificate::hasValidEVOidTag(SEC_OID_UNKNOWN, false) Line 1415	C++
 	xul.dll!nsNSSCertificate::getValidEVOidTag(SEC_OID_UNKNOWN, false) Line 1437	C++
 	xul.dll!nsNSSCertificate::GetIsExtendedValidation(0x003fecab) Line 1470	C++
 	xul.dll!nsSSLStatus::GetIsExtendedValidation(0x003fecab) Line 121	C++
 	xul.dll!nsSecureBrowserUIImpl::EvaluateAndUpdateSecurityState(0x15cc9434, 0x18763f80, true, false) Line 517	C++
 	xul.dll!nsSecureBrowserUIImpl::OnLocationChange(0x0da81c14, 0x15cc9434, 0x167d6cc0, 0x00000000) Line 1486	C++
 	xul.dll!nsDocLoader::FireOnLocationChange(0x0da81c14, 0x15cc9434, 0x167d6cc0, 0x00000000) Line 1285	C++


Using bug 1040086#c10, cannot reproduce.

The thing is that the "security-info" meta is not set on the cache entry until after OnStartRequest, which doesn't happen until after the cert is verified (we block on OCSP).  I'm no expert in the cert verification code either; brian (re)wrote it.  Since I cannot reproduce, this is hard for me to figure out easily.

Note: during non-cached load the certificate is set on the ssl info object at (SSL Cert #1 thread):

	xul.dll!mozilla::psm::`anonymous namespace'::AuthCertificate({...}, 0x14555c50, 0x1780c010, {...}, 0x18bfd0d0, 0x00000000, {...}) Line 803	C++

    if (status && !status->mServerCert) {
>     status->mServerCert = nsc;
      PR_LOG(gPIPNSSLog, PR_LOG_DEBUG,
             ("AuthCertificate setting NEW cert %p\n", status->mServerCert.get()));
    }

 	xul.dll!mozilla::psm::`anonymous namespace'::SSLServerCertVerificationJob::Run() Line 900	C++
 	xul.dll!nsThreadPool::Run() Line 222	C++

The cert (nsc) is created with the nsNSSCertificate::Create(cert, &evOidPolicy); call a few lines above, so its cached EV status is set from the very start of its life.

Then we restart the handshake (suspended) on the sts thread.
I don't actually think fixing this bug will improve the situation for bug 1040086. Keep in mind that a certificate will only verify as EV if we either have cached OCSP responses or can access the network. Since we don't save OCSP responses persistently, on startup we basically have no OCSP cache (our work on OCSP GET will improve this, but not all responders will support GET, so this will always apply). Thus, when loading a site that was EV from the cache, if we re-validate the certificate, we will either take the performance hit on fetching an OCSP response or we won't show the EV indicator.

To fix bug 1040086 without sacrificing performance, we should figure out why mCachedEVStatus is getting saved in the cache with an incorrect value.
Flags: needinfo?(dkeeler)
As far as I could tell, there is no way, from the cache code, to determine whether mCachedEVStatus has been computed.
Also, I feel it is a problem that if mCachedEVStatus is set to invalid due to a missing OCSP response, there is no way to revalidate it until a restart is performed. I think we should make this possible, at least when doing a forced refresh.
I'm dropping tracking on this bug now that bug 1040086 has been reopened.
(In reply to Valentin Gosu [:valentin] from comment #52)
> Brian, do you think I'm interpreting the behaviour correctly, or is there
> something I missed?

Discussed over email.
Flags: needinfo?(brian)
Blocks: 1092369
No longer blocks: 1092369
anything new here? Seems nearly unbelievable that such an issue is open for several years... o.O

This may also be a cause for bug #1076440.
Whiteboard: [ETA:2012-03-28][psm-cert-errors][workaround comment 6][STR comment 19] → [ETA:2012-03-28][psm-cert-errors][workaround comment 6][STR comment 19][necko-would-take]
Bulk change to priority: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: P1 → P5
Flags: needinfo?(youthcornerr)
This is a general TLS bug and doesn’t block the implementation of FTPS.
Blocks: https-everything
No longer blocks: ftps
Status: ASSIGNED → NEW

In the process of migrating remaining bugs to the new severity system, the severity for this bug cannot be automatically determined. Please retriage this bug using the new severity system.

Severity: major → --
Severity: -- → S3
Flags: needinfo?(huzaifas)