Closed Bug 554121 Opened 14 years ago Closed 12 years ago

[SECURITY] HTML attachments can contain phishing or malicious redirects

Categories

(bugzilla.mozilla.org :: General, defect)

Version: Production
Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED WONTFIX

People

(Reporter: mcoates, Unassigned)

References

Details

(Whiteboard: WONTFIX [infrasec:input] [extension])

Attachments

(1 file)

Attached file (deleted) —
Attachments of type text/html can be added to any bug. The page is then rendered at a separate subdomain specific to the bug ID (e.g. https://bug552550.bugzilla.mozilla.org/attachment.cgi?id=<attachment ID>).

This setup was recently enhanced to serve attachments from a per-bug subdomain to help protect the user. Cookies are also marked HttpOnly to mitigate XSS risks.

However, this setup is still vulnerable to credible phishing attacks hosted at a mozilla.org URL, as well as to redirects to malicious websites hosting CSRF attacks or browser exploits. Phishing attacks using this scenario would be difficult to detect.

View the attachment for a proof of concept.

Remediation:
It is recommended not to support the response type text/html; instead, files should be returned as plain text. If it is necessary to view the rendered HTML, the Bugzilla user can save the file locally as an HTML file and open it in a local browser.
Additional consideration: if this bug were not marked "security sensitive", the attachment could be viewed without logging into Bugzilla. This means the HTML attachment could contain inappropriate content hosted at mozilla.org, and the direct URL could then be posted or mailed to the public.
There's already an option, 'allow_attachment_display', that basically does what you want, but we feel the cost is too great: it would directly impact usability for developers who rely on the current behavior for testcases and other things.

This should either be duped somewhere or marked WONTFIX, imho.
Group: bugzilla-security
OS: Mac OS X → All
Hardware: x86 → All
Summary: [Security] HTML Attachments Can Contain Phishing or Malicious Redirects → [SECURITY] HTML attachments can contain phishing or malicious redirects
Whiteboard: DUPEME WONTFIX?
Note: The proof of concept is an HTML page that I created. If you click "login" you'll see that the form actually executes JavaScript. An attacker could use this to steal the submitted credentials.
Can you explain a bit more? Is there a use case where developers want Bugzilla to accept valid HTML attachments that are rendered immediately? The recommended fix is to return the HTML as plain text; if desired, the developer can save it as a local HTML file.

The concern with the current approach is that any user can be subjected to a malicious HTML attachment. Also, attackers can host illicit content in attachments and link directly to that content.
I recommend reading the entirety of bug 38862 and bug 472206. They should answer your questions better than I can summarize here in a short period.
Attached file (deleted) —
Attachment #434004 - Attachment mime type: application/octet-stream → text/html
Attachment #434004 - Attachment mime type: text/html → text/plain
I read through those bugs. Here is a compromise that may work:

Primary risks we are attempting to mitigate:
1. Authenticated users subjected to malicious, arbitrary HTML via attachments.
2. A rogue user creates an account, uploads hundreds of HTML attachments, sets them to be "public", and essentially has his own malicious website hosted at bugxxx.bugzilla.mozilla.org

New Recommendation:
1. Create a per-user setting that allows the user to view attachments either as plain text or as rich content (i.e. the current behavior).
2. To prevent risks 1 and 2 above, default all users and anonymous viewers of public attachments to "plain text"
3. For the developers who rely on this functionality, allow them to change the setting in their profile. Those developers then know they could be subject to malicious attachments, but are accepting that risk because they need the functionality. In effect, take "allow_attachment_display", default it to false, and allow each user to change it to true if they desire.

This is a pretty big security concern and I think moving to a "default secure" approach would be very beneficial.
(In reply to comment #7)
> 2. To prevent risks 1 and 2 above, default all users and anonymous viewers of
> public attachments to "plain text"

  No way. This trades off way too much functionality to protect against a security situation that has never happened on any of the thousands of Bugzilla installations in the world.

  Also, you're just special-casing HTML attachments--we've had this discussion before, in bug 472206.

  If you want to protect against phishing Bugzilla logins, that's a whole other issue, and that's something we could do. There are lots of good, standard methods of phishing protection, like personal picture/phrase recognition (although I have no idea if somebody has a patent on that).
  Also, although the form on your PoC could certainly be submitted elsewhere, it couldn't steal *existing* authentication credentials with JS, since they are never sent to the attachment. (That's what bug 38862 was about.)

  Secondly, the danger is relatively mitigated by running under SSL (which we generally recommend all Bugzillas do), as the attacker then only has the choice of submitting the form to a non-SSL URL (in which case the browser will warn) or to a site with a different cert (probably self-signed, in which case the browser will warn again).
> Secondly, the danger is relatively mitigated by running under SSL

SSL doesn't help at all.  It's not hard to set up or compromise a random SSL server. Also, the evil attachment doesn't have to submit a form to exfiltrate a password.
Michael, I don't understand what kind of attack you're trying to protect against.  Is your concern that blackhats will use Bugzilla as web hosting to attack random internet users?  Or that hosting malicious content specifically on bug*.bugzilla.mozilla.org creates phishing risk?
Attached file (deleted) —
Added an SSL proof of concept to illustrate that SSL is not a mitigating control.
Regarding comment 11, 

There are 2 attacks that we are trying to prevent here.
1. An attacker wants to compromise a Bugzilla user (likely someone in the security group). They create either a phishing page (similar to my PoC) or a page which redirects to a site hosting browser exploits, and then add that as an attachment to a bug. They submit the bug to the security group and then wait for us to click and fall victim to the malicious attachment.

2. As you mentioned, blackhats or other random users host malicious content on bugzilla servers. This could be porn, warez or malicious files.

As Mark mentioned in comment 9, this situation does not present a risk to a currently logged in user. I agree there. The above scenarios are my concern.
Regarding comment 8: we can't discount the threat of an attack through this vector just because we have no knowledge of it currently being used. We want to be proactive and identify/mitigate high security risks before they become an issue. The likelihood of an attack through this vector is relatively high, since it does not take much knowledge to craft a good-looking phishing page, and any user can sign up and submit a bug and attachment.
(In reply to comment #7)
> 2. Rogue user creates an account and uploads 100s of attachments with html,
> sets them to be "public" and essentially has his own malicious website hosted
> at bugxxx.bugzilla.mozilla.org

Do note that something like that would be noticed within seconds (thanks to numerous bug-activity-to-IRC gateways we have), bugzilla admins in #bmo notified, and the account disabled with all offending attachments deleted. We're pretty efficient and quick when it comes to that type of stuff. ;)
Attachment #434041 - Attachment description: SSL PoC → SSL phishing PoC (don't be fooled!)
  To a large degree, this is re-hashing the discussion in bug 472206. Particularly on bmo, there's no way that people are going to accept having to manually switch something on for each attachment in order for it to display as text/html. If you have some other solution (short of parsing/filtering HTML, which we won't do) that doesn't degrade important functionality, I'd consider it.

  If you want phishing protection in Bugzilla, then we should add some sort of phishing protection, not try to imagine and protect against every possible vector somebody could phish Bugzilla from.

  Marking this as security since Michael's attachments are too easy to copy and use to start attacking Bugzilla in this way.
Group: bugzilla-security
(In reply to comment #17)
> Particularly on bmo, there's no way that people are going to accept having to
> manually switch something on for each attachment in order for it to display as
> text/html.

  Not to mention that IE6 (and I believe IE7?) will render the content as HTML anyway.
Attachment #433992 - Attachment is private: true
Attachment #434004 - Attachment is private: true
Attachment #434041 - Attachment is private: true
Attachments hidden. Unhiding this bug again.
Group: bugzilla-security
(In reply to comment #17)
>   To a large degree, this is re-hashing the discussion in bug 472206.
> Particularly on bmo, there's no way that people are going to accept having to
> manually switch something on for each attachment in order for it to display as
> text/html. If you have some other solution (short of parsing/filtering HTML,
> which we won't do) that doesn't degrade important functionality, I'd consider
> it.


I'm advocating a one-time switch for user accounts - view attachments safely as text, or view attachments as whatever dangerous document they may be. The default is text, but a user can change it in their profile once and go the other route. This provides a secure option and is secure by default. Users who want the current functionality can make that decision. Again, this option would be remembered for each user, so a user would only have to make the change once.

>   If you want phishing protection in Bugzilla, then we should add some sort of
> phishing protection, not try to imagine and protect every possible vector
> somebody could phish Bugzilla from.

Arbitrary HTML uploads provide an attacker with a variety of options. I'm looking for ways to mitigate this overall risk. Phishing is a possible attack, and so are instant redirects to malicious external sites.

>   Marking this as security since Michael's attachments are too easy to copy and
> use to start attacking Bugzilla in this way.

Thanks. It started out marked as a security issue. Not sure why it was unmarked at all.
This bug is WONTFIX to me. Having a user pref won't help: if you turn it on for one safe HTML page, you are then "vulnerable" to all subsequent HTML attachments, including malicious ones. As for rendering them as plain text, this doesn't work in Internet Explorer, which will sniff the content of the attachments and render them as HTML anyway. And there is IMO no good way to parse the HTML attachments to remove malicious code. Displaying the attachment in an iframe with a big warning in the parent page is also not a solution, I guess, depending on how HTML attachments are supposed to work.
The Internet Explorer "sniff" is a configurable option within the browser itself. As of IE7 (IE8 for sure) it defaults to not "sniffing" and adheres to the MIME type of the response.

I agree that the suggested user prefs won't help a user after they switch into the "dangerous" mode. But the point is to at least provide a "safe" mode for those that want it. I would opt for a scenario where some people choose to operate in the dangerous mode and some are safe, versus one where all people are in dangerous mode.
(In reply to comment #22)
> The internet explorer "sniff" is a configurable option within the browser
> itself.

That's a parameter which nobody will set, as most IE users just use the default config (they aren't geeks and won't know anything about this topic). An alternative to your user pref would be to add JS to the links in the attachment tables, for HTML attachments only, that throws a popup warning that the attachment may contain malicious code and asks whether you really want to display it anyway.
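Something along these lines (sketch only; the "html-attachment" class is a hypothetical marker the attachment table would have to emit, which does not exist today):

```javascript
// Sketch only: intercept clicks on links to HTML attachments and ask for
// confirmation before navigating to the rendered attachment.
// The "html-attachment" class is a hypothetical marker Bugzilla would add.
document.querySelectorAll('a.html-attachment').forEach(function (link) {
  link.addEventListener('click', function (event) {
    var proceed = window.confirm(
      'This attachment may contain malicious code. Display it anyway?');
    if (!proceed) {
      event.preventDefault(); // stay on the bug page
    }
  });
});
```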
(In reply to comment #23)
> (In reply to comment #22)
> > The internet explorer "sniff" is a configurable option within the browser
> > itself.
> 
> That's a parameter which nobody will set as most IE users just use the default
> config (they aren't geeks and won't know anything about this topic). An
> alternative to your user pref would be to add JS to the link in the attachment
> tables for HTML attachments only throwing a popup that the attachment may
> contain malicious code, and asks if you really want to display it anyway or not.

The configurable option defaults secure as of IE8 (possibly IE7).

To take a step back, is there any estimate of what percentage of Bugzilla users regularly deal with HTML attachments? If that percentage is low, then I think the per-user setting of "render attachment" vs. "display as text" would work well. It seems to me that many attachments are already text-based and only a small number of users rely on HTML-based attachments (correct me if I'm wrong). This would leave a large number of users secure by default while providing the same functionality to the users who need HTML attachments.
I'd say most of the *privileged* bugzilla.mozilla.org users deal with HTML attachments on a daily basis.
  Michael: I don't disagree with you that there's a risk here, it's just that the proposed solution, in the experience of myself and LpSolit (we're the primary two Bugzilla developers), would be detrimental or confusing to a significant portion of the userbase.

  The reason that we marked the bug public is that the basic notion that attachments can be harmful has been discussed quite a bit in bug 38862, which was itself public for about 9 years. Pretty much any "attachments can be harmful" bug would now be duped to bug 38862, bug 472206, or made public.

  Note that we do offer a hook wherein attachment data can be checked or modified before entering the database, so there's also the possibility of adding a virus or spam scanner there.

  In the past, when spammy attachments have been added, the Bugzilla administrators have been quick to notice and remove them.
(In reply to comment #14)
> There are 2 attacks that we are trying to prevent here.
> 1. An attacker wants to compromise a bugzilla user (likely someone in the
> security group). They create either a phishing page (similar to my POC) or a
> page which redirects to a site hosting browser exploits and then adds that as
> an attachment to a bug. They submit the bug to the security group and then wait
> for us to click and fall victim to the malicious attachment.

The users most likely to be targeted by this type of attack are precisely the type of users who would enable the option to view attachments as HTML, because those are exactly the people who always need to be able to deal with them. This makes such a preference fairly useless as a means of protecting these people.
Yep. I can't see a way of mitigating this risk at all without breaking useful functionality. :-| Security Group members just need to be on their toes.

Perhaps we should send an email to the s-g mailing list warning people to be careful of unexpected login prompts when using Bugzilla.

Gerv
(In reply to comment #27)
> The users most likely to be targeted by this type of attack are precisely the
> type of users who would enable the option to view attachments as HTML, because
> those are exactly the people who always need to be able to deal with them. This
> makes such a preference fairly useless as a means of protecting these people.

Maybe the security group is a bad example. We have other groups within our installation, besides the various security groups, which have information we probably don't want to get out. Those users are non-technical, and this is a real risk for them.
This has been a good discussion. Thanks for the various comments on usability and what can/can't work.

I'm open to suggestions on how to implement some mitigating controls. It's clear that a large number of users are actively using this feature as it's currently designed.

Can we protect the portion of users that aren't using rich attachments?

Or perhaps we could send an email reminder, as mentioned in comment 28, to all privileged account holders (e.g. anyone who can see sensitive bugs of any sort) urging them to be cautious of phishing attacks via attachments. We could also combine that with anti-phishing features on the login page, like a pre-chosen image or phrase (like PassMark).

Thoughts, comments? I'd really like to get some compensating security controls in here since the feature itself can't be modified from its current state.

Thanks!
(In reply to comment #30)
> Or perhaps we could send an email reminder as mentioned in comment 28 to all
> privileged account holders (e.g. can see sensitive bugs of any sort) to urge
> them to be cautious of phishing attacks via attachments.

I personally would ignore such an email. And users with no Bugzilla account would still be vulnerable to phishing (Bugzilla credentials are not the only ones you could try to collect).


> that with anti-phishing features on the login page like a pre-chosen image or
> phrase (like passmark).

Why/how would this be useful?
Severity: critical → normal
Whiteboard: DUPEME WONTFIX? → WONTFIX
Michael: A pre-chosen image or passmark is a good idea. However, there are some problems. Mostly that login happens via a form in the header, and that username/password are entered at the same time. Although in places like banks, people expect and understand two-phase login, for Bugzilla I think it would be a hassle, both in terms of implementation and in terms of user experience. I haven't quite thought up a solution for that.
(In reply to comment #31)
> (In reply to comment #30)
> > Or perhaps we could send an email reminder as mentioned in comment 28 to all
> > privileged account holders (e.g. can see sensitive bugs of any sort) to urge
> > them to be cautious of phishing attacks via attachments.
> 
> I personally would ignore such an email. 
Yea, I imagine it would be of limited value :(

> And users with no Bugzilla account
> would still be vulnerable to pishing (Bugzilla credentials are not the only
> ones you could try to collect).

If we continue to allow HTML attachments to be open to unauthenticated users, then
there is no way we can avoid this. That is a fair amount of risk to assume, and I
can't say I'm in favor of that option. Are there objections to requiring
users to log in before viewing an attachment? Or is public/anonymous viewing of
attachments a required use case?
(In reply to comment #33)
> Are there objections to requiring
> users to log in before viewing an attachment? Or is public/anonymous viewing of
> attachments a required use case?

  Public/anonymous viewing of everything non-security-related in Bugzilla is a required use case--there are a lot of people who don't want to go through the hassle (as minor as it may be) of creating a user account; they just want to view the data that's here.
No longer blocks: q2-review-bmo
Whiteboard: WONTFIX → WONTFIX [infrasec:input]
I was going to report this as an issue myself, though I have an alternative solution that should have little to no impact on usability:

A 'splash screen' that informs the visitor about the potential danger of attachments - so they can enable 'NoScript' or other appropriate defenses if they want to - which the user has to click through before they can view the attachment. This would be very similar to how Firefox deals with potentially insecure SSL certs (except I propose it would not be nearly so annoying or patronising to the user).

Additionally, to minimise the impact the visitor could then set a cookie saying 'don't bug me again' via a checkbox, whilst a Bugzilla user could disable it in their preferences.

Having no protection is simply unacceptable. I disable many of my security features when I visit Bugzilla, e.g. NoScript. A redirect is all that is necessary to perform a hugely convincing phishing attack (bugzil.la, complete with $5 SSL) or to hit someone with, say, a Java root exploit. I don't want to have to treat Bugzilla as a 'dangerous application'.

I recently wrote a Planet Mozilla post about Google having a policy of ignoring this very security issue: http://rushyo.com/42bit/?p=54. I ran a quick study using that Google site as an example and found that my very tech-savvy friends are completely vulnerable to such an attack in a way that would never work with an arbitrary URL. I've since used it as a vector when pen-testing (with the aforementioned Java root exploit) - it worked.

I'd really like to see it addressed and I don't think the cost is that great.
Here is a mitigating idea: if a user does not have editbugs, restrict their file uploads to a whitelist of types (image/* allowed as-is; text/* converted to text/plain; everything else converted to application/octet-stream).

There are at least some restrictions on getting editbugs - you have to have demonstrated a useful contribution. So it would prevent some random person from coming along and uploading malicious attachments; they would need to do some work to gain some trust first.

Anyone who has editbugs can, of course, change the content type of the attachment later if they deem it non-malicious and they know multiple people will be wanting to view it.

It's not perfect, but I suggest it's an improvement. We could do it as a local customization for b.m.o.

Gerv
(In reply to comment #37)
> Here is a mitigating idea: if a user does not have editbugs, restrict their
> file uploads to a whitelist of types

I don't want something based on permissions. Many installations use default group settings, i.e. everybody has editbugs privs by default. And for bmo specifically, this would prevent new contributors from attaching valid testcases.
I am talking about bmo only. I am not suggesting we prevent anyone from attaching anything; I am suggesting we translate the MIME type on the fly to a safe (non-browser-executable) type. Once the attachment has been assessed, the type can be changed by a sufficiently empowered user if it's necessary for convenience.

Gerv
I think this might be appropriate territory for an Extension, at least to prototype things that we could possibly some day bring into the core codebase.
Whiteboard: WONTFIX [infrasec:input] → WONTFIX [infrasec:input] [extension]
Attached patch Patch v.1 (deleted) — Splinter Review
Here's a straw-man patch. Michael: do you think this sort of protection would help? Everyone else: do you think this sort of protection would be a great inconvenience?

Gerv
Assignee: attach-and-request → gerv
Status: NEW → ASSIGNED
Attachment #508363 - Flags: review?(mcoates)
(In reply to comment #41)
> Here's a straw-man patch. Michael: do you think this sort of protection would
> help? Everyone else: do you think this sort of protection would be a great
> inconvenience?

Yes, it would be an inconvenience that's not currently needed. I don't want this on BMO. There's no cause for it, and any attempt to use attachments maliciously has been caught quickly and handled well in the last 10+ years BMO (and its predecessors) ha(s|ve) been operating.
"Quickly" has nothing to do with it.

1. Malicious user creates a domain and buys a $5 SSL cert (easy).
2. Malicious user creates an attachment on Bugzilla in a module where no one is active (easy).
3. The malicious attachment redirects visitors to the aforementioned domain, a perfect copy of the login page, and steals their credentials (easy - why would they check the little thing in the corner of the page when they're expecting a Bugzilla site? - people only check security badges AFTER something happens they don't expect).
4. Using the newly acquired passwords, a script automatically (and immediately) logs in as those users - even CAPTCHAs notwithstanding (you just have the attackers on standby ready to type them in!).
5. If the victim is an administrator, it does something nefarious. If not, it sends one of a bucket of legit-looking messages to a random admin's email, encouraging them to click on the link.
6. Now that we have the users' passwords, we might as well also go and scrape their email addresses from their profiles and try out their Facebook, Twitter, Stack Overflow, etc. - all in the space of a few seconds, long before an administrator can react (easy) - and perform attacks using those. We could even use those to proliferate the original attack!

You could do that really quickly and easily. Not only could you do it, but you could wrap it up in a piece of software and give it to other people. Even assuming it doesn't work the first time, the prevalence of the attacks would be a gigantic inconvenience which would probably end up with attachments disabled entirely. You could even turn the whole process into an automated worm!

The argument "no one has done it before" is not a valid security argument. The fact that mild attacks have been dealt with in the past is not an indicator of what is possible and therefore cannot be used to form such a judgement.

Not only that, but even assuming it is a non-issue now, it may combine with another 'minor' vulnerability to form a serious one. Defense-in-depth practices demand that action be taken.

A trained penetration tester can have a field-day with such a vulnerability. So much so that MITRE + SANS consider it amongst the "Top 25 Most Dangerous Software Errors". As a security consultant I feel I would be utterly remiss not to state the importance of dealing with this issue.

It has to work just once.
Just to add: The defense in depth argument stands independent of additional phishing protections (such as the apt suggestion by mcoates that users could have personalised questions when they login). Any one Firefox (+addons) security vulnerability (the aforementioned Java root exploit springs to mind) is enough to cause huge damage. This would be mitigated for security-conscious users on other sites through use of NoScript, etc - but people, like myself, whitelist mozilla.org. In fact, I've now blacklisted mozilla.org on all my tools - I have to assume it will be an attack vector now.
(In reply to comment #41)
> Everyone else: do you think this sort of protection would be a great
> inconvenience?

  Yes, I think it would be a fairly significant inconvenience. We would have to fix the MIME types of every testcase and every patch submitted by people without editbugs. And that's just the immediate inconvenience that I can think of.

  I also agree with what reed said, that by solving a problem you can't yet prove is really a problem, you're creating a problem.

  http://www.codesimplicity.com/post/if-it-aint-broken/
(In reply to comment #45)
>   I also agree with what reed said, that by solving a problem you can't yet
> prove is really a problem, you're creating a problem.
> 
>   http://www.codesimplicity.com/post/if-it-aint-broken/

I think this is the wrong argument to be making. We've clearly demonstrated that this issue can be easily exploited (see the proof of concept), and while we have some compensating controls, none of these would protect against a well-designed and automated attack. Has anyone attacked us through this vector yet? Not to our knowledge - but then again, if it were a good and targeted attack, could we even detect it?

As you know, security is a proactive practice where the goal is to identify attack vectors and implement solutions or mitigating controls to reduce risk. To be effective we need to identify security risks before attackers do.

This bug is a good example of identifying a security concern that has not yet been exploited by attackers and attempting to provide potential security controls. We’ve had a good debate on potential solutions and whether the resulting impact on usability is acceptable. 

I’d really like to see some sort of security control be added in this area. It sounds like an extension (mentioned in comment 40) is the way to go since we can't get a global solution that has acceptable usability trade-offs.
"  I also agree with what reed said, that by solving a problem you can't yet
prove is really a problem, you're creating a problem."

What's not proven? This is a common attack vector, recognized and identified as a serious issue (http://cwe.mitre.org/data/definitions/601.html). The PoC already attached proves it's possible - and even a cursory understanding of the application shows it is possible.

I'm forced to favour the extension approach as well, although I still haven't seen an argument against my initial suggestion that we present the user with a 'splash screen' informing them of the impact of what they're about to do - I think that's a low-impact, high-gain solution. It helpfully confines the trust issue to just that page, giving the end user the opportunity to mitigate it. You can provide two methods of opt-out: through cookies (for external visitors) and through user prefs (for persistent opt-out). The cost then? One unwanted button click of 'Don't pester me again'.

As mcoates states, security is proactive. If you wait for something to happen, it's too late and your security practices have failed. You have to identify and resolve noted issues before someone strikes - or accept that you just don't care about the potential consequences (and the latter is not something I or anyone else expects to see from a project such as this, obviously).

If I can think up malicious, dangerous avenues of attack whilst I'm just sitting here with my tea, then a hacker who has something to gain and more time on his hands can surely come up with something better.

This isn't an enhancement request - it's a security flaw... it's important that people leave their bug fixing hats at the door and bring their security hats to the table. This is the first time I've seen a WONTFIX slapped on a valid security request in an open-source project solely because the fix could be a little bit inconvenient and 'it hasn't happened yet'.

Can you imagine how this would look if users were aware of it? Do you think the response would be 'good old Mozilla, protecting me from having to press a button'? I don't see that being the response.
(In reply to comment #47)
> even a cursory understanding of the application observes it is possible.

  However, perhaps a cursory understanding of the application is not enough to make major design decisions about it that will affect millions of users.

> I'm forced to favour the extension approach as well, 

  Cool. So we agree on that.

> although I still haven't
> seen an argument against my initial suggestion that we present the user with a
> 'splash screen' that informs them of the impact of what they're about to do -

  Well, to be fair, as a designer, I probably should not have to explain in detail why every single suggestion is not acceptable; otherwise I'd spend quite a bit of my (entirely volunteer) time simply typing instead of being productive. If your goal is to know how to create a more productive suggestion, though, I'd be happy to explain it for that purpose.

> You have to identify and resolve noted issues before someone strikes

  This is, to some degree, true. After all, security is important. 

  In another view, though, it is a paranoid attitude that leads to poor software design. So there has to be a balance between practicality and paranoia. If you can provide a solution that has no significant impact on users (a solution I can't currently imagine despite knowing this system better than anybody--except perhaps LpSolit, who is the Attachments maintainer), then that's practical, because I do understand the risk even if it's never been exploited. But if there's an impact on every single user of Bugzilla for something that isn't actually currently affecting them, then that's poor software design.

> Can you imagine how this would look if users were aware of it?

  Not only are users aware of it, this fact has been public knowledge for nearly the entire history of the Bugzilla Project--since about 1998. I suspect that internal developers at Netscape were aware of it even before then. Attachments used to be even *more* dangerous (the possibility of XSS) and even *that* was never exploited despite the value of bugzilla.mozilla.org as a target.


  Probably the ideal solution here would be some sort of support in the browser for preventing the current page from redirecting to another domain, and preventing the current page from submitting a form to another domain.
"  However, perhaps a cursory understanding of the application is not enough to
make major design decisions about it that will affect millions of users."

I was stating that in response to a specific argument. It was not a QED. Consider the context - it was in response to a statement that there was no evidence presented that it behaves in a certain way. The PoC or a cursory investigation shows it does.

"  Well, to be fair, as a designer, I probably should not have to explain in
detail why every single suggestion is not acceptable; otherwise I'd spend quite
a bit of my (entirely volunteer) time simply typing instead of being
productive. If your goal is to know how to create a more productive suggestion,
though, I'd be happy to explain it for that purpose."

If you want to disregard it out of hand for a reason, please at least state what that reason is. I presume we are peers here - and that peer suggestions are not going to be automatically rejected without any argument or explanation. If suggestions are going to be rejected without any discussion we would have to give up any pretense that this is a collaborative effort.

The fact is I could also be spending my time working on Firefox privacy patches - but I think there are bigger gains to be had in addressing this issue. I am dedicating my time as well.

As a scientist, I expect nothing less than to have to respond to others' questions. That's precisely what being a peer amongst peers is all about. If I can't respond to suggestions and criticisms, then I would not expect anyone to hold my word as credible. As a volunteer computer scientist, I accept that responsibility. I will address every point on every issue everyone makes, every time. If I could not, then I would fully expect people to implicitly assume my position must be flawed and ignore it entirely.

"But if there's an impact on every single user of Bugzilla for something that
isn't actually currently affecting them, then that's poor software design."

The point I'm trying to make, explicitly, is that the moment a problem is identified is the only time you can fix it. If a security flaw is actively being exploited against a user, it is too late. You don't get a report saying "Oh, by the way, my password was stolen and now I've lost my job because of the comments a malicious user posted. Can you please assign it to someone?". I want to emphasize that a security flaw is not a conventional bug. One crash can be a bad thing, but one compromised password can be used to destroy a person's life. A crash might be a blocker, but a security flaw that (directly or indirectly) can lead to the loss of a password is normally 'urgent security patch' material.

In that sense, I perceive that software design must accommodate security concerns - not compete with them for importance.

"Attachments used to be even *more* dangerous (the possibility of XSS) and even
*that* was never exploited despite the value of bugzilla.mozilla.org as a
target."

Sometimes security flaws don't get exploited. Sometimes they do. Relying on pot luck when it comes to people's passwords doesn't strike me as a valid option.

If 'it rarely happens' were a valid argument, there would be absolutely no justification for the huge inconvenience that Firefox's response to invalid SSL certificates represents. Exploiting that issue requires significantly more skill, it doesn't happen much (if at all) in the wild, and ultimately the damage is exactly the same: false trust.

If you think I'm highly paranoid for wanting a single ignorable button click as a solution to a trust issue, then hypothetically Firefox's response to invalid SSL certificates must appear to be the work of a team of schizophrenics.

In that context, trust-based security has been deemed so important that it has completely overridden the imperatives of the rest of the user experience - becoming the single largest intentional impediment to an activity on any piece of software I use! I am not suggesting that here; I am suggesting what I perceive to be a comparatively trivial annoyance in the user's experience in exchange for a large mitigation of a similar security threat. That seems very, very reasonable to me in comparison.

"  Probably the ideal solution here would be some sort of support in the browser
for preventing the current page from redirecting to another domain, and
preventing the current page from submitting a form to another domain."

You mean a sort of X-No-Redirect response? That sounds ideal but much more awkward to implement. There are many things that cause redirections and many browsers and user agents that perform them. Wide adoption would take a very long time and significant effort by a large number of entities. The effort taken to standardise, evangelise and implement such a change would far exceed the time and effort exerted by users clicking a button on a splash screen.

"Cool. So we agree on that."

I think 'favour' was the wrong term actually. I accept it, at least. It's not my favourite choice - and I (evidently) will still argue for alternatives.
Hey Danny. Thanks for your input. I read all of it, and it will be considered in the determination of what to do with and about this bug. In the meantime, if you and mcoates want to start discussing with browser vendors the possibility of browser-level controls for this (or tell me if they already exist), I would really appreciate it.
If we are going to do anything about this problem, we need three separable things:

1) A way of determining which attachments are not trusted
2) A way of marking them as such, which can be removed
3) Something to do differently to untrusted attachments

My suggestion for 1) for bmo is "those attached by people without editbugs".

2) could be either a change of content type, a flag on the content type, or a new boolean field in the database. We can't just check the user's status because the flag needs to be removable on a per-attachment basis.

Above, I suggested that 3) could be "render as plain text or other harmless type". I think this is workable, because most new bug filers attach screenshots. And if they are attaching HTML testcases, they should be given editbugs.

But if that's not possible, here are some other ideas:

- Inject CSS to hide all password fields or give them a red background or something else (see the sketch after this list)
- Remove all password fields from the HTML or change their type
- Get an EV certificate for bugzilla.mozilla.org (only) and frame untrusted attachments; if there is a top-level redirect, the EV designation will disappear
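
To make the first idea concrete, the attachment-serving code could inject something like the following into a served HTML attachment (sketch only; a hostile page could trivially remove or override such a rule):

```javascript
// Illustrative only: flag password fields inside a served HTML attachment
// so a phishing form stands out. A hostile attachment can undo this.
var warn = document.createElement('style');
warn.textContent =
  'input[type="password"] { background: red !important; outline: 3px solid red; }';
document.documentElement.appendChild(warn);
```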

Sadly, the current CSP (Content Security Policy) spec does not have a restriction on where forms can be submitted, although one was in my original proposal.

Gerv
Why don't we just have a click-through type thing where you have to explicitly allow viewing any text/html attachment? That would solve this and stop us from having to neuter people's content-types for uploaded attachments.
>>> Malicious, active content on mozilla.org <<<

This part is easy enough to solve. We just need to switch from https://bugN.bugzilla.mozilla.org/ to something like https://N.bugtachments.nu/.

If a user clicks "Edit" rather than "View", the iframe should show the attachment's source code rather than loading the attachment directly.  This would even be useful for crash testcases: you'd be able to edit their flags and view their source much more easily.


>>> Open redirects on mozilla.org <<<

Also easy.  Make the old URLs go to the "Edit" page, not the "View" page.


>>> Phishing users who decide to view an attachment <<<

This is hard.  Bugzilla attachments are inherently more confusing than the external links in a typical web app.  They can be simultaneously private and untrustworthy; part of Bugzilla and not part of Bugzilla.  It is entirely plausible that viewing a private attachment would require your Bugzilla password.  Preventing phishing by Bugzilla attachments is going to be much more difficult than, say, preventing phishing by pages that a Gmail message links to.

Disallowing text/html attachments or using "Content-Disposition: attachment" may work for many Bugzilla instances.  But for bugzilla.mozilla.org, it would create horrendous usability problems (we need layout testcases) and security problems (file: URLs get many additional privileges, and in some browsers can read your entire file system).

Here are some ideas for what to do with text/html attachments, on a scale from subtle to beating Bugzilla users over the head:

* Change "View" to "View (external link)", using one of the standard external-link icons.  Many sites do this, so some users already know what the icon means.  We'd be using it with a slightly different meaning.
http://www.maxdesign.com.au/articles/external/
http://www.hhs.gov/web/policies/webstandards/disclaimer.html

* Make the link open in a new tab. Again, many sites do this, so it might serve as an additional hint that you clicked an external link.

* When hovering over or focusing the "View" link, show a tooltip saying "Don't enter your password here".  The tooltip could be shaped to point at the external-link icon.

* When hovering over the "View" link, change the mouse cursor to one that indicates an external link.

* After clicking the "View" link, show an interstitial page for 1 second.  Something to /make users suspicious/ of a subsequent login form, without /telling/ them to be.  Perhaps a fast animation consisting of a fake progress bar and a big green check mark when it fills.

* After clicking the "View" link, show a click-through warning.  Annoying, blame-shifty, and likely ineffective no matter what it says.


>>> Browser-integrated solutions <<<

The upcoming <iframe sandbox> attribute is probably not enough for the Edit page.  (Implementation in Gecko is bug 341604.)

The upcoming "text/html-sandboxed" MIME type is probably not enough for the View page: I don't think it prevents redirects.

The proposed "X-No-Redirect" header is an interesting idea (comment 48, comment 49).  It would be a bit like a full-page <iframe sandbox>.  The idea here is to prevent a malicious attachment from redirecting you to a more plausible URL and then asking for your Bugzilla password.

The "Content-Disposition: attachment" header isn't appropriate, at least for bugzilla.mozilla.org (see above).


>>> Conclusions <<<

We don't need a click-through warning.

I've never seen a better example of why passwords need to die.
Also, here's an interim solution for anyone who is worried. Bugzilla allows you to override the content type of an attachment when displaying it. Write a Greasemonkey script that, on bug pages, adds "&content_type=text/plain" to the end of attachment URLs for HTML attachments.
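
A minimal sketch of such a userscript follows. The @include pattern and the link selector are assumptions about the bug-page markup, and this naive version rewrites every attachment link (not just HTML ones), so treat it as a starting point rather than a finished script:

```javascript
// ==UserScript==
// @name     View Bugzilla attachments as plain text
// @include  https://bugzilla.mozilla.org/show_bug.cgi*
// ==/UserScript==
// Sketch only: append a content_type override so attachment.cgi serves
// attachments as text/plain instead of rendering them as HTML.
var links = document.querySelectorAll('a[href*="attachment.cgi?id="]');
for (var i = 0; i < links.length; i++) {
  var href = links[i].getAttribute('href');
  if (href.indexOf('action=edit') === -1 && href.indexOf('content_type=') === -1) {
    links[i].setAttribute('href', href + '&content_type=text/plain');
  }
}
```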

Gerv
(In reply to comment #53)
> This part is easy enough to solve. We just need to switch from
> https://bugN.bugzilla.mozilla.org/ to something like
> https://N.bugtachments.nu/.

Launchpad uses launchpadlibrarian.net, ISTR.

> untrustworthy; part of Bugzilla and not part of Bugzilla.  It is entirely
> plausible that viewing a private attachment would require your Bugzilla
> password.

But not if it's attached to a private bug which you just viewed entirely successfully.

> Disallowing text/html attachments or using "Content-Disposition: attachment"
> may work for many Bugzilla instances.  But for bugzilla.mozilla.org, it would
> create horrendous usability problems (we need layout testcases) and security
> problems (file: URLs get many additional privileges, and in some browsers can
> read your entire file system).

Yep; no-one is suggesting either of these.

> * After clicking the "View" link, show an interstitial page for 1 second. 
> Something to /make users suspicious/ of a subsequent login form, without
> /telling/ them to be.  Perhaps a fast animation consisting of a fake progress
> bar and a big green check mark when it fills.

You think this is OK, but a direct click-through is bad? Or do you think this sucks too?

> * After clicking the "View" link, show a click-through warning.  Annoying,
> blame-shifty, and likely ineffective no matter what it says.

"You are properly logged-in. You are about to view an untrusted attachment. Do not enter your password."

? Also, we could time it out like the "install addon" prompt rather than have a "Continue" button.

> The upcoming "text/html-sandboxed" MIME type is probably not enough for the
> View page: I don't think it prevents redirects.
> 
> The proposed "X-No-Redirect" header is an interesting idea (comment 48, comment
> 49).  It would be a bit like a full-page <iframe sandbox>.

So text/html-sandboxed is _not_ like a full-page <iframe sandbox>? Confusing.

> I've never seen a better example of why passwords need to die.

You think Bugzilla should start supporting client cert auth?

Gerv
(In reply to comment #55)
> (In reply to comment #53)
> > I've never seen a better example of why passwords need to die.
> 
> You think Bugzilla should start supporting client cert auth?

... or, *at least*, two-factor auth.
(In reply to comment #56)
> (In reply to comment #55)
> > (In reply to comment #53)
> > > I've never seen a better example of why passwords need to die.
> > 
> > You think Bugzilla should start supporting client cert auth?
> 
> ... or, *at least*, two-factor auth.

There is a bug filed for this, and I couldn't agree more: bug 570252.
Key fobs don't help against phishing.  I guess client certs do help, but they're currently a pain.  Let's wait and see what the Mozilla Labs Account Manager folks come up with.
(In reply to comment #51)
> Above, I suggested that 3) could be "render as plain text or other harmless
> type". 

  You're still leaving IE 6 and IE 7 users vulnerable there, though, FWIW.

> And if they are attaching HTML testcases, they should be given
> editbugs.

  I attached many HTML testcases back in the day before having editbugs. But I could have asked for it earlier, I suppose.

> - Inject CSS to hide all password fields or give them a red background or
> something else

  Definitely would be too complex and prone to workarounds.

> - Remove all password fields from the HTML or change their type

  Same.

> Sadly, the current CSP (Content Security Policy) spec does not have a
> restriction on where forms can be submitted, although one was in my original
> proposal.

  That's unfortunate. What would it take to propose a new system for that?
(In reply to comment #58)
> Key fobs doesn't help against phishing.  I guess client certs do help, but
> they're currently a pain.  Let's wait and see what the Mozilla Labs Account
> Manager folks come up with.

  Yeah, agreed. Would a "this is your login image" system help? We'd have to add a step between username and password then, though, which could be somewhat obnoxious.
Possibly one of the more feasible and immediate solutions would be that "View" always shows attachments in an iframe with a bar above them saying something like "attachment X on bug Y". That'd also help people get back to the bug from the viewed attachment, which has been a request that's happened from time to time.
(In reply to comment #61)
> Possibly one of the more feasible and immediate solutions would be that "View"
> always shows attachments in an iframe with a bar above them saying something
> like "attachment X on bug Y". That'd also help people get back to the bug from
> the viewed attachment, which has been a request that's happened from time to
> time.

I thought of that one. Unfortunately, the contents of the iframe can always "frame-bust" out of it onload, so unless someone's paying very careful attention, an attacker can just dismiss the bar.
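
For illustration, the classic frame-busting pattern an attachment could use to escape such a wrapper is just a couple of lines (generic snippet, not taken from any actual attachment):

```javascript
// Classic frame-buster: if the page finds itself inside an iframe, it
// navigates the top-level window to itself, dismissing any warning bar.
if (window.top !== window.self) {
  window.top.location.href = window.self.location.href;
}
```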

(In reply to comment #60)
>   Yeah, agreed. Would a "this is your login image" system help? We'd have to
> add a step between username and password then, though, which could be somewhat
> obnoxious.

Depending on the image size, we could brand every header with it, right next to the password box.

Or, we could remove the login stuff from the header, to be replaced with a link to the dedicated login page. [I actually hate it there, because for some reason it uses urlbase() rather than relative URLs, which means it breaks if (due to SSH tunnelling) I am accessing the Bugzilla on a port other than the one configured in urlbase. But perhaps that's specialised ;-)]

> That's unfortunate. What would it take to propose a new system for that?

I could talk to Sid and Brandon. Currently, the functions and syntax of CSP are being debated, but there doesn't seem to be much appetite for yet more features.

Gerv
Assignee: gerv → attach-and-request
(In reply to Jesse Ruderman from comment #53)
> If a user clicks "Edit" rather than "View", the iframe should show the
> attachment's source code rather than loading the attachment directly.

This has been implemented in bug 716283. Bugzilla 4.0.4 and higher have this fix.
Moving this bug to b.m.o as this bug and its patch became clearly bmo-specific.
Assignee: attach-and-request → nobody
Component: Attachments & Requests → General
Product: Bugzilla → bugzilla.mozilla.org
QA Contact: default-qa → general
Version: unspecified → Current
Comment on attachment 508363 [details] [diff] [review]
Patch v.1

Removing myself as reviewer. Bug is WONTFIX.
Attachment #508363 - Flags: review?(mcoates)
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → WONTFIX