Closed Bug 26257 Opened 25 years ago Closed 16 years ago

[SECURITY] Bugzilla should prevent malicious webpages from making bugzilla users submit changes to bugs.

Categories

(Bugzilla :: Creating/Changing Bugs, defect, P1)

defect

Tracking


RESOLVED FIXED
Bugzilla 3.2

People

(Reporter: terry, Assigned: LpSolit)

References

Details

(Keywords: selenium, Whiteboard: security [wanted-bmo])

Attachments

(2 files, 10 obsolete files)

To paraphrase mail received from dshea@netscape.com: It's possible to create a URL that is the equivalent of a form post to the Bugzilla process_bug.cgi, from a page that is not served by Bugzilla. This would give someone the ability to create a link that says, "Click on Me!" which would actually be a URL to http://bugzilla.mozilla.org/process_bug.cgi, with params that make undesirable changes to that bug. The changes would appear to be made by the stupid user who clicks on the link, since it will use that person's login cookie. It seems to me a simple way to solve this is to require a magic token on such URLs. Bugzilla would generate these magic tokens and put them in hidden form elements. The server would check that the received token came from the machine it was sent to and that it was generated within the last few days.
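A sketch of the kind of forged link being described; the parameter names below are purely illustrative, not the real process_bug.cgi interface:
-----
# Hypothetical "Click on Me!" target: a GET that is form-equivalent to a bug
# change. Anyone already logged in who follows it submits the change as
# themselves, because the browser attaches their Bugzilla login cookie.
my $attack_url = 'http://bugzilla.mozilla.org/process_bug.cgi'
               . '?id=12345&comment=forged+comment';
-----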
tara@tequilarista.org is the new owner of Bugzilla and Bonsai. (For details, see my posting in netscape.public.mozilla.webtools, news://news.mozilla.org/38F5D90D.F40E8C1A%40geocast.com .)
Assignee: terry → tara
I'm going to mark this as a blocker of the metabug for bugzilla security, bug 38852. When bugzilla rejects a submission, instead of just saying "changes rejected", it should also echo back the page. This functionality exists for posting bugs ("save as bookmarkable template") but apparently doesn't for changing existing bugs. Would requiring the HTTP referer to be http://bugzilla.mozilla.org/* work (as an alternative to using ip-specific hidden form elements)?
Blocks: 38852
Changing summary.
OS: Linux → other
Hardware: PC → Other
Summary: Bugzilla vulnerable to attacks on stupid users. → Bugzilla should prevent malicious webpages from making bugzilla users submit changes to bugs.
One would assume this also applies to attaching files, submitting new bugs, etc.
Status: NEW → ASSIGNED
I'm going to nominate this for 3.0, as I suspect it will require a redesign of some things. Otherwise we should move this forward to 2.14 IMO.
QA Contact: matty
Whiteboard: 3.0
You need to be logged in to make changes to a bug. So the malicious URL would also have to contain a valid Bugzilla user and password to succeed. If this really happened, all that would have to be done is to disable that account. Wouldn't that be enough? Or the author of that URL would have to assume that everybody who clicks on it is already logged in on Bugzilla and so has the login cookies.
Whiteboard: 3.0 → 3.0,security
Yes, I think it would be pretty much under the assumption of already being logged on thru cookies, but this is about closing a theoretical hole on principle rather than bothering trying to work out how likely it is in practice.
This won't be too big of a redesign that we can't do it in 2.14...
Whiteboard: 3.0,security → security
Target Milestone: --- → Bugzilla 2.14
It is relatively simple to check the referer string against the installation's base URL:
-----
my $urlbase = Param('urlbase');
$ENV{'HTTP_REFERER'} =~ /^$urlbase/
  || DisplayError("You can only do that from a form/URL generated by Bugzilla.") && exit;
-----
It seems that relying on the urlbase Param would have some ill effects with optionally using bugzilla over https and also accessing it via the real host name or ip address when the virtual host is down (such as DNS problems). Perhaps something like $ENV{'HTTP_REFERER'} =~ m@^https?://$ENV{HTTP_HOST}@
Attached patch Validate the HTTP_REFERER based on HTTP_HOST (obsolete) (deleted) — Splinter Review
Keywords: patch, review
Forgive me if my knowledge of Perl is sketchy, but from what I can see this requires the HTTP_REFERER to be present. Not all browsers are necessarily going to send Referer, because some people see it as a privacy issue. Hence if it is off we should still allow the submission (there's no way I know of for a page to turn it off). However, this raises the question of whether the cookie solution should also be implemented for when a browser has Referer off. It seems to me that it's essential to check that the address is correct, but again we will face the same problem as in bug #20122 with machine-based authentication. If a user preference is implemented for bug #20122 it could be reused here.
Yes, this does make it so there has to be an HTTP_REFERER. It's the classic give/take, security/ease of use, etc, etc, etc. :) I'd guess that if somebody has HTTP_REFERER turned off 'cause they think it's a privacy issue, they probably have cookies off, too. I really don't know what a good "moderate" solution is here. Anybody?
At the risk of making editparams even uglier, maybe make this a param setting so it can be switched on/off.
Is that necessary? If users turn off their Referer string then they'll need to accept this attack (at least until we supplement this check) or they can't alter anything in Bugzilla. Sure, the most secure system is an unreachable system, but that may be taking things too far ... We could warn the user if we detect the situation. Also are we doing this for all actions, eg posting attachments, editing user prefs, etc?
setting owner to match the patch author just to make life easier for me. :)
Assignee: tara → jake
Status: ASSIGNED → NEW
This patch as written causes an "undefined value" error if you try to change a bug with the Referrer header turned off, since $::ENV{HTTP_REFERER} doesn't exist. How about we do it this way: 1) if there is no Referrer header, just let it through. We'll assume that if the user is savvy enough to disable Referrer headers in their browser they probably know what they're doing. 2) if there IS a Referrer header, and it doesn't match, throw the error, since it's dead obvious someone's screwing with the system. 3) if there is a Referrer header, and it does match, then everything's hunky dory.
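A minimal sketch of that three-case logic, assuming the old-style Param() and DisplayError() helpers already used earlier in this bug:
-----
# Case 1: no Referer header at all -- let the request through.
# Case 2: Referer present but not ours -- reject.
# Case 3: Referer present and matching -- proceed.
my $referer = $ENV{'HTTP_REFERER'};
if (defined $referer && $referer ne '') {
    my $urlbase = Param('urlbase');
    unless ($referer =~ /^\Q$urlbase\E/i) {
        DisplayError("You can only do that from a form or URL generated by Bugzilla.");
        exit;
    }
}
# Fall through: go on and process the bug change.
-----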
Keywords: patch, review
And let me just state that this should be checked _before_ the login check. If not, a log in might be required and in that case the "Referer" will be OK when it shouldn't be.
Attached patch Patch v2 (obsolete) (deleted) — Splinter Review
Keywords: patch, review
The expression in attachment 40608 [details] [diff] [review] compares the entire string (m@^...$@) which is not a good thing. HTTP_REFERER will most likely be something like http://bugzilla.mozilla.org/show_bug.cgi?id=26257 which won't match ^https?://bugzilla.mozilla.org$. The case insensitivity, however, is good :)
Attached patch Patch v3 (doh!) (obsolete) (deleted) — Splinter Review
Attached patch patch v4 (obsolete) (deleted) — Splinter Review
v3 was missing a ')' at the end of the if statement. w/that change, it works.
>if there is no Referrer header, just let it through. This won't work, because it's possible for an attacker to make your browser not send a referrer. Mozilla doesn't send a referrer when you click a link in a mail message. IE doesn't send a referrer when you come from an about: page, which a web page can send you to.
So what are you suggesting, we require people to enable their Referer header in order to use Bugzilla?
I think this solution is incomplete, viz: http://bugzilla.mozilla.org.spacebar.org/ Maybe checking that there's a slash (or :80, etc.) after the hostname would work, but there may still be ways around this (http://bugzilla.mozilla.org:pass@host.com/ in some browsers, perhaps?). I think the best solution is for the form to include a hidden sequence number, as previously suggested. Then an attacking page would need to fetch the page and parse it to get a sequence number (a lot of trouble); if this number is derived from or otherwise associated with the username, then I don't think it is exploitable at all. It's probably not smart to base it on the IP, since some people browse from proxies which have multiple IP addresses.
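To make the bypass concrete, here is a self-contained comparison of a bare prefix match against one that requires the host to end at a port or a slash; the host names are only examples:
-----
use strict;

# A bare prefix match also accepts hosts that merely *start* with ours.
my $loose = qr{^https?://bugzilla\.mozilla\.org}i;
# Requiring an optional port and then "/" or end-of-string closes that hole.
my $tight = qr{^https?://bugzilla\.mozilla\.org(?::\d+)?(?:/|$)}i;

for my $referer ('http://bugzilla.mozilla.org/show_bug.cgi?id=26257',
                 'http://bugzilla.mozilla.org.spacebar.org/attack.html') {
    printf "%-60s loose=%d tight=%d\n", $referer,
           ($referer =~ $loose ? 1 : 0), ($referer =~ $tight ? 1 : 0);
}
-----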
I can't see how IE not sending a referer header from an about: page has anything to do with Bugzilla security, since no IE about: page has a link to Bugzilla (and if someone hacks it to create such a link, then that is a bug in IE). No referers in links from email messages bugs me though, since someone could spoof an email message about a bug in Bugzilla that caused a developer to go to that bug and change it. I'd like to accommodate people who turn off their referer headers, so I think the right solution is to return a confirmation screen (containing a hidden randomly-generated token) when a user submits changes to a bug without a referer header. However, unless someone can convince me that there are a significant number of Bugzilla users browsing with referer headers off (or legitimate emails being sent out with URLs for posting bugs), requiring a referer is a good short-term solution.
I guess I convinced myself: Between early April and today (July 18), 11506 unique IP addresses POSTed to process_bug.cgi, of which 534 did not include a "Referer" header. That is a rate of almost 5%, which is significant. If we want to accommodate these users (feel free to try to talk me out of it if you think excluding them is better or just don't want the pain), then the alternative is to throw up a confirmation screen for non-referers that contains a hidden token. The new tokens table might be useful for this purpose, while you might be able to steal the form for confirming the submission from the mid-air collision screen. In case you are curious about how I came up with the numbers:

Unique IP addresses POSTing to process_bug.cgi:
grep -h "POST /process_bug\.cgi" access_log.* | sort -u -k 1,1 - | grep -c .

Addresses that did not include a "Referer" header:
grep -h "POST /process_bug\.cgi.*\"-\"" access_log.* | sort -u -k 1,1 - | grep -c .
cc:ing Ben Bucksch for his opinion since his Mozilla-based Beonex web browser doesn't send Referer headers in HTTP requests.
>I can't see how IE not sending a referer header from an about: page has anything >to do with Bugzilla security, since no IE about: page has a link to Bugzilla >(and if someone hacks it to create such a link, then that is a bug in IE). In IE, about:<b>foo</b> shows an HTML page with <b>foo</b> as the contents. Web pages can link to or redirect to these URLs. So while no built-in about: URLs contain links to Bugzilla, a malicious web page could create an about: URL with such a link or automatic form submission.
Does IE consider this a feature?
I doubt it.
I mean, I doubt IE considers it a feature that about: URLs can be used by malicious web pages to avoid sending the referrer field.
> cc:ing Ben Bucksch for his opinion

I think that Referer was a bad idea. You won't be able to make me turn it on. I do have some cookies on, of which the Bugzilla login cookie is arguably the most useful one. Beonex Communicator ships with Cookies on, "Session cookies" on, Referer off. There is no UI to turn the Referer on. Having Referer off does not usually imply that cookies are off.

"Jake" wrote:
> It's the classic give/take, security/ease of use, etc, etc, etc. :)

No, webapps must work no matter what my tradeoff preferences for the browser are.

> I'd guess that if somebody has HTTP_REFERER turned off 'cause they think
> it's a privacy issue, they probably have cookies off, too.

No. Many important sites don't work with cookies off. That's not true for Referers, luckily. And it shouldn't change.

Suggested fix: IMO, it is unwise to rely on Referer or cookies. Embed a session key (or "magic token", as terry called it) in the page as hidden INPUT, which must be present in the submit. That's the only way I know to ensure that the submit originates from the bugzilla page.
removing patch keyword... from the discussion it's obvious this still needs work.
Keywords: patch, review
If there's not a generally agreed upon patch on this by Monday night (7/30) I will push this back to 2.16. There's too much argument on this to let it hold up 2.14.
ok, I'm a week and a half past due, but here's what I threatened to do. -> 2.16 If for some reason this gets a patch everyone can agree on before 2.14 ships, feel free to move it back to 2.14
Priority: P3 → P1
Target Milestone: Bugzilla 2.14 → Bugzilla 2.16
primary motivation for breaking the dependency is that I really don't want to retarget the tracking bug, it needs to stay on 2.16. However, the tracking bug is for "untrusted content being echoed to bugzilla users", which this bug isn't anyway, so it really didn't belong there anyway. Although this bug could be considered a security issue, it's about blocking attempts to do bad things from other sites, not about making Bugzilla content trustable.
No longer blocks: 38852
tracking bug needs to stay on 2.14, scuse me.
Status: NEW → ASSIGNED
-> New Bugzilla Product Also, reassign to default owner
Assignee: jake → myk
Status: ASSIGNED → NEW
Component: Bugzilla → Creating/Changing Bugs
Product: Webtools → Bugzilla
Version: other → unspecified
A cheap, but working trick would be to require the username to be present in one of the form elements like: <INPUT TYPE="hidden" NAME="bugzillauid" VALUE="tinus_mozilla@blib.win.tue.nl"> The submission page could check if this matches the logged in user. I can't think of an easy way to hack that.
> A cheap, but working trick would be to require the username to be present Doesn't work, if the attacker knows/guesses my username (which is my email address).
Comment on attachment 40635 [details] [diff] [review] patch v4 needs-work based on previous comments on bug
Attachment #40635 - Flags: review-
Comment on attachment 40628 [details] [diff] [review] Patch v3 (doh!) ditto
Attachment #40628 - Attachment is obsolete: true
Attachment #40628 - Flags: review-
Comment on attachment 40608 [details] [diff] [review] Patch v2 ditto
Attachment #40608 - Attachment is obsolete: true
Attachment #40608 - Flags: review-
Comment on attachment 36943 [details] [diff] [review] Validate the HTTP_REFERER based on HTTP_HOST ditto
Attachment #36943 - Attachment is obsolete: true
Attachment #36943 - Flags: review-
We're too close to 2.16 to have time to review this, and this obviously is going to require more work than we can do in two weeks to keep everyone happy. -> 2.18
Target Milestone: Bugzilla 2.16 → Bugzilla 2.18
As long as bugzilla supports attachments with type (text/html) that can be added to bugs, and those attachments are hosted on the same domain, this will be harder to fix than anybody here has imagined. Neither checking the referrer nor embedding a ticket into the form will prevent somebody from attaching a html file that adds comments to a bug. My attachment is less than 20 lines and it will do just that (except that it only submits the form without filling in any fields.) You are prevented from exercising this attack from other servers by the javascript security model. However, when there is a cross-site scripting vulnerability (such as the attachment mechanism), it allows a hacker to put html code on the server that is trusted by javascript and makes this problem impossible to fix. So the attack boils down to getting the victim to visit the attachment. Not too hard. The attachment could be attached to a different bug, and one could craft the attack such that the victim did not know they had made a comment, by submitting the form as an image source, making the frame too small to see, or giving it a css display:none property. One could also customize the javascript to only target certain users or to work only at a specified time.
1010mozilla@Ostermiller.com: using a different hostname for attachments is bug 38862.
Bug 38862 blocks this bug then. Can somebody add it to the dependency?
here we go
Depends on: 38862
get serious
happen to be here
Attached file foo.doc... (obsolete) (deleted) —
testing...
There is a much easier way to do this.... Include the userid and either the token (wise or unwise??) or some other reference to something stored in the logincookies table in the form data and make process_bug compare that to the logged in user before accepting the data without repeating the login page. This probably needs a per-user (per-ip-address? both?) override to keep from breaking simple scripts that use wget, for example.
The thing to appreciate about the REFERER header is that it is really only sent in one special case scenario; for every other scenario, the referer can not be determined and is not sent. The fact that the special case also happens to be the most common case does not change the reality that there are many legitimate instances in which it is not sent. The only time that it can be sent is when one server-based web page links to another server-based web page. The emphasis being on the "server". Any webpage that is on a local computer or in email etc. will not generate a referer. This is also true for any program that invokes a webpage. Thus there are many ways to generate the bogus webpage request that is being discussed here. Now, I realise that most devs on this site would not be foolish enough to run Outlook Express (or at least not admit it). But the same can not be said of the target audience for bugzilla. All that is needed is to send a webmail message that invokes a hidden pop-up or does a redirect etc etc etc. Send that message to a mailing list which happens to be of devs with accounts and cookies for the database.... and presto.. you have just forged a bug entry, complete with a legit ip address. Frankly, whatever wall you erect, there will always be a way to get around it. The question then becomes, what is the level of damage that can be done by a forged entry? And how difficult can you make it to forge the entry without adversely affecting the usability of the product and also without expending a disproportionate amount of dev time on the problem. Just my 2 cents worth, hope that it is helpful. Erik Perrohe
Sounds like another good argument for configurable login token invalidation. If the cookie would not be accepted if it had not been used in a certain amount of time, the risk of this would be reduced.
Flags: blocking2.18?
This problem has existed since Bugzilla was born, and someone upgrading from 2.16 to 2.18 isn't going to lose any security by upgrading if this isn't fixed. I don't think we stand a chance of fixing this for 2.18 and still have 2.18 ship in a reasonable amount of time.
Flags: blocking2.18? → blocking2.18-
Target Milestone: Bugzilla 2.18 → Bugzilla 2.20
remove the bug immediately
I have my idea about this question. Taking!
Assignee: myk → LpSolit
OS: other → All
Hardware: Other → All
Attached patch patch, v5 (obsolete) (deleted) — Splinter Review
I use tokens here to determine if the user can submit changes or not. By default, all modifications from an external website are rejected. If the referer is not known, a confirmation request is displayed. The user can change the behavior thanks to a new parameter in the permissions page. This may not be optimised. This is my first idea. justdave, what is your opinion about this?
Attachment #40635 - Attachment is obsolete: true
Attachment #77473 - Attachment is obsolete: true
Attachment #124599 - Attachment is obsolete: true
Attachment #168918 - Flags: review?(justdave)
As discussed in IRC: you're adding things to profiles, and you're changing the userprefs so that it allows users to set individual preferences; those are not part of this bug, but part of the groundwork you need before this bug can be addressed. As such, it seems like this bug (or at least your current solution) should be marked as 'dependson bug 98123'; if you try and implement your own method of allowing individual user preferences for things, the two solutions will conflict with one another.
> I use tokens here to determine if the user can submit changes or not. Does this uglify every URL with a token? Does it prevent people making modifications to Bugzilla using scripts such as bugzilla-submit? > If the referer is not known, a confirmation request is displayed. Does this make Bugzilla unusable for those who turn off the Referer header for privacy reasons? Gerv
(In reply to comment #63)
> Does this uglify every URL with a token?

No. Tokens are only used when the user comes from an external website or its origin is not known (no referer). An option allows the user to decrease the "security level" if desired, so that no confirmation is required when the referer information is not given to bugzilla.

> Does it prevent people making
> modifications to Bugzilla using scripts such as bugzilla-submit?

I don't think so. Disabling the option mentioned above allows you to deactivate confirmations if the referer is not known. By default, this option is turned on as most users do not use such scripts. Actually, if the user is identified as submitting changes from outside the installation, then either changes are rejected (by default) or a confirmation is required (when the option is turned off). What I saw in previous patches submitted here was to reject all changes, which may appear too restrictive.

> Does this make Bugzilla unusable for those who turn off the Referer header for
> privacy reasons?

No. As said above, if the referer is turned off, a confirmation is required to submit changes, but you still can. By turning the option off, no confirmation is required to make changes.

In summary:
1) referer header turned on + changes submitted from the outside: changes rejected (default) or confirmation required (option turned off)
2) referer header turned off: confirmation required (default) or changes accepted without any confirmation (option turned off)
3) changes submitted from inside the bz installation: changes accepted without any confirmation!

The patch I submitted here is only a first try. I first want to know if justdave and myk agree with such a solution. Details, especially where to put the option about the "security level" (actually in userprefs.cgi under the permissions tab), will be fixed later. All comments are welcome! :)
Status: NEW → ASSIGNED
The theory on this sounds good. post_bug should be a little more lenient than process_bug of course, since it's much more likely to have off-site submits that create legitimate bugs. Something else I suggested on IRC that doesn't look like it made it here... if the user's credentials are passed in as part of the form post, you can probably assume it's legit. 99% of those scripts don't use cookies and pass in the credentials with the form post every time they submit something. If the user wasn't already logged in and is forced to as part of the submit, this will also be the case. Hopefully the quick stop at the login screen would clue them in.
Depends on: 98123
Target Milestone: Bugzilla 2.20 → Bugzilla 2.22
Frédéric: one thing worries me about your configuration system.

> 1) referer header turned on + changes submitted from the outside:
> changes rejected (default) or confirmation required (option turned off)
>
> 2) referer header turned off:
> confirmation required (default) or changes accepted without any confirmation
> (option turned off)

It seems that for one of these cases, turning the option off enables the confirmations, whereas for the other, turning the option off disables the confirmations. I think this might end up being rather confusing. How about the following simpler scheme: For all circumstances except "referer turned on, submission from inside Bugzilla installation" (when, of course, everything Just Works), confirmation is required. If you turn the option off, confirmations are turned off. Gerv
Sounds reasonable too. OK to turn confirmations off in all cases if the "security" option is turned off. But what do we prefer if this option is on? Unconditionally reject changes or request a confirmation before submitting? I think a confirmation is fine here too... Previous patches submitted here unconditionally rejected changes when coming from the outside. Do we still want this??
question: how do we pass through that the change has been confirmed if the user has referer headers disabled? Adding a param to the form data when they submit that says they've already seen the commit confirmation dialog? If so, Joe Hacker that we're trying to prevent will just add this param to his URL and bypass the dialog.
(In reply to comment #68)
> question: how do we pass through that the change has been confirmed if the user
> has referer headers disabled?

My answer is for the case when the "security" option is turned on (if turned off, no check, no confirmation, under the responsibility of the user): When you reach bugzilla and either the referer is on and identified as coming from the outside, or the referer is off, a new token is created and saved in the tokens table. The user has to confirm in order to submit changes. The confirmation page contains this token, which has to be the same as the one in the tokens table in order to go further. Right after that, the token is removed from the tokens table so that it cannot be used again. There is no way to spoof this, because if you try to create your own token, this one will not be the same as the one in the tokens table, if any!
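A rough sketch of that issue/confirm/delete cycle against the existing tokens table; the column names and the 'confirm_change' type are assumptions for illustration, not the actual patch:
-----
use strict;

# Issue a one-time token when the submission looks external (bad or
# missing referer); it is echoed back in the confirmation form.
sub issue_confirmation_token {
    my ($dbh, $userid) = @_;
    my @chars = ('0'..'9', 'a'..'z', 'A'..'Z');
    my $token = join '', map { $chars[rand @chars] } 1 .. 16;
    $dbh->do("INSERT INTO tokens (userid, issuedate, token, tokentype)
                   VALUES (?, NOW(), ?, 'confirm_change')",
             undef, $userid, $token);
    return $token;
}

# On resubmission the token must exist for this user; delete it right
# away so it can never be replayed.
sub check_confirmation_token {
    my ($dbh, $userid, $token) = @_;
    my ($ok) = $dbh->selectrow_array(
        "SELECT 1 FROM tokens
          WHERE userid = ? AND token = ? AND tokentype = 'confirm_change'",
        undef, $userid, $token);
    $dbh->do("DELETE FROM tokens WHERE userid = ? AND token = ?",
             undef, $userid, $token) if $ok;
    return $ok ? 1 : 0;
}
-----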
The standard solution for this kind of hole ("cross-site request forgery" or "session riding") is to include a token in every form. If the token is missing or incorrect, show the form again with a new token and ask the user to confirm the submission. The referrer header should never be used for security: some users have extensions to disable all referrers (or even spoof all referrers to be the requested URL), and you can already tell whether the form was on Bugzilla by verifying that the token is correct.
But the problem with requiring a token for everything is that scripts aren't going to have tokens, and we don't want to break scripts. But like I said earlier, most of the scripts authenticate with every request, and an attacker isn't going to know the user's password to stick it in a URL. So if they used a cookie from a previous login to authenticate, then we require the token, but if they passed credentials in with this request then we don't? Does that sound logical? We'd need an API to be able to ask Bugzilla::Auth which method was used, since some plugins don't even use cookies... oof
That makes sense. If the request includes username/password parameters, ignore the cookie and don't require a token.
We already require a token with every request. It's called the login cookie. The only situation where this isn't appropriate is during "session riding", as X says, i.e. when the submission comes from outside the Bugzilla installation. Using Referer is fine for this. A malicious form in a web page can't forge the Referer. If you aren't session riding, of course you can forge it - but then you don't have a login cookie, so you are no further forward. We fail closed - if there's no Referer, we ask for confirmation. This isn't insecure unless someone is daft enough to install an extension which sets the Referer to e.g. the same as the target URL. And that's just violating the HTTP RFC. If Referer is present, it should be correct. And anyway, who's going to write an exploit for the three people with such an extension? I don't like the idea of a token on every URL. This would break bookmarks, for example, as the token would expire. And saved queries, which wouldn't have one. And it's ugly. Gerv
There are two ways that I know of that a malicious cracker could circumvent the referrer check and force users to make changes to bugs. 1) bugzilla attachments: bug 38862 (which I don't appear to have access to anymore) 2) Users that use a proxy that sets external referrers to be the home page of the current site (a 'privacy' type of setting). For case #1, you could check that attachments.cgi is not in the referrer (something that the current patch does not attempt to do). For case #2, I know of no referrer workaround. You are basically screwing those users with, "if you set up a proxy that fakes the referrer, it's your problem". Except the data in bugzilla doesn't belong to them and they may or may not care. The token is the best way to fix this bug. The token should only be required for actions such as making changes to a bug, adjusting settings, or creating a new bug. Therefore the token doesn't need to be in urls and most bookmarks would work just fine. A token is needed to submit changes. Nothing special needed to view information.
Stephen: your point 1) is reasonable. There's work going on in the bug you can't see to try and sort that out. We need to decide whether this is just the general case of that, and whether the two problems can be solved by the same mechanism. Re: point 2, it's possible. But the chances of such a person with sufficient Bugzilla permissions on the correct Bugzilla coming across such an exploit are extremely slim. If tokens were only required for changes, that would be better than for everything, certainly. Gerv
Referers are untrusted data, and should not be used in a security context. I could do all sorts of things with referers, I'm sure, even from the server side. There are probably clever things I could do with JavaScript to defeat a Referrer. What we really need are persistent, server-side, in-memory data structures to hold an authentication token from the show_bug page, but of course those are impossible with CGIs in the current way that we work. :-) safe_mode should be a param, not a preference, I would think. I think the admin would rather control it. Of course, with the "defaults" we're working on in bug 98123, the admin could control the preference globally, so it's OK that it's a preference, I suppose. Perhaps a better solution would be for Bugzilla to store a randomly-generated (once-per-login) token in a cookie, and require it on all submissions from show_bug? show_bug can just read it in as a hidden input variable.
show_bug.cgi does not need to read or define a token. Displaying bugs is fine in any case (as long as you are allowed to see them). What we don't want is to modify existing bugs; that's why my patch uses tokens in process_bug.cgi only. Do we also want checks in post_bug.cgi (creation of new bugs)? And what about edit*.cgi pages in case you have sufficient privileges to access them? I agree with gerv in his comment 75 when saying "If tokens were only required for changes, that would be better than for everything, certainly.".
(In reply to comment #77) > show_bug.cgi does not need to read or define a token. Then I'm unclear on how this is a workable mechanism to prove that we actually came from a Bugzilla page. Somewhere there must be evidence that comes *from another page* that goes into the processing page. Everything else can be worked-around, AFAIK.
I think he actually means "does not need to read, but does need to define". I'm still unclear as to under what circumstances the proposed Referer checks can be subverted by someone who is doing session riding. Could someone give me an example? Gerv
Having some forms check referrers instead of using tokens for all forms would be: * Annoying for users who disable referrers. They would have to confirm the submission of every form. * A security hole for users who spoof referrers. Session riding attacks will work against these users. Relying on referrers would also force Bugzilla to reject GET requests for those forms, because users can add links in Bugzilla comments.
(In reply to comment #80)
> * Annoying for users who disable referrers. They would have to confirm the
> submission of every form.

Not if they turned off the pref for their account. Yes, this would leave them possibly open to attack, but they'd be a tiny minority.

> * A security hole for users who spoof referrers. Session riding attacks will
> work against these users.

Well, they shouldn't violate RFCs, then. (The RFC doesn't explicitly say "don't lie in this field", but I'd expect that to be assumed for all information you send.)

> Relying on referrers would also force Bugzilla to reject GET requests for
> those forms, because users can add links in Bugzilla comments.

That, however, is a better argument. :-| Fair enough, then. Gerv
(In reply to comment #81) > (In reply to comment #80) > > Relying on referrers would also force Bugzilla to reject GET requests for > > those forms, because users can add links in Bugzilla comments. > > That, however, is a better argument. :-| Fair enough, then. Bugzilla should reject GET requests that try to change anything in the database, that's what POST is for.
Comment on attachment 168918 [details] [diff] [review] patch, v5 Waiting for 98123...
Attachment #168918 - Flags: review?(justdave)
(In reply to comment #83) > (From update of attachment 168918 [details] [diff] [review] [edit]) > Waiting for 98123... bug 98123 is in.
Comment on attachment 168918 [details] [diff] [review] patch, v5

I don't think this is the right approach. I don't think we should do a darn thing with the referer header, because that header is almost useless. The one thing that bugs me is that some people are known to run extensions that make the referer always come from the same site, mostly in order to get around referer restrictions on other sites (see http://www.squarefree.com/securitytips/web-developers.html ). I think the best thing to do here is to embed tokens in the forms, and make sure on submit that the token was issued to the user submitting. A form can be accepted without a token if the login credentials are passed as part of the form submit (which would let scripts continue to work). Eventually we should get it to where submitting with an expired token would show you the form again with a new token embedded and allow you to resubmit it, but that's another bug.
Attachment #168918 - Flags: review-
I don't think the Referer header is useless at all. It's sent by 99%+ of the world's browsers, and we can certainly use it to enable an optional feature which increases security; those who don't send it will just be in the same position as now. Those who are sending the URL of the current page are configuring their browsers to break the spec (which says the header is optional, not that it can contain untruthful values), and we should pay no attention to them. Using the Referer to add additional optional security which solves the problem for those 99%+ has the great advantage that it would only be a few lines of code. Still, the token-based scheme is also fine, although it's a fair amount of work to implement and we could end up either storing old tokens for ages and ages (remember, one will be issued with each bug viewing) or annoying people who come back to their computer after the weekend and add a comment to a bug they loaded the previous Friday. Gerv
Target Milestone: Bugzilla 2.22 → Bugzilla 2.24
QA Contact: mattyt-bugzilla → default-qa
We just fixed this for the admin pages in bug 281181. I assume the same fix could be made to the user-facing pages as well, and probably easier now that the infrastructure to handle it is in place.
Depends on: 281181
(In reply to comment #87) > We just fixed this for the admin pages in bug 281181. I assume the same fix > could be made to the user-facing pages as well, and probably easier now that > the infrastructure to handle it is in place. I'm not sure my fix in bug 281181 is compatible with email_in.pl, unless we don't require a token if the bug is changed by email.
Due to scripts like email_in.pl, the approach used in bug 281181 won't work. So we have to find another approach. This bug is now pretty low priority in my TODO list so reassigning to the default assignee so that someone else can play with it.
Assignee: LpSolit → create-and-change
Status: ASSIGNED → NEW
(In reply to comment #86) > I don't think the Referer header is useless at all. Sadly, referrer is useless for this purpose. It can be forged with a flash app: http://www.webappsec.org/lists/websecurity/archive/2006-07/msg00069.html or the (many years unfixed) XmlHttpRequest HTTP header splitting vulnerability in IE: http://www.cgisecurity.com/lib/XmlHTTPRequest.shtml The best approach to avoiding CSRF is something like the one I recommended in bug #354274.
(In reply to comment #89) > Due to scripts like email_in.pl, the approach used in bug 281181 won't work. So > we have to find another approach. email_in.pl is email, it has nothing to do with web page access. It can have its own methods of authenticating who sent it. Don't let that block this.
Whiteboard: security → security [wanted-bmo]
Whiteboard: security [wanted-bmo] → [sg:want]security [wanted-bmo]
Putting this back in the security group to make sure it doesn't get lost, and shows up in the reports showing open security bugs.
Group: bugzilla-security
Mozilla had penetration testers checking out our various websites this last week, and they pointed out the XSRF issues in the following pages: attachment.cgi (edit attachment details) userprefs.cgi process_bug.cgi
While fixing the XSRF attacks is a worthy goal, it won't fix the bug while attachment.cgi lets any user display arbitrary active content from the same origin as the bugzilla installation. While that XSS hole exists, I can write a page that will open the real forms (possibly in a hidden iframe) and submit them. This will bypass pretty much any XSRF countermeasure available.
That's bug 38862.
Due to related implications between this bug and bug 38862, I'm going to fix this one at the same time.
Assignee: create-and-change → LpSolit
Status: NEW → ASSIGNED
... This means this bug is a blocker too. I will use a much simpler approach than what I did 4 years ago.
Flags: blocking3.2.1+
Flags: blocking3.0.7+
How are they related, and why would this be a blocker??
(In reply to comment #99) > How are they related, and why would this be a blocker?? Because you can call process_bug.cgi from an <iframe> in an attachment, making the other patch completely useless against this kind of attacks.
(In reply to comment #100)
> (In reply to comment #99)
> > How are they related, and why would this be a blocker??
>
> Because you can call process_bug.cgi from an <iframe> in an attachment, making
> the other patch completely useless against this kind of attacks.

That wouldn't matter--you can't be logged in on the attachment host, so you couldn't call process_bug. That's the whole point of the attachment patch.
Oh, but you could load the main site, I see. In that case, really, I don't see how the attachment patch does us any good.
(In reply to comment #102) > Oh, but you could load the main site, I see. Yes, exactly. I tried, and this works (unfortunately) well. > how the attachment patch does us any good. Wrong bug for this discussion. :) The answer is already in the other bug.
This bug, when fixed (and we expect to fix it in our upcoming point releases) will affect all integrators with Bugzilla who access process_bug.cgi directly. You will now need to load show_bug.cgi, parse out the token from the HTML, and then include it with your calls to process_bug.cgi.
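For script authors, the extra step looks roughly like this (an LWP sketch; the regular expressions are assumptions about the generated HTML rather than a stable API, and a real update must also echo back the rest of the edit form's fields):
-----
use strict;
use LWP::UserAgent;

my $base = 'https://bugzilla.example.com';          # illustrative installation
my $ua   = LWP::UserAgent->new(cookie_jar => {});   # assumes you are already logged in

# 1. Load the bug and pull the hidden token (and delta_ts) out of the edit form.
my $page = $ua->get("$base/show_bug.cgi?id=26257")->decoded_content;
my ($token)    = $page =~ /name="token"\s+value="([^"]+)"/;
my ($delta_ts) = $page =~ /name="delta_ts"\s+value="([^"]+)"/;
die "no token found -- not logged in?" unless $token;

# 2. Include the token in the process_bug.cgi POST along with the change.
my $resp = $ua->post("$base/process_bug.cgi", {
    id       => 26257,
    token    => $token,
    delta_ts => $delta_ts,
    comment  => 'Updated from a script.',
    # ...plus the other form fields echoed back from show_bug.cgi...
});
print $resp->status_line, "\n";
-----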
I'm assuming that email_in.pl will continue to work after this fix. Just checking.
(In reply to comment #105) > I'm assuming that email_in.pl will continue to work after this fix. Just > checking. Yes, don't worry. email_in.pl will create a token itself and pass it to process_bug.cgi.
You know, one problem with this is that it will generate a *ton* of tokens that will never be used. Most bugs that are looked at are not modified. We should keep the token timeout on this shorter than 3 days. In fact, we should probably keep the token timeout shorter than 3 days for all our XSRF tokens, and we should clean them up with some regularity. Also, as far as I'm aware, this will be the only INSERT that show_bug.cgi does (which will also change some performance aspects of Bugzilla, I suspect).
(In reply to comment #107)
> You know, one problem with this is that it will generate a *ton* of tokens that
> will never be used.

Right, and that's a problem.

> Also, as far as I'm aware, this will be the only INSERT that show_bug.cgi does
> (which will also change some performance aspects of Bugzilla, I suspect).

Any other idea to fix this bug? :)
(In reply to comment #108)
> (In reply to comment #107)
> > You know, one problem with this is that it will generate a *ton* of tokens that
> > will never be used.
>
> Right, and that's a problem.
>
> > Also, as far as I'm aware, this will be the only INSERT that show_bug.cgi does
> > (which will also change some performance aspects of Bugzilla, I suspect).
>
> Any other idea to fix this bug? :)

The token maybe doesn't have to be written into the db. Instead, it can be cryptographically signed with a secret local to the site and the user's login. Let me know if that doesn't make sense and I'll elaborate. It's hard to type on my iPhone :-)
Terry, where would you store the token? In a cookie? What happens if cookies are rejected by your browser?
(In reply to comment #110)
> Terry, where would you store the token? In a cookie? What happens if cookies
> are rejected by your browser?

That's the beauty of it. The server doesn't have to store the token at all; it just regenerates it again when it needs it. Here it is, with details: There needs to be (in localconfig or something like that) a site-wide secret. This is just a random string, probably generated by checksetup.pl. When you go to show_bug.cgi, it generates a token by doing something like this: append together the current time, the bug number, the user's userid, and the site-wide secret. Take all that and calculate its MD5 checksum. The current time appended with that checksum is your token. show_bug.cgi puts the token in the page in some hidden form element. It does not write it down anywhere else. When process_bug.cgi gets called, it retrieves the token and checks it. It makes sure that the timestamp is not more than an hour or two old. It takes that timestamp, the bug number, the user's userid, and the site-wide secret, re-calculates the checksum, and makes sure that it matches. Only then will it go ahead and process the bug. I think this has all the security benefits of the randomly created tokens, and it does not require any writes to the database.
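A minimal sketch of that scheme using Digest::MD5 from core Perl; the secret, separator and lifetime here are placeholders, and the real implementation ended up in Bugzilla/Token.pm:
-----
use strict;
use Digest::MD5 qw(md5_hex);

my $SITE_SECRET   = 'a-long-random-string-generated-by-checksetup.pl';
my $MAX_TOKEN_AGE = 3 * 24 * 60 * 60;   # seconds

# The token is the creation time plus a checksum over time/bug/user/secret.
sub issue_hash_token {
    my ($bug_id, $user_id, $time) = @_;
    $time ||= time();
    my $checksum = md5_hex(join('*', $time, $bug_id, $user_id, $SITE_SECRET));
    return "$time-$checksum";
}

# To validate, re-derive the checksum from the embedded time and compare,
# refusing tokens older than the maximum age.
sub check_hash_token {
    my ($token, $bug_id, $user_id) = @_;
    my ($time, $checksum) = split(/-/, $token, 2);
    return 0 unless $time && defined $checksum && $time + $MAX_TOKEN_AGE >= time();
    return $checksum eq md5_hex(join('*', $time, $bug_id, $user_id, $SITE_SECRET));
}
-----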
Tweak: you must use a separator char when concatenating the bits of data, to avoid various classes of attack. But otherwise, great :-) Gerv
Instead of the user's userid, I'd use their current login token, if we can get it. Otherwise, I have vague concerns about this method, but nothing concrete. It does seem a little too easy to lose the site-wide secret and then have it all go bad. Perhaps we could regenerate a new secret on a regular basis, though most attackers could do anything they'd like to do within a few minutes, so I'm not sure that'd help.
(In reply to comment #113)
> Instead of the user's userid, I'd use their current login token, if we can get
> it.

I would prefer to use the user ID, so we can directly have it from Bugzilla->user, and we don't have to worry if the browser doesn't accept cookies.

> It does seem a little too easy to lose the site-wide secret and then have it
> all go bad.

As it would be stored in localconfig, this information would be as hard to get as the DB name and password. And nothing would go really bad. Even if a hacker manages to get it, he still cannot do more harm than he can do currently.
(In reply to comment #104) > This bug, when fixed (and we expect to fix it in our upcoming point releases) > will affect all integrators with Bugzilla who access process_bug.cgi directly. This shouldn't affect Deskzilla -- before calling process_bug.cgi Deskzilla always loads full form data from show_bug.cgi, and just submits the parameters it doesn't understand. Dunno about Mylyn, if they don't do that as well, they probably should. Max, thanks for having added me to CC.
Is there any additional risk in just using the DB password as the secret? It would be easier than having to generate one, find somewhere to store it (we shouldn't really be writing to localconfig ourselves) and remembering to protect it and set all the permissions correctly. I guess it would reveal the DB password if MD5 were reversible. Fairly unlikely, though - all current attacks have been about finding collisions. You can't reverse a hash if it's lossy. Shouldn't we be using SHA anyway? Gerv
Not all DBs have passwords. It's easy enough to generate a string and add it to localconfig--checksetup.pl can safely add a new variable to localconfig without modifying existing ones. We should be using SHA, but it's not available by default with Perl, and I don't really want to require a new module just for that.
(In reply to comment #116) > Is there any additional risk in just using the DB password as the secret? Wow, I don't like this idea. This gives another way to try to guess the password. Also, there is nothing wrong in adding the secret key in localconfig.
(In reply to comment #116)
> Is there any additional risk in just using the DB password as the secret? It
> would be easier than having to generate one, find somewhere to store it (we
> shouldn't really be writing to localconfig ourselves) and remembering to
> protect it and set all the permissions correctly.

One risk I can think of is losers (like me :-)) who think that their database is safely behind a firewall and so they don't bother setting a password at all. Also, this would give people a method to do offline mass attacks on the database password. They view a bug on your site, get a token, copy it onto their machine, and spend a zillion CPU hours trying every likely 8-letter password and seeing which one comes up with the same token. I think you want a different secret for this. And you can make it longer and more obscure than the typical db password, because nobody is ever going to have a situation where they actually need to type it in.
We can generate a completely random string in checksetup.pl to add to localconfig. We can put a comment in that it can be changed to whatever you want as long as it's unique to your installation, and that we generated it at random during install. Not sure if it's worth pointing out that each server answering to the same domain name needs to have the same string or not. For Mozilla it wouldn't matter because we rsync the same file to all of the webheads anyway, but who knows.
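Generating the secret at install time is cheap; a sketch of what checksetup.pl might do (the 256-character length and the alphabet are illustrative choices):
-----
# Build a long random string for localconfig's site_wide_secret.
my @chars  = ('0'..'9', 'a'..'z', 'A'..'Z');
my $secret = join '', map { $chars[rand @chars] } 1 .. 256;
print "site_wide_secret = '$secret'\n";   # checksetup.pl would write this into localconfig
-----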
OK, fair enough, forget that idea :-) Still, Terry's scheme is much better than storing tokens in the DB. Let's do it :-) Gerv
Attached patch patch for tip, v6 (obsolete) (deleted) — Splinter Review
Terry, does it do what you suggested? :)
Attachment #168918 - Attachment is obsolete: true
Attachment #351642 - Flags: review?(terry)
Attachment #351642 - Flags: review?(mkanat)
Comment on attachment 351642 [details] [diff] [review] patch for tip, v6 Sure looks reasonable to me.
Attachment #351642 - Flags: review?(terry) → review+
The secret needs to be long to prevent cracking since that may be the only secret part of the proposed concatenation. The time and bug number are known and the user ids might be known/guessable. For example, many users are trusted with seemingly minor abilities like granting canconfirm rights from which the ids for every user can be gotten. User ids also appear on shared saved searches so the id of any user who has shared a search can be used to try to crack the secret. For that matter everyone can get their own ID from their cookies, and that and a single token is all you need to try to crack the secret offline. Doesn't matter if it takes a week or a month, once you've got it it's likely to be good for a long time. It'd probably be better if the secret were more dynamic, and I guess that means storing it in a table somewhere to deal with scaling. On the other hand, even knowing the secret the token still has to be customized for each user. You can't just CSRF random victims so this scheme would be a huge win over nothing. Bugs aren't the only things that need to be protected. The attachments edit page too, but they have a nice handy associated number you can work into the same scheme. User setting forms are another set of pages that need protection. They don't fit the summary here, do they need to go in a separate bug or will this be a generic fix?
Attached patch patch for tip, v7 (obsolete) (deleted) — Splinter Review
Here is a better approach if the token is missing, invalid or too old. I now display a confirmation page asking if the changes being submitted are really what the user wants to do. It's much better than throwing an error. dveditz: the randomly generated site secret is 256 characters long by default. About attachment.cgi and userprefs.cgi, I will probably include them in this patch as we cannot disclose this bug without fixing them too, as several comments mention these pages. I will do this tomorrow. 4:45am is late enough for me. ;)
Attachment #351642 - Attachment is obsolete: true
Attachment #351652 - Flags: review?(mkanat)
Attachment #351642 - Flags: review?(mkanat)
CSRF confirmation pages are fun clickjacking targets :/
(In reply to comment #124)
> The secret needs to be long to prevent cracking since that may be the only
> secret part of the proposed concatenation.
[snip]

Yes, by default in the patch it's 256 characters in the range 0-9,a-z,A-Z. Is that enough? I have no idea how fast something like that could be brute-forced, nowadays.

> It'd probably be better if the secret were more dynamic, and I guess that means
> storing it in a table somewhere to deal with scaling.

Yeah, we could do that. It wouldn't be too hard to give it its own table (or increase the size of the current tokens table). We could just regenerate it roughly once a day. We'd keep around two secrets, the old and the new, and just try them both on every submit. Each secret would live for two days, but would only be used to generate tokens for one day.

> Bugs aren't the only things that need to be protected. The attachments edit
> page too, but they have a nice handy associated number you can work into the
> same scheme.

Yes, but that should go into a separate bug.

> User setting forms are another set of pages that need protection.
> They don't fit the summary here, do they need to go in a separate bug or will
> this be a generic fix?

If you're talking about userprefs.cgi, that would also be in a separate bug. We'd use our more normal token system for that, since that page doesn't get updated nearly as much.
(In reply to comment #126) > CSRF confirmation pages are fun clickjacking targets :/ True, but my understanding is that there's nothing we can do about that, and that clickjacking is a browser security issue, not a web app security issue.
(In reply to comment #125) > About attachment.cgi and userprefs.cgi, I will probably include them in this > patch as we cannot disclose this bug without fixing them too, as several > comments mention these pages. Please do that in separate bugs. (At least for userprefs.cgi, which can use the standard tokens system instead of this system.)
(In reply to comment #128) > True, but my understanding is that there's nothing we can do about that, and > that clickjacking is a browser security issue, not a web app security issue. "nothing you can do about it" is demonstrably false - you can avoid using click through confirmation pages, for one. You're in a better position to evaluate the tradeoffs of potential solutions than I am, and I'm not saying that there's a clear answer here, but "it's a browser security issue" is a weak argument. Your users won't care which link in the chain is at fault when their Bugzilla account is compromised, and shrugging the issue off as someone else's problem isn't very constructive.
Comment on attachment 351652 [details] [diff] [review] patch for tip, v7

>+        $vars->{'token'} = issue_hash_token($bug->id);

You know, just as a thought, maybe we should make this function accessible to the templates so that we don't have to call it everywhere.

>Index: Bugzilla/Token.pm
>+    # The concatenated string is of the form
>+    # token creation time + site-wide secret + user ID + data
>+    my $time = time();

It might be good to use Time::HiRes (which is in core Perl), here, to get a more random string.

>+    # Prepend the token creation time, unencrypted, so that the token
>+    # lifetime can be validated.
>+    return $time . '-' . $token;

With the way you're doing this, I can re-use this token any time in the next three days, to submit any changes to this bug, as many times as I want. Perhaps we should include the delta_ts of the bug as part of the token, because if that changes we'd be displaying the mid-air collision page in any case, then.

>+    my $str = $time . Bugzilla->localconfig->{'site_wide_secret'} . Bugzilla->user->id . $data;

Gerv mentioned that you have to add a separator character between each of these in order to protect against some classes of attack. This is particularly important for buglist.cgi, where otherwise this token will be valid for an entire second on any page in Bugzilla.

>+    if ($full_token) {
>+        ($time, $token) = split(/-/, $full_token);
>+        # Regenerate the token based on the information we have.
>+        my $str = $time . Bugzilla->localconfig->{'site_wide_secret'} . Bugzilla->user->id . $data;

For clarity's sake, it'd be best to have a _generate_token function that takes $time and $data and generates a token (it can be used by both issue_ and check_).

>+    $template->process('global/confirm-action.html.tmpl', $vars)
>+      || ThrowTemplateError($template->error());

Don't we already have something like this that we use for the admin pages? We can't re-use it?

>Index: Bugzilla/Install/Localconfig.pm
>+# This secret key is used by your installation for the creation and
>+# validation of encrypted tokens to prevent unsollicited changes,
>+# such as bug changes. A random string is generated by default.

Typo: unsolicited

Also add:
# It's very important that this key is kept secret. It also must be very long.

>Index: template/en/default/global/confirm-action.html.tmpl
>+  # The Original Code is the Bugzilla Bug Tracking System.
>+  #
>+  # The Initial Developer of the Original Code is Frédéric Buclin.

You are missing the rest of that Initial Developer block.

>+  [% ELSIF reason == "missing_token" %]
>+    It looks like you didn't come from the right page (you have no token
>+    to submit changes to [% script_name FILTER html%]).

I think that in this case we don't need to explain the "token" thing--most people won't know what that is, and this will be the most common error. We can leave that out.
Attachment #351652 - Flags: review?(mkanat) → review-
> This is particularly > important for buglist.cgi, where otherwise this token will be valid for an > entire second on any page in Bugzilla. That was supposed to be part of the Time::HiRes comment, not part of the separator comment.
(In reply to comment #130) > you can avoid using click through confirmation pages, for one. We would also have to avoid every single form submission button in all of Bugzilla, pretty much. And all the checkboxes or radio buttons. > a clear answer here, but "it's a browser security issue" is a weak argument. It wasn't an argument, it was a statement of my understanding. Are browser or plugin clickjacking vulnerabilities unfixable or not going to be fixed? Is it an attack that web apps should be expecting to exist forever (like XSS or XSRF) and should be protecting against?
(In reply to comment #133)
> We would also have to avoid every single form submission button in all of
> Bugzilla, pretty much. And all the checkboxes or radio buttons.

Sure. As I said, you need to evaluate the tradeoffs for any solution, and you're in a better position to do that than I am. Your comment just seemed dismissive of the concern based on who was at fault, and that rubbed me the wrong way.

> Is it an attack that web apps should be expecting to exist forever (like XSS
> or XSRF) and should be protecting against?

Maybe not "forever", but in the short term, I think you should definitely be trying to do what you can to protect against it.
(In reply to comment #131)
> It might be good to use Time::HiRes (which is in core Perl), here, to get a
> more random string.

I don't see how this will help. What matters is whether your token is too old or not. Having < 1 sec precision won't change the way our validation code works.

> we should include the delta_ts of the bug as part of the token

Good idea. I will include it in the token.

> This is particularly
> important for buglist.cgi, where otherwise this token will be valid for an
> entire second on any page in Bugzilla.

Huh? This token is valid only for mass-change, not for all pages. There is no way to build a token which is specific to the list of bugs you have in the page. And even less based on the selected bugs.

> Don't we already have something like this that we use for the admin pages? We
> can't re-use it?

We have admin/confirm-action.html.tmpl, yes, which is the template I copied to create this one. And no, I don't want to reuse it, as the admin one has much more detailed information which we don't have access to here.

> You are missing the rest of that Initial Developer block.

I copied it from the other template. This kind of comment is going to irritate me. :) We should have somewhere an empty template with the exact text to copy into new templates (or fix all existing templates so that we can copy this block from any template we want).
Blocks: 468249
Attached patch patch for tip, v8 (obsolete) (deleted) — Splinter Review
This patch is much more robust than the previous one. All generated tokens can now be used only once:

For single bug changes, the token is generated based on both the bug ID and delta_ts (which changes every time the bug is edited).

For mass-change, I reuse the session token mechanism used for admin pages, i.e. a unique token stored in the DB. Mass-change is rare enough, and the damage you can cause with it is large enough, that storing the token in the DB is worth it, with no impact on performance. Using session tokens means displaying the admin confirm-action template if something goes wrong, which is fine (powerless users cannot access this page anyway).

I now use '*' to separate fields when generating the token, as this character cannot (or at least shouldn't) appear in any of the fields used.

I also tested email_in.pl, and it's working fine.
Attachment #351652 - Attachment is obsolete: true
Attachment #352004 - Flags: review?(mkanat)
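For readers following along, a simplified sketch of the scheme described above, assuming a site-wide secret and the logged-in user's ID as inputs. The function names match the patch, but the bodies here are illustrative, not the committed code:

  use Digest::SHA qw(sha256_hex);

  use constant MAX_TOKEN_AGE_IN_SECONDS => 3 * 24 * 60 * 60;   # matches MAX_TOKEN_AGE = 3 days

  # Hypothetical stand-ins for the localconfig secret and the logged-in user's ID.
  my $site_wide_secret = 'example-secret';
  my $user_id          = 42;

  # $data is an arrayref, e.g. [$bug_id, $delta_ts] for single bug changes.
  sub issue_hash_token {
      my ($data, $time) = @_;
      $data ||= [];
      $time ||= time();
      # Join the fields with '*', which should not appear in any of them.
      my $str = join('*', $time, $site_wide_secret, $user_id, @$data);
      # Prepend the creation time unencrypted so its age can be checked later.
      return $time . '-' . sha256_hex($str);
  }

  sub check_hash_token {
      my ($data, $full_token) = @_;
      my ($time, $hash) = split(/-/, $full_token, 2);
      return 0 unless $time && $time =~ /^\d+$/;
      # Reject stale tokens outright.
      return 0 if time() - $time > MAX_TOKEN_AGE_IN_SECONDS;
      # Rebuild the token from the claimed creation time and the same data, and compare.
      return issue_hash_token($data, $time) eq $full_token;
  }

Because delta_ts is part of the data for single bug changes, any edit to the bug changes delta_ts and invalidates outstanding tokens for it, which is what makes those tokens effectively single-use.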
It looks like you didn't add delta_ts to the previous patch, but are instead using that as the time. But you're still checking for stale tokens older than a day. Isn't delta_ts often more than a day? Or is delta_ts different from the "Modified date" I see in the bugzilla ui?
(In reply to comment #137)
> But you're still checking for stale tokens older than a
> day. Isn't delta_ts often more than a day? Or is delta_ts different from the
> "Modified date" I see in the bugzilla ui?

delta_ts is often older than a day, yes, but that's not the time I check. I check the token creation time, which must be younger than 3 days (MAX_TOKEN_AGE = 3). And yes, delta_ts is the modification time you see in the UI.
> I check the token creation time

Do you? That's what the comments in issue_hash_token say, but while in the previous version (v7) $time was time(), it's now an argument, and everywhere it's passed in it looks like $delta_ts. I guess it'll still be time() if there's no passed-in delta_ts. I'm not totally sure and could be reading it wrong; I haven't used Perl in years.
Comment on attachment 352004 [details] [diff] [review]
patch for tip, v8

>process_bug.cgi: $cgi->param('token', issue_hash_token([$first_bug->id, $first_bug->delta_ts]));
>email_in.pl: $cgi->param('token', issue_hash_token([$bug->id, $bug->delta_ts]));
>edit.html.tmpl: [% issue_hash_token([bug.id, bug.delta_ts]) FILTER html %]

All three calls pass the bug ID and the bug's last modification time in an arrayref (enclosed in [ ]). There is no 2nd argument.

>Index: Bugzilla/Token.pm

>+sub issue_hash_token {
>+ my ($data, $time) = @_;
>+ $data ||= [];
>+ $time ||= time();

Here, the arrayref is stored in $data, and as there is no 2nd argument, $time is set to time(), as desired. So we really do use the current time as the creation time of the token.

>+sub check_hash_token {
>+ $expected_token = issue_hash_token($data, $time);

Here is the single place where the 2nd argument is used. As we are validating the token, we have to use its creation time, not the current time.
Attachment #352004 - Flags: review?(mkanat) → review+
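Tying that explanation back to the sketch a few comments up, the two call forms look roughly like this (hypothetical values; the illustrative issue_hash_token/check_hash_token sketched above stand in for the real routines):

  # Hypothetical bug ID and last modification time.
  my ($bug_id, $delta_ts) = (12345, '2008-12-12 10:00:00');

  # Issuing: only the data arrayref is passed, so $time defaults to time().
  my $token = issue_hash_token([$bug_id, $delta_ts]);

  # Checking: the creation time recovered from the token itself is passed back
  # to issue_hash_token() as the 2nd argument, so the same string is rebuilt.
  check_hash_token([$bug_id, $delta_ts], $token) or die "invalid or stale token";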
Comment on attachment 352004 [details] [diff] [review] patch for tip, v8 This looks nice. I haven't tested it, but I do trust your testing, and we'll of course run this through QA tests before release. You know, I like global/confirm-action.html.tmpl more than our current confirmation template. Could we just replace our current template with text pretty much like that? It's got a lot more explanation than our current template.
Putting this bug on our radar. It still needs backports (unless I'm lucky and the patch applies cleanly on the branches).
Flags: approval?
Attached patch patch for tip + 3.2, v8.1 (deleted) — Splinter Review
This patch removes two obsolete variables I left in the code, and it also fixes a problem where tokens could pile up when hacking the URL. All old tokens are now automatically deleted from the confirmation page, keeping only the newly generated one. The patch also applies cleanly on 3.2.
Attachment #352004 - Attachment is obsolete: true
Attachment #353500 - Flags: review?(mkanat)
Comment on attachment 353500 [details] [diff] [review] patch for tip + 3.2, v8.1 Yeah, looks good.
Attachment #353500 - Flags: review?(mkanat) → review+
Backport for 3.0.7 coming in a while.
Flags: approval3.2?
Due to the way process_bug.cgi works in 3.0.x (the mid-air collision code is called inside the "foreach bug ID" loop, which itself locks tables, and that is not compatible with the methods defined in Token.pm), it's not possible to backport this patch to the 3.0 branch. So this fix will only be taken in 3.2 and 3.3.
Flags: blocking3.0.7-
Flags: blocking3.0.7+
Flags: blocking2.18-
Summary: Bugzilla should prevent malicious webpages from making bugzilla users submit changes to bugs. → [SECURITY] Bugzilla should prevent malicious webpages from making bugzilla users submit changes to bugs.
Target Milestone: Bugzilla 3.0 → Bugzilla 3.2
Comment on attachment 353500 [details] [diff] [review]
patch for tip + 3.2, v8.1

>Index: Bugzilla/Install/Localconfig.pm

>+ desc => <<EOT
>+# This secret key is used by your installation for the creation and
>+# validation of encrypted tokens to prevent unsolicited changes,
>+# such as bug changes. A random string is generated by default.
>+# It's very important that this key is kept secret. It also must be
>+# very long.
>+
>+EOT

Note to self (or to the one committing this patch if it's not me): remove the useless empty line before EOT.
We use no user pref for this bug, so we don't depend on bug 98123 anymore.
No longer depends on: 98123
Blocks: 466748
Flags: approval?
Flags: approval3.2?
Flags: approval3.2+
Flags: approval+
tip:

Checking in buglist.cgi;
/cvsroot/mozilla/webtools/bugzilla/buglist.cgi,v <-- buglist.cgi
new revision: 1.391; previous revision: 1.390
done
Checking in email_in.pl;
/cvsroot/mozilla/webtools/bugzilla/email_in.pl,v <-- email_in.pl
new revision: 1.22; previous revision: 1.21
done
Checking in process_bug.cgi;
/cvsroot/mozilla/webtools/bugzilla/process_bug.cgi,v <-- process_bug.cgi
new revision: 1.417; previous revision: 1.416
done
Checking in Bugzilla/Template.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Template.pm,v <-- Template.pm
new revision: 1.99; previous revision: 1.98
done
Checking in Bugzilla/Token.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Token.pm,v <-- Token.pm
new revision: 1.57; previous revision: 1.56
done
Checking in Bugzilla/Install/Localconfig.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Install/Localconfig.pm,v <-- Localconfig.pm
new revision: 1.14; previous revision: 1.13
done
Checking in template/en/default/admin/confirm-action.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/admin/confirm-action.html.tmpl,v <-- confirm-action.html.tmpl
new revision: 1.3; previous revision: 1.2
done
Checking in template/en/default/bug/edit.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/bug/edit.html.tmpl,v <-- edit.html.tmpl
new revision: 1.151; previous revision: 1.150
done
Checking in template/en/default/list/edit-multiple.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/list/edit-multiple.html.tmpl,v <-- edit-multiple.html.tmpl
new revision: 1.55; previous revision: 1.54
done
Checking in template/en/default/global/confirm-action.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/global/confirm-action.html.tmpl,v <-- confirm-action.html.tmpl
initial revision: 1.1
done

3.2:

Checking in buglist.cgi;
/cvsroot/mozilla/webtools/bugzilla/buglist.cgi,v <-- buglist.cgi
new revision: 1.374.2.6; previous revision: 1.374.2.5
done
Checking in email_in.pl;
/cvsroot/mozilla/webtools/bugzilla/email_in.pl,v <-- email_in.pl
new revision: 1.19.2.2; previous revision: 1.19.2.1
done
Checking in process_bug.cgi;
/cvsroot/mozilla/webtools/bugzilla/process_bug.cgi,v <-- process_bug.cgi
new revision: 1.410.2.2; previous revision: 1.410.2.1
done
Checking in Bugzilla/Template.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Template.pm,v <-- Template.pm
new revision: 1.89.2.2; previous revision: 1.89.2.1
done
Checking in Bugzilla/Token.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Token.pm,v <-- Token.pm
new revision: 1.55.2.1; previous revision: 1.55
done
Checking in Bugzilla/Install/Localconfig.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Install/Localconfig.pm,v <-- Localconfig.pm
new revision: 1.12.2.1; previous revision: 1.12
done
Checking in template/en/default/admin/confirm-action.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/admin/confirm-action.html.tmpl,v <-- confirm-action.html.tmpl
new revision: 1.2.2.1; previous revision: 1.2
done
Checking in template/en/default/bug/edit.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/bug/edit.html.tmpl,v <-- edit.html.tmpl
new revision: 1.125.2.19; previous revision: 1.125.2.18
done
Checking in template/en/default/global/confirm-action.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/global/confirm-action.html.tmpl,v <-- confirm-action.html.tmpl
new revision: 1.1.2.2; previous revision: 1.1.2.1
done
Checking in template/en/default/list/edit-multiple.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/list/edit-multiple.html.tmpl,v <-- edit-multiple.html.tmpl
new revision: 1.49.2.4; previous revision: 1.49.2.3
done
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Removing this bug from the security group, as the Security Advisory was sent (bug 468249)
Group: bugzilla-security
For rich clients to Bugzilla such as Mylyn (http://eclipse.org/mylyn) that make use of the process_bug.cgi script to modify bugs and never load the html page (relying strictly on their xml), what is the recommended practice for obtaining a valid token?
(In reply to comment #151)
> For rich clients to Bugzilla such as Mylyn (http://eclipse.org/mylyn) that make
> use of the process_bug.cgi script to modify bugs and never load the html page
> (relying strictly on their xml), what is the recommended practice for obtaining
> a valid token?

If you cannot load show_bug.cgi and extract the token, I'm not sure there is anything you can do about it. That's the point of using tokens: to prevent direct changes via process_bug.cgi.
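For what it's worth, a rough sketch of that screen-scraping workaround, assuming the token is exposed as a hidden <input name="token"> in the show_bug.cgi edit form; the installation URL, bug ID, and change fields below are placeholders, and login handling is omitted:

  use LWP::UserAgent;

  my $base = 'https://bugzilla.example.com';        # hypothetical installation
  my $ua   = LWP::UserAgent->new(cookie_jar => {}); # assumes a valid login is established separately

  # Load the bug's edit page and pull the anti-CSRF token out of the form.
  # (Naive extraction; the real markup's attribute order may differ.)
  my $html = $ua->get("$base/show_bug.cgi?id=12345")->decoded_content;
  my ($token) = $html =~ /name="token"\s+value="([^"]+)"/;
  die "no token found (not logged in?)\n" unless $token;

  # Echo the token back when submitting the change through process_bug.cgi.
  # Field names other than "id" and "token" would need to match the real form.
  $ua->post("$base/process_bug.cgi", {
      id    => 12345,
      token => $token,
      # ... plus delta_ts and whichever fields the client wants to change
  });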
We do make use of show_bug.cgi but request ctype=xml. Could we get the token included in the xml output as well? I think this is all we'd require. Assuming these tokens are valid for a few days, if upon submit we detect a token failure we can re-request the xml for the bug and resubmit with a new token.
(In reply to comment #153)
> We do make use of show_bug.cgi but request ctype=xml. Could we get the token
> included in the xml output as well? I think this is all we'd require. Assuming
> these tokens are valid for a few days, if upon submit we detect a token failure
> we can re-request the xml for the bug and resubmit with a new token.

Seems reasonable. File a new RFE for that, please? I already have a wip patch for it. :)
(In reply to comment #154)
> Seems reasonable. File a new RFE for that, please? I already have a wip patch
> for it. :)

Excellent. Created bug 476678.
Blocks: 476943
Blocks: 476678
Flags: testcase?
Whiteboard: [sg:want]security [wanted-bmo] → security [wanted-bmo]
Attached patch selenium test, v1 (deleted) — Splinter Review
Committing to: bzr+ssh://lpsolit%40gmail.com@bzr.mozilla.org/bugzilla/qa/4.0/
modified t/test_bug_edit.t
Committed revision 170.

Committing to: bzr+ssh://lpsolit%40gmail.com@bzr.mozilla.org/bugzilla/qa/3.6/
modified t/test_bug_edit.t
Committed revision 142.

test_bug_edit.t doesn't exist in 3.4 and older.
Attachment #509967 - Flags: review+
Flags: testcase? → testcase+
Keywords: selenium