Closed
Bug 293363
Opened 20 years ago
Closed 9 years ago
Fix all non-origin URL load processing to track origin principals
Categories
(Core :: Networking, enhancement, P1)
Status: RESOLVED FIXED
Target Milestone: mozilla1.8beta3
People
(Reporter: and, Assigned: sicking)
Details
(Keywords: sec-want, Whiteboard: [sg:want P1] [ETA ?])
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050418 Firefox/1.0.3 (MOOX M2)
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050418 Firefox/1.0.3 (MOOX M2)
Today sees the publication of another two serious security holes involving
javascript: pseudo-URIs.
One allows arbitrary code execution by web content cross-scripting into chrome.
IE has had many, many errors like this (cross-zone scripting into the My
Computer zone), and we are almost certainly going to see more. This is not a
problem that can be fixed once; every new feature involving URIs has the
potential to introduce more scripting-into-chrome flaws.
The problem is that authors - both of web browsers and of web applications - see
URIs as pointers to resources, as the name implies. However javascript: URIs,
uniquely, aren't: they're commands to execute in current context. Every time a
coder forgets this and naively accesses a passed URI, there is a potential
security breach. This has happened again and again and has caused scores of bugs
in Moz, IE, Opera and other browsers, as well as countless vulnerable web
applications.
There is, moreover, no real use case for javascript: URIs. They were thrown in
at the time when Netscape were competing vigorously by adding as many new
features to the browser as possible, without regard to whether they were
actually useful. They have never been included as part of a standards document.
And in practice, everything a web site or application might want to do with them
is better done with event handlers.
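For illustration, here is the event-handler equivalent of a javascript: href in
plain page script (the element id and doThing() are placeholders):
    // Instead of <a href="javascript:doThing()">go</a>:
    var link = document.getElementById("go");   // placeholder id
    link.href = "#";
    link.onclick = function () {
      doThing();      // whatever the javascript: URI would have run
      return false;   // stop the href="#" navigation
    };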
So I contend that, to reduce the potential for further serious security issues
in Firefox, and the potential for harm to the web in general, Mozilla should
move towards deprecating javascript: URIs.
This could be done smoothly by adding a pref to allow, disallow or prompt each
time a javascript: URI is accessed, and slowly migrating the default setting
from allow -> prompt-with-ask-me-next-time-tickybox-unchecked ->
prompt-with-it-checked -> disallow as conditions allow.
(The one function of javascript: URIs that has any purpose is bookmarklets.
Either they could be allowed in this particular area only, or, preferably, a
different way of allowing the user to inject stored javascript code could be
provided.)
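For reference, a typical bookmarklet is just a stored javascript: one-liner
along these lines (the body is arbitrary):
    javascript:(function () { alert(document.title); })();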
I realise that changing the habits of poor web authors is not something that
will happen very quickly. But taking the first steps towards ridding the web of
the most pointless yet dangerous single feature it has ever been saddled with
would be beneficial to everyone.
Reproducible: Always
Steps to Reproduce:
Comment 1•20 years ago
Deprecating them won't help. Dropping support would help, but we can't do that,
as that would break millions of pages.
What we probably should do, though, is outlaw javascript: in chrome. As in, if
javascript: notices that it is executing in a chrome context, it should abort.
Comment 2•20 years ago
Ugh.. "javascript:" loaded via nsIWebNavigation appears to be the only "stable"
way for an embedder to inject javascript into the context of the currently
loaded page. We should have better interfaces for that purpose, but it's just
one more thing that would have to be resolved before we could reduce when JS
URLs may run.
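Roughly, the injection path in question looks like this from chrome or embedder
script; the interface and flag are real, while `browser` and the injected
script are placeholders:
    // webNavigation is the nsIWebNavigation for the currently loaded page
    var webNav = browser.webNavigation;
    webNav.loadURI("javascript:void(document.title = 'injected')",
                   Components.interfaces.nsIWebNavigation.LOAD_FLAGS_NONE,
                   null, null, null);   // referrer, postData, headers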
Comment 3•20 years ago
This bug is misstated, and filed based on lack of knowledge. Always a bad
thing, even in the face of real, worse bugs.
I'm therefore morphing this bug into something useful.
/be
Priority: -- → P1
Summary: Consider deprecation of javascript: URLs → Fix all non-origin URL types to track their origins
Target Milestone: --- → mozilla1.8beta3
Assignee
Comment 4•20 years ago
IMHO we should make it so that javascript uris are never ever given chrome
privileges. That will not take care of the worst symptoms of this problem, but
probably means that we'll have to fix some of our chromecode that currently
relies on javascript uris with chrome privileges. But that seems like a good
thing anyway.
This won't take care of the problem of cross site exploits so we should also do
what brendan suggests in comment 3.
Comment 5•20 years ago
(In reply to comment #4)
> This won't take care of the problem of cross site exploits so we should also do
> what brendan suggests in comment 3.
You mean what he suggests in his summary change?
Fix all non-origin URL types to track their origins
Assignee
Comment 6•20 years ago
Yes. I assumed it meant what we talked about the other day over IRC: tracking the
document that the uri originated from (either from an attribute in the markup or
from script in the document) and then giving the javascript-uri the same
access-privileges as that document.
Comment 7•20 years ago
sicking: as it happens, that would *guarantee* that any extension (or heck,
mozilla native chrome) that uses _content will break.
Assignee
Comment 8•20 years ago
Which part? And why? And how is _content related to this?
Comment 9•20 years ago
Filed bug 293394 on the no-chrome-javascript: idea.
Comment 10•20 years ago
We've had view-source: and data: attacks, too. These URI schemes along with
javascript: must be associated by our trusted URI-handling code with their
origin, which is always a real URI scheme (file:, http:, etc.). Let's call the
non-origin URI schemes "fake" to contrast with "real", for simplicity of jargon.
In the case where markup mentions a fake URI, that URI's origin is the markup's
origin. The basis case is that the markup is loaded from a real URI, whose
scheme, host, and port concatenated together make up the markup's origin.
For markup generated by the user typing a fake: URL into the location toolbar,
the origin should be the special null origin.
Inductively it's easy to see that any URI loaded in markup content as a src,
href, or similar attribute value has either a real origin, or the null origin.
What about URI-bearing markup generated by a document.write, or a javascript:
URL that evaluates to a non-void result that's converted to a string and used as
text/html? In both of those cases, the origin of the URIs is the markup's
origin, which is the origin of the script that called document.write, or the
origin of the javascript: URL that was evaluated to produce markup content.
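A minimal sketch of the basis case described above; the helper name is made up:
    // Origin of a real URI: scheme, host, and port concatenated.
    function originOf(realURI) {        // realURI: nsIURI for http:, file:, ...
      return realURI.scheme + "://" + realURI.host + ":" + realURI.port;
    }
    // A fake URI (javascript:, data:, ...) found in markup would inherit
    // originOf(markupURI) instead of having an origin of its own.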
This was all implemented in the Netscape 2-4 timeframe. As shaver just noted on
IRC, we joel-on-softwared ourselves pretty well in rewriting all that old code
for the Gecko-based Mozilla codebase, and of course leaving out the crucial
transitive origin relation.
/be
Status: UNCONFIRMED → NEW
Ever confirmed: true
Comment 11•20 years ago
timeless, I think you are confused. Nothing breaks except exploits if we track
origin correctly.
I don't see the point in butchering our code with more ad-hoc DISALLOW_SCRIPT,
DISALLOW_SCRIPT_OR_DATA, etc. flags, in lieu of proper origin tracking. So I'm
not in favor of fixing bug 293394.
/be
Comment 12•20 years ago
dveditz points out other fake URI schemes: jar, wyciwyg. I'll talk about the
last one briefly.
In Netscape 3 and 4, generated documents -- those created entirely or partly via
document.write, or entirely via javascript: non-void result loading -- were kept
in the cache, identified by a wysiwyg://<docid>/<real-url-goes-here> URL. Such
a URL was needed to distinguish two distinct generations, using the docid (or a
more elaborate generation number, in the old codebase -- frame lifetime special
cases intruded).
When creating such a wysiwyg: URL, it was trivial to compute the origin by a
string operation on the real URL that was being prefixed to make the fake one.
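A sketch of those string operations, following the URL shape given above (the
helper names are made up):
    function makeWysiwygURL(docId, realURL) {
      // prefix the real URL with a generation id, as Netscape 3/4 did
      return "wysiwyg://" + docId + "/" + realURL;
    }
    function realURLOf(wysiwygURL) {
      // strip "wysiwyg://<docid>/" to recover the real URL, hence the origin
      return wysiwygURL.replace(/^wysiwyg:\/\/[^\/]*\//, "");
    }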
BTW, Gecko should use its wyciwyg: fake URL for partly generated document.write
documents, and for javascript:-generated documents, but it doesn't. Slackers
who shall remain nameless implemented wyciwyg ('c' for "cache") only for the
entirely-generated document.write case.
/be
Comment 13•20 years ago
bz points out that origin is one trust identifier we store and use as a principal in
our capability-based security model. We also have certificate principals for
signed scripts. The Mozilla Classic codebase never tracked principals from
URIs, only origins, but we should build on the incomplete and buggy
channel-owner jazz to track principals.
Darin, what's the best way to do this? nsIURI2?
/be
Comment 14•20 years ago
We are pretty free to redefine nsIChannel::owner to be whatever we feel that it
should be. The frozen documentation gives us plenty of flexibility IMO:
"The owner, corresponding to the entity that is responsible for this
channel. Used by the security manager to grant or deny privileges to
mobile code loaded from this channel."
As for tagging URI objects, that could be done via a new interface optionally
QI'able from an nsIURI. nsIURI2 is probably the wrong name for that interface.
nsIPropertyBag maybe, with support for a property that yields an nsIPrincipal,
perhaps?
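From script, that idea might look something like this; no URI implementation
actually exposes nsIWritablePropertyBag today, and the property name is
invented:
    var ioService = Components.classes["@mozilla.org/network/io-service;1"]
                              .getService(Components.interfaces.nsIIOService);
    var uri = ioService.newURI("javascript:doStuff()", null, null);
    if (uri instanceof Components.interfaces.nsIWritablePropertyBag) {
      // originPrincipal: an nsIPrincipal obtained elsewhere
      uri.setProperty("origin-principal", originPrincipal);
    }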
Are we sure we need to tag URI objects? (nsIChannel::owner is not good enough?)
We have the problem that URI objects are sometimes converted to strings, and
then converted back to objects. Any meta data associated with the URI (such as
nsIURI::originCharset) is lost in the process. We can probably make any URI
object without associated principal be a "loser" URI object.
Also, keep in mind that any new interfaces required of nsIURI implementors
will impact theoretical extensions and embeddings.
Comment 15•20 years ago
darin: I'm oversimplifying based on the ancient days. If channel suffices, then
great. We have to identify origin, or rather _principal_, sources and sinks,
and make sure that the data flows between them can use a channel field.
Sources of principals are the parent document and the calling script.
Sinks are the JS API evaluate, compile, and execute entry points, at least.
Also the places where principals are strongly referenced, such as documents.
/be
Comment 16•20 years ago
As I've said before, I think this is a very bad idea. Our URI objects are passed
around, cached, and compared without any particular regard to where they came
from. IMO this should be handled by tracking who opens a channel, not who
creates the URL object.
Comment 17•20 years ago
bsmedberg: you're picking an implementation nit that I freely withdraw if it's
not necessary, you are not objecting to the *idea* ("this is a very bad idea").
The idea of associating origin principals with fake URL loads (if not with the
URLs themselves, conceptually or in implementations) is essential, and we're
hurting badly for lack of any sound implementation of it, except in the case
where the javascript: URL's channel happens to have a non-null owner.
One place where the owner is null: loading a javascript: URL from session
history (which should not even re-evaluate the javascript: URL, but that's a
different bug, alluded to here in comment 12, and brought to mind by the recent
bfcache work).
/be
Comment 18•20 years ago
Revising summary to avoid drawing more implementation-nit ire.
/be
Summary: Fix all non-origin URL types to track their origins → Fix all non-origin URL load processing to track origin principals
Comment 19•20 years ago
I'm guilty of casting the bug in terms of data structures that don't match our
world today, and I like the idea of URLs as immutable exemplars that can be
(de-)serialized and kept on a shelf, and still mean the same thing. So using
the channel is fine, provided we sink the right sources' principals.
I think sicking is going to help on this, so Darin, please feel free to sketch a
plan of attack, or even hand off the bug with Jonas's agreement.
/be
Comment 20•20 years ago
However, it's not uncommon for the caller who opens the channel to be different
from the origin of the URI. (Consider bug 292499, which describes how the
exploit in bug 293302 gets chrome privileges.) So there's a definite advantage
to using the URI object rather than the channel.
Comment 21•20 years ago
Yeah, dbaron's point is well taken, and it's the reason in part for comment #4.
I had spoken with dbaron about this before, and while we didn't really come to
a conclusion, several things seemed to point in favor of tagging URI objects.
Lots of other networking libraries do not actually make a distinction between
URI object and "channel" object. In fact, I think they were the same object in
the old netlib. There is a bit of a blurry line between the two, so we need to
figure out how we want that to change, if at all.
Brendan: I don't really have a good plan of attack :-/
Comment 22•20 years ago
s/comment #4/comment #14/ -- sorry about that!
Comment 23•20 years ago
I think it is critical that we do this. Some things to note:
* This wouldn't help with a case where JS from origin A passes a string to
a page in origin B (e.g. via the chrome install API) and origin B treats
the string as a trusted URI. This was mentioned by dbaron in comment 20.
This would be handled by bug 293394.
* A data: URI entered from the command line should not have a null origin.
It should have a unique origin. Otherwise, a data: URI in one window can
access the DOM of a data: URI in another window, despite the two being
unrelated.
* javascript: not being cached is bug 206531.
* When a javascript: or data: is returned as part of an HTTP redirect, the
origin should come from the URI that caused the redirect, not the document
that caused the request in the first place. Assuming we do this correctly,
we can then drop the DISALLOW_SCRIPT_OR_DATA on redirects. (A sketch of this
rule follows the list.)
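A sketch of the redirect rule from the last bullet, written as an
nsIChannelEventSink; the interface and nsIChannel.owner are real, but carrying
the origin principal across the redirect this way is the proposal, not current
behavior:
    var redirectSink = {
      onChannelRedirect: function (oldChannel, newChannel, flags) {
        // the redirecting channel's principal follows the redirect, so a
        // javascript: or data: target is evaluated with *its* origin
        newChannel.owner = oldChannel.owner;
      }
    };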
Comment 24•20 years ago
bz informs me that a "null origin" is exactly what I said it should be. Good.
Another point: we must make sure that a javascript: URI entered in the location
bar (or taken from bookmarks) gets the origin of the currently loaded page,
whereas a data: URI in the same situation gets a null origin. javascript: is the
only special case here, I think.
view-source:javascript: should be killed altogether, see bug 204779; so it
shouldn't need to worry about its origin. Other view-source: URIs should use the
origin of the embedded URI, as far as I can tell.
Comment 25•20 years ago
(In reply to comment #24)
> bz informs me that a "null origin" is exactly what I said it should be. Good.
See http://lxr.mozilla.org/classic/source/lib/libmocha/lm_taint.c#483 -- IIRC
someone was working on this when we reset the project around Gecko.
Anyway, yeah: null origin is like IEEE-754 NaN -- null != null and null != (any
real origin).
> Another point: we must make sure that a javascript: URI entered in the
> location bar (or taken from bookmarks) gets the origin of the currently
> loaded page, whereas a data: URI in the same situation gets a null origin.
> javascript: is the only special case here, I think.
Agreed.
> view-source:javascript: should be killed altogether, see bug 204779; so it
> shouldn't need to worry about its origin.
OTOH, why worry if the right thing happens under the general rule:
> Other view-source: URIs should use the
> origin of the embedded URI, as far as I can tell.
/be
Comment 26•20 years ago
brendan: i'm not confused, just terse. but that's now the subject of the other bug.
Comment 27•20 years ago
So, Jonas and I spoke about this and related issues for a while today. We
concluded that it's a "really hard problem" -- no sh*t, you might say! The crux
of the problem, as I see it, is all of these APIs that take a URL as a DOMString
parameter. Those APIs do not afford us any way to track the origin of the
string beyond the API call. We use the current JS context to help with this
problem, but oftentimes our code stashes away the DOMString and processes it later.
Having a mechanism to tag the origin of a URL is great, but it is only useful if
we tag early enough. It is as if we really want to tag a DOMString, which
doesn't sound like fun ;-)
The xpinstall trigger problems seem to revolve around just this problem. We
take URL strings and then load them later from a PLEvent (or from some chrome
JS) when the JS context has changed.
nsJSChannel learns about the origin of its URL by seeking out the
nsIScriptGlobalObject from its notification callbacks. This is a very backwards
way of learning about its origin. Instead, I think we need to tell the channel
what origin to use and have it fall back to a restricted (about:blank) origin in
the absence of something specific.
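In caller's terms that would mean something like the following; nsIChannel.owner
is real, while originPrincipal is a placeholder for whatever principal the
caller has in hand:
    var ioService = Components.classes["@mozilla.org/network/io-service;1"]
                              .getService(Components.interfaces.nsIIOService);
    var channel = ioService.newChannel("javascript:doStuff()", null, null);
    // say whose privileges the evaluation should get; a null owner would
    // mean the restricted (about:blank) fallback
    channel.owner = originPrincipal;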
The problem then comes down to how to convey the origin to a channel. One idea
is to tag the URL object, so that the channel can learn the origin by inspecting
its nsIURI. That works, and is slightly better than nsIChannel::owner because
it can be set earlier on, but it may still not be good enough. The problem I
described in the first paragraph is the real underlying problem that is hard to
solve.
Assignee
Comment 28•20 years ago
The passing of a uri as a string can be really subtle, which adds extra layers of
difficulty to this. For example, it's not uncommon for chrome code to pick up a url
from the content DOM and do something with it.
What makes that case extra hard is that, to the DOM code and everything it calls
(which includes the code that opens channels), it will look like chrome code that
is just going about its business. It will have no way of knowing that the string
originated from the content DOM. I.e. it's basically impossible for any
gecko-level code to distinguish this from a genuine chrome-originated uri.
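In chrome-JS terms the subtle case looks like this; every name below is
illustrative:
    // Chrome grabs a string out of the content DOM...
    var grabbed = window._content.document.links[0].href;
    // ...and later hands it to code that ends up opening a channel. Nothing
    // about the string records that it came from content, so Gecko sees only
    // a chrome caller going about its business.
    someChromeElement.setAttribute("src", grabbed);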
Comment 29•20 years ago
There is no secure way to give chrome-initiated fake URI loads the privileges of
the chrome caller when the URI is passed as a string. A string is a string, not
a labeled type.
We could in the long run try to label all data, along the lines of secure
programming languages (JIF, Lambda[DSec]). That's a big job. It looks worth
investigating, at this point. Note that you have to deal with implicit flows
(covert channels involving branches not taken, timing channels, etc.).
In the short run, I don't think we should allow chrome to load fake URLs via
strings and give those fake URI schemes that execute (namely javascript:) the
privileges of the caller.
Note that this won't break use of javascript: in chrome markup.
So I don't see a hard short-term theoretical problem, just some work to invent
reliable null principals, enforce their use, and fix what breaks when we make
string-passed javascript: URLs downgrade to them.
/be
Comment 30•20 years ago
Notice that content data flow is different: because of same-origin sandboxing,
we know that any string-expressed URI set as an image src, e.g., by content
comes from content (or from trusted chrome, if greasemonkey or something like it
is in the loop).
Therefore we do not have to downgrade content-set javascript: URLs to the null
principal, and in fact must not for compatibility back to Nav2 -- these should
use the page's origin or cert principal.
Chrome is different.
/be
Comment 31•20 years ago
So, if we build out our URL objects to optionally support an interface that
gives consumers the ability to tag the URL object with an origin nsIURI, then
that could be part of the solution. The other part of the solution would be to
convert strings to nsIURIs as early as possible and tag them right away. Then
we have to go through our chrome code and do away with stuff that converts
nsIURI to string to nsIURI. In the case of an HTTP redirect, we would assign the
origin of the new URL to be that of the redirected URL.
The big hole in this approach is the conversions from nsIURI to string to
nsIURI. Those are going to be a problem, and they are going to be tough to weed
out unless we break something hard (like deny chrome privs to javascript: URLs
constructed from a string).
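The round trip in question, spelled out; any tag on the first object is
silently lost:
    var ioService = Components.classes["@mozilla.org/network/io-service;1"]
                              .getService(Components.interfaces.nsIIOService);
    var tagged = ioService.newURI("javascript:doStuff()", null, null);
    // imagine `tagged` carries an origin principal; then some code flattens
    // it to a string and re-parses it later:
    var retagged = ioService.newURI(tagged.spec, null, null);
    // `retagged` is a fresh object -- the tag (like originCharset today)
    // did not survive the trip.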
----
Brendan, these two statements confuse me:
> There is no secure way to give chrome-initiated fake URI loads the privileges
> of the chrome caller when the URI is passed as a string.
...
> Note that this won't break use of javascript: in chrome markup.
They seem to contradict. What am I missing?
Comment 32•20 years ago
(In reply to comment #31)
> So, if we build out our URL objects to optionally support an interface that
> gives consumers the ability to tag the URL object with an origin nsIURI, then
> that could be part of the solution. The other part of the solution would be
> to convert strings to nsIURIs as early as possible and tag them right away.
> Then we have to go through our chrome code and do away with stuff that
> converts nsIURI to string to nsIURI.
That seems wasteful anyway, so not a bad thing on its own account, although
small potatoes unless profiling shows otherwise.
> The big hole in this approach is the conversions from nsIURI to string to
> nsIURI. Those are going to be a problem, and they are going to be tough to
> weed out unless we break something hard (like deny chrome privs to
> javascript: URLs constructed from a string).
The current proposal is to give such a javascript: URL the null principal,
chrome caller or no chrome caller.
> Brendan, these two statements confuse me:
>
> > There is no secure way to give chrome-initiated fake URI loads the
> > privileges of the chrome caller when the URI is passed as a string.
> ...
> > Note that this won't break use of javascript: in chrome markup.
>
> They seem to contradict. What am I missing?
Two things:
1. Even with null principals, some javascript: URLs will work.
2. We talked yesterday about distinguishing markup-expressed href, src, and
other attribute values from script-set ones. Shaver pointed out that in HTML,
it's hard to tell what was primary source and what was generated (via innerHTML
or document.write or a javascript: URL).
But for XUL, could we tell that a javascript: URL used in a XUL attribute was
primary source, therefore trustworthy? If chrome fetches a random string from
content and makes a new DOM node using that string as an attribute value, where
the attribute value may be loaded as a URL, then we must use null principals. If
OTOH our native C++ code can tell it's dealing with elements the parser created
from a real URL load (the classic codebase could tell), then we could use the
caller's (chrome) principals.
Our security model assumes same-origin sandboxing protecting content from other
content's data, so we don't have to label every datum. Chrome accessing content
violates this assumption badly.
That's a strong argument for separating chrome and content, but even if we avoid
content spoofing chrome by overriding DOM properties via JS, the content DOM
itself, as built from primary source, could have fake URLs in places that chrome
might foolishly fetch, and load. Separating XPConnect wrappers for DOM natives
(bug 281988) won't help in this case.
The best course while developing this bug's fix is to give string-expressed fake
URIs null principals, and break some things (the javascript console, at least)
temporarily, then see what needs fixing.
/be
Comment 33•20 years ago
> If OTOH our native C++ code can tell it's dealing with elements the parser
> created
I assume you mean "attributes", not "elements"? Chrome can happily take a
string from content and set it as an attribute on a parser-created chrome node.
Comment 34•20 years ago
(In reply to comment #32)
> > The big hole in this approach is the conversions from nsIURI to string to
> > nsIURI. Those are going to be a problem, and they are going to be tough to
> > weed out unless we break something hard (like deny chrome privs to
> > javascript: URLs constructed from a string).
>
> The current proposal is to give such a javascript: URL the null principal,
> chrome caller or no chrome caller.
I mean, in the chrome-calling case.
For content javascript: URLs, we should use the origin principals, which we need
to propagate. That doesn't seem too hard to fix.
The case where nsJSThunk::EvaluateScript can't find an owner arises during
history navigation, and we shouldn't even be running javascript: URLs in that
case. But since we have to re-evaluate on back or forward, until wyciwyg: URLs
are extended to handle javascript:-generated documents, we will have to use the
null principal in this "else (no owner)" case.
If anyone knows of a no-owner case other than history navigation, shout out.
/be
Comment 35•20 years ago
(In reply to comment #33)
> > If OTOH our native C++ code can tell it's dealing with elements the parser
> > created
>
> I assume you mean "attributes", not "elements"? Chrome can happily take a
> string from content and set it as an attribute on a parser-created chrome node.
Yes, sorry: attributes. Do we have ways to tell whether these have been changed
by script, or the entire element has been created by script?
/be
Comment 36•20 years ago
> If anyone knows of another no-owner case than history navigation, shout out.
Any time C++ is setting attributes, I would assume.... For example (based on
another bug I saw today), xbl:inherits. If I have a
<xul:button image="javascript:whatever">
The default (chrome!) XBL binding will set that attribute on the anonymous
<xul:image> element it creates. This is all done from C++. So that's a
no-owner case.
For that matter, use of javascript: for anything other than toplevel document
loads is probably a no-owner case... Or at least should be investigated
somewhat carefully.
Comment 37•20 years ago
> Do we have ways to tell whether these have been changed
> by script, or the entire element has been created by script?
Not that I'm aware of...
Assignee
Comment 38•20 years ago
Another problem is DOMParser where we're parsing a string created in JS.
nsDOMAttributeNode::SetPrefix will soon also set attributes with aNotify=false
(i.e. in a way that looks like it's coming from the parser).
Basically it seems RealHard to reliably tell whether an attribute-set comes from
script or not; we simply have too many features these days that set attributes
through various mechanisms.
Assignee
Comment 39•20 years ago
I should say that RealHard != Impossible. We certainly could try. The question
is, how much added value is it? The only chrome-javascript-uri case that I know
of (jsconsole) sets the attribute through a string and wouldn't be helped by us
supporting parser-generated uris.
Comment 40•20 years ago
sicking: we should think about how to fix this data-labeling problem, because if
we can't, then I'm concerned that even with split wrappers (bug 281988), and
even with string-passed javascript: URLs downgrading to null principals, chrome
will be vulnerable to content attacks.
We may simply have to "be careful", because I don't see how to solve the "chrome
accesses content's DOM" security problem without more complete data labeling and
static and dynamic checking.
/be
Assignee
Comment 41•20 years ago
Agreed. Not even wrapper separation is necessarily a help since the dangerous
data can live right in the content markup and DOM.
Comment 42•20 years ago
Right. Developers simply need to validate untrusted input, there's no system
that'll completely protect against sloppy coding if the developers don't keep
this in mind at all times. In that sense maybe explicit wrapping (read
XPCNativeWrapper) is a good thing since it makes people think a bit more about
what they're doing and why existing code does what it does.
Assignee
Comment 43•20 years ago
Actually, i'm not convinced that XPCNativeWrapper is a good idea. We've had it
around for some time and it obviously doesn't get used enough. But I guess this
isn't really the bug to debate that.
So i'm still a little confused as to what we want to do as far as protecting
against js-uris coming from content. Let me try to explain the problem my brain
is still fighting with. The solution we're suggesting is tagging all uris after
we've created them using the principal of the 'creator' of the uri (if this
tagging isn't done, the uri will use the null principal, so we default to safe).
So say in the code for <a> elements we would, after having parsed the
href-attribute, tag it with the principal of its document (or possibly the
principal of the js calling .setAttribute). This principal will then be used
when a channel is opened using this uri. This will ensure that things like
'back' will never reevaluate the js-uri using the wrong principal.
But let's say we have some piece of chrome-js that gets a url from content and
creates a link in the chrome pointing to that uri (we've had code like this
before). The chrome does this by creating an a-element, setting its .href and
then inserting it in its DOM. Alternatively, by setting the .href of an already
existing a-element that's in the DOM. The <a>-element will then grab the
principal of the chrome-page and tag the href-uri with that. When the user
clicks the link all hell will break loose, since if the uri is a js-uri it will
execute with chrome privileges since the <a>-element had no idea that the string
originally came from content.
So to fix this we could make all places where we tag uris with a principal
simply not do this if the principal is a chrome principal (i.e. the system
principal). Instead chrome-code would have to manually tag the uri with the
chrome-principal when it really needed to have a js-uri with chrome privileges.
There are two problems with this though.
1. In many cases, like the <a>.href example above, there is no API for the
chrome-code to get to the uri-object and tag it.
2. Even in the cases where it can get to the uri, existing code and extensions
of course do not do this yet, so they'll break.
Basically doing this would cause as many problems as doing bug 293394. I can
think of two ways of handling this:
A. Not make any special exceptions for chrome-principals and instead rely on
the chrome-js doing the checking it needs to before setting uris on
things or loading uris that come from content
B. Break existing code and extensions after an announced heads-up and let people
either tag their uris best they can, or find other solutions than relying
on js-uris with chrome privileges.
What do people think?
Comment 44•20 years ago
(In reply to comment #43)
> Actually, i'm not convinced that XPCNativeWrapper is a good idea. We've had it
> around for some time and it obviously doesn't get used enough. But I guess
> this isn't really the bug to debate that.
See bug 281988 comment 79.
> So say in the code for <a> elements we would, after having parsed the
> href-attribute, tag it with the principal of its document (or possibly the
> principal of the js calling .setAttribute). This principal will then be used
> when a channel is opened using this uri. This will ensure that things like
> 'back' will never reevaluate the js-uri using the wrong principal.
That's true, but note that reevaluating javascript: URLs on history navigation
is just flat wrong. With or without bfcache, we should just re-present what was
seen when the user left that page. This was done using wysiwyg: URLs in the Nav
3-4 daze, and we should do likewise with wyciwyg: URLs soon. Note that they
must be pinned in the cache till their history entry is overwritten or pruned.
> But let's say we have some piece of chrome-js that gets a url from content and
> creates a link in the chrome pointing to that uri (we've had code like this
> before). The chrome does this by creating an a-element, setting its .href and
> then inserting it in its DOM. Alternatively, by setting the .href of an already
> existing a-element that's in the DOM. The <a>-element will then grab the
> principal of the chrome-page and tag the href-uri with that. When the user
> clicks the link all hell will break loose, since if the uri is a js-uri it will
> execute with chrome privileges since the <a>-element had no idea that the
> string originally came from content.
You were so close to general data labeling, but you didn't go there! Good, but
here's the path: if we labeled all data, and the chrome link's href data were
based on content and chrome, then the 'join' of their trust labels would taint
the resulting string. We could then downgrade appropriately based on that label
(including any control dependencies).
But we're not doing any such labeling in the short run. What we must instead do
for chrome in the short run is *always* use the null principals.
> So to fix this we could make all places where we tag uris with a principal
> simply not do this if the principal is a chrome principal (i.e. the system
> principal). Instead chrome-code would have to manually tag the uri with the
> chrome-principal when it really needed to have a js-uri with chrome
> privileges.
Right, although we should identify those places first, since they should involve
chrome explicitly using javascript: URLs, and never involve chrome using data
(possibly from content) to compute a URL.
> There are two problems with this though.
> 1. In many cases, like the <a>.href example above, there is no API for the
> chrome-code to get to the uri-object and tag it.
That's a cryin' shame ;-). Null principals, bzzzt.
> 2. Even in the cases where it can get to the uri, existing code and extensions
> of course do not do this yet, so they'll break.
I know of no such extension -- do you?
> Basically doing this would cause as many problems as doing bug 293394.
Only if we don't have places (like the JS console) where chrome explicitly uses
javascript: URLs. As you note above, we could make those cases work, with some
effort.
> I can think of two ways of handling this
>
> A. Not make any special exceptions for chrome-principals and instead rely on
> the chrome-js doing the checking it needs to before setting uris on
> things or loading uris that come from content
We have to rely on chrome to be careful, since we are sharing DOM data no matter
what else we do. We're in a world where there's no sandbox isolating chrome
from content (this would be true even if we wrote all chrome code in C++, or
whatever language -- chrome code must take care).
But, there's nothing wrong with true defense in depth. If we think javascript:
URLs in chrome should default to null principals, and we can unbreak the few
things this breaks on a carefully audited, case-by-case basis, that wins, over
against auditing all the places data flow affects URLs that chrome loads (even
if only by subtle control dependencies, timing channels, etc.).
> B. Break existing code and extensions after an announced heads-up and let
> people either tag their uris best they can, or find other solutions than
> relying on js-uris with chrome privileges.
I favor this, but we should go through the exercise of unbreaking the JS console
and a few other things (if any) in our apps that we break, so we know others can
do likewise.
/be
Comment 45•20 years ago
I'm having a hard time imagining the need for any chrome code to take a URL from
content and load it in chrome. (Favicon is the only case I can imagine.) Do
you have an example from an actual extension or some chrome code in the browser?
Maybe things like the link toolbar? In all of those cases at least, it would
seem that the extension should create a small "content island" in which to load
the URL from content. For example, the favicon should probably not be loaded in
the context of a chrome document. Why isn't it loaded in the context of an
<iframe> with reduced privs?
Assignee
Comment 46•20 years ago
Another place we have is the pageinfo dialog. That one has a whitelist of
protocols, so it's safe. Then we used to have the element-properties dialog,
which, as I originally wrote it, contained links. I can think of a few maybes
too. Basically it's hard to say where these things pop up. It was always one of
the first questions when I attended the security meetings back at netscape.
I agree with brendan's attack plan here (though i'm less convinced that the
data-tainting thing can be done). If nothing else we'll get a chance to see how
possible it is to take the safe route.
Comment 47•20 years ago
(In reply to comment #46)
> I agree with brendans attack-plan here (though i'm less convinced that the
> data-tainting thing can be done).
It can be done (see http://www.cs.cornell.edu/andru/ as a starting point), but I
never said we should do it. I even said "don't go there!" ;-). It's not likely
to be practical for us, and it's not necessary for us to secure our chrome
implementations.
/be
Assignee
Comment 48•20 years ago
Ugh, there are some 150 files that call NS_NewURI so this is going to be a
decently sized patch...
Updated•19 years ago
Assignee: darin → bugmail
Updated•19 years ago
Blocks: branching1.8
Comment 49•19 years ago
When chrome loads URLs it gets from content (e.g. favicon, page info, set as
wallpaper, open link in new window), in addition to getting javascript:
principals right, it needs to make sure the page is allowed to reference the URL
(CheckLoadURI). Can we solve both problems at once, eliminating the need for
some CheckLoadURI checks as well as eliminating the need to check for
javascript: URLs?
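For reference, the existing check looks like this from chrome code; the
interface, method, and flag are real, and the two URIs are placeholders for the
page and the URL taken from it:
    var secMan = Components.classes["@mozilla.org/scriptsecuritymanager;1"]
                           .getService(Components.interfaces.nsIScriptSecurityManager);
    // throws if contentPageURI is not allowed to link to targetURI
    secMan.checkLoadURI(contentPageURI, targetURI,
                        Components.interfaces.nsIScriptSecurityManager
                                  .DISALLOW_SCRIPT_OR_DATA);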
Whiteboard: [sg:investigate]
Comment 50•19 years ago
(In reply to comment #49)
> When chrome loads URLs it gets from content (e.g. favicon, page info, set as
> wallpaper, open link in new window), in addition to getting javascript:
> principals right, it needs to make sure the page is allowed to reference the URL
> (CheckLoadURI). Can we solve both problems at once, eliminating the need for
> some CheckLoadURI checks as well as eliminating the need to check for
> javascript: URLs?
Yes, that's the goal (or a goal). Those patchwork checks, especially excluding
javascript: (and sometimes data: for no good reason) were just symptom-treating
hack attempts. They didn't cure the disease, and they sometimes cost us a limb
or two in senseless amputation of functionality.
/be
Updated•19 years ago
Flags: blocking1.8b4?
Updated•19 years ago
No longer blocks: branching1.8
Flags: blocking1.8b4? → blocking1.8b4+
Updated•19 years ago
Whiteboard: [sg:investigate] → [sg:investigate] [ETA ?]
Comment 51•19 years ago
If someone comes up with a patch, please request approval and we'll evaluate then.
Flags: blocking1.8b5+ → blocking1.8b5-
Updated•19 years ago
Whiteboard: [sg:investigate] [ETA ?] → [sg:want P2] [ETA ?]
Updated•16 years ago
Blocks: 464620
Flags: wanted1.9.2?
Flags: blocking1.9.2?
Whiteboard: [sg:want P2] [ETA ?] → [sg:want P1] [ETA ?]
Comment 52•15 years ago
sicking, if you're working on XBL2, should somebody else take this bug?
Flags: blocking1.9.2? → blocking1.9.2-
Assignee
Comment 53•15 years ago
Yes, but I think this bug is basically FIXED these days. We don't really guess on the loading principal the way we used to.
Though I would love for nsIDocShellLoadInfo.ownerIsExplicit to always be set to true.
Updated•12 years ago
Flags: wanted1.9.2?
Comment 54•9 years ago
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED