Closed Bug 746280 Opened 13 years ago Closed 3 years ago

[meta] Tracking: Run content processes with lowered rights

Categories

(Core :: DOM: Content Processes, defect, P3)

ARM
Gonk (Firefox OS)
defect

Tracking


RESOLVED INACTIVE

People

(Reporter: cjones, Unassigned)

References

(Depends on 4 open bugs, Blocks 1 open bug)

Details

(Keywords: meta)

Content processes don't need elevated privileges or fs access ... except for webgl, because in our current code we have to use the system GL implementation. So the project here is to drop the rights of content processes to the absolute minimum needed to run webgl. (All other problems can be fixed in gecko.)
blocking-basecamp: --- → +
I really do not think that this should be a blocker. Getting to the point where we meaningfully block content processes from accessing the file system will take *a lot* of work. Note that simply removing IO from the content process isn't enough. We also have to remove the ability for content processes to send a filename to the parent process and have the parent process do the IO and send back the data. Instead we would need to either disable the ability to receive filenames from the child process entirely (which is unlikely given the poor IPC-nsIInputStream support we currently have) or verify that all requests are for files that that child process has the right to read. This includes making the parent process aware of which device-storage files a child process has been granted access to through security dialogs, keeping track of where all IPC blobs are coming from and going to, etc.

I agree that doing this has great benefits. But there is no way we can accomplish all of this before the basecamp code freeze, unless we drop a lot of other features.
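The parent-side verification described above — checking that a child-supplied filename refers to something that particular child process is allowed to read — might be sketched roughly like this (Python pseudocode; `is_path_allowed` and the allowed-roots list are hypothetical illustrations, not Gecko code):

```python
import os.path

def is_path_allowed(requested_path, allowed_roots):
    """Return True only if the child-supplied path resolves inside one of
    the directories this child process has been granted access to."""
    # Normalize to defeat "../" traversal in the string itself; a real
    # implementation would also need to resolve symlinks.
    resolved = os.path.normpath(os.path.abspath(requested_path))
    for root in allowed_roots:
        root = os.path.normpath(os.path.abspath(root))
        # Accept the root itself or paths strictly beneath it.
        if resolved == root or resolved.startswith(root + os.sep):
            return True
    return False
```

For example, a request for `/storage/app1/../app2/x` against a grant of `/storage/app1` is rejected, because the normalized path escapes the granted root.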
I'm not really sure where to start with that comment. It sounds like we have very different conceptions of the security model.

If the master process isn't applying security policy, then we're just wasting time wrt defense-in-depth, and should drop any pretense of trying to achieve it.

(In reply to Jonas Sicking (:sicking) from comment #1)
> I agree that doing this has great benefits. But there is no way we can
> accomplish all of this before basecamp code freeze, unless we drop a lot of
> other features.

Can you be more specific?
(In reply to Chris Jones [:cjones] [:warhammer] from comment #2)
> I'm not really sure where to start with that comment. It sounds like we
> have very different conceptions of the security model.
>
> If the master process isn't applying security policy, then we're just
> wasting time wrt defense-in-depth, and should drop any pretense of trying to
> achieve it.

That is exactly my point. We are very far from the point where we are meaningfully enforcing a sandbox on the child process.

In order to create a really secure sandbox we need to audit all ipdl protocols to make sure that they verify all incoming data such that it only performs actions that that child process should be allowed to do. Taking the two ipdl protocols that I know, for neither of them is this the case. For example the IndexedDB ipdl protocol sends what amounts to a filename to the parent process, which the parent process then uses to create a file. Likewise, the cookie code sends what amounts to the application identifier to the parent process, which it then uses to decide which cookie database to access. In neither of these cases do we do any verification that the data coming through ipdl is properly formatted. Based on your comments I'm guessing that ipdl itself always verifies that all data coming from the child process follows the protocol specified in ipdl, but that doesn't mean that application-level integrity is maintained.

All of these things are certainly fixable. But the rabbit hole will go deep. I strongly suspect that we'll run out of time before we have secured it all, and unless we secure it all we haven't actually improved anything.

(In reply to Chris Jones [:cjones] [:warhammer] from comment #3)
> http://people.mozilla.com/~cjones/gmail.jpg

Any time you are using gmail in firefox, you are doing exactly that. Well, not root, but I hope we're not running gecko as root either.
I totally agree that it's bad, but I simply don't think we have time to implement such a major feature. Feature freeze is today FWIW.
(In reply to Jonas Sicking (:sicking) from comment #4)
> (In reply to Chris Jones [:cjones] [:warhammer] from comment #2)
> > I'm not really sure where to start with that comment. It sounds like we
> > have very different conceptions of the security model.
> >
> > If the master process isn't applying security policy, then we're just
> > wasting time wrt defense-in-depth, and should drop any pretense of trying to
> > achieve it.
>
> That is exactly my point.

No, it's not. You're arguing that we shouldn't even try to do this, that the master process shouldn't even *attempt* to apply security policy.

> We are very far from being at the point when we are meaningfully enforcing a
> sandbox on the child process.

That doesn't match my understanding, which is why I asked for more details.

> In order to create a really secure sandbox we need to audit all ipdl
> protocols to make sure that they verify all incoming data such that it only
> performs actions that that child process should be allowed to do.

Of course.

> Taking the two ipdl protocols that I know, for neither of them this is the
> case. For example the IndexedDB ipdl protocol sends what amounts to the
> filename to the parent process, which the parent process then uses to create
> a filename.

Ben and I discussed this during review, and we punted because the required mechanisms weren't in place. The work Mounir is doing is hopefully that mechanism. If not, we've had a failure in design / communication somewhere.

> All of these things are certainly fixable. But the rabbit hole will go deep.
> I strongly suspect that we'll run out of time before we have secured it all,
> and unless we secure it all we haven't actually improved anything.

That's not at all true. I don't even know what "secure it all" means. Verify every machine instruction and every read from memory?
> (In reply to Chris Jones [:cjones] [:warhammer] from comment #3)
> > http://people.mozilla.com/~cjones/gmail.jpg
>
> Any time you are using gmail in firefox, you are doing exactly that.

Whoa! o_O I think we should have a chat about OS design at some point.

> Well,
> not root, but I hope we're not running gecko with root either.

The master process in gecko has to control wifi, bluetooth, sensors, hardware framebuffer, etc. It has system-level privileges, just like system daemons do on other OSes. That's why all the code that runs in the master process must be verified by Mozilla.

> I totally
> agree that it's bad, but I simply don't think we have time to implement such
> a major feature.

This isn't a feature in that sense, IMHO. The permissions model was the feature; the work here is using that mechanism in all the places where we've left FIXME comments. This is a basic product requirement that we've been discussing for at least 6 months.

You still haven't provided specific examples of what you think is risky, so it's hard to know if we need to reevaluate / add more resources. If so, we should.
> > Taking the two ipdl protocols that I know, for neither of them this is the
> > case. For example the IndexedDB ipdl protocol sends what amounts to the
> > filename to the parent process, which the parent process then uses to create
> > a filename.
>
> Ben and I discussed this during review, and we punted because the required
> mechanisms weren't in place. The work Mounir is doing is hopefully that
> mechanism. If not, we've had a failure in design / communication somewhere.

The work we have done over the past week certainly is required for any of this to work, but it isn't nearly enough. For example we still don't track all of the apps which are running in a given child process, so none of the data accesses that happen in the parent process verify that they are for data related to the app (or apps) running in a given child process. While indexedDB does have a concept of a parent browser window, we still allow child processes to open chrome databases.

Does the multiprocess blob support that we are about to land solve all of the cases where child processes need to do file handling or need to handle filenames?

Do we even know which ipdl protocols that we inherited from Firefox directly or indirectly allow reading/writing data which we would need to disable? The total ipdl API surface we are already exposing is huge, and it's getting bigger by the day.

Will the ipdl implementation ensure that processes can't send IPC messages which the parent process will treat as something other than ipdl messages?

> > All of these things are certainly fixable. But the rabbit hole will go deep.
> > I strongly suspect that we'll run out of time before we have secured it all,
> > and unless we secure it all we haven't actually improved anything.
>
> That's not at all true. I don't even know what "secure it all" means.
> Verify every machine instruction and read from memory?
It means checking all ipdl APIs to make sure that they either don't access sensitive data, even in the face of malicious input, or that they always verify that the data they are accessing is approved for the app (or apps) running in a given child process.

> > (In reply to Chris Jones [:cjones] [:warhammer] from comment #3)
> > > http://people.mozilla.com/~cjones/gmail.jpg
> >
> > Any time you are using gmail in firefox, you are doing exactly that.
>
> Whoa! o_O I think we should have a chat about OS design at some point.

Are you saying that you are running desktop firefox in a process which has access to less of your private data than the current app processes in B2G?
(In reply to Jonas Sicking (:sicking) from comment #6)
> > > Taking the two ipdl protocols that I know, for neither of them this is the
> > > case. For example the IndexedDB ipdl protocol sends what amounts to the
> > > filename to the parent process, which the parent process then uses to create
> > > a filename.
> >
> > Ben and I discussed this during review, and we punted because the required
> > mechanisms weren't in place. The work Mounir is doing is hopefully that
> > mechanism. If not, we've had a failure in design / communication somewhere.
>
> The work we have done over the past week certainly is required for any of
> this to work, but it isn't nearly enough. For example we still don't track
> all of the apps which are running in a given child process,

There is exactly one app per content process. That's part of the security model, documented.

> Does the multiprocess blob support that we are about to land solve all of
> the cases where child processes need to do file handling or need to handle
> filenames?

From content, AFAIK yes, for v1. It's looking like FileWriter might slip.

> Do we even know which ipdl protocols that we inherited from Firefox
> directly or indirectly allow reading/writing data which we would need to
> disable? The total ipdl API surface we are already exposing is huge, and
> it's getting bigger by the day.

All protocols implementing controlled APIs (like telephony) need to apply capability checks. There hasn't been a mechanism for that until now, hopefully with what Mounir added. Many of the protocols, such as the ones for gfx, don't need capability checks.

> Will the ipdl implementation ensure that processes can't send IPC messages
> which the parent process will treat as something other than ipdl messages?

I'm not sure I fully understand, but yes, all data read off the OS sockets is validated by the parent. This means everything from deserialized data to message IDs to protocol state machines to actor identities.
> > > All of these things are certainly fixable. But the rabbit hole will go deep.
> > > I strongly suspect that we'll run out of time before we have secured it all,
> > > and unless we secure it all we haven't actually improved anything.
> >
> > That's not at all true. I don't even know what "secure it all" means.
> > Verify every machine instruction and read from memory?
>
> It means checking all ipdl APIs to make sure that they either don't access
> sensitive data, even in the face of malicious input, or that they always
> verify that the data they are accessing is approved for the app (or apps)
> running in a given child process.

Yes, this is the work we need to do.

> > > (In reply to Chris Jones [:cjones] [:warhammer] from comment #3)
> > > > http://people.mozilla.com/~cjones/gmail.jpg
> > >
> > > Any time you are using gmail in firefox, you are doing exactly that.
> >
> > Whoa! o_O I think we should have a chat about OS design at some point.
>
> Are you saying that you are running desktop firefox in a process which has
> access to less of your private data than the current app processes in B2G?

No, I'm saying I'm running gmail in an OS process which doesn't have the ability to load kernel modules, access raw hardware, or otherwise pwn my entire desktop machine. On my phone, I'm running gmail in an OS process that can't send and receive SMS without my knowledge, place phone calls, etc.

Firefox is not the example to cite for OS security, because we rely on the underlying OS as a backstop to limit the effect of our security bugs. On b2g, gecko *is* the OS; we don't have a backstop.
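The parent-side validation cjones describes — rejecting unknown message IDs and messages that the protocol state machine doesn't allow — can be illustrated with a toy dispatcher (hypothetical Python sketch; the message tables are invented and stand in for the IPDL-generated code):

```python
class ProtocolError(Exception):
    """Raised when a child sends a message the protocol forbids."""

# Hypothetical per-protocol tables: which message IDs exist, and which
# messages each protocol state may accept.
VALID_MSG_IDS = {1: "Open", 2: "Read", 3: "Close"}
ALLOWED_IN_STATE = {
    "closed": {"Open"},
    "open": {"Read", "Close"},
}

def dispatch(state, msg_id):
    """Validate a raw message ID against the protocol tables before
    acting on it, and return the new protocol state."""
    name = VALID_MSG_IDS.get(msg_id)
    if name is None:
        raise ProtocolError("unknown message id %d" % msg_id)
    if name not in ALLOWED_IN_STATE[state]:
        raise ProtocolError("message %s not allowed in state %s" % (name, state))
    # Transition: Open -> open, Close -> closed, Read keeps the state.
    return {"Open": "open", "Close": "closed"}.get(name, state)
```

The point of the sketch is that every inbound message is checked against a whitelist before any handler runs; anything outside the tables kills the connection rather than being interpreted.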
Also, I should note that IPDL has been designed with this in mind from day one, and we have many possibilities to automate capability checks, like annotating protocols with required capabilities and auto-generating the code to actually ask whatever is managing permissions.
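The annotation idea could look roughly like this (hypothetical Python sketch; `requires_capability`, the grants table, and `recv_dial` are invented stand-ins for an IPDL annotation, the permissions manager, and a parent-side message handler):

```python
import functools

# Hypothetical capability registry: which capabilities each app has been
# granted (in Gecko this would be whatever manages permissions).
GRANTED = {"dialer": {"telephony"}, "browser": set()}

def requires_capability(cap):
    """Decorator standing in for a protocol annotation: the generated
    parent-side code checks the calling app's grants before running the
    actual handler."""
    def wrap(handler):
        @functools.wraps(handler)
        def checked(app_id, *args, **kwargs):
            if cap not in GRANTED.get(app_id, set()):
                raise PermissionError("%s lacks capability %r" % (app_id, cap))
            return handler(app_id, *args, **kwargs)
        return checked
    return wrap

@requires_capability("telephony")
def recv_dial(app_id, number):
    # Only reached if the capability check above passed.
    return "dialing %s for %s" % (number, app_id)
```

The attraction of auto-generating this from annotations is that no individual protocol author can forget the check.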
I still don't think there's any way we'll be able to do nearly all of this. However we don't need to agree. If you want to try you should go for it; it won't affect the implementation of the various APIs that much.

I still think that we should write the code such that we do things like send application IDs and serialized principals from the child process to the parent process whenever we perform actions. This has a few advantages:

* Principals contain more than the application ID. The most basic additional part is which origin within an app a request is coming from. So for example the browser app needs to tell the parent process that it's a webpage from "http://example.com" that wants to use GeoLocation. Obviously the origin can't be trusted, but until we have process-per-origin (which we are far from due to iframes), it's the best we can do.

* By sending the app ID, we have the option to make multiple apps share a process if we run low on memory. This is certainly less secure, but we can mitigate it by being smart about only sharing processes between apps that have similar privileges. We might never take advantage of this, but it's just as easy to implement for now, and it keeps our options open.

* It lets us move forward with the implementation of various APIs until we have the ipdl features that you are talking about for doing automatic capability checking.

I also totally agree that the gecko app processes shouldn't be running with rights to install kernel modules etc. I didn't realize that we do. Fixing that seems like it wouldn't take any changes to gecko code other than the code that starts the processes, so that certainly seems more doable.
It's completely pointless to send security principals from content processes to the master process. We should either implement real security, or admit that we suck and not waste time.
So how are you planning on dealing with the origin?
The master process already knows the security principals of processes requesting access to privileged APIs.
An app can open iframes to many different origins. How are you planning on handling that? Likewise, the browser app opens iframes to many different origins. When getting a request from the app process, how are we going to know which origin is making the request?
(In reply to Jonas Sicking (:sicking) from comment #13)
> An app can open iframes to many different origins. How are you planning on
> handling that?

Apps with elevated privileges need appropriate CSPs that forbid this.

> Likewise, the browser app opens iframes to many different
> origins. When getting a request from the app process, how are we going to
> know which origin is making the request?

It's not the iframe requesting privileges, it's the browser app. The browser app has the union of permissions it's allowed to grant to "browser tabs".
(In reply to Chris Jones [:cjones] [:warhammer] from comment #14)
> (In reply to Jonas Sicking (:sicking) from comment #13)
> > An app can open iframes to many different origins. How are you planning on
> > handling that?
>
> Apps with elevated privileges need appropriate CSPs that forbid this.

First of all, if you think we should not allow this, then you need to bring this up on a mailing list. This is not something anyone else has suggested.

Second, what do you mean by "Apps with elevated privileges"? Do you include untrusted apps, which do have access to things like camera and gps?

> > Likewise, the browser app opens iframes to many different
> > origins. When getting a request from the app process, how are we going to
> > know which origin is making the request?
>
> It's not the iframe requesting privileges, it's the browser app. The
> browser app has the union of permissions it's allowed to grant to "browser
> tabs".

Sure, but when gaia is rendering the dialog asking the user to grant access to gps, that dialog needs to contain the origin of the webpage requesting to use gps. That dialog can't just say "some webpage rendered by firefox is requesting to know your location". The user will want to know exactly which website.
(In reply to Jonas Sicking (:sicking) from comment #15)
> (In reply to Chris Jones [:cjones] [:warhammer] from comment #14)
> > (In reply to Jonas Sicking (:sicking) from comment #13)
> > > An app can open iframes to many different origins. How are you planning on
> > > handling that?
> >
> > Apps with elevated privileges need appropriate CSPs that forbid this.
>
> First of all, if you think we should not allow this, then you need to bring
> this up on a mailing list. This is not something anyone else has suggested.

This has been discussed to death, and the security team has documented it on the wiki page describing the security model (IIRC). Chrome web apps have the same model.

> Second, what do you mean by "Apps with elevated privileges"? Do you include
> untrusted apps, which do have access to things like camera and gps?

It means whatever we want it to mean, and it can evolve. For starters, all "certified apps" should fall into this category.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #16)
> (In reply to Jonas Sicking (:sicking) from comment #15)
> > (In reply to Chris Jones [:cjones] [:warhammer] from comment #14)
> > > (In reply to Jonas Sicking (:sicking) from comment #13)
> > > > An app can open iframes to many different origins. How are you planning on
> > > > handling that?
> > >
> > > Apps with elevated privileges need appropriate CSPs that forbid this.
> >
> > First of all, if you think we should not allow this, then you need to bring
> > this up on a mailing list. This is not something anyone else has suggested.
>
> This has been discussed to death, and the security team has documented it on
> the wiki page describing the security model (IIRC). Chrome web apps have
> the same model.

No-one has suggested that trusted apps shouldn't be able to create iframes to different origins. Chrome web apps also allow pointing <iframe>s at arbitrary content. The only thing that has been suggested to be limited, and the only thing that is limited in chrome apps, is what can be used in pages that are running as same-origin as the app.

> > Second, what do you mean by "Apps with elevated privileges"? Do you include
> > untrusted apps, which do have access to things like camera and gps?
>
> It means whatever we want it to mean, and it can evolve. For starters, all
> "certified apps" should fall into this category.

Ok. If we're only talking about certified apps, then we still need to transfer nsIPrincipals for untrusted apps and web pages.

Based on our discussion on irc, there seems to have been some confusion regarding the word "principal". What I was referring to was the gecko nsIPrincipal.
"I don't have resources" doesn't mean that. It usually means either "I don't think this is important", "I think this is too hard", or "I don't know how to implement this". I'm not going to attach my name to a product in which gecko bugs allow malicious content to pwn a user's entire phone. One of three things is going to happen (1) I'm going to implement this. (2) Someone else is going to implement this. (3) I'm going to quit the project before we ship product.
(In reply to Jonas Sicking (:sicking) from comment #13)
> An app can open iframes to many different origins. How are you planning on
> handling that? Likewise, the browser app opens iframes to many different
> origins. When getting a request from the app process, how are we going to
> know which origin is making the request?

You'd have to assume that the most privileged origin which could be in the app frame is making the request. Or, put another way, you grant no more privilege than that which is due to the most privileged content which could be in that frame.

I don't think this would decrease the strength of the security model, though. An attack in which you trick Gecko into sending bogus principals is strictly more powerful than an XSS attack where you take control of the privileged app frame.

Of course, in order for this defense to be effective, we need a process boundary between "high-privileged" and "less-privileged" code. But we don't have to have a process boundary between every origin, if we accept some attacks.
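The "most privileged possible content" rule above amounts to checking each request against the union of grants across every origin the frame could contain (hypothetical Python sketch; the grants table stands in for the permissions manager):

```python
def effective_grants(possible_origins, grants):
    """Union of permissions over every origin that could appear in the
    frame: under the most-privileged-possible-content assumption, this
    is the most the process may ever be granted."""
    allowed = set()
    for origin in possible_origins:
        allowed |= grants.get(origin, set())
    return allowed

def may_request(permission, possible_origins, grants):
    """A request is allowed only if some origin that could be in the
    frame holds the permission."""
    return permission in effective_grants(possible_origins, grants)
```

This captures the trade-off in the comment: a frame that could ever host a privileged origin is treated as having that origin's privileges, so the model is only as strong as the process boundary between high- and low-privileged content.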
etherpad updated.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #18)
> (3) I'm going to quit the project before we ship product.

Let's make sure we don't get to (3). People are stressed and focused on very specific parts of the product, but let's grant each other some confidence and work on implementing as much of this as we can for basecamp. A choice between perfection and giving up is a false dilemma.

As Chris mentioned, this has been part of https://wiki.mozilla.org/B2G/Architecture/Runtime_Security for a long time.
Not blocking because meta-bug. All dependencies have been processed.
blocking-basecamp: ? → ---
Depends on: 779935
Component: IPC → DOM: Content Processes
Some of the dependent bugs are still relevant for desktop sandboxing. Assigning to myself to triage and figure out what's still important.
Assignee: nobody → ptheriault
Keywords: meta
Priority: -- → P3
Assignee: ptheriault → nobody

This bug is a bit weird.

Platform: ARM Gonk (Firefox OS) - so obsolete?

Depends on bug 776847 - Create a valgrind tool to taint data sent to and received from content processes, but IIUC the data validation concept at least for IPC is covered these days by IPC_FAIL

Blocks bug 1287730 - [META] Security Assurance for Sandboxing, that brings me to you, :gcp

Can we just close this?

Flags: needinfo?(gpascutto)
Summary: Tracking: Run content processes with lowered rights → [meta] Tracking: Run content processes with lowered rights
Status: NEW → RESOLVED
Closed: 3 years ago
Flags: needinfo?(gpascutto)
Resolution: --- → INACTIVE