Closed Bug 733414 Opened 13 years ago Closed 12 years ago

SecReview for SocialAPI

Categories

(mozilla.org :: Security Assurance: Review Request, task)

task
Not set
normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: curtisk, Assigned: amuntner)

References

()

Details

(Whiteboard: [secreview completed][start 05/18/2012][target mm/dd/yyyy])

Can we get a sec review filed for Social API? Please include Mike Hansen and link to the following: http://people.mozilla.com/~mhanson/socialServiceAPI.htm -- Michael Coates mcoates@mozilla.com
Whiteboard: [secr:curtisk]
Product: Mozilla Services → Core
QA Contact: general → general
Assignee: nobody → curtisk
Curtis, we're ready to reach out to the following to schedule the API review at the next available spot: Mike Hansen, Shane Caraveo, Mark Hammond.
Mail sent requesting a date for the review and for the questions to be answered for this bug.
subject: [Security Review][Action Required] Social API
(Mark, Shane & Michael H.) Per bug 733414 comment 1 we appear ready to schedule a review for Social API. Can you all tell me what date from the Security Review Calendar works for you? (Available dates just say "SecReview".) And then please add the questions from here as a comment to bug 733414. / Curtis
Component: General → Security Assurance: Review Needed
Product: Core → mozilla.org
QA Contact: general → security-assurance
Whiteboard: [secr:curtisk] → [secr:curtisk][pending secreview]
Version: unspecified → other
Mike H., Shane, Mark: any chance we have a date on our calendar (https://mail.mozilla.com/home/ckoenig@mozilla.com/Security%20Review.html) that will work for you guys?
Status: NEW → ASSIGNED
25th at 1:00 or 27th at 10:00 are both good.
OK, I dropped the ball on this one. :mhanson, is 4 or 11 May good for you?
Assignee: curtisk → nobody
Whiteboard: [secr:curtisk][pending secreview] → [pending secreview]
Assignee: nobody → curtisk
Whiteboard: [pending secreview] → [pending secreview][start mm/dd/yyyy][target mm/dd/yyyy]
:mhanson ping, we need to pick a new date. How does the 18th look for you?
18th at 10:00 AM is good.
Whiteboard: [pending secreview][start mm/dd/yyyy][target mm/dd/yyyy] → [pending secreview][start mm/dd/yyyy][target mm/dd/yyyy][triage needed 2012.05.09][lead needed]
Whiteboard: [pending secreview][start mm/dd/yyyy][target mm/dd/yyyy][triage needed 2012.05.09][lead needed] → [secreview sched][start 05/18/2012][target mm/dd/yyyy][triage needed 2012.05.09][lead needed]
Will this be optional? (If not, strongly consider it.) I hate Google/Facebook and others creeping in on every website; it also slows the page (and means more data usage on a limited connection).
Yes, of course it will be optional, and fully under the user's control.
Item to be reviewed: Social API
When: 18-May-2012
Link to calendar entry: https://mail.mozilla.com/home/ckoenig@mozilla.com/Security%20Review.html?view=month&action=view&invId=110490-110489&pstat=AC&exInvId=110490-177245&useInstance=1&instStartTime=1337360400000&instDuration=3600000
SecReview Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=733414
Security Lead: Michael Coates / Yvan Boily

Required Reading List:
* https://wiki.mozilla.org/Labs/SocialAPI
* https://bugzilla.mozilla.org/show_bug.cgi?id=733414
* https://github.com/mozilla/socialapi-dev/blob/develop/docs/socialAPI.md
* https://github.com/michaelrhanson/socialapi-hacking has some hacked-up examples; not truly providers, but the Yammer one makes some use of the socialapi
* https://github.com/michaelrhanson/mozilla-demo-social-service has a better example of a provider with a node.js server
* https://github.com/mozilla/socialapi-dev is the git development repo which gets pulled into the hg repo

(If possible, prefill this area for copying to the notes section of the review.)

Introduce Feature (5-10 minutes) [can be answered ahead of time to save meeting time]
- Goal of Feature: what is trying to be achieved (problem solved, use cases, etc.)
- What solutions/approaches were considered other than the proposed solution?
  * Our understanding is that there is another API with similar functionality for B2G. Given the overall unification effort across platforms, what is the rationale for there being two platforms?
- Why was this solution chosen?
- Any security threats already considered in the design, and why?

Threat Brainstorming (30-40 minutes)
* Manifest file - what are the security requirements for entrance? Can a website say, "click to add whateverbook," and really add a MITM site to your manifest, with a legit SSL key?
* Under "To figure out" it says, "Do we need to blacklist some URLs for 'recommend'? (i.e. anything with security-sensitive GET params)" - Are there any examples you have in mind? Hopefully the websites just don't do this.
* The API ref says, for client-to-user notification, "these notifications may be used to trigger a variety of attention-getting interface elements, including 'toast' or 'Growl'-style ephemeral windows, ambient notifications (e.g. glowing, hopping, pulsing), or collections (e.g. pull-down notification panels, lists of pending events)"
** Concerns:
*** Toast/Growl-style windows - might the user trust instructions received in this window, and follow them? If so, it could be used to trick the user into doing something bad.
*** Glowing/hopping ambient notifications, collections - DoS against the user's display? What about public terminals? Could users end up installing SPs on these and forget to uninstall them? Think kiosk mode - the user might not be able to easily close the browser.

Conclusions / Action Items (10-20 minutes)

The Social Browsing feature adds a subsystem to the browser that provides a persistent connection to one or more "social service providers." The goal of the feature is to:
* Allow deeper engagement with social services, for users that desire it.
* Provide a standard mechanism for social service providers to engage in "marginalia" conversations about the web.
* Simplify the user interface of "social recommendation" of web content, in order to start to reduce the NASCAR-button spam we see on many pages today.
* Provide a clean user interface abstraction for real-time communication between users on a social network. (What sort of communication? Including video?)
* Enhance user choice by allowing users to "bring their own network" to the web (as opposed to the current system of iframe embedding, which requires the site developer to choose which network to use for social recommendation).
* Lay the groundwork for user-provided contact list and activity stream consumption by the user agent.

Implementation overview:

The current implementation depends on a JSON-encoded metadata file that identifies a social service. This includes a name, icon, a JavaScript Worker URL, a Sidebar URL, and a URL prefix. This metadata file is parsed and stored in a sqlite database of "social services", and an XPCOM service providing access to this database is registered.

At startup time, we create a "background worker" for each active service. Since we don't actually have background workers, we do it like this (in frameworker.js): We create an iframe on the hidden window and load the JS URL in it (as content!). Then we attach a Sandbox to the iframe and copy a couple of objects from the iframe to the sandbox. Then we eval the JS in the sandbox. This gives us a JS context which runs with the principal of the JS URL, but has no DOM. We copy the XMLHttpRequest, WebSockets, indexedDB, localStorage, btoa, atob, setTimeout, setInterval, clearTimeout, clearInterval, dump, FileReader, Blob, and navigator objects. The Worker runs effectively for the duration of the browser session (though see the note on Private Browsing).

We synthesize a MessagePort object which provides text-only messaging between the Worker and the browser. We invoke a method on the Worker passing in a MessagePort, which it saves and posts messages to later. We inspect the "topic" attribute of messages that come out of the Worker; if they start with the "social." prefix, we handle them internally (more below); otherwise we forward them to sidebar and window content.

At window overlay time, we create three new UI elements:
* A recommendation button in the URL bar
* A toolbar item which is positioned at the end of the nav-bar
* A sidebar browser element which is positioned to the right of content and may be initially hidden

* Private Browsing
The design intent is that going into Private Browsing mode should cause all Social objects to be unloaded. The Worker should be destroyed, and all sidebar/toolbar/recommendation buttons should be destroyed.

* Activating the Feature
Our intent is that the entire system defaults to "off". We would like a social service provider to have the power to turn the feature on, for its own domain, while the user is visiting their site. I suggest that this be implemented as: On pages whose domain matches the URLPrefix of an installed service provider, a JS function ("activateSocialBrowsing") is enabled. Calling this function prompts the user with a "want to turn on social browsing?" panel; if accepted, this enables the feature and selects the current provider. If the user declines to turn it on, we should have the option to remember this choice and not present the panel in future.

*** Threat thoughts:
Two big categories, I think.

Privacy threats from installed service providers: Can a service provider make malicious use of browsing data provided through this API?
* The current design is that no browsing information is passed to the service provider without a user action. The only context currently provided, in fact, is a click on the "recommend" button, which passes the URL of the current page to the Worker.
* In future, though, I think the feature would be improved by more information sharing, i.e. extracting metadata from visited pages and passing it to the Worker. This has potential for user surveillance and tracking if used aggressively. For future releases, we may want to build a logging/notification system to let the user know exactly what is being shared, and when, and give the user full control over that.
** Can the Worker be MITMed? That could be bad. Require SSL? (yes)
** MITM on sidebar content? Could get at the getWorker() call, so you could spoof interactions with the sidebar. Require SSL?

Phishing threats from spoofing the social browsing UX: (ack, losing connectivity, more later)
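For illustration, here is a minimal sketch of the frameworker approach described in the notes above. It is not the actual frameworker.js code; the whitelisted-globals list comes from the description, but the structure, variable names, and fetch/eval plumbing are assumptions.

// Minimal sketch of the frameworker pattern described above (not the real
// frameworker.js; structure and names are illustrative).
Components.utils.import("resource://gre/modules/Services.jsm");

function createFrameWorker(workerURL) {
  // Create an iframe on the hidden window and load the worker JS as content.
  let hiddenWin = Services.appShell.hiddenDOMWindow;
  let iframe = hiddenWin.document.createElement("iframe");
  iframe.setAttribute("src", workerURL);
  hiddenWin.document.documentElement.appendChild(iframe);

  iframe.addEventListener("load", function onLoad() {
    iframe.removeEventListener("load", onLoad, true);
    let contentWin = iframe.contentWindow;

    // Attach a Sandbox that runs with the principal of the worker URL.
    let sandbox = Components.utils.Sandbox(contentWin);

    // Copy the whitelisted globals from the iframe into the sandbox; the
    // evaluated worker code gets these, but no DOM.
    ["XMLHttpRequest", "WebSocket", "indexedDB", "localStorage", "btoa",
     "atob", "setTimeout", "setInterval", "clearTimeout", "clearInterval",
     "dump", "FileReader", "Blob", "navigator"].forEach(function (name) {
      sandbox[name] = contentWin[name];
    });

    // Fetch the worker script text and eval it in the sandbox.
    let xhr = new hiddenWin.XMLHttpRequest();
    xhr.open("GET", workerURL, true);
    xhr.onload = function () {
      Components.utils.evalInSandbox(xhr.responseText, sandbox);
    };
    xhr.send(null);
  }, true);
}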
I made notes of issues that were brought up, and have created bugs for all but one (I forgot the details of what I meant to write about it):
- educating the user about the background service, bug 756596
- curation of loading the manifests ???
- block/black list? bug 756591
- require SSL? bug 756593
- ensure manifest URLs belong to origin, bug 756587
- import scripts same origin, bug 756589
- test: make an unresponsive script in the worker, bug 756588
- manifest updates, bug 756590
Whiteboard: [secreview sched][start 05/18/2012][target mm/dd/yyyy][triage needed 2012.05.09][lead needed] → [secreview completed][start 05/18/2012][target mm/dd/yyyy][triage needed 2012.05.09]
A couple questions:

"The current implementation depends on a JSON-encoded metadata file that identifies a social service. This includes a name, icon, a JavaScript Worker URL, a Sidebar URL, and a URL prefix"

Does it contain an embedded icon, or a URL to an icon? How is it retrieved? Where is it stored?

"This metadata file is parsed and stored in a sqlite database of "social services", and an XPCOM service providing access to this database is registered."

Is it using https://developer.mozilla.org/en/Storage?
(In reply to Adam Muntner :adamm from comment #13)
> "The current implementation depends on a JSON-encoded metadata file that
> identifies a social service. This includes a name, icon, a JavaScript
> Worker URL, a Sidebar URL, and a URL prefix"
>
> Does it contain an embedded icon, or a URL to an icon? How is it retrieved?
> Where is it stored?

URLs, though I suppose the icon could be a data: URL; we don't prevent that for the icon URL.

> "This metadata file is parsed and stored in a sqlite database of "social
> services", and an XPCOM service providing access to this database is
> registered."
>
> Is it using https://developer.mozilla.org/en/Storage?

Yes.
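For reference, a sketch of what such a provider manifest and its storage might look like. The field names and values below are illustrative assumptions based on this discussion, not the exact schema, and the sqlite file/table layout is invented.

// Hypothetical provider manifest; field names and values are illustrative only.
var exampleManifest = {
  "name": "Example Social Provider",
  "iconURL": "https://social.example.com/favicon.png",     // a URL (a data: URL is not currently prevented)
  "workerURL": "https://social.example.com/worker.js",     // background worker script
  "sidebarURL": "https://social.example.com/sidebar.html", // sidebar content
  "URLPrefix": "https://social.example.com/"
};

// Hedged sketch of persisting it with Services.storage (mozStorage/sqlite);
// the file name, table, and columns are made up for illustration.
Components.utils.import("resource://gre/modules/Services.jsm");
var dbFile = Services.dirsvc.get("ProfD", Components.interfaces.nsIFile);
dbFile.append("social.sqlite");
var db = Services.storage.openDatabase(dbFile);
db.executeSimpleSQL("CREATE TABLE IF NOT EXISTS providers (origin TEXT PRIMARY KEY, manifest TEXT)");
var stmt = db.createStatement("INSERT OR REPLACE INTO providers (origin, manifest) VALUES (:origin, :manifest)");
stmt.params.origin = "https://social.example.com";
stmt.params.manifest = JSON.stringify(exampleManifest);
stmt.execute();
stmt.finalize();
db.close();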
Simon and I are working on architecture diagrams, but a few things aren't clear to us yet. Let me rephrase what I understand so far; please let me know if I'm misunderstanding.

There are a few questions embedded in this list, inside parentheses. Each is numbered, 1-4.

There will be more than one diagram - these first questions are about the instantiation/startup process.

1. Browser starts up, etc.
2. Social API starts. (1. Can you explain the startup process and point me to the right code?)
3. A file containing data in JSON format is requested from the filesystem. This file contains Name, Icon (2. embedded or URL?), JavaScript (pseudo-background) worker URL, Sidebar URL, URL prefix (not sure what this is?).
4. Social API parses the file, instantiates SQLite via the Mozilla Storage API, registering an XPCOM interface.
5. SocialAPI requests the JavaScript URL from each Social Service Provider in the JSON file.
6. SocialAPI's Service Worker creates a pseudo-background worker for each entry: an iframe on the hidden window (3. what/where is the hidden window?), and loads the JS URL in it (as content).
7. SocialAPI attaches a Sandbox to the iframe (4. which iframe?) and copies the following objects from the iframe to the sandbox: XMLHttpRequest, WebSockets, indexedDB, localStorage, btoa, atob, setTimeout, setInterval, clearTimeout, clearInterval, dump, FileReader, Blob, and navigator.
8. Eval the JS in the sandbox, as content. This gives us a JS context which runs with the principal of the JS URL, but has no DOM.
9. The Worker runs effectively for the duration of the browser session.
10. Private Browsing should cause all Social objects to be unloaded. The Worker should be destroyed and all sidebar/toolbar/recommendation buttons should be destroyed.
(In reply to Adam Muntner :adamm from comment #15)
> 1. Browser starts up, etc
> 2. Social API starts. (1. can you explain the startup process and point me
> to the right code?)

The initial startup code is here (see social_main): https://github.com/mozilla/socialapi-dev/blob/develop/content/main.js

On startup, the registry gets any providers from the database and figures out whether the UI should be displayed. As well, we read the manifest file for any builtin providers. There is a bit that goes on during startup; I'm not sure how much detail you want.

> 3. a file containing data in JSON format is requested from the filesystem.
> This file contains Name, Icon (2. embedded or url?), Javascript
> (pseudo-background) worker url, Sidebar url, URL prefix (Not sure what this
> is?)

We have some hacky builtin providers we're using for demo fodder right now. Those are what we're reading from disk. Whether we ship with any builtin providers is TBD. You can see our providers at: https://github.com/michaelrhanson/socialapi-hacking

Each provider has an app.manifest which is what we load to configure the provider. Normally, these manifest files would live on an actual web server and be "installed" by the user.

URLPrefix is recently gone, replaced by "origin", which is used only with the builtin providers. For remotely loaded providers, origin == nsIURI.prePath of the location of the manifest file. Because we load our demo services from resource urls, we need the origin in order to know what pages we're loading from the real service.

> 4. Social API parses the file, instantiates SQLite via the Mozilla Storage
> API, registering an XPCOM interface.

I'm not sure about what you are saying. We use Services.storage, which is sqlite, to store the manifests that we have retrieved. There is no registered XPCOM interface.

> 5. SocialAPI requests the Javascript URL from each Social Service Provider
> in the JSON file

We create an iframe (for each provider) on the hidden window with the src attribute set to workerURL from the provider's manifest. The content retrieved is copied and eval'd in the sandbox (below).

> 6. SocialAPI's Service Worker creates a pseudo-background worker for each
> entry: an iframe on the hidden window (3. what/where is the hidden window?),
> and load the JS URL in it (as content).

Every running instance of Firefox has a hidden window. https://mxr.mozilla.org/mozilla-central/search?string=hiddenWindow

> 7. SocialAPI attaches a Sandbox to the iframe (4. which iframe?) and copies
> the following objects from the iframe to the sandbox: XMLHttpRequest,
> WebSockets, indexedDB, localStorage, btoa, atob, setTimeout, setInterval,
> clearTimeout, clearInterval, dump, FileReader, Blob, and navigator

The sandbox is attached to the iframe created in the step above.

> 8. Eval the JS in the sandbox, as content. This gives us a JS context which
> runs in the principal of the JS URL, but has no DOM.

Correct.

> 9. The Worker runs effectively for the duration of the browser session.

Unless social browsing is disabled, or the specific provider is disabled.

> 10. Private Browsing should cause all Social objects to be unloaded. The
> Worker should be destroyed and all sidebar/toolbar/recommendation buttons
> should be destroyed.

Correct.
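To make the worker/browser messaging concrete, here is a hedged sketch of what a provider's worker script might look like from the worker side, based only on the description above (a synthesized MessagePort, text-only messages, "social."-prefixed topics handled by the browser). The onconnect hook, topic names, and endpoint URL are assumptions, not the documented API.

// Hypothetical worker-side provider code; how the port is delivered and the
// topic names are assumptions for illustration.
var apiPort = null;

function onconnect(port) {            // assumed entry point; the browser hands us a MessagePort
  apiPort = port;
  port.onmessage = function (e) {
    var msg = e.data;
    if (msg.topic === "social.user-recommend") {
      // The only browsing context currently shared: the URL the user clicked
      // "recommend" on.
      shareToService(msg.data.url);
    }
  };
}

function shareToService(url) {
  // XMLHttpRequest is one of the globals copied into the worker sandbox.
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://social.example.com/share", true);   // placeholder endpoint
  xhr.send("url=" + encodeURIComponent(url));
}

function notifyUser(text) {
  // Topics without the "social." prefix are forwarded to sidebar/window content.
  apiPort.postMessage({ topic: "provider.notification", data: { body: text } });
}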
A couple questions about what will block attempting to land.

Bug 756591 - blocklist support for manifests

How far is good enough? There doesn't seem to be any current mechanism to blocklist anything other than addons. socialapi currently checks safebrowsing prior to fetching any remote manifest files. URLs in the manifest files must be same-origin as the manifest itself.

Bug 756588 - testcase: non-responsive worker

I cannot find any current unit tests showing how to test a non-responsive script. Sandbox doesn't seem to have any hooks to allow us to generate such tests. IMO this means the platform needs to implement something that would allow us to handle this. Do we block for that?
(In reply to Shane Caraveo (:mixedpuppy) from comment #17)
> Bug 756591 - blocklist support for manifests
>
> How far is good enough? There doesn't seem to be any current mechanism to
> blocklist anything other than addons. socialapi currently checks
> safebrowsing prior to fetching any remote manifest files. URLs in the
> manifest files must be same-origin as the manifest itself.

My 2¢ is that if we are doing what we do in other areas in regards to this aspect, then that is good enough for now. I know we want to expand our use of other parts of safebrowsing, but I don't know the timeline on that.

> Bug 756588 - testcase: non-responsive worker
>
> I cannot find any current unit tests showing how to test a non-responsive
> script. Sandbox doesn't seem to have any hooks to allow us to generate such
> tests. IMO this means the platform needs to implement something that would
> allow us to handle this. Do we block for that?

I know that I have seen some script stuff like this in Thunderbird (it might just be for Enigmail), but maybe we could copy the kind of things they are doing, or copy their tests for this and modify them as we need. I would not block for this at this time, but this could be a vector for causing a denial of service. :adamm, what do you think?
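As a starting point for bug 756588, one assumed approach is a ping/watchdog round trip over the synthesized MessagePort; the topic names and structure below are invented for illustration. Note the caveat: because the sandboxed worker runs on the main thread, a script that busy-loops will also block the chrome timer, which is why the discussion above points at needing platform support in Cu.Sandbox.

// Sketch of a ping/watchdog check for an unresponsive worker (bug 756588).
// "social.ping"/"social.pong" are invented topics; the real frameworker may
// not implement them. A busy-looping sandbox script blocks the main thread,
// so this only catches workers that stop answering without hanging the UI.
function checkWorkerResponsive(port, timeoutMs, callback) {
  var answered = false;
  var timer = setTimeout(function () {
    if (!answered)
      callback(false);               // no reply in time: treat the worker as hung
  }, timeoutMs);

  port.onmessage = function (e) {
    if (e.data.topic === "social.pong") {
      answered = true;
      clearTimeout(timer);
      callback(true);                // worker replied: it is responsive
    }
  };
  port.postMessage({ topic: "social.ping" });
}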
Simon and I met to discuss this today; I took notes and did some followup research. The results are below, each item separated by -----------

> There is a bit that goes on during startup, I'm not sure how much detail you want.
> We have some hacky builtin providers we're using for demo fodder right now. Those are what we're reading from disk. Whether we ship with any builtin providers is TBD.

We could definitely use a step-by-step explanation of how it is intended to work. Pedantic is good; more detail is better than less.

Potential threat: built-in provider functionality could be hijacked

Simon and I believe that if the decision is made to ship without built-in providers, the functionality to read them must be removed; otherwise it will be discovered and abused. Our concern is that providers could be placed in the appropriate place on the local disk by some other process or a malicious individual, thus allowing inadvertent loading of an evil socialapi MITM proxy, malicious JavaScript, etc.

Proposed solution: If we are going to ship with built-in providers in the JSON text file format, they should be digitally signed by a Mozilla key. I would envision it working like this: Mozilla sells the rights to have socialapi providers shipped by default, much like Google pays to be the preferred search engine. The provider gives field details to Mozilla or provides a file. Mozilla signs the file as part of the build process. SocialAPI starts up, looks for provider JSON files, verifies the signature, and proceeds.

-----------

Since we anticipate companies wanting to use their own internal socialapi providers for communication between employees, we should provide a way to deploy socialapi within a company. It looks like we can leverage NSS and certutil for this:

NSS - https://wiki.mozilla.org/VE_07KeyMgmt
certutil - http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html

It is envisioned that company XYZ would create the JSON text file and sign it with their own private key. They would then use certutil to load their corresponding public key into the NSS keystore. When socialapi starts up, it would check whether the JSON file's signature is verifiable either against the Mozilla public key or against the public key previously loaded into the keystore.

Additionally, this would allow a 'reset to default configuration' option by clearing the sqlite db and reloading it with known good values from the JSON files after verifying them.

-----------

> I'm not sure about what you are saying. We use Services.storage, which is sqlite, to store the manifests that we have retrieved.
> There is no registered xpcom interface.

Can other Firefox extensions or other processes on the system reach this sqlite db and read from/write to it?

-----------

> the sandbox is attached to the iframe created in the step above.

Which direction is this sandboxed from? Can other browser add-ons abuse ambient authority from socialapi to inject data? Can they look into it to snarf data?

-----------

> we create an iframe (for each provider) on the hidden window with the src attribute set to workerURL from the provider's
> manifest. The content retrieved is copied and eval'd in the sandbox (below).

What prevents JavaScript or other active content types from running in the initial hidden window?

-----------

> Bug 756591 - blocklist support for manifests
>
> How far is good enough? There doesn't seem to be any current mechanism to
> blocklist anything other than addons. socialapi currently checks
> safebrowsing prior to fetching any remote manifest files. URLs in the
> manifest files must be same-origin as the manifest itself.

In looking at https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html section 4.7 (below), it says:

"Whenever the user agent fetches a URI (including when following redirects) in the course of one of the following activities, if the URI does not match the allowed frame sources, the user agent must act as if it had received an empty HTTP 400 response:
o Requesting data for display in a nested browsing context in the protected resource created by an iframe or a frame element.
o Navigating such a nested browsing context."

It seems like there might be a few scenarios where this would still allow some significant kinds of attacks with regard to bug 756591, which says, "socialapi currently checks safebrowsing prior to fetching any remote manifest files."

1. What happens if the manifest file points to a local file:// resource or UNC or other fileshare resource instead of a remote HTTPS resource? For example, a manifest retrieved from a local file:// URI that only exists to start a malicious JS process, such as a JavaScript portscanner that sends its results to a website.

2. What if the malicious manifest is hosted on ftp instead of http? Would this be processed by the browser? How about other URI types? I created a list of all official and common URI types and am grinding them against the safebrowsing test (highly throttled so as not to trip Google's robot filters). So far, I noticed that the ftp:// URI is processed by Safebrowsing. Example: http://www.google.com/safebrowsing/diagnostic?site=ftp://mozilla.org

A site that hosted malicious content via ftp would never make it into Safebrowsing, as Google isn't scanning ftp. The malicious ftp URL could also have an embedded password, like ftp://evil:password@ftp.evilsite.com

Both Chrome and Firefox load content into iframes from ftp:

<IFRAME SRC="ftp://ftp.mozilla.org/index.html" WIDTH=450 HEIGHT=100>
If you can see this, your browser doesn't understand IFRAME. However, we'll still <A HREF="ftp://ftp.mozilla.org/index.html">link</A> you to the file.
</IFRAME>

-----------

Regarding active content types: Do we want to allow active content types other than JS in the content sandbox? Java and Flash are the major attack vectors.
1. Block outright?
2. Block old versions?
Just because we handle them in the main browser window in one way does not imply that we should handle them the same way in a restricted sandbox.

-----------

With regard to bug 756588 - testcase: non-responsive worker, Simon and I agree that it should be a blocker.

-----------

Can you folks reach https://mana.mozilla.org/wiki/display/INFRASEC/Social+API+Security+Review ?

-----------

Functionality question: Have you considered using Sync to replicate a user's socialapi configuration? Is this viable?
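To make the file://, ftp:, and same-origin concerns above concrete, here is a minimal sketch of the kind of scheme and origin check under discussion; this is not the shipping socialapi code, and the function names are placeholders.

// Sketch of an https-only scheme check and a same-origin check for manifest
// URLs (placeholder names; not the actual socialapi implementation).
Components.utils.import("resource://gre/modules/Services.jsm");

function manifestURLAllowed(manifestSpec) {
  var uri = Services.io.newURI(manifestSpec, null, null);
  return uri.scheme === "https";      // rejects file:, ftp:, chrome:, resource:, ...
}

function sameOriginAsManifest(manifestSpec, innerSpec) {
  var manifestURI = Services.io.newURI(manifestSpec, null, null);
  var innerURI = Services.io.newURI(innerSpec, null, null);
  // prePath is scheme://user:pass@host:port, so this also catches embedded
  // credentials like ftp://evil:password@host mentioned above.
  return manifestURI.prePath === innerURI.prePath;
}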
Update: In recent conversations Shane and I have moved towards wanting to save the network-provider metadata in prefs, instead of another sqlite file.

Pros: Consistent interface, encrypted storage with master password, much easier onramp for Sync
Cons: Obvious and easily manipulable by addons (and on-disk if no master password)

There is still the issue of "built-in" providers and the question of where their metadata comes from. Our usual distribution.js technique seems like a good fit - and seems to have many fields that are at the same, or higher, level of trust (AUS, Safe Browsing, etc.)
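For illustration, a rough sketch of what storing provider metadata in prefs could look like; the pref branch name and layout here are assumptions, not the final design.

// Hypothetical prefs-backed provider storage (branch name and layout invented).
Components.utils.import("resource://gre/modules/Services.jsm");

var branch = Services.prefs.getBranch("social.manifest.");

function saveManifest(origin, manifest) {
  // One pref per provider, keyed by origin; the value is the JSON manifest.
  branch.setCharPref(origin, JSON.stringify(manifest));
}

function loadManifest(origin) {
  try {
    return JSON.parse(branch.getCharPref(origin));
  } catch (e) {
    return null;    // provider not installed
  }
}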
Michael,

The "easily manipulable by other addons" piece is of concern to Simon and me. My understanding is that, even though the metadata would be encrypted with a master password, this would only protect it from an on-disk attack, while a malicious add-on would be able to read and write the metadata. I don't have real-world data on how to model whether the on-disk or addon/profile attack is more likely, but both would have the same impact - non-repudiation of the metadata isn't something that can be counted on.

After digging into this, it looked like using NSS and certutil to protect data in a SQLite db would protect against both on-disk and addon attacks; is that correct? My understanding is that other addons wouldn't be able to read the encrypted SQLite db.

Thanks guys, I look forward to your response to this and #19 so we can iron out the rest.
I'm not sure how to track all these issues in a single bug; could we move them out to multiple bugs linked to this one? It would be easier to see when we have everything resolved. There is a lot in comment 19 to respond to, so I'll try to break it down into a couple areas. There is a lot in both your questions and my responses; if it helps to get on vidyo/skype and have some real-time conversation, let's do that.

== malicious addons and external processes ==

I'm going to skip a bunch of stuff with a general answer. For any question you asked about a 3rd-party addon being able to do something, the answer is "yes, addons can do that". Addon code runs privileged and has access to anything in the system.

== provider manifest database and metadata ==

We're moving away from using sqlite to using prefs, at least in the near term, for storing the provider information. We expect that any given user will install fewer than 3 or 4 providers, and most will likely only have 1. Using a full db for such a small amount of data is overkill, and the change to prefs removes a good chunk of code. Any issues/concerns related specifically to the sqlite db can be removed.

The second part of the metadata-related issues is around encryption of that data. Firefox stores a lot of critical metadata in the clear, in text files, such as its own update URL, and the update URLs for all addons, safebrowsing, etc. It may be a good security improvement to have some crypto signing of that data, which any feature or addon could then also take advantage of. I think this would be a good bug for the platform or security roadmap, but not a part of the socialapi scope.

Even if we sign our URLs and somehow ensure that they are 100% unchangeable, an addon or external process need only change the proxy settings of Firefox (unsigned pref settings on disk), or of the underlying OS, in order to MITM the socialapi, as well as any other web content loaded into the browser. Abusing the proxy settings would be much simpler than abusing the socialapi. As well, once the URL is set on the worker iframe, or any social content panel, there is no way to prevent an addon from simply changing that URL to something else.

"""
Since we anticipate companies wanting to use their own internal socialapi providers for communication between employees, we should provide a way to deploy socialapi within a company.
"""

The socialapi feature requires that users are able to install social providers from any website they choose (barring safebrowsing, invalid ssl certs, etc). We will not be able to require that users install a public key prior to install of a new social provider. For installing new social providers, the safeguards in place include: requiring valid ssl certs, safebrowsing checks, and same-origin policy for any urls in the manifest. Bug 756591 asks whether this is sufficient; if we have to implement a new service to support blacklisting, it will have to be on a future roadmap. For the initial landing, it is likely we will not include the ability to install new providers, but we will ensure that developers can easily add new providers via preferences or addons that set those preferences.

== more specific answers ==

-----------

"""
> the sandbox is attached to the iframe created in the step above.

Which direction is this sandboxed from?
"""

I'm not really sure how to answer what direction; the use of Cu.Sandbox allows chrome to inject code for content to use, some of which may presumably safely call back into chrome functionality. IMHO the question here is whether we have used the sandbox correctly. We had the code looked over in bug 751241, and further again by ddahl (more an off-the-record review for a question I had). As part of the full code review, the sandbox use should be scrutinized. If there are risk problems with the sandbox itself, that needs to go to the JavaScript engine team.

-----------

"""
1. What happens if the manifest file could point to a local file:// resource or UNC or other fileshare resource instead of a remote HTTPS resource? If a manifest is retrieved from a local file:// uri and only exists to start a malicious js process, such as to implement a javascript portscanner and sending the results to a website
"""

Currently, code loaded from a manifest is sandboxed with a smaller API than what is available to normal web content in a browser tab. I suppose it would be possible to create a port scanner somehow using WebSocket, but if so, that is a platform security issue that is outside the domain of the socialapi. Even if the code had full access to the normal iframe content, it is still controlled by iframe content policy enforced at the platform layer.

Builtin providers are allowed to point to file system resources via the resource scheme, which is necessary to implement the feature. They must provide an origin value, which any non-resource URIs are resolved against.

-----------

"""
> we create an iframe (for each provider) on the hidden window with the src attribute set to workerURL from the providers
> manifest. The content retrieved is copied and eval'd in the sandbox (below).

What prevents javascript or other active content types from running in the initial hidden window?
"""

I'm not clear on the question, but I interpret it as you being concerned with content from websites having access to Firefox internals. The remote code is loaded into a sandboxed content iframe without access to chrome privileges or the hidden XUL window.

-----------

"""
2. What if the malicious manifest is hosted on ftp instead of http? Would this be processed by the browser? How about other uri types?
"""

Right now (and likely not for the initial landing), the only way to install a remote manifest file (other than via a malicious addon) is by browsing to a website that has a "link rel=manifest href=path" tag in the html head section. That link path must be same-origin to the page containing it, and urls within the manifest must be same-origin as the manifest file itself. The channel must be secure with a valid ssl cert. The safebrowsing check is just an additional check prior to these measures.

-----------

"""
Regarding active content types: Do we want to allow other active content types other than js in the content sandbox? java, flash are the major attack vectors.
1. Block outright?
2. Block old versions?
Just because we handle them in the main browser window in one way does not imply that we should handle them the same way in a restricted sandbox.
"""

Cu.Sandbox runs JavaScript, and we currently don't provide DOM access, even though I want to change that. Without DOM access, you cannot include the plugins to run Java, Flash, etc. Even with DOM access, we can disable those, and I was intending to add those few lines of code regardless; I just added bug 764215 for that.

-----------

"""
With regard to Bug 756588 - testcase: non-responsive worker, Simon and I agree that it should be a blocker.
"""

I have to push back on this. Cu.Sandbox doesn't provide a way for us to test for non-responsive scripts, so making this a blocker will prevent the feature from moving forward at this time. I think that kind of functionality should actually be integral to Cu.Sandbox itself rather than the features utilizing it; it would be a good addition, but something for the JS engine. Placing the non-responsive test at the sandbox level would also provide that protection to the many places sandbox is used throughout Firefox. The real worker implementation should probably be in scope for that as well.

-----------

"""
Functionality question: Have you considered using sync to replicate a user's socialapi configuration? Is this viable?
"""

Moving to using prefs for storing the social provider metadata gives us Sync for free.
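For illustration, a minimal sketch of the discovery and install-time checks described above (same-origin "link rel=manifest" in the page head, https only); the function names are placeholders, and the safebrowsing check and cert validation are left to the caller.

// Sketch of discovering a provider manifest via a <link rel="manifest"> tag
// and applying the same-origin + https checks described above (placeholder
// names; not the actual socialapi install code).
Components.utils.import("resource://gre/modules/Services.jsm");

function discoverManifestURL(contentDocument) {
  var link = contentDocument.querySelector('head > link[rel="manifest"]');
  if (!link)
    return null;

  var pageURI = Services.io.newURI(contentDocument.location.href, null, null);
  var manifestURI = Services.io.newURI(link.href, null, null);   // href resolves relative paths

  // Must be same-origin with the page that advertised it, and https only;
  // the caller still performs the safebrowsing check and cert validation.
  if (manifestURI.scheme !== "https" || manifestURI.prePath !== pageURI.prePath)
    return null;

  return manifestURI.spec;
}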
Thanks for your detailed response. You're right, the bug is getting too dense. I'm going to start moving some of this stuff to https://wiki.mozilla.org/Security/Reviews/SocialAPI, merging what we've done so far on this thread, so we can get together to identify what to put into separate bugs as blockers. Once I do, agreed - another call would definitely help sort this stuff out!
Shane/Michael - there's a lot to digest in there; let Simon and me know when you are ready to meet to discuss.
Depends on: 771346
Depends on: 771352
Depends on: 771353
BTW, for the people who hate Social or value privacy, will it be optional and fully under the user's control? (Better: fully disabled?)
My understanding is that it should default to "off."

https://wiki.mozilla.org/Security/Reviews/SocialAPI

"Activating the Feature
Our intent is that the entire system defaults to "off". We would like a social service provider to have the power to turn the feature on, for its own domain, while the user is visiting their site. I suggest that this be implemented as: On pages whose domain matches the URLPrefix of an installed service provider, a JS function ("activateSocialBrowsing") is enabled. Calling this function prompts the user with a "want to turn on social browsing?" panel; if selected, this enables the feature and selects the current provider. If the user declines to turn it on, we should have the option to remember this choice and not present the panel in future."

Can someone confirm? Shane? Gavin?
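To illustrate the quoted opt-in proposal, a purely hypothetical provider-page snippet; the wiki only names the activateSocialBrowsing function, and how it would actually be exposed to content is an assumption here.

// Hypothetical provider-page usage of the proposed opt-in flow (illustrative
// only; the exposure mechanism is not specified in the proposal).
if (typeof window.activateSocialBrowsing === "function") {
  // Only defined on pages whose domain matches an installed provider's URLPrefix.
  document.getElementById("enable-social").addEventListener("click", function () {
    // Prompts the user with the "want to turn on social browsing?" panel.
    window.activateSocialBrowsing();
  }, false);
}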
Silius: https://bugzilla.mozilla.org/show_bug.cgi?id=764869

"For the initial implementation of social features, we're planning to have a hardcoded list of social providers. The functionality will be disabled by default and will require user opt-in to be enabled. We need to find the best way to provide that opt-in, ideally only to existing users of the social providers in question."

I'm reviewing that bug right now.
(In reply to Adam Muntner :adamm from comment #27)
> My understanding is that it should default to "off."
>
> Can someone confirm? Shane? Gavin?

That is my understanding as well.
https://bugzilla.mozilla.org/show_bug.cgi?id=770679

Just wanted to note this bug here, so I can revisit it during a future security code review/blackbox.
Revisit during code review/blackbox: expose a reduced 'navigator' object to frameworker workers - https://bugzilla.mozilla.org/show_bug.cgi?id=773160
No longer depends on: 766622
I moved bug 766622 ("visual cue for security of sidebar", https://bugzilla.mozilla.org/show_bug.cgi?id=766622) so that it now blocks bug 755136 instead, but am open to managing it differently.
Assignee: curtisk → amuntner
Changing to RESOLVED FIXED as we did complete a secreview; will change to VERIFIED FIXED when all dependent bugs are closed.
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Whiteboard: [secreview completed][start 05/18/2012][target mm/dd/yyyy][triage needed 2012.05.09] → [secreview completed][start 05/18/2012][target mm/dd/yyyy]