Closed Bug 1037560 Opened 10 years ago Closed 8 years ago

Safebrowsing pleasereset resets all tables

Categories

(Toolkit :: Safe Browsing, defect, P2)

x86_64
Linux
defect

Tracking


VERIFIED FIXED
mozilla51
Tracking Status
firefox51 --- verified

People

(Reporter: mmc, Assigned: dimi)

References

Details

(Whiteboard: #sbv4-m1)

Attachments

(3 files, 4 obsolete files)

If the server sends back pleasereset, the client resets all tables: http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp#562. Since we now allow per-table update URLs, it does not make sense for one server to be able to reset tables managed by another server.
If we run into problems serving updates for tracking protection, we need to be able to reset that table without affecting the phishing and malware tables.
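As an illustration of the intended fix, here is a minimal standalone C++ sketch (not the actual Gecko code; the function names and table names are made up for the example): a pleasereset answer should only clear the tables that were named in the update request that triggered it, so one provider cannot wipe another provider's lists.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Each Safe Browsing table and the chunk data it currently holds
// (a stand-in for the on-disk HashStore/LookupCache files).
std::map<std::string, std::string> gTables = {
    {"goog-phish-shavar",      "a:1-1000"},
    {"goog-malware-shavar",    "a:1-800"},
    {"mozstd-track-digest256", "a:1-200"},  // tracking protection list
};

// Old behaviour: a "pleasereset" from any provider wipes everything.
void ResetAllTables() {
  gTables.clear();
}

// Desired behaviour: only reset the tables that were named in the
// update request that received the "pleasereset" answer.
void ResetRequestedTables(const std::vector<std::string>& aRequestTables) {
  for (const auto& table : aRequestTables) {
    gTables.erase(table);
  }
}

int main() {
  // A Google update request lists only Google-managed tables, so a
  // pleasereset from that server must not touch the Mozilla TP table.
  ResetRequestedTables({"goog-phish-shavar", "goog-malware-shavar"});

  for (const auto& entry : gTables) {
    std::cout << entry.first << " survived the reset\n";  // mozstd-track-digest256
  }
  return 0;
}
```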
Blocks: 1029886
The scenario in comment 2 will never happen for tracking protection because pleasereset is not implemented on our server. However, we are still subject to it from other providers (Google), and as a safety mechanism we should not allow one service provider to cause Firefox to re-download everything from another service provider.
Assignee: nobody → gpascutto
Status: NEW → ASSIGNED
QA Contact: mwobensmith
Component: DOM: Security → Safe Browsing
Product: Core → Toolkit
Assignee: gpascutto → dlee
Priority: -- → P2
Hi Francois, I guess the current implementation for v2 would look like the following; please correct me if anything is wrong.

1. Receive 'pleasereset' [1] and get the provider that sent the 'pleasereset' response.
2. Get all the tables belonging to that provider by reading the preference 'browser.safebrowsing.provider.' + provider + '.lists'.
3. Remove the HashStore, LookupCache, and disk files for those tables (implemented in Classifier.cpp).

Steps 1 and 3 would run on the worker thread, while step 2 has to run on the main thread because it accesses preferences, so this approach requires passing jobs between the worker thread and the main thread, which makes the code a bit complicated.

If we consider a v2/v4 coexistence implementation, then according to the v4 provider discussion at the work week (adding a google4 provider for v4) and bug 1254763, we will have a directory per provider (google, mozilla, google4, etc.). When we receive a 'pleasereset' response from a v2 server, we can use the provider information and scan that provider's directory to get all of its tables [2], and then delete the data accordingly. That way we won't have to bother the main thread just to get the list of tables.

So maybe it would be better to work on this bug after bug 1254763 is fixed (i.e. implement this together with v4)? Do you have any suggestions? Thanks!

[1] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp#545
[2] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/Classifier.cpp#423
Flags: needinfo?(francois)
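For illustration, here is a minimal standalone C++ sketch of step 2 of the prefs-based approach above (not the Gecko code; the helper name and pref value are examples): the provider's 'lists' preference is a comma-separated string of table names, and splitting it yields the tables a pleasereset from that provider would be allowed to reset.

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Step 2 of the approach above: the "browser.safebrowsing.provider.<provider>.lists"
// preference holds a comma-separated list of table names. The value below is only an
// example; in Gecko, reading the preference has to happen on the main thread.
std::vector<std::string> TablesForProvider(const std::string& aPrefValue) {
  std::vector<std::string> tables;
  std::stringstream stream(aPrefValue);
  std::string table;
  while (std::getline(stream, table, ',')) {
    if (!table.empty()) {
      tables.push_back(table);
    }
  }
  return tables;
}

int main() {
  const std::string prefValue =
      "goog-badbinurl-shavar,goog-downloadwhite-digest256,goog-phish-shavar";
  for (const auto& table : TablesForProvider(prefValue)) {
    std::cout << "would reset: " << table << '\n';
  }
  return 0;
}
```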
Attached patch WIP patch (obsolete) (deleted) — Splinter Review
This is a WIP patch; the approach here is the one mentioned in comment 3 (passing jobs between the main thread and the worker thread).
After checking with Henry: bug 1254763 will only handle the v4 protocol, and we won't migrate existing v2 tables, so the approach I suggested in comment 3 won't work. I will use the original method unless I find a better way to do it.
Flags: needinfo?(francois)
Attached patch P1. Safebrowsing pleasereset resets all tables (obsolete) (deleted) — Splinter Review
With this patch, 'pleasereset' will only reset the tables included in the update request.
Attachment #8765789 - Attachment is obsolete: true
Attachment #8771327 - Flags: feedback?(hchang)
Attachment #8771328 - Flags: feedback?(hchang)
Comment on attachment 8771327 [details] [diff] [review]
P1. Safebrowsing pleasereset resets all tables

Review of attachment 8771327 [details] [diff] [review]:
-----------------------------------------------------------------

Very neat code. Thanks!

::: toolkit/components/url-classifier/Classifier.cpp
@@ +213,5 @@
> +    }
> +  }
> +  NS_ENSURE_SUCCESS(rv, rv);
> +
> +  return NS_OK;

Can we just "return rv;" here?
Attachment #8771327 - Flags: feedback?(hchang) → feedback+
Comment on attachment 8771328 [details] [diff] [review] P2. Modify testcase to test pleasereset should not reset all tables Review of attachment 8771328 [details] [diff] [review]: ----------------------------------------------------------------- Looks good to me. Thanks!
Attachment #8771328 - Flags: feedback?(hchang) → feedback+
Attached patch P1. Safebrowsing pleasereset resets all tables (obsolete) (deleted) — Splinter Review
Address Henry's suggestion in comment 8.
Attachment #8771327 - Attachment is obsolete: true
Attachment #8771969 - Flags: review?(francois)
Attachment #8771328 - Flags: review?(francois)
Comment on attachment 8772234 [details]
Bug 1037560 - P1. Safebrowsing pleasereset resets all tables.

https://reviewboard.mozilla.org/r/65086/#review62844

::: toolkit/components/url-classifier/Classifier.cpp:215
(Diff revision 1)
> +        NS_WARNING(nsPrintfCString("Fail to remove file %s from the disk",
> +                                   leafName.get()).get());
> +      }
> +    }
> +  }
> +  NS_ENSURE_SUCCESS(rv, rv);

This is defined to return nsresult, but it will silently drop some errors. It's also unclear what the caller could do with the error code (all the current callers ignore it). I'd suggest making it void.
Attachment #8772234 - Flags: review+
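A small standalone C++ sketch of the error-handling shape the review suggests (this is not the actual Classifier.cpp patch; the helper name, file extensions, and paths are illustrative): the removal helper returns void and merely warns on individual failures, instead of returning an nsresult that every caller ignores.

```cpp
#include <filesystem>
#include <iostream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Remove the on-disk store files for the given tables. Returns void and only
// warns (the equivalent of NS_WARNING) when a file cannot be removed, rather
// than propagating an error code nobody acts on.
void RemoveTableFiles(const fs::path& aSafeBrowsingDir,
                      const std::vector<std::string>& aTables) {
  for (const auto& table : aTables) {
    // Extensions are illustrative of the per-table store files.
    for (const char* extension : {".sbstore", ".pset", ".cache"}) {
      fs::path file = aSafeBrowsingDir / (table + extension);
      std::error_code ec;
      if (fs::exists(file, ec) && !fs::remove(file, ec)) {
        std::cerr << "Fail to remove file " << file << " from the disk\n";
      }
    }
  }
}

int main() {
  RemoveTableFiles(fs::temp_directory_path(), {"test-phish-simple"});
  return 0;
}
```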
Comment on attachment 8772235 [details] Bug 1037560 - P2. Modify testcase to test pleasereset should not reset all tables. https://reviewboard.mozilla.org/r/65088/#review62848
Comment on attachment 8772235 [details]
Bug 1037560 - P2. Modify testcase to test pleasereset should not reset all tables.

https://reviewboard.mozilla.org/r/65088/#review62978

I found the existing test very confusing, so I think we should take this opportunity to describe it better in comments. I've suggested a few improvements below. Also, I'm not sure the callback control flow is quite right.

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:198
(Diff revision 1)
>   }
> };
>
> // Tests a database reset request.
> function testReset() {
> +  // moz-* tables should not be reset when the update request

Replace this comment with: "The moz-phish-simple table is populated separately from the other update in a separate update request. Therefore it should not be reset when we run the updates later in this function."

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:209
(Diff revision 1)
> +    "urls" : mozAddUrls
> +  }]);
> +
> +  var dataUpdate = "data:," + encodeURIComponent(mozUpdate);
> +
> +  streamUpdater.downloadUpdates("moz-phish-simple", "",

Are we sure that this will terminate before the rest of the function?

This looks like callbacks/promises to me, they could run out of order, in which case the reset could happen before the updates are applied in moz-phish-simple.

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:229
(Diff revision 1)
>   { "chunkNum" : 3,
>     "urls" : addUrls3
>   }]);
>
>   var assertions = {
> -    "tableData" : "test-phish-simple;a:3",
> +    "tableData" : "moz-phish-simple;a:1\ntest-phish-simple;a:3",

I would add this comment at the end of the line: // tables that should still be there

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:230
(Diff revision 1)
>     "urls" : addUrls3
>   }]);
>
>   var assertions = {
> -    "tableData" : "test-phish-simple;a:3",
> +    "tableData" : "moz-phish-simple;a:1\ntest-phish-simple;a:3",
>     "urlsExist" : addUrls3,

Shouldn't we check that mozAddUrls exist as well after the reset has happened?

Also, I would add this comment to the end of the line: // addUrls3 added after the reset

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:231
(Diff revision 1)
>   }]);
>
>   var assertions = {
> -    "tableData" : "test-phish-simple;a:3",
> +    "tableData" : "moz-phish-simple;a:1\ntest-phish-simple;a:3",
>     "urlsExist" : addUrls3,
>     "urlsDontExist" : addUrls1

I would add this comment to the end of the line: // addUrls1 added prior to the reset

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:234
(Diff revision 1)
> -    "tableData" : "test-phish-simple;a:3",
> +    "tableData" : "moz-phish-simple;a:1\ntest-phish-simple;a:3",
>     "urlsExist" : addUrls3,
>     "urlsDontExist" : addUrls1
>   };
>
>   doTest([update1, update2, update3], assertions, false);

Add this comment before this line: // Use these update responses in order. The update request only contains test-*-simple tables so the reset will only apply to these.

::: toolkit/components/url-classifier/tests/unit/test_streamupdater.js:236
(Diff revision 1)
>     "urlsDontExist" : addUrls1
>   };
>
>   doTest([update1, update2, update3], assertions, false);
> +
> +  do_register_cleanup(function() {

Does this have to go at the end of the function? Maybe it would be clearer if all of the moz-phish-simple setup was done in a block at the beginning of this function.
Attachment #8772235 - Flags: review?(francois) → review-
Attachment #8772234 - Flags: review?(francois)
Thanks for your suggestions!

(In reply to François Marier [:francois] from comment #15)
> Are we sure that this will terminate before the rest of the function?
>
> This looks like callbacks/promises to me, they could run out of order, in
> which case the reset could happen before the updates are applied in
> moz-phish-simple.

The updates for moz-phish-simple and test-*-simple will be applied in order, because the second update is kept pending until the first one has finished [1][2].

[1] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp#214
[2] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp#424
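To make the ordering argument concrete, here is a simplified standalone C++ model of that queueing behaviour (not the real nsUrlClassifierStreamUpdater; the class and method names are invented for the sketch): a request issued while an update is in flight is queued and only started once the current update finishes, so the two downloadUpdates() calls in the test cannot interleave.

```cpp
#include <iostream>
#include <queue>
#include <string>

// Simplified model of a stream updater that serializes update requests:
// while one update is in flight, later requests are queued and only started
// when the current one has been fully applied.
class StreamUpdaterModel {
 public:
  void DownloadUpdates(const std::string& aTables) {
    if (mIsUpdating) {
      std::cout << "pending update, queued request for " << aTables << '\n';
      mPending.push(aTables);
      return;
    }
    Start(aTables);
  }

  // Called when the in-flight update has been fully applied.
  void UpdateFinished() {
    std::cout << "update finished\n";
    mIsUpdating = false;
    if (!mPending.empty()) {
      std::string next = mPending.front();
      mPending.pop();
      Start(next);
    }
  }

 private:
  void Start(const std::string& aTables) {
    mIsUpdating = true;
    std::cout << "starting update for " << aTables << '\n';
  }

  bool mIsUpdating = false;
  std::queue<std::string> mPending;
};

int main() {
  StreamUpdaterModel updater;
  updater.DownloadUpdates("moz-phish-simple");   // starts immediately
  updater.DownloadUpdates("test-phish-simple");  // queued until the first finishes
  updater.UpdateFinished();                      // now the queued request starts
  updater.UpdateFinished();
  return 0;
}
```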
Comment on attachment 8772234 [details] Bug 1037560 - P1. Safebrowsing pleasereset resets all tables. Review request updated; see interdiff: https://reviewboard.mozilla.org/r/65086/diff/1-2/
Attachment #8772235 - Flags: review- → review?(francois)
Comment on attachment 8772235 [details] Bug 1037560 - P2. Modify testcase to test pleasereset should not reset all tables. Review request updated; see interdiff: https://reviewboard.mozilla.org/r/65088/diff/1-2/
Whiteboard: #sbv4-m1
Comment on attachment 8772235 [details] Bug 1037560 - P2. Modify testcase to test pleasereset should not reset all tables. https://reviewboard.mozilla.org/r/65088/#review65150 Thanks for improving the test Dimi. It's much easier to read now!
Attachment #8772235 - Flags: review?(francois) → review+
Attachment #8771328 - Attachment is obsolete: true
Attachment #8771969 - Attachment is obsolete: true
Pushed by ryanvm@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/28f815fd2675
P1. Safebrowsing pleasereset resets all tables. r=gcp
https://hg.mozilla.org/integration/autoland/rev/9171cae4d729
P2. Modify testcase to test pleasereset should not reset all tables. r=francois,gcp
Keywords: checkin-needed
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla51
François, can you suggest a way to verify that this bug was fixed? I see the test here, but was wondering if you felt there were other ways to verify that this works as expected. Other than simulating the safe browsing service, I am not sure how I'd go about it.
Flags: needinfo?(francois)
(In reply to Matt Wobensmith [:mwobensmith][:matt:] from comment #25)
> François, can you suggest a way to verify that this bug was fixed? I see the
> test here, but was wondering if you felt there were other ways to verify
> that this works as expected. Other than simulating the safe browsing
> service, I am not sure how I'd go about it.

I would do the following:

1. Create a shavar server that responds with the reset command whenever there is an update.
2. Start Firefox and let it download the TP list.
3. Replace the update URL for Google with the local server.
4. Trigger an update of the Google lists.
5. Look at the console logs (browser.safebrowsing.debug) to see if the TP list gets deleted and re-downloaded instead of just getting a 204 like normal.

In a release prior to this bugfix, the fake Google server should wipe the TP list. After this fix, the Google lists should be wiped but not the Mozilla ones.
Flags: needinfo?(francois)
(In reply to François Marier [:francois] from comment #26)
> 1. Create a shavar server that responds with the reset command whenever
>    there is an update.

Thanks for all of this. Can you define "reset command" for me? I tried looking at the Google shavar documentation that I used to test with before, but I'm getting some kind of permission error upon accessing the document.

I assume this is one of the parameters that is sent with the shavar response, but tell me if I'm wrong.
Flags: needinfo?(francois)
(In reply to Matt Wobensmith [:mwobensmith][:matt:] from comment #27)
> Thanks for all of this. Can you define "reset command" for me? I tried
> looking at the Google shavar documentation that I used to test with before
> but I'm getting some kind of permission error upon accessing the document.

Yeah, unfortunately Google took down the V2 docs when they published V4. There's a copy on the Internet Archive though:

https://web.archive.org/web/20160422212049/https://developers.google.com/safe-browsing/developers_guide_v2#HTTPResponseForData

> I assume this is one of the parameters that is sent with the shavar
> response, but tell me if I'm wrong.

Based on reading the above, it's in the response from the shavar server. It would look like:

n:1000
r:pleasereset
Flags: needinfo?(francois)
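For reference, a toy standalone C++ sketch of how a client could detect that reset command in a v2 update response (this is not the actual nsUrlClassifierDBService parser; the function name is made up): the response body is line-based key:value pairs, and an "r:pleasereset" line requests the reset.

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Scan a v2 update response body line by line and report whether the server
// asked for a reset via the "r:pleasereset" command.
bool ResponseAsksForReset(const std::string& aResponseBody) {
  std::istringstream stream(aResponseBody);
  std::string line;
  while (std::getline(stream, line)) {
    if (line == "r:pleasereset") {
      return true;
    }
  }
  return false;
}

int main() {
  const std::string body = "n:1000\nr:pleasereset\n";
  std::cout << (ResponseAsksForReset(body) ? "reset requested" : "normal update")
            << '\n';
  return 0;
}
```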
(In reply to François Marier [:francois] from comment #26)
> 5. Look at the console logs (browser.safebrowsing.debug) to see if the TP
>    list gets deleted and re-downloaded instead of just getting a 204 like
>    normal.
>
> In a release prior to this bugfix, the fake Google server should wipe the TP
> list. After this fix, the Google lists should be wiped but not the Mozilla
> ones.

I'm having a hard time deciphering the activity in the logs to determine if something is indeed being wiped. I don't see any HTTP codes - 204 or otherwise - in this output. Can you give me an example of what we should see before/after, in terms of actual debug output? Or would it be best if I just attached logs here?

Also, to trigger the update, I can either change the google provider's nextUpdateTime or updateURL and it seems to do something. Is that the preferred way to force an update, or is there another way I should be doing this?
Flags: needinfo?(francois)
(In reply to Matt Wobensmith [:mwobensmith][:matt:] from comment #29)
> I'm having a hard time deciphering the activity in the logs to determine if
> something is indeed being wiped. I don't see any HTTP codes - 204 or
> otherwise - in this output.

You're right, you won't see an HTTP status code in the case of successful updates. These are the relevant log lines:

listmanager: 15:50:43 GMT-0700 (PDT): update request: { "tableList": "mozfull-track-digest256,mozstd-trackwhite-digest256,mozplugin-block-digest256", "tableNames": {}, "request": "mozstd-trackwhite-digest256;a:1472162826\nmozfull-track-digest256;a:1471874828\n" }
listmanager: 15:50:43 GMT-0700 (PDT): makeUpdateRequestForEntry_: request mozstd-trackwhite-digest256;a:1472162826 mozfull-track-digest256;a:1471874828 update: https://shavar.services.mozilla.com/downloads?client=navclient-auto-ffox&appver=48.0.1&pver=2.2 tablelist: mozfull-track-digest256,mozstd-trackwhite-digest256,mozplugin-block-digest256
listmanager: 15:50:43 GMT-0700 (PDT): pending update, queued request until later
...
listmanager: 15:50:48 GMT-0700 (PDT): update success for mozfull-track-digest256,mozstd-trackwhite-digest256,mozplugin-block-digest256 from https://shavar.services.mozilla.com/downloads?client=navclient-auto-ffox&appver=48.0.1&pver=2.2: 3600

If you get an error, you'll see something like:

listmanager: 11:50:40 GMT-0700 (PDT): pending update, queued request until later
listmanager: 11:50:40 GMT-0700 (PDT): download error for googpub-phish-proto,goog-malware-proto,goog-unwanted-proto: 400

> Also, to trigger the update, I can either change the google provider's
> nextUpdateTime or updateURL and it seems to do something. Is that the
> preferred way to force an update, or is there another way I should be doing
> this?

I usually set nextupdatetime to 1 and then restart the browser.
Flags: needinfo?(francois)
Attached file logs.zip (deleted) —
Thanks François, that is helpful. While I still don't see that exact behavior in the affected build, I do see differing behavior pre- and post-fix that leads me to believe we've done something right.

I'm uploading two logs. The first one was done with Fx50 from 2016-08-22, the second with a build of Fx51 nightly from 2016-09-15.

I performed all of the steps from comment 26, using a custom HTTP server running on localhost:13001 that serves the pleasereset command. After I perform step 4 from comment 26, I restart the browser and capture the logging activity. I did this on both browsers, each with clean profiles, identical steps.

On the older build with the bug, I see that makeUpdateRequestForEntry_ gets called twice - once for the Google list, and once for the Mozilla TP list. On the newer build with the fix, makeUpdateRequestForEntry_ is only called once - just for the Google list.

Does any of this indicate the desired behavior?
Flags: needinfo?(francois)
Hi Matt,

Sorry, I didn't notice that there was a discussion on this bug. From the log, it seems that the newer build didn't call makeUpdateRequestForEntry_ for the Mozilla TP list because nextupdatetime was not set to 1. At which step did you set that preference?

For me, the best way to verify this is: after step 4 from comment 26, just close the browser and check the profile directory to see whether files such as goog-badbinurl-shavar.sbstore, goog-badbinurl-shavar.pset, goog-xxx ..., etc. are still under the safebrowsing folder.
(In reply to Dimi Lee[:dimi][:dlee] from comment #32)
> For me the best way to verify this is after step4 from comment 26, just
> close the browser and check the profile directory to see if there is still
> goog-badbinurl-shavar.sbstore, goog-badbinurl-shavar.pset goog-xxx ... etc
> files under safebrowsing folder.

Also, the tracking lists from Mozilla should still exist instead of being deleted.
Flags: needinfo?(francois)
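As a convenience for the manual check described above, here is a small standalone C++17 sketch (a hypothetical helper with an example path; not part of Firefox) that lists the .sbstore/.pset files left in a profile's safebrowsing directory, which makes it easy to see that the goog-* files are gone while the moz* files remain.

```cpp
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// List the Safe Browsing store files remaining in the given directory.
// After the fix, a pleasereset from the Google update URL should leave only
// the moz* (tracking protection) files behind.
int main(int argc, char** argv) {
  fs::path dir = argc > 1 ? fs::path(argv[1])
                          : fs::path("profile/safebrowsing");  // example path
  std::error_code ec;
  for (const auto& entry : fs::directory_iterator(dir, ec)) {
    const auto ext = entry.path().extension();
    if (ext == ".sbstore" || ext == ".pset") {
      std::cout << entry.path().filename().string() << '\n';
    }
  }
  if (ec) {
    std::cerr << "could not read " << dir << ": " << ec.message() << '\n';
  }
  return 0;
}
```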
Thanks Dimi and Francois for your continued help and patience. I finally observed the correct behavior. The easiest way was to view the lists on the local file system while performing the steps above. Before the fix, the pleasereset command from the Google updateURL deleted all lists, while post-fix, only the Google list files were deleted. Interestingly, if I used the same updateURL for the Mozilla TP list, I did not observe the same behavior; it did not delete any lists. Perhaps we never supported pleasereset on the TP side, but I'm guessing that's irrelevant. Glad that you fixed this bug!
Status: RESOLVED → VERIFIED