Closed Bug 904607 Opened 11 years ago Closed 11 years ago

Write protocol parser for digest256 list format

Categories

(Toolkit :: Safe Browsing, defect)

Platform: x86_64 Linux
Severity: normal
Priority: Not set

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla26

People

(Reporter: mmc, Assigned: mmc)

References

(Blocks 1 open bug)

Details

Attachments

(1 file, 4 obsolete files)

See gcp's comment from the original bug.

+++ This bug was initially created as a clone of Bug #842828 +++

In Chrome, before the target file is set, there's a call to check locally whether the download already has a verdict (good or bad) before calling out to the application reputation API:

http://code.google.com/searchframe#OAMlx_jo-ck/src/chrome/browser/download/chrome_download_manager_delegate.cc&q=checkdownloadurl%20file:%5Esrc/chrome/browser/&l=222

We should do the same thing. This also involves changing the safebrowsing code to download the allowlists and blocklists that Chrome is using.
Blocks: 842828
No longer depends on: 842828
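To make the intended ordering concrete, here is a minimal C++ sketch of the local-verdict-first check described in comment 0. The helper names (LookupLocalLists, QueryApplicationReputation) are hypothetical stand-ins, not the actual Chrome or Gecko entry points:

#include <string>

enum class Verdict { Unknown, Good, Bad };

// Hypothetical: consult the locally downloaded digest256 allowlists and
// blocklists; returns Unknown when the URL is on neither list.
static Verdict LookupLocalLists(const std::string& /*aUrl*/) {
  return Verdict::Unknown;  // stub
}

// Hypothetical: the remote application reputation call.
static Verdict QueryApplicationReputation(const std::string& /*aUrl*/) {
  return Verdict::Good;  // stub
}

Verdict CheckDownloadUrl(const std::string& aUrl) {
  // Check the local lists first; a local verdict avoids a network round-trip.
  Verdict local = LookupLocalLists(aUrl);
  if (local != Verdict::Unknown) {
    return local;
  }
  // No local verdict: fall through to the reputation service.
  return QueryApplicationReputation(aUrl);
}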
New format: sha256 List Format

The "sha256" list format uses (not surprisingly) 256-bit sha256 hashes in binary format. For add chunks, the encoded chunk data is a simple list of one or more sha256 hashes in binary format. For sub chunks, the encoded chunk data is similar, except that each hash is preceded by the number of the add chunk that contains the hash to be subbed.

ADD-DATA    = (HASH)+
SUB-DATA    = (ADDCHUNKNUM HASH)+
HASH        = <32 unsigned bytes>
ADDCHUNKNUM = <4 byte unsigned integer in network byte order>

Old format from https://developers.google.com/safe-browsing/developers_guide_v2#ShavarListFormat:

ADD-DATA = (HOSTKEY COUNT [PREFIX]*)+
HOSTKEY  = <4 unsigned bytes>  # 32-bit hash prefix
COUNT    = <1 unsigned byte>
PREFIX   = <HASHLEN unsigned bytes>

SUB-DATA    = (HOSTKEY COUNT (ADDCHUNKNUM | (ADDCHUNKNUM PREFIX)+))+
HOSTKEY     = <4 unsigned bytes>  # 32-bit hash prefix
COUNT       = <1 unsigned byte>
ADDCHUNKNUM = <4 byte unsigned integer in network byte order>
PREFIX      = <HASHLEN unsigned bytes>
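For illustration, a standalone C++ sketch (not the Firefox implementation; names are made up) of decoding chunk data in the sha256 format above. Add chunks are a flat run of 32-byte hashes; sub chunks are 36-byte records pairing a big-endian 4-byte add chunk number with each hash:

#include <arpa/inet.h>  // ntohl (POSIX)
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

static const size_t kHashSize = 32;     // HASH = <32 unsigned bytes>
static const size_t kChunkNumSize = 4;  // ADDCHUNKNUM = 4 bytes, network order

struct Hash256 { uint8_t bytes[kHashSize]; };
struct SubEntry { uint32_t addChunk; Hash256 hash; };

// ADD-DATA = (HASH)+  -- one or more 32-byte hashes, back to back.
bool ParseAddData(const std::string& aData, std::vector<Hash256>* aOut) {
  if (aData.empty() || aData.size() % kHashSize != 0)
    return false;  // malformed chunk
  for (size_t i = 0; i < aData.size(); i += kHashSize) {
    Hash256 h;
    memcpy(h.bytes, aData.data() + i, kHashSize);
    aOut->push_back(h);
  }
  return true;
}

// SUB-DATA = (ADDCHUNKNUM HASH)+  -- 36-byte records.
bool ParseSubData(const std::string& aData, std::vector<SubEntry>* aOut) {
  const size_t kRecordSize = kChunkNumSize + kHashSize;
  if (aData.empty() || aData.size() % kRecordSize != 0)
    return false;  // malformed chunk
  for (size_t i = 0; i < aData.size(); i += kRecordSize) {
    SubEntry e;
    uint32_t num;
    memcpy(&num, aData.data() + i, kChunkNumSize);
    e.addChunk = ntohl(num);  // wire format is big-endian
    memcpy(e.hash.bytes, aData.data() + i + kChunkNumSize, kHashSize);
    aOut->push_back(e);
  }
  return true;
}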
Assignee: nobody → mmc
Comment on attachment 797434 [details] [diff] [review]
Protocol parser for digest256 tables

Review of attachment 797434 [details] [diff] [review]:
-----------------------------------------------------------------

Almost ready for review, I want to fix one test and clean up first.
Attachment #797434 - Attachment is obsolete: true
Comment on attachment 797569 [details] [diff] [review]
Protocol parser for digest256 tables

Hi Gian-Carlo,

Sorry for the delay. This change modifies ProtocolParser.cpp to add ProcessDigestChunk for the new list format, and tests that change by running a local http server that serves updates I downloaded manually from the safebrowsing service. If you like this approach, I'd like to eventually convert test_streamupdater.js to use it -- that test uses a lot of non-production code and takes 30 seconds to run.

Many of these changes are comment-only; as I was finding my way around this code, I noticed many interfaces that are only used for testing.

Thanks,
Monica
Attachment #797569 - Flags: review?(gpascutto)
Attached patch Add protocol parser for -digest256 lists (obsolete) (deleted) — Splinter Review
Forgot test data files.
Attachment #797569 - Attachment is obsolete: true
Attachment #797569 - Flags: review?(gpascutto)
Attached patch Add protocol parser for -digest256 lists (obsolete) (deleted) — Splinter Review
Forgot test data files.
Attachment #797583 - Flags: review?(gpascutto)
Attachment #797576 - Attachment is obsolete: true
Comment on attachment 797583 [details] [diff] [review]
Add protocol parser for -digest256 lists

Review of attachment 797583 [details] [diff] [review]:
-----------------------------------------------------------------

Hmm, the raw attachment shows digest1.chunk but the navigation page does not. Sorry for the noise, I think they are really there.
Comment on attachment 797583 [details] [diff] [review]
Add protocol parser for -digest256 lists

Review of attachment 797583 [details] [diff] [review]:
-----------------------------------------------------------------

Commented where I could. Looks good to me, but I don't think we want to land this as is? The "XXX-like" comments are a bit strange to just add -- do you plan to fix them in followup commits, or do you want to create split bugs for them?

::: netwerk/base/public/nsIURIClassifier.idl
@@ +41,5 @@
>    * @param aCallback
>    *        The URI classifier will call this callback when the URI has been
>    *        classified.
>    *
> +  * I would like to change this crazy interface to always call the callback.

Stray comment?

::: toolkit/components/url-classifier/Classifier.cpp
@@ +572,5 @@
>    }
>
>    if (!validupdates) {
> +    LOG(("Classifier::ApplyTableUpdates(%s) no valid updates",
> +         PromiseFlatCString(aTable).get()));

IIRC this case can happen if the update was only valid for one table, so it's not a real error. We want to bail early so as not to open the store needlessly. (Just pointing out this isn't an error condition.)

::: toolkit/components/url-classifier/Entries.h
@@ +3,5 @@
>   * License, v. 2.0. If a copy of the MPL was not distributed with this
>   * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
>
> +// This header file defines the storage types of the actual safebrowsing
> +// chunk data, which may be variable length hash prefixes. Chunk numbers are

They're not really "variable" length. There are two lengths in use, Prefix (32-bit) and Completion (256-bit).

@@ +160,5 @@
>  };
>
>  struct AddComplete {
> +  // Please never use unions. They are always a sign that something nasty is
> +  // about to happen.

Let's fix it then :) This probably just needs a ToPrefix() method or something.

::: toolkit/components/url-classifier/HashStore.cpp
@@ +324,5 @@
>    const uint32_t CHECKSUM_SIZE = 16;
>
> +  // Never use nsICryptoHash or MD5 in native code. Use PSM/NSS and any other
> +  // secure hash function instead. This sure looks like main thread blocking
> +  // IO.

This is a file integrity check, so a "secure" hash wasn't needed - it could have been CRC32 instead. All of this runs on a background thread, so why would it block the main thread?

@@ +387,5 @@
>  nsresult
>  HashStore::ReadHashes()
>  {
>    if (!mInputStream) {
> +    // How is this possibly ok?

Looks like an ugly leftover to me. If none of the callers gives any indication why that could be sane, let's axe it.

@@ +517,5 @@
> +    nsAutoCString checking;
> +    begin->CompleteHash().ToHexString(checking);
> +    LOG(("Updating hash %s (%X)", checking.get(), begin->CompleteHash().ToUint32()));
> +    begin++;
> +  }

This should be in a separate function, and probably in #ifdef DEBUG?

::: toolkit/components/url-classifier/HashStore.h
@@ +95,5 @@
>
>    const nsCString& TableName() const { return mTableName; }
>
>    nsresult Open();
> +  // WTF is this for?

Prefixes (Add data) are stored partly in the PrefixSet (contains the Prefix data organized for fast lookup/low RAM usage) and partly in the HashStore (Add Chunk numbers - only used for updates, slow retrieval). This function joins the separate datasets into one complete prefixes+chunknumbers dataset.

@@ +185,4 @@
>    AddPrefixArray  mAddPrefixes;
> +  SubPrefixArray  mSubPrefixes;
> +
> +  // Fake completion data that seems to be used only in unittesting.

Yes, but only because of bug 806422 - i.e. a protocol change that's not documented aside from that bug.

::: toolkit/components/url-classifier/LookupCache.cpp
@@ +192,5 @@
>                    bool* aHas, bool* aComplete)
>  {
>    *aHas = *aComplete = false;
>
> +  uint32_t prefix = htonl(aCompletion.ToUint32());

Why the htonl here? The code assumes the Prefix data is stored in native byte order everywhere.

::: toolkit/components/url-classifier/ProtocolParser.h
@@ +70,5 @@
>                               uint32_t *aStart);
>    nsresult ProcessHostSubComplete(uint8_t numEntries, const nsACString& aChunk,
>                                    uint32_t* start);
> +  // Digest256 chunks are very similar to shavar chunks. The difference is that
> +  // they always contain the full 265-bit hash, so there is no need for chunk

typo

::: toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
@@ +1302,5 @@
>      new UrlClassifierLookupCallbackProxy(callback);
>
>    // Queue this lookup and call the lookup function to flush the queue if
>    // necessary.
> +  // Why don't we just call HandlePendingLookups directly?

At a guess, because you're on the wrong (=main) thread and don't want to block?
Attachment #797583 - Flags: review?(gpascutto) → review-
Thanks for the feedback! I am cleaning up the stray comments and found that this change breaks test_partial, which I am debugging now (hopefully the failure is due to the stray htonl you mentioned, which I meant to apply only to logging).
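For context, the stray htonl matters because the PrefixSet stores prefixes in native byte order, so probing it with a byte-swapped value misses every stored entry on little-endian hardware. A tiny standalone demonstration (POSIX htonl; not the Firefox code):

#include <arpa/inet.h>  // htonl (POSIX)
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  // The first 4 bytes of some sha256 hash, as they sit in memory.
  const uint8_t raw[4] = {0xDE, 0xAD, 0xBE, 0xEF};

  uint32_t native;
  memcpy(&native, raw, sizeof(native));  // native order: what the PrefixSet
                                         // is assumed to store
  uint32_t swapped = htonl(native);      // network (big-endian) order

  // On little-endian machines these differ, so a lookup keyed on the
  // swapped value can never match a stored native-order prefix.
  printf("native: %08X swapped: %08X\n", native, swapped);
  return 0;
}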
Comment on attachment 797583 [details] [diff] [review]
Add protocol parser for -digest256 lists

Review of attachment 797583 [details] [diff] [review]:
-----------------------------------------------------------------

Hi Gian-Carlo,

I think I addressed all your comments. Filed bug 912353 to remove ProcessPlaintextChunk. All tests in toolkit/components/url-classifier pass locally; waiting on try.

Thanks,
Monica

::: toolkit/components/url-classifier/Entries.h
@@ +160,5 @@
>  };
>
>  struct AddComplete {
> +  // Please never use unions. They are always a sign that something nasty is
> +  // about to happen.

Mostly fixed. I was not able to get rid of the union in LookupCache, unfortunately.

::: toolkit/components/url-classifier/HashStore.cpp
@@ +324,5 @@
>    const uint32_t CHECKSUM_SIZE = 16;
>
> +  // Never use nsICryptoHash or MD5 in native code. Use PSM/NSS and any other
> +  // secure hash function instead. This sure looks like main thread blocking
> +  // IO.

I thought that nsICryptoHash couldn't be used off the main thread, but that's not the case. It is thread-unsafe, though. https://bugzilla.mozilla.org/show_bug.cgi?id=402295

@@ +517,5 @@
> +    nsAutoCString checking;
> +    begin->CompleteHash().ToHexString(checking);
> +    LOG(("Updating hash %s (%X)", checking.get(), begin->CompleteHash().ToUint32()));
> +    begin++;
> +  }

Cruft, removed.

::: toolkit/components/url-classifier/HashStore.h
@@ +95,5 @@
>
>    const nsCString& TableName() const { return mTableName; }
>
>    nsresult Open();
> +  // WTF is this for?

Added a comment.

::: toolkit/components/url-classifier/LookupCache.cpp
@@ +192,5 @@
>                    bool* aHas, bool* aComplete)
>  {
>    *aHas = *aComplete = false;
>
> +  uint32_t prefix = htonl(aCompletion.ToUint32());

This was a real bug, thanks for catching that.

::: toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
@@ +1302,5 @@
>      new UrlClassifierLookupCallbackProxy(callback);
>
>    // Queue this lookup and call the lookup function to flush the queue if
>    // necessary.
> +  // Why don't we just call HandlePendingLookups directly?

I could be wrong, but I think that mWorkerProxy->Lookup(nullptr) doesn't do anything interesting except call HandlePendingLookups (on this thread).
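For reference, a sketch of the ToPrefix()-style fix gcp suggested for AddComplete: store only the 32-byte completion and derive the 4-byte prefix by copying, instead of aliasing the two through a union. Prefix and Completion are simplified stand-ins here; note the derived prefix is returned by value rather than by reference, which is one reason the union is hard to drop where callers expect a reference:

#include <cstdint>
#include <cstring>

struct Prefix     { uint8_t buf[4]; };
struct Completion { uint8_t buf[32]; };

struct AddComplete {
  Completion complete;   // always hold the full 32-byte hash
  uint32_t addChunk = 0;

  uint32_t Chunk() const { return addChunk; }
  const Completion& CompleteHash() const { return complete; }

  // Derive the prefix on demand instead of aliasing it via a union.
  Prefix PrefixHash() const {
    Prefix p;
    memcpy(p.buf, complete.buf, sizeof(p.buf));
    return p;
  }
};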
Comment on attachment 797583 [details] [diff] [review] Add protocol parser for -digest256 lists ( ># HG changeset patch ># User Monica Chew <mmc@mozilla.com> ># Date 1377821337 25200 ># Thu Aug 29 17:08:57 2013 -0700 ># Node ID fdbfc8daaa66a42430bd1581542cd038a60ebac2 ># Parent 14b1e8c2957e71c58052d9f0162f28637b5731c8 >Bug 904607: Add protocol parser for -digest256 lists (r=gcp). > >diff --git a/netwerk/base/public/nsIURIClassifier.idl b/netwerk/base/public/nsIURIClassifier.idl >--- a/netwerk/base/public/nsIURIClassifier.idl >+++ b/netwerk/base/public/nsIURIClassifier.idl >@@ -29,24 +29,25 @@ interface nsIURIClassifierCallback : nsI > /** > * The URI classifier service checks a URI against lists of phishing > * and malware sites. > */ > [scriptable, uuid(617f1002-ec55-42c4-a7b0-ebb221ba9fa2)] > interface nsIURIClassifier : nsISupports > { > /** >- * Classify a Principal using it's URI, appId and InBrowserElement state. >+ * Classify a Principal using its URI. > * > * @param aPrincipal > * The principal that should be checked by the URI classifier. > * @param aCallback > * The URI classifier will call this callback when the URI has been > * classified. > * >+ * I would like to change this crazy interface to always call the callback. > * @return <code>false</code> if classification is not necessary. The > * callback will not be called. > * <code>true</code> if classification will be performed. The > * callback will be called. > */ > boolean classify(in nsIPrincipal aPrincipal, > in nsIURIClassifierCallback aCallback); > }; >diff --git a/toolkit/components/url-classifier/ChunkSet.h b/toolkit/components/url-classifier/ChunkSet.h >--- a/toolkit/components/url-classifier/ChunkSet.h >+++ b/toolkit/components/url-classifier/ChunkSet.h >@@ -10,19 +10,20 @@ > #include "Entries.h" > #include "nsString.h" > #include "nsTArray.h" > > namespace mozilla { > namespace safebrowsing { > > /** >- * Store the chunks as an array of uint32_t. >- * XXX: We should optimize this further to compress the >- * many consecutive numbers. >+ * Store the chunk numbers as an array of uint32_t. We need chunk numbers in >+ * order to ask for incremental updates from the server. >+ * XXX: We should optimize this further to compress the many consecutive >+ * numbers. > */ > class ChunkSet { > public: > ChunkSet() {} > ~ChunkSet() {} > > nsresult Serialize(nsACString& aStr); > nsresult Set(uint32_t aChunk); >diff --git a/toolkit/components/url-classifier/Classifier.cpp b/toolkit/components/url-classifier/Classifier.cpp >--- a/toolkit/components/url-classifier/Classifier.cpp >+++ b/toolkit/components/url-classifier/Classifier.cpp >@@ -196,17 +196,19 @@ Classifier::TableRequest(nsACString& aRe > } > } > > nsresult > Classifier::Check(const nsACString& aSpec, LookupResultArray& aResults) > { > Telemetry::AutoTimer<Telemetry::URLCLASSIFIER_CL_CHECK_TIME> timer; > >- // Get the set of fragments to look up. >+ // Get the set of fragments based on the url. This is necessary because we >+ // only look up at most 5 URLs per aSpec, even if aSpec has more than 5 >+ // components. 
> nsTArray<nsCString> fragments; > nsresult rv = LookupCache::GetLookupFragments(aSpec, &fragments); > NS_ENSURE_SUCCESS(rv, rv); > > nsTArray<nsCString> activeTables; > ActiveTables(activeTables); > > nsTArray<LookupCache*> cacheArray; >@@ -223,25 +225,26 @@ Classifier::Check(const nsACString& aSpe > for (uint32_t i = 0; i < fragments.Length(); i++) { > Completion lookupHash; > lookupHash.FromPlaintext(fragments[i], mCryptoHash); > > // Get list of host keys to look up > Completion hostKey; > rv = LookupCache::GetKey(fragments[i], &hostKey, mCryptoHash); > if (NS_FAILED(rv)) { >- // Local host on the network >+ // Local host on the network. > continue; > } > > #if DEBUG && defined(PR_LOGGING) > if (LOG_ENABLED()) { > nsAutoCString checking; >- lookupHash.ToString(checking); >- LOG(("Checking %s (%X)", checking.get(), lookupHash.ToUint32())); >+ lookupHash.ToHexString(checking); >+ LOG(("Checking fragment %s, hash %s (%X)", fragments[i].get(), >+ checking.get(), lookupHash.ToUint32())); > } > #endif > for (uint32_t i = 0; i < cacheArray.Length(); i++) { > LookupCache *cache = cacheArray[i]; > bool has, complete; > rv = cache->Has(lookupHash, &has, &complete); > NS_ENSURE_SUCCESS(rv, rv); > if (has) { >@@ -539,41 +542,43 @@ Classifier::RecoverBackups() > > /* > * This will consume+delete updates from the passed nsTArray. > */ > nsresult > Classifier::ApplyTableUpdates(nsTArray<TableUpdate*>* aUpdates, > const nsACString& aTable) > { >- LOG(("Classifier::ApplyTableUpdates(%s)", >- PromiseFlatCString(aTable).get())); >+ LOG(("Classifier::ApplyTableUpdates(%s)", PromiseFlatCString(aTable).get())); > > nsAutoPtr<HashStore> store(new HashStore(aTable, mStoreDirectory)); > > if (!store) > return NS_ERROR_FAILURE; > > // take the quick exit if there is no valid update for us > // (common case) > uint32_t validupdates = 0; > > for (uint32_t i = 0; i < aUpdates->Length(); i++) { > TableUpdate *update = aUpdates->ElementAt(i); > if (!update || !update->TableName().Equals(store->TableName())) > continue; >+ > if (update->Empty()) { > aUpdates->ElementAt(i) = nullptr; > delete update; > continue; > } > validupdates++; > } > > if (!validupdates) { >+ LOG(("Classifier::ApplyTableUpdates(%s) no valid updates", >+ PromiseFlatCString(aTable).get())); > return NS_OK; > } > > nsresult rv = store->Open(); > NS_ENSURE_SUCCESS(rv, rv); > rv = store->BeginUpdate(); > NS_ENSURE_SUCCESS(rv, rv); > >diff --git a/toolkit/components/url-classifier/Entries.h b/toolkit/components/url-classifier/Entries.h >--- a/toolkit/components/url-classifier/Entries.h >+++ b/toolkit/components/url-classifier/Entries.h >@@ -1,13 +1,17 @@ > //* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ > /* This Source Code Form is subject to the terms of the Mozilla Public > * License, v. 2.0. If a copy of the MPL was not distributed with this > * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ > >+// This header file defines the storage types of the actual safebrowsing >+// chunk data, which may be variable length hash prefixes. Chunk numbers are >+// represented in ChunkSet.h. >+ > #ifndef SBEntries_h__ > #define SBEntries_h__ > > #include "nsTArray.h" > #include "nsString.h" > #include "nsICryptoHash.h" > #include "nsNetUtil.h" > >@@ -16,16 +20,17 @@ > #endif > > namespace mozilla { > namespace safebrowsing { > > #define PREFIX_SIZE 4 > #define COMPLETE_SIZE 32 > >+// This is the struct that contains 4-byte hash prefixes. 
> template <uint32_t S, class Comparator> > struct SafebrowsingHash > { > static const uint32_t sHashSize = S; > typedef SafebrowsingHash<S, Comparator> self_type; > uint8_t buf[S]; > > nsresult FromPlaintext(const nsACString& aPlainText, nsICryptoHash* aHash) { >@@ -77,16 +82,29 @@ struct SafebrowsingHash > > #ifdef DEBUG > void ToString(nsACString& aStr) const { > uint32_t len = ((sHashSize + 2) / 3) * 4; > aStr.SetCapacity(len + 1); > PL_Base64Encode((char*)buf, sHashSize, aStr.BeginWriting()); > aStr.BeginWriting()[len] = '\0'; > } >+ >+ void ToHexString(nsACString& aStr) const { >+ static const char* const lut = "0123456789ABCDEF"; >+ // 32 bytes is the longest hash >+ size_t len = 32; >+ >+ aStr.SetCapacity(2 * len); >+ for (size_t i = 0; i < len; ++i) { >+ const char c = static_cast<const char>(buf[i]); >+ aStr.Append(lut[(c >> 4) & 0x0F]); >+ aStr.Append(lut[c & 15]); >+ } >+ } > #endif > uint32_t ToUint32() const { > return *((uint32_t*)buf); > } > void FromUint32(uint32_t aHash) { > *((uint32_t*)buf) = aHash; > } > }; >@@ -100,73 +118,85 @@ public: > return 1; > } else if (first == second) { > return 0; > } else { > return -1; > } > } > }; >+// Use this for 4-byte hashes > typedef SafebrowsingHash<PREFIX_SIZE, PrefixComparator> Prefix; > typedef nsTArray<Prefix> PrefixArray; > > class CompletionComparator { > public: > static int Compare(const uint8_t* a, const uint8_t* b) { > return memcmp(a, b, COMPLETE_SIZE); > } > }; >+// Use this for 32-byte hashes > typedef SafebrowsingHash<COMPLETE_SIZE, CompletionComparator> Completion; > typedef nsTArray<Completion> CompletionArray; > > struct AddPrefix { >+ // The truncated hash. > Prefix prefix; >+ // The chunk number to which it belongs. > uint32_t addChunk; > > AddPrefix() : addChunk(0) {} > >+ // Returns the chunk number. > uint32_t Chunk() const { return addChunk; } > const Prefix &PrefixHash() const { return prefix; } > > template<class T> > int Compare(const T& other) const { > int cmp = prefix.Compare(other.PrefixHash()); > if (cmp != 0) { > return cmp; > } > return addChunk - other.addChunk; > } > }; > > struct AddComplete { >+ // Please never use unions. They are always a sign that something nasty is >+ // about to happen. > union { > Prefix prefix; > Completion complete; > } hash; > uint32_t addChunk; > > AddComplete() : addChunk(0) {} > > uint32_t Chunk() const { return addChunk; } >+ // The 4-byte prefix of the sha256 hash. > const Prefix &PrefixHash() const { return hash.prefix; } >+ // The 32-byte sha256 hash. > const Completion &CompleteHash() const { return hash.complete; } > > template<class T> > int Compare(const T& other) const { > int cmp = hash.complete.Compare(other.CompleteHash()); > if (cmp != 0) { > return cmp; > } > return addChunk - other.addChunk; > } > }; > > struct SubPrefix { >+ // The hash to subtract. > Prefix prefix; >+ // The chunk number of the add chunk to which the hash belonged. > uint32_t addChunk; >+ // The chunk number of this sub chunk. 
> uint32_t subChunk; > > SubPrefix(): addChunk(0), subChunk(0) {} > > uint32_t Chunk() const { return subChunk; } > uint32_t AddChunk() const { return addChunk; } > const Prefix &PrefixHash() const { return prefix; } > >diff --git a/toolkit/components/url-classifier/HashStore.cpp b/toolkit/components/url-classifier/HashStore.cpp >--- a/toolkit/components/url-classifier/HashStore.cpp >+++ b/toolkit/components/url-classifier/HashStore.cpp >@@ -318,16 +318,19 @@ HashStore::CalculateChecksum(nsAutoCStri > nsresult rv = seekable->Seek(nsISeekableStream::NS_SEEK_SET, 0); > > nsCOMPtr<nsICryptoHash> hash = do_CreateInstance(NS_CRYPTO_HASH_CONTRACTID, &rv); > NS_ENSURE_SUCCESS(rv, rv); > > // Size of MD5 hash in bytes > const uint32_t CHECKSUM_SIZE = 16; > >+ // Never use nsICryptoHash or MD5 in native code. Use PSM/NSS and any other >+ // secure hash function instead. This sure looks like main thread blocking >+ // IO. > rv = hash->Init(nsICryptoHash::MD5); > NS_ENSURE_SUCCESS(rv, rv); > > if (!aChecksumPresent) { > // Hash entire file > rv = hash->UpdateFromStream(mInputStream, UINT32_MAX); > } else { > // Hash everything but last checksum bytes >@@ -380,16 +383,17 @@ HashStore::ReadChunkNumbers() > > return NS_OK; > } > > nsresult > HashStore::ReadHashes() > { > if (!mInputStream) { >+ // How is this possibly ok? > return NS_OK; > } > > nsCOMPtr<nsISeekableStream> seekable = do_QueryInterface(mInputStream); > > uint32_t offset = sizeof(Header); > offset += (mHeader.numAddChunks + mHeader.numSubChunks) * sizeof(uint32_t); > nsresult rv = seekable->Seek(nsISeekableStream::NS_SEEK_SET, offset); >@@ -502,16 +506,24 @@ HashStore::ApplyUpdate(TableUpdate &upda > rv = Merge(&mSubChunks, &mSubPrefixes, > update.SubChunks(), update.SubPrefixes()); > NS_ENSURE_SUCCESS(rv, rv); > > rv = Merge(&mSubChunks, &mSubCompletes, > update.SubChunks(), update.SubCompletes(), true); > NS_ENSURE_SUCCESS(rv, rv); > >+ AddComplete* begin = update.AddCompletes().Elements(); >+ AddComplete* end = update.AddCompletes().Elements() + update.AddCompletes().Length(); >+ while (begin != end) { >+ nsAutoCString checking; >+ begin->CompleteHash().ToHexString(checking); >+ LOG(("Updating hash %s (%X)", checking.get(), begin->CompleteHash().ToUint32())); >+ begin++; >+ } > return NS_OK; > } > > nsresult > HashStore::Rebuild() > { > NS_ASSERTION(mInUpdate, "Must be in update to rebuild."); > >@@ -814,24 +826,24 @@ HashStore::WriteFile() > rv = NS_NewCheckSummedOutputStream(getter_AddRefs(out), storeFile, > PR_WRONLY | PR_TRUNCATE | PR_CREATE_FILE); > NS_ENSURE_SUCCESS(rv, rv); > > uint32_t written; > rv = out->Write(reinterpret_cast<char*>(&mHeader), sizeof(mHeader), &written); > NS_ENSURE_SUCCESS(rv, rv); > >- // Write chunk numbers... >+ // Write chunk numbers. > rv = mAddChunks.Write(out); > NS_ENSURE_SUCCESS(rv, rv); > > rv = mSubChunks.Write(out); > NS_ENSURE_SUCCESS(rv, rv); > >- // Write hashes.. >+ // Write hashes. > rv = WriteAddPrefixes(out); > NS_ENSURE_SUCCESS(rv, rv); > > rv = WriteSubPrefixes(out); > NS_ENSURE_SUCCESS(rv, rv); > > rv = WriteTArray(out, mAddCompletes); > NS_ENSURE_SUCCESS(rv, rv); >@@ -997,17 +1009,17 @@ HashStore::ProcessSubs() > LOG(("All databases seem to have a consistent sort order.")); > #endif > > RemoveMatchingPrefixes(mSubPrefixes, &mAddCompletes); > RemoveMatchingPrefixes(mSubPrefixes, &mSubCompletes); > > // Remove any remaining subbed prefixes from both addprefixes > // and addcompletes. 
>- KnockoutSubs(&mSubPrefixes, &mAddPrefixes); >+ KnockoutSubs(&mSubPrefixes, &mAddPrefixes); > KnockoutSubs(&mSubCompletes, &mAddCompletes); > > // Remove any remaining subprefixes referring to addchunks that > // we have (and hence have been processed above). > RemoveDeadSubPrefixes(mSubPrefixes, mAddChunks); > > #ifdef DEBUG > EnsureSorted(&mAddPrefixes); >diff --git a/toolkit/components/url-classifier/HashStore.h b/toolkit/components/url-classifier/HashStore.h >--- a/toolkit/components/url-classifier/HashStore.h >+++ b/toolkit/components/url-classifier/HashStore.h >@@ -12,16 +12,19 @@ > #include "nsTArray.h" > #include "nsIFile.h" > #include "nsIFileStreams.h" > #include "nsCOMPtr.h" > > namespace mozilla { > namespace safebrowsing { > >+// A table update is built from a single update chunk from the server. As the >+// protocol parser processes each chunk, it constructs a table update with the >+// new hashes. > class TableUpdate { > public: > TableUpdate(const nsACString& aTable) > : mTable(aTable), mLocalUpdate(false) {} > const nsCString& TableName() const { return mTable; } > > bool Empty() const { > return mAddChunks.Length() == 0 && >@@ -29,64 +32,76 @@ public: > mAddExpirations.Length() == 0 && > mSubExpirations.Length() == 0 && > mAddPrefixes.Length() == 0 && > mSubPrefixes.Length() == 0 && > mAddCompletes.Length() == 0 && > mSubCompletes.Length() == 0; > } > >+ // Throughout, uint32_t aChunk refers only to the chunk number. Chunk data is >+ // stored in the Prefix structures. > void NewAddChunk(uint32_t aChunk) { mAddChunks.Set(aChunk); } > void NewSubChunk(uint32_t aChunk) { mSubChunks.Set(aChunk); } > > void NewAddExpiration(uint32_t aChunk) { mAddExpirations.Set(aChunk); } > void NewSubExpiration(uint32_t aChunk) { mSubExpirations.Set(aChunk); } > > void NewAddPrefix(uint32_t aAddChunk, const Prefix& aPrefix); > void NewSubPrefix(uint32_t aAddChunk, const Prefix& aPrefix, uint32_t aSubChunk); >+ > void NewAddComplete(uint32_t aChunk, const Completion& aCompletion); > void NewSubComplete(uint32_t aAddChunk, const Completion& aCompletion, > uint32_t aSubChunk); > void SetLocalUpdate(void) { mLocalUpdate = true; } > bool IsLocalUpdate(void) { return mLocalUpdate; } > > ChunkSet& AddChunks() { return mAddChunks; } > ChunkSet& SubChunks() { return mSubChunks; } > >+ // Expirations for chunks. > ChunkSet& AddExpirations() { return mAddExpirations; } > ChunkSet& SubExpirations() { return mSubExpirations; } > >+ // Hashes associated with this chunk. > AddPrefixArray& AddPrefixes() { return mAddPrefixes; } > SubPrefixArray& SubPrefixes() { return mSubPrefixes; } > AddCompleteArray& AddCompletes() { return mAddCompletes; } > SubCompleteArray& SubCompletes() { return mSubCompletes; } > > private: > nsCString mTable; > // Update not from the remote server (no freshness) > bool mLocalUpdate; > >+ // The list of chunk numbers that we have for each of the type of chunks. > ChunkSet mAddChunks; > ChunkSet mSubChunks; > ChunkSet mAddExpirations; > ChunkSet mSubExpirations; >+ >+ // 4-byte sha256 prefixes. > AddPrefixArray mAddPrefixes; > SubPrefixArray mSubPrefixes; >+ >+ // 32-byte hashes. > AddCompleteArray mAddCompletes; > SubCompleteArray mSubCompletes; > }; > >+// There is one hash store per table. > class HashStore { > public: > HashStore(const nsACString& aTableName, nsIFile* aStoreFile); > ~HashStore(); > > const nsCString& TableName() const { return mTableName; } > > nsresult Open(); >+ // WTF is this for? 
> nsresult AugmentAdds(const nsTArray<uint32_t>& aPrefixes); > > ChunkSet& AddChunks() { return mAddChunks; } > ChunkSet& SubChunks() { return mSubChunks; } > AddPrefixArray& AddPrefixes() { return mAddPrefixes; } > AddCompleteArray& AddCompletes() { return mAddCompletes; } > SubPrefixArray& SubPrefixes() { return mSubPrefixes; } > SubCompleteArray& SubCompletes() { return mSubCompletes; } >@@ -121,52 +136,60 @@ private: > nsresult SanityCheck(); > nsresult CalculateChecksum(nsAutoCString& aChecksum, uint32_t aFileSize, > bool aChecksumPresent); > nsresult CheckChecksum(nsIFile* aStoreFile, uint32_t aFileSize); > void UpdateHeader(); > > nsresult ReadChunkNumbers(); > nsresult ReadHashes(); >+ > nsresult ReadAddPrefixes(); > nsresult ReadSubPrefixes(); > > nsresult WriteAddPrefixes(nsIOutputStream* aOut); > nsresult WriteSubPrefixes(nsIOutputStream* aOut); > > nsresult ProcessSubs(); > >+ // This is used for checking that the database is correct and for figuring out >+ // the number of chunks, etc. to read from disk on restart. > struct Header { > uint32_t magic; > uint32_t version; > uint32_t numAddChunks; > uint32_t numSubChunks; > uint32_t numAddPrefixes; > uint32_t numSubPrefixes; > uint32_t numAddCompletes; > uint32_t numSubCompletes; > }; > > Header mHeader; > >+ // The name of the table (must end in -shavar or -digest256, or evidently >+ // -simple for unittesting. > nsCString mTableName; > nsCOMPtr<nsIFile> mStoreDirectory; > > bool mInUpdate; > > nsCOMPtr<nsIInputStream> mInputStream; > >+ // Chunk numbers, stored as uint32_t arrays. > ChunkSet mAddChunks; > ChunkSet mSubChunks; > > ChunkSet mAddExpirations; > ChunkSet mSubExpirations; > >+ // Chunk data for shavar tables. See Entries.h for format. > AddPrefixArray mAddPrefixes; >+ SubPrefixArray mSubPrefixes; >+ >+ // Fake completion data that seems to be used only in unittesting. 
> AddCompleteArray mAddCompletes; >- SubPrefixArray mSubPrefixes; > SubCompleteArray mSubCompletes; > }; > > } > } >- > #endif >diff --git a/toolkit/components/url-classifier/LookupCache.cpp b/toolkit/components/url-classifier/LookupCache.cpp >--- a/toolkit/components/url-classifier/LookupCache.cpp >+++ b/toolkit/components/url-classifier/LookupCache.cpp >@@ -188,17 +188,17 @@ LookupCache::Dump() > #endif > > nsresult > LookupCache::Has(const Completion& aCompletion, > bool* aHas, bool* aComplete) > { > *aHas = *aComplete = false; > >- uint32_t prefix = aCompletion.ToUint32(); >+ uint32_t prefix = htonl(aCompletion.ToUint32()); > > bool found; > nsresult rv = mPrefixSet->Contains(prefix, &found); > NS_ENSURE_SUCCESS(rv, rv); > > LOG(("Probe in %s: %X, found %d", mTableName.get(), prefix, found)); > > if (found) { >@@ -510,17 +510,17 @@ LookupCache::GetLookupFragments(const ns > paths.AppendElement(EmptyCString()); > > for (uint32_t hostIndex = 0; hostIndex < hosts.Length(); hostIndex++) { > for (uint32_t pathIndex = 0; pathIndex < paths.Length(); pathIndex++) { > nsCString key; > key.Assign(hosts[hostIndex]); > key.Append('/'); > key.Append(paths[pathIndex]); >- LOG(("Chking %s", key.get())); >+ LOG(("Checking fragment %s", key.get())); > > aFragments->AppendElement(key); > } > } > > return NS_OK; > } > >diff --git a/toolkit/components/url-classifier/ProtocolParser.cpp b/toolkit/components/url-classifier/ProtocolParser.cpp >--- a/toolkit/components/url-classifier/ProtocolParser.cpp >+++ b/toolkit/components/url-classifier/ProtocolParser.cpp >@@ -217,16 +217,17 @@ ProtocolParser::ProcessControl(bool* aDo > > if (line.EqualsLiteral("e:pleaserekey")) { > mRekeyRequested = true; > return NS_OK; > } else if (mHMAC && mServerMAC.IsEmpty()) { > rv = ProcessMAC(line); > NS_ENSURE_SUCCESS(rv, rv); > } else if (StringBeginsWith(line, NS_LITERAL_CSTRING("i:"))) { >+ // Set the table name from the table header line. > SetCurrentTable(Substring(line, 2)); > } else if (StringBeginsWith(line, NS_LITERAL_CSTRING("n:"))) { > if (PR_sscanf(line.get(), "n:%d", &mUpdateWait) != 1) { > LOG(("Error parsing n: '%s' (%d)", line.get(), mUpdateWait)); > mUpdateWait = 0; > } > } else if (line.EqualsLiteral("r:pleasereset")) { > mResetRequested = true; >@@ -325,22 +326,40 @@ ProtocolParser::ProcessChunkControl(cons > return NS_ERROR_FAILURE; > } > > if (!(mChunkState.hashSize == PREFIX_SIZE || mChunkState.hashSize == COMPLETE_SIZE)) { > NS_WARNING("Invalid hash size specified in update."); > return NS_ERROR_FAILURE; > } > >- mChunkState.type = (command == 'a') ? CHUNK_ADD : CHUNK_SUB; >- >- if (mChunkState.type == CHUNK_ADD) { >- mTableUpdate->NewAddChunk(mChunkState.num); >- } else { >- mTableUpdate->NewSubChunk(mChunkState.num); >+ if (StringEndsWith(mTableUpdate->TableName(), >+ NS_LITERAL_CSTRING("-shavar")) || >+ StringEndsWith(mTableUpdate->TableName(), >+ NS_LITERAL_CSTRING("-simple"))) { >+ // Accommodate test tables ending in -simple for now. >+ mChunkState.type = (command == 'a') ? CHUNK_ADD : CHUNK_SUB; >+ } else if (StringEndsWith(mTableUpdate->TableName(), >+ NS_LITERAL_CSTRING("-digest256"))) { >+ LOG(("Processing digest256 data")); >+ mChunkState.type = (command == 'a') ? 
CHUNK_ADD_DIGEST : CHUNK_SUB_DIGEST; >+ } >+ switch (mChunkState.type) { >+ case CHUNK_ADD: >+ mTableUpdate->NewAddChunk(mChunkState.num); >+ break; >+ case CHUNK_SUB: >+ mTableUpdate->NewSubChunk(mChunkState.num); >+ break; >+ case CHUNK_ADD_DIGEST: >+ mTableUpdate->NewAddChunk(mChunkState.num); >+ break; >+ case CHUNK_SUB_DIGEST: >+ mTableUpdate->NewSubChunk(mChunkState.num); >+ break; > } > > return NS_OK; > } > > nsresult > ProtocolParser::ProcessForward(const nsCString& aLine) > { >@@ -401,21 +420,26 @@ ProtocolParser::ProcessChunk(bool* aDone > nsAutoCString chunk; > chunk.Assign(Substring(mPending, 0, mChunkState.length)); > mPending = Substring(mPending, mChunkState.length); > > *aDone = false; > mState = PROTOCOL_STATE_CONTROL; > > //LOG(("Handling a %d-byte chunk", chunk.Length())); >- if (StringEndsWith(mTableUpdate->TableName(), NS_LITERAL_CSTRING("-shavar"))) { >+ if (StringEndsWith(mTableUpdate->TableName(), >+ NS_LITERAL_CSTRING("-shavar"))) { > return ProcessShaChunk(chunk); >- } else { >- return ProcessPlaintextChunk(chunk); > } >+ if (StringEndsWith(mTableUpdate->TableName(), >+ NS_LITERAL_CSTRING("-digest256"))) { >+ return ProcessDigestChunk(chunk); >+ } >+ // This is only used in testing and should be removed. >+ return ProcessPlaintextChunk(chunk); > } > > /** > * Process a plaintext chunk (currently only used in unit tests). > */ > nsresult > ProtocolParser::ProcessPlaintextChunk(const nsACString& aChunk) > { >@@ -503,16 +527,70 @@ ProtocolParser::ProcessShaChunk(const ns > } > NS_ENSURE_SUCCESS(rv, rv); > } > > return NS_OK; > } > > nsresult >+ProtocolParser::ProcessDigestChunk(const nsACString& aChunk) >+{ >+ if (mChunkState.type == CHUNK_ADD_DIGEST) { >+ return ProcessDigestAdd(aChunk); >+ } >+ if (mChunkState.type == CHUNK_SUB_DIGEST) { >+ return ProcessDigestSub(aChunk); >+ } >+ return NS_ERROR_UNEXPECTED; >+} >+ >+nsresult >+ProtocolParser::ProcessDigestAdd(const nsACString& aChunk) >+{ >+ MOZ_ASSERT(aChunk.Length() % 32 == 0, >+ "Chunk length in bytes must be divisible by 4"); >+ uint32_t start = 0; >+ while (start < aChunk.Length()) { >+ Completion hash; >+ hash.Assign(Substring(aChunk, start, COMPLETE_SIZE)); >+ start += COMPLETE_SIZE; >+ mTableUpdate->NewAddComplete(mChunkState.num, hash); >+ } >+ return NS_OK; >+} >+ >+nsresult >+ProtocolParser::ProcessDigestSub(const nsACString& aChunk) >+{ >+ // The ABNF format for sub chunks is (ADDCHUNKNUM HASH)+, where ADDCHUNKNUM >+ // is a 4 byte chunk number, and HASH is 32 bytes. 
>+ MOZ_ASSERT(aChunk.Length() % 36 == 0, >+ "Chunk length in bytes must be divisible by 36"); >+ uint32_t start = 0; >+ while (start < aChunk.Length()) { >+ // Read ADDCHUNKNUM >+ const nsCSubstring& addChunkStr = Substring(aChunk, start, 4); >+ start += 4; >+ >+ uint32_t addChunk; >+ memcpy(&addChunk, addChunkStr.BeginReading(), 4); >+ addChunk = PR_ntohl(addChunk); >+ >+ // Read the hash >+ Completion hash; >+ hash.Assign(Substring(aChunk, start, COMPLETE_SIZE)); >+ start += COMPLETE_SIZE; >+ >+ mTableUpdate->NewSubComplete(addChunk, hash, mChunkState.num); >+ } >+ return NS_OK; >+} >+ >+nsresult > ProtocolParser::ProcessHostAdd(const Prefix& aDomain, uint8_t aNumEntries, > const nsACString& aChunk, uint32_t* aStart) > { > NS_ASSERTION(mChunkState.hashSize == PREFIX_SIZE, > "ProcessHostAdd should only be called for prefix hashes."); > > if (aNumEntries == 0) { > mTableUpdate->NewAddPrefix(mChunkState.num, aDomain); >@@ -585,16 +663,17 @@ nsresult > ProtocolParser::ProcessHostAddComplete(uint8_t aNumEntries, > const nsACString& aChunk, uint32_t* aStart) > { > NS_ASSERTION(mChunkState.hashSize == COMPLETE_SIZE, > "ProcessHostAddComplete should only be called for complete hashes."); > > if (aNumEntries == 0) { > // this is totally comprehensible. >+ // My sarcasm detector is going off! > NS_WARNING("Expected > 0 entries for a 32-byte hash add."); > return NS_OK; > } > > if (*aStart + (COMPLETE_SIZE * aNumEntries) > aChunk.Length()) { > NS_WARNING("Chunk is not long enough to contain the expected entries."); > return NS_ERROR_FAILURE; > } >@@ -679,10 +758,10 @@ ProtocolParser::GetTableUpdate(const nsA > // updates can be transferred to DBServiceWorker, which passes > // them back to Classifier when doing the updates, and that > // will free them. > TableUpdate *update = new TableUpdate(aTable); > mTableUpdates.AppendElement(update); > return update; > } > >-} >-} >+} // namespace safebrowsing >+} // namespace mozilla >diff --git a/toolkit/components/url-classifier/ProtocolParser.h b/toolkit/components/url-classifier/ProtocolParser.h >--- a/toolkit/components/url-classifier/ProtocolParser.h >+++ b/toolkit/components/url-classifier/ProtocolParser.h >@@ -54,39 +54,51 @@ public: > private: > nsresult ProcessControl(bool* aDone); > nsresult ProcessMAC(const nsCString& aLine); > nsresult ProcessExpirations(const nsCString& aLine); > nsresult ProcessChunkControl(const nsCString& aLine); > nsresult ProcessForward(const nsCString& aLine); > nsresult AddForward(const nsACString& aUrl, const nsACString& aMac); > nsresult ProcessChunk(bool* done); >+ // Remove this, it's only used for testing > nsresult ProcessPlaintextChunk(const nsACString& aChunk); > nsresult ProcessShaChunk(const nsACString& aChunk); > nsresult ProcessHostAdd(const Prefix& aDomain, uint8_t aNumEntries, > const nsACString& aChunk, uint32_t* aStart); > nsresult ProcessHostSub(const Prefix& aDomain, uint8_t aNumEntries, > const nsACString& aChunk, uint32_t* aStart); > nsresult ProcessHostAddComplete(uint8_t aNumEntries, const nsACString& aChunk, > uint32_t *aStart); > nsresult ProcessHostSubComplete(uint8_t numEntries, const nsACString& aChunk, > uint32_t* start); >+ // Digest256 chunks are very similar to shavar chunks. The difference is that >+ // they always contain the full 265-bit hash, so there is no need for chunk >+ // data to contain prefixes. 
>+ nsresult ProcessDigestChunk(const nsACString& aChunk); >+ nsresult ProcessDigestAdd(const nsACString& aChunk); >+ nsresult ProcessDigestSub(const nsACString& aChunk); > bool NextLine(nsACString& aLine); > > void CleanupUpdates(); > > enum ParserState { > PROTOCOL_STATE_CONTROL, > PROTOCOL_STATE_CHUNK > }; > ParserState mState; > > enum ChunkType { >+ // Types for shavar tables. > CHUNK_ADD, >- CHUNK_SUB >+ CHUNK_SUB, >+ // Types for digest256 tables. digest256 tables differ in format from >+ // shavar tables since they only contain complete hashes. >+ CHUNK_ADD_DIGEST, >+ CHUNK_SUB_DIGEST > }; > > struct ChunkState { > ChunkType type; > uint32_t num; > uint32_t hashSize; > uint32_t length; > void Clear() { num = 0; hashSize = 0; length = 0; } >@@ -101,16 +113,18 @@ private: > nsCOMPtr<nsICryptoHMAC> mHMAC; > nsCString mServerMAC; > > uint32_t mUpdateWait; > bool mResetRequested; > bool mRekeyRequested; > > nsTArray<ForwardedUpdate> mForwards; >+ // Keep track of updates to apply before passing them to the DBServiceWorkers. > nsTArray<TableUpdate*> mTableUpdates; >+ // Updates to apply to the current table being parsed. > TableUpdate *mTableUpdate; > }; > > } > } > > #endif >diff --git a/toolkit/components/url-classifier/content/listmanager.js b/toolkit/components/url-classifier/content/listmanager.js >--- a/toolkit/components/url-classifier/content/listmanager.js >+++ b/toolkit/components/url-classifier/content/listmanager.js >@@ -1,13 +1,14 @@ > # This Source Code Form is subject to the terms of the Mozilla Public > # License, v. 2.0. If a copy of the MPL was not distributed with this > # file, You can obtain one at http://mozilla.org/MPL/2.0/. > > >+// This is the implementation of nsIUrlListManager. > // A class that manages lists, namely white and black lists for > // phishing or malware protection. The ListManager knows how to fetch, > // update, and store lists. > // > // There is a single listmanager for the whole application. > // > // TODO more comprehensive update tests, for example add unittest check > // that the listmanagers tables are properly written on updates >@@ -70,16 +71,17 @@ function PROT_ListManager() { > 4 /* num requests */, > 60*60*1000 /* request time, 60 min */, > backoffInterval /* backoff interval, 60 min */, > 8*60*60*1000 /* max backoff, 8hr */); > > this.dbService_ = Cc["@mozilla.org/url-classifier/dbservice;1"] > .getService(Ci.nsIUrlClassifierDBService); > >+ // This is only used in testing and should be deleted. > this.hashCompleter_ = Cc["@mozilla.org/url-classifier/hashcompleter;1"] > .getService(Ci.nsIUrlClassifierHashCompleter); > } > > /** > * xpcom-shutdown callback > * Delete all of our data tables which seem to leak otherwise. 
> */ >@@ -421,16 +423,17 @@ PROT_ListManager.prototype.makeUpdateReq > "&wrkey=" + this.keyManager_.getWrappedKey(); > } catch (e) { > G_Debug(this, 'invalid url'); > return; > } > > this.requestBackoff_.noteRequest(); > >+ // This is where all the tables get updated > if (!streamer.downloadUpdates(tableList, > request, > this.keyManager_.getClientKey(), > BindToObject(this.updateSuccess_, this), > BindToObject(this.updateError_, this), > BindToObject(this.downloadError_, this))) { > G_Debug(this, "pending update, wait until later"); > } >diff --git a/toolkit/components/url-classifier/nsIUrlClassifierDBService.idl b/toolkit/components/url-classifier/nsIUrlClassifierDBService.idl >--- a/toolkit/components/url-classifier/nsIUrlClassifierDBService.idl >+++ b/toolkit/components/url-classifier/nsIUrlClassifierDBService.idl >@@ -74,19 +74,19 @@ interface nsIUrlClassifierUpdateObserver > * This is a proxy class that is instantiated and called from the JS thread. > * It provides async methods for querying and updating the database. As the > * methods complete, they call the callback function. > */ > [scriptable, uuid(e326ec41-46fd-4127-ad3c-3c58b2cdf196)] > interface nsIUrlClassifierDBService : nsISupports > { > /** >- * Looks up a key in the database. >+ * Looks up a URI in the database. > * >- * @param key: The principal containing the information to search. >+ * @param principal: The principal containing the URI to search. > * @param c: The callback will be called with a comma-separated list > * of tables to which the key belongs. > */ > void lookup(in nsIPrincipal principal, > in nsIUrlClassifierCallback c); > > /** > * Lists the tables along with which chunks are available in each table. >@@ -99,16 +99,17 @@ interface nsIUrlClassifierDBService : ns > * goog-white-regexp;a:1-3,5 > */ > void getTables(in nsIUrlClassifierCallback c); > > /** > * Set the nsIUrlClassifierCompleter object for a given table. This > * object will be used to request complete versions of partial > * hashes. >+ * This is only used for testing and should be deleted. > */ > void setHashCompleter(in ACString tableName, > in nsIUrlClassifierHashCompleter completer); > > //////////////////////////////////////////////////////////////////////////// > // Incremental update methods. > // > // An update to the database has the following steps: >@@ -193,17 +194,18 @@ interface nsIUrlClassifierDBService : ns > * Reset the url-classifier database. This call will delete the existing > * database, emptying all tables. Mostly intended for use in unit tests. > */ > void resetDatabase(); > }; > > /** > * Interface for the actual worker thread. Implementations of this need not >- * be thread aware and just work on the database. >+ * be thread aware and just work on the database. Is this interface really >+ * supposed to be public? > */ > [scriptable, uuid(0445be75-b114-43ea-89dc-aa16af26e77e)] > interface nsIUrlClassifierDBServiceWorker : nsIUrlClassifierDBService > { > // Provide a way to forcibly close the db connection. > void closeDb(); > > [noscript]void cacheCompletions(in CacheCompletionArray completions); >diff --git a/toolkit/components/url-classifier/nsIUrlClassifierHashCompleter.idl b/toolkit/components/url-classifier/nsIUrlClassifierHashCompleter.idl >--- a/toolkit/components/url-classifier/nsIUrlClassifierHashCompleter.idl >+++ b/toolkit/components/url-classifier/nsIUrlClassifierHashCompleter.idl >@@ -38,19 +38,21 @@ interface nsIUrlClassifierHashCompleterC > * NS_OK if the request completed successfully, or an error code. 
> */ > void completionFinished(in nsresult status); > }; > > /** > * Clients updating the url-classifier database have the option of sending > * partial (32-bit) hashes of URL fragments to be blacklisted. If the >- * url-classifier encounters one of these truncated hashes, it will ask >- * an nsIUrlClassifierCompleter instance to asynchronously provide the >- * complete hash, along with some associated metadata. >+ * url-classifier encounters one of these truncated hashes, it will ask an >+ * nsIUrlClassifierCompleter instance to asynchronously provide the complete >+ * hash, along with some associated metadata. >+ * This is only ever used for testing and should absolutely be deleted (I >+ * think). > */ > [scriptable, uuid(ade9b72b-3562-44f5-aba6-e63246be53ae)] > interface nsIUrlClassifierHashCompleter : nsISupports > { > /** > * Request a completed hash. > * > * @param partialHash >diff --git a/toolkit/components/url-classifier/nsIUrlClassifierStreamUpdater.idl b/toolkit/components/url-classifier/nsIUrlClassifierStreamUpdater.idl >--- a/toolkit/components/url-classifier/nsIUrlClassifierStreamUpdater.idl >+++ b/toolkit/components/url-classifier/nsIUrlClassifierStreamUpdater.idl >@@ -18,16 +18,17 @@ interface nsIUrlClassifierStreamUpdater > * The Url to download from. Should be plain ascii text. > */ > attribute ACString updateUrl; > > /** > * Try to download updates from updateUrl. Only one instance of this > * runs at a time, so we return false if another instance is already > * running. >+ * This is used in nsIUrlListManager as well as in testing. > * @param aRequestTables Comma-separated list of tables included in this > * update. > * @param aRequestBody The body for the request. > * @param aClientKey The client key for checking the update's MAC. > * @param aSuccessCallback Called after a successful update. > * @param aUpdateErrorCallback Called for problems applying the update > * @param aDownloadErrorCallback Called if we get an http error or a > * connection refused error. >diff --git a/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp b/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp >--- a/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp >+++ b/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp >@@ -150,16 +150,17 @@ private: > nsresult AddNoise(const Prefix aPrefix, > const nsCString tableName, > uint32_t aCount, > LookupResultArray& results); > > nsCOMPtr<nsICryptoHash> mCryptoHash; > > nsAutoPtr<Classifier> mClassifier; >+ // The class that actually parses the update chunks. > nsAutoPtr<ProtocolParser> mProtocolParser; > > // Directory where to store the SB databases. > nsCOMPtr<nsIFile> mCacheDir; > > // XXX: maybe an array of autoptrs. Or maybe a class specifically > // storing a series of updates. 
> nsTArray<TableUpdate*> mTableUpdates; >@@ -323,16 +324,17 @@ nsUrlClassifierDBServiceWorker::DoLookup > c->LookupComplete(completes.forget()); > > return NS_OK; > } > > nsresult > nsUrlClassifierDBServiceWorker::HandlePendingLookups() > { >+ LOG(("Handling pending lookups")); > MutexAutoLock lock(mPendingLookupLock); > while (mPendingLookups.Length() > 0) { > PendingLookup lookup = mPendingLookups[0]; > mPendingLookups.RemoveElementAt(0); > { > MutexAutoUnlock unlock(mPendingLookupLock); > DoLookup(lookup.mKey, lookup.mCallback); > } >@@ -453,16 +455,17 @@ nsUrlClassifierDBServiceWorker::BeginUpd > rv = nsUrlClassifierUtils::DecodeClientKey(clientKey, mUpdateClientKey); > NS_ENSURE_SUCCESS(rv, rv); > LOG(("clientKey present, marking update key")); > } > > return NS_OK; > } > >+// Called from the stream updater. > NS_IMETHODIMP > nsUrlClassifierDBServiceWorker::BeginStream(const nsACString &table, > const nsACString &serverMAC) > { > LOG(("nsUrlClassifierDBServiceWorker::BeginStream")); > > if (gShuttingDownThread) > return NS_ERROR_NOT_INITIALIZED; >@@ -534,16 +537,17 @@ nsUrlClassifierDBServiceWorker::UpdateSt > { > if (gShuttingDownThread) > return NS_ERROR_NOT_INITIALIZED; > > NS_ENSURE_STATE(mInStream); > > HandlePendingLookups(); > >+ // Feed the chunk to the parser. > return mProtocolParser->AppendStream(chunk); > } > > NS_IMETHODIMP > nsUrlClassifierDBServiceWorker::FinishStream() > { > if (gShuttingDownThread) > return NS_ERROR_NOT_INITIALIZED; >@@ -1294,16 +1298,17 @@ nsUrlClassifierDBService::LookupURI(nsIP > if (!callback) > return NS_ERROR_OUT_OF_MEMORY; > > nsCOMPtr<nsIUrlClassifierLookupCallback> proxyCallback = > new UrlClassifierLookupCallbackProxy(callback); > > // Queue this lookup and call the lookup function to flush the queue if > // necessary. >+ // Why don't we just call HandlePendingLookups directly? > rv = mWorker->QueueLookup(key, proxyCallback); > NS_ENSURE_SUCCESS(rv, rv); > > return mWorkerProxy->Lookup(nullptr, nullptr); > } > > NS_IMETHODIMP > nsUrlClassifierDBService::GetTables(nsIUrlClassifierCallback* c) >diff --git a/toolkit/components/url-classifier/nsUrlClassifierDBService.h b/toolkit/components/url-classifier/nsUrlClassifierDBService.h >--- a/toolkit/components/url-classifier/nsUrlClassifierDBService.h >+++ b/toolkit/components/url-classifier/nsUrlClassifierDBService.h >@@ -65,17 +65,18 @@ public: > > private: > // No subclassing > ~nsUrlClassifierDBService(); > > // Disallow copy constructor > nsUrlClassifierDBService(nsUrlClassifierDBService&); > >- nsresult LookupURI(nsIPrincipal* aPrincipal, nsIUrlClassifierCallback* c, >+ nsresult LookupURI(nsIPrincipal* aPrincipal, >+ nsIUrlClassifierCallback* c, > bool forceCheck, bool *didCheck); > > // Close db connection and join the background thread if it exists. > nsresult Shutdown(); > > // Check if the key is on a known-clean host. 
> nsresult CheckClean(const nsACString &lookupKey, > bool *clean); >diff --git a/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp b/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp >--- a/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp >+++ b/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp >@@ -22,16 +22,18 @@ static const char* gQuitApplicationMessa > #if defined(PR_LOGGING) > static const PRLogModuleInfo *gUrlClassifierStreamUpdaterLog = nullptr; > #define LOG(args) PR_LOG(gUrlClassifierStreamUpdaterLog, PR_LOG_DEBUG, args) > #else > #define LOG(args) > #endif > > >+// This class does absolutely nothing, except pass requests onto the DBService. >+ > /////////////////////////////////////////////////////////////////////////////// > // nsIUrlClassiferStreamUpdater implementation > // Handles creating/running the stream listener > > nsUrlClassifierStreamUpdater::nsUrlClassifierStreamUpdater() > : mIsUpdating(false), mInitialized(false), mDownloadError(false), > mBeganStream(false), mUpdateUrl(nullptr), mChannel(nullptr) > { >@@ -102,23 +104,25 @@ nsUrlClassifierStreamUpdater::FetchUpdat > uint32_t loadFlags = nsIChannel::INHIBIT_CACHING | > nsIChannel::LOAD_BYPASS_CACHE; > rv = NS_NewChannel(getter_AddRefs(mChannel), aUpdateUrl, nullptr, nullptr, this, > loadFlags); > NS_ENSURE_SUCCESS(rv, rv); > > mBeganStream = false; > >+ // If aRequestBody is empty, construct it for the test. > if (!aRequestBody.IsEmpty()) { > rv = AddRequestBody(aRequestBody); > NS_ENSURE_SUCCESS(rv, rv); > } > > // Set the appropriate content type for file/data URIs, for unit testing > // purposes. >+ // This is only used for testing and should be deleted. > bool match; > if ((NS_SUCCEEDED(aUpdateUrl->SchemeIs("file", &match)) && match) || > (NS_SUCCEEDED(aUpdateUrl->SchemeIs("data", &match)) && match)) { > mChannel->SetContentType(NS_LITERAL_CSTRING("application/vnd.google.safebrowsing-update")); > } > > // Make the request > rv = mChannel->AsyncOpen(this, nullptr); >@@ -209,18 +213,21 @@ nsUrlClassifierStreamUpdater::DownloadUp > > mIsUpdating = true; > *_retval = true; > > nsAutoCString urlSpec; > mUpdateUrl->GetAsciiSpec(urlSpec); > > LOG(("FetchUpdate: %s", urlSpec.get())); >- //LOG(("requestBody: %s", aRequestBody.get())); >+ //LOG(("requestBody: %s", aRequestBody.Data())); > >+ LOG(("Calling into FetchUpdate")); >+ // Normal usage of FetchUpdate is only for a single table. Somehow this >+ // triggers it for all of them? > return FetchUpdate(mUpdateUrl, aRequestBody, EmptyCString(), EmptyCString()); > } > > /////////////////////////////////////////////////////////////////////////////// > // nsIUrlClassifierUpdateObserver implementation > > NS_IMETHODIMP > nsUrlClassifierStreamUpdater::UpdateUrlRequested(const nsACString &aUrl, >@@ -233,16 +240,18 @@ nsUrlClassifierStreamUpdater::UpdateUrlR > if (!update) > return NS_ERROR_OUT_OF_MEMORY; > > // Allow data: and file: urls for unit testing purposes, otherwise assume http > if (StringBeginsWith(aUrl, NS_LITERAL_CSTRING("data:")) || > StringBeginsWith(aUrl, NS_LITERAL_CSTRING("file:"))) { > update->mUrl = aUrl; > } else { >+ // Terrible, terrible. This should be SSL, but then unittesting against >+ // localhost doesn't work. 
> update->mUrl = NS_LITERAL_CSTRING("http://") + aUrl; > } > update->mTable = aTable; > update->mServerMAC = aServerMAC; > > return NS_OK; > } > >@@ -348,16 +357,17 @@ nsUrlClassifierStreamUpdater::UpdateErro > strResult.AppendInt(static_cast<uint32_t>(result)); > if (errorCallback) { > errorCallback->HandleEvent(strResult); > } > > return NS_OK; > } > >+// This is only used for testing and should be deleted. > nsresult > nsUrlClassifierStreamUpdater::AddRequestBody(const nsACString &aRequestBody) > { > nsresult rv; > nsCOMPtr<nsIStringInputStream> strStream = > do_CreateInstance(NS_STRINGINPUTSTREAM_CONTRACTID, &rv); > NS_ENSURE_SUCCESS(rv, rv); > >@@ -385,16 +395,17 @@ nsUrlClassifierStreamUpdater::AddRequest > > /////////////////////////////////////////////////////////////////////////////// > // nsIStreamListenerObserver implementation > > NS_IMETHODIMP > nsUrlClassifierStreamUpdater::OnStartRequest(nsIRequest *request, > nsISupports* context) > { >+ LOG(("nsUrlClassifierStreamUpdater::OnStartRequest")); > nsresult rv; > bool downloadError = false; > nsAutoCString strStatus; > nsresult status = NS_OK; > > // Only update if we got http success header > nsCOMPtr<nsIHttpChannel> httpChannel = do_QueryInterface(request); > if (httpChannel) { >@@ -413,16 +424,17 @@ nsUrlClassifierStreamUpdater::OnStartReq > NS_ENSURE_SUCCESS(rv, rv); > > if (!succeeded) { > // 404 or other error, pass error status back > LOG(("HTTP request returned failure code.")); > > uint32_t requestStatus; > rv = httpChannel->GetResponseStatus(&requestStatus); >+ LOG(("HTTP request returned failure code: %d.", requestStatus)); > NS_ENSURE_SUCCESS(rv, rv); > > strStatus.AppendInt(requestStatus); > downloadError = true; > } > } > } > >@@ -457,17 +469,16 @@ nsUrlClassifierStreamUpdater::OnDataAvai > nsresult rv; > > // Copy the data into a nsCString > nsCString chunk; > rv = NS_ConsumeStream(aIStream, aLength, chunk); > NS_ENSURE_SUCCESS(rv, rv); > > //LOG(("Chunk (%d): %s\n\n", chunk.Length(), chunk.get())); >- > rv = mDBService->UpdateStream(chunk); > NS_ENSURE_SUCCESS(rv, rv); > > return NS_OK; > } > > NS_IMETHODIMP > nsUrlClassifierStreamUpdater::OnStopRequest(nsIRequest *request, nsISupports* context, >diff --git a/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.h b/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.h >--- a/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.h >+++ b/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.h >@@ -45,27 +45,31 @@ private: > > // When the dbservice sends an UpdateComplete or UpdateFailure, we call this > // to reset the stream updater. > void DownloadDone(); > > // Disallow copy constructor > nsUrlClassifierStreamUpdater(nsUrlClassifierStreamUpdater&); > >+ // Used only for testing. > nsresult AddRequestBody(const nsACString &aRequestBody); > >+ // Fetches an update for a single table. > nsresult FetchUpdate(nsIURI *aURI, > const nsACString &aRequestBody, > const nsACString &aTable, > const nsACString &aServerMAC); >+ // Dumb wrapper so we don't have to create URIs. > nsresult FetchUpdate(const nsACString &aURI, > const nsACString &aRequestBody, > const nsACString &aTable, > const nsACString &aServerMAC); > >+ // Fetches the next table, from mPendingUpdates. 
> nsresult FetchNext(); > > bool mIsUpdating; > bool mInitialized; > bool mDownloadError; > bool mBeganStream; > nsCOMPtr<nsIURI> mUpdateUrl; > nsCString mStreamTable; >diff --git a/toolkit/components/url-classifier/tests/unit/data/digest1.chunk b/toolkit/components/url-classifier/tests/unit/data/digest1.chunk >new file mode 100644 >index 0000000000000000000000000000000000000000..3850373c19efeca42b55542bd7f9385fbd74e696 >GIT binary patch >literal 939 >zc$_P?GO{uTA~P-q1_u7Uj0LkA?S9YMZN9CyWtu;SY-!RTr{|nW8Zr(Sim$2dDz-8P >ztF{CymlW?mtNG%1oZ9p|vJrbO%%4;|_t28?2KU5DOPNIt-i{}MYNUK44g`xFoqaZ9 >zY5&!|{U<ag#kZ77otXQf+*5#UBVTnSNKGh@+tn>kWiD_2|Mh&&f`}X2b!Bz*)irWM >ztfuZxeh`)cQj@TG>Y~fXv*WpzyOe0pe&;#wc1Ll<(#7&(n;iDtVBFUVa!=l*T{~=* >zSxzynFb@dje!J7`FT0zD@;`C+>a>Vgh06;-<}`Y25PbS0V~WMoaFN*StbT2WN}ej0 >zIRxdfdp4L(?$iXC)2F4*nm0L!;X}6No98l9r&a!p-TX9^K}~T_Q=V0zw;@Q)jBu;E >zYM&X*^FBw%nOe9pEt&W}=wR!q)LCkM!e9O|{{z{z?A@8`%QuE^ZPGs%zI^i~(XHvb >zuLykCxfxw6JV!e6PajCl_oy%Shqdy=`MIiZ+}Zia$8*i`^xKd2$9xw`wd*|`eFkLL >z&lIMGEBg2zzSfKlKK^LSt((o~)DE&tn*FkR>67-jXzgMv6GS8`Z~by)|EdqG_q}J? >zKDGK?>s^NaGg<a7ueZ&9COYTJilv;zR;CEmawgxx7tOfZx~%)g8HR&G^WzsK<Sj2= >zeyxLp_3OTXL#2YnR%Rg8h9)K;<pNyEMtjqrSM9pEk3s2X9#dj^W32d+{h=qbRvgUg >z>HcC0RKqp*u%V>-{)Q8i7kFw<`0;G6)Y}4QW<h<Ms@RQ-&-fiK1FB&;5UI3_-E@E1 >zr}Ab?fwb2b=SS{8<^6oiWS@zi?RP5~rUBJ(znZ#9`gM$euf+bq&Morg%eKVK&{^V9 >zCo_R(?-8}V*Bpzj%n=?DJuLa<pDL$jL^XS)^`4JK`yQ;acUWEj-u1+S>^ZAX&AeG` >zWdRH(1Fm8#ODJt<0Mc%51kx_O?XjTo`>NEb4HNFH=vke8bN`R;!Rt4;*X)xOJvsTP >z@@0^k*Be-(pQIgJ=4#`~tW+d;#Wp42rSO*pou-x-1lW_Vl>v=3M7T&dYVP;t7N!g; >z76<hW|3^l*?Y#f$RP?gF2aJxNe*4_vl^9StQs&dWy|B2le_P)v?+_PZ<;=joq}kI| >RAOAUhTXS3Et?2XB+5jv}i+%tA > >diff --git a/toolkit/components/url-classifier/tests/unit/data/digest2.chunk b/toolkit/components/url-classifier/tests/unit/data/digest2.chunk >new file mode 100644 >--- /dev/null >+++ b/toolkit/components/url-classifier/tests/unit/data/digest2.chunk >@@ -0,0 +1,2 @@ >+a:5:32:32 >+Ê_Há^aÍ7ÂÙ]´=#ÌnmåÃøúnæoÌQ >\ No newline at end of file >diff --git a/toolkit/components/url-classifier/tests/unit/test_digest256.js b/toolkit/components/url-classifier/tests/unit/test_digest256.js >new file mode 100644 >--- /dev/null >+++ b/toolkit/components/url-classifier/tests/unit/test_digest256.js >@@ -0,0 +1,144 @@ >+Cu.import("resource://gre/modules/XPCOMUtils.jsm"); >+ >+XPCOMUtils.defineLazyModuleGetter(this, "NetUtil", >+ "resource://gre/modules/NetUtil.jsm"); >+XPCOMUtils.defineLazyModuleGetter(this, "Promise", >+ "resource://gre/modules/Promise.jsm"); >+// Global test server for serving safebrowsing updates. >+let gHttpServ = null; >+// Global nsIUrlClassifierDBService >+let gDbService = Cc["@mozilla.org/url-classifier/dbservice;1"] >+ .getService(Ci.nsIUrlClassifierDBService); >+// Security manager for creating nsIPrincipals from URIs >+let gSecMan = Cc["@mozilla.org/scriptsecuritymanager;1"] >+ .getService(Ci.nsIScriptSecurityManager); >+ >+// A map of tables to arrays of update redirect urls. >+let gTables = {}; >+ >+// Construct an update from a file. >+function readFileToString(aFilename) { >+ let f = do_get_file(aFilename); >+ let stream = Cc["@mozilla.org/network/file-input-stream;1"] >+ .createInstance(Ci.nsIFileInputStream); >+ stream.init(f, -1, 0, 0); >+ let buf = NetUtil.readInputStreamToString(stream, stream.available()); >+ return buf; >+} >+ >+// Registers a table for which to serve update chunks. Returns a promise that >+// resolves when that chunk has been downloaded. 
>+function registerTableUpdate(aTable, aFilename) { >+ let deferred = Promise.defer(); >+ // If we haven't been given an update for this table yet, add it to the map >+ if (!(aTable in gTables)) { >+ gTables[aTable] = []; >+ } >+ >+ // The number of chunks associated with this table. >+ let numChunks = gTables[aTable].length + 1; >+ let redirectPath = "/" + aTable + "-" + numChunks; >+ let redirectUrl = "localhost:4444" + redirectPath; >+ >+ // Store redirect url for that table so we can return it later when we >+ // process an update request. >+ gTables[aTable].push(redirectUrl); >+ >+ gHttpServ.registerPathHandler(redirectPath, function(request, response) { >+ do_print("Mock safebrowsing server handling request for " + redirectPath); >+ let contents = readFileToString(aFilename); >+ response.setHeader("Content-Type", >+ "application/vnd.google.safebrowsing-update", false); >+ response.setStatusLine(request.httpVersion, 200, "OK"); >+ response.bodyOutputStream.write(contents, contents.length); >+ deferred.resolve(contents); >+ }); >+ return deferred.promise; >+} >+ >+// Construct a response with redirect urls. >+function processUpdateRequest() { >+ let response = "n:1000\n"; >+ for (let table in gTables) { >+ response += "i:" + table + "\n"; >+ for (let i = 0; i < gTables[table].length; ++i) { >+ response += "u:" + gTables[table][i] + "\n"; >+ } >+ } >+ do_print("Returning update response: " + response); >+ return response; >+} >+ >+// Set up our test server to handle update requests. >+function run_test() { >+ gHttpServ = new HttpServer(); >+ gHttpServ.registerDirectory("/", do_get_cwd()); >+ >+ gHttpServ.registerPathHandler("/downloads", function(request, response) { >+ let buf = NetUtil.readInputStreamToString(request.bodyInputStream, >+ request.bodyInputStream.available()); >+ let blob = processUpdateRequest(); >+ response.setHeader("Content-Type", >+ "application/vnd.google.safebrowsing-update", false); >+ response.setStatusLine(request.httpVersion, 200, "OK"); >+ response.bodyOutputStream.write(blob, blob.length); >+ }); >+ >+ gHttpServ.start(4444); >+ run_next_test(); >+} >+ >+function createURI(s) { >+ let service = Cc["@mozilla.org/network/io-service;1"] >+ .getService(Ci.nsIIOService); >+ return service.newURI(s, null, null); >+} >+ >+// Just throw if we ever get an update or download error. >+function handleError(aEvent) { >+ do_throw("We didn't download or update correctly: " + aEvent); >+} >+ >+add_test(function test_update() { >+ let streamUpdater = Cc["@mozilla.org/url-classifier/streamupdater;1"] >+ .getService(Ci.nsIUrlClassifierStreamUpdater); >+ streamUpdater.updateUrl = "http://localhost:4444/downloads"; >+ >+ // Load up some update chunks for the safebrowsing server to serve. >+ registerTableUpdate("goog-downloadwhite-digest256", "data/digest1.chunk"); >+ registerTableUpdate("goog-downloadwhite-digest256", "data/digest2.chunk"); >+ >+ // Download some updates, and don't continue until the downloads are done. >+ function updateSuccess(aEvent) { >+ // Timeout of n:1000 is constructed in processUpdateRequest above and >+ // passed back in the callback in nsIUrlClassifierStreamUpdater on success. 
>+ do_check_eq("1000", aEvent); >+ do_print("All data processed"); >+ run_next_test(); >+ } >+ streamUpdater.downloadUpdates( >+ "goog-downloadwhite-digest256", >+ "goog-downloadwhite-digest256;\n", "", >+ updateSuccess, handleError, handleError); >+}); >+ >+add_test(function test_url_not_whitelisted() { >+ let uri = createURI("http://example.com"); >+ let principal = gSecMan.getNoAppCodebasePrincipal(uri); >+ gDbService.lookup(principal, function handleEvent(aEvent) { >+ // This URI is not on any lists. >+ do_check_eq("", aEvent); >+ run_next_test(); >+ }); >+}); >+ >+add_test(function test_url_whitelisted() { >+ // Hash of "whitelisted.com/" (canonicalized URL) is: >+ // 93CA5F48E15E9861CD37C2D95DB43D23CC6E6DE5C3F8FA6E8BE66F97CC518907 >+ let uri = createURI("http://whitelisted.com"); >+ let principal = gSecMan.getNoAppCodebasePrincipal(uri); >+ gDbService.lookup(principal, function handleEvent(aEvent) { >+ do_check_eq("goog-downloadwhite-digest256", aEvent); >+ run_next_test(); >+ }); >+}); >diff --git a/toolkit/components/url-classifier/tests/unit/xpcshell.ini b/toolkit/components/url-classifier/tests/unit/xpcshell.ini >--- a/toolkit/components/url-classifier/tests/unit/xpcshell.ini >+++ b/toolkit/components/url-classifier/tests/unit/xpcshell.ini >@@ -6,8 +6,9 @@ tail = tail_urlclassifier.js > [test_backoff.js] > [test_dbservice.js] > [test_hashcompleter.js] > # Bug 752243: Profile cleanup frequently fails > skip-if = os == "mac" || os == "linux" > [test_partial.js] > [test_prefixset.js] > [test_streamupdater.js] >+[test_digest256.js]
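(Side note for anyone poking at the test data: a -digest256 add chunk such as data/digest2.chunk above is just the familiar chunk header — "a:" CHUNKNUM ":" HASHLEN ":" DATALEN — followed by DATALEN bytes of raw SHA-256 digests. Below is a minimal standalone decoding sketch in plain Node.js; it is not the actual ProtocolParser.cpp logic, and parseDigest256AddChunk is a made-up name for illustration.)

// Decode a -digest256 add chunk: an ASCII header "a:CHUNKNUM:HASHLEN:DATALEN\n"
// followed by DATALEN bytes of raw SHA-256 digests (HASHLEN is 32).
const fs = require("fs");

function parseDigest256AddChunk(buf) {
  const newline = buf.indexOf(0x0a); // end of the header, e.g. "a:5:32:32"
  const [type, chunkNum, hashLen, dataLen] =
    buf.subarray(0, newline).toString("ascii").split(":");
  if (type !== "a" || Number(hashLen) !== 32) {
    throw new Error("not a digest256 add chunk");
  }
  const data = buf.subarray(newline + 1, newline + 1 + Number(dataLen));
  const hashes = [];
  for (let i = 0; i < data.length; i += 32) {
    hashes.push(data.subarray(i, i + 32).toString("hex"));
  }
  return { chunkNum: Number(chunkNum), hashes };
}

// Applied to digest2.chunk this should yield chunk 5 with the single digest
// 93ca5f48e15e9861cd37c2d95db43d23cc6e6de5c3f8fa6e8be66f97cc518907.
console.log(parseDigest256AddChunk(fs.readFileSync("data/digest2.chunk")));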
Attachment #797583 - Attachment is obsolete: true
OK, I am not marking attachments obsolete anymore! Sorry :(
Comment on attachment 799293 [details] [diff] [review] Add protocol parser for -digest256 lists Review of attachment 799293 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/components/url-classifier/Classifier.cpp @@ +570,5 @@ > validupdates++; > } > > if (!validupdates) { > + // This can happen if the valid was only valid for one table. valid is valid? ::: toolkit/components/url-classifier/tests/unit/test_digest256.js @@ +104,5 @@ > + .getService(Ci.nsIUrlClassifierStreamUpdater); > + streamUpdater.updateUrl = "http://localhost:4444/downloads"; > + > + // Load up some update chunks for the safebrowsing server to serve. > + registerTableUpdate("goog-downloadwhite-digest256", "data/digest1.chunk"); Is this digest1.chunk missing from the patch?
Attachment #799293 - Flags: review?(gpascutto) → review+
Thanks Gian-Carlo! > ::: toolkit/components/url-classifier/tests/unit/test_digest256.js > @@ +104,5 @@ > > + .getService(Ci.nsIUrlClassifierStreamUpdater); > > + streamUpdater.updateUrl = "http://localhost:4444/downloads"; > > + > > + // Load up some update chunks for the safebrowsing server to serve. > > + registerTableUpdate("goog-downloadwhite-digest256", "data/digest1.chunk"); > > Is this digest1.chunk missing from the patch? No, but Bugzilla's Splinter review appears not to display it correctly. Filed https://bugzilla.mozilla.org/show_bug.cgi?id=913582 Try was green except for failures from unrelated bugs: https://tbpl.mozilla.org/?tree=Try&rev=e5cfedcd2d83 I'll fix the comment and check in. Thanks, Monica
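(For completeness: the expected digest in test_url_whitelisted can be reproduced outside the tree, since it is simply the SHA-256 of the canonicalized URL. A one-line sketch, again assuming plain Node.js rather than anything in mozilla-central:)

// Prints 93ca5f48e15e9861cd37c2d95db43d23cc6e6de5c3f8fa6e8be66f97cc518907,
// i.e. the hash quoted in test_url_whitelisted and the digest carried in
// data/digest2.chunk.
const crypto = require("crypto");
console.log(crypto.createHash("sha256").update("whitelisted.com/").digest("hex"));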
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla26