Closed Bug 853388 Opened 12 years ago Closed 11 years ago

Convert XPIProvider.jsm from sqlite to JSON

Categories

(Toolkit :: Add-ons Manager, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: VERIFIED FIXED
Target Milestone: mozilla26

People

(Reporter: Irving, Assigned: Irving)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

(Whiteboard: [Snappy] c=startup_addons u= p=)

Attachments

(9 files, 33 obsolete files)

(deleted), patch - Unfocused: review+, Irving: checkin+
(deleted), patch - Unfocused: review+, Irving: checkin+
(deleted), patch - Unfocused: review+, Irving: checkin+
(deleted), patch - Unfocused: review+, Irving: checkin+
(deleted), patch - Unfocused: review+, Irving: checkin+
(deleted), patch - Yoric: review+, Unfocused: review+, Irving: checkin+
(deleted), patch - Irving: review+, Irving: checkin+
(deleted), patch - Irving: review+, Irving: checkin+
(deleted), patch - Unfocused: review+
There are a number of issues with SQLite in the add-ons manager and XPI provider, including jank (bug xxx), main-thread slow SQL (bug 729330), and memory and disk space usage (bug 726556, bug 768312). For details about the planned change, please see the etherpad at https://etherpad.mozilla.org/snappy-addon-manager. A lot of the concerns discussed in the comments to bug 699839 are also relevant.
(copying in some other useful background from email)

Some open questions:

* Is it worth compressing the JSON on disk, via a very fast compression algorithm like Snappy? (See below)
* Can we make the data store more resilient/fault tolerant? Do we need to?
* How dependent is this work on bug 832664?
* This would make 3rd party tampering with the on-disk data *much* easier. What can we do to counter that?
* Does this mean we can avoid polluting the codebase with Promises?

Regarding extensions.ini: What Dave said - however, that amount of data does worry me a bit. The size of the SQLite databases isn't huge (most of the size on disk is overhead from SQLite), but the actual data is still at least 2 orders of magnitude larger than extensions.ini. I did a little experiment, dumping the contents of extensions.sqlite and addons.sqlite from my main profile, and compared the data size:

extensions.ini - 1.2KB
extensions.sqlite in JSON - 138KB
addons.sqlite in JSON - 79KB
Total of both DBs in JSON - 217KB

Loading and parsing 217KB of JSON is going to be a lot slower than a 1.2KB INI file. And we'll inevitably want to add more data to that. It would also mean that, for the edge case of a very large number of installed extensions, we'll end up loading a multi-megabyte JSON file on startup. Now, we shouldn't optimise for that case... but creating such a pathological startup situation is worrisome.

Getting rid of extensions.ini would also make it far more likely to always take an I/O hit on startup. At the moment, if there are no extensions enabled we only need to do a single fstat to check that extensions.ini doesn't exist. Even with no profile-installed extensions, globally-installed extensions are not uncommon - so we'll end up reading/parsing the data on startup more often for people that don't use add-ons.

Saying that, it would be nice to achieve - assuming it doesn't end up regressing various situations just for the sake of code niceness. As a bonus, I think it would make it easier to solve the issue of writing data on shutdown (bug 731489).

- Blair
Assignee: nobody → irving
Status: NEW → ASSIGNED
Depends on: 762618
Whiteboard: [Snappy] c=startup_addons u= p=
Database schema notes:

The RDF data structure in the addon's install.rdf is copied almost exactly into the XPIProvider.jsm/AddonInternal data structure, in XPIProvider.jsm/loadManifestFromRDF. XPIProvider adds a few fields for internal use. The fields that need to be preserved in schema version changes are listed in XPIProvider.DB_MIGRATE_METADATA.

The XPIDatabase routines implemented in XPIProviderUtils.js use XPIProvider.jsm/DBAddonInternal to hold almost exactly the same data as AddonInternal, restored from the database. Both of these object types are then wrapped in XPIProvider.jsm/AddonWrapper to provide an external API.

As a first cut, it looks like saving a set of (possibly slightly redacted) AddonInternal along with some metadata into a JSON file should work. Here's a breakdown of the info in AddonInternal. Star in the RDF column means it's read directly from the addon's install.rdf or chrome.manifest; the MIG column means it's important state that gets migrated on version updates.

RDF MIG Database:
*  id
*  syncGUID           // GUID assigned on first install
   location           // calculated from install path; e.g. app-profile, app-global
*  version
*  type
*  internalName
*  updateURL
*  updateKey
*  optionsURL
*  optionsType
*  aboutURL
*  iconURL
*  icon64URL
*  defaultLocale
   visible            // calculated based on priority of install locations
   active             // calculated based on visible, userDisabled, softDisabled, appDisabled, pendingInstall
*  userDisabled
   appDisabled        // calculated on version compatibility, blocklist
   pendingUninstall   // set by user, perhaps should be migrated for case of uninstall & DB upgrade in same restart
   descriptor         // full path of install location
*  installDate
   updateDate         // perhaps should be migrated
*  applyBackgroundUpdates
*  bootstrap
*  skinnable
   size               // calculated from file system NOTE: does a directory scan in XPIProvider.loadManifestFromDir()
*  sourceURI
*  releaseNotesURI
*  softDisabled
*  isForeignInstall
*  hasBinaryComponents   // parsed from chrome.manifest in addon
*  strictCompatibility   // default value from COMPATIBLE_BY_DEFAULT_TYPES

Each addon has a default locale (with the relevant fields stored in the top level of the addon.rdf structure) and a possibly empty set of localizations:

*  locale+            // e.g. ["en-US", "en-CA"]
*  name
*  description
*  creator
*  homepageURL
*  developer+
*  translator+
*  contributor+
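To make the direction concrete, here is a rough sketch of what a JSON database entry built from those AddonInternal fields might look like. This is illustrative only: the field names come from the list above, but the values and the schemaVersion are invented, and the real on-disk format is whatever the patches attached to this bug end up defining.

// Illustrative sketch only - values and schemaVersion are made up;
// the actual format is defined by the patches in this bug.
let exampleExtensionsJSON = {
  schemaVersion: 1,
  addons: [{
    id: "some-extension@example.com",
    syncGUID: "{aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee}",
    location: "app-profile",
    version: "1.0",
    type: "extension",
    bootstrap: true,
    userDisabled: false,
    appDisabled: false,
    installDate: 1364226000000,
    descriptor: "/path/to/profile/extensions/some-extension@example.com.xpi",
    defaultLocale: { name: "Some Extension", creator: "Example Author" },
    locales: [{ locale: ["en-US", "en-CA"], name: "Some Extension" }]
  }]
};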
Depends on: 839468
Some of the existing tests get JS exceptions in callbacks when the XPI database fails to find the requested addon; this causes the test to hang. Add checks for some of these cases so the tests fail fast when the XPI database is broken.
Just to make sure it's 100% clear, since RDF was mentioned in comment 2: this needs to not touch any code related to RDF parsing (it needs to stay independent from the DB, for when we start supporting a new JSON-based manifest format).
Rough version of saving and restoring XPI data in JSON format, so far just synchronous and tested manually from the JS scratchpad.
(In reply to Blair McBride [:Unfocused] (Back from the dead. Mostly.) from comment #4)
> Just to make sure it's 100% clear, since RDF was mentioned in comment 2: This
> needs to not touch any code related to RDF parsing (it needs to stay
> independent from the DB, for when we start supporting a new JSON-based
> manifest format).

Absolutely. The only RDF I was thinking of removing was the old DB upgrade code, for handling extensions.rdf => extensions.sqlite. It turns out that I probably won't substantially redesign the DB upgrade, so it will likely take no extra effort to preserve the old DB upgrade; we can make the choice either way about whether to keep it around.
Related bugs worth looking into (maybe they will be fixed by this): bug 779056, bug 786861
This patch contains a bunch of debug junk and some valuable changes. Nothing in this depends on the main patch; at some point I'll clean it up and we can land it early if it will help anyone else.
Attachment #734926 - Attachment is obsolete: true
Eventually, we should use OS.File with read-ahead instead of streams to read in the JSON file. Yoric will add support in bug 863373
In the course of cleaning up test failures in the new JSON version of the code, I've come across one place where the test is sensitive to a change where the new code is synchronous and the old code was (usually) async. I've figured out where the failure is, but I'm not sure whether the problem is that the test is too sensitive, or that the original code is doing things in the wrong order.

The test in question is toolkit/mozapps/extensions/test/xpcshell/test_bug675371.js. In short, the test adds a couple of addons, waits for several addon-related events to happen, ending with InstallListener.onInstallEnded, and then checks for several things, including that files in the addon's XPI are correctly registered with the component manager.

The problem is that the onInstallEnded notification is sent before the addon's bootstrap "startup" method is called, around http://dxr.mozilla.org/mozilla-central/toolkit/mozapps/extensions/XPIProvider.jsm#l5144, and the addon's manifest is not registered with the component manager until we call XPI_callBootstrapMethod specifically for the "startup" method at http://dxr.mozilla.org/mozilla-central/toolkit/mozapps/extensions/XPIProvider.jsm#l3773

The test breaks because after it gets onInstallEnded, it calls a sometimes-async API (AddonManager.getAddonByID) that happens to always behave async in the test case; that gives the bootstrap startup a chance to run before the test is performed. In the new implementation that call is currently always synchronous, so the test fails.

I can work around the problem by going around the event loop in the test, but that feels dirty; is there a better place to register the XPI with the component manager so that it doesn't race with the onInstallEnded notification?

See also http://dxr.mozilla.org/mozilla-central/addon-sdk/source/lib/sdk/addon/installer.js#l50, which fixes bug 749745 by doing a setTimeout(0) to let the startup method run before other code that expects the addon to be completely ready.
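For illustration, a hedged sketch of the "go around the event loop" workaround described above, using the xpcshell harness; the contract ID, listener wiring and assertion are placeholders, not the actual test_bug675371.js code.

// Hypothetical sketch (not the real test): defer the assertions by one
// event-loop turn so the provider gets a chance to call the bootstrap
// startup() method after it has sent onInstallEnded.
install.addListener({
  onInstallEnded: function(aInstall, aAddon) {
    do_execute_soon(function check_after_startup() {
      // Placeholder assertion - the real test checks that the XPI's
      // components/chrome are registered with the component manager.
      do_check_true("@tests.mozilla.org/example;1" in Components.classes);
      run_next_test();
    });
  }
});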
Flags: needinfo?(dtownsend+bugmail)
Flags: needinfo?(bmcbride)
Most of the tests run through to completion now, though not 100% reliably. I'm going to try working with an optimized build to see if that helps.

The root of the problem is that the async APIs in AddonManager use a helper object that implements iteration over asynchronous operations by tail recursion; particularly in the case of AMI_getAddonsByIDs() it iterates over a list of addon IDs, calling AMI_getAddonByID() which then recursively iterates over all the addon providers, leading to O(n * m) stack inflation, because the new XPI provider never needs to async wait for the database.

It's still not clear whether there's an actual bug in JS making this problem so annoying; I've had a couple of conversations in #jsapi with followup in bug 762618, but no real progress.
Attachment #739185 - Attachment is obsolete: true
Attached patch WIP 2 - save XPI database as JSON (obsolete) (deleted) — Splinter Review
Passes 115 tests, still fails 21.
Attachment #735153 - Attachment is obsolete: true
(In reply to :Irving Reid from comment #11)
> Most of the tests run through to completion now, though not 100% reliably.
> The root of the problem is that the async APIs in AddonManager use a helper
> object that implements iteration over asynchronous operations by tail
> recursion

I went through a similar hassle in bug 699839, and I originally worked around this by putting some code that spun the event loop, or that waited for the next tick, in tests. Unfortunately, I don't have a copy of that patch, and the latest version used a different technique (see the waitPending calls in the latest patch).
(Sorry for delays - I've been off sick/in hospital. I'll have a look at the WiP patch when I can.)

(In reply to :Irving Reid from comment #10)
> The problem is that the onInstallEnded notification is sent before the
> addon's bootstrap "startup" method is called

This was changed to be that way about a year ago - firing onInstallEnded after startup() was breaking things. See: http://theunfocused.net/2012/03/06/addons-manager-api-change/

> I can work around the problem by going around the event loop in the test,
> but that feels dirty; is there a better place to register the XPI with the
> component manager so that it doesn't race with the onInstallEnded
> notification?
>
> See also
> http://dxr.mozilla.org/mozilla-central/addon-sdk/source/lib/sdk/addon/installer.js#l50,
> which fixes bug 749745 by doing a setTimeout(0) to let the startup method
> run before other code that expects the addon to be completely ready.

Ew. Maybe we need to look into providing a new event for that in the future. Filed bug 864650.

In the meantime, the test add-on can set a preference or send an observer notification that the test can listen for. We use this type of pattern already in a couple of places.
Flags: needinfo?(dtownsend+bugmail)
Flags: needinfo?(bmcbride)
(In reply to :Irving Reid from comment #11)
> The root of the problem is that the async APIs in AddonManager use a helper
> object that implements iteration over asynchronous operations by tail
> recursion

Ah. That sounds like it's fixable within AsyncObjectCaller, by having callNext() wait for the next tick before calling listener.nextObject().
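A rough sketch of that suggestion. The internal field names (this.objects, this.listener) are assumptions about AsyncObjectCaller, not quoted from AddonManager.jsm; the point is that dispatching each step back to the main thread's event queue keeps a fully-synchronous provider from recursing through the whole list on one stack.

const { interfaces: Ci, utils: Cu } = Components;
Cu.import("resource://gre/modules/Services.jsm");

// Hypothetical sketch of AsyncObjectCaller.callNext() - field names are assumptions.
let AsyncObjectCallerSketch = {
  objects: [],      // remaining items to iterate
  listener: null,   // { nextObject(caller, obj), noMoreObjects(caller) }

  callNext: function AOC_callNext() {
    let object = this.objects.shift();
    if (object === undefined) {
      this.listener.noMoreObjects(this);
      return;
    }
    // Defer to the next tick of the main-thread event loop so each
    // nextObject() call starts from a fresh stack, even when the
    // provider answers synchronously.
    Services.tm.mainThread.dispatch(() => {
      this.listener.nextObject(this, object);
    }, Ci.nsIEventTarget.DISPATCH_NORMAL);
  }
};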
Attachment #740300 - Flags: feedback?(bmcbride)
Hey Blair, what's the timeline for this? We're hoping to have FHR recording of add-ons from Java in 23, so not sure whether to block on this or not…
Flags: needinfo?(bmcbride)
At this point I don't think the JSON based implementation will land in the 23 cycle, so I'd recommend not blocking. We're doing our best to preserve APIs, so unless you're reaching into internals there should not be much change when we land.
Flags: needinfo?(bmcbride)
Thanks, Irving. I've opened a needinfo for you on the other bug, rather than polluting this one.
Comment on attachment 740300 [details] [diff] [review]
WIP 2 - save XPI database as JSON

Review of attachment 740300 [details] [diff] [review]:
-----------------------------------------------------------------

Still quite a ways to go, but this seems to be shaping up nicely. General direction looks good for a first iteration, though there will be some challenges when saving/reading goes fully async and some resiliency gets built in. I'd still like to eventually look at refactoring much of the XPIDatabase API.

I think it'd be worth abstracting out all sqlite migration code into its own lazy-loaded JSM - that kind of thing really impacts mobile (and it would make the XPIDatabase code *so* much more manageable).

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +227,5 @@
>       aFile.path, e);
>   }
> }
>
> +// XXX IRVING is it worth taking SafeInstallOperation off main thread, into OS.File or whereever?

Yes, but IMO it's out of scope here.

@@ +921,5 @@
>
>   try {
>     let addon = loadManifestFromRDF(Services.io.newFileURI(file), bis);
>     addon._sourceBundle = aDir.clone();
> +   addon.size = getFileSize(aDir); // XXX IRVING do this async? remove it?

This is a public API, we can't remove it. Could change the API to be async, but the size should never change - so it's best to be done just once here, to keep things simple.

Reading the manifest needs to eventually be made async anyway; this should be easy to do then (but again, this is out of scope of this bug).

::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +159,5 @@
>     this.count++;
>     let self = this;
>     XPIDatabase.makeAddonFromRowAsync(row, function handleResult_makeAddonFromRowAsync(aAddon) {
>       function completeAddon(aRepositoryAddon) {
> +       // XXX IRVING new JSON DB loading code no longer goes through this path; what should I do about _repositoryAddon?

It has to be set on that object somehow - AddonWrapper uses it extensively.

@@ +503,5 @@
> +     this.addonDB = {};
> +     return false;
> +   }
> +   if (inputAddons.schemaVersion != DB_SCHEMA) {
> +     // XXX UPGRADE FROM PREVIOUS VERSION OF JSON DB

Note that this may not necessarily be an upgrade - we also need to support downgrades.

@@ +510,5 @@
> +   }
> +   // If we got here, we probably have good data
> +   // Make AddonInternal instances from the loaded data and save them
> +   delete this.addonDB;
> +   let addonDB = {}

Any reason why this can't be a Map?

@@ +972,4 @@
>     });
> +   // XXX IRVING removed an async callback when the database was closed
> +   // XXX do we want to keep the ability to async flush extensions.json
> +   // XXX and then call back?

IMO, yes.

@@ +1290,5 @@
>   },
>
>   /**
>    * Synchronously sets the Sync GUID for an add-on.
> +   * XXX IRVING could replace this with setAddonProperties

Sounds reasonable.

@@ +1305,5 @@
>   },
>
>   /**
>    * Synchronously sets the file descriptor for an add-on.
> +   * XXX IRVING could replace this with setAddonProperties

Ditto.

@@ +1324,5 @@
>    *
>    * @param aAddon
>    *        The DBAddonInternal to update
>    */
> +  updateAddonActive: function XPIDB_updateAddonActive(aAddon, aActive) {

This too.
Attachment #740300 - Flags: feedback?(bmcbride) → feedback+
(In reply to Blair McBride [:Unfocused] (Limited availability.) from comment #19)
> Comment on attachment 740300 [details] [diff] [review]
> WIP 2 - save XPI database as JSON
...
> @@ +510,5 @@
> > +   }
> > +   // If we got here, we probably have good data
> > +   // Make AddonInternal instances from the loaded data and save them
> > +   delete this.addonDB;
> > +   let addonDB = {}
>
> Any reason why this can't be a Map?

The documentation for Map (https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Map) says it's experimental, not for production code, etc.

> > /**
> >  * Synchronously sets the Sync GUID for an add-on.
> > + * XXX IRVING could replace this with setAddonProperties
>
> Sounds reasonable.
...
> > +  updateAddonActive: function XPIDB_updateAddonActive(aAddon, aActive) {
>
> This too.

Turns out there's special case code behind setting .active and .syncGUID, so folding these into setAddonProperties doesn't make sense in the short term.
Attached patch TESTS WIP v4 - Make test suite more async-safe (obsolete) (deleted) — Splinter Review
Cleaned almost all of the debug dumps out, and fixed a few more places where the tests weren't quite async-safe. This can be applied without the JSON changes, to ensure that we're comparing results from as close as possible to the same test suite before and after.
Attachment #740282 - Attachment is obsolete: true
This should be applied on top of TESTS. This patch passes the core xpcshell tests for extensions, but not the tests related to schema or database version changes.
Attachment #740300 - Attachment is obsolete: true
This should be applied after JSON. This patch removes most of the old sqlite code, except for that required to read and upgrade old databases. This passes all but five of the existing tests.

Still to do:
- Clean up the remaining test failures
- Implement the backward/forward compatibility for JSON databases as described at the end of https://bugzilla.mozilla.org/show_bug.cgi?id=853389#c10
- Replace the existing begin/end transaction model of database management with clean/dirty and asynchronous writing of the JSON file
(In reply to :Irving Reid from comment #20)
> > Any reason why this can't be a Map?
>
> The documentation for Map
> (https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Map)
> says it's experimental, not for production code, etc.

Heh, that warning is needlessly scary IMO. We're already using it extensively in production code, and any new code I try to get using Map/Set where appropriate (and slowly convert all code to use them).
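For reference, a minimal sketch of keying the in-memory add-on store by a Map while keeping the on-disk JSON as a plain array (a Map doesn't JSON.stringify directly). The _key and DB_SCHEMA names follow the conventions already visible in the patches; everything else here is illustrative, not the code under review.

// Sketch only - not the patch code.
let addonDB = new Map();

function addAddon(aAddon) {
  // aAddon._key is assumed to be the internal per-addon key already used by the patch
  addonDB.set(aAddon._key, aAddon);
}

function serializeDB() {
  // Store an array on disk; rebuild the Map (keyed by _key) when loading.
  return JSON.stringify({
    schemaVersion: DB_SCHEMA,
    addons: [...addonDB.values()]
  });
}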
This unwinds the stack often enough that I can pass the test suite without my stack saving hacks to AddonManager.jsm, which makes them much more useful to everyone else.
Attachment #746060 - Attachment is obsolete: true
Now handling all the upgrade scenarios from sqlite to JSON storage.

There are two failing tests that I haven't been able to sort out, and I'd like input from Dave and Blair (and others, if they want).

test_AddonRepository_cache.js is failing in run_test_17, because the addon manager is returning much more information than the test expects for one of the addons. I don't understand the behaviour of the AddonManager cache well enough to make sense of what the expected result should be, and what code paths the test is meant to cover.

test_bug659772.js is also failing, and I mostly understand why. As far as I can tell, the test sets up a scenario where:

1) We have downloaded and staged an addon update
2) We also have compatibility override data for that update
3) The browser is updated to a new version that changes the XPI database schema AND requires additional fields not extracted from the addon's manifest when the upgrade was staged

The code in question is related to bug 680802 and bug 664833, despite the name of the test file.

The behaviour I'm seeing is that in the original code, the database upgrade triggers a complete rebuild of the XPI database, with the addon state recovered by the database upgrade code. In the course of this, the compatibility overrides are applied to the addon.

In the new code I'm working to implement forward/backward compatibility in the JSON format, so we don't need to do a database rebuild. Because of this, the staged addon upgrade happens through a different code path that *does not* apply the compatibility overrides.

This leaves me wondering whether:

a) I'm misunderstanding the desired behaviour of the interaction between compatibility overrides and addon upgrades
b) there's a latent bug where an addon upgrade (with compat overrides) and app upgrade at the same time, that *doesn't* require a database rebuild, is doing the wrong thing
c) I mucked up the logic of the non-rebuild code path so that it stopped applying compat overrides
Attachment #746070 - Attachment is obsolete: true
Digging through the grimy underbelly of Bugzilla, I wonder if my problem with test_bug659772.js is related to bug 610740.
Depends on: 610740
Flags: needinfo?(dtownsend+bugmail)
(In reply to :Irving Reid from comment #26)
> Created attachment 748051 [details] [diff] [review]
> UPGRADE wip V2 - Upgrade sqlite and RDF databases to JSON
>
> Now handling all the upgrade scenarios from sqlite to JSON storage.
>
> There are two failing tests that I haven't been able to sort out, and I'd
> like input from Dave and Blair (and others, if they want).
>
> test_AddonRepository_cache.js is failing in run_test_17, because the addon
> manager is returning much more information than the test expects for one of
> the addons. I don't understand the behaviour of the AddonManager cache well
> enough to make sense of what the expected result should be, and what code
> paths the test is meant to cover.

That test verifies that we can set a pref and limit the types of add-ons that are cached in the AddonRepository. The test tries to only cache extensions and not themes, so what we should see in the results is an extension with all properties from the XPI and the repository, and two themes with only the properties that we have locally, as opposed to the previous test that should also see repository information for those themes. You say it's seeing more information? If it's the repository information for the themes that it's seeing then you're not correctly restricting caching based on the add-on type.

> test_bug659772.js is also failing, and I mostly understand why. As far as I
> can tell, the test sets up a scenario where:
>
> 1) We have downloaded and staged an addon update
> 2) We also have compatibility override data for that update
> 3) The browser is updated to a new version that changes the XPI database
> schema AND requires additional fields not extracted from the addon's
> manifest when the upgrade was staged

Looks about right.

> In the new code I'm working to implement forward/backward compatibility in
> the JSON format, so we don't need to do a database rebuild. Because of this,
> the staged addon upgrade happens through a different code path that *does
> not* apply the compatibility overrides.

That sounds broken to me, why aren't you applying the compatibility update?
Flags: needinfo?(dtownsend+bugmail)
Attached patch TESTS WIP v6 - minor tweak to an error message (obsolete) (deleted) — Splinter Review
Not sure whether Mossop or Unfocused would rather have the r? on this, swap if you want.

This goes through Try cleanly on all platforms, modulo some known intermittent failures: https://tbpl.mozilla.org/?tree=Try&rev=b2ca8ddd38c3

I was hoping this would help with the persistent test failures on Android (bug 676992 et al, see toolkit/mozapps/extensions/test/xpcshell/xpcshell.ini), but the extensions tests appear to be completely disabled in Android try builds so I'll need to revisit that later to re-enable them properly.
Attachment #748038 - Attachment is obsolete: true
Attachment #752204 - Flags: review?(dtownsend+bugmail)
Comment on attachment 752204 [details] [diff] [review] TESTS WIP v6 - minor tweak to an error message Blair is the one who has to be in charge of this long term so I suggest that he do most of the reviews, I can do bits and pieces where necessary though.
Attachment #752204 - Flags: review?(dtownsend+bugmail) → review?(bmcbride)
(In reply to Dave Townsend (:Mossop) from comment #28)
> (In reply to :Irving Reid from comment #26)
...
> > test_bug659772.js is also failing, and I mostly understand why. As far as I
> > can tell, the test sets up a scenario where:
> >
> > 1) We have downloaded and staged an addon update
> > 2) We also have compatibility override data for that update
> > 3) The browser is updated to a new version that changes the XPI database
> > schema AND requires additional fields not extracted from the addon's
> > manifest when the upgrade was staged
>
> Looks about right
>
> > In the new code I'm working to implement forward/backward compatibility in
> > the JSON format, so we don't need to do a database rebuild. Because of this,
> > the staged addon upgrade happens through a different code path that *does
> > not* apply the compatibility overrides.
>
> That sounds broken to me, why aren't you applying the compatibility update?

OK, I may have misunderstood the cause here. I'm digging through this again, and here's the sequence of events I see:

- We start up xpcshell with application version 1
- A packed XPI extension, addon3@..., is installed (created with writeInstallRDFForExtension, compatible with version 1 only)
- An update to addon3, which requires restart, is installed from a separate XPI file (test/xpcshell/addons/test_bug659772.xpi). This has the same extension version (2.0) but claims to be compatible with xpcshell version 2 only
- We save a .json of the xpi metadata, including appDisabled == true because the app is version 1 and the XPI specifies version 2
- Shut down and modify the .json in place, to simulate the bug scenario described in 3) above
- Force a DB schema migration and restart
- In XPIProvider.checkForChanges, XPIProvider.processPendingFileChanges sees the staged add-on and installs it (including the cached metadata), saves metadata in aManifests[location][id]
- Later in XPIProvider.checkForChanges, XPIProvider.getInstallLocationStates (the recursive directory scan) sees the changed files installed by processPendingFileChanges - this may be a separate bug where we do extra work in the face of staged installs?
- XPIProvider.processFileChanges gets both the states from the recursive scan and the manifests from processPendingFileChanges

This is where the behaviour diverges. In the code *before* my changes, because the database was discarded and is being rebuilt, we end up in addMetadata():

- Get the cached manifest for the updated addon
- Get the migration data from the old DB
- Because the add-on version is the same, **copy the application compatibility from the old DB over the new compatibility data from the updated add-on**, which overwrites the targetApplication from compatible with only xpcshell version 2, back to compatible with version 1
- Reset appDisabled to false, since our targetApplication version now matches
- test (do_check_false(a3.appDisabled)) passes

In the new code, since we don't discard the database and rebuild from migration data, the add-on update proceeds normally; because the newly installed version of the add-on is only compatible with xpcshell v2, and we're still running as v1, the add-on is kept as appDisabled == true, which causes the test to fail.

So my suspicion is that this is a case where the existing test, and the old code, was at least somewhat broken. The old code would give a different result for whether the updated add-on was compatible depending on whether the DB is being rebuilt at the same time, which doesn't seem right to me.
There's a comment above the code at http://dxr.mozilla.org/mozilla-central/toolkit/mozapps/extensions/XPIProvider.jsm#l2714 suggesting that the add-on version check isn't a perfect way to control whether the add-on's compatibility data gets restored from the DB migration data. It's a fairly obscure failure; this only happens when we need a DB upgrade (including the upgrade from sqlite to JSON) *and* there's a pending add-on install that changes application compatibility but does *not* change the add-on version, so I'm not sure that it's too important to fix it.
Flags: needinfo?(dtownsend+bugmail)
(In reply to Dave Townsend (:Mossop) from comment #28)
> (In reply to :Irving Reid from comment #26)
...
> > test_AddonRepository_cache.js is failing in run_test_17, because the addon
> > manager is returning much more information than the test expects for one of
> > the addons. I don't understand the behaviour of the AddonManager cache well
> > enough to make sense of what the expected result should be, and what code
> > paths the test is meant to cover.
>
> That test verifies that we can set a pref and limit the types of add-ons
> that are cached in the AddonRepository. The test tries to only cache
> extensions and not themes, so what we should see in the results is an
> extension with all properties from the XPI and the repository, and two themes
> with only the properties that we have locally, as opposed to the previous test
> that should also see repository information for those themes. You say it's
> seeing more information? If it's the repository information for the themes
> that it's seeing then you're not correctly restricting caching based on the
> add-on type.

I've sorted the rest of the way through this one. Basically, the old XPIDatabase code would always reset _repositoryAddon every time an XPI addon was fetched, via AsyncAddonListCallback. This makes XPIProvider.updateAddonRepositoryData mostly not do anything, since:

(a) it calls getVisibleAddons(), which already resets the _repositoryAddon
(b) it never removes an existing value of _repositoryAddon, even if there is no longer one in the cache (but this doesn't matter since getVisibleAddons already removed it)
(c) any subsequent retrieval of the addon will reset _repositoryAddon again anyway

updateAddonRepositoryData does do updateAddonDisabledState() iff the new cache contains an entry for the addon, which AsyncAddonListCallback doesn't do, so the behaviour is not exactly the same for the two paths.

In fact, test_AddonRepository_cache.js/run_test_17 resets _repositoryAddon five times: twice via AddonManager indirectly calling AsyncAddonListCallback before the cache is cleared, twice in updateAddonRepositoryData (indirectly via getVisibleAddons/AsyncAddonListCallback and then directly), and once again through getAddonsByID/AsyncAddonListCallback after.

The logic change that broke the test was that my replacement for AsyncAddonListCallback only fetches _repositoryAddon if we don't already have a value for it; this leads to me holding onto the older cached data in this test case. I can fix it for the time being by always resetting _repositoryAddon, but in the medium term this whole mechanism cries out for redesign.
(In reply to :Irving Reid from comment #31)
> (In reply to Dave Townsend (:Mossop) from comment #28)
...
> So my suspicion is that this is a case where the existing test, and the old
> code, was at least somewhat broken. The old code would give a different
> result for whether the updated add-on was compatible depending on whether
> the DB is being rebuilt at the same time, which doesn't seem right to me.

Yeah, this doesn't make a lot of sense. I suspect it was a bit of a hack to keep things at least sane, since without that fix things were breaking horribly in this case.

I think the important scenario to check is this: Have add-on A installed, compatible with version 1. Stage the install of the same version of add-on A which also claims compatibility with version 1 but has an update.rdf that extends that to also be compatible with version 2. Upgrade to version 2, forcing a DB schema change. Add-on A should still be installed and compatible.
Flags: needinfo?(dtownsend+bugmail)
Comment on attachment 752204 [details] [diff] [review]
TESTS WIP v6 - minor tweak to an error message

Review of attachment 752204 [details] [diff] [review]:
-----------------------------------------------------------------

Geez, this must have been fun :\

Just a few things to clear up:

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +2593,2 @@
>       // This add-on has disappeared
> +     LOG("Add-on " + aOldAddon.id + " removed from " + aOldAddon.location);

Er, aOldAddon.location? You mean aOldAddon._installLocation.name?

@@ +4171,5 @@
>     else
>       this.window = null;
> +
> +   this.WARN = WARN;
> +   this.LOG = LOG;

Eh?

@@ +4716,5 @@
>   startDownload: function AI_startDownload() {
>     this.state = AddonManager.STATE_DOWNLOADING;
>     if (!AddonManagerPrivate.callInstallListeners("onDownloadStarted",
>                                                   this.listeners, this.wrapper)) {
> +     WARN("Install listeners failed for addon " + this.sourceURI.spec);

This isn't failing - it's designed so listeners can return false to (purposefully) cancel the install.

::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +523,5 @@
>     delete this.connection;
>
>     if (!aForceOpen && !this.dbfileExists) {
>       this.connection = null;
> +     return;

Shouldn't this change be in another patch?
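To illustrate the callInstallListeners point above: a listener returning false is the supported way to cancel an install on purpose, so it is not an error condition. A hedged sketch follows; allowedByLocalPolicy() is a made-up placeholder, and the install object is assumed to come from the usual AddonManager APIs.

// install is an AddonInstall obtained elsewhere, e.g. via
// AddonManager.getInstallForURL(); allowedByLocalPolicy() is hypothetical.
install.addListener({
  onDownloadStarted: function(aInstall) {
    if (!allowedByLocalPolicy(aInstall))
      return false;   // purposefully cancels the install - not a failure
    return true;
  }
});
install.install();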
Attachment #752204 - Flags: review?(bmcbride) → review-
(In reply to Blair McBride [:Unfocused] (Limited availability.) from comment #34)
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +2593,2 @@
> >       // This add-on has disappeared
> > +     LOG("Add-on " + aOldAddon.id + " removed from " + aOldAddon.location);
>
> Er, aOldAddon.location? You mean aOldAddon._installLocation.name?

Ah, that carried back from one of the later patches where I always set AddonInternal.location... That said, we didn't use the parameter consistently in the existing code (sometimes passed installLocation object, sometimes string name) so I made it consistently take the string in the updated patch.

> @@ +4171,5 @@
> >     else
> >       this.window = null;
> > +
> > +   this.WARN = WARN;
> > +   this.LOG = LOG;
>
> Eh?

For reasons that I didn't get to the bottom of, some of the callback functions where we're trying to LOG and WARN were getting "not found" errors when they tried to call the file-global definitions, but they had a usable 'this' so I put references there.

> @@ +4716,5 @@
> >   startDownload: function AI_startDownload() {
> >     this.state = AddonManager.STATE_DOWNLOADING;
> >     if (!AddonManagerPrivate.callInstallListeners("onDownloadStarted",
> >                                                   this.listeners, this.wrapper)) {
> > +     WARN("Install listeners failed for addon " + this.sourceURI.spec);
>
> This isn't failing - it's designed so listeners can return false to
> (purposefully) cancel the install.

OK, improved the log message.

> ::: toolkit/mozapps/extensions/XPIProviderUtils.js
> @@ +523,5 @@
> >     delete this.connection;
> >
> >     if (!aForceOpen && !this.dbfileExists) {
> >       this.connection = null;
> > +     return;
>
> Shouldn't this change be in another patch?

Sure, this was a drive-by fix of a JS strict mode warning for inconsistent return type - should I add it to one of my other patches or post it separately?
Attachment #752204 - Attachment is obsolete: true
Attachment #754529 - Flags: review?(bmcbride)
Cleaned up and ready for a first review pass. There are a bunch of things left half done in this patch and then finished off in the UPGRADE patch; let me know if you'd prefer to review a merged patch.
Attachment #746063 - Attachment is obsolete: true
Attachment #754908 - Flags: review?(bmcbride)
Handle database version upgrades, clean up all but one test failure (I'm not yet comfortable with the issues discussed in Comment 32 / Comment 33). Remaining XXX comments in this patch will be addressed in additional patches to be posted this week.
Attachment #748051 - Attachment is obsolete: true
Attachment #754912 - Flags: review?(bmcbride)
No longer blocks: 865444
Attachment #754529 - Flags: review?(bmcbride) → review+
Here's a first cut at moving the deferred JSON save into a helper module. The test file is just a stub of the harness for now; I'm working on it.

This implements the behaviour discussed in http://www.controlledflight.ca/2013/05/31/saving-browser-state-asynchronously/

The basic API is:

let saver = new DeferredSave(aFile, aDataCallback, aDelay);
  aFile: nsIFile where the data should be saved
  aDataCallback: function aDataCallback() -> string, called by the saver when it wants a snapshot of the data to be saved
  aDelay: number of milliseconds to delay before starting the write, in order to batch up changes

saver.saveChanges(aCallback)
  aCallback: optional function aCallback(error), called when the save completes. If the save is successful, null is passed for the error argument. If the save fails, the exception is passed as the error argument.
  Marks the saver "dirty" and tells the saver to start the delay timer; when the timer runs out, the saver will call aDataCallback() to get a snapshot and start writing. If aCallback is non-null, the saver will call back aCallback when the write completes. Note that if there is more than one call to saveChanges() before the data is written, *all* the callbacks are remembered and called - this might not be the most useful design.

saver.flush(aCallback)
  Calls aDataCallback() to get a snapshot if the saver is dirty, waits for any in-progress write to complete, writes a final copy of the data to disk if the saver was dirty, and then calls the callback.
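A short usage sketch of the API as described above. The file name, delay, and data are examples, and the DeferredSave import path is an assumption at this WIP stage (the module still lives under toolkit/mozapps/extensions).

// Usage sketch only - paths, delay and data are illustrative.
Components.utils.import("resource://gre/modules/FileUtils.jsm");
// Import path for the new module is assumed, not final.
Components.utils.import("resource://gre/modules/DeferredSave.jsm");

let dbData = { schemaVersion: 1, addons: [] };
let jsonFile = FileUtils.getFile("ProfD", ["extensions.json"]);

// The snapshot callback returns the serialized data; 50ms batches up changes.
let saver = new DeferredSave(jsonFile, () => JSON.stringify(dbData), 50);

// Mutate the in-memory data, then mark the saver dirty.
dbData.addons.push({ id: "example@tests.mozilla.org" });
saver.saveChanges(error => {
  if (error)
    Components.utils.reportError("extensions.json save failed: " + error);
});

// At shutdown, make sure any pending changes reach the disk.
saver.flush(() => dump("extensions.json flushed\n"));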
Attachment #757910 - Flags: feedback?(felipc)
Attachment #757910 - Flags: feedback?(dteller)
Both the module and the tests are still based on callbacks rather than Promises.
Attachment #757910 - Attachment is obsolete: true
Attachment #757910 - Flags: feedback?(felipc)
Attachment #757910 - Flags: feedback?(dteller)
Attachment #758123 - Flags: feedback?(felipc)
Attachment #758123 - Flags: feedback?(dteller)
Depends on: 879850
Comment on attachment 758123 [details] [diff] [review]
DeferredSave WIP v2 - deferred async writes module, now with unit tests

Review of attachment 758123 [details] [diff] [review]:
-----------------------------------------------------------------

Very very nice! The flush() function is kinda confusing to me with all the aCallback/flushCallback dance, but I really don't know how to simplify it more.

::: toolkit/mozapps/extensions/DeferredSave.jsm
@@ +68,5 @@
> +
> +  LOG("Save changes");
> +  if (!this._dirtyData) {
> +    this._dirtyData = true;
> +    if (!this._saveInProgress)

_saveInProgress will always be false here because if it was true we would have returned earlier.

@@ +109,5 @@
> +    // end up waiting for an in-progress async save
> +    let jsonData = this._dataCallback();
> +    // reset dirty flag so _writeToDisk/finished() doesn't restart timer
> +    this._dirtyData = false;
> +    flushCallback = result => {

result unused? fwiw you can use "() =>" for no params with these arrow functions.

@@ +114,5 @@
> +    if (aCallback) {
> +      LOG("registering delayed save callback...");
> +      this._pendingCallbacks.push(aCallback);
> +    }
> +    // This could be replaced with a synchronous write

Interesting idea..
Attachment #758123 - Flags: feedback?(felipc) → feedback+
Comment on attachment 754908 [details] [diff] [review]
JSON v4: Save and restore XPI database in JSON format

Review of attachment 754908 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +5029,5 @@
> +  /*
> +   * JSON.stringify filter to save only the relevant parts of an AddonInternal
> +   * when recording metadata for a deferred installation
> +   */
> +  function install_metadata_filter(aKey, aValue) {

Why move this to this function, instead of keeping it in AddonInternal.toJSON()? toJSON feels neater, and makes for nice debug possibilities (ditto for DBAddonInternal).

::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +219,5 @@
> + * @param aCallback
> + *        Function with signature f([aMethod(object)]), called when all values
> + *        are available
> + */
> +function asyncMap(aObjects, aMethod, aCallback) {

This makes me wish we had Mike's Async.jsm in the tree already (see bug 870774).

@@ +372,5 @@
> + * @param aLoaded
> + *        the addon data fields loaded from the other source
> + * @return a DBAddonInternal populated with the loaded data
> + */
> +function AddonFromSave(aLoaded) {

The name of this irks me. createDBAddonInternal or something similar? Or maybe even just move it all to DBAddonInternal's constructor.

@@ +528,5 @@
> +
> +  /**
> +   * Open and parse the JSON XPI extensions database.
> +   * @return true: the DB was successfully loaded
> +   *         false: The DB either needs upgrade or did not exist at all.

The existing AddonRepository code deletes the DB file when it knows it's meant to be empty, which saves I/O - the new JSON DB code has been adapted to do the same. XPIDatabase has never done that, but I think it'd be worth looking into.

@@ +566,5 @@
> +  }
> +  // If we got here, we probably have good data
> +  // Make AddonInternal instances from the loaded data and save them
> +  delete this.addonDB;
> +  let addonDB = {}

Over in bug 853389, AddonRepository is storing the add-ons in-memory in a Map, which is specifically designed for this use-case. On disk, it's storing them in an array, which will save some bytes (and we don't benefit from being able to index by ID for the on-disk format anyway). Think we should do the same here.

@@ +1013,5 @@
>   // If we are running with an in-memory database then force a new
>   // extensions.ini to be written to disk on the next startup
> + // XXX IRVING special case for if we fail to save extensions.json?
> + // XXX maybe doesn't need to be at shutdown?
> + // if (!this.connection.databaseFile)

Yea, probably safest to still keep this - at least until after all this work is done, and we discover it's safe to not have it.

But if it's there, it does need to be done on shutdown - there's a few places where we set this to false during runtime. Alternatively, we could use a separate pref just for this situation (probably more reliable).

@@ +1028,4 @@
>   });
> + // XXX IRVING removed an async callback when the database was closed
> + // XXX do we want to keep the ability to async flush extensions.json
> + // XXX and then call back?

Yes - useful for testing.

@@ +1051,5 @@
>     return [];
>
> +  let locations = {};
> +  for each (let addon in this.addonDB) {
> +    locations[addon.location] = 1;

You're using locations as a set, better off just using a Set. Can easily convert a Set to an array via: [...mySet]

@@ +1083,5 @@
> + * Find the first addon that matches the filter function
> + * @param aFilter
> + *        Function that takes an addon instance and returns
> + *        true if that addon should be selected
> + * @return an array of DBAddonInternals

That comment is wrong (it's not an array for _findAddon).

@@ +1148,5 @@
>    * A callback to pass the DBAddonInternal to
>    */
>   getVisibleAddonForID: function XPIDB_getVisibleAddonForID(aId, aCallback) {
> +   let addon = this._findAddon(function visibleID(aAddon) {return ((aAddon.id == aId) && aAddon.visible)});
> +   getRepositoryAddon(addon, aCallback);

Hmm, is there a better way to do getRepositoryAddon()? Manually calling it in each of these functions seems error-prone (eg, why's it not being called in getAddonInLocation/getAddonsInLocation?).

@@ +1252,5 @@
>   addAddonMetadata: function XPIDB_addAddonMetadata(aAddon, aDescriptor) {
>     // If there is no DB yet then forcibly create one
> +   // XXX IRVING I don't think this will work as expected because the addonDB
> +   // getter will kick in. Might not matter because of the way the new DB
> +   // creates itself.

Indeed, that getter will kick in.

@@ +1264,5 @@
> +   this.addonDB[newAddon._key] = newAddon;
> +   if (newAddon.visible) {
> +     // XXX IRVING This seems an odd place to remove visibility of existing addons, but
> +     // the old code did it here...
> +     this.makeAddonVisible(newAddon);

Odd why? Doing it here guarantees only one add-on with the same ID is visible.

::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js
@@ +250,5 @@
>   // serves this purpose). On startup the add-ons manager won't rebuild
>   // because there is a file there still.
>   shutdownManager();
>   var dbfile = gProfD.clone();
> + dbfile.append("extensions.json");

EXTENSIONS_DB

::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt_strictcompat.js
@@ +250,5 @@
>   // serves this purpose). On startup the add-ons manager won't rebuild
>   // because there is a file there still.
>   shutdownManager();
>   var dbfile = gProfD.clone();
> + dbfile.append("extensions.json");

EXTENSIONS_DB

::: toolkit/mozapps/extensions/test/xpcshell/test_startup.js
@@ +132,5 @@
>   check_startup_changes(AddonManager.STARTUP_CHANGE_DISABLED, []);
>   check_startup_changes(AddonManager.STARTUP_CHANGE_ENABLED, []);
>
>   let file = gProfD.clone();
> + file.append("extensions.json");

EXTENSIONS_DB

@@ +190,5 @@
>   check_startup_changes(AddonManager.STARTUP_CHANGE_ENABLED, []);
>   do_check_true(gCachePurged);
>
>   let file = gProfD.clone();
> + file.append("extensions.json");

EXTENSIONS_DB
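A small sketch of the Set suggestion from the review above, written as a standalone function for clarity; the addonDB shape (a plain object keyed by add-on, with each value carrying a .location name) is an assumption, not the patch under review.

// Sketch only - not the patch code.
function getInstallLocations(addonDB) {
  let locations = new Set();
  for (let key of Object.keys(addonDB))
    locations.add(addonDB[key].location);
  // [...locations] converts the Set into the array callers expect.
  return [...locations];
}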
Attachment #754908 - Flags: review?(bmcbride) → review-
Comment on attachment 758123 [details] [diff] [review]
DeferredSave WIP v2 - deferred async writes module, now with unit tests

Review of attachment 758123 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/mozapps/extensions/DeferredSave.jsm
@@ +20,5 @@
> +}, this);
> +
> +/**
> + * A module to manage deferred, asynchronous writing of data files
> + * to disk.

Docnit: please
- define "deferred" in this context;
- add annotation |@constructor|;
- document that |aFile| is actually a path;
- document that |aDataCallback| should provide the data, and in which format;
- document |aDelay|.

@@ +29,5 @@
> +  // Callbacks to invoke after write completes
> +  this._pendingCallbacks = [];
> +  // Callbacks registered while a write was in progress,
> +  // deferred until the next write
> +  this._deferredCallbacks = [];

Well, that starts to look like promises.

@@ +35,5 @@
> +  this.lastError = null;
> +  this._file = aFile;
> +  this._dataCallback = aDataCallback;
> +  if (aDelay)
> +    this._delay = aDelay;

You don't seem to be using this._delay. Also, if |aDelay| is undefined, I'm almost sure that |DeferredTask| will fail, so you should probably have a default argument.

@@ +37,5 @@
> +  this._dataCallback = aDataCallback;
> +  if (aDelay)
> +    this._delay = aDelay;
> +
> +  this.__defineGetter__("_deferredWriter", function DS_getDeferredWriter() {

I believe that __defineGetter__ is very deprecated. You should use |Object.defineProperty| instead.

@@ +70,5 @@
> +  if (!this._dirtyData) {
> +    this._dirtyData = true;
> +    if (!this._saveInProgress)
> +      this._deferredWriter.start();
> +    // else

else what?

@@ +187,5 @@
> +      finished(this, null);
> +    },
> +    error => {
> +      WARN("Deferred save failed", error);
> +      finished(this, error);

It really looks like _writeToDisk could be rewritten in 4-5 lines with Task.jsm + Promise.

::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
@@ +27,5 @@
> +
> +context.OS.File.writeAtomic = function mock_writeAtomic(aFile, aData, aOptions) {
> +  do_print("writeAtomic: " + aFile + " data: '" + aData + "', " + aOptions.toSource());
> +  if (expectData !== undefined) {
> +    do_check_eq(expectData, aData);

This global value |expectData| looks rather error-prone to me. Couldn't you do something simpler, along the lines of:
- use the regular writeAtomic;
- read the file from disk to compare with whichever data you have written?

@@ +39,5 @@
> +
> +let callbackData = "Data to save";
> +function dataCallback() {
> +  return callbackData;
> +}

callbackData and dataCallback? Sounds like a good recipe for reviewer headache. Also, for the sake of testing, you probably want a different string for each test, so you could replace occurrences of |dataCallback| with |(() => "the name of the test")|.

@@ +54,5 @@
> +  saver.saveChanges(error => {
> +    if (error)
> +      do_report_unexpected_exception(error);
> +    run_next_test();
> +  });

You probably won't be surprised if I mention that this would be cleaner with tasks. Something along the lines of:

add_task(function test_basic_save_succeeds() {
  let saver = new DeferredSave(testFile, dataCallback, 1);
  expectData = callbackData;
  writeCallback = callback_soon(deferred => deferred.resolve(callbackData.length));
  yield saver.saveChanges();
});
Attachment #758123 - Flags: feedback?(dteller)
This patch depends on all the preceding patches, in order: TEST, JSON, UPGRADE, DeferredSave. This removes the old beginTransaction/commitTransaction model of controlling updates to the database and replaces it with a clean/dirty/flush model as supported by DeferredSave.jsm. There were a couple of places where I added additional try/catch or other error handling to handle failures where the old code would have triggered a rollback; for the other cases I convinced myself that either (a) errors were already being handled in the existing code or (b) rollback would leave the data less consistent than not rolling back.
Attachment #759385 - Flags: review?(bmcbride)
Aside from addressing review comments, I also added handling and tests for cases where the data callback throws an error.

Question: Should we try to get this landed in a shared place in toolkit, or leave it under toolkit/mozapps/extensions?

(In reply to :Felipe Gomes from comment #40)
> Comment on attachment 758123 [details] [diff] [review]
> DeferredSave WIP v2 - deferred async writes module, now with unit tests
>
> Very very nice! The flush() function is kinda confusing to me with all the
> aCallback/flushCallback dance, but I really don't know how to simplify it
> more

I updated the comments to try and make it clearer.

> ::: toolkit/mozapps/extensions/DeferredSave.jsm
> @@ +68,5 @@
> > +
> > +  LOG("Save changes");
> > +  if (!this._dirtyData) {
> > +    this._dirtyData = true;
> > +    if (!this._saveInProgress)
>
> _saveInProgress will always be false here because if it was true we would
> have returned earlier

Good catch.

> @@ +109,5 @@
> > +    // end up waiting for an in-progress async save
> > +    let jsonData = this._dataCallback();
> > +    // reset dirty flag so _writeToDisk/finished() doesn't restart timer
> > +    this._dirtyData = false;
> > +    flushCallback = result => {
>
> result unused? fwiw you can use "() =>" for no params with these arrow
> functions

I like leaving the parameter there to document that the "correct" signature of that particular callback is a function that takes one parameter. Bit of a holdover from writing type-checked code, I suppose.

> @@ +114,5 @@
> > +    if (aCallback) {
> > +      LOG("registering delayed save callback...");
> > +      this._pendingCallbacks.push(aCallback);
> > +    }
> > +    // This could be replaced with a synchronous write
>
> Interesting idea..

I'm not completely convinced we want to, mostly because we already need to allow async behaviour at that point in case there's already a write in progress.

(In reply to David Rajchenbach Teller [:Yoric] from comment #42)
> Comment on attachment 758123 [details] [diff] [review]
> DeferredSave WIP v2 - deferred async writes module, now with unit tests
...
> ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
> @@ +27,5 @@
> > +
> > +context.OS.File.writeAtomic = function mock_writeAtomic(aFile, aData, aOptions) {
> > +  do_print("writeAtomic: " + aFile + " data: '" + aData + "', " + aOptions.toSource());
> > +  if (expectData !== undefined) {
> > +    do_check_eq(expectData, aData);
>
> This global value |expectData| looks rather error-prone to me. Couldn't you
> do something simpler, along the lines of:
> - use the regular writeAtomic;
> - read the file from disk to compare with whichever data you have written?

I didn't want to use the real writeAtomic because it would have made the tests much harder to write and manage, since I wouldn't be able to control the order of asynchronous events. I changed from setting |expectData| and checking inside writeAtomic, to having writeAtomic "write" its output to a shared variable and then checking that shared variable in the test - I think that's clearer.

> @@ +39,5 @@
> > +
> > +let callbackData = "Data to save";
> > +function dataCallback() {
> > +  return callbackData;
> > +}
>
> callbackData and dataCallback? Sounds like a good recipe for reviewer
> headache. Also, for the sake of testing, you probably want a different
> string for each test, so you could replace occurrences of |dataCallback|
> with |(() => "the name of the test")|.

I use the same pattern of changing the data returned by the callback during tests in many places, so it would have been a pain to redefine this for each. I renamed the shared variable to "dataToSave" to be less confusing. I suppose I could make it even more obvious by putting these variables inside a "MockWriteAtomic" object or something like that, so references would look like MockWriteAtomic.dataToSave and MockWriteAtomic.writtenData...
Attachment #758123 - Attachment is obsolete: true
Attachment #759824 - Flags: feedback?(felipc)
Attachment #759824 - Flags: feedback?(dteller)
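(For readers following along, the mock pattern under discussion boils down to something like this; writtenData and writeDeferred are illustrative names, and Promise.jsm is assumed to be imported by the test.)

// Sketch of mocking OS.File.writeAtomic in the xpcshell test so the test
// controls when "writes" complete and can inspect what would be written.
let writtenData = undefined;   // what the mock last "wrote"
let writeDeferred = null;      // resolved by the test to complete the write

context.OS.File.writeAtomic = function mock_writeAtomic(aFile, aData, aOptions) {
  do_print("writeAtomic: " + aFile);
  writtenData = aData;
  writeDeferred = Promise.defer();
  return writeDeferred.promise;
};

// A test triggers a save through DeferredSave, asserts on writtenData, then
// calls writeDeferred.resolve(writtenData.length) to let the mocked write
// "complete" at a controlled point.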
(In reply to :Irving Reid from comment #44) > Question: Should we try to get this landed in a shared place in toolkit, or > leave it under toolkit/mozapps/extensions? From a quick glance, it seems potentially useful more generally (other DeferredTask users may want to use it?), but we don't need to figure that out now necessarily. We can move it to toolkit when we have a user lined up for it (file followup bugs).
Based on feedback from Yoric on the DeferredSave patch, I've also converted some of the __defineGetter__() calls to Object.defineProperty(); I'll get the rest in a later patch. (In reply to Blair McBride [:Unfocused] from comment #41) > Comment on attachment 754908 [details] [diff] [review] > JSON v4: Save and restore XPI database in JSON format > ... > ::: toolkit/mozapps/extensions/XPIProvider.jsm > @@ +5029,5 @@ > > + * JSON.stringify filter to save only the relevant parts of an AddonInternal > > Why move this to this function, instead of keeping it in > AddonInternal.toJSON()? toJSON feels neater, and makes for nice debug > possibilities (ditto for DBAddonInternal). That was left over from an earlier version that used JSON.stringify() to extract two different sets of data from the same object, so I needed to control which fields were serialized. I hadn't undone the change when I abandoned that approach. > ::: toolkit/mozapps/extensions/XPIProviderUtils.js > @@ +219,5 @@ > > +function asyncMap(aObjects, aMethod, aCallback) { > > This makes me wish we had Mike's Async.jsm in the tree already (see bug > 870774). If I need this code in the long run, I'd use Async.jsm - I'd much rather redesign the way we attach the AddonRepository cached objects to AddonWrappers so we don't need it at all. > @@ +372,5 @@ > > +function AddonFromSave(aLoaded) { > > The name of this irks me. createDBAddonInternal or something similar? Or > maybe even just move it all to DBAddonInternal's constructor. Excellent idea. I've moved DBAddonInternal to XPIProviderUtils.jsm and put this logic in its constructor. This will cause serious bitrot on the subsequent patches - would you rather I pulled it out into a separate patch file? > @@ +528,5 @@ > > + > > + /** > > + * Open and parse the JSON XPI extensions database. > > + * @return true: the DB was successfully loaded > > + * false: The DB either needs upgrade or did not exist at all. > > The existing AddonRepository code deletes the DB file when it knows it's > meant to be empty, which saves I/O - the new JSON DB code has been adapted > to do the same. XPIDatabase has never done that, but I think it'd be worth > looking into. I'd like to leave that for a follow up bug if we decide we want it. > @@ +566,5 @@ > > + let addonDB = {} > > Over in bug 853389, AddonRepository is storing the add-ons in-memory in a > Map, which is specifically designed for this use-case. On disk, it's storing > them in an array, which will save some bytes (and we don't benefit from > being able to index by ID for the on-disk format anyway). Think we should do > the same here. I change this to a Map in the UPGRADE patch. > @@ +1013,5 @@ > > // If we are running with an in-memory database then force a new > > // extensions.ini to be written to disk on the next startup > > + // XXX IRVING special case for if we fail to save extensions.json? > > + // XXX maybe doesn't need to be at shutdown? > > + // if (!this.connection.databaseFile) > > Yea, probably safest to still keep this - at least until after all this work > is done, and we discover its safe to not have it. > > But if its there, it does need to be done on shutdown - there's a few places > where we set this to false during runtime. Alternatively, we could use a > separate pref just for this situation (probably more reliable). I re-implement this logic in the UPGRADE patch. 
> @@ +1028,4 @@ > > }); > > + // XXX IRVING removed an async callback when the database was closed > > + // XXX do we want to keep the ability to async flush extensions.json > > + // XXX and then call back? > > Yes - useful for testing. Re-implemented in the ASYNC patch. > @@ +1051,5 @@ > > + let locations = {}; > > + for each (let addon in this.addonDB) { > > + locations[addon.location] = 1; > > You're using locations as a set, better off just using a Set. Can easily > convert a Set to an array via: [...mySet] Changed to a Set in the UPGRADE patch. > @@ +1083,5 @@ > > + * Find the first addon that matches the filter function > > + * @param aFilter > > + * Function that takes an addon instance and returns > > + * true if that addon should be selected > > + * @return an array of DBAddonInternals > > That comment is wrong (it's not an array for _findAddon). Fixed. > @@ +1148,5 @@ > > * A callback to pass the DBAddonInternal to > > */ > > getVisibleAddonForID: function XPIDB_getVisibleAddonForID(aId, aCallback) { > > + let addon = this._findAddon(function visibleID(aAddon) {return ((aAddon.id == aId) && aAddon.visible)}); > > + getRepositoryAddon(addon, aCallback); > > Hmm, is there a better way to do getRepositoryAddon()? Manually calling it > in each of these functions seems error-prone (eg, why's it not being called > in getAddonInLocation/getAddonsInLocation?). I would love to redesign this, but I want to wait until my changes here and Felipe's in AddonRepository are done before I look at anything that cuts across both. I've fixed getAddonInLocation() to call getRepositoryAddon(). getAddonsInLocation() is only used internally, in contexts where the repository addon info isn't needed. > @@ +1252,5 @@ > > addAddonMetadata: function XPIDB_addAddonMetadata(aAddon, aDescriptor) { > > // If there is no DB yet then forcibly create one > > + // XXX IRVING I don't think this will work as expected because the addonDB > > + // getter will kick in. Might not matter because of the way the new DB > > + // creates itself. > > Indeed, that getter will kick in. I'll fix this in a separate patch that does async load of the DB when we're loading on demand. > @@ +1264,5 @@ > > + this.addonDB[newAddon._key] = newAddon; > > + if (newAddon.visible) { > > + // XXX IRVING This seems an odd place to remove visibility of existing addons, but > > + // the old code did it here... > > + this.makeAddonVisible(newAddon); > > Odd why? Doing it here guarantees only one add-on with the same ID is > visible. I don't think it's so odd any more; comment removed. > ::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js > @@ +250,5 @@ > > // serves this purpose). On startup the add-ons manager won't rebuild > > // because there is a file there still. > > shutdownManager(); > > var dbfile = gProfD.clone(); > > + dbfile.append("extensions.json"); > > EXTENSIONS_DB These are all cleaned up in a later patch.
Attachment #754908 - Attachment is obsolete: true
Attachment #760450 - Flags: review?(bmcbride)
Comment on attachment 759824 [details] [diff] [review] DeferredSave v3 - deferred async writes module, review comments incorporated Review of attachment 759824 [details] [diff] [review]: ----------------------------------------------------------------- (In reply to :Irving Reid from comment #44) > Question: Should we try to get this landed in a shared place in toolkit, or > leave it under toolkit/mozapps/extensions? What gavin said (comment 45). Good followup fodder. ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +62,5 @@ > + Object.defineProperty(this, "_deferredWriter", { > + get: function DS_getDeferredWriter() { > + delete this._deferredWriter; > + let DeferredTask = Cu.import("resource://gre/modules/DeferredTask.jsm", {}) > + .DeferredTask; Should use XPCOMUtils.defineLazyModuleGetter() to lazily import this at the top of the file - avoids importing it for every new instance of DeferredSave like it's doing now.
Attachment #759824 - Flags: feedback+
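(For reference, the lazy import being suggested looks roughly like this at the top of DeferredSave.jsm; the usage comment with ASYNC_SAVE_DELAY_MS is illustrative.)

// Sketch: import DeferredTask once, lazily, at module scope instead of
// inside every DeferredSave constructor.
const Cu = Components.utils;
Cu.import("resource://gre/modules/XPCOMUtils.jsm");

XPCOMUtils.defineLazyModuleGetter(this, "DeferredTask",
                                  "resource://gre/modules/DeferredTask.jsm");

// The first reference to DeferredTask then performs the import, e.g.:
//   this._deferredWriter = new DeferredTask(writeCallback, ASYNC_SAVE_DELAY_MS);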
Comment on attachment 759824 [details] [diff] [review] DeferredSave v3 - deferred async writes module, review comments incorporated Review of attachment 759824 [details] [diff] [review]: ----------------------------------------------------------------- I'm still not enthusiastic about adding callback-based code while we're migrating plenty of code to promises. What's the rationale, again? ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +34,5 @@ > + * > + * @constructor > + * @param aFile > + * nsIFile representing the full path of the file where the data > + * is to be written. Please don't use a nsIFile to represent a path. We want to get rid of nsIFile, so just use a string. @@ +40,5 @@ > + * Callback function that takes no argument and returns the data to > + * be written. If aCallback returns a ArrayBufferView, the contained > + * bytes will be written to the file as is. > + * If the aCallback returns a String the data will be UTF-8 encoded > + * and then written to the file. Nit: I'm not a big fan of "aDataCallback" as a name. I don't have any better name at hand, but if you find a good idea, don't hesitate to use it :) @@ +72,5 @@ > +} > + > +DeferredSave.prototype = { > + // Data is dirty and needs to be flushed > + _dirtyData: false, That's mutable, please don't put it in the prototype. @@ +102,5 @@ > + this._dirtyData = true; > + this._deferredWriter.start(); > + } > + if (aCallback) { > + this._pendingCallbacks.push(aCallback); So, if data is not dirty, the callback might never be called? That looks weird. @@ +121,5 @@ > + * 4) Data is being written and in-memory copy is dirty > + * > + * @param aCallback a callback to be called when the data > + * has finished being flushed (it will still > + * be called if there was no dirty data). Nit: Whitespace. @@ +157,5 @@ > + } > + catch(e) { > + // Something broke, make our final flush call back with the error > + // instead of saving the data. Doing it this way waits for an > + // in-progress write to complete before calling back with the error Not sure why you put this try/call. The only code that seems to be able to raise an error is dataCallback, why not protect just that line? @@ +187,5 @@ > + } > + catch(e) { > + ERROR("Deferred save failed", e); > + this._writeFinished(e); > + } What's the point of this try/catch? I believe that |this._writeToDisk| can fail only in case of type error, which is certainly not what you want to defend against. @@ +189,5 @@ > + ERROR("Deferred save failed", e); > + this._writeFinished(e); > + } > + }, > + Nit: Whitespace. ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js @@ +41,5 @@ > +let dataToSave = "Data to save"; > +function dataCallback() { > + return dataToSave; > +} > + It seems to me that you want a subconstructor of DeferredSave (let's call it DeferredSaveTester) with the following additional features: - both |saveChanges| and |flush| provide the content that has been written (either by returning a promise or by calling a callback); - you can observe whenever a flush takes place. Could you please replace/encapsulate all these global variables stuff as this single constructor DeferredSaveTester? This should make your tests easier to read. 
@@ +55,5 @@
> + saver.saveChanges(error => {
> + if (error)
> + do_report_unexpected_exception(error);
> + do_check_eq(data, writtenData);
> + run_next_test();

Ideally, this could be rewritten as

add_task(function test_basic_save_succeeds() {
  let data = "Test 1 Data";
  let saver = new DeferredSaveTester(testFile, () => data, 1);
  let writtenData = yield saver.saveChanges();
  do_check_eq(data, writtenData);
});

Even if you don't want to use promises, you should be able to get something easier on the eye by using DeferredSaveTester.

@@ +61,5 @@
> +});
> +
> +// Two saves called during the same event loop, both with callbacks
> +// Make sure we save only the second version of the data
> +add_test(function test_two_saves() {

Well, this could look like

add_task(function test_two_saves() {
  let dataToSave = "Test first save";
  let secondData = "Test second save";
  let saver = new DeferredSaveTester(testFile, () => dataToSave, 1);
  let promiseWrittenData1 = saver.saveChanges();
  dataToSave = secondData;
  let promiseWrittenData2 = saver.saveChanges();
  do_check_eq((yield promiseWrittenData1), secondData);
  do_check_eq((yield promiseWrittenData2), secondData);
});

Not sure what you want to test with |firstCallback|. That your callbacks are only called once? That they are called in some specific order?
Attachment #759824 - Flags: feedback?(dteller)
Comment on attachment 760450 [details] [diff] [review] JSON v5: Save and restore XPI database in JSON format, response to first review Review of attachment 760450 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ +582,5 @@ > function isAddonDisabled(aAddon) { > return aAddon.appDisabled || aAddon.softDisabled || aAddon.userDisabled; > } > > +Object.defineProperty(this, "gRDF", { If you're updating this, it should be updated to use XPCOMUtils.defineLazyServiceGetter() ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +92,5 @@ > const PREFIX_ITEM_URI = "urn:mozilla:item:"; > const RDFURI_ITEM_ROOT = "urn:mozilla:item:root" > const PREFIX_NS_EM = "http://www.mozilla.org/2004/em-rdf#"; > > +Object.defineProperty(this, "gRDF", { XPCOMUtils.defineLazyServiceGetter() @@ +386,5 @@ > + * @constructor > + * @param aLoaded > + * Addon data fields loaded from JSON or the addon manifest. > + */ > +function DBAddonInternal(aLoaded) { Should update the comment above AddonInternal, that says "It may have come from the database (see DBAddonInternal below)". Also, your additions to the constructor here could really use some newlines to break it up. @@ +406,5 @@ > + // this change is being detected. > + } > + > + Object.defineProperty(this, "pendingUpgrade", { > + get: function DBA_pendingUpgradeGetter() { XPCOMUtils.defineLazyGetter() @@ +549,5 @@ > + writeJSON: function XPIDB_writeJSON() { > + // XXX should have a guard here for if the addonDB hasn't been auto-loaded yet > + let addons = []; > + for (let aKey in this.addonDB) { > + addons.push(copyProperties(this.addonDB[aKey], PROP_JSON_FIELDS)); Implement this as DBAddonInternal.toJSON
Attachment #760450 - Flags: review?(bmcbride) → review+
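(A quick sketch of the two follow-ups suggested above; the RDF contract ID is the standard one, and PROP_JSON_FIELDS is the field list the patch already uses.)

// Sketch: lazy service getter for gRDF instead of a hand-rolled
// Object.defineProperty() getter.
XPCOMUtils.defineLazyServiceGetter(this, "gRDF",
                                   "@mozilla.org/rdf/rdf-service;1",
                                   "nsIRDFService");

// Sketch: expose the JSON-relevant fields via toJSON() so writeJSON() can
// hand the addonDB contents straight to JSON.stringify().
DBAddonInternal.prototype.toJSON = function DBA_toJSON() {
  let fields = {};
  for (let prop of PROP_JSON_FIELDS) {
    fields[prop] = this[prop];
  }
  return fields;
};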
Attached patch DeferredSave v4 - Promise version (obsolete) (deleted) — Splinter Review
(In reply to David Rajchenbach Teller [:Yoric] from comment #48) > Comment on attachment 759824 [details] [diff] [review] > DeferredSave v3 - deferred async writes module, review comments incorporated > > I'm still not enthusiastic about adding callback-based code while we're > migrating plenty of code to promises. > What's the rationale, again? Mostly that I didn't feel comfortable enough with Promises to dig in and try. I got over that; here's a version that uses Promises (and doesn't use DeferredTask, because it was just as easy to implement the timer myself). > > + * @param aFile > > + * nsIFile representing the full path of the file where the data > > + * is to be written. > > Please don't use a nsIFile to represent a path. We want to get rid of > nsIFile, so just use a string. Done. > Nit: I'm not a big fan of "aDataCallback" as a name. I don't have any better > name at hand, but if you find a good idea, don't hesitate to use it :) How about "aDataProvider"? > > +DeferredSave.prototype = { > > + // Data is dirty and needs to be flushed > > + _dirtyData: false, > > That's mutable, please don't put it in the prototype. Moot with the Promise-based rewrite, but thanks; I'll remember that for the future. > @@ +102,5 @@ > > + this._dirtyData = true; > > + this._deferredWriter.start(); > > + } > > + if (aCallback) { > > + this._pendingCallbacks.push(aCallback); > > So, if data is not dirty, the callback might never be called? That looks > weird. Also moot with the rewrite, but there's no else; if there's a callback it gets added to the chain whether or not the data was dirty (the tests check this) > Not sure why you put this try/call. The only code that seems to be able to > raise an error is dataCallback, why not protect just that line? I reduced the scope of the try/catch in the Promise version, but it still covers more than just the dataProvider; if something can cause some other part of this code to blow up I'd rather the error was passed to someone who can do something useful with it. > ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js > @@ +41,5 @@ > > +let dataToSave = "Data to save"; > > +function dataCallback() { > > + return dataToSave; > > +} > > + > > It seems to me that you want a subconstructor of DeferredSave (let's call it > DeferredSaveTester) with the following additional features: > - both |saveChanges| and |flush| provide the content that has been written > (either by returning a promise or by calling a callback); > - you can observe whenever a flush takes place. > > Could you please replace/encapsulate all these global variables stuff as > this single constructor DeferredSaveTester? This should make your tests > easier to read. I encapsulated the test harness quite a bit; not in exactly the way you describe, because I needed to be able to switch back and forth between waiting on outputs of the code under test, and prodding the underside of the mock. That said, between the Task/Promise approach to the new tests and the encapsulated harness I think the tests are much clearer. > Not sure what you want to test with |firstCallback|. That your callbacks are > only called once? That they are called in some specific order? Yes, I'm making sure that each callback is called exactly once and in the correct order. Thanks for pushing me to try a Task/Promise approach to this module. I think the code turned out well, and it was time well spent learning how to use the libraries.
Attachment #762461 - Flags: review?(dteller)
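(To make the Promise-based shape concrete, here is a heavily simplified sketch of the core dance; it is not the actual DeferredSave.jsm — flush(), error ordering, and string encoding are all elided, and Timer.jsm's setTimeout stands in for the real timer.)

const Cu = Components.utils;
Cu.import("resource://gre/modules/Promise.jsm");
Cu.import("resource://gre/modules/osfile.jsm");
Cu.import("resource://gre/modules/Timer.jsm");

function MiniDeferredSave(aPath, aDataProvider, aDelay) {
  this._path = aPath;
  this._dataProvider = aDataProvider;  // returns the bytes to write
  this._delay = aDelay;
  this._pending = null;                // deferred shared by waiting callers
}

MiniDeferredSave.prototype = {
  // Mark the data dirty; every caller before the next write shares a promise.
  saveChanges: function() {
    if (!this._pending) {
      this._pending = Promise.defer();
      setTimeout(() => this._writeNow(), this._delay);
    }
    return this._pending.promise;
  },

  _writeNow: function() {
    let pending = this._pending;
    this._pending = null;              // changes after this point re-dirty
    let toSave = this._dataProvider();
    OS.File.writeAtomic(this._path, toSave, { tmpPath: this._path + ".tmp" })
           .then(count => pending.resolve(count),
                 error => pending.reject(error));
  }
};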
(In reply to Blair McBride [:Unfocused] from comment #49) > Comment on attachment 760450 [details] [diff] [review] > JSON v5: Save and restore XPI database in JSON format, response to first > review > > +Object.defineProperty(this, "gRDF", { > > If you're updating this, it should be updated to use > XPCOMUtils.defineLazyServiceGetter() > > > +Object.defineProperty(this, "gRDF", { > > XPCOMUtils.defineLazyServiceGetter() > > Should update the comment above AddonInternal, that says "It may have come > from the database (see DBAddonInternal below)". > > Also, your additions to the constructor here could really use some newlines > to break it up. > > > + Object.defineProperty(this, "pendingUpgrade", { > > + get: function DBA_pendingUpgradeGetter() { > > XPCOMUtils.defineLazyGetter() > > @@ +549,5 @@ > > + addons.push(copyProperties(this.addonDB[aKey], PROP_JSON_FIELDS)); > > Implement this as DBAddonInternal.toJSON I need to update the UPGRADE patch, because the first round of comment fixes bitrotted it terribly; I'll include these changes there. Any ETA on the rest of the review stack?
(In reply to :Irving Reid from comment #51) > I need to update the UPGRADE patch, because the first round of comment fixes > bitrotted it terribly; I'll include these changes there. Yea, makes sense. And, conveniently, will mean less reviewing for me :) > Any ETA on the rest of the review stack? Today, hopefully. Sorry its taken so long - these patches are getting hard to review, and I'm having to look at them all at once to get a full overview. Would be safest to wait to land anything until after the next merge on the 24th, so I kinda wonder if a temporary project branch would be useful - it'd make reviewing subsequent patches so much easier.
Comment on attachment 754912 [details] [diff] [review] UPGRADE V3 - Upgrade sqlite and RDF databases to JSON Review of attachment 754912 [details] [diff] [review]: ----------------------------------------------------------------- r- mostly just because its so badly bitrotten, and I'd like to be able to apply it to see everything. (In reply to :Irving Reid from comment #46) > I've moved DBAddonInternal to XPIProviderUtils.jsm and put > this logic in its constructor. This will cause serious bitrot on the > subsequent patches - would you rather I pulled it out into a separate patch > file? Since the JSON patch has a r+ now, with just minor fixups left, it'd be easier for me if you could unbitrot the other patches, and include any fixups there (or in separate patches). ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +385,5 @@ > converter.close(); > + this.dbfileExists = true; > + // XXX probably only want to do this if the version is different > + Services.prefs.setIntPref(PREF_DB_SCHEMA, DB_SCHEMA); > + Services.prefs.savePrefFile(null); // XXX is this bad sync I/O? Yes :( It's a trade-off between the sync IO by flushing prefs now, and the cost of rebuilding the DB next startup if we don't shutdown cleanly before the prefs are written to disk. That said, its arguably more important to ensure the prefs are written to disk when setting PREF_PENDING_OPERATIONS to true, and we don't flush then. @@ +440,5 @@ > }, > > /** > + * XXX IRVING new json file doesn't have this retry logic for > + * problems opening the file. Do we want it? No, it's not helpful with JSON. The old SQLite retry code was for the case where the DB was corrupt - you're already handling that case. @@ +447,2 @@ > * existing file and create an empty database. If that fails then it will > * open an in-memory database that can be used during this session. Should update this comment. @@ +478,5 @@ > + this.migrateData = null; > + return true; > + } > + LOG("Migrating data from sqlite"); > + this.migrateData = this.getMigrateDataFromDatabase(connection); loadSqliteData and getMigrateDataFromDatabase should just be merged, since there's no obvious distinction between the two. @@ +527,2 @@ > } > + cstream.close(); The |.close()| call should be inside a |finally {}| block. @@ +580,5 @@ > else { > + WARN("Extensions database " + this.jsonFile.path + > + " exists but is not readable; rebuilding in memory"); > + // XXX open question - if we can overwrite at save time, should we, or should we > + // leave the locked database in case we can recover from it next time we start up? If its locked from reading, you won't be able to write to it anyway, right? And we do want to try as hard as possible to recover the info from the existing DB. @@ +593,2 @@ > > + // XXX what about aForceOpen? Appears to handle the case of "don't open DB file if there aren't any extensions"? Not exactly. Or at least, not with the same behaviour as AddonRepository has. AddonRepository will delete an existing database file if it's empty; aForceOpen here only defers the creation of the database file if there isn't one already. I thought I saw some comment about the default theme, in relation to this...now I can't find it. Anyway: Not every app has a default theme (eg, Firefox on Android). @@ +891,2 @@ > > + for (let [key, addon] of this.addonDB) { If you're not using |key|, you can save creating the variable by: for (let [, addon] of this.addonDB) { ::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js @@ +250,5 @@ > // serves this purpose). 
> // On startup the add-ons manager won't rebuild
> // because there is a file there still.
> shutdownManager();
> + gExtensionsJSON.remove(true);
> + gExtensionsJSON.create(AM_Ci.nsIFile.DIRECTORY_TYPE, 0755);

Since you're updating this anyway... octal notation is going away, so could you update that to |parseInt("755", 8)|? (I like that way because it lets us keep the familiar octal notation)

Ditto with test_corrupt_strictcompat.js
Attachment #754912 - Flags: review?(bmcbride) → review-
Comment on attachment 759385 [details] [diff] [review] ASYNC v1 - Save XPI Database JSON asynchronously Review of attachment 759385 [details] [diff] [review]: ----------------------------------------------------------------- Gonna need re-review if DeferredSave moves to Promises. ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ +491,5 @@ > * previous instance may be a previous install or in the case of an application > * version change the same add-on. > * > + * NOTE: this may modify aNewAddon in place; callers should save the database if > + * necessary This seems error prone, and one of the reasons I've like to eventually have smart setters on DBAddonInternal that kicked off the timer to flush to disk. @@ +2386,5 @@ > } > catch (e) { > WARN("Add-on is invalid", e); > + // XXX if we remove the metadata for a broken addon in a locked location, > + // won't we try to re-install it on next startup? Yes :\ Existing bug, and we don't have a nicer way of dealing with it right now. Can't remember if its on file or not, but either way its out of scope here. @@ +2415,5 @@ > > // If this was the active theme and it is now disabled then enable the > // default theme > + // XXX shouldn't we be checking that the addon is a theme here? > + // XXX test was introduced in bug 648599 Yes, bug 715680. Out of scope here. @@ +3088,3 @@ > XPIDatabase.openConnection(false, true); > > + extensionListChanged = this.processFileChanges(state, manifests, This should in fact be in a separate try/catch - an exception there doesn't need to affect anything else. @@ +3807,5 @@ > this.bootstrapScopes[aId][aMethod](params, aReason); > } > catch (e) { > + // XXX failures in here should be recorded somewhere (FHR maybe?) so we can > + // feed them back to extension authors Bug 861078. @@ -3847,5 @@ > > /** > - * Updates the appDisabled property for all add-ons. > - */ > - updateAllAddonDisabledStates: function XPI_updateAllAddonDisabledStates() { Heh... should really be keeping this one instead, as it has the more correct method name compared to updateAddonAppDisabledStates (despite what the comment says). ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +369,5 @@ > + * Mark the current stored data dirty, and schedule a flush to disk > + */ > + saveChanges: function() { > + if (!this.initialized) { > + debugger; Don't forget to remove this. @@ +403,2 @@ > */ > + formJSON: function XPIDB_formJSON() { formJSON seems like an odd name. If you call it toJSON, you can just pass the whole object to JSON.stringify() like AddonRepository is doing. @@ +597,5 @@ > > if (aRebuildOnError) { > WARN("Rebuilding add-ons database from installed extensions."); > + let state = XPIProvider.getInstallLocationStates(); > + XPIProvider.processFileChanges(state, {}, false); This should still be wrapped in a try/catch - if it breaks, it doesn't have to break the entire startup process. @@ +834,5 @@ > + this.openConnection(true); > + return this.addonDB; > + }); > + // same for the deferred save > + delete this._deferredSave; What's the benefit of doing this? @@ +1101,5 @@ > + aNewAddon.installDate = aOldAddon.installDate; > + aNewAddon.applyBackgroundUpdates = aOldAddon.applyBackgroundUpdates; > + aNewAddon.foreignInstall = aOldAddon.foreignInstall; > + aNewAddon.active = (aNewAddon.visible && !aNewAddon.userDisabled && > + !aNewAddon.appDisabled && !aNewAddon.pendingUninstall) Since you're touching this line: Missing semicolon. 
::: toolkit/mozapps/extensions/test/xpcshell/test_AddonRepository.js @@ +410,5 @@ > }); > } > > function end_test() { > + shutdownManager(); This is done automatically in head_addons.js ::: toolkit/mozapps/extensions/test/xpcshell/test_bug659772.js @@ +166,5 @@ > // Should stay enabled because we migrate the compat info from > // the previous version of the DB > do_check_neq(a3, null); > do_check_eq(a3.version, "2.0"); > + todo_check_false(a3.appDisabled); // XXX unresolved issue Were these issues introduced by this patch?
Attachment #759385 - Flags: review?(bmcbride) → review-
This patch fixes up bitrot and incorporates all the cleanups from https://bugzilla.mozilla.org/show_bug.cgi?id=853388#c49, and also: (In reply to Blair McBride [:Unfocused] from comment #53) > Review of attachment 754912 [details] [diff] [review]: > > + Services.prefs.setIntPref(PREF_DB_SCHEMA, DB_SCHEMA); > > + Services.prefs.savePrefFile(null); // XXX is this bad sync I/O? > > Yes :( It's a trade-off between the sync IO by flushing prefs now, and the > cost of rebuilding the DB next startup if we don't shutdown cleanly before > the prefs are written to disk. That said, its arguably more important to > ensure the prefs are written to disk when setting PREF_PENDING_OPERATIONS to > true, and we don't flush then. This gets rebuilt in the ASYNC patch - I set the pref but don't flush; the cost of being wrong is relatively low. > @@ +478,5 @@ > > + this.migrateData = null; > > + return true; > > + } > > + LOG("Migrating data from sqlite"); > > + this.migrateData = this.getMigrateDataFromDatabase(connection); > > loadSqliteData and getMigrateDataFromDatabase should just be merged, since > there's no obvious distinction between the two. Fixed in the ASYNC patch > @@ +527,2 @@ > > } > > + cstream.close(); > > The |.close()| call should be inside a |finally {}| block. Good point, I put in a finally for fstream too, in case it doesn't get closed by cstream. > @@ +580,5 @@ > > else { > > + WARN("Extensions database " + this.jsonFile.path + > > + " exists but is not readable; rebuilding in memory"); > > + // XXX open question - if we can overwrite at save time, should we, or should we > > + // leave the locked database in case we can recover from it next time we start up? > > If its locked from reading, you won't be able to write to it anyway, right? > And we do want to try as hard as possible to recover the info from the > existing DB. Well, the file could be temporarily locked by a stupid antivirus scanner just as we're starting up, and unlocked later. My main concern here is that unlike the sqlite version, if the file somehow gets bad permissions we'll never remove it; that leaves the user stuck never being able to save their XPI state changes. If we tried to write the file even when the read fails, the OS.File.asyncSave write-temp-then-swap should succeed as long as we have permission to delete the existing (bad) file. > @@ +593,2 @@ > > > > + // XXX what about aForceOpen? Appears to handle the case of "don't open DB file if there aren't any extensions"? > > Not exactly. Or at least, not with the same behaviour as AddonRepository > has. AddonRepository will delete an existing database file if it's empty; > aForceOpen here only defers the creation of the database file if there isn't > one already. > > I thought I saw some comment about the default theme, in relation to > this...now I can't find it. Anyway: Not every app has a default theme (eg, > Firefox on Android). OK. I need to work through this code as part of the (to be written) async load patch, so I'll clean it up then. > > + gExtensionsJSON.create(AM_Ci.nsIFile.DIRECTORY_TYPE, 0755); > > Since you're updating this anyway... octal notation is going away, so could > you update that to |parseInt("755", 8)|? (I like that way because it lets us > keep the familiar octal notation) > > Ditto with test_corrupt_strictcompat.js Does FileUtils.PERMS_DIRECTORY work for you?
Attachment #754912 - Attachment is obsolete: true
Attachment #763764 - Flags: review?(bmcbride)
(In reply to Blair McBride [:Unfocused] from comment #54) > Comment on attachment 759385 [details] [diff] [review] > ASYNC v1 - Save XPI Database JSON asynchronously > > Review of attachment 759385 [details] [diff] [review]: > ----------------------------------------------------------------- Review comments not included here have been directly addressed in the patch. > Gonna need re-review if DeferredSave moves to Promises. I did change over to the Promise version; it's not difficult to go back if we decide to. > ::: toolkit/mozapps/extensions/XPIProvider.jsm > > + * NOTE: this may modify aNewAddon in place; callers should save the database if > > + * necessary > > This seems error prone, and one of the reasons I've like to eventually have > smart setters on DBAddonInternal that kicked off the timer to flush to disk. I've been thinking about that too, partly in terms of the data structure forward/backward compatibility support. Maybe in a follow-up patch... > > - updateAllAddonDisabledStates: function XPI_updateAllAddonDisabledStates() { > > Heh... should really be keeping this one instead, as it has the more correct > method name compared to updateAddonAppDisabledStates (despite what the > comment says). Unfortunately, the one with the slightly misleading name was the one in the public API with external users... > ::: toolkit/mozapps/extensions/XPIProviderUtils.js > > + // same for the deferred save > > + delete this._deferredSave; > > What's the benefit of doing this? Just trying to make sure the test environment starts from scratch as much as possible. Given that we try to completely unload the XPIProvider module when we shut down AddonManager, it might not be necessary. > ::: toolkit/mozapps/extensions/test/xpcshell/test_bug659772.js > @@ +166,5 @@ > > // Should stay enabled because we migrate the compat info from > > // the previous version of the DB > > do_check_neq(a3, null); > > do_check_eq(a3.version, "2.0"); > > + todo_check_false(a3.appDisabled); // XXX unresolved issue > > Were this issues introduced by this patch? This is the issue discussed in comment 31 through comment 33 - the test started failing after the UPGRADE patch, I just didn't change the check until now. I could use some higher bandwidth help with this, either IRC or vidyo.
Attachment #759385 - Attachment is obsolete: true
Attachment #763958 - Flags: review?(bmcbride)
(In reply to :Irving Reid from comment #55) > This gets rebuilt in the ASYNC patch - I set the pref but don't flush; the > cost of being wrong is relatively low. Eh? Rebuilding the database is basically the most expensive thing we can do. > Well, the file could be temporarily locked by a stupid antivirus scanner > just as we're starting up, and unlocked later. My main concern here is that > unlike the sqlite version, if the file somehow gets bad permissions we'll > never remove it; that leaves the user stuck never being able to save their > XPI state changes. > > If we tried to write the file even when the read fails, the > OS.File.asyncSave write-temp-then-swap should succeed as long as we have > permission to delete the existing (bad) file. Ah, yes. That sounds reasonable. > Does FileUtils.PERMS_DIRECTORY work for you? Yes - even better :)
(In reply to Blair McBride [:Unfocused] from comment #57) > (In reply to :Irving Reid from comment #55) > > This gets rebuilt in the ASYNC patch - I set the pref but don't flush; the > > cost of being wrong is relatively low. > > Eh? Rebuilding the database is basically the most expensive thing we can do. Would have been more correct for me to say the *risk* of being wrong is very low. The failure window is very narrow - we're only wrong if we're restarting, need to upgrade the DB schema, and FF crashes between when our first XPI DB save succeeds and the next time prefs are flushed. That said, this is also rare enough that the delay from a sync flush of prefs here won't bother users much, so I'm happy to put it back in if you want (savePrefFile() only does i/o if the prefs are dirty). Sure would be nice to have something like DeferredSave for prefs... > > Well, the file could be temporarily locked by a stupid antivirus scanner > > just as we're starting up, and unlocked later. My main concern here is that > > unlike the sqlite version, if the file somehow gets bad permissions we'll > > never remove it; that leaves the user stuck never being able to save their > > XPI state changes. > > > > If we tried to write the file even when the read fails, the > > OS.File.asyncSave write-temp-then-swap should succeed as long as we have > > permission to delete the existing (bad) file. > > Ah, yes. The sounds reasonable. I'll do a separate patch and test for this. My rough design is that if a save fails, we set something like _lastSaveFailed and go on; future changes to the data will trigger another attempt to save and if that succeeds we clear the flag. At shutdown time, if _lastSaveFailed we make one more attempt to write. If that attempt fails we set PREF_PENDING_OPERATIONS so that we try to clean up on next restart, or we could use a separate preference instead of overloading PREF_PENDING_OPERATIONS.
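(Roughly, the failure-tracking design described above could be shaped like this inside XPIDatabase; _lastSaveFailed comes from the description above, while the method bodies themselves are illustrative only.)

saveChanges: function() {
  this._deferredSave.saveChanges().then(
    count => { this._lastSaveFailed = false; },
    error => {
      ERROR("Failed to save XPI database", error);
      this._lastSaveFailed = true;   // a later data change retries the write
    });
},

shutdown: function() {
  // The last write attempt happens inside flush(); if it still fails, ask
  // for a consistency check on the next startup.
  return this.flush().then(null, error => {
    Services.prefs.setBoolPref(PREF_PENDING_OPERATIONS, true);
  });
}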
Comment on attachment 762461 [details] [diff] [review] DeferredSave v4 - Promise version Review of attachment 762461 [details] [diff] [review]: ----------------------------------------------------------------- Looks good, although I believe code could be simplified a little further. I haven't had time to review the tests, I'll try and do that today. ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +51,5 @@ > + * begins writing the data to disk. Default 50 milliseconds. > + */ > +function DeferredSave(aFile, aDataProvider, aDelay) { > + // Deferred to resolve when all writes complete > + // If this is non-null, we have data that needs to be written That's not a widespread coding convention, but I generally write the following: /** * @type {Promise|null} null when no data needs to be written * @resolves When all writes complete. * @rejects Never. */ this._pending = null; Your call. @@ +61,5 @@ > + this._dataProvider = aDataProvider; > + > + this._timer = null; > + > + if (aDelay && (aDelay > 0)) You could also declare a default value for |aDelay| in the function header. Your call. @@ +77,5 @@ > + * Mark the current stored data dirty, and schedule a flush to disk > + * @return A Promise<integer> that will be resolved after the data is written to disk; > + * the promise is resolved with the number of bytes written. > + * XXX handle case where saveChanges() is called in the middle of a flush? Should just > + * become dirty/inProgress again, yes? I believe that either choice is good. Note that "in the middle of a flush" probably actually means "between the instant the data is sent to OS.File and the instant it is actually written to disk." @@ +81,5 @@ > + * become dirty/inProgress again, yes? > + */ > + saveChanges: function() { > + // Start the pending timer > + function startTimer(aDeferredSave) { Looks like |aDeferredSave| is always |this|. It would probably be nicer to make |startTimer| a method. @@ +90,5 @@ > + } > + > + // Restart the pending timer for data that got dirty while a write was > + // in progress, or error/write immediately if flushing > + function rewrite(aDeferredSave, aDeferred) { Same here, you could make |rewrite| a method. @@ +113,5 @@ > + // this._pending away before _inProgress resolves > + let pending = this._pending = Promise.defer(); > + this._inProgress.promise.then( > + result => rewrite(this, pending), > + error => rewrite(this, pending)); The following formulation would avoid having to pre-define rewrite, which I believe would be a little nicer. Your call. this._inProgress.promise.then( () => {} /*ignore any error*/ ).then( () => /*here, add the content of the rewrite*/ ) or, with Task.jsm try { yield this._inProgress.promise; } catch (ex) { /* ignore errors */ } // Here, the contents of |rewrite| @@ +134,5 @@ > + let inProgress = this._pending; > + this._pending = null; > + try { > + let toSave = this._dataProvider(); > + this._writeToDisk(toSave, inProgress); I would make dependency between promises a little more explicit: this._writeToDisk(toSave).then( x => this._pending.resolve(x), err=> this._pending.reject(err) ); (or a Task.jsm equivalent) This would, of course, require you to rewrite _writeToDisk to return a promise instead of taking one. Same thing about all the functions and methods that take as argument a promise. @@ +189,5 @@ > + } > + catch(e) { > + // tricky - signal saveChanges/rewrite() to reject with this error > + // after _inProgress is done > + this._flushingError = e; Why don't you immediately reject |this._pending|? 
@@ +212,5 @@ > + > + // Nothing to write, return a pre-resolved promise > + let done = new Promise.defer(); > + done.resolve(0); > + return done.promise; return Promise.resolve(0);
Attachment #762461 - Flags: review?(dteller)
Attachment #762461 - Flags: feedback+
Blocks: 640775
This might be overkill, but the new code (before this patch) would upgrade sqlite => JSON any time the JSON file was missing, whether or not the upgrade had been done before. This patch uses the "extensions.databaseSchema" preference to only do the upgrade once. If the JSON file is lost after a successful upgrade, we rebuild from scratch rather than migrating possibly stale data from the old sqlite DB.
Attachment #764935 - Flags: review?(bmcbride)
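(A minimal sketch of the gating logic, using the constants already present in the patch — PREF_DB_SCHEMA is "extensions.databaseSchema"; the function name is illustrative.)

function shouldMigrateFromSqlite() {
  let currentSchema = 0;
  try {
    currentSchema = Services.prefs.getIntPref(PREF_DB_SCHEMA);
  } catch (e) {
    // Pref missing: we have never written a JSON DB, so migration is allowed.
  }
  // If the pref already records the JSON-era schema, the upgrade happened
  // before; a missing extensions.json then means rebuild, not re-migrate.
  return currentSchema < DB_SCHEMA;
}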
This patch adds async loading of the JSON database if it isn't force-loaded at startup. There are several places where synchronous APIs can force a DB load after startup; I'll add telemetry in a later patch to track whether these cases happen in the field. The whole stack should apply cleanly now, in the order: TESTS, JSON, UPGRADE, DeferredSave v4, asyncSave, schemaVersion, asyncLoad. At this point I'd be comfortable landing reviewed versions of this stack; the main remaining changes are (1) adding telemetry probes and (2) handling permission-denied and similar errors, see comment #58
Attachment #764943 - Flags: review?(bmcbride)
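(For context, the asynchronous load path boils down to something like the following sketch; the real patch layers schema checks, sqlite migration and error recovery on top, and the then() handlers named in the usage comment are placeholders.)

// Simplified sketch: read the JSON DB off the main thread, then decode and
// parse it when the read resolves. Assumes osfile.jsm and a chrome-visible
// TextDecoder.
function asyncLoadDB(aPath) {
  return OS.File.read(aPath).then(byteArray => {
    let raw = new TextDecoder().decode(byteArray);  // main-thread decode
    return JSON.parse(raw);                         // main-thread parse
  });
}

// Usage: asyncLoadDB(jsonFile.path).then(buildAddonDBMap, handleLoadError);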
Comment on attachment 763764 [details] [diff] [review] UPGRADE V4 - Upgrade sqlite and RDF databases to JSON, incorporating reviews Review of attachment 763764 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +323,5 @@ > // not on disk. In general this should only happen during startup as > // this change is being detected. > } > > + // XXX Can we redesign pendingUpgrade? Redesign why? (And, how?) @@ +408,3 @@ > let addons = []; > + for (let [key, addon] of this.addonDB) { > + addons.push(addon); May as well simplify this, now there's not much going on in the loop: [...this.addonDB.values()] @@ +574,5 @@ > + } > + // If we got here, we probably have good data > + // Make AddonInternal instances from the loaded data and save them > + let addonDB = new Map(); > + inputAddons.addons.forEach(function(loadedAddon) { for-of loop, please. forEach is dead to me.
Attachment #763764 - Flags: review?(bmcbride) → review+
Comment on attachment 762461 [details] [diff] [review] DeferredSave v4 - Promise version Review of attachment 762461 [details] [diff] [review]: ----------------------------------------------------------------- r=me I have suggested a few changes in my review pass over DeferredSave.jsm, could you consider them before landing this patch? ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js @@ +145,5 @@ > + ); > +}); > + > +// Save one set of changes, then while the write is in progress, modify the > +// data two more times I don't quite understand what you're checking in that test.
Attachment #762461 - Flags: review?(dteller) → review+
(In reply to David Rajchenbach Teller [:Yoric] from comment #59) > Comment on attachment 762461 [details] [diff] [review] > DeferredSave v4 - Promise version > @@ +61,5 @@ > > + if (aDelay && (aDelay > 0)) > > You could also declare a default value for |aDelay| in the function header. > Your call. This is more robust in the face of odd values of aDelay > @@ +77,5 @@ > > + * Mark the current stored data dirty, and schedule a flush to disk > > + * @return A Promise<integer> that will be resolved after the data is written to disk; > > + * the promise is resolved with the number of bytes written. > > + * XXX handle case where saveChanges() is called in the middle of a flush? Should just > > + * become dirty/inProgress again, yes? > > I believe that either choice is good. Note that "in the middle of a flush" > probably actually means "between the instant the data is sent to OS.File and > the instant it is actually written to disk." The real question here is what the meaning of "flush()" is. Does flush() indicate that we're closing up, so future operations should be forbidden, or does it just mean "let me know when you've finished writing everything that needs to be written as of now", but future writes are still OK. > The following formulation would avoid having to pre-define rewrite, which I > believe would be a little nicer. Your call. > > this._inProgress.promise.then( > () => {} /*ignore any error*/ > ).then( > () => /*here, add the content of the rewrite*/ > ) I did this. > or, with Task.jsm > > try { > yield this._inProgress.promise; > } catch (ex) { > /* ignore errors */ > } > // Here, the contents of |rewrite| I tried rewriting this as a Task, but it didn't seem clearer to me so I left it as raw promises. > @@ +189,5 @@ > > + } > > + catch(e) { > > + // tricky - signal saveChanges/rewrite() to reject with this error > > + // after _inProgress is done > > + this._flushingError = e; > > Why don't you immediately reject |this._pending|? To maintain the order of promises resolving, for example: saveChanges().then(handler_1) ... writing starts flush().then(handler_2) ... dataProvider fails I want handler_1 to get the success/fail from the in progress write before handler_2 gets the dataProvider failure. > @@ +212,5 @@ > > + // Nothing to write, return a pre-resolved promise > return Promise.resolve(0); I looked for a shortcut API but didn't see that. Thanks. (In reply to David Rajchenbach Teller [:Yoric] from comment #63) > ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js > @@ +145,5 @@ > > + > > +// Save one set of changes, then while the write is in progress, modify the > > +// data two more times > > I don't quite understand what you're checking in that test. I clarified the comment a bit - if there are multiple changes while a write is in progress, we should only write one more time with the final state after all the changes.
Attachment #759824 - Attachment is obsolete: true
Attachment #762461 - Attachment is obsolete: true
Attachment #759824 - Flags: feedback?(felipc)
Attachment #765545 - Flags: review?(dteller)
To be applied on top of asyncSave. This roughly parallels the sqlite implementation, where we would always try to remove the existing DB if there was a problem reading it - the old implementation would only fail hard if the new DB couldn't be removed; in that case it would keep a temporary in-memory DB and try to go back to the old DB on next restart. The new code just keeps trying to do OS.File.writeAtomic(); this will normally get rid of an extensions.json file with bad permissions, because moving the newly written file into place can unlink the old one even though it's not writable. The test changes are a bit hacky; it's not easy to simulate conditions where we can't update the json file at all but the rest of the browser still works. The alternative might be to do a Windows-specific test where we hold the existing file open to prevent removal. This patch isn't critical for first Nightly landing, but I would prefer that it follow not too far behind.
Attachment #765566 - Flags: review?(bmcbride)
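(The reason the retry can work at all: OS.File.writeAtomic with a tmpPath writes the data to a temporary file and then moves it over the old one, so it only needs permission to replace the directory entry, not to open the existing file. A one-line sketch, with illustrative variable names:)

// Write-temp-then-swap; succeeds even if the target file itself has bad
// permissions, as long as the entry in its directory can be replaced.
OS.File.writeAtomic(jsonPath, encodedBytes, { tmpPath: jsonPath + ".tmp" });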
Try this again with my last change actually checked in...
Attachment #765566 - Attachment is obsolete: true
Attachment #765566 - Flags: review?(bmcbride)
Attachment #765573 - Flags: review?(bmcbride)
Comment on attachment 765545 [details] [diff] [review] DeferredSave v5 - Promise version with review updates Review of attachment 765545 [details] [diff] [review]: ----------------------------------------------------------------- r=me Thanks for all these changes.
Attachment #765545 - Flags: review?(dteller) → review+
Blocks: 886607
Irving, I was just asked to take QA ownership of this. Would you briefly explain what user-facing impact this change will have? Without reading through every comment here, I get the feeling that the Add-ons Manager should just work as usual. Is that correct, or are there new features included that QA black-box testing would need to address? Thanks in advance.
QA Contact: twalker
Tracy, this change (and the companion change in bug 853389) are intended to have no user facing impact - the changes are entirely to the underlying implementation. What we're looking for is testing to ensure that we haven't broken any of the existing features.
I realized I had some other changes to DeferredSave in my telemetry patch, so I figured I'd roll them together for a quick re-review.
Attachment #765545 - Attachment is obsolete: true
Attachment #768546 - Flags: review?(dteller)
Comment on attachment 768546 [details] [diff] [review] DeferredSave v6 - Fix a race condition, add activity counters for telemetry Review of attachment 768546 [details] [diff] [review]: ----------------------------------------------------------------- (we did this on the whiteboard)
Attachment #768546 - Flags: review?(dteller)
After talking this through with Yoric at the work week, I found a way to make the logic much simpler by taking advantage of already-resolved promises to factor out the "do we have an existing write in progress" logic.
Attachment #768546 - Attachment is obsolete: true
Attachment #769673 - Flags: review?(dteller)
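(The restructuring leans on the usual "always have a promise to chain from" trick, roughly as below; the names are illustrative, not the patch's.)

// Keep a promise for "the most recent write", starting with an
// already-resolved one, so scheduling a save never needs a separate
// "is a write already in progress?" branch. Assumes Promise.jsm.
function Saver() {
  this._writing = Promise.resolve(0);   // no write in progress yet
}

Saver.prototype.scheduleWrite = function() {
  this._writing = this._writing
    .then(null, () => {})               // ignore the previous write's failure
    .then(() => this._writeNow());      // then start the next write
  return this._writing;
};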
Comment on attachment 769673 [details] [diff] [review] DeferredSave v7 - Restructure after design discussions Review of attachment 769673 [details] [diff] [review]: ----------------------------------------------------------------- That's much more readable, thanks. ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +37,5 @@ > + * > + * @constructor > + * @param aFile > + * String representing the full path of the file where the data > + * is to be written. Nit: s/aFile/aPath @@ +71,5 @@ > + > + // Are we currently waiting for a write to complete > + this.writeInProgress = false; > + > + this._file = aFile; Nit: _file should be _path @@ +99,5 @@ > + return this._lastError; > + }, > + > + // Start the pending timer if data is dirty > + startTimer: function() { Nit: Shouldn't this be a private (i.e. underscore-prefixed) method? @@ +202,5 @@ > + if (this._pending) { > + LOG("Flush called while data is dirty"); > + if (this._timer) { > + this._timer.cancel(); > + this._timer = null; This is the only time you reset |this._timer| to |null|. That's weird.
Attachment #769673 - Flags: review?(dteller) → review+
One thing this doesn't record is how long the writes actually take; OS.File.writeAtomic can provide that, so we could add a histogram. Blair, feel free to hand off the review on this one if you want.
Attachment #770975 - Flags: review?(bmcbride)
Attachment #770975 - Flags: feedback?(vdjeric)
Depends on: 890387
Comment on attachment 769673 [details] [diff] [review] DeferredSave v7 - Restructure after design discussions Review of attachment 769673 [details] [diff] [review]: ----------------------------------------------------------------- Looks like this was bitrotten a little by bug 880245. ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +104,5 @@ > + if (!this._pending) { > + return; > + } > + > + LOG("Starting timer"); Should include the filename in these log messages - with multiple DeferredSave instances (XPIProvider, AddonRepository, eventually others) there's currently no way to differentiate the logs, which is going to be painful for debugging.
Attachment #769673 - Flags: review+
Comment on attachment 770975 [details] [diff] [review]
Telemetry v1: Add some simple measures for XPI Database load/saves

- I don't think we're consistently reporting the update reason when there are multiple reasons to update: sometimes we only report the latest reason (line 3174) and sometimes we report an earlier reason (line 3170). We should be consistent about this: either only report the earliest/latest reason or report all reasons in a simpleMeasures array (my preference).
- This is mostly a personal preference: let's not do any heavy lifting in if-conditions. So let's move the calls to processPendingFileChanges, installDistributionAddons, and others inside or outside the if-blocks as appropriate.
- Rename updateDatabase to updateReason or something similar.
- I think the coding style now requires curlies around all if-else blocks.
- Style request/nit: Move the "update reason 8" logic outside the call to recordSimpleMeasure and make it explicit: if (!updateDatabase && hasPendingChanges) { updateDatabase = 8; }
- Why did you remove the comment at the beginning of XPIDB_openConnection?
- Does the call to Ci.nsIAppStartup trigger initialization of the service? If so, how expensive is it?
Attachment #770975 - Flags: feedback?(vdjeric)
Comment on attachment 763958 [details] [diff] [review] asyncSave v2 - Save XPI Database JSON asynchronously, updated after review Review of attachment 763958 [details] [diff] [review]: ----------------------------------------------------------------- (In reply to :Irving Reid from comment #56) > Unfortunately, the one with the slightly misleading name was the one in the > public API with external users... *facepalm* > This is the issue discussed in comment 31 through comment 33 - the test > started failing after the UPGRADE patch, I just didn't change the check > until now. I could use some higher bandwidth help with this, either IRC or > vidyo. Oops, sorry, missed this. I'm gonna need to set aside some quiet time to think that one through. ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ +3221,2 @@ > try { > + XPIDatabase.openConnection(false, true); openConnection doesn't throw, right? Either way, should keep that in its original position outside this try/catch - because if it can throw, nothing else in that if block should run. (But processFileChanges is ok to throw, the rest of the code can keep running fine if that happens.)
Attachment #763958 - Flags: review?(bmcbride) → review+
Comment on attachment 764935 [details] [diff] [review] schemaVersion v1: Only upgrade from sqlite when the schema version preference says to Review of attachment 764935 [details] [diff] [review]: ----------------------------------------------------------------- Not overkill - I much prefer this. ::: toolkit/mozapps/extensions/test/xpcshell/test_migrate_max_version.js @@ +88,5 @@ > + > + AddonManager.getAddonByID("addon1@tests.mozilla.org", > + function check_after_rebuild(a1) { > + // addon1 was rebuilt from extensions directory, > + // so it appears enabled as a foreign install foreignInstall is just a guess - hopefully we'll be better at guessing in the future, so we shouldn't rely on that here. I'd put a different version string in the DB, and differentiate based on that.
Attachment #764935 - Flags: review?(bmcbride) → review+
At Blair's suggestion, I've reworked the logging so that each instance of DeferredSave has its own instances of LOG/WARN/ERROR that know the leaf name of the file being saved, so messages look like: WARN: DeferredSave/extensions.json: blah blah blah
Attachment #754529 - Attachment is obsolete: true
Attachment #774164 - Flags: review?(dteller)
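(The logging change amounts to something like this in the constructor — a sketch, assuming OS.Path.basename for the leaf name and the existing AddonLogging LOG/WARN/ERROR functions.)

// Per-file log wrappers so concurrent DeferredSave users (extensions.json,
// addons.json, ...) can be told apart in the log output.
let leafName = OS.Path.basename(aPath);
let prefix = "DeferredSave/" + leafName + ": ";
this._log = {
  LOG:   aMsg => LOG(prefix + aMsg),
  WARN:  (aMsg, aErr) => WARN(prefix + aMsg, aErr),
  ERROR: (aMsg, aErr) => ERROR(prefix + aMsg, aErr),
};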
Comment on attachment 774164 [details] [diff] [review] DeferredSave v8 - Tag log messages with the file name Review of attachment 774164 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +39,5 @@ > + * begins writing the data to disk. Default 50 milliseconds. > + */ > +function DeferredSave(aPath, aDataProvider, aDelay) { > + let leafName = String.substr(aPath, Math.max(String.lastIndexOf(aPath, "/"), > + String.lastIndexOf(aPath, "\\")) + 1); OS.File.basename?
Attachment #774164 - Flags: review?(dteller) → review+
Comment on attachment 764943 [details] [diff] [review] asyncLoad v1: Load the XPI JSON database asynchronously when possible Review of attachment 764943 [details] [diff] [review]: ----------------------------------------------------------------- This makes me realise how inconsistent the naming of DB-related methods are - some have a "Database" postfix, some have a "DB" postfix, some have no mention of either. Would be good to tidy that up, just to make future work a little smoother to do. ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +412,5 @@ > return; > } > > + if (!this._deferredSave) > + this._deferredSave = new DeferredSave(this.jsonFile.path, Nit: Wrap in {} @@ +509,5 @@ > * A boolean indicating whether add-on information should be loaded > * from the install locations if the database needs to be rebuilt. > * (if false, caller is XPIProvider.checkForChanges() which will rebuild) > */ > + openConnection: function XPIDB_openConnection(aRebuildOnError) { openConnection/getDatabase don't feel like great names for sync/async versions of the same thing - how about something along the lines of syncReadDB/asyncReadDB ? @@ +531,5 @@ > read = cstream.readString(0xffffffff, str); // read as much as we can and put it in str.value > data += str.value; > } while (read != 0); > } > + this.parseDB(data, false); Should be passing aRebuildOnError here - the catch block below won't handle it since parseDB doesn't throw. And even if it did throw, getActiveBundles() would end up being needlessly called twice via rebuildDatabase(). @@ +633,5 @@ > + /** > + * Reconstruct when the DB file exists but is unreadable > + * (for example because read permission is denied > + */ > + unreadableDB: function(aError, aRebuildOnError) { rebuildUnreadableDatabase() ? @@ +660,5 @@ > + } > + > + LOG("Starting async load of XPI database " + this.jsonFile.path); > + return this._dbPromise = OS.File.read(this.jsonFile.path).then( > + array => { |array| is ambiguous - |byteArray| ? @@ +990,5 @@ > + addonDB => { > + let addonList = this._listAddons(addonDB, aFilter); > + asyncMap(addonList, getRepositoryAddon, aCallback); > + }) > + .then(null, error => ERROR("getAddonList failed", e)); Need to still call aCallback in the error condition. @@ +997,5 @@ > + /** > + * Internal interface to list addons when the database is known > + * to be loaded > + */ > + _listAddons: function XPIDB_listAddons(addonDB, aFilter) { From the method names, its not clear what the difference between getAddonList() and _listAddons() is - rename this _filterAddons/_filterDB/something? @@ +1003,1 @@ > return []; Its odd that this checks addonDB is non-null, yet _findAddon() doesn't. @@ +1047,5 @@ > * The name of the install location > * @return an array of DBAddonInternals > */ > getAddonsInLocation: function XPIDB_getAddonsInLocation(aLocation) { > + return this._listAddons(this.addonDB, aAddon => (aAddon.location == aLocation)); Add a comment here explaining why this doesn't go through getAddonList() @@ +1105,5 @@ > getAddonsByType: function XPIDB_getAddonsByType(aType) { > + if (!this.addonDB) { > + // jank-tastic! Must synchronously load DB if the theme switches from > + // an XPI theme to a lightweight theme before the DB has loaded, > + // because we're called from sync XPIProvider.addonChanged Thankfully, bug 520124 removes XPIProvider.addonChanged (I have updated patches, I just need the time to finish it).
Attachment #764943 - Flags: review?(bmcbride) → review+
Comment on attachment 765573 [details] [diff] [review] retrySave v2 - If the JSON file is not readable at startup, try to save the new DB anyway Review of attachment 765573 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ -3188,5 @@ > // If the database doesn't exist and there are add-ons installed then we > // must update the database however if there are no add-ons then there is > // no need to update the database. > - // Avoid using XPIDatabase.dbFileExists, as that code is lazy-loaded, > - // and we want to avoid loading it until absolutely necessary. Why remove this? ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ -579,5 @@ > if (inputAddons.schemaVersion != DB_SCHEMA) { > // Handle mismatched JSON schema version. For now, we assume > // compatibility for JSON data, though we throw away any fields we > // don't know about > - // XXX preserve unknown fields during save/restore Still planning on doing this, right? @@ +927,5 @@ > }) > .then(count => { > + // If our last attempt to read or write the DB failed, force a new > + // extensions.ini to be written to disk on the next startup > + let result = this.lastError; Nit: Don't really need this extra variable.
Attachment #765573 - Flags: review?(bmcbride) → review+
Hm, so, thinking about atomicWrite... if we crash while that's writing, we won't get a corrupt DB. But we will be left with an out of date DB, and a temp file. I wonder if we should detect that temp file on startup and force a DB rebuild.
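Roughly what that detection could look like (a hedged sketch, as it might sit in an XPIDatabase load path; the ".tmp" suffix is an assumption for illustration only):

let tmpPath = this.jsonFile.path + ".tmp";
OS.File.exists(tmpPath).then(found => {
  if (found) {
    WARN("Found " + tmpPath + " left over from an interrupted save; forcing a rebuild");
    this.rebuildDatabase(aRebuildOnError);
  }
});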
Comment on attachment 770975 [details] [diff] [review] Telemetry v1: Add some simple measures for XPI Database load/saves Review of attachment 770975 [details] [diff] [review]: ----------------------------------------------------------------- +1 to everything Vladan said in comment 76. Would be good to have a measurement that could help inform us if the value of ASYNC_SAVE_DELAY_MS is appropriate. ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ +3142,5 @@ > > // If the application version has changed then the database information > + // needs to be updated. Set this to an integer, 0 means no update needed, > + // positive integers represent various reasons. > + let updateDatabase = aAppChanged ? 1 : 0; Magic numbers are the devil. These should all be documented named constants at the top of the file. ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +570,5 @@ > // Now do some sanity checks on our JSON db > if (!("schemaVersion" in inputAddons) || !("addons" in inputAddons)) { > // Content of JSON file is bad, need to rebuild from scratch > ERROR("bad JSON file contents"); > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_invalid", 1); Can you name measures like this XPIDB_error_XXX ? Makes it easier to filter on this type of measure, versus non-error measurements. Though, saying that, most (all?) of these error measures are reasons to rebuild the database, so maybe it should be an array of reasons in one single measure, like the measure added in XPIProvider. @@ +579,5 @@ > // Handle mismatched JSON schema version. For now, we assume > // compatibility for JSON data, though we throw away any fields we > // don't know about > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_schemaMismatch", > + inputAddons.schemaVersion); Is there any reasonable way to correlate this with XPIDB_startup_load=3 ? @@ +656,5 @@ > * > * @return Promise<Map> resolves to the Map of loaded JSON data stored > * in this.addonDB; never rejects. > */ > + getDatabase: function XPIDB_asyncOpen() { Would be good to get measurements for how long the main-thread synchronous parts of this take (TextDecoder, parseDB, upgradeDB, and unreadableDB). @@ +664,5 @@ > } > > LOG("Starting async load of XPI database " + this.jsonFile.path); > + // Track in telemetry: did the async load start while XPIProvider > + // was initializing? before first paint? Would be nice to have something similar for the sync version too, so we can tell how often that happens outside of startup. @@ +665,5 @@ > > LOG("Starting async load of XPI database " + this.jsonFile.path); > + // Track in telemetry: did the async load start while XPIProvider > + // was initializing? before first paint? > + let loadTime = 3; Death to magic numbers! @@ +675,5 @@ > + if (!startupInfo.firstPaint) { > + loadTime = 2; > + } > + } > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_async_load", loadTime); Nit: Could do with some white space around this block of telemetry stuff. @@ +952,5 @@ > this.initialized = false; > > + if (this._deferredSave) { > + AddonManagerPrivate.recordSimpleMeasure( > + "XPIDB_total_saves", this._deferredSave.totalSaves); Naming these XPIDB_saves_XXX makes it easier to filter on. @@ +963,5 @@ > // Make sure any pending writes of the DB are complete, and we > // finish cleaning up, and then call back > this.flush() > .then(null, error => { > ERROR("Flush of XPI database failed", error); Add a measurement for this? @@ +1143,5 @@ > // jank-tastic! 
Must synchronously load DB if the theme switches from > // an XPI theme to a lightweight theme before the DB has loaded, > // because we're called from sync XPIProvider.addonChanged > WARN("Synchronous load of XPI database due to getAddonsByType(" + aType + ")"); > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_late_sync_open", 1); Ditto with the magic with these.
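To illustrate the magic-number nit, documented constants could look something like this (the names and values below are invented for the example, not taken from the patch):

// Reasons to open and rebuild the XPI database during startup
const UPDATE_REASON_NONE           = 0;   // no update needed
const UPDATE_REASON_APP_CHANGED    = 1;   // application version changed
const UPDATE_REASON_SCHEMA_CHANGED = 2;   // XPI DB schema version changed

let updateDatabase = aAppChanged ? UPDATE_REASON_APP_CHANGED : UPDATE_REASON_NONE;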
Attachment #770975 - Flags: review?(bmcbride) → review-
Comment on attachment 763958 [details] [diff] [review] asyncSave v2 - Save XPI Database JSON asynchronously, updated after review Review of attachment 763958 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +871,4 @@ > > + // Make sure any pending writes of the DB are complete, and we > + // finish cleaning up, and then call back > + this.flush() What guarantees are there that this completes safely on shutdown?
Comment on attachment 774164 [details] [diff] [review] DeferredSave v8 - Tag log messages with the file name Review of attachment 774164 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/DeferredSave.jsm @@ +41,5 @@ > +function DeferredSave(aPath, aDataProvider, aDelay) { > + let leafName = String.substr(aPath, Math.max(String.lastIndexOf(aPath, "/"), > + String.lastIndexOf(aPath, "\\")) + 1); > + // Set up loggers for this instance of DeferredSave > + Cu.import("resource://gre/modules/AddonLogging.jsm"); Only need to import this once, at the top of the file - not for every instance. @@ +88,5 @@ > + get dirty() { > + return this._pending || this.writeInProgress; > + }, > + > + get error() { Nit: For consistency, should this be called lastError, like the private property and on XPIDatabase?
Attachment #774164 - Flags: review+
(In reply to Blair McBride [:Unfocused] from comment #83) > Hm, so, thinking about atomicWrite... if we crash while that's writing, we > won't get a corrupt DB. But we will be left with an out of date DB, and a > temp file. I wonder if we should detect that temp file on startup and force > a DB rebuild. Note that, even in (non-default) paranoid mode, atomicWrite cannot fully guarantee that we won't have a corrupt db, so to ensure that we do not destroy Firefox in case of, say, a power failure during writes, having a mechanism to rebuild the db is always a good idea.
Attachment #754529 - Attachment is obsolete: false
Attachment #754529 - Flags: checkin+
Attachment #769673 - Attachment is obsolete: true
Comment on attachment 774164 [details] [diff] [review] DeferredSave v8 - Tag log messages with the file name Inbound: https://hg.mozilla.org/integration/mozilla-inbound/rev/7b4f810372b4
Attachment #774164 - Flags: checkin+
Whiteboard: [Snappy] c=startup_addons u= p= → [Snappy] c=startup_addons u= p= [keep-open]
(In reply to Blair McBride [:Unfocused] from comment #77) > Comment on attachment 763958 [details] [diff] [review] > asyncSave v2 - Save XPI Database JSON asynchronously, updated after review > > Review of attachment 763958 [details] [diff] [review]: > ----------------------------------------------------------------- ... > ::: toolkit/mozapps/extensions/XPIProvider.jsm > @@ +3221,2 @@ > > try { > > + XPIDatabase.openConnection(false, true); > > openConnection doesn't throw, right? Either way, should keep that in its > original position outside this try/catch - because if it can throw, nothing > else in that if block should run. (But processFileChanges is ok to throw, > the rest of the code can keep running fine if that happens.) Included this in this update to asyncLoad rather than updating asyncSave again. (In reply to Blair McBride [:Unfocused] from comment #81) > Comment on attachment 764943 [details] [diff] [review] > asyncLoad v1: Load the XPI JSON database asynchronously when possible > > Review of attachment 764943 [details] [diff] [review]: > ----------------------------------------------------------------- > > This makes me realise how inconsistent the naming of DB-related methods are > - some have a "Database" postfix, some have a "DB" postfix, some have no > mention of either. Would be good to tidy that up, just to make future work a > little smoother to do. Cleaned up a couple of these, but didn't look at everything. > openConnection/getDatabase don't feel like great names for sync/async > versions of the same thing - how about something along the lines of > syncReadDB/asyncReadDB ? I used syncLoadDB / asyncLoadDB > @@ +531,5 @@ > > + this.parseDB(data, false); > > Should be passing aRebuildOnError here - the catch block below won't handle > it since parseDB doesn't throw. And even if it did throw, getActiveBundles() > would end up being needlessly called twice via rebuildDatabase(). OK > @@ +633,5 @@ > > + /** > > + * Reconstruct when the DB file exists but is unreadable > > + * (for example because read permission is denied > > + */ > > + unreadableDB: function(aError, aRebuildOnError) { > > rebuildUnreadableDatabase() ? Yes. > @@ +660,5 @@ > > + return this._dbPromise = OS.File.read(this.jsonFile.path).then( > > + array => { > > |array| is ambiguous - |byteArray| ? OK > @@ +990,5 @@ > > + .then(null, error => ERROR("getAddonList failed", e)); > > Need to still call aCallback in the error condition. Good catch. Added a safety wrapper to catch throws in API-supplied callback functions, and made sure aCallback gets called in a few other error conditions. > @@ +997,5 @@ > > + /** > > + * Internal interface to list addons when the database is known > > + * to be loaded > > + */ > > + _listAddons: function XPIDB_listAddons(addonDB, aFilter) { > > From the method names, its not clear what the difference between > getAddonList() and _listAddons() is - rename this > _filterAddons/_filterDB/something? Renamed to _filterDB, and moved it (and _findAddon) out of the XPIDatabase object and into functions since they don't need to be methods. > @@ +1003,1 @@ > > return []; > > Its odd that this checks addonDB is non-null, yet _findAddon() doesn't. 
Reviewed all the places we use this.addonDB and added comments and/or code > @@ +1047,5 @@ > > getAddonsInLocation: function XPIDB_getAddonsInLocation(aLocation) { > > + return this._listAddons(this.addonDB, aAddon => (aAddon.location == aLocation)); > > Add a comment here explaining why this doesn't go through getAddonList() OK Also, based on conversations with Felipe and Yoric, added handling for a race condition where syncLoadDB() gets called while asyncLoadDB() is in progress.
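For reference, a hedged sketch of what the _filterDB/_findAddon module-level helpers mentioned above might look like (the bodies are assumed; addonDB is the Map described earlier in the review):

function _filterDB(addonDB, aFilter) {
  return [...addonDB.values()].filter(aFilter);
}

function _findAddon(addonDB, aFilter) {
  for (let addon of addonDB.values()) {
    if (aFilter(addon)) {
      return addon;
    }
  }
  return null;
}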
Attachment #764943 - Attachment is obsolete: true
Attachment #784807 - Flags: review?(bmcbride)
Attached patch asyncLoad v1 -> v2 interdiff (obsolete) (deleted) — Splinter Review
Turns out it's not too hard to make interdiffs with hg pbranch - I should blog my technique...
Bug 888753 was causing hangs in the test suite with the new DeferredSave implementation, so I ended up having to go through the test suite making sure every shutdownManager call was deferred to the next event loop rather than happening inside a callback from AddonManager. There was also a chunk of work to make the "locked file" tests do something reasonable on both Windows and Unix. This patch also cleans up some new JS warnings; there's a bit of conflict with Felipe's last patch because he made some of the same changes. I'll resolve those on landing. (In reply to Blair McBride [:Unfocused] from comment #82) > Comment on attachment 765573 [details] [diff] [review] > retrySave v2 - If the JSON file is not readable at startup, try to save the > new DB anyway > > Review of attachment 765573 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: toolkit/mozapps/extensions/XPIProvider.jsm > @@ -3188,5 @@ > > - // Avoid using XPIDatabase.dbFileExists, as that code is lazy-loaded, > > - // and we want to avoid loading it until absolutely necessary. > > Why remove this? XPIDatabase.dbFileExists was removed because it wasn't needed anywhere else. > ::: toolkit/mozapps/extensions/XPIProviderUtils.js > @@ -579,5 @@ > > - // XXX preserve unknown fields during save/restore > > Still planning on doing this, right? I'll file a follow up bug > @@ +927,5 @@ > > }) > > .then(count => { > > + // If our last attempt to read or write the DB failed, force a new > > + // extensions.ini to be written to disk on the next startup > > + let result = this.lastError; > > Nit: Don't really need this extra variable. XPIDatabase.lastError is a getter function that uses _deferredSave, which is destroyed in this code block, so I do need to store the value. I've renamed the variable. (In reply to Blair McBride [:Unfocused] from comment #86) > Comment on attachment 774164 [details] [diff] [review] > DeferredSave v8 - Tag log messages with the file name > > Review of attachment 774164 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: toolkit/mozapps/extensions/DeferredSave.jsm > @@ +41,5 @@ > > + // Set up loggers for this instance of DeferredSave > > + Cu.import("resource://gre/modules/AddonLogging.jsm"); > > Only need to import this once, at the top of the file - not for every > instance. Hrrm, hrroom. I'm not importing XPCUtils yet (for the lazy module loader), so i'm trapped in a twisty web of overhead-tradeoffs... > @@ +88,5 @@ > > + get dirty() { > > + return this._pending || this.writeInProgress; > > + }, > > + > > + get error() { > > Nit: For consistency, should this be called lastError, like the private > property and on XPIDatabase? OK, done.
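The shutdown-deferral pattern described above ends up looking roughly like this in the tests (a hedged sketch using the harness helpers named in this bug; the add-on ID is taken from the quoted tests):

AddonManager.getAddonByID("addon1@tests.mozilla.org", callback_soon(function(a1) {
  do_check_neq(a1, null);
  // callback_soon defers this function to the next event-loop tick, so we are
  // no longer inside an AddonManager callback when the manager shuts down.
  restartManager();
  do_test_finished();
}));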
Attachment #765573 - Attachment is obsolete: true
Attachment #784811 - Flags: review?(bmcbride)
Attached patch retrySave v2 -> v3 interdiff (obsolete) (deleted) — Splinter Review
(In reply to Vladan Djeric (:vladan) from comment #76) > Comment on attachment 770975 [details] [diff] [review] > Telemetry v1: Add some simple measures for XPI Database load/saves > > - I don't think we're consistently reporting the update reason when there > are multiple reasons to update: sometimes we only report the latest reason > (line 3174) and sometimes we report an earlier reason (line 3170). We should > be consistent about this.. either only report the earliest/latest reason or > report all reasons in a simpleMeasures array (my preference) OK. I made it an object, rather than an array of strings - do you prefer one or the other? > - This is mostly a personal preference: let's not do any heavy lifting in > if-conditions. So let's move the calls to processPendingFileChanges, > installDistributionAddons, and others inside or outside the if-blocks as > appropriate. OK > - Rename updateDatabase to updateReason or something similar > - I think the coding style now requires curlies around all if-else blocks > - Style request/nit: Move the "update reason 8" logic outside the call to > recordSimpleMeasure and make it explicit: OK > - Why did you remove the comment at the beginning of XPIDB_openConnection? Because I put the telemetry calls at the places where I call XPIDB_syncLoadDB (the new name for openConnection) > - Does the call to Ci.nsIAppStartup trigger initialization of the service? > If so, how expensive is it? It looks like it does a fair bit of work, and it's really not designed to support this use case. I've switched to observing final-ui-startup in XPIProvider.jsm to keep track of where we are in startup, but it happens quite a while before first paint so it might not be the best choice either. I'm open to suggestions here... (In reply to Blair McBride [:Unfocused] from comment #84) > Comment on attachment 770975 [details] [diff] [review] > Telemetry v1: Add some simple measures for XPI Database load/saves > > Review of attachment 770975 [details] [diff] [review]: > ----------------------------------------------------------------- > > Would be good to have a measurement that could help inform us if the value > of ASYNC_SAVE_DELAY_MS is appropriate. I'm tracking the number of saves per session (XPIDB_saves_total), and the number of times the data is modified while it's being written (XPIDB_saves_overlapped); hopefully those will help. > ::: toolkit/mozapps/extensions/XPIProvider.jsm > @@ +3142,5 @@ > > > > // If the application version has changed then the database information > > + // needs to be updated. Set this to an integer, 0 means no update needed, > > + // positive integers represent various reasons. > > + let updateDatabase = aAppChanged ? 1 : 0; > > Magic numbers are the devil. These should all be documented named constants > at the top of the file. At Vladan's suggestion I've made all these different attributes of an object, so we'll get a list of names in the telemetry message. > ::: toolkit/mozapps/extensions/XPIProviderUtils.js > @@ +570,5 @@ > > // Now do some sanity checks on our JSON db > > if (!("schemaVersion" in inputAddons) || !("addons" in inputAddons)) { > > // Content of JSON file is bad, need to rebuild from scratch > > ERROR("bad JSON file contents"); > > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_invalid", 1); > > Can you name measures like this XPIDB_error_XXX ? Makes it easier to filter > on this type of measure, versus non-error measurements. OK > Though, saying that, most (all?) 
of these error measures are reasons to > rebuild the database, so maybe it should be an array of reasons in one > single measure, like the measure added in XPIProvider. I'm pretty sure we'll never get more than one of these in a single run. > @@ +579,5 @@ > > // Handle mismatched JSON schema version. For now, we assume > > // compatibility for JSON data, though we throw away any fields we > > // don't know about > > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_schemaMismatch", > > + inputAddons.schemaVersion); > > Is there any reasonable way to correlate this with XPIDB_startup_load=3 ? They'll almost always happen together - I think they'll only be different if we saved the DB but failed to save the prefs on an XPI DB version upgrade. > @@ +656,5 @@ > > * > > * @return Promise<Map> resolves to the Map of loaded JSON data stored > > * in this.addonDB; never rejects. > > */ > > + getDatabase: function XPIDB_asyncOpen() { > > Would be good to get measurements for how long the main-thread synchronous > parts of this take (TextDecoder, parseDB, upgradeDB, and unreadableDB). OK. > @@ +664,5 @@ > > } > > > > LOG("Starting async load of XPI database " + this.jsonFile.path); > > + // Track in telemetry: did the async load start while XPIProvider > > + // was initializing? before first paint? > > Would be nice to have something similar for the sync version too, so we can > tell how often that happens outside of startup. I used the same XPIProvider.runPhase as I added for the async version. > @@ +665,5 @@ > > > > LOG("Starting async load of XPI database " + this.jsonFile.path); > > + // Track in telemetry: did the async load start while XPIProvider > > + // was initializing? before first paint? > > + let loadTime = 3; > > Death to magic numbers! Now using defined constants in XPIProvider and assigned to XPIProvider.runPhase > @@ +675,5 @@ > > + if (!startupInfo.firstPaint) { > > + loadTime = 2; > > + } > > + } > > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_async_load", loadTime); > > Nit: Could do with some white space around this block of telemetry stuff. OK > @@ +952,5 @@ > > + AddonManagerPrivate.recordSimpleMeasure( > > + "XPIDB_total_saves", this._deferredSave.totalSaves); > > Naming these XPIDB_saves_XXX makes it easier to filter on. OK > @@ +963,5 @@ > > ERROR("Flush of XPI database failed", error); > > Add a measurement for this? OK > @@ +1143,5 @@ > > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_late_sync_open", 1); > > Ditto with the magic with these. Switched from one measure name, with separate numeric values to identify the caller, to different measure names for each caller and XPIProvider.runPhase as the numeric value. I suppose I could make runPhase a string instead...
Attachment #770975 - Attachment is obsolete: true
Attachment #784823 - Flags: review?
Attachment #784823 - Flags: review?(vdjeric)
Attachment #784823 - Flags: review?(bmcbride)
Attachment #784823 - Flags: review?
Comment on attachment 784807 [details] [diff] [review] asyncLoad v2: Load the XPI JSON database asynchronously, address review comments Review of attachment 784807 [details] [diff] [review]: ----------------------------------------------------------------- (In reply to :Irving Reid (On vacation, responding sporadically) from comment #92) > Turns out it's not too hard to make interdiffs with hg pbranch - I should > blog my technique... Please do - that was *so* useful :) ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +165,5 @@ > + */ > +function safeCallback(aCallback) { > + return function(aParameter) { > + try { > + aCallback(aParameter); Make this more generic - use rest parameters. ie: return function(...aArgs) { try { aCallback.apply(null, aArgs); (Sadly, using the spread operator in calls isn't implemented yet - bug 762363)
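A hedged completion of that rest-parameter suggestion (the hunk above is truncated, and the logging call here is an assumption):

function safeCallback(aCallback) {
  return function(...aArgs) {
    try {
      aCallback.apply(null, aArgs);
    }
    catch (e) {
      WARN("XPI database callback failed", e);
    }
  };
}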
Attachment #784807 - Flags: review?(bmcbride) → review+
Attachment #784810 - Attachment is obsolete: true
Comment on attachment 784811 [details] [diff] [review] retrySave v3 - If the JSON file is not readable at startup, try to save the new DB anyway - review comments Review of attachment 784811 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/test/xpcshell/head_addons.js @@ +345,5 @@ > } > > +// Record the error (if any) from trying to save the XPI > +// database at shutdown time > +let xpi_save_error = null; Nit: Naming convention for this is gXPISaveError
Attachment #784811 - Flags: review?(bmcbride) → review+
Attachment #784812 - Attachment is obsolete: true
> > - I don't think we're consistently reporting the update reason when there > > are multiple reasons to update: sometimes we only report the latest reason > > (line 3174) and sometimes we report an earlier reason (line 3170). We should > > be consistent about this.. either only report the earliest/latest reason or > > report all reasons in a simpleMeasures array (my preference) > > OK. I made it an object, rather than an array of strings - do you prefer one > or the other? - It's a bit odd to have an object whose property values are always "1"; I think an array makes more sense - We've also added what looks like ~20 different new simple measures with this patch. Can we group them for easier reading? For example: - simpleMeasurements.XPIDB_startup_errors = ["badJSON", "schemaMismatch"]; - simpleMeasurements.XPIDB_startup_timings = {"rebuildBadJSON_MS": 38, "XPIDB_syncRead_MS": 72 }; - simpleMeasurements.XPIDB_init_info = {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 }; - Nit: Can you add a new helper function "recordDuration" that accepts a timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure", instead of doing the math in each of the "recordSimpleMeasure" calls? > // Checks if all expected events have been seen and if so calls the callback > function check_test_completed(aArgs) { > if (!gNext) >- return; >+ return undefined; Does check_test_completed ever return anything other than undefined?
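A hedged sketch of the recordDuration helper being asked for here, alongside the existing recordSimpleMeasure (the method name and its placement on AddonManagerPrivate are assumptions), plus a call site:

recordDuration: function AMP_recordDuration(aName, aStartTime) {
  this.recordSimpleMeasure(aName, Date.now() - aStartTime);
},

// At a call site (the measure name is invented for the example):
let parseStart = Date.now();
// ... the timed work ...
AddonManagerPrivate.recordDuration("XPIDB_parseDB_MS", parseStart);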
Attachment #784823 - Flags: review?(vdjeric)
Comment on attachment 760450 [details] [diff] [review] JSON v5: Save and restore XPI database in JSON format, response to first review https://hg.mozilla.org/integration/fx-team/rev/b3afaf3b4f3a
Attachment #760450 - Flags: checkin+
Comment on attachment 763764 [details] [diff] [review] UPGRADE V4 - Upgrade sqlite and RDF databases to JSON, incorporating reviews https://hg.mozilla.org/integration/fx-team/rev/5861a7f63f25
Attachment #763764 - Flags: checkin+
Comment on attachment 763958 [details] [diff] [review] asyncSave v2 - Save XPI Database JSON asynchronously, updated after review https://hg.mozilla.org/integration/fx-team/rev/ee904d911f2d
Attachment #763958 - Flags: checkin+
Comment on attachment 764935 [details] [diff] [review] schemaVersion v1: Only upgrade from sqlite when the schema version preference says to https://hg.mozilla.org/integration/fx-team/rev/f3cf78d7e62d
Attachment #764935 - Flags: checkin+
Attached patch asyncLoad as landed (deleted) — Splinter Review
Attachment #784807 - Attachment is obsolete: true
Attachment #787843 - Flags: review+
Attachment #787843 - Flags: checkin+
Attached patch retrySave as landed (obsolete) (deleted) — Splinter Review
Carrying forward Unfocused's r+ from https://hg.mozilla.org/integration/fx-team/rev/607b35c777f0 with nit fixed
Attachment #784811 - Attachment is obsolete: true
Attachment #787844 - Flags: review+
Attachment #787844 - Flags: checkin+
(In reply to Vladan Djeric (:vladan) from comment #98) > - It's a bit odd to have an object whose property values are always "1", I > think an array makes more sense Oh good, I was starting to wonder if I was the only one that *hates* seeing objects used like this :) But I'd much prefer a Set (API is far more suited to this), passed to telemetry as an array. And as a bonus, that means you don't need a separate updateDatabase variable (just check the size of updateReasons). > - We've also added what looks like ~20 different new simple measures with > this patch. Can we group them for easier reading? For example: > - simpleMeasurements.XPIDB_startup_errors = > ["badJSON", "schemaMismatch"]; +1 > - simpleMeasurements.XPIDB_startup_timings = > {"rebuildBadJSON_MS": 38, "XPIDB_syncRead_MS": 72 }; -1 - this makes it hard to query/filter. > - simpleMeasurements.XPIDB_init_info = > {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 }; The values for these can only be 1, and there can only be one of them. So it should be more like: simpleMeasurements.XPIDB_lateOpenReason = "byType"; > - Nit: Can you add a new helper function "recordDuration" that accepts a > timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure", > instead of doing the math in each of the "recordSimpleMeasure" calls? +1, but should use TelemetryStopwatch.jsm for this.
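A hedged sketch of the Set-based bookkeeping suggested above (the reason strings and the second condition are invented for illustration):

let updateReasons = new Set();
if (aAppChanged) {
  updateReasons.add("appChanged");
}
if (installStateChanged) {   // hypothetical flag, for the example only
  updateReasons.add("directoryState");
}

let updateDatabase = updateReasons.size > 0;
AddonManagerPrivate.recordSimpleMeasure("XPIDB_updateReasons", [...updateReasons]);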
(In reply to :Irving Reid from comment #95) > > - Does the call to Ci.nsIAppStartup trigger initialization of the service? > > If so, how expensive is it? > > It looks like it does a fair bit of work Do you mean the component initialization, or just getStartupInfo()? Because that component is guaranteed to already be initialized on startup (before profile-do-change even).
(In reply to Blair McBride [:Unfocused] from comment #105) > > - We've also added what looks like ~20 different new simple measures with > > this patch. Can we group them for easier reading? For example: > > - simpleMeasurements.XPIDB_startup_errors = > > ["badJSON", "schemaMismatch"]; > > +1 Or, if there can only be one error, then: simpleMeasurements.XPIDB_startupErrors = "badJSON";
(In reply to Blair McBride [:Unfocused] from comment #105) > > - simpleMeasurements.XPIDB_init_info = > > {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 }; > > The values for these can only be 1, and there can only be one of them. So it > should be more like: > simpleMeasurements.XPIDB_lateOpenReason = "byType"; Er, kindly ignore the part where I completely forgot about runPhase :\ (Yay bugspam!)
Comment on attachment 784823 [details] [diff] [review] Telemetry v2: measures for XPI Database load/saves + review comments Review of attachment 784823 [details] [diff] [review]: ----------------------------------------------------------------- > Switched from one measure name, with separate numeric values to identify the > caller, to different measure names for each caller and XPIProvider.runPhase > as the numeric value. I suppose I could make runPhase a string instead... Yea, I think I'd prefer a string - otherwise we'll have to keep looking up the numbers, and it'll get complicated when we want to add a runPhase between 1 and 2. ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ +3160,5 @@ > // Import the website installation permissions if the application has changed > if (aAppChanged !== false) > this.importPermissions(); > > + // Keep track of whether and why we need to open and update the database at Uber nit: Trailing whitespace. @@ +3218,5 @@ > + telemetry.getHistogramById("CHECK_ADDONS_MODIFIED_MS").add(Date.now() - telemetryCaptureTime); > + > + // If the install directory state has changed then we must update the database > + let cache = Prefs.getCharPref(PREF_INSTALL_CACHE, null); > + if (cache != JSON.stringify(state)) { Hm, this is a trade-off between getting accurate telemetry and doing more work on startup (serializing |state|). Thankfully it's not much, but it's good to be aware of it. ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +727,5 @@ > } > > LOG("Starting async load of XPI database " + this.jsonFile.path); > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_async_load", XPIProvider.runPhase); > + // XXX use XPIProvider.runPhase for late sync loads too Leftover todo? @@ +1025,5 @@ > // finish cleaning up, and then call back > this.flush() > .then(null, error => { > ERROR("Flush of XPI database failed", error); > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_flush_failed", 1); Nit: XPIDB_shutdownFlush_failed
Attachment #784823 - Flags: review?(bmcbride) → review-
Sorry, backed out for xpcshell perma-oranges: TEST-UNEXPECTED-FAIL | /builds/slave/talos-slave/test/build/tests/xpcshell/tests/toolkit/mozapps/extensions/test/xpcshell-unpack/test_bug619730.js | Test timed out TEST-UNEXPECTED-FAIL | /builds/slave/talos-slave/test/build/tests/xpcshell/tests/toolkit/mozapps/extensions/test/xpcshell-unpack/test_bug619730.js | test failed (with xpcshell return code: -9), see following log: Return code: 256 TEST-UNEXPECTED-FAIL | C:\slave\test\build\tests\xpcshell\tests\toolkit\mozapps\extensions\test\xpcshell-unpack\test_update_strictcompat.js | Test timed out TEST-UNEXPECTED-FAIL | C:\slave\test\build\tests\xpcshell\tests\toolkit\mozapps\extensions\test\xpcshell-unpack\test_update_strictcompat.js | test failed (with xpcshell return code: 1), see following log: TEST-UNEXPECTED-FAIL | C:\slave\test\build\tests\xpcshell\tests\toolkit\mozapps\extensions\test\xpcshell\test_update_strictcompat.js | Test timed out TEST-UNEXPECTED-FAIL | C:\slave\test\build\tests\xpcshell\tests\toolkit\mozapps\extensions\test\xpcshell\test_update_strictcompat.js | test failed (with xpcshell return code: 1), see following log: Return code: 1 https://hg.mozilla.org/integration/fx-team/rev/960ecd36a714 https://hg.mozilla.org/integration/fx-team/rev/bc2cb7f532f2 https://hg.mozilla.org/integration/fx-team/rev/a64c1ad76ba1 https://hg.mozilla.org/integration/fx-team/rev/59a3d59d9fb7 https://hg.mozilla.org/integration/fx-team/rev/2467fe99d9b0 https://hg.mozilla.org/integration/fx-team/rev/917f586dd142
Carrying forward r=Unfocused, Re-landed with fixes to the broken tests, rs=taras, (and re-landed again, with the reviewer comment fixed :-( https://hg.mozilla.org/integration/fx-team/rev/d046391eff9b https://hg.mozilla.org/integration/fx-team/rev/acba57ca53b8 https://hg.mozilla.org/integration/fx-team/rev/cc533b0c791d https://hg.mozilla.org/integration/fx-team/rev/cccd4578a3c6 https://hg.mozilla.org/integration/fx-team/rev/2e31fb22f922 https://hg.mozilla.org/integration/fx-team/rev/61e0fb09a73c The difference from the first landing is: diff --git a/toolkit/mozapps/extensions/test/xpcshell/test_bug619730.js b/toolkit/mozapps/extensions/test/xpcshell/test_bug619730.js --- a/toolkit/mozapps/extensions/test/xpcshell/test_bug619730.js +++ b/toolkit/mozapps/extensions/test/xpcshell/test_bug619730.js @@ -9,17 +9,23 @@ const Cu = Components.utils; Cu.import("resource://testing-common/httpd.js"); var gTestserver = new HttpServer(); gTestserver.start(-1); gPort = gTestserver.identity.primaryPort; mapFile("/data/test_bug619730.xml", gTestserver); -function load_blocklist(file) { +function load_blocklist(file, aCallback) { + Services.obs.addObserver(function() { + Services.obs.removeObserver(arguments.callee, "blocklist-updated"); + + do_execute_soon(aCallback); + }, "blocklist-updated", false); + Services.prefs.setCharPref("extensions.blocklist.url", "http://localhost:" + gPort + "/data/" + file); var blocklist = Cc["@mozilla.org/extensions/blocklist;1"]. getService(Ci.nsITimerCallback); blocklist.notify(null); } var gSawGFX = false; @@ -44,14 +50,15 @@ function run_test() { do_check_eq(aSubject.getAttribute("testattr"), "FOO"); do_check_eq(aSubject.childNodes.length, 3); gSawTest = true; }, "blocklist-data-testItems", false); Services.obs.addObserver(function(aSubject, aTopic, aData) { do_check_true(gSawGFX); do_check_true(gSawTest); - - gTestserver.stop(do_test_finished); }, "blocklist-data-fooItems", false); - load_blocklist("test_bug619730.xml"); + // Need to wait for the blocklist to load; Bad Things happen if the test harness + // shuts down AddonManager before the blocklist service is done telling it about + // changes + load_blocklist("test_bug619730.xml", () => gTestserver.stop(do_test_finished)); } diff --git a/toolkit/mozapps/extensions/test/xpcshell/test_update_strictcompat.js b/toolkit/mozapps/extensions/test/xpcshell/test_update_strictcompat.js --- a/toolkit/mozapps/extensions/test/xpcshell/test_update_strictcompat.js +++ b/toolkit/mozapps/extensions/test/xpcshell/test_update_strictcompat.js @@ -390,17 +390,17 @@ function continue_test_6(install) { prepare_test({ "addon1@tests.mozilla.org": [ "onInstalling" ] }, [ "onInstallStarted", "onInstallEnded", - ], check_test_6); + ], callback_soon(check_test_6)); } function check_test_6(install) { do_check_eq(install.existingAddon.pendingUpgrade.install, install); restartManager(); AddonManager.getAddonByID("addon1@tests.mozilla.org", function(a1) { do_check_neq(a1, null);
Attachment #787844 - Attachment is obsolete: true
Attachment #788583 - Flags: review+
Attachment #788583 - Flags: checkin+
Irving, can this be marked as resolved-fixed now?
Flags: needinfo?(irving)
(In reply to Jared Wein [:jaws] from comment #113) > Irving, can this be marked as resolved-fixed now? Still have one more patch (Telemetry) to update and get landed. Real Soon Now...
Flags: needinfo?(irving)
Attached patch Telemetry v3: more review comments (obsolete) (deleted) — Splinter Review
(In reply to Vladan Djeric (:vladan) from comment #98) > - It's a bit odd to have an object whose property values are always "1", I > think an array makes more sense Changed to an array of strings. > - We've also added what looks like ~20 different new simple measures with > this patch. Can we group them for easier reading? For example: > - simpleMeasurements.XPIDB_startup_errors = > ["badJSON", "schemaMismatch"]; There's only one error per startup, so XPIDB_startupError = "badJSON" or whatever > - simpleMeasurements.XPIDB_startup_timings = > {"rebuildBadJSON_MS": 38, "XPIDB_syncRead_MS": 72 }; Based on Blair's comment, left these as is > - simpleMeasurements.XPIDB_init_info = > {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 }; Again, there will only ever be one per ping, and I wanted to keep the ability to pass back the startup phase as the value > - Nit: Can you add a new helper function "recordDuration" that accepts a > timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure", > instead of doing the math in each of the "recordSimpleMeasure" calls? Done. I went back and forth about the approach I took (of returning a mini-object with a "done()" method) and just returning a fully closed up function - the function approach creates fewer JS objects, but doesn't look quite as natural for API consumers - it would look something like this: (definition) startTimer: function(aName) { let startTime = Date.now(); return () => this.recordSimpleMeasure(aName, Date.now() - startTime); }, (usage) let readTimeRecorder = AMI.startTimer("XPIDB_timer"); ... do interesting stuff readTimeRecorder(); If I could come up with a great naming convention for the variable holding the timer function I'd feel more comfortable, but as is it feels a bit unnatural to my object-addled brain. As an alternative, the measure name parameter could move from the startTimer() function to the timeRecorder, and you could do: let durationRecorder = blah.startTimer(); ... durationRecorder("measure1"); ... durationRecorder("measure2"); but I'm not sure anyone actually needs that... > > // Checks if all expected events have been seen and if so calls the callback > > function check_test_completed(aArgs) { > > if (!gNext) > >- return; > >+ return undefined; > > Does check_test_completed every return anything other than undefined? Yes, some of the tests fail if the "we really are complete" case doesn't return the result of calling the callback. (In reply to Blair McBride [:Unfocused] from comment #105) > (In reply to Vladan Djeric (:vladan) from comment #98) > > - It's a bit odd to have an object whose property values are always "1", I > > think an array makes more sense > > Oh good, I was starting to wonder if I was the only one that *hates* seeing > objects used like this :) > > But I'd much prefer a Set (API is far more suited to this), passed to > telemetry as an array. > > And as a bonus, that means you don't need a separate updateDatabase variable > (just check the size of updateReasons). I could kind of smell it while I was writing it ;-> Set is overkill for this case, I really only need Array.push() / Array.length. ... > > - Nit: Can you add a new helper function "recordDuration" that accepts a > > timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure", > > instead of doing the math in each of the "recordSimpleMeasure" calls? > > +1, but should use TelemetryStopwatch.jsm for this. TelemetryStopwatch only supports histograms, not simple measurements. 
This would be a good feature for 852974. (In reply to Blair McBride [:Unfocused] from comment #109) > Comment on attachment 784823 [details] [diff] [review] > Telemetry v2: measures for XPI Database load/saves + review comments > > Yea, I think I'd prefer a string - otherwise we'll have to keep looking up > the numbers, and it'll get complicated when we want to add a runPhase > between 1 and 2. Strung. > ::: toolkit/mozapps/extensions/XPIProvider.jsm > @@ +3160,5 @@ > > + // Keep track of whether and why we need to open and update the database at > > Uber nit: Trailing whitespace. Killed that one and a few others. ... > @@ +1025,5 @@ > > // finish cleaning up, and then call back > > this.flush() > > .then(null, error => { > > ERROR("Flush of XPI database failed", error); > > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_flush_failed", 1); > > Nit: XPIDB_shutdownFlush_failed Agreed.
Attachment #784823 - Attachment is obsolete: true
Attachment #790813 - Flags: review?(vdjeric)
Attachment #790813 - Flags: review?(bmcbride)
Attached patch Telemetry v2 => v3 interdiff (obsolete) (deleted) — Splinter Review
Attachment #790819 - Attachment is patch: true
Attachment #790819 - Attachment mime type: text/x-patch → text/plain
Comment on attachment 790813 [details] [diff] [review] Telemetry v3: more review comments >+ // Start a timer, record a simple measure of the time interval when >+ // timer.done() is called >+ simpleTimer: function(name) { >+ let startTime = Date.now(); >+ return { >+ done: () => AddonManagerPrivate.recordSimpleMeasure(name, Date.now() - startTime) >+ }; > } I'd prefer to have the measurement code be in a function, but I'll let the module owner decide
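For reference, a hedged usage sketch of the simpleTimer() helper quoted above (the measure name here is an assumption):

let parseTimer = AddonManagerPrivate.simpleTimer("XPIDB_parseDB_MS");
// ... parse the JSON text and build the in-memory add-on map ...
parseTimer.done();   // records the elapsed milliseconds as a simple measure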
Attachment #790813 - Flags: review?(vdjeric) → review+
(In reply to :Irving Reid from comment #115) > Done. I went back and forth about the approach I took (of returning a > mini-object with a "done()" method) Yea, this will do. Should be temporary anyway, as I filed bug 906524 to make TelemetryStopwatch support simple measurements. > I could kind of smell it while I was writing it ;-> Set is overkill for this > case, I really only need Array.push() / Array.length. Haha, fair enough.
Comment on attachment 790813 [details] [diff] [review] Telemetry v3: more review comments Review of attachment 790813 [details] [diff] [review]: ----------------------------------------------------------------- One last issue I just noticed... ::: toolkit/mozapps/extensions/AddonManager.jsm @@ +2134,5 @@ > + }, > + > + // Start a timer, record a simple measure of the time interval when > + // timer.done() is called > + simpleTimer: function(name) { Nit: aName ::: toolkit/mozapps/extensions/XPIProviderUtils.js @@ +651,2 @@ > } > + parseTimer.done(); This is inconsistently recorded. Above, if we rebuild due to bad JSON, it's never recorded (early return). But here, if we rebuild due to other reasons, it does get recorded.
Attachment #790813 - Flags: review?(bmcbride) → review-
Took advantage of the opportunity to remove a few unused const definitions. (In reply to Blair McBride [:Unfocused] from comment #119) > Comment on attachment 790813 [details] [diff] [review] > Telemetry v3: more review comments > > One last issue I just noticed... > ... > ::: toolkit/mozapps/extensions/XPIProviderUtils.js > @@ +651,2 @@ > > } > > + parseTimer.done(); > > This is inconsistently recorded. Above, if we rebuild due to bad JSON, it's > never recorded (early return). But here, if we rebuild due to other reasons, > it does get recorded. Good catch. I think I was thinking that I wouldn't record the parse time if the parse failed, but I wasn't doing that consistently either. I've changed it to always record the parse time before the DB rebuild if a rebuild is needed, but including both the parse and building the DBAddonInternal objects on the happy path. (In reply to Vladan Djeric (:vladan) from comment #117) > Comment on attachment 790813 [details] [diff] [review] > Telemetry v3: more review comments > > >+ // Start a timer, record a simple measure of the time interval when > >+ // timer.done() is called > >+ simpleTimer: function(name) { > >+ let startTime = Date.now(); > >+ return { > >+ done: () => AddonManagerPrivate.recordSimpleMeasure(name, Date.now() - startTime) > >+ }; > > } > > I'd prefer to have the measurement code be in a function, but I'll let the > module owner decide Not sure what you mean by that - are you referring to my alternate design of returning a closed up function rather than an object?
Attachment #790813 - Attachment is obsolete: true
Attachment #790819 - Attachment is obsolete: true
Attachment #794065 - Flags: review?(bmcbride)
Depends on: 568739
Comment on attachment 794065 [details] [diff] [review] Telemetry v4: more review comments Review of attachment 794065 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/mozapps/extensions/XPIProvider.jsm @@ -93,5 @@ > const KEY_APP_SYSTEM_LOCAL = "app-system-local"; > const KEY_APP_SYSTEM_SHARE = "app-system-share"; > const KEY_APP_SYSTEM_USER = "app-system-user"; > > -const UNKNOWN_XPCOM_ABI = "unknownABI"; *facepalm* Filed bug 909159.
Attachment #794065 - Flags: review?(bmcbride) → review+
Whiteboard: [Snappy] c=startup_addons u= p= [keep-open] → [Snappy] c=startup_addons u= p=
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla26
Depends on: 908304
Depends on: 911820
No longer blocks: 903093
Depends on: 903093
Irving, how can QA verify this fix? Are there any manual actions required, or are the existing tests enough?
Per comment #69, QA just needs to ensure current functionality hasn't regressed.
Marking as Verified Fixed based on the discussion I had with Tracy on IRC and on the investigations done on Windows, Ubuntu, and Mac OS.
Status: RESOLVED → VERIFIED
Keywords: qawanted
Depends on: 924340
No longer depends on: 924340
Depends on: 929013
QA Contact: twalker