Closed Bug 381795
Opened 17 years ago, closed 17 years ago
places indexes need review
Categories: Firefox :: Bookmarks & History, defect, P1
Status: VERIFIED FIXED
Target Milestone: Firefox 3 beta2
People: Reporter: dietrich, Assigned: dietrich
Attachments (3 files, 5 obsolete files)
(deleted), patch | moco: review+, mtschrep: approvalM9+ | Details | Diff | Splinter Review
(deleted), patch | mak: review+ | Details | Diff | Splinter Review
(deleted), patch | moco: review+ | Details | Diff | Splinter Review
In bug 381378, export of a large bookmarks file was reduced from 180+ seconds to 6 seconds by properly constructing an index on an annotations table.
We should do a comprehensive review of all indexes in places.sqlite and make sure that they are properly optimized for the most common queries run against those tables.
Unused or improperly constructed indexes can do more harm than good, so we should remove indexes that don't optimize for a specific use case.
Assignee
Updated•17 years ago
Assignee: nobody → dietrich
Flags: blocking-firefox3?
Target Milestone: --- → Firefox 3 alpha5
Updated•17 years ago
Flags: blocking-firefox3? → blocking-firefox3+
Updated•17 years ago
Target Milestone: Firefox 3 alpha5 → Firefox 3 alpha6
Assignee
Comment 1•17 years ago
retargeting bugs that don't meet the alpha release-blocker criteria at http://wiki.mozilla.org/Firefox3/Schedule.
Target Milestone: Firefox 3 alpha6 → Firefox 3 beta1
Assignee
Updated•17 years ago
Target Milestone: Firefox 3 M7 → Firefox 3 M8
Assignee
Updated•17 years ago
Target Milestone: Firefox 3 M8 → Firefox 3 M9
Updated•17 years ago
Target Milestone: Firefox 3 M9 → Firefox 3 M10
Comment 2•17 years ago
I have some doubts about some of the indexes that I'd like to discuss with you. Note that these are my own thoughts, and they could be wrong if I'm arguing from mistaken assumptions.
---
In moz_items_annos we have:
moz_items_annos_attributesindex ON (item_id, anno_attribute_id)
moz_annos_item_idindex ON (item_id)
I know that they are different things, but are we sure that SQLite will not use the first index even when looking up item_id alone? I got this doubt from reading this reply: http://www.mail-archive.com/sqlite-users@sqlite.org/msg26673.html. If this is true, the second index could be dropped, gaining some speed in VACUUM (it has to recreate all indexes, so fewer indexes means a faster VACUUM) and some reduction in db size. Some tests should be done on this.
---
In moz_places we have an index on url. This is quite strange, since we don't want duplicate uris in places (is this right?), so we end up with an index that duplicates all the data from the url column (the only thing gained here is ordering)... a search in such an index is probably about as fast as a search on the column itself. The only function that benefits from this index is removeDuplicateURIs, run on import from other browsers (maybe executed only once).
This could also have other implications: since the whole column is duplicated, and it is a text column with many characters (some urls are very long), maintenance on the table gets worse. VACUUM has to recreate the index, which can be quite large, so VACUUM is slower; and when deleting a record from places we also delete the url from the index, causing growing fragmentation in the db.
If this is true, it would be better to avoid indexes on a column that will contain N distinct values over M records, where N is about the same size as M and we don't need the data ordered ASC or DESC (a SQLite index keeps data ordered ASC).
Such an index could instead be useful to avoid url duplication, but then why not create a UNIQUE index on that column instead of a non-unique one?
I have tried SELECT * FROM moz_places WHERE url LIKE "%domain%"; and I get about the same timings with and without the index... so some tests could be done to compare query and VACUUM speed.
A similar problem exists for moz_places_titleindex: it should really be a full-text index (I know that's still not implemented). A plain index doesn't feel useful here, since there should be few urls with equal titles and we don't need titles ordered.
---
In moz_favicons there is a UNIQUE index on url. Even if this is correct, it duplicates all the urls in the index, slowing down VACUUM and causing fragmentation. But there is no other way to check for uniqueness. Maybe a short hash function could help: make the hash UNIQUE instead (it should be shorter than the average length of common urls: 16, 24, or 32 chars max).
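A purely hypothetical sketch of that hash idea (the url_hash column and index name are invented for illustration; comments 20-21 below weigh the collision trade-off). Note that making the hash itself UNIQUE would reject distinct urls that happen to collide, which is why the lookup here still confirms against the full url:
ALTER TABLE moz_favicons ADD COLUMN url_hash INTEGER;
CREATE INDEX moz_favicons_urlhashindex ON moz_favicons (url_hash);
-- look up by the short hash first, then confirm against the full url:
SELECT id FROM moz_favicons WHERE url_hash = ?1 AND url = ?2;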
Comment 3•17 years ago
EXPLAIN QUERY PLAN can help with choosing indexes. For example, running:
explain query plan select item_id from moz_items_annos where item_id=5
confirmed that SQLite uses moz_items_annos_attributesindex instead of moz_annos_item_idindex, so moz_annos_item_idindex can (and should) be dropped.
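For reference, a minimal sketch of that check and the cleanup it suggests (the exact EXPLAIN QUERY PLAN output format depends on the SQLite version):
EXPLAIN QUERY PLAN SELECT item_id FROM moz_items_annos WHERE item_id = 5;
-- if the reported plan uses moz_items_annos_attributesindex, the single-column
-- index is redundant and can be dropped:
DROP INDEX IF EXISTS moz_annos_item_idindex;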
Comment 4•17 years ago
I also have a doubt about the fk index on moz_bookmarks, since there will be few duplicated fk values in the table and there is no need for queries like "WHERE fk > N". I tried the query with and without the index, and I get about the same timings.
Also, running:
explain query plan select * from moz_places where title LIKE "%domain%"
shows that SQLite is NOT using the moz_places_titleindex index.
Comment 5•17 years ago
Having an index on url is instead useful for queries like:
SELECT * FROM moz_places WHERE url = "http://www.domain.com/path"
With the index it takes about 0.30ms; without it, about 60ms. That is because the index is ordered.
That query is run every time a new page is inserted into places, since we have to check whether it's already there (a UNIQUE index on url could instead be used with REPLACE instead of INSERT).
But modifying the query like this:
SELECT * FROM moz_places h WHERE h.rev_host = "moc.niamod.www" AND h.url = "http://www.domain.com/path"
brings it down to 0.30ms even without the url index, since it restricts the query to a single domain.
So the index on url could be avoided by modifying the queries that check on url to also check rev_host. I don't know, however, whether the time to compute rev_host is a concern.
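A minimal way to sanity-check that idea (assuming, as comment 10 below notes, that rev_host is already covered by an index):
EXPLAIN QUERY PLAN
SELECT * FROM moz_places h
WHERE h.rev_host = "moc.niamod.www" AND h.url = "http://www.domain.com/path";
-- if the plan reports the index on rev_host, the lookup stays narrow even
-- without an index on url, which matches the timings above.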
Comment 6•17 years ago
First off thanks a *ton* for your help here..
(In reply to comment #4)
> i have a doubt also on the fk index on moz_bookmarks, since there will be a few
> duplicated fk in the table and there is no need to do queries like "where fk >
> N". tried to query with and without index, and i get about same timings
>
>
From my places.sqlite file in bug 332748, the bookmarks + index is a minor portion of overall db size (1% or so).
> also doing:
> explain query plan select * from moz_places where title LIKE "%domain%"
> show that sqlite is NOT using index moz_places_titleindex
>
Most databases, including SQLite (http://www.sqlite.org/optoverview.html), don't use indexes for LIKE queries with a wildcard at the front, because there is no fixed prefix to seek on. Change your query to LIKE "domain%" and you'll see it uses the index. Do we use this kind of query in practice?
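For context, a quick way to compare the two forms. Note that SQLite's LIKE optimization also requires the case_sensitive_like pragma to be on (or a NOCASE-collated column), per the optimizer overview linked above, which may be why comment 10 below still sees no index use even for the prefix form:
EXPLAIN QUERY PLAN SELECT * FROM moz_places WHERE title LIKE "%domain%";
-- leading wildcard: always a full table scan
PRAGMA case_sensitive_like = 1;
EXPLAIN QUERY PLAN SELECT * FROM moz_places WHERE title LIKE "domain%";
-- prefix form: can use an index on title, subject to the conditions above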
Comment 7•17 years ago
> Do we use this kind of query in practice?
yes, we do use queries with the wildcard in front for our url bar autocomplete query.
Type in "moz" in the URL bar and the query we'll eventually execute will have a where clause of the form:
... h.title LIKE '%moz%' ESCAPE '/' OR h.url LIKE '%moz%' ESCAPE '/' ...
Comment 8•17 years ago
As some more data: dropping the URL index from my 49MB places file takes it down to 32MB (after vacuum). Dropping the index + setting all urls to "" takes me down to 21MB (after vacuum).
I'm also noticing some crazy big entries in there, like 17000 and 1000+ characters. I'm wondering if it makes sense to cap the storage to the first 100-200 characters of every URL. It's not clear to me how useful those extra characters on the end are; a quick pass for me shows big bugzilla queries, etc., where I don't think that extra data is super-useful (but I could be wrong).
But it seems like capping the size of this table overall + the size of URLs might give us slightly better behavior. We could look at putting a hard cap on the main db which fires for the URL bar, and then add a "search everywhere" button or equivalent to go pick up data from further back in a separate db (that gets loaded only on demand)... This basic problem is an archiving one that partitioning solves well. It's also likely that the urls would respond well to domain-specific compression (http://72.14.253.104/search?q=cache:9XtHOXgONLUJ:anres.cpe.ku.ac.th/pub/url-compression-ncsec.pdf+url+compression&hl=en&ct=clnk&cd=2&gl=us&client=firefox-a), which would work if we basically just loaded the table in memory and did searching through the compressed form...
Comment 9•17 years ago
(In reply to comment #8)
> But it seems like capping the size of this table overall + size of URL's might
> give us slightly better behavior. We could look at putting a hard cap on the
> main db which fires for the URL bar and then add a "search everywhere" button
> or equivalent to go pickup data from further back in a separate db (that gets
> loaded only on demand)... This basic problem is an archiving one that
> partitioning solves well. Also likely that the url's would respond well to
> domain-specific compression
> (http://72.14.253.104/search?q=cache:9XtHOXgONLUJ:anres.cpe.ku.ac.th/pub/url-compression-ncsec.pdf+url+compression&hl=en&ct=clnk&cd=2&gl=us&client=firefox-a)
> which would work if we basically just loaded the table in-memory and did
> searching through the compressed form...
I'd add that we could partition both by time and by moving the extended urls to a separate table/db. Guessing that horks us on link coloring, though?
Comment 10•17 years ago
My final thoughts are:
moz_annos_item_idindex should be dropped, since the db uses moz_items_annos_attributesindex.
moz_places_urlindex is useful in queries like WHERE url = "something", and there are some of those, so it cannot be dropped; but it could become a unique index (that would avoid having to check whether a url is already in the table, since you could use REPLACE instead of INSERT and get the id out of that). This would require changes to the InternalAddVisit code (if I'm not wrong). Also, an md5(url) could be shorter than the url to index and could be used to test for uniqueness; since there is already an index on rev_host, the md5 could be computed only on the path to limit the probability of collision...
moz_places_titleindex is NOT used in queries like WHERE title LIKE "%something%", and I have checked with explain query plan that it is NOT used in queries like WHERE title LIKE "something%" either. I cannot find a query where it is used at the moment; could you try with your db and explain query plan to see whether this happens only on my side?
moz_bookmarks_itemindex is useful in queries like WHERE fk="something", but could maybe be improved by changing it to an index on (fk, type), since I have seen a couple of queries where the results are joined against fk and filtered against type. SQLite will use that index also for queries that do not involve type (you can always check with "explain query plan YOUR_QUERY").
moz_bookmarks_parentindex could probably be improved to an index on (parent, position) to speed up queries like "parent = ?1 AND position > ?2 AND position < ?3".
moz_historyvisits_pageindex could be improved to an index on (place_id, visit_type), since visits are often filtered against visit_type.
Still, these thoughts need further investigation on your side; I'll try to avoid more clutter here, thank you :) (A rough SQL sketch of these proposals follows.)
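A rough sketch of those proposals in SQL (index and column names as discussed above; this is not the final patch, and the moz_places_urlindex / md5 question is left open):
DROP INDEX IF EXISTS moz_annos_item_idindex;
DROP INDEX IF EXISTS moz_places_titleindex;
DROP INDEX IF EXISTS moz_bookmarks_itemindex;
CREATE INDEX moz_bookmarks_itemindex ON moz_bookmarks (fk, type);
DROP INDEX IF EXISTS moz_bookmarks_parentindex;
CREATE INDEX moz_bookmarks_parentindex ON moz_bookmarks (parent, position);
DROP INDEX IF EXISTS moz_historyvisits_pageindex;
CREATE INDEX moz_historyvisits_pageindex ON moz_historyvisits (place_id, visit_type);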
Assignee
Comment 11•17 years ago
(In reply to comment #9)
> I'd add that we could partition by both time and move the extended urls to a
> separate table/db. Guessing that horks us on link coloring tho?
>
Spun off the archiving question to bug 401899.
Archiving to a separate db adds complexity and presents technical issues such as link coloring. I think there's lots of room to optimize our single-db approach, so we should exhaust those options first.
(In reply to comment #8)
> I'm also noticing some crazy big entries in there - like 17000, 1000+. I'm
> wondering if it makes sense to cap the storage to the first 100-200 characters
> of every URL. Not clear to me how useful those extra characters are on the end
> - a quick pass for me shows big bugzilla queries, etc where I don't think that
> extra data is super-useful (but could be wrong).
maybe i'm just sleep-deprived, but i don't understand this at all - you'd no longer be able to get back to the original URL... i've got to be missing something here.
Comment 12•17 years ago
> > I'm also noticing some crazy big entries in there - like 17000, 1000+. I'm
> > wondering if it makes sense to cap the storage to the first 100-200 characters
> > of every URL. Not clear to me how useful those extra characters are on the end
> > - a quick pass for me shows big bugzilla queries, etc where I don't think that
> > extra data is super-useful (but could be wrong).
>
> maybe i'm just sleep-deprived, but i don't understand this at all - you'd no
> longer be able to get back to the original URL... i've got to be missing
> something here.
>
Yeah, wasn't thinking straight; I was just trying to optimize for size + url searches, but that obviously horks getting back to the original url. Just surprised at how large they are and what a percentage of the data they represent. So ignore this particular idea.
Comment 13•17 years ago
We could suppress very long URLs completely, or expire them more aggressively than normal URLs.
BTW, are we adding data: URIs to history? We probably shouldn't be.
Assignee
Comment 14•17 years ago
This drops the 2 indexes that we're clearly not using, and fixes a typo.
The other changes recommended in the comments need more analysis and testing.
Attachment #286971 - Flags: review?(sspitzer)
Comment 15•17 years ago
Once we get the indexes, page_size, and expiration work done can we force a db rebuild + full vacuum on nightlies (e.g. like on tomorrow's nightly) to get our testers on similar configs to new b1 users?
Comment 16•17 years ago
note to schrep, we'd have to completely rebuild the db (and not just migrate it in place) in order to get testers on the same page as new b1 users, as the page size and incremental vacuum settings can't be changed after the db has been created. see bug #402076 for details.
Comment 17•17 years ago
Comment on attachment 286971 [details] [diff] [review]
fix v1
r=sspitzer
(thanks again to marco for his excellent help)
My apologies for changing how the autocomplete algorithm uses title but not dropping the title index.
one comment:
doesn't this change imply we've got nightly testers (anyone before the fix for bug #389876, so pre-m7?) that have a moz_places_visitcount index (that does them no good, as it indexes rev_host, not visit_count):
- NS_LITERAL_CSTRING("CREATE INDEX moz_places_visitcount ON moz_places (rev_host)"));
+ NS_LITERAL_CSTRING("CREATE INDEX moz_places_visitcount ON moz_places (visit_count)"));
should we log a spin off bug on that issue (about dropping and recreating that index if we detect it's bad?)
this would be moot if we fix bug #402076
Attachment #286971 - Flags: review?(sspitzer) → review+
Assignee
Updated•17 years ago
Target Milestone: Firefox 3 M10 → Firefox 3 M9
Assignee
Updated•17 years ago
Attachment #286971 - Flags: approvalM9?
Assignee
Updated•17 years ago
Whiteboard: [has patch][needs approval]
Comment 18•17 years ago
Comment on attachment 286971 [details] [diff] [review]
fix v1
a+ for schrep since we are trying to close out the places blockers
Attachment #286971 - Flags: approvalM9? → approvalM9+
Assignee
Comment 19•17 years ago
> doesn't this change imply we've got nightly testers (anyone before the fix for
> bug #389876, so pre-m7?) that have a moz_places_visitcount index (that does
> them no good, as it indexes rev_host, not visit_count):
>
filed bug 402161.
patch checked in. note i'm not closing this, but retargeting to m10 for more analysis of the indexes.
Checking in toolkit/components/places/src/nsAnnotationService.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsAnnotationService.cpp,v <-- nsAnnotationService.cpp
new revision: 1.32; previous revision: 1.31
done
Checking in toolkit/components/places/src/nsNavHistory.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsNavHistory.cpp,v <-- nsNavHistory.cpp
new revision: 1.181; previous revision: 1.180
done
Whiteboard: [has patch][needs approval]
Target Milestone: Firefox 3 M9 → Firefox 3 M10
Comment 20•17 years ago
Sidenote on comment 2:
Using a hash to compare n*10,000 entries, the following numbers of collisions are to be expected:
16-bit hash: 763 * n^2 collisions
24-bit hash: 3.0 * n^2 collisions
32-bit hash: 0.0116 * n^2 collisions
64-bit hash: 0.27E-11 * n^2 collisions
128-bit hash: 0.14E-30 * n^2 collisions
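For reference, these figures match the standard birthday approximation: for N entries and a b-bit hash, the expected number of colliding pairs is roughly N^2 / 2^(b+1). With N = n*10,000, a 32-bit hash gives (n*10^4)^2 / 2^33 ≈ 0.0116 * n^2 collisions, as listed above.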
Comment 21•17 years ago
Using a hash does not appear to be a win on VACUUM time, only on DB size.
I've tested this using CRC32. In my test db most urls are made up of the domain only, and I have about 100,000 entries in moz_places; the win in size is about 3MB on a 24MB DB. This will be bigger with urls that include full paths. With CRC32 the size win can be estimated at about 60 bytes per entry:
size_win_per_entry = mean_size_of_url - 2 * size_of_hash
(the url index stores a second copy of each url; the hash approach instead stores the hash twice, once in a table column and once in its index).
VACUUM time is about the same (20s instead of 21s), and query time (modified to use hash and url) is about the same (0.27ms instead of 0.26ms).
So it does not look like a winning choice if disk space is cheap.
About moz_bookmarks_parentindex: changing it from a (parent) index to a (parent, position) index speeds up the selectChildren query, which moves from 6-7ms to about 3ms. It also speeds up mDBGetChildAt (from 1.5ms to 0.18ms) and should help AdjustIndices.
Is there a way to print out/log all SQL queries (and their execution times) sent by the browser during a common navigation session?
Comment 22•17 years ago
Marco, try setting the following environment variables:
NSPR_LOG_MODULES=mozStorage:5
NSPR_LOG_FILE=log.txt
Note, that will show you queries, but I'm not sure if you are going to get execution time (but the log should have time stamps.)
Assignee
Updated•17 years ago
Priority: -- → P1
Assignee
Comment 23•17 years ago
Changes (a rough SQL sketch follows this list):
- Make the index on moz_places.url unique
- Change moz_places inserts to INSERT OR REPLACE (the OR REPLACE is only ever hit on Places first-run)
- Remove the RemoveDuplicateURIs code
- Make the moz_bookmarks_itemindex a compound index of fk and type
- Make the moz_bookmarks_parentindex a compound index on parent and position
- Only create indexes on first-run, instead of every startup
- If migrating someone up from beta 4, don't re-add the moz_places.title index, as it's unused
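A rough sketch of those schema changes (index names as discussed in the earlier comments; the column list in the insert is illustrative, not the exact statement from the patch):
CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url);
CREATE INDEX moz_bookmarks_itemindex ON moz_bookmarks (fk, type);
CREATE INDEX moz_bookmarks_parentindex ON moz_bookmarks (parent, position);
-- inserts become INSERT OR REPLACE; the OR REPLACE path only fires when the
-- url is already present, i.e. on Places first-run import:
INSERT OR REPLACE INTO moz_places (url, title, rev_host, visit_count, hidden)
VALUES (?1, ?2, ?3, ?4, ?5);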
Tests: (done w/ 8k records)
- Migration from branch startup: Performance here was almost exactly the same with this patch. Removing RemoveDuplicateURIs saved time, but adding the UNIQUE constraint makes inserts a little slower. However, this is still worth taking, as the constraint is easier to manage than RemoveDuplicates (sqlite does the work for us), is less risky than leaving it non-unique, and will scale better in the context of the fix for bug 389789, which will not be required.
- Standard post-migration startup: tests effect of removing unnecessary index creation: nsNavHistory::InitDB takes 25% less time to run.
Notes:
- Creating indexes post-import was easily 5-10% slower with 8k records.
- Making moz_places.url UNIQUE makes inserts slower than having a separate named index that's UNIQUE, by about 5%. Weird.
- Did some testing and confirmed that we can change the synchronous pragma around during program execution. However, also confirmed that it doesn't affect perf at all, probably because it's ignored by our custom async i/o impl.
Todo:
- Confirm the compound index changes are being used by the intended queries
- Profile the compound index changes
- Find a history.dat that results in the 50k or higher range and profile migration w/ these changes
Comment 24•17 years ago
Fine; however, a removeDuplicateURIs function will still be needed when upgrading the current db to the new one with a UNIQUE moz_places url index (it should use the new version of the function, with remapping).
Assignee
Comment 25•17 years ago
(In reply to comment #24)
> fine, however a removeduplicateURIs function will still be needed when
> upgrading the current db to the new with a UNIQUE moz_places (it should use the
> new version of the function, with remapping)
>
Yes, this is true: If we ever decide to dump and recreate moz_places, we'd need to weed out and remap duplicates.
On shutdown, if the old non-unique url index exists then we could drop it, run RemoveDuplicates, and re-create the new unique index. However, this could take a very long time for users with lots of history. Maybe this should be done via a run-once idle timer.
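A rough sketch of that one-time migration (index names as used elsewhere in this bug; the duplicate-removal step is the C++ RemoveDuplicateURIs work and is shown here only as a placeholder comment):
BEGIN TRANSACTION;
DROP INDEX IF EXISTS moz_places_urlindex;
-- RemoveDuplicateURIs: delete duplicate urls and remap visits, bookmarks and
-- annotations to the surviving place id
CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url);
COMMIT;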
Assignee
Comment 26•17 years ago
more changes:
- Don't import hidden history records when upgrading from Fx2. Refer to the comments in bug 401722 about why these are of little value (eg: the primary use-case is inter-frame link coloring). Discussed this with Seth and mconnor, both agreed this is the right thing to do. In my tests, import time was reduced by a percentage correlating to the number of hidden visits in the file. I've seen a range of 30% to 80% hidden visits in the history.dat files that I've tested, so the impact on Places' first run, as well as initial size, is significant.
- Don't do some processing work on history records that we're not going to import because they don't have a URI.
Attachment #288625 - Attachment is obsolete: true
Comment 27•17 years ago
Some changes could be made to nsNavHistory::GetUrlIdFor and nsNavHistory::AddVisit to use the unique index. Calling an "insert or replace" (InternalAddNewVisit) lets you update and get the id without checking whether the entry already exists, so the code could probably be simplified and made faster.
Assignee
Comment 28•17 years ago
This adds index migration after 15 mins idle time. I tested this with a places.sqlite that had 85k moz_places records and 120k moz_historyvisits records, and it took about 7 seconds. However, as previous changes like this have shown, these types of conversions can take minutes on slower boxes. This is why I've added it on idle time instead of startup or shutdown.
There's an issue with the remapping at idle time in that the UI could get out of sync with the database, since this is done mid-run. However, given that the potential for duplicate URIs is so small, I think we're probably safe to do this. For example, in Jay's history.dat there was only a single URL that had 2 duplicates, out of months of history.
Marco, are you able to do a first review of this?
Attachment #288729 - Attachment is obsolete: true
Attachment #288977 - Flags: review?(mak77)
Comment 29•17 years ago
i'm looking at this...
nsNavHistory::CleanUpOnQuit()
+ NS_LITERAL_CSTRING("CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url)"));
NS_ENSURE_SUCCESS(rv, rv);
What happens when, after creating this index on the new table, you try to insert a duplicated url from the old table? You should probably call RemoveDuplicateURIs on the old table first...
I think that removeDuplicateURIs and CREATE UNIQUE INDEX moz_places_url_uniqueindex should go together in an exclusive transaction, so that nothing can access the db during the change.
Also, there is a comment saying that creating the indexes beforehand is faster than creating them after insertion... but CreateLookupIndexes creates the indexes after insertion, claiming that this way the migration is faster... Which is the truth?
can't see anything more atm
Updated•17 years ago
Attachment #288977 - Flags: review?(mak77)
Assignee
Comment 30•17 years ago
(In reply to comment #29)
> i'm looking at this...
>
> nsNavHistory::CleanUpOnQuit()
> + NS_LITERAL_CSTRING("CREATE UNIQUE INDEX moz_places_url_uniqueindex ON
> moz_places (url)"));
> NS_ENSURE_SUCCESS(rv, rv);
>
> what happens when, after creating this index on the new table, you try to
> insert a duplicated url from the old table? you should probably call a
> RemoveDuplicateURIs on the old table before...
fixed
>
>
> I think that removeDuplicateURIs and CREATE UNIQUE INDEX
> moz_places_url_uniqueindex should go together in an exclusive transaction, so
> that nothing can access the db during the change
>
fixed
>
> also, there is a comment saying that creating indexes before is faster than
> creating them after insertion... but CreateLookupIndexes creates index after
> insertion telling that this way the migration is faster... What's the truth?
>
CreateLookupIndexes actually runs before history import, so the problem is in that code comment, which is left over from when this was thought to be true (from the Places-in-Fx2 team). Really, CreateLookupIndexes should probably be removed, and those indexes added when the tables are created.
Attachment #288977 - Attachment is obsolete: true
Assignee
Comment 31•17 years ago
Removed CreateLookupIndexes, consolidates index creation w/ table creation.
Attachment #289252 - Attachment is obsolete: true
Attachment #289253 - Flags: review?(mak77)
Comment 32•17 years ago
Comment on attachment 289253 [details] [diff] [review]
fix
fine for me
Attachment #289253 - Flags: review?(mak77) → review+
Assignee
Comment 33•17 years ago
Checking in toolkit/components/places/src/nsMorkHistoryImporter.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsMorkHistoryImporter.cpp,v <-- nsMorkHistoryImporter.cpp
new revision: 1.13; previous revision: 1.12
done
Checking in toolkit/components/places/src/nsNavBookmarks.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsNavBookmarks.cpp,v <-- nsNavBookmarks.cpp
new revision: 1.130; previous revision: 1.129
done
Checking in toolkit/components/places/src/nsNavHistory.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsNavHistory.cpp,v <-- nsNavHistory.cpp
new revision: 1.194; previous revision: 1.193
done
Checking in toolkit/components/places/src/nsNavHistory.h;
/cvsroot/mozilla/toolkit/components/places/src/nsNavHistory.h,v <-- nsNavHistory.h
new revision: 1.112; previous revision: 1.111
done
Status: NEW → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
Assignee
Comment 34•17 years ago
Attachment #289525 - Flags: review?(sspitzer)
Assignee
Comment 35•17 years ago
Comment on attachment 289525 [details] [diff] [review]
fix seth comments
canceling review for the moment. going to profile the exclusive transaction change, might not be necessary.
Attachment #289525 - Flags: review?(sspitzer)
Assignee
Comment 36•17 years ago
found no tangible perf difference between the transaction types.
Attachment #289525 - Attachment is obsolete: true
Attachment #289545 - Flags: review?(sspitzer)
Comment 37•17 years ago
Comment on attachment 289545 [details] [diff] [review]
fix seth comments v2
r=sspitzer, thanks dietrich.
Attachment #289545 - Flags: review?(sspitzer) → review+
Comment 38•17 years ago
My request for the exclusive transaction was only to make sure that no other software/thread could add non-unique items between removeDuplicateURIs and the creation of the new index, not for perf. But since the lock becomes RESERVED at the first write, the default deferred transaction is fine :)
Good work, the indexes are taking better shape.
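For context, a minimal illustration of why the deferred transaction is enough here (standard SQLite locking behavior; the DELETE is only an illustrative duplicate-removal, not the real remapping code from the patch):
BEGIN;  -- deferred transaction: no lock taken yet
-- the first write escalates the lock to RESERVED, so other connections can
-- still read but can no longer write until COMMIT:
DELETE FROM moz_places WHERE id NOT IN (SELECT MIN(id) FROM moz_places GROUP BY url);
CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url);
COMMIT;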
Assignee
Comment 39•17 years ago
Checking in toolkit/components/places/src/nsMorkHistoryImporter.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsMorkHistoryImporter.cpp,v <-- nsMorkHistoryImporter.cpp
new revision: 1.14; previous revision: 1.13
done
Checking in toolkit/components/places/src/nsNavHistory.cpp;
/cvsroot/mozilla/toolkit/components/places/src/nsNavHistory.cpp,v <-- nsNavHistory.cpp
new revision: 1.195; previous revision: 1.194
done
Updated•17 years ago
Status: RESOLVED → VERIFIED
Comment 40•15 years ago
Bug 451915 - move Firefox/Places bugs to Firefox/Bookmarks and History. Remove all bugspam from this move by filtering for the string "places-to-b-and-h".
In Thunderbird 3.0b, you do that as follows:
Tools | Message Filters
Make sure the correct account is selected. Click "New"
Conditions: Body contains places-to-b-and-h
Change the action to "Delete Message".
Select "Manually Run" from the dropdown at the top.
Click OK.
Select the filter in the list, make sure "Inbox" is selected at the bottom, and click "Run Now". This should delete all the bugspam. You can then delete the filter.
Gerv
Component: Places → Bookmarks & History
QA Contact: places → bookmarks