Vacuum all SQLite databases
Categories
(Toolkit :: Storage, enhancement, P5)
Tracking
()
People
(Reporter: xavier.combelle, Unassigned)
Details
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0
Steps to reproduce:
Firefox, Thunderbird, and my whole computer were getting slower and slower.
Actual results:
I ran a script that vacuums every SQLite database in my home directory (as far as I know, Firefox and Thunderbird are the main users of SQLite).
My computer now reacts instantly, as if it were a fresh install.
I have a limited Linux system with only 4 GB of RAM running Xfce, so such slow behavior might not happen on a computer with much more RAM.
Some way to vacuum all of Firefox's SQLite databases, including the extensions' ones, looks like a good idea, beyond vacuuming places.sqlite, which is already done as far as I understand.
Reporter
Comment 1•5 years ago
I made exactly the same report for Thunderbird (https://bugzilla.mozilla.org/show_bug.cgi?id=1574093) because I feel the solution applies to both products.
Hi Xavier,
Is there any chance you can upload the script that you are using to the bug?
thanks!
Pablo
Reporter
Comment 3•5 years ago
One additional thing: as far as I know, most of my performance problems start when there is too much swapping, that is, when some programs use too much memory. There is probably some tweak that would improve my system, but if the cause is excessive swapping, vacuuming all the files avoids that swapping. (Take this with a big grain of salt; the cause could be something totally different.)
I am copying a relevant message from the Thunderbird bug, https://bugzilla.mozilla.org/show_bug.cgi?id=1574093#c5:
I would do it if my vacuum had shown totally unexpected behavior.
I only vacuumed every .sqlite file because I expected it to make things better without side effects, so I have no way to do anything in case of an unexpected result.
The only result I expected was that vacuuming places.sqlite would speed up browsing my Firefox history.
The fact that my whole system worked magically far better was totally unexpected.
And the only difference between before and after is the whole-vacuum run.
So I would recommend that, at least in an experimental setup, vacuuming everything instead of only places.sqlite be tested (for example via Firefox experiments with a large enough user base) to measure any difference. If vacuuming everything does something, even if it is unexplainable, for someone other than me, just do it for everybody. I have no idea how it could have any drawback, except for potential new bugs in the new code.
Apart from the added code, I really don't see any reason why anything could change other than for the better.
I searched for any example of a PostgreSQL VACUUM, SQLite VACUUM, or MySQL OPTIMIZE causing a regression, and totally failed to find one.
The search only gave me examples of huge improvements:
First it returned this very Firefox bug, which makes exactly the same assumption as me (and at least some real code was written on this assumption), and nobody has said there is a scenario where it could decrease performance. The potential benefit of optimizing the database was taken to be obvious; apparently the only debate was how to do it in the best way. As my example shows, it can definitely make a huge difference (and I would say that being on restricted hardware, like a significant share of Thunderbird users, is a factor of potential improvement).
The two other results confirming what I said (in a search that tried to prove the opposite) were https://forum.duplicati.com/t/vacuum-performance-improvement-is-huge/5186 and https://stackoverflow.com/questions/784173/what-are-the-performance-characteristics-of-sqlite-with-very-large-database-file .
The SQLite VACUUM documentation clearly assumes the same thing (potential improvement and no drawback).
SQLite even has an auto_vacuum mode, which is not enabled by default because, while it reduces the file size, it can lead to extra fragmentation, which has a performance impact. In that documentation, VACUUM has only advantages: it reduces either the size or the fragmentation.
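For reference, auto_vacuum is a per-database PRAGMA. A minimal sketch of enabling it with Python's standard sqlite3 module (the file name is made up for illustration; on an existing database the setting only takes effect after a manual VACUUM rebuilds the file):

```python
import sqlite3

# Connect in autocommit mode (isolation_level=None), which VACUUM requires.
conn = sqlite3.connect("example.sqlite", isolation_level=None)

# auto_vacuum must be set before the first table is created,
# or be followed by a manual VACUUM to rebuild an existing database.
conn.execute("PRAGMA auto_vacuum = FULL")
conn.execute("VACUUM")

# PRAGMA auto_vacuum reports 0 (NONE), 1 (FULL) or 2 (INCREMENTAL).
mode = conn.execute("PRAGMA auto_vacuum").fetchone()[0]
print(mode)  # 1 == FULL
conn.close()
```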
Clearly, the only reason that VACUUM or OPTIMIZE is not run automatically by the SQL engines that expose it as a manual command is that it is costly in time and may lock the database, so it has to be done when there is no impact. (As far as I can tell, my whole-home vacuum run was slow only because the script scanned my entire home directory. I looked a little at the output and did not even notice when a vacuum happened. The script logs every file it scans and I don't know how to disable that, so I have no idea of the size reduction involved, even though it is logged.)
Here is the script if anyone wants to try it:
https://gist.github.com/xcombelle/ecfa46719cd2ac5f8aeb401b173c7901
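The gist itself is not reproduced here; as a rough, hypothetical sketch of the same idea in Python (the function name and the `.sqlite` filename filter are my assumptions, and real profiles should only be vacuumed while the application is closed, since VACUUM takes an exclusive lock):

```python
import os
import sqlite3

def vacuum_sqlite_files(root):
    """Walk `root` and VACUUM every file ending in .sqlite (hypothetical helper)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".sqlite"):
                continue
            path = os.path.join(dirpath, name)
            try:
                # isolation_level=None puts sqlite3 in autocommit mode,
                # which VACUUM requires (it cannot run inside a transaction).
                conn = sqlite3.connect(path, isolation_level=None)
                try:
                    conn.execute("VACUUM")
                finally:
                    conn.close()
            except sqlite3.DatabaseError:
                # Skip files that are locked or are not real SQLite databases.
                pass
```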
Comment 4•5 years ago
This is something we'd have liked to do for some time, but it's not always a win: indeed, a vacuumed database will take more time for inserts. Vacuuming is a good idea only when the internal database page fragmentation goes over a certain threshold. Filesystem fragmentation adds problems on top of that. It's probably good to vacuum once every other month, though it would be better to vacuum when necessary (which would require a way to measure fragmentation).
The main concern is that it's a slow I/O operation that requires some kind of maintenance service to be done properly in the background. What we do today with Places is pretty much a workaround.
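One rough way to approximate the "fragmentation threshold" idea mentioned above is the fraction of database pages sitting on the freelist, which SQLite exposes through `PRAGMA freelist_count` and `PRAGMA page_count`. This sketch, with hypothetical function names and an arbitrary 10% threshold, only vacuums when that ratio is exceeded (it does not capture in-page or filesystem fragmentation, just unused whole pages):

```python
import sqlite3

def free_page_ratio(path):
    """Fraction of pages on the freelist: a rough signal of wasted space."""
    conn = sqlite3.connect(path)
    try:
        free = conn.execute("PRAGMA freelist_count").fetchone()[0]
        total = conn.execute("PRAGMA page_count").fetchone()[0]
    finally:
        conn.close()
    return free / total if total else 0.0

def vacuum_if_fragmented(path, threshold=0.1):
    """VACUUM only when more than `threshold` of the pages are free."""
    if free_page_ratio(path) > threshold:
        # Autocommit mode (isolation_level=None) is required for VACUUM.
        conn = sqlite3.connect(path, isolation_level=None)
        try:
            conn.execute("VACUUM")
        finally:
            conn.close()
```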