[meta] Improve the reliability of Places and the storage subsystem
Categories
(Toolkit :: Places, enhancement, P5)
Tracking
()
People
(Reporter: past, Unassigned)
References
(Depends on 11 open bugs, Blocks 1 open bug)
Details
(Keywords: meta, Whiteboard: [fxsearch])
Comment 1•5 years ago
Marco, may I ask you a few questions?
- Have you heard about places.sqlite being randomly truncated to about a third of its size? I am on a beta update channel (Developer Edition), so I cannot really complain, but honestly I am in shock: I just discovered by accident that my places.sqlite is now 170 MB instead of the 270 MB it should be, and indeed all of my history from before 2017 is gone. I am not sure whether this is a core Firefox engine issue or a glitch in cloud synchronization that cannot handle a database this big (due to a per-user space limit or some other bug).
- Can you please tell me who develops this part of Firefox? I would ask them to add a simple check: if the database suddenly becomes significantly smaller, the browser should prompt the user and keep the older, bigger database file in the meantime. Unlike the SessionStore data, where such reliability crutches are unnecessary, this matters because users may easily fail to notice that they have lost most of their history, and eventually even their backup files will gradually be replaced with half-empty places.sqlite files.
- Thankfully, I do have a backup, so I can try to write an SQLite script to merge the older, fuller database I recovered with the newer truncated one. However, to do this properly, I need to be able to compute the hash function over the URLs. Can you please advise me on who would know the algorithm, so I can try to convert it into an SQLite function (from C++, JS, or Mozilla's new language)? My search for this has failed.
Thank you in advance.
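The safeguard proposed above (warn when the database suddenly shrinks and keep the old file around) could be sketched roughly as follows. This is a hypothetical standalone illustration, not actual Firefox code; the file names, the marker-file mechanism, and the shrink threshold are all invented for the example:

```python
import os
import shutil

SHRINK_RATIO = 0.5  # hypothetical threshold: warn if the file lost over half its size

def check_places_size(db_path, marker_path):
    """Compare the database's current size with the size recorded on the
    previous run; if it shrank drastically, keep a safety copy and report it."""
    current = os.path.getsize(db_path)
    previous = None
    if os.path.exists(marker_path):
        with open(marker_path) as f:
            previous = int(f.read().strip())
    suspicious = previous is not None and current < previous * SHRINK_RATIO
    if suspicious:
        # Keep the shrunken file for inspection instead of letting it
        # silently overwrite the older, fuller backups.
        shutil.copy2(db_path, db_path + ".suspect")
    with open(marker_path, "w") as f:
        f.write(str(current))
    return suspicious
```

On a normal run the function records the current size and returns False; on a run where the file has dropped below the threshold it returns True and leaves a `.suspect` copy behind for the user to inspect.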
Comment 2•5 years ago
Update: sorry, it turns out this is entirely my fault, since I used to run SQLite scripts to tidy my visit history -- for example, replacing "http" with "https" in URLs, removing tracking tails from YouTube links (added when you open videos from playlists, emails, or notifications), and so on. Recent Firefox versions introduced a URL checksum, but I did not know the algorithm, so the moz_places table was updated without updating the checksums. That did not matter for a long time, but a few weeks ago Firefox removed all of the rows with wrong checksums.
So basically the only remaining question is how to recompute the checksums in my old database, so I can simply copy the rows into the newer database to make it complete.
Comment 3•5 years ago
It's never a good idea to run queries directly against a third-party database: the schema can change at any time. It would be better to write an extension that replaces http URLs with https ones, or just use an add-on that always forces https.
The only way to fix your db would be to use a script from Scratchpad to run a query on the database from within Firefox, and do something like:
UPDATE moz_places SET url_hash = hash(url) WHERE url_hash <> hash(url)
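For background on why the query above works from inside Firefox but not from an external SQLite shell: the hash() SQL function is registered by Firefox itself on the Places connection, and it is built on mozilla's golden-ratio multiplicative string hash (mozilla::HashString). The Python sketch below illustrates that style of 32-bit hashing only; it is not guaranteed to reproduce Firefox's actual url_hash values, since the real implementation also folds a hash of the URL prefix into the upper bits of a 64-bit value -- check the Places sources in mozilla-central for the authoritative algorithm:

```python
GOLDEN_RATIO = 0x9E3779B9  # 32-bit golden-ratio constant used by mozilla's hash mixers

def rotl32(x, n):
    """Rotate a 32-bit value left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def hash_string(s):
    """Golden-ratio multiplicative string hash in the style of
    mozilla::HashString: mix each character into a 32-bit accumulator."""
    h = 0
    for ch in s:
        h = (GOLDEN_RATIO * (rotl32(h, 5) ^ ord(ch))) & 0xFFFFFFFF
    return h
```

This is meant only as a reading aid for the algorithm family; the safe route remains the in-Firefox query above, which uses the hash() function Firefox registers itself.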