Closed Bug 1202105 Opened 9 years ago Closed 9 years ago

After fix of bug 854798 in Tb 38, "mail which was not removed by Compact of BerkleyStore_LocalMailFolder" can be replaced by newly added/copied mail at Thread pane

Categories

(MailNews Core :: Database, defect)

Priority: Not set
Severity: major

Tracking

(thunderbird42 wontfix, thunderbird43 fixed, thunderbird44 fixed, thunderbird45 fixed, thunderbird_esr38 44+ fixed)

RESOLVED FIXED
Thunderbird 45.0

People

(Reporter: World, Assigned: rkent)

References

Details

User Story

See Bug 1201782 Comment #4 for Steps to reproduce and test mails.

Phenomenon is:
  When mailX with messageKey=XXX/messageOffset=XXX is changed to
     messageKey=XXX/messageOffset=YYY
     (where YYY < fileSize after Compact < XXX)
     by Compact of BerkleyStore_LocalMailFolder,
  if a mailZ is later unfortunately appended at Offset=XXX,
     mailZ gets messageKey=XXX/messageOffset=XXX,
  so mailX (offset=YYY < XXX) is replaced by mailZ (offset=XXX) in msgDB,
  and mailX is removed from the Thread pane.
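The collision described above can be sketched with a toy model of the msgDB (a map of one header per messageKey). This is illustrative only, under the assumption that the msgDB silently replaces a header on key collision; the names `Hdr`, `MsgDb`, and `addWithKeyEqualsOffset` are hypothetical, not Thunderbird code.

```cpp
#include <cstdint>
#include <map>
#include <string>

using nsMsgKey = uint32_t;

// Toy model of the msgDB: one header per messageKey.
struct Hdr { uint64_t offset; std::string id; };
using MsgDb = std::map<nsMsgKey, Hdr>;

// Before the fix, a newly appended mail took key == its file offset,
// silently clobbering any existing header that kept its pre-compact key.
void addWithKeyEqualsOffset(MsgDb& db, uint64_t offset, const std::string& id) {
  db[static_cast<nsMsgKey>(offset)] = Hdr{offset, id};  // replace on collision
}
```

With mailX keeping its old key 5000 (but now at offset 1200 after compact), appending mailZ once the file grows back to 5000 bytes overwrites mailX's header, which is exactly the "mail disappears from Thread pane" symptom.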

Attachments

(1 file, 1 obsolete file)

+++ This bug was initially created as a clone of Bug #1201782 +++

This is a spin-off of Bug 1201782 Comment #4.

After the fix of bug 854798 in Tb 38, Compact of BerkleyStore_LocalMailFolder keeps the messageKey value of existing mails instead of replacing messageKey with the new offset value after Compact. This is to keep the messageKey value of non-deleted mail even after Compact. However, Copy/Move still sets the offset value as the messageKey.

So, when a mail has messageKey=XXX/messageOffset=xxx (XXX > fileSize after Compact, xxx < fileSize after Compact), if a mail is copied at Offset=XXX (XXX == fileSize before appending data), this copied mail gets messageKey=XXX/messageOffset=XXX. Because of the same messageKey, the existing mail (messageKey=XXX/messageOffset=xxx) is replaced by the copied mail (messageKey=XXX/messageOffset=XXX) in msgDB, and the existing mail (messageKey=XXX/messageOffset=xxx) disappears from the Thread pane.

At this step, the data of the existing mail is not lost in the msgStore file; the mail is simply hidden in the Thread pane. So, when Repair Folder is executed, both the mail with messageKey=xxx/messageOffset=xxx and the mail with messageKey=XXX/messageOffset=XXX are shown in the Thread pane.
User Story: (updated)
No longer depends on: 1183490, 1201782
Keywords: dataloss, privacy
A possible solution is: when copying a mail to a BerkleyStore_LocalMailFolder, assign messageKey = highest key in msgDB + 1, as is done for MailDirStore_LocalMailFolder, instead of assigning the messageOffset value. There is no essential reason to keep messageKey == messageOffset any more, because messageKey and messageOffset were separated by PluggableStore in Tb 12. And, at least in filter move, "messageKey = currently used highest messageKey + 1" is already done.
"Overlay of msgKey" can also occur on POP3 download, because messageKey=messageOffset is used upon POP3 download.
This is a reply to comment 44 of bug 1183490, whose patch is being moved to this bug.

(In reply to :aceman from comment #44)
> Comment on attachment 8666431 [details] [diff] [review]
> Detect and fix key collisions
>
> Review of attachment 8666431 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Well, I didn't propose this solution as it isn't perfect :) I can't guarantee
> that this will work 100% (as some code may expect key=offset, that is why
> bug 793865 is so complicated), but for the single theoretical message that
> may get a duplicate key, this may be enough as a quick fix.
>
> If you implement the loop I talk about below, then let's move the patch to
> bug 1202105.
>
> If you use the highest-key solution, it could stay here.

I don't like the highest-key approach, because we're going to eventually run out of keys. We already allow the key to get very close to the maximum value. If we have a folder that is close to 4 GB in size, and we have been setting keys based on offset for most of that time, then the highest key value will be close to the max value, and we will have a very limited range of new key values available. I'm pretty sure though that setting the key to none will get the max value, so that is not that hard to do.

As for fixing here versus 100%, my first trial at this did this check in nsMsgDatabase::CreateNewHdr, which would have been a more universal spot. I was just concerned about the risk of doing it there, as there could be code paths that rely on creating a header with the same key as an existing header, then deleting the existing header before adding it. It's really a matter of how to accomplish the most with the least risk. (We've really, really got to finish decoupling all of the various meanings of key if we are ever going to be able to make changes like this without fear of unexpected regressions.)

So you have not convinced me yet that the proposed patch is the wrong thing to do for an esr38 update, which is what I am targeting. But I could be convinced, so feel free to keep discussing.

> ::: mailnews/local/src/nsMsgBrkMBoxStore.cpp
> @@ +665,5 @@
> > +      // is a collision.
> > +      bool hasKey = false;
> > +      while (db->ContainsKey(key, &hasKey), hasKey)
> > +        key++;
> > +    }

I don't see why you are asking for another loop. This is already implemented as a while loop that finds the next key that is not already in the database, until it finds one that is unique.

Since the critical issue of data loss is moved to this bug, I'm going to move the tracking-esr38 here as well. I am concerned though that this bug is not clearly defined to a single issue, but I would have to try some of the STR steps to be sure.
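The while loop quoted from the patch can be sketched in isolation. This is a minimal stand-in, assuming a database that answers membership queries through an out parameter like `nsIMsgDatabase::ContainsKey`; the `Db` struct and `nextFreeKey` name are illustrative, not the real XPCOM interface. The comma operator runs `ContainsKey` before each loop test, which is why it is a `while`, not an `if`.

```cpp
#include <cstdint>
#include <set>

using nsMsgKey = uint32_t;

// Minimal stand-in for a msg database with a ContainsKey-style query.
struct Db {
  std::set<nsMsgKey> keys;
  void ContainsKey(nsMsgKey key, bool* hasKey) const {
    *hasKey = keys.count(key) != 0;
  }
};

// The loop from the quoted patch: scan upward from `key` until a key
// not already in the database is found.
nsMsgKey nextFreeKey(const Db& db, nsMsgKey key) {
  bool hasKey = false;
  while (db.ContainsKey(key, &hasKey), hasKey)
    key++;
  return key;
}
```

If the starting key is free, the loop body never runs and the key is returned unchanged; otherwise it returns the first gap above the collision.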
(In reply to Kent James (:rkent) from comment #3)
> I am concerned though that this bug is not clearly defined to a single issue, (snip)

Which part is not so clear? The problem of this bug is written in the "User Story"; the STR is essentially the same as yours. My comment #2? That is merely a warning to developers and a memorandum for QA people.

> I don't like the highest-key approach, because we're going to eventually run
> out of keys. We already allow the key to get very close to the maximum
> value. (snip) I'm pretty sure though that setting the key to
> none will get the max value, so that is not that hard to do.

What's wrong with the highest-key approach? The current logic assigns current highest key (== fileSize - size of last appended mail) + 1 if fileSize is near 4 GB, instead of fileSize. This is absolutely the same as what you call the "highest-key approach". It looks like "get current highest key" is not so easy, so I believe your "search next free key if contention is detected" is a pretty reasonable approach.

"Continuous keys" currently occur only in "filter move of multiple messages to one BerkleyStore in a single POP3 download". Upon the next POP3 download or copy, fileSize (== Offset) is used as messageKey, so the run of continuous keys is not long. And the majority of msgKeys have messageKey == messageOffset in BerkleyStore, and the problem can occur only on "mail which was not removed by Compact". I think a performance issue won't occur.

A possible problem if this bug is fixed: I know of a phenomenon like the following. After a move of multiple mails by filter etc., a msgDBHdr with messageKey == messageOffset == fileSize and size=0 is generated (== a phantom mail). Upon the next copy or POP3 download, the msgDBHdr for the phantom mail is re-used, so the phantom-mail phenomenon is hidden. This phantom mail was perhaps observed in testing for the bug in "filter move of header-fetch-only mail". It seems an excess move is tried one time: when moving n mails, the move is tried (n+1) times. If this bug is fixed, such a phantom mail may be exposed to the user. I think the msgDBHdr is better re-used, as currently done, if size=0.
Correction. The current logic assigns something which is unique, by setting Null, if fileSize is near 4 GB, instead of fileSize, in order to avoid overflow of msgKey by this append of mail. This is absolutely the same concept as what you call the "highest-key approach", because both are "trying to assign a unique key other than the Offset".
(In reply to Kent James (:rkent) from comment #3)
> > If you use the highest-key solution, it could stay here.
>
> I don't like the highest-key approach, because we're going to eventually run
> out of keys. We already allow the key to get very close to the maximum
> value. If we have a folder that is close to 4 GB in size, and we have been
> setting keys based on offset for most of that time, then the highest key
> value will be close to the max value, and we will have a very limited range
> of new key values available. I'm pretty sure though that setting the key to
> none will get the max value, so that is not that hard to do.

Yes, the 4GB limit for key is bad, but I don't think you would make it worse.

> As for fixing here versus 100%, my first trial at this did this check in
> nsMsgDatabase::CreateNewHdr which would have been a more universal spot. I
> was just concerned about the risk of doing it there, as there could be code
> paths that rely on creating a header with the same key as an existing
> header, then deleting the existing header before adding it. It's really a
> matter of how to accomplish the most with the least risk.

Yes, that is why I wanted some low-risk solution. Touching CreateNewHdr is the stuff I do in bug 793865, and that is risky.

> (We've really, really got to finish decoupling all of the various meanings
> of key if we are ever going to be able to make changes like this without
> fear of unexpected regressions.)

Yes.

> So you have not convinced me yet that the proposed patch is the wrong thing
> to do for a esr38 update, which is what I am targeting. But I could be
> convinced, so feel free to keep discussing.

No, I actually said it is OK for esr38.

> ::: mailnews/local/src/nsMsgBrkMBoxStore.cpp
> @@ +665,5 @@
> > +      // is a collision.
> > +      bool hasKey = false;
> > +      while (db->ContainsKey(key, &hasKey), hasKey)
> > +        key++;
> > +    }
>
> I don't see why you are asking for another loop. This is already implemented as
> a while loop that finds the next key that is not already in the database
> until it finds one that is unique.

Sorry, I was blind to the 'while' keyword in the tiny loop :) I read it as 'if'.

> Since the critical issue of data loss is moved to this bug, I'm going to
> move the tracking-esr38 here as well. I am concerned though that this bug is
> not clearly defined to a single issue, but I would have to try some of the
> STR steps to be sure.

I think WADA talks about POP3 download in comment 2. I think your patch fixes that. If there are any remaining issues AFTER the patch, I'm sure WADA will be able to analyze them. Please attach the patch here so I can review it. Thanks.
Assignee: nobody → rkent
Status: NEW → ASSIGNED
Attachment #8670518 - Flags: review?(acelists)
Comment on attachment 8670518 [details] [diff] [review]
Check duplicate key, using nsMsgKey_None if duplicated.

Review of attachment 8670518 [details] [diff] [review]:
-----------------------------------------------------------------

(1) After Compact, the following is generated:
      fileSizeAfterCompact = fileSize_0
      messageKey=XXX/messageOffset=newoffset_XXX
      messageKey=YYY/messageOffset=newoffset_YYY
      messageKey=ZZZ/messageOffset=newoffset_ZZZ
    where newoffset_??? < ??? (??? = XXX or YYY or ZZZ),
          XXX < YYY < ZZZ,
          newoffset_XXX < newoffset_YYY < newoffset_ZZZ < fileSize_0.
(2) At some point, if current fileSize < XXX, then fileSize_0 < current fileSize = fileSize_1 == XXX may occur.
    messageKey=Null/messageOffset=XXX is generated by the patch.
(3) At some point, if current fileSize < YYY, then fileSize_1 < current fileSize = fileSize_2 == YYY > XXX may occur.
    messageKey=Null/messageOffset=YYY is generated by the patch.
(4) At some point, if current fileSize < ZZZ, then fileSize_2 < current fileSize = fileSize_3 == ZZZ > YYY may occur.
    messageKey=Null/messageOffset=ZZZ is generated by the patch.

(A) If the Null message key is "a message key value == Null", it's a unique key, so I think there is no problem at step (2). However, a duplicate key of Null happens at steps (3)/(4).
(B) If the Null message key means "Tb assigns a free key less than fileSize", there is no problem at steps (3)/(4).

Sorry, but I don't know Tb's logic. I guess current Tb's behavior is (A). The current "assign Null key when large fileSize" logic perhaps expects: big file size, so it exceeds 4GB sooner or later, so the "warning on 4GB" is shown sooner or later, then Compact will be done. So I think the previous logic (find a free key when contention is found) is better.
The logic as I understand it is this. When the database is opened, and mork reads all of the items in the database, it keeps track of the largest key found. When you request a new header with MsgKey_None it assigns a value one more than the previously highest message key.

What we really want to switch to is to always use the next highest key. I'm putting up patches on bug 1183490 that will do that, as that should also solve the problem of the order entered being incorrect. I've been working with those patches now for a few days, so I am quite comfortable with assigning keys using MsgKey_None now. The concept of incrementing the key, although it should work, is unnecessary and I see no advantage to it.

Once we switch to always using MsgKey_None then the message key size will no longer be a factor in creating message folders > 4 GBytes. In fact, perhaps in the currently proposed patch we should go ahead and change this:
  if (filePos <= 0xFFFFFF00)
to a more conservative value, say
  if (filePos <= 0xFF000000)
so that we stop needlessly pushing users' key values close to the limit.
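The MsgKey_None behavior described above can be sketched as follows. This is a paraphrase of the described logic only, assuming mork tracks the largest key seen and hands out highest + 1 on request; the `KeyAllocator` type is hypothetical, not the mork source.

```cpp
#include <cstdint>

using nsMsgKey = uint32_t;
constexpr nsMsgKey nsMsgKey_None = 0xFFFFFFFF;

// Sketch: the db tracks the largest key it has seen; requesting a header
// with nsMsgKey_None yields highest + 1, otherwise the requested key is
// used and the high-water mark is updated.
struct KeyAllocator {
  nsMsgKey highest = 0;
  nsMsgKey assign(nsMsgKey requested) {
    nsMsgKey key = (requested == nsMsgKey_None) ? highest + 1 : requested;
    if (key > highest) highest = key;
    return key;
  }
};
```

Under this model, repeated MsgKey_None requests produce strictly increasing, collision-free keys regardless of file offsets, which is why the key size stops mattering for folders over 4 GB.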
(In reply to Kent James (:rkent) from comment #9)
> The logic as I understand it is this. When the database is opened, and mork
> reads all of the items in the database, it keeps track of the largest key
> found. When you request a new header with MsgKey_None it assigns a value one
> more than the previously highest message key.

Thanks for the explanation. Does filter move utilize MsgKey_None, or does it merely forget to set messageOffset to messageKey? :-)
(In reply to Kent James (:rkent) from comment #9)
> The logic as I understand it is this. When the database is opened, and mork
> reads all of the items in the database, it keeps track of the largest key
> found. When you request a new header with MsgKey_None it assigns a value one
> more than the previously highest message key.

Yes.

> Once we switch to always using MsgKey_None then the message key size will no
> longer be a factor in creating message folders > 4GBytes. In fact, perhaps
> in the currently proposed patch we should go ahead and change this:
>   if (filePos <= 0xFFFFFF00)
> to a more conservative value, say
>   if (filePos <= 0xFF000000)
> so that we stop needlessly pushing user's key values close to the limit.

If you pull off bug 1183490 we should then ignore any filePos and increment the key from the beginning. If you want to ease the situation for TB38 users nearing 4GB, yes, you can change the constant here.
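The threshold check being discussed can be sketched in isolation. The two constants come from the thread itself; the surrounding function `keyForOffset` is a paraphrase for illustration, not the real `nsMsgBrkMBoxStore` code path.

```cpp
#include <cstdint>

using nsMsgKey = uint32_t;
constexpr nsMsgKey nsMsgKey_None = 0xFFFFFFFF;

// Sketch of the guard under discussion: derive the key from the file
// offset only while the mbox is safely below 4 GB; past the threshold,
// fall back to nsMsgKey_None so the db assigns a fresh key.
nsMsgKey keyForOffset(int64_t filePos, bool conservative) {
  const int64_t limit = conservative ? 0xFF000000   // proposed new value
                                     : 0xFFFFFF00;  // current value
  return (filePos <= limit) ? static_cast<nsMsgKey>(filePos)
                            : nsMsgKey_None;
}
```

The more conservative limit simply widens the band of offsets that trigger the fallback, leaving a larger reserve of key values near the 32-bit maximum.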
Comment on attachment 8670518 [details] [diff] [review]
Check duplicate key, using nsMsgKey_None if duplicated.

Review of attachment 8670518 [details] [diff] [review]:
-----------------------------------------------------------------

I have some questions here to understand the code.

::: mailnews/local/src/nsMsgBrkMBoxStore.cpp
@@ +656,5 @@
>   if (db && !*aNewMsgHdr)
>   {
> +    nsMsgKey key = nsMsgKey_None;
> +    // The key should not need to be the filePos anymore, but out of caution
> +    // we will continue setting key to filePos for small mboxes.

What do you mean by "small" here?

@@ +662,5 @@
> +    {
> +      // After compact, we can have duplicated keys (see bug 1202105) so only
> +      // set key to filePos if there is no collision.
> +      bool hasKey = true;
> +      if (db->ContainsKey((nsMsgKey) filePos, &hasKey), !hasKey)

Could you use (NS_SUCCEEDED(db->ContainsKey()) && !hasKey) for better readability?

@@ +663,5 @@
> +      // After compact, we can have duplicated keys (see bug 1202105) so only
> +      // set key to filePos if there is no collision.
> +      bool hasKey = true;
> +      if (db->ContainsKey((nsMsgKey) filePos, &hasKey), !hasKey)
> +        key = (nsMsgKey) filePos;

Could you use msgKeyFromInt() here instead of the cast?

::: mailnews/local/test/unit/test_duplicateKey.js
@@ +29,5 @@
> +
> +  // Note the listener won't work because this is a sync delete,
> +  // but it should!
> +  localAccountUtils.inboxFolder
> +    .deleteMessages(deletes, // in nsIArray messages,

Trailing space.

@@ +50,5 @@
> +  hdrs = showMessages(localAccountUtils.inboxFolder);
> +  Assert.equal(hdrs.length, 3, "Check db length after compact");
> +
> +  // Add some more messages. This fails in nsMsgDatabase::AddNewHdrToDB with
> +  // NS_ERROR("adding hdr that already exists") from bug 1202105.

What do you mean by "this fails"? You properly check if the key exists before creating the hdr.

@@ +56,5 @@
> +  yield gPOP3Pump.run();
> +
> +  dump("Messages after new message\n");
> +  hdrs = showMessages(localAccountUtils.inboxFolder);
> +  Assert.equal(hdrs.length, 4, "Check db length after adding one message");

Where in this test do you check that the key of the 4th msg is higher than the 3rd msg (previously the 5th)?

@@ +73,5 @@
> +    hdrs.push(enumerator.getNext().QueryInterface(Ci.nsIMsgDBHdr));
> +    dump("key " + (hdrs.length - 1) + " is " + hdrs[hdrs.length - 1].messageKey + "\n");
> +  }
> +  return hdrs;
> +}

Trailing whitespace.
Attachment #8670518 - Flags: review?(acelists) → feedback+
(In reply to :aceman from comment #12)
> ::: mailnews/local/src/nsMsgBrkMBoxStore.cpp
> @@ +656,5 @@
> >   if (db && !*aNewMsgHdr)
> >   {
> > +    nsMsgKey key = nsMsgKey_None;
> > +    // The key should not need to be the filePos anymore, but out of caution
> > +    // we will continue setting key to filePos for small mboxes.
>
> What do you mean by "small" here?

Smaller than our close-to-4GB value, which is 0xFFFFFF00 (and we are proposing to switch to 0xFF000000).

> > +  // Add some more messages. This fails in nsMsgDatabase::AddNewHdrToDB with
> > +  // NS_ERROR("adding hdr that already exists") from bug 1202105.
>
> What do you mean by "this fails"? You properly check if the key exists
> before creating the hdr.

What I mean is that, prior to the fixes in this bug, this is the point where the test detects a failure. It should succeed with this bug. So if you run this test without the other fixes, the test will fail. We really need a better way to express this in tests, as tests should reliably fail before a fix, and succeed after a fix.

> Where in this test do you check that the key of the 4th msg is higher than the
> 3rd msg (previously the 5th)?

Order of keys is not being tested in this test, only prevention of duplicates. Order of keys is bug 1183490.
(In reply to Kent James (:rkent) from comment #13)
> What I mean is that, prior to the fixes in this bug, this is the point where
> the test detects a failure. It should succeed with this bug. (snip) We really
> need a better way to express this in tests, as tests should reliably fail
> before a fix, and succeed after a fix.

OK, so can you make the sentence in the test mean this ("it fails before bug 1202105")?

> Order of keys is not being tested in this test, only prevention of
> duplicates. Order of keys is bug 1183490.

OK. Please update the patch so I can give you r+ :)
Attached patch version 2, fix aceman issues (deleted) — Splinter Review
re "Could you use msgKeyFromInt() here instead of the cast?"

I tried that; the issue is that filePos needs to be int64, and msgKeyFromInt does not support that. I tried forcing it with casts, and adding a third method to support that. But IMHO all of those make the call really long and hard to read, and ultimately obscure what is going on for no added value. So I did not change this.
Attachment #8670518 - Attachment is obsolete: true
Attachment #8671561 - Flags: review?(acelists)
Comment on attachment 8671561 [details] [diff] [review] version 2, fix aceman issues Review of attachment 8671561 [details] [diff] [review]: ----------------------------------------------------------------- Thanks.
Attachment #8671561 - Flags: review?(acelists) → review+
Keywords: checkin-needed
https://hg.mozilla.org/comm-central/rev/96721fd1e7e0cb1e64152964944d458370e74e02 Bug 1202105 - Prevent duplicate key when creating new message header. r=aceman
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → Thunderbird 45.0
Comment on attachment 8671561 [details] [diff] [review] version 2, fix aceman issues The intent of this patch was to be the short-term fix, so we need to push it upstream to try to get a beta test prior to a (possible) landing on esr38. https://hg.mozilla.org/releases/comm-beta/rev/11fade3df3f9 https://hg.mozilla.org/releases/comm-aurora/rev/600ff8e1ecde
Attachment #8671561 - Flags: approval-comm-esr38?
Attachment #8671561 - Flags: approval-comm-beta+
Attachment #8671561 - Flags: approval-comm-aurora+
I would have pushed this for tb 38.5, but with no tb 43 beta I don't think that I should do that.
Attachment #8671561 - Flags: approval-comm-esr38? → approval-comm-esr38+