KeyError in mutable read-modify-write #1670
Mac OS X 10.7 gateway, mutable.format is set to mdmf.
Sometimes tahoe backup processes several files fine and later stops.
If it stops, it's always at the same place, like this:
Attachment incident-2012-02-10--07-10-36Z-yyfpk7a.flog.bz2 (42352 bytes) added
possibly related incident file 1
Attachment incident-2012-02-10--07-11-26Z-qwg5ivy.flog.bz2 (42111 bytes) added
possibly related incident file 2
Attachment incident-2012-02-10--07-14-46Z-fpevtli.flog.bz2 (42576 bytes) added
possibly related incident file 3
reformat description
Title changed from "tahoe backup sometimes fails" to "KeyError in mutable MDMF retrieve".
#1656 is a probable duplicate with a KeyError on the same line. Based on that, the bug doesn't seem to be specific to MDMF; vikarti, can you confirm that by trying to reproduce it with SDMF?
Title changed from "KeyError in mutable MDMF retrieve" to "KeyError in mutable retrieve".
Attachment 1656-tahoe-repair-incident-2012-02-18--22_59_20Z-fsyvi7q.flog.bz2 (41405 bytes) added
Incident file for 'tahoe deep-check --repair' from #1656
davidsarah, I can't reproduce this one on SDMF; it just either happens or not. If it happens with SDMF, I will report it here.
Davidsarah,
possibly related to this:
SDMF directory, 2 tahoe backup sessions from different gateways to this directory; the 2nd one got:
I think this is possibly related to this ticket, so I'm reporting it here.
Also, there is an incident file for this one (attaching it).
Attachment incident-2012-02-19--06-26-28Z-o6fb6qy.flog.bz2 (38106 bytes) added
possibly related incident file (see comment 7)
vikarti: thanks, that confirms this problem is not specific to MDMF.
It looks like the mutable filenode modification code in MutableFileVersion reacts to an UncoordinatedWriteError by performing a map update, then trying the operation again. That map update will update the MutableFileVersion's internal servermap, but not its internal verinfo tuple. If the UncoordinatedWriteError is due to a concurrent update operation that updated all or most (enough for the version to be unrecoverable, anyway) of the shares associated with the MutableFileVersion's version of the mutable file, then we would see the KeyError reported by vikarti and others. Someone (likely me, if no one gets around to it before next weekend) should look over the incident logs to see if they support or refute this theory.
Attachment 1656-tahoe-put-incident-2012-02-18--22_00_17Z-7jusaxa.flog.bz2 (39376 bytes) added
Incident file for 'tahoe put' from #1656
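To make kevan's hypothesis above concrete, here is a minimal, self-contained sketch. This is not Tahoe code; the class and attribute names are hypothetical and only model the pattern he describes: the retry path refreshes the map of known versions but keeps using the verinfo tuple cached before the uncoordinated write happened.

```python
# Toy illustration (hypothetical names) of retrying with a refreshed
# version map but a stale cached verinfo tuple.

class ToyMutableFileVersion:
    """Toy stand-in for a mutable file version handle (not Tahoe's class)."""

    def __init__(self, versionmap, verinfo):
        self.versionmap = versionmap   # maps verinfo tuple -> share data
        self.verinfo = verinfo         # cached when the handle was created

    def refresh_map(self, new_versionmap):
        # Models the "map update" done after an UncoordinatedWriteError:
        # the map is refreshed, but self.verinfo is NOT updated.
        self.versionmap = new_versionmap

    def retrieve(self):
        # Looks up shares using the (possibly stale) cached verinfo.
        return self.versionmap[self.verinfo]


# Version 1 exists when we start our read-modify-write.
v1 = ("seq1", "roothash1")
node = ToyMutableFileVersion({v1: "shares for version 1"}, verinfo=v1)

# An uncoordinated writer replaces version 1 with version 2 on the grid.
v2 = ("seq2", "roothash2")
node.refresh_map({v2: "shares for version 2"})   # map update, verinfo stays stale

try:
    node.retrieve()
except KeyError as exc:
    print("KeyError during the retried retrieve, as in this ticket:", exc)
```

If this model matches the real code, the fix directions discussed later in this ticket are either to refresh the cached verinfo together with the servermap, or to drop the automatic retry entirely.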
This is a regression in 1.8→1.9, isn't it? I propose we put it into the "1.9.2" Milestone.
huang jun provided some detailed debugging:
https://tahoe-lafs.org/pipermail/tahoe-dev/2012-April/007285.html
You have to read carefully to find where huang jun inserted comments and evidence from other log files. Look for the lines beginning with * or >.
Kyle Markley reported a bug with a similar stacktrace:
Kevan, is Kyle's report consistent with comment:87382?
Replying to kevan:
Hm, that's interesting. Is this... intentional? Is this intended by someone? :-) I don't know right now whether I would intend that behavior.
A simpler behavior in response to UncoordinatedWriteError would be to abort the current operation and inform the user. That simpler behavior would presumably not have this bug in it, and maybe it would also avoid other potential problems caused by re-attempting an operation when there is evidence that someone else is uncoordinatedly changing the same file.
This looks like a similar error to #1772, so I'm investigating them both right now. Kevan, Brian, David-Sarah: if you are interested, please consider the question of whether we should simplify handling of UncoordinatedWriteError as suggested in comment:15. It seems to me that we currently handle UCWE in a complicated and buggy way, and we should maybe change it to something simpler and less buggy before attempting to make it more complicated and featureful without being buggy. Just a thought. Anyway, I'll report what I learn about this...
Sorry, I didn't mean #1772 (in comment:87389), I meant #1669.
Replying to zooko:
Well, it looks like there is quite a lot of code to support the retrying of mutable operations, so I assumed that was certainly intentional (it was there before I joined the project, and I was mildly surprised since it seemed not entirely consistent with the "prime coordination directive"). Let's redesign it as part of designing two-phase commit, but not before 1.11. I really want to get 1.10 out with all the changes already on trunk, without making any more significant design changes.
Fixed #1669 and it is different from this -- this isn't a duplicate.
By inspecting the bug reports and incident files I've confirmed what Kevan said, that the error happens during retry after a UCWE. One of the incident files shows the following:
And the user then reported the following:
Hm, actually this may be the same underlying problem as in #1669. In #1669, we found that:
• SDMFSlotWriteProxy.get_verinfo (src/allmydata/mutable/layout.py, rev 87ca4fc7055faaea7e54cbab4584132b021e42e1, line 433) and MDMFSlotReadProxy.get_verinfo (same file, line 1692) return the same shape of tuple, but MDMFSlotWriteProxy.get_verinfo (same file, line 1102) returns a different shape.
So, could this ticket (#1670) be caused by MDMFSlotWriteProxy.get_verinfo returning a differently-shaped verinfo, which gets used as a key in the Retrieve, and then later, when MDMFSlotReadProxy.get_verinfo returns what ought to be the same thing (but isn't, due to the different shape) and the Retrieve looks it up in the versionmap, it gets a KeyError?
Yes, I've looked at how the auto-retry functionality shown in the stack trace (comment:87392) works, and it looks like the earlier attempt to publish would indeed update the ServerMap object's self.servermap dict to have a verinfo returned by MDMFSlotWriteProxy.get_verinfo. So I'm increasingly confident that the fix to #1669 also fixed this bug. The next step is to write a unit test that exercises the auto-retry functionality with MDMF, which should show the bug present in 1.9.1 and absent in 1.9.2a1.
(Also, by the way, I remain pretty interested in the idea of removing the auto-retry functionality completely.)
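For readers following along, here is a minimal sketch of the shape-mismatch theory above. The tuple contents are invented for illustration (the real difference in #1669 was in the tuple's layout); the point is only that two verinfo tuples describing the same version but built with different shapes are different dict keys, so the lookup during retrieve fails.

```python
# Hypothetical verinfo tuples: same version, different shapes.
verinfo_from_write_proxy = ("seqnum", "roothash", "salt", "segsize", "datalen")
verinfo_from_read_proxy = ("seqnum", "roothash", "salt", "segsize", "datalen",
                           "offsets")  # extra field shown only as an example

# The write path stores shares under its shape of the key ...
versionmap = {verinfo_from_write_proxy: ["share 0", "share 1"]}

# ... and the read path later looks them up under its own shape.
try:
    versionmap[verinfo_from_read_proxy]
except KeyError:
    print("KeyError: same version, but differently-shaped verinfo tuples")
```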
Did we fix the binary SIs (if that's what they are) in the log, BTW?
I haven't committed that patch yet. Wasn't planning to do for 1.9.2. Shall I?
Replying to zooko:
Please attach the patch here so I can decide whether to commit it for 1.9.2. It should be low-risk I think.
I think this bug can only happen for a read-modify-write.
Title changed from "KeyError in mutable retrieve" to "KeyError in mutable read-modify-write".
Title changed from "KeyError in mutable read-modify-write" to "add tests for KeyError in mutable read-modify-write".
Will look at adding a test.
Replying to davidsarah:
Split to #1800.
News flash! joepie91 from IRC reported a bug which looks exactly like this one. However, he is using Tahoe-LAFS v1.9.2, with the patch from #1669 (changeset:5524/1.9.2) in place! This means that, contrary to what I thought, that patch did not fix this issue — #1670.
Here is a stack trace from joepie91:
Attaching an incident report file from joepie91 that I manually filtered to remove potentially sensitive information.
Attachment incident-2013-01-14--07-54-09Z-2ymyxfi.flog.bz2.dump.txt.manually-filtered.txt (286008 bytes) added
In incident-2013-01-14--07-54-09Z-2ymyxfi.flog.bz2.dump.txt.manually-filtered.txt, you can see that the KeyError was preceded by a mysteriously changed version number, probably due to a different gateway modifying the directory at the same time as this gateway was doing so:
…
…
So then it does the automatic-merge-and-retry thing (which I would still like to remove, for simplicity):
…
But then it somehow uses a cached verinfo which has the old "42" in it, and gets the KeyError:
Title changed from "add tests for KeyError in mutable read-modify-write" to "KeyError in mutable read-modify-write".
Replying to zooko:
This is why tests are important. I'm always rather skeptical that something has been fixed if there is no test.
AF saw this bug -- it happened nondeterministically:
(Presumably it is happening on the update of the tahoe:zeros3 directory.)
Notice that this is using Tahoe-LAFS v1.10.
Ed Kapitein posted a bug report to the tahoe-dev list (https://tahoe-lafs.org/pipermail/tahoe-dev/2013-July/008487.html) that matches this stack trace.
Upgrading "Priority" from "normal" to "major", because this seems like a bad bug. It also apparently led to data loss in Ed Kapitein's case, so I'm adding the
preservation
keyword.Replying to kevan:
So, isn't the correct fix for this, without making any design changes, just for MutableFileVersion to make sure that its verinfo tuple is also updated when it does a mapupdate in this case?
We investigated this during the Tahoe-LAFS Summit. It seems likely that directory-modification operations have been accidentally using the original version of the directory, even when there was a write-collision and a newer version of the directory was discovered. I didn't 100% confirm this, but I suspect that means that in those (rare‽) cases where there are write-collisions on a directory, the retrying code would blow away the other person's write, by re-applying the earlier version (plus the current modification). This would be a data loss bug and very much not the kind of thing we tolerate. ☹
Now, there are no bug reports that I am aware of that could point to this having caused a directory modification to get thrown out in practice. There are several reports of this bug leading to an internal KeyError, but as far as I recall, nobody reported that a change they had made to their directory was subsequently discovered to have been lost.
Nonetheless, the possibility of this happening seems to be present, from code inspection.
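As a toy illustration of the hazard just described (not Tahoe code; the dict-of-children model and all names are hypothetical), compare re-applying our modification to the stale snapshot we originally read against re-reading the latest version before re-applying it:

```python
# A "directory" is modeled as a plain dict of child name -> cap string.
original = {"a.txt": "cap-a"}        # the snapshot our gateway read first

theirs = dict(original)
theirs["b.txt"] = "cap-b"            # a concurrent writer adds b.txt, causing
                                     # our first write attempt to collide

def modify(children):
    """Our local read-modify-write step: add c.txt."""
    children = dict(children)
    children["c.txt"] = "cap-c"
    return children

# Hazardous retry: re-apply the modification to the ORIGINAL snapshot,
# silently discarding the concurrent writer's b.txt entry.
after_buggy_retry = modify(original)

# Safe retry: re-read the latest contents, then re-apply the modification.
after_safe_retry = modify(theirs)

print("retry from stale snapshot:", sorted(after_buggy_retry))  # no 'b.txt'
print("retry from latest read:  ", sorted(after_safe_retry))    # keeps 'b.txt'
```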
We agreed at the Tahoe-LAFS Summit to remove the "retry" feature instead of fixing it, for v1.11.0. I have a branch which does this: https://github.com/zooko/tahoe-lafs/commits/1670-KeyError-in-mutable-read-modify-write
This branch is not yet ready to merge into trunk because:
• Although it has unit tests, I'm not yet sure they are correct tests.
• The history of patches needs to be rewritten into a nice readable story.
We have no fix for this, nor do we understand it well.
Milestone renamed
renaming milestone
Moving open issues out of closed milestones.
Ticket retargeted after milestone closed