Prevent uncoordinated writes locally. #265
Tahoe does not attempt to solve uncoordinated writes, generally. However, it could make it hard for a single user, using a single node, to trigger uncoordinated writes, by detecting this behavior locally.
We just had a conversation with MikeB and Peter about how the allmydata.com 3.0 native Windows integration performs some uncoordinated writes, because links of child directories and unlinks of any kind are issued without coordination.
It would be good to serialize those to prevent errors, but it would require a bit of restructuring of MikeB's Windows integration code.
On the way out to the taxi, Nathan reminded me of this idea -- that the Tahoe node could serialize those for you. We could probably do it in Tahoe more easily than Mike could do it in the Windows integration.
The first step, of course, would be to write unit tests which issue two successive updates through the wapi to one mutable file, arrange (by using a Fake object for the mutable file) that the first update doesn't complete before the second one is issued, and then make sure that the wapi layer doesn't issue the second update to the Fake mutable file object until after the first update finishes.
Oh wait, even better is to do this serialization in the [MutableFileNode object]source:src/allmydata/mutable.py@2339#L1607, and for the wapi layer simply to guarantee that whenever you make a call to a mutable file or directory, if a MutableFileNode object or Dirnode object for that file or directory is already in memory, it uses that object rather than creating a new one pointing to the same mutable file or directory.
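As a rough illustration of what's being proposed here (not Tahoe's actual code -- the names SerializedMutableNode, _do_serialized, and FakeMutableFile are invented for this sketch), the node could chain each requested operation onto a Deferred "tail", and the unit test described above would then assert that a second update is not issued to a fake mutable file until the first one has finished:

```python
from twisted.internet import defer
from twisted.trial import unittest


class SerializedMutableNode(object):
    """Illustrative sketch: run operations against one mutable file
    strictly one at a time, in the order they were requested."""

    def __init__(self, backend):
        self._backend = backend
        # Tail of the operation queue; starts out already fired.
        self._serializer = defer.succeed(None)

    def _do_serialized(self, operation, *args, **kwargs):
        result = defer.Deferred()

        def _run(_ignored):
            d = defer.maybeDeferred(operation, *args, **kwargs)
            d.chainDeferred(result)  # deliver success/failure to the caller
            return d                 # the next queued operation waits on this

        self._serializer.addCallback(_run)
        return result

    def update(self, new_contents):
        return self._do_serialized(self._backend.update, new_contents)


class FakeMutableFile(object):
    """Records update calls; each update finishes only when told to."""

    def __init__(self):
        self.started = []   # contents passed to update(), in start order
        self.pending = []   # Deferreds to fire to let each update finish

    def update(self, new_contents):
        self.started.append(new_contents)
        d = defer.Deferred()
        self.pending.append(d)
        return d


class SerializationTest(unittest.TestCase):
    def test_second_update_waits_for_first(self):
        fake = FakeMutableFile()
        node = SerializedMutableNode(fake)
        d1 = node.update("version 1")
        d2 = node.update("version 2")
        # Both updates were requested, but only the first has reached
        # the (fake) mutable file so far.
        self.failUnlessEqual(fake.started, ["version 1"])
        # Let the first update finish; the second should then be issued.
        fake.pending[0].callback(None)
        self.failUnlessEqual(fake.started, ["version 1", "version 2"])
        fake.pending[1].callback(None)
        return defer.gatherResults([d1, d2])
```

The fake never completes an update until the test fires the corresponding Deferred, which is what lets the test observe the ordering directly rather than relying on timing.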
Yeah, I've wondered if the Client should keep a weak-value-dict that maps read-cap to filenode, so it could make a promise that there will never be more than a single filenode for any given mutable file. Then the MutableFileNode.update method would need internal locking: only one update may be active at a time.
The MutableFileNode object now does internal serialization: if a single instance is asked to perform an operation while it's in the middle of another operation, the second will wait until the first has finished. To make this useful for external webapi callers, we still need to implement the weak-value-dict scheme. Basically client.create_node_from_uri() should look up the URI in a weak-value-dict, and return the value if found. If not, it should create a new node, then add it to the weak-value-dict before returning it.
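A minimal sketch of that weak-value-dict scheme, assuming Python's weakref module; only create_node_from_uri comes from the comment above, while the _node_cache attribute and the _make_new_node helper are invented here for illustration:

```python
import weakref


class Client(object):
    """Illustrative sketch: hand out at most one live node object per cap."""

    def __init__(self):
        # Maps URI/cap string -> live node object. Entries disappear
        # automatically once nobody else holds a reference to the node.
        self._node_cache = weakref.WeakValueDictionary()

    def create_node_from_uri(self, uri):
        node = self._node_cache.get(uri)
        if node is None:
            node = self._make_new_node(uri)
            # Add it to the cache before returning, so every later caller
            # asking for the same cap gets this same instance.
            self._node_cache[uri] = node
        return node

    def _make_new_node(self, uri):
        # Placeholder for whatever actually builds a MutableFileNode,
        # DirectoryNode, etc. from the given cap.
        raise NotImplementedError
```

Because the dict holds its values weakly, it never keeps a node alive on its own; it only guarantees that, among the node objects currently alive in the process, there is at most one per cap.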
It will still be possible to get, say, a write-node and a read-node to the same file in the same process, but these won't collide with each other in bad ways (only annoying ones: a read could fail because the write-node modified the shares as the read was trying to fetch them).
see also #391
Milestone 1.0.1 deleted
The weak-value-dict is now present, added in changeset:26187bfc8166a868.