uploader confuses self-write-dedup with "server is full" #2110
Reference: tahoe-lafs/trac-2024-07-25#2110
I just got this on my LeastAuthority.com S4 server, and then verified by code inspection that it is also present in trunk. I haven't yet checked whether it is also present in 1382-rewrite-2.

If the client asks a server to allocate a bucket, and the server writes back saying that it won't allocate that bucket and, by the way, that it doesn't already have that bucket available, the client reasonably enough concludes that the server is full. If the upload then fails, it reports that server as having been full.

This gave me a bit of a start, since I couldn't figure out how Amazon S3 could be "full", so I thought there was a bug in my S4 service. ☺ But the truth appears to be that I was already uploading the same (immutable) file and that upload was still in progress, so the server was unwilling to start a new upload and also unwilling/unable to let me do a download.
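For illustration, here is a minimal sketch (not the actual trunk code; the function and field names are approximate) of how the client-side interpretation of an allocation reply ends up conflating the two cases, assuming the server answers with a pair of share-number sets, `alreadygot` and `allocated`:

```python
# Illustrative sketch only -- not the real peer-selection code.
# Assumes the server's reply to an allocation request is a pair of
# share-number sets: (alreadygot, allocated).

def interpret_allocation(alreadygot, allocated):
    """Classify a storage server's response to an allocation request."""
    if allocated:
        return "accepted"          # server will take at least some shares
    if alreadygot:
        return "already-present"   # dedup: server already holds the shares
    # Nothing allocated and nothing already present: the client currently
    # assumes the server is full -- but the very same reply is produced
    # when another upload of the same immutable file is still in progress.
    return "assumed-full"
```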
I suspect the only real solution to this is going to be to extend the "get_bucket"/"allocate_buckets" protocol for immutable files so the server can mention to the client "... and by the way the reason that I won't take it and also won't give it to you is that there is a partial upload of that same file sitting here. So you might want to report to your human that they could try again in a few minutes and see if that one has finished".
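As a rough sketch of what such an extended reply could look like (the field names here are invented for illustration, not a proposed wire format), the server could report which shares are mid-upload by someone else and whether it is genuinely out of space:

```python
# Hypothetical extended allocation reply; field names are invented.
from dataclasses import dataclass, field
from typing import Set

@dataclass
class AllocationReply:
    alreadygot: Set[int] = field(default_factory=set)    # shares fully stored
    allocated: Set[int] = field(default_factory=set)     # shares the server will accept now
    in_progress: Set[int] = field(default_factory=set)   # shares another uploader is writing
    refused_full: bool = False                            # server genuinely out of space

def describe(reply: AllocationReply) -> str:
    """Turn the reply into a human-readable reason for the uploader's report."""
    if reply.in_progress and not reply.allocated:
        return ("another upload of this file is already in progress; "
                "try again in a few minutes")
    if reply.refused_full:
        return "server is out of space"
    return "upload can proceed"
```

With something along these lines, the uploader's failure report could name the real reason instead of guessing "full".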
We should certainly do better error reporting, but ideally both uploads should succeed. That seems a bit complicated to implement with the current protocol, though.
Replying to daira:
That does sound desirable but complicated to me. I guess it would require the second uploader (the one who is not actually transferring the ciphertext) to check back later and see whether the first uploader finished transferring the correct ciphertext. There might also need to be some kind of timeout/conflict/retry/2PC protocol in case the second uploader decides that the first alleged uploader is unacceptably slow, stalled, or DoS'ing. How about we add this issue to #1851?
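A rough sketch of that "check back later" behaviour, with `check_shares()` and `upload_shares()` as hypothetical placeholders for querying the server and re-running the upload:

```python
# Rough sketch of the second uploader's "check back later" loop.
# check_shares() and upload_shares() are hypothetical placeholders.
import time

def wait_for_other_uploader(check_shares, upload_shares,
                            poll_interval=30, max_wait=600):
    """Poll for the other uploader's shares; take over if it stalls."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if check_shares():          # the other uploader's shares have arrived
            return "deduplicated"
        time.sleep(poll_interval)
    # The other uploader appears stalled or gone: take over the upload.
    upload_shares()
    return "uploaded"
```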
So this ticket is just about error reporting: report "your upload didn't happen because the server says there is another already-initiated but not-yet-completed upload of the same file" separately from "the server was full and refused to start your upload".
Replying to [zooko] comment:2:
+1
Done: comment:90131