don't write corrupt >12GiB files #439
I suspect that an attempt to write files that are larger than 12GiB will result in a corrupted file, as the "self._data_size" (i.e. share size) in WriteBucketProxy overflows the 4-byte space reserved for it. #346 is about removing that limit, but in the interim we need an assert or a precondition or something that will make sure we don't appear to succeed when we in fact fail.
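To make the failure mode concrete, here is a small illustration (not Tahoe-LAFS code) of what goes wrong when a value larger than 2**32 - 1 is forced into a 4-byte big-endian field like the ones WriteBucketProxy writes; the struct format and the silent-truncation behaviour of older Python 2 releases are assumptions here:

```python
import struct

# Illustration only (not Tahoe-LAFS code): a share size that needs more
# than 32 bits cannot be represented in a 4-byte big-endian field.
share_size = 13 * 2**30            # ~13 GiB, larger than 2**32 - 1

try:
    struct.pack(">L", share_size)
except struct.error as e:
    # Modern Python refuses outright; some old Python 2 versions would
    # silently truncate instead, which is how a share could get written
    # with a size field that lies about the data that follows it.
    print("cannot pack into 4 bytes:", e)

# What silent truncation looks like: only the low 32 bits survive.
print("truncated field would claim:", share_size & 0xFFFFFFFF, "bytes")
```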
I'm marking this as critical because it can cause data loss: you think you've uploaded the file, you get a read-cap for it, but then you can't read it back.
A precondition() in WriteBucketProxy.__init__ would be sufficient.
Fixed, by changeset:8c37b8e3af2f4d1b. I'm not sure exactly what the limit is, but the new FileTooLargeError will be raised if the shares are too big for any of the fields to fit in their 4-byte containers (data_size or any of the offsets).
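For reference, the shape of the check is roughly this; a sketch only, not the actual changeset (the real __init__ takes more arguments, and the real bound also covers the offset fields, not just data_size):

```python
class FileTooLargeError(Exception):
    pass

# Sketch of the kind of guard described above, not the actual changeset:
# every size/offset field in the share header is a 4-byte big-endian
# integer, so nothing larger than 2**32 - 1 may be written into one.
MAX_FIELD_VALUE = 2**32 - 1

class WriteBucketProxy(object):
    def __init__(self, data_size):   # the real __init__ takes more arguments
        if data_size > MAX_FIELD_VALUE:
            raise FileTooLargeError(
                "share data_size %d does not fit in a 4-byte field" % data_size)
        self._data_size = data_size
```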
I think the actual size limit (i.e. the largest file you can upload) for k=3 is 12875464371 bytes. This is about 9.4MB short of 12GiB. The new assertion rejects all files which are larger than this. I don't actually know if you could upload a file of this size (is there some other limitation lurking in there?).
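The arithmetic behind the "9.4MB short" figure checks out; this is illustrative only, using nothing but the quoted limit:

```python
# Back-of-envelope check of the numbers quoted above (illustrative only):
limit = 12875464371          # quoted maximum file size for k=3
print(12 * 2**30 - limit)    # 9437517 bytes, i.e. roughly 9.4 MB short of 12GiB
print(limit // 3)            # 4291821457 bytes of data per share, which sits
                             # a few MB under the 2**32 - 1 field limit,
                             # presumably leaving room for hashes and offsets
```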
We still need something to prevent a client of the helper from trying to upload something this large, since it will just be a waste of time: the check that changeset:8c37b8e3af2f4d1b adds only covers native uploads, so for helper uploads the error won't be raised until after the client has transferred all of the ciphertext to the helper.
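Whatever form that takes, the idea would be a cheap size check on the client side before any ciphertext is pushed to the helper, something like the following; this is hypothetical, not an existing Tahoe-LAFS API, and the constant would really have to be derived from the encoding parameters:

```python
class FileTooLargeError(Exception):
    pass

# Hypothetical client-side guard, not an existing Tahoe-LAFS API: refuse
# oversized uploads before contacting the helper, so the failure is
# immediate instead of arriving after gigabytes of wasted ciphertext transfer.
MAX_IMMUTABLE_FILE_SIZE = 12875464371    # the k=3 limit quoted above; really a
                                         # function of the encoding parameters

def check_uploadable(file_size):
    if file_size > MAX_IMMUTABLE_FILE_SIZE:
        raise FileTooLargeError(
            "%d bytes exceeds the current immutable-file size limit of %d"
            % (file_size, MAX_IMMUTABLE_FILE_SIZE))
```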