don't write corrupt >12GiB files #439

Closed
opened 2008-06-02 23:21:05 +00:00 by warner · 1 comment

I suspect that an attempt to write files that are larger than 12GiB will result in a corrupted file, as the "self._data_size" (i.e. share size) in WriteBucketProxy overflows the 4-byte space reserved for it. #346 is about removing that limit, but in the interim we need an assert or a precondition or something that will make sure we don't appear to succeed when we in fact fail.
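
For illustration, a minimal sketch (not Tahoe code) of the wraparound: a value too big for an unsigned 4-byte field gets stored modulo 2**32, so the share header silently records a bogus size. Current struct implementations refuse outright, but older ones truncated with only a DeprecationWarning, which is how the corruption would go unnoticed:

```python
import struct

data_size = 13 * 2**30        # a hypothetical ~13 GiB value
wrapped = data_size % 2**32   # what a silent 4-byte wraparound would store
print(wrapped)                # 1073741824 (1 GiB): the field lies about the size

try:
    struct.pack(">L", data_size)  # ">L" is an unsigned 4-byte big-endian field
except struct.error as e:
    print(e)                      # current Pythons raise instead of truncating
```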

I'm marking this as critical because it can cause data loss: you think you've uploaded the file, you get a read-cap for it, but then you can't read it back.

A precondition() in WriteBucketProxy.__init__ would be sufficient.
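
Something along these lines would do. This is only a sketch: the constructor signature is hypothetical, and Tahoe has its own precondition() helper in allmydata.util.assertutil, approximated here with a local stand-in:

```python
MAX_UINT32 = 2**32 - 1

def precondition(expr, message):
    # stand-in for Tahoe's allmydata.util.assertutil.precondition
    if not expr:
        raise AssertionError(message)

class WriteBucketProxy:
    def __init__(self, data_size):
        # refuse any share whose size field cannot be stored in 4 bytes,
        # rather than silently writing a wrapped-around value
        precondition(data_size <= MAX_UINT32,
                     "data_size %d overflows the 4-byte share field" % data_size)
        self._data_size = data_size
```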

warner added the
code-encoding
critical
defect
1.0.0
labels 2008-06-02 23:21:05 +00:00
warner added this to the 1.1.0 milestone 2008-06-02 23:21:05 +00:00
warner self-assigned this 2008-06-02 23:21:05 +00:00
Author

Fixed by changeset:8c37b8e3af2f4d1b. I'm not sure exactly what the limit is, but the new FileTooLargeError will be raised if the shares are too big for any of the fields to fit in their 4-byte containers (data_size or any of the offsets).

I think the actual size limit (i.e. the largest file you can upload) for k=3 is 12875464371 bytes, which is about 9.4MB short of 12GiB. The new assertion rejects all files larger than this. I don't actually know whether you could successfully upload a file of exactly this size (is there some other limitation lurking in there?).
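
Back-of-the-envelope arithmetic, assuming (a guess at the layout, not a reading of the code) that each share must fit in 2**32 - 1 bytes including its hash-tree overhead, and that the file splits into k payload shares:

```python
limit = 12875464371           # quoted maximum file size for k=3
per_share = limit // 3        # 4291821457 bytes of payload per share
overhead = 2**32 - per_share  # 3145839 bytes (~3 MiB) left for per-share overhead
shortfall = 12 * 2**30 - limit
print(per_share, overhead, shortfall)  # shortfall is 9437517 bytes, the ~9.4 MB
```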

We still need something to prevent a client of the helper from trying to upload something this large, since it will just be a waste of time: the check that changeset:8c37b8e3af2f4d1b adds covers only native uploads, so for helper-mediated uploads the error will be raised by the helper only after the client has transferred all the ciphertext over.
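
A hypothetical early rejection on the helper's upload path might look like this (sketch only; the function name and the point where the size is known are assumptions, not Tahoe's actual API):

```python
class FileTooLargeError(Exception):
    pass

MAX_FILE_SIZE = 12875464371   # the k=3 limit above

def helper_accept_upload(announced_size):
    # fail fast, before any ciphertext is transferred, instead of raising
    # only after the whole file has been pushed to the helper
    if announced_size > MAX_FILE_SIZE:
        raise FileTooLargeError("%d bytes exceeds the %d-byte limit"
                                % (announced_size, MAX_FILE_SIZE))
```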

warner added the
fixed
label 2008-06-03 00:14:20 +00:00

Reference: tahoe-lafs/trac-2024-07-25#439