let client specify the encryption key #684
Per this tahoe-dev discussion, Shawn Willden has submitted a patch to allow the client to choose the encryption key for an immutable file upload. This is a very dangerous feature: Tahoe doesn't use unique IVs under the hood, so you lose confidentiality if you ever ask Tahoe to use the same encryption key twice.
We could make this less dangerous by using random IVs.
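To make the danger concrete, here is a minimal sketch (not Tahoe code; it uses the `cryptography` package and assumes AES-CTR with a fixed all-zero counter, which is the mode Tahoe uses for immutable files) of why reusing a key without unique IVs leaks plaintext:

```python
# Minimal sketch (not Tahoe code): with AES-CTR and a fixed zero IV, the
# keystream depends only on the key, so using one key twice leaks data.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_fixed_iv(key: bytes, plaintext: bytes) -> bytes:
    # Fixed all-zero initial counter block: same key => same keystream.
    enc = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
    return enc.update(plaintext) + enc.finalize()

key = b"\x01" * 16
pt1, pt2 = b"attack at dawn!!", b"retreat at nine!"
ct1, ct2 = encrypt_fixed_iv(key, pt1), encrypt_fixed_iv(key, pt2)

# The keystream cancels out: XOR of ciphertexts == XOR of plaintexts, so
# an attacker holding both ciphertexts learns their relationship for free.
assert bytes(a ^ b for a, b in zip(ct1, ct2)) == bytes(a ^ b for a, b in zip(pt1, pt2))
```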
Anyway, it is unfortunate that we didn't pay attention to Shawn's patch until now, simply because there wasn't a ticket for it. So now there is.
This is one of the requirements to implement #320 (add streaming upload to HTTP interface), which is a ticket that I would love to see fixed.
I think it's a little worse than just loss of confidentiality. Since the storage ID is derived from the key, won't adding two files with the same key cause the first one to be lost? Or will the storage servers just refuse to accept another share of the same SID? If that's the case, and new servers have been added to the grid, it's possible that shares of the second file could be stored and then when the client tries to download the file it gets a mixture of shares from the two files... essentially losing both.
Clients should only set their own encryption key if they use some other mechanism to ensure that a given encryption key is only used once.
Re-using a key (which is equivalent to re-using a storage-index) will probably cause the second file to get lost. The uploading code will ask the storage servers to accept a share for storage-index XYZ, the server will say "I already have a share for that", the uploader will say "oh, ok, I'll use the existing share instead", and the result will be that the uploader will construct a filecap that won't be downloadable, and the second file will be lost.
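A rough sketch of the key-to-storage-index relationship described here (the real derivation lives in Tahoe's hashutil module; the tag string and truncation below are illustrative, not Tahoe's exact format):

```python
# Rough sketch of the key -> storage-index relationship; tag string and
# truncation are illustrative, not Tahoe's exact derivation.
import hashlib

def storage_index_from_key(key: bytes) -> bytes:
    # The storage index is a truncated tagged hash of the encryption key
    # alone -- the plaintext never enters the derivation, so reusing a key
    # on a different file produces the very same storage index.
    tag = b"allmydata_immutable_key_to_storage_index_v1+"
    return hashlib.sha256(tag + key).digest()[:16]

key = b"\x01" * 16
si_for_file_a = storage_index_from_key(key)  # first upload
si_for_file_b = storage_index_from_key(key)  # different file, same key
assert si_for_file_a == si_for_file_b  # servers think they already hold this file
```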
Actually, in the current release, this depends on whether any servers have been added since the first upload. If the first part of the permuted list is still the same, then the uploader will re-use the existing shares. If the uploader sees a different peer list, then it might place new shares anyway, because our uploader code isn't very smart about avoiding duplicate shares yet (something that needs to improve to make the Repairer work better). In this case, you could wind up with some shares for the first version and some shares for the second version. In theory, both versions might wind up being downloadable: someone using the first filecap would see good shares and corrupt shares, and someone using the second filecap would see corrupt shares and good shares. The wrinkle is that the downloader is not yet clever enough to switch to good shares once it sees corruption, so the most likely effect is that both versions get bad-hash errors on download.
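A hedged sketch of the permuted-list effect just described (the hash and ordering details below are illustrative, not the exact Tahoe-2 peer-selection algorithm): each server's position in the ring depends on both the server ID and the storage index, so adding servers can change which servers come first for a given storage index.

```python
# Illustrative sketch of permuted-list peer selection, not Tahoe's exact
# algorithm: servers are ordered by a hash of (storage_index, server_id).
import hashlib

def permuted_servers(server_ids: list[bytes], storage_index: bytes) -> list[bytes]:
    return sorted(server_ids,
                  key=lambda sid: hashlib.sha256(storage_index + sid).digest())

servers_then = [b"server-A", b"server-B", b"server-C"]
servers_now = servers_then + [b"server-D", b"server-E"]  # the grid grew

si = b"\x42" * 16
# The head of the permuted list can differ once new servers join the ring,
# so a second upload with the same storage index may never even contact
# the servers already holding the first file's shares.
print(permuted_servers(servers_then, si)[:2])
print(permuted_servers(servers_now, si)[:2])
```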
Eventually, the uploader code needs to be improved: if a server claims to have a share, the uploader should check to see that the share looks usable. The simplest check is to compare file lengths at the start of upload and UEB hashes at the end of upload; a more complete check would be to download the whole share and compare it to the one being generated, or challenge the server to return a keyed hash of the share and do the same comparison.
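A sketch of what such a usability check might look like; the server methods here (get_share_size, get_ueb_hash) are hypothetical stand-ins, not Tahoe's actual remote storage interface:

```python
# Sketch of an "is the existing share usable?" check; get_share_size and
# get_ueb_hash are hypothetical stand-ins for a remote storage API.
def existing_share_usable(server, storage_index, expected_size, expected_ueb_hash):
    # Cheap check at the start of upload: a share of the wrong length
    # cannot belong to the file currently being uploaded.
    if server.get_share_size(storage_index) != expected_size:
        return False
    # Check at the end of upload: the URI-extension-block hash commits to
    # the entire share, so a mismatch means the share is for another file.
    return server.get_ueb_hash(storage_index) == expected_ueb_hash
```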
But yeah, in any case, it becomes important for the uploader to avoid re-using an encryption key. At best it will cause confusion.
Attachment client-keys.dpatch (191555 bytes) added
Shawn's client-choose-encryption-key patch
Attachment new-684.diff (19566 bytes) added
fix up patch to apply against trunk as of 15-May-2009
A few style questions:

Otherwise, it looks pretty good... I'll probably apply it to trunk soon.
Attachment new2-684.diff (19673 bytes) added
new version of the patch, with minor stylistic changes applied
See also [the CodingStandards page](wiki/CodingStandards).
How about this for the warning paragraph:
"Be VERY careful that you know what you're doing if you use this feature. Choosing bad keys could compromise the security of your files. Also, a key MUST NOT be used more than once. If the same key is ever used more than once -- whether on more than one file or even on the same file with more than one set of FEC parameters (K, N, segsize) -- it will expose the cleartext of the files it was used on, as well as causing uploads of those files to fail even while indicating successful upload."
Per [this discussion](http://allmydata.org/pipermail/tahoe-dev/2009-May/001817.html), we're not going to support the client specifying the encryption key for now. Thanks for the patch and the requirements discussion, Shawn.