re-key (write-enabler) protocol #1426
Capturing some discussion from the 2011 Tahoe Summit:
Share migration (moving shares from one server to another by copying the
backing store from one drive to another) is currently limited by the
embedded "write-enablers": secret tokens, shared between writecap
holders and each server, which clients must present to authorize changes
to shares for mutable files.
The write-enabler is basically HASH(writecap + serverid): each server
gets a different value, which means that server 1 cannot use its copy
to cause damage to a share (for the same file) on server 2. As a result,
when a share is moved from server 1 to server 2, the embedded
write-enabler will be wrong, and writecap holders will no longer be able
to modify the share. This is a drag.
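A rough sketch of that per-server derivation (the function name, tag string,
and hash choice are illustrative, not the actual Tahoe-LAFS code):

```python
import hashlib

def derive_write_enabler(writecap_secret: bytes, serverid: bytes) -> bytes:
    # Each server gets HASH(writecap + serverid), so one server's copy of the
    # write-enabler is useless against another server's share.
    return hashlib.sha256(b"tahoe-write-enabler:" + writecap_secret + serverid).digest()

we_server1 = derive_write_enabler(b"example-writecap-secret", b"serverid-1")
we_server2 = derive_write_enabler(b"example-writecap-secret", b"serverid-2")
assert we_server1 != we_server2  # per-server values differ
```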
So we want a "re-key" protocol, to update the write-enabler after the
share has been migrated. The writecap signing key is normally used to
validate a signature on the share's 'roothash', telling readcap holders
that the share was created by an actual writecap-holder. The basic idea
is that the re-keying client uses the writecap key to sign a "please
change the write-enabler to XYZ" message, and delivers this over a
confidential channel to the share's new home. The server extracts the
public verifying key from the share to verify this signature. This tells
the server that the re-key request was approved by someone who knows the
writecap.
The actual message needs to be a writecap-key signature over (tag,
new-write-enabler, storage-index, serverid), and servers must only accept
requests that have a matching serverid
(otherwise once server 1 receives a client's re-key message, it could
echo it to server 2 and gain control over the share). The "tag" value
prevents this signature from being confused with a normal share
signature.
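A minimal sketch of that exchange, using Ed25519 for brevity (the real
mutable-file signing key is RSA; the tag string and function names are
assumptions for illustration):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

TAG = b"tahoe-rekey-v1:"  # illustrative tag; keeps this distinct from share signatures

def make_rekey_request(writecap_key: Ed25519PrivateKey, new_we: bytes,
                       storage_index: bytes, serverid: bytes):
    # Client side: sign "please change the write-enabler to new_we" with the
    # writecap key, bound to one storage index and one server.
    message = TAG + new_we + storage_index + serverid
    return message, writecap_key.sign(message)

def server_accepts_rekey(verifying_key: Ed25519PublicKey, message: bytes,
                         signature: bytes, my_serverid: bytes) -> bool:
    # Server side: refuse requests aimed at some other server, then check the
    # signature with the public key extracted from the share.
    if not message.endswith(my_serverid):
        return False
    try:
        verifying_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False
```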
Mutation requests must be accompanied by the correct write-enabler. If
the WE is wrong, the server should return a distinctive error message,
and the client should perform the re-key protocol, then try the mutation
again. This incurs a CPU cost of N pubkey signatures for the client (one
per server) and one pubkey verification on each server. But it only
needs to be done once per file per migration, and can be done lazily as
the file is being modified anyways, so the delay is not likely to be
significant.
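A client-side sketch of that lazy flow, reusing the illustrative helpers from
the earlier sketches (the server methods, the error type, and the argument
names are all assumptions, not the real storage-server API):

```python
class BadWriteEnablerError(Exception):
    """Hypothetical distinctive error for a mutation with the wrong WE."""

def modify_mutable_share(server, storage_index, new_contents,
                         writecap_key, writecap_secret, serverid):
    my_we = derive_write_enabler(writecap_secret, serverid)
    try:
        server.mutate(storage_index, new_contents, write_enabler=my_we)
    except BadWriteEnablerError:
        # One signature per server for the client, one verification on the
        # server; only needed once per file per migration.
        message, sig = make_rekey_request(writecap_key, my_we,
                                          storage_index, serverid)
        server.rekey(storage_index, message, sig)
        server.mutate(storage_index, new_contents, write_enabler=my_we)
```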
The original mutable-share creation message can either provide the
initial write-enabler, or it can leave it blank (meaning all writes
should be denied until the share is re-keyed). When a rebalancer runs
without a writecap, or when a readcap-only repairer creates a new share,
they can use the blank write-enabler, and clients will re-key as soon as
they try to modify the un-keyed share. (readcap-only repairers who want
to replace an existing share are out of luck: only a
validate-the-whole-share form of mutation-authorization can correctly
allow mutations without proof of writecap-ownership).
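A server-side sketch of the "blank means deny until re-keyed" rule, reusing
the hypothetical BadWriteEnablerError from the previous sketch:

```python
BLANK_WE = b""  # illustrative encoding of "no write-enabler set"

def check_write_enabler(stored_we: bytes, presented_we: bytes) -> None:
    # A blank write-enabler (as stored by a readcap-only repairer or a
    # rebalancer without the writecap) never matches, so every write is
    # refused until some writecap holder re-keys the share.
    if stored_we == BLANK_WE or stored_we != presented_we:
        raise BadWriteEnablerError("write-enabler mismatch; re-key required")
```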
Allowing write-enablers to be updated also reduces our dependence upon
long-term stable serverids. If we switched from Foolscap tubid-based
serverids to e.g. ECDSA pubkey-based serverids, all the servers'
write-enablers would be invalidated after upgrading to the new code. But
if all clients can re-key the WEs on demand, this is merely a
performance hit, not a correctness failure.
Replying to warner:
As a small optimization, retrieving a share could also return a bit saying
whether the share has a non-blank write-enabler. Since most updates of
mutable objects are read-then-write, this means that the client will
usually know whether it needs to re-key without incurring an extra round-trip.
(The server could alternatively return a one-way hash of the write-enabler,
to allow the client to determine whether it is correct, but that probably
isn't a significant win. It's sufficient to optimize out the round-trip
in the common case where the share has only been written by honest clients.)
If new mutable objects always start with a blank write-enabler, then the extra
cost of the rekeying protocol (N signatures by the client and a verification
by each server) will only be paid for objects that are actually modified, i.e.
when the second version is written.
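A small sketch of that optimization (names and the tag are illustrative): the
server piggybacks a one-way hash of its stored write-enabler on read
responses, and the client compares it to the value it would present.

```python
import hashlib

def we_fingerprint(write_enabler: bytes) -> bytes:
    # One-way hash the server could return alongside a read response.
    return hashlib.sha256(b"tahoe-we-fingerprint:" + write_enabler).digest()

def needs_rekey(fingerprint_from_server: bytes, my_write_enabler: bytes) -> bool:
    # Client-side check during the usual read-then-write cycle: re-key only
    # if the server's stored WE differs from the one this client would use.
    return fingerprint_from_server != we_fingerprint(my_write_enabler)
```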
Hm, is it worth adding protection against a replay attack? This attack would be a denial of service in which the attacker stores an old
writecap.key.sign([new-write-enabler, storage-index, serverid], tag)
message, and every time you try to set a new write-enabler the attacker replays this old message to reset it. One good defense would be to include the one-way hash of the previous write-enabler in the message. As davidsarah mentioned in comment:1, it might be convenient anyway for the server to send this one-way hash of the current write-enabler to the client, in order to inform the client about whether they need to rekey.
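A sketch of that defense under the same illustrative naming: the signed
re-key message additionally binds the hash of the write-enabler currently
stored on the server, so a replayed older message is rejected even though
its signature is valid.

```python
import hashlib

TAG_V2 = b"tahoe-rekey-v2:"  # illustrative tag for the extended message

def we_hash(write_enabler: bytes) -> bytes:
    return hashlib.sha256(b"tahoe-we-hash:" + write_enabler).digest()

def make_rekey_request_v2(writecap_key, new_we: bytes, current_we_hash: bytes,
                          storage_index: bytes, serverid: bytes):
    # current_we_hash is the one-way hash of the WE the server reported holding.
    message = TAG_V2 + new_we + current_we_hash + storage_index + serverid
    return message, writecap_key.sign(message)

def server_accepts_rekey_v2(offered_current_we_hash: bytes, stored_we: bytes) -> bool:
    # A replayed (older) request carries the hash of a WE the server no
    # longer stores, so it fails this freshness check.
    return offered_current_we_hash == we_hash(stored_we)
```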
Replying to zooko:
The impact of that is an extra signature and round-trip (for each server whose write-enabler has been attacked, in parallel) when an honest client later tries to modify the file. Is it worth the attacker's cost (a round-trip to set the enabler) to perform that attack?
Oh, good idea.
Yes, I now think we should do this.
Replying to davidsarah (comment:3):
I find it difficult to evaluate cost/benefit for denial-of-service attacks and defenses. I guess the question is how much is it worth to the attacker to deny the user the ability to update this file. Obviously we have no idea--that depends on the attacker, the user, and what's in that file!
A better question--a tighter bound--is whether this defense would increase the cost of this attack so that it is not the cheapest way to deny service. For example, the attacker could accomplish the same goal by preventing all of the user's packets from reaching the servers, or by compromising the servers. Would this attack be cheaper than that?
In at least some cases, injecting false packets is a lot cheaper (more doable) than censoring out all real packets or compromising all servers. In that case, this attack would be the cheapest way to accomplish the goal and for that case this defense would help.
Cool. :-)
Replying to zooko (comment:4):
But it isn't doing that, it's only forcing the user to perform another signature and round-trip.
That would achieve a stronger denial of service.
Replying to davidsarah (comment:3):
Wait; for the above defence, the client would need to know the hash of the previous write-enabler in order to calculate the current one. So the server would have to store the previous hash and send both hashes. This is getting a bit complicated -- is it really important to prevent this attack?