escalation of authority from knowing a storage index to being able to delete corresponding shares #1528
The Tahoe-LAFS core team has discovered a bug, present in Tahoe-LAFS v1.8.2 and all earlier versions back to Tahoe-LAFS v1.3.0, that could in some cases allow users to delete immutable files without authorization.
In Tahoe-LAFS, each file is encoded into a redundant set of "shares" (like in RAID-5 or RAID-6), and each share is stored on a different server. There is a secret string called the "cancellation secret" which is stored on the server by being appended to the end of the share data. The bug is that the server allows a client to read past the end of the share data and thus learn the cancellation secret. A client that knows the cancellation secret can use it to cause that server to delete the shares it stores of that file.
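To make the mechanism concrete, here is a minimal, self-contained sketch. The header size, lease layout, and function name are assumptions for illustration only, not the real Tahoe-LAFS share-file format; the point is just that a read whose range runs past the share data returns whatever the server appended after it:

```python
import io, os

HEADER = b"\x00" * 12                      # assumed fixed-size header
share_data = b"S" * 1000                   # the encoded share itself
cancel_secret = os.urandom(32)             # stored in a lease record after the data

share_file = io.BytesIO(HEADER + share_data + cancel_secret)

def read_share_data(f, offset, length):
    # Vulnerable behavior: nothing stops offset + length from reaching
    # past the end of the share data into the lease region.
    f.seek(len(HEADER) + offset)
    return f.read(length)

leaked = read_share_data(share_file, len(share_data), 32)
assert leaked == cancel_secret             # the client now holds the cancel secret
```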
We have prepared a set of patches that do three things:
Fix the bounds violation in reads of immutable files that allowed clients to learn the cancellation secrets (see the sketch after this list).
Remove the function that takes a cancellation secret and deletes shares. This function (named "remote_cancel_lease") was not actually used, as all users currently rely on a different mechanism for deleting unused data (a garbage collection mechanism in which unused shares get deleted by the server once no client has renewed its lease on them in more than a month).
Fix some similar bounds violations in mutable files that could potentially lead to a similar vulnerability. This one is probably not a concern in practice, because it doesn't arise unless the legitimate, authorized client deliberately writes a "hole" into the mutable file (by seeking past the end of the current data and not writing over all the bytes thus uncovered). No extant version of Tahoe-LAFS does this, so presumably no legitimate user would be exposed to that vulnerability.
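For the first item, the fix amounts to a bounds clamp on the server's read path. This is only a sketch under assumed names (data_start, data_length); the actual patches are in the changesets referenced below:

```python
def read_share_data(f, data_start, data_length, offset, length):
    # Clamp the requested range so a read can never extend past the end of
    # the share data into the lease records (and their secrets) that follow.
    if offset >= data_length or length <= 0:
        return b""
    length = min(length, data_length - offset)
    f.seek(data_start + offset)
    return f.read(length)
```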
known_issues.rst for 1.8.3 (source:1.8.3/docs/known_issues.rst#unauthorized-deletion-of-an-immutable-file-by-its-storage-index) has more details, but I'll paste the most relevant bit here:
This vulnerability does not enable anyone to read file contents without authorization (confidentiality), nor to change the contents of a file (integrity).
A person could learn the storage index of a file in several ways:
By being granted the authority to read the immutable file, i.e. by being granted a read capability to the file. They can determine the file's storage index from its read capability (see the sketch after this list).
By being granted a verify capability to the file. They can determine the file's storage index from its verify capability. This case probably doesn't happen often because users typically don't share verify caps.
By operating a storage server and receiving a request from a client that holds a read cap or a verify cap. If the client attempts to upload, download, or verify the file with that storage server, then the operator learns the file's storage index, even if the server doesn't actually hold any share of the file.
By gaining read access to an existing storage server's local filesystem and inspecting the directory structure in which it stores its shares. They can thus learn the storage indexes of all files that the server holds at least one share of. Normally only the operator of a storage server can inspect its local filesystem, so this requires either being such an operator or somehow gaining the ability to inspect a storage server's local filesystem.
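As an illustration of the first two items, a cap string can be parsed to recover the storage index it embeds. The sketch below uses the allmydata.uri and allmydata.util.base32 modules from a Tahoe-LAFS checkout; the exact method names are my assumption, so treat it as illustrative only:

```python
from allmydata import uri
from allmydata.util import base32

def storage_index_for(capstring):
    # Parse a read cap or verify cap and return its storage index in the
    # base32 form that storage servers use for share directories on disk.
    cap = uri.from_string(capstring)
    return base32.b2a(cap.get_storage_index())

# usage (cap contents elided):
#   storage_index_for("URI:CHK:...")
```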
In [5006/1.8.3]:
In [5007/1.8.3]:
In [5013/1.8.3]:
Reassigning to me to apply the fix to trunk.
Summary changed from "placeholder ticket" to "escalation of authority from knowing a storage index to being able to delete corresponding shares".
In changeset:5476f67dc1177a26:
In changeset:20e2910c616531c9:
In changeset:401d0e7f69ddeaef:
In changeset:c10099f982ee0803:
Note that changesets [5004/1.8.3], [5005/1.8.3], and [5008/1.8.3]--[5012/1.8.3] inclusive on the 1.8.3 branch, and changesets changeset:cffc98780414760c, changeset:65de17245da26a4c, changeset:942c5e5162fc3d9c, changeset:32f80625c912be45, changeset:48f56dab6fb9cc20, changeset:7a98abeb3a7b1efb, changeset:eb26075da077404e, changeset:b15bd674c3f1a73e, and changeset:4c33d855d1fbfb51 on trunk, are also related to this ticket. (They didn't get posted here automatically because the 'refs' syntax in the patch descriptions was wrong.)
For future code-archaeologists, this bug was introduced in changeset:6c4019ec33e7a253, which removed a precondition check in read_share_data() (because it used the original 4-byte size field, which was deprecated in favor of measuring the length of the container file with os.stat), but didn't provide a replacement. This was 536 patches after the 1.2.0 release, and about 295 patches before the 1.3.0 release.
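A rough sketch of the kind of replacement check that was missing follows. The header and lease sizes are assumed values for illustration; the idea is only that once the 4-byte size field is deprecated, the share-data length has to be recomputed from os.stat before a read can be bounds-checked:

```python
import os

HEADER_SIZE = 12      # assumed fixed header size, for illustration only
LEASE_SIZE = 72       # assumed size of one lease record, for illustration only

def share_data_length(path, num_leases):
    # With the stored 4-byte size field deprecated, derive the usable
    # share-data length from the container file's size via os.stat().
    filesize = os.stat(path).st_size
    return filesize - HEADER_SIZE - num_leases * LEASE_SIZE

def check_read_bounds(offset, length, data_length):
    # The missing precondition, restated: a read must stay within the
    # share data and never reach the lease records that follow it.
    if offset < 0 or length < 0 or offset + length > data_length:
        raise ValueError("read out of bounds of share data")
```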
Brian pointed out to me that there is another way that someone can learn the storage index of a file. It is shown on the "Recent Uploads and Downloads" page of a gateway. If someone can access your gateway, and you've uploaded or downloaded the file recently (if I recall correctly it is a FIFO queue of the most recent 20 uploads or downloads)...
Oh, I see that it is actually something more complicated:
http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/history.py?annotate=blame&rev=4046
According to this post by Josh Bressers from Red Hat on the oss-sec mailing list, we should use the identifier CVE-2011-3617 for this bug.