make immutable check/verify/repair and mutable check/verify work given only a verify cap #568
For immutable files, Check/Verify/Repair could do its work with only a verify cap, but currently the WAPI and the CLI seem to require a read cap. To fix this, relax the requirements of those front-ends so that they can make do with only a verify cap.
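For concreteness, here is a minimal sketch of the front-end behavior this ticket asks for, expressed against the webapi's `t=check` operation. It assumes a local gateway on the default port 3456; the verify cap is a made-up placeholder, and today this same POST is expected to be issued with a read cap instead.

```python
# Sketch only: POST t=check against a *verify* cap, which is what this
# ticket asks the WAPI to accept. Gateway URL and cap are placeholders.
import json
import urllib.parse
import urllib.request

GATEWAY = "http://127.0.0.1:3456"                    # default local webapi port
VERIFYCAP = "URI:CHK-Verifier:aaaa:bbbb:3:10:1234"   # placeholder, not a real cap

url = "%s/uri/%s?t=check&verify=true&output=JSON" % (
    GATEWAY, urllib.parse.quote(VERIFYCAP, safe=""))
req = urllib.request.Request(url, data=b"", method="POST")  # t=check is a POST
with urllib.request.urlopen(req) as resp:
    check_results = json.load(resp)
# the check-results JSON includes a human-readable health summary
print(check_results.get("summary"))
```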
I think that immutable check/verify/repair can work with only a verifycap, and that the internal classes (`immutable.checker.Checker`, `immutable.repairer.Repairer`) use verifycaps. The limitation is that the immutable `FileNode` class is the only way to get at the `check()` method, and it must be constructed with a readcap.

My thought is to create a new `VerifierNode` class, constructed with a verifycap. This class will provide the `ICheckable` interface, which is the one that contains the `check()` method.

The mutable checker/verifier should be able to work from a verifycap, so I'm thinking we can use the same technique (`MutableVerifierNode.check`). Mutable repair, however, requires a writecap, to generate the correct write-enabler on the new shares (so that writers can modify the new shares in the future). So `MutableVerifierNode.check(repair=True)` must be disallowed. One of the goals for DSA-based mutable files is to fix this limitation.

Let's release allmydata-tahoe-1.3.0 before fixing this ticket.
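To make the proposal concrete, here is a hypothetical sketch (not actual Tahoe-LAFS code) of what such a `VerifierNode` might look like; the constructor arguments, the injected checker factory, and the exact `ICheckable` method signature are simplified assumptions.

```python
# Hypothetical sketch of the proposed VerifierNode; everything here is a
# simplified stand-in, not the real allmydata classes or signatures.
from zope.interface import Interface, implementer


class ICheckable(Interface):
    """Stand-in for allmydata.interfaces.ICheckable (holds check())."""


@implementer(ICheckable)
class VerifierNode(object):
    """Constructed from a verify cap: enough authority to check/verify
    (and, for immutable files, repair) a file, but not to decrypt it."""

    def __init__(self, verifycap, checker_factory):
        self._verifycap = verifycap
        # checker_factory is an injected stand-in for the existing
        # immutable checker, which (per the comment above) already
        # operates on verifycaps
        self._checker_factory = checker_factory

    def check(self, monitor, verify=False, add_lease=False):
        checker = self._checker_factory(self._verifycap, verify=verify,
                                        add_lease=add_lease, monitor=monitor)
        # assumed to return a Deferred that fires with the check results
        return checker.start()
```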
Changed title from "frontend: check/verify/repair doesn't need the decryption key" to "frontend: immutable check/verify/repair doesn't need the decryption key".

Oh, also, if you do a GET on a verifycap, you could get the ciphertext. That would make `VerifierNode` instances provide `ICheckable` and `IEncryptedReadable`, but only real filenodes would also provide `IPlaintextReadable`/`IReadable`.

This would be most useful if it were possible to PUT ciphertext, moving the encrypted realm out beyond the tahoe node and into the HTTP client.
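As an illustration of that interface split, here is a hypothetical sketch; the `IEncryptedReadable` and `IPlaintextReadable` names come from the comment above, and everything else is a placeholder rather than the real `allmydata.interfaces` declarations.

```python
# Hypothetical illustration of the interface split; stand-ins only.
from zope.interface import Interface, implementer


class ICheckable(Interface):
    """Can be checked/verified (and possibly repaired)."""

class IEncryptedReadable(Interface):
    """Can produce the file's ciphertext (no decryption key needed)."""

class IPlaintextReadable(Interface):
    """Can produce the file's plaintext (requires a read cap)."""


@implementer(ICheckable, IEncryptedReadable)
class VerifierNode(object):
    """Built from a verify cap."""


@implementer(ICheckable, IEncryptedReadable, IPlaintextReadable)
class FileNode(object):
    """Built from a read cap: everything a VerifierNode can do, plus
    decryption."""
```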
Changed title from "frontend: immutable check/verify/repair doesn't need the decryption key" to "make immutable check/verify/repair and mutable check/verify work given only a verify cap".

See also #625, which is about allowing mutable repair to work using a cap weaker than a write cap.
#578 was a duplicate.
I think we should try to fix this for 1.10. It's a clear instance of requiring excess authority. I'll have a look at what code changes are needed.
FYI, `nodemaker.py` already knows how to create immutable verifier nodes, which have both `check()` and `check_and_repair()` methods.

My untested `mutable-verifiernode` branch adds mutable verifier nodes, with only `check()` (but not `check_and_repair()`, due to #625).

So what this needs is:

- a `VerifyNodeHandler` class, which renders something sensible, and accepts a `t=check` and/or `repair` command to be delivered to the verifiernode (and renders the results)
- changes to `make_handler_for()` to create these `VerifyNodeHandler`s when the right sort of URI is provided
Would fixing this for immutable files be worth opening a (hopefully easier) separate ticket? It might even make it more likely to get into the 1.11 milestone, perhaps.
No new tickets are being added to the 1.11 milestone at this point; the release is already quite late.
It might make sense to split this ticket into immutable and mutable cases, but let's wait until someone volunteers to do the work, so that they can decide whether to do those cases separately or together.
Milestone renamed
renaming milestone
Moving open issues out of closed milestones.
Ticket retargeted after milestone closed