S3 backend: [storage]readonly is documented but ignored #1568
This option makes sense for the S3 backend and should be implemented (preferably for both mutable and immutable shares, i.e. avoiding bug #390). Or maybe not.
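For reference, the setting in question is the documented tahoe.cfg storage option, which the disk backend does honour; it looks roughly like this (values illustrative):

    [storage]
    enabled = true
    readonly = true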
Hmm, but another way to implement it is to make the access permissions on the bucket read-only. So maybe we should document that this is a disk-backend-only option, or maybe (as suggested in /tahoe-lafs/trac-2024-07-25/issues/5452#comment:-1) we should remove it entirely.
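If the bucket-permissions route were taken instead, the read-only restriction would live on the AWS side rather than in tahoe.cfg. A minimal sketch of an IAM policy granting only read access to the share bucket (the bucket name is illustrative, not from this ticket):

    {
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": [
            "arn:aws:s3:::example-tahoe-shares",
            "arn:aws:s3:::example-tahoe-shares/*"
          ]
        }
      ]
    }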
If we remove this option for S3 backends, we still need to test that when one server is attached to a read-only S3 bucket but other servers are accepting shares, then new shares go to those other servers.
Attachment s3-remove-readonly.darcs.patch (53290 bytes) added
S3 backend: remove support for [storage]readonly option. refs #999, #1568
Changed title from "S3 backend ignores [storage]readonly" to "S3 backend: [storage]readonly is documented but ignored".

I reviewed s3-remove-readonly.darcs.patch and give it +1. I would feel better if the two TODOs mentioned in that patch (one just in a line of context) were ticketed and had TODO'ed unit tests...

For what it is worth, I increasingly think read-only storage should be deprecated for all backends, and people will have to learn how to use their operating system if they want readonliness of storage. When we invented the read-only storage option, I think partly we were thinking of users who could read our docs but didn't want to learn how to use their operating system to set policy. Nowadays I'm less interested in the idea of such users being server operators.
Also, the fact that we've never really finished implementing read-only storage (to include mutables), so that there are weird failure modes that could hit people who rely on it, is evidence that we should not spend our precious engineering time on things that the operating system could do for us, and do better.
I don't really follow this. It seems reasonable for a server operator to decide not to accept new shares, and for this to be separate from whether the server process is able to write to the filesystem where the shares are kept. For example, it might be reasonable to allow lease renewal, or for other metadata to be updated. It might be that not accepting shares should be similar to zero space available, so increasing the size of a mutable share also might not be allowed. And, if the purpose really is decommissioning, then presumably the mechanism used for repair should somehow signal that the share is present but should be migrated, so that a deep-check --repair can put those shares on some other server.
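For example, such a migration would presumably be driven by the existing repairer, invoked along these lines (the alias is illustrative):

    tahoe deep-check --repair --add-lease tahoe: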
There's a difference between people who don't understand enough to sysadmin a server, and the server having uniform configuration for server-level behavior. When tahoe is ported to ITS, it should still be possible to tell it to stop taking shares.
Excellent points, gdt. And I was thinking of you as an example "minimalist unixy sysadmin" when I wrote what I wrote. Would you please post your comments over at #390? I'll post mine first so you can reply to mine...
Replying to gdt:
However, that doesn't affect the S3 backend, which doesn't store lease data in shares. We plan to store it in a leasedb that is local to the server.
Making the storage read-only has the effect of preventing any changes to shares.
If we want a "don't use any more space" option, that's different to "read-only". I think "read-only", if supported, should mean precisely that (including preventing deletion of shares or reductions in size of mutable shares).
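(As a sketch of the related but distinct "don't use any more space" behaviour: the disk backend already approximates it with its reserved_space setting, under which the server stops accepting new shares once free space falls below the threshold; the value here is illustrative.)

    [storage]
    reserved_space = 10G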
Note that removing [storage]readonly for the S3 backend now doesn't mean that we won't support it in future. Currently the option doesn't work at all for the S3 backend, and we would have to spend effort implementing it that would delay (even if not by much) finishing the backend.

True, but I think that should be a separate option.
Thanks for volunteering to do the ITS port! ;-)
In [5524/ticket999-S3-backend]:
Replying to davidsarah (comment:8):
That's all fine. I was just reacting to what I perceived to be "why bother having options if someone can just chmod some directory". Deciding that read-only as a concept doesn't make sense seems entirely reasonable.
I think it does make sense for a server operator to be able to configure the server not to take new shares, and also for it to be in some sort of advertised-as-going-away mode. But as you say that may be a different concept.
As for ITS, it's been a long time - but it's a convenient way to point out that network server semantics and local filesystem semantics need not match.
This is fixed on the branch, and when the branch is recorded for trunk it will be as if [storage]readonly never existed for the S3 backend.