repair to different levels of N #711
Reference: tahoe-lafs/trac-2024-07-25#711
Currently, repair tries to create a number of shares equal to whatever number the file was originally created with.
It might be useful to be able to tell repair not to go that far and to produce only a few shares.
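To make the "repair to a different N" idea concrete, here is a minimal sketch of the k-of-N property under discussion. Note that this is *not* Tahoe's actual encoder (Tahoe uses zfec, a Reed-Solomon erasure code, over file segments); it uses Shamir-style polynomial interpolation over a prime field purely to illustrate that any k shares suffice to decode, and that a repairer could therefore decode and re-encode to a different N without changing k. All names here (`make_shares`, `recover`, `repair_to_n`) are hypothetical.

```python
# Illustrative k-of-N sharing via polynomial interpolation over a prime
# field.  Tahoe-LAFS actually uses zfec (Reed-Solomon); this sketch only
# demonstrates the property the ticket relies on: any k shares recover
# the data, so "repair" can re-encode to a different N with the same k.
import random

PRIME = 2**127 - 1  # Mersenne prime; field must exceed the payload value

def make_shares(secret, k, n):
    """Split `secret` (an int < PRIME) into n shares; any k recover it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

def repair_to_n(shares, k, new_n):
    """'Repair': decode from k available shares, re-encode to new_n shares."""
    return make_shares(recover(shares[:k]), k, new_n)
```

For example, with k=3 and N=7, any three of the seven shares recover the data, and `repair_to_n(shares, 3, 12)` produces twelve fresh shares for the same data, which is exactly the "produce more (or fewer) shares than the original file had" behavior this ticket and #678 discuss.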
See also #678 (converge same file, same K, different M), which if implemented would make it also be sensible to tell repair to go further and produce more shares than the original file had.
The following clump of tickets may also be of interest to people interested in this ticket:

- #711 (repair to different levels of M)
- #699 (optionally rebalance during repair or upload)
- #543 ('rebalancing manager')
- #232 (Peer selection doesn't rebalance shares on overwrite of mutable file.)
- #678 (converge same file, same K, different M)
- #610 (upload should take better advantage of existing shares)
- #573 (Allow client to control which storage servers receive shares)
Also related: #778 ("shares of happiness" is the wrong measure; "servers of happiness" is better).
See new related ticket #1340 (consider share-at-a-time uploader).
s/M/N/ to match our existing "k-of-N" terminology
Title changed from "repair to different levels of M" to "repair to different levels of N".

I'm confused, why would we want this? Just to decrease storage requirements?
Replying to daira:
So that people can change their minds about how much redundancy they want after uploading a file, and then adjust it without having to re-upload the file. If they changed their minds by reducing the desired redundancy, then great! Just let the excess shares expire, or actively delete them. If they changed their minds by wanting more redundancy, then generate additional shares and upload them.
Does that make sense as a desirable thing? (I'm not sure if it makes sense as a practically implementable thing...)