bring back sizelimit (i.e. max consumed, not min free) #671
Reference: tahoe-lafs/trac-2024-07-25#671
We used to have a `sizelimit` option which would do a recursive examination of the storage directory at startup, calculate approximately how much disk space was used, and refuse to accept new shares if the disk space would exceed the limit. #34 shows when it was implemented. It was later removed because it took a long time -- about 30 minutes -- on allmydata.com storage servers, during which the servers remained unavailable to clients, and because it was replaced by the `reserved_space` configuration, which was very fast and which satisfied the requirements of the allmydata.com storage servers.

This ticket is to reintroduce `sizelimit` because [//pipermail/tahoe-dev/2009-March/001493.html some users want it]. This might mean that the storage server doesn't start serving clients until it finishes the disk-space inspection at startup.

Note that `sizelimit` would impose a maximum limit on the amount of space consumed by the node's `storage/shares/` directory, whereas `reserved_space` imposes a minimum limit on the amount of remaining available disk space. In general, `reserved_space` can be implemented by asking the OS for filesystem stats, whereas `sizelimit` must be implemented by tracking the node's own usage and accumulating the sizes over time.

To close this ticket, you do not need to implement any sort of interleaving of inspecting disk space and serving clients.

To close this ticket, you MUST NOT implement any sort of automatic deletion of shares to get back under the sizelimit if you find yourself over it (for example, if the user has lowered the sizelimit after you've already filled it to the max), but you SHOULD write a warning message to the log if you detect this condition.
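The difference between the two checks described above can be sketched as follows. This is a minimal illustration, not tahoe-lafs code: the helper names and the `tracked_usage_bytes` argument are hypothetical, standing in for whatever state-tracking the eventual implementation uses.

```python
import os

def remaining_space_ok(storage_dir, reserved_space):
    """reserved_space-style check: ask the OS for free space (fast)."""
    st = os.statvfs(storage_dir)
    free_bytes = st.f_bavail * st.f_frsize  # space available to non-root users
    return free_bytes > reserved_space

def consumed_space_ok(tracked_usage_bytes, sizelimit, incoming_share_size):
    """sizelimit-style check: compare the node's own tracked usage
    (accumulated over time, e.g. by a crawler) against the cap."""
    return tracked_usage_bytes + incoming_share_size <= sizelimit
```

The key operational difference: the first check is a single syscall, while the second requires the node to maintain `tracked_usage_bytes` itself, which is why the original implementation needed the slow startup walk.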
Note that any sizelimit code is allowed to speed things up by remembering state from one run to the next. The old code did the slow recursive-traversal sharewalk to handle the (important) case where this state was inaccurate or unavailable (i.e. when shares had been deleted by some external process, or to handle the local-fs-level overhead that accounts for the difference between what /bin/ls and /bin/df each report). But we could trade off accuracy for speed: it should be acceptable to just ensure that the sizelimit is eventually approximately correct.
A modern implementation should probably use the "share crawler" mechanism, doing a `stat` on each share and adding up the results. It can store state in the normal crawler stash, probably in the form of a single total-bytes value per prefixdir. The do-I-have-space test should use `max(last-pass, current-pass)`, to handle the fact that the current-pass value will be low while the prefixdir is being scanned. The crawler would replace this state on each pass, so any stale information would go away within a few hours or days.

Ideally, the server code should also keep track of new shares that were written into each prefixdir, and add the sizes of those shares to the state value, but only until the next crawler pass had swung by and seen the new shares. You'd also want to do something similar with shares that were deleted (by the lease expirer). To accomplish this, you'd want to make a `ShareCrawler` subclass that tracks this extra space in a per-prefixdir dict, and have the storage-server/lease-expirer notify it every time a share was created or deleted. The `ShareCrawler` subclass is in the right position to know when the crawler has reached a bucket.

Doing this with the crawler would also have the nice side-effect of balancing fast startup with accurate size limiting. Even though this ticket has been defined as not requiring such a feature, I'm sure users would appreciate it.
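The accounting scheme described above can be sketched roughly like this. Note this is only an illustration of the bookkeeping, not the real `ShareCrawler` API (method names and structure here are invented; in real code the sizes would come from `os.stat()` on each share file):

```python
class SpaceTrackingCrawler:
    """Sketch of per-prefixdir space accounting with max(last, current)."""

    def __init__(self):
        self.last_pass = {}     # prefixdir -> total bytes seen on previous pass
        self.current_pass = {}  # prefixdir -> bytes accumulated so far this pass
        self.deltas = {}        # prefixdir -> bytes added/removed since last visit

    def process_prefixdir(self, prefixdir, share_sizes):
        # Called as the crawler reaches each prefixdir; share_sizes would
        # come from stat()ing each share file.
        self.current_pass[prefixdir] = sum(share_sizes)
        self.deltas[prefixdir] = 0  # the crawler has now seen everything here

    def finished_cycle(self):
        # Replace stale state each pass, as described above.
        self.last_pass = dict(self.current_pass)
        self.current_pass = {}

    def note_share_written(self, prefixdir, nbytes):
        # Storage server / lease expirer notifies us of writes (or, with a
        # negative nbytes, deletions) between crawler visits.
        self.deltas[prefixdir] = self.deltas.get(prefixdir, 0) + nbytes

    def estimated_usage(self):
        # max(last-pass, current-pass) per prefixdir, since current-pass is
        # low for prefixdirs the crawler has not reached yet this cycle.
        prefixes = set(self.last_pass) | set(self.current_pass) | set(self.deltas)
        total = 0
        for p in prefixes:
            base = max(self.last_pass.get(p, 0), self.current_pass.get(p, 0))
            total += base + self.deltas.get(p, 0)
        return total
```

The `deltas` dict carries the sizes of shares written or expired between crawler visits, and is zeroed for a prefixdir as soon as the crawler re-scans it, matching the "only until the next crawler pass" behaviour described above.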
(Retitled from "sizelimit" to "bring back sizelimit (i.e. max consumed, not min free)".)

Brian: did you intend to put this into Milestone 1.6? I assume not, so I'm moving it to eventually. Apologies if you meant to put it here, and feel free to move it back.
#1285 asks for the `df` command on a Tahoe filesystem mounted over SFTP to show some estimate for the space used on a grid (as well as the space available). However, by default we shouldn't slow down the startup process of storage servers in order to achieve that.

Note that on a conventional filesystem, the total size of files corresponds roughly to the amount of space used (ignoring per-file overhead). On a Tahoe filesystem, the latter is usually greater than the former by the expansion factor, N/k. However, if the encoding parameters have changed, or if different gateways are using different parameters, then dividing the total space used by the current N/k on a given gateway would lead to an inaccurate estimate of total file size.
Both the total file size and the total space usage are potentially interesting. If we are periodically crawling all shares as this ticket suggests, then it is not significantly more difficult to compute both (under the assumption that N shares are stored for each file, which is true if the shares are optimally balanced).
OTOH, perhaps the total size of files and the total space usage are just not important enough to do all this work to compute them, given that storing shares on a separate filesystem is sufficient to achieve the goal of limiting total space usage.
OTGH, long-term preservation is improved by occasionally crawling all shares to ensure that they can still be read. (That requires actually reading the shares rather than just the metadata, though.)
See also #940 (share-crawler should estimate+display space-used).
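The expansion-factor arithmetic in the comment above can be made concrete with a small example. The helper below is hypothetical (not part of tahoe-lafs) and assumes every file was encoded with the same N-of-k parameters, which is exactly the assumption the comment warns can break:

```python
def estimated_total_file_size(total_space_used, n, k):
    """Estimate aggregate original-file size from aggregate share space.

    Each file of size S is split into n shares of roughly S/k bytes, so
    the stored space is S * (n/k); invert that to recover S. Written as
    multiplication first to stay exact for integer-friendly inputs.
    """
    return total_space_used * k / n

# e.g. 30 GiB of shares at the default 3-of-10 encoding implies
# roughly 9 GiB of original file data.
```

If different files were uploaded with different N/k, this per-gateway estimate drifts, which is the inaccuracy described above.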
Our current plan is to support this using the leasedb.
The next step is to implement #1836, then we can use that to implement this ticket!
#1043 was a duplicate of this.
I'm working on this ticket here: https://github.com/markberger/tahoe-lafs/tree/671-bring-back-sizelimit
Also needs documentation in source:docs/configuration.rst.
I've started to write tests for this patch, but the share overhead seems to be pretty high. When I write a 1000 byte share, leasedb is reporting the share size to be 4098 bytes. Is this the expected behavior?
Yes, that's expected behaviour. Filesystems can be surprisingly inefficient. Which fs are you using?
Let's see… currently that is set by accounting_crawler (see #1835, which proposes adding the lease into the leasedb immediately, at the same time as the share is added to the store, instead of later by a crawler), and I see from elsewhere in accounting_crawler that it is using the return value from the share's get_used_space().
Here are the implementations of `get_used_space()` in the 1819-cloud-merge-opensource branch: `fileutil.get_used_space(home) + fileutil.get_used_space(finalhome)`. (That's interesting! The home/finalhome distinction is that during the upload of an immutable file it is written into a location named home, and only after the upload is complete is it mv'ed into finalhome.)

Okay, check out the implementation of `fileutil.get_used_space`.

So, in answer to your question: Yes. ☺
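For illustration, here is a sketch of why a 1000-byte share can report roughly 4 KiB of used space. This is not the actual `fileutil.get_used_space` code, just the same idea: on POSIX, `st_blocks` counts allocated 512-byte units (what `df` sees), while `st_size` is the logical length (what `ls` sees), and most filesystems allocate whole blocks, commonly 4 KiB.

```python
import os
import tempfile

def used_space(path):
    """Approximate on-disk footprint via allocated blocks (POSIX only)."""
    s = os.stat(path)
    return s.st_blocks * 512  # st_blocks is always in 512-byte units

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1000)  # a 1000-byte "share"
    name = f.name

logical = os.stat(name).st_size  # 1000 bytes of data
physical = used_space(name)      # typically 4096 on a 4 KiB-block ext4
os.unlink(name)
```

On an ext4 filesystem with 4 KiB blocks this typically shows 1000 logical bytes occupying 4096 physical bytes, consistent with the ~4098-byte figure reported above (the remaining couple of bytes being leasedb-tracked metadata overhead).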
Replying to daira:
I'm using ext4 but I didn't even think about fs overhead. I assumed the overhead was created by tahoe. Thanks daira.
And thanks for the detailed trace zooko.
My branch now has tests: https://github.com/markberger/tahoe-lafs/tree/671-bring-back-sizelimit
Note that my branch is based on #1836 which also needs to be reviewed.
markberger, first of all, thanks for the patch!
I've been reviewing it (diff against 72b49750d95b0ca01321e8cd0e2bc93cd0c71165), and in web/storage.py, in StorageStatus.render_JSON, the bucket-counter element is changed to always return None. From the context this appears to be for compatibility reasons, but it might be wise to say so in a comment :)
Hi joepie91, thanks for pointing that out.
I'm really busy this week, but I will add some comments this weekend.
I added the comment joepie91 suggested. It can be found on the pull request.
Alright, great. I do have to note... I was told there would be intentional backdoor easter eggs to check how well patches were reviewed, but I have not run across any. I hope this is a good thing, and that it means there aren't any :)
Also, I'll be removing the review-needed tag, but note that I haven't reviewed #1836 which this one is based on - #1836 is still awaiting review.
Posted a comment on a related article at http://bitcartel.wordpress.com/2012/10/21/rbic-redundant-bunch-of-independent-clouds pointing to this ticket. Its use case might benefit from this.
Looking at #648; I support the functionality of a size limit.
Looks like #1836 is intended to address the long delays in calculating exactly how much space the storage node is actually using.
As for what happens if a node originally given a limit of, for example, 1TB actually grows to that size, and afterwards an admin seeks to reduce the limit to 500GB: there should be some functionality that allows shares to be transferred or copied to other servers to accommodate the storage node's request to shrink. It might take time to shrink the node, but at least it would provide a graceful way of resolving the issue. I don't know if such functionality (or a feature request for it) already exists.
Replying to Lcstyle:
Yes. And there is a good patch for #1836, but it isn't merged into trunk yet.
There is another ticket about that, #864.