disk space resource limits on storage servers #34
Reference: tahoe-lafs/trac-2024-07-25#34
Currently, if you run a storage server, clients can ask you to store data and the server always says yes. Make it possible to configure a storage server with a limit on how much disk space it will use; once it is full, it should say no.
also mentioned in source:roadmap.txt
Implementation notes: I'm thinking of having the node do a 'du -s' of the storage directory on startup and keep that value around as its estimate of total storage used. Each time a lease request arrives, subtract that estimate from the max-space target to decide whether to accept the lease or not. Keep a list of pending incoming shares and count them against the storage space too (so that four billion lease requests arriving at the same time won't all be honored). When a bucket is closed, do a 'du -s' of the bucket and add the result to the estimate. Likewise, when a bucket is deleted, do a 'du -s' of it first and subtract that from the estimate.
Sound reasonable? I'll look to amdlib.utils.fileutil for 'du -s' code, so we have something that'll work on Windows too.
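A minimal sketch of that scheme, using a pure-Python 'du -s' so it works on Windows too. SpaceAccountant, measure_usage, and the method names here are illustrative assumptions, not the actual StorageServer API:

```python
import os


def measure_usage(path):
    """Rough portable 'du -s': total bytes of all files under path.

    Like the scheme described above, this counts only files, not the
    space consumed by the directories themselves.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total


class SpaceAccountant:
    def __init__(self, storagedir, max_space):
        self.max_space = max_space
        # one full measurement at startup, kept as a running estimate
        self.used = measure_usage(storagedir)
        self.pending = {}  # bucket id -> reserved size of incoming share

    def accept_lease(self, bucket_id, allocated_size):
        """Accept only if the estimate plus all pending shares still fit."""
        committed = self.used + sum(self.pending.values())
        if committed + allocated_size > self.max_space:
            return False
        self.pending[bucket_id] = allocated_size
        return True

    def bucket_closed(self, bucket_id, bucket_path):
        # replace the reservation with the bucket's real on-disk size
        self.pending.pop(bucket_id, None)
        self.used += measure_usage(bucket_path)

    def bucket_deleted(self, bucket_path):
        # measure before the files are removed, then decrement the estimate
        self.used -= measure_usage(bucket_path)
```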
Sounds reasonable!
I just implemented this, in changeset:9ddb92965161a067, changeset:c80ea7d69399b153, and changeset:94e6e6160b24959c. Write a number of bytes (perhaps with a suffix like kB, MB, or GB) to a file named 'sizelimit' in the client's base directory. It will be read at node startup and used to constrain the StorageServer to that size.
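For illustration, a hedged sketch of reading such a file; the exact suffix set and parsing rules live in the changesets above, so this helper and its suffix table are assumptions:

```python
import os
import re

# Illustrative suffix table; the real code may accept a different set.
_SUFFIXES = {"": 1, "kB": 10**3, "MB": 10**6, "GB": 10**9}


def read_sizelimit(basedir):
    """Return the configured size limit in bytes, or None if unset."""
    path = os.path.join(basedir, "sizelimit")
    if not os.path.exists(path):
        return None  # no limit configured: accept everything
    with open(path) as f:
        text = f.read().strip()
    m = re.match(r"(\d+)\s*([A-Za-z]*)$", text)
    if m is None or m.group(2) not in _SUFFIXES:
        raise ValueError("unparseable sizelimit: %r" % text)
    return int(m.group(1)) * _SUFFIXES[m.group(2)]
```

Under these assumptions, a 'sizelimit' file containing "5GB" would yield 5 * 10**9 bytes.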
It's not 100% accurate right now: buckets which are being filled count against the limit, but the metadata that goes into them is only counted once the bucket is closed. The whole storage pool is measured for real at startup, and the running estimate is adjusted after that as buckets come and go. The sizelimit does not take into account space consumed by directories; it only pays attention to files.