clarify difference between full and read-only servers in servers-of-happiness failure message #1116
When it fails to find a satisfactory share layout, the peer selection process for immutable files prints out a message explaining why. In that message, it counts servers that did not accept the shares they were asked to hold as "full". This is slightly confusing, since those servers may not actually be full; they may just have too little free space to accept shares of the size they were asked to accept. The error message should be changed to reflect this.
(this was first reported in https://tahoe-lafs.org/pipermail/tahoe-dev/2010-July/004656.html)
I just made up a tag "unfinished-business" to mean an issue that is in some sense caused by a recent change even though the issue is not really a regression. Maybe in the morning inventing this tag will seem like a bad idea, but then we can always delete it and never speak of it again.
I just read through the mailing list thread and the relevant source code.
I believe the original issue report was mistaken, but there is still something confusing in this error message that should be clarified.
The original report from Kyle Markley on the mailing list turned out to have a different error behind it — #1118. Kevan opened this ticket in the belief that Kyle was confused about the difference between a server being "full" and being "not quite full, but too full to accept that share there", but Kyle was not confused about that.
I read through the source (source:trunk/src/allmydata/immutable/upload.py at rev 196bd583b6c4959c60d3f73cdcefc9edda6a38ae) just now, and servers get counted as "full" by the uploader if either:
a) They are read-only, the uploader queries them to see if they have a share, and they respond with something other than a failure, or
b) They are not read-only, the uploader asks them to store one or more shares, and they store zero shares.
I'm pretty sure that the current server will do b) only in the case that it is full.
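To make a) and b) concrete, here is a minimal sketch of that classification as I understand it; the method names (`is_readonly`, `query_existing_shares`, `ask_to_hold`) are illustrative stand-ins, not the actual upload.py API:

```python
# Hypothetical sketch of cases a) and b) above; not the real upload.py logic.
def counted_as_full(server, shares_to_place):
    """Return True if the uploader would lump this server into the
    "full" count when reporting a failed upload."""
    if server.is_readonly:
        # a) read-only server: we only ask whether it already holds shares;
        #    any non-failure response counts it as "full"
        try:
            server.query_existing_shares()
            return True
        except Exception:
            return False
    # b) writable server: we asked it to store shares and it accepted none
    accepted = server.ask_to_hold(shares_to_place)
    return len(accepted) == 0
```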
So I think the error message is already pretty accurate, except that it combines full servers and read-only servers into one number. To close this ticket, track full servers (currently counted in a member variable named `full_count` in the code) separately from read-only servers, and change the error message from "of which %d placed none due to the server being full" to something like "of which %d placed none due to the server being full and %d found none already present on a read-only server".
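A minimal sketch of that proposed change, assuming two separate counters (only `full_count` is a name taken from the existing code; the other names are illustrative):

```python
# Hypothetical sketch of the proposed fix: count genuinely full (writable)
# servers separately from read-only servers.
full_count = 0      # writable servers that accepted zero shares (case b)
readonly_count = 0  # read-only servers queried for existing shares (case a)

# ... peer selection would increment whichever counter applies ...

failure_msg = (
    "of which %d placed none due to the server being full "
    "and %d found none already present on a read-only server"
    % (full_count, readonly_count)
)
```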
I don't think this ticket belongs in Milestone 1.11! Daira: could we move this to "soon"?

Title changed from "make the servers-of-happiness error message less confusing" to "clarify difference between full and read-only servers in servers-of-happiness failure message".

related ticket: #2101
Milestone renamed
renaming milestone
Moving open issues out of closed milestones.
Ticket retargeted after milestone closed