Making requests too soon after startup can fail #719
Reference: tahoe-lafs/trac-2024-07-25#719
This is an issue with hidden depths: how should the client node know that it
has connected to every server that it's ever going to need?
But it should be easy to improve the situation somewhat. To start with, there
should be some internal function that keeps track of "progress towards full
connection":

- do we even have an introducer.furl?
- how many servers are left to connect to? how long have we been trying to connect to them?
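A minimal sketch of what such a progress tracker might look like (all names here are hypothetical illustrations, not an existing Tahoe-LAFS API):

```python
import time

class ConnectionProgress:
    """Hypothetical tracker of "progress towards full connection"."""

    def __init__(self, introducer_furl=None):
        self.introducer_furl = introducer_furl  # do we even have one?
        self.introducer_connected_at = None     # when we reached the introducer, if ever
        self.servers_announced = set()          # servers the introducer told us about
        self.servers_connected = set()          # servers we currently hold a connection to

    def summary(self):
        """Return human-readable advisory data; deliberately not a
        machine heuristic for deciding transient vs. permanent failure."""
        if self.introducer_furl is None:
            return "no introducer.furl configured"
        if self.introducer_connected_at is None:
            return "still trying to reach the introducer"
        elapsed = time.time() - self.introducer_connected_at
        missing = self.servers_announced - self.servers_connected
        return ("connected to introducer for %.0fs; %d of %d announced "
                "servers connected (%d left)"
                % (elapsed, len(self.servers_connected),
                   len(self.servers_announced), len(missing)))
```

The summary string is exactly the sort of advisory text that could be attached to a failed retrieve: it tells a human "we've only been connected for two seconds", without the node itself guessing whether the failure is transient.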
Then, when a directory retrieve or a file download fails due to insufficient
shares, this function could provide additional human-useful data, like saying
"we couldn't retrieve that directory right now, but since it looks like we've
only been connected to the introducer for two seconds, maybe we just don't
know about enough servers yet; try again in ten seconds."
I'm not sure how to deliver that extra information. Specifically, the tahoe
node should not try to guess whether this is a transient failure or a
permanent one: we don't want to resort to heuristics or fixed timeouts. So
this extra data is advisory and should be interpreted by a human rather than
a piece of code.
So from the webapi point of view, 410 still seems like the right response
code, but maybe we can add the text to the response body, and make sure that
the CLI tools will deliver this body to stderr.
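One way a CLI tool could deliver that response body is sketched below. This is an assumption about how a client might be written (using the stdlib urllib, not the actual tahoe CLI internals): on an HTTP error such as 410, print the server-supplied body to stderr so the human sees the advisory text.

```python
import sys
import urllib.request
import urllib.error

def fetch_or_explain(url):
    """Fetch a URL; on an HTTP error (e.g. 410 Gone), write the
    server-supplied response body to stderr for the human to read."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        # The body may carry human-useful context, e.g. "we've only been
        # connected to the introducer for two seconds".
        body = e.read().decode("utf-8", "replace")
        sys.stderr.write("error %d: %s\n" % (e.code, body))
        return None
```

The point is that the decision ("is this transient? should I retry?") stays with the human reading stderr, not with a timeout baked into the client.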
We have similar issues in a browser. I don't know when browsers will show the
response body for things like 410 GONE, but maybe we can use the same
technique.
This issue also affects the WUI. Some browsers (in particular IE) will hide response bodies for HTTP errors by default, but that doesn't make the body the wrong place for human-readable information about the error; the HTTP spec specifically says that browsers SHOULD display the entity body for errors (see the end of RFC 2616 section 6.1.1).
This issue affects all operations including check and repair, and all frontends.
See also #1596 for the error-reporting aspect (not just on start-up).
#2043 was a duplicate.
Daira and I are working on the related ticket #1449. Can we also satisfy this ticket?
In 04b34b6/trunk: