errors during web download aren't reported, hangs instead #65
If an exception occurs during a download triggered by the web frontend, an "Unhandled Errback" is written to the logs, but the web GET request itself never completes.
I think allmydata.download.Downloader should be changed to add an errback handler to the start() Deferred that will fire the DownloadTarget's fail() method. That would also mean that start() should never errback, which is kind of weird... maybe the Failure should be sent to both fail() and the start() chain.
The webish code can use the fail() method to terminate the download GET with an error of some sort.
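A minimal sketch of the error fan-out being proposed above. The names here (DownloadTarget, start_download) are illustrative, not the real allmydata API, and plain exceptions stand in for Twisted Failures: on error, the target's fail() is invoked so the frontend can terminate the GET, and the error is also re-raised so the start() chain still sees it.

```python
# Hypothetical sketch, not the actual allmydata.download code.

class DownloadTarget:
    """Stand-in for the real DownloadTarget interface."""
    def __init__(self):
        self.failure = None

    def fail(self, why):
        # Gives the frontend a chance to abort the HTTP response.
        self.failure = why

def start_download(fetch, target):
    """Run `fetch` and fan any error out to both the target and the caller."""
    try:
        return fetch()
    except Exception as why:
        target.fail(why)   # notify the target so it can end the GET with an error
        raise              # ...and still propagate the failure along the start() chain
```

With this shape, callers of start_download() get the failure as usual, while the target is guaranteed a fail() call instead of the request hanging.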
The easiest way to reproduce this is probably to corrupt a couple of files (the uri_extension block comes to mind) in all shares. The way I noticed it was to try to download a file that was uploaded with an earlier version.
Fixed: if the error occurs early enough, we return a 500 Internal Server Error and attach a traceback of the problem (most likely a NotEnoughPeersError).
If the error occurs late (meaning after we've already started sending data), our error-reporting options are much more limited. Fortunately, most of the likely error situations will be detected before the first segment of data is validated and sent. This includes a bogus URI, not enough peers holding data, and coding problems that lead to bad hashes everywhere. It does not include peers going away after the first segment is retrieved.
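The early-vs-late distinction above can be sketched as follows. This is a hypothetical model, not the real webish code: before any payload bytes have gone out we can still replace the response with a 500 plus a traceback, but once streaming has begun the status line is already on the wire and the best we can do is abort the connection.

```python
# Hypothetical sketch of the error-reporting policy; names are illustrative.
import traceback

class ResponseState:
    def __init__(self):
        self.bytes_sent = 0
        self.status = None     # HTTP status, fixed once the first byte is sent
        self.body = b""
        self.aborted = False

    def write(self, data):
        # First write commits the status line, as in a real HTTP response.
        if self.status is None:
            self.status = 200
        self.bytes_sent += len(data)
        self.body += data

    def fail(self, exc):
        if self.bytes_sent == 0:
            # Early error: nothing sent yet, so report a proper 500 + traceback.
            self.status = 500
            self.body = traceback.format_exception_only(type(exc), exc)[0].encode()
        else:
            # Late error: headers and data already sent; just drop the connection.
            self.aborted = True
```

This mirrors why the fix concentrates on catching errors before the first validated segment is written: that is the last point at which a clean 500 is still possible.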