Unhandled error in Deferred during retrieve #1636
Reference: tahoe-lafs/trac-2024-07-25#1636
...with patches from #1007, #1010, and #1628 applied.
When uploading files to the volunteer grid on I2P, I received the following exception:
I do not see any flog files from the time of the error (and it took place when I was AFK).
At http://127.0.0.1:3456/status/ it's still visible in the "Active Operations" section, even though the event took place hours ago.
For what it's worth, there are other failed upload attempts remaining in the list of "Active Operations" (even though they failed many, many hours ago). I'm retrying the batch upload; I'll try to generate the flog files if it errors out while I'm nearby.
(The fact that uploads may fail occasionally on networks like I2P or Tor isn't unusual; that they're still shown as active long after failing is unusual, and it's not something I saw with 1.8.x.)
I don't know how to best explain this (it may be related to the regression being tackled in #1628), but:
During my attempted re-upload, my system became really sluggish. After what seemed like an eternity I managed to get "htop" running. The system load was at 35 with the tahoe process using ~1.2GB of RAM.
Unfortunately, after 15 minutes or so of trying to get back to a prompt (or ssh in), I grew impatient and hit the power switch.
It may not be very helpful but attached is a snippet of twistd.log from that period. The fun started at 18:43:28 UTC. The logging stopped at 18:44:48 UTC. The attached file has 49679 lines.
There isn't anything in ~/.tahoe/logs/incidents from that time.
I haven't seen any "Unhandled error in Deferred:" entries in twistd.log from my use of 1.8.x.
Attachment twistd.log.old.xz (1992 bytes) added
I'm promoting this to priority critical, adding the keyword regression, and putting it into the Milestone 1.10.0.

Replying to killyourtv:
I can't see any evidence that it is the same bug. What makes you suspect that it is?
(#1583 seems to be the same bug as #1628, but they have different symptoms to this one.)
Ok, I think we have two things going on here:
notify_server_corruption needs to check if the rref it gets is None before invoking callRemoteOnly on it. This was probably exposed by my IServer refactoring, since before that, the servermap was holding actual RemoteReference objects, and now it just holds IServer objects (from which you can get a RemoteReference, if it is still connected, with .get_rref()).

Zooko is going to write a patch for the first problem. Brian will walk through the code to fix the second problem.
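(A minimal sketch of the guard described above, assuming a notify_server_corruption method shaped roughly like the mutable retrieve code; the signature and helper names are assumptions, not the actual trunk code:)

```python
# Hedged sketch only: names and signature are assumptions, not trunk code.
def notify_server_corruption(self, server, shnum, reason):
    rref = server.get_rref()
    if rref is None:
        # The server has disconnected; there is nobody to notify, and
        # silently dropping the report beats an unhandled Deferred error.
        return
    # Fire-and-forget corruption advisory (Foolscap's callRemoteOnly
    # returns None and swallows errors).
    rref.callRemoteOnly("advise_corrupt_share", "mutable",
                        self._storage_index, shnum, reason)
```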
I believe the current state of this ticket is blocked on someone verifying that it occurs with upstream Tahoe-LAFS (no i2p-specific patches or other patches that aren't in trunk). Is that right?
killyourtv appears to be a reliable reporter of bugs so far -- his reports are low on speculation and include actual facts like the twistd file...
Okay, just had a conversation with Brian and David-Sarah on IRC and it seems clear that there is a bug in [the code for mutable files]source:git/src/allmydata/mutable/retrieve.py?annotate=blame&rev=5bf1ffbc879cf08253eb695d8b0742629f26cb84#L936 inasmuch as it assumes that the [server object]source:git/src/allmydata/storage_client.py?annotate=blame&rev=5bf1ffbc879cf08253eb695d8b0742629f26cb84#L149's .rref member will be an object, but instead it may be None due to normal, inevitable network entropy ([here]source:git/src/allmydata/storage_client.py?annotate=blame&rev=5bf1ffbc879cf08253eb695d8b0742629f26cb84#L254).

There are two possible fixes:
1. Change the callers of .get_rref to make sure they sensibly handle the case that they get None.
2. Keep the .rref even after it has been notified that the connection has been disconnected; that object will raise DeadReferenceError if it is used.

I'll investigate both approaches and produce a patch for one or each of them.
I've cleaned up the control flow in retrieve.py, in a series of 6 patches ending in changeset:893eea849ba7cf78 / changeset:8db39f0e. I still need to do one more careful inspection, but I think it's pretty solid now.
On the .get_rref front, I had a thought. Since my long-term goal with IServer is to remove knowledge of server protocols from the upload/download code and move it into the IServer class (to facilitate having multiple backends), why not make it IServer.callRemote(..) instead of IServer.get_rref().callRemote(..)? The current IServer class would just do:
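(A plausible minimal sketch of the delegation being proposed; the class and attribute names here are assumptions, not the actual storage_client code:)

```python
# Sketch only: stands in for the concrete storage_client server class;
# attribute names are assumptions.
class IServerSketch:
    def __init__(self, rref):
        self.rref = rref  # the underlying Foolscap RemoteReference

    def callRemote(self, methname, *args, **kwargs):
        # Delegate straight to the RemoteReference, so upload/download
        # code never has to call .get_rref() itself.
        return self.rref.callRemote(methname, *args, **kwargs)
```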
It'd be easy to grep for that callRemote later, when it's time to hoist the abstraction and turn those calls into putShare or whatever.

Replying to [davidsarah]comment:5:
Sorry, just saw this reply. :|
It may have been stupid speculation on my part. I don't remember why I suspected it might be. Disregard (as you likely have by now).
Replying to zooko:
[...]
[...]
That was until comment:86749 above. :| Sigh.
Anyhow... I'm back testing with the latest git revision as of a few days ago (the webui shows 1.9.0.post77, rev 2da8339dd762aa96e73b1c1d98e217df43802418).
Once again, patches from #1007, #1010, and #68 have been applied. Everything, so far, seems to be working fine. Or...I should say that things are much better than they had been. It's been running for ~36 hours now.
The only problems I've had (a couple error 500s during upload or repair) can probably be attributed to our not-always-as-stable-as-we'd-like grid. When the failures occurred the tasks remained on the status page, such as
These entries are from around two hours ago (it is currently just after 1am UTC). These tasks are no longer running but they're still in the "Active Operations" section.
This is the only entry in twistd.log from around that time:
If I'm around if/when it happens again I'll generate an incident report.
All that said, things look much better.
Replying to [killyourtv]comment:11:
Hrm. I don't think bad behavior on the part of the servers should ever cause a 5xx error code from the gateway. Maybe I'm wrong about that, but at the very least the gateway should emit a clear explanation of what went wrong. Please report any 5xx errors you see so we can figure out if they actually represent bugs in your gateway or if they represent bad behavior on the part of the servers. In the latter case, the gateway ought to explain clearly in the error message what went wrong.
Replying to [zooko]comment:12:
[...]
Filed #1675.
comment:86758 seems to suggest that this is fixed. Is that right? Do we have adequate tests for the original problem?
Wait, I'm confused. The description says that the error occurs on "upload" (i.e. publish), but Brian's patches mentioned in comment:86758 only changed the retrieve code.
For what it's worth, I cannot reproduce this anymore (current git) so I think this may have indeed been fixed.
Everything has been working wonderfully during the last week that I've been on this version. Thanks for all of your (collective) work on this. :)
Thanks for the update (comment:86767), kytv. I hope someone who understands the code better than I can tell us how likely it is that some specific patch or set of patches fixed this specific problem. If not, I'll try to dig into the question myself at some point. Adding the keyword "review-needed" to indicate that we need someone to look into the source code and try to connect this particular issue with some specific source code changes.
Just to explain why I re-opened this ticket:
While having a report from a user like kytv, who actually experiences the bug in the wild, is extremely valuable, and I would prefer not to close a ticket without such a report, also having a source-code-oriented explanation of what the bug was and how it was fixed is something I would prefer not to close a ticket without.
Bugs are sneaky things, and I don't believe they are really gone until I see the bodies.
OK. Should I refrain from closing bugs (that I reported) in the future? (I don't want to create "more work" for others--in fact, I was trying to help by closing it. ;))
As I implied in comment:86766, I don't think the patches warner mentioned in comment:86758 can have fixed this bug, since the bug occurs on upload, not retrieve. I don't know of any other changes that would have affected it, so it probably still exists.
killyourtv: in general closing a bug with "cannot reproduce" should be done with great reluctance, since bugs don't fix themselves, and we like to know what change fixed something. That resolution status is really intended for the case where the relevant code has changed so much since the bug was reported that the original report is probably no longer relevant. It's fine to close bugs after having verified that a particular fix works, though.
Okay since David-Sarah says (per comment:86775) that the patches mentioned in comment:86758 can't have fixed this issue, I'm removing the review-needed tag. (I was reading [/~marlowe/TWN29.html the most recent Tahoe-LAFS Weekly News] and saw this ticket listed as being one of the review-needed tickets.)

killyourtv: yeah, one practice that we have in this project which is different from a lot of other projects is that we focus on reducing uncertainty about bugs. A lot of projects promote a sort of vague notion of "degree of bugginess" with associated folklore, like "Oh yeah, version 13.1.6 was way more crashy than version 13.1.3, but then version 22.0.7 was way more stable...".
I prefer for our collective understanding of bugs to be as specific and reproducible as possible. Therefore, since you definitely spotted a bug originally (thank you!) and since we don't have a specific reason to believe that the bug is fixed, we should leave this ticket open.
Thanks again for your good bug reports!
Replying to davidsarah:
Look at the traceback: the exception happens in Retrieve. I imagine that the WUI operation was to upload a file and add it to the directory, so a Retrieve operation occurs when it fetches the previous contents of the directory (so it can modify and publish the new contents). No (immutable) Upload code was implicated in the traceback.
I gather zooko has some specific idea in mind for how this bug can be closed, handing it off to him.
Replying to [warner]comment:24:
Ah, I see.
OK, I agree that the patches referenced in comment:86758 address the problem in the traceback. However, I don't see any regression tests for that problem.

Summary changed from "Unhandled error in Deferred while uploading (current darcs build)" to "Unhandled error in Deferred during retrieve".

I don't think the patches referenced in comment:86758 fix this bug. I read the patches and don't see a fix for this. The path that led to this exception is still in place in trunk. That is: IServer.get_rref() can return None (because the connection-lost handling in storage_client.py sets self.rref to None), and the send-corruption-report-to-server functionality doesn't test for None. This is an instance of a problem that appears in many places in trunk; the problem and some proposed solutions are described in comment:86756, comment:86757, and comment:11.
Attachment dead-rref.diff (3048 bytes) added
retain rref even after disconnect
dead-rref.diff implements zooko's second approach from comment:11, namely to make the IServer retain the dead RemoteReference after disconnect, and continue to return it to anyone who calls get_rref(). This removes the need for callers to guard against receiving a None (as long as the server was connected at least once, which is always the case for servers returned by get_servers_for_psi(), which is how all upload/download code gets servers). This patch should remove the problem. A later cleanup step could go through the callers of get_rref() and remove the now-redundant guards, but a better change would be to remove get_rref() and have callers do server.callRemote(), which will then be replaced by higher-level actions like server.putShare() as we move away from Foolscap.

Will review.
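(A minimal sketch of the retained-dead-rref idea described above; the class and method names are assumptions, not the actual storage_client.py code:)

```python
# Sketch only: illustrates retaining the dead RemoteReference instead of
# clearing it on disconnect. Names are assumptions, not trunk code.
class StorageServerSketch:
    def __init__(self):
        self.rref = None  # set on first connection

    def _got_connection(self, rref):
        self.rref = rref

    def _lost(self):
        # Deliberately do NOT reset self.rref to None. The dead
        # RemoteReference stays in place, and any later callRemote()
        # on it raises DeadReferenceError, which callers already
        # know how to handle.
        pass

    def get_rref(self):
        # May return a dead reference, but never None once the server
        # has been connected at least once.
        return self.rref
```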
I reviewed attachment:dead-rref.diff. I see no bugs in it, it is a small patch, and I (still) like the idea of doing it that way. It does make me mildly uncomfortable that there are no unit tests in the patch.
Attachment dead-rref-test.diff (5107 bytes) added
same patch, plus a test
In the new patch (which includes a test), the assignment of node_key_s will eventually need to move out of the conditional init_storage(), which only happens if the node is running a storage server. (Client-only nodes will get a node-key too.) But that fix can happen after 1.10, and will happen naturally as part of the client-side accounting work.

I reviewed dead-rref-test.diff and approve it for 1.9.2. Will apply.
In changeset:1b07d307619049dd:
It looks like this test failure is in the new test:
I traced this to a bug in foolscap, foolscap#196, and fixed it. I'll try to make a new release this weekend, and we can bump tahoe's dependency if you like. (it should only affect tests).
(The bug was that Tub.stopService was shutting off Reconnectors by iterating over the self.reconnectors list, but the Reconnector shutdown process was removing items from that list during the iteration. The shared-state hazard resulted in some Reconnectors being dropped, so their timers weren't shut down, so one was still running at the end of the test, triggering the DirtyReactorError.)

In changeset:3ab0f33413de23dd:
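(A small illustration of the mutate-while-iterating hazard described above, and the usual fix of iterating over a snapshot; this is a sketch, not the actual foolscap change:)

```python
# Sketch only: mutating a list while iterating over it skips elements.
class TubSketch:
    def __init__(self, reconnectors):
        self.reconnectors = reconnectors

    def stopService(self):
        # Buggy pattern: each stopConnecting() removes the reconnector
        # from self.reconnectors, so the iteration skips its neighbours
        # and their timers are never shut down:
        #
        #     for rc in self.reconnectors:
        #         rc.stopConnecting()
        #
        # Fixed pattern: iterate over a copy so removals from the
        # original list cannot perturb the loop.
        for rc in list(self.reconnectors):
            rc.stopConnecting()
```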
Review needed for fix-1636-for-1.9.2.darcs.patch .
warner: I took your suggestion to compare self.c0.nodeid and s.get_serverid(), which seems to work. Please test with your candidate foolscap 0.6.4 release.

In changeset:3ab0f33413de23dd:
In changeset:323774698ee0a2e8:
Attachment fix-1636-for-1.9.2.darcs.patch (106553 bytes) added
[for 1.9.2, rebased] After a server disconnects, make the IServer retain the dead RemoteReference, and continue to return it to anyone who calls get_rref(). This removes the need for callers to guard against receiving a None (as long as the server was connected at least once, which is always the case for servers returned by get_servers_for_psi(), which is how all upload/download code gets servers). Includes a test.
In changeset:5520/1.9.2:
In changeset:5882/cloud-backend: