unit test failure: failed to download file with 2 shares on one server and one share on another #1191
For example in this run:
http://tahoe-lafs.org/buildbot/builders/hardy-amd64/builds/687/steps/test/logs/stdio
The download should have succeeded: server 0 was supposed to have shares 0 and 2, and server 1 was supposed to have share 1.
source:trunk/src/allmydata/test/test_hung_server.py@4661#L197
This test failure is nondeterministic. The first step is probably to understand why that is and make it fail every time.
This diff seems to make the test go green every time. I'll leave it running overnight to be sure:
No, that wasn't it. Even with that diff the test occasionally goes red.
Okay, this patch makes it go red every time:
When I was looking at the behavior of the tests when it was non-deterministic, I noticed that there are two ways that this particular test could turn out: either the first server to respond claimed 2 shares and the second claimed 1 share, or the first server to respond claimed 1 share and the second claimed 2 shares. I observed that the download always failed in one case and always succeeded in the other. So I hypothesize that maybe the download logic which is trying to achieve diversity isn't realizing that it can use the newly arrived DYHB response for more than one share, or else it isn't realizing that it can use the previously arrived DYHB response for more than one share.
Attachment test_2_good_8_broken_copied_share.txt (10638 bytes) added
As a first step, the failing test was run with verbose logging; see attachment:test_2_good_8_broken_copied_share.txt.
I don't really know why the following patch makes the test succeed (and breaks two others), but it might give some insight into the actual bug. Unfortunately, I probably won't find time to work on this during the next few days.
Okay, I think I understand the bug now. [ShareFinder._deliver_shares()]source:trunk/src/allmydata/immutable/downloader/finder.py@4707#L217 calls
eventually(self.share_consumer.got_shares, shares)
and then in the next tick (before got_shares() has been executed) [ShareFinder.loop()]source:trunk/src/allmydata/immutable/downloader/finder.py@4707#L89 runs, gives up, and aborts the download, because it isn't aware of the shares that were going to be delivered to it in a subsequent tick.
Oh, yeah, that's a bug. I think the proper behavior would be for the call to
self.share_consumer.no_more_shares()
to use the same eventual-send that the call to self.share_consumer.got_shares() uses, to make sure the no_more_shares cannot race ahead of the got_shares call. This bug probably appeared during the diversity-seeking patch: part of that patch was to deliver all shares as quickly as possible, instead of trickling them out one share at a time.
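A minimal sketch of that proposal (illustration only, not necessarily the exact text of 1191-fix.diff; finder.py already imports foolscap's eventually, since it uses it for got_shares above):

# Inside ShareFinder.loop, at the point where the finder concludes it will
# never find more shares. Previously the consumer was told synchronously:
#
#     self.share_consumer.no_more_shares()
#
# which lets it act on "no more shares" ahead of a got_shares call already
# sitting on the eventual queue. The proposal is to queue the notification
# instead, so it is delivered behind any got_shares that _deliver_shares()
# has already scheduled:
eventually(self.share_consumer.no_more_shares)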
I'll think about what the unit test for this should look like... something that fails consistently without the patch, of course. Maybe just a way to make test_2_good_8_broken_copied_share behave consistently.
Attachment 1191-fix.diff (644 bytes) added
fix for the bug, but lacks a specific unit test
That patch definitely fixes the problem (consistently, thanks to the recent patch to test_hung_server.py which makes the share order consistent). But I'd like to have a test case which specifically exercises this code path to go along with the fix.
To believe that 1191-fix.diff fixes this issue, I think I would have to believe that eventually(A); eventually(B), where A does eventually(C), will always execute in the order A; B; C and never A; C; B. Is it true that this fix requires that property?
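Assuming foolscap's eventual-send queue is FIFO (which the analysis later in this ticket relies on), the ordering does hold: C is appended to the queue after B is already there, so it runs after B. A self-contained model of that ordering (a plain deque standing in for foolscap's eventually, not the real reactor-driven implementation):

from collections import deque

_queue = deque()

def eventually(f, *args, **kwargs):
    # model of foolscap's eventual-send: append to a FIFO queue
    _queue.append((f, args, kwargs))

def flush():
    # run queued calls in FIFO order, including calls queued along the way
    while _queue:
        f, args, kwargs = _queue.popleft()
        f(*args, **kwargs)

order = []

def A():
    order.append("A")
    eventually(C)   # C lands behind the already-queued B

def B():
    order.append("B")

def C():
    order.append("C")

eventually(A)
eventually(B)
flush()
assert order == ["A", "B", "C"]   # never A, C, B with a FIFO queue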
It would be nice if the fix to this bug were more "local", so that one could see by inspecting the source that the state was kept consistent. That is: isn't the problem here that a tick updates the ShareFinder's state to remove the last request from pending_requests, but does not update the state to add the newly discovered share? And then loop runs between that tick and the later tick which is going to add the resulting share with got_shares? Wouldn't it be better if there was never a time in between ticks where one of those bits of state had been updated and the other had not?
If I understand correctly, this patch leaves it the case that one tick updates pending_requests and a subsequent tick calls got_shares, but prevents the problem by ensuring that loop doesn't get run between those two ticks. This seems a bit fragile, but I guess as long as we have a robust test case for it then we'll know if it regresses. Maybe we could add some sort of internal consistency-check assertion to the effect that, um, loop never gets called after _request_retired but before _got_shares?
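One hedged sketch of such an assertion (the flag _retired_but_undelivered and the method bodies below are invented for illustration; they are not in the real ShareFinder):

# Sketch: remember when a request has been retired but its shares have not
# yet been handed to the consumer, and assert in loop() that we never run
# inside that window.
def _request_retired(self, req):
    self._retired_but_undelivered = True      # invented bookkeeping flag
    self.pending_requests.discard(req)

def _deliver_shares(self, shares):
    def deliver():
        self._retired_but_undelivered = False
        self.share_consumer.got_shares(shares)
    eventually(deliver)

def loop(self):
    assert not getattr(self, "_retired_but_undelivered", False), \
        "loop() ran after a request was retired but before its shares were delivered"
    # ... existing loop body ...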
Okay, here is a log created by instrumenting ShareFinder to log each method call and DownloadNode to log got_shares and no_more_shares. This is without 1191-fix.diff. (Sorry about the wide lines there.)
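For reference, this kind of instrumentation can be done with a small wrapper that logs each call before delegating to the wrapped method; a generic sketch using the standard library (not the actual instrumentation used to produce the attached log):

import functools
import logging

log = logging.getLogger("downloader.instrumentation")

def log_calls(method):
    # wrap a method so every call is logged with its arguments
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        log.debug("%s.%s args=%r kwargs=%r",
                  self.__class__.__name__, method.__name__, args, kwargs)
        return method(self, *args, **kwargs)
    return wrapper

# usage sketch: monkey-patch the methods of interest, e.g.
#   ShareFinder.loop = log_calls(ShareFinder.loop)
#   DownloadNode.got_shares = log_calls(DownloadNode.got_shares)
#   DownloadNode.no_more_shares = log_calls(DownloadNode.no_more_shares)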
So: at local#231 23:01:59.969 the request is retired, but the resulting eventual got_shares won't happen until local#255 23:01:59.977, which is shortly after the loop at local#247 23:01:59.973 that said "no_more_shares, ever". That set a flag named _no_more_shares in the SegmentFetcher, so that the next time SegmentFetcher._do_loop ran, it gave up and said "ran out of shares" at local#253 23:01:59.976.
Now, 1191-fix.diff makes it so that when loop decides "no_more_shares, ever", it sets an eventual task to set the _no_more_shares flag in SegmentFetcher instead of doing it immediately. Is this guaranteed to always prevent this bug? I guess it is, because the _request_retired (local#231 23:01:59.969) is done immediately, and during that same tick the got_shares (local#255 23:01:59.977) is put on the eventual queue, so when the setting of _no_more_shares is put on the eventual queue it will always take effect after the got_shares does. Okay.
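That argument can be replayed with the FIFO model from earlier in this ticket: by the time loop() decides "no more shares, ever", got_shares is already on the eventual queue, so anything loop() enqueues afterward necessarily runs after it. A sketch (model code only; the fake class and share names are illustrative, not the real downloader):

from collections import deque

_queue = deque()

def eventually(f, *args):
    _queue.append((f, args))

def flush():
    while _queue:
        f, args = _queue.popleft()
        f(*args)

events = []

class FakeConsumer:
    def got_shares(self, shares):
        events.append(("got_shares", shares))
    def no_more_shares(self):
        events.append(("no_more_shares",))

consumer = FakeConsumer()

# the DYHB response arrives: the request is retired immediately and the
# shares are scheduled for delivery, as _deliver_shares does
eventually(consumer.got_shares, ["sh0", "sh2"])

# later, but before got_shares has run, loop() concludes nothing is pending:
#   buggy version:  consumer.no_more_shares()   # fires ahead of got_shares
#   fixed version:  queue it behind the pending delivery instead
eventually(consumer.no_more_shares)

flush()
assert events == [("got_shares", ["sh0", "sh2"]), ("no_more_shares",)]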
But this still feels fragile to me. For example, after we apply 1191-fix.diff, if someone were to change the code of ShareFinder._got_response so that it invoked _deliver_shares eventually instead of immediately, or if they were to change _deliver_shares so that it invoked DownloadNode.got_shares eventually instead of immediately, or if they were to change DownloadNode.got_shares so that it updated its _shares data structure eventually instead of immediately, then that would reintroduce this bug.
It would feel nicer to me if we could update both the ShareFinder.pending_requests data structure and the DownloadNode._shares data structure in the same immediate call, so that there is no tick that begins with those two data structures in a mutually inconsistent state (with the request removed from the former but the share not yet added to the latter).
Okay, now I'll try to make a narrow test case of this issue.
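A hedged sketch of that alternative shape (the method signature and the _create_share_objects helper are hypothetical stand-ins; the real attempt is do-retire-and-got-share-in-same-tick.dpatch.txt, attached further down, and it ran into other problems):

# Sketch: the response handler updates both pieces of state in one tick, so
# no tick can ever observe "request retired but shares not yet recorded".
def _got_response(self, buckets, server, req):
    self._request_retired(req)                    # drop it from pending_requests
    shares = self._create_share_objects(buckets, server)   # hypothetical helper
    self.share_consumer.got_shares(shares)        # node records them in _shares now
    eventually(self.loop)                         # any give-up decision comes later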
Okay, I've written a test of ShareFinder in which everything that is provided to the ShareFinder is a fake/mock thing. I think that's pretty cool, and it helped me learn the code better. I have to go to sleep now, and currently the test doesn't exercise the bug, because it doesn't arrange for the ShareFinder to execute its own loop in between the final request being retired and the final share being added to the node's _shares. However, I'll attach the patch in case someone wants to play with it.
Attachment test.patch.txt (10209 bytes) added
a test of ShareFinder in which everything that is provided to the ShareFinder is a fake/mock thing. This doesn't quite succeed at exercising the bug because it doesn't arrange for ShareFinder to run its own .loop after the last request is retired and before the last share is added to the node's _shares.
Okay I wrote a test for the first part -- it goes red if finder says "no more shares ever" and then delivers another share. I started writing a test for the second part -- it should go red if fetcher, given two shares on one server and one share on another server, gives up in despair. By inspection, it looks like fetcher doesn't have that bug: source:trunk/src/allmydata/immutable/downloader/fetcher.py@4707#L128. Also maybe that test is redundant with bigger downloader tests that we already have... I'll attach the patch as it exists in my sandbox at this moment.
Attachment mockingtests.dpatch.txt (14928 bytes) added
Okay, here is a narrowly focussed unit test which gives the finder a red mark because the finder announced "no more shares, ever" and then subsequently announced a new share: changeset:56a3258ff7560af3. (Note: it may be appropriate to write a more lenient unit test which allows the finder to do that and get away with it. However, the documentation for "no more shares" should then be updated, and also I haven't yet been able to think of a unit test which is looser than this one and which still narrowly focusses on this bug.)
I thought that the following patch (that I will attach to this ticket after I post this comment) would surely fix the bug by making the retire-request and add-new-share events happen in the same tick. Alas, it causes some other tests to fail in ways that I don't understand, namely allmydata.test.test_cli.Errors.test_get.
Attachment do-retire-and-got-share-in-same-tick.dpatch.txt (2056 bytes) added
Hm, how could do-retire-and-got-share-in-same-tick.dpatch.txt cause a failure of allmydata.test.test_cli.Errors.test_get? This means there is some other dependency between state-updating events related to _request_retired() and _got_response, right?
I'm giving up for now. Help!
Okay, I applied Brian's patch 1191-fix.diff and it made [my new test]source:trunk/src/allmydata/test/test_immutable.py@4715#L63 go green, so I guess I'll commit and stop worrying about why do-retire-and-got-share-in-same-tick.dpatch.txt made allmydata.test.test_cli.Errors.test_get hang. I consider 1191-fix.diff combined with [my new test]source:trunk/src/allmydata/test/test_immutable.py@4715#L63 to be a sufficient fix for this issue.
In changeset:0475bd8e27ae8307:
By the way, there is a generalization that I could make about do-retire-and-got-share-in-same-tick.dpatch.txt compared to 1191-fix.diff. We use state machines to handle asynchronous events, so when we want some behavior which spans multiple network events or timeouts and which has state, we have to store the state somewhere and check it in the next step of the state machine. But we should really avoid the state-machine paradigm whenever possible, so if we have some behavior which spans multiple ticks within one Twisted reactor, then we should not store the state somewhere and check it in the function that we eventually run; we should instead pass that state as an argument to the eventually-run function. That's just my opinion, man.
This is not to criticize 1191-fix.diff. It is the smallest patch that fixes all the known problems, and it is what we're actually deploying in 1.8.0, but I wanted to explain why I spent time attempting (unsuccessfully) to write do-retire-and-got-share-in-same-tick.dpatch.txt instead.
A related idea that I have is: suppose you have a loop() method (we have several) which contains the core of the state machine, and suppose you have some new information or some new event to communicate to the state machine. Then do not write down the new event and trigger the state machine, like self._no_more_shares = True; eventually(self.loop), but instead extend the interface of loop() to accept this event, like eventually(self.loop, no_more_shares=True). That way people will not have to think about what happens if the state changes in other ways before that eventually(self.loop) happens, such as if there is already a different call to loop on the eventual queue. If we hadn't needed to consider such potential complications, it would have been easier for us to diagnose the issue in this ticket.
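A small sketch of the contrast between the two styles (the class names are illustrative, not the real SegmentFetcher):

from foolscap.eventual import eventually

# Style 1: record the event as instance state, then poke the state machine.
# Anything that runs before the queued loop() already sees the flag set.
class FlagStyle:
    def __init__(self):
        self._no_more_shares = False
    def notify_no_more_shares(self):
        self._no_more_shares = True
        eventually(self.loop)
    def loop(self):
        if self._no_more_shares:
            pass  # give up

# Style 2: carry the event as an argument of the eventually-run call.
# The information only takes effect in the tick that processes it, so it
# cannot leak into some other loop() call already on the queue.
class ArgumentStyle:
    def notify_no_more_shares(self):
        eventually(self.loop, no_more_shares=True)
    def loop(self, no_more_shares=False):
        if no_more_shares:
            pass  # give up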