downloader/share: coalesce multiple Share.loop() calls #1268

Closed
opened 2010-11-23 23:36:04 +00:00 by warner · 5 comments

As alluded to in [comment:79607](/tahoe-lafs/trac-2024-07-25/issues/1170#issuecomment-79607), the behavior of
src/allmydata/immutable/downloader/share.py (in the Share class)
should be improved. Each time a data-fetch response arrives from the
server (either for a data block or for a couple of hash nodes), a call
to Share.loop is queued. When run, loop recomputes the
desire/satisfaction bitmap (a Spans instance), which takes a
nontrivial amount of time (although much less than it used to, before
some of the issues in #1170 were fixed).

The problem is that we're recomputing it far too many times, and most of
those recomputations are redundant. For example, suppose we get responses
to 4 requests in a big bunch (because they all arrived in the same network
packet). Foolscap will queue an eventual-send for the response Deferreds
all on the same reactor turn. When those fire, they'll update the
what-data-we've-received DataSpans and then queue a call to
loop(). All four messages will be processed before the first call to
loop() occurs.

The first loop() pass will process the received spans, deliver the
completed block to the caller that asked for it, and then update the
desire/satisfaction bitmaps in preparation for whatever the next block
might be. The second loop() pass will recompute the bitmaps, even though
no changes have occurred (no new blocks were requested, no new responses
received). The third and fourth loop() passes will do the same.

[viz-3.png](/tahoe-lafs/trac-2024-07-25/attachments/000078ac-1c90-b882-d4ac-abd15acaa03c) shows a bit of it: the pale yellow
candlestick-like shapes (just below the "misc" label) are the calls to
measure satisfaction and recompute the desire bitmap. The first call
spends a lot of time in satisfy() (to deliver the completed block), and
then a moderate amount of time in desire() (to figure out what we need
next). The second and third calls both spend a tiny amount of time in
satisfy() (since there are no completed blocks ready to be delivered),
but then spend the same moderate amount of time in desire() recomputing
the same bitmap as before.

This gets worse when there are a lot of loop() calls queued, which means
it gets worse when there are lots of (small) hash-node requests coming
back. As a result, the amount of wasted time is largest just after the
segments that trigger the most hash-node requests. This distribution is an
interesting function (probably one of those exciting sequences in the
[OEIS](http://oeis.org/)), but basically it's zero for all odd-numbered
segments, and grows as you cross over into higher branches of a binary
tree (so the max is for seg0, the next highest is for NUMSEGS/2, the third
highest is for NUMSEGS/4 and 3*NUMSEGS/4, etc).

[180c2-viz-delays.png](/tahoe-lafs/trac-2024-07-25/attachments/000078ac-1c90-b882-d4ac-9ef836afe651) shows this variation: the
time between the end of the last pink block() request for any given
segment, and the end of the segment() request, is mostly filled with
this wasted recomputation. The time spent there is larger for some
segments than others. In the example chart, the largest set of block
requests (for the NUMSEGS/2 midpoint segment) has about 70ms of wasted
time, whereas the very next segment (NUMSEGS/2+1, which needs no extra
hash nodes) only spends about 10ms in that phase.

The total amount of time wasted scales (linearly, I think) with NUMSEGS.
It should also scale linearly with 'k', since there's a separate Share
instance (and thus a whole set of these queued loop() calls) for each
active share.

This problem is hypothesized to be involved with #1264 (slow downloads
for large k), although the rough measurements I did for #1170 suggest
that the time spent is too small (8% for a local one-CPU download of a
12MB file) to explain #1264.

The fix will be to have Share refrain from queueing redundant
calls to loop():

  • factor out all the places that do eventually(self.loop) into a
    single schedule_loop() function (see the sketch below)
  • change schedule_loop() to check a flag:
    • if unset, set the flag and call eventually(self.loop)
    • if set, do nothing
  • clear the flag at the start of loop()
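
A minimal sketch of that flag-based coalescing (the mixin shape and the
`_loop_scheduled` attribute name are illustrative assumptions, not the actual
Share code; the import path is Foolscap's public API):

```
from foolscap.api import eventually

class CoalescingLoopMixin:
    """Queue at most one eventual-send of loop() per batch of responses."""
    _loop_scheduled = False

    def schedule_loop(self):
        # every place that previously did eventually(self.loop) calls this instead
        if self._loop_scheduled:
            return                      # a loop() pass is already queued
        self._loop_scheduled = True
        eventually(self.loop)

    def loop(self):
        # clear the flag first, so responses arriving while this pass runs
        # can queue exactly one more pass
        self._loop_scheduled = False
        self._do_loop()                 # the existing satisfy()/desire() work

    def _do_loop(self):
        raise NotImplementedError       # supplied by the real Share class
```

With something like this in place, the four responses in the example above
would queue exactly one loop() pass instead of four.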

Then we should re-run the #1264 tests to see if there's a significant
change.

warner added the code-encoding, major, defect, 1.8.0 labels 2010-11-23 23:36:04 +00:00
warner added this to the undecided milestone 2010-11-23 23:36:04 +00:00

I wrote up some notes about François's profiling results over on comment:9:[/tahoe-lafs/trac-2024-07-25/issues/6326](/tahoe-lafs/trac-2024-07-25/issues/6326). It might shed light on this ticket.
Author

Attachment mergeloops.diff (1463 bytes) added

patch to merge loops

Author

The patch is pretty simple. Manual checking on a 12MB file shows a reduction in the number of calls to `loop()` from around 1200 to about 580, and a casual check shows the download time drops from 4.2s to 4.0s, so it looks like it's doing its job.
warner modified the milestone from undecided to 1.9.0 2011-08-01 04:42:53 +00:00
davidsarah commented 2011-08-01 04:46:42 +00:00
Owner

Looks good.

warner was assigned by tahoe-lafs 2011-08-01 04:47:43 +00:00
Brian Warner <warner@lothar.com> commented 2011-08-01 15:16:38 +00:00
Owner

In changeset:c9def7697757bdae:

Coalesce multiple Share.loop() calls, make downloads faster. Closes #1268.
tahoe-lafs added the
fixed
label 2011-08-01 15:16:38 +00:00
Brian Warner <warner@lothar.com> closed this issue 2011-08-01 15:16:38 +00:00