mitigate the performance bottleneck of slow servers in download #1187
Ticket #1170 showed that server selection can have a big impact on download performance (presumably the same is true of upload performance) when the chosen servers have different effective bandwidths. For example, /tahoe-lafs/trac-2024-07-25/issues/6232#comment:94 showed an overall download bandwidth of ~90 Kbps vs ~190 Kbps depending on how many shares were taken from each of two servers.
My hypothesis is that this is mainly due to the time to download each segment being bottlenecked by the slowest server. Once we've downloaded the share(s) for a given segment from the faster servers and are waiting for the slower ones, the faster servers are left idle until the next segment.
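To make the bottleneck concrete, here is a back-of-the-envelope model (the segment size is roughly Tahoe's default maximum, but the bandwidths and k below are made-up illustrative numbers, not measurements from #1170):

```python
# Illustrative per-segment timing model: one share per server, k = 3.
SEGMENT_SIZE = 128 * 1024            # bytes per segment
k = 3                                # shares needed to reconstruct a segment
share_size = SEGMENT_SIZE / k        # bytes per share

# Hypothetical effective bandwidths, in bytes/sec.
bandwidths = {"fast1": 200_000, "fast2": 200_000, "slow": 20_000}

# With one share per server, the segment is done only when the slowest
# share arrives; the fast servers then sit idle until the next segment.
times = {name: share_size / bw for name, bw in bandwidths.items()}
print(times)                 # fast servers: ~0.22 s; slow server: ~2.2 s
print(max(times.values()))   # per-segment time ~2.2 s, set by the slow server
```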
In principle, we can use the erasure coding to mitigate this bottleneck. If k were larger than it typically is now, we could download as many shares per segment from each server as its current effective bandwidth is able to support. Since each share would then be a smaller fraction of the segment, the time wasted waiting for the last share of each segment would shrink correspondingly.
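As a sketch of what "as many shares per segment from each server as its bandwidth supports" could mean in practice, assuming we have a per-server bandwidth estimate (the allocation function and all numbers below are hypothetical, not existing downloader code):

```python
def allocate_shares(k, bandwidths):
    """Split k shares of a segment across servers roughly in proportion to
    their estimated bandwidth (largest-remainder rounding)."""
    total = sum(bandwidths.values())
    # Floor of each server's proportional allotment...
    alloc = {name: int(k * bw / total) for name, bw in bandwidths.items()}
    # ...then hand the leftover shares to the largest fractional remainders.
    leftover = k - sum(alloc.values())
    by_remainder = sorted(bandwidths,
                          key=lambda name: (k * bandwidths[name] / total) % 1,
                          reverse=True)
    for name in by_remainder[:leftover]:
        alloc[name] += 1
    return alloc

# With k = 21 and the bandwidths from the example above:
print(allocate_shares(21, {"fast1": 200_000, "fast2": 200_000, "slow": 20_000}))
# -> {'fast1': 10, 'fast2': 10, 'slow': 1}
```

With that allocation, each server finishes its part of a 128 KiB segment in roughly 0.3 s, instead of the fast servers waiting ~2 s on the slow one.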
A further optimization would be to pipeline requests across consecutive segments. That is, when we're getting near to finishing the download of shares for a segment, we can also be downloading shares for the next segment (but only the next segment) from servers that would otherwise be idle. Note that this depends on downloading a variable number of shares from each server for each segment (otherwise, the servers that finish first on one segment would also typically finish first on the next segment, so we would either have unbounded memory usage or still have to leave the servers that finish first idle some of the time).

It isn't entirely clear how large k would need to be for this approach to work. If it would need to be larger than the number of servers, then we would have to rethink the servers-of-happiness criterion. The current definition basically only credits each server for having one share, and it would have to be changed to credit each server for having multiple shares. I think this can still be given a fairly simple definition in the bipartite matching model.
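One way to phrase the generalized criterion in the bipartite matching model is to give each server a fixed number of "slots" and define happiness as the size of a maximum matching between shares and (server, slot) pairs. The following is only an illustration of that idea (not Tahoe's actual happiness computation), using Kuhn's augmenting-path algorithm:

```python
def generalized_happiness(shares_by_server, credit_per_server):
    """Size of a maximum matching between distinct shares and (server, slot)
    pairs, crediting each server for up to `credit_per_server` shares."""
    slots = [(server, i)
             for server in shares_by_server
             for i in range(credit_per_server)]
    all_shares = sorted({sh for shares in shares_by_server.values() for sh in shares})
    match = {}  # slot -> share currently assigned to it

    def try_assign(share, seen):
        # Augmenting-path step: place `share` in some slot, possibly
        # re-placing the share that currently occupies that slot.
        for slot in slots:
            server, _ = slot
            if slot in seen or share not in shares_by_server[server]:
                continue
            seen.add(slot)
            if slot not in match or try_assign(match[slot], seen):
                match[slot] = share
                return True
        return False

    return sum(try_assign(share, set()) for share in all_shares)

# Two servers holding shares of a k = 4 segment, crediting each for up to 2 shares:
layout = {"serverA": {0, 1, 2, 3}, "serverB": {2, 3}}
print(generalized_happiness(layout, credit_per_server=2))  # -> 4
```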
In the ticket description, when I say "idle", I mean not working on this download. Of course servers that are idle in that sense might be doing other useful work, but probably most Tahoe grids have very bursty download traffic, and so they often (usually?) won't be.
See also #1110, which proposes pipelining without downloading a variable number of shares from each server per segment.