cloud backend: redundant reads of chunks from cloud when downloading large files #1885
Reference: tahoe-lafs/trac-2024-07-25#1885
I uploaded a 7.7 MiB video as an MDMF file using the cloud backend on S3 (as of 1819-cloud-merge/022796fb), and then downloaded it. From `flogtool tail`ing the storage server, I saw that it was reading the same chunks multiple times during the download. That suggests that the chunk cache is not operating well enough. The file was being downloaded by playing it as a video in Chromium; I don't think that makes a difference.
Update: this also applies to immutable files if they are large enough.
During the upload and download, the server memory usage didn't go above 50 MiB according to the statmover graph.
Same behaviour for a straight download, rather than playing a video. Each chunk seems to get read 5 times, and the first chunk (containing the header) many more times.
Changed title from "cloud backend: redundant reads of chunks from S3 when downloading large MDMF file" to "cloud backend: redundant reads of chunks from cloud when downloading large MDMF file".

I changed `ChunkCache` to use a true LRU replacement policy, and that seems to have fixed this problem. (LRU is not often used because keeping track of ages can be inefficient for a large cache, but here we only need a cache of a few elements. In practice 5 chunks seems to be sufficient for the sizes of files I've tested; will investigate whether it's enough for larger files later.)

Changed title from "cloud backend: redundant reads of chunks from cloud when downloading large MDMF file" to "cloud backend: redundant reads of chunks from cloud when downloading large files".

Hmm, that's an improvement, but the immutable downloader is not able to max out my downstream bandwidth -- each HTTP request is finishing before the next can be started, so we're not getting any pipelining. (I am getting ~1 MiB/s and should be getting ~1.8 MiB/s.)
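For illustration, here is a minimal sketch of the kind of small, bounded LRU chunk cache described in the comment above, built on Python 3's `collections.OrderedDict`. The class name, the `fetch` callback, and the capacity of 5 are assumptions for the example; this is not the actual `ChunkCache` interface.

```python
from collections import OrderedDict

class SmallLRUChunkCache:
    """Cache a handful of chunks, evicting the least recently used.

    Illustrative sketch only; the real ChunkCache interface may differ.
    """
    def __init__(self, fetch, capacity=5):
        self._fetch = fetch           # callable: chunk_index -> bytes
        self._capacity = capacity
        self._chunks = OrderedDict()  # chunk_index -> bytes, oldest first

    def get(self, index):
        if index in self._chunks:
            # Cache hit: mark this chunk as most recently used.
            self._chunks.move_to_end(index)
            return self._chunks[index]
        # Cache miss: fetch from the cloud container and insert.
        data = self._fetch(index)
        self._chunks[index] = data
        if len(self._chunks) > self._capacity:
            # Evict the least recently used chunk (front of the dict).
            self._chunks.popitem(last=False)
        return data
```

The point of true LRU here is that the frequently touched header chunk stays resident while the sequential data chunks cycle through the remaining slots, which matches the observation above that the first chunk was read many more times than the others.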
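The pipelining fix itself is not shown in this ticket. As a sketch of the general idea, assuming a Twisted-style `get_chunk(index)` that returns a Deferred (both the name and the window size are hypothetical), keeping a bounded window of requests in flight overlaps each request with its predecessors' round trips instead of waiting for each response before issuing the next:

```python
from twisted.internet import defer

def download_chunks(get_chunk, num_chunks, window=4):
    """Fetch chunks with up to `window` HTTP requests in flight.

    get_chunk(i) must return a Deferred firing with the chunk bytes.
    Returns a Deferred firing with the list of chunks, in order.
    """
    sem = defer.DeferredSemaphore(window)

    def fetch(i):
        # run() acquires the semaphore, calls get_chunk(i), and
        # releases the semaphore when that Deferred fires.
        return sem.run(get_chunk, i)

    # Start all requests now; the semaphore bounds concurrency, so a
    # new request is issued as soon as an earlier one completes rather
    # than after each response has been fully received.
    return defer.gatherResults([fetch(i) for i in range(num_chunks)])
```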
Milestone renamed ("renaming milestone").

Moving open issues out of closed milestones.
The established line of development on the "cloud backend" branch has been abandoned. This ticket is being closed as part of a batch-ticket cleanup for "cloud backend"-related tickets.
If this is a bug, it is probably genuinely no longer relevant. The "cloud backend" branch is too large and unwieldy to ever be merged into the main line of development (particularly now that the Python 3 porting effort is significantly underway).
If this is a feature, it may be relevant to some future efforts - if they are sufficiently similar to the "cloud backend" effort - but I am still closing it because there are no immediate plans for a new development effort in such a direction.
Tickets related to the "leasedb" are included in this set because the "leasedb" code lives on the "cloud backend" branch and is fairly well intertwined with it. If there is interest in changing the lease implementation at some future time, that effort will essentially have to be restarted as well.