memory leak on 'tahoe get' #1910
When trying to retrieve a file either via sshfs or via 'tahoe get', I seem to have a memory leak. Every 'tahoe get' on a 350 MB file seems to grow the memory usage of the tahoe daemon by about 20 MB.
'myfile' is an immutable file according to the web interface. The sshfs mount for tahoe is unmounted, so these 'tahoe get's should be the only tahoe commands invoked during that time.
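The measurement amounts to a loop along these lines (a sketch only; the node directory ~/.tahoe, its twistd.pid file and the path 'tahoe:myfile' are assumptions for illustration, not the exact commands used):

    # Sketch: repeat the retrieval and record the daemon's memory after each round.
    PID=$(cat ~/.tahoe/twistd.pid)
    for i in $(seq 1 10); do
        tahoe get tahoe:myfile > /dev/null
        # %mem of the tahoe daemon after this retrieval
        echo "round $i: $(ps -o %mem= -p "$PID") %mem"
    done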
Versions (as provided by Debian wheezy):
Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.32-1 x86_64 unknown unknown GNU/Linux
Encoding parameters: shares.needed = 1, shares.happy = 2.
Does this pattern continue for more than, say, 200 MB without freeing any memory? Garbage collection doesn't necessarily happen immediately, so we need to exclude the possibility that it's just "floating garbage" rather than a leak.
T_X: could you please do this experiment? Keep iterating the 'tahoe get's and see if the memory usage keeps rising indefinitely or if it levels off. I'm guessing it is the latter, which means that this isn't a "memory leak" per se, but it might still be a problem if the level at which it levels off is too high. Please report back! Also, what sort of machine is that? Consider pasting in the output (or part of the output) of 'python misc/build_helpers/show-tool-versions.py'.
Re-assigning this ticket to T_X.
Attachment show-tools-versions.log (2979 bytes) added
Attachment output (6217 bytes) added
OK, I temporarily increased the RAM of this VM instance from 256 MB to 1.5 GB and redid the tests with 50 'tahoe get's. You are right: after almost exactly 10 rounds (~200 MB) the memory consumption stops increasing as quickly as before.
However, it looks like it still increases linearly, by about 0.6 MB per 'tahoe get', with each 'tahoe get' taking 2 minutes. At least that was the case during these 50 rounds, which took 92 minutes in total; I am not sure whether this curve will stop increasing after the next n rounds.
I also verified that these 0.6 MB per 2 minutes are due to the 'tahoe get' invocations: four hours after this test run, the memory consumption is still at 18.1%.
Also see the attached files for details.
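For what it's worth, the per-invocation growth can be estimated from such a log with an awk one-liner along these lines (the two-column "round %mem" file memlog.txt and the 1536 MB of RAM are assumptions for illustration):

    # memlog.txt: one line per round, "round-number %mem-of-daemon";
    # converts the %mem difference between the first and last sample into MB
    # (out of 1536 MB RAM) and divides by the number of rounds in between.
    awk 'NR == 1 { first = $2; firstround = $1 }
         { last = $2; lastround = $1 }
         END { printf "%.2f MB per round\n",
               (last - first) / 100 * 1536 / (lastround - firstround) }' memlog.txt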
The section of the graph after 11 invocations does indeed appear to show a 0.6 MB-per-invocation memory leak. 'ps -o %mem' is based on virtual size, though, and we prefer to measure RSS. Can you repeat the test using 'ps -o rss'?
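A minimal sketch of that re-run, again assuming the node directory is ~/.tahoe and that 'tahoe:myfile' is the file being fetched:

    PID=$(cat ~/.tahoe/twistd.pid)
    for i in $(seq 1 50); do
        tahoe get tahoe:myfile > /dev/null
        # resident set size in kilobytes, directly comparable across rounds
        echo "round $i: $(ps -o rss= -p "$PID") kB"
    done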