Performance regression for large values of K #1264
On the grid described here, the download time of a 10 MB file encoded with 20-of-60 roughly doubles on a client running v1.8.1 compared to the same client running v1.7.1.
allmydata-tahoe: 1.8.0-r4843
allmydata-tahoe: 1.8.0c4-r4743
allmydata-tahoe: 1.7.0-r4579
Attachment timeline.zip (949761 bytes) added
Attachment 10MB-20-of-60-timeline.pdf (240768 bytes) added
Here are both a PDF and an HTML version of the download status timeline of a 10 MB 20-of-60 encoded file download.
Attachment Tahoe-LAFS - File Download Status.html (747069 bytes) added
Zooko mentioned that this bug might be a duplicate of #1180, so I modified the default number of in-flight DYHB requests from 10 to 60. The result is only a slight improvement in download time (about 1 second), so something else is probably going on.
This performance regression most probably appeared when the new downloader landed in v1.8.0 (see ticket #798). In my opinion, this issue shouldn't block the v1.8.1 release right now, but it could be addressed in a potential v1.8.2.
Works for me. I ought to get some analysis time this upcoming (US holiday) weekend. My plan is to point the JS-based visualizer at it, and measure the time spent recomputing the desire/satisfaction bitmaps/Spans, since I've observed a surprising amount of time spent in that portion of the code (it's being called much too frequently, and I think it gets worse with large `k`).
Replying to warner:
Great! I also hope to find some free time this weekend to work on that.
A quick analysis of attachment:"Tahoe-LAFS - File Download Status.html" shows that some segment requests were delayed because a single share request was taking too long. The average time for a share request during this download was about 100 ms, but it rose to a bit more than one second in a few cases such as this one.
I think that those share requests taking too long are probably due to the overall load on the grid and are probably difficult to avoid. However, it seems that segment requests are currently serialized (#1187). That means that a single slow share request will prevent the next segment download from starting, which slows down the overall file download. The bigger `k` is, the more shares the client has to download, and therefore the higher the probability of hitting a slow share request. Now, I don't really understand if and why this issue was not present in v1.7.1, because it seems that segment pipelining wasn't implemented in the previous downloader either.
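To make the "bigger k means more exposure to slow share requests per segment" point concrete, here is a small illustrative calculation (not from the ticket; the 1% slow-request rate is an assumed figure): if each share request is independently slow with probability p, a segment that needs k shares is held up with probability 1 - (1 - p)^k.

```python
# Illustrative only: probability that at least one of the k share requests
# needed for a segment is slow, assuming each request is independently
# slow with probability p (the 1% figure is an assumption, not a measurement).
p = 0.01
for k in (3, 20, 60):
    p_delayed = 1 - (1 - p) ** k
    print("k=%2d: P(segment delayed) ~ %.1f%%" % (k, 100 * p_delayed))
# k= 3 -> ~3.0%, k=20 -> ~18.2%, k=60 -> ~45.3%
```

With serialized segment fetches (#1187), any such delay pushes back the start of the next segment, so the effect accumulates over the whole download.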
Replying to francois (comment:5):
I don't understand why a few of the share requests would take ten times as long as normal. Is the delay on the client, the server, or the network? Brian hypothesized that it had something to do with how the spans data structure gets used more when K is higher. Maybe running the client (gateway, downloader) under a Python profiler would give us a clue. (David-Sarah has pointed out that tahoe has support for that; see comment:2:ticket:1267.)
You could also try running oprofile to get information about what the CPU is doing at the native-code (x86/amd64) level. You could try either or both of these on the client and on the servers.
But, if the slowdown is due to waiting for disk or for network rather than to spinning the CPU, these tools won't show it. Also, if you get unlucky and don't encounter any of these 1-second-long pauses while you are profiling, or if you encounter so few of them that the functions responsible for the delay don't accumulate much CPU time over the whole profiling run, then they could evade detection.
On the gripping hand, a good profiling result is almost always interesting. It either tells us about some functions that are wasting CPU cycles or it tells us that there are no functions that waste CPU cycles (under this load). So I would encourage you to try it.
Yes this is definitely one of the mysteries. What changed between v1.7.1 and v1.8? The addition of the spans data structure is one thing...
Oh! This would probably help:
http://tahoe-lafs.org/trac/tahoe-lafs/attachment/ticket/1170/debuggery-trace-spans.dpatch.txt
That patch adds logging every time `Share._received` gets touched, which is one of the most commonly-used spans objects. This tool reads logs like that and benchmarks how fast a given spans implementation can handle such usage: source:trunk/misc/simulators/bench_spans.py
Applying that patch to your downloader could tell us whether the `Share._received` object is getting used a lot, and then running bench_spans.py on the resulting log could tell us whether the current implementation of spans has some inefficient algorithm.
As proposed, I ran the client under the cProfile Python profiler by directly calling `twistd` with the following command, because `-n` is necessary to have the profiler output written into a file. You'll find the profile generated by downloading a 20-of-60 encoded 10 MB file and then stopping the client. I'll hopefully address the remaining points tomorrow.
Attachment tahoe.profile (535720 bytes) added
zooko said:
Actually my hypothesis is that the time spent between receipt of the block
request and the transmission of the next block request will be higher for
larger values of k. This time would not be "charged" against the individual
server: it occurs after the clock has been stopped.
This picture
(and the other pictures on #1170) shows part of this: it's the delay between
the receipt of the last block() request (e.g. at 2.60s) and the transmission
of the next request (at around 2.67s).
Zooming in
shows what's happening in that gap. If you could read the "desire" words on
the small boxes (which of course is easier when you can interactively zoom
the chart, rather than looking at a screenshot), you could see that
recomputing the desire/satisfied bitmaps is being done three times, but the
second and third passes are redundant (no new responses have arrived, so the
computed bitmap is exactly the same, so that's just wasted time).
If I remember right, the downloader was doing one redundant pass for each
hash received, and we fetch more hashes for some segments than others
(peaking for NUMSEGS/2), resulting in an uneven distribution of extra delays.
My comments in http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1170#comment:92
suggest that it's not enough to explain the delays seen here (8% on a 12MB
file on a one-CPU testnet), but it would be worse with more segments.
Okay I did a quick dump of the stats from tahoe.profile, first sorted by total time spent in each function excluding time spent waiting for functions that it called to return ("tottime"), and then sorted by total time spent in each function including time spent waiting for functions it called to return ("cumtime").
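For anyone who wants to reproduce this kind of dump from the attached tahoe.profile, the stdlib pstats module can do it; a minimal sketch (the number of rows printed here is arbitrary):

```python
# Minimal sketch: inspect the attached cProfile output with the stdlib pstats module.
import pstats

p = pstats.Stats("tahoe.profile")            # the attached profile file
p.strip_dirs()
p.sort_stats("tottime").print_stats(25)      # time spent in each function itself
p.sort_stats("cumulative").print_stats(25)   # time including callees ("cumtime")
p.print_callers("extract_stack")             # who keeps calling extract_stack?
```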
Here are my comments/questions about this:
`extract_stack` used about 5% of the CPU time, and the 4th row, `checkcache`, used about 3%. `extract_stack` is getting called 50,882 times!? Here are all of the callers which called `extract_stack` in this run:
Ah, this indicates that `twisted.internet.defer.debug=True`. Okay. I wonder how it got turned on?
`posix.stat` invoked more than a million times? Ah, `print_callers()` tells me that `posix.stat` is getting called by `checkcache`, which is getting called by `extract_stack`. So all told, around 23% of the CPU time (as measured in "tottime" terms instead of "cumtime" terms) was spent doing the debugging stuff due to `defer.debug=True`. This makes me wonder what the overall throughput would be if `defer.debug` were turned off.
Next, I turn my attention to the "cumtime" results. The `defer.debug` stuff was 25% of all CPU time. (And this is a better measure of this than the tottime result is.) What about the `_send_requests` calls, and how long do they take?
Uh-oh, I think this shows that cProfile is giving us bad data! The cumulative time spent in `_send_request`, including waiting for its callees to return, was 34.931s, and the total time spent in `_send_request` not waiting for its callees to return was 0.029s, which means that it was waiting for its callees to return for almost 35s. But the total time spent in all of its callees adds up to about 2 seconds. That doesn't make sense! :-(
Well, it is obviously time for me to go to bed. Hopefully it will make more sense when I get back to this next weekend.
One last comment: it is surprising how much time is spent in banana.py and its children! (That's the code in foolscap to serialize/deserialize things.)
Hey.
I think this might be a case of recursive calls - you end up with a cycle that is not correctly reported here (it's kind of hard, and you need cycle detection). So you end up calling children which are yourself. I would suggest using lsprofcalltree (a copy is here: http://codespeak.net/svn/pypy/trunk/pypy/tool/lsprofcalltree.py) and KCachegrind for display (it's so much easier and has cycle detection). I know installing KDE on a Mac is a pain, but there is always virtualization technology and whatnot.
Cheers,
fijal
zooko says:
Whoa, yeah, defer.debug should definitely not be turned on. We've observed
the unit test suite taking like 3x as long when it is. (last time, it was
turned on by some accidentally left-in debugging code). What's going on?
Francois: do you have any local patches which call
twisted.internet.defer.setDebugging(True)? There's nothing in the current
trunk which does that. If you run twistd with -d/--debug, it will turn on
defer.setDebugging, but from the command line you ran it doesn't look like
that's the case.
Let's see, k=20, 10MB file, assuming default 128KiB segsize, that gives 77
segments, so a total of 144 nodes in the block hash merkle tree for each
share. We fetch half of the leaves and all of the non-leaves, so about 115
nodes per share, and there are 20 shares, so about 2310 block hash nodes
total. There are also 77 data blocks per share, so 1540 of those. We also
fetch maybe a dozen other pieces (offset table, UEB, some portion of the
share hash tree and ciphertext hash tree) for each share, so maybe an extra
240 fetches there. Eyeballing that is about 4090 fetches. So, it's within an
order of magnitude of what I'd expect to see.
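As a cross-check, here is a back-of-the-envelope script mirroring the estimate above; the per-share hash-node and miscellaneous fetch counts are the figures quoted in this comment, not values derived from Tahoe's actual hash-tree code:

```python
# Rough reproduction of the fetch-count estimate above: 10 MB file, k=20,
# default 128 KiB segment size. Per-share figures are taken from the comment.
import math

k = 20
file_size = 10 * 1000 * 1000                   # 10 MB
segsize = 128 * 1024                           # default max segment size
num_segments = math.ceil(file_size / segsize)  # 77

block_hash_nodes_per_share = 115    # ~half the leaves plus the non-leaf nodes
misc_fetches_per_share = 12         # offset table, UEB, share/ciphertext hash tree pieces

total_fetches = k * (block_hash_nodes_per_share + num_segments + misc_fetches_per_share)
print(num_segments, total_fetches)  # 77 segments, ~4080 fetches -- same order of
                                    # magnitude as the 4777 _send_request calls seen
```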
Each `_send_requests` call turns into one `loop` call, yes: that's the #1268 bug. When that gets fixed, we should see 6377 `_send_requests` but far fewer calls to `loop`, maybe 20*(77+2)=1580.
Re `_send_request` and recursion: careful, there are two similarly-named functions here. `_send_requests()` (plural) loops over `_send_request()` (singular). Does this resolve the confusion?
`_send_request()` is just a single call to `callRemote`, and if we're spending a lot of time there, it means we're spending a lot of time in the Foolscap serialization code. Let's see, 4777 calls and 34.931s total, so 7.3ms per call... that feels high. The arguments being serialized are really simple (two ints), but there are a lot of Deferreds involved, so if defer.setDebugging(True) is really turned on, I'm not as surprised.
We really need a new profiling run with defer.setDebugging turned off.
Attachment tahoe-nodebug.profile (469838 bytes) added
Replying to warner:
Oh! Sorry, the previous profile was indeed generated by `twistd --debug`, my bad! Here's a new one.
Replying to francois (comment:13):
Did this one take about half as much time to download the 10 MB file? Because the profiler reports 130 seconds (which it incorrectly calls CPU seconds when it is really wall-clock seconds) for this one, and it reported about 240 for the last one. Hey, in fact, does this mean that almost all of the reported performance regression is due to François accidentally turning on debug mode? :-)
In any case, here are the total and cumulative times, showing all the ones that took more than 1% of the total time:
We can see the share.py functions in the cumulative listing, showing that we spent 13.5% of the total time inside share.py's `loop()`, but other than that there aren't any suspicious CPU users. It seems to spend most of its time waiting for the network, which is as it should be.
Replying to zooko (comment:14):
For the record, François assured me on IRC that this is not the case, that the regression from 1.7 to 1.8 is real regardless of debug mode being turned on. I guess the large difference in CPU seconds reported in the debug mode profiling vs. the no-debug-mode profiling doesn't indicate a proportionally large difference in real wall-clock time.
François: if you want to generate some fresh profiling results with your current setup and v1.7 vs. 1.8 that would be cool. We could compare the profiling results from the two versions.
Well, maybe the 13.5% of time inside share.py's `loop()` causes the overall progress to slow down significantly, as François observed from the visualizations in comment:5 and as Brian potentially explained in comment:8:ticket:1263.
Attachment atlasperf.zip (99574 bytes) added
tools for doing comparative download performance tests
Attachment atlasperf-1MB.png (60015 bytes) added
measured download speed on WAN versus encoding (1MB file) tahoe-1.8.2
Attachment atlasperf-10MB.png (53279 bytes) added
measured download speed on WAN versus encoding (10MB file) tahoe-1.8.2
Attachment atlasperf-100MB.png (54259 bytes) added
measured download speed on WAN versus encoding (100MB file) tahoe-1.8.2
Attachment atlasperf-timing.out (78473 bytes) added
raw data
I just finished doing the first batch of comparative performance tests on the Atlas Networks hardware, and uploaded my tools and the graphs. I'm calling this environment "atlasperf1". It consists of four machines (one client machine and three storage machines, with two server nodes per storage machine) on fast local connections. The CPUs are dual-core Pentium4s at 3.00GHz, and the average ping time is 420 µs. The client is running tahoe-1.8.2 (the servers are a mix of versions: 1.8.2, 1.7.1, 1.6.1).
I uploaded 1MB/10MB/100MB immutable files with various encodings (from 1-of-1 to 60-of-60, all with k=N). Then I ran a program to download random ones over and over. I graphed the k-vs-time curve for about 1700 trials.
The graph shows a nearly-perfect linear slowdown with increasing k. For a 100MB file, k=5 yields a 50s download, k=30 takes 250s, and k=60 takes about 500s. The same shape holds for 1MB and 10MB files too.
The next test is to try various segment sizes. I've got a few pet theories about the slowdown: the number of foolscap messages being sent, or the limited pipelining of small requests. Both would have a correlation with number of segments. For a fast network, our current one-segment pipeline isn't very big. And with high 'k', the amount of data we're getting from each block can get quite small (k=60 means each block is 128k/60=2.2k, which is scarcely a whole packet), so we've got a lot of tiny little requests flying around.
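To put numbers on the "lots of tiny requests" observation, here is an illustrative calculation assuming the default 128 KiB segment size (these are derived figures, not measurements from the atlasperf runs):

```python
# Illustrative arithmetic for the small-block observation, assuming the
# default 128 KiB segment size and a 100 MB file (as in the test above).
import math

segsize = 128 * 1024
file_size = 100 * 1000 * 1000
num_segments = math.ceil(file_size / segsize)      # ~763 segments

for k in (5, 30, 60):
    block_size_kib = segsize / k / 1024            # data fetched per block request
    block_requests = num_segments * k              # one block request per share per segment
    print("k=%2d: block ~%4.1f KiB, ~%5d block requests"
          % (k, block_size_kib, block_requests))
# k=60 gives ~2.1 KiB blocks ("scarcely a whole packet") and tens of
# thousands of small requests for a single 100 MB download.
```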
Possible mitigations include: a readv() method on the storage servers to reduce the foolscap overhead (if that's in fact an issue).
Brian: nice work! Now we're really getting somewhere. :-)
It would be cool to see the New Visualizer report for a k=5, 50s download and a k=50, 500s download. Did you capture those?
Brian pointed out to me, on the phone just now, that 1.8.2 doesn't have the New Visualizer in it. He's running the same experiment with trunk now, and will hopefully get New Visualizer reports from those.
On the topic of segment sizes, you might be interested in http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1264 and the various other notes linked from http://tahoe-lafs.org/trac/tahoe-lafs/wiki/Performance .
That last image shows a comparison of LAN download times for a 100MB file at various 'k' values (the graphs for 10MB and 1MB files have the same shape, but more variance). The faster blue samples are for 1.7.1, while the slower red samples are current trunk (probably around changeset:8e69b94588c1c0e7). Both graphs are quite linear, but trunk (with the new downloader) is significantly slower than the old downloader in 1.7.1 across the board, probably about 3x.
Higher 'k' slows down the process too, more severely for the new downloader. For 1.7.1, k=60 is about 3x slower than k=15, and for trunk, the difference is about 3.8x.
I'm still suspecting the foolscap/message-handling overhead. I'd also like to find a way to get a speed-limited WAN-like pipe (simulated, for consistency across runs) and compute similar graphs in an environment where the network is the bottleneck. I believe the new-downloader is faster than the old code in the WAN (that's why I wrote it in the first place), but it looks like something about the complexity or small-reads makes the LAN case slower.
Attachment atlas1-1.7.1vstrunk-100MB.png (66021 bytes) added
comparison of 1.7.1 versus current trunk (probably about changeset:8e69b94588c1c0e7) (better graph, more data)
FYI, my full report on LAN performance is now on the wiki at Performance/Sep2011.
I've opened some new tickets for some of the improvements we should experiment with, in decreasing order of bang-for-the-buck: