unit tests take too long #20
The best way to fix this would be to optimize the code under test so that it passes the tests faster.
The second best way would be to optimize the tests so that they test the same thing but faster.
The third best way would be to reduce the "replications, redundancy, random stabs in the dark" and so forth, and the tests of scaling. (This cries out for another category of test that reports on scaling behavior...)
The fourth best way would be to judiciously prune tests which have been superseded by more comprehensive tests and are now redundant.
One source of improvement would be the test_encode.Encode.test_send_NNN tests, which seek to cover fencepost issues on both the size of the plaintext (relative to the segment size) and the number of segments (relative to the power-of-two size of the block hash tree).
At the moment these use 100 shares per test, while we could probably get the same test coverage by using 10 shares per test.
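A sketch of that idea (the do_encode_and_check helper and the exact numbers here are hypothetical, not the real test_encode API): the fencepost cases are determined by the plaintext size relative to the segment size and by the segment count relative to the block-hash-tree size, so the share count can be cut without losing any of those cases.

    import unittest

    SEGSIZE = 25
    # Boundary cases come from data size vs. segment size and from segment
    # count vs. the power-of-two size of the block hash tree, not from how
    # many shares are produced.
    BOUNDARY_SIZES = [
        SEGSIZE - 1, SEGSIZE, SEGSIZE + 1,              # around one segment
        4 * SEGSIZE - 1, 4 * SEGSIZE, 4 * SEGSIZE + 1,  # around the 4-segment
    ]                                                   # power-of-two boundary

    class EncodeBoundaries(unittest.TestCase):
        def do_encode_and_check(self, datalen, segsize, k, n):
            # Hypothetical stand-in for the real encode-and-verify helper;
            # only the parameterization matters for this sketch.
            pass

        def test_send_boundaries(self):
            for size in BOUNDARY_SIZES:
                # 10 shares instead of 100: same boundary coverage, much less work.
                self.do_encode_and_check(datalen=size, segsize=SEGSIZE, k=3, n=10)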
On the other hand, test_encode appears to take about 1% of the total test time on my system. test_system takes 4%.
curiously, the 'dapper' buildslave just went from taking 40+ minutes to run the test suite down to 12 minutes. I'm not quite sure how it helped, but the proximate change was to remove the .pyc files before running the tests: the build process (that whole setup.py install --prefix/--root thing) seems to put bogus source file names in the .pyc files, messing up stack traces and figleaf output.
For at least one test run (on the edgy buildslave), I used 'strace' to watch the test's progress, and I saw a whole boatload of futex() syscalls interspersed with failed attempts to open (or maybe just access()) bogus .py filenames that started with /lib. It feels like something was trying to find its source code, failing, and not caching the failure. There weren't a lot of stack traces taking place at the time (it was in the middle of test_system, somewhere between send_block_hashes and close_buckets), at least none that made it into the logs.
Oh! But, if Deferred debugging were turned on, that would capture a stack trace inside every single Deferred, both at creation and at invocation. If it were trying to open a source file for those stack traces, that would be a lot of path searching and failed open() calls.
Huh, look at that. Removing the defer.setDebugging(True) from test_introducer.py reduces the time to run the unit tests on my laptop from 1m56s to 55s, a factor of 2. Guess I should fix that.
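For context, the switch in question is Twisted's global Deferred-debugging flag: when it is on, every Deferred records a traceback at creation time and again when it fires, which would explain the failed source-file lookups described above. A minimal illustration:

    from twisted.internet import defer

    # With debugging on, every Deferred captures two tracebacks (creation and
    # invocation), so a suite that creates lots of Deferreds pays a large
    # per-Deferred cost plus the source-file lookups for each frame.
    defer.setDebugging(True)

    d = defer.Deferred()
    d.callback("result")   # another traceback is captured here

    # The fix for this ticket: leave it off except when chasing a specific bug.
    defer.setDebugging(False)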
wow. the dapper buildslave, our slowest, was taking half an hour to run the unit test suite yesterday. After removing defer.setDebugging() (and dealing with the .pyc problem in #35), the dapper buildslave just finished a test run in two minutes. Cool.
I think I can close this ticket now. When we add enough tests that these two minutes grow to ten, let's revisit the ideas presented above.
Okay, nowadays it takes about 20 minutes to run our tests on the Dapper buildslave. More troublesome for me is that it takes about 5 minutes to run on my own personal workstation. Hopefully we'll implement #217 (DSA-based mutable files -- small URLs, fast file creation), and that will help a bit. Also, could we use DSA instead of RSA for foolscap tub ids? Or even unencrypted foolscap with randomly-generated tub ids?
I've considered building a NotQuiteSystemTestMixin framework which would use real storage servers but not real Tubs: to bypass the serialization and network-roundtrip part of the framework. I figure that would cut out a lot of the time spent in system-ish tests.
We're using a lot more SystemTestMixin-style tests these days, partially because it's easier to set up (fewer "fake" objects), and partially because the system is basically complete (I wrote a lot of unit tests for, say, upload, before there was any code to perform download). The SystemTestMixin takes a second or two just to get initialized and establish all the connections.
Oh, so LessFakeClient in source:src/allmydata/test/test_mutable.py is basically the framework I was thinking of. It uses real StorageServer instances but doesn't use Foolscap or an introducer. It can create a client and 10 connected storage servers in 7 milliseconds, whereas the usual SystemTestMixin takes 1.4 seconds to construct 5 client+servers and wait for them to connect. Most upload/download operations should be faster too.
On my main workstation, if it happens to be loaded down with other work, it can take more than 15 minutes to run all the tahoe tests. This interferes with my workflow.
Also I see that the cygwin buildslave took about 8 hours to run all the steps of its builder yesterday.
This still definitely takes too long (almost 9 minutes just now). It's quite disruptive.
Also, it would be helpful to document on the Dev page how to run a subset of the tests (I did work out how to do this but I've already forgotten the incantation).
davidsarah, where would such documentation live? I thought it would be better in the source:docs directory than on the wiki, but examining both places I didn't see a natural place to describe it. Anyway the description that should be added might include a link to http://twistedmatrix.com/trac/wiki/TwistedTrial plus the specific tip that
trial allmydata.test.test_module.TestClass.test_func
is the way to run just one test function, and likewise for other prefixes of that test identifier.
FYI, "trial $TESTNAME" doesn't work, because it doesn't set up PYTHONPATH and PATH and whatnot. I use
make quicktest TEST=$TESTNAME
instead. Using
make test TEST=$TESTNAME
does something similar, but also does a rebuild first, which adds maybe 10 seconds and an awful lot of noise (but works right after a source checkout, whereas quicktest requires an explicit build first). Running "python setup.py test -s $TESTNAME" is equivalent to "make test TEST=$TESTNAME", since the latter is implemented in terms of the former.
We could probably add a "developers.txt" file to the docs/ directory; this hint could live there, along with how-is-the-code-organized and maybe a description of the various Makefile/setup.py commands that are available. OTOH, I usually just look for these descriptions in the Makefile and setup.py, so putting them elsewhere seems a bit weird.
An interesting next step would be to turn on the profiler feature of trial and see which Python functions are taking the most time. :-)
The first person who submits a patch that reduces runtime of the full unit test suite, on Brian's laptop, by 20%, without causing any code to be less covered by tests, will receive 50 ⓑ from me. (Current market value, about $250.00.)
Offer good until I change my mind, so hurry up.
Oh wait, did I say reduce by 20%? I changed my mind. To get my 50ⓑ, you have to make it 5X faster—in other words take 20% of the total runtime—on Brian's laptop.
Note that Brian's laptop has some kind of performance problem with disk I/O, such that unit tests take about 24 minutes on there, compared to these runtimes on our buildslaves:
https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/lucid-amd64/builds/58 -- 12 minutes
https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Atlas%20ubuntu%20natty/builds/65/steps/test -- 15 minutes
https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Ruben%20Fedora/builds/65 -- 13 minutes
https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Kyle%20OpenBSD%20amd64/builds/72 -- 10 minutes
So you might be able to win by figuring out where our unit tests are relying on disk I/O and replacing those parts of the unit tests with in-memory fake objects. That would almost certainly not lose coverage of any line of code.
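As a rough sketch of that approach (the class and method names below are invented for illustration, not actual Tahoe test helpers), a test double that keeps its "share files" in memory never touches the disk at all:

    import io

    class InMemoryShareStore(object):
        """Illustrative stand-in for a disk-backed share store, for tests only."""

        def __init__(self):
            # (storage_index, shnum) -> in-memory file object
            self._shares = {}

        def open_for_writing(self, storage_index, shnum):
            f = io.BytesIO()
            self._shares[(storage_index, shnum)] = f
            return f

        def read_share(self, storage_index, shnum):
            return self._shares[(storage_index, shnum)].getvalue()

    # A test would hand an object like this to its storage-server fake in place
    # of the real filesystem-backed store, so share writes become dict updates.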
I probably won't change my mind again for at least a few days, so get crackin'!
In the unlikely and happy circumstance that more than one person posts a winning patch, the order that they appear as attachments to this ticket will determine who has priority. Also, if someone else wins, but you post a patch which is both original (you didn't copy it from the winner) and has some independent value (it contains code which we might like to adopt even after we adopted the winner's), then I might consider giving you a consolation prize of Bitcoin.
I won't have time to work on this ticket, but here are some ideas; one of them is to put _trial_temp in a tmpfs.
To test the first of these, I created a tmpfs using
sudo mount -t tmpfs -o size=330m tmpfs _trial_temp
and hacked Twisted Trial so that it would just use this existing directory rather than trying to delete and recreate it. Comparing timings for
time bin/tahoe debug trial
on my machine with _trial_temp on disk against the corresponding timings with _trial_temp on a tmpfs: the user CPU time is only slightly reduced, but the real time is reduced by ~36%, consistent with a significant improvement in I/O wait time. The machine was under memory pressure and I would expect the results to be even better if it hadn't been.
The 330 MiB tmpfs could be smaller if the test suite was optimized to not write as much (or to delete files created for tests that pass).
Sigh, should have thought of the tmpfs idea in the hour between comment:59637 and comment:59638 ;-)
Incidentally, the test run in comment:59640 left ~324 MiB of memory being used by the tmpfs, so the approach would obviously need more work to be acceptable for general use.
FYI, on my work laptop (2010 MacBookPro, with OS-X "Snow Leopard" Filevault non-FDE encrypted home directory), the full test suite (current git master, 8aa690b) takes 27m48s to run, of which 7m23s is "user" and 17m19s is "sys", and leaves 233MiB in _trial_temp (of which 118MiB is allmydata.test.test_mutable, 54MiB is cli, and 5-10MiB are used by system, repairer, dirnode, test_upload, test_download, hung_server, and test_encode).
That feels like an awful lot of data being generated. I'd start by looking at test_mutable and seeing if we can get rid of most of the k=N=255 cases. We probably need to keep one of them to exercise that end of the encoding spectrum, but I really doubt we need to do it for every single test_mutable case.
I'd also walk through the tests that upload data and look at the filesizes they're using. Many are casually creating 10MB files, resulting in 30MB of shares. We really don't need to cover more than a few segments, so 1MB should be plenty, and in most cases we should be able to lower the min-segment-size to cover a multi-segment file with just a few kB.
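To put rough numbers on that (3-of-10 is the default encoding; the figures below are simple arithmetic, not measurements):

    # Approximate share bytes written for an immutable upload: each file
    # expands by roughly N/k with the default 3-of-10 encoding.
    def approx_share_bytes(filesize, k=3, n=10):
        return filesize * n // k

    print(approx_share_bytes(10 * 1000 * 1000))  # 10 MB file -> ~33 MB of shares
    print(approx_share_bytes(1000 * 1000))       # 1 MB file -> ~3.3 MB of shares
    # With the segment size lowered to ~1 KiB, a 4 KiB file already spans
    # several segments while writing only ~13 KiB of shares.
    print(approx_share_bytes(4 * 1024))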
I'd also look into moving most tests to use the common NoNetworkGrid class, and make sure that it's using fake server connections to avoid the SSL spinup time (except for test_system, of course, which needs to exercise the SSL connections too). We have about half a dozen fake-server classes in the test suite, partially because I was too lazy to improve an existing one and decided to create new ones instead. So there's probably some cleanup there that would help.
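For reference, a no-network test tends to look something like the sketch below. This assumes the GridTestMixin helper in allmydata.test.no_network; the exact method names and signatures may differ between versions, so treat it as illustrative rather than exact.

    from twisted.trial import unittest
    from allmydata.test.no_network import GridTestMixin

    class SmallGridTest(GridTestMixin, unittest.TestCase):
        def test_upload_download(self):
            self.basedir = "no_network/SmallGridTest/upload_download"
            # Builds an in-process "grid" with fake server connections, so
            # there is no Foolscap/SSL handshaking to wait for.
            self.set_up_grid(num_servers=4)
            client = self.g.clients[0]
            # ... exercise upload/download against the fake servers here ...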
Replying to davidsarah:
See http://twistedmatrix.com/trac/ticket/1784. Summary: you'll be able to do that when it is released in the next version of Twisted. You'll need a lot of memory though!
Replying to davidsarah:
A better way to do that is using --temp-directory: trial will still attempt to delete the directory you specify, so it can't directly be the tmpfs mount, but that's OK as long as it is underneath the tmpfs mount (and the tmpfs can be unmounted when done).
A few tests will spuriously fail because the current directory is not as expected by the test.
Replying to [davidsarah]comment:21:
Pull request for a "make quickertest" target is at https://github.com/tahoe-lafs/tahoe-lafs/pull/17. It includes a fix for the test_runner failures.
Note that, like "make quicktest", "make quickertest" assumes that it is run in a Tahoe-LAFS source distribution that is already built. Use "make quickertest TEST=<testname>" to run a specific test.
... for "make quickertest" only.
In changeset:edc1f5f67fb34734:
I noticed today that the full test suite takes 30 minutes on my (slow) linux box. Part of the problem is the size of the files created by the tests: the _trial_temp directory has 340MB of data at the end of the run, which is kind of excessive. The worst offender is test_mutable, which creates 132MB. test_cli makes 76MB. The next heaviest ones are test_system, test_download, test_dirnode, test_upload, and test_repairer, which create 12MB-14MB each. There's another 6 that produce 5-10MB, and the rest are 3MB or less.
Note: now that we're on tox, the fastest cycle time for a specific test (when you're doing a tight edit-test-coverage-repeat loop) is something like:
.tox/py27/bin/trial allmydata.test.test_foo.Bar.test_blah
Or:
source .tox/py27/bin/activate
coverage run `which trial` allmydata.test.test_foo.Bar.test_blah
That won't do anything extra (installs, updates, etc). I'm no longer concerned about having a "make quicktest" or the like: virtualenvs are good enough for me now.
I'm working on a branch that fixes a bunch of this. test_mutable uses smaller uploaded files, and stops exercising the N=255 case, which reduces the time by a factor of 7. test_cli uses a grid with k=N=1, which reduces it by 50%. Smaller speedups were obtained by using k=N=1 on all other NoNetworkGrid tests that don't actually exercise multiple shares. test_system is still pretty bad, I haven't yet tried to tackle it.
In 594dd26/trunk:
Related: #2845
The test suite is faster now. How fast is fast enough? I don't know. I would love it if I could run the whole test suite locally in zero seconds. That would certainly be fast enough. It's probably not practical, though.
What I can do is run the unit test suite locally on my 4 core laptop in about 120 seconds. This is no doubt largely due to hardware improvements since this ticket was originally filed. However, whatever the reason, 120 seconds doesn't seem absurdly excessive, particularly considering comments on this ticket above are discussing run times of 1000 - 2000 seconds.
We should certainly remain open to further speedups but I think this ticket has served its purpose. Future speedups can fall under the umbrella of some other ticket with a narrower focus.
I decline the Bitcoin offered by Zooko above for observing this current state of affairs (and anyway I don't have Brian's laptop to verify the result).
thanks for closing this!
FWIW, "tox -e py2" on my current Debian/buster laptop currently spends 276 seconds in trial (and fails test_util.FileUtil.test_create_long_path for some reason).