unit tests take too long #20

Closed
opened 2007-05-01 02:55:09 +00:00 by zooko · 29 comments

The best way to fix this would be to optimize the code under test so that it passes the tests faster.

The second best way would be to optimize the tests so that they test the same thing but faster.

The third best way would be to reduce the "replications, redundancy, random stabs in the dark" and so forth, and the tests of scaling. (This cries out for another category of test that reports on scaling behavior...)

The fourth best way would be to judiciously prune tests which have been superseded by more comprehensive tests and are now redundant.
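As a sketch of the scaling-report idea above (hypothetical helper, not part of the actual suite): instead of a pass/fail assertion, such a test can count operations at several input sizes and report the growth ratio.

```python
def work(n, counter):
    # Stand-in for code under test; deliberately O(n).
    for _ in range(n):
        counter[0] += 1

# Count operations at doubling input sizes and report the ratios.
counts = []
for n in (1000, 2000, 4000):
    counter = [0]
    work(n, counter)
    counts.append(counter[0])

ratios = [counts[i + 1] // counts[i] for i in range(len(counts) - 1)]
print(ratios)  # → [2, 2], i.e. linear scaling
```

Counting operations rather than wall-clock time keeps the report deterministic, so it can run alongside the normal suite without flakiness.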

zooko added the code, minor, defect labels 2007-05-01 02:55:09 +00:00

one source of improvement would be the test_encode.Encode.test_send_NNN tests which seek to cover fencepost issues on both the size of the plaintext (relative to the segment size) and the number of segments (relative to the power-of-two size of the block hash tree).

At the moment these tests use 100 shares each, while we could probably get the same test coverage by using 10 shares per test.

On the other hand, test_encode appears to take about 1% of the total test time on my system. test_system takes 4%.
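The fencepost cases those tests cover can be sketched like this (`SEGMENT_SIZE` and the helper are illustrative, not the real test code): for each segment boundary, try plaintexts one byte below, at, and one byte above it.

```python
SEGMENT_SIZE = 1024  # illustrative value, not Tahoe's actual segment size

def fencepost_sizes(num_segments):
    """Plaintext sizes straddling each segment boundary."""
    sizes = set()
    for k in range(1, num_segments + 1):
        boundary = k * SEGMENT_SIZE
        sizes.update((boundary - 1, boundary, boundary + 1))
    return sorted(sizes)

print(fencepost_sizes(2))  # → [1023, 1024, 1025, 2047, 2048, 2049]
```

The off-by-one coverage comes from the size enumeration, not the share count, which is why cutting from 100 shares to 10 shouldn't lose anything.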


curiously, the 'dapper' buildslave just went from taking 40+ minutes to run the test suite down to 12 minutes. I'm not quite sure why that helped, but the proximate change was to remove the .pyc files before running the tests: the build process (that whole `setup.py install --prefix/--root` thing) seems to put bogus source file names in the .pyc files, messing up stack traces and figleaf output.

For at least one test run (on the edgy buildslave), I used 'strace' to watch the test's progress, and I saw a whole boatload of futex() syscalls interspersed with failed attempts to open (or maybe just access()) bogus .py filenames that started with /lib. It feels like something was trying to find its source code, failing, and not caching the failure. There weren't a lot of stack traces taking place at the time (it was in the middle of test_system, somewhere between send_block_hashes and close_buckets), at least none that made it into the logs.

Oh! But, if Deferred debugging were turned on, that would capture a stack trace inside every single Deferred, both at creation and at invocation. If it were trying to open a source file for those stack traces, that would be a lot of path searching and failed open() calls.

Huh, look at that. Removing the `defer.setDebugging(True)` from test_introducer.py reduces the time to run the unit tests on my laptop from 1m56s to 55s, a factor of 2. Guess I should fix that.
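The slowdown is plausible because Deferred debugging captures a full Python stack for every Deferred it touches, at creation and at invocation. A stdlib-only illustration of that per-object cost (this is not Twisted's code, just the same kind of work):

```python
import timeit
import traceback

def with_capture():
    # Roughly the extra work Deferred debugging adds per Deferred:
    # walk and record the current call stack.
    return traceback.extract_stack()

def without_capture():
    # A Deferred created with debugging off skips that entirely.
    return None

slow = timeit.timeit(with_capture, number=2000)
fast = timeit.timeit(without_capture, number=2000)
print(slow > fast)  # stack capture dominates by a wide margin
```

Multiply that by every Deferred in a test run, plus the failed source-file lookups for each frame, and a 2x suite-wide slowdown is unsurprising.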


wow. the dapper buildslave, our slowest, was taking half an hour to run the unit test suite yesterday. After removing `defer.setDebugging()` (and dealing with the .pyc problem in #35), the dapper buildslave just finished a test run in two minutes. Cool.

I think I can close this ticket now. When we add enough tests for this two minutes to become 10, let's revisit the ideas presented above.

warner added the fixed label 2007-05-04 07:57:10 +00:00
Author

Okay, nowadays it takes about 20 minutes to run our tests on the Dapper buildslave. More troublesome for me is that it takes about 5 minutes to run on my own personal workstation. Hopefully we'll implement #217 (DSA-based mutable files -- small URLs, fast file creation) and it will help a bit. Also, could we use DSA instead of RSA for foolscap tub ids? Or even unencrypted foolscap with randomly-generated tub ids?

zooko removed the fixed label 2008-12-08 22:41:28 +00:00
zooko reopened this issue 2008-12-08 22:41:28 +00:00

I've considered building a NotQuiteSystemTestMixin framework which would use real storage servers but not real Tubs: to bypass the serialization and network-roundtrip part of the framework. I figure that would cut out a lot of the time spent in system-ish tests.

We're using a lot more `SystemTestMixin`-style tests these days, partially because it's easier to set up (fewer "fake" objects), and partially because the system is basically complete (I wrote a lot of unit tests for, say, upload, before there was any code to perform download). The `SystemTestMixin` takes a second or two just to get initialized and establish all the connections.


Oh, so source:src/allmydata/test/test_mutable.py `LessFakeClient` is basically the framework I was thinking of. It uses real `StorageServer` instances but doesn't use Foolscap or an introducer. It can create a client and 10 connected storage servers in 7 milliseconds, whereas the usual `SystemTestMixin` takes 1.4 seconds to construct 5 client+servers and wait for them to connect. Most upload/download operations should be faster too.
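The shape of that fixture can be sketched with hypothetical stand-in classes (these are not the real `LessFakeClient`/`StorageServer` APIs): real server-side logic, but the client calls it directly in-process instead of going through Foolscap.

```python
class InMemoryStorageServer:
    """Real share-storage logic, no Tub and no network."""
    def __init__(self):
        self.shares = {}

    def put_share(self, storage_index, shnum, data):
        self.shares[(storage_index, shnum)] = data

    def get_share(self, storage_index, shnum):
        return self.shares[(storage_index, shnum)]

class DirectClient:
    """Calls servers in-process, skipping serialization and round-trips."""
    def __init__(self, servers):
        self.servers = servers

    def upload(self, storage_index, data):
        for shnum, server in enumerate(self.servers):
            server.put_share(storage_index, shnum, data)

    def download(self, storage_index):
        return self.servers[0].get_share(storage_index, 0)

# Setting up ten "connected" servers is just object construction.
servers = [InMemoryStorageServer() for _ in range(10)]
client = DirectClient(servers)
client.upload("si1", b"some share data")
print(client.download("si1") == b"some share data")
```

Because construction is plain Python object creation, the per-test setup cost drops from network/SSL handshake time to microseconds, while the storage-side code under test stays real.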

Author

On my main workstation, if it happens to be loaded down with other work, it can take more than 15 minutes to run all the tahoe tests. This interferes with my workflow.

Also I see that the cygwin buildslave took about 8 hours to run all the steps of its builder yesterday.

davidsarah commented 2009-12-13 03:52:08 +00:00
Owner

This still definitely takes too long (almost 9 minutes just now). It's quite disruptive.

Also, it would be helpful to document on the Dev page how to run a subset of the tests (I did work out how to do this but I've already forgotten the incantation).

tahoe-lafs added major and removed minor labels 2009-12-13 03:52:08 +00:00
Author

davidsarah, where would such documentation live? I thought it would be better in the source:docs directory than on the wiki, but examining both places I didn't see a natural place to describe it. Anyway the description that should be added might include a link to <http://twistedmatrix.com/trac/wiki/TwistedTrial> plus the specific tip that `trial allmydata.test.test_module.TestClass.test_func` is the way to run just one func, and likewise for other prefixes of that test identifier.


FYI, "`trial $TESTNAME`" doesn't work, because it doesn't set up PYTHONPATH and PATH and whatnot. I use `make quicktest TEST=$TESTNAME`. Using `make test TEST=$TESTNAME` does something similar, but also does a rebuild first, which adds maybe 10 seconds and an awful lot of noise (but works right after a source checkout, whereas quicktest requires an explicit build first).

`python setup.py test -s $TESTNAME` is equivalent to `make test TEST=$TESTNAME`, since the latter is implemented in terms of the former.

We could probably add a "developers.txt" file to the docs/ directory; this hint could live there, along with how-is-the-code-organized and maybe a description of the various Makefile/setup.py commands that are available. OTOH, I usually just look for these descriptions in the Makefile and setup.py, so putting them elsewhere seems a bit weird.

Author

An interesting next-step would be to turn on the profiler feature of trial and see which Python functions are taking the most time. :-)
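The same information is available from the stdlib directly, without any trial integration; a minimal sketch (the workload here is a placeholder, not Tahoe code):

```python
import cProfile
import io
import pstats

def hot_function():
    # Placeholder for whatever the suite spends its time in.
    return sum(i * i for i in range(10_000))

def test_like_workload():
    return [hot_function() for _ in range(50)]

# Profile the workload and print the functions with the most
# cumulative time, which is what we'd want from a trial run.
profiler = cProfile.Profile()
profiler.enable()
test_like_workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
print("hot_function" in report)  # the hot spot shows up in the report
```

Sorting by cumulative time (rather than internal time) surfaces the test helpers and fixtures that dominate the run, not just the leaf functions.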

tahoe-lafs added this to the eventually milestone 2011-07-21 19:44:29 +00:00
Author

The first person who submits a patch that reduces runtime of the full unit test suite, on Brian's laptop, by 20%, without causing any code to be less covered by tests, will receive 50 ⓑ from me. (Current market value, about $250.00.)

Offer good until I change my mind, so hurry up.

Author

Oh wait, did I say reduce by 20%? I changed my mind. To get my 50ⓑ, you have to make it 5X faster—in other words take 20% of the total runtime—on Brian's laptop.

Note that Brian's laptop has some kind of performance problem with disk I/O, such that unit tests take about 24 minutes on there, compared to these runtimes on our buildslaves:

  • <https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/lucid-amd64/builds/58>: 12 minutes
  • <https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Atlas%20ubuntu%20natty/builds/65/steps/test>: 15 minutes
  • <https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Ruben%20Fedora/builds/65>: 13 minutes
  • <https://tahoe-lafs.org/buildbot-tahoe-lafs/builders/Kyle%20OpenBSD%20amd64/builds/72>: 10 minutes

So you might be able to win by figuring out where our unit tests are relying on disk I/O and replacing those parts of the unit tests with in-memory fake objects. That would almost certainly not lose coverage over any line of code.

I probably won't change my mind again for at least a few days, so get crackin'!

Author

In the unlikely and happy circumstance that more than one person posts a winning patch, the order that they appear as attachments to this ticket will determine who has priority. Also, if someone else wins, but you post a patch which is both original (you didn't copy it from the winner) and has some independent value (it contains code which we might like to adopt even after we adopted the winner's), then I might consider giving you a consolation prize of Bitcoin.

davidsarah commented 2012-04-24 02:04:39 +00:00
Owner

I won't have time to work on this ticket, but here are some ideas:

  • create `_trial_temp` in a tmpfs.
  • run the tests in parallel (obviously this will only help on multicore machines).

To test the first of these, I created a tmpfs using `sudo mount -t tmpfs -o size=330m tmpfs _trial_temp`, and hacked Twisted Trial so that it would just use this existing directory rather than trying to delete and recreate it.

Timings for `time bin/tahoe debug trial` on my machine with `_trial_temp` on disk:

```
real    14m40.738s
user    7m34.270s
sys     0m30.600s
```

Corresponding timings with _trial_temp on a tmpfs:

```
real    9m21.444s
user    7m16.930s
sys     0m26.130s
```

Notice that the user CPU time is only slightly reduced, but the real time is reduced by ~36%, consistent with a significant improvement in I/O wait time. The machine was under memory pressure and I would expect the results to be even better if it hadn't been.

The 330 MiB tmpfs could be smaller if the test suite was optimized to not write as much (or to delete files created for tests that pass).

davidsarah commented 2012-04-24 02:08:45 +00:00
Owner

Sigh, should have thought of the tmpfs idea in the hour between [comment:59637](/tahoe-lafs/trac-2024-07-25/issues/20#issuecomment-59637) and [comment:59638](/tahoe-lafs/trac-2024-07-25/issues/20#issuecomment-59638) ;-)
davidsarah commented 2012-04-24 02:13:33 +00:00
Owner

Incidentally, the test run in [comment:59640](/tahoe-lafs/trac-2024-07-25/issues/20#issuecomment-59640) left ~324 MiB of memory being used by the tmpfs, so the approach would obviously need more work to be acceptable for general use.

FYI, on my work laptop (2010 MacBook Pro, with OS X "Snow Leopard" FileVault non-FDE encrypted home directory), the full test suite (current git master, 8aa690b) takes 27m48s to run, of which 7m23s is "user" and 17m19s is "sys", and leaves 233MiB in `_trial_temp` (of which 118MiB is `allmydata.test.test_mutable`, 54MiB is `cli`, and 5-10MiB are used by `system`, `repairer`, `dirnode`, `test_upload`, `test_download`, `hung_server`, and `test_encode`).

That feels like an awful lot of data being generated. I'd start by looking at test_mutable and seeing if we can get rid of most of the k=N=255 cases. We probably need to keep one of them to exercise that end of the encoding spectrum, but I really doubt we need to do it for every single test_mutable case.

I'd also walk through the tests that upload data and look at the file sizes they're using. Many are casually creating 10MB files, resulting in 30MB of shares. We really don't need to cover more than a few segments, so 1MB should be plenty, and in most cases we should be able to lower the min-segment-size to cover a multi-segment file with just a few kB.
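The arithmetic behind that point is simple: the segment count is just the ceiling of filesize over segment size, so a few-kB file with a lowered min-segment-size already exercises the multi-segment code paths (the numbers below are illustrative, not Tahoe's actual defaults):

```python
import math

def num_segments(filesize, segment_size):
    """Segments needed to hold a file of the given size."""
    return max(1, math.ceil(filesize / segment_size))

# A casually-created 10 MB upload with an assumed 128 KiB segment size:
print(num_segments(10_000_000, 128 * 1024))  # many segments, lots of share data

# Versus a 4 kB file with the segment size lowered to 1 KiB:
print(num_segments(4096, 1024))  # still multi-segment: 4
```

Both cases cross multiple segment boundaries, so the second covers the same code paths while writing orders of magnitude less share data to disk.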

I'd also look into moving most tests to use the common NoNetworkGrid class, and make sure that it's using fake server connections to avoid the SSL spinup time (except for test_system, of course, which needs to exercise the SSL connections too). We have about half a dozen fake-server classes in the test suite, partially because I was too lazy to improve an existing one and decided to create new ones instead. So there's probably some cleanup there that would help.

davidsarah commented 2012-10-23 22:04:11 +00:00
Owner

Replying to davidsarah:

> • run the tests in parallel (obviously this will only help on multicore machines).

See <http://twistedmatrix.com/trac/ticket/1784>. Summary: you'll be able to do

```
bin/tahoe debug trial -j NUMBER_OF_PROCESSES
```

when that is released in the next version of Twisted. You'll need a lot of memory though!

davidsarah commented 2012-10-23 22:22:13 +00:00
Owner

Replying to davidsarah:

> To test the first of these, I created a tmpfs using `sudo mount -t tmpfs -o size=330m tmpfs _trial_temp`, and hacked Twisted Trial so that it would just use this existing directory rather than trying to delete and recreate it.

A better way to do that is using `--temp-directory`: trial will still attempt to delete the directory you specify, so it can't directly be the tmpfs mount, but that's OK as long as it is underneath the tmpfs mount:

```
mkdir -p tmp
sudo mount -t tmpfs -o size=330m tmpfs tmp
bin/tahoe debug trial --rterrors --temp-directory=tmp/_trial_temp
```

When done:

```
sudo umount tmp
rmdir tmp
```

These tests:

```
allmydata.test.test_runner.BinTahoe.test_import_in_repl
allmydata.test.test_runner.BinTahoe.test_the_right_code
```

will spuriously fail because the current directory is not as expected by the test.

davidsarah commented 2012-10-23 23:57:06 +00:00
Owner

Replying to davidsarah, comment:21:

> A better way to do that is using `--temp-directory`...

Pull request for a "`make quickertest`" target is at <https://github.com/tahoe-lafs/tahoe-lafs/pull/17>. It includes a fix for the `test_runner` failures.

Note that, like "`make quicktest`", "`make quickertest`" assumes that it is run in a Tahoe-LAFS source distribution that is already built. Use "`make quickertest TEST=<testname>`" to run a specific test.

tahoe-lafs added the unknown label 2012-10-23 23:59:23 +00:00
davidsarah commented 2012-10-24 00:05:09 +00:00
Owner

... for `make quickertest` only.

tahoe-lafs modified the milestone from eventually to 1.10.0 2012-10-24 00:05:09 +00:00
David-Sarah Hopwood <david-sarah@jacaranda.org> commented 2012-10-25 01:19:19 +00:00
Owner

In changeset:edc1f5f67fb34734:

```
Makefile: add 'make tmpfstest', which uses a tmpfs for _trial_temp.

This probably only works on Linux. It uses sudo to mount and unmount the tmpfs,
which may prompt for a password. refs #20

Signed-off-by: David-Sarah Hopwood <david-sarah@jacaranda.org>
```
tahoe-lafs modified the milestone from 1.10.0 to eventually 2012-10-30 17:21:51 +00:00

I noticed today that the full test suite takes 30 minutes on my (slow) Linux box. Part of the problem is the size of the files created by the tests: the `_trial_temp` directory has 340MB of data at the end of the run, which is kind of excessive.

The worst offender is `test_mutable`, which creates 132MB. `test_cli` makes 76MB. The next heaviest ones are `test_system`, `test_download`, `test_dirnode`, `test_upload`, and `test_repairer`, which create 12MB-14MB each. There's another 6 that produce 5-10MB, and the rest are 3MB or less.

Note: now that we're on tox, the fastest cycle time for a specific test (when you're doing a tight edit-test-coverage-repeat loop) is something like:

  • `.tox/py27/bin/trial allmydata.test.test_foo.Bar.test_blah`

Or:

  • `source .tox/py27/bin/activate`
  • `coverage $(which trial) allmydata.test.test_foo.Bar.test_blah`

That won't do anything extra (installs, updates, etc.). I'm no longer concerned about having a `make quicktest` or the like: virtualenvs are good enough for me now.


I'm working on a branch that fixes a bunch of this. `test_mutable` uses smaller uploaded files, and stops exercising the N=255 case, which reduces the time by a factor of 7. `test_cli` uses a grid with k=N=1, which reduces it by 50%. Smaller speedups were obtained by using k=N=1 on all other NoNetworkGrid tests that don't actually exercise multiple shares.

`test_system` is still pretty bad; I haven't yet tried to tackle it.

Brian Warner <warner@lothar.com> commented 2016-08-06 00:17:21 +00:00
Owner

In 594dd26/trunk:

```
Merge branch '20-improve-tests'

This reduces the 'tox -e coverage' test time (on travis) by 40%, from
1711s to 1014s.

refs ticket:20
```
Owner

Related: #2845


The test suite is faster now. How fast is fast enough? I don't know. I would love it if I could run the whole test suite locally in zero seconds. That would certainly be fast enough. It's probably not practical, though.

What I can do is run the unit test suite locally on my 4-core laptop in about 120 seconds. This is no doubt largely due to hardware improvements since this ticket was originally filed. However, whatever the reason, 120 seconds doesn't seem absurdly excessive, particularly considering that comments on this ticket above discuss run times of 1000-2000 seconds.

We should certainly remain open to further speedups but I think this ticket has served its purpose. Future speedups can fall under the umbrella of some other ticket with a narrower focus.

I decline the Bitcoin offered by Zooko above for observing this current state of affairs (and anyway I don't have Brian's laptop to verify the result).

exarkun added the fixed label 2020-01-14 20:23:47 +00:00

thanks for closing this!

FWIW, `tox -e py2` on my current Debian/buster laptop currently spends 276 seconds in `trial` (and fails `test_util.FileUtil.test_create_long_path` for some reason).

Reference: tahoe-lafs/trac-2024-07-25#20