add docs/tests.rst documenting how to run tests and how to interpret the output #1439
Reference: tahoe-lafs/trac-2024-07-25#1439
In /tahoe-lafs/trac-2024-07-25/issues/5207#comment:26, Zooko wrote about various ways to run the Tahoe-LAFS test suite, some of which work and some of which do not. He suggested adding a `docs/tests.rst` to document this.

`tests.rst` should also document cases in which the tests are known to fail (such as when we're not testing the right code), how to interpret skips, todos, and other sources of noise, and how to get Twisted log output from test cases.
See also [//pipermail/tahoe-dev/2014-August/009152.html Nathan's comments about "How To Run Tests" during a Nuts & Bolts meeting].
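To illustrate what "skips, todos, and other sources of noise" mean in a test run, here is a minimal stdlib-`unittest` sketch (Twisted's trial reports the same categories; the class and test names below are made up for illustration). A skip or an expected failure shows up in the summary but does not make the run fail:

```python
import unittest

class NoiseExamples(unittest.TestCase):
    def test_skipped(self):
        # A "skip" means the test was deliberately not run, e.g. because
        # it does not apply on this platform; it is not a failure.
        raise unittest.SkipTest("not applicable in this environment")

    @unittest.expectedFailure
    def test_todo(self):
        # A "todo" (expected failure) marks a known bug: the test fails,
        # but the run still counts as successful.
        self.assertEqual(1, 2)
```

Only unexpected errors and failures should make you suspect a real regression; skips and todos are expected noise.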
I'm picky about making tests run quickly, so I've identified three ways to run them, with various levels of nuisance and performance:

1. `python setup.py test [-s allmydata.test.test_FOO]`
2. `./bin/tahoe debug trial allmydata.test.test_FOO`
3. `PYTHONPATH=support/lib/pythonX.Y/site-packages trial allmydata.test.test_FOO`

The last (3) is the fastest, but you have to have Twisted/trial already installed (or add more to $PYTHONPATH), tests which involve searching $PATH won't work (or you must add more to $PATH), and you have to know which minor version of Python you're running (to get the support/ directory right).
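The `pythonX.Y` placeholder above is whatever major.minor version you happen to be running. A one-liner like this (illustrative, not part of the build) prints the version-specific path to put on $PYTHONPATH:

```python
import sys

def support_site_packages():
    # support/ is keyed by the running interpreter's major.minor version,
    # e.g. support/lib/python2.7/site-packages
    return "support/lib/python%d.%d/site-packages" % sys.version_info[:2]

print(support_site_packages())
```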
2 is easier, and only slightly slower: the `bin/tahoe` wrapper performs a fork+exec to re-run the entry point with the right $PATH/$PYTHONPATH.

1 is how the test automation does it, because it's not particularly Tahoe-specific. It's even slower because it runs a second `subprocess.call` to invoke the second form. More importantly, it actually runs `setup.py build` first (because of the aliases in `setup.cfg`), which can spend several seconds looking at your support/ directory and deciding whether to install dependencies or not.

And if you use `python setup.py test --coverage`, then you get one `subprocess.call` to invoke `bin/tahoe @coverage`, a second one inside the `bin/tahoe` wrapper to run the real `bin/tahoe`, and a third one in `bin/tahoe` to run `coverage`. But for large tests, that's usually small compared to the overhead induced by the coverage tracing function.

I'm hoping that our #2255/#2077 overhauls can remove these extra calls.
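The "overhead induced by the coverage tracing function" comes from the interpreter invoking a trace hook on every line executed. A small stdlib sketch of that effect (the do-nothing tracer below stands in for coverage's tracer; the numbers and function names are illustrative only):

```python
import sys
import time

def busy(n):
    # Plain CPU-bound loop used to compare traced vs. untraced speed.
    total = 0
    for i in range(n):
        total += i
    return total

def line_tracer(frame, event, arg):
    # A do-nothing line tracer; even doing nothing, the per-line callback
    # slows execution down substantially.
    return line_tracer

start = time.perf_counter()
busy(200_000)
untraced = time.perf_counter() - start

sys.settrace(line_tracer)
start = time.perf_counter()
busy(200_000)
traced = time.perf_counter() - start
sys.settrace(None)

print("slowdown with a line tracer: %.1fx" % (traced / untraced))
```

This is why, for large test runs, the per-line tracing cost dwarfs a few extra `subprocess.call` invocations.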