Tahoe-LAFS fails unit tests when the "-OO" flag is passed to Python to optimize and strip docstrings #749
These six unit tests fail if optimization is turned on:
...
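The six failing tests aren't listed here, but the failure mode is easy to sketch: under python -O (and -OO, which additionally strips docstrings) every assert statement is compiled away, so a test that feeds in deliberately bad input and expects an AssertionError never sees one. A minimal, hypothetical example (not one of the actual failing tests):

    import unittest

    def encode_segment(k, n):
        # The precondition is a bare assert statement; under
        # "python -O" or "python -OO" it is compiled away entirely.
        assert k <= n, "need k <= n"
        return (k, n)

    class TestEncodeSegment(unittest.TestCase):
        def test_rejects_k_greater_than_n(self):
            # Passes under plain "python", where the assert fires.
            # Under -O/-OO the assert is gone, encode_segment(5, 3)
            # returns normally, and this fails with "AssertionError
            # not raised".
            self.assertRaises(AssertionError, encode_segment, 5, 3)

    if __name__ == "__main__":
        unittest.main()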
Not urgent for Tahoe-LAFS v1.5, so I'm bumping it. Note that, aside from the general "curiousness" that our code (or, more likely, our unit tests) relies on assertion-checking to be correct, turning on Python's -OO optimization mode might reduce CPU load a bit. I thought I observed that when benchmarking for #329/#752, but I'm not sure whether it was real or an artifact of noisy benchmarking.

I had some time, so I fixed it, in changeset:d8ba8c2eb5d4b1b5. All of the failures were cases where we intentionally do something wrong so as to trigger an (assertion) exception and then check the resulting behavior. All were fixed by replacing the AssertionError with something more specific.
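The changeset itself isn't reproduced here, but the shape of the fix, sketched with the same hypothetical names as above, is to replace the bare assert with an explicit check that raises a dedicated exception:

    import unittest

    class BadSegmentParameters(Exception):
        pass

    def encode_segment(k, n):
        # An explicit "if" is an ordinary statement, so it survives
        # -O/-OO, and it raises a specific, testable exception.
        if k > n:
            raise BadSegmentParameters("need k <= n, got k=%d n=%d" % (k, n))
        return (k, n)

    class TestEncodeSegment(unittest.TestCase):
        def test_rejects_k_greater_than_n(self):
            # Now passes regardless of optimization flags.
            self.assertRaises(BadSegmentParameters, encode_segment, 5, 3)

    if __name__ == "__main__":
        unittest.main()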
The best rule to follow is probably this:
OTOH, it's a bother to create a whole new exception type and add an "if" clause (and then fret about whether the code-coverage stats are going to go down) just to do something that should be highly encouraged, like adding preconditions and internal consistency checks. Maybe it means we're being overzealous in our unit tests and exercising things that we shouldn't, or don't need to, exercise.
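One way to keep the convenience without minting a new exception type at every call site is a single shared helper. (Tahoe-LAFS has a precondition() helper in allmydata.util.assertutil, but it raises AssertionError; the sketch below is a hypothetical variant with a dedicated exception, not that API.)

    import unittest

    class PreconditionFailure(Exception):
        pass

    def precondition(expr, message="precondition failed"):
        # Only the assert *statement* is stripped by -O/-OO; an
        # ordinary function call like this always runs, and it raises
        # one specific exception that every test can expect.
        if not expr:
            raise PreconditionFailure(message)

    def encode_segment(k, n):
        precondition(k <= n, "need k <= n")
        return (k, n)

    class TestPrecondition(unittest.TestCase):
        def test_violation_is_reported(self):
            self.assertRaises(PreconditionFailure, encode_segment, 5, 3)

    if __name__ == "__main__":
        unittest.main()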
I'm no expert on unit testing, but I think it makes sense that assertions (preconditions, postconditions, etc.) are things that don't need to be tested. I see them not as code which must be validated for correctness, but instead as code that does correctness validation, like the unit tests.
Said another way, assertions and other design-by-contract artifacts are actually tests. They're just tests that are normally run all the time rather than only when we choose to explicitly invoke a test run.
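Concretely, "assert cond, msg" behaves like a check gated on the __debug__ constant, which is exactly why these always-on tests switch off under -O/-OO:

    cond, msg = (2 + 2 == 4), "arithmetic is broken"

    # "assert cond, msg" is equivalent to this; __debug__ is False
    # under -O/-OO, so the compiler drops the whole block.
    if __debug__:
        if not cond:
            raise AssertionError(msg)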
If you need to write test code to test your assertions, don't you also need to write test code to test your unit tests? And test code to test the test code that tests your tests, and... (try saying that five times fast) :-)