automatically schedule tests of large files #437
#435 (automate testing of large files) is to create an automated test which, once launched, can determine whether a given version of Tahoe correctly uploads and downloads large files. For starters we need to test files > 4 GiB, then files > 12 GiB, and probably even larger files in the future.
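For a rough idea of what such a round-trip check might look like (this sketch belongs to #435 rather than to the scheduling question here), assuming a running Tahoe client and the standard "tahoe put" / "tahoe get" CLI commands; the sizes, paths, and script structure are invented for illustration:

    #!/usr/bin/env python
    # Sketch of a large-file round-trip check. Assumes a running Tahoe
    # client on this machine and the standard "tahoe put" / "tahoe get"
    # CLI commands; everything else here is made up for illustration.
    import filecmp, os, subprocess, tempfile

    SIZE = 5 * 1024 ** 3  # a bit over 4 GiB, the first boundary we care about

    def make_test_file(path, size):
        # Written sparsely so the input doesn't need 5 GiB of real disk.
        with open(path, 'wb') as f:
            f.seek(size - 1)
            f.write(b'\0')

    def main():
        workdir = tempfile.mkdtemp()
        src = os.path.join(workdir, 'big.in')
        dst = os.path.join(workdir, 'big.out')
        make_test_file(src, SIZE)
        # "tahoe put" prints the resulting file cap on stdout.
        cap = subprocess.check_output(['tahoe', 'put', src]).strip()
        subprocess.check_call(['tahoe', 'get', cap, dst])
        assert filecmp.cmp(src, dst, shallow=False), "round-trip mismatch"

    if __name__ == '__main__':
        main()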
This ticket is to automatically schedule those tests to run under certain conditions. Running such a test can be expensive in terms of disk, CPU, and network, so we don't necessarily want to run it on the normal schedule of our unit tests, which run on every darcs commit on all test builders.
Such a test also takes a long time, so we don't want to habitually block other processes (such as running other unit tests or building packages) on a large-file test completing.
Brian has already expressed a desire that large-file tests not be launched automatically at all, but only by the explicit decision of a human. I disagree and would like us to have an automated policy for running large-file tests. If we go with Brian's preference then we'll just close this ticket as "invalid" or "wontfix".
Here are some policies which are easily implemented in buildbot and which can reduce the problems posed by an expensive test like this (see the sketch after this list):

 * We can limit the large-file test to certain buildslaves.
 * We can let other processes (other unit tests, buildbot results e-mails, package builds, etc.) go ahead without waiting for the result of the large-file test.
 * Buildbot by default won't launch a new run of a test while one is already in progress, and if multiple triggers for a test accumulate, a single run satisfies all of them. That can help here.
 * We could make large-file tests be triggered not by darcs commit activity, but once per day, or once per week, or whatever.
 * We could implement more detailed policies, such as "run it every day at midnight UTC-7 if there was any darcs commit that day" or "if the test takes X hours to run, then wait at least 3 * X hours before starting the next run", but I don't know off the top of my head how easy it is to program buildbot for such a policy.
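For concreteness, here is a minimal sketch of the first, second, and fourth policies as a buildbot master.cfg fragment. It uses the modern buildbot.plugins API (older buildbots spelled these buildbot.scheduler.Nightly and slavenames), and the builder name, worker name, and test command are all invented for the example:

    # master.cfg fragment -- a sketch, not a tested configuration.
    from buildbot.plugins import schedulers, steps, util

    # Trigger nightly, but only when there was at least one commit that
    # day, rather than on every commit like the ordinary unit tests.
    c['schedulers'].append(schedulers.Nightly(
        name='large-file-nightly',
        builderNames=['large-file-test'],   # hypothetical builder name
        hour=0, minute=0,
        onlyIfChanged=True,                 # skip the run if nothing changed
    ))

    # A dedicated builder keeps the expensive test off the normal builders,
    # so unit tests, result e-mails, and package builds never wait for it.
    large_file_factory = util.BuildFactory()
    large_file_factory.addStep(steps.ShellCommand(
        name='large-file round-trip test',
        command=['python', 'check_large_files.py'],  # hypothetical script
        timeout=12 * 60 * 60,               # generous: this may run for hours
    ))

    c['builders'].append(util.BuilderConfig(
        name='large-file-test',
        workernames=['big-disk-worker'],    # limit to machines with enough disk
        factory=large_file_factory,
    ))

Buildbot's default collapsing of queued requests for the same builder gives us the third policy (many accumulated triggers, one run) for free.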
Yeah, running it in an automated fashion like once a week seems like it could be reasonable. How about we create a target in the Makefile called "make expensive-test", and then have a buildslave that runs it once a week or something? Of course, first we need to write the test and measure how long it actually takes, and then decide what hardware we want to use for this purpose (after seeing just how slammed the machine gets).
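In terms of the sketch above, the weekly variant would just swap the scheduler; "make expensive-test" is the proposed, not-yet-written target:

    from buildbot.plugins import schedulers, steps, util

    # Weekly instead of nightly: fire Sundays at midnight, commits or not.
    c['schedulers'].append(schedulers.Nightly(
        name='expensive-test-weekly',
        builderNames=['large-file-test'],   # the dedicated builder sketched above
        dayOfWeek=6, hour=0, minute=0,      # buildbot counts Monday as 0
    ))

    # ...and the builder's step shells out to the proposed Makefile target:
    expensive_factory = util.BuildFactory()
    expensive_factory.addStep(steps.ShellCommand(
        command=['make', 'expensive-test'],
        timeout=24 * 60 * 60,               # generous cap until we've measured it
    ))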
Sounds good.
Moving this out of the 1.3.1 Milestone.