Debian package installation is unreliable and spuriously fails CI jobs #2979
For example, as seen on <https://circleci.com/gh/tahoe-lafs/tahoe-lafs/2864>, `apt-get --quiet --yes install git` eventually fails like:

CI got refactored to build the Debian (and other) Docker images first, and to use them to run tests only once they have been built successfully.
It's still possible for Docker image builds to fail, but image builds are no longer in the critical path for development. I guess we could add some kind of retry logic to the Docker image building code so that failures like the above don't cause an image build to fail, but that strikes me as quite low priority since these failures no longer get in the way of any development (unless they happen so many times in a row that we eventually end up testing against ancient versions of Debian or whatever, which no longer reflect what users will really be using).
I'm going to call this "good enough" and say we won't do the extra work to keep these failures from ever failing an image build job.
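For reference, the retry idea mentioned above could look something like the following shell wrapper. This is only a minimal sketch, not part of the tahoe-lafs CI configuration; the `retry_apt` function name and the attempt count are invented for illustration.

```sh
#!/bin/sh
# Hypothetical retry wrapper for flaky apt-get invocations during a Docker
# image build (sketch only; not implemented in the tahoe-lafs repository).
#
# Usage: retry_apt <max_attempts> <command...>

retry_apt() {
    max_attempts="$1"
    shift
    attempt=1
    while ! "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "Command failed after $attempt attempts: $*" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        # Refresh the package index before retrying; stale mirror metadata
        # is a common cause of transient failures during package installs.
        apt-get --quiet update || true
        sleep 5
    done
}

retry_apt 5 apt-get --quiet --yes install git
```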