handle SIGHUP by reloading your config file #980
Reference: tahoe-lafs/trac-2024-07-25#980
Jacob is writing init scripts for Tahoe-LAFS on Debian, and he asked whether it handles SIGHUP by reloading its config file. It currently doesn't. I wonder how much internal state Tahoe-LAFS would have to reset or revisit in order to cleanly "reload its config file". Perhaps the cleanest response to SIGHUP would be to shut down and restart.
Tahoe should certainly catch SIGHUP for various things. An example that comes to mind is closing and re-opening log files.
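The close-and-reopen idea can be sketched with nothing but the stdlib `signal` module. This is an illustration only, not Tahoe's actual logging path; the `LOG_PATH` name and the `reopen_log` handler are hypothetical, and `SIGHUP` is POSIX-only:

```python
import os
import signal

# Hypothetical log path for illustration; a real node would use its
# configured logs/ directory.
LOG_PATH = "node.log"

log_file = open(LOG_PATH, "a")
reopened = {"count": 0}

def reopen_log(signum, frame):
    """Close and re-open the log file so an external rotator can move it."""
    global log_file
    log_file.close()
    log_file = open(LOG_PATH, "a")
    reopened["count"] += 1

# Register the handler (POSIX only).
signal.signal(signal.SIGHUP, reopen_log)

# Simulate what an init script's `kill -HUP <pid>` would do:
os.kill(os.getpid(), signal.SIGHUP)
```

After the signal is delivered, the process keeps logging to a freshly opened file descriptor, which is what a logrotate-style setup expects.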
There was a lot of state and configuration stuff set up during init, which is why I gave up on SIGHUP-to-reread-tahoe.cfg pretty early. It would involve replacing the Tub, for a start, which means re-registering all Referenceables, and replacing the IntroducerClient and the StorageServer.
OTOH, all of this is contained inside the Node instance, and that's just a Service which can be shut down, so maybe we could handle it by doing a top-level stopService, waiting for that to finish, then re-creating the whole Node and starting it back up. This would retain the same pid but replace pretty much everything else.
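The stop-then-recreate idea above can be sketched with a toy stand-in for the Service hierarchy. This does not model the real Node, Tub, IntroducerClient, or StorageServer, and the real `stopService` returns a Deferred rather than completing synchronously; the class and function names here are hypothetical:

```python
import os

# Toy stand-in for a Twisted Service; the real Node holds a Tub,
# IntroducerClient, StorageServer, etc.
class Node:
    def __init__(self, config_path):
        self.config_path = config_path
        self.running = False

    def start_service(self):
        self.running = True

    def stop_service(self):
        # The real stopService returns a Deferred; here it is synchronous.
        self.running = False

def restart_node(node):
    """Tear the whole node down, then build a fresh one from the same
    config file, all within the same process (so the pid is unchanged)."""
    node.stop_service()
    fresh = Node(node.config_path)
    fresh.start_service()
    return fresh

old = Node("tahoe.cfg")
old.start_service()
pid_before = os.getpid()
new = restart_node(old)
```

The key property is that `new` is a completely fresh object graph built from the config file, while `os.getpid()` is unchanged, matching the "same pid, replace everything else" behavior described above.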
Logfiles are handled by twisted.python.log, which rotates at some configurable size (1MB, I think). We could route SIGHUP or SIGUSR1 or something to force that, but in my experience the built-in log rotation has always been sufficient.
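Routing a signal to force a rotation can be sketched with the stdlib `logging.handlers.RotatingFileHandler`, standing in for Twisted's size-based rotation (this is not how twisted.python.log does it; the handler setup and `force_rotate` name are illustrative):

```python
import logging
import logging.handlers
import os
import signal

# Size-based rotation, roughly analogous to twisted.python.log's
# default behavior; rolls over automatically at maxBytes.
handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1_000_000, backupCount=3)
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.warning("some log line")

def force_rotate(signum, frame):
    # Rotate immediately, regardless of the current file size.
    handler.doRollover()

# SIGUSR1 is a common choice so SIGHUP stays free for config reload.
signal.signal(signal.SIGUSR1, force_rotate)
os.kill(os.getpid(), signal.SIGUSR1)  # what `kill -USR1 <pid>` would do
```

After the signal, the old log is renamed to `app.log.1` and a fresh `app.log` is opened, so an operator can force a rotation on demand without waiting for the size threshold.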