publish-and-subscribe to change events on mutables #2555
Currently if a client wants to keep track of changes being made by someone else to a mutable, it has to poll that mutable. In Magic Folder, for example, we poll every few seconds. Polling is terrible! It has bad latency (a few seconds) and also bad load (new requests every few seconds).
Instead we should implement a publish-and-subscribe notification system by which clients can maintain an open (but idle) connection to a server, and then get low-latency, cheap notifications from servers about events.
The storage server should provide an API which does something like the sketch below.
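A minimal sketch, assuming a new `watch_mutable` method and an `RIWatcher` callback interface; these names are hypothetical, not an existing Tahoe-LAFS API:

```python
# Hypothetical sketch only: these interfaces are illustrative, not real
# Tahoe-LAFS code.

class RIWatcher:
    """A Referenceable the client hands to the server; the server calls
    it back whenever the watched mutable share is modified."""
    def changed(self, storage_index):
        pass


class RIStorageServerWatchExtension:
    """The new method that would sit alongside the existing
    RIStorageServer methods."""
    def watch_mutable(self, storage_index, watcher, previous_watcher=None):
        """Start sending change notifications for the mutable share at
        `storage_index` to `watcher`.  If `previous_watcher` is given,
        the server may stop notifying it, freeing server-side resources;
        clients must not rely on that for correctness."""
        pass
```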
Passing the `previous_watcher` to the storage server is just a way to let the storage server free up resources by ceasing to send notifications to that old watcher. The client does not rely on the storage server's behavior here! Instead, the client implements a purely client-side ordering guarantee: it synchronously disables the old watcher (the old watcher is effectively a revocable forwarder) before it sends this request to the server and gives the server access to the new watcher.
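To make that client-side guarantee concrete, here is a rough sketch; `RevocableWatcher` and `watch_mutable` are illustrative names, not existing code:

```python
class RevocableWatcher:
    """Client-side forwarder: relays notifications to a callback until revoked."""
    def __init__(self, callback):
        self._callback = callback
        self._revoked = False

    def changed(self, storage_index):
        # called remotely by the storage server
        if not self._revoked:
            self._callback(storage_index)

    def revoke(self):
        self._revoked = True


def replace_watcher(server, storage_index, old_watcher, callback):
    # Revoke the old forwarder *before* the server ever learns about the
    # new one, so the ordering guarantee holds entirely on the client
    # side, whatever the server does with previous_watcher.
    new_watcher = RevocableWatcher(callback)
    if old_watcher is not None:
        old_watcher.revoke()
    server.watch_mutable(storage_index, new_watcher,
                         previous_watcher=old_watcher)
    return new_watcher
```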
To be specific: this would be a new method on `RIStorageServer`, next to `get_buckets()` and `slot_readv()`, and we should avoid leaning on Foolscap's `notifyOnDisconnect` here (see below).

We think it'd be a good idea to require that the mutable share exists before a caller can establish a watcher on it. Ideally we'd require that the caller prove knowledge of a readcap before allowing them to add the watcher, but our share formats don't really make that possible. Also, maybe the API should let you watch multiple storage indices at once. But maybe not.
At the next level up, I think we'll add an `IMutableFileNode.watch(callback)` method, and maybe an `unwatch`, which will do a mapupdate of the mutable file, locate all the servers that currently hold shares, then register a watcher on each one. Actually it should probably establish a refresh loop along the lines sketched below.
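A rough sketch, assuming a Twisted `LoopingCall`; the helpers used here (`_mapupdate_and_locate_servers`, `watch_mutable`) are hypothetical names, not existing Tahoe-LAFS methods:

```python
from twisted.internet import task

POLL_FALLBACK_SECONDS = 5 * 60  # re-establish watchers at least this often


def start_watch_loop(node, callback, reactor):
    def refresh():
        # 1. mapupdate: find the servers that currently hold shares
        d = node._mapupdate_and_locate_servers()  # hypothetical helper

        def register(servers):
            # 2. (re)register a watcher on each of those servers
            for server in servers:
                server.watch_mutable(node.get_storage_index(), callback)

        d.addCallback(register)
        return d

    # 3. repeat forever; even if every watcher connection silently dies,
    #    the next pass re-polls, bounding the latency at five minutes
    loop = task.LoopingCall(refresh)
    loop.clock = reactor
    loop.start(POLL_FALLBACK_SECONDS)
    return loop
```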
Basically we want a fallback with at most five minutes of latency in case the watcher connection fails, without trying to react to Foolscap's (unreliable / racy) `notifyOnDisconnect` feature (which we're trying to remove).

Also, note that when we move to an HTTP-based storage server, this will probably be implemented with an EventSource or a WebSocket. The `IMutableFileNode` API should hide the details.

There is currently a WIP PR for a WebSocket-based design: https://github.com/tahoe-lafs/tahoe-lafs/pull/513
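For a sense of what that might look like from the client side, here is a purely illustrative asyncio sketch; the endpoint path and message format are invented for this example and are not what PR 513 actually implements:

```python
import asyncio
import json

import websockets  # third-party 'websockets' package


async def watch_over_websocket(base_url, storage_index_b32, callback):
    # Invented endpoint path and JSON event format, purely for illustration.
    url = f"{base_url}/v1/mutable/{storage_index_b32}/watch"
    async with websockets.connect(url) as ws:
        async for message in ws:
            callback(json.loads(message))

# e.g. asyncio.run(watch_over_websocket("ws://127.0.0.1:3456", si, print))
```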