add tcpdump data to viz tool #1269
Reference: tahoe-lafs/trac-2024-07-25#1269
As mentioned in comment:79607, in the context of the not-yet-landed #1200 viz tool:
The idea would be to start a tcpdump process just before starting a
download, then run a tool over the output to extract just the relevant
packets. (Actually, you'd want a tool that starts by asking the tahoe
client for a list of its connections, to get the port numbers, then runs
tcpdump itself with the right filter arguments.) You'd store some
condensed form of the output (maybe a pickled list of timestamps) in a
directory where web/status.py could find it. Then status.py
would serve packet timestamps in the same JSON bundle as the other
download events (in particular the tx/rx of data-block requests). These
packet timestamps would then be shown on the same chart as the
application-level requests.
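A rough sketch of what such a capture tool might look like, as described above: build a tcpdump filter from the client's known port numbers, then collect packet timestamps from tcpdump's output. The function names here are hypothetical, and the step of querying the tahoe client for its connection list is left out (the ports are passed in directly):

```python
import subprocess

def build_filter(ports):
    """Build a tcpdump filter expression matching any of the given TCP ports."""
    clauses = " or ".join("tcp port %d" % p for p in ports)
    return "(%s)" % clauses

def capture_timestamps(ports, interface="lo", count=100):
    """Run tcpdump and return a list of packet timestamps (epoch floats).

    Hypothetical sketch: assumes tcpdump is installed and the caller has
    permission to capture on `interface` (usually requires root).
    """
    cmd = ["tcpdump", "-i", interface,
           "-tt",          # print an absolute epoch timestamp per packet
           "-l", "-n",     # line-buffered output, no DNS lookups
           "-c", str(count),
           build_filter(ports)]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    timestamps = []
    for line in proc.stdout:
        # with -tt, each line starts with a timestamp like 1234567890.123456
        timestamps.append(float(line.split()[0]))
    proc.wait()
    return timestamps
```

The condensed list returned by capture_timestamps() is the sort of thing that could be pickled into the directory status.py reads from.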
(Another thought is to have the tcpdump process publish its data over
HTTP, and put a box on the viz page to paste in the URL of that process,
so the page can fetch the data itself. This requires a browser that allows
CORS (also see here), but that dates back to Firefox 3.5 and maybe IE7.)
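The publish-over-HTTP variant mostly comes down to serving the timestamp list as JSON with a CORS header, so the viz page (served from a different origin) is allowed to fetch it. A minimal sketch, assuming the condensed capture is just a list of floats (the PACKET_TIMESTAMPS data here is made up for illustration):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical condensed capture: a list of packet timestamps.
PACKET_TIMESTAMPS = [1234567890.1, 1234567890.2]

class TimestampHandler(BaseHTTPRequestHandler):
    """Serve the packet-timestamp list as JSON, with the CORS header
    a cross-origin viz page would need."""

    def do_GET(self):
        body = json.dumps(PACKET_TIMESTAMPS).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # allow a page from any origin to read this response
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def serve(port=0):
    """Start serving on an ephemeral port; return (server, actual_port)."""
    server = HTTPServer(("127.0.0.1", port), TimestampHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```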
The goal would be to eyeball how much overhead is coming from Foolscap
and the network layer. Even though the data inside the SSL connections
would be opaque to tcpdump, all we really care about is the timing.
It should also be possible to see how multiple small messages
are combined into a single packet (Nagle's algorithm), and maybe how a
small message gets stalled behind some other large messages
(head-of-line blocking). Contention between parallel requests to
multiple servers might also show up here.
It would be great to be able to do this on the server side as well, and
get a sense for how the delay is divided between the outbound network
trip, the server's internal processing, and the return network trip. Of
course, this assumes synchronized clocks, but perhaps the
tcpdump-running tool could exchange a couple of timestamped packets
before the download starts (a sort of cheap, stripped-down NTP) and
apply the offset to the resulting packet trace.
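The "cheap, stripped-down NTP" step could use the standard NTP-style offset estimate from a single request/response exchange: record the local send and receive times and the remote receive and send times, and average the two apparent one-way deltas. A minimal sketch (the function name is my own, not from any NTP library):

```python
def clock_offset(t0, t1, t2, t3):
    """Estimate the remote clock's offset from the local clock.

    t0: local time the probe was sent
    t1: remote time the probe was received
    t2: remote time the reply was sent
    t3: local time the reply was received

    Standard NTP-style estimate; a positive result means the remote
    clock is ahead of ours, and network delay is assumed symmetric.
    """
    return ((t1 - t0) + (t2 - t3)) / 2.0
```

The tool would subtract this offset from the server-side packet timestamps before merging them into the same trace as the client-side ones.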