explore improved peer-selection approaches: chord, reliability-based #447
Our old roadmap.txt listed "Upload Peer Selection step 3/4" as "reliability/goodness-point counting?" and "denver airport (chord)?".
What this means: our current "tahoe two" design could be changed to take historical server reliability into account. A long time ago, we imagined a system in which each server would earn a certain number of points based upon its demonstrated reliability (computed by the uploading client itself, learned from other peers, or provided by a centralized service). Peer selection would assign shares to servers until the accumulated points reached a target. The idea was to take advantage of relatively unreliable peers instead of always sticking with the reliable ones.
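For concreteness, here is a minimal sketch of that point-counting loop. Nothing below comes from the ticket or the Tahoe codebase: the `Server` class, the `reliability` score in [0.0, 1.0], and the `target_points` threshold are all hypothetical names standing in for however reliability would actually be measured and weighted.

```python
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    reliability: float  # demonstrated reliability, 0.0 .. 1.0 (assumed scale)


def select_peers(permuted_servers, target_points):
    """Walk the (permuted) server list, assigning one share per server,
    until the accumulated reliability points reach target_points.

    Unreliable servers still receive shares; they just contribute fewer
    points, so the upload naturally spreads across more of them.
    """
    chosen = []
    points = 0.0
    for server in permuted_servers:
        chosen.append(server)
        points += server.reliability
        if points >= target_points:
            break
    return chosen
```

One design consequence worth noting: because low-reliability servers count for less, a target of (say) 3.0 points might land shares on four highly reliable servers or on eight flaky ones, trading per-server trust for breadth.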
The "dernver airport / chord" idea involved reducing the number of connections needed to work with several files from fully-connected mesh down to log(N) (see also #235). We wrote this down somewhere. The rough idea was to send messages out along the chords towards the region near the SI, bearing a request to hold shares. The nodes which received and accepted the request would then connect directly to the uploading node to receive the share data.