cloud branch: allmydata.storage.leasedb.NonExistentShareError #2204
Extracting and reformatting the traceback contained in that error:

    ... in add_or_renew_leases
        raise NonExistentShareError(si_s, shnum)
    allmydata.storage.leasedb.NonExistentShareError

Possible duplicate of #1921.

I started with a new SQLite lease database but get the same result.
I've encountered this now as well.
The comment about starting with a new SQLite3 lease database is appreciated - that agrees with my situation and supports my hypothesis about what's going wrong here.
My understanding of the `shares` table of the lease database is that it is populated in one of two ways: when a new share is added via the account, or by the accounting crawler re-discovering shares that already exist in the backend. The code is all meant to tolerate loss of the lease database, too.

When `_locked_testv_and_readv_and_writev` is done applying writes, it attempts to `add_or_renew_default_lease` for the share (via the account). This eventually calls down to `add_or_renew_leases` on the lease database. `add_or_renew_leases` requires that the share being leased already exists in the `shares` table.

So, if the lease database is ever lost, there is a period, while the accounting crawler is re-populating the `shares` table, during which it is not possible to write to mutable files.

Because the problem depends on the implementation of mutable storage, the accounting system, and the lease database, as well as on an external event causing the loss of the lease database, it's not clear to me what testing approach will work.
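To make the failure mode concrete, here is a minimal, self-contained sketch (not the real `allmydata.storage.leasedb` code; the table layout, method names, and signatures are illustrative assumptions): a lease can only be recorded for a share that already has a row in the `shares` table, so a freshly recreated lease database refuses leases until that table has been re-populated.

```python
# Illustrative sketch only -- not the real allmydata.storage.leasedb code.
# Table layout, method names, and signatures are assumptions.

import sqlite3
import time


class NonExistentShareError(Exception):
    """A lease was requested for a share the lease database does not know about."""


class ToyLeaseDB(object):
    def __init__(self, path=":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS shares"
            " (storage_index TEXT, shnum INTEGER,"
            "  PRIMARY KEY (storage_index, shnum))")
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS leases"
            " (storage_index TEXT, shnum INTEGER, expiration REAL)")

    def add_new_share(self, si_s, shnum):
        # Share creation or the accounting crawler populates this table.
        self._db.execute(
            "INSERT OR IGNORE INTO shares VALUES (?, ?)", (si_s, shnum))

    def add_or_renew_leases(self, si_s, shnum, duration=31 * 24 * 3600):
        # Refuse to lease a share that has no row in `shares` -- the check
        # that surfaces as NonExistentShareError after the database is lost
        # and recreated empty.
        row = self._db.execute(
            "SELECT 1 FROM shares WHERE storage_index=? AND shnum=?",
            (si_s, shnum)).fetchone()
        if row is None:
            raise NonExistentShareError((si_s, shnum))
        self._db.execute(
            "INSERT INTO leases VALUES (?, ?, ?)",
            (si_s, shnum, time.time() + duration))


db = ToyLeaseDB()
try:
    # A mutable write finishing before the crawler re-registers the share:
    db.add_or_renew_leases("si1", 0)
except NonExistentShareError:
    print("lease refused: share not yet re-registered in the shares table")

# Once the share row exists again (e.g. after a crawler pass), leasing succeeds.
db.add_new_share("si1", 0)
db.add_or_renew_leases("si1", 0)
```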
Also, the implementation of `_locked_testv_and_readv_and_writev` is sufficiently complex that the idea of touching it at all is pretty scary. It seems like a reasonable fix would be to cause it to always `account.add_share` before `_update_lease` (rather than the current behavior of only calling `account.add_share` if `shnum not in sharemap`).
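As a rough illustration of that idea (a sketch under assumed names and signatures, not the actual change in the #2237 branches; in the real code the lease renewal happens via `_update_lease` inside `_locked_testv_and_readv_and_writev`):

```python
# Sketch of the proposed fix, under assumed names -- not the actual patch.
# The idea: at the end of the mutable write path, always (re-)register the
# share with the account before renewing its lease, instead of registering
# only brand-new share numbers.

def finish_mutable_write(account, storage_index, shnum, sharemap):
    # Old behavior (as described above), which leaves a rebuilt leasedb
    # unaware of pre-existing shares:
    #
    #     if shnum not in sharemap:
    #         account.add_share(storage_index, shnum)
    #     account.add_or_renew_default_lease(storage_index, shnum)
    #
    # Proposed behavior: call add_share unconditionally, so the subsequent
    # lease renewal can no longer hit NonExistentShareError.
    account.add_share(storage_index, shnum)
    account.add_or_renew_default_lease(storage_index, shnum)


class _StubAccount(object):
    """Minimal stand-in for the account object, for demonstration only."""
    def __init__(self):
        self.shares = set()
        self.leases = set()

    def add_share(self, storage_index, shnum):
        self.shares.add((storage_index, shnum))   # idempotent re-registration

    def add_or_renew_default_lease(self, storage_index, shnum):
        assert (storage_index, shnum) in self.shares
        self.leases.add((storage_index, shnum))


acct = _StubAccount()
# Here shnum 0 is already in the sharemap (the share existed on disk before
# the leasedb was lost); the old behavior would skip add_share and fail.
finish_mutable_write(acct, "si1", 0, sharemap={0: "existing-share"})
```

This assumes `account.add_share` tolerates being called for a share that is already registered; if the real API does not, the fix would need an add-or-ignore variant of that call instead.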
Fixed in the #2237 branch(es):