Repair used default shares.happy #1212
I've tried to repair a file and got:
Everything worked fine on 1.7.1 with shares.happy = 3 (I didn't change it after the upgrade). So I did a little investigation and found the problem. It's in immutable/repairer.py, line 60:
Why do we use the default happy here? It definitely should be read from the config. I didn't dig further, but I replaced it with an ugly hack:
...and the problem is gone! Repairing works with just 6 servers online.
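As a rough, non-authoritative illustration of the point being made here (hypothetical helper names, not the actual repairer code), the idea is that the repairer should take the happiness threshold from the node's configuration rather than from a hard-coded default:

    # Illustrative sketch only; not the actual immutable/repairer.py code.
    # The bug: the repairer used a hard-coded default happiness instead of
    # the shares.happy value configured in tahoe.cfg.

    DEFAULT_HAPPY = 7  # hypothetical module-level default

    def happiness_for_repair(get_config):
        """get_config(section, option, default) is a hypothetical config accessor."""
        configured = get_config("client", "shares.happy", None)
        return int(configured) if configured is not None else DEFAULT_HAPPY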
We eventually decided that this should be 0 when designing and implementing Servers of Happiness; see #778, around comment 45 for the discussion on that.
If no one has claimed this ticket by Tuesday, I'll fix it.
(I'm also setting the version to 1.8.0; if I understand your mailing list message, that's what your client is running. Feel free to change it back if I'm wrong. :-)
Yes, it's 1.8.0, but there was no such version in the drop-down list when I created the ticket. This issue is important because it breaks the repair feature on small grids with fewer than 7 nodes online. And it is even more important because repair is needed more often on such small networks.
Attachment 1212.dpatch (3258 bytes) added
I think that the patch in 1212.dpatch fixes this issue.
Attachment 1212.patch (2810 bytes) added
Why do you provide patches in dpatch format? It's Debian-only and I couldn't find sources on Google to build it for OpenSuSE (which is RPM-based). So I patched by hand; here is a unified-diff patch: 1212.patch
Looks like this version works, thanks.
Replying to eurekafag, comment:6:
That was actually a darcs patch rather than a Debian patch. I didn't realize that "dpatch" stood for "debian patch". I've updated wiki/Patches to suggest that people name their darcs patches thing.darcspatch.txt instead of thing.dpatch.

Reviewed and applied in changeset:ec4f87a98c034dac, thanks!
By the way, I think we should do more work here. This patch corrects the regression from v1.7.1 to v1.8.0 (introduced in changeset:797828f47fe1aa44), in that v1.7.1 would repair with a servers-of-happiness (H) of 0 and v1.8.0 would repair with an H of 7. While I agree that this was a regression and that we should put it back to 0, I actually think that the old behavior of 0 was wrong and that we should have been using the currently configured H instead!

That is: if you have configured your servers-of-happiness H to be 3, like eurekafag did, and the number of servers currently reachable on your grid is 2, and you do a repair, then I think the repair should stop with an explicit error message instead of proceeding and then giving you a report at the end that mentions (if you know how to read it) that it actually only put the shares onto 2 servers.

(In other words, I think I was wrong when I suggested letting the repairer use H == 0 in comment:72461. Or at least, what we did then was to keep the behavior of the repairer from v1.6 when we made v1.7, but what I'm suggesting now is to improve that behavior for the next release.)

Kevan, David-Sarah, Brian, eurekafag: do you agree? If so, let's open a new ticket saying to make the H used by repair be the same as the H that would be used by an upload. (Also, in the new code we should make the H value a parameter passed to the repairer instead of letting the repairer query the node-wide configuration. This is in keeping with CodingStandards regarding configuration and will facilitate possible future work where people can pass explicit K, M, and H for a given upload or repair, e.g. as options to the tahoe put command line or optional fields in the WUI.)

Just testing syntax highlighting by uploading an attachment named "thing.darcspatch.txt"...
Attachment 1212.darcspatch.txt (3258 bytes) added
Attachment 1212again.darcspatch.txt (3258 bytes) added
Attachment 1212.darcs.patch (3258 bytes) added
I do agree that zero happiness should be changed to H. There is no need to create a new ticket because I've already mentioned that:

  It definitely should be read from the config.

The temporary solution is nice, but not sufficient to close this ticket.
If we do that, we lose the property that the repairer will always try to place whichever shares are missing onto some storage servers, even if the end result isn't optimally distributed.
If I have a cron job that does a deep repair of my rootcap, and the rootcap or some other important dircap or filecap only has k or k+1 shares available, and it is stored on a grid with a lot of churn, I probably care more about the fact that there are more than a few shares of that cap around than I do about where they are, and I certainly wouldn't want the repairer to not even bother generating new ones because it couldn't satisfy my distribution criteria. IOW, I'm better off with more shares that are poorly distributed than I am with no repair action (I'm oversimplifying, and it depends on the specific situation, but having more shares will make things better in some situations and generally won't make things worse, AFAICT without doing the math).

On the other hand, I think that the repairer should definitely tell the user whether the file is distributed correctly or not, and an exception message certainly does that. I can also make my node's repair go for broke with share regeneration by changing the value of happiness in tahoe.cfg to 0. This is a chore, but it means that people who really want the repairer to try to place new shares regardless of where can still get that behavior.

Maybe the best approach is to fix #614 with this in mind. The repairer could regenerate and try to place all of the missing shares, as it does now, but also tell the caller (in the post-repair results) whether the repair was ultimately successful or not based on how the shares are distributed, using the client's configured happiness value for that check.
Edit: I didn't read Zooko's comment closely enough. Is what I describe in the third paragraph what the repairer already does? If so, what don't you like about that?
Replying to kevan:
  Doesn't this mean that H is effectively 0 for you when you are doing this?

Right. If you want this behavior, set H == 0. If you want the other behavior (abort the repair), set H to something else. With the v1.7.1 behavior and the current trunk behavior (since 20100927200102-b8d28-9111a341188a4264e5070f91b52364a2addcb3dc), setting H in your tahoe.cfg has no effect on repairer behavior; the repairer always acts as though H == 0.

Oh, good catch. Yes, if we fix #614 then the repairer would be using H (during the check/verify step) to determine whether or not to trigger a repair. Once it triggered the repair, it could also use H to decide whether to abort the repair, or it could instead treat H as effectively 0 for the purpose of the repair.

Now that I've thought about it more and read your comments, Kevan, I think I agree that we should have the latter behavior, as long as we fix #614 so that the output reported by the repairer can be easily understood by the user as indicating "unhealthy" when the servers-of-happiness is less than H.

Oh, in fact, what I really want is for the repairer to proceed and do its best even if it knows that it can't reach a servers-of-happiness greater than or equal to H (instead of aborting the way the uploader does), but then to return a failure result saying that it wasn't able to repair the file back to health.

Does that make sense?
Okay, I'm done changing my mind for the moment. What do you think?
Sorry: I don't understand this question. Hopefully I answered it above.
This is how I think the repairer should work (I think this is violently agreeing with Zooko's comment:13, but with more detail; see the sketch after this list):

1. Let k and N be the shares-needed and the total number of shares for this file, and let H be the happiness threshold read from tahoe.cfg.
2. If there are fewer than k connected servers, report that the repair failed completely.
3. Construct a server map for this file by asking all connected servers which shares they have. (In the case of a mutable file, construct a server map for the latest retrievable version.)
4. Construct a maximum matching M : server -> share, of size |M|, for this file (preferring to include servers that are earlier on the permuted list when there is a choice).
5. While |M| < N, and we have not tried to put shares on all connected servers: regenerate a share that is not yet in the matching and try to place it on a connected server that is not yet in the matching, adding the new (server, share) pair to M on success.
6. If |M| < k, report that the repair failed completely. If k <= |M| < H, report that the file is retrievable but unhealthy. In any case report what |M| is.

(The while loop should be done in parallel, with up to N - |M| outstanding requests.)
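A rough, non-authoritative Python sketch of the steps above (all helper names are invented for illustration; greedy_matching stands in for a true maximum-matching computation, and the place_share callback stands in for the real share-upload machinery):

    # Illustrative sketch of the proposed repair loop, not actual Tahoe-LAFS code.
    # k = shares needed, N = total shares, H = happiness threshold from tahoe.cfg.
    # servermap maps each connected server name to the set of share numbers it holds.

    def greedy_matching(servermap):
        """One (server, share) edge per server; a stand-in for a true maximum matching."""
        matching, used = {}, set()
        for server, shares in servermap.items():
            for share in sorted(shares):
                if share not in used:
                    matching[server] = share
                    used.add(share)
                    break
        return matching

    def repair(servermap, k, N, H, place_share):
        """place_share(server, sharenum) -> bool is a hypothetical upload callback."""
        if len(servermap) < k:
            return "repair failed completely: fewer than k connected servers"

        matching = greedy_matching(servermap)
        unplaced = [s for s in range(N) if s not in matching.values()]
        untried = [srv for srv in servermap if srv not in matching]

        # Grow the matching: put missing shares on servers not yet in the matching.
        while len(matching) < N and untried and unplaced:
            server = untried.pop(0)
            share = unplaced[0]
            if place_share(server, share):
                matching[server] = share
                unplaced.pop(0)

        size = len(matching)
        if size < k:
            return "repair failed completely (|M| = %d)" % size
        if size < H:
            return "file is retrievable but unhealthy (|M| = %d)" % size
        return "file looks healthy (|M| = %d)" % size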
comment:80267 seems sensible to me.
I'm not sure whether milestone 1.8.1 refers to the small regression that I submitted a patch for, or to the broader issue of how the repairer should work, which is likely to be fixed by #614. If the latter, then 1.8.1 might be a little optimistic; fixing #614 correctly will require (unless I'm missing something obvious) a decent chunk of engineering, since the immutable file repairer is currently very simple. I would at least be more confident in my ability to get #614 done by 1.9.0 than by 1.8.1.
Replying to davidsarah:
Why this step?
Replying to zooko, comment:19:
Just a shortcut; this case would fail in the last step anyway.
Replying to davidsarah:
[...]
A small refinement of this step would be that once |M| >= H, we could allow placing the remaining N-H shares on servers that are already in the matching, if we're unable to place them on servers that are not in the matching.
Replying to davidsarah:
Okay, but why this one?
We definitely need to classify health into several types: unrecoverable, 100% (|M| >= N), and servers-of-happiness-satisfying (|M| >= H) (that one needs a better name! "healthy"?). Do we also need another type to show that servers_of_happiness >= K?

I think we should distinguish between the level of happiness at which the uploader or repairer will (a) abort the operation, and the level at which it will (b) report the result as unhealthy.

One of the questions in this ticket -- comment:80274 -- is whether (a) should trigger when |M| < K or not. Sometimes people would rather that the uploader/repairer get the file out there, even if all the shares are on a single server! Other times people might prefer that the uploader/repairer avoid wasting bandwidth on that and instead stop and raise the alarm.

#614 is all about whether (b) should trigger when |M| < N (current behavior) or |M| < H (proposed new behavior).

Oh, what I just proposed in comment:80275 is a significant new behavior if we allow the level of happiness that triggers (a) to be different from the level of happiness that triggers (b)! Currently the uploader/repairer aborts the upload or repair if it knows that it cannot achieve "health", i.e. |M| >= H. There are even unit tests to ensure that the buildbot will go red if the uploader/repairer proceeds to do an upload when it can't reach that level of happiness. :-)

Ah, I confusingly said "|M| < k" when I actually meant to say "the file is not retrievable". (It might be retrievable if there are >= k shares, but on fewer than k distinct servers.)
I think we should only abort a repair if the file is not retrievable (in which case we can't repair it anyway).
Hmm, why shouldn't a check-and-repair always try to restore a file to happiness N? The only reason I can think of is that it might result in redundant shares if there are a few servers that are sometimes disconnected, but wouldn't that tend to stabilise after a few repair cycles?
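To make those health categories concrete, here is a small illustrative sketch (the names and thresholds follow the discussion above; this is not an existing Tahoe-LAFS API, and "happiness" means the size of the maximum server-to-share matching |M|):

    # Illustrative only; not the actual checker/repairer results code.
    def classify_health(retrievable, happiness, k, H, N):
        """retrievable: can k distinct shares be fetched at all?
        happiness: size of the maximum server->share matching |M|."""
        if not retrievable:
            return "unrecoverable"
        if happiness >= N:
            return "100% healthy"              # every share on a distinct server
        if happiness >= H:
            return "healthy"                   # satisfies servers-of-happiness
        if happiness >= k:
            return "retrievable but unhealthy"
        # >= k shares exist, but they sit on fewer than k distinct servers
        return "retrievable but badly distributed"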
I guess something that I haven't made up my mind about yet is how repair jobs (either the tahoe repair command on the CLI or clicking the "check-and-repair" button in the WUI) should handle the case where the upload/repair fails, or partially fails, on some of the files. Should it proceed to completion, generate a report saying to what degree each attempt to repair a file succeeded, and exit with a "success" code (i.e. exit code 0 from tahoe repair)? Or should it abort the attempt to repair this one file, and should it also abort any other file repair attempts from the current deep-repair job?

For example, suppose you ask it to repair a single file with K=3, H=7, N=10, and it finds out that there are only two storage servers currently connected. One storage server has 3 shares and the other has 0. Then should it abort the upload immediately? Or should it upload a few shares (3?) to the second storage server, which currently has none, and then report to you that the file is still unhealthy?

Here is one set of principles to answer this question (not sure if this is the best set):

1. Idempotence: if you run an upload-or-repair job, and it does some work (uploads some shares), and then you run it again when nothing has changed among the servers (no servers joined or left, and none of them acquired or lost shares), then the second run will not upload any shares.

2. Forward progress: if you run a repair job (not necessarily an upload job!), and it is possible for it to make |M| greater than it was before, then it will do so.

If we use these principles then we give up on an alternate principle:

3. Bandwidth conservation: if the upload-or-repair job determines that it cannot achieve |M| >= H, then it does not use any bulk network bandwidth. (Also, if it looks possible at first, but after it has started uploading one of the servers fails and it becomes impossible, then it aborts right then and does not use any more of your network bandwidth.)

I think people (including me) intuitively wanted principle 3 for uploads, but now that we are thinking about repairs instead of uploads we intuitively want principle 2.

One possibility would be to make the behavior of the uploader different from that of the repairer. Perhaps people prefer for their initial uploads to fail quickly and network-efficiently (principle 3) if they won't be able to achieve a happiness level of H, but prefer for their repairs to proceed and do their best (principle 2). However, making the two behave differently would make things more complicated in the source code and also more complicated in usage, because principle 1 -- idempotence -- would not apply to "first upload and then repair" or "first repair and then upload". Sometimes an upload would abort itself and return failure but then a subsequent repair would do a lot of work to make progress, or a repair would do a lot of work to make progress but then an upload would abort itself and return failure.

Unless we are really sure that we need to support two different modes, I would prefer to err on the side of simplicity and find a mode that is good enough for both upload and repair. One good way to estimate "complication in usage" is to think about how much documentation we would need to write to explain the different behavior of upload and repair in the different cases. :-)
Replying to zooko:
I'm not sure that two different modes would add much complexity. Almost all of the code would be shared, and the upload/repair flag would just enable the fast abort in the upload case.
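A minimal sketch of that shared-code-plus-flag idea, with hypothetical names (this is not the real Tahoe-LAFS uploader or repairer interface):

    # Sketch of one shared placement path with an upload/repair flag.
    class UnhappinessError(Exception):
        pass

    def achievable_happiness(servermap, N):
        # Each connected server can contribute at most one distinct share
        # to the happiness count.
        return min(len(servermap), N)

    def place_shares(servermap, k, H, N, is_repair, place_share):
        if not is_repair and achievable_happiness(servermap, N) < H:
            # Principle 3: an upload fails fast rather than spending bulk
            # bandwidth on a placement that cannot reach the threshold H.
            raise UnhappinessError("cannot reach happiness threshold %d" % H)
        # Principle 2: a repair keeps going and does its best; the caller then
        # reports the file as unhealthy if the resulting happiness is < H.
        placed = 0
        for sharenum, server in zip(range(N), servermap):
            if place_share(server, sharenum):
                placed += 1
        return placed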
At some point, possibly in email to tahoe-dev, davidsarah convinced me that two modes are appropriate: people who are uploading a file are not yet committed to the file being up, so it is better for them to abort in case of an unsatisfying distribution, but people who are repairing an existing file are already committed to the file being out there, so it is better for them to do their best to make some improvement even in case of an unsatisfying distribution.
Okay, I've now re-read this long, confusing ticket, and I now agree that the patch Kevan already applied, making H be 0 during repair, is correct. This means that repair processes always try to make progress (principle 2 from comment:80279) instead of trying to conserve network bandwidth (principle 3 from comment:80279), but upload processes (which aren't repairs) choose principle 3 instead of principle 2.

Also, yes, we really ought to fix #614 by defining "healthy" as "satisfying the servers-of-happiness level that my user has chosen". :-)

I don't think there's anything else to do but add a source:NEWS entry, and then we can close this ticket. Does anyone else who is reading this agree?
Replying to zooko:
Yes. There are still things we want to fix about repair (at least #614, #1124, and giving more complete information about the health of a file after repair), but let's address those for v1.9.0.
In changeset:cb764da0edc2b161:
Replying to davidsarah:
...and preferring to include servers that have the least available space (especially those that are full), since that will allow uploads to succeed in more cases by placing new shares on servers that do have available space.
Diego "sickness" Righi is dissatisfied with this solution. He has 10 storage servers, and sets M=10 and H=10. His desire is that he never gets more than one share on one storage server. Current uploader does what he wants -- it never places more than one share on one storage server. But repairer does what he doesn't want -- if fewer than 10 storage servers are available then repairer uploads extra shares to some of the available servers.
To my way of thinking, uploading extra shares makes the file more available. For example, if you have 8 servers with 1 share each and 1 server with 2 shares (and K=5), then if you lost the first five of your servers (each of which had 1 share) you could still recover your file from the remaining four servers. If instead you have 9 servers with one share each, K=5, and you lost the first five of your servers, then the file would be lost.

So, now I'm going to stop here and ask sickness: does this cause you to change your mind, so that now you want the repairer to upload a second share to one of the existing servers in the case that there are only 9 servers available? Or do you still prefer that it should not do that?
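The arithmetic in that example can be checked mechanically; here is a tiny self-contained sketch (layouts invented to match the description, and assuming each server holds distinct shares, as in the scenario above):

    # Compare the two layouts: after losing the first five servers, can we
    # still gather K = 5 distinct shares from the survivors?
    K = 5

    def recoverable_after_losing_first_five(shares_per_server):
        survivors = shares_per_server[5:]   # the first five servers are gone
        return sum(survivors) >= K          # enough distinct shares remain?

    layout_a = [1, 1, 1, 1, 1, 1, 1, 1, 2]  # 8 servers with 1 share, 1 with 2
    layout_b = [1, 1, 1, 1, 1, 1, 1, 1, 1]  # 9 servers with 1 share each

    print(recoverable_after_losing_first_five(layout_a))  # True: 3*1 + 2 = 5 shares left
    print(recoverable_after_losing_first_five(layout_b))  # False: only 4 shares left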
I think sickness's desire for not having more than one share on a server blurs two things. One is having adequate redundancy, and a behavior of adding shares s.t. a server has two (in the 9 servers present case) helps. But, when the 10th server is back on line, if it doesn't have a share, then repair should consider the file unhealthy and place a share on the 10th server such that 10 servers have a distinct share. Further the lease on the extra share probably shouldn't be renewed.
If sickness also desires some form of storage efficiency, to avoid placing the 2nd share, then I think it's a misuse of servers-of-happiness and there should be some max-shares-per-server config, defaulting to infinite.
This all becomes difficult in the middle, when you have a 3-of-10 encoding and 3 or 4 servers. You want to set H to 3 or 4, but a share distribution of 7/1/1/1 isn't really OK - you want it to be more balanced. But I think we should figure out whether this is a reliability concern or an efficiency concern and treat them separately.
Sorry, I didn't mean sickness's desire was blurry. I meant that on reading it, there are two issues possibly behind it, and we should be clear on which we are addressing and why.
Replying to zooko:
The original problem in this ticket was that the repairer was using the default value for happiness, which was certainly wrong. Let's not overload the ticket; sickness' complaint is that the current repairer often places shares in a way that doesn't increase happiness, when another different placement of the same number of shares would have done so. That's covered by #1130.
Hm, perhaps we should take this to tahoe-dev, because I don't think that is sickness's complaint -- I think his complaint is that it uploads more than one share to a server. I'll try to write a post for tahoe-dev.
Reopening this ticket. I'm affected by the same fundamental problem, but by a different path. The fix identified earlier was to immutable/repairer.py, but I'm getting an error from immutable/upload.py.
Scenario:
I'm using 2-of-4 encoding with shares.happy=4 on tahoe 1.8.1. From the CLI I do a tahoe check --repair on a file with shares {0, 2, 3} already existing on the grid but share 1 not existing, and I get an UploadUnhappinessError complaining that "we were asked to place shares on at least 7" servers. There are only 4 servers on my grid -- hence my choice of shares.happy=4.
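For reference, that setup corresponds to a tahoe.cfg [client] section along these lines (values taken from the scenario above):

    [client]
    shares.needed = 2
    shares.happy = 4
    shares.total = 4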
I observed that in immutable/upload.py, BaseUploadable has a statement "default_encoding_param_happy = 7". I tried the experiment of changing this value to 4 (the shares.happy value in my tahoe.cfg) and then the repair succeeds without error.
So there must be a path through this code where the default_encoding_param_happy value is actually used instead of being overridden by the value in tahoe.cfg. (I think it smells a little that this object has defaults at all, instead of requiring the parameters to be provided.)
Since this is a regression, I think we should consider trying to fix it for Tahoe-LAFS v1.9.1. Advice and help would be appreciated...
Please note that the scope of this ticket is just the fact that immutable/upload.py is incorrectly using default_encoding_param_happy = 7. As far as I know, we're not trying either to fix #1130 or to apply the refactoring/improvements to share placement from #1382 in Tahoe-LAFS 1.9.1.

kmarkley86: a stack trace would help me fix this. Could you provide one?
The problem described in comment:80298 is critical to fix for v1.9.2 (or 1.10.0 if we decide to call it that; the next release, anyway).
Oh wait, no, hold on -- this is a PHP script? No PHP on tahoe-lafs.org! Sorry.
Wrong ticket. (should have been #1417)
Changed title from "Repairing fails if less than 7 servers available" to "Upload (sometimes?) ignores shares.happy in tahoe.cfg".
In changeset:196bd583b6c4959c:
kmarkley86: can you try again to reproduce the problem in comment:80298 using trunk?
In changeset:5521/1.9.2:
In changeset:5522/1.9.2:
We decided to defer actually fixing the bug (if it still exists) to 1.10.
In changeset:5883/cloud-backend:
Kyle: this ticket is blocked on you attempting to reproduce comment:80298 using the new code, which has assertions that will let us learn more about the bug.
Moved to #1830. The original problem was fixed in 1.8.1 I think. See #1130 and #1382 for other improvements to share placement and servers-of-happiness.
Changed title from "Upload (sometimes?) ignores shares.happy in tahoe.cfg" to "Repair used default shares.happy".

There was discussion of this issue on tahoe-dev: [//pipermail/tahoe-dev/2013-March/008091.html]
Replying to zooko:
I'm sure that's not the same issue (nor is it the same issue as #1830).