review Brian's patches for #607 #830
Reference: tahoe-lafs/trac-2024-07-25#830
per http://allmydata.org/pipermail/tahoe-dev/2009-October/003061.html
Okay, here is the timeline of all patches committed between the 1.5.0 release and today: http://allmydata.org/trac/tahoe/timeline?from=2009-11-13&daysback=104&changeset=on&update=Update
First up: changeset:0d5dc5161791a90a: "Overhaul IFilesystemNode handling, to simplify tests and use POLA internally."
I feel uncomfortable about the prospect that we might release a new stable version of Tahoe-LAFS without anybody having reviewed these deep changes. I'm marking this as "critical" to remind myself that I'm currently uncomfortable releasing a new stable release without first closing this ticket. I still own this ticket because I plan to do it, but if anyone else wants to do it I would definitely appreciate the help. We could even split up the work.
I'd offer to help with the review, but, well, reviewing one's own patches might cause a recursive loop in the universe and destroy us all :).
But seriously, if there's anything I can explain or provide more background on, just give a yell.
I did a functional review of these new immutable dirnodes and their use by tahoe backup.
I've been running tahoe backup on a daily basis to save my precious
data for about 8 months now. After an upgrade to the latest trunk, I ran
a backup manually which, as expected, created a new DIR-IMM dirnode
inside Archive and a link to it named Latest.
Here are my observations so far:
I'll run 'tahoe deep-check' to see if it works correctly.
My daily tahoe backup took a huge performance hit after the upgrade from 1.5.0 to trunk. This is the result of a typical daily run on 1.5.0 (+ a few patches).
This is the result of the first run after upgrade.
I've just launched a second run to determine whether it's only a one-time penalty due to the upgrade or if it's another problem. If this behavior is normal, it's probably worth being mentioned in the release notes.
Yeah, the first run with the new immutable-directory code will have to re-upload all of your directories, so you should see a slowdown on the first run. But after that, the second and later runs should be super quick, much faster than the second-and-later runs of the old (mutable directory) code, because the immutable-directory code will be doing fast sqlite lookups instead of a bunch of tahoe-side directory reads.
If you keep seeing slowdowns and can confirm that nothing much has changed between subsequent runs, let's take a look at the SQL schema and make sure we're not missing an index on something important.
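To illustrate the "make sure we're not missing an index" check, here is a minimal sketch using Python's sqlite3. The table and column names are assumptions for illustration only, not the actual tahoe backupdb schema:

```python
import sqlite3

# Hypothetical miniature of the backupdb lookup; schema names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE local_files"
             " (path TEXT, size INTEGER, mtime REAL, fileid INTEGER)")
# Without this index, the per-file lookup below degrades to a full table scan,
# which would make every backup run slow.
conn.execute("CREATE INDEX path_idx ON local_files (path)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT fileid FROM local_files WHERE path=?",
    ("/home/me/precious",),
).fetchall()
# The plan should report a SEARCH using path_idx rather than a SCAN.
print(plan)
```

Running `EXPLAIN QUERY PLAN` like this against the real backupdb would quickly confirm whether the frequently-queried column is indexed.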
I'll add a note to the release notes to prepare folks for this change and for the initial slowdown.
Yes! The subsequent runs are way faster.
I opened #894 (blackmatch fuse doesn't know what to think about immutable directories).
I read this comment:
for KeyGenerator.set_default_keysize to mean that if I call that method with None as an argument, the key generator will know to generate keys of 2048 bits. If I understand the code correctly, what actually happens is that the default keysize (which is in fact 2048 bits before I call this method) is overwritten with None.
If I later call KeyGenerator.generate without specifying a keysize, None is passed (depending on whether a remote key generator is running) to the remote key generator (in source:src/allmydata/key_generator.py) or to the RSA implementation in pycryptopp. In neither of these cases, from what I understand, is the comment actually true -- in the former, the key generator will prefer the passed key size to its default (which is also 2048 bits) and pass it to pycryptopp, and in the latter the configured keysize will be passed to pycryptopp, which will in both cases complain about its arguments.
Also note that KeyGenerator in client.py has the same name as the key generator in source:src/allmydata/key_generator.py. I'm not sure if that's a stylistic concern or anything -- they're probably unlikely to ever be in scope in the same context.
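The behavior Kevan describes can be sketched in miniature. This toy class only mirrors the described symptom; it borrows the names from allmydata.client.KeyGenerator but is illustrative, not the real implementation:

```python
# Toy model of the described behavior -- not the real allmydata code.
class KeyGenerator:
    DEFAULT_KEYSIZE = 2048

    def __init__(self):
        self.default_keysize = self.DEFAULT_KEYSIZE

    def set_default_keysize(self, keysize):
        # The comment suggested passing None would restore the 2048-bit
        # default, but storing the argument unconditionally overwrites
        # the default with None instead:
        self.default_keysize = keysize

    def generate(self, keysize=None):
        # A real RSA backend (pycryptopp) would reject keysize=None.
        if keysize is None:
            keysize = self.default_keysize
        return keysize

kg = KeyGenerator()
kg.set_default_keysize(None)
print(kg.generate())  # None rather than 2048: the documented claim fails
```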
erm, nevermind -- I read that comment again and realized that I didn't read it right -- mutable.filenode is not the same thing as client.KeyGenerator.
Thank you very much for reviewing Brian's patches! I desperately need help with this task. (And I'm prioritizing reviewing your patches for #778, which I have been doing on the bus to work every morning by the way.)
Please post on this ticket which specific patches you have reviewed so I can skip those ones when I'm looking for Brian-patches to review.
Kevan: yeah, allmydata.client.KeyGenerator is what mutable-file creation uses to get an RSA keypair, and it either generates one locally or sends a message to a remote key-generator process (defined in allmydata.key_generator.KeyGenerator). I agree that the comment needs improvement. I touched that code to make it easier for most unit tests to generate small/fast keys, while still allowing at least one test to use full-sized 2048-bit keys, and of course to make sure that normal operations use full-size keys. As you observed, each call to KeyGenerator.generate() gets to pick a keysize, and if the caller doesn't provide one, it will use self.default_keysize, which can be controlled by a call to KeyGenerator.set_default_keysize() and defaults to 2048. I think the comment on set_default_keysize was moved from generate without sufficient editing. I've just pushed a patch, changeset:fb879ddea40c03e7, to improve the comments a bit. Let me know if you see any mistakes in the new text.

zooko: I've looked over the functional changes for changeset:0d5dc5161791a90a, but none of the tests.
warner: Yes, that's much better. Thanks for indulging me. :)
I've just finished looking over the tests.
You store the nodemaker as self.nodemaker in [test_dirnode.Dirnode2]source:src/allmydata/test/test_dirnode.py?rev=4045#L780, and you've switched the tests to use the nodemaker where they used the client before, but you've left self.client as an ivar. Unless I'm missing something, none of the tests use it, so it should work just fine as a local variable in setUp.

Other than that, everything looks fine. Though there isn't a set of tests specifically for the NodeMaker (unless I've missed them), the fact that it is an abstraction of existing (and, from what I can tell, well-tested) behavior rather than new behavior means that this is probably a non-issue.
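The suggested refactoring amounts to this shape -- a hypothetical miniature of the test class with stand-in types, not the real test_dirnode code:

```python
import unittest

class FakeClient:            # stand-in for the real Client
    pass

class NodeMaker:             # stand-in for allmydata.nodemaker.NodeMaker
    def __init__(self, client):
        self._client = client

class Dirnode2(unittest.TestCase):
    def setUp(self):
        # client is only needed to build the nodemaker, so it can remain
        # a local variable instead of becoming a self.client ivar:
        client = FakeClient()
        self.nodemaker = NodeMaker(client)

    def test_has_nodemaker(self):
        self.assertIsInstance(self.nodemaker, NodeMaker)
```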
I've now looked at all of changeset:0d5dc5161791a90a.
Yeah, you're right. I've pushed changeset:9ab7524f0da3fcf8 to implement your suggestion.
And yes, there's no test_nodemaker, but all of nodemaker.py was extracted from client.py, and during the development process I iterated to attain full line-coverage on nodemaker.py . So I wasn't too worried about it. Plus I couldn't think of a good way to test it in isolation.
A lot of changeset:0d5dc5161791a90a was changing everything else to stop needing a Client, and instead using a Nodemaker or some other set of objects. And those changes should be covered by the existing tests for all of those "elses". Plus changing a lot of tests themselves to be smaller and not use a full Client when possible.
Is there a canonical list of changesets that need to be reviewed for this to be closed?
That's a good question. How about if I kind of scout around for patches that Brian committed that seem to be relevant to #607... Oh boy there are a lot of them.
changeset:5fe713fc52dc331b, changeset:f85690697a21e669, changeset:3ee740628ab32aae, changeset:e2ffc3dc03df8d73, changeset:304aadd4f7632afe, changeset:f871c3bb3ddc3b42, changeset:d079eb45f6581c27, changeset:cf65cc2ae3cc1062, changeset:b30041c5ecf3e2b6, changeset:c2520e4ec76195fb, changeset:480e1d318dd46619, changeset:ea373de042c49ba1 (you might be especially interested in that one. :-)), changeset:8a7c980e3765f89c, changeset:b4ec86c95a64d911, changeset:1273b5c233b076aa, changeset:768c76aa5fbe2c7f, changeset:2695af91a73661bf, changeset:f47672d12acb9042, changeset:cc422f8dc00d5cd3, changeset:5fe713fc52dc331b, changeset:cc422f8dc00d5cd3, changeset:512fe3ad62d0ad94, changeset:f85690697a21e669, changeset:834b20210ac37194.
Phewf! That is quite a lot, but the more of it you can review before the Tahoe-LAFS v1.6 release the better. Even if you don't find any major bugs, you will definitely be gaining expertise in the core Tahoe-LAFS logic -- expertise which I hope you will go on to use in Tahoe-LAFS v1.7. :-)
Excellent -- thanks for that. I'm mostly reviewing these when I have a spare moment during my day, so I'm not sure how much I'll get done, but I'll try.
Hm. Well, let's say for now that I'll take care of changeset:5fe713fc52dc331b. If I finish that, I'll start on the next changeset that is unclaimed.
When doing #833, I looked over all of the dirnode and web tests for immutable directories, and also the webapi.txt documentation. Those all looked OK; anything I found is addressed in the #833 patch.
Okay, good enough! Thanks, Kevan and David-Sarah!
I just finished looking through changeset:5fe713fc52dc331b, and didn't see any problems. I'll probably poke through some of the rest of these eventually, so I guess I'll reopen this if I find any problems.
Okay, from changeset:f85690697a21e669:
Am I missing something, or could you remove the self.convergence ivar from Client? From what I can tell, the only place it is used now is to make the secret holder that the nodemaker uses to get the convergence secret.

Well, I guess we can't remove it: we have web/unlinked.py, web/filenode.py, frontends/ftpd.py and frontends/sftpd.py that all reference that part of Client to get the convergence secret. It seems like it would be more elegant if there were only one canonical place for that -- the SecretHolder instance. We deal with this challenge for other sorts of files with methods in Client -- see, for instance, the method in client that talks to the nodemaker and makes a new mutable file, abstracting away the details of convergence and so on; there is none for an immutable file. But then there isn't a method to make an immutable file in nodemaker, either, so there's not a lot to use as an abstraction. I guess that's kind of rambly, but that's what I was thinking when reading that changeset -- why not add immutable file logic to the nodemaker, and use that to eliminate external dependencies on things that aren't the SecretHolder?

The rest of the changes look okay.
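The "one canonical place" idea might look roughly like this; every name here is illustrative rather than the real allmydata API:

```python
class SecretHolder:
    # illustrative: the convergence secret's single source of truth
    def __init__(self, convergence_secret):
        self._convergence = convergence_secret

    def get_convergence_secret(self):
        return self._convergence

class Client:
    def __init__(self, convergence_secret):
        # no self.convergence ivar: the secret lives only in the SecretHolder
        self._secret_holder = SecretHolder(convergence_secret)

    def get_secret_holder(self):
        return self._secret_holder

# a frontend (the web, FTP, or SFTP layer) would then fetch it via:
client = Client(b"\x00" * 16)
secret = client.get_secret_holder().get_convergence_secret()
```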
Replying to kevan:
Yeah.. I'm not sure I was thinking about it at the time, but some day we're likely to add a webapi facility to let clients provide the convergence secret on an upload-by-upload basis, which is a vague argument for continuing to let at least web/*.py grab the default from Client and then pass it into the uploader code themselves.

There will be other values like this in the future.. I'm thinking of the Accounting authority string here (for which there might be a client-wide default, or there may be no default and webapi callers are obligated to provide their own on each request).
Yeah, I've updated more of the mutable-using code than the immutable code so far (the immutable code is among the oldest in the tree).

So far, NodeMaker has acquired responsibility for turning caps into nodes, and then picked up methods to create brand new nodes: create_mutable_file, create_new_mutable_directory, and most recently create_immutable_directory. So the next logical step would be to give it a way to create new immutable filenodes: create_immutable_file, which would be known as upload() in the vernacular.
(oh, it's also worth pointing out that mutable files don't use convergence.. it wouldn't even make sense with them, so that's one fewer argument needed for client.create_mutable_file())

It would probably be a good start to get rid of dirnode.add_file, change the signature of set_node to return the node that was just added, and then replace the example you cite with something like:

If the nodemaker learned how to upload stuff, that could probably turn into:
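The example snippets referred to here did not survive the migration. Under the stated assumptions (set_node changed to return the node it just added, and a hypothetical nodemaker.create_immutable_file), the replacement shape might look like this toy sketch:

```python
class DirNode:
    def __init__(self):
        self.children = {}

    def set_node(self, name, node):
        self.children[name] = node
        return node   # the proposed signature change: hand the node back

class NodeMaker:
    def create_immutable_file(self, data):
        # hypothetical method, per the discussion above
        return {"data": data}

nodemaker = NodeMaker()
parent = DirNode()
# instead of parent.add_file(name, uploadable):
node = parent.set_node("backup.txt", nodemaker.create_immutable_file(b"hi"))
```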
As I get more comfortable with the Producer/Consumer framework, I see some places where I can get rid of the funky custom classes that I built (like FileHandle and IUploadable) and replace them with more normal things like file-like objects and IProducers. I'm not yet sure I could do away with IUploadable, but it's worth exploring. If both create_mutable_file and create_immutable_file were defined to accept either a string, a Producer, or a file-like object, then that code could turn into:
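The final snippet was also lost in migration. A minimal sketch of "accept either a string, a Producer, or a file-like object" might look like this, with the IProducer branch stubbed out and all names hypothetical:

```python
import io

def _read_all(source):
    # normalize the accepted input types down to bytes
    if isinstance(source, bytes):
        return source
    if isinstance(source, str):
        return source.encode("utf-8")
    if hasattr(source, "read"):       # file-like object
        return source.read()
    # a real implementation would also accept an IProducer here
    raise TypeError("expected bytes, str, or a file-like object")

def create_immutable_file(source):
    data = _read_all(source)
    return {"size": len(data), "data": data}

node = create_immutable_file(io.BytesIO(b"hello"))
```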