Compare commits


224 Commits

Author SHA1 Message Date
David Stainton 02b7cc0072 test_download_failure: advance clock by 0, not 3 seconds. 2016-02-18 16:04:12 +00:00
meejah afdc715870 re-org things to make test pass 2016-02-18 16:04:12 +00:00
David Stainton 20fac34d39 Rewrite the download retry test
this test still fails
2016-02-18 16:04:12 +00:00
David Stainton 3f925d6e5c Add not-working test for download retry 2016-02-18 16:04:12 +00:00
Daira Hopwood 53714ee5aa Expand test for Magic Folder statistics. 2016-02-18 15:40:19 +00:00
Daira Hopwood e3a27ede79 Improve test for Magic Folder statistics and move it from test_system.py to test_magic_folder.py. 2016-02-16 15:58:53 +00:00
Daira Hopwood 73e9cfe818 Work in progress to fix incorrect statistics output for Magic Folder. refs ticket:2709 2016-02-16 15:27:23 +00:00
Daira Hopwood 6576f084be Merge pull request #241 from tahoe-lafs/2535.chmod-not-umask.2
2535.chmod-not-umask.2
2016-02-16 15:20:30 +00:00
Daira Hopwood 1d01a98733 Fix test_magic_folder.py to use absolute unicode paths when calling write_downloaded_file. 2016-02-15 15:43:00 +00:00
Daira Hopwood 1f3463c815 Fix test to use arguments with absolute unicode paths 2016-02-15 15:29:29 +00:00
Daira Hopwood dd3330ea2f Fix missing import 2016-02-15 15:29:06 +00:00
David Stainton b45d34ebf8 Add daira's implementation of make_dirs_with_absolute_mode 2016-02-15 13:48:12 +01:00
David Stainton cf039664c0 Fix file_util test of make_dirs_with_absolute_mode
3 directory levels deep is sufficient for this test
2016-02-15 13:48:08 +01:00
David Stainton c7397f0066 Use os.path.dirname instead of split and rename var to initial_path_u 2016-02-15 13:48:03 +01:00
David Stainton fa16ebf954 remove superfluous trailing comma from make_dirs_with_absolute_mode def 2016-02-15 13:47:58 +01:00
David Stainton 856fe4708a Add unit test and make corrections to make_dirs_with_absolute_mode 2016-02-15 13:47:52 +01:00
David Stainton 3197c96a14 Break out our chmod while loop into fileutil.py 2016-02-15 13:47:39 +01:00
David Stainton 8cfc02691a Use chmod instead of changing umask
Conflicts:
	src/allmydata/frontends/magic_folder.py
2016-02-15 13:47:14 +01:00
David Stainton 2b8654ebb9 Add comment about FUDGE_SECONDS and refer to our design doc 2016-02-15 13:43:49 +01:00
Daira Hopwood 2d922db853 Fix pyflakes warnings.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-09 15:30:06 +00:00
meejah 83a3f7047a remove humanize, use internal method, teach internal method to understand timedelta 2016-02-08 10:13:09 -07:00
meejah 6867b0a3c0 add Bob leaving to smoke-tests 2016-02-08 10:12:32 -07:00
meejah 9f84feef33 fix return value of leave 2016-02-08 10:12:32 -07:00
meejah 5ed27bdfae add docstrings 2016-02-08 10:12:32 -07:00
meejah 6901105a24 make history longer 2016-02-08 10:12:32 -07:00
meejah b5cc7f6a1c flesh out IQueuedItem 2016-02-08 10:12:32 -07:00
meejah e869991373 humanize doesn't report a proper version 2016-02-08 10:12:32 -07:00
meejah 2341fc3d30 add humanize dependency 2016-02-08 10:12:32 -07:00
meejah 456f67f0a4 Add some magicfolder database tests 2016-02-08 10:12:32 -07:00
meejah 8a4692dcd5 Add simple auth-token to get JSON data 2016-02-08 10:12:32 -07:00
meejah 144d31b4c3 Flesh out "tahoe magic-folder status" command
Adds:

 - a JSON endpoint
 - CLI to display information
 - QueuedItem + IQueuedItem for uploader/downloader
 - IProgress interface + PercentProgress implementation
 - progress= args to many upload/download APIs
2016-02-08 10:12:32 -07:00
David Stainton f660aa78ab Add cli stub for magic-folder status command 2016-02-08 10:12:32 -07:00
David Stainton cec7948d89 Daira's fix during session with David and Meejah. 2016-02-05 22:13:37 +00:00
David Stainton 58e8474859 Further work in progress refinements to unit tests
wip from pairing with Daira and Meejah
2016-02-05 22:13:37 +00:00
David Stainton 3c5f8d2949 Daira's fix to the hook mixin 2016-02-05 22:13:37 +00:00
David Stainton dc41fa111c Fix cleanup after test_periodic_full_scan. 2016-02-05 22:13:37 +00:00
David Stainton 9244ac2613 Add logging for Downloader.stop. 2016-02-05 22:13:37 +00:00
David Stainton 2ebb233389 Work in progress on fixing test_periodic_full_scan. 2016-02-05 22:13:37 +00:00
David Stainton ad0ca86d46 Minor cleanup and logging fix. 2016-02-05 22:13:37 +00:00
David Stainton 0e92d3626d Call hooks eventually. 2016-02-05 22:13:37 +00:00
David Stainton ffe67b9a09 Clean up queue logging and add a clock advance in unit test
not yet working
2016-02-05 22:13:37 +00:00
David Stainton c54a127ce3 WIP 2016-02-05 22:13:37 +00:00
David Stainton 25f94bcfd9 Add ignore pending argument to _periodic_full_scan 2016-02-05 22:13:37 +00:00
David Stainton efe29af6d6 Attempt to test periodic uploader full scan 2016-02-05 22:13:37 +00:00
David Stainton b8fb8848a2 Only perform full scan if pending set is empty 2016-02-05 22:13:37 +00:00
David Stainton 2be3bc88b7 Naive periodic full scan 2016-02-05 22:13:37 +00:00
David Stainton 25892aa205 Fix uploader's _process to extend queue after scan and count stats properly 2016-02-05 22:13:37 +00:00
David Stainton 2e6e13f78e Daira's excellent fixes to the uploader from our pairing session 2016-02-05 22:13:37 +00:00
David Stainton 85ce163598 Teach uploader's _scan to use the deque 2016-02-05 22:13:37 +00:00
Daira Hopwood 7cfff0b5b0 Add multi-party-conflict-detection.rst.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 22:13:37 +00:00
David Stainton ac87d59e49 Remove UNIQUE constraint from magicfolder db schema 2016-02-05 22:13:37 +00:00
Daira Hopwood 20576463d0 Documentation for Magic Folder.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 22:13:37 +00:00
Daira Hopwood 973a5f3ad0 Add get_pathinfo.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 22:13:37 +00:00
Daira Hopwood f47e669ab3 Fix a test failure.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 22:13:21 +00:00
Daira Hopwood 44640be53e Teach magic-folder join to use configutil.
Author: David Stainton <david@leastauthority.com>
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton 93f24ce9e0 Add magic-folder test_scan_once_on_startup 2016-02-05 21:59:38 +00:00
Daira Hopwood 55e69554c6 Various test cleanups.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton 56a5a8dd97 Teach join twice test to test for proper stdout and stderr values 2016-02-05 21:59:38 +00:00
David Stainton cff2dbf6b3 More thorough checks to the join leave join test 2016-02-05 21:59:38 +00:00
David Stainton 0c5d029180 Fix test_join_leave_join 2016-02-05 21:59:38 +00:00
David Stainton 1db2f783ec Add more magic-folder join leave tests 2016-02-05 21:59:38 +00:00
David Stainton 5bc7db0da6 Add unit test for join twice failure 2016-02-05 21:59:38 +00:00
Daira Hopwood 44cc039aa3 Delay only the download scan not the turning of our event queue.
Author: David Stainton <david@leastauthority.com>
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton 700920c6eb Fix magic folder constructor signature in test 2016-02-05 21:59:38 +00:00
David Stainton 49dfffabb0 Fix tests usage of umask 2016-02-05 21:59:38 +00:00
David Stainton 11b44556fc Fix umask again 2016-02-05 21:59:38 +00:00
David Stainton f74601987e Fix umask usage bit fiddling and use try finally blocks 2016-02-05 21:59:38 +00:00
David Stainton 2e5d049dbb Add download.umask config option with default of 077 2016-02-05 21:59:38 +00:00
David Stainton d7a59cd515 Add umask to Downloader 2016-02-05 21:59:38 +00:00
David Stainton a999654d83 remove magic_folder from config in test 2016-02-05 21:59:38 +00:00
David Stainton 8e4d895c02 Remove magic_folder subsection from default tahoe.cfg 2016-02-05 21:59:38 +00:00
David Stainton 04f95d0050 Remove magic-folder exclude stat 2016-02-05 21:59:38 +00:00
Daira Hopwood d643f426c7 Improve the error reporting for 'tahoe magic-folder join/leave'. refs #2568
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton 154f0e0278 teach leave to: remove magic_folder section of config 2016-02-05 21:59:38 +00:00
David Stainton 1082a6fdfd Add simple magic-folder leave command 2016-02-05 21:59:38 +00:00
David Stainton d596d07642 Teach join to also check for existing magicfolder db file 2016-02-05 21:59:38 +00:00
David Stainton 45b1335172 Prevent magic-folder join if already joined. 2016-02-05 21:59:38 +00:00
Daira Hopwood e2db35981c Refactor is_new_file.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
Daira Hopwood 3aa978b084 .gitignore: add /smoke_magicfolder/, delete obsolete stuff.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
Daira Hopwood c280335329 Minor simplification.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton 0102f10f46 Simplify conflict detection by removing nested if statements 2016-02-05 21:59:38 +00:00
David Stainton 3dd027ca2c Scan our own dmd upon start only 2016-02-05 21:59:38 +00:00
meejah af44a555d9 remove print 2016-02-05 21:59:38 +00:00
meejah 8af7fc298c Add a unit-test and correct the code for "already deleted"
If a Downloader decides that it needs to delete a file, but that
file is already gone locally, the exception is caught and a log message
is produced.
2016-02-05 21:59:38 +00:00
David Stainton 30a7b05c66 Fix test: previously we accounted for the propagation of the conflict
because Alice scanned her own DMD... whereas now she does not.
2016-02-05 21:59:38 +00:00
David Stainton 80e127edad Fix test helper _check_version_in_local_db 2016-02-05 21:59:38 +00:00
Daira Hopwood c904cba3ef WIP: Refactoring to get db fields in one query.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton e1d0a8bc4c WIP 2016-02-05 21:59:38 +00:00
Daira Hopwood 2b5f8f448c Fix pending upload conflict detection.
Author: David Stainton <david@leastauthority.com>
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton dd753059fd Create test for last upload uri conflict 2016-02-05 21:59:38 +00:00
Daira Hopwood 13020b9937 Detect remote conflict by checking for pending upload.
Author: David Stainton <david@leastauthority.com>
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
David Stainton d63e13052e Add debugging print statement for timestamp comparison 2016-02-05 21:59:38 +00:00
David Stainton fbf0e0da75 Add last uploaded timestamp comparison for remote conflict detection 2016-02-05 21:59:38 +00:00
Daira Hopwood ec3a350e6c Fix an unused import.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
Daira Hopwood d39ed561b7 Fix some miscapture bugs.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:38 +00:00
Daira Hopwood 10e7e41f5a Workaround.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 8c742149ac Simplify _notify and improve logging.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood c3a86e4edd Simplify _scan_remote_* and remove Downloader._download_scan_batch attribute.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood eea43d59a0 Cosmetics.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 03d2000ce8 Downloader doesn't need the pending set.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood f23fa3a400 Delete redundant is_ready attribute from MagicFolder.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 4e27fab1ef test_encodingutil: fixes for Unix.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 793d42426f Add precondition to Uploader._process.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 63cef55a6f Fix test_errors.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 71bbe474d1 Fix test_move_tree.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood a590b2b191 Debugging WIP.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 4fdd4c8402 Fix a corner case for to_filepath on Windows to make it consistent with Unix.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 5be2fe1ce2 test_encodingutil: add tests for FilePath-related functions.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 1f076df98b Depend on FilePath.asTextMode().
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 45c99ff698 WIP: exclude own dirnode from scan. This is not quite right; we shouldn't exclude it on startup. refs #2553
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 3c7118863f Magic Folder docs: status of tests on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood dbbbb58735 More Magic Folder doc updates.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood efb986bfce Magic Folder doc updates.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 5e90916fe9 Don't add subdirectory watches if the platform's notifier doesn't require them. refs #2559
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood d415888566 Improve reporting of assertion failures.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood b19a3057f4 Fix unused import.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood b369d98d61 Revert pinning of Twisted 15.2.0 which causes more problems than it solves.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 5dab93475d Turn off bridging to Twisted log, and pin to Twisted 15.2.0.
Hopefully this will avoid http://foolscap.lothar.com/trac/ticket/244 .

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
meejah 08e4dc1767 fix the windows command-line too 2016-02-05 21:59:37 +00:00
meejah 27d6e3103e some minor fixes for instructions 2016-02-05 21:59:37 +00:00
Daira Hopwood e5a6113559 Windows path fix.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood b29405a760 magic-folder-howto.rst formatting fixes.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 567230a21d Add docs/magic-folder-howto.rst.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 543a12e1f7 Add test for 'tahoe create-node/client/introducer' output. closes ticket:2556
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 6a7a3f7387 bin\tahoe can't be run directly on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood d5aff8eb76 Strip any long path marker in the input to flush_volume.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 1a902da796 Correct type for Windows BOOL.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 492848c1ee Depend on Twisted >= 15.2.0 and (finally!) retire the setup_requires hack.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood cf56347b95 Disable precondition that autoAdd == recursive.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 53820cbd9d Fix a type error.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 92468ce095 Flush handling WIP.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood d390257e9b Use fileutil.write for magic folder tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood a8d912827a Improve all of the Windows-specific error reporting.
Also make the Windows function declarations more readable and consistent.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 7bf8d7c40d replace_file should allow the replaced file not to exist on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 4afafc97f3 Fix fileutil tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood c7c1cf439a More path fixes.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 853a380c10 Fix a test broken by the last commit.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood d0f22a008a Don't include [magic_folder]enabled and local.directory fields by default.
Add a comment reminding to do the field modification properly.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood fb5e75fd5e Don't use a long path for the [magic_folder]local.directory field.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:37 +00:00
Daira Hopwood 8ea3cc3d9e Fix some path Unixisms.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 45633fe418 Add long_path=False option to abspath_expanduser_unicode.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 039ebfb3be Fix test_alice_bob.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 0e6718d70f Refactor _check_up/downloader_count.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood bf8c005c4a Don't download the deletion marker file unnecessarily.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 2084fec5d3 Distinguish deletion of directories.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 1693df30d5 Rename deleted files to .backup rather than unlinking them.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
David Stainton 1035ab456e Add file conflict unit test 2016-02-05 21:59:36 +00:00
David Stainton 68db46d6c3 Add basic bob upload test and fix conflict detect 2016-02-05 21:59:36 +00:00
David Stainton 7e01a7a9c2 Fix bob's uploading test... 2016-02-05 21:59:36 +00:00
David Stainton 36ddf66371 Attempt to teach bob to upload a file 2016-02-05 21:59:36 +00:00
David Stainton a32dced99b Count conflicted objects 2016-02-05 21:59:36 +00:00
Daira Hopwood 9e900e72b8 Basic remote conflict detection based on ancestor uri
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood ae5b3132e3 magic-folder.rst: remove "Known Issues and Limitations" that have been fixed.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 6facf8d1f1 magic-folder.rst: update introduction.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood c89ac63323 Avoid .format, since it is inconsistent between Python 2.6 and 2.7 (and the rest of Tahoe-LAFS doesn't use it).
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood ac8ba601ba Fix test_alice_bob.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood bf5abcaba1 Add counter for uploader.objects_not_uploaded.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 00485d66bb Advance Bob's clock after notifying.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood f1b6db66d4 test_alice_bob: use magic= argument to notify, rather than self.magicfolder.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
meejah 994bfd8142 add excluded check 2016-02-05 21:59:36 +00:00
meejah a4a64daf2f add the 'spurious' notifies 2016-02-05 21:59:36 +00:00
Daira Hopwood c8974e3274 Fix a pyflakes warning and check existence of file in Bob's local dir.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
meejah 2f3fda16b0 implement 'delete' functionality, with tests 2016-02-05 21:59:36 +00:00
meejah dc1799b589 smoketest for magic-folder functionality 2016-02-05 21:59:36 +00:00
Daira Hopwood 582b6333fe Add test that we don't write files outside the magic folder directory. refs ticket:2506
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood f11ba5d878 Fix infinite loop in should_ignore_path for absolute paths.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood c24d92937e More debug logging.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood d128f3af74 Unicode fix for do_join.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 94f4e7e0ab Minor cleanups to tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 098781dca0 Ensure that errors from Alice-and-Bob tests are reported correctly if setup fails.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 0d35d8e1ff Cosmetics.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 1b003b2854 Eliminate duplicate parsing of invite code.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 41be2aab6f Remaining test fixes.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood dd730c808e Make sure that do_cli is only called with strs, and avoid unnecessary use of attributes in tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 7c886ec1b7 Cosmetics.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood b01cf6770f URIs are strs.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood a6bb11e8d4 Aliases and nicknames are Unicode.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 074723a9ea Fix call to argv_to_abspath. Also rename localdir to local_dir.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
David Stainton 22ca47417a Fix magic-folder cli tests
convert path to abs path when matching
strings in the generated config file.
2016-02-05 21:59:36 +00:00
David Stainton 7c888e5d97 Attempt to fix cli tests 2016-02-05 21:59:36 +00:00
Daira Hopwood 58bf5798ca Better but still broken tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
Daira Hopwood 9ed6ac9cba Fix check for initial '-' in argv_to_abspath.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:36 +00:00
David Stainton 7a51eba7a2 Fix tests by submitting unicode args instead of str 2016-02-05 21:59:35 +00:00
David Stainton 7025dd6a26 Teach magic-folder join to use argv_to_abspath
- also we modify argv_to_abspath to throw a usage error
if the name starts with a '-'

- add a test
currently the tests fail
2016-02-05 21:59:35 +00:00
David Stainton f28a161f11 Use argv_to_abspath for magic-folder join file path arg 2016-02-05 21:59:35 +00:00
Daira Hopwood 9cc2539cbf Test creation of a subdirectory.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood ec056c6dee Watch for IN_CREATE events but filter them out for non-directories.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 08bbf06be4 Patch Downloader.REMOTE_SCAN_INTERVAL rather than setting it persistently.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 23e83273a0 Implement creating local directories in downloader.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood bb4dd732f8 Decode names in the scanned remote.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood d94e8a6083 Refactoring to allow logging from _write_downloaded_file and _rename_conflicted_file.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 81b3f2b19a Simplify and fix non-existent-file handling.
Also make the existent and non-existent cases as similar as possible,
with a view to merging them.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood c5522f9224 Logging/debugging improvements.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 9f7469c49b Refactor and fix race conditions in test_alice_bob.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 9aaa33c07f Correct a call to did_upload_version in the downloader.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 90fb85ada2 Make sure that test_move_tree waits until files have been uploaded as well as downloaded.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood ce121d7863 Restore a call to increment files_uploaded that was mistakenly removed.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
David Stainton a404e7104c Teach uploader+downloader to use the db schema
here we attempt to fix all the unit tests as well...
however two tests still fail
2016-02-05 21:59:35 +00:00
Daira Hopwood 19c4681b85 Add magicfolderdb.py.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
David Stainton f438bda0ec Remove magic-folder db code from backupdb.py 2016-02-05 21:59:35 +00:00
David Stainton 31ad6e86df WIP 2016-02-05 21:59:35 +00:00
David Stainton 2057c77ed2 Include brief summary of magic-folder CLI commands 2016-02-05 21:59:35 +00:00
David Stainton 10b461563d Add link to our cli design doc 2016-02-05 21:59:35 +00:00
David Stainton ba4e869ba5 Mention gc is not part of the otf grant and link to the gc ticket 2016-02-05 21:59:35 +00:00
David Stainton 4f55f50961 Remove old obsolete/inaccurate statements 2016-02-05 21:59:35 +00:00
David Stainton 3504b1d3e1 Minor comment correction for get_all_relpaths 2016-02-05 21:59:35 +00:00
David Stainton 5873d31058 For all downloaded files ensure parent dir exists 2016-02-05 21:59:35 +00:00
Daira Hopwood 1f7244f5da Simplify the cleanup_Alice_and_Bob callback.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
meejah d020a66398 Make downloader delay a class-variable
This gives the integration-style CLI-based tests a chance
to set the delay to 0 before the first 3-second delayed
call is queued to _lazy_tail in the Downloader
2016-02-05 21:59:35 +00:00
meejah 93015086b0 Teach unit-tests to time-warp
1. Split alice/bob clocks to avoid race conditions
   in the tests
2. Wrap ._notify so we can advance the clock after inotify
   calls in the RealTest (since it takes >0ms to do the "real" notifies)
2016-02-05 21:59:35 +00:00
meejah 9e9c8f5a1a Fix call to ready() 2016-02-05 21:59:35 +00:00
Daira Hopwood fb41c55887 Correct a string-type error.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 7c377e08ca WIP.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 39bd7a1a4c Magic Folder file moves.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:35 +00:00
Daira Hopwood 1bb1c1ed42 Prepare to move drop_upload.py to magic_folder.py.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood 5e3570b171 Move backupdb.py to src/allmydata.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood 9614e13ea8 Rename upload_ready_d to connected_enough_d.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood 0e89e54b33 Documentation changes for Magic Folder.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood adf42f3b61 Teach StorageFarmBroker to fire a deferred when a connection threshold is reached. refs #1449
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood e7fde7770b Enable Windows inotify support.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood 288025fd03 New code for Windows drop-upload support. refs #1431
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood ef9e75f611 Docs for drop-upload on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood edc3b6fb6a Add magic folder db.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
Daira Hopwood be278e6eee Unicode path fixes for drop-upload.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2016-02-05 21:59:34 +00:00
56 changed files with 5713 additions and 702 deletions

.gitignore

@@ -26,7 +26,6 @@ zope.interface-*.egg
/tmp*
/*.patch
/dist/
/twisted/plugins/dropin.cache
/tahoe-deps/
/tahoe-deps.tar.gz
/.coverage
@@ -34,3 +33,4 @@ zope.interface-*.egg
/coverage-html/
/miscaptures.txt
/violations.txt
/smoke_magicfolder/

Makefile

@@ -85,6 +85,11 @@ _tmpfstest: make-version
sudo umount '$(TMPDIR)'
rmdir '$(TMPDIR)'
.PHONY: smoketest
smoketest:
-python ./src/allmydata/test/check_magicfolder_smoke.py kill
-rm -rf smoke_magicfolder/
python ./src/allmydata/test/check_magicfolder_smoke.py
# code coverage: install the "coverage" package from PyPI, do "make test-coverage" to
# do a unit test run with coverage-gathering enabled, then use "make coverage-output" to

docs/configuration.rst

@@ -68,7 +68,7 @@ Client/server nodes provide one or more of the following services:
* web-API service
* SFTP service
* FTP service
* drop-upload service
* Magic Folder service
* helper service
* storage service.
@@ -453,16 +453,16 @@ SFTP, FTP
instructions on configuring these services, and the ``[sftpd]`` and
``[ftpd]`` sections of ``tahoe.cfg``.
Drop-Upload
Magic Folder
As of Tahoe-LAFS v1.9.0, a node running on Linux can be configured to
automatically upload files that are created or changed in a specified
local directory. See drop-upload.rst_ for details.
A node running on Linux or Windows can be configured to automatically
upload files that are created or changed in a specified local directory.
See `magic-folder.rst`_ for details.
.. _download-status.rst: frontends/download-status.rst
.. _CLI.rst: frontends/CLI.rst
.. _FTP-and-SFTP.rst: frontends/FTP-and-SFTP.rst
.. _drop-upload.rst: frontends/drop-upload.rst
.. _magic-folder.rst: frontends/magic-folder.rst
Storage Server Configuration

docs/frontends/drop-upload.rst

@@ -1,158 +0,0 @@
.. -*- coding: utf-8-with-signature -*-
===============================
Tahoe-LAFS Drop-Upload Frontend
===============================
1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations`_
Introduction
============
The drop-upload frontend allows an upload to a Tahoe-LAFS grid to be triggered
automatically whenever a file is created or changed in a specific local
directory. This is a preview of a feature that we expect to support across
several platforms, but it currently works only on Linux.
The implementation was written as a prototype at the First International
Tahoe-LAFS Summit in June 2011, and is not currently in as mature a state as
the other frontends (web, CLI, SFTP and FTP). This means that you probably
should not keep important data in the upload directory, and should not rely
on all changes to files in the local directory to result in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured. There is even the possibility that it may be abandoned, for
example if unsolvable reliability issues are found.
We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.
Configuration
=============
The drop-upload frontend runs as part of a gateway node. To set it up, you
need to choose the local directory to monitor for file changes, and a mutable
directory on the grid to which files will be uploaded.
These settings are configured in the ``[drop_upload]`` section of the
gateway's ``tahoe.cfg`` file.
``[drop_upload]``
``enabled = (boolean, optional)``
If this is ``True``, drop-upload will be enabled. The default value is
``False``.
``local.directory = (UTF-8 path)``
This specifies the local directory to be monitored for new or changed
files. If the path contains non-ASCII characters, it should be encoded
in UTF-8 regardless of the system's filesystem encoding. Relative paths
will be interpreted starting from the node's base directory.
In addition, the file ``private/drop_upload_dircap`` must contain a
writecap pointing to an existing mutable directory to be used as the target
of uploads. It will start with ``URI:DIR2:``, and cannot include an alias
or path.
After setting the above fields and starting or restarting the gateway,
you can confirm that the feature is working by copying a file into the
local directory. Then, use the WUI or CLI to check that it has appeared
in the upload directory with the same filename. A large file may take some
time to appear, since it is only linked into the directory after the upload
has completed.
The 'Operational Statistics' page linked from the Welcome page shows
counts of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node log_ may be helpful to determine the cause of any failures.
.. _log: ../logging.rst
Known Issues and Limitations
============================
This frontend only works on Linux. There is an even-more-experimental
implementation for Windows (`#1431`_), and a ticket to add support for
Mac OS X and BSD-based systems (`#1432`_).
Subdirectories of the local directory are not monitored. If a subdirectory
is created, it will be ignored. (`#1433`_)
If files are created or changed in the local directory just after the gateway
has started, it might not have connected to a sufficient number of servers
when the upload is attempted, causing the upload to fail. (`#1449`_)
Files that were created or changed in the local directory while the gateway
was not running will not be uploaded. (`#1458`_)
The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.
The drop-upload frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to efficiently perform them in parallel.
A drop-upload can occur in parallel with an upload by a different frontend,
though. (`#1459`_)
If there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)
Some filesystems may not support the necessary change notifications.
So, it is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.
Attempts to read the mutable directory at about the same time as an uploaded
file is being linked into it, might fail, even if they are done through the
same gateway. (`#1105`_)
When a local file is changed and closed several times in quick succession,
it may be uploaded more times than necessary to keep the remote copy
up-to-date. (`#1440`_)
Files deleted from the local directory will not be unlinked from the upload
directory. (`#1710`_)
The ``private/drop_upload_dircap`` file cannot use an alias or path to
specify the upload directory. (`#1711`_)
Files are always uploaded as immutable. If there is an existing mutable file
of the same name in the upload directory, it will be unlinked and replaced
with an immutable file. (`#1712`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps to
it will remain valid. See `docs/garbage-collection.rst`_ for how to reclaim
the space used by files that are no longer needed.
Unicode names are supported, but the local name of a file must be encoded
correctly in order for it to be uploaded. The expected encoding is that
printed by ``python -c "import sys; print sys.getfilesystemencoding()"``.
.. _`#1105`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1105
.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1433`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1433
.. _`#1440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1440
.. _`#1449`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1449
.. _`#1458`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1458
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1710`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1710
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#1712`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1712
.. _docs/garbage-collection.rst: ../garbage-collection.rst

docs/frontends/magic-folder.rst

@@ -0,0 +1,149 @@
.. -*- coding: utf-8-with-signature -*-
================================
Tahoe-LAFS Magic Folder Frontend
================================
1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations`_
Introduction
============
The Magic Folder frontend synchronizes local directories on two or more
clients, using a Tahoe-LAFS grid for storage. Whenever a file is created
or changed under the local directory of one of the clients, the change is
propagated to the grid and then to the other clients.
The implementation of the "drop-upload" frontend, on which Magic Folder is
based, was written as a prototype at the First International Tahoe-LAFS
Summit in June 2011. In 2015, with the support of a grant from the
`Open Technology Fund`_, it was redesigned and extended to support
synchronization between clients. It currently works on Linux and Windows.
Magic Folder is not currently in as mature a state as the other frontends
(web, CLI, SFTP and FTP). This means that you probably should not rely on
all changes to files in the local directory to result in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured.
We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.
.. _`Open Technology Fund`: https://www.opentech.fund/
Configuration
=============
The Magic Folder frontend runs as part of a gateway node. To set it up, you
must use the ``tahoe magic-folder`` CLI. For detailed information, see our
`Magic-Folder CLI design documentation`_. For a given Magic-Folder collective
directory you need to run the ``tahoe magic-folder create`` command. After
that, the ``tahoe magic-folder invite`` command must be used to generate an
*invite code* for each member of the magic-folder collective. A confidential,
authenticated communications channel should be used to transmit the invite
code to each member, who will then join using the ``tahoe magic-folder join``
command.
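For example, the basic workflow for a two-member collective looks like this
(the node and folder paths are illustrative; ``docs/magic-folder-howto.rst``
walks through complete, tested commands)::

    tahoe -d ../grid/alice magic-folder create magic: alice ../local/alice
    tahoe -d ../grid/alice magic-folder invite magic: bob >invitecode
    tahoe -d ../grid/bob magic-folder join "`cat invitecode`" ../local/bob

Each gateway must then be restarted for the new configuration to take effect.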
These settings are persisted in the ``[magic_folder]`` section of the
gateway's ``tahoe.cfg`` file.
``[magic_folder]``
``enabled = (boolean, optional)``
If this is ``True``, Magic Folder will be enabled. The default value is
``False``.
``local.directory = (UTF-8 path)``
This specifies the local directory to be monitored for new or changed
files. If the path contains non-ASCII characters, it should be encoded
in UTF-8 regardless of the system's filesystem encoding. Relative paths
will be interpreted starting from the node's base directory.
You should not normally need to set these fields manually because they are
set by the ``tahoe magic-folder create`` and/or ``tahoe magic-folder join``
commands. Use the ``--help`` option to these commands for more information.
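For illustration only (this section is written for you by the commands above;
the path shown is an example), the resulting section might look like::

    [magic_folder]
    enabled = True
    local.directory = /home/alice/magic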
After setting up a Magic Folder collective and starting or restarting each
gateway, you can confirm that the feature is working by copying a file into
any local directory, and checking that it appears on other clients.
Large files may take some time to appear.
The 'Operational Statistics' page linked from the Welcome page shows
counts of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node log_ may be helpful to determine the cause of any failures.
.. _log: ../logging.rst
Known Issues and Limitations
============================
This feature only works on Linux and Windows. There is a ticket to add
support for Mac OS X and BSD-based systems (`#1432`_).
The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.
The Magic Folder frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to efficiently perform them in parallel.
A Magic Folder upload can occur in parallel with an upload by a different
frontend, though. (`#1459`_)
On Linux, if there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)
The Windows implementation might also occasionally miss file creation or
change events, due to limitations of the underlying Windows API
(ReadDirectoryChangesW). We do not know how likely or unlikely this is.
(`#1431`_)
Some filesystems may not support the necessary change notifications.
So, it is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.
The ``private/magic_folder_dircap`` and ``private/collective_dircap`` files
cannot use an alias or path to specify the upload directory. (`#1711`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps
to it will remain valid. Eventually it will be possible to use
`garbage collection`_ to reclaim the space used by these files; however
currently they are retained indefinitely. (`#2440`_)
Unicode filenames are supported on both Linux and Windows, but on Linux, the
local name of a file must be encoded correctly in order for it to be uploaded.
The expected encoding is that printed by
``python -c "import sys; print sys.getfilesystemencoding()"``.
On Windows, local directories with non-ASCII names are not currently working.
(`#2219`_)
On Windows, when a node has Magic Folder enabled, it is unresponsive to Ctrl-C
(it can only be killed using Task Manager or similar). (`#2218`_)
.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#2218`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2218
.. _`#2219`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2219
.. _`#2440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2440
.. _`garbage collection`: ../garbage-collection.rst
.. _`Magic-Folder CLI design documentation`: ../proposed/magic-folder/user-interface-design.rst

docs/magic-folder-howto.rst

@@ -0,0 +1,231 @@
=========================
Magic Folder Set-up Howto
=========================
1. `This document`_
2. `Preparation`_
3. `Setting up a local test grid`_
4. `Setting up Magic Folder`_
5. `Testing`_
This document
=============
This is preliminary documentation of how to set up the
Magic Folder pre-release using a test grid on a single Linux
or Windows machine, with two clients and one server. It is
aimed at a fairly technical audience.
For an introduction to Magic Folder and how to configure it
more generally, see `docs/frontends/magic-folder.rst`_.
It is possible to adapt these instructions to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).
.. _`docs/frontends/magic-folder.rst`: ../docs/frontends/magic-folder.rst
Preparation
===========
Linux
-----
Install ``git`` from your distribution's package manager.
Then run these commands::
git clone -b 2438.magic-folder-stable.8 https://github.com/tahoe-lafs/tahoe-lafs.git
cd tahoe-lafs
python setup.py test
The test suite usually takes about 15 minutes to run.
Note that it is normal for some tests to be skipped.
In the current branch, the Magic Folder tests produce
considerable debugging output.
If you see an error like ``fatal error: Python.h: No such file or directory``
while compiling the dependencies, you need the Python development headers. If
you are on a Debian or Ubuntu system, you can install them with ``sudo
apt-get install python-dev``. On RedHat/Fedora, install ``python-devel``.
Windows
-------
Windows 7 or above is required.
For 64-bit Windows:
* Install Python 2.7 from
https://www.python.org/ftp/python/2.7/python-2.7.amd64.msi
* Install pywin32 from
https://tahoe-lafs.org/source/tahoe-lafs/deps/windows/pywin32-219.win-amd64-py2.7.exe
* Install git from
https://github.com/git-for-windows/git/releases/download/v2.6.2.windows.1/Git-2.6.2-64-bit.exe
For 32-bit Windows:
* Install Python 2.7 from
https://www.python.org/ftp/python/2.7/python-2.7.msi
* Install pywin32 from
https://tahoe-lafs.org/source/tahoe-lafs/deps/windows/pywin32-219.win32-py2.7.exe
* Install git from
https://github.com/git-for-windows/git/releases/download/v2.6.2.windows.1/Git-2.6.2-32-bit.exe
Then (for any version) run these commands in a Command Prompt::
git clone -b 2438.magic-folder-stable.5 https://github.com/tahoe-lafs/tahoe-lafs.git
cd tahoe-lafs
python setup.py build
Open a new Command Prompt with the same current directory,
then run::
bin\tahoe --version-and-path
It is normal for this command to print warnings and debugging output
on some systems. ``python setup.py test`` can also be run, but there
are some known sources of nondeterministic errors in tests on Windows
that are unrelated to Magic Folder.
Setting up a local test grid
============================
Linux
-----
Run these commands::
mkdir ../grid
bin/tahoe create-introducer ../grid/introducer
bin/tahoe start ../grid/introducer
export FURL=`cat ../grid/introducer/private/introducer.furl`
bin/tahoe create-node --introducer="$FURL" ../grid/server
bin/tahoe create-client --introducer="$FURL" ../grid/alice
bin/tahoe create-client --introducer="$FURL" ../grid/bob
Windows
-------
Run::
mkdir ..\grid
bin\tahoe create-introducer ..\grid\introducer
bin\tahoe start ..\grid\introducer
Leave the introducer running in that Command Prompt,
and in a separate Command Prompt (with the same current
directory), run::
set /p FURL=<..\grid\introducer\private\introducer.furl
bin\tahoe create-node --introducer=%FURL% ..\grid\server
bin\tahoe create-client --introducer=%FURL% ..\grid\alice
bin\tahoe create-client --introducer=%FURL% ..\grid\bob
Both Linux and Windows
----------------------
(Replace ``/`` with ``\`` for Windows paths.)
Edit ``../grid/alice/tahoe.cfg``, and make the following
changes to the ``[node]`` and ``[client]`` sections::
[node]
nickname = alice
web.port = tcp:3457:interface=127.0.0.1
[client]
shares.needed = 1
shares.happy = 1
shares.total = 1
Edit ``../grid/bob/tahoe.cfg``, and make the following
change to the ``[node]`` section, and the same change as
above to the ``[client]`` section::
[node]
nickname = bob
web.port = tcp:3458:interface=127.0.0.1
Note that when running nodes on a single machine,
unique port numbers must be used for each node (and they
must not clash with ports used by other server software).
Here we have used the default of 3456 for the server,
3457 for alice, and 3458 for bob.
Now start all of the nodes (the introducer should still be
running from above)::
bin/tahoe start ../grid/server
bin/tahoe start ../grid/alice
bin/tahoe start ../grid/bob
On Windows, a separate Command Prompt is needed to run each
node.
Open a web browser on http://127.0.0.1:3457/ and verify that
alice is connected to the introducer and one storage server.
Then do the same for http://127.0.0.1:3458/ to verify that
bob is connected. Leave all of the nodes running for the
next stage.
Setting up Magic Folder
=======================
Linux
-----
Run::
mkdir -p ../local/alice ../local/bob
bin/tahoe -d ../grid/alice magic-folder create magic: alice ../local/alice
bin/tahoe -d ../grid/alice magic-folder invite magic: bob >invitecode
export INVITECODE=`cat invitecode`
bin/tahoe -d ../grid/bob magic-folder join "$INVITECODE" ../local/bob
bin/tahoe restart ../grid/alice
bin/tahoe restart ../grid/bob
Windows
-------
Run::
mkdir ..\local\alice ..\local\bob
bin\tahoe -d ..\grid\alice magic-folder create magic: alice ..\local\alice
bin\tahoe -d ..\grid\alice magic-folder invite magic: bob >invitecode
set /p INVITECODE=<invitecode
bin\tahoe -d ..\grid\bob magic-folder join %INVITECODE% ..\local\bob
Then close the Command Prompt windows that are running the alice and bob
nodes, and open two new ones in which to run::
bin\tahoe start ..\grid\alice
bin\tahoe start ..\grid\bob
Testing
=======
You can now experiment with creating files and directories in
``../local/alice`` and ``../local/bob``; any changes should be
propagated to the other directory.
Note that when a file is deleted, the corresponding file in the
other directory will be renamed to a filename ending in ``.backup``.
Deleting a directory will have no effect.
For other known issues and limitations, see
https://github.com/tahoe-lafs/tahoe-lafs/blob/2438.magic-folder-stable.8/docs/frontends/magic-folder.rst#known-issues-and-limitations
As mentioned earlier, it is also possible to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).

docs/proposed/magic-folder/multi-party-conflict-detection.rst

@@ -0,0 +1,375 @@
Multi-party Conflict Detection
==============================
The current Magic-Folder remote conflict detection design does not properly detect remote conflicts
for groups of three or more parties. This design is specified in the "Fire Dragon" section of this document:
https://github.com/tahoe-lafs/tahoe-lafs/blob/2551.wip.2/docs/proposed/magic-folder/remote-to-local-sync.rst#fire-dragons-distinguishing-conflicts-from-overwrites
A `ticket comment`_ on the Tahoe-LAFS trac outlines a scenario with
three parties in which a remote conflict is falsely detected.

.. _`ticket comment`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2551#comment:22
Summary and definitions
=======================
Abstract file: a file being shared by a Magic Folder.
Local file: a file in a client's local filesystem corresponding to an abstract file.
Relative path: the path of an abstract or local file relative to the Magic Folder root.
Version: a snapshot of an abstract file, with associated metadata, that is uploaded by a Magic Folder client.
A version is associated with the file's relative path, its contents, and
mtime and ctime timestamps. Versions also have a unique identity.
Follows relation:
* If a change to a client's local file at relative path F, made when the client
already had version V of that file, results in an upload of version V', then
(and only then) we say that V' directly follows V.
* The follows relation is the irreflexive transitive closure of the "directly follows" relation.
The follows relation is transitive and acyclic, and therefore defines a DAG called the
Version DAG. Different abstract files correspond to disconnected sets of nodes in the Version DAG
(in other words there are no "follows" relations between different files).
The DAG is only ever extended, not mutated.
The desired behaviour for initially classifying overwrites and conflicts is as follows:
* if a client Bob currently has version V of a file at relative path F, and it sees a new version V'
of that file in another client Alice's DMD, such that V' follows V, then the write of the new version
is initially an overwrite and should be to the same filename.
* if, in the same situation, V' does not follow V, then the write of the new version should be
classified as a conflict.
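The following is a minimal sketch, in Python, of how this initial
classification could be computed, assuming each client keeps a local map from
a version's identity to the identities of the versions it directly follows
(the map and function names here are illustrative, not part of the design)::

    def follows(new, ours, parents):
        # True iff `new` follows `ours`: walk the "directly follows"
        # (parent) links backwards from `new`, looking for `ours`.
        # The relation is irreflexive, so we start from new's parents.
        seen = set()
        stack = list(parents.get(new, ()))
        while stack:
            v = stack.pop()
            if v == ours:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(parents.get(v, ()))
        return False

    def classify(new, ours, parents):
        # Initial classification only; reclassification of an overwrite as
        # a conflict is defined by the remote-to-local sync design below.
        return "overwrite" if follows(new, ours, parents) else "conflict"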
The existing `Magic Folder design for remote-to-local sync`_ defines when an initial overwrite
should be reclassified as a conflict.
The above definitions completely specify the desired solution of the false
conflict behaviour described in the `ticket comment`_. However, they do not give
a concrete algorithm to compute the follows relation, or a representation in the
Tahoe-LAFS file store of the metadata needed to compute it.
We will consider two alternative designs, proposed by Leif Ryge and
Zooko Wilcox-O'Hearn, that aim to fill this gap.
.. _`Magic Folder design for remote-to-local sync`: remote-to-local-sync.rst
Leif's Proposal: Magic-Folder "single-file" snapshot design
===========================================================
Abstract
--------
We propose a relatively simple modification to the initial Magic Folder design which
adds merkle DAGs of immutable historical snapshots for each file. The full history
does not necessarily need to be retained, and the choice of how much history to retain
can potentially be made on a per-file basis.
Motivation:
-----------
no SPOFs, no admins
```````````````````
The initial design also had two cases of excess authority:
1. The magic folder administrator (inviter) has everyone's write-caps and is thus essentially "root"
2. Each client shares ambient authority and can delete anything or everything and
(assuming there is not a conflict) the data will be deleted from all clients. So, each client
is effectively "root" too.
Thus, while it is useful for file synchronization, the initial design is a much less
safe place to store data than a single mutable Tahoe directory (because more client
computers are able to delete it).
Glossary
--------
- merkle DAG: like a merkle tree but with multiple roots, and with each node potentially having multiple parents
- magic folder: a logical directory that can be synchronized between many clients
(devices, users, ...) using a Tahoe-LAFS storage grid
- client: a Magic-Folder-enabled Tahoe-LAFS client instance that has access to a magic folder
- DMD: "distributed mutable directory", a physical Tahoe-LAFS mutable directory.
Each client has the write cap to their own DMD, and read caps to all other clients' DMDs
(as in the original Magic Folder design).
- snapshot: a reference to a version of a file; represented as an immutable directory containing
an entry called "content" (pointing to the immutable file containing the file's contents),
and an entry called "parent0" (pointing to a parent snapshot), and optionally parent1 through
parentN pointing at other parents. The Magic Folder snapshot object is conceptually very similar
to a git commit object, except that it is created automatically and it records the history of an
individual file rather than an entire repository. Also, commits do not need to have authors
(although an author field could be easily added later).
- deletion snapshot: immutable directory containing no content entry (only one or more parents)
- capability: a Tahoe-LAFS diminishable cryptographic capability
- cap: short for capability
- conflict: the situation when another client's current snapshot for a file is different from our current snapshot, and is not a descendant of ours.
- overwrite: the situation when another client's current snapshot for a file is a (not necessarily direct) descendant of our current snapshot.
Overview
--------
This new design will track the history of each file using "snapshots" which are
created at each upload. Each snapshot will specify one or more parent snapshots,
forming a directed acyclic graph. A Magic-Folder user's DMD uses a flattened directory
hierarchy naming scheme, as in the original design. However, instead of pointing directly
at file contents, each file name will link to that user's latest snapshot for that file.
Inside the DMD there will also be an immutable directory containing the client's subscriptions
(read-caps to other clients' DMDs).
Clients periodically poll each other's DMDs. When a client sees that the current snapshot for a file is
different from its own current snapshot for that file, it immediately begins downloading the new
contents and then walks backwards through the DAG from the new snapshot until it finds its own
snapshot or a common ancestor.
For the common ancestor search to be efficient, the client will need to keep a local store (in the magic folder db) of all of the snapshots
(but not their contents) between the oldest current snapshot of any of their subscriptions and their own current snapshot.
See "local cache purging policy" below for more details.
If the new snapshot is a descendant of the client's existing snapshot, then this update
is an "overwrite" - like a git fast-forward. So, when the download of the new file completes, the client
can overwrite the existing local file with the new contents and update its DMD to point at the new snapshot.
If the new snapshot is not a descendant of the client's current snapshot, then the update is a
conflict. The new file is downloaded and named $filename.conflict-$user1,$user2 (the suffix lists
the other subscriptions that have that version as their current version).
Changes to the local .conflict- file are not tracked. When that file disappears
(either by deletion, or being renamed) a new snapshot for the conflicting file is
created which has two parents - the client's snapshot prior to the conflict, and the
new conflicting snapshot. If multiple .conflict files are deleted or renamed in a short
period of time, a single conflict-resolving snapshot with more than two parents can be created.
! I think this behavior will confuse users.
Tahoe-LAFS snapshot objects
---------------------------
These Tahoe-LAFS snapshot objects only track the history of a single file, not a directory hierarchy.
Snapshot objects contain only two field types:
- ``Content``: an immutable capability of the file contents (omitted if deletion snapshot)
- ``Parent0..N``: immutable capabilities representing parent snapshots
An interesting side effect of this design is that there is no snapshot author:
the only notion of an identity in the Magic-Folder system is the write capability of the user's DMD.
The snapshot object is an immutable directory which looks like this:
content -> immutable cap to file content
parent0 -> immutable cap to a parent snapshot object
parent1..N -> more parent snapshots
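As a hedged sketch, publishing such a snapshot might look like the following; the children-dict shape and the ``create_immutable_dirnode`` call are assumptions modelled on how Tahoe-LAFS builds immutable directories elsewhere (e.g. for backups)::

    def make_snapshot(client, content_cap, parent_caps):
        # A deletion snapshot simply omits the "content" entry.
        children = {}
        if content_cap is not None:
            children[u"content"] = (content_cap, {})
        for i, parent_cap in enumerate(parent_caps):
            children[u"parent%d" % (i,)] = (parent_cap, {})
        # Returns a Deferred firing with the new immutable dirnode;
        # its cap becomes the DMD entry for this file.
        return client.create_immutable_dirnode(children)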
Snapshot Author Identity
------------------------
Snapshot identity might become an important feature so that bad actors
can be recognized and other clients can stop "subscribing" to (polling for) updates from them.
Perhaps snapshots could be signed by the user's Magic-Folder write key for this purpose? It is probably a bad idea to reuse the write-cap key for this; better to introduce ed25519 identity keys which can (optionally) sign snapshot contents, storing the signature as another member of the immutable directory.
Conflict Resolution
-------------------
detection of conflicts
``````````````````````
A Magic-Folder client updates a given file's current snapshot link to a snapshot which is a descendant
of the previous snapshot. For a given file, say "file1", Alice can detect that Bob's DMD has a "file1"
that links to a snapshot which conflicts. Two snapshots conflict if neither is an ancestor of the other.
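Stated as code, with ``is_ancestor(a, b)`` (true iff ``a`` is an ancestor of, or equal to, ``b``) implemented by the backwards DAG walk sketched earlier::

    def snapshots_conflict(snap_a, snap_b, is_ancestor):
        # Identical snapshots and fast-forwards (in either direction)
        # are not conflicts; any other pair of snapshots is.
        if snap_a == snap_b:
            return False
        return (not is_ancestor(snap_a, snap_b) and
                not is_ancestor(snap_b, snap_a))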
a possible UI for resolving conflicts
`````````````````````````````````````
If Alice links a conflicting snapshot object for a file named "file1",
Bob and Carole will see a file in their Magic-Folder called "file1.conflicted.Alice".
Alice conversely will see an additional file called "file1.conflicted.previous".
If Alice wishes to resolve the conflict with her new version of the file then
she simply deletes the file called "file1.conflicted.previous". If she wants to
choose the other version then she moves it into place:
mv file1.conflicted.previous file1
This scheme works for any number of conflicts. Bob, for instance, could choose
the same resolution for the conflict, like this:
mv file1.conflicted.Alice file1
Deletion propagation and eventual Garbage Collection
----------------------------------------------------
When a user deletes a file, this is represented by a link from their DMD file
object to a deletion snapshot. Eventually all users will link this deletion
snapshot into their DMD. When all users have the link then they locally cache
the deletion snapshot and remove the link to that file in their DMD.
Deletions can of course be undone; this means creating a new snapshot
object that declares itself a descendant of the deletion snapshot.
Clients periodically renew leases to all capabilities recursively linked
to in their DMD. Files which are unlinked by ALL the users of a
given Magic-Folder will eventually be garbage collected.
Lease expiry duration must be tuned properly by storage servers so that
garbage collection does not occur too frequently.
Performance Considerations
--------------------------
local changes
`````````````
Our old scheme requires two remote Tahoe-LAFS operations per local file modification:
1. upload new file contents (as an immutable file)
2. modify mutable directory (DMD) to link to the immutable file cap
Our new scheme requires three remote operations:
1. upload new file contents (as an immutable file)
2. upload immutable directory representing Tahoe-LAFS snapshot object
3. modify mutable directory (DMD) to link to the immutable snapshot object
remote changes
``````````````
Our old scheme requires one remote Tahoe-LAFS operation per remote file modification (not counting the polling of the DMD):
1. Download new file content
Our new scheme requires a minimum of two remote operations (not counting the polling of the DMD) for conflicting downloads, or three remote operations for overwrite downloads:
1. Download new snapshot object
2. Download the content it points to
3. If the download is an overwrite, modify the DMD to indicate that the downloaded version is their current version.
If the new snapshot is neither a direct descendant of our current snapshot nor of the other party's
previous snapshot that we saw, we will also need to download more snapshots to determine whether it is a
conflict or an overwrite. However, those downloads can be done in parallel with the content download, since we
will need to download the content in either case.
While the old scheme is obviously more efficient, we think that the properties provided by the new scheme make it worth the additional cost.
Physical updates to the DMD obviously need to be serialized, so multiple logical updates should be combined when an update is already in progress.
conflict detection and local caching
````````````````````````````````````
Local caching of snapshots is important for performance.
We refer to the client's local snapshot cache as the ``magic-folder db``.
Conflict detection can be expensive because it may require the client
to download many snapshots from the other user's DMD in order to find
its own current snapshot or a descendant of it. The cost of scanning
the remote DMDs should not be very high unless the client conducting the
scan has a lot of history to download because it was offline for a long
time while many new snapshots were distributed.
local cache purging policy
``````````````````````````
The client's current snapshot for each file should be cached at all times.
When all clients' views of a file are synchronized (they all have the same
snapshot for that file), no ancestry for that file needs to be cached.
When clients' views of a file are *not* synchronized, the most recent
common ancestor of all clients' snapshots must be kept cached, as must
all intermediate snapshots.
Local Merge Property
--------------------
Bob can, in fact, set a pre-existing directory (with files) as his new Magic-Folder directory, resulting
in a merge of the Magic-Folder with Bob's local directory. Filename collisions will result in conflicts
because Bob's new snapshots are not descendants of the existing Magic-Folder file snapshots.
Example: simultaneous update with four parties:
1. A, B, C, D are in sync for file "foo" at snapshot X
2. A and B simultaneously change the file, creating snapshots XA and XB (both descendants of X).
3. C hears about XA first, and D hears about XB first. Both accept an overwrite.
4. All four parties hear about the other update they hadn't heard about yet.
5. Result:
- everyone's local file "foo" has the content pointed to by the snapshot in their DMD's "foo" entry
- A and C's DMDs each have the "foo" entry pointing at snapshot XA
- B and D's DMDs each have the "foo" entry pointing at snapshot XB
- A and C have a local file called foo.conflict-B,D with XB's content
- B and D have a local file called foo.conflict-A,C with XA's content
Later:
- Everyone ignores the conflict and continues updating their local "foo", but slowly enough that there are no further conflicts, so that A and C remain in sync with each other, and B and D remain in sync with each other.
- A and C's foo.conflict-B,D file continues to be updated with the latest version of the file B and D are working on, and vice-versa.
- A and C edit the file at the same time again, causing a new conflict.
- Local files are now:
A: "foo", "foo.conflict-B,D", "foo.conflict-C"
C: "foo", "foo.conflict-B,D", "foo.conflict-A"
B and D: "foo", "foo.conflict-A", "foo.conflict-C"
- Finally, D decides to look at "foo.conflict-A" and "foo.conflict-C", and they manually integrate (or decide to ignore) the differences into their own local file "foo".
- D deletes their conflict files.
- D's DMD now points to a snapshot that is a descendant of everyone else's current snapshot, resolving all conflicts.
- The conflict files on A, B, and C disappear, and everyone's local file "foo" contains D's manually-merged content.
Daira: I think it is too complicated to include multiple nicknames in the .conflict files
(e.g. "foo.conflict-B,D"). It should be sufficient to have one file for each other client,
reflecting that client's latest version, regardless of who else it conflicts with.
Zooko's Design (as interpreted by Daira)
========================================
A version map is a mapping from client nickname to version number.
Definition: a version map M' strictly-follows a mapping M iff for every entry c->v
in M, there is an entry c->v' in M' such that v' > v.
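Representing a version map as a plain dict from nickname to version number, the strictly-follows test is a one-liner::

    def strictly_follows(m_prime, m):
        # M' strictly-follows M iff every entry c->v in M has an
        # entry c->v' in M' with v' > v.
        return all(c in m_prime and m_prime[c] > v
                   for (c, v) in m.items())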
Each client maintains a 'local version map' and a 'conflict version map' for each file
in its magic folder db.
If it has never written the file, then the entry for its own nickname in the local version
map is zero. The conflict version map only contains entries for nicknames B where
"$FILENAME.conflict-$B" exists.
When a client A uploads a file, it increments the version for its own nickname in its
local version map for the file, and includes that map as metadata with its upload.
A download by client A from client B is an overwrite iff the downloaded version map
strictly-follows A's local version map for that file; in this case A replaces its local
version map with the downloaded version map. Otherwise it is a conflict, and the
download is put into "$FILENAME.conflict-$B"; in this case A's
local version map remains unchanged, and the entry B->v taken from the downloaded
version map is added to its conflict version map.
If client A deletes or renames a conflict file "$FILENAME.conflict-$B", then A copies
the entry for B from its conflict version map to its local version map, deletes
the entry for B in its conflict version map, and performs another upload (with
incremented version number) of $FILENAME.
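A compact sketch of this bookkeeping, using the ``strictly_follows`` helper above; ``write_file``, ``write_conflict_file``, ``upload_new_version``, and the per-file map attributes on client ``A`` are hypothetical stand-ins::

    def handle_download(A, filename, B, downloaded_map, contents):
        local_map = A.local_maps[filename]
        if strictly_follows(downloaded_map, local_map):
            write_file(filename, contents)              # overwrite
            A.local_maps[filename] = dict(downloaded_map)
        else:                                           # conflict
            write_conflict_file(filename, B, contents)  # $FILENAME.conflict-$B
            A.conflict_maps[filename][B] = downloaded_map[B]

    def resolve_conflict(A, filename, B):
        # Called when A deletes or renames "$FILENAME.conflict-$B".
        A.local_maps[filename][B] = A.conflict_maps[filename].pop(B)
        A.local_maps[filename][A.nickname] += 1
        upload_new_version(filename, A.local_maps[filename])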
Example:
A, B, C = (10, 20, 30); everyone agrees.
A updates: (11, 20, 30)
B updates: (10, 21, 30)
C will see either A or B first. Both would be an overwrite, if considered alone.

View File

@@ -29,6 +29,19 @@ install_requires = [
# zope.interface 3.6.3 and 3.6.4 are incompatible with Nevow (#1435).
"zope.interface >= 3.6.0, != 3.6.3, != 3.6.4",
# * We need Twisted 10.1.0 for the FTP frontend in order for
# Twisted's FTP server to support asynchronous close.
# * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
# rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
# * The FTP frontend depends on Twisted >= 11.1.0 for
# filepath.Permissions
# * Nevow 0.11.1 depends on Twisted >= 13.0.0.
# * The Magic Folder frontend depends on Twisted >= 15.2.0.
"Twisted >= 15.2.0",
# Nevow 0.11.1 can be installed using pip (#2032).
"Nevow >= 0.11.1",
# * foolscap < 0.5.1 had a performance bug which spent O(N**2) CPU for
# transferring large mutable files of size N.
# * foolscap < 0.6 is incompatible with Twisted 10.2.0.
@@ -53,6 +66,9 @@ install_requires = [
"pyasn1-modules >= 0.0.5", # service-identity depends on this
]
# We no longer have any setup dependencies.
setup_requires = []
# Includes some indirect dependencies, but does not include allmydata.
# These are in the order they should be listed by --version, etc.
package_imports = [
@@ -101,63 +117,6 @@ if not hasattr(sys, 'frozen'):
package_imports.append(('setuptools', 'setuptools'))
# * On Linux we need at least Twisted 10.1.0 for inotify support
# used by the drop-upload frontend.
# * We also need Twisted 10.1.0 for the FTP frontend in order for
# Twisted's FTP server to support asynchronous close.
# * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
# rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
# * The FTP frontend depends on Twisted >= 11.1.0 for
# filepath.Permissions
#
# On Windows, Twisted >= 12.2.0 has a dependency on pywin32.
# Since pywin32 can only be installed manually, we fall back to
# requiring earlier versions of Twisted and Nevow if it is not
# already installed.
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2028>
#
# When the fallback is used we also need to work around the fact
# that Nevow imports itself when building, which causes Twisted
# and zope.interface to be imported; therefore, we need to set
# setup_requires to make sure that the versions of Twisted and
# zope.interface used at build time satisfy Nevow's requirements.
#
# In cases where this fallback isn't needed, we prefer Nevow >= 0.11.1
# which can be installed using pip, and Twisted >= 13.0.0 which
# Nevow 0.11.1 depends on. In this case we should *not* use the
# setup_requires hack, because if we do then the build will break
# when Twisted < 13.0.0 is already installed (even though it could
# have succeeded by building a later version under support/ ).
#
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2032>
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2249>
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2291>
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2286>
setup_requires = []
_use_old_Twisted_and_Nevow = False
if sys.platform == "win32":
try:
import win32api
[win32api]
except ImportError:
_use_old_Twisted_and_Nevow = True
if _use_old_Twisted_and_Nevow:
install_requires += [
"Twisted >= 11.1.0, <= 12.1.0",
"Nevow >= 0.9.33, <= 0.10",
]
setup_requires += [req for req in install_requires if req.startswith('Twisted')
or req.startswith('zope.interface')]
else:
install_requires += [
"Twisted >= 13.0.0",
"Nevow >= 0.11.1",
]
# * pyOpenSSL is required in order for foolscap to provide secure connections.
# Since foolscap doesn't reliably declare this dependency in a machine-readable
# way, we need to declare a dependency on pyOpenSSL ourselves. Tahoe-LAFS does

View File

@@ -130,7 +130,7 @@ class ProhibitedNode:
def get_best_readable_version(self):
raise FileProhibited(self.reason)
def download_best_version(self):
def download_best_version(self, progress=None):
raise FileProhibited(self.reason)
def get_best_mutable_version(self):

View File

@@ -1,5 +1,6 @@
import os, stat, time, weakref
from allmydata import node
from base64 import urlsafe_b64encode
from zope.interface import implements
from twisted.internet import reactor, defer
@@ -129,7 +130,9 @@ class Client(node.Node, pollmixin.PollMixin):
}
def __init__(self, basedir="."):
#print "Client.__init__(%r)" % (basedir,)
node.Node.__init__(self, basedir)
self.connected_enough_d = defer.Deferred()
self.started_timestamp = time.time()
self.logSource="Client"
self.encoding_params = self.DEFAULT_ENCODING_PARAMETERS.copy()
@@ -150,7 +153,7 @@ class Client(node.Node, pollmixin.PollMixin):
# ControlServer and Helper are attached after Tub startup
self.init_ftp_server()
self.init_sftp_server()
self.init_drop_uploader()
self.init_magic_folder()
# If the node sees an exit_trigger file, it will poll every second to see
# whether the file still exists, and what its mtime is. If the file does not
@@ -332,6 +335,9 @@ class Client(node.Node, pollmixin.PollMixin):
DEP["n"] = int(self.get_config("client", "shares.total", DEP["n"]))
DEP["happy"] = int(self.get_config("client", "shares.happy", DEP["happy"]))
# for the CLI to authenticate to local JSON endpoints
self._auth_token = self._create_or_read_auth_token()
self.init_client_storage_broker()
self.history = History(self.stats_provider)
self.terminator = Terminator()
@@ -341,12 +347,44 @@ class Client(node.Node, pollmixin.PollMixin):
self.init_blacklist()
self.init_nodemaker()
def get_auth_token(self):
"""
This returns a local authentication token, which is just some
random data in "api_auth_token" which must be echoed to API
calls.
Currently only the '/magic' URI (magic-folder status) uses this; other
endpoints are invited to include it as well, as appropriate.
"""
return self._auth_token
def _create_or_read_auth_token(self):
"""
This returns the current auth-token data, possibly creating it and
writing 'private/api_auth_token' in the process.
"""
fname = os.path.join(self.basedir, 'private', 'api_auth_token')
try:
with open(fname, 'rb') as f:
data = f.read()
except (OSError, IOError):
log.msg("Creating '%s'." % (fname,))
with open(fname, 'wb') as f:
data = urlsafe_b64encode(os.urandom(32))
f.write(data)
return data
def init_client_storage_broker(self):
# create a StorageFarmBroker object, for use by Uploader/Downloader
# (and everybody else who wants to use storage servers)
ps = self.get_config("client", "peers.preferred", "").split(",")
preferred_peers = tuple([p.strip() for p in ps if p != ""])
sb = storage_client.StorageFarmBroker(self.tub, permute_peers=True, preferred_peers=preferred_peers)
connection_threshold = min(self.encoding_params["k"],
self.encoding_params["happy"] + 1)
sb = storage_client.StorageFarmBroker(self.tub, True, connection_threshold,
self.connected_enough_d, preferred_peers=preferred_peers)
self.storage_broker = sb
# load static server specifications from tahoe.cfg, if any.
@@ -488,22 +526,32 @@ class Client(node.Node, pollmixin.PollMixin):
sftp_portstr, pubkey_file, privkey_file)
s.setServiceParent(self)
def init_drop_uploader(self):
def init_magic_folder(self):
#print "init_magic_folder"
if self.get_config("drop_upload", "enabled", False, boolean=True):
if self.get_config("drop_upload", "upload.dircap", None):
raise OldConfigOptionError("The [drop_upload]upload.dircap option is no longer supported; please "
"put the cap in a 'private/drop_upload_dircap' file, and delete this option.")
raise OldConfigOptionError("The [drop_upload] section must be renamed to [magic_folder].\n"
"See docs/frontends/magic-folder.rst for more information.")
upload_dircap = self.get_or_create_private_config("drop_upload_dircap")
local_dir_utf8 = self.get_config("drop_upload", "local.directory")
if self.get_config("magic_folder", "enabled", False, boolean=True):
#print "magic folder enabled"
upload_dircap = self.get_private_config("magic_folder_dircap")
collective_dircap = self.get_private_config("collective_dircap")
try:
from allmydata.frontends import drop_upload
s = drop_upload.DropUploader(self, upload_dircap, local_dir_utf8)
s.setServiceParent(self)
s.startService()
except Exception, e:
self.log("couldn't start drop-uploader: %r", args=(e,))
local_dir_config = self.get_config("magic_folder", "local.directory").decode("utf-8")
local_dir = abspath_expanduser_unicode(local_dir_config, base=self.basedir)
dbfile = os.path.join(self.basedir, "private", "magicfolderdb.sqlite")
dbfile = abspath_expanduser_unicode(dbfile)
from allmydata.frontends import magic_folder
umask = self.get_config("magic_folder", "download.umask", 0077)
s = magic_folder.MagicFolder(self, upload_dircap, collective_dircap, local_dir, dbfile, umask)
self._magic_folder = s
s.setServiceParent(self)
s.startService()
# start processing the upload queue when we've connected to enough servers
self.connected_enough_d.addCallback(lambda ign: s.ready())
def _check_exit_trigger(self, exit_trigger_file):
if os.path.exists(exit_trigger_file):

View File

@@ -588,7 +588,7 @@ class DirectoryNode:
return d
def add_file(self, namex, uploadable, metadata=None, overwrite=True):
def add_file(self, namex, uploadable, metadata=None, overwrite=True, progress=None):
"""I upload a file (using the given IUploadable), then attach the
resulting FileNode to the directory at the given name. I return a
Deferred that fires (with the IFileNode of the uploaded file) when
@@ -596,7 +596,7 @@
name = normalize(namex)
if self.is_readonly():
return defer.fail(NotWriteableError())
d = self._uploader.upload(uploadable)
d = self._uploader.upload(uploadable, progress=progress)
d.addCallback(lambda results:
self._create_and_validate_node(results.get_uri(), None,
name))

View File

@@ -1,124 +0,0 @@
import sys
from twisted.internet import defer
from twisted.python.filepath import FilePath
from twisted.application import service
from foolscap.api import eventually
from allmydata.interfaces import IDirectoryNode
from allmydata.util.encodingutil import quote_output, get_filesystem_encoding
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.immutable.upload import FileName
class DropUploader(service.MultiService):
name = 'drop-upload'
def __init__(self, client, upload_dircap, local_dir_utf8, inotify=None):
service.MultiService.__init__(self)
try:
local_dir_u = abspath_expanduser_unicode(local_dir_utf8.decode('utf-8'))
if sys.platform == "win32":
local_dir = local_dir_u
else:
local_dir = local_dir_u.encode(get_filesystem_encoding())
except (UnicodeEncodeError, UnicodeDecodeError):
raise AssertionError("The '[drop_upload] local.directory' parameter %s was not valid UTF-8 or "
"could not be represented in the filesystem encoding."
% quote_output(local_dir_utf8))
self._client = client
self._stats_provider = client.stats_provider
self._convergence = client.convergence
self._local_path = FilePath(local_dir)
if inotify is None:
from twisted.internet import inotify
self._inotify = inotify
if not self._local_path.exists():
raise AssertionError("The '[drop_upload] local.directory' parameter was %s but there is no directory at that location." % quote_output(local_dir_u))
if not self._local_path.isdir():
raise AssertionError("The '[drop_upload] local.directory' parameter was %s but the thing at that location is not a directory." % quote_output(local_dir_u))
# TODO: allow a path rather than a cap URI.
self._parent = self._client.create_node_from_uri(upload_dircap)
if not IDirectoryNode.providedBy(self._parent):
raise AssertionError("The URI in 'private/drop_upload_dircap' does not refer to a directory.")
if self._parent.is_unknown() or self._parent.is_readonly():
raise AssertionError("The URI in 'private/drop_upload_dircap' is not a writecap to a directory.")
self._uploaded_callback = lambda ign: None
self._notifier = inotify.INotify()
# We don't watch for IN_CREATE, because that would cause us to read and upload a
# possibly-incomplete file before the application has closed it. There should always
# be an IN_CLOSE_WRITE after an IN_CREATE (I think).
# TODO: what about IN_MOVE_SELF or IN_UNMOUNT?
mask = inotify.IN_CLOSE_WRITE | inotify.IN_MOVED_TO | inotify.IN_ONLYDIR
self._notifier.watch(self._local_path, mask=mask, callbacks=[self._notify])
def startService(self):
service.MultiService.startService(self)
d = self._notifier.startReading()
self._stats_provider.count('drop_upload.dirs_monitored', 1)
return d
def _notify(self, opaque, path, events_mask):
self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
self._stats_provider.count('drop_upload.files_queued', 1)
eventually(self._process, opaque, path, events_mask)
def _process(self, opaque, path, events_mask):
d = defer.succeed(None)
# FIXME: if this already exists as a mutable file, we replace the directory entry,
# but we should probably modify the file (as the SFTP frontend does).
def _add_file(ign):
name = path.basename()
# on Windows the name is already Unicode
if not isinstance(name, unicode):
name = name.decode(get_filesystem_encoding())
u = FileName(path.path, self._convergence)
return self._parent.add_file(name, u)
d.addCallback(_add_file)
def _succeeded(ign):
self._stats_provider.count('drop_upload.files_queued', -1)
self._stats_provider.count('drop_upload.files_uploaded', 1)
def _failed(f):
self._stats_provider.count('drop_upload.files_queued', -1)
if path.exists():
self._log("drop-upload: %r failed to upload due to %r" % (path.path, f))
self._stats_provider.count('drop_upload.files_failed', 1)
return f
else:
self._log("drop-upload: notified file %r disappeared "
"(this is normal for temporary files): %r" % (path.path, f))
self._stats_provider.count('drop_upload.files_disappeared', 1)
return None
d.addCallbacks(_succeeded, _failed)
d.addBoth(self._uploaded_callback)
return d
def set_uploaded_callback(self, callback):
"""This sets a function that will be called after a file has been uploaded."""
self._uploaded_callback = callback
def finish(self, for_tests=False):
self._notifier.stopReading()
self._stats_provider.count('drop_upload.dirs_monitored', -1)
if for_tests and hasattr(self._notifier, 'wait_until_stopped'):
return self._notifier.wait_until_stopped()
else:
return defer.succeed(None)
def _log(self, msg):
self._client.log(msg)
#open("events", "ab+").write(msg)

View File

@@ -0,0 +1,895 @@
import sys, os
import os.path
from collections import deque
import time
from twisted.internet import defer, reactor, task
from twisted.python.failure import Failure
from twisted.python import runtime
from twisted.application import service
from zope.interface import Interface, Attribute, implementer
from allmydata.util import fileutil
from allmydata.interfaces import IDirectoryNode
from allmydata.util import log
from allmydata.util.fileutil import precondition_abspath, get_pathinfo, ConflictError
from allmydata.util.assertutil import precondition, _assert
from allmydata.util.deferredutil import HookMixin
from allmydata.util.progress import PercentProgress
from allmydata.util.encodingutil import listdir_filepath, to_filepath, \
extend_filepath, unicode_from_filepath, unicode_segments_from, \
quote_filepath, quote_local_unicode_path, quote_output, FilenameEncodingError
from allmydata.immutable.upload import FileName, Data
from allmydata import magicfolderdb, magicpath
defer.setDebugging(True)
IN_EXCL_UNLINK = 0x04000000L
def get_inotify_module():
try:
if sys.platform == "win32":
from allmydata.windows import inotify
elif runtime.platform.supportsINotify():
from twisted.internet import inotify
else:
raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
"This currently requires Linux or Windows.")
return inotify
except (ImportError, AttributeError) as e:
log.msg(e)
if sys.platform == "win32":
raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
"Windows support requires at least Vista, and has only been tested on Windows 7.")
raise
def is_new_file(pathinfo, db_entry):
if db_entry is None:
return True
if not pathinfo.exists and db_entry.size is None:
return False
return ((pathinfo.size, pathinfo.ctime, pathinfo.mtime) !=
(db_entry.size, db_entry.ctime, db_entry.mtime))
class MagicFolder(service.MultiService):
name = 'magic-folder'
def __init__(self, client, upload_dircap, collective_dircap, local_path_u, dbfile, umask,
pending_delay=1.0, clock=None):
precondition_abspath(local_path_u)
service.MultiService.__init__(self)
immediate = clock is not None
clock = clock or reactor
db = magicfolderdb.get_magicfolderdb(dbfile, create_version=(magicfolderdb.SCHEMA_v1, 1))
if db is None:
return Failure(Exception('ERROR: Unable to load magic folder db.'))
# for tests
self._client = client
self._db = db
upload_dirnode = self._client.create_node_from_uri(upload_dircap)
collective_dirnode = self._client.create_node_from_uri(collective_dircap)
self.uploader = Uploader(client, local_path_u, db, upload_dirnode, pending_delay, clock, immediate)
self.downloader = Downloader(client, local_path_u, db, collective_dirnode,
upload_dirnode.get_readonly_uri(), clock, self.uploader.is_pending, umask)
def startService(self):
# TODO: why is this being called more than once?
if self.running:
return defer.succeed(None)
print "%r.startService" % (self,)
service.MultiService.startService(self)
return self.uploader.start_monitoring()
def ready(self):
"""ready is used to signal us to start
processing the upload and download items...
"""
self.uploader.start_uploading() # synchronous
return self.downloader.start_downloading()
def finish(self):
print "finish"
d = self.uploader.stop()
d2 = self.downloader.stop()
d.addCallback(lambda ign: d2)
return d
def remove_service(self):
return service.MultiService.disownServiceParent(self)
class QueueMixin(HookMixin):
def __init__(self, client, local_path_u, db, name, clock):
self._client = client
self._local_path_u = local_path_u
self._local_filepath = to_filepath(local_path_u)
self._db = db
self._name = name
self._clock = clock
self._hooks = {'processed': None, 'started': None}
self.started_d = self.set_hook('started')
if not self._local_filepath.exists():
raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
"but there is no directory at that location."
% quote_local_unicode_path(self._local_path_u))
if not self._local_filepath.isdir():
raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
"but the thing at that location is not a directory."
% quote_local_unicode_path(self._local_path_u))
self._deque = deque()
# do we also want to bound on "maximum age"?
self._process_history = deque(maxlen=20)
self._lazy_tail = defer.succeed(None)
self._stopped = False
self._turn_delay = 0
def get_status(self):
"""
Returns an iterable of instances that implement IQueuedItem
"""
for item in self._deque:
yield item
for item in self._process_history:
yield item
def _get_filepath(self, relpath_u):
self._log("_get_filepath(%r)" % (relpath_u,))
return extend_filepath(self._local_filepath, relpath_u.split(u"/"))
def _get_relpath(self, filepath):
self._log("_get_relpath(%r)" % (filepath,))
segments = unicode_segments_from(filepath, self._local_filepath)
self._log("segments = %r" % (segments,))
return u"/".join(segments)
def _count(self, counter_name, delta=1):
ctr = 'magic_folder.%s.%s' % (self._name, counter_name)
self._log("%s += %r" % (counter_name, delta))
self._client.stats_provider.count(ctr, delta)
def _logcb(self, res, msg):
self._log("%s: %r" % (msg, res))
return res
def _log(self, msg):
s = "Magic Folder %s %s: %s" % (quote_output(self._client.nickname), self._name, msg)
self._client.log(s)
print s
#open("events", "ab+").write(msg)
def _turn_deque(self):
try:
self._log("_turn_deque")
if self._stopped:
self._log("stopped")
return
try:
item = IQueuedItem(self._deque.pop())
self._process_history.append(item)
self._log("popped %r, now have %d" % (item, len(self._deque)))
self._count('objects_queued', -1)
except IndexError:
self._log("deque is now empty")
self._lazy_tail.addBoth(self._logcb, "whawhat empty")
self._lazy_tail.addCallback(lambda ign: self._when_queue_is_empty())
self._lazy_tail.addBoth(self._logcb, "got past _when_queue_is_empty")
else:
self._log("_turn_deque else clause")
self._lazy_tail.addBoth(self._logcb, "whawhat else %r" % (item,))
self._lazy_tail.addCallback(lambda ign: self._process(item))
self._lazy_tail.addBoth(self._logcb, "got past _process")
self._lazy_tail.addBoth(self._call_hook, 'processed', async=True)
self._lazy_tail.addBoth(self._logcb, "got past _call_hook (turn_delay = %r)" % (self._turn_delay,))
self._lazy_tail.addErrback(log.err)
self._lazy_tail.addCallback(lambda ign: task.deferLater(self._clock, self._turn_delay, self._turn_deque))
self._lazy_tail.addBoth(self._logcb, "got past deferLater")
except Exception as e:
self._log("---- turn deque exception %s" % (e,))
raise
# this isn't in interfaces.py because it's very specific to QueueMixin
class IQueuedItem(Interface):
relpath_u = Attribute("The path this item represents")
progress = Attribute("A PercentProgress instance")
def set_status(self, status, current_time=None):
"""
"""
def status_time(self, state):
"""
Get the time of particular state change, or None
"""
def status_history(self):
"""
All status changes, sorted latest -> oldest
"""
@implementer(IQueuedItem)
class QueuedItem(object):
def __init__(self, relpath_u, progress):
self.relpath_u = relpath_u
self.progress = progress
self._status_history = dict()
def set_status(self, status, current_time=None):
if current_time is None:
current_time = time.time()
self._status_history[status] = current_time
def status_time(self, state):
"""
Returns None if there's no status-update for 'state', else returns
the timestamp when that state was reached.
"""
return self._status_history.get(state, None)
def status_history(self):
"""
Returns a list of 2-tuples of (state, timestamp) sorted by timestamp
"""
hist = self._status_history.items()
hist.sort(lambda a, b: cmp(a[1], b[1]))
return hist
class UploadItem(QueuedItem):
"""
Represents a single item in the _deque of the Uploader
"""
pass
class Uploader(QueueMixin):
def __init__(self, client, local_path_u, db, upload_dirnode, pending_delay, clock,
immediate=False):
QueueMixin.__init__(self, client, local_path_u, db, 'uploader', clock)
self.is_ready = False
self._immediate = immediate
if not IDirectoryNode.providedBy(upload_dirnode):
raise AssertionError("The URI in '%s' does not refer to a directory."
% os.path.join('private', 'magic_folder_dircap'))
if upload_dirnode.is_unknown() or upload_dirnode.is_readonly():
raise AssertionError("The URI in '%s' is not a writecap to a directory."
% os.path.join('private', 'magic_folder_dircap'))
self._upload_dirnode = upload_dirnode
self._inotify = get_inotify_module()
self._notifier = self._inotify.INotify()
self._pending = set() # of unicode relpaths
self._periodic_full_scan_duration = 10 * 60 # perform a full scan every 10 minutes
if hasattr(self._notifier, 'set_pending_delay'):
self._notifier.set_pending_delay(pending_delay)
# TODO: what about IN_MOVE_SELF and IN_UNMOUNT?
#
self.mask = ( self._inotify.IN_CREATE
| self._inotify.IN_CLOSE_WRITE
| self._inotify.IN_MOVED_TO
| self._inotify.IN_MOVED_FROM
| self._inotify.IN_DELETE
| self._inotify.IN_ONLYDIR
| IN_EXCL_UNLINK
)
self._notifier.watch(self._local_filepath, mask=self.mask, callbacks=[self._notify],
recursive=True)
def start_monitoring(self):
self._log("start_monitoring")
d = defer.succeed(None)
d.addCallback(lambda ign: self._notifier.startReading())
d.addCallback(lambda ign: self._count('dirs_monitored'))
d.addBoth(self._call_hook, 'started')
return d
def stop(self):
self._log("stop")
self._notifier.stopReading()
self._count('dirs_monitored', -1)
self.periodic_callid.cancel()
if hasattr(self._notifier, 'wait_until_stopped'):
d = self._notifier.wait_until_stopped()
else:
d = defer.succeed(None)
d.addCallback(lambda ign: self._lazy_tail)
return d
def start_uploading(self):
self._log("start_uploading")
self.is_ready = True
all_relpaths = self._db.get_all_relpaths()
self._log("all relpaths: %r" % (all_relpaths,))
for relpath_u in all_relpaths:
self._add_pending(relpath_u)
self._full_scan()
def _extend_queue_and_keep_going(self, relpaths_u):
self._log("_extend_queue_and_keep_going %r" % (relpaths_u,))
for relpath_u in relpaths_u:
progress = PercentProgress()
item = UploadItem(relpath_u, progress)
item.set_status('queued', self._clock.seconds())
self._deque.append(item)
self._count('objects_queued', len(relpaths_u))
if self.is_ready:
if self._immediate: # for tests
self._turn_deque()
else:
self._clock.callLater(0, self._turn_deque)
def _full_scan(self):
self.periodic_callid = self._clock.callLater(self._periodic_full_scan_duration, self._full_scan)
print "FULL SCAN"
self._log("_pending %r" % (self._pending))
self._scan(u"")
self._extend_queue_and_keep_going(self._pending)
def _add_pending(self, relpath_u):
self._log("add pending %r" % (relpath_u,))
if not magicpath.should_ignore_file(relpath_u):
self._pending.add(relpath_u)
def _scan(self, reldir_u):
# Scan a directory by (synchronously) adding the paths of all its children to self._pending.
# Note that this doesn't add them to the deque -- that is done separately by _extend_queue_and_keep_going().
self._log("scan %r" % (reldir_u,))
fp = self._get_filepath(reldir_u)
try:
children = listdir_filepath(fp)
except EnvironmentError:
raise Exception("WARNING: magic folder: permission denied on directory %s"
% quote_filepath(fp))
except FilenameEncodingError:
raise Exception("WARNING: magic folder: could not list directory %s due to a filename encoding error"
% quote_filepath(fp))
for child in children:
_assert(isinstance(child, unicode), child=child)
self._add_pending("%s/%s" % (reldir_u, child) if reldir_u != u"" else child)
def is_pending(self, relpath_u):
return relpath_u in self._pending
def _notify(self, opaque, path, events_mask):
self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
relpath_u = self._get_relpath(path)
# We filter out IN_CREATE events not associated with a directory.
# Acting on IN_CREATE for files could cause us to read and upload
# a possibly-incomplete file before the application has closed it.
# There should always be an IN_CLOSE_WRITE after an IN_CREATE, I think.
# It isn't possible to avoid watching for IN_CREATE at all, because
# it is the only event notified for a directory creation.
if ((events_mask & self._inotify.IN_CREATE) != 0 and
(events_mask & self._inotify.IN_ISDIR) == 0):
self._log("ignoring event for %r (creation of non-directory)\n" % (relpath_u,))
return
if relpath_u in self._pending:
self._log("not queueing %r because it is already pending" % (relpath_u,))
return
if magicpath.should_ignore_file(relpath_u):
self._log("ignoring event for %r (ignorable path)" % (relpath_u,))
return
self._pending.add(relpath_u)
self._extend_queue_and_keep_going([relpath_u])
def _when_queue_is_empty(self):
return defer.succeed(None)
def _process(self, item):
# Uploader
relpath_u = item.relpath_u
self._log("_process(%r)" % (relpath_u,))
item.set_status('started', self._clock.seconds())
if relpath_u is None:
item.set_status('invalid_path', self._clock.seconds())
return
precondition(isinstance(relpath_u, unicode), relpath_u)
precondition(not relpath_u.endswith(u'/'), relpath_u)
d = defer.succeed(None)
def _maybe_upload(ign, now=None):
self._log("_maybe_upload: relpath_u=%r, now=%r" % (relpath_u, now))
if now is None:
now = time.time()
fp = self._get_filepath(relpath_u)
pathinfo = get_pathinfo(unicode_from_filepath(fp))
self._log("about to remove %r from pending set %r" %
(relpath_u, self._pending))
self._pending.remove(relpath_u)
encoded_path_u = magicpath.path2magic(relpath_u)
if not pathinfo.exists:
# FIXME merge this with the 'isfile' case.
self._log("notified object %s disappeared (this is normal)" % quote_filepath(fp))
self._count('objects_disappeared')
db_entry = self._db.get_db_entry(relpath_u)
if db_entry is None:
return None
last_downloaded_timestamp = now # is this correct?
if is_new_file(pathinfo, db_entry):
new_version = db_entry.version + 1
else:
self._log("Not uploading %r" % (relpath_u,))
self._count('objects_not_uploaded')
return
metadata = { 'version': new_version,
'deleted': True,
'last_downloaded_timestamp': last_downloaded_timestamp }
if db_entry.last_downloaded_uri is not None:
metadata['last_downloaded_uri'] = db_entry.last_downloaded_uri
empty_uploadable = Data("", self._client.convergence)
d2 = self._upload_dirnode.add_file(
encoded_path_u, empty_uploadable,
metadata=metadata,
overwrite=True,
progress=item.progress,
)
def _add_db_entry(filenode):
filecap = filenode.get_uri()
last_downloaded_uri = metadata.get('last_downloaded_uri', None)
self._db.did_upload_version(relpath_u, new_version, filecap,
last_downloaded_uri, last_downloaded_timestamp,
pathinfo)
self._count('files_uploaded')
d2.addCallback(_add_db_entry)
return d2
elif pathinfo.islink:
self.warn("WARNING: cannot upload symlink %s" % quote_filepath(fp))
return None
elif pathinfo.isdir:
print "ISDIR "
if not getattr(self._notifier, 'recursive_includes_new_subdirectories', False):
self._notifier.watch(fp, mask=self.mask, callbacks=[self._notify], recursive=True)
uploadable = Data("", self._client.convergence)
encoded_path_u += magicpath.path2magic(u"/")
self._log("encoded_path_u = %r" % (encoded_path_u,))
upload_d = self._upload_dirnode.add_file(
encoded_path_u, uploadable,
metadata={"version": 0},
overwrite=True,
progress=item.progress,
)
def _dir_succeeded(ign):
self._log("created subdirectory %r" % (relpath_u,))
self._count('directories_created')
def _dir_failed(f):
self._log("failed to create subdirectory %r" % (relpath_u,))
return f
upload_d.addCallbacks(_dir_succeeded, _dir_failed)
upload_d.addCallback(lambda ign: self._scan(relpath_u))
upload_d.addCallback(lambda ign: self._extend_queue_and_keep_going(self._pending))
return upload_d
elif pathinfo.isfile:
db_entry = self._db.get_db_entry(relpath_u)
last_downloaded_timestamp = now
if db_entry is None:
new_version = 0
elif is_new_file(pathinfo, db_entry):
new_version = db_entry.version + 1
else:
self._log("Not uploading %r" % (relpath_u,))
self._count('objects_not_uploaded')
return None
metadata = { 'version': new_version,
'last_downloaded_timestamp': last_downloaded_timestamp }
if db_entry is not None and db_entry.last_downloaded_uri is not None:
metadata['last_downloaded_uri'] = db_entry.last_downloaded_uri
uploadable = FileName(unicode_from_filepath(fp), self._client.convergence)
d2 = self._upload_dirnode.add_file(
encoded_path_u, uploadable,
metadata=metadata,
overwrite=True,
progress=item.progress,
)
def _add_db_entry(filenode):
filecap = filenode.get_uri()
last_downloaded_uri = metadata.get('last_downloaded_uri', None)
self._db.did_upload_version(relpath_u, new_version, filecap,
last_downloaded_uri, last_downloaded_timestamp,
pathinfo)
self._count('files_uploaded')
d2.addCallback(_add_db_entry)
return d2
else:
self.warn("WARNING: cannot process special file %s" % quote_filepath(fp))
return None
d.addCallback(_maybe_upload)
def _succeeded(res):
self._count('objects_succeeded')
item.set_status('success', self._clock.seconds())
return res
def _failed(f):
self._count('objects_failed')
self._log("%s while processing %r" % (f, relpath_u))
item.set_status('failure', self._clock.seconds())
return f
d.addCallbacks(_succeeded, _failed)
return d
def _get_metadata(self, encoded_path_u):
try:
d = self._upload_dirnode.get_metadata_for(encoded_path_u)
except KeyError:
return Failure()
return d
def _get_filenode(self, encoded_path_u):
try:
d = self._upload_dirnode.get(encoded_path_u)
except KeyError:
return Failure()
return d
class WriteFileMixin(object):
FUDGE_SECONDS = 10.0
def _get_conflicted_filename(self, abspath_u):
return abspath_u + u".conflict"
def _write_downloaded_file(self, local_path_u, abspath_u, file_contents, is_conflict=False, now=None):
self._log("_write_downloaded_file(%r, <%d bytes>, is_conflict=%r, now=%r)"
% (abspath_u, len(file_contents), is_conflict, now))
# 1. Write a temporary file, say .foo.tmp.
# 2. is_conflict determines whether this is an overwrite or a conflict.
# 3. Set the mtime of the replacement file to be T seconds before the
# current local time.
# 4. Perform a file replacement with backup filename foo.backup,
# replaced file foo, and replacement file .foo.tmp. If any step of
# this operation fails, reclassify as a conflict and stop.
#
# Returns the path of the destination file.
precondition_abspath(abspath_u)
replacement_path_u = abspath_u + u".tmp" # FIXME more unique
backup_path_u = abspath_u + u".backup"
if now is None:
now = time.time()
initial_path_u = os.path.dirname(abspath_u)
fileutil.make_dirs_with_absolute_mode(local_path_u, initial_path_u, (~ self._umask) & 0777)
fileutil.write(replacement_path_u, file_contents)
os.chmod(replacement_path_u, (~ self._umask) & 0777)
# FUDGE_SECONDS is used to determine if another process
# has written to the same file concurrently. This is described
# in the Earth Dragon section of our design document:
# docs/proposed/magic-folder/remote-to-local-sync.rst
os.utime(replacement_path_u, (now, now - self.FUDGE_SECONDS))
if is_conflict:
print "0x00 ------------ <><> is conflict; calling _rename_conflicted_file... %r %r" % (abspath_u, replacement_path_u)
return self._rename_conflicted_file(abspath_u, replacement_path_u)
else:
try:
fileutil.replace_file(abspath_u, replacement_path_u, backup_path_u)
return abspath_u
except fileutil.ConflictError:
return self._rename_conflicted_file(abspath_u, replacement_path_u)
def _rename_conflicted_file(self, abspath_u, replacement_path_u):
self._log("_rename_conflicted_file(%r, %r)" % (abspath_u, replacement_path_u))
conflict_path_u = self._get_conflicted_filename(abspath_u)
print "XXX rename %r %r" % (replacement_path_u, conflict_path_u)
if os.path.isfile(replacement_path_u):
print "%r exists" % (replacement_path_u,)
if os.path.isfile(conflict_path_u):
print "%r exists" % (conflict_path_u,)
fileutil.rename_no_overwrite(replacement_path_u, conflict_path_u)
return conflict_path_u
def _rename_deleted_file(self, abspath_u):
self._log('renaming deleted file to backup: %s' % (abspath_u,))
try:
fileutil.rename_no_overwrite(abspath_u, abspath_u + u'.backup')
except OSError:
self._log("Already gone: '%s'" % (abspath_u,))
return abspath_u
class DownloadItem(QueuedItem):
"""
Represents a single item in the _deque of the Downloader
"""
def __init__(self, relpath_u, progress, filenode, metadata):
super(DownloadItem, self).__init__(relpath_u, progress)
self.file_node = filenode
self.metadata = metadata
class Downloader(QueueMixin, WriteFileMixin):
REMOTE_SCAN_INTERVAL = 3 # facilitates tests
def __init__(self, client, local_path_u, db, collective_dirnode,
upload_readonly_dircap, clock, is_upload_pending, umask):
QueueMixin.__init__(self, client, local_path_u, db, 'downloader', clock)
if not IDirectoryNode.providedBy(collective_dirnode):
raise AssertionError("The URI in '%s' does not refer to a directory."
% os.path.join('private', 'collective_dircap'))
if collective_dirnode.is_unknown() or not collective_dirnode.is_readonly():
raise AssertionError("The URI in '%s' is not a readonly cap to a directory."
% os.path.join('private', 'collective_dircap'))
self._collective_dirnode = collective_dirnode
self._upload_readonly_dircap = upload_readonly_dircap
self._is_upload_pending = is_upload_pending
self._umask = umask
def start_downloading(self):
self._log("start_downloading")
self._turn_delay = self.REMOTE_SCAN_INTERVAL
files = self._db.get_all_relpaths()
self._log("all files %s" % files)
d = self._scan_remote_collective(scan_self=True)
d.addBoth(self._logcb, "after _scan_remote_collective 0")
self._turn_deque()
return d
def stop(self):
self._log("stop")
self._stopped = True
d = defer.succeed(None)
d.addCallback(lambda ign: self._lazy_tail)
return d
def _should_download(self, relpath_u, remote_version):
"""
_should_download returns a bool indicating whether or not a remote object should be downloaded.
We check the remote metadata version against our magic-folder db version number;
latest version wins.
"""
self._log("_should_download(%r, %r)" % (relpath_u, remote_version))
if magicpath.should_ignore_file(relpath_u):
self._log("nope")
return False
self._log("yep")
db_entry = self._db.get_db_entry(relpath_u)
if db_entry is None:
return True
self._log("version %r" % (db_entry.version,))
return (db_entry.version < remote_version)
def _get_local_latest(self, relpath_u):
"""
_get_local_latest takes a unicode path string and checks to see if this file object
exists in our magic-folder db; if not then it returns None,
else it checks for an entry in our magic-folder db and returns the version number.
"""
if not self._get_filepath(relpath_u).exists():
return None
db_entry = self._db.get_db_entry(relpath_u)
return None if db_entry is None else db_entry.version
def _get_collective_latest_file(self, filename):
"""
_get_collective_latest_file takes a file path pointing to a file managed by
magic-folder and returns a deferred that fires with a two-tuple containing a
file node and metadata for the latest version of the file located in the
magic-folder collective directory.
"""
collective_dirmap_d = self._collective_dirnode.list()
def scan_collective(result):
list_of_deferreds = []
for dir_name in result.keys():
# XXX make sure it's a directory
d = defer.succeed(None)
d.addCallback(lambda x, dir_name=dir_name: result[dir_name][0].get_child_and_metadata(filename))
list_of_deferreds.append(d)
deferList = defer.DeferredList(list_of_deferreds, consumeErrors=True)
return deferList
collective_dirmap_d.addCallback(scan_collective)
def highest_version(deferredList):
max_version = 0
metadata = None
node = None
for success, result in deferredList:
if success:
if result[1]['version'] > max_version:
node, metadata = result
max_version = result[1]['version']
return node, metadata
collective_dirmap_d.addCallback(highest_version)
return collective_dirmap_d
def _scan_remote_dmd(self, nickname, dirnode, scan_batch):
self._log("_scan_remote_dmd nickname %r" % (nickname,))
d = dirnode.list()
def scan_listing(listing_map):
for encoded_relpath_u in listing_map.keys():
relpath_u = magicpath.magic2path(encoded_relpath_u)
self._log("found %r" % (relpath_u,))
file_node, metadata = listing_map[encoded_relpath_u]
local_version = self._get_local_latest(relpath_u)
remote_version = metadata.get('version', None)
self._log("%r has local version %r, remote version %r" % (relpath_u, local_version, remote_version))
if local_version is None or remote_version is None or local_version < remote_version:
self._log("%r added to download queue" % (relpath_u,))
if scan_batch.has_key(relpath_u):
scan_batch[relpath_u] += [(file_node, metadata)]
else:
scan_batch[relpath_u] = [(file_node, metadata)]
d.addCallback(scan_listing)
d.addBoth(self._logcb, "end of _scan_remote_dmd")
return d
def _scan_remote_collective(self, scan_self=False):
self._log("_scan_remote_collective")
scan_batch = {} # path -> [(filenode, metadata)]
d = self._collective_dirnode.list()
def scan_collective(dirmap):
d2 = defer.succeed(None)
for dir_name in dirmap:
(dirnode, metadata) = dirmap[dir_name]
if scan_self or dirnode.get_readonly_uri() != self._upload_readonly_dircap:
d2.addCallback(lambda ign, dir_name=dir_name, dirnode=dirnode:
self._scan_remote_dmd(dir_name, dirnode, scan_batch))
def _err(f, dir_name=dir_name):
self._log("failed to scan DMD for client %r: %s" % (dir_name, f))
# XXX what should we do to make this failure more visible to users?
d2.addErrback(_err)
return d2
d.addCallback(scan_collective)
def _filter_batch_to_deque(ign):
self._log("deque = %r, scan_batch = %r" % (self._deque, scan_batch))
for relpath_u in scan_batch.keys():
file_node, metadata = max(scan_batch[relpath_u], key=lambda x: x[1]['version'])
if self._should_download(relpath_u, metadata['version']):
to_dl = DownloadItem(
relpath_u,
PercentProgress(file_node.get_size()),
file_node,
metadata,
)
to_dl.set_status('queued', self._clock.seconds())
self._deque.append(to_dl)
else:
self._log("Excluding %r" % (relpath_u,))
self._call_hook(None, 'processed', async=True)
self._log("deque after = %r" % (self._deque,))
d.addCallback(_filter_batch_to_deque)
return d
def _when_queue_is_empty(self):
d = task.deferLater(self._clock, self.REMOTE_SCAN_INTERVAL, self._scan_remote_collective)
d.addBoth(self._logcb, "after _scan_remote_collective 1")
d.addCallback(lambda ign: self._turn_deque())
return d
def _process(self, item, now=None):
# Downloader
self._log("_process(%r)" % (item,))
if now is None: # XXX why can we pass in now?
now = time.time() # self._clock.seconds()
self._log("started! %s" % (now,))
item.set_status('started', now)
fp = self._get_filepath(item.relpath_u)
abspath_u = unicode_from_filepath(fp)
conflict_path_u = self._get_conflicted_filename(abspath_u)
d = defer.succeed(None)
def do_update_db(written_abspath_u):
filecap = item.file_node.get_uri()
last_uploaded_uri = item.metadata.get('last_uploaded_uri', None)
last_downloaded_uri = filecap
last_downloaded_timestamp = now
written_pathinfo = get_pathinfo(written_abspath_u)
if not written_pathinfo.exists and not item.metadata.get('deleted', False):
raise Exception("downloaded object %s disappeared" % quote_local_unicode_path(written_abspath_u))
self._db.did_upload_version(
item.relpath_u, item.metadata['version'], last_uploaded_uri,
last_downloaded_uri, last_downloaded_timestamp, written_pathinfo,
)
self._count('objects_downloaded')
item.set_status('success', self._clock.seconds())
def failed(f):
item.set_status('failure', self._clock.seconds())
self._log("download failed: %s" % (str(f),))
self._count('objects_failed')
return f
if os.path.isfile(conflict_path_u):
def fail(res):
raise ConflictError("download failed: already conflicted: %r" % (item.relpath_u,))
d.addCallback(fail)
else:
is_conflict = False
db_entry = self._db.get_db_entry(item.relpath_u)
dmd_last_downloaded_uri = item.metadata.get('last_downloaded_uri', None)
dmd_last_uploaded_uri = item.metadata.get('last_uploaded_uri', None)
if db_entry:
if dmd_last_downloaded_uri is not None and db_entry.last_downloaded_uri is not None:
if dmd_last_downloaded_uri != db_entry.last_downloaded_uri:
is_conflict = True
self._count('objects_conflicted')
elif dmd_last_uploaded_uri is not None and dmd_last_uploaded_uri != db_entry.last_uploaded_uri:
is_conflict = True
self._count('objects_conflicted')
elif self._is_upload_pending(item.relpath_u):
is_conflict = True
self._count('objects_conflicted')
if item.relpath_u.endswith(u"/"):
if item.metadata.get('deleted', False):
self._log("rmdir(%r) ignored" % (abspath_u,))
else:
self._log("mkdir(%r)" % (abspath_u,))
d.addCallback(lambda ign: fileutil.make_dirs(abspath_u))
d.addCallback(lambda ign: abspath_u)
else:
if item.metadata.get('deleted', False):
d.addCallback(lambda ign: self._rename_deleted_file(abspath_u))
else:
d.addCallback(lambda ign: item.file_node.download_best_version(progress=item.progress))
d.addCallback(lambda contents: self._write_downloaded_file(self._local_path_u, abspath_u, contents,
is_conflict=is_conflict))
d.addCallbacks(do_update_db, failed)
def trap_conflicts(f):
f.trap(ConflictError)
return None
d.addErrback(trap_conflicts)
return d

View File

@@ -74,7 +74,7 @@ PiB=1024*TiB
class Encoder(object):
implements(IEncoder)
def __init__(self, log_parent=None, upload_status=None):
def __init__(self, log_parent=None, upload_status=None, progress=None):
object.__init__(self)
self.uri_extension_data = {}
self._codec = None
@@ -86,6 +86,7 @@ class Encoder(object):
self._log_number = log.msg("creating Encoder %s" % self,
facility="tahoe.encoder", parent=log_parent)
self._aborted = False
self._progress = progress
def __repr__(self):
if hasattr(self, "_storage_index"):
@@ -105,6 +106,8 @@ class Encoder(object):
def _got_size(size):
self.log(format="file size: %(size)d", size=size)
self.file_size = size
if self._progress:
self._progress.set_progress_total(self.file_size)
d.addCallback(_got_size)
d.addCallback(lambda res: eu.get_all_encoding_parameters())
d.addCallback(self._got_all_encoding_parameters)
@@ -436,6 +439,7 @@ class Encoder(object):
shareid = shareids[i]
d = self.send_block(shareid, segnum, block, lognum)
dl.append(d)
block_hash = hashutil.block_hash(block)
#from allmydata.util import base32
#log.msg("creating block (shareid=%d, blocknum=%d) "
@ -445,6 +449,14 @@ class Encoder(object):
self.block_hashes[shareid].append(block_hash)
dl = self._gather_responses(dl)
def do_progress(ign):
done = self.segment_size * (segnum + 1)
if self._progress:
self._progress.set_progress(done)
return ign
dl.addCallback(do_progress)
def _logit(res):
self.log("%s uploaded %s / %s bytes (%d%%) of your file." %
(self,

View File

@ -1,7 +1,7 @@
import binascii
import time
now = time.time
from time import time as now
from zope.interface import implements
from twisted.internet import defer
@ -245,11 +245,13 @@ class ImmutableFileNode:
# we keep it here, we should also put this on CiphertextFileNode
def __hash__(self):
return self.u.__hash__()
def __eq__(self, other):
if isinstance(other, ImmutableFileNode):
return self.u.__eq__(other.u)
else:
return False
def __ne__(self, other):
if isinstance(other, ImmutableFileNode):
return not self.u.__eq__(other.u)
@ -273,12 +275,16 @@ class ImmutableFileNode:
def get_uri(self):
return self.u.to_string()
def get_cap(self):
return self.u
def get_readcap(self):
return self.u.get_readonly()
def get_verify_cap(self):
return self.u.get_verify_cap()
def get_repair_cap(self):
# CHK files can be repaired with just the verifycap
return self.u.get_verify_cap()
@ -288,6 +294,7 @@ class ImmutableFileNode:
def get_size(self):
return self.u.get_size()
def get_current_size(self):
return defer.succeed(self.get_size())
@ -305,6 +312,7 @@ class ImmutableFileNode:
def check_and_repair(self, monitor, verify=False, add_lease=False):
return self._cnode.check_and_repair(monitor, verify, add_lease)
def check(self, monitor, verify=False, add_lease=False):
return self._cnode.check(monitor, verify, add_lease)
@ -316,14 +324,13 @@ class ImmutableFileNode:
"""
return defer.succeed(self)
def download_best_version(self):
def download_best_version(self, progress=None):
"""
Download the best version of this file, returning its contents
as a bytestring. Since there is only one version of an immutable
file, we download and return the contents of this file.
"""
d = consumer.download_to_data(self)
d = consumer.download_to_data(self, progress=progress)
return d
# for an immutable file, download_to_data (specified in IReadable)

View File

@ -113,7 +113,10 @@ class LiteralFileNode(_ImmutableFileNodeBase):
return defer.succeed(self)
def download_best_version(self):
def download_best_version(self, progress=None):
if progress is not None:
progress.set_progress_total(len(self.u.data))
progress.set_progress(len(self.u.data))
return defer.succeed(self.u.data)

View File

@ -137,7 +137,8 @@ class CHKUploadHelper(Referenceable, upload.CHKUploader):
def __init__(self, storage_index,
helper, storage_broker, secret_holder,
incoming_file, encoding_file,
log_number):
log_number, progress=None):
upload.CHKUploader.__init__(self, storage_broker, secret_holder, progress=progress)
self._storage_index = storage_index
self._helper = helper
self._incoming_file = incoming_file

View File

@ -21,7 +21,7 @@ from allmydata.util.rrefutil import add_version_to_remote_reference
from allmydata.interfaces import IUploadable, IUploader, IUploadResults, \
IEncryptedUploadable, RIEncryptedUploadable, IUploadStatus, \
NoServersError, InsufficientVersionError, UploadUnhappinessError, \
DEFAULT_MAX_SEGMENT_SIZE
DEFAULT_MAX_SEGMENT_SIZE, IProgress
from allmydata.immutable import layout
from pycryptopp.cipher.aes import AES
@ -623,7 +623,7 @@ class EncryptAnUploadable:
implements(IEncryptedUploadable)
CHUNKSIZE = 50*1024
def __init__(self, original, log_parent=None):
def __init__(self, original, log_parent=None, progress=None):
precondition(original.default_params_set,
"set_default_encoding_parameters not called on %r before wrapping with EncryptAnUploadable" % (original,))
self.original = IUploadable(original)
@ -636,6 +636,7 @@ class EncryptAnUploadable:
self._file_size = None
self._ciphertext_bytes_read = 0
self._status = None
self._progress = progress
def set_upload_status(self, upload_status):
self._status = IUploadStatus(upload_status)
@ -656,6 +657,8 @@ class EncryptAnUploadable:
self._file_size = size
if self._status:
self._status.set_size(size)
if self._progress:
self._progress.set_progress_total(size)
return size
d.addCallback(_got_size)
return d
@ -894,7 +897,7 @@ class UploadStatus:
class CHKUploader:
server_selector_class = Tahoe2ServerSelector
def __init__(self, storage_broker, secret_holder):
def __init__(self, storage_broker, secret_holder, progress=None):
# server_selector needs storage_broker and secret_holder
self._storage_broker = storage_broker
self._secret_holder = secret_holder
@ -904,6 +907,7 @@ class CHKUploader:
self._upload_status = UploadStatus()
self._upload_status.set_helper(False)
self._upload_status.set_active(True)
self._progress = progress
# locate_all_shareholders() will create the following attribute:
# self._server_trackers = {} # k: shnum, v: instance of ServerTracker
@ -947,8 +951,11 @@ class CHKUploader:
eu = IEncryptedUploadable(encrypted)
started = time.time()
self._encoder = e = encode.Encoder(self._log_number,
self._upload_status)
self._encoder = e = encode.Encoder(
self._log_number,
self._upload_status,
progress=self._progress,
)
d = e.set_encrypted_uploadable(eu)
d.addCallback(self.locate_all_shareholders, started)
d.addCallback(self.set_shareholders, e)
@ -1073,12 +1080,13 @@ def read_this_many_bytes(uploadable, size, prepend_data=[]):
class LiteralUploader:
def __init__(self):
def __init__(self, progress=None):
self._status = s = UploadStatus()
s.set_storage_index(None)
s.set_helper(False)
s.set_progress(0, 1.0)
s.set_active(False)
self._progress = progress
def start(self, uploadable):
uploadable = IUploadable(uploadable)
@ -1086,6 +1094,8 @@ class LiteralUploader:
def _got_size(size):
self._size = size
self._status.set_size(size)
if self._progress:
self._progress.set_progress_total(size)
return read_this_many_bytes(uploadable, size)
d.addCallback(_got_size)
d.addCallback(lambda data: uri.LiteralFileURI("".join(data)))
@ -1109,6 +1119,8 @@ class LiteralUploader:
self._status.set_progress(1, 1.0)
self._status.set_progress(2, 1.0)
self._status.set_results(ur)
if self._progress:
self._progress.set_progress(self._size)
return ur
def close(self):
@ -1503,12 +1515,13 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
name = "uploader"
URI_LIT_SIZE_THRESHOLD = 55
def __init__(self, helper_furl=None, stats_provider=None, history=None):
def __init__(self, helper_furl=None, stats_provider=None, history=None, progress=None):
self._helper_furl = helper_furl
self.stats_provider = stats_provider
self._history = history
self._helper = None
self._all_uploads = weakref.WeakKeyDictionary() # for debugging
self._progress = progress
log.PrefixingLogMixin.__init__(self, facility="tahoe.immutable.upload")
service.MultiService.__init__(self)
@ -1542,12 +1555,13 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
return (self._helper_furl, bool(self._helper))
def upload(self, uploadable):
def upload(self, uploadable, progress=None):
"""
Returns a Deferred that will fire with the UploadResults instance.
"""
assert self.parent
assert self.running
assert progress is None or IProgress.providedBy(progress)
uploadable = IUploadable(uploadable)
d = uploadable.get_size()
@ -1556,13 +1570,15 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
precondition(isinstance(default_params, dict), default_params)
precondition("max_segment_size" in default_params, default_params)
uploadable.set_default_encoding_parameters(default_params)
if progress:
progress.set_progress_total(size)
if self.stats_provider:
self.stats_provider.count('uploader.files_uploaded', 1)
self.stats_provider.count('uploader.bytes_uploaded', size)
if size <= self.URI_LIT_SIZE_THRESHOLD:
uploader = LiteralUploader()
uploader = LiteralUploader(progress=progress)
return uploader.start(uploadable)
else:
eu = EncryptAnUploadable(uploadable, self._parentmsgid)
@ -1575,7 +1591,7 @@ class Uploader(service.MultiService, log.PrefixingLogMixin):
else:
storage_broker = self.parent.get_storage_broker()
secret_holder = self.parent._secret_holder
uploader = CHKUploader(storage_broker, secret_holder)
uploader = CHKUploader(storage_broker, secret_holder, progress=progress)
d2.addCallback(lambda x: uploader.start(eu))
self._all_uploads[uploader] = None
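Taken together, these hunks thread an optional IProgress provider through LiteralUploader, CHKUploader, and Uploader.upload(). A minimal usage sketch (PercentProgress is the implementation referenced in the interface docs below; the uploader and uploadable objects here are assumptions):

from allmydata.util.progress import PercentProgress

progress = PercentProgress()
d = uploader.upload(uploadable, progress=progress)
# while the upload runs, set_progress_total() has been called with the
# file size, and progress.progress reports percent complete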

View File

@ -1,5 +1,5 @@
from zope.interface import Interface
from zope.interface import Interface, Attribute
from foolscap.api import StringConstraint, ListOf, TupleOf, SetOf, DictOf, \
ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
@ -624,6 +624,38 @@ class MustNotBeUnknownRWError(CapConstraintError):
"""Cannot add an unknown child cap specified in a rw_uri field."""
class IProgress(Interface):
"""
Remembers progress measured in arbitrary units. Users of these
instances must call ``set_progress_total`` at least once before
progress can be valid, and must use the same units for both
``set_progress_total`` and ``set_progress`` calls.
See also:
:class:`allmydata.util.progress.PercentProgress`
"""
progress = Attribute(
"Current amount of progress (in percentage)"
)
def set_progress(self, value):
"""
Sets the current amount of progress.
Arbitrary units, but must match units used for
set_progress_total.
"""
def set_progress_total(self, value):
"""
Sets the total amount of expected progress.
Arbitrary units, but must be the same units as used when calling
set_progress() on this instance.
"""
class IReadable(Interface):
"""I represent a readable object -- either an immutable file, or a
specific version of a mutable file.
@ -653,9 +685,12 @@ class IReadable(Interface):
def get_size():
"""Return the length (in bytes) of this readable object."""
def download_to_data():
def download_to_data(progress=None):
"""Download all of the file contents. I return a Deferred that fires
with the contents as a byte string."""
with the contents as a byte string.
:param progress: None or IProgress implementer
"""
def read(consumer, offset=0, size=None):
"""Download a portion (possibly all) of the file's contents, making
@ -915,11 +950,13 @@ class IFileNode(IFilesystemNode):
the Deferred will errback with an UnrecoverableFileError.
"""
def download_best_version():
def download_best_version(progress=None):
"""Download the contents of the version that would be returned
by get_best_readable_version(). This is equivalent to calling
download_to_data() on the IReadable given by that method.
progress is anything that implements IProgress
I return a Deferred that fires with a byte string when the file
has been fully downloaded. To support streaming download, use
the 'read' method of IReadable. If no version is recoverable,
@ -1065,7 +1102,7 @@ class IMutableFileNode(IFileNode):
everything) to get increased visibility.
"""
def upload(new_contents, servermap):
def upload(new_contents, servermap, progress=None):
"""Replace the contents of the file with new ones. This requires a
servermap that was previously updated with MODE_WRITE.
@ -1086,6 +1123,8 @@ class IMutableFileNode(IFileNode):
operation. If I do not signal UncoordinatedWriteError, then I was
able to write the new version without incident.
``progress`` is either None or an IProgress provider
I return a Deferred that fires (with a PublishStatus object) when the
publish has completed. I will update the servermap in-place with the
location of all new shares.
@ -1276,12 +1315,14 @@ class IDirectoryNode(IFilesystemNode):
equivalent to calling set_node() multiple times, but is much more
efficient."""
def add_file(name, uploadable, metadata=None, overwrite=True):
def add_file(name, uploadable, metadata=None, overwrite=True, progress=None):
"""I upload a file (using the given IUploadable), then attach the
resulting ImmutableFileNode to the directory at the given name. I set
metadata the same way as set_uri and set_node. The child name must be
a unicode string.
``progress`` either provides IProgress or is None
I return a Deferred that fires (with the IFileNode of the uploaded
file) when the operation completes."""

View File

@ -0,0 +1,99 @@
import sys
from collections import namedtuple
from allmydata.util.dbutil import get_db, DBError
# magic-folder db schema version 1
SCHEMA_v1 = """
CREATE TABLE version
(
version INTEGER -- contains one row, set to 1
);
CREATE TABLE local_files
(
path VARCHAR(1024) PRIMARY KEY, -- UTF-8 filename relative to local magic folder dir
-- note that size is before mtime and ctime here, but after in function parameters
size INTEGER, -- ST_SIZE, or NULL if the file has been deleted
mtime NUMBER, -- ST_MTIME
ctime NUMBER, -- ST_CTIME
version INTEGER,
last_uploaded_uri VARCHAR(256), -- URI:CHK:...
last_downloaded_uri VARCHAR(256), -- URI:CHK:...
last_downloaded_timestamp TIMESTAMP
);
"""
def get_magicfolderdb(dbfile, stderr=sys.stderr,
create_version=(SCHEMA_v1, 1), just_create=False):
# Open or create the given magicfolderdb file. The parent directory must
# exist.
try:
(sqlite3, db) = get_db(dbfile, stderr, create_version,
just_create=just_create, dbname="magicfolderdb")
if create_version[1] in (1, 2):
return MagicFolderDB(sqlite3, db)
else:
print >>stderr, "invalid magicfolderdb schema version specified"
return None
except DBError, e:
print >>stderr, e
return None
PathEntry = namedtuple('PathEntry', 'size mtime ctime version last_uploaded_uri last_downloaded_uri last_downloaded_timestamp')
class MagicFolderDB(object):
VERSION = 1
def __init__(self, sqlite_module, connection):
self.sqlite_module = sqlite_module
self.connection = connection
self.cursor = connection.cursor()
def get_db_entry(self, relpath_u):
"""
Retrieve the entry in the database for a given path, or return None
if there is no such entry.
"""
c = self.cursor
c.execute("SELECT size, mtime, ctime, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp"
" FROM local_files"
" WHERE path=?",
(relpath_u,))
row = self.cursor.fetchone()
if not row:
print "found nothing for", relpath_u
return None
else:
(size, mtime, ctime, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp) = row
return PathEntry(size=size, mtime=mtime, ctime=ctime, version=version,
last_uploaded_uri=last_uploaded_uri,
last_downloaded_uri=last_downloaded_uri,
last_downloaded_timestamp=last_downloaded_timestamp)
def get_all_relpaths(self):
"""
Retrieve a set of all relpaths of files that have had an entry in magic folder db
(i.e. that have been downloaded at least once).
"""
self.cursor.execute("SELECT path FROM local_files")
rows = self.cursor.fetchall()
return set([r[0] for r in rows])
def did_upload_version(self, relpath_u, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, pathinfo):
print "%r.did_upload_version(%r, %r, %r, %r, %r, %r)" % (self, relpath_u, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, pathinfo)
try:
print "insert"
self.cursor.execute("INSERT INTO local_files VALUES (?,?,?,?,?,?,?,?)",
(relpath_u, pathinfo.size, pathinfo.mtime, pathinfo.ctime, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp))
except (self.sqlite_module.IntegrityError, self.sqlite_module.OperationalError):
print "err... update"
self.cursor.execute("UPDATE local_files"
" SET size=?, mtime=?, ctime=?, version=?, last_uploaded_uri=?, last_downloaded_uri=?, last_downloaded_timestamp=?"
" WHERE path=?",
(pathinfo.size, pathinfo.mtime, pathinfo.ctime, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, relpath_u))
self.connection.commit()
print "committed"

View File

@ -0,0 +1,33 @@
import re
import os.path
from allmydata.util.assertutil import precondition, _assert
def path2magic(path):
return re.sub(ur'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path)
def magic2path(path):
return re.sub(ur'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], path)
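For example, '/' maps to '@_' and '@' maps to '@@', so the encoding round-trips:

assert path2magic(u"foo/bar@baz") == u"foo@_bar@@baz"
assert magic2path(path2magic(u"foo/bar@baz")) == u"foo/bar@baz"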
IGNORE_SUFFIXES = [u'.backup', u'.tmp', u'.conflicted']
IGNORE_PREFIXES = [u'.']
def should_ignore_file(path_u):
precondition(isinstance(path_u, unicode), path_u=path_u)
for suffix in IGNORE_SUFFIXES:
if path_u.endswith(suffix):
return True
while path_u != u"":
oldpath_u = path_u
path_u, tail_u = os.path.split(path_u)
if tail_u.startswith(u"."):
return True
if path_u == oldpath_u:
return True # the path was absolute
_assert(len(path_u) < len(oldpath_u), path_u=path_u, oldpath_u=oldpath_u)
return False
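A few illustrative cases of the rules above:

assert should_ignore_file(u"notes.tmp")         # ignored suffix
assert should_ignore_file(u".hidden")           # dotfile
assert should_ignore_file(u"dir/.cache/file")   # hidden path component
assert not should_ignore_file(u"dir/file.txt")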

View File

@ -403,21 +403,21 @@ class MutableFileNode:
return d.addCallback(_get_version, version)
def download_best_version(self):
def download_best_version(self, progress=None):
"""
I return a Deferred that fires with the contents of the best
version of this mutable file.
"""
return self._do_serialized(self._download_best_version)
return self._do_serialized(self._download_best_version, progress=progress)
def _download_best_version(self):
def _download_best_version(self, progress=None):
"""
I am the serialized sibling of download_best_version.
"""
d = self.get_best_readable_version()
d.addCallback(self._record_size)
d.addCallback(lambda version: version.download_to_data())
d.addCallback(lambda version: version.download_to_data(progress=progress))
# It is possible that the download will fail because there
# aren't enough shares to be had. If so, we will try again after
@ -432,7 +432,7 @@ class MutableFileNode:
d = self.get_best_mutable_version()
d.addCallback(self._record_size)
d.addCallback(lambda version: version.download_to_data())
d.addCallback(lambda version: version.download_to_data(progress=progress))
return d
d.addErrback(_maybe_retry)
@ -935,13 +935,13 @@ class MutableFileVersion:
return self._servermap.size_of_version(self._version)
def download_to_data(self, fetch_privkey=False):
def download_to_data(self, fetch_privkey=False, progress=None):
"""
I return a Deferred that fires with the contents of this
readable object as a byte string.
"""
c = consumer.MemoryConsumer()
c = consumer.MemoryConsumer(progress=progress)
d = self.read(c, fetch_privkey=fetch_privkey)
d.addCallback(lambda mc: "".join(mc.chunks))
return d
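The download side mirrors the upload sketch earlier: pass the same kind of provider to download_best_version (filenode here is an assumption):

progress = PercentProgress()  # from allmydata.util.progress
d = filenode.download_best_version(progress=progress)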

View File

@ -173,6 +173,8 @@ class BackupDB_v2:
"""
path = abspath_expanduser_unicode(path)
# TODO: consider using get_pathinfo.
s = os.stat(path)
size = s[stat.ST_SIZE]
ctime = s[stat.ST_CTIME]

View File

@ -31,6 +31,7 @@ class BadResponse(object):
def __init__(self, url, err):
self.status = -1
self.reason = "Error trying to connect to %s: %s" % (url, err)
self.error = err
def read(self):
return ""

View File

@ -153,14 +153,6 @@ def create_node(config, out=sys.stdout, err=sys.stderr):
c.write("enabled = false\n")
c.write("\n")
c.write("[drop_upload]\n")
c.write("# Shall this node automatically upload files created or modified in a local directory?\n")
c.write("enabled = false\n")
c.write("# To specify the target of uploads, a mutable directory writecap URI must be placed\n"
"# in 'private/drop_upload_dircap'.\n")
c.write("local.directory = ~/drop_upload\n")
c.write("\n")
c.close()
from allmydata.util import fileutil

View File

@ -0,0 +1,416 @@
import os
import urllib
from sys import stderr
from types import NoneType
from cStringIO import StringIO
from datetime import datetime
import simplejson
from twisted.python import usage
from allmydata.util.assertutil import precondition
from .common import BaseOptions, BasedirOptions, get_aliases
from .cli import MakeDirectoryOptions, LnOptions, CreateAliasOptions
import tahoe_mv
from allmydata.util.encodingutil import argv_to_abspath, argv_to_unicode, to_str, \
quote_local_unicode_path
from allmydata.scripts.common_http import do_http, BadResponse
from allmydata.util import fileutil
from allmydata.util import configutil
from allmydata import uri
from allmydata.util.abbreviate import abbreviate_space, abbreviate_time
INVITE_SEPARATOR = "+"
class CreateOptions(BasedirOptions):
nickname = None
local_dir = None
synopsis = "MAGIC_ALIAS: [NICKNAME LOCAL_DIR]"
def parseArgs(self, alias, nickname=None, local_dir=None):
BasedirOptions.parseArgs(self)
alias = argv_to_unicode(alias)
if not alias.endswith(u':'):
raise usage.UsageError("An alias must end with a ':' character.")
self.alias = alias[:-1]
self.nickname = None if nickname is None else argv_to_unicode(nickname)
# Expand the path relative to the current directory of the CLI command, not the node.
self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
if self.nickname and not self.local_dir:
raise usage.UsageError("If NICKNAME is specified then LOCAL_DIR must also be specified.")
node_url_file = os.path.join(self['node-directory'], u"node.url")
self['node-url'] = fileutil.read(node_url_file).strip()
def _delegate_options(source_options, target_options):
target_options.aliases = get_aliases(source_options['node-directory'])
target_options["node-url"] = source_options["node-url"]
target_options["node-directory"] = source_options["node-directory"]
target_options.stdin = StringIO("")
target_options.stdout = StringIO()
target_options.stderr = StringIO()
return target_options
def create(options):
precondition(isinstance(options.alias, unicode), alias=options.alias)
precondition(isinstance(options.nickname, (unicode, NoneType)), nickname=options.nickname)
precondition(isinstance(options.local_dir, (unicode, NoneType)), local_dir=options.local_dir)
from allmydata.scripts import tahoe_add_alias
create_alias_options = _delegate_options(options, CreateAliasOptions())
create_alias_options.alias = options.alias
rc = tahoe_add_alias.create_alias(create_alias_options)
if rc != 0:
print >>options.stderr, create_alias_options.stderr.getvalue()
return rc
print >>options.stdout, create_alias_options.stdout.getvalue()
if options.nickname is not None:
invite_options = _delegate_options(options, InviteOptions())
invite_options.alias = options.alias
invite_options.nickname = options.nickname
rc = invite(invite_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to invite after create\n"
print >>options.stderr, invite_options.stderr.getvalue()
return rc
invite_code = invite_options.stdout.getvalue().strip()
join_options = _delegate_options(options, JoinOptions())
join_options.local_dir = options.local_dir
join_options.invite_code = invite_code
rc = join(join_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to join after create\n"
print >>options.stderr, join_options.stderr.getvalue()
return rc
return 0
class InviteOptions(BasedirOptions):
nickname = None
synopsis = "MAGIC_ALIAS: NICKNAME"
stdin = StringIO("")
def parseArgs(self, alias, nickname=None):
BasedirOptions.parseArgs(self)
alias = argv_to_unicode(alias)
if not alias.endswith(u':'):
raise usage.UsageError("An alias must end with a ':' character.")
self.alias = alias[:-1]
self.nickname = argv_to_unicode(nickname)
node_url_file = os.path.join(self['node-directory'], u"node.url")
self['node-url'] = fileutil.read(node_url_file).strip()
aliases = get_aliases(self['node-directory'])
self.aliases = aliases
def invite(options):
precondition(isinstance(options.alias, unicode), alias=options.alias)
precondition(isinstance(options.nickname, unicode), nickname=options.nickname)
from allmydata.scripts import tahoe_mkdir
mkdir_options = _delegate_options(options, MakeDirectoryOptions())
mkdir_options.where = None
rc = tahoe_mkdir.mkdir(mkdir_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to mkdir\n"
return rc
# FIXME this assumes caps are ASCII.
dmd_write_cap = mkdir_options.stdout.getvalue().strip()
dmd_readonly_cap = uri.from_string(dmd_write_cap).get_readonly().to_string()
if dmd_readonly_cap is None:
print >>options.stderr, "magic-folder: failed to diminish dmd write cap\n"
return 1
magic_write_cap = get_aliases(options["node-directory"])[options.alias]
magic_readonly_cap = uri.from_string(magic_write_cap).get_readonly().to_string()
# tahoe ln CLIENT_READCAP COLLECTIVE_WRITECAP/NICKNAME
ln_options = _delegate_options(options, LnOptions())
ln_options.from_file = unicode(dmd_readonly_cap, 'utf-8')
ln_options.to_file = u"%s/%s" % (unicode(magic_write_cap, 'utf-8'), options.nickname)
rc = tahoe_mv.mv(ln_options, mode="link")
if rc != 0:
print >>options.stderr, "magic-folder: failed to create link\n"
print >>options.stderr, ln_options.stderr.getvalue()
return rc
# FIXME: this assumes caps are ASCII.
print >>options.stdout, "%s%s%s" % (magic_readonly_cap, INVITE_SEPARATOR, dmd_write_cap)
return 0
class JoinOptions(BasedirOptions):
synopsis = "INVITE_CODE LOCAL_DIR"
dmd_write_cap = ""
magic_readonly_cap = ""
def parseArgs(self, invite_code, local_dir):
BasedirOptions.parseArgs(self)
# Expand the path relative to the current directory of the CLI command, not the node.
self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
self.invite_code = to_str(argv_to_unicode(invite_code))
def join(options):
fields = options.invite_code.split(INVITE_SEPARATOR)
if len(fields) != 2:
raise usage.UsageError("Invalid invite code.")
magic_readonly_cap, dmd_write_cap = fields
dmd_cap_file = os.path.join(options["node-directory"], u"private", u"magic_folder_dircap")
collective_readcap_file = os.path.join(options["node-directory"], u"private", u"collective_dircap")
magic_folder_db_file = os.path.join(options["node-directory"], u"private", u"magicfolderdb.sqlite")
if os.path.exists(dmd_cap_file) or os.path.exists(collective_readcap_file) or os.path.exists(magic_folder_db_file):
print >>options.stderr, ("\nThis client has already joined a magic folder."
"\nUse the 'tahoe magic-folder leave' command first.\n")
return 1
fileutil.write(dmd_cap_file, dmd_write_cap)
fileutil.write(collective_readcap_file, magic_readonly_cap)
config = configutil.get_config(os.path.join(options["node-directory"], u"tahoe.cfg"))
configutil.set_config(config, "magic_folder", "enabled", "True")
configutil.set_config(config, "magic_folder", "local.directory", options.local_dir.encode('utf-8'))
configutil.write_config(os.path.join(options["node-directory"], u"tahoe.cfg"), config)
return 0
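A successful join leaves the client's tahoe.cfg with a section like the following, written by the configutil.set_config calls above (the directory value is illustrative):

[magic_folder]
enabled = True
local.directory = /home/alice/magic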
class LeaveOptions(BasedirOptions):
synopsis = ""
def parseArgs(self):
BasedirOptions.parseArgs(self)
def leave(options):
from ConfigParser import SafeConfigParser
dmd_cap_file = os.path.join(options["node-directory"], u"private", u"magic_folder_dircap")
collective_readcap_file = os.path.join(options["node-directory"], u"private", u"collective_dircap")
magic_folder_db_file = os.path.join(options["node-directory"], u"private", u"magicfolderdb.sqlite")
parser = SafeConfigParser()
parser.read(os.path.join(options["node-directory"], u"tahoe.cfg"))
parser.remove_section("magic_folder")
f = open(os.path.join(options["node-directory"], u"tahoe.cfg"), "w")
parser.write(f)
f.close()
for f in [dmd_cap_file, collective_readcap_file, magic_folder_db_file]:
try:
fileutil.remove(f)
except Exception as e:
print >>options.stderr, ("Warning: unable to remove %s due to %s: %s"
% (quote_local_unicode_path(f), e.__class__.__name__, str(e)))
# if this doesn't return 0, then the CLI stuff fails
return 0
class StatusOptions(BasedirOptions):
nickname = None
synopsis = ""
stdin = StringIO("")
def parseArgs(self):
BasedirOptions.parseArgs(self)
node_url_file = os.path.join(self['node-directory'], u"node.url")
with open(node_url_file, "r") as f:
self['node-url'] = f.read().strip()
def _get_json_for_fragment(options, fragment, method='GET'):
nodeurl = options['node-url']
if nodeurl.endswith('/'):
nodeurl = nodeurl[:-1]
url = u'%s/%s' % (nodeurl, fragment)
resp = do_http(method, url)
if isinstance(resp, BadResponse):
# specifically NOT using format_http_error() here because the
# URL is pretty sensitive (we're doing /uri/<key>).
raise RuntimeError(
"Failed to get json from '%s': %s" % (nodeurl, resp.error)
)
data = resp.read()
parsed = simplejson.loads(data)
if not parsed:
raise RuntimeError("No data from '%s'" % (nodeurl,))
return parsed
def _get_json_for_cap(options, cap):
return _get_json_for_fragment(
options,
'uri/%s?t=json' % urllib.quote(cap),
)
def _print_item_status(item, now, longest):
paddedname = (' ' * (longest - len(item['path']))) + item['path']
if 'failure_at' in item:
ts = datetime.fromtimestamp(item['started_at'])
prog = 'Failed %s (%s)' % (abbreviate_time(now - ts), ts)
elif item['percent_done'] < 100.0:
if 'started_at' not in item:
prog = 'not yet started'
else:
so_far = now - datetime.fromtimestamp(item['started_at'])
if so_far.seconds > 0.0:
rate = item['percent_done'] / so_far.seconds
if rate != 0:
time_left = (100.0 - item['percent_done']) / rate
prog = '%2.1f%% done, around %s left' % (
item['percent_done'],
abbreviate_time(time_left),
)
else:
time_left = None
prog = '%2.1f%% done' % (item['percent_done'],)
else:
prog = 'just started'
else:
prog = ''
for verb in ['finished', 'started', 'queued']:
keyname = verb + '_at'
if keyname in item:
when = datetime.fromtimestamp(item[keyname])
prog = '%s %s' % (verb, abbreviate_time(now - when))
break
print " %s: %s" % (paddedname, prog)
def status(options):
nodedir = options["node-directory"]
with open(os.path.join(nodedir, u"private", u"magic_folder_dircap")) as f:
dmd_cap = f.read().strip()
with open(os.path.join(nodedir, u"private", u"collective_dircap")) as f:
collective_readcap = f.read().strip()
try:
captype, dmd = _get_json_for_cap(options, dmd_cap)
if captype != 'dirnode':
print >>stderr, "magic_folder_dircap isn't a directory capability"
return 2
except RuntimeError as e:
print >>stderr, str(e)
return 1
now = datetime.now()
print "Local files:"
for (name, child) in dmd['children'].items():
captype, meta = child
status = 'good'
size = meta['size']
created = datetime.fromtimestamp(meta['metadata']['tahoe']['linkcrtime'])
version = meta['metadata']['version']
nice_size = abbreviate_space(size)
nice_created = abbreviate_time(now - created)
if captype != 'filenode':
print "%20s: error, should be a filecap" % name
continue
print " %s (%s): %s, version=%s, created %s" % (name, nice_size, status, version, nice_created)
captype, collective = _get_json_for_cap(options, collective_readcap)
print
print "Remote files:"
for (name, data) in collective['children'].items():
if data[0] != 'dirnode':
print "Error: '%s': expected a dirnode, not '%s'" % (name, data[0])
continue
print " %s's remote:" % name
dmd = _get_json_for_cap(options, data[1]['ro_uri'])
if dmd[0] != 'dirnode':
print "Error: should be a dirnode"
continue
for (n, d) in dmd[1]['children'].items():
if d[0] != 'filenode':
print "Error: expected '%s' to be a filenode." % (n,)
continue
meta = d[1]
status = 'good'
size = meta['size']
created = datetime.fromtimestamp(meta['metadata']['tahoe']['linkcrtime'])
version = meta['metadata']['version']
nice_size = abbreviate_space(size)
nice_created = abbreviate_time(now - created)
print " %s (%s): %s, version=%s, created %s" % (n, nice_size, status, version, nice_created)
with open(os.path.join(nodedir, u'private', u'api_auth_token'), 'rb') as f:
token = f.read()
magicdata = _get_json_for_fragment(
options,
'magic_folder?t=json&token=' + token,
method='POST',
)
if len(magicdata):
uploads = [item for item in magicdata if item['kind'] == 'upload']
downloads = [item for item in magicdata if item['kind'] == 'download']
longest = max([len(item['path']) for item in magicdata])
if True: # maybe --show-completed option or something?
uploads = [item for item in uploads if item['status'] != 'success']
downloads = [item for item in downloads if item['status'] != 'success']
if len(uploads):
print
print "Uploads:"
for item in uploads:
_print_item_status(item, now, longest)
if len(downloads):
print
print "Downloads:"
for item in downloads:
_print_item_status(item, now, longest)
for item in magicdata:
if item['status'] == 'failure':
print "Failed:", item
return 0
class MagicFolderCommand(BaseOptions):
subCommands = [
["create", None, CreateOptions, "Create a Magic Folder."],
["invite", None, InviteOptions, "Invite someone to a Magic Folder."],
["join", None, JoinOptions, "Join a Magic Folder."],
["leave", None, LeaveOptions, "Leave a Magic Folder."],
["status", None, StatusOptions, "Display stutus of uploads/downloads."],
]
def postOptions(self):
if not hasattr(self, 'subOptions'):
raise usage.UsageError("must specify a subcommand")
def getSynopsis(self):
return "Usage: tahoe [global-options] magic SUBCOMMAND"
def getUsage(self, width=None):
t = BaseOptions.getUsage(self, width)
t += """\
Please run e.g. 'tahoe magic-folder create --help' for more details on each
subcommand.
"""
return t
subDispatch = {
"create": create,
"invite": invite,
"join": join,
"leave": leave,
"status": status,
}
def do_magic_folder(options):
so = options.subOptions
so.stdout = options.stdout
so.stderr = options.stderr
f = subDispatch[options.subCommand]
return f(so)
subCommands = [
["magic-folder", None, MagicFolderCommand,
"Magic Folder subcommands: use 'tahoe magic-folder' for a list."],
]
dispatch = {
"magic-folder": do_magic_folder,
}

View File

@ -5,7 +5,8 @@ from cStringIO import StringIO
from twisted.python import usage
from allmydata.scripts.common import get_default_nodedir
from allmydata.scripts import debug, create_node, startstop_node, cli, keygen, stats_gatherer, admin
from allmydata.scripts import debug, create_node, startstop_node, cli, keygen, stats_gatherer, \
admin, magic_folder_cli
from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
def GROUP(s):
@ -45,6 +46,7 @@ class Options(usage.Options):
+ debug.subCommands
+ GROUP("Using the filesystem")
+ cli.subCommands
+ magic_folder_cli.subCommands
)
optFlags = [
@ -143,6 +145,8 @@ def runner(argv,
rc = admin.dispatch[command](so)
elif command in cli.dispatch:
rc = cli.dispatch[command](so)
elif command in magic_folder_cli.dispatch:
rc = magic_folder_cli.dispatch[command](so)
elif command in ac_dispatch:
rc = ac_dispatch[command](so, stdout, stderr)
else:

View File

@ -62,10 +62,13 @@ class StorageFarmBroker:
I'm also responsible for subscribing to the IntroducerClient to find out
about new servers as they are announced by the Introducer.
"""
def __init__(self, tub, permute_peers, preferred_peers=()):
def __init__(self, tub, permute_peers, connected_threshold, connected_d,
preferred_peers=()):
self.tub = tub
assert permute_peers # False not implemented yet
self.permute_peers = permute_peers
self.connected_threshold = connected_threshold
self.connected_d = connected_d
self.preferred_peers = preferred_peers
# self.servers maps serverid -> IServer, and keeps track of all the
# storage servers that we've heard about. Each descriptor manages its
@ -76,7 +79,7 @@ class StorageFarmBroker:
# these two are used in unit tests
def test_add_rref(self, serverid, rref, ann):
s = NativeStorageServer(serverid, ann.copy())
s = NativeStorageServer(serverid, ann.copy(), self)
s.rref = rref
s._is_connected = True
self.servers[serverid] = s
@ -93,7 +96,7 @@ class StorageFarmBroker:
precondition(isinstance(key_s, str), key_s)
precondition(key_s.startswith("v0-"), key_s)
assert ann["service-name"] == "storage"
s = NativeStorageServer(key_s, ann)
s = NativeStorageServer(key_s, ann, self)
serverid = s.get_serverid()
old = self.servers.get(serverid)
if old:
@ -119,6 +122,13 @@ class StorageFarmBroker:
for dsc in self.servers.values():
dsc.try_to_connect()
def check_enough_connected(self):
if (self.connected_d is not None and
len(self.get_connected_servers()) >= self.connected_threshold):
d = self.connected_d
self.connected_d = None
d.callback(None)
def get_servers_for_psi(self, peer_selection_index):
# return a list of server objects (IServers)
assert self.permute_peers == True
@ -190,9 +200,10 @@ class NativeStorageServer:
"application-version": "unknown: no get_version()",
}
def __init__(self, key_s, ann):
def __init__(self, key_s, ann, broker):
self.key_s = key_s
self.announcement = ann
self.broker = broker
assert "anonymous-storage-FURL" in ann, ann
furl = str(ann["anonymous-storage-FURL"])
@ -295,6 +306,7 @@ class NativeStorageServer:
default = self.VERSION_DEFAULTS
d = add_version_to_remote_reference(rref, default)
d.addCallback(self._got_versioned_service, lp)
d.addCallback(lambda ign: self.broker.check_enough_connected())
d.addErrback(log.err, format="storageclient._got_connection",
name=self.get_name(), umid="Sdq3pg")

View File

@ -0,0 +1,378 @@
#!/usr/bin/env python
# this is a smoke-test using "./bin/tahoe" to:
#
# 1. create an introducer
# 2. create 5 storage nodes
# 3. create 2 client nodes (alice, bob)
# 4. Alice creates a magic-folder ("magik:")
# 5. Alice invites Bob
# 6. Bob joins
#
# After that, some basic tests are performed; see the "if True:"
# blocks to turn some on or off. Could benefit from some cleanups
# etc. but this seems useful out of the gate for quick testing.
#
# TO RUN:
# from top-level of your checkout (we use "./bin/tahoe"):
# python src/allmydata/test/check_magicfolder_smoke.py
#
# This will create "./smoke_magicfolder" (which is disposable) and
# contains all the Tahoe basedirs for the introducer, storage nodes,
# clients, and the clients' magic-folders. NOTE that if these
# directories already exist they will NOT be re-created. So kill the
# grid and then "rm -rf smoke_magicfolder" if you want to re-run the
# tests cleanly.
#
# Run the script with a single arg, "kill" to run "tahoe stop" on all
# the nodes.
#
# This will have "tahoe start" -ed all the nodes, so you can continue
# to play around after the script exits.
from __future__ import print_function
import sys
import time
import shutil
import subprocess
from os.path import join, abspath, curdir, exists
from os import mkdir, listdir, unlink
tahoe_base = abspath(curdir)
data_base = join(tahoe_base, 'smoke_magicfolder')
tahoe_bin = join(tahoe_base, 'bin', 'tahoe')
python = sys.executable
if not exists(data_base):
print("Creating", data_base)
mkdir(data_base)
if not exists(tahoe_bin):
raise RuntimeError("Can't find 'tahoe' binary at %r" % (tahoe_bin,))
if 'kill' in sys.argv:
print("Killing the grid")
for d in listdir(data_base):
print("killing", d)
subprocess.call(
[
python, tahoe_bin, 'stop', join(data_base, d),
]
)
sys.exit(0)
if not exists(join(data_base, 'introducer')):
subprocess.check_call(
[
python, tahoe_bin, 'create-introducer', join(data_base, 'introducer'),
]
)
with open(join(data_base, 'introducer', 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = introducer0
web.port = 4560
''')
subprocess.check_call(
[
python, tahoe_bin, 'start', join(data_base, 'introducer'),
]
)
furl_fname = join(data_base, 'introducer', 'private', 'introducer.furl')
while not exists(furl_fname):
time.sleep(1)
furl = open(furl_fname, 'r').read()
print("FURL", furl)
for x in range(5):
data_dir = join(data_base, 'node%d' % x)
if not exists(data_dir):
subprocess.check_call(
[
python, tahoe_bin, 'create-node',
'--nickname', 'node%d' % (x,),
'--introducer', furl,
data_dir,
]
)
with open(join(data_dir, 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = node%(node_id)s
web.port =
web.static = public_html
tub.location = localhost:%(tub_port)d
[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 2
shares.happy = 3
shares.total = 4
''' % {'node_id':x, 'furl':furl, 'tub_port':(9900 + x)})
subprocess.check_call(
[
python, tahoe_bin, 'start', data_dir,
]
)
# alice and bob clients
do_invites = False
node_id = 0
for name in ['alice', 'bob']:
data_dir = join(data_base, name)
magic_dir = join(data_base, '%s-magic' % (name,))
if not exists(magic_dir):
mkdir(magic_dir)
if not exists(data_dir):
do_invites = True
subprocess.check_call(
[
python, tahoe_bin, 'create-node',
'--no-storage',
'--nickname', name,
'--introducer', furl,
data_dir,
]
)
with open(join(data_dir, 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = %(name)s
web.port = tcp:998%(node_id)d:interface=localhost
web.static = public_html
[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 2
shares.happy = 3
shares.total = 4
''' % {'name':name, 'node_id':node_id, 'furl':furl})
subprocess.check_call(
[
python, tahoe_bin, 'start', data_dir,
]
)
node_id += 1
# okay, now we have the alice and bob clients
# now we have alice create a magic-folder, and invite bob to it
if do_invites:
data_dir = join(data_base, 'alice')
# alice creates her folder, invites bob
print("Alice creates a magic-folder")
subprocess.check_call(
[
python, tahoe_bin, 'magic-folder', 'create', '--basedir', data_dir, 'magik:', 'alice',
join(data_base, 'alice-magic'),
]
)
print("Alice invites Bob")
invite = subprocess.check_output(
[
python, tahoe_bin, 'magic-folder', 'invite', '--basedir', data_dir, 'magik:', 'bob',
]
)
print(" invite:", invite)
# now we let "bob"/bob join
print("Bob joins Alice's magic folder")
data_dir = join(data_base, 'bob')
subprocess.check_call(
[
python, tahoe_bin, 'magic-folder', 'join', '--basedir', data_dir, invite,
join(data_base, 'bob-magic'),
]
)
print("Bob has joined.")
print("Restarting alice + bob clients")
subprocess.check_call(
[
python, tahoe_bin, 'restart', '--basedir', join(data_base, 'alice'),
]
)
subprocess.check_call(
[
python, tahoe_bin, 'restart', '--basedir', join(data_base, 'bob'),
]
)
if True:
for name in ['alice', 'bob']:
with open(join(data_base, name, 'private', 'magic_folder_dircap'), 'r') as f:
print("dircap %s: %s" % (name, f.read().strip()))
# give storage nodes a chance to connect properly? I'm not entirely
# sure what's up here, but I get "UnrecoverableFileError" on the
# first_file upload from Alice "very often" otherwise
print("waiting 3 seconds")
time.sleep(3)
if True:
# alice writes a file; bob should get it
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
with open(alice_foo, 'w') as f:
f.write("line one\n")
print("Waiting for:", bob_foo)
while True:
if exists(bob_foo):
print(" found", bob_foo)
with open(bob_foo, 'r') as f:
if f.read() == "line one\n":
break
print(" file contents still mismatched")
time.sleep(1)
if True:
# bob writes a file; alice should get it
alice_bar = join(data_base, 'alice-magic', 'second_file')
bob_bar = join(data_base, 'bob-magic', 'second_file')
with open(bob_bar, 'w') as f:
f.write("line one\n")
print("Waiting for:", alice_bar)
while True:
if exists(alice_bar):
print(" found", alice_bar)
with open(alice_bar, 'r') as f:
if f.read() == "line one\n":
break
print(" file contents still mismatched")
time.sleep(1)
if True:
# alice deletes 'first_file'
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
unlink(alice_foo)
print("Waiting for '%s' to disappear" % (bob_foo,))
while True:
if not exists(bob_foo):
print(" disappeared", bob_foo)
break
time.sleep(1)
bob_tmp = bob_foo + '.backup'
print("Waiting for '%s' to appear" % (bob_tmp,))
while True:
if exists(bob_tmp):
print(" appeared", bob_tmp)
break
time.sleep(1)
if True:
# bob writes new content to 'second_file'; alice should get it
alice_foo = join(data_base, 'alice-magic', 'second_file')
bob_foo = join(data_base, 'bob-magic', 'second_file')
gold_content = "line one\nsecond line\n"
with open(bob_foo, 'w') as f:
f.write(gold_content)
print("Waiting for:", alice_foo)
while True:
if exists(alice_foo):
print(" found", alice_foo)
with open(alice_foo, 'r') as f:
content = f.read()
if content == gold_content:
break
print(" file contents still mismatched:\n")
print(content)
time.sleep(1)
if True:
# bob creates a sub-directory and adds a file to it
alice_dir = join(data_base, 'alice-magic', 'subdir')
bob_dir = join(data_base, 'bob-magic', 'subdir')
gold_content = 'a file in a subdirectory\n'
mkdir(bob_dir)
with open(join(bob_dir, 'subfile'), 'w') as f:
f.write(gold_content)
print("Waiting for Bob's subdir '%s' to appear" % (bob_dir,))
while True:
if exists(bob_dir):
print(" found subdir")
if exists(join(bob_dir, 'subfile')):
print(" found file")
with open(join(bob_dir, 'subfile'), 'r') as f:
if f.read() == gold_content:
print(" contents match")
break
time.sleep(0.1)
if True:
# bob deletes the whole subdir
alice_dir = join(data_base, 'alice-magic', 'subdir')
bob_dir = join(data_base, 'bob-magic', 'subdir')
shutil.rmtree(bob_dir)
print("Waiting for Alice's subdir '%s' to disappear" % (alice_dir,))
while True:
if not exists(alice_dir):
print(" it's gone")
break
time.sleep(0.1)
# XXX restore the file not working (but, unit-tests work; what's wrong with them?)
# NOTE: only not-works if it's alice restoring the file!
if True:
# restore 'first_file' but with different contents
print("re-writing 'first_file'")
assert not exists(join(data_base, 'bob-magic', 'first_file'))
assert not exists(join(data_base, 'alice-magic', 'first_file'))
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
if True:
# if we don't swap around, it works fine
alice_foo, bob_foo = bob_foo, alice_foo
gold_content = "see it again for the first time\n"
with open(bob_foo, 'w') as f:
f.write(gold_content)
print("Waiting for:", alice_foo)
while True:
if exists(alice_foo):
print(" found", alice_foo)
with open(alice_foo, 'r') as f:
content = f.read()
if content == gold_content:
break
print(" file contents still mismatched: %d bytes:\n" % (len(content),))
print(content)
else:
print(" %r not there yet" % (alice_foo,))
time.sleep(1)
if True:
# bob leaves
print('bob leaves')
data_dir = join(data_base, 'bob')
subprocess.check_call(
[
python, tahoe_bin, 'magic-folder', 'leave', '--basedir', data_dir,
]
)
# XXX test .backup (delete a file)
# port david's clock.advance stuff
# fix clock.advance()
# subdirectory
# file deletes
# conflicts

View File

@ -151,8 +151,8 @@ class FakeCHKFileNode:
return defer.succeed(self)
def download_to_data(self):
return download_to_data(self)
def download_to_data(self, progress=None):
return download_to_data(self, progress=progress)
download_best_version = download_to_data
@ -329,11 +329,11 @@ class FakeMutableFileNode:
d.addCallback(_done)
return d
def download_best_version(self):
return defer.succeed(self._download_best_version())
def download_best_version(self, progress=None):
return defer.succeed(self._download_best_version(progress=progress))
def _download_best_version(self, ignored=None):
def _download_best_version(self, ignored=None, progress=None):
if isinstance(self.my_uri, uri.LiteralFileURI):
return self.my_uri.data
if self.storage_index not in self.all_contents:

View File

@ -20,6 +20,9 @@ from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from foolscap.api import Referenceable, fireEventually, RemoteException
from base64 import b32encode
from allmydata.util.assertutil import _assert
from allmydata import uri as tahoe_uri
from allmydata.client import Client
from allmydata.storage.server import StorageServer, storage_index_to_dir
@ -174,6 +177,9 @@ class NoNetworkStorageBroker:
return None
class NoNetworkClient(Client):
def disownServiceParent(self):
# delegate to the base class; calling self.disownServiceParent() here
# would recurse forever
return Client.disownServiceParent(self)
def create_tub(self):
pass
def init_introducer_client(self):
@ -232,6 +238,7 @@ class NoNetworkGrid(service.MultiService):
self.proxies_by_id = {} # maps to IServer on which .rref is a wrapped
# StorageServer
self.clients = []
self.client_config_hooks = client_config_hooks
for i in range(num_servers):
ss = self.make_server(i)
@ -239,30 +246,42 @@ class NoNetworkGrid(service.MultiService):
self.rebuild_serverlist()
for i in range(num_clients):
clientid = hashutil.tagged_hash("clientid", str(i))[:20]
clientdir = os.path.join(basedir, "clients",
idlib.shortnodeid_b2a(clientid))
fileutil.make_dirs(clientdir)
f = open(os.path.join(clientdir, "tahoe.cfg"), "w")
c = self.make_client(i)
self.clients.append(c)
def make_client(self, i, write_config=True):
clientid = hashutil.tagged_hash("clientid", str(i))[:20]
clientdir = os.path.join(self.basedir, "clients",
idlib.shortnodeid_b2a(clientid))
fileutil.make_dirs(clientdir)
tahoe_cfg_path = os.path.join(clientdir, "tahoe.cfg")
if write_config:
f = open(tahoe_cfg_path, "w")
f.write("[node]\n")
f.write("nickname = client-%d\n" % i)
f.write("web.port = tcp:0:interface=127.0.0.1\n")
f.write("[storage]\n")
f.write("enabled = false\n")
f.close()
c = None
if i in client_config_hooks:
# this hook can either modify tahoe.cfg, or return an
# entirely new Client instance
c = client_config_hooks[i](clientdir)
if not c:
c = NoNetworkClient(clientdir)
c.set_default_mutable_keysize(TEST_RSA_KEY_SIZE)
c.nodeid = clientid
c.short_nodeid = b32encode(clientid).lower()[:8]
c._servers = self.all_servers # can be updated later
c.setServiceParent(self)
self.clients.append(c)
else:
_assert(os.path.exists(tahoe_cfg_path), tahoe_cfg_path=tahoe_cfg_path)
c = None
if i in self.client_config_hooks:
# this hook can either modify tahoe.cfg, or return an
# entirely new Client instance
c = self.client_config_hooks[i](clientdir)
if not c:
c = NoNetworkClient(clientdir)
c.set_default_mutable_keysize(TEST_RSA_KEY_SIZE)
c.nodeid = clientid
c.short_nodeid = b32encode(clientid).lower()[:8]
c._servers = self.all_servers # can be updated later
c.setServiceParent(self)
return c
def make_server(self, i, readonly=False):
serverid = hashutil.tagged_hash("serverid", str(i))[:20]
@ -350,6 +369,9 @@ class GridTestMixin:
num_servers=num_servers,
client_config_hooks=client_config_hooks)
self.g.setServiceParent(self.s)
self._record_webports_and_baseurls()
def _record_webports_and_baseurls(self):
self.client_webports = [c.getServiceNamed("webish").getPortnum()
for c in self.g.clients]
self.client_baseurls = [c.getServiceNamed("webish").getURL()
@ -358,6 +380,23 @@ class GridTestMixin:
def get_clientdir(self, i=0):
return self.g.clients[i].basedir
def set_clientdir(self, basedir, i=0):
self.g.clients[i].basedir = basedir
def get_client(self, i=0):
return self.g.clients[i]
def restart_client(self, i=0):
client = self.g.clients[i]
d = defer.succeed(None)
d.addCallback(lambda ign: self.g.removeService(client))
def _make_client(ign):
c = self.g.make_client(i, write_config=False)
self.g.clients[i] = c
self._record_webports_and_baseurls()
d.addCallback(_make_client)
return d
def get_serverdir(self, i):
return self.g.servers_by_number[i].storedir

View File

@ -22,7 +22,7 @@ class FakeClient:
class WebResultsRendering(unittest.TestCase, WebRenderingMixin):
def create_fake_client(self):
sb = StorageFarmBroker(None, True)
sb = StorageFarmBroker(None, True, 0, None)
# s.get_name() (the "short description") will be "v0-00000000".
# s.get_longname() will include the -long suffix.
# s.get_peerid() (i.e. tubid) will be "aaa.." or "777.." or "ceir.."
@ -41,7 +41,7 @@ class WebResultsRendering(unittest.TestCase, WebRenderingMixin):
"my-version": "ver",
"oldest-supported": "oldest",
}
s = NativeStorageServer(key_s, ann)
s = NativeStorageServer(key_s, ann, sb)
sb.test_add_server(peerid, s) # XXX: maybe use key_s?
c = FakeClient()
c.storage_broker = sb

View File

@ -0,0 +1,365 @@
import os.path
import re
from twisted.trial import unittest
from twisted.internet import defer
from twisted.internet import reactor
from twisted.python import usage
from allmydata.util.assertutil import precondition
from allmydata.util import fileutil
from allmydata.scripts.common import get_aliases
from allmydata.test.no_network import GridTestMixin
from .test_cli import CLITestMixin
from allmydata.scripts import magic_folder_cli
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import unicode_to_argv
from allmydata.frontends.magic_folder import MagicFolder
from allmydata import uri
class MagicFolderCLITestMixin(CLITestMixin, GridTestMixin):
def do_create_magic_folder(self, client_num):
d = self.do_cli("magic-folder", "create", "magic:", client_num=client_num)
def _done((rc,stdout,stderr)):
self.failUnlessEqual(rc, 0)
self.failUnlessIn("Alias 'magic' created", stdout)
self.failUnlessEqual(stderr, "")
aliases = get_aliases(self.get_clientdir(i=client_num))
self.failUnlessIn("magic", aliases)
self.failUnless(aliases["magic"].startswith("URI:DIR2:"))
d.addCallback(_done)
return d
def do_invite(self, client_num, nickname):
nickname_arg = unicode_to_argv(nickname)
d = self.do_cli("magic-folder", "invite", "magic:", nickname_arg, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def do_join(self, client_num, local_dir, invite_code):
precondition(isinstance(local_dir, unicode), local_dir=local_dir)
precondition(isinstance(invite_code, str), invite_code=invite_code)
local_dir_arg = unicode_to_argv(local_dir)
d = self.do_cli("magic-folder", "join", invite_code, local_dir_arg, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.failUnlessEqual(stdout, "")
self.failUnlessEqual(stderr, "")
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def do_leave(self, client_num):
d = self.do_cli("magic-folder", "leave", client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def check_joined_config(self, client_num, upload_dircap):
"""Tests that our collective directory has the readonly cap of
our upload directory.
"""
collective_readonly_cap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"collective_dircap"))
d = self.do_cli("ls", "--json", collective_readonly_cap, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
def test_joined_magic_folder((rc,stdout,stderr)):
readonly_cap = unicode(uri.from_string(upload_dircap).get_readonly().to_string(), 'utf-8')
s = re.search(readonly_cap, stdout)
self.failUnless(s is not None)
return None
d.addCallback(test_joined_magic_folder)
return d
def get_caps_from_files(self, client_num):
collective_dircap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"collective_dircap"))
upload_dircap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"magic_folder_dircap"))
self.failIf(collective_dircap is None or upload_dircap is None)
return collective_dircap, upload_dircap
def check_config(self, client_num, local_dir):
client_config = fileutil.read(os.path.join(self.get_clientdir(i=client_num), "tahoe.cfg"))
local_dir_utf8 = local_dir.encode('utf-8')
magic_folder_config = "[magic_folder]\nenabled = True\nlocal.directory = %s" % (local_dir_utf8,)
self.failUnlessIn(magic_folder_config, client_config)
def create_invite_join_magic_folder(self, nickname, local_dir):
nickname_arg = unicode_to_argv(nickname)
local_dir_arg = unicode_to_argv(local_dir)
d = self.do_cli("magic-folder", "create", "magic:", nickname_arg, local_dir_arg)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
client = self.get_client()
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
self.collective_dirnode = client.create_node_from_uri(self.collective_dircap)
self.upload_dirnode = client.create_node_from_uri(self.upload_dircap)
d.addCallback(_done)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, local_dir))
return d
def cleanup(self, res):
d = defer.succeed(None)
if self.magicfolder is not None:
d.addCallback(lambda ign: self.magicfolder.finish())
d.addCallback(lambda ign: res)
return d
def init_magicfolder(self, client_num, upload_dircap, collective_dircap, local_magic_dir, clock):
dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.get_clientdir(i=client_num))
magicfolder = MagicFolder(self.get_client(client_num), upload_dircap, collective_dircap, local_magic_dir,
dbfile, 0077, pending_delay=0.2, clock=clock)
magicfolder.downloader._turn_delay = 0
magicfolder.setServiceParent(self.get_client(client_num))
magicfolder.ready()
return magicfolder
def setup_alice_and_bob(self, alice_clock=reactor, bob_clock=reactor):
self.set_up_grid(num_clients=2)
self.alice_magicfolder = None
self.bob_magicfolder = None
alice_magic_dir = abspath_expanduser_unicode(u"Alice-magic", base=self.basedir)
self.mkdir_nonascii(alice_magic_dir)
bob_magic_dir = abspath_expanduser_unicode(u"Bob-magic", base=self.basedir)
self.mkdir_nonascii(bob_magic_dir)
# Alice creates a Magic Folder,
# invites herself, and then joins.
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice\u00F8"))
def get_invite_code(result):
self.invite_code = result[1].strip()
d.addCallback(get_invite_code)
d.addCallback(lambda ign: self.do_join(0, alice_magic_dir, self.invite_code))
def get_alice_caps(ign):
self.alice_collective_dircap, self.alice_upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_alice_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.alice_upload_dircap))
d.addCallback(lambda ign: self.check_config(0, alice_magic_dir))
def get_Alice_magicfolder(result):
self.alice_magicfolder = self.init_magicfolder(0, self.alice_upload_dircap,
self.alice_collective_dircap,
alice_magic_dir, alice_clock)
return result
d.addCallback(get_Alice_magicfolder)
# Alice invites Bob. Bob joins.
d.addCallback(lambda ign: self.do_invite(0, u"Bob\u00F8"))
def get_invite_code(result):
self.invite_code = result[1].strip()
d.addCallback(get_invite_code)
d.addCallback(lambda ign: self.do_join(1, bob_magic_dir, self.invite_code))
def get_bob_caps(ign):
self.bob_collective_dircap, self.bob_upload_dircap = self.get_caps_from_files(1)
d.addCallback(get_bob_caps)
d.addCallback(lambda ign: self.check_joined_config(1, self.bob_upload_dircap))
d.addCallback(lambda ign: self.check_config(1, bob_magic_dir))
def get_Bob_magicfolder(result):
self.bob_magicfolder = self.init_magicfolder(1, self.bob_upload_dircap,
self.bob_collective_dircap,
bob_magic_dir, bob_clock)
return result
d.addCallback(get_Bob_magicfolder)
return d
class CreateMagicFolder(MagicFolderCLITestMixin, unittest.TestCase):
def test_create_and_then_invite_join(self):
self.basedir = "cli/MagicFolder/create-and-then-invite-join"
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_create_error(self):
self.basedir = "cli/MagicFolder/create-error"
self.set_up_grid()
d = self.do_cli("magic-folder", "create", "m a g i c:", client_num=0)
def _done((rc, stdout, stderr)):
self.failIfEqual(rc, 0)
self.failUnlessIn("Alias names cannot contain spaces.", stderr)
d.addCallback(_done)
return d
def test_create_invite_join(self):
self.basedir = "cli/MagicFolder/create-invite-join"
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_cli("magic-folder", "create", "magic:", "Alice", local_dir)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(_done)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_create_invite_join_failure(self):
self.basedir = "cli/MagicFolder/create-invite-join-failure"
os.makedirs(self.basedir)
o = magic_folder_cli.CreateOptions()
o.parent = magic_folder_cli.MagicFolderCommand()
o.parent['node-directory'] = self.basedir
try:
o.parseArgs("magic:", "Alice", "-foo")
except usage.UsageError as e:
self.failUnlessIn("cannot start with '-'", str(e))
else:
self.fail("expected UsageError")
def test_join_failure(self):
self.basedir = "cli/MagicFolder/create-join-failure"
os.makedirs(self.basedir)
o = magic_folder_cli.JoinOptions()
o.parent = magic_folder_cli.MagicFolderCommand()
o.parent['node-directory'] = self.basedir
try:
o.parseArgs("URI:invite+URI:code", "-foo")
except usage.UsageError as e:
self.failUnlessIn("cannot start with '-'", str(e))
else:
self.fail("expected UsageError")
def test_join_twice_failure(self):
self.basedir = "cli/MagicFolder/create-join-twice-failure"
os.makedirs(self.basedir)
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
self.invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), self.invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
def join_again(ignore):
return self.do_cli("magic-folder", "join", self.invite_code, local_dir, client_num=0)
d.addCallback(join_again)
def get_results(result):
(rc, out, err) = result
self.failUnlessEqual(out, "")
self.failUnlessIn("This client has already joined a magic folder.", err)
self.failUnlessIn("Use the 'tahoe magic-folder leave' command first.", err)
self.failIfEqual(rc, 0)
d.addCallback(get_results)
return d
def test_join_leave_join(self):
self.basedir = "cli/MagicFolder/create-join-leave-join"
os.makedirs(self.basedir)
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
self.invite_code = None
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), self.invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
d.addCallback(lambda ign: self.do_leave(0))
d.addCallback(lambda ign: self.do_join(0, unicode(local_dir), self.invite_code))
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_join_failures(self):
self.basedir = "cli/MagicFolder/create-join-failures"
os.makedirs(self.basedir)
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
self.invite_code = None
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), self.invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
def check_success(result):
(rc, out, err) = result
self.failUnlessEqual(rc, 0)
def check_failure(result):
(rc, out, err) = result
self.failIfEqual(rc, 0)
def leave(ign):
return self.do_cli("magic-folder", "leave", client_num=0)
d.addCallback(leave)
d.addCallback(check_success)
collective_dircap_file = os.path.join(self.get_clientdir(i=0), u"private", u"collective_dircap")
upload_dircap = os.path.join(self.get_clientdir(i=0), u"private", u"magic_folder_dircap")
magic_folder_db_file = os.path.join(self.get_clientdir(i=0), u"private", u"magicfolderdb.sqlite")
def check_join_if_file(my_file):
fileutil.write(my_file, "my file data")
d2 = self.do_cli("magic-folder", "join", self.invite_code, local_dir, client_num=0)
d2.addCallback(check_failure)
return d2
for my_file in [collective_dircap_file, upload_dircap, magic_folder_db_file]:
d.addCallback(lambda ign, my_file: check_join_if_file(my_file), my_file)
d.addCallback(leave)
d.addCallback(check_success)
return d


@@ -4,7 +4,7 @@ from twisted.trial import unittest
from twisted.application import service
import allmydata
import allmydata.frontends.drop_upload
import allmydata.frontends.magic_folder
import allmydata.util.log
from allmydata.node import Node, OldConfigError, OldConfigOptionError, MissingConfigEntry, UnescapedHashError
@@ -27,7 +27,7 @@ BASECONFIG_I = ("[client]\n"
"introducer.furl = %s\n"
)
class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
class Basic(testutil.ReallyEqualMixin, testutil.NonASCIIPathMixin, unittest.TestCase):
def test_loadable(self):
basedir = "test_client.Basic.test_loadable"
os.mkdir(basedir)
@@ -251,7 +251,7 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
return [ s.get_longname() for s in sb.get_servers_for_psi(key) ]
def test_permute(self):
sb = StorageFarmBroker(None, True)
sb = StorageFarmBroker(None, True, 0, None)
for k in ["%d" % i for i in range(5)]:
ann = {"anonymous-storage-FURL": "pb://abcde@nowhere/fake",
"permutation-seed-base32": base32.b2a(k) }
@@ -263,7 +263,7 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(self._permute(sb, "one"), [])
def test_permute_with_preferred(self):
sb = StorageFarmBroker(None, True, ['1','4'])
sb = StorageFarmBroker(None, True, 0, None, ['1','4'])
for k in ["%d" % i for i in range(5)]:
ann = {"anonymous-storage-FURL": "pb://abcde@nowhere/fake",
"permutation-seed-base32": base32.b2a(k) }
@@ -314,76 +314,80 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
_check("helper.furl = None", None)
_check("helper.furl = pb://blah\n", "pb://blah")
def test_create_drop_uploader(self):
class MockDropUploader(service.MultiService):
name = 'drop-upload'
def test_create_magic_folder_service(self):
class MockMagicFolder(service.MultiService):
name = 'magic-folder'
def __init__(self, client, upload_dircap, local_dir_utf8, inotify=None):
def __init__(self, client, upload_dircap, collective_dircap, local_dir, dbfile, umask, inotify=None,
pending_delay=1.0):
service.MultiService.__init__(self)
self.client = client
self._umask = umask
self.upload_dircap = upload_dircap
self.local_dir_utf8 = local_dir_utf8
self.collective_dircap = collective_dircap
self.local_dir = local_dir
self.dbfile = dbfile
self.inotify = inotify
self.patch(allmydata.frontends.drop_upload, 'DropUploader', MockDropUploader)
def ready(self):
pass
self.patch(allmydata.frontends.magic_folder, 'MagicFolder', MockMagicFolder)
upload_dircap = "URI:DIR2:blah"
local_dir_utf8 = u"loc\u0101l_dir".encode('utf-8')
local_dir_u = self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir")
local_dir_utf8 = local_dir_u.encode('utf-8')
config = (BASECONFIG +
"[storage]\n" +
"enabled = false\n" +
"[drop_upload]\n" +
"[magic_folder]\n" +
"enabled = true\n")
basedir1 = "test_client.Basic.test_create_drop_uploader1"
basedir1 = "test_client.Basic.test_create_magic_folder_service1"
os.mkdir(basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"), config)
fileutil.write(os.path.join(basedir1, "private", "drop_upload_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir1, "private", "magic_folder_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir1, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "upload.dircap = " + upload_dircap + "\n")
config.replace("[magic_folder]\n", "[drop_upload]\n"))
self.failUnlessRaises(OldConfigOptionError, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
c1 = client.Client(basedir1)
uploader = c1.getServiceNamed('drop-upload')
self.failUnless(isinstance(uploader, MockDropUploader), uploader)
self.failUnlessReallyEqual(uploader.client, c1)
self.failUnlessReallyEqual(uploader.upload_dircap, upload_dircap)
self.failUnlessReallyEqual(uploader.local_dir_utf8, local_dir_utf8)
self.failUnless(uploader.inotify is None, uploader.inotify)
self.failUnless(uploader.running)
magicfolder = c1.getServiceNamed('magic-folder')
self.failUnless(isinstance(magicfolder, MockMagicFolder), magicfolder)
self.failUnlessReallyEqual(magicfolder.client, c1)
self.failUnlessReallyEqual(magicfolder.upload_dircap, upload_dircap)
self.failUnlessReallyEqual(os.path.basename(magicfolder.local_dir), local_dir_u)
self.failUnless(magicfolder.inotify is None, magicfolder.inotify)
self.failUnless(magicfolder.running)
class Boom(Exception):
pass
def BoomDropUploader(client, upload_dircap, local_dir_utf8, inotify=None):
def BoomMagicFolder(client, upload_dircap, collective_dircap, local_dir, dbfile,
inotify=None, pending_delay=1.0):
raise Boom()
self.patch(allmydata.frontends.magic_folder, 'MagicFolder', BoomMagicFolder)
logged_messages = []
def mock_log(*args, **kwargs):
logged_messages.append("%r %r" % (args, kwargs))
self.patch(allmydata.util.log, 'msg', mock_log)
self.patch(allmydata.frontends.drop_upload, 'DropUploader', BoomDropUploader)
basedir2 = "test_client.Basic.test_create_drop_uploader2"
basedir2 = "test_client.Basic.test_create_magic_folder_service2"
os.mkdir(basedir2)
os.mkdir(os.path.join(basedir2, "private"))
fileutil.write(os.path.join(basedir2, "tahoe.cfg"),
BASECONFIG +
"[drop_upload]\n" +
"[magic_folder]\n" +
"enabled = true\n" +
"local.directory = " + local_dir_utf8 + "\n")
fileutil.write(os.path.join(basedir2, "private", "drop_upload_dircap"), "URI:DIR2:blah")
c2 = client.Client(basedir2)
self.failUnlessRaises(KeyError, c2.getServiceNamed, 'drop-upload')
self.failUnless([True for arg in logged_messages if "Boom" in arg],
logged_messages)
fileutil.write(os.path.join(basedir2, "private", "magic_folder_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir2, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(Boom, client.Client, basedir2)
def flush_but_dont_ignore(res):


@@ -1519,7 +1519,7 @@ class FakeMutableFile:
def get_write_uri(self):
return self.uri.to_string()
def download_best_version(self):
def download_best_version(self, progress=None):
return defer.succeed(self.data)
def get_writekey(self):


@@ -1,181 +0,0 @@
import os, sys
from twisted.trial import unittest
from twisted.python import filepath, runtime
from twisted.internet import defer
from allmydata.interfaces import IDirectoryNode, NoSuchChildError
from allmydata.util import fake_inotify
from allmydata.util.encodingutil import get_filesystem_encoding
from allmydata.util.consumer import download_to_data
from allmydata.test.no_network import GridTestMixin
from allmydata.test.common_util import ReallyEqualMixin, NonASCIIPathMixin
from allmydata.test.common import ShouldFailMixin
from allmydata.frontends.drop_upload import DropUploader
class DropUploadTestMixin(GridTestMixin, ShouldFailMixin, ReallyEqualMixin, NonASCIIPathMixin):
"""
These tests will be run both with a mock notifier, and (on platforms that support it)
with the real INotify.
"""
def _get_count(self, name):
return self.stats_provider.get_stats()["counters"].get(name, 0)
def _test(self):
self.uploader = None
self.set_up_grid()
self.local_dir = os.path.join(self.basedir, self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir"))
self.mkdir_nonascii(self.local_dir)
self.client = self.g.clients[0]
self.stats_provider = self.client.stats_provider
d = self.client.create_dirnode()
def _made_upload_dir(n):
self.failUnless(IDirectoryNode.providedBy(n))
self.upload_dirnode = n
self.upload_dircap = n.get_uri()
self.uploader = DropUploader(self.client, self.upload_dircap, self.local_dir.encode('utf-8'),
inotify=self.inotify)
return self.uploader.startService()
d.addCallback(_made_upload_dir)
# Write something short enough for a LIT file.
d.addCallback(lambda ign: self._test_file(u"short", "test"))
# Write to the same file again with different data.
d.addCallback(lambda ign: self._test_file(u"short", "different"))
# Test that temporary files are not uploaded.
d.addCallback(lambda ign: self._test_file(u"tempfile", "test", temporary=True))
# Test that we tolerate creation of a subdirectory.
d.addCallback(lambda ign: os.mkdir(os.path.join(self.local_dir, u"directory")))
# Write something longer, and also try to test a Unicode name if the fs can represent it.
name_u = self.unicode_or_fallback(u"l\u00F8ng", u"long")
d.addCallback(lambda ign: self._test_file(name_u, "test"*100))
# TODO: test that causes an upload failure.
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_failed'), 0))
# Prevent unclean reactor errors.
def _cleanup(res):
d = defer.succeed(None)
if self.uploader is not None:
d.addCallback(lambda ign: self.uploader.finish(for_tests=True))
d.addCallback(lambda ign: res)
return d
d.addBoth(_cleanup)
return d
def _test_file(self, name_u, data, temporary=False):
previously_uploaded = self._get_count('drop_upload.files_uploaded')
previously_disappeared = self._get_count('drop_upload.files_disappeared')
d = defer.Deferred()
# Note: this relies on the fact that we only get one IN_CLOSE_WRITE notification per file
# (otherwise we would get a defer.AlreadyCalledError). Should we be relying on that?
self.uploader.set_uploaded_callback(d.callback)
path_u = os.path.join(self.local_dir, name_u)
if sys.platform == "win32":
path = filepath.FilePath(path_u)
else:
path = filepath.FilePath(path_u.encode(get_filesystem_encoding()))
# We don't use FilePath.setContent() here because it creates a temporary file that
# is renamed into place, which causes events that the test is not expecting.
f = open(path.path, "wb")
try:
if temporary and sys.platform != "win32":
os.unlink(path.path)
f.write(data)
finally:
f.close()
if temporary and sys.platform == "win32":
os.unlink(path.path)
self.notify_close_write(path)
if temporary:
d.addCallback(lambda ign: self.shouldFail(NoSuchChildError, 'temp file not uploaded', None,
self.upload_dirnode.get, name_u))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_disappeared'),
previously_disappeared + 1))
else:
d.addCallback(lambda ign: self.upload_dirnode.get(name_u))
d.addCallback(download_to_data)
d.addCallback(lambda actual_data: self.failUnlessReallyEqual(actual_data, data))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'),
previously_uploaded + 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_queued'), 0))
return d
class MockTest(DropUploadTestMixin, unittest.TestCase):
"""This can run on any platform, and even if twisted.internet.inotify can't be imported."""
def test_errors(self):
self.basedir = "drop_upload.MockTest.test_errors"
self.set_up_grid()
errors_dir = os.path.join(self.basedir, "errors_dir")
os.mkdir(errors_dir)
client = self.g.clients[0]
d = client.create_dirnode()
def _made_upload_dir(n):
self.failUnless(IDirectoryNode.providedBy(n))
upload_dircap = n.get_uri()
readonly_dircap = n.get_readonly_uri()
self.shouldFail(AssertionError, 'invalid local.directory', 'could not be represented',
DropUploader, client, upload_dircap, '\xFF', inotify=fake_inotify)
self.shouldFail(AssertionError, 'nonexistent local.directory', 'there is no directory',
DropUploader, client, upload_dircap, os.path.join(self.basedir, "Laputa"), inotify=fake_inotify)
fp = filepath.FilePath(self.basedir).child('NOT_A_DIR')
fp.touch()
self.shouldFail(AssertionError, 'non-directory local.directory', 'is not a directory',
DropUploader, client, upload_dircap, fp.path, inotify=fake_inotify)
self.shouldFail(AssertionError, 'bad upload.dircap', 'does not refer to a directory',
DropUploader, client, 'bad', errors_dir, inotify=fake_inotify)
self.shouldFail(AssertionError, 'non-directory upload.dircap', 'does not refer to a directory',
DropUploader, client, 'URI:LIT:foo', errors_dir, inotify=fake_inotify)
self.shouldFail(AssertionError, 'readonly upload.dircap', 'is not a writecap to a directory',
DropUploader, client, readonly_dircap, errors_dir, inotify=fake_inotify)
d.addCallback(_made_upload_dir)
return d
def test_drop_upload(self):
self.inotify = fake_inotify
self.basedir = "drop_upload.MockTest.test_drop_upload"
return self._test()
def notify_close_write(self, path):
self.uploader._notifier.event(path, self.inotify.IN_CLOSE_WRITE)
class RealTest(DropUploadTestMixin, unittest.TestCase):
"""This is skipped unless both Twisted and the platform support inotify."""
def test_drop_upload(self):
# We should always have runtime.platform.supportsINotify, because we're using
# Twisted >= 10.1.
if not runtime.platform.supportsINotify():
raise unittest.SkipTest("Drop-upload support can only be tested for-real on an OS that supports inotify or equivalent.")
self.inotify = None # use the appropriate inotify for the platform
self.basedir = "drop_upload.RealTest.test_drop_upload"
return self._test()
def notify_close_write(self, path):
# Writing to the file causes the notification.
pass


@@ -61,12 +61,15 @@ import os, sys, locale
from twisted.trial import unittest
from twisted.python.filepath import FilePath
from allmydata.test.common_util import ReallyEqualMixin
from allmydata.util import encodingutil, fileutil
from allmydata.util.encodingutil import argv_to_unicode, unicode_to_url, \
unicode_to_output, quote_output, quote_path, quote_local_unicode_path, \
unicode_platform, listdir_unicode, FilenameEncodingError, get_io_encoding, \
get_filesystem_encoding, to_str, from_utf8_or_none, _reload
quote_filepath, unicode_platform, listdir_unicode, FilenameEncodingError, \
get_io_encoding, get_filesystem_encoding, to_str, from_utf8_or_none, _reload, \
to_filepath, extend_filepath, unicode_from_filepath, unicode_segments_from
from allmydata.dirnode import normalize
from twisted.python import usage
@@ -410,6 +413,9 @@ class QuoteOutput(ReallyEqualMixin, unittest.TestCase):
self.test_quote_output_utf8(None)
def win32_other(win32, other):
return win32 if sys.platform == "win32" else other
class QuotePaths(ReallyEqualMixin, unittest.TestCase):
def test_quote_path(self):
self.failUnlessReallyEqual(quote_path([u'foo', u'bar']), "'foo/bar'")
@@ -419,9 +425,6 @@ class QuotePaths(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(quote_path([u'foo', u'\nbar'], quotemarks=True), '"foo/\\x0abar"')
self.failUnlessReallyEqual(quote_path([u'foo', u'\nbar'], quotemarks=False), '"foo/\\x0abar"')
def win32_other(win32, other):
return win32 if sys.platform == "win32" else other
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\C:\\foo"),
win32_other("'C:\\foo'", "'\\\\?\\C:\\foo'"))
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\C:\\foo", quotemarks=True),
@@ -435,6 +438,73 @@ class QuotePaths(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\UNC\\foo\\bar", quotemarks=False),
win32_other("\\\\foo\\bar", "\\\\?\\UNC\\foo\\bar"))
def test_quote_filepath(self):
foo_bar_fp = FilePath(win32_other(u'C:\\foo\\bar', u'/foo/bar'))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp),
win32_other("'C:\\foo\\bar'", "'/foo/bar'"))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp, quotemarks=True),
win32_other("'C:\\foo\\bar'", "'/foo/bar'"))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp, quotemarks=False),
win32_other("C:\\foo\\bar", "/foo/bar"))
if sys.platform == "win32":
foo_longfp = FilePath(u'\\\\?\\C:\\foo')
self.failUnlessReallyEqual(quote_filepath(foo_longfp),
"'C:\\foo'")
self.failUnlessReallyEqual(quote_filepath(foo_longfp, quotemarks=True),
"'C:\\foo'")
self.failUnlessReallyEqual(quote_filepath(foo_longfp, quotemarks=False),
"C:\\foo")
class FilePaths(ReallyEqualMixin, unittest.TestCase):
def test_to_filepath(self):
foo_u = win32_other(u'C:\\foo', u'/foo')
nosep_fp = to_filepath(foo_u)
sep_fp = to_filepath(foo_u + os.path.sep)
for fp in (nosep_fp, sep_fp):
self.failUnlessReallyEqual(fp, FilePath(foo_u))
if encodingutil.use_unicode_filepath:
self.failUnlessReallyEqual(fp.path, foo_u)
if sys.platform == "win32":
long_u = u'\\\\?\\C:\\foo'
longfp = to_filepath(long_u + u'\\')
self.failUnlessReallyEqual(longfp, FilePath(long_u))
self.failUnlessReallyEqual(longfp.path, long_u)
def test_extend_filepath(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_bar_baz_u = win32_other(u'C:\\foo\\bar\\baz', u'/foo/bar/baz')
for foo_fp in (foo_bfp, foo_ufp):
fp = extend_filepath(foo_fp, [u'bar', u'baz'])
self.failUnlessReallyEqual(fp, FilePath(foo_bar_baz_u))
if encodingutil.use_unicode_filepath:
self.failUnlessReallyEqual(fp.path, foo_bar_baz_u)
def test_unicode_from_filepath(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_u = win32_other(u'C:\\foo', u'/foo')
for foo_fp in (foo_bfp, foo_ufp):
self.failUnlessReallyEqual(unicode_from_filepath(foo_fp), foo_u)
def test_unicode_segments_from(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_bar_baz_bfp = FilePath(win32_other(b'C:\\foo\\bar\\baz', b'/foo/bar/baz'))
foo_bar_baz_ufp = FilePath(win32_other(u'C:\\foo\\bar\\baz', u'/foo/bar/baz'))
for foo_fp in (foo_bfp, foo_ufp):
for foo_bar_baz_fp in (foo_bar_baz_bfp, foo_bar_baz_ufp):
self.failUnlessReallyEqual(unicode_segments_from(foo_bar_baz_fp, foo_fp),
[u'bar', u'baz'])
class UbuntuKarmicUTF8(EncodingUtil, unittest.TestCase):
uname = 'Linux korn 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:05:01 UTC 2009 x86_64'


@@ -116,7 +116,7 @@ class AssistedUpload(unittest.TestCase):
timeout = 240 # It takes longer than 120 seconds on Francois's arm box.
def setUp(self):
self.s = FakeClient()
self.s.storage_broker = StorageFarmBroker(None, True)
self.s.storage_broker = StorageFarmBroker(None, True, 0, None)
self.s.secret_holder = client.SecretHolder("lease secret", "converge")
self.s.startService()

File diff suppressed because it is too large.


@@ -0,0 +1,28 @@
from twisted.trial import unittest
from allmydata import magicpath
class MagicPath(unittest.TestCase):
tests = {
u"Documents/work/critical-project/qed.txt": u"Documents@_work@_critical-project@_qed.txt",
u"Documents/emails/bunnyfufu@hoppingforest.net": u"Documents@_emails@_bunnyfufu@@hoppingforest.net",
u"foo/@/bar": u"foo@_@@@_bar",
}
def test_path2magic(self):
for test, expected in self.tests.items():
self.failUnlessEqual(magicpath.path2magic(test), expected)
def test_magic2path(self):
for expected, test in self.tests.items():
self.failUnlessEqual(magicpath.magic2path(test), expected)
def test_should_ignore(self):
self.failUnlessEqual(magicpath.should_ignore_file(u".bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"bashrc."), False)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/.bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/.branch/bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/.tree/branch/bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/bashrc"), False)


@@ -234,7 +234,7 @@ def make_storagebroker(s=None, num_peers=10):
s = FakeStorage()
peerids = [tagged_hash("peerid", "%d" % i)[:20]
for i in range(num_peers)]
storage_broker = StorageFarmBroker(None, True)
storage_broker = StorageFarmBroker(None, True, 0, None)
for peerid in peerids:
fss = FakeStorageServer(peerid, s)
ann = {"anonymous-storage-FURL": "pb://%s@nowhere/fake" % base32.b2a(peerid),


@@ -7,7 +7,8 @@ from twisted.python import usage, runtime
from twisted.internet import threads
from allmydata.util import fileutil, pollmixin
from allmydata.util.encodingutil import unicode_to_argv, unicode_to_output, get_filesystem_encoding
from allmydata.util.encodingutil import unicode_to_argv, unicode_to_output, \
get_filesystem_encoding
from allmydata.scripts import runner
from allmydata.client import Client
from allmydata.test import common_util
@@ -265,8 +266,6 @@ class CreateNode(unittest.TestCase):
self.failUnless(re.search(r"\n\[storage\]\n#.*\nenabled = true\n", content), content)
self.failUnless("\nreserved_space = 1G\n" in content)
self.failUnless(re.search(r"\n\[drop_upload\]\n#.*\nenabled = false\n", content), content)
# creating the node a second time should be rejected
rc, out, err = self.run_tahoe(argv)
self.failIfEqual(rc, 0, str((out, err, rc)))


@@ -198,7 +198,7 @@ class FakeClient:
mode = dict([i,mode] for i in range(num_servers))
servers = [ ("%20d"%fakeid, FakeStorageServer(mode[fakeid]))
for fakeid in range(self.num_servers) ]
self.storage_broker = StorageFarmBroker(None, permute_peers=True)
self.storage_broker = StorageFarmBroker(None, True, 0, None)
for (serverid, rref) in servers:
ann = {"anonymous-storage-FURL": "pb://%s@nowhere/fake" % base32.b2a(serverid),
"permutation-seed-base32": base32.b2a(serverid) }


@@ -441,6 +441,74 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
self.failIf(os.path.exists(fn))
self.failUnless(os.path.exists(fn2))
def test_rename_no_overwrite(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_rename_no_overwrite")
fileutil.make_dirs(workdir)
source_path = os.path.join(workdir, "source")
dest_path = os.path.join(workdir, "dest")
# when neither file exists
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
# when only dest exists
fileutil.write(dest_path, "dest")
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
self.failUnlessEqual(fileutil.read(dest_path), "dest")
# when both exist
fileutil.write(source_path, "source")
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
self.failUnlessEqual(fileutil.read(source_path), "source")
self.failUnlessEqual(fileutil.read(dest_path), "dest")
# when only source exists
os.remove(dest_path)
fileutil.rename_no_overwrite(source_path, dest_path)
self.failUnlessEqual(fileutil.read(dest_path), "source")
self.failIf(os.path.exists(source_path))
def test_replace_file(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_replace_file")
fileutil.make_dirs(workdir)
backup_path = os.path.join(workdir, "backup")
replaced_path = os.path.join(workdir, "replaced")
replacement_path = os.path.join(workdir, "replacement")
# when none of the files exist
self.failUnlessRaises(fileutil.ConflictError, fileutil.replace_file, replaced_path, replacement_path, backup_path)
# when only replaced exists
fileutil.write(replaced_path, "foo")
self.failUnlessRaises(fileutil.ConflictError, fileutil.replace_file, replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(replaced_path), "foo")
# when both replaced and replacement exist, but not backup
fileutil.write(replacement_path, "bar")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(backup_path), "foo")
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
# when only replacement exists
os.remove(backup_path)
os.remove(replaced_path)
fileutil.write(replacement_path, "bar")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
self.failIf(os.path.exists(backup_path))
# when replaced, replacement and backup all exist
fileutil.write(replaced_path, "foo")
fileutil.write(replacement_path, "bar")
fileutil.write(backup_path, "bak")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(backup_path), "foo")
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
def test_du(self):
basedir = "util/FileUtil/test_du"
fileutil.make_dirs(basedir)
@@ -460,12 +528,14 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
saved_cwd = os.path.normpath(os.getcwdu())
abspath_cwd = fileutil.abspath_expanduser_unicode(u".")
abspath_cwd_notlong = fileutil.abspath_expanduser_unicode(u".", long_path=False)
self.failUnless(isinstance(saved_cwd, unicode), saved_cwd)
self.failUnless(isinstance(abspath_cwd, unicode), abspath_cwd)
if sys.platform == "win32":
self.failUnlessReallyEqual(abspath_cwd, fileutil.to_windows_long_path(saved_cwd))
else:
self.failUnlessReallyEqual(abspath_cwd, saved_cwd)
self.failUnlessReallyEqual(abspath_cwd_notlong, saved_cwd)
self.failUnlessReallyEqual(fileutil.to_windows_long_path(u"\\\\?\\foo"), u"\\\\?\\foo")
self.failUnlessReallyEqual(fileutil.to_windows_long_path(u"\\\\.\\foo"), u"\\\\.\\foo")
@@ -494,7 +564,19 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(baz[4], bar[4]) # same drive
baz_notlong = fileutil.abspath_expanduser_unicode(u"\\baz", long_path=False)
self.failIf(baz_notlong.startswith(u"\\\\?\\"), baz_notlong)
self.failUnlessReallyEqual(baz_notlong[1 :], u":\\baz")
bar_notlong = fileutil.abspath_expanduser_unicode(u"\\bar", base=baz_notlong, long_path=False)
self.failIf(bar_notlong.startswith(u"\\\\?\\"), bar_notlong)
self.failUnlessReallyEqual(bar_notlong[1 :], u":\\bar")
# not u":\\baz\\bar", because \bar is absolute on the current drive.
self.failUnlessReallyEqual(baz_notlong[0], bar_notlong[0]) # same drive
self.failIfIn(u"~", fileutil.abspath_expanduser_unicode(u"~"))
self.failIfIn(u"~", fileutil.abspath_expanduser_unicode(u"~", long_path=False))
cwds = ['cwd']
try:
@@ -510,9 +592,26 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
for upath in (u'', u'fuu', u'f\xf9\xf9', u'/fuu', u'U:\\', u'~'):
uabspath = fileutil.abspath_expanduser_unicode(upath)
self.failUnless(isinstance(uabspath, unicode), uabspath)
uabspath_notlong = fileutil.abspath_expanduser_unicode(upath, long_path=False)
self.failUnless(isinstance(uabspath_notlong, unicode), uabspath_notlong)
finally:
os.chdir(saved_cwd)
def test_make_dirs_with_absolute_mode(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_make_dirs_with_absolute_mode")
fileutil.make_dirs(workdir)
abspath = fileutil.abspath_expanduser_unicode(u"a/b/c", base=workdir)
fileutil.make_dirs_with_absolute_mode(workdir, abspath, 0766)
new_mode = os.stat(os.path.join(workdir,"a/b/c")).st_mode & 0777
self.failUnlessEqual(new_mode, 0766)
new_mode = os.stat(os.path.join(workdir,"a/b")).st_mode & 0777
self.failUnlessEqual(new_mode, 0766)
new_mode = os.stat(os.path.join(workdir,"a")).st_mode & 0777
self.failUnlessEqual(new_mode, 0766)
new_mode = os.stat(workdir).st_mode & 0777
self.failIfEqual(new_mode, 0766)
def test_create_long_path(self):
workdir = u"test_create_long_path"
fileutil.make_dirs(workdir)
@@ -567,6 +666,60 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
disk = fileutil.get_disk_stats('.', 2**128)
self.failUnlessEqual(disk['avail'], 0)
def test_get_pathinfo(self):
basedir = "util/FileUtil/test_get_pathinfo"
fileutil.make_dirs(basedir)
# create a directory
self.mkdir(basedir, "a")
dirinfo = fileutil.get_pathinfo(basedir)
self.failUnlessTrue(dirinfo.isdir)
self.failUnlessTrue(dirinfo.exists)
self.failUnlessFalse(dirinfo.isfile)
self.failUnlessFalse(dirinfo.islink)
# create a file
f = os.path.join(basedir, "1.txt")
fileutil.write(f, "a"*10)
fileinfo = fileutil.get_pathinfo(f)
self.failUnlessTrue(fileinfo.isfile)
self.failUnlessTrue(fileinfo.exists)
self.failUnlessFalse(fileinfo.isdir)
self.failUnlessFalse(fileinfo.islink)
self.failUnlessEqual(fileinfo.size, 10)
# path at which nothing exists
dnename = os.path.join(basedir, "doesnotexist")
now = time.time()
dneinfo = fileutil.get_pathinfo(dnename, now=now)
self.failUnlessFalse(dneinfo.exists)
self.failUnlessFalse(dneinfo.isfile)
self.failUnlessFalse(dneinfo.isdir)
self.failUnlessFalse(dneinfo.islink)
self.failUnlessEqual(dneinfo.size, None)
self.failUnlessEqual(dneinfo.mtime, now)
self.failUnlessEqual(dneinfo.ctime, now)
def test_get_pathinfo_symlink(self):
if not hasattr(os, 'symlink'):
raise unittest.SkipTest("can't create symlinks on this platform")
basedir = "util/FileUtil/test_get_pathinfo"
fileutil.make_dirs(basedir)
f = os.path.join(basedir, "1.txt")
fileutil.write(f, "a"*10)
# create a symlink pointing to 1.txt
slname = os.path.join(basedir, "linkto1.txt")
os.symlink(f, slname)
symlinkinfo = fileutil.get_pathinfo(slname)
self.failUnlessTrue(symlinkinfo.islink)
self.failUnlessTrue(symlinkinfo.exists)
self.failUnlessFalse(symlinkinfo.isfile)
self.failUnlessFalse(symlinkinfo.isdir)
class PollMixinTests(unittest.TestCase):
def setUp(self):
self.pm = pollmixin.PollMixin()


@@ -83,7 +83,7 @@ class FakeUploader(service.Service):
helper_furl = None
helper_connected = False
def upload(self, uploadable):
def upload(self, uploadable, **kw):
d = uploadable.get_size()
d.addCallback(lambda size: uploadable.read(size))
def _got_data(datav):
@@ -246,7 +246,7 @@ class FakeClient(Client):
self._secret_holder = SecretHolder("lease secret", "convergence secret")
self.helper = None
self.convergence = "some random string"
self.storage_broker = StorageFarmBroker(None, permute_peers=True)
self.storage_broker = StorageFarmBroker(None, True, 0, None)
# fake knowledge of another server
self.storage_broker.test_add_server("other_nodeid",
FakeDisplayableServer(


@@ -1,5 +1,6 @@
import re
from datetime import timedelta
HOUR = 3600
DAY = 24*3600
@@ -8,11 +9,18 @@ MONTH = 30*DAY
YEAR = 365*DAY
def abbreviate_time(s):
postfix = ''
if isinstance(s, timedelta):
s = s.total_seconds()
if s >= 0.0:
postfix = ' ago'
else:
postfix = ' in the future'
def _plural(count, unit):
count = int(count)
if count == 1:
return "%d %s" % (count, unit)
return "%d %ss" % (count, unit)
return "%d %s%s" % (count, unit, postfix)
return "%d %ss%s" % (count, unit, postfix)
if s is None:
return "unknown"
if s < 120:
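To illustrate the timedelta handling added above (a sketch; the import path assumes this hunk belongs to allmydata/util/abbreviate.py):

from datetime import timedelta
from allmydata.util.abbreviate import abbreviate_time  # assumed location

print(abbreviate_time(timedelta(seconds=90)))   # rendered with an ' ago' postfix
print(abbreviate_time(timedelta(seconds=-45)))  # rendered with ' in the future'
print(abbreviate_time(90))                      # plain numbers still get no postfix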

View File

@@ -8,9 +8,12 @@ from twisted.internet.interfaces import IConsumer
class MemoryConsumer:
implements(IConsumer)
def __init__(self):
def __init__(self, progress=None):
self.chunks = []
self.done = False
self._progress = progress
def registerProducer(self, p, streaming):
self.producer = p
if streaming:
@@ -19,12 +22,19 @@
else:
while not self.done:
p.resumeProducing()
def write(self, data):
self.chunks.append(data)
if self._progress is not None:
self._progress.set_progress(sum([len(c) for c in self.chunks]))
def unregisterProducer(self):
self.done = True
def download_to_data(n, offset=0, size=None):
d = n.read(MemoryConsumer(), offset, size)
def download_to_data(n, offset=0, size=None, progress=None):
"""
:param progress: if set, an IProgress provider whose set_progress() is called with the total bytes downloaded so far
"""
d = n.read(MemoryConsumer(progress=progress), offset, size)
d.addCallback(lambda mc: "".join(mc.chunks))
return d
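A sketch of how a caller might thread the new progress= argument through a download. It assumes the PercentProgress helper added elsewhere in this diff lives at allmydata.util.progress, and 'filenode' stands in for any filenode whose get_size() returns the expected byte count:

from allmydata.util.consumer import download_to_data
from allmydata.util.progress import PercentProgress  # assumed module path

progress = PercentProgress(total_size=filenode.get_size())
d = download_to_data(filenode, progress=progress)
def _done(data):
    # MemoryConsumer.write() has been feeding set_progress() as chunks arrived
    print("downloaded %d bytes (%.1f%%)" % (len(data), progress.progress))
d.addCallback(_done)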


@@ -5,6 +5,7 @@ from foolscap.api import eventually, fireEventually
from twisted.internet import defer, reactor
from allmydata.util import log
from allmydata.util.assertutil import _assert
from allmydata.util.pollmixin import PollMixin
@@ -77,28 +78,35 @@
I am a helper mixin that maintains a collection of named hooks, primarily
for use in tests. Each hook is set to an unfired Deferred using 'set_hook',
and can then be fired exactly once at the appropriate time by '_call_hook'.
If 'ignore_count' is given, that number of calls to '_call_hook' will be
ignored before firing the hook.
I assume a '_hooks' attribute that should be set by the class constructor to
a dict mapping each valid hook name to None.
"""
def set_hook(self, name, d=None):
def set_hook(self, name, d=None, ignore_count=0):
"""
Called by the hook observer (e.g. by a test).
If d is not given, an unfired Deferred is created and returned.
The hook must not already be set.
"""
self._log("set_hook %r, ignore_count=%r" % (name, ignore_count))
if d is None:
d = defer.Deferred()
assert self._hooks[name] is None, self._hooks[name]
assert isinstance(d, defer.Deferred), d
self._hooks[name] = d
_assert(ignore_count >= 0, ignore_count=ignore_count)
_assert(name in self._hooks, name=name)
_assert(self._hooks[name] is None, name=name, hook=self._hooks[name])
_assert(isinstance(d, defer.Deferred), d=d)
self._hooks[name] = (d, ignore_count)
return d
def _call_hook(self, res, name):
def _call_hook(self, res, name, async=False):
"""
Called to trigger the hook, with argument 'res'. This is a no-op if the
hook is unset. Otherwise, the hook will be unset, and then its Deferred
will be fired synchronously.
Called to trigger the hook, with argument 'res'. This is a no-op if
the hook is unset. If the hook's ignore_count is positive, it will be
decremented; if it was already zero, the hook will be unset, and then
its Deferred will be fired synchronously.
The expected usage is "deferred.addBoth(self._call_hook, 'hookname')".
This ensures that if 'res' is a failure, the hook will be errbacked,
@@ -106,13 +114,25 @@
'res' is returned so that the current result or failure will be passed
through.
"""
d = self._hooks[name]
if d is None:
return defer.succeed(None)
self._hooks[name] = None
_with_log(d.callback, res)
hook = self._hooks[name]
if hook is None:
return res # pass on error/result
(d, ignore_count) = hook
self._log("call_hook %r, ignore_count=%r" % (name, ignore_count))
if ignore_count > 0:
self._hooks[name] = (d, ignore_count - 1)
else:
self._hooks[name] = None
if async:
_with_log(eventually_callback(d), res)
else:
_with_log(d.callback, res)
return res
def _log(self, msg):
log.msg(msg, level=log.NOISY)
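A self-contained toy (not project code) that mirrors the set_hook/_call_hook semantics above, showing how ignore_count defers firing until the Nth call:

from twisted.internet import defer

class MiniHookMixin(object):
    def __init__(self, names):
        self._hooks = dict((n, None) for n in names)
    def set_hook(self, name, d=None, ignore_count=0):
        if d is None:
            d = defer.Deferred()
        assert self._hooks[name] is None
        self._hooks[name] = (d, ignore_count)
        return d
    def _call_hook(self, res, name):
        hook = self._hooks[name]
        if hook is None:
            return res  # pass on error/result
        (d, ignore_count) = hook
        if ignore_count > 0:
            self._hooks[name] = (d, ignore_count - 1)
        else:
            self._hooks[name] = None
            d.callback(res)
        return res

m = MiniHookMixin(['uploaded'])
fired = []
m.set_hook('uploaded', ignore_count=2).addCallback(fired.append)
for res in [0, 1, 2]:
    m._call_hook(res, 'uploaded')
assert fired == [2]  # only the third call fired the Deferred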
def async_iterate(process, iterable, *extra_args, **kwargs):
"""


@@ -6,8 +6,9 @@ unicode and back.
import sys, os, re, locale
from types import NoneType
from allmydata.util.assertutil import precondition
from allmydata.util.assertutil import precondition, _assert
from twisted.python import usage
from twisted.python.filepath import FilePath
from allmydata.util import log
from allmydata.util.fileutil import abspath_expanduser_unicode
@@ -35,9 +36,10 @@ def check_encoding(encoding):
filesystem_encoding = None
io_encoding = None
is_unicode_platform = False
use_unicode_filepath = False
def _reload():
global filesystem_encoding, io_encoding, is_unicode_platform
global filesystem_encoding, io_encoding, is_unicode_platform, use_unicode_filepath
filesystem_encoding = canonical_encoding(sys.getfilesystemencoding())
check_encoding(filesystem_encoding)
@@ -61,6 +63,12 @@ def _reload():
is_unicode_platform = sys.platform in ["win32", "darwin"]
# Despite the Unicode-mode FilePath support added to Twisted in
# <https://twistedmatrix.com/trac/ticket/7805>, we can't yet use
# Unicode-mode FilePaths with INotify on non-Windows platforms
# due to <https://twistedmatrix.com/trac/ticket/7928>.
use_unicode_filepath = sys.platform == "win32"
_reload()
@@ -249,6 +257,54 @@ def quote_local_unicode_path(path, quotemarks=True):
return quote_output(path, quotemarks=quotemarks, quote_newlines=True)
def quote_filepath(path, quotemarks=True):
return quote_local_unicode_path(unicode_from_filepath(path), quotemarks=quotemarks)
def extend_filepath(fp, segments):
# We cannot use FilePath.preauthChild, because
# * it has the security flaw described in <https://twistedmatrix.com/trac/ticket/6527>;
# * it may return a FilePath in the wrong mode.
for segment in segments:
fp = fp.child(segment)
if isinstance(fp.path, unicode) and not use_unicode_filepath:
return FilePath(fp.path.encode(filesystem_encoding))
else:
return fp
def to_filepath(path):
precondition(isinstance(path, unicode if use_unicode_filepath else basestring),
path=path)
if isinstance(path, unicode) and not use_unicode_filepath:
path = path.encode(filesystem_encoding)
if sys.platform == "win32":
_assert(isinstance(path, unicode), path=path)
if path.startswith(u"\\\\?\\") and len(path) > 4:
# FilePath normally strips trailing path separators, but not in this case.
path = path.rstrip(u"\\")
return FilePath(path)
def _decode(s):
precondition(isinstance(s, basestring), s=s)
if isinstance(s, bytes):
return s.decode(filesystem_encoding)
else:
return s
def unicode_from_filepath(fp):
precondition(isinstance(fp, FilePath), fp=fp)
return _decode(fp.path)
def unicode_segments_from(base_fp, ancestor_fp):
precondition(isinstance(base_fp, FilePath), base_fp=base_fp)
precondition(isinstance(ancestor_fp, FilePath), ancestor_fp=ancestor_fp)
return base_fp.asTextMode().segmentsFrom(ancestor_fp.asTextMode())
def unicode_platform():
"""
@@ -296,3 +352,6 @@ def listdir_unicode(path):
return os.listdir(path)
else:
return listdir_unicode_fallback(path)
def listdir_filepath(fp):
return listdir_unicode(unicode_from_filepath(fp))


@@ -3,16 +3,20 @@ Futz with files like a pro.
"""
import sys, exceptions, os, stat, tempfile, time, binascii
from collections import namedtuple
from errno import ENOENT
if sys.platform == "win32":
from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, c_ulonglong, \
create_unicode_buffer, get_last_error
from ctypes.wintypes import BOOL, DWORD, LPCWSTR, LPWSTR
from ctypes.wintypes import BOOL, DWORD, LPCWSTR, LPWSTR, LPVOID, HANDLE
from twisted.python import log
from pycryptopp.cipher.aes import AES
from allmydata.util.assertutil import _assert
def rename(src, dst, tries=4, basedelay=0.1):
""" Here is a superkludge to workaround the fact that occasionally on
@@ -140,6 +144,31 @@ class EncryptedTemporaryFile:
old end-of-file are unspecified. The file position after this operation is unspecified."""
self.file.truncate(newsize)
def make_dirs_with_absolute_mode(parent, dirname, mode):
"""
Make directory `dirname` and chmod it to `mode` afterwards.
We chmod all parent directories of `dirname` until we reach
`parent`.
"""
precondition_abspath(parent)
precondition_abspath(dirname)
if not is_ancestor_path(parent, dirname):
raise AssertionError("dirname must be a descendant of parent")
make_dirs(dirname)
while dirname != parent:
os.chmod(dirname, mode)
# FIXME: doesn't seem to work on Windows for long paths
old_dirname, dirname = dirname, os.path.dirname(dirname)
_assert(len(dirname) < len(old_dirname), dirname=dirname, old_dirname=old_dirname)
def is_ancestor_path(parent, dirname):
while dirname != parent:
# FIXME: doesn't seem to work on Windows for long paths
old_dirname, dirname = dirname, os.path.dirname(dirname)
if len(dirname) >= len(old_dirname):
return False
return True
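For example (hypothetical paths), the walk above chmods every level between the new leaf and the parent, but leaves the parent untouched:

# Creates a/b/c under /srv/magic, then chmods c, b, and a to 0o700
# (whether or not they already existed); /srv/magic keeps its own mode.
make_dirs_with_absolute_mode(u"/srv/magic", u"/srv/magic/a/b/c", 0o700)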
def make_dirs(dirname, mode=0777):
"""
@@ -279,17 +308,18 @@ try:
except ImportError:
pass
def abspath_expanduser_unicode(path, base=None):
def abspath_expanduser_unicode(path, base=None, long_path=True):
"""
Return the absolute version of a path. If 'base' is given and 'path' is relative,
the path will be expanded relative to 'base'.
'path' must be a Unicode string. 'base', if given, must be a Unicode string
corresponding to an absolute path as returned by a previous call to
abspath_expanduser_unicode.
On Windows, the result will be a long path unless long_path is given as False.
"""
if not isinstance(path, unicode):
raise AssertionError("paths must be Unicode strings")
if base is not None:
if base is not None and long_path:
precondition_abspath(base)
path = expanduser(path)
@@ -316,7 +346,7 @@ def abspath_expanduser_unicode(path, base=None):
# there is always at least one Unicode path component.
path = os.path.normpath(path)
if sys.platform == "win32":
if sys.platform == "win32" and long_path:
path = to_windows_long_path(path)
return path
@@ -514,3 +544,157 @@ def get_available_space(whichdir, reserved_space):
except EnvironmentError:
log.msg("OS call to get disk statistics failed")
return 0
if sys.platform == "win32":
# <http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx>
CreateFileW = WINFUNCTYPE(
HANDLE, LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE,
use_last_error=True
)(("CreateFileW", windll.kernel32))
GENERIC_WRITE = 0x40000000
FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = 0xFFFFFFFF
# <http://msdn.microsoft.com/en-us/library/aa364439%28v=vs.85%29.aspx>
FlushFileBuffers = WINFUNCTYPE(
BOOL, HANDLE,
use_last_error=True
)(("FlushFileBuffers", windll.kernel32))
# <http://msdn.microsoft.com/en-us/library/ms724211%28v=vs.85%29.aspx>
CloseHandle = WINFUNCTYPE(
BOOL, HANDLE,
use_last_error=True
)(("CloseHandle", windll.kernel32))
# <http://social.msdn.microsoft.com/forums/en-US/netfxbcl/thread/4465cafb-f4ed-434f-89d8-c85ced6ffaa8/>
def flush_volume(path):
abspath = os.path.realpath(path)
if abspath.startswith("\\\\?\\"):
abspath = abspath[4 :]
drive = os.path.splitdrive(abspath)[0]
print "flushing %r" % (drive,)
hVolume = CreateFileW(u"\\\\.\\" + drive,
GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
None,
OPEN_EXISTING,
0,
None
)
if hVolume == INVALID_HANDLE_VALUE:
raise WinError(get_last_error())
if FlushFileBuffers(hVolume) == 0:
raise WinError(get_last_error())
CloseHandle(hVolume)
else:
def flush_volume(path):
# use sync()?
pass
class ConflictError(Exception):
pass
class UnableToUnlinkReplacementError(Exception):
pass
def reraise(wrapper):
_, exc, tb = sys.exc_info()
wrapper_exc = wrapper("%s: %s" % (exc.__class__.__name__, exc))
raise wrapper_exc.__class__, wrapper_exc, tb
if sys.platform == "win32":
# <https://msdn.microsoft.com/en-us/library/windows/desktop/aa365512%28v=vs.85%29.aspx>
ReplaceFileW = WINFUNCTYPE(
BOOL, LPCWSTR, LPCWSTR, LPCWSTR, DWORD, LPVOID, LPVOID,
use_last_error=True
)(("ReplaceFileW", windll.kernel32))
REPLACEFILE_IGNORE_MERGE_ERRORS = 0x00000002
# <https://msdn.microsoft.com/en-us/library/windows/desktop/ms681382%28v=vs.85%29.aspx>
ERROR_FILE_NOT_FOUND = 2
def rename_no_overwrite(source_path, dest_path):
os.rename(source_path, dest_path)
def replace_file(replaced_path, replacement_path, backup_path):
precondition_abspath(replaced_path)
precondition_abspath(replacement_path)
precondition_abspath(backup_path)
r = ReplaceFileW(replaced_path, replacement_path, backup_path,
REPLACEFILE_IGNORE_MERGE_ERRORS, None, None)
if r == 0:
# The UnableToUnlinkReplacementError case does not happen on Windows;
# all errors should be treated as signalling a conflict.
err = get_last_error()
if err != ERROR_FILE_NOT_FOUND:
raise ConflictError("WinError: %s" % (WinError(err),))
try:
rename_no_overwrite(replacement_path, replaced_path)
except EnvironmentError:
reraise(ConflictError)
else:
def rename_no_overwrite(source_path, dest_path):
# link will fail with EEXIST if there is already something at dest_path.
os.link(source_path, dest_path)
try:
os.unlink(source_path)
except EnvironmentError:
reraise(UnableToUnlinkReplacementError)
def replace_file(replaced_path, replacement_path, backup_path):
precondition_abspath(replaced_path)
precondition_abspath(replacement_path)
precondition_abspath(backup_path)
if not os.path.exists(replacement_path):
raise ConflictError("Replacement file not found: %r" % (replacement_path,))
try:
os.rename(replaced_path, backup_path)
except OSError as e:
if e.errno != ENOENT:
raise
try:
rename_no_overwrite(replacement_path, replaced_path)
except EnvironmentError:
reraise(ConflictError)
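Usage sketch for the POSIX branch above (the paths are hypothetical; all three must be absolute, per the preconditions):

# Promote a freshly written temporary file over foo.txt, keeping the old
# contents at foo.txt.backup; a missing replacement raises ConflictError.
replace_file(u"/srv/magic/foo.txt",
             u"/srv/magic/foo.txt.tmp",
             u"/srv/magic/foo.txt.backup")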
PathInfo = namedtuple('PathInfo', 'isdir isfile islink exists size mtime ctime')
def get_pathinfo(path_u, now=None):
try:
statinfo = os.lstat(path_u)
mode = statinfo.st_mode
return PathInfo(isdir =stat.S_ISDIR(mode),
isfile=stat.S_ISREG(mode),
islink=stat.S_ISLNK(mode),
exists=True,
size =statinfo.st_size,
mtime =statinfo.st_mtime,
ctime =statinfo.st_ctime,
)
except OSError as e:
if e.errno == ENOENT:
if now is None:
now = time.time()
return PathInfo(isdir =False,
isfile=False,
islink=False,
exists=False,
size =None,
mtime =now,
ctime =now,
)
raise
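A quick illustration of the PathInfo results (the path is hypothetical):

info = get_pathinfo(u"/srv/magic/1.txt")
if not info.exists:
    print("missing: mtime/ctime default to now")
elif info.isfile:
    print("regular file, %d bytes" % (info.size,))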


@@ -0,0 +1,37 @@
"""
Utilities relating to computing progress information.
Ties in with the "consumer" module as well.
"""
from allmydata.interfaces import IProgress
from zope.interface import implementer
@implementer(IProgress)
class PercentProgress(object):
"""
Represents progress as a percentage, from 0.0 to 100.0
"""
def __init__(self, total_size=None):
self._value = 0.0
self.set_progress_total(total_size)
def set_progress(self, value):
"IProgress API"
self._value = value
def set_progress_total(self, size):
"IProgress API"
if size is not None:
size = float(size)
self._total_size = size
@property
def progress(self):
if self._total_size is None:
return 0 # or 1.0?
if self._total_size <= 0.0:
return 0
return (self._value / self._total_size) * 100.0
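For instance (the module path allmydata.util.progress is an assumption based on this diff):

from allmydata.util.progress import PercentProgress  # assumed location

p = PercentProgress(total_size=2048)
p.set_progress(512)
assert p.progress == 25.0
p.set_progress_total(None)  # total now unknown
assert p.progress == 0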


@@ -0,0 +1,68 @@
import simplejson
from twisted.web.server import UnsupportedMethod
from nevow import rend
from nevow.inevow import IRequest
from allmydata.web.common import get_arg, WebError
class MagicFolderWebApi(rend.Page):
"""
I provide the web-based API for Magic Folder status etc.
"""
def __init__(self, client):
super(MagicFolderWebApi, self).__init__(client)
self.client = client
def _render_json(self, req):
req.setHeader("content-type", "application/json")
data = []
for item in self.client._magic_folder.uploader.get_status():
d = dict(
path=item.relpath_u,
status=item.status_history()[-1][0],
kind='upload',
)
for (status, ts) in item.status_history():
d[status + '_at'] = ts
d['percent_done'] = item.progress.progress
data.append(d)
for item in self.client._magic_folder.downloader.get_status():
d = dict(
path=item.relpath_u,
status=item.status_history()[-1][0],
kind='download',
)
for (status, ts) in item.status_history():
d[status + '_at'] = ts
d['percent_done'] = item.progress.progress
data.append(d)
return simplejson.dumps(data)
def renderHTTP(self, ctx):
req = IRequest(ctx)
t = get_arg(req, "t", None)
if req.method != 'POST':
raise UnsupportedMethod(('POST',))
token = get_arg(req, "token", None)
# XXX need constant-time comparison?
if token is None or token != self.client.get_auth_token():
raise WebError("Missing or invalid token.", 400)
if t is None:
return rend.Page.renderHTTP(self, ctx)
t = t.strip()
if t == 'json':
return self._render_json(req)
raise WebError("'%s' invalid type for 't' arg" % (t,), 400)
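The endpoint can then be exercised with a plain POST (a sketch: the port and token value are placeholders, and the real token comes from the client's get_auth_token() referenced above):

import urllib, urllib2

body = urllib.urlencode({"token": "NODE-AUTH-TOKEN", "t": "json"})
resp = urllib2.urlopen("http://127.0.0.1:3456/magic_folder", body)
print(resp.read())  # JSON list of upload/download status dicts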


@@ -12,7 +12,7 @@ from allmydata import get_package_versions_string
from allmydata.util import log
from allmydata.interfaces import IFileNode
from allmydata.web import filenode, directory, unlinked, status, operations
from allmydata.web import storage
from allmydata.web import storage, magic_folder
from allmydata.web.common import abbreviate_size, getxmlfile, WebError, \
get_arg, RenderMixin, get_format, get_mutable_type, render_time_delta, render_time, render_time_attr
@@ -154,6 +154,9 @@ class Root(rend.Page):
self.child_uri = URIHandler(client)
self.child_cap = URIHandler(client)
# handler for "/magic_folder" URIs
self.child_magic_folder = magic_folder.MagicFolderWebApi(client)
self.child_file = FileHandler(client)
self.child_named = FileHandler(client)
self.child_status = status.Status(client.get_history())


@@ -20,13 +20,16 @@
<li>Files Retrieved (mutable): <span n:render="retrieves" /></li>
</ul>
<h2>Drop-Uploader</h2>
<h2>Magic Folder</h2>
<ul>
<li>Local Directories Monitored: <span n:render="drop_monitored" /></li>
<li>Files Uploaded: <span n:render="drop_uploads" /></li>
<li>File Changes Queued: <span n:render="drop_queued" /></li>
<li>Failed Uploads: <span n:render="drop_failed" /></li>
<li>Local Directories Monitored: <span n:render="magic_uploader_monitored" /></li>
<li>Files Uploaded: <span n:render="magic_uploader_succeeded" /></li>
<li>Files Queued for Upload: <span n:render="magic_uploader_queued" /></li>
<li>Failed Uploads: <span n:render="magic_uploader_failed" /></li>
<li>Files Downloaded: <span n:render="magic_downloader_succeeded" /></li>
<li>Files Queued for Download: <span n:render="magic_downloader_queued" /></li>
<li>Failed Downloads: <span n:render="magic_downloader_failed" /></li>
</ul>
<h2>Raw Stats:</h2>


@@ -1273,21 +1273,34 @@ class Statistics(rend.Page):
return "%s files / %s bytes (%s)" % (files, bytes,
abbreviate_size(bytes))
def render_drop_monitored(self, ctx, data):
dirs = data["counters"].get("drop_upload.dirs_monitored", 0)
def render_magic_uploader_monitored(self, ctx, data):
dirs = data["counters"].get("magic_folder.uploader.dirs_monitored", 0)
return "%s directories" % (dirs,)
def render_drop_uploads(self, ctx, data):
def render_magic_uploader_succeeded(self, ctx, data):
# TODO: bytes uploaded
files = data["counters"].get("drop_upload.files_uploaded", 0)
files = data["counters"].get("magic_folder.uploader.objects_succeeded", 0)
return "%s files" % (files,)
def render_drop_queued(self, ctx, data):
files = data["counters"].get("drop_upload.files_queued", 0)
def render_magic_uploader_queued(self, ctx, data):
files = data["counters"].get("magic_folder.uploader.objects_queued", 0)
return "%s files" % (files,)
def render_drop_failed(self, ctx, data):
files = data["counters"].get("drop_upload.files_failed", 0)
def render_magic_uploader_failed(self, ctx, data):
files = data["counters"].get("magic_folder.uploader.objects_failed", 0)
return "%s files" % (files,)
def render_magic_downloader_succeeded(self, ctx, data):
# TODO: bytes downloaded
files = data["counters"].get("magic_folder.downloader.objects_succeeded", 0)
return "%s files" % (files,)
def render_magic_downloader_queued(self, ctx, data):
files = data["counters"].get("magic_folder.downloader.objects_queued", 0)
return "%s files" % (files,)
def render_magic_downloader_failed(self, ctx, data):
files = data["counters"].get("magic_folder.downloader.objects_failed", 0)
return "%s files" % (files,)
def render_raw(self, ctx, data):
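A minimal sketch of the stats dictionary these renderers consume (counter names as in the code above; the numbers are made up):

stats = {"counters": {"magic_folder.uploader.objects_succeeded": 3,
                      "magic_folder.downloader.objects_queued": 1}}
# render_magic_uploader_succeeded(ctx, stats)  -> "3 files"
# render_magic_downloader_queued(ctx, stats)   -> "1 files"
# Counters that are absent fall back to 0 via .get(key, 0).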

View File

@@ -0,0 +1,285 @@
# Windows near-equivalent to twisted.internet.inotify
# This should only be imported on Windows.
import os, sys
from twisted.internet import reactor
from twisted.internet.threads import deferToThread
from allmydata.util.fake_inotify import humanReadableMask, \
IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED
# Reference the re-exported names above so pyflakes does not flag them
# as unused imports.
[humanReadableMask,
 IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE,
 IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF,
 IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW,
 IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED]
from allmydata.util.assertutil import _assert, precondition
from allmydata.util.encodingutil import quote_output
from allmydata.util import log, fileutil
from allmydata.util.pollmixin import PollMixin
from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, create_string_buffer, \
addressof, get_last_error
from ctypes.wintypes import BOOL, HANDLE, DWORD, LPCWSTR, LPVOID
# <http://msdn.microsoft.com/en-us/library/gg258116%28v=vs.85%29.aspx>
FILE_LIST_DIRECTORY = 1
# <http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx>
CreateFileW = WINFUNCTYPE(
HANDLE, LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE,
use_last_error=True
)(("CreateFileW", windll.kernel32))
FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
FILE_SHARE_DELETE = 0x00000004
OPEN_EXISTING = 3
FILE_FLAG_BACKUP_SEMANTICS = 0x02000000
# <http://msdn.microsoft.com/en-us/library/ms724211%28v=vs.85%29.aspx>
CloseHandle = WINFUNCTYPE(
BOOL, HANDLE,
use_last_error=True
)(("CloseHandle", windll.kernel32))
# <http://msdn.microsoft.com/en-us/library/aa365465%28v=vs.85%29.aspx>
ReadDirectoryChangesW = WINFUNCTYPE(
BOOL, HANDLE, LPVOID, DWORD, BOOL, DWORD, POINTER(DWORD), LPVOID, LPVOID,
use_last_error=True
)(("ReadDirectoryChangesW", windll.kernel32))
FILE_NOTIFY_CHANGE_FILE_NAME = 0x00000001
FILE_NOTIFY_CHANGE_DIR_NAME = 0x00000002
FILE_NOTIFY_CHANGE_ATTRIBUTES = 0x00000004
#FILE_NOTIFY_CHANGE_SIZE = 0x00000008
FILE_NOTIFY_CHANGE_LAST_WRITE = 0x00000010
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
#FILE_NOTIFY_CHANGE_CREATION = 0x00000040
FILE_NOTIFY_CHANGE_SECURITY = 0x00000100
# <http://msdn.microsoft.com/en-us/library/aa364391%28v=vs.85%29.aspx>
FILE_ACTION_ADDED = 0x00000001
FILE_ACTION_REMOVED = 0x00000002
FILE_ACTION_MODIFIED = 0x00000003
FILE_ACTION_RENAMED_OLD_NAME = 0x00000004
FILE_ACTION_RENAMED_NEW_NAME = 0x00000005
_action_to_string = {
FILE_ACTION_ADDED : "FILE_ACTION_ADDED",
FILE_ACTION_REMOVED : "FILE_ACTION_REMOVED",
FILE_ACTION_MODIFIED : "FILE_ACTION_MODIFIED",
FILE_ACTION_RENAMED_OLD_NAME : "FILE_ACTION_RENAMED_OLD_NAME",
FILE_ACTION_RENAMED_NEW_NAME : "FILE_ACTION_RENAMED_NEW_NAME",
}
_action_to_inotify_mask = {
FILE_ACTION_ADDED : IN_CREATE,
FILE_ACTION_REMOVED : IN_DELETE,
FILE_ACTION_MODIFIED : IN_CHANGED,
FILE_ACTION_RENAMED_OLD_NAME : IN_MOVED_FROM,
FILE_ACTION_RENAMED_NEW_NAME : IN_MOVED_TO,
}
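# e.g. Event(FILE_ACTION_ADDED, u"new.txt") maps to inotify's IN_CREATE.
# (The notification loop below currently reports IN_CHANGED for every
# event; see the commented-out translation in INotify._thread.)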
INVALID_HANDLE_VALUE = 0xFFFFFFFF
# Win32 BOOL values (nonzero means TRUE); used for the bWatchSubtree
# argument to ReadDirectoryChangesW.
TRUE = 1
FALSE = 0
class Event(object):
"""
* action: a FILE_ACTION_* constant (not a bit mask)
* filename: a Unicode string, giving the name relative to the watched directory
"""
def __init__(self, action, filename):
self.action = action
self.filename = filename
def __repr__(self):
return "Event(%r, %r)" % (_action_to_string.get(self.action, self.action), self.filename)
class FileNotifyInformation(object):
"""
I represent a buffer containing FILE_NOTIFY_INFORMATION structures, and can
iterate over those structures, decoding them into Event objects.
"""
def __init__(self, size=1024):
self.size = size
self.buffer = create_string_buffer(size)
address = addressof(self.buffer)
_assert(address & 3 == 0, "address 0x%X returned by create_string_buffer is not DWORD-aligned" % (address,))
self.data = None
def read_changes(self, hDirectory, recursive, filter):
bytes_returned = DWORD(0)
r = ReadDirectoryChangesW(hDirectory,
self.buffer,
self.size,
recursive,
filter,
byref(bytes_returned),
None, # NULL -> no overlapped I/O
None # NULL -> no completion routine
)
if r == 0:
raise WinError(get_last_error())
self.data = self.buffer.raw[:bytes_returned.value]
def __iter__(self):
# Iterator implemented as generator: <http://docs.python.org/library/stdtypes.html#generator-types>
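# Each FILE_NOTIFY_INFORMATION record in self.data is laid out as:
#   DWORD NextEntryOffset  (offset 0; 0 means this is the last record)
#   DWORD Action           (offset 4; a FILE_ACTION_* constant)
#   DWORD FileNameLength   (offset 8; length of FileName in *bytes*)
#   WCHAR FileName[]       (offset 12; UTF-16-LE, not NUL-terminated)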
pos = 0
while True:
bytes = self._read_dword(pos+8)
s = Event(self._read_dword(pos+4),
self.data[pos+12 : pos+12+bytes].decode('utf-16-le'))
next_entry_offset = self._read_dword(pos)
yield s
if next_entry_offset == 0:
break
pos = pos + next_entry_offset
def _read_dword(self, i):
# little-endian
return ( ord(self.data[i]) |
(ord(self.data[i+1]) << 8) |
(ord(self.data[i+2]) << 16) |
(ord(self.data[i+3]) << 24))
def _open_directory(path_u):
hDirectory = CreateFileW(path_u,
FILE_LIST_DIRECTORY, # access rights
FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
# don't prevent other processes from accessing
None, # no security descriptor
OPEN_EXISTING, # directory must already exist
FILE_FLAG_BACKUP_SEMANTICS, # necessary to open a directory
None # no template file
)
if hDirectory == INVALID_HANDLE_VALUE:
e = WinError(get_last_error())
raise OSError("Opening directory %s gave WinError: %s" % (quote_output(path_u), e))
return hDirectory
def simple_test():
path_u = u"test"
filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
recursive = FALSE
hDirectory = _open_directory(path_u)
fni = FileNotifyInformation()
print "Waiting..."
while True:
fni.read_changes(hDirectory, recursive, filter)
print repr(fni.data)
for info in fni:
print info
NOT_STARTED = "NOT_STARTED"
STARTED = "STARTED"
STOPPING = "STOPPING"
STOPPED = "STOPPED"
class INotify(PollMixin):
def __init__(self):
self._state = NOT_STARTED
self._filter = None
self._callbacks = None
self._hDirectory = None
self._path = None
self._pending = set()
self._pending_delay = 1.0
self.recursive_includes_new_subdirectories = True
def set_pending_delay(self, delay):
self._pending_delay = delay
def startReading(self):
deferToThread(self._thread)
return self.poll(lambda: self._state != NOT_STARTED)
def stopReading(self):
# FIXME race conditions
if self._state != STOPPED:
self._state = STOPPING
def wait_until_stopped(self):
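# Touch a dummy file inside the watched directory so that the blocking
# ReadDirectoryChangesW call returns and the thread can notice STOPPING.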
fileutil.write(os.path.join(self._path.path, u".ignore-me"), "")
return self.poll(lambda: self._state == STOPPED)
def watch(self, path, mask=IN_WATCH_MASK, autoAdd=False, callbacks=None, recursive=False):
precondition(self._state == NOT_STARTED, "watch() can only be called before startReading()", state=self._state)
precondition(self._filter is None, "only one watch is supported")
precondition(isinstance(autoAdd, bool), autoAdd=autoAdd)
precondition(isinstance(recursive, bool), recursive=recursive)
#precondition(autoAdd == recursive, "need autoAdd and recursive to be the same", autoAdd=autoAdd, recursive=recursive)
self._path = path
path_u = path.path
if not isinstance(path_u, unicode):
path_u = path_u.decode(sys.getfilesystemencoding())
_assert(isinstance(path_u, unicode), path_u=path_u)
self._filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
if mask & (IN_ACCESS | IN_CLOSE_NOWRITE | IN_OPEN):
self._filter = self._filter | FILE_NOTIFY_CHANGE_LAST_ACCESS
if mask & IN_ATTRIB:
self._filter = self._filter | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_SECURITY
self._recursive = TRUE if recursive else FALSE
self._callbacks = callbacks or []
self._hDirectory = _open_directory(path_u)
def _thread(self):
try:
_assert(self._filter is not None, "no watch set")
# To call Twisted or Tahoe APIs, use reactor.callFromThread as described in
# <http://twistedmatrix.com/documents/current/core/howto/threading.html>.
fni = FileNotifyInformation()
while True:
self._state = STARTED
fni.read_changes(self._hDirectory, self._recursive, self._filter)
for info in fni:
if self._state == STOPPING:
hDirectory = self._hDirectory
self._callbacks = None
self._hDirectory = None
CloseHandle(hDirectory)
self._state = STOPPED
return
path = self._path.preauthChild(info.filename) # FilePath with Unicode path
#mask = _action_to_inotify_mask.get(info.action, IN_CHANGED)
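# Coalesce bursts of events: a path already in self._pending has a
# callback scheduled, so further events for it within _pending_delay
# are dropped. Note that callbacks are always invoked with IN_CHANGED,
# regardless of the actual FILE_ACTION_* observed.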
def _maybe_notify(path):
if path not in self._pending:
self._pending.add(path)
def _do_callbacks():
self._pending.remove(path)
for cb in self._callbacks:
try:
cb(None, path, IN_CHANGED)
except Exception, e:
log.err(e)
reactor.callLater(self._pending_delay, _do_callbacks)
reactor.callFromThread(_maybe_notify, path)
except Exception, e:
log.err(e)
self._state = STOPPED
raise
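A hypothetical usage sketch of the INotify class above (FilePath and the reactor come from Twisted; the watched path is a placeholder):

from twisted.internet import reactor
from twisted.python.filepath import FilePath

def report(ignored, path, mask):
    print "event:", path.path, humanReadableMask(mask)

notifier = INotify()
notifier.watch(FilePath(u"C:\\magic"), callbacks=[report], recursive=True)
notifier.startReading()  # returns a Deferred that fires once the thread is running
reactor.run()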