Compare commits


133 Commits

Author SHA1 Message Date
Daira Hopwood fa2304d275 Blah 2015-11-02 22:00:47 +00:00
Daira Hopwood 4271e0daab Fix a corner case for to_filepath on Windows to make it consistent with Unix.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 22:00:31 +00:00
Daira Hopwood 959a4a9c33 test_encodingutil: add tests for FilePath-related functions.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 21:43:41 +00:00
Daira Hopwood 15a71f386d test_encodingutil: use self.patch rather than modifying encodingutil.io_encoding directly.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 21:42:42 +00:00
Daira Hopwood 7508744ea0 Depend on FilePath.asTextMode().
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 21:41:10 +00:00
Daira Hopwood 034f9910b4 WIP debugging and partial fix.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 18:21:52 +00:00
Daira Hopwood 751ef57c51 Fix a test that fails only on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 17:34:04 +00:00
Daira Hopwood aacc912ccc Fix test_move_tree.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 15:43:06 +00:00
Daira Hopwood ed2e8a3d13 WIP: exclude own dirnode from scan. This is not quite right; we shouldn't exclude it on startup. refs #2553
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 15:16:21 +00:00
Daira Hopwood 725903c338 More Magic Folder doc updates.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 14:24:24 +00:00
Daira Hopwood 394887c833 Magic Folder doc updates.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-11-02 14:14:49 +00:00
Daira Hopwood 14097eec7e Don't add subdirectory watches if the platform's notifier doesn't require them. refs #2559
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-30 02:04:30 +00:00
Daira Hopwood 38bf092525 Improve reporting of assertion failures.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-30 02:03:26 +00:00
Daira Hopwood 0583141543 Fix the change to turn off bridging to Twisted log.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-29 21:15:24 +00:00
Daira Hopwood a2a4b91fd5 Fix unused import.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-29 21:12:14 +00:00
Daira Hopwood cd81d42084 Revert pinning of Twisted 15.2.0 which causes more problems than it solves.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-29 21:10:55 +00:00
Daira Hopwood eb4f07ffb7 Turn off bridging to Twisted log, and pin to Twisted 15.2.0.
Hopefully this will avoid http://foolscap.lothar.com/trac/ticket/244 .

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-29 20:54:14 +00:00
Daira Hopwood 6da832cf14 Merge pull request #200 from meejah/2438.magic-folder-stable.5
some minor fixes for instructions
2015-10-28 20:57:29 +00:00
meejah 6f19e74e07 fix the windows command-line too 2015-10-28 14:45:50 -06:00
meejah fef48788b0 some minor fixes for instructions 2015-10-28 14:41:15 -06:00
Daira Hopwood a0af577777 Windows path fix.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-28 17:00:57 +00:00
Daira Hopwood 9373b81d0c magic-folder-howto.rst formatting fixes.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-28 16:58:49 +00:00
Daira Hopwood b33d5480bd Add docs/magic-folder-howto.rst.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-28 16:54:37 +00:00
Daira Hopwood e12bc5b320 Add test for 'tahoe create-node/client/introducer' output. closes ticket:2556
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-28 13:49:43 +00:00
Daira Hopwood 7ef0c70a92 Quote local paths correctly in the output of node creation commands. fixes ticket:2556
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-28 02:57:15 +00:00
Daira Hopwood bcdefe41a6 bin\tahoe can't be run directly on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-28 00:33:59 +00:00
Daira Hopwood 319dedb0df Strip any long path marker in the input to flush_volume.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 21:39:05 +00:00
Daira Hopwood ea55fa5c20 Correct type for Windows BOOL.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 21:38:03 +00:00
Daira Hopwood fa5d86b55c Depend on Twisted >= 15.2.0 and (finally!) retire the setup_requires hack.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 20:55:14 +00:00
Daira Hopwood 9142b25f00 Disable precondition that autoAdd == recursive.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 20:36:04 +00:00
Daira Hopwood 2130eb4240 Fix a type error.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 20:35:41 +00:00
Daira Hopwood 3a6a62c6d1 Flush handling WIP.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 20:07:41 +00:00
Daira Hopwood ecc944d5f1 Use fileutil.write for magic folder tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:23 +00:00
Daira Hopwood b88d450666 Improve all of the Windows-specific error reporting.
Also make the Windows function declarations more readable and consistent.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:22 +00:00
Daira Hopwood 92301cb731 Consolidate Windows-specific imports in fileutil to avoid pyflakes warnings.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:22 +00:00
Daira Hopwood b04f3d6994 replace_file should allow the replaced file not to exist on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:21 +00:00
Daira Hopwood c9346d2022 Fix fileutil tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:21 +00:00
Daira Hopwood 2fdc7f2689 More path fixes.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:20 +00:00
Daira Hopwood fffdd64ab5 Fix a test broken by the last commit.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:20 +00:00
Daira Hopwood d0515a7427 Don't include [magic_folder]enabled and local.directory fields by default.
Add a comment reminding to do the field modification properly.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:19 +00:00
Daira Hopwood 01a8719179 Don't use a long path for the [magic_folder]local.directory field.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:19 +00:00
Daira Hopwood 035a7c9d31 Fix some path Unixisms.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:18 +00:00
Daira Hopwood cda88b2f80 Add long_path=False option to abspath_expanduser_unicode.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:34:18 +00:00
Daira Hopwood 70d01cb147 Fix test_alice_bob.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:31:22 +00:00
Daira Hopwood 730f4e4ba3 Refactor _check_up/downloader_count.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 19:29:14 +00:00
Daira Hopwood 243c89204d Don't download the deletion marker file unnecessarily.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 18:44:55 +00:00
Daira Hopwood a7ef6948d9 Distinguish deletion of directories.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 18:44:55 +00:00
Daira Hopwood f2db5068b9 Rename deleted files to .backup rather than unlinking them.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 18:44:55 +00:00
David Stainton a1367f6447 Add file conflict unit test 2015-10-27 18:44:55 +00:00
David Stainton 976ea15863 Add basic bob upload test and fix conflict detect 2015-10-27 18:44:55 +00:00
David Stainton 44e2c6acc8 Fix bob's uploading test... 2015-10-27 18:44:55 +00:00
David Stainton f1bb8afd2b Attempt to teach bob to upload a file 2015-10-27 18:44:55 +00:00
David Stainton c33e918bbf Count conflicted objects 2015-10-27 18:44:55 +00:00
Daira Hopwood 63ad778b7d Basic remote conflict detection based on ancestor uri
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 18:44:45 +00:00
Daira Hopwood 7a2e021c75 magic-folder.rst: remove "Known Issues and Limitations" that have been fixed.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 01:54:54 +00:00
Daira Hopwood 3530e662d6 magic-folder.rst: update introduction.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 01:53:50 +00:00
Daira Hopwood 61023bd98a Avoid .format, since it is inconsistent between Python 2.6 and 2.7 (and the rest of Tahoe-LAFS doesn't use it).
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-27 01:20:28 +00:00
Daira Hopwood c4c22767c0 Fix test_alice_bob.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-26 14:24:36 -06:00
Daira Hopwood 7944d2b30a Add counter for uploader.objects_not_uploaded.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-26 14:24:36 -06:00
Daira Hopwood e8e2038468 Advance Bob's clock after notifying.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-26 14:24:36 -06:00
Daira Hopwood 377c30517e test_alice_bob: use magic= argument to notify, rather than self.magicfolder.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-26 14:24:36 -06:00
meejah 0058044128 add excluded check 2015-10-26 14:24:36 -06:00
meejah 7e8577be06 add the 'spurious' notifies 2015-10-26 14:24:36 -06:00
Daira Hopwood aa6a22fb89 Fix a pyflakes warning and check existence of file in Bob's local dir.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-26 14:24:36 -06:00
meejah ab68608345 implement 'delete' functionality, with tests 2015-10-26 14:24:13 -06:00
meejah 716d45dbdd smoketest for magic-folder functionality 2015-10-26 14:20:03 -06:00
Daira Hopwood 5d2365f6c4 Describe use of size=None for deleted files. refs ticket:1710.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-25 13:19:13 +00:00
Daira Hopwood 85bcd89f3a Schema change for last_uploaded_uri.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-25 13:18:11 +00:00
Daira Hopwood 277c996e4a Earth Dragons: take into account that the replaced file may not exist.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-25 13:17:46 +00:00
Daira Hopwood 7d996dbb19 Writing a file without a db entry is an overwrite.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-25 13:13:36 +00:00
Daira Hopwood a2135600f8 remote-to-local-sync.rst: fix an inconsistency in the representation option table.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-25 11:18:23 +00:00
Daira Hopwood e067d2d682 Add test that we don't write files outside the magic folder directory. refs ticket:2506
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-22 14:30:13 +01:00
Daira Hopwood 35e05fa979 Fix infinite loop in should_ignore_path for absolute paths.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-22 14:28:26 +01:00
Daira Hopwood ae27bc2b83 More debug logging.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-22 14:27:39 +01:00
Daira Hopwood 98879af6e9 Unicode fix for do_join.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 21:50:00 +01:00
Daira Hopwood f7e02c51b3 Minor cleanups to tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 21:43:54 +01:00
Daira Hopwood b5b301707b Ensure that errors from Alice-and-Bob tests are reported correctly if setup fails.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 21:37:05 +01:00
Daira Hopwood fadd2049bb Cosmetics.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 21:36:11 +01:00
Daira Hopwood 3d4deca0b2 Eliminate duplicate parsing of invite code.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 21:19:04 +01:00
Daira Hopwood dc77acc424 Remaining test fixes.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 17:31:03 +01:00
Daira Hopwood 3eecd1ab2e Make sure that do_cli is only called with strs, and avoid unnecessary use of attributes in tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 17:30:53 +01:00
Daira Hopwood 1552b7dde0 Cosmetics.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 17:27:38 +01:00
Daira Hopwood db536a3bce URIs are strs.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 17:16:32 +01:00
Daira Hopwood 238479305f Aliases and nicknames are Unicode.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 17:13:58 +01:00
Daira Hopwood 8c4a9b08ee Add precondition that arguments to do_cli are strs.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 15:12:36 +01:00
Daira Hopwood f1ed5b7136 Fix call to argv_to_abspath. Also rename localdir to local_dir.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-20 14:52:00 +01:00
David Stainton aa4a3c2da4 Fix magic-folder cli tests
convert path to abs path when matching
strings in the generated config file.
2015-10-20 14:27:05 +02:00
David Stainton 2e587937c1 Attempt to fix cli tests 2015-10-20 13:08:14 +02:00
Daira Hopwood b032ab829f Better but still broken tests.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-19 18:25:02 +01:00
Daira Hopwood 588002a8b1 Fix check for initial '-' in argv_to_abspath.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-19 18:24:33 +01:00
David Stainton 09871d76c7 Fix tests by submitting unicode args instead of str 2015-10-19 16:14:12 +02:00
David Stainton ef39ca17ba Teach magic-folder join to use argv_to_abspath
- also we modify argv_to_abspath to throw a usage error
if the name starts with a '-'

- add a test
currently the tests fail
2015-10-19 16:02:28 +02:00
David Stainton 3367b40990 Use argv_to_abspath for magic-folder join file path arg 2015-10-19 14:14:32 +02:00
Daira Hopwood fb55baa91b Test creation of a subdirectory.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-16 20:39:39 +01:00
Daira Hopwood 58388bebf1 Watch for IN_CREATE events but filter them out for non-directories.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-16 16:21:50 +01:00
Daira Hopwood 5297f99116 Patch Downloader.REMOTE_SCAN_INTERVAL rather than setting it persistently.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-16 03:27:59 +01:00
Daira Hopwood 07b80cf669 Implement creating local directories in downloader.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-16 03:27:20 +01:00
Daira Hopwood dad9b11853 Decode names in the scanned remote.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-16 03:25:45 +01:00
Daira Hopwood 8c8d1885ce Refactoring to allow logging from _write_downloaded_file and _rename_conflicted_file.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-16 03:24:46 +01:00
Daira Hopwood 8348913b8a Simplify and fix non-existent-file handling.
Also make the existent and non-existent cases as similar as possible,
with a view to merging them.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-15 23:02:30 +01:00
Daira Hopwood 1d0e2b5cda Logging/debugging improvements.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-15 23:02:07 +01:00
Daira Hopwood 768b478cb9 Refactor and fix race conditions in test_alice_bob.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-15 22:56:38 +01:00
Daira Hopwood 257da9f86d Correct a call to did_upload_version in the downloader.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-13 15:56:31 +01:00
Daira Hopwood fd506bd550 Make sure that test_move_tree waits until files have been uploaded as well as downloaded.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-13 15:56:00 +01:00
Daira Hopwood 6945b85ce7 Restore a call to increment files_uploaded that was mistakenly removed.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-13 15:55:00 +01:00
David Stainton 37269740dd Teach uploader+downloader to use the new db schema
here we attempt to fix all the unit tests as well...
however two tests still fail
2015-10-12 20:07:39 +02:00
Daira Hopwood 2342393d8a Add magicfolderdb.py.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-08 16:02:34 +01:00
David Stainton c82f977fb7 Remove magic-folder db code from backupdb.py 2015-10-08 16:02:34 +01:00
David Stainton f819aa5b5d WIP 2015-10-08 16:02:34 +01:00
David Stainton 70c623d7d3 Include brief summary of magic-folder CLI commands 2015-10-08 16:02:34 +01:00
David Stainton 7868f0b400 Add link to our cli design doc 2015-10-08 16:02:34 +01:00
David Stainton fd34630218 Mention gc is not part of the otf grant and link to the gc ticket 2015-10-08 16:02:34 +01:00
David Stainton 30feb122c5 Remove old obsolete/inaccurate statements 2015-10-08 16:02:34 +01:00
David Stainton 5d7d2febc6 Minor comment correction for get_all_relpaths 2015-10-08 16:02:34 +01:00
David Stainton 531747303d For all downloaded files ensure parent dir exists 2015-10-08 16:02:34 +01:00
Daira Hopwood 27fb403dad Simplify the cleanup_Alice_and_Bob callback.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-08 14:54:39 +01:00
meejah 1377aecbd7 Make downloader delay a class-variable
This gives the integration-style CLI-based tests a chance
to set the delay to 0 before the first 3-second delayed
call is queued to _lazy_tail in the Downloader
2015-10-08 14:54:39 +01:00
meejah 906d3da819 Teach unit-tests to time-warp
1. Split alice/bob clocks to avoid race conditions
   in the tests
2. Wrap ._notify so we can advance the clock after inotify
   calls in the RealTest (since it takes >0ms to do the "real" notifies)
2015-10-08 14:54:39 +01:00
Daira Hopwood bfbc20f40b Merge pull request #193 from meejah/2438.magic-folder-stable.3
Fix call to ready()
2015-10-08 14:07:05 +01:00
meejah c398218c55 Fix call to ready() 2015-10-06 14:11:32 -06:00
Daira Hopwood 0b5039d253 Correct a string-type error.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-02 22:08:54 +01:00
Daira Hopwood 3180471ce8 WIP.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:40:10 +01:00
Daira Hopwood 27ed601cef Magic Folder file moves.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:36:18 +01:00
Daira Hopwood df044f884c Prepare to move drop_upload.py to magic_folder.py.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:32 +01:00
Daira Hopwood 71ec32373a Move backupdb.py to src/allmydata.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:32 +01:00
Daira Hopwood 2b17d88644 Rename upload_ready_d to connected_enough_d.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:32 +01:00
Daira Hopwood 00159ecd06 Documentation changes for Magic Folder.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:32 +01:00
Daira Hopwood e7d4e3d4a9 Teach StorageFarmBroker to fire a deferred when a connection threshold is reached. refs #1449
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:32 +01:00
Daira Hopwood dc49a1e511 Enable Windows inotify support.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:32 +01:00
Daira Hopwood 6e64ab563b New code for Windows drop-upload support. refs #1431
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:31 +01:00
Daira Hopwood b417ae76a6 Docs for drop-upload on Windows.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:31 +01:00
Daira Hopwood 81e6a12779 Add magic folder db.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:31 +01:00
Daira Hopwood 30d7d8a501 Unicode path fixes for drop-upload.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
2015-10-01 22:25:31 +01:00
46 changed files with 4290 additions and 727 deletions


@ -83,6 +83,11 @@ _tmpfstest: make-version
sudo umount '$(TMPDIR)'
rmdir '$(TMPDIR)'
.PHONY: smoketest
smoketest:
-python ./src/allmydata/test/check_magicfolder_smoke.py kill
-rm -rf smoke_magicfolder/
python ./src/allmydata/test/check_magicfolder_smoke.py
# code coverage: install the "coverage" package from PyPI, do "make test-coverage" to
# do a unit test run with coverage-gathering enabled, then use "make coverage-output" to


@ -430,9 +430,9 @@ SFTP, FTP
Drop-Upload
As of Tahoe-LAFS v1.9.0, a node running on Linux can be configured to
automatically upload files that are created or changed in a specified
local directory. See drop-upload.rst_ for details.
A node running on Linux or Windows can be configured to automatically
upload files that are created or changed in a specified local directory.
See `drop-upload.rst`_ for details.
.. _download-status.rst: frontends/download-status.rst
.. _CLI.rst: frontends/CLI.rst


@ -1,158 +0,0 @@
.. -*- coding: utf-8-with-signature -*-
===============================
Tahoe-LAFS Drop-Upload Frontend
===============================
1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations`_
Introduction
============
The drop-upload frontend allows an upload to a Tahoe-LAFS grid to be triggered
automatically whenever a file is created or changed in a specific local
directory. This is a preview of a feature that we expect to support across
several platforms, but it currently works only on Linux.
The implementation was written as a prototype at the First International
Tahoe-LAFS Summit in June 2011, and is not currently in as mature a state as
the other frontends (web, CLI, SFTP and FTP). This means that you probably
should not keep important data in the upload directory, and should not rely
on all changes to files in the local directory to result in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured. There is even the possibility that it may be abandoned, for
example if unsolvable reliability issues are found.
We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.
Configuration
=============
The drop-upload frontend runs as part of a gateway node. To set it up, you
need to choose the local directory to monitor for file changes, and a mutable
directory on the grid to which files will be uploaded.
These settings are configured in the ``[drop_upload]`` section of the
gateway's ``tahoe.cfg`` file.
``[drop_upload]``
``enabled = (boolean, optional)``
If this is ``True``, drop-upload will be enabled. The default value is
``False``.
``local.directory = (UTF-8 path)``
This specifies the local directory to be monitored for new or changed
files. If the path contains non-ASCII characters, it should be encoded
in UTF-8 regardless of the system's filesystem encoding. Relative paths
will be interpreted starting from the node's base directory.
In addition, the file ``private/drop_upload_dircap`` must contain a
writecap pointing to an existing mutable directory to be used as the target
of uploads. It will start with ``URI:DIR2:``, and cannot include an alias
or path.
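For example, a minimal ``[drop_upload]`` configuration might look like this
(the directory path is illustrative)::

    [drop_upload]
    enabled = True
    local.directory = /home/alice/uploads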
After setting the above fields and starting or restarting the gateway,
you can confirm that the feature is working by copying a file into the
local directory. Then, use the WUI or CLI to check that it has appeared
in the upload directory with the same filename. A large file may take some
time to appear, since it is only linked into the directory after the upload
has completed.
The 'Operational Statistics' page linked from the Welcome page shows
counts of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node log_ may be helpful to determine the cause of any failures.
.. _log: ../logging.rst
Known Issues and Limitations
============================
This frontend only works on Linux. There is an even-more-experimental
implementation for Windows (`#1431`_), and a ticket to add support for
Mac OS X and BSD-based systems (`#1432`_).
Subdirectories of the local directory are not monitored. If a subdirectory
is created, it will be ignored. (`#1433`_)
If files are created or changed in the local directory just after the gateway
has started, it might not have connected to a sufficient number of servers
when the upload is attempted, causing the upload to fail. (`#1449`_)
Files that were created or changed in the local directory while the gateway
was not running, will not be uploaded. (`#1458`_)
The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.
The drop-upload frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to efficiently perform them in parallel.
A drop-upload can occur in parallel with an upload by a different frontend,
though. (`#1459`_)
If there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)
Some filesystems may not support the necessary change notifications.
So, it is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.
Attempts to read the mutable directory at about the same time as an uploaded
file is being linked into it, might fail, even if they are done through the
same gateway. (`#1105`_)
When a local file is changed and closed several times in quick succession,
it may be uploaded more times than necessary to keep the remote copy
up-to-date. (`#1440`_)
Files deleted from the local directory will not be unlinked from the upload
directory. (`#1710`_)
The ``private/drop_upload_dircap`` file cannot use an alias or path to
specify the upload directory. (`#1711`_)
Files are always uploaded as immutable. If there is an existing mutable file
of the same name in the upload directory, it will be unlinked and replaced
with an immutable file. (`#1712`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps to
it will remain valid. See `docs/garbage-collection.rst`_ for how to reclaim
the space used by files that are no longer needed.
Unicode names are supported, but the local name of a file must be encoded
correctly in order for it to be uploaded. The expected encoding is that
printed by ``python -c "import sys; print sys.getfilesystemencoding()"``.
.. _`#1105`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1105
.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1433`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1433
.. _`#1440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1440
.. _`#1449`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1449
.. _`#1458`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1458
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1710`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1710
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#1712`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1712
.. _docs/garbage-collection.rst: ../garbage-collection.rst


@ -0,0 +1,149 @@
.. -*- coding: utf-8-with-signature -*-
================================
Tahoe-LAFS Magic Folder Frontend
================================
1. `Introduction`_
2. `Configuration`_
3. `Known Issues and Limitations`_
Introduction
============
The Magic Folder frontend synchronizes local directories on two or more
clients, using a Tahoe-LAFS grid for storage. Whenever a file is created
or changed under the local directory of one of the clients, the change is
propagated to the grid and then to the other clients.
The implementation of the "drop-upload" frontend, on which Magic Folder is
based, was written as a prototype at the First International Tahoe-LAFS
Summit in June 2011. In 2015, with the support of a grant from the
`Open Technology Fund`_, it was redesigned and extended to support
synchronization between clients. It currently works on Linux and Windows.
Magic Folder is not currently in as mature a state as the other frontends
(web, CLI, SFTP and FTP). This means that you probably should not rely on
all changes to files in the local directory to result in successful uploads.
There might be (and have been) incompatible changes to how the feature is
configured.
We are very interested in feedback on how well this feature works for you, and
suggestions to improve its usability, functionality, and reliability.
.. _`Open Technology Fund`: https://www.opentech.fund/
Configuration
=============
The Magic Folder frontend runs as part of a gateway node. To set it up, you
must use the tahoe magic-folder CLI. For detailed information see our
`Magic-Folder CLI design documentation`_. To set up a given Magic-Folder
collective, run the ``tahoe magic-folder create`` command. After that,
the ``tahoe magic-folder invite`` command must be used to generate an
*invite code* for each member of the magic-folder collective. A confidential,
authenticated communications channel should be used to transmit the invite code
to each member, who will be joining using the ``tahoe magic-folder join``
command.
These settings are persisted in the ``[magic_folder]`` section of the
gateway's ``tahoe.cfg`` file.
``[magic_folder]``
``enabled = (boolean, optional)``
If this is ``True``, Magic Folder will be enabled. The default value is
``False``.
``local.directory = (UTF-8 path)``
This specifies the local directory to be monitored for new or changed
files. If the path contains non-ASCII characters, it should be encoded
in UTF-8 regardless of the system's filesystem encoding. Relative paths
will be interpreted starting from the node's base directory.
You should not normally need to set these fields manually because they are
set by the ``tahoe magic-folder create`` and/or ``tahoe magic-folder join``
commands. Use the ``--help`` option to these commands for more information.
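For reference, a ``[magic_folder]`` section as written by those commands
might look like this (the path is illustrative)::

    [magic_folder]
    enabled = True
    local.directory = /home/alice/Magic-Folder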
After setting up a Magic Folder collective and starting or restarting each
gateway, you can confirm that the feature is working by copying a file into
any local directory, and checking that it appears on other clients.
Large files may take some time to appear.
The 'Operational Statistics' page linked from the Welcome page shows
counts of the number of files uploaded, the number of change events currently
queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
page and the node log_ may be helpful to determine the cause of any failures.
.. _log: ../logging.rst
Known Issues and Limitations
============================
This feature only works on Linux and Windows. There is a ticket to add
support for Mac OS X and BSD-based systems (`#1432`_).
The only way to determine whether uploads have failed is to look at the
'Operational Statistics' page linked from the Welcome page. This only shows
a count of failures, not the names of files. Uploads are never retried.
The Magic Folder frontend performs its uploads sequentially (i.e. it waits
until each upload is finished before starting the next), even when there
would be enough memory and bandwidth to efficiently perform them in parallel.
A Magic Folder upload can occur in parallel with an upload by a different
frontend, though. (`#1459`_)
On Linux, if there are a large number of near-simultaneous file creation or
change events (greater than the number specified in the file
``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
could be missed. This is fairly unlikely under normal circumstances, because
the default value of ``max_queued_events`` in most Linux distributions is
16384, and events are removed from this queue immediately without waiting for
the corresponding upload to complete. (`#1430`_)
The Windows implementation might also occasionally miss file creation or
change events, due to limitations of the underlying Windows API
(ReadDirectoryChangesW). We do not know how likely or unlikely this is.
(`#1431`_)
Some filesystems may not support the necessary change notifications.
So, it is recommended for the local directory to be on a directly attached
disk-based filesystem, not a network filesystem or one provided by a virtual
machine.
The ``private/magic_folder_dircap`` and ``private/collective_dircap`` files
cannot use an alias or path to specify the upload directory. (`#1711`_)
If a file in the upload directory is changed (actually relinked to a new
file), then the old file is still present on the grid, and any other caps
to it will remain valid. Eventually it will be possible to use
`garbage collection`_ to reclaim the space used by these files; however
currently they are retained indefinitely. (`#2440`_)
Unicode filenames are supported on both Linux and Windows, but on Linux, the
local name of a file must be encoded correctly in order for it to be uploaded.
The expected encoding is that printed by
``python -c "import sys; print sys.getfilesystemencoding()"``.
On Windows, local directories with non-ASCII names are not currently working.
(`#2219`_)
On Windows, when a node has Magic Folder enabled, it is unresponsive to Ctrl-C
(it can only be killed using Task Manager or similar). (`#2218`_)
.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
.. _`#2218`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2218
.. _`#2219`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2219
.. _`#2440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2440
.. _`garbage collection`: ../garbage-collection.rst
.. _`Magic-Folder CLI design documentation`: ../proposed/magic-folder/user-interface-design.rst

docs/magic-folder-howto.rst (new file, 231 lines)

@ -0,0 +1,231 @@
=========================
Magic Folder Set-up Howto
=========================
1. `This document`_
2. `Preparation`_
3. `Setting up a local test grid`_
4. `Setting up Magic Folder`_
5. `Testing`_
This document
=============
This is preliminary documentation of how to set up the
Magic Folder pre-release using a test grid on a single Linux
or Windows machine, with two clients and one server. It is
aimed at a fairly technical audience.
For an introduction to Magic Folder and how to configure it
more generally, see `docs/frontends/magic-folder.rst`_.
It is possible to adapt these instructions to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).
.. _`docs/frontends/magic-folder.rst`: ../docs/frontends/magic-folder.rst
Preparation
===========
Linux
-----
Install ``git`` from your distribution's package manager.
Then run these commands::
git clone -b 2438.magic-folder-stable.5 https://github.com/tahoe-lafs/tahoe-lafs.git
cd tahoe-lafs
python setup.py test
The test suite usually takes about 15 minutes to run.
Note that it is normal for some tests to be skipped.
In the current branch, the Magic Folder tests produce
considerable debugging output.
If you see an error like ``fatal error: Python.h: No such file or directory``
while compiling the dependencies, you need the Python development headers. If
you are on a Debian or Ubuntu system, you can install them with ``sudo
apt-get install python-dev``. On RedHat/Fedora, install ``python-devel``.
Windows
-------
Windows 7 or above is required.
For 64-bit Windows:
* Install Python 2.7 from
https://www.python.org/ftp/python/2.7/python-2.7.amd64.msi
* Install pywin32 from
https://tahoe-lafs.org/source/tahoe-lafs/deps/windows/pywin32-219.win-amd64-py2.7.exe
* Install git from
https://github.com/git-for-windows/git/releases/download/v2.6.2.windows.1/Git-2.6.2-64-bit.exe
For 32-bit Windows:
* Install Python 2.7 from
https://www.python.org/ftp/python/2.7/python-2.7.msi
* Install pywin32 from
https://tahoe-lafs.org/source/tahoe-lafs/deps/windows/pywin32-219.win32-py2.7.exe
* Install git from
https://github.com/git-for-windows/git/releases/download/v2.6.2.windows.1/Git-2.6.2-32-bit.exe
Then (for any version) run these commands in a Command Prompt::
git clone -b 2438.magic-folder-stable.5 https://github.com/tahoe-lafs/tahoe-lafs.git
cd tahoe-lafs
python setup.py build
Open a new Command Prompt with the same current directory,
then run::
bin\tahoe --version-and-path
It is normal for this command to print warnings and
debugging output on some systems. Do not run
``python setup.py test``, because it currently hangs on
Windows.
Setting up a local test grid
============================
Linux
-----
Run these commands::
mkdir ../grid
bin/tahoe create-introducer ../grid/introducer
bin/tahoe start ../grid/introducer
export FURL=`cat ../grid/introducer/private/introducer.furl`
bin/tahoe create-node --introducer="$FURL" ../grid/server
bin/tahoe create-client --introducer="$FURL" ../grid/alice
bin/tahoe create-client --introducer="$FURL" ../grid/bob
Windows
-------
Run::
mkdir ..\grid
bin\tahoe create-introducer ..\grid\introducer
bin\tahoe start ..\grid\introducer
Leave the introducer running in that Command Prompt,
and in a separate Command Prompt (with the same current
directory), run::
set /p FURL=<..\grid\introducer\private\introducer.furl
bin\tahoe create-node --introducer=%FURL% ..\grid\server
bin\tahoe create-client --introducer=%FURL% ..\grid\alice
bin\tahoe create-client --introducer=%FURL% ..\grid\bob
Both Linux and Windows
----------------------
(Replace ``/`` with ``\`` for Windows paths.)
Edit ``../grid/alice/tahoe.cfg``, and make the following
changes to the ``[node]`` and ``[client]`` sections::
[node]
nickname = alice
web.port = tcp:3457:interface=127.0.0.1
[client]
shares.needed = 1
shares.happy = 1
shares.total = 1
Edit ``../grid/bob/tahoe.cfg``, and make the following
change to the ``[node]`` section, and the same change as
above to the ``[client]`` section::
[node]
nickname = bob
web.port = tcp:3458:interface=127.0.0.1
Note that when running nodes on a single machine,
unique port numbers must be used for each node (and they
must not clash with ports used by other server software).
Here we have used the default of 3456 for the server,
3457 for alice, and 3458 for bob.
Now start all of the nodes (the introducer should still be
running from above)::
bin/tahoe start ../grid/server
bin/tahoe start ../grid/alice
bin/tahoe start ../grid/bob
On Windows, a separate Command Prompt is needed to run each
node.
Open a web browser on http://127.0.0.1:3457/ and verify that
alice is connected to the introducer and one storage server.
Then do the same for http://127.0.0.1:3458/ to verify that
bob is connected. Leave all of the nodes running for the
next stage.
Setting up Magic Folder
=======================
Linux
-----
Run::
mkdir -p ../local/alice ../local/bob
bin/tahoe -d ../grid/alice magic-folder create magic: alice ../local/alice
bin/tahoe -d ../grid/alice magic-folder invite magic: bob >invitecode
export INVITECODE=`cat invitecode`
bin/tahoe -d ../grid/bob magic-folder join "$INVITECODE" ../local/bob
bin/tahoe restart ../grid/alice
bin/tahoe restart ../grid/bob
Windows
-------
Run::
mkdir ..\local\alice ..\local\bob
bin\tahoe -d ..\grid\alice magic-folder create magic: alice ..\local\alice
bin\tahoe -d ..\grid\alice magic-folder invite magic: bob >invitecode
set /p INVITECODE=<invitecode
bin\tahoe -d ..\grid\bob magic-folder join %INVITECODE% ..\local\bob
Then close the Command Prompt windows that are running the alice and bob
nodes, and open two new ones in which to run::
bin\tahoe start ..\grid\alice
bin\tahoe start ..\grid\bob
Testing
=======
You can now experiment with creating files and directories in
``../local/alice`` and ``../local/bob``; any changes should be
propagated to the other directory.
Note that when a file is deleted, the corresponding file in the
other directory will be renamed to a filename ending in ``.backup``.
Deleting a directory will have no effect.
For other known issues and limitations, see
https://github.com/tahoe-lafs/tahoe-lafs/blob/2438.magic-folder-stable.5/docs/frontends/magic-folder.rst#known-issues-and-limitations
As mentioned earlier, it is also possible to run the nodes on
different machines, to synchronize between three or more clients,
to mix Windows and Linux clients, and to use multiple servers
(if the Tahoe-LAFS encoding parameters are changed).


@ -174,7 +174,7 @@ collapsed into the same DMD, which could get quite large. In practice a
single DMD can easily handle the number of files expected to be written
by a client, so this is unlikely to be a significant issue.
In these designs, the set of files in a Magic Folder is
represented as the union of the files in all client DMDs. However,
when a file is modified by more than one client, it will be linked
from multiple client DMDs. We therefore need a mechanism, such as a
@ -231,7 +231,7 @@ leave some corner cases of the write coordination problem unsolved.
+------------------------------------------------+------+------+------+------+------+------+
| Can result in large DMDs | | | | | | |
+------------------------------------------------+------+------+------+------+------+------+
| Need version number to determine priority | | | | | | |
+------------------------------------------------+------+------+------+------+------+------+
| Must traverse immutable directory structure | | | | | | |
+------------------------------------------------+------+------+------+------+------+------+
@ -350,6 +350,9 @@ remote change has been initially classified as an overwrite.
.. _`Fire Dragons`: #fire-dragons-distinguishing-conflicts-from-overwrites
Note that writing a file that does not already have an entry in
the `magic folder db`_ is initially classed as an overwrite.
A *write/download collision* occurs when another program writes
to ``foo`` in the local filesystem, concurrently with the new
version being written by the Magic Folder client. We need to
@ -384,11 +387,16 @@ To reclassify as a conflict, attempt to rename ``.foo.tmp`` to
The implementation of file replacement differs between Unix
and Windows. On Unix, it can be implemented as follows:
* 4a. Set the permissions of the replacement file to be the
same as the replaced file, bitwise-or'd with octal 600
(``rw-------``).
* 4a. Stat the replaced path, and set the permissions of the
replacement file to be the same as the replaced file,
bitwise-or'd with octal 600 (``rw-------``). If the replaced
file does not exist, set the permissions according to the
user's umask. If there is a directory at the replaced path,
fail.
* 4b. Attempt to move the replaced file (``foo``) to the
backup filename (``foo.backup``).
backup filename (``foo.backup``). If an ``ENOENT`` error
occurs because the replaced file does not exist, ignore this
error and continue with steps 4c and 4d.
* 4c. Attempt to create a hard link at the replaced filename
(``foo``) pointing to the replacement file (``.foo.tmp``).
* 4d. Attempt to unlink the replacement file (``.foo.tmp``),
@ -396,24 +404,30 @@ and Windows. On Unix, it can be implemented as follows:
Note that, if there is no conflict, the entry for ``foo``
recorded in the `magic folder db`_ will reflect the ``mtime``
set in step 3. The link operation in step 4c will cause an
``IN_CREATE`` event for ``foo``, but this will not trigger an
upload, because the metadata recorded in the database entry
will exactly match the metadata for the file's inode on disk.
(The two hard links — ``foo`` and, while it still exists,
``.foo.tmp`` — share the same inode and therefore the same
metadata.)
set in step 3. The move operation in step 4b will cause a
``MOVED_FROM`` event for ``foo``, and the link operation in
step 4c will cause an ``IN_CREATE`` event for ``foo``.
However, these events will not trigger an upload, because they
are guaranteed to be processed only after the file replacement
has finished, at which point the metadata recorded in the
database entry will exactly match the metadata for the file's
inode on disk. (The two hard links — ``foo`` and, while it
still exists, ``.foo.tmp`` — share the same inode and
therefore the same metadata.)
.. _`magic folder db`: filesystem_integration.rst#local-scanning-and-database
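A minimal Python sketch of Unix steps 4a-4d above, with hypothetical path
arguments and without the conflict-reclassification handling described
earlier::

    import errno, os, stat

    def replace_file_unix(replaced, replacement, backup):
        # Illustrative sketch only, not the actual Tahoe-LAFS code.
        # 4a. Copy the replaced file's permissions, bitwise-or'd with
        # 0o600; fail if a directory is at the replaced path.
        try:
            st = os.stat(replaced)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise
            # Replaced file absent: permissions follow the user's umask.
        else:
            if stat.S_ISDIR(st.st_mode):
                raise OSError(errno.EISDIR, "directory at replaced path")
            os.chmod(replacement, stat.S_IMODE(st.st_mode) | 0o600)
        # 4b. Move the replaced file to the backup name, ignoring ENOENT.
        try:
            os.rename(replaced, backup)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise
        # 4c. Create a hard link at the replaced name pointing to the
        # replacement file.
        os.link(replacement, replaced)
        # 4d. Unlink the replacement file.
        os.unlink(replacement)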
On Windows, file replacement can be implemented as a single
call to the `ReplaceFileW`_ API (with the
``REPLACEFILE_IGNORE_MERGE_ERRORS`` flag).
On Windows, file replacement can be implemented by a call to
the `ReplaceFileW`_ API (with the
``REPLACEFILE_IGNORE_MERGE_ERRORS`` flag). If an error occurs
because the replaced file does not exist, then we ignore this
error and attempt to move the replacement file to the replaced
file.
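A corresponding ``ctypes`` sketch of this Windows call sequence (a
hypothetical wrapper, not the actual implementation; paths must be
unicode strings)::

    import ctypes

    REPLACEFILE_IGNORE_MERGE_ERRORS = 0x00000002
    ERROR_FILE_NOT_FOUND = 2

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    def replace_file_windows(replaced, replacement, backup):
        # Try ReplaceFileW first, which preserves the attributes and
        # ACLs of the replaced file where possible.
        if kernel32.ReplaceFileW(replaced, replacement, backup,
                                 REPLACEFILE_IGNORE_MERGE_ERRORS,
                                 None, None):
            return
        err = ctypes.get_last_error()
        if err != ERROR_FILE_NOT_FOUND:
            raise ctypes.WinError(err)
        # Replaced file absent: fall back to moving the replacement
        # into place (the equivalent of Unix step 4c).
        if not kernel32.MoveFileExW(replacement, replaced, 0):
            raise ctypes.WinError(ctypes.get_last_error())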
Similar to the Unix case, the `ReplaceFileW`_ operation will
cause a change notification for ``foo``. The replaced ``foo``
has the same ``mtime`` as the replacement file, and so this
notification will not trigger an unwanted upload.
cause one or more change notifications for ``foo``. The replaced
``foo`` has the same ``mtime`` as the replacement file, and so any
such notification(s) will not trigger an unwanted upload.
.. _`ReplaceFileW`: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365512%28v=vs.85%29.aspx
@ -425,7 +439,7 @@ operations performed by the Magic Folder client and the other process.
(Note that atomic operations on a directory are totally ordered.)
The set of possible interleavings differs between Windows and Unix.
On Unix, we have:
On Unix, for the case where the replaced file already exists, we have:
* Interleaving A: the other process' rename precedes our rename in
step 4b, and we get an ``IN_MOVED_TO`` event for its rename by
@ -457,6 +471,14 @@ On Unix, we have:
Therefore, an upload will be triggered for ``foo`` after its
change, which is correct and avoids data loss.
If the replaced file did not already exist, an ``ENOENT`` error
occurs at step 4b, and we continue with steps 4c and 4d. The other
process' rename races with our link operation in step 4c. If the
other process wins the race then the effect is similar to
Interleaving C, and if we win the race then it is similar to
Interleaving D. Either case avoids data loss.
On Windows, the internal implementation of `ReplaceFileW`_ is similar
to what we have described above for Unix; it works like this:
@ -477,7 +499,11 @@ step 4c. (If there is a failure at steps 4c after step 4b has
completed, the `ReplaceFileW`_ call will fail with return code
``ERROR_UNABLE_TO_MOVE_REPLACEMENT_2``. However, it is still
preferable to use this API over two `MoveFileExW`_ calls, because
it retains the attributes and ACLs of ``foo`` where possible.)
it retains the attributes and ACLs of ``foo`` where possible.
Also note that if the `ReplaceFileW`_ call fails with
``ERROR_FILE_NOT_FOUND`` because the replaced file does not exist,
then the replacement operation ignores this error and continues with
the equivalent of step 4c, as on Unix.)
However, on Windows the other application will not be able to
directly rename ``foo.other`` onto ``foo`` (which would fail because
@ -486,7 +512,10 @@ the destination already exists); it will have to rename or delete
deleted. This complicates the interleaving analysis, because we
have two operations done by the other process interleaving with
three done by the magic folder process (rather than one operation
interleaving with four as on Unix). The cases are:
interleaving with four as on Unix).
So on Windows, for the case where the replaced file already exists,
we have:
* Interleaving A: the other process' deletion of ``foo`` and its
rename of ``foo.other`` to ``foo`` both precede our rename in
@ -504,10 +533,14 @@ interleaving with four as on Unix). The cases are:
our rename of ``foo`` to ``foo.backup`` done by `ReplaceFileW`_,
but its rename of ``foo.other`` to ``foo`` does not, so we get
an ``ERROR_FILE_NOT_FOUND`` error from `ReplaceFileW`_ indicating
that the replaced file does not exist. Then we reclassify as a
conflict; the other process' changes end up at ``foo`` (after
it has renamed ``foo.other`` to ``foo``) and our changes end up
at ``foo.conflicted``. This avoids data loss.
that the replaced file does not exist. We ignore this error and
attempt to move ``foo.tmp`` to ``foo``, racing with the other
process which is attempting to move ``foo.other`` to ``foo``.
If we win the race, then our changes end up at ``foo``, and the
other process' move fails. If the other process wins the race,
then its changes end up at ``foo``, our move fails, and we
reclassify as a conflict, so that our changes end up at
``foo.conflicted``. Either possibility avoids data loss.
* Interleaving D: the other process' deletion and/or rename happen
during the call to `ReplaceFileW`_, causing the latter to fail.
@ -540,6 +573,11 @@ interleaving with four as on Unix). The cases are:
.. _`MoveFileExW`: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240%28v=vs.85%29.aspx
If the replaced file did not already exist, we get an
``ERROR_FILE_NOT_FOUND`` error from `ReplaceFileW`_, and attempt to
move ``foo.tmp`` to ``foo``. This is similar to Interleaving C, and
either possibility for the resulting race avoids data loss.
We also need to consider what happens if another process opens ``foo``
and writes to it directly, rather than renaming another file onto it:
@ -662,8 +700,9 @@ must not bother the user.
For example, suppose that Alice's Magic Folder client sees a change
to ``foo`` in Bob's DMD. If the version it downloads from Bob's DMD
is "based on" the version currently in Alice's local filesystem at
the time Alice's client attempts to write the downloaded file, then
it is an overwrite. Otherwise it is initially classified as a
the time Alice's client attempts to write the downloaded file or if
there is no existing version in Alice's local filesystem at that time
then it is an overwrite. Otherwise it is initially classified as a
conflict.
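As an illustration only (treating "based on" as a comparison of the
downloaded version's recorded ancestor URI against the URI last recorded
locally, which is an assumption rather than the full definition), the
initial classification could be sketched as::

    def classify_remote_change(ancestor_uri, local_recorded_uri):
        # No local version at write time, or the remote version is based
        # on what we already have: overwrite. Otherwise, the change is
        # initially classified as a conflict.
        if local_recorded_uri is None or ancestor_uri == local_recorded_uri:
            return "overwrite"
        return "conflict"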
This initial classification is used by the procedure for writing a
@ -748,9 +787,9 @@ may be absent). Then the algorithm is:
* the *last-uploaded statinfo*, if any (this is the size in
bytes, ``mtime``, and ``ctime`` stored in the ``local_files``
table when the file was last uploaded);
* the ``filecap`` field of the ``caps`` table for this file,
which is the URI under which the file was last uploaded.
Call this ``last_uploaded_uri``.
* the ``last_uploaded_uri`` field of the ``local_files`` table
for this file, which is the URI under which the file was last
uploaded.
* 2d. If any of the following are true, then classify as a conflict:
@ -857,10 +896,20 @@ take this as a signal to rename their copies to the backup filename.
Note that the entry for this zero-length file has a version number as
usual, and later versions may restore the file.
When the downloader deletes a file (or renames it to a filename
ending in ``.backup``) in response to a remote change, a local
filesystem notification will occur, and we must make sure that this
is not treated as a local change. To do this we have the downloader
set the ``size`` field in the magic folder db to ``None`` (SQL NULL)
just before deleting the file, and suppress notifications for which
the local file does not exist, and the recorded ``size`` field is
``None``.
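A sketch of that suppression test, assuming a hypothetical db-entry
object with a ``size`` attribute::

    import os

    def is_deletion_marker_event(db_entry, path):
        # Ignore the notification when the local file is gone and the
        # magic folder db recorded size=None just before the deletion.
        return (db_entry is not None and
                db_entry.size is None and
                not os.path.exists(path))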
When a Magic Folder client restarts, we can detect files that had
been downloaded but were deleted while it was not running, because
their paths will have last-downloaded records in the magic folder db
without any corresponding local file.
with a ``size`` other than ``None``, and without any corresponding
local file.
Deletion of a directory
~~~~~~~~~~~~~~~~~~~~~~~


@ -29,6 +29,19 @@ install_requires = [
# zope.interface 3.6.3 and 3.6.4 are incompatible with Nevow (#1435).
"zope.interface >= 3.6.0, != 3.6.3, != 3.6.4",
# * We need Twisted 10.1.0 for the FTP frontend in order for
# Twisted's FTP server to support asynchronous close.
# * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
# rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
# * The FTP frontend depends on Twisted >= 11.1.0 for
# filepath.Permissions
# * Nevow 0.11.1 depends on Twisted >= 13.0.0.
# * The Magic Folder frontend depends on Twisted >= 15.2.0.
"Twisted >= 15.2.0",
# Nevow 0.11.1 can be installed using pip (#2032).
"Nevow >= 0.11.1",
# * foolscap < 0.5.1 had a performance bug which spent O(N**2) CPU for
# transferring large mutable files of size N.
# * foolscap < 0.6 is incompatible with Twisted 10.2.0.
@ -53,6 +66,9 @@ install_requires = [
"pyasn1-modules >= 0.0.5", # service-identity depends on this
]
# We no longer have any setup dependencies.
setup_requires = []
# Includes some indirect dependencies, but does not include allmydata.
# These are in the order they should be listed by --version, etc.
package_imports = [
@ -101,63 +117,6 @@ if not hasattr(sys, 'frozen'):
package_imports.append(('setuptools', 'setuptools'))
# * On Linux we need at least Twisted 10.1.0 for inotify support
# used by the drop-upload frontend.
# * We also need Twisted 10.1.0 for the FTP frontend in order for
# Twisted's FTP server to support asynchronous close.
# * The SFTP frontend depends on Twisted 11.0.0 to fix the SSH server
# rekeying bug <https://twistedmatrix.com/trac/ticket/4395>
# * The FTP frontend depends on Twisted >= 11.1.0 for
# filepath.Permissions
#
# On Windows, Twisted >= 12.2.0 has a dependency on pywin32.
# Since pywin32 can only be installed manually, we fall back to
# requiring earlier versions of Twisted and Nevow if it is not
# already installed.
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2028>
#
# When the fallback is used we also need to work around the fact
# that Nevow imports itself when building, which causes Twisted
# and zope.interface to be imported; therefore, we need to set
# setup_requires to make sure that the versions of Twisted and
# zope.interface used at build time satisfy Nevow's requirements.
#
# In cases where this fallback isn't needed, we prefer Nevow >= 0.11.1
# which can be installed using pip, and Twisted >= 13.0.0 which
# Nevow 0.11.1 depends on. In this case we should *not* use the
# setup_requires hack, because if we do then the build will break
# when Twisted < 13.0.0 is already installed (even though it could
# have succeeded by building a later version under support/ ).
#
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2032>
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2249>
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2291>
# <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2286>
setup_requires = []
_use_old_Twisted_and_Nevow = False
if sys.platform == "win32":
try:
import win32api
[win32api]
except ImportError:
_use_old_Twisted_and_Nevow = True
if _use_old_Twisted_and_Nevow:
install_requires += [
"Twisted >= 11.1.0, <= 12.1.0",
"Nevow >= 0.9.33, <= 0.10",
]
setup_requires += [req for req in install_requires if req.startswith('Twisted')
or req.startswith('zope.interface')]
else:
install_requires += [
"Twisted >= 13.0.0",
"Nevow >= 0.11.1",
]
# * pyOpenSSL is required in order for foolscap to provide secure connections.
# Since foolscap doesn't reliably declare this dependency in a machine-readable
# way, we need to declare a dependency on pyOpenSSL ourselves. Tahoe-LAFS does


@ -12,28 +12,29 @@ from allmydata.util.dbutil import get_db, DBError
DAY = 24*60*60
MONTH = 30*DAY
SCHEMA_v1 = """
CREATE TABLE version -- added in v1
MAIN_SCHEMA = """
CREATE TABLE version
(
version INTEGER -- contains one row, set to 2
version INTEGER -- contains one row, set to %s
);
CREATE TABLE local_files -- added in v1
CREATE TABLE local_files
(
path VARCHAR(1024) PRIMARY KEY, -- index, this is an absolute UTF-8-encoded local filename
size INTEGER, -- os.stat(fn)[stat.ST_SIZE]
-- note that size is before mtime and ctime here, but after in function parameters
size INTEGER, -- os.stat(fn)[stat.ST_SIZE] (NULL if the file has been deleted)
mtime NUMBER, -- os.stat(fn)[stat.ST_MTIME]
ctime NUMBER, -- os.stat(fn)[stat.ST_CTIME]
fileid INTEGER
fileid INTEGER%s
);
CREATE TABLE caps -- added in v1
CREATE TABLE caps
(
fileid INTEGER PRIMARY KEY AUTOINCREMENT,
filecap VARCHAR(256) UNIQUE -- URI:CHK:...
);
CREATE TABLE last_upload -- added in v1
CREATE TABLE last_upload
(
fileid INTEGER PRIMARY KEY,
last_uploaded TIMESTAMP,
@ -42,6 +43,8 @@ CREATE TABLE last_upload -- added in v1
"""
SCHEMA_v1 = MAIN_SCHEMA % (1, "")
TABLE_DIRECTORY = """
CREATE TABLE directories -- added in v2
@ -54,7 +57,7 @@ CREATE TABLE directories -- added in v2
"""
-SCHEMA_v2 = SCHEMA_v1 + TABLE_DIRECTORY
+SCHEMA_v2 = MAIN_SCHEMA % (2, "") + TABLE_DIRECTORY
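# (Sketch of the parametrization above, using a hypothetical v3 for
# illustration: the first %s in MAIN_SCHEMA takes the schema version number,
# the second takes extra column text for local_files, so
#   SCHEMA_v3 = MAIN_SCHEMA % (3, ",\n last_checked TIMESTAMP") + TABLE_DIRECTORY
# would yield a schema whose version row is set to 3 and whose local_files
# table gains one column.)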
UPDATE_v1_to_v2 = TABLE_DIRECTORY + """
UPDATE version SET version=2;
@ -64,6 +67,7 @@ UPDATERS = {
2: UPDATE_v1_to_v2,
}
def get_backupdb(dbfile, stderr=sys.stderr,
create_version=(SCHEMA_v2, 2), just_create=False):
# Open or create the given backupdb file. The parent directory must
@ -71,7 +75,11 @@ def get_backupdb(dbfile, stderr=sys.stderr,
try:
(sqlite3, db) = get_db(dbfile, stderr, create_version, updaters=UPDATERS,
just_create=just_create, dbname="backupdb")
-        return BackupDB_v2(sqlite3, db)
+        if create_version[1] in (1, 2):
+            return BackupDB(sqlite3, db)
+        else:
+            print >>stderr, "invalid db schema version specified"
+            return None
except DBError, e:
print >>stderr, e
return None
@ -127,7 +135,7 @@ class DirectoryResult:
self.bdb.did_check_directory_healthy(self.dircap, results)
-class BackupDB_v2:
+class BackupDB:
VERSION = 2
NO_CHECK_BEFORE = 1*MONTH
ALWAYS_CHECK_AFTER = 2*MONTH
@ -137,6 +145,21 @@ class BackupDB_v2:
self.connection = connection
self.cursor = connection.cursor()
def check_file_db_exists(self, path):
"""I will tell you if a given file has an entry in my database or not
by returning True or False.
"""
c = self.cursor
c.execute("SELECT size,mtime,ctime,fileid"
" FROM local_files"
" WHERE path=?",
(path,))
row = self.cursor.fetchone()
if not row:
return False
else:
return True
def check_file(self, path, use_timestamps=True):
"""I will tell you if a given local file needs to be uploaded or not,
by looking in a database and seeing if I have a record of this file
@ -159,9 +182,9 @@ class BackupDB_v2:
is not healthy, please upload the file and call r.did_upload(filecap)
when you're done.
-        If use_timestamps=True (the default), I will compare ctime and mtime
+        If use_timestamps=True (the default), I will compare mtime and ctime
         of the local file against an entry in my database, and consider the
-        file to be unchanged if ctime, mtime, and filesize are all the same
+        file to be unchanged if mtime, ctime, and filesize are all the same
as the earlier version. If use_timestamps=False, I will not trust the
timestamps, so more files (perhaps all) will be marked as needing
upload. A future version of this database may hash the file to make
@ -173,10 +196,12 @@ class BackupDB_v2:
"""
path = abspath_expanduser_unicode(path)
+        # XXX consider using get_pathinfo
         s = os.stat(path)
         size = s[stat.ST_SIZE]
-        ctime = s[stat.ST_CTIME]
         mtime = s[stat.ST_MTIME]
+        ctime = s[stat.ST_CTIME]
         now = time.time()
c = self.cursor

View File

@ -129,7 +129,9 @@ class Client(node.Node, pollmixin.PollMixin):
}
def __init__(self, basedir="."):
#print "Client.__init__(%r)" % (basedir,)
node.Node.__init__(self, basedir)
self.connected_enough_d = defer.Deferred()
self.started_timestamp = time.time()
self.logSource="Client"
self.encoding_params = self.DEFAULT_ENCODING_PARAMETERS.copy()
@ -150,7 +152,7 @@ class Client(node.Node, pollmixin.PollMixin):
# ControlServer and Helper are attached after Tub startup
self.init_ftp_server()
self.init_sftp_server()
-        self.init_drop_uploader()
+        self.init_magic_folder()
# If the node sees an exit_trigger file, it will poll every second to see
# whether the file still exists, and what its mtime is. If the file does not
@ -344,7 +346,12 @@ class Client(node.Node, pollmixin.PollMixin):
def init_client_storage_broker(self):
# create a StorageFarmBroker object, for use by Uploader/Downloader
# (and everybody else who wants to use storage servers)
-        sb = storage_client.StorageFarmBroker(self.tub, permute_peers=True)
+        connection_threshold = min(self.encoding_params["k"],
+                                   self.encoding_params["happy"] + 1)
+        sb = storage_client.StorageFarmBroker(self.tub, True, connection_threshold,
+                                              self.connected_enough_d)
self.storage_broker = sb
# load static server specifications from tahoe.cfg, if any.
@ -486,22 +493,31 @@ class Client(node.Node, pollmixin.PollMixin):
sftp_portstr, pubkey_file, privkey_file)
s.setServiceParent(self)
-    def init_drop_uploader(self):
+    def init_magic_folder(self):
+        #print "init_magic_folder"
         if self.get_config("drop_upload", "enabled", False, boolean=True):
-            if self.get_config("drop_upload", "upload.dircap", None):
-                raise OldConfigOptionError("The [drop_upload]upload.dircap option is no longer supported; please "
-                                           "put the cap in a 'private/drop_upload_dircap' file, and delete this option.")
+            raise OldConfigOptionError("The [drop_upload] section must be renamed to [magic_folder].\n"
+                                       "See docs/frontends/magic-folder.rst for more information.")
-            upload_dircap = self.get_or_create_private_config("drop_upload_dircap")
-            local_dir_utf8 = self.get_config("drop_upload", "local.directory")
+        if self.get_config("magic_folder", "enabled", False, boolean=True):
+            #print "magic folder enabled"
+            upload_dircap = self.get_private_config("magic_folder_dircap")
+            collective_dircap = self.get_private_config("collective_dircap")
-            try:
-                from allmydata.frontends import drop_upload
-                s = drop_upload.DropUploader(self, upload_dircap, local_dir_utf8)
-                s.setServiceParent(self)
-                s.startService()
-            except Exception, e:
-                self.log("couldn't start drop-uploader: %r", args=(e,))
+            local_dir_config = self.get_config("magic_folder", "local.directory").decode("utf-8")
+            local_dir = abspath_expanduser_unicode(local_dir_config, base=self.basedir)
+            dbfile = os.path.join(self.basedir, "private", "magicfolderdb.sqlite")
+            dbfile = abspath_expanduser_unicode(dbfile)
+            from allmydata.frontends import magic_folder
+            s = magic_folder.MagicFolder(self, upload_dircap, collective_dircap, local_dir, dbfile)
+            s.setServiceParent(self)
+            s.startService()
+            # start processing the upload queue when we've connected to enough servers
+            self.connected_enough_d.addCallback(lambda ign: s.ready())
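            # (Aside, not part of the diff: a sketch of the configuration this
            # reads, with key names as used above and example values:
            #   [magic_folder]
            #   enabled = True
            #   local.directory = /home/alice/magic
            # plus caps in private/magic_folder_dircap and
            # private/collective_dircap, and state in private/magicfolderdb.sqlite.)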
def _check_exit_trigger(self, exit_trigger_file):
if os.path.exists(exit_trigger_file):

View File

@ -1,124 +0,0 @@
import sys
from twisted.internet import defer
from twisted.python.filepath import FilePath
from twisted.application import service
from foolscap.api import eventually
from allmydata.interfaces import IDirectoryNode
from allmydata.util.encodingutil import quote_output, get_filesystem_encoding
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.immutable.upload import FileName
class DropUploader(service.MultiService):
name = 'drop-upload'
def __init__(self, client, upload_dircap, local_dir_utf8, inotify=None):
service.MultiService.__init__(self)
try:
local_dir_u = abspath_expanduser_unicode(local_dir_utf8.decode('utf-8'))
if sys.platform == "win32":
local_dir = local_dir_u
else:
local_dir = local_dir_u.encode(get_filesystem_encoding())
except (UnicodeEncodeError, UnicodeDecodeError):
raise AssertionError("The '[drop_upload] local.directory' parameter %s was not valid UTF-8 or "
"could not be represented in the filesystem encoding."
% quote_output(local_dir_utf8))
self._client = client
self._stats_provider = client.stats_provider
self._convergence = client.convergence
self._local_path = FilePath(local_dir)
if inotify is None:
from twisted.internet import inotify
self._inotify = inotify
if not self._local_path.exists():
raise AssertionError("The '[drop_upload] local.directory' parameter was %s but there is no directory at that location." % quote_output(local_dir_u))
if not self._local_path.isdir():
raise AssertionError("The '[drop_upload] local.directory' parameter was %s but the thing at that location is not a directory." % quote_output(local_dir_u))
# TODO: allow a path rather than a cap URI.
self._parent = self._client.create_node_from_uri(upload_dircap)
if not IDirectoryNode.providedBy(self._parent):
raise AssertionError("The URI in 'private/drop_upload_dircap' does not refer to a directory.")
if self._parent.is_unknown() or self._parent.is_readonly():
raise AssertionError("The URI in 'private/drop_upload_dircap' is not a writecap to a directory.")
self._uploaded_callback = lambda ign: None
self._notifier = inotify.INotify()
# We don't watch for IN_CREATE, because that would cause us to read and upload a
# possibly-incomplete file before the application has closed it. There should always
# be an IN_CLOSE_WRITE after an IN_CREATE (I think).
# TODO: what about IN_MOVE_SELF or IN_UNMOUNT?
mask = inotify.IN_CLOSE_WRITE | inotify.IN_MOVED_TO | inotify.IN_ONLYDIR
self._notifier.watch(self._local_path, mask=mask, callbacks=[self._notify])
def startService(self):
service.MultiService.startService(self)
d = self._notifier.startReading()
self._stats_provider.count('drop_upload.dirs_monitored', 1)
return d
def _notify(self, opaque, path, events_mask):
self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
self._stats_provider.count('drop_upload.files_queued', 1)
eventually(self._process, opaque, path, events_mask)
def _process(self, opaque, path, events_mask):
d = defer.succeed(None)
# FIXME: if this already exists as a mutable file, we replace the directory entry,
# but we should probably modify the file (as the SFTP frontend does).
def _add_file(ign):
name = path.basename()
# on Windows the name is already Unicode
if not isinstance(name, unicode):
name = name.decode(get_filesystem_encoding())
u = FileName(path.path, self._convergence)
return self._parent.add_file(name, u)
d.addCallback(_add_file)
def _succeeded(ign):
self._stats_provider.count('drop_upload.files_queued', -1)
self._stats_provider.count('drop_upload.files_uploaded', 1)
def _failed(f):
self._stats_provider.count('drop_upload.files_queued', -1)
if path.exists():
self._log("drop-upload: %r failed to upload due to %r" % (path.path, f))
self._stats_provider.count('drop_upload.files_failed', 1)
return f
else:
self._log("drop-upload: notified file %r disappeared "
"(this is normal for temporary files): %r" % (path.path, f))
self._stats_provider.count('drop_upload.files_disappeared', 1)
return None
d.addCallbacks(_succeeded, _failed)
d.addBoth(self._uploaded_callback)
return d
def set_uploaded_callback(self, callback):
"""This sets a function that will be called after a file has been uploaded."""
self._uploaded_callback = callback
def finish(self, for_tests=False):
self._notifier.stopReading()
self._stats_provider.count('drop_upload.dirs_monitored', -1)
if for_tests and hasattr(self._notifier, 'wait_until_stopped'):
return self._notifier.wait_until_stopped()
else:
return defer.succeed(None)
def _log(self, msg):
self._client.log(msg)
#open("events", "ab+").write(msg)

View File

@ -0,0 +1,742 @@
import sys, os
import os.path
from collections import deque
import time
from twisted.internet import defer, reactor, task
from twisted.python.failure import Failure
from twisted.python import runtime
from twisted.application import service
from allmydata.util import fileutil
from allmydata.interfaces import IDirectoryNode
from allmydata.util import log
from allmydata.util.fileutil import precondition_abspath, get_pathinfo, ConflictError
from allmydata.util.assertutil import precondition, _assert
from allmydata.util.deferredutil import HookMixin
from allmydata.util.encodingutil import listdir_filepath, to_filepath, \
extend_filepath, unicode_from_filepath, unicode_segments_from, \
quote_filepath, quote_local_unicode_path, quote_output, FilenameEncodingError
from allmydata.immutable.upload import FileName, Data
from allmydata import magicfolderdb, magicpath
IN_EXCL_UNLINK = 0x04000000L
def get_inotify_module():
try:
if sys.platform == "win32":
from allmydata.windows import inotify
elif runtime.platform.supportsINotify():
from twisted.internet import inotify
else:
raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
"This currently requires Linux or Windows.")
return inotify
except (ImportError, AttributeError) as e:
log.msg(e)
if sys.platform == "win32":
raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
"Windows support requires at least Vista, and has only been tested on Windows 7.")
raise
class MagicFolder(service.MultiService):
name = 'magic-folder'
def __init__(self, client, upload_dircap, collective_dircap, local_path_u, dbfile,
pending_delay=1.0, clock=reactor):
precondition_abspath(local_path_u)
service.MultiService.__init__(self)
db = magicfolderdb.get_magicfolderdb(dbfile, create_version=(magicfolderdb.SCHEMA_v1, 1))
        if db is None:
            # __init__ cannot usefully return a value, so raise instead of
            # returning a Failure that the caller would never see.
            raise Exception('ERROR: Unable to load magic folder db.')
# for tests
self._client = client
self._db = db
self.is_ready = False
upload_dirnode = self._client.create_node_from_uri(upload_dircap)
collective_dirnode = self._client.create_node_from_uri(collective_dircap)
self.uploader = Uploader(client, local_path_u, db, upload_dirnode, pending_delay, clock)
self.downloader = Downloader(client, local_path_u, db, collective_dirnode, upload_dirnode.get_readonly_uri(), clock)
def startService(self):
# TODO: why is this being called more than once?
if self.running:
return defer.succeed(None)
print "%r.startService" % (self,)
service.MultiService.startService(self)
return self.uploader.start_monitoring()
def ready(self):
"""ready is used to signal us to start
processing the upload and download items...
"""
self.is_ready = True
d = self.uploader.start_scanning()
d2 = self.downloader.start_scanning()
d.addCallback(lambda ign: d2)
return d
def finish(self):
print "finish"
d = self.uploader.stop()
d2 = self.downloader.stop()
d.addCallback(lambda ign: d2)
return d
def remove_service(self):
return service.MultiService.disownServiceParent(self)
class QueueMixin(HookMixin):
def __init__(self, client, local_path_u, db, name, clock):
self._client = client
self._local_path_u = local_path_u
self._local_filepath = to_filepath(local_path_u)
self._db = db
self._name = name
self._clock = clock
self._hooks = {'processed': None, 'started': None}
self.started_d = self.set_hook('started')
if not self._local_filepath.exists():
raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
"but there is no directory at that location."
% quote_local_unicode_path(self._local_path_u))
if not self._local_filepath.isdir():
raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
"but the thing at that location is not a directory."
% quote_local_unicode_path(self._local_path_u))
self._deque = deque()
self._lazy_tail = defer.succeed(None)
self._pending = set()
self._stopped = False
self._turn_delay = 0
def _get_filepath(self, relpath_u):
self._log("_get_filepath(%r)" % (relpath_u,))
return extend_filepath(self._local_filepath, relpath_u.split(u"/"))
def _get_relpath(self, filepath):
self._log("_get_relpath(%r)" % (filepath,))
segments = unicode_segments_from(filepath, self._local_filepath)
self._log("segments = %r" % (segments,))
return u"/".join(segments)
def _count(self, counter_name, delta=1):
ctr = 'magic_folder.%s.%s' % (self._name, counter_name)
self._log("%s += %r" % (counter_name, delta))
self._client.stats_provider.count(ctr, delta)
def _logcb(self, res, msg):
self._log("%s: %r" % (msg, res))
return res
def _log(self, msg):
s = "Magic Folder %s %s: %s" % (quote_output(self._client.nickname), self._name, msg)
self._client.log(s)
print s
#open("events", "ab+").write(msg)
def _append_to_deque(self, relpath_u):
self._log("_append_to_deque(%r)" % (relpath_u,))
if relpath_u in self._pending or magicpath.should_ignore_file(relpath_u):
return
self._deque.append(relpath_u)
self._pending.add(relpath_u)
self._count('objects_queued')
if self.is_ready:
self._clock.callLater(0, self._turn_deque)
def _turn_deque(self):
self._log("_turn_deque")
if self._stopped:
self._log("stopped")
return
try:
item = self._deque.pop()
self._log("popped %r" % (item,))
self._count('objects_queued', -1)
except IndexError:
self._log("deque is now empty")
self._lazy_tail.addCallback(lambda ign: self._when_queue_is_empty())
else:
self._lazy_tail.addCallback(lambda ign: self._process(item))
self._lazy_tail.addBoth(self._call_hook, 'processed')
self._lazy_tail.addErrback(log.err)
self._lazy_tail.addCallback(lambda ign: task.deferLater(self._clock, self._turn_delay, self._turn_deque))
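    # (Note on the queue discipline above: work is chained onto self._lazy_tail
    # so that items are processed strictly one at a time; each turn schedules
    # the next via deferLater on self._clock, the reactor by default (tests can
    # pass a fake clock), after self._turn_delay seconds.)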
class Uploader(QueueMixin):
def __init__(self, client, local_path_u, db, upload_dirnode, pending_delay, clock):
QueueMixin.__init__(self, client, local_path_u, db, 'uploader', clock)
self.is_ready = False
if not IDirectoryNode.providedBy(upload_dirnode):
raise AssertionError("The URI in '%s' does not refer to a directory."
% os.path.join('private', 'magic_folder_dircap'))
if upload_dirnode.is_unknown() or upload_dirnode.is_readonly():
raise AssertionError("The URI in '%s' is not a writecap to a directory."
% os.path.join('private', 'magic_folder_dircap'))
self._upload_dirnode = upload_dirnode
self._inotify = get_inotify_module()
self._notifier = self._inotify.INotify()
if hasattr(self._notifier, 'set_pending_delay'):
self._notifier.set_pending_delay(pending_delay)
# TODO: what about IN_MOVE_SELF, IN_MOVED_FROM, or IN_UNMOUNT?
#
self.mask = ( self._inotify.IN_CREATE
| self._inotify.IN_CLOSE_WRITE
| self._inotify.IN_MOVED_TO
| self._inotify.IN_MOVED_FROM
| self._inotify.IN_DELETE
| self._inotify.IN_ONLYDIR
| IN_EXCL_UNLINK
)
self._notifier.watch(self._local_filepath, mask=self.mask, callbacks=[self._notify],
recursive=True)
def start_monitoring(self):
self._log("start_monitoring")
d = defer.succeed(None)
d.addCallback(lambda ign: self._notifier.startReading())
d.addCallback(lambda ign: self._count('dirs_monitored'))
d.addBoth(self._call_hook, 'started')
return d
def stop(self):
self._log("stop")
self._notifier.stopReading()
self._count('dirs_monitored', -1)
if hasattr(self._notifier, 'wait_until_stopped'):
d = self._notifier.wait_until_stopped()
else:
d = defer.succeed(None)
d.addCallback(lambda ign: self._lazy_tail)
return d
def start_scanning(self):
self._log("start_scanning")
self.is_ready = True
self._pending = self._db.get_all_relpaths()
self._log("all_files %r" % (self._pending))
d = self._scan(u"")
def _add_pending(ign):
# This adds all of the files that were in the db but not already processed
# (normally because they have been deleted on disk).
self._log("adding %r" % (self._pending))
self._deque.extend(self._pending)
d.addCallback(_add_pending)
d.addCallback(lambda ign: self._turn_deque())
return d
def _scan(self, reldir_u):
self._log("scan %r" % (reldir_u,))
fp = self._get_filepath(reldir_u)
try:
children = listdir_filepath(fp)
except EnvironmentError:
raise Exception("WARNING: magic folder: permission denied on directory %s"
% quote_filepath(fp))
except FilenameEncodingError:
raise Exception("WARNING: magic folder: could not list directory %s due to a filename encoding error"
% quote_filepath(fp))
d = defer.succeed(None)
for child in children:
_assert(isinstance(child, unicode), child=child)
d.addCallback(lambda ign, child=child:
("%s/%s" % (reldir_u, child) if reldir_u else child))
def _add_pending(relpath_u):
if magicpath.should_ignore_file(relpath_u):
return None
self._pending.add(relpath_u)
return relpath_u
d.addCallback(_add_pending)
# This call to _process doesn't go through the deque, and probably should.
d.addCallback(self._process)
d.addBoth(self._call_hook, 'processed')
d.addErrback(log.err)
return d
def _notify(self, opaque, path, events_mask):
self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
# We filter out IN_CREATE events not associated with a directory.
# Acting on IN_CREATE for files could cause us to read and upload
# a possibly-incomplete file before the application has closed it.
# There should always be an IN_CLOSE_WRITE after an IN_CREATE, I think.
# It isn't possible to avoid watching for IN_CREATE at all, because
# it is the only event notified for a directory creation.
if ((events_mask & self._inotify.IN_CREATE) != 0 and
(events_mask & self._inotify.IN_ISDIR) == 0):
self._log("ignoring inotify event for creation of file %r\n" % (path,))
return
relpath_u = self._get_relpath(path)
self._append_to_deque(relpath_u)
def _when_queue_is_empty(self):
return defer.succeed(None)
def _process(self, relpath_u):
self._log("_process(%r)" % (relpath_u,))
if relpath_u is None:
return
precondition(isinstance(relpath_u, unicode), relpath_u)
precondition(not relpath_u.endswith(u'/'), relpath_u)
d = defer.succeed(None)
def _maybe_upload(val, now=None):
if now is None:
now = time.time()
fp = self._get_filepath(relpath_u)
pathinfo = get_pathinfo(unicode_from_filepath(fp))
self._log("pending = %r, about to remove %r" % (self._pending, relpath_u))
self._pending.remove(relpath_u)
encoded_path_u = magicpath.path2magic(relpath_u)
if not pathinfo.exists:
# FIXME merge this with the 'isfile' case.
self._log("notified object %s disappeared (this is normal)" % quote_filepath(fp))
self._count('objects_disappeared')
if not self._db.check_file_db_exists(relpath_u):
return None
last_downloaded_timestamp = now
last_downloaded_uri = self._db.get_last_downloaded_uri(relpath_u)
current_version = self._db.get_local_file_version(relpath_u)
if current_version is None:
new_version = 0
elif self._db.is_new_file(pathinfo, relpath_u):
new_version = current_version + 1
else:
self._log("Not uploading %r" % (relpath_u,))
self._count('objects_not_uploaded')
return
metadata = { 'version': new_version,
'deleted': True,
'last_downloaded_timestamp': last_downloaded_timestamp }
if last_downloaded_uri is not None:
metadata['last_downloaded_uri'] = last_downloaded_uri
empty_uploadable = Data("", self._client.convergence)
d2 = self._upload_dirnode.add_file(encoded_path_u, empty_uploadable,
metadata=metadata, overwrite=True)
def _add_db_entry(filenode):
filecap = filenode.get_uri()
self._db.did_upload_version(relpath_u, new_version, filecap,
last_downloaded_uri, last_downloaded_timestamp, pathinfo)
self._count('files_uploaded')
d2.addCallback(_add_db_entry)
return d2
elif pathinfo.islink:
self.warn("WARNING: cannot upload symlink %s" % quote_filepath(fp))
return None
elif pathinfo.isdir:
if not getattr(self._notifier, 'recursive_includes_new_subdirectories', False):
self._notifier.watch(fp, mask=self.mask, callbacks=[self._notify], recursive=True)
uploadable = Data("", self._client.convergence)
encoded_path_u += magicpath.path2magic(u"/")
self._log("encoded_path_u = %r" % (encoded_path_u,))
upload_d = self._upload_dirnode.add_file(encoded_path_u, uploadable, metadata={"version":0}, overwrite=True)
def _succeeded(ign):
self._log("created subdirectory %r" % (relpath_u,))
self._count('directories_created')
def _failed(f):
self._log("failed to create subdirectory %r" % (relpath_u,))
return f
upload_d.addCallbacks(_succeeded, _failed)
upload_d.addCallback(lambda ign: self._scan(relpath_u))
return upload_d
elif pathinfo.isfile:
last_downloaded_uri = self._db.get_last_downloaded_uri(relpath_u)
last_downloaded_timestamp = now
current_version = self._db.get_local_file_version(relpath_u)
if current_version is None:
new_version = 0
elif self._db.is_new_file(pathinfo, relpath_u):
new_version = current_version + 1
else:
self._log("Not uploading %r" % (relpath_u,))
self._count('objects_not_uploaded')
return None
metadata = { 'version': new_version,
'last_downloaded_timestamp': last_downloaded_timestamp }
if last_downloaded_uri is not None:
metadata['last_downloaded_uri'] = last_downloaded_uri
uploadable = FileName(unicode_from_filepath(fp), self._client.convergence)
d2 = self._upload_dirnode.add_file(encoded_path_u, uploadable,
metadata=metadata, overwrite=True)
def _add_db_entry(filenode):
filecap = filenode.get_uri()
last_downloaded_uri = metadata.get('last_downloaded_uri', None)
self._db.did_upload_version(relpath_u, new_version, filecap,
last_downloaded_uri, last_downloaded_timestamp, pathinfo)
self._count('files_uploaded')
d2.addCallback(_add_db_entry)
return d2
else:
self.warn("WARNING: cannot process special file %s" % quote_filepath(fp))
return None
d.addCallback(_maybe_upload)
def _succeeded(res):
self._count('objects_succeeded')
return res
def _failed(f):
self._count('objects_failed')
self._log("%s while processing %r" % (f, relpath_u))
return f
d.addCallbacks(_succeeded, _failed)
return d
def _get_metadata(self, encoded_path_u):
try:
d = self._upload_dirnode.get_metadata_for(encoded_path_u)
except KeyError:
return Failure()
return d
def _get_filenode(self, encoded_path_u):
try:
d = self._upload_dirnode.get(encoded_path_u)
except KeyError:
return Failure()
return d
class WriteFileMixin(object):
FUDGE_SECONDS = 10.0
def _get_conflicted_filename(self, abspath_u):
return abspath_u + u".conflict"
def _write_downloaded_file(self, abspath_u, file_contents, is_conflict=False, now=None):
self._log("_write_downloaded_file(%r, <%d bytes>, is_conflict=%r, now=%r)"
% (abspath_u, len(file_contents), is_conflict, now))
# 1. Write a temporary file, say .foo.tmp.
# 2. is_conflict determines whether this is an overwrite or a conflict.
# 3. Set the mtime of the replacement file to be T seconds before the
# current local time.
# 4. Perform a file replacement with backup filename foo.backup,
# replaced file foo, and replacement file .foo.tmp. If any step of
# this operation fails, reclassify as a conflict and stop.
#
# Returns the path of the destination file.
precondition_abspath(abspath_u)
replacement_path_u = abspath_u + u".tmp" # FIXME more unique
backup_path_u = abspath_u + u".backup"
if now is None:
now = time.time()
# ensure parent directory exists
head, tail = os.path.split(abspath_u)
mode = 0777 # XXX
fileutil.make_dirs(head, mode)
fileutil.write(replacement_path_u, file_contents)
os.utime(replacement_path_u, (now, now - self.FUDGE_SECONDS))
if is_conflict:
print "0x00 ------------ <><> is conflict; calling _rename_conflicted_file... %r %r" % (abspath_u, replacement_path_u)
return self._rename_conflicted_file(abspath_u, replacement_path_u)
else:
try:
fileutil.replace_file(abspath_u, replacement_path_u, backup_path_u)
return abspath_u
except fileutil.ConflictError:
return self._rename_conflicted_file(abspath_u, replacement_path_u)
def _rename_conflicted_file(self, abspath_u, replacement_path_u):
self._log("_rename_conflicted_file(%r, %r)" % (abspath_u, replacement_path_u))
conflict_path_u = self._get_conflicted_filename(abspath_u)
print "XXX rename %r %r" % (replacement_path_u, conflict_path_u)
if os.path.isfile(replacement_path_u):
print "%r exists" % (replacement_path_u,)
if os.path.isfile(conflict_path_u):
print "%r exists" % (conflict_path_u,)
fileutil.rename_no_overwrite(replacement_path_u, conflict_path_u)
return conflict_path_u
def _rename_deleted_file(self, abspath_u):
self._log('renaming deleted file to backup: %s' % (abspath_u,))
try:
fileutil.rename_no_overwrite(abspath_u, abspath_u + u'.backup')
except IOError:
# XXX is this the correct error?
self._log("Already gone: '%s'" % (abspath_u,))
return abspath_u
class Downloader(QueueMixin, WriteFileMixin):
REMOTE_SCAN_INTERVAL = 3 # facilitates tests
def __init__(self, client, local_path_u, db, collective_dirnode, upload_readonly_dircap, clock):
QueueMixin.__init__(self, client, local_path_u, db, 'downloader', clock)
if not IDirectoryNode.providedBy(collective_dirnode):
raise AssertionError("The URI in '%s' does not refer to a directory."
% os.path.join('private', 'collective_dircap'))
if collective_dirnode.is_unknown() or not collective_dirnode.is_readonly():
raise AssertionError("The URI in '%s' is not a readonly cap to a directory."
% os.path.join('private', 'collective_dircap'))
self._collective_dirnode = collective_dirnode
self._upload_readonly_dircap = upload_readonly_dircap
self._turn_delay = self.REMOTE_SCAN_INTERVAL
self._download_scan_batch = {} # path -> [(filenode, metadata)]
def start_scanning(self):
self._log("start_scanning")
files = self._db.get_all_relpaths()
self._log("all files %s" % files)
d = self._scan_remote_collective()
d.addBoth(self._logcb, "after _scan_remote_collective 0")
self._turn_deque()
return d
def stop(self):
self._stopped = True
d = defer.succeed(None)
d.addCallback(lambda ign: self._lazy_tail)
return d
def _should_download(self, relpath_u, remote_version):
"""
_should_download returns a bool indicating whether or not a remote object should be downloaded.
We check the remote metadata version against our magic-folder db version number;
latest version wins.
"""
self._log("_should_download(%r, %r)" % (relpath_u, remote_version))
if magicpath.should_ignore_file(relpath_u):
self._log("nope")
return False
self._log("yep")
v = self._db.get_local_file_version(relpath_u)
self._log("v = %r" % (v,))
return (v is None or v < remote_version)
def _get_local_latest(self, relpath_u):
"""
        _get_local_latest takes a unicode path string and checks whether a file
        exists at that path in the local magic folder; if not, it returns None,
        otherwise it returns the version number recorded for that file in our magic-folder db.
"""
if not self._get_filepath(relpath_u).exists():
return None
return self._db.get_local_file_version(relpath_u)
def _get_collective_latest_file(self, filename):
"""
        _get_collective_latest_file takes a file path pointing to a file managed by
        magic-folder and returns a Deferred that fires with a 2-tuple containing a
        file node and metadata for the latest version of the file located in the
        magic-folder collective directory.
"""
collective_dirmap_d = self._collective_dirnode.list()
def scan_collective(result):
list_of_deferreds = []
for dir_name in result.keys():
# XXX make sure it's a directory
d = defer.succeed(None)
d.addCallback(lambda x, dir_name=dir_name: result[dir_name][0].get_child_and_metadata(filename))
list_of_deferreds.append(d)
deferList = defer.DeferredList(list_of_deferreds, consumeErrors=True)
return deferList
collective_dirmap_d.addCallback(scan_collective)
def highest_version(deferredList):
max_version = 0
metadata = None
node = None
for success, result in deferredList:
if success:
if result[1]['version'] > max_version:
node, metadata = result
max_version = result[1]['version']
return node, metadata
collective_dirmap_d.addCallback(highest_version)
return collective_dirmap_d
def _append_to_batch(self, name, file_node, metadata):
if self._download_scan_batch.has_key(name):
self._download_scan_batch[name] += [(file_node, metadata)]
else:
self._download_scan_batch[name] = [(file_node, metadata)]
def _scan_remote(self, nickname, dirnode):
self._log("_scan_remote nickname %r" % (nickname,))
d = dirnode.list()
def scan_listing(listing_map):
for encoded_relpath_u in listing_map.keys():
relpath_u = magicpath.magic2path(encoded_relpath_u)
self._log("found %r" % (relpath_u,))
file_node, metadata = listing_map[encoded_relpath_u]
local_version = self._get_local_latest(relpath_u)
remote_version = metadata.get('version', None)
self._log("%r has local version %r, remote version %r" % (relpath_u, local_version, remote_version))
if local_version is None or remote_version is None or local_version < remote_version:
self._log("%r added to download queue" % (relpath_u,))
self._append_to_batch(relpath_u, file_node, metadata)
d.addCallback(scan_listing)
d.addBoth(self._logcb, "end of _scan_remote")
return d
def _scan_remote_collective(self):
self._log("_scan_remote_collective")
self._download_scan_batch = {} # XXX
d = self._collective_dirnode.list()
def scan_collective(dirmap):
d2 = defer.succeed(None)
for dir_name in dirmap:
(dirnode, metadata) = dirmap[dir_name]
if dirnode.get_readonly_uri() != self._upload_readonly_dircap:
d2.addCallback(lambda ign, dir_name=dir_name: self._scan_remote(dir_name, dirnode))
                def _err(f, dir_name=dir_name):
                    # bind dir_name at definition time; otherwise every errback
                    # would log the last value of the loop variable
                    self._log("failed to scan DMD for client %r: %s" % (dir_name, f))
                    # XXX what should we do to make this failure more visible to users?
                d2.addErrback(_err)
return d2
d.addCallback(scan_collective)
d.addCallback(self._filter_scan_batch)
d.addCallback(self._add_batch_to_download_queue)
return d
def _add_batch_to_download_queue(self, result):
self._log("result = %r" % (result,))
self._log("deque = %r" % (self._deque,))
self._deque.extend(result)
self._log("deque after = %r" % (self._deque,))
self._count('objects_queued', len(result))
self._log("pending = %r" % (self._pending,))
self._pending.update(map(lambda x: x[0], result))
self._log("pending after = %r" % (self._pending,))
def _filter_scan_batch(self, result):
self._log("_filter_scan_batch")
extension = [] # consider whether this should be a dict
for relpath_u in self._download_scan_batch.keys():
if relpath_u in self._pending:
continue
file_node, metadata = max(self._download_scan_batch[relpath_u], key=lambda x: x[1]['version'])
if self._should_download(relpath_u, metadata['version']):
extension += [(relpath_u, file_node, metadata)]
else:
self._log("Excluding %r" % (relpath_u,))
self._count('objects_excluded')
self._call_hook(None, 'processed')
return extension
def _when_queue_is_empty(self):
d = task.deferLater(self._clock, self._turn_delay, self._scan_remote_collective)
d.addBoth(self._logcb, "after _scan_remote_collective 1")
d.addCallback(lambda ign: self._turn_deque())
return d
def _process(self, item, now=None):
self._log("_process(%r)" % (item,))
if now is None:
now = time.time()
(relpath_u, file_node, metadata) = item
fp = self._get_filepath(relpath_u)
abspath_u = unicode_from_filepath(fp)
conflict_path_u = self._get_conflicted_filename(abspath_u)
d = defer.succeed(None)
def do_update_db(written_abspath_u):
filecap = file_node.get_uri()
last_uploaded_uri = metadata.get('last_uploaded_uri', None)
last_downloaded_uri = filecap
last_downloaded_timestamp = now
written_pathinfo = get_pathinfo(written_abspath_u)
if not written_pathinfo.exists and not metadata.get('deleted', False):
raise Exception("downloaded object %s disappeared" % quote_local_unicode_path(written_abspath_u))
self._db.did_upload_version(relpath_u, metadata['version'], last_uploaded_uri,
last_downloaded_uri, last_downloaded_timestamp, written_pathinfo)
self._count('objects_downloaded')
def failed(f):
self._log("download failed: %s" % (str(f),))
self._count('objects_failed')
return f
if os.path.isfile(conflict_path_u):
def fail(res):
raise ConflictError("download failed: already conflicted: %r" % (relpath_u,))
d.addCallback(fail)
else:
is_conflict = False
if self._db.check_file_db_exists(relpath_u):
dmd_last_downloaded_uri = metadata.get('last_downloaded_uri', None)
local_last_downloaded_uri = self._db.get_last_downloaded_uri(relpath_u)
print "metadata %r" % (metadata,)
print "<<<<--- if %r != %r" % (dmd_last_downloaded_uri, local_last_downloaded_uri)
if dmd_last_downloaded_uri is not None and local_last_downloaded_uri is not None:
if dmd_last_downloaded_uri != local_last_downloaded_uri:
is_conflict = True
self._count('objects_conflicted')
#dmd_last_uploaded_uri = metadata.get('last_uploaded_uri', None)
#local_last_uploaded_uri = ...
if relpath_u.endswith(u"/"):
if metadata.get('deleted', False):
self._log("rmdir(%r) ignored" % (abspath_u,))
else:
self._log("mkdir(%r)" % (abspath_u,))
d.addCallback(lambda ign: fileutil.make_dirs(abspath_u))
d.addCallback(lambda ign: abspath_u)
else:
if metadata.get('deleted', False):
d.addCallback(lambda ign: self._rename_deleted_file(abspath_u))
else:
d.addCallback(lambda ign: file_node.download_best_version())
d.addCallback(lambda contents: self._write_downloaded_file(abspath_u, contents,
is_conflict=is_conflict))
d.addCallbacks(do_update_db, failed)
def remove_from_pending(res):
self._pending.remove(relpath_u)
return res
d.addBoth(remove_from_pending)
def trap_conflicts(f):
f.trap(ConflictError)
return None
d.addErrback(trap_conflicts)
return d

View File

@ -0,0 +1,140 @@
import sys
from allmydata.util.dbutil import get_db, DBError
# magic-folder db schema version 1
SCHEMA_v1 = """
CREATE TABLE version
(
version INTEGER -- contains one row, set to 1
);
CREATE TABLE local_files
(
path VARCHAR(1024) PRIMARY KEY, -- UTF-8 filename relative to local magic folder dir
-- note that size is before mtime and ctime here, but after in function parameters
size INTEGER, -- ST_SIZE, or NULL if the file has been deleted
mtime NUMBER, -- ST_MTIME
ctime NUMBER, -- ST_CTIME
version INTEGER,
last_uploaded_uri VARCHAR(256) UNIQUE, -- URI:CHK:...
last_downloaded_uri VARCHAR(256) UNIQUE, -- URI:CHK:...
last_downloaded_timestamp TIMESTAMP
);
"""
def get_magicfolderdb(dbfile, stderr=sys.stderr,
create_version=(SCHEMA_v1, 1), just_create=False):
    # Open or create the given magicfolderdb file. The parent directory must
# exist.
try:
(sqlite3, db) = get_db(dbfile, stderr, create_version,
just_create=just_create, dbname="magicfolderdb")
if create_version[1] in (1, 2):
return MagicFolderDB(sqlite3, db)
else:
print >>stderr, "invalid magicfolderdb schema version specified"
return None
except DBError, e:
print >>stderr, e
return None
class MagicFolderDB(object):
VERSION = 1
def __init__(self, sqlite_module, connection):
self.sqlite_module = sqlite_module
self.connection = connection
self.cursor = connection.cursor()
def check_file_db_exists(self, path):
"""I will tell you if a given file has an entry in my database or not
by returning True or False.
"""
c = self.cursor
c.execute("SELECT size,mtime,ctime"
" FROM local_files"
" WHERE path=?",
(path,))
row = self.cursor.fetchone()
if not row:
return False
else:
return True
def get_all_relpaths(self):
"""
        Retrieve a set of all relpaths of files that have ever had an entry in the
        magic folder db (i.e. that have been uploaded or downloaded at least once).
"""
self.cursor.execute("SELECT path FROM local_files")
rows = self.cursor.fetchall()
return set([r[0] for r in rows])
def get_last_downloaded_uri(self, relpath_u):
"""
Return the last downloaded uri recorded in the magic folder db.
If none are found then return None.
"""
c = self.cursor
c.execute("SELECT last_downloaded_uri"
" FROM local_files"
" WHERE path=?",
(relpath_u,))
row = self.cursor.fetchone()
if not row:
return None
else:
return row[0]
def get_local_file_version(self, relpath_u):
"""
Return the version of a local file tracked by our magic folder db.
If no db entry is found then return None.
"""
c = self.cursor
c.execute("SELECT version"
" FROM local_files"
" WHERE path=?",
(relpath_u,))
row = self.cursor.fetchone()
if not row:
return None
else:
return row[0]
def did_upload_version(self, relpath_u, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, pathinfo):
print "%r.did_upload_version(%r, %r, %r, %r, %r, %r)" % (self, relpath_u, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, pathinfo)
try:
print "insert"
self.cursor.execute("INSERT INTO local_files VALUES (?,?,?,?,?,?,?,?)",
(relpath_u, pathinfo.size, pathinfo.mtime, pathinfo.ctime, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp))
except (self.sqlite_module.IntegrityError, self.sqlite_module.OperationalError):
print "err... update"
self.cursor.execute("UPDATE local_files"
" SET size=?, mtime=?, ctime=?, version=?, last_uploaded_uri=?, last_downloaded_uri=?, last_downloaded_timestamp=?"
" WHERE path=?",
(pathinfo.size, pathinfo.mtime, pathinfo.ctime, version, last_uploaded_uri, last_downloaded_uri, last_downloaded_timestamp, relpath_u))
self.connection.commit()
print "committed"
def is_new_file(self, pathinfo, relpath_u):
"""
Returns true if the file's current pathinfo (size, mtime, and ctime) has
changed from the pathinfo previously stored in the db.
"""
c = self.cursor
c.execute("SELECT size, mtime, ctime"
" FROM local_files"
" WHERE path=?",
(relpath_u,))
row = self.cursor.fetchone()
if not row:
return True
if not pathinfo.exists and row[0] is None:
return False
return (pathinfo.size, pathinfo.mtime, pathinfo.ctime) != row

View File

@ -0,0 +1,33 @@
import re
import os.path
from allmydata.util.assertutil import precondition, _assert
def path2magic(path):
return re.sub(ur'[/@]', lambda m: {u'/': u'@_', u'@': u'@@'}[m.group(0)], path)
def magic2path(path):
return re.sub(ur'@[_@]', lambda m: {u'@_': u'/', u'@@': u'@'}[m.group(0)], path)
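# For example (a sketch of the encoding above): '/' maps to '@_' and '@' maps
# to '@@', so the transformation round-trips:
#   path2magic(u"foo/b@r") == u"foo@_b@@r"
#   magic2path(u"foo@_b@@r") == u"foo/b@r"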
IGNORE_SUFFIXES = [u'.backup', u'.tmp', u'.conflicted']
IGNORE_PREFIXES = [u'.']
def should_ignore_file(path_u):
precondition(isinstance(path_u, unicode), path_u=path_u)
for suffix in IGNORE_SUFFIXES:
if path_u.endswith(suffix):
return True
while path_u != u"":
oldpath_u = path_u
path_u, tail_u = os.path.split(path_u)
if tail_u.startswith(u"."):
return True
if path_u == oldpath_u:
return True # the path was absolute
_assert(len(path_u) < len(oldpath_u), path_u=path_u, oldpath_u=oldpath_u)
return False
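# Some illustrative cases (a sketch, given the rules above):
#   should_ignore_file(u"a/b.tmp")     -> True   (ignored suffix)
#   should_ignore_file(u"a/.hidden/c") -> True   (a path segment starts with '.')
#   should_ignore_file(u"/etc/passwd") -> True   (the path is absolute)
#   should_ignore_file(u"a/b.txt")     -> False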

View File

@ -381,7 +381,7 @@ class Node(service.MultiService):
self.tub.setOption("log-gatherer-furl", lgfurl)
self.tub.setOption("log-gatherer-furlfile",
os.path.join(self.basedir, "log_gatherer.furl"))
self.tub.setOption("bridge-twisted-logs", True)
#self.tub.setOption("bridge-twisted-logs", True)
incident_dir = os.path.join(self.basedir, "logs", "incidents")
foolscap.logging.log.setLogDir(incident_dir.encode(get_filesystem_encoding()))

View File

@ -57,9 +57,14 @@ class BasedirOptions(BaseOptions):
]
def parseArgs(self, basedir=None):
-        if self.parent['node-directory'] and self['basedir']:
+        # This finds the node-directory option correctly even if we are in a subcommand.
+        root = self.parent
+        while root.parent is not None:
+            root = root.parent
+        if root['node-directory'] and self['basedir']:
             raise usage.UsageError("The --node-directory (or -d) and --basedir (or -C) options cannot both be used.")
-        if self.parent['node-directory'] and basedir:
+        if root['node-directory'] and basedir:
raise usage.UsageError("The --node-directory (or -d) option and a basedir argument cannot both be used.")
if self['basedir'] and basedir:
raise usage.UsageError("The --basedir (or -C) option and a basedir argument cannot both be used.")
@ -68,13 +73,14 @@ class BasedirOptions(BaseOptions):
b = argv_to_abspath(basedir)
elif self['basedir']:
b = argv_to_abspath(self['basedir'])
-        elif self.parent['node-directory']:
-            b = argv_to_abspath(self.parent['node-directory'])
+        elif root['node-directory']:
+            b = argv_to_abspath(root['node-directory'])
elif self.default_nodedir:
b = self.default_nodedir
else:
raise usage.UsageError("No default basedir available, you must provide one with --node-directory, --basedir, or a basedir argument")
         self['basedir'] = b
+        self['node-directory'] = b
def postOptions(self):
if not self['basedir']:

View File

@ -3,7 +3,7 @@ import os, sys
from allmydata.scripts.common import BasedirOptions, NoDefaultBasedirOptions
from allmydata.scripts.default_nodedir import _default_nodedir
from allmydata.util.assertutil import precondition
-from allmydata.util.encodingutil import listdir_unicode, argv_to_unicode, quote_output, quote_local_unicode_path
+from allmydata.util.encodingutil import listdir_unicode, argv_to_unicode, quote_local_unicode_path
import allmydata
class _CreateBaseOptions(BasedirOptions):
@ -109,7 +109,7 @@ def create_node(config, out=sys.stdout, err=sys.stderr):
if os.path.exists(basedir):
if listdir_unicode(basedir):
print >>err, "The base directory %s is not empty." % quote_output(basedir)
print >>err, "The base directory %s is not empty." % quote_local_unicode_path(basedir)
print >>err, "To avoid clobbering anything, I am going to quit now."
print >>err, "Please use a different directory, or empty this one."
return -1
@ -153,19 +153,19 @@ def create_node(config, out=sys.stdout, err=sys.stderr):
c.write("enabled = false\n")
c.write("\n")
c.write("[drop_upload]\n")
c.write("[magic_folder]\n")
c.write("# Shall this node automatically upload files created or modified in a local directory?\n")
c.write("enabled = false\n")
c.write("#enabled = false\n")
c.write("# To specify the target of uploads, a mutable directory writecap URI must be placed\n"
"# in 'private/drop_upload_dircap'.\n")
c.write("local.directory = ~/drop_upload\n")
"# in '%s'.\n" % os.path.join('private', 'magic_folder_dircap'))
c.write("#local.directory = \n")
c.write("\n")
c.close()
from allmydata.util import fileutil
fileutil.make_dirs(os.path.join(basedir, "private"), 0700)
print >>out, "Node created in %s" % quote_output(basedir)
print >>out, "Node created in %s" % quote_local_unicode_path(basedir)
if not config.get("introducer", ""):
print >>out, " Please set [client]introducer.furl= in tahoe.cfg!"
print >>out, " The node cannot connect to a grid without it."
@ -185,7 +185,7 @@ def create_introducer(config, out=sys.stdout, err=sys.stderr):
if os.path.exists(basedir):
if listdir_unicode(basedir):
print >>err, "The base directory %s is not empty." % quote_output(basedir)
print >>err, "The base directory %s is not empty." % quote_local_unicode_path(basedir)
print >>err, "To avoid clobbering anything, I am going to quit now."
print >>err, "Please use a different directory, or empty this one."
return -1
@ -200,7 +200,7 @@ def create_introducer(config, out=sys.stdout, err=sys.stderr):
write_node_config(c, config)
c.close()
print >>out, "Introducer created in %s" % quote_output(basedir)
print >>out, "Introducer created in %s" % quote_local_unicode_path(basedir)
return 0

View File

@ -0,0 +1,204 @@
import os
from types import NoneType
from cStringIO import StringIO
from twisted.python import usage
from allmydata.util.assertutil import precondition
from .common import BaseOptions, BasedirOptions, get_aliases
from .cli import MakeDirectoryOptions, LnOptions, CreateAliasOptions
import tahoe_mv
from allmydata.util.encodingutil import argv_to_abspath, argv_to_unicode, to_str
from allmydata.util import fileutil
from allmydata import uri
INVITE_SEPARATOR = "+"
class CreateOptions(BasedirOptions):
nickname = None
local_dir = None
synopsis = "MAGIC_ALIAS: [NICKNAME LOCAL_DIR]"
def parseArgs(self, alias, nickname=None, local_dir=None):
BasedirOptions.parseArgs(self)
alias = argv_to_unicode(alias)
if not alias.endswith(u':'):
raise usage.UsageError("An alias must end with a ':' character.")
self.alias = alias[:-1]
self.nickname = None if nickname is None else argv_to_unicode(nickname)
# Expand the path relative to the current directory of the CLI command, not the node.
self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
if self.nickname and not self.local_dir:
raise usage.UsageError("If NICKNAME is specified then LOCAL_DIR must also be specified.")
node_url_file = os.path.join(self['node-directory'], u"node.url")
self['node-url'] = fileutil.read(node_url_file).strip()
def _delegate_options(source_options, target_options):
target_options.aliases = get_aliases(source_options['node-directory'])
target_options["node-url"] = source_options["node-url"]
target_options["node-directory"] = source_options["node-directory"]
target_options.stdin = StringIO("")
target_options.stdout = StringIO()
target_options.stderr = StringIO()
return target_options
def create(options):
precondition(isinstance(options.alias, unicode), alias=options.alias)
precondition(isinstance(options.nickname, (unicode, NoneType)), nickname=options.nickname)
precondition(isinstance(options.local_dir, (unicode, NoneType)), local_dir=options.local_dir)
from allmydata.scripts import tahoe_add_alias
create_alias_options = _delegate_options(options, CreateAliasOptions())
create_alias_options.alias = options.alias
rc = tahoe_add_alias.create_alias(create_alias_options)
if rc != 0:
print >>options.stderr, create_alias_options.stderr.getvalue()
return rc
print >>options.stdout, create_alias_options.stdout.getvalue()
if options.nickname is not None:
invite_options = _delegate_options(options, InviteOptions())
invite_options.alias = options.alias
invite_options.nickname = options.nickname
rc = invite(invite_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to invite after create\n"
print >>options.stderr, invite_options.stderr.getvalue()
return rc
invite_code = invite_options.stdout.getvalue().strip()
join_options = _delegate_options(options, JoinOptions())
join_options.local_dir = options.local_dir
join_options.invite_code = invite_code
rc = join(join_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to join after create\n"
print >>options.stderr, join_options.stderr.getvalue()
return rc
return 0
class InviteOptions(BasedirOptions):
nickname = None
synopsis = "MAGIC_ALIAS: NICKNAME"
stdin = StringIO("")
def parseArgs(self, alias, nickname=None):
BasedirOptions.parseArgs(self)
alias = argv_to_unicode(alias)
if not alias.endswith(u':'):
raise usage.UsageError("An alias must end with a ':' character.")
self.alias = alias[:-1]
self.nickname = argv_to_unicode(nickname)
node_url_file = os.path.join(self['node-directory'], u"node.url")
self['node-url'] = open(node_url_file, "r").read().strip()
aliases = get_aliases(self['node-directory'])
self.aliases = aliases
def invite(options):
precondition(isinstance(options.alias, unicode), alias=options.alias)
precondition(isinstance(options.nickname, unicode), nickname=options.nickname)
from allmydata.scripts import tahoe_mkdir
mkdir_options = _delegate_options(options, MakeDirectoryOptions())
mkdir_options.where = None
rc = tahoe_mkdir.mkdir(mkdir_options)
if rc != 0:
print >>options.stderr, "magic-folder: failed to mkdir\n"
return rc
# FIXME this assumes caps are ASCII.
dmd_write_cap = mkdir_options.stdout.getvalue().strip()
dmd_readonly_cap = uri.from_string(dmd_write_cap).get_readonly().to_string()
if dmd_readonly_cap is None:
print >>options.stderr, "magic-folder: failed to diminish dmd write cap\n"
return 1
magic_write_cap = get_aliases(options["node-directory"])[options.alias]
magic_readonly_cap = uri.from_string(magic_write_cap).get_readonly().to_string()
# tahoe ln CLIENT_READCAP COLLECTIVE_WRITECAP/NICKNAME
ln_options = _delegate_options(options, LnOptions())
ln_options.from_file = unicode(dmd_readonly_cap, 'utf-8')
ln_options.to_file = u"%s/%s" % (unicode(magic_write_cap, 'utf-8'), options.nickname)
rc = tahoe_mv.mv(ln_options, mode="link")
if rc != 0:
print >>options.stderr, "magic-folder: failed to create link\n"
print >>options.stderr, ln_options.stderr.getvalue()
return rc
# FIXME: this assumes caps are ASCII.
print >>options.stdout, "%s%s%s" % (magic_readonly_cap, INVITE_SEPARATOR, dmd_write_cap)
return 0
class JoinOptions(BasedirOptions):
synopsis = "INVITE_CODE LOCAL_DIR"
dmd_write_cap = ""
magic_readonly_cap = ""
def parseArgs(self, invite_code, local_dir):
BasedirOptions.parseArgs(self)
# Expand the path relative to the current directory of the CLI command, not the node.
self.local_dir = None if local_dir is None else argv_to_abspath(local_dir, long_path=False)
self.invite_code = to_str(argv_to_unicode(invite_code))
def join(options):
fields = options.invite_code.split(INVITE_SEPARATOR)
if len(fields) != 2:
raise usage.UsageError("Invalid invite code.")
magic_readonly_cap, dmd_write_cap = fields
dmd_cap_file = os.path.join(options["node-directory"], u"private", u"magic_folder_dircap")
collective_readcap_file = os.path.join(options["node-directory"], u"private", u"collective_dircap")
fileutil.write(dmd_cap_file, dmd_write_cap)
fileutil.write(collective_readcap_file, magic_readonly_cap)
# FIXME: modify any existing [magic_folder] fields, rather than appending.
fileutil.write(os.path.join(options["node-directory"], u"tahoe.cfg"),
"[magic_folder]\nenabled = True\nlocal.directory = %s\n"
% (options.local_dir.encode('utf-8'),), mode="ab")
return 0
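# (Aside, not part of the diff: a sketch of the invite-code round trip
# implemented above, with shortened, made-up caps. invite() prints
#   URI:DIR2-RO:collective...+URI:DIR2:dmd...
# and join() splits on INVITE_SEPARATOR ("+"), writes the collective readcap
# to private/collective_dircap and the DMD writecap to
# private/magic_folder_dircap, then appends a [magic_folder] section to
# tahoe.cfg.)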
class MagicFolderCommand(BaseOptions):
subCommands = [
["create", None, CreateOptions, "Create a Magic Folder."],
["invite", None, InviteOptions, "Invite someone to a Magic Folder."],
["join", None, JoinOptions, "Join a Magic Folder."],
]
def postOptions(self):
if not hasattr(self, 'subOptions'):
raise usage.UsageError("must specify a subcommand")
def getSynopsis(self):
return "Usage: tahoe [global-options] magic SUBCOMMAND"
def getUsage(self, width=None):
t = BaseOptions.getUsage(self, width)
t += """\
Please run e.g. 'tahoe magic-folder create --help' for more details on each
subcommand.
"""
return t
subDispatch = {
"create": create,
"invite": invite,
"join": join,
}
def do_magic_folder(options):
so = options.subOptions
so.stdout = options.stdout
so.stderr = options.stderr
f = subDispatch[options.subCommand]
return f(so)
subCommands = [
["magic-folder", None, MagicFolderCommand,
"Magic Folder subcommands: use 'tahoe magic-folder' for a list."],
]
dispatch = {
"magic-folder": do_magic_folder,
}

View File

@ -5,7 +5,8 @@ from cStringIO import StringIO
from twisted.python import usage
from allmydata.scripts.common import get_default_nodedir
-from allmydata.scripts import debug, create_node, startstop_node, cli, keygen, stats_gatherer, admin
+from allmydata.scripts import debug, create_node, startstop_node, cli, keygen, stats_gatherer, admin, \
+                              magic_folder_cli
from allmydata.util.encodingutil import quote_output, quote_local_unicode_path, get_io_encoding
def GROUP(s):
@ -45,6 +46,7 @@ class Options(usage.Options):
+ debug.subCommands
+ GROUP("Using the filesystem")
+ cli.subCommands
+ magic_folder_cli.subCommands
)
optFlags = [
@ -143,6 +145,8 @@ def runner(argv,
rc = admin.dispatch[command](so)
elif command in cli.dispatch:
rc = cli.dispatch[command](so)
elif command in magic_folder_cli.dispatch:
rc = magic_folder_cli.dispatch[command](so)
elif command in ac_dispatch:
rc = ac_dispatch[command](so, stdout, stderr)
else:

View File

@ -1,6 +1,9 @@
import os.path
import codecs
from allmydata.util.assertutil import precondition
from allmydata import uri
from allmydata.scripts.common_http import do_http, check_http_error
from allmydata.scripts.common import get_aliases
@ -29,6 +32,7 @@ def add_line_to_aliasfile(aliasfile, alias, cap):
def add_alias(options):
nodedir = options['node-directory']
alias = options.alias
precondition(isinstance(alias, unicode), alias=alias)
cap = options.cap
stdout = options.stdout
stderr = options.stderr
@ -56,6 +60,7 @@ def create_alias(options):
# mkdir+add_alias
nodedir = options['node-directory']
alias = options.alias
precondition(isinstance(alias, unicode), alias=alias)
stdout = options.stdout
stderr = options.stderr
if u":" in alias:

View File

@ -8,7 +8,7 @@ from allmydata.scripts.common import get_alias, escape_path, DEFAULT_ALIAS, \
UnknownAliasError
from allmydata.scripts.common_http import do_http, HTTPError, format_http_error
from allmydata.util import time_format
-from allmydata.scripts import backupdb
+from allmydata import backupdb
from allmydata.util.encodingutil import listdir_unicode, quote_output, \
quote_local_unicode_path, to_str, FilenameEncodingError, unicode_to_url
from allmydata.util.assertutil import precondition

View File

@ -151,9 +151,7 @@ def list(options):
line.append(uri)
if options["readonly-uri"]:
line.append(quote_output(ro_uri or "-", quotemarks=False))
rows.append((encoding_error, line))
max_widths = []
left_justifys = []
for (encoding_error, row) in rows:

View File

@ -62,10 +62,12 @@ class StorageFarmBroker:
I'm also responsible for subscribing to the IntroducerClient to find out
about new servers as they are announced by the Introducer.
"""
-    def __init__(self, tub, permute_peers):
+    def __init__(self, tub, permute_peers, connected_threshold, connected_d):
self.tub = tub
assert permute_peers # False not implemented yet
self.permute_peers = permute_peers
self.connected_threshold = connected_threshold
self.connected_d = connected_d
# self.servers maps serverid -> IServer, and keeps track of all the
# storage servers that we've heard about. Each descriptor manages its
# own Reconnector, and will give us a RemoteReference when we ask
@ -75,7 +77,7 @@ class StorageFarmBroker:
# these two are used in unit tests
def test_add_rref(self, serverid, rref, ann):
-        s = NativeStorageServer(serverid, ann.copy())
+        s = NativeStorageServer(serverid, ann.copy(), self)
s.rref = rref
s._is_connected = True
self.servers[serverid] = s
@ -92,7 +94,7 @@ class StorageFarmBroker:
precondition(isinstance(key_s, str), key_s)
precondition(key_s.startswith("v0-"), key_s)
assert ann["service-name"] == "storage"
-        s = NativeStorageServer(key_s, ann)
+        s = NativeStorageServer(key_s, ann, self)
serverid = s.get_serverid()
old = self.servers.get(serverid)
if old:
@ -118,6 +120,13 @@ class StorageFarmBroker:
for dsc in self.servers.values():
dsc.try_to_connect()
def check_enough_connected(self):
if (self.connected_d is not None and
len(self.get_connected_servers()) >= self.connected_threshold):
d = self.connected_d
self.connected_d = None
d.callback(None)
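    # (Worked example of the threshold, with hypothetical encoding parameters:
    # k=3 and happy=7 give connected_threshold = min(3, 7+1) = 3 in client.py's
    # init_client_storage_broker above, so connected_d fires, exactly once,
    # as soon as three servers are connected.)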
def get_servers_for_psi(self, peer_selection_index):
# return a list of server objects (IServers)
assert self.permute_peers == True
@ -187,9 +196,10 @@ class NativeStorageServer:
"application-version": "unknown: no get_version()",
}
-    def __init__(self, key_s, ann):
+    def __init__(self, key_s, ann, broker):
self.key_s = key_s
self.announcement = ann
self.broker = broker
assert "anonymous-storage-FURL" in ann, ann
furl = str(ann["anonymous-storage-FURL"])
@ -290,6 +300,7 @@ class NativeStorageServer:
default = self.VERSION_DEFAULTS
d = add_version_to_remote_reference(rref, default)
d.addCallback(self._got_versioned_service, lp)
d.addCallback(lambda ign: self.broker.check_enough_connected())
d.addErrback(log.err, format="storageclient._got_connection",
name=self.get_name(), umid="Sdq3pg")

View File

@ -0,0 +1,367 @@
#!/usr/bin/env python
# this is a smoke-test using "./bin/tahoe" to:
#
# 1. create an introducer
# 2. create 5 storage nodes
# 3. create 2 client nodes (alice, bob)
# 4. Alice creates a magic-folder ("magik:")
# 5. Alice invites Bob
# 6. Bob joins
#
# After that, some basic tests are performed; see the "if True:"
# blocks to turn them on or off. This could use some cleanup, but it
# is useful out of the gate for quick testing.
#
# TO RUN:
# from top-level of your checkout (we use "./bin/tahoe"):
# python src/allmydata/test/check_magicfolder_smoke.py
#
# This will create "./smoke_magicfolder" (which is disposable) and
# contains all the Tahoe basedirs for the introducer, storage nodes,
# clients, and the clients' magic-folders. NOTE that if these
# directories already exist they will NOT be re-created. So kill the
# grid and then "rm -rf smoke_magicfolder" if you want to re-run the
# tests cleanly.
#
# Run the script with a single arg, "kill" to run "tahoe stop" on all
# the nodes.
#
# This will have "tahoe start" -ed all the nodes, so you can continue
# to play around after the script exits.
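#
# A minimal sketch (assuming only the interface documented above) of
# driving this script from another Python program:
#
#   import subprocess, sys
#   subprocess.check_call(
#       [sys.executable, 'src/allmydata/test/check_magicfolder_smoke.py'])
#   # ... poke at the running grid ...
#   subprocess.check_call(
#       [sys.executable, 'src/allmydata/test/check_magicfolder_smoke.py', 'kill'])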
from __future__ import print_function
import sys
import time
import shutil
import subprocess
from os.path import join, abspath, curdir, exists
from os import mkdir, listdir, unlink
tahoe_base = abspath(curdir)
data_base = join(tahoe_base, 'smoke_magicfolder')
tahoe_bin = join(tahoe_base, 'bin', 'tahoe')
python = sys.executable
if not exists(data_base):
print("Creating", data_base)
mkdir(data_base)
if not exists(tahoe_bin):
raise RuntimeError("Can't find 'tahoe' binary at %r" % (tahoe_bin,))
if 'kill' in sys.argv:
print("Killing the grid")
for d in listdir(data_base):
print("killing", d)
subprocess.call(
[
python, tahoe_bin, 'stop', join(data_base, d),
]
)
sys.exit(0)
if not exists(join(data_base, 'introducer')):
subprocess.check_call(
[
python, tahoe_bin, 'create-introducer', join(data_base, 'introducer'),
]
)
with open(join(data_base, 'introducer', 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = introducer0
web.port = 4560
''')
subprocess.check_call(
[
python, tahoe_bin, 'start', join(data_base, 'introducer'),
]
)
furl_fname = join(data_base, 'introducer', 'private', 'introducer.furl')
while not exists(furl_fname):
time.sleep(1)
with open(furl_fname, 'r') as f:
furl = f.read()
print("FURL", furl)
for x in range(5):
data_dir = join(data_base, 'node%d' % x)
if not exists(data_dir):
subprocess.check_call(
[
python, tahoe_bin, 'create-node',
'--nickname', 'node%d' % (x,),
'--introducer', furl,
data_dir,
]
)
with open(join(data_dir, 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = node%(node_id)s
web.port =
web.static = public_html
tub.location = localhost:%(tub_port)d
[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 2
shares.happy = 3
shares.total = 4
''' % {'node_id':x, 'furl':furl, 'tub_port':(9900 + x)})
subprocess.check_call(
[
python, tahoe_bin, 'start', data_dir,
]
)
# alice and bob clients
do_invites = False
node_id = 0
for name in ['alice', 'bob']:
data_dir = join(data_base, name)
magic_dir = join(data_base, '%s-magic' % (name,))
if not exists(magic_dir):
mkdir(magic_dir)
if not exists(data_dir):
do_invites = True
subprocess.check_call(
[
python, tahoe_bin, 'create-node',
'--no-storage',
'--nickname', name,
'--introducer', furl,
data_dir,
]
)
with open(join(data_dir, 'tahoe.cfg'), 'w') as f:
f.write('''
[node]
nickname = %(name)s
web.port = tcp:998%(node_id)d:interface=localhost
web.static = public_html
[client]
# Which services should this client connect to?
introducer.furl = %(furl)s
shares.needed = 2
shares.happy = 3
shares.total = 4
''' % {'name':name, 'node_id':node_id, 'furl':furl})
subprocess.check_call(
[
python, tahoe_bin, 'start', data_dir,
]
)
node_id += 1
# okay, now we have the alice and bob clients
# now we have alice create a magic-folder, and invite bob to it
if do_invites:
data_dir = join(data_base, 'alice')
# alice creates her folder, invites bob
print("Alice creates a magic-folder")
subprocess.check_call(
[
python, tahoe_bin, 'magic-folder', 'create', '--basedir', data_dir, 'magik:', 'alice',
join(data_base, 'alice-magic'),
]
)
print("Alice invites Bob")
invite = subprocess.check_output(
[
python, tahoe_bin, 'magic-folder', 'invite', '--basedir', data_dir, 'magik:', 'bob',
]
)
print(" invite:", invite)
# now we let "bob"/bob join
print("Bob joins Alice's magic folder")
data_dir = join(data_base, 'bob')
subprocess.check_call(
[
python, tahoe_bin, 'magic-folder', 'join', '--basedir', data_dir, invite,
join(data_base, 'bob-magic'),
]
)
print("Bob has joined.")
print("Restarting alice + bob clients")
subprocess.check_call(
[
python, tahoe_bin, 'restart', '--basedir', join(data_base, 'alice'),
]
)
subprocess.check_call(
[
python, tahoe_bin, 'restart', '--basedir', join(data_base, 'bob'),
]
)
if True:
for name in ['alice', 'bob']:
with open(join(data_base, name, 'private', 'magic_folder_dircap'), 'r') as f:
print("dircap %s: %s" % (name, f.read().strip()))
# give storage nodes a chance to connect properly? I'm not entirely
# sure what's up here, but I get "UnrecoverableFileError" on the
# first_file upload from Alice "very often" otherwise
print("waiting 3 seconds")
time.sleep(3)
if True:
# alice writes a file; bob should get it
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
with open(alice_foo, 'w') as f:
f.write("line one\n")
print("Waiting for:", bob_foo)
while True:
if exists(bob_foo):
print(" found", bob_foo)
with open(bob_foo, 'r') as f:
if f.read() == "line one\n":
break
print(" file contents still mismatched")
time.sleep(1)
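# The wait-and-compare loop above recurs in several blocks below; a
# helper like this (a sketch, not used by the rest of this script)
# could factor the pattern out:
def await_file_content(path, expected, delay=1):
    while True:
        if exists(path):
            print(" found", path)
            with open(path, 'r') as f:
                if f.read() == expected:
                    return
            print(" file contents still mismatched")
        time.sleep(delay)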
if True:
# bob writes a file; alice should get it
alice_bar = join(data_base, 'alice-magic', 'second_file')
bob_bar = join(data_base, 'bob-magic', 'second_file')
with open(bob_bar, 'w') as f:
f.write("line one\n")
print("Waiting for:", alice_bar)
while True:
if exists(alice_bar):
print(" found", alice_bar)
with open(alice_bar, 'r') as f:
if f.read() == "line one\n":
break
print(" file contents still mismatched")
time.sleep(1)
if True:
# alice deletes 'first_file'
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
unlink(alice_foo)
print("Waiting for '%s' to disappear" % (bob_foo,))
while True:
if not exists(bob_foo):
print(" disappeared", bob_foo)
break
time.sleep(1)
bob_tmp = bob_foo + '.backup'
print("Waiting for '%s' to appear" % (bob_tmp,))
while True:
if exists(bob_tmp):
print(" appeared", bob_tmp)
break
time.sleep(1)
if True:
# bob writes new content to 'second_file'; alice should get it
alice_foo = join(data_base, 'alice-magic', 'second_file')
bob_foo = join(data_base, 'bob-magic', 'second_file')
gold_content = "line one\nsecond line\n"
with open(bob_foo, 'w') as f:
f.write(gold_content)
print("Waiting for:", alice_foo)
while True:
if exists(alice_foo):
print(" found", alice_foo)
with open(alice_foo, 'r') as f:
content = f.read()
if content == gold_content:
break
print(" file contents still mismatched:\n")
print(content)
time.sleep(1)
if True:
# bob creates a sub-directory and adds a file to it
alice_dir = join(data_base, 'alice-magic', 'subdir')
bob_dir = join(data_base, 'bob-magic', 'subdir')
gold_content = 'a file in a subdirectory\n'
mkdir(bob_dir)
with open(join(bob_dir, 'subfile'), 'w') as f:
f.write(gold_content)
print("Waiting for Bob's subdir '%s' to appear" % (bob_dir,))
while True:
if exists(bob_dir):
print(" found subdir")
if exists(join(bob_dir, 'subfile')):
print(" found file")
with open(join(bob_dir, 'subfile'), 'r') as f:
if f.read() == gold_content:
print(" contents match")
break
time.sleep(0.1)
if True:
# bob deletes the whole subdir
alice_dir = join(data_base, 'alice-magic', 'subdir')
bob_dir = join(data_base, 'bob-magic', 'subdir')
shutil.rmtree(bob_dir)
print("Waiting for Alice's subdir '%s' to disappear" % (alice_dir,))
while True:
if not exists(alice_dir):
print(" it's gone")
break
time.sleep(0.1)
# XXX restoring the file does not work here (but the unit tests pass; what are they missing?)
# NOTE: it only fails if it's alice restoring the file!
if True:
# restore 'first_file' but with different contents
print("re-writing 'first_file'")
assert not exists(join(data_base, 'bob-magic', 'first_file'))
assert not exists(join(data_base, 'alice-magic', 'first_file'))
alice_foo = join(data_base, 'alice-magic', 'first_file')
bob_foo = join(data_base, 'bob-magic', 'first_file')
if True:
# swap roles so that alice does the restore; without the swap it works fine
alice_foo, bob_foo = bob_foo, alice_foo
gold_content = "see it again for the first time\n"
with open(bob_foo, 'w') as f:
f.write(gold_content)
print("Waiting for:", alice_foo)
while True:
if exists(alice_foo):
print(" found", alice_foo)
with open(alice_foo, 'r') as f:
content = f.read()
if content == gold_content:
break
print(" file contents still mismatched: %d bytes:\n" % (len(content),))
print(content)
else:
print(" %r not there yet" % (alice_foo,))
time.sleep(1)
# XXX test .backup (delete a file)
# port david's clock.advance stuff
# fix clock.advance()
# subdirectory
# file deletes
# conflicts

View File

@ -20,6 +20,9 @@ from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from foolscap.api import Referenceable, fireEventually, RemoteException
from base64 import b32encode
from allmydata.util.assertutil import _assert
from allmydata import uri as tahoe_uri
from allmydata.client import Client
from allmydata.storage.server import StorageServer, storage_index_to_dir
@ -174,6 +177,9 @@ class NoNetworkStorageBroker:
return None
class NoNetworkClient(Client):
def disownServiceParent(self):
return Client.disownServiceParent(self)
def create_tub(self):
pass
def init_introducer_client(self):
@ -232,6 +238,7 @@ class NoNetworkGrid(service.MultiService):
self.proxies_by_id = {} # maps to IServer on which .rref is a wrapped
# StorageServer
self.clients = []
self.client_config_hooks = client_config_hooks
for i in range(num_servers):
ss = self.make_server(i)
@ -239,30 +246,42 @@ class NoNetworkGrid(service.MultiService):
self.rebuild_serverlist()
for i in range(num_clients):
clientid = hashutil.tagged_hash("clientid", str(i))[:20]
clientdir = os.path.join(basedir, "clients",
idlib.shortnodeid_b2a(clientid))
fileutil.make_dirs(clientdir)
f = open(os.path.join(clientdir, "tahoe.cfg"), "w")
c = self.make_client(i)
self.clients.append(c)
def make_client(self, i, write_config=True):
clientid = hashutil.tagged_hash("clientid", str(i))[:20]
clientdir = os.path.join(self.basedir, "clients",
idlib.shortnodeid_b2a(clientid))
fileutil.make_dirs(clientdir)
tahoe_cfg_path = os.path.join(clientdir, "tahoe.cfg")
if write_config:
f = open(tahoe_cfg_path, "w")
f.write("[node]\n")
f.write("nickname = client-%d\n" % i)
f.write("web.port = tcp:0:interface=127.0.0.1\n")
f.write("[storage]\n")
f.write("enabled = false\n")
f.close()
c = None
if i in client_config_hooks:
# this hook can either modify tahoe.cfg, or return an
# entirely new Client instance
c = client_config_hooks[i](clientdir)
if not c:
c = NoNetworkClient(clientdir)
c.set_default_mutable_keysize(TEST_RSA_KEY_SIZE)
c.nodeid = clientid
c.short_nodeid = b32encode(clientid).lower()[:8]
c._servers = self.all_servers # can be updated later
c.setServiceParent(self)
self.clients.append(c)
else:
_assert(os.path.exists(tahoe_cfg_path), tahoe_cfg_path=tahoe_cfg_path)
c = None
if i in self.client_config_hooks:
# this hook can either modify tahoe.cfg, or return an
# entirely new Client instance
c = self.client_config_hooks[i](clientdir)
if not c:
c = NoNetworkClient(clientdir)
c.set_default_mutable_keysize(TEST_RSA_KEY_SIZE)
c.nodeid = clientid
c.short_nodeid = b32encode(clientid).lower()[:8]
c._servers = self.all_servers # can be updated later
c.setServiceParent(self)
return c
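
A minimal sketch (hypothetical hook, following the contract described in make_client above): an entry in client_config_hooks may edit tahoe.cfg and return a falsy value to get the default NoNetworkClient, or return a ready-made Client instance:

    def enable_magic_folder_hook(clientdir):
        # append an extra config section before the default client is built
        with open(os.path.join(clientdir, "tahoe.cfg"), "a") as f:
            f.write("[magic_folder]\nenabled = true\n")
        return None  # falsy => NoNetworkGrid builds a NoNetworkClient

    # grid = NoNetworkGrid(basedir, client_config_hooks={0: enable_magic_folder_hook})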
def make_server(self, i, readonly=False):
serverid = hashutil.tagged_hash("serverid", str(i))[:20]
@ -350,6 +369,9 @@ class GridTestMixin:
num_servers=num_servers,
client_config_hooks=client_config_hooks)
self.g.setServiceParent(self.s)
self._record_webports_and_baseurls()
def _record_webports_and_baseurls(self):
self.client_webports = [c.getServiceNamed("webish").getPortnum()
for c in self.g.clients]
self.client_baseurls = [c.getServiceNamed("webish").getURL()
@ -358,6 +380,23 @@ class GridTestMixin:
def get_clientdir(self, i=0):
return self.g.clients[i].basedir
def set_clientdir(self, basedir, i=0):
self.g.clients[i].basedir = basedir
def get_client(self, i=0):
return self.g.clients[i]
def restart_client(self, i=0):
client = self.g.clients[i]
d = defer.succeed(None)
d.addCallback(lambda ign: self.g.removeService(client))
def _make_client(ign):
c = self.g.make_client(i, write_config=False)
self.g.clients[i] = c
self._record_webports_and_baseurls()
d.addCallback(_make_client)
return d
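
A minimal sketch (hypothetical test body) of the new restart_client() helper; note that _record_webports_and_baseurls() runs again so webports and baseurls stay current:

    def test_survives_restart(self):
        self.basedir = "no_network/test_survives_restart"  # hypothetical
        self.set_up_grid(num_clients=1)
        d = self.restart_client(0)
        # client 0 has been removed from the grid and rebuilt from its
        # existing tahoe.cfg (write_config=False skips rewriting it)
        d.addCallback(lambda ign: self.get_client(0))
        return d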
def get_serverdir(self, i):
return self.g.servers_by_number[i].storedir

View File

@ -6,7 +6,7 @@ from twisted.trial import unittest
from allmydata.util import fileutil
from allmydata.util.encodingutil import listdir_unicode, get_filesystem_encoding, unicode_platform
from allmydata.util.assertutil import precondition
from allmydata.scripts import backupdb
from allmydata import backupdb
class BackupDB(unittest.TestCase):
def create(self, dbfile):

View File

@ -22,7 +22,7 @@ class FakeClient:
class WebResultsRendering(unittest.TestCase, WebRenderingMixin):
def create_fake_client(self):
sb = StorageFarmBroker(None, True)
sb = StorageFarmBroker(None, True, 0, None)
# s.get_name() (the "short description") will be "v0-00000000".
# s.get_longname() will include the -long suffix.
# s.get_peerid() (i.e. tubid) will be "aaa.." or "777.." or "ceir.."
@ -41,7 +41,7 @@ class WebResultsRendering(unittest.TestCase, WebRenderingMixin):
"my-version": "ver",
"oldest-supported": "oldest",
}
s = NativeStorageServer(key_s, ann)
s = NativeStorageServer(key_s, ann, sb)
sb.test_add_server(peerid, s) # XXX: maybe use key_s?
c = FakeClient()
c.storage_broker = sb

View File

@ -36,7 +36,7 @@ from twisted.python import usage
from allmydata.util.assertutil import precondition
from allmydata.util.encodingutil import listdir_unicode, unicode_platform, \
get_io_encoding, get_filesystem_encoding
get_io_encoding, get_filesystem_encoding, unicode_to_argv
timeout = 480 # deep_check takes 360s on Zandr's linksys box, others take > 240s
@ -49,8 +49,14 @@ def parse_options(basedir, command, args):
class CLITestMixin(ReallyEqualMixin):
def do_cli(self, verb, *args, **kwargs):
precondition(not [True for arg in args if not isinstance(arg, str)],
"arguments to do_cli must be strs -- convert using unicode_to_argv", args=args)
# client_num is used to execute client CLI commands on a specific client.
client_num = kwargs.get("client_num", 0)
nodeargs = [
"--node-directory", self.get_clientdir(),
"--node-directory", unicode_to_argv(self.get_clientdir(i=client_num)),
]
argv = nodeargs + [verb] + list(args)
stdin = kwargs.get("stdin", "")
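
A minimal sketch of the contract enforced by the precondition above: any non-str argument must be converted with unicode_to_argv before reaching do_cli (this hypothetical call mirrors do_join elsewhere in this branch):

    local_dir_arg = unicode_to_argv(u"magik-\u00e5")  # unicode -> argv str
    d = self.do_cli("magic-folder", "join", invite_code, local_dir_arg,
                    client_num=1)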

View File

@ -11,7 +11,8 @@ from allmydata.util import fileutil
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import get_io_encoding, unicode_to_argv
from allmydata.util.namespace import Namespace
from allmydata.scripts import cli, backupdb
from allmydata.scripts import cli
from allmydata import backupdb
from .common_util import StallMixin
from .no_network import GridTestMixin
from .test_cli import CLITestMixin, parse_options

View File

@ -0,0 +1,254 @@
import os.path
import re
from twisted.trial import unittest
from twisted.internet import defer
from twisted.internet import reactor
from twisted.python import usage
from allmydata.util.assertutil import precondition
from allmydata.util import fileutil
from allmydata.scripts.common import get_aliases
from allmydata.test.no_network import GridTestMixin
from .test_cli import CLITestMixin
from allmydata.scripts import magic_folder_cli
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.util.encodingutil import unicode_to_argv
from allmydata.frontends.magic_folder import MagicFolder
from allmydata import uri
class MagicFolderCLITestMixin(CLITestMixin, GridTestMixin):
def do_create_magic_folder(self, client_num):
d = self.do_cli("magic-folder", "create", "magic:", client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.failUnlessIn("Alias 'magic' created", stdout)
self.failUnlessEqual(stderr, "")
aliases = get_aliases(self.get_clientdir(i=client_num))
self.failUnlessIn("magic", aliases)
self.failUnless(aliases["magic"].startswith("URI:DIR2:"))
d.addCallback(_done)
return d
def do_invite(self, client_num, nickname):
nickname_arg = unicode_to_argv(nickname)
d = self.do_cli("magic-folder", "invite", "magic:", nickname_arg, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def do_join(self, client_num, local_dir, invite_code):
precondition(isinstance(local_dir, unicode), local_dir=local_dir)
precondition(isinstance(invite_code, str), invite_code=invite_code)
local_dir_arg = unicode_to_argv(local_dir)
d = self.do_cli("magic-folder", "join", invite_code, local_dir_arg, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
return d
def check_joined_config(self, client_num, upload_dircap):
"""Tests that our collective directory has the readonly cap of
our upload directory.
"""
collective_readonly_cap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"collective_dircap"))
d = self.do_cli("ls", "--json", collective_readonly_cap, client_num=client_num)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
return (rc, stdout, stderr)
d.addCallback(_done)
def test_joined_magic_folder((rc,stdout,stderr)):
readonly_cap = unicode(uri.from_string(upload_dircap).get_readonly().to_string(), 'utf-8')
s = re.search(readonly_cap, stdout)
self.failUnless(s is not None)
return None
d.addCallback(test_joined_magic_folder)
return d
def get_caps_from_files(self, client_num):
collective_dircap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"collective_dircap"))
upload_dircap = fileutil.read(os.path.join(self.get_clientdir(i=client_num),
u"private", u"magic_folder_dircap"))
self.failIf(collective_dircap is None or upload_dircap is None)
return collective_dircap, upload_dircap
def check_config(self, client_num, local_dir):
client_config = fileutil.read(os.path.join(self.get_clientdir(i=client_num), "tahoe.cfg"))
local_dir_utf8 = local_dir.encode('utf-8')
magic_folder_config = "[magic_folder]\nenabled = True\nlocal.directory = %s" % (local_dir_utf8,)
self.failUnlessIn(magic_folder_config, client_config)
def create_invite_join_magic_folder(self, nickname, local_dir):
nickname_arg = unicode_to_argv(nickname)
local_dir_arg = unicode_to_argv(local_dir)
d = self.do_cli("magic-folder", "create", "magic:", nickname_arg, local_dir_arg)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
client = self.get_client()
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
self.collective_dirnode = client.create_node_from_uri(self.collective_dircap)
self.upload_dirnode = client.create_node_from_uri(self.upload_dircap)
d.addCallback(_done)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, local_dir))
return d
def cleanup(self, res):
#print "cleanup", res
d = defer.succeed(None)
if self.magicfolder is not None:
d.addCallback(lambda ign: self.magicfolder.finish())
d.addCallback(lambda ign: res)
return d
def init_magicfolder(self, client_num, upload_dircap, collective_dircap, local_magic_dir, clock):
dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.get_clientdir(i=client_num))
magicfolder = MagicFolder(self.get_client(client_num), upload_dircap, collective_dircap, local_magic_dir,
dbfile, pending_delay=0.2, clock=clock)
magicfolder.downloader._turn_delay = 0
orig = magicfolder.uploader._append_to_deque
# the _append_to_deque method queues a _turn_deque, so we
# immediately trigger it by wrapping _append_to_deque
def wrap(*args, **kw):
x = orig(*args, **kw)
clock.advance(0) # _turn_delay is always 0 for the tests
return x
magicfolder.uploader._append_to_deque = wrap
magicfolder.setServiceParent(self.get_client(client_num))
magicfolder.ready()
return magicfolder
def setup_alice_and_bob(self, alice_clock=reactor, bob_clock=reactor):
self.set_up_grid(num_clients=2)
self.alice_magicfolder = None
self.bob_magicfolder = None
alice_magic_dir = abspath_expanduser_unicode(u"Alice-magic", base=self.basedir)
self.mkdir_nonascii(alice_magic_dir)
bob_magic_dir = abspath_expanduser_unicode(u"Bob-magic", base=self.basedir)
self.mkdir_nonascii(bob_magic_dir)
# Alice creates a Magic Folder, invites herself, and then joins.
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice\u00F8"))
def get_invite_code(result):
self.invite_code = result[1].strip()
d.addCallback(get_invite_code)
d.addCallback(lambda ign: self.do_join(0, alice_magic_dir, self.invite_code))
def get_alice_caps(ign):
self.alice_collective_dircap, self.alice_upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_alice_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.alice_upload_dircap))
d.addCallback(lambda ign: self.check_config(0, alice_magic_dir))
def get_Alice_magicfolder(result):
self.alice_magicfolder = self.init_magicfolder(0, self.alice_upload_dircap,
self.alice_collective_dircap,
alice_magic_dir, alice_clock)
return result
d.addCallback(get_Alice_magicfolder)
# Alice invites Bob. Bob joins.
d.addCallback(lambda ign: self.do_invite(0, u"Bob\u00F8"))
def get_invite_code(result):
self.invite_code = result[1].strip()
d.addCallback(get_invite_code)
d.addCallback(lambda ign: self.do_join(1, bob_magic_dir, self.invite_code))
def get_bob_caps(ign):
self.bob_collective_dircap, self.bob_upload_dircap = self.get_caps_from_files(1)
d.addCallback(get_bob_caps)
d.addCallback(lambda ign: self.check_joined_config(1, self.bob_upload_dircap))
d.addCallback(lambda ign: self.check_config(1, bob_magic_dir))
def get_Bob_magicfolder(result):
self.bob_magicfolder = self.init_magicfolder(1, self.bob_upload_dircap,
self.bob_collective_dircap,
bob_magic_dir, bob_clock)
return result
d.addCallback(get_Bob_magicfolder)
return d
class CreateMagicFolder(MagicFolderCLITestMixin, unittest.TestCase):
def test_create_and_then_invite_join(self):
self.basedir = "cli/MagicFolder/create-and-then-invite-join"
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_create_magic_folder(0)
d.addCallback(lambda ign: self.do_invite(0, u"Alice"))
def get_invite_code_and_join((rc, stdout, stderr)):
invite_code = stdout.strip()
return self.do_join(0, unicode(local_dir), invite_code)
d.addCallback(get_invite_code_and_join)
def get_caps(ign):
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(get_caps)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_create_error(self):
self.basedir = "cli/MagicFolder/create-error"
self.set_up_grid()
d = self.do_cli("magic-folder", "create", "m a g i c:", client_num=0)
def _done((rc, stdout, stderr)):
self.failIfEqual(rc, 0)
self.failUnlessIn("Alias names cannot contain spaces.", stderr)
d.addCallback(_done)
return d
def test_create_invite_join(self):
self.basedir = "cli/MagicFolder/create-invite-join"
self.set_up_grid()
local_dir = os.path.join(self.basedir, "magic")
abs_local_dir_u = abspath_expanduser_unicode(unicode(local_dir), long_path=False)
d = self.do_cli("magic-folder", "create", "magic:", "Alice", local_dir)
def _done((rc, stdout, stderr)):
self.failUnlessEqual(rc, 0)
self.collective_dircap, self.upload_dircap = self.get_caps_from_files(0)
d.addCallback(_done)
d.addCallback(lambda ign: self.check_joined_config(0, self.upload_dircap))
d.addCallback(lambda ign: self.check_config(0, abs_local_dir_u))
return d
def test_create_invite_join_failure(self):
self.basedir = "cli/MagicFolder/create-invite-join-failure"
os.makedirs(self.basedir)
o = magic_folder_cli.CreateOptions()
o.parent = magic_folder_cli.MagicFolderCommand()
o.parent['node-directory'] = self.basedir
try:
o.parseArgs("magic:", "Alice", "-foo")
except usage.UsageError as e:
self.failUnlessIn("cannot start with '-'", str(e))
else:
self.fail("expected UsageError")
def test_join_failure(self):
self.basedir = "cli/MagicFolder/create-join-failure"
os.makedirs(self.basedir)
o = magic_folder_cli.JoinOptions()
o.parent = magic_folder_cli.MagicFolderCommand()
o.parent['node-directory'] = self.basedir
try:
o.parseArgs("URI:invite+URI:code", "-foo")
except usage.UsageError as e:
self.failUnlessIn("cannot start with '-'", str(e))
else:
self.fail("expected UsageError")

View File

@ -4,7 +4,7 @@ from twisted.trial import unittest
from twisted.application import service
import allmydata
import allmydata.frontends.drop_upload
import allmydata.frontends.magic_folder
import allmydata.util.log
from allmydata.node import Node, OldConfigError, OldConfigOptionError, MissingConfigEntry, UnescapedHashError
@ -27,7 +27,7 @@ BASECONFIG_I = ("[client]\n"
"introducer.furl = %s\n"
)
class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
class Basic(testutil.ReallyEqualMixin, testutil.NonASCIIPathMixin, unittest.TestCase):
def test_loadable(self):
basedir = "test_client.Basic.test_loadable"
os.mkdir(basedir)
@ -251,7 +251,7 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
return [ s.get_longname() for s in sb.get_servers_for_psi(key) ]
def test_permute(self):
sb = StorageFarmBroker(None, True)
sb = StorageFarmBroker(None, True, 0, None)
for k in ["%d" % i for i in range(5)]:
ann = {"anonymous-storage-FURL": "pb://abcde@nowhere/fake",
"permutation-seed-base32": base32.b2a(k) }
@ -302,76 +302,79 @@ class Basic(testutil.ReallyEqualMixin, unittest.TestCase):
_check("helper.furl = None", None)
_check("helper.furl = pb://blah\n", "pb://blah")
def test_create_drop_uploader(self):
class MockDropUploader(service.MultiService):
name = 'drop-upload'
def test_create_magic_folder_service(self):
class MockMagicFolder(service.MultiService):
name = 'magic-folder'
def __init__(self, client, upload_dircap, local_dir_utf8, inotify=None):
def __init__(self, client, upload_dircap, collective_dircap, local_dir, dbfile, inotify=None,
pending_delay=1.0):
service.MultiService.__init__(self)
self.client = client
self.upload_dircap = upload_dircap
self.local_dir_utf8 = local_dir_utf8
self.collective_dircap = collective_dircap
self.local_dir = local_dir
self.dbfile = dbfile
self.inotify = inotify
self.patch(allmydata.frontends.drop_upload, 'DropUploader', MockDropUploader)
def ready(self):
pass
self.patch(allmydata.frontends.magic_folder, 'MagicFolder', MockMagicFolder)
upload_dircap = "URI:DIR2:blah"
local_dir_utf8 = u"loc\u0101l_dir".encode('utf-8')
local_dir_u = self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir")
local_dir_utf8 = local_dir_u.encode('utf-8')
config = (BASECONFIG +
"[storage]\n" +
"enabled = false\n" +
"[drop_upload]\n" +
"[magic_folder]\n" +
"enabled = true\n")
basedir1 = "test_client.Basic.test_create_drop_uploader1"
basedir1 = "test_client.Basic.test_create_magic_folder_service1"
os.mkdir(basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"), config)
fileutil.write(os.path.join(basedir1, "private", "drop_upload_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir1, "private", "magic_folder_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir1, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "upload.dircap = " + upload_dircap + "\n")
config.replace("[magic_folder]\n", "[drop_upload]\n"))
self.failUnlessRaises(OldConfigOptionError, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
c1 = client.Client(basedir1)
uploader = c1.getServiceNamed('drop-upload')
self.failUnless(isinstance(uploader, MockDropUploader), uploader)
self.failUnlessReallyEqual(uploader.client, c1)
self.failUnlessReallyEqual(uploader.upload_dircap, upload_dircap)
self.failUnlessReallyEqual(uploader.local_dir_utf8, local_dir_utf8)
self.failUnless(uploader.inotify is None, uploader.inotify)
self.failUnless(uploader.running)
magicfolder = c1.getServiceNamed('magic-folder')
self.failUnless(isinstance(magicfolder, MockMagicFolder), magicfolder)
self.failUnlessReallyEqual(magicfolder.client, c1)
self.failUnlessReallyEqual(magicfolder.upload_dircap, upload_dircap)
self.failUnlessReallyEqual(os.path.basename(magicfolder.local_dir), local_dir_u)
self.failUnless(magicfolder.inotify is None, magicfolder.inotify)
self.failUnless(magicfolder.running)
class Boom(Exception):
pass
def BoomDropUploader(client, upload_dircap, local_dir_utf8, inotify=None):
def BoomMagicFolder(client, upload_dircap, collective_dircap, local_dir, dbfile,
inotify=None, pending_delay=1.0):
raise Boom()
self.patch(allmydata.frontends.magic_folder, 'MagicFolder', BoomMagicFolder)
logged_messages = []
def mock_log(*args, **kwargs):
logged_messages.append("%r %r" % (args, kwargs))
self.patch(allmydata.util.log, 'msg', mock_log)
self.patch(allmydata.frontends.drop_upload, 'DropUploader', BoomDropUploader)
basedir2 = "test_client.Basic.test_create_drop_uploader2"
basedir2 = "test_client.Basic.test_create_magic_folder_service2"
os.mkdir(basedir2)
os.mkdir(os.path.join(basedir2, "private"))
fileutil.write(os.path.join(basedir2, "tahoe.cfg"),
BASECONFIG +
"[drop_upload]\n" +
"[magic_folder]\n" +
"enabled = true\n" +
"local.directory = " + local_dir_utf8 + "\n")
fileutil.write(os.path.join(basedir2, "private", "drop_upload_dircap"), "URI:DIR2:blah")
c2 = client.Client(basedir2)
self.failUnlessRaises(KeyError, c2.getServiceNamed, 'drop-upload')
self.failUnless([True for arg in logged_messages if "Boom" in arg],
logged_messages)
fileutil.write(os.path.join(basedir2, "private", "magic_folder_dircap"), "URI:DIR2:blah")
fileutil.write(os.path.join(basedir2, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(Boom, client.Client, basedir2)
def flush_but_dont_ignore(res):

View File

@ -1,181 +0,0 @@
import os, sys
from twisted.trial import unittest
from twisted.python import filepath, runtime
from twisted.internet import defer
from allmydata.interfaces import IDirectoryNode, NoSuchChildError
from allmydata.util import fake_inotify
from allmydata.util.encodingutil import get_filesystem_encoding
from allmydata.util.consumer import download_to_data
from allmydata.test.no_network import GridTestMixin
from allmydata.test.common_util import ReallyEqualMixin, NonASCIIPathMixin
from allmydata.test.common import ShouldFailMixin
from allmydata.frontends.drop_upload import DropUploader
class DropUploadTestMixin(GridTestMixin, ShouldFailMixin, ReallyEqualMixin, NonASCIIPathMixin):
"""
These tests will be run both with a mock notifier, and (on platforms that support it)
with the real INotify.
"""
def _get_count(self, name):
return self.stats_provider.get_stats()["counters"].get(name, 0)
def _test(self):
self.uploader = None
self.set_up_grid()
self.local_dir = os.path.join(self.basedir, self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir"))
self.mkdir_nonascii(self.local_dir)
self.client = self.g.clients[0]
self.stats_provider = self.client.stats_provider
d = self.client.create_dirnode()
def _made_upload_dir(n):
self.failUnless(IDirectoryNode.providedBy(n))
self.upload_dirnode = n
self.upload_dircap = n.get_uri()
self.uploader = DropUploader(self.client, self.upload_dircap, self.local_dir.encode('utf-8'),
inotify=self.inotify)
return self.uploader.startService()
d.addCallback(_made_upload_dir)
# Write something short enough for a LIT file.
d.addCallback(lambda ign: self._test_file(u"short", "test"))
# Write to the same file again with different data.
d.addCallback(lambda ign: self._test_file(u"short", "different"))
# Test that temporary files are not uploaded.
d.addCallback(lambda ign: self._test_file(u"tempfile", "test", temporary=True))
# Test that we tolerate creation of a subdirectory.
d.addCallback(lambda ign: os.mkdir(os.path.join(self.local_dir, u"directory")))
# Write something longer, and also try to test a Unicode name if the fs can represent it.
name_u = self.unicode_or_fallback(u"l\u00F8ng", u"long")
d.addCallback(lambda ign: self._test_file(name_u, "test"*100))
# TODO: test that causes an upload failure.
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_failed'), 0))
# Prevent unclean reactor errors.
def _cleanup(res):
d = defer.succeed(None)
if self.uploader is not None:
d.addCallback(lambda ign: self.uploader.finish(for_tests=True))
d.addCallback(lambda ign: res)
return d
d.addBoth(_cleanup)
return d
def _test_file(self, name_u, data, temporary=False):
previously_uploaded = self._get_count('drop_upload.files_uploaded')
previously_disappeared = self._get_count('drop_upload.files_disappeared')
d = defer.Deferred()
# Note: this relies on the fact that we only get one IN_CLOSE_WRITE notification per file
# (otherwise we would get a defer.AlreadyCalledError). Should we be relying on that?
self.uploader.set_uploaded_callback(d.callback)
path_u = os.path.join(self.local_dir, name_u)
if sys.platform == "win32":
path = filepath.FilePath(path_u)
else:
path = filepath.FilePath(path_u.encode(get_filesystem_encoding()))
# We don't use FilePath.setContent() here because it creates a temporary file that
# is renamed into place, which causes events that the test is not expecting.
f = open(path.path, "wb")
try:
if temporary and sys.platform != "win32":
os.unlink(path.path)
f.write(data)
finally:
f.close()
if temporary and sys.platform == "win32":
os.unlink(path.path)
self.notify_close_write(path)
if temporary:
d.addCallback(lambda ign: self.shouldFail(NoSuchChildError, 'temp file not uploaded', None,
self.upload_dirnode.get, name_u))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_disappeared'),
previously_disappeared + 1))
else:
d.addCallback(lambda ign: self.upload_dirnode.get(name_u))
d.addCallback(download_to_data)
d.addCallback(lambda actual_data: self.failUnlessReallyEqual(actual_data, data))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'),
previously_uploaded + 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_queued'), 0))
return d
class MockTest(DropUploadTestMixin, unittest.TestCase):
"""This can run on any platform, and even if twisted.internet.inotify can't be imported."""
def test_errors(self):
self.basedir = "drop_upload.MockTest.test_errors"
self.set_up_grid()
errors_dir = os.path.join(self.basedir, "errors_dir")
os.mkdir(errors_dir)
client = self.g.clients[0]
d = client.create_dirnode()
def _made_upload_dir(n):
self.failUnless(IDirectoryNode.providedBy(n))
upload_dircap = n.get_uri()
readonly_dircap = n.get_readonly_uri()
self.shouldFail(AssertionError, 'invalid local.directory', 'could not be represented',
DropUploader, client, upload_dircap, '\xFF', inotify=fake_inotify)
self.shouldFail(AssertionError, 'nonexistent local.directory', 'there is no directory',
DropUploader, client, upload_dircap, os.path.join(self.basedir, "Laputa"), inotify=fake_inotify)
fp = filepath.FilePath(self.basedir).child('NOT_A_DIR')
fp.touch()
self.shouldFail(AssertionError, 'non-directory local.directory', 'is not a directory',
DropUploader, client, upload_dircap, fp.path, inotify=fake_inotify)
self.shouldFail(AssertionError, 'bad upload.dircap', 'does not refer to a directory',
DropUploader, client, 'bad', errors_dir, inotify=fake_inotify)
self.shouldFail(AssertionError, 'non-directory upload.dircap', 'does not refer to a directory',
DropUploader, client, 'URI:LIT:foo', errors_dir, inotify=fake_inotify)
self.shouldFail(AssertionError, 'readonly upload.dircap', 'is not a writecap to a directory',
DropUploader, client, readonly_dircap, errors_dir, inotify=fake_inotify)
d.addCallback(_made_upload_dir)
return d
def test_drop_upload(self):
self.inotify = fake_inotify
self.basedir = "drop_upload.MockTest.test_drop_upload"
return self._test()
def notify_close_write(self, path):
self.uploader._notifier.event(path, self.inotify.IN_CLOSE_WRITE)
class RealTest(DropUploadTestMixin, unittest.TestCase):
"""This is skipped unless both Twisted and the platform support inotify."""
def test_drop_upload(self):
# We should always have runtime.platform.supportsINotify, because we're using
# Twisted >= 10.1.
if not runtime.platform.supportsINotify():
raise unittest.SkipTest("Drop-upload support can only be tested for-real on an OS that supports inotify or equivalent.")
self.inotify = None # use the appropriate inotify for the platform
self.basedir = "drop_upload.RealTest.test_drop_upload"
return self._test()
def notify_close_write(self, path):
# Writing to the file causes the notification.
pass

View File

@ -61,12 +61,15 @@ import os, sys, locale
from twisted.trial import unittest
from twisted.python.filepath import FilePath
from allmydata.test.common_util import ReallyEqualMixin
from allmydata.util import encodingutil, fileutil
from allmydata.util.encodingutil import argv_to_unicode, unicode_to_url, \
unicode_to_output, quote_output, quote_path, quote_local_unicode_path, \
unicode_platform, listdir_unicode, FilenameEncodingError, get_io_encoding, \
get_filesystem_encoding, to_str, from_utf8_or_none, _reload
quote_filepath, unicode_platform, listdir_unicode, FilenameEncodingError, \
get_io_encoding, get_filesystem_encoding, to_str, from_utf8_or_none, _reload, \
to_filepath, extend_filepath, unicode_from_filepath, unicode_segments_from
from allmydata.dirnode import normalize
from twisted.python import usage
@ -400,16 +403,19 @@ class QuoteOutput(ReallyEqualMixin, unittest.TestCase):
check(u"\n", u"\"\\x0a\"", quote_newlines=True)
def test_quote_output_default(self):
encodingutil.io_encoding = 'ascii'
self.patch(encodingutil, 'io_encoding', 'ascii')
self.test_quote_output_ascii(None)
encodingutil.io_encoding = 'latin1'
self.patch(encodingutil, 'io_encoding', 'latin1')
self.test_quote_output_latin1(None)
encodingutil.io_encoding = 'utf-8'
self.patch(encodingutil, 'io_encoding', 'utf-8')
self.test_quote_output_utf8(None)
def win32_other(win32, other):
return win32 if sys.platform == "win32" else other
class QuotePaths(ReallyEqualMixin, unittest.TestCase):
def test_quote_path(self):
self.failUnlessReallyEqual(quote_path([u'foo', u'bar']), "'foo/bar'")
@ -419,9 +425,6 @@ class QuotePaths(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(quote_path([u'foo', u'\nbar'], quotemarks=True), '"foo/\\x0abar"')
self.failUnlessReallyEqual(quote_path([u'foo', u'\nbar'], quotemarks=False), '"foo/\\x0abar"')
def win32_other(win32, other):
return win32 if sys.platform == "win32" else other
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\C:\\foo"),
win32_other("'C:\\foo'", "'\\\\?\\C:\\foo'"))
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\C:\\foo", quotemarks=True),
@ -435,6 +438,70 @@ class QuotePaths(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(quote_local_unicode_path(u"\\\\?\\UNC\\foo\\bar", quotemarks=False),
win32_other("\\\\foo\\bar", "\\\\?\\UNC\\foo\\bar"))
def test_quote_filepath(self):
foo_bar_fp = FilePath(win32_other(u'C:\\foo\\bar', u'/foo/bar'))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp),
win32_other("'C:\\foo\\bar'", "'/foo/bar'"))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp, quotemarks=True),
win32_other("'C:\\foo\\bar'", "'/foo/bar'"))
self.failUnlessReallyEqual(quote_filepath(foo_bar_fp, quotemarks=False),
win32_other("C:\\foo\\bar", "/foo/bar"))
foo_longfp = FilePath(u'\\\\?\\C:\\foo')
self.failUnlessReallyEqual(quote_filepath(foo_longfp),
win32_other("'C:\\foo'", "'\\\\?\\C:\\foo'"))
self.failUnlessReallyEqual(quote_filepath(foo_longfp, quotemarks=True),
win32_other("'C:\\foo'", "'\\\\?\\C:\\foo'"))
self.failUnlessReallyEqual(quote_filepath(foo_longfp, quotemarks=False),
win32_other("C:\\foo", "\\\\?\\C:\\foo"))
class FilePaths(ReallyEqualMixin, unittest.TestCase):
def test_to_filepath(self):
foo_u = win32_other(u'C:\\foo', u'/foo')
nosep_fp = to_filepath(foo_u)
sep_fp = to_filepath(foo_u + os.path.sep)
for fp in (nosep_fp, sep_fp):
self.failUnlessReallyEqual(fp, FilePath(foo_u))
self.failUnlessReallyEqual(fp.path, foo_u)
if sys.platform == "win32":
long_u = u'\\\\?\\C:\\foo'
longfp = to_filepath(long_u + u'\\')
self.failUnlessReallyEqual(longfp, FilePath(long_u))
self.failUnlessReallyEqual(longfp.path, long_u)
def test_extend_filepath(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_bar_baz_u = win32_other(u'C:\\foo\\bar\\baz', u'/foo/bar/baz')
for foo_fp in (foo_bfp, foo_ufp):
fp = extend_filepath(foo_fp, [u'bar', u'baz'])
self.failUnlessReallyEqual(fp, FilePath(foo_bar_baz_u))
self.failUnlessReallyEqual(fp.path, foo_bar_baz_u)
def test_unicode_from_filepath(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_u = win32_other(u'C:\\foo', u'/foo')
for foo_fp in (foo_bfp, foo_ufp):
self.failUnlessReallyEqual(unicode_from_filepath(foo_fp), foo_u)
def test_unicode_segments_from(self):
foo_bfp = FilePath(win32_other(b'C:\\foo', b'/foo'))
foo_ufp = FilePath(win32_other(u'C:\\foo', u'/foo'))
foo_bar_baz_bfp = FilePath(win32_other(b'C:\\foo\\bar\\baz', b'/foo/bar/baz'))
foo_bar_baz_ufp = FilePath(win32_other(u'C:\\foo\\bar\\baz', u'/foo/bar/baz'))
for foo_fp in (foo_bfp, foo_ufp):
for foo_bar_baz_fp in (foo_bar_baz_bfp, foo_bar_baz_ufp):
self.failUnlessReallyEqual(unicode_segments_from(foo_bar_baz_fp, foo_fp),
[u'bar', u'baz'])
class UbuntuKarmicUTF8(EncodingUtil, unittest.TestCase):
uname = 'Linux korn 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:05:01 UTC 2009 x86_64'

View File

@ -116,7 +116,7 @@ class AssistedUpload(unittest.TestCase):
timeout = 240 # It takes longer than 120 seconds on Francois's arm box.
def setUp(self):
self.s = FakeClient()
self.s.storage_broker = StorageFarmBroker(None, True)
self.s.storage_broker = StorageFarmBroker(None, True, 0, None)
self.s.secret_holder = client.SecretHolder("lease secret", "converge")
self.s.startService()

View File

@ -0,0 +1,975 @@
import os, sys
from twisted.trial import unittest
from twisted.internet import defer, task
from allmydata.interfaces import IDirectoryNode
from allmydata.util.assertutil import precondition
from allmydata.util import fake_inotify, fileutil
from allmydata.util.deferredutil import DeferredListShouldSucceed
from allmydata.util.encodingutil import get_filesystem_encoding, to_filepath
from allmydata.util.consumer import download_to_data
from allmydata.test.no_network import GridTestMixin
from allmydata.test.common_util import ReallyEqualMixin, NonASCIIPathMixin
from allmydata.test.common import ShouldFailMixin
from .test_cli_magic_folder import MagicFolderCLITestMixin
from allmydata.frontends import magic_folder
from allmydata.frontends.magic_folder import MagicFolder, Downloader, WriteFileMixin
from allmydata import magicfolderdb, magicpath
from allmydata.util.fileutil import abspath_expanduser_unicode
from allmydata.immutable.upload import Data
class MagicFolderTestMixin(MagicFolderCLITestMixin, ShouldFailMixin, ReallyEqualMixin, NonASCIIPathMixin):
"""
These tests will be run both with a mock notifier, and (on platforms that support it)
with the real INotify.
"""
def setUp(self):
GridTestMixin.setUp(self)
temp = self.mktemp()
self.basedir = abspath_expanduser_unicode(temp.decode(get_filesystem_encoding()))
self.magicfolder = None
self.patch(Downloader, 'REMOTE_SCAN_INTERVAL', 0)
def _get_count(self, name, client=None):
counters = (client or self.get_client()).stats_provider.get_stats()["counters"]
return counters.get('magic_folder.%s' % (name,), 0)
def _createdb(self):
dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.basedir)
mdb = magicfolderdb.get_magicfolderdb(dbfile, create_version=(magicfolderdb.SCHEMA_v1, 1))
self.failUnless(mdb, "unable to create magicfolderdb from %r" % (dbfile,))
self.failUnlessEqual(mdb.VERSION, 1)
return mdb
def _restart_client(self, ign):
#print "_restart_client"
d = self.restart_client()
d.addCallback(self._wait_until_started)
return d
def _wait_until_started(self, ign):
#print "_wait_until_started"
self.magicfolder = self.get_client().getServiceNamed('magic-folder')
return self.magicfolder.ready()
def test_db_basic(self):
fileutil.make_dirs(self.basedir)
self._createdb()
def test_db_persistence(self):
"""Test that a file upload creates an entry in the database."""
fileutil.make_dirs(self.basedir)
db = self._createdb()
relpath1 = u"myFile1"
pathinfo = fileutil.PathInfo(isdir=False, isfile=True, islink=False,
exists=True, size=1, mtime=123, ctime=456)
db.did_upload_version(relpath1, 0, 'URI:LIT:1', 'URI:LIT:0', 0, pathinfo)
c = db.cursor
c.execute("SELECT size, mtime, ctime"
" FROM local_files"
" WHERE path=?",
(relpath1,))
row = c.fetchone()
self.failUnlessEqual(row, (pathinfo.size, pathinfo.mtime, pathinfo.ctime))
# Second test uses db.is_new_file instead of SQL query directly
# to confirm the previous upload entry in the db.
relpath2 = u"myFile2"
path2 = os.path.join(self.basedir, relpath2)
fileutil.write(path2, "meow\n")
pathinfo = fileutil.get_pathinfo(path2)
db.did_upload_version(relpath2, 0, 'URI:LIT:2', 'URI:LIT:1', 0, pathinfo)
self.failUnlessFalse(db.is_new_file(pathinfo, relpath2))
different_pathinfo = fileutil.PathInfo(isdir=False, isfile=True, islink=False,
exists=True, size=0, mtime=pathinfo.mtime, ctime=pathinfo.ctime)
self.failUnlessTrue(db.is_new_file(different_pathinfo, relpath2))
def test_magicfolder_start_service(self):
self.set_up_grid()
self.local_dir = abspath_expanduser_unicode(self.unicode_or_fallback(u"l\u00F8cal_dir", u"local_dir"),
base=self.basedir)
self.mkdir_nonascii(self.local_dir)
d = defer.succeed(None)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.dirs_monitored'), 0))
d.addCallback(lambda ign: self.create_invite_join_magic_folder(u"Alice", self.local_dir))
d.addCallback(self._restart_client)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.dirs_monitored'), 1))
d.addBoth(self.cleanup)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.dirs_monitored'), 0))
return d
def test_move_tree(self):
self.set_up_grid()
self.local_dir = abspath_expanduser_unicode(self.unicode_or_fallback(u"l\u00F8cal_dir", u"local_dir"),
base=self.basedir)
self.mkdir_nonascii(self.local_dir)
empty_tree_name = self.unicode_or_fallback(u"empty_tr\u00EAe", u"empty_tree")
empty_tree_dir = abspath_expanduser_unicode(empty_tree_name, base=self.basedir)
new_empty_tree_dir = abspath_expanduser_unicode(empty_tree_name, base=self.local_dir)
small_tree_name = self.unicode_or_fallback(u"small_tr\u00EAe", u"small_tree")
small_tree_dir = abspath_expanduser_unicode(small_tree_name, base=self.basedir)
new_small_tree_dir = abspath_expanduser_unicode(small_tree_name, base=self.local_dir)
d = self.create_invite_join_magic_folder(u"Alice", self.local_dir)
d.addCallback(self._restart_client)
def _check_move_empty_tree(res):
print "_check_move_empty_tree"
uploaded_d = self.magicfolder.uploader.set_hook('processed')
self.mkdir_nonascii(empty_tree_dir)
os.rename(empty_tree_dir, new_empty_tree_dir)
self.notify(to_filepath(new_empty_tree_dir), self.inotify.IN_MOVED_TO)
return uploaded_d
d.addCallback(_check_move_empty_tree)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.files_uploaded'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.directories_created'), 1))
def _check_move_small_tree(res):
print "_check_move_small_tree"
uploaded_d = self.magicfolder.uploader.set_hook('processed', ignore_count=1)
self.mkdir_nonascii(small_tree_dir)
what_path = abspath_expanduser_unicode(u"what", base=small_tree_dir)
fileutil.write(what_path, "say when")
os.rename(small_tree_dir, new_small_tree_dir)
self.notify(to_filepath(new_small_tree_dir), self.inotify.IN_MOVED_TO)
return uploaded_d
d.addCallback(_check_move_small_tree)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 3))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.files_uploaded'), 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.directories_created'), 2))
def _check_moved_tree_is_watched(res):
print "_check_moved_tree_is_watched"
uploaded_d = self.magicfolder.uploader.set_hook('processed')
another_path = abspath_expanduser_unicode(u"another", base=new_small_tree_dir)
fileutil.write(another_path, "file")
self.notify(to_filepath(another_path), self.inotify.IN_CLOSE_WRITE)
return uploaded_d
d.addCallback(_check_moved_tree_is_watched)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 4))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.files_uploaded'), 2))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.directories_created'), 2))
# Files that are moved out of the upload directory should no longer be watched.
#def _move_dir_away(ign):
# os.rename(new_empty_tree_dir, empty_tree_dir)
# # Wuh? Why don't we get this event for the real test?
# #self.notify(to_filepath(new_empty_tree_dir), self.inotify.IN_MOVED_FROM)
#d.addCallback(_move_dir_away)
#def create_file(val):
# test_file = abspath_expanduser_unicode(u"what", base=empty_tree_dir)
# fileutil.write(test_file, "meow")
# #self.notify(...)
# return
#d.addCallback(create_file)
#d.addCallback(lambda ign: time.sleep(1)) # XXX ICK
#d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
#d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 4))
#d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.files_uploaded'), 2))
#d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
#d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.directories_created'), 2))
d.addBoth(self.cleanup)
return d
def test_persistence(self):
"""
Perform an upload of a given file and then stop the client.
Start a new client and magic-folder service... and verify that the file is NOT uploaded
a second time. This exercises the database persistence along with
the startup and shutdown code paths of the magic-folder service.
"""
self.set_up_grid()
self.local_dir = abspath_expanduser_unicode(u"test_persistence", base=self.basedir)
self.mkdir_nonascii(self.local_dir)
self.collective_dircap = ""
d = defer.succeed(None)
d.addCallback(lambda ign: self.create_invite_join_magic_folder(u"Alice", self.local_dir))
d.addCallback(self._restart_client)
def create_test_file(filename):
d2 = self.magicfolder.uploader.set_hook('processed')
test_file = abspath_expanduser_unicode(filename, base=self.local_dir)
fileutil.write(test_file, "meow %s" % filename)
self.notify(to_filepath(test_file), self.inotify.IN_CLOSE_WRITE)
return d2
d.addCallback(lambda ign: create_test_file(u"what1"))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
d.addCallback(self.cleanup)
d.addCallback(self._restart_client)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
d.addCallback(lambda ign: create_test_file(u"what2"))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'), 2))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
d.addBoth(self.cleanup)
return d
@defer.inlineCallbacks
def test_delete(self):
self.set_up_grid()
self.local_dir = os.path.join(self.basedir, u"local_dir")
self.mkdir_nonascii(self.local_dir)
yield self.create_invite_join_magic_folder(u"Alice\u0101", self.local_dir)
yield self._restart_client(None)
try:
# create a file
up_proc = self.magicfolder.uploader.set_hook('processed')
# down_proc = self.magicfolder.downloader.set_hook('processed')
path = os.path.join(self.local_dir, u'foo')
fileutil.write(path, 'foo\n')
self.notify(to_filepath(path), self.inotify.IN_CLOSE_WRITE)
yield up_proc
self.assertTrue(os.path.exists(path))
# the real test part: delete the file
up_proc = self.magicfolder.uploader.set_hook('processed')
os.unlink(path)
self.notify(to_filepath(path), self.inotify.IN_DELETE)
yield up_proc
self.assertFalse(os.path.exists(path))
# ensure we still have a DB entry, and that the version is 1
node, metadata = yield self.magicfolder.downloader._get_collective_latest_file(u'foo')
self.assertTrue(node is not None, "Failed to find %r in DMD" % (path,))
self.failUnlessEqual(metadata['version'], 1)
finally:
yield self.cleanup(None)
@defer.inlineCallbacks
def test_delete_and_restore(self):
self.set_up_grid()
self.local_dir = os.path.join(self.basedir, u"local_dir")
self.mkdir_nonascii(self.local_dir)
yield self.create_invite_join_magic_folder(u"Alice\u0101", self.local_dir)
yield self._restart_client(None)
try:
# create a file
up_proc = self.magicfolder.uploader.set_hook('processed')
# down_proc = self.magicfolder.downloader.set_hook('processed')
path = os.path.join(self.local_dir, u'foo')
fileutil.write(path, 'foo\n')
self.notify(to_filepath(path), self.inotify.IN_CLOSE_WRITE)
yield up_proc
self.assertTrue(os.path.exists(path))
# delete the file
up_proc = self.magicfolder.uploader.set_hook('processed')
os.unlink(path)
self.notify(to_filepath(path), self.inotify.IN_DELETE)
yield up_proc
self.assertFalse(os.path.exists(path))
# ensure we still have a DB entry, and that the version is 1
node, metadata = yield self.magicfolder.downloader._get_collective_latest_file(u'foo')
self.assertTrue(node is not None, "Failed to find %r in DMD" % (path,))
self.failUnlessEqual(metadata['version'], 1)
# restore the file, with different contents
up_proc = self.magicfolder.uploader.set_hook('processed')
path = os.path.join(self.local_dir, u'foo')
fileutil.write(path, 'bar\n')
self.notify(to_filepath(path), self.inotify.IN_CLOSE_WRITE)
yield up_proc
# ensure we still have a DB entry, and that the version is 2
node, metadata = yield self.magicfolder.downloader._get_collective_latest_file(u'foo')
self.assertTrue(node is not None, "Failed to find %r in DMD" % (path,))
self.failUnlessEqual(metadata['version'], 2)
finally:
yield self.cleanup(None)
@defer.inlineCallbacks
def test_alice_delete_bob_restore(self):
alice_clock = task.Clock()
bob_clock = task.Clock()
yield self.setup_alice_and_bob(alice_clock, bob_clock)
alice_dir = self.alice_magicfolder.uploader._local_path_u
bob_dir = self.bob_magicfolder.uploader._local_path_u
alice_fname = os.path.join(alice_dir, 'blam')
bob_fname = os.path.join(bob_dir, 'blam')
try:
# alice creates a file, bob downloads it
alice_proc = self.alice_magicfolder.uploader.set_hook('processed')
bob_proc = self.bob_magicfolder.downloader.set_hook('processed')
fileutil.write(alice_fname, 'contents0\n')
self.notify(to_filepath(alice_fname), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
alice_clock.advance(0)
yield alice_proc # alice uploads
bob_clock.advance(0)
yield bob_proc # bob downloads
# check the state
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 0)
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 0)
yield self.failUnlessReallyEqual(
self._get_count('downloader.objects_failed', client=self.bob_magicfolder._client),
0
)
yield self.failUnlessReallyEqual(
self._get_count('downloader.objects_downloaded', client=self.bob_magicfolder._client),
1
)
print("BOB DELETE")
# now bob deletes it (bob should upload, alice download)
bob_proc = self.bob_magicfolder.uploader.set_hook('processed')
alice_proc = self.alice_magicfolder.downloader.set_hook('processed')
os.unlink(bob_fname)
self.notify(to_filepath(bob_fname), self.inotify.IN_DELETE, magic=self.bob_magicfolder)
bob_clock.advance(0)
yield bob_proc
alice_clock.advance(0)
yield alice_proc
# check versions
node, metadata = yield self.alice_magicfolder.downloader._get_collective_latest_file(u'blam')
self.assertTrue(metadata['deleted'])
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 1)
print("ALICE RESTORE")
# now alice restores it (alice should upload, bob download)
alice_proc = self.alice_magicfolder.uploader.set_hook('processed')
bob_proc = self.bob_magicfolder.downloader.set_hook('processed')
fileutil.write(alice_fname, 'new contents\n')
self.notify(to_filepath(alice_fname), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
alice_clock.advance(0)
yield alice_proc
bob_clock.advance(0)
yield bob_proc
# check versions
node, metadata = yield self.alice_magicfolder.downloader._get_collective_latest_file(u'blam')
self.assertTrue('deleted' not in metadata or not metadata['deleted'])
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 2)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 2)
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 2)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 2)
finally:
# cleanup
d0 = self.alice_magicfolder.finish()
alice_clock.advance(0)
yield d0
d1 = self.bob_magicfolder.finish()
bob_clock.advance(0)
yield d1
@defer.inlineCallbacks
def test_alice_create_bob_update(self):
alice_clock = task.Clock()
bob_clock = task.Clock()
yield self.setup_alice_and_bob(alice_clock, bob_clock)
alice_dir = self.alice_magicfolder.uploader._local_path_u
bob_dir = self.bob_magicfolder.uploader._local_path_u
alice_fname = os.path.join(alice_dir, 'blam')
bob_fname = os.path.join(bob_dir, 'blam')
try:
# alice creates a file, bob downloads it
alice_proc = self.alice_magicfolder.uploader.set_hook('processed')
bob_proc = self.bob_magicfolder.downloader.set_hook('processed')
fileutil.write(alice_fname, 'contents0\n')
self.notify(to_filepath(alice_fname), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
alice_clock.advance(0)
yield alice_proc # alice uploads
bob_clock.advance(0)
yield bob_proc # bob downloads
# check the state
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 0)
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 0)
yield self.failUnlessReallyEqual(
self._get_count('downloader.objects_failed', client=self.bob_magicfolder._client),
0
)
yield self.failUnlessReallyEqual(
self._get_count('downloader.objects_downloaded', client=self.bob_magicfolder._client),
1
)
# now bob updates it (bob should upload, alice download)
bob_proc = self.bob_magicfolder.uploader.set_hook('processed')
alice_proc = self.alice_magicfolder.downloader.set_hook('processed')
fileutil.write(bob_fname, 'bob wuz here\n')
self.notify(to_filepath(bob_fname), self.inotify.IN_CLOSE_WRITE, magic=self.bob_magicfolder)
bob_clock.advance(0)
yield bob_proc
alice_clock.advance(0)
yield alice_proc
# check the state
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 1)
finally:
# cleanup
d0 = self.alice_magicfolder.finish()
alice_clock.advance(0)
yield d0
d1 = self.bob_magicfolder.finish()
bob_clock.advance(0)
yield d1
@defer.inlineCallbacks
def test_alice_delete_and_restore(self):
alice_clock = task.Clock()
bob_clock = task.Clock()
yield self.setup_alice_and_bob(alice_clock, bob_clock)
alice_dir = self.alice_magicfolder.uploader._local_path_u
bob_dir = self.bob_magicfolder.uploader._local_path_u
alice_fname = os.path.join(alice_dir, 'blam')
bob_fname = os.path.join(bob_dir, 'blam')
try:
# alice creates a file, bob downloads it
alice_proc = self.alice_magicfolder.uploader.set_hook('processed')
bob_proc = self.bob_magicfolder.downloader.set_hook('processed')
fileutil.write(alice_fname, 'contents0\n')
self.notify(to_filepath(alice_fname), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
alice_clock.advance(0)
yield alice_proc # alice uploads
bob_clock.advance(0)
yield bob_proc # bob downloads
# check the state
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 0)
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 0)
yield self.failUnlessReallyEqual(
self._get_count('downloader.objects_failed', client=self.bob_magicfolder._client),
0
)
yield self.failUnlessReallyEqual(
self._get_count('downloader.objects_downloaded', client=self.bob_magicfolder._client),
1
)
self.failUnless(os.path.exists(bob_fname))
# now alice deletes it (alice should upload, bob download)
alice_proc = self.alice_magicfolder.uploader.set_hook('processed')
bob_proc = self.bob_magicfolder.downloader.set_hook('processed')
os.unlink(alice_fname)
self.notify(to_filepath(alice_fname), self.inotify.IN_DELETE, magic=self.alice_magicfolder)
alice_clock.advance(0)
yield alice_proc
bob_clock.advance(0)
yield bob_proc
# check the state
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 1)
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 1)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 1)
self.failIf(os.path.exists(bob_fname))
# now alice restores the file (with new contents)
alice_proc = self.alice_magicfolder.uploader.set_hook('processed')
bob_proc = self.bob_magicfolder.downloader.set_hook('processed')
fileutil.write(alice_fname, 'alice wuz here\n')
self.notify(to_filepath(alice_fname), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
alice_clock.advance(0)
yield alice_proc
bob_clock.advance(0)
yield bob_proc
# check the state
yield self._check_version_in_dmd(self.bob_magicfolder, u"blam", 2)
yield self._check_version_in_local_db(self.bob_magicfolder, u"blam", 2)
yield self._check_version_in_dmd(self.alice_magicfolder, u"blam", 2)
yield self._check_version_in_local_db(self.alice_magicfolder, u"blam", 2)
self.failUnless(os.path.exists(bob_fname))
finally:
# cleanup
d0 = self.alice_magicfolder.finish()
alice_clock.advance(0)
yield d0
d1 = self.bob_magicfolder.finish()
bob_clock.advance(0)
yield d1
def test_magic_folder(self):
self.set_up_grid()
self.local_dir = os.path.join(self.basedir, self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir"))
self.mkdir_nonascii(self.local_dir)
d = self.create_invite_join_magic_folder(u"Alice\u0101", self.local_dir)
d.addCallback(self._restart_client)
# Write something short enough for a LIT file.
d.addCallback(lambda ign: self._check_file(u"short", "test"))
# Write to the same file again with different data.
d.addCallback(lambda ign: self._check_file(u"short", "different"))
# Test that temporary files are not uploaded.
d.addCallback(lambda ign: self._check_file(u"tempfile", "test", temporary=True))
# Test creation of a subdirectory.
d.addCallback(lambda ign: self._check_mkdir(u"directory"))
# Write something longer, and also try to test a Unicode name if the fs can represent it.
name_u = self.unicode_or_fallback(u"l\u00F8ng", u"long")
d.addCallback(lambda ign: self._check_file(name_u, "test"*100))
# TODO: test that causes an upload failure.
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
d.addBoth(self.cleanup)
return d
def _check_mkdir(self, name_u):
return self._check_file(name_u + u"/", "", directory=True)
def _check_file(self, name_u, data, temporary=False, directory=False):
precondition(not (temporary and directory), temporary=temporary, directory=directory)
print "%r._check_file(%r, %r, temporary=%r, directory=%r)" % (self, name_u, data, temporary, directory)
previously_uploaded = self._get_count('uploader.objects_succeeded')
previously_disappeared = self._get_count('uploader.objects_disappeared')
d = self.magicfolder.uploader.set_hook('processed')
path_u = abspath_expanduser_unicode(name_u, base=self.local_dir)
path = to_filepath(path_u)
if directory:
os.mkdir(path_u)
event_mask = self.inotify.IN_CREATE | self.inotify.IN_ISDIR
else:
# We don't use FilePath.setContent() here because it creates a temporary file that
# is renamed into place, which causes events that the test is not expecting.
f = open(path_u, "wb")
try:
if temporary and sys.platform != "win32":
os.unlink(path_u)
f.write(data)
finally:
f.close()
if temporary and sys.platform == "win32":
os.unlink(path_u)
self.notify(path, self.inotify.IN_DELETE, flush=False)
event_mask = self.inotify.IN_CLOSE_WRITE
self.notify(path, event_mask)
encoded_name_u = magicpath.path2magic(name_u)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_failed'), 0))
if temporary:
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_disappeared'),
previously_disappeared + 1))
else:
def _here(res, n):
print "here %r %r" % (n, res)
return res
d.addBoth(_here, 1)
d.addCallback(lambda ign: self.upload_dirnode.list())
d.addBoth(_here, 1.5)
d.addCallback(lambda ign: self.upload_dirnode.get(encoded_name_u))
d.addBoth(_here, 2)
d.addCallback(download_to_data)
d.addBoth(_here, 3)
d.addCallback(lambda actual_data: self.failUnlessReallyEqual(actual_data, data))
d.addBoth(_here, 4)
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_succeeded'),
previously_uploaded + 1))
d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('uploader.objects_queued'), 0))
return d
def _check_version_in_dmd(self, magicfolder, relpath_u, expected_version):
encoded_name_u = magicpath.path2magic(relpath_u)
d = magicfolder.downloader._get_collective_latest_file(encoded_name_u)
def check_latest(result):
    node, metadata = result
    # only check the version when the file is present in the collective
    if node is not None:
        self.failUnlessEqual(metadata['version'], expected_version)
d.addCallback(check_latest)
return d
def _check_version_in_local_db(self, magicfolder, relpath_u, expected_version):
version = magicfolder._db.get_local_file_version(relpath_u)
#print "_check_version_in_local_db: %r has version %s" % (relpath_u, version)
self.failUnlessEqual(version, expected_version)
def _check_file_gone(self, magicfolder, relpath_u):
path = os.path.join(magicfolder.uploader._local_path_u, relpath_u)
self.assertTrue(not os.path.exists(path))
def _check_uploader_count(self, name, expected, magic=None):
self.failUnlessReallyEqual(self._get_count('uploader.'+name, client=(magic or self.alice_magicfolder)._client),
expected)
def _check_downloader_count(self, name, expected, magic=None):
self.failUnlessReallyEqual(self._get_count('downloader.'+name, client=(magic or self.bob_magicfolder)._client),
expected)
def test_alice_bob(self):
alice_clock = task.Clock()
bob_clock = task.Clock()
d = self.setup_alice_and_bob(alice_clock, bob_clock)
def _wait_for_Alice(ign, downloaded_d):
print "Now waiting for Alice to download\n"
alice_clock.advance(0)
return downloaded_d
def _wait_for_Bob(ign, downloaded_d):
print "Now waiting for Bob to download\n"
bob_clock.advance(0)
return downloaded_d
def _wait_for(ign, something_to_do, alice=True):
if alice:
downloaded_d = self.bob_magicfolder.downloader.set_hook('processed')
uploaded_d = self.alice_magicfolder.uploader.set_hook('processed')
else:
downloaded_d = self.alice_magicfolder.downloader.set_hook('processed')
uploaded_d = self.bob_magicfolder.uploader.set_hook('processed')
something_to_do()
if alice:
print "Waiting for Alice to upload\n"
alice_clock.advance(0)
uploaded_d.addCallback(_wait_for_Bob, downloaded_d)
else:
print "Waiting for Bob to upload\n"
bob_clock.advance(0)
uploaded_d.addCallback(_wait_for_Alice, downloaded_d)
return uploaded_d
def Alice_to_write_a_file():
print "Alice writes a file\n"
self.file_path = abspath_expanduser_unicode(u"file1", base=self.alice_magicfolder.uploader._local_path_u)
fileutil.write(self.file_path, "meow, meow meow. meow? meow meow! meow.")
self.notify(to_filepath(self.file_path), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
d.addCallback(_wait_for, Alice_to_write_a_file)
d.addCallback(lambda ign: self._check_version_in_dmd(self.alice_magicfolder, u"file1", 0))
d.addCallback(lambda ign: self._check_version_in_local_db(self.alice_magicfolder, u"file1", 0))
d.addCallback(lambda ign: self._check_uploader_count('objects_failed', 0))
d.addCallback(lambda ign: self._check_uploader_count('objects_succeeded', 1))
d.addCallback(lambda ign: self._check_uploader_count('files_uploaded', 1))
d.addCallback(lambda ign: self._check_uploader_count('objects_queued', 0))
d.addCallback(lambda ign: self._check_uploader_count('directories_created', 0))
d.addCallback(lambda ign: self._check_uploader_count('objects_conflicted', 0))
d.addCallback(lambda ign: self._check_uploader_count('objects_conflicted', 0, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_version_in_local_db(self.bob_magicfolder, u"file1", 0))
d.addCallback(lambda ign: self._check_downloader_count('objects_failed', 0))
d.addCallback(lambda ign: self._check_downloader_count('objects_downloaded', 1))
d.addCallback(lambda ign: self._check_uploader_count('objects_succeeded', 0, magic=self.bob_magicfolder))
def Alice_to_delete_file():
print "Alice deletes the file!\n"
os.unlink(self.file_path)
self.notify(to_filepath(self.file_path), self.inotify.IN_DELETE, magic=self.alice_magicfolder)
d.addCallback(_wait_for, Alice_to_delete_file)
def notify_bob_moved(ign):
d0 = self.bob_magicfolder.uploader.set_hook('processed')
p = abspath_expanduser_unicode(u"file1", base=self.bob_magicfolder.uploader._local_path_u)
self.notify(to_filepath(p), self.inotify.IN_MOVED_FROM, magic=self.bob_magicfolder, flush=False)
self.notify(to_filepath(p + u'.backup'), self.inotify.IN_MOVED_TO, magic=self.bob_magicfolder)
bob_clock.advance(0)
return d0
d.addCallback(notify_bob_moved)
d.addCallback(lambda ign: self._check_version_in_dmd(self.alice_magicfolder, u"file1", 1))
d.addCallback(lambda ign: self._check_version_in_local_db(self.alice_magicfolder, u"file1", 1))
d.addCallback(lambda ign: self._check_uploader_count('objects_failed', 0))
d.addCallback(lambda ign: self._check_uploader_count('objects_succeeded', 2))
d.addCallback(lambda ign: self._check_uploader_count('objects_not_uploaded', 1, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_uploader_count('objects_succeeded', 1, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_version_in_local_db(self.bob_magicfolder, u"file1", 1))
d.addCallback(lambda ign: self._check_version_in_dmd(self.bob_magicfolder, u"file1", 1))
d.addCallback(lambda ign: self._check_file_gone(self.bob_magicfolder, u"file1"))
d.addCallback(lambda ign: self._check_downloader_count('objects_failed', 0))
d.addCallback(lambda ign: self._check_downloader_count('objects_downloaded', 2))
def Alice_to_rewrite_file():
print "Alice rewrites file\n"
self.file_path = abspath_expanduser_unicode(u"file1", base=self.alice_magicfolder.uploader._local_path_u)
fileutil.write(self.file_path, "Alice suddenly sees the white rabbit running into the forest.")
self.notify(to_filepath(self.file_path), self.inotify.IN_CLOSE_WRITE, magic=self.alice_magicfolder)
d.addCallback(_wait_for, Alice_to_rewrite_file)
d.addCallback(lambda ign: self._check_version_in_dmd(self.alice_magicfolder, u"file1", 2))
d.addCallback(lambda ign: self._check_version_in_local_db(self.alice_magicfolder, u"file1", 2))
d.addCallback(lambda ign: self._check_uploader_count('objects_failed', 0))
d.addCallback(lambda ign: self._check_uploader_count('objects_succeeded', 3))
d.addCallback(lambda ign: self._check_uploader_count('files_uploaded', 3))
d.addCallback(lambda ign: self._check_uploader_count('objects_queued', 0))
d.addCallback(lambda ign: self._check_uploader_count('directories_created', 0))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 0, magic=self.alice_magicfolder))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 0))
d.addCallback(lambda ign: self._check_version_in_dmd(self.bob_magicfolder, u"file1", 2))
d.addCallback(lambda ign: self._check_version_in_local_db(self.bob_magicfolder, u"file1", 2))
d.addCallback(lambda ign: self._check_downloader_count('objects_failed', 0))
d.addCallback(lambda ign: self._check_downloader_count('objects_downloaded', 3))
path_u = u"/tmp/magic_folder_test"
encoded_path_u = magicpath.path2magic(u"/tmp/magic_folder_test")
def Alice_tries_to_p0wn_Bob(ign):
print "Alice tries to p0wn Bob\n"
self.objects_excluded = self._get_count('downloader.objects_excluded', client=self.bob_magicfolder._client)
processed_d = self.bob_magicfolder.downloader.set_hook('processed')
# upload a file that would provoke the security bug from #2506
uploadable = Data("", self.alice_magicfolder._client.convergence)
alice_dmd = self.alice_magicfolder.uploader._upload_dirnode
d2 = alice_dmd.add_file(encoded_path_u, uploadable, metadata={"version": 0}, overwrite=True)
d2.addCallback(lambda ign: self.failUnless(alice_dmd.has_child(encoded_path_u)))
d2.addCallback(_wait_for_Bob, processed_d)
return d2
d.addCallback(Alice_tries_to_p0wn_Bob)
d.addCallback(lambda ign: self.failIf(os.path.exists(path_u)))
d.addCallback(lambda ign: self._check_version_in_local_db(self.bob_magicfolder, encoded_path_u, None))
d.addCallback(lambda ign: self._check_downloader_count('objects_excluded', self.objects_excluded+1))
d.addCallback(lambda ign: self._check_downloader_count('objects_downloaded', 3))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 0, magic=self.alice_magicfolder))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 0))
def Bob_to_rewrite_file():
print "Bob rewrites file\n"
self.file_path = abspath_expanduser_unicode(u"file1", base=self.bob_magicfolder.uploader._local_path_u)
print "---- bob's file is %r" % (self.file_path,)
fileutil.write(self.file_path, "No white rabbit to be found.")
self.magicfolder = self.bob_magicfolder
self.notify(to_filepath(self.file_path), self.inotify.IN_CLOSE_WRITE)
d.addCallback(lambda ign: _wait_for(None, Bob_to_rewrite_file, alice=False))
d.addCallback(lambda ign: self._check_version_in_dmd(self.bob_magicfolder, u"file1", 3))
d.addCallback(lambda ign: self._check_version_in_local_db(self.bob_magicfolder, u"file1", 3))
d.addCallback(lambda ign: self._check_uploader_count('objects_failed', 0, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_uploader_count('objects_succeeded', 2, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_uploader_count('files_uploaded', 1, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_uploader_count('objects_queued', 0, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_uploader_count('directories_created', 0, magic=self.bob_magicfolder))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 0))
d.addCallback(lambda ign: self._check_version_in_dmd(self.alice_magicfolder, u"file1", 3))
d.addCallback(lambda ign: self._check_version_in_local_db(self.alice_magicfolder, u"file1", 3))
d.addCallback(lambda ign: self._check_downloader_count('objects_failed', 0, magic=self.alice_magicfolder))
d.addCallback(lambda ign: self._check_downloader_count('objects_downloaded', 1, magic=self.alice_magicfolder))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 0, magic=self.alice_magicfolder))
def Alice_conflicts_with_Bob():
print "Alice conflicts with Bob\n"
downloaded_d = self.bob_magicfolder.downloader.set_hook('processed')
uploadable = Data("do not follow the white rabbit", self.alice_magicfolder._client.convergence)
alice_dmd = self.alice_magicfolder.uploader._upload_dirnode
d2 = alice_dmd.add_file(u"file1", uploadable,
metadata={"version": 5,
"last_downloaded_uri" : "URI:LIT:" },
overwrite=True)
print "Waiting for Alice to upload\n"
d2.addCallback(lambda ign: bob_clock.advance(6))
d2.addCallback(lambda ign: downloaded_d)
d2.addCallback(lambda ign: self.failUnless(alice_dmd.has_child(encoded_path_u)))
return d2
d.addCallback(lambda ign: Alice_conflicts_with_Bob())
# XXX fix the code so that it doesn't increment objects_excluded each turn
#d.addCallback(lambda ign: self._check_downloader_count('objects_excluded', 1))
d.addCallback(lambda ign: self._check_downloader_count('objects_downloaded', 4))
d.addCallback(lambda ign: self._check_downloader_count('objects_conflicted', 1))
def _cleanup(ign, magicfolder, clock):
if magicfolder is not None:
d2 = magicfolder.finish()
clock.advance(0)
return d2
def cleanup_Alice_and_Bob(result):
print "cleanup alice bob test\n"
d = defer.succeed(None)
d.addCallback(_cleanup, self.alice_magicfolder, alice_clock)
d.addCallback(_cleanup, self.bob_magicfolder, bob_clock)
d.addCallback(lambda ign: result)
return d
d.addBoth(cleanup_Alice_and_Bob)
return d
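The Alice/Bob tests above all follow the same choreography: arm the uploader and downloader 'processed' hooks first, perform the local filesystem action, then advance each task.Clock so the queued turn actually runs. A distilled sketch of that pattern (a hypothetical helper, not part of the test suite):

    @defer.inlineCallbacks
    def roundtrip_sketch(writer_mf, writer_clock, reader_mf, reader_clock, action):
        # arm both hooks *before* acting, so no event can be missed
        uploaded = writer_mf.uploader.set_hook('processed')
        downloaded = reader_mf.downloader.set_hook('processed')
        action()                  # e.g. fileutil.write(...) followed by self.notify(...)
        writer_clock.advance(0)   # run the writer's queued upload turn
        yield uploaded
        reader_clock.advance(0)   # run the reader's queued download turn
        yield downloaded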
class MockTest(MagicFolderTestMixin, unittest.TestCase):
"""This can run on any platform, and even if twisted.internet.inotify can't be imported."""
def setUp(self):
MagicFolderTestMixin.setUp(self)
self.inotify = fake_inotify
self.patch(magic_folder, 'get_inotify_module', lambda: self.inotify)
def notify(self, path, mask, magic=None, flush=True):
if magic is None:
magic = self.magicfolder
magic.uploader._notifier.event(path, mask)
# no flush for the mock test.
def test_errors(self):
self.set_up_grid()
errors_dir = abspath_expanduser_unicode(u"errors_dir", base=self.basedir)
os.mkdir(errors_dir)
not_a_dir = abspath_expanduser_unicode(u"NOT_A_DIR", base=self.basedir)
fileutil.write(not_a_dir, "")
magicfolderdb = abspath_expanduser_unicode(u"magicfolderdb", base=self.basedir)
doesnotexist = abspath_expanduser_unicode(u"doesnotexist", base=self.basedir)
client = self.g.clients[0]
d = client.create_dirnode()
def _check_errors(n):
self.failUnless(IDirectoryNode.providedBy(n))
upload_dircap = n.get_uri()
readonly_dircap = n.get_readonly_uri()
self.shouldFail(AssertionError, 'nonexistent local.directory', 'there is no directory',
MagicFolder, client, upload_dircap, '', doesnotexist, magicfolderdb)
self.shouldFail(AssertionError, 'non-directory local.directory', 'is not a directory',
MagicFolder, client, upload_dircap, '', not_a_dir, magicfolderdb)
self.shouldFail(AssertionError, 'bad upload.dircap', 'does not refer to a directory',
MagicFolder, client, 'bad', '', errors_dir, magicfolderdb)
self.shouldFail(AssertionError, 'non-directory upload.dircap', 'does not refer to a directory',
MagicFolder, client, 'URI:LIT:foo', '', errors_dir, magicfolderdb)
self.shouldFail(AssertionError, 'readonly upload.dircap', 'is not a writecap to a directory',
MagicFolder, client, readonly_dircap, '', errors_dir, magicfolderdb)
self.shouldFail(AssertionError, 'collective dircap', 'is not a readonly cap to a directory',
MagicFolder, client, upload_dircap, upload_dircap, errors_dir, magicfolderdb)
def _not_implemented():
raise NotImplementedError("blah")
self.patch(magic_folder, 'get_inotify_module', _not_implemented)
self.shouldFail(NotImplementedError, 'unsupported', 'blah',
MagicFolder, client, upload_dircap, '', errors_dir, magicfolderdb)
d.addCallback(_check_errors)
return d
def test_write_downloaded_file(self):
workdir = u"cli/MagicFolder/write-downloaded-file"
local_file = fileutil.abspath_expanduser_unicode(os.path.join(workdir, "foobar"))
class TestWriteFileMixin(WriteFileMixin):
def _log(self, msg):
pass
writefile = TestWriteFileMixin()
# Create a file named "foobar" with content "foo", then write the downloaded
# content "bar" into it with is_conflict=False.
fileutil.make_dirs(workdir)
fileutil.write(local_file, "foo")
# if is_conflict is False, then the .conflict file shouldn't exist.
writefile._write_downloaded_file(local_file, "bar", False, None)
conflicted_path = local_file + u".conflict"
self.failIf(os.path.exists(conflicted_path))
# At this point, the backup file should exist with content "foo"
backup_path = local_file + u".backup"
self.failUnless(os.path.exists(backup_path))
self.failUnlessEqual(fileutil.read(backup_path), "foo")
# .tmp file shouldn't exist
self.failIf(os.path.exists(local_file + u".tmp"))
# .. and the original file should have the new content
self.failUnlessEqual(fileutil.read(local_file), "bar")
# now a test for conflicted case
writefile._write_downloaded_file(local_file, "bar", True, None)
self.failUnless(os.path.exists(conflicted_path))
# .tmp file shouldn't exist
self.failIf(os.path.exists(local_file + u".tmp"))
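The assertions above pin down the contract: a non-conflicted download replaces the file and leaves the old content in a .backup sibling, a conflicted one lands in .conflict, and the .tmp staging file never survives. A simplified sketch of that flow, assuming only the fileutil.replace_file/rename_no_overwrite semantics tested further down (not the shipped implementation):

    def write_downloaded_file_sketch(path_u, data, is_conflict):
        tmp_path = path_u + u".tmp"
        fileutil.write(tmp_path, data)        # stage the downloaded bytes
        if is_conflict:
            # keep the local file intact; expose the download as a sibling
            fileutil.rename_no_overwrite(tmp_path, path_u + u".conflict")
        else:
            # old content moves to .backup, new content moves into place
            fileutil.replace_file(path_u, tmp_path, path_u + u".backup")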
class RealTest(MagicFolderTestMixin, unittest.TestCase):
"""This is skipped unless both Twisted and the platform support inotify."""
def setUp(self):
MagicFolderTestMixin.setUp(self)
self.inotify = magic_folder.get_inotify_module()
def notify(self, path, mask, magic=None, flush=True):
# Writing to the filesystem causes the notification.
# However, flushing filesystem buffers may be necessary on Windows.
if flush:
fileutil.flush_volume(path.path)
try:
magic_folder.get_inotify_module()
except NotImplementedError:
RealTest.skip = "Magic Folder support can only be tested for-real on an OS that supports inotify or equivalent."
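trial honours a class-level 'skip' attribute, so the whole RealTest class is disabled in one place when no notifier is available. The same guard works for any optional dependency, e.g. (names hypothetical):

    try:
        import fcntl                           # any optional platform module
    except ImportError:
        SomeOtherTest.skip = "fcntl is not available on this platform"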


@ -0,0 +1,28 @@
from twisted.trial import unittest
from allmydata import magicpath
class MagicPath(unittest.TestCase):
tests = {
u"Documents/work/critical-project/qed.txt": u"Documents@_work@_critical-project@_qed.txt",
u"Documents/emails/bunnyfufu@hoppingforest.net": u"Documents@_emails@_bunnyfufu@@hoppingforest.net",
u"foo/@/bar": u"foo@_@@@_bar",
}
def test_path2magic(self):
for test, expected in self.tests.items():
self.failUnlessEqual(magicpath.path2magic(test), expected)
def test_magic2path(self):
for expected, test in self.tests.items():
self.failUnlessEqual(magicpath.magic2path(test), expected)
def test_should_ignore(self):
self.failUnlessEqual(magicpath.should_ignore_file(u".bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"bashrc."), False)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/.bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/.branch/bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/.tree/branch/bashrc"), True)
self.failUnlessEqual(magicpath.should_ignore_file(u"forest/tree/branch/bashrc"), False)
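The vectors above imply a two-stage escape: '@' doubles to '@@' before '/' becomes '@_', which keeps the mapping reversible. A minimal sketch consistent with these tests, assuming well-formed encodings (hypothetical, not the shipped magicpath code):

    def path2magic_sketch(path_u):
        # escape the escape character first, then the separator
        return path_u.replace(u'@', u'@@').replace(u'/', u'@_')

    def magic2path_sketch(name_u):
        out, i = [], 0
        while i < len(name_u):
            if name_u[i] == u'@':
                # '@_' decodes to '/'; '@@' decodes to '@'
                out.append(u'/' if name_u[i+1] == u'_' else u'@')
                i += 2
            else:
                out.append(name_u[i])
                i += 1
        return u''.join(out)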


@ -234,7 +234,7 @@ def make_storagebroker(s=None, num_peers=10):
s = FakeStorage()
peerids = [tagged_hash("peerid", "%d" % i)[:20]
for i in range(num_peers)]
storage_broker = StorageFarmBroker(None, True)
storage_broker = StorageFarmBroker(None, True, 0, None)
for peerid in peerids:
fss = FakeStorageServer(peerid, s)
ann = {"anonymous-storage-FURL": "pb://%s@nowhere/fake" % base32.b2a(peerid),


@ -7,7 +7,8 @@ from twisted.python import usage, runtime
from twisted.internet import threads
from allmydata.util import fileutil, pollmixin
from allmydata.util.encodingutil import unicode_to_argv, unicode_to_output, get_filesystem_encoding
from allmydata.util.encodingutil import unicode_to_argv, unicode_to_output, \
get_filesystem_encoding
from allmydata.scripts import runner
from allmydata.client import Client
from allmydata.test import common_util
@ -265,7 +266,7 @@ class CreateNode(unittest.TestCase):
self.failUnless(re.search(r"\n\[storage\]\n#.*\nenabled = true\n", content), content)
self.failUnless("\nreserved_space = 1G\n" in content)
self.failUnless(re.search(r"\n\[drop_upload\]\n#.*\nenabled = false\n", content), content)
self.failUnless(re.search(r"\n\[magic_folder\]\n#.*\n#enabled = false\n", content), content)
# creating the node a second time should be rejected
rc, out, err = self.run_tahoe(argv)
@ -298,6 +299,19 @@ class CreateNode(unittest.TestCase):
self.failUnless(os.path.exists(n3))
self.failUnless(os.path.exists(os.path.join(n3, tac)))
if kind in ("client", "node", "introducer"):
# test that the output (without --quiet) includes the base directory
n4 = os.path.join(basedir, command + "-n4")
argv = [command, n4]
rc, out, err = self.run_tahoe(argv)
self.failUnlessEqual(err, "")
self.failUnlessIn(" created in ", out)
self.failUnlessIn(n4, out)
self.failIfIn("\\\\?\\", out)
self.failUnlessEqual(rc, 0)
self.failUnless(os.path.exists(n4))
self.failUnless(os.path.exists(os.path.join(n4, tac)))
# make sure it rejects too many arguments
argv = [command, "basedir", "extraarg"]
self.failUnlessRaises(usage.UsageError,


@ -198,7 +198,7 @@ class FakeClient:
mode = dict([i,mode] for i in range(num_servers))
servers = [ ("%20d"%fakeid, FakeStorageServer(mode[fakeid]))
for fakeid in range(self.num_servers) ]
self.storage_broker = StorageFarmBroker(None, permute_peers=True)
self.storage_broker = StorageFarmBroker(None, True, 0, None)
for (serverid, rref) in servers:
ann = {"anonymous-storage-FURL": "pb://%s@nowhere/fake" % base32.b2a(serverid),
"permutation-seed-base32": base32.b2a(serverid) }


@ -441,6 +441,74 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
self.failIf(os.path.exists(fn))
self.failUnless(os.path.exists(fn2))
def test_rename_no_overwrite(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_rename_no_overwrite")
fileutil.make_dirs(workdir)
source_path = os.path.join(workdir, "source")
dest_path = os.path.join(workdir, "dest")
# when neither file exists
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
# when only dest exists
fileutil.write(dest_path, "dest")
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
self.failUnlessEqual(fileutil.read(dest_path), "dest")
# when both exist
fileutil.write(source_path, "source")
self.failUnlessRaises(OSError, fileutil.rename_no_overwrite, source_path, dest_path)
self.failUnlessEqual(fileutil.read(source_path), "source")
self.failUnlessEqual(fileutil.read(dest_path), "dest")
# when only source exists
os.remove(dest_path)
fileutil.rename_no_overwrite(source_path, dest_path)
self.failUnlessEqual(fileutil.read(dest_path), "source")
self.failIf(os.path.exists(source_path))
def test_replace_file(self):
workdir = fileutil.abspath_expanduser_unicode(u"test_replace_file")
fileutil.make_dirs(workdir)
backup_path = os.path.join(workdir, "backup")
replaced_path = os.path.join(workdir, "replaced")
replacement_path = os.path.join(workdir, "replacement")
# when none of the files exist
self.failUnlessRaises(fileutil.ConflictError, fileutil.replace_file, replaced_path, replacement_path, backup_path)
# when only replaced exists
fileutil.write(replaced_path, "foo")
self.failUnlessRaises(fileutil.ConflictError, fileutil.replace_file, replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(replaced_path), "foo")
# when both replaced and replacement exist, but not backup
fileutil.write(replacement_path, "bar")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(backup_path), "foo")
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
# when only replacement exists
os.remove(backup_path)
os.remove(replaced_path)
fileutil.write(replacement_path, "bar")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
self.failIf(os.path.exists(backup_path))
# when replaced, replacement and backup all exist
fileutil.write(replaced_path, "foo")
fileutil.write(replacement_path, "bar")
fileutil.write(backup_path, "bak")
fileutil.replace_file(replaced_path, replacement_path, backup_path)
self.failUnlessEqual(fileutil.read(backup_path), "foo")
self.failUnlessEqual(fileutil.read(replaced_path), "bar")
self.failIf(os.path.exists(replacement_path))
def test_du(self):
basedir = "util/FileUtil/test_du"
fileutil.make_dirs(basedir)
@ -460,12 +528,14 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
saved_cwd = os.path.normpath(os.getcwdu())
abspath_cwd = fileutil.abspath_expanduser_unicode(u".")
abspath_cwd_notlong = fileutil.abspath_expanduser_unicode(u".", long_path=False)
self.failUnless(isinstance(saved_cwd, unicode), saved_cwd)
self.failUnless(isinstance(abspath_cwd, unicode), abspath_cwd)
if sys.platform == "win32":
self.failUnlessReallyEqual(abspath_cwd, fileutil.to_windows_long_path(saved_cwd))
else:
self.failUnlessReallyEqual(abspath_cwd, saved_cwd)
self.failUnlessReallyEqual(abspath_cwd_notlong, saved_cwd)
self.failUnlessReallyEqual(fileutil.to_windows_long_path(u"\\\\?\\foo"), u"\\\\?\\foo")
self.failUnlessReallyEqual(fileutil.to_windows_long_path(u"\\\\.\\foo"), u"\\\\.\\foo")
@ -494,7 +564,19 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
self.failUnlessReallyEqual(baz[4], bar[4]) # same drive
baz_notlong = fileutil.abspath_expanduser_unicode(u"\\baz", long_path=False)
self.failIf(baz_notlong.startswith(u"\\\\?\\"), baz_notlong)
self.failUnlessReallyEqual(baz_notlong[1 :], u":\\baz")
bar_notlong = fileutil.abspath_expanduser_unicode(u"\\bar", base=baz_notlong, long_path=False)
self.failIf(bar_notlong.startswith(u"\\\\?\\"), bar_notlong)
self.failUnlessReallyEqual(bar_notlong[1 :], u":\\bar")
# not u":\\baz\\bar", because \bar is absolute on the current drive.
self.failUnlessReallyEqual(baz_notlong[0], bar_notlong[0]) # same drive
self.failIfIn(u"~", fileutil.abspath_expanduser_unicode(u"~"))
self.failIfIn(u"~", fileutil.abspath_expanduser_unicode(u"~", long_path=False))
cwds = ['cwd']
try:
@ -510,6 +592,9 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
for upath in (u'', u'fuu', u'f\xf9\xf9', u'/fuu', u'U:\\', u'~'):
uabspath = fileutil.abspath_expanduser_unicode(upath)
self.failUnless(isinstance(uabspath, unicode), uabspath)
uabspath_notlong = fileutil.abspath_expanduser_unicode(upath, long_path=False)
self.failUnless(isinstance(uabspath_notlong, unicode), uabspath_notlong)
finally:
os.chdir(saved_cwd)
@ -567,6 +652,60 @@ class FileUtil(ReallyEqualMixin, unittest.TestCase):
disk = fileutil.get_disk_stats('.', 2**128)
self.failUnlessEqual(disk['avail'], 0)
def test_get_pathinfo(self):
basedir = "util/FileUtil/test_get_pathinfo"
fileutil.make_dirs(basedir)
# create a directory
self.mkdir(basedir, "a")
dirinfo = fileutil.get_pathinfo(basedir)
self.failUnlessTrue(dirinfo.isdir)
self.failUnlessTrue(dirinfo.exists)
self.failUnlessFalse(dirinfo.isfile)
self.failUnlessFalse(dirinfo.islink)
# create a file
f = os.path.join(basedir, "1.txt")
fileutil.write(f, "a"*10)
fileinfo = fileutil.get_pathinfo(f)
self.failUnlessTrue(fileinfo.isfile)
self.failUnlessTrue(fileinfo.exists)
self.failUnlessFalse(fileinfo.isdir)
self.failUnlessFalse(fileinfo.islink)
self.failUnlessEqual(fileinfo.size, 10)
# path at which nothing exists
dnename = os.path.join(basedir, "doesnotexist")
now = time.time()
dneinfo = fileutil.get_pathinfo(dnename, now=now)
self.failUnlessFalse(dneinfo.exists)
self.failUnlessFalse(dneinfo.isfile)
self.failUnlessFalse(dneinfo.isdir)
self.failUnlessFalse(dneinfo.islink)
self.failUnlessEqual(dneinfo.size, None)
self.failUnlessEqual(dneinfo.mtime, now)
self.failUnlessEqual(dneinfo.ctime, now)
def test_get_pathinfo_symlink(self):
if not hasattr(os, 'symlink'):
raise unittest.SkipTest("can't create symlinks on this platform")
basedir = "util/FileUtil/test_get_pathinfo"
fileutil.make_dirs(basedir)
f = os.path.join(basedir, "1.txt")
fileutil.write(f, "a"*10)
# create a symlink pointing to 1.txt
slname = os.path.join(basedir, "linkto1.txt")
os.symlink(f, slname)
symlinkinfo = fileutil.get_pathinfo(slname)
self.failUnlessTrue(symlinkinfo.islink)
self.failUnlessTrue(symlinkinfo.exists)
self.failUnlessFalse(symlinkinfo.isfile)
self.failUnlessFalse(symlinkinfo.isdir)
class PollMixinTests(unittest.TestCase):
def setUp(self):
self.pm = pollmixin.PollMixin()


@ -239,7 +239,7 @@ class FakeClient(Client):
self._secret_holder = SecretHolder("lease secret", "convergence secret")
self.helper = None
self.convergence = "some random string"
self.storage_broker = StorageFarmBroker(None, permute_peers=True)
self.storage_broker = StorageFarmBroker(None, True, 0, None)
# fake knowledge of another server
self.storage_broker.test_add_server("other_nodeid",
FakeDisplayableServer("other_nodeid", u"other_nickname \u263B"))


@ -730,7 +730,7 @@ ALLEGED_IMMUTABLE_PREFIX = 'imm.'
def from_string(u, deep_immutable=False, name=u"<unknown name>"):
if not isinstance(u, str):
raise TypeError("unknown URI type: %s.." % str(u)[:100])
raise TypeError("URI must be str: %r" % (u,))
# We allow and check ALLEGED_READONLY_PREFIX or ALLEGED_IMMUTABLE_PREFIX
# on all URIs, even though we would only strictly need to do so for caps of


@ -5,6 +5,7 @@ from foolscap.api import eventually, fireEventually
from twisted.internet import defer, reactor
from allmydata.util import log
from allmydata.util.assertutil import _assert
from allmydata.util.pollmixin import PollMixin
@ -77,28 +78,35 @@ class HookMixin:
I am a helper mixin that maintains a collection of named hooks, primarily
for use in tests. Each hook is set to an unfired Deferred using 'set_hook',
and can then be fired exactly once at the appropriate time by '_call_hook'.
If 'ignore_count' is given, that number of calls to '_call_hook' will be
ignored before firing the hook.
I assume a '_hooks' attribute that should be set by the class constructor
to a dict mapping each valid hook name to None.
"""
def set_hook(self, name, d=None):
def set_hook(self, name, d=None, ignore_count=0):
"""
Called by the hook observer (e.g. by a test).
If d is not given, an unfired Deferred is created and returned.
The hook must not already be set.
"""
self._log("set_hook %r, ignore_count=%r" % (name, ignore_count))
if d is None:
d = defer.Deferred()
assert self._hooks[name] is None, self._hooks[name]
assert isinstance(d, defer.Deferred), d
self._hooks[name] = d
_assert(ignore_count >= 0, ignore_count=ignore_count)
_assert(name in self._hooks, name=name)
_assert(self._hooks[name] is None, name=name, hook=self._hooks[name])
_assert(isinstance(d, defer.Deferred), d=d)
self._hooks[name] = (d, ignore_count)
return d
def _call_hook(self, res, name):
"""
Called to trigger the hook, with argument 'res'. This is a no-op if the
hook is unset. Otherwise, the hook will be unset, and then its Deferred
will be fired synchronously.
Called to trigger the hook, with argument 'res'. This is a no-op if
the hook is unset. If the hook's ignore_count is positive, it will be
decremented; if it was already zero, the hook will be unset, and then
its Deferred will be fired synchronously.
The expected usage is "deferred.addBoth(self._call_hook, 'hookname')".
This ensures that if 'res' is a failure, the hook will be errbacked,
@ -106,13 +114,22 @@ class HookMixin:
'res' is returned so that the current result or failure will be passed
through.
"""
d = self._hooks[name]
if d is None:
return defer.succeed(None)
self._hooks[name] = None
_with_log(d.callback, res)
hook = self._hooks[name]
if hook is None:
return None
(d, ignore_count) = hook
self._log("call_hook %r, ignore_count=%r" % (name, ignore_count))
if ignore_count > 0:
self._hooks[name] = (d, ignore_count - 1)
else:
self._hooks[name] = None
_with_log(d.callback, res)
return res
def _log(self, msg):
log.msg(msg, level=log.NOISY)
def async_iterate(process, iterable, *extra_args, **kwargs):
"""

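In practice the code under test fires hooks with a single addBoth, and tests arm them with set_hook; ignore_count lets a test skip events it does not care about. A hedged usage sketch (hypothetical class, not from the tree):

    from twisted.internet import defer

    class Processor(HookMixin):
        def __init__(self):
            self._hooks = {'processed': None}    # valid hook names, initially unset
        def process(self, item):
            d = defer.succeed(item)
            # ... real work would happen here ...
            d.addBoth(self._call_hook, 'processed')
            return d

    # In a test: skip the first event and wait for the second.
    #   d = processor.set_hook('processed', ignore_count=1)
    #   processor.process(a); processor.process(b)
    #   d fires with the result for b.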

@ -6,8 +6,9 @@ unicode and back.
import sys, os, re, locale
from types import NoneType
from allmydata.util.assertutil import precondition
from allmydata.util.assertutil import precondition, _assert
from twisted.python import usage
from twisted.python.filepath import FilePath
from allmydata.util import log
from allmydata.util.fileutil import abspath_expanduser_unicode
@ -35,9 +36,10 @@ def check_encoding(encoding):
filesystem_encoding = None
io_encoding = None
is_unicode_platform = False
use_unicode_filepath = False
def _reload():
global filesystem_encoding, io_encoding, is_unicode_platform
global filesystem_encoding, io_encoding, is_unicode_platform, use_unicode_filepath
filesystem_encoding = canonical_encoding(sys.getfilesystemencoding())
check_encoding(filesystem_encoding)
@ -61,6 +63,12 @@ def _reload():
is_unicode_platform = sys.platform in ["win32", "darwin"]
# Despite the Unicode-mode FilePath support added to Twisted in
# <https://twistedmatrix.com/trac/ticket/7805>, we can't yet use
# Unicode-mode FilePaths with INotify on non-Windows platforms
# due to <https://twistedmatrix.com/trac/ticket/7928>.
use_unicode_filepath = sys.platform == "win32"
_reload()
@ -88,12 +96,16 @@ def argv_to_unicode(s):
raise usage.UsageError("Argument %s cannot be decoded as %s." %
(quote_output(s), io_encoding))
def argv_to_abspath(s):
def argv_to_abspath(s, long_path=True):
"""
Convenience function to decode an argv element to an absolute path, with ~ expanded.
If this fails, raise a UsageError.
"""
return abspath_expanduser_unicode(argv_to_unicode(s))
decoded = argv_to_unicode(s)
if decoded.startswith(u'-'):
raise usage.UsageError("Path argument %s cannot start with '-'.\nUse %s if you intended to refer to a file."
% (quote_output(s), quote_output(os.path.join('.', s))))
return abspath_expanduser_unicode(decoded, long_path=long_path)
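The effect of the new guard, with hypothetical values:

    # argv_to_abspath("-n")    -> UsageError suggesting './-n'
    # argv_to_abspath("./-n")  -> u"/current/working/dir/-n"

This keeps option-like strings from being silently treated as paths.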
def unicode_to_argv(s, mangle=False):
"""
@ -245,6 +257,54 @@ def quote_local_unicode_path(path, quotemarks=True):
return quote_output(path, quotemarks=quotemarks, quote_newlines=True)
def quote_filepath(path, quotemarks=True):
return quote_local_unicode_path(unicode_from_filepath(path), quotemarks=quotemarks)
def extend_filepath(fp, segments):
# We cannot use FilePath.preauthChild, because
# * it has the security flaw described in <https://twistedmatrix.com/trac/ticket/6527>;
# * it may return a FilePath in the wrong mode.
for segment in segments:
fp = fp.child(segment)
if isinstance(fp.path, unicode) and not use_unicode_filepath:
return FilePath(fp.path.encode(filesystem_encoding))
else:
return fp
def to_filepath(path):
precondition(isinstance(path, unicode if use_unicode_filepath else basestring),
path=path)
if isinstance(path, unicode) and not use_unicode_filepath:
path = path.encode(filesystem_encoding)
if sys.platform == "win32":
_assert(isinstance(path, unicode), path=path)
if path.startswith(u"\\\\?\\") and len(path) > 4:
# FilePath normally strips trailing path separators, but not in this case.
path = path.rstrip(u"\\")
return FilePath(path)
def _decode(s):
precondition(isinstance(s, basestring), s=s)
if isinstance(s, bytes):
return s.decode(filesystem_encoding)
else:
return s
def unicode_from_filepath(fp):
precondition(isinstance(fp, FilePath), fp=fp)
return _decode(fp.path)
def unicode_segments_from(base_fp, ancestor_fp):
precondition(isinstance(base_fp, FilePath), base_fp=base_fp)
precondition(isinstance(ancestor_fp, FilePath), ancestor_fp=ancestor_fp)
return base_fp.asTextMode().segmentsFrom(ancestor_fp.asTextMode())
def unicode_platform():
"""
@ -292,3 +352,6 @@ def listdir_unicode(path):
return os.listdir(path)
else:
return listdir_unicode_fallback(path)
def listdir_filepath(fp):
return listdir_unicode(unicode_from_filepath(fp))
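A hedged sketch of how these helpers compose on a non-Windows platform, where use_unicode_filepath is False and paths are carried as encoded bytes internally (values illustrative):

    fp = to_filepath(u"/home/alice/magic")             # bytes-mode FilePath off Windows
    child = extend_filepath(fp, [u"sub", u"f\u00F8o"]) # still bytes-mode, safely joined
    print unicode_from_filepath(child)                 # /home/alice/magic/sub/føo
    print listdir_filepath(fp)                         # entries decoded back to unicode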


@ -3,6 +3,13 @@ Futz with files like a pro.
"""
import sys, exceptions, os, stat, tempfile, time, binascii
from collections import namedtuple
from errno import ENOENT
if sys.platform == "win32":
from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, c_ulonglong, \
create_unicode_buffer, get_last_error
from ctypes.wintypes import BOOL, HANDLE, DWORD, LPCWSTR, LPWSTR, LPVOID
from twisted.python import log
@ -274,17 +281,18 @@ try:
except ImportError:
pass
def abspath_expanduser_unicode(path, base=None):
def abspath_expanduser_unicode(path, base=None, long_path=True):
"""
Return the absolute version of a path. If 'base' is given and 'path' is relative,
the path will be expanded relative to 'base'.
'path' must be a Unicode string. 'base', if given, must be a Unicode string
corresponding to an absolute path as returned by a previous call to
abspath_expanduser_unicode.
On Windows, the result will be a long path unless long_path is given as False.
"""
if not isinstance(path, unicode):
raise AssertionError("paths must be Unicode strings")
if base is not None:
if base is not None and long_path:
precondition_abspath(base)
path = expanduser(path)
@ -311,7 +319,7 @@ def abspath_expanduser_unicode(path, base=None):
# there is always at least one Unicode path component.
path = os.path.normpath(path)
if sys.platform == "win32":
if sys.platform == "win32" and long_path:
path = to_windows_long_path(path)
return path
@ -335,16 +343,11 @@ def to_windows_long_path(path):
have_GetDiskFreeSpaceExW = False
if sys.platform == "win32":
from ctypes import WINFUNCTYPE, windll, POINTER, byref, c_ulonglong, create_unicode_buffer, \
get_last_error
from ctypes.wintypes import BOOL, DWORD, LPCWSTR, LPWSTR
# <http://msdn.microsoft.com/en-us/library/windows/desktop/ms683188%28v=vs.85%29.aspx>
GetEnvironmentVariableW = WINFUNCTYPE(
DWORD,
LPCWSTR, LPWSTR, DWORD,
DWORD, LPCWSTR, LPWSTR, DWORD,
use_last_error=True
)(("GetEnvironmentVariableW", windll.kernel32))
)(("GetEnvironmentVariableW", windll.kernel32))
try:
# <http://msdn.microsoft.com/en-us/library/aa383742%28v=VS.85%29.aspx>
@ -352,10 +355,9 @@ if sys.platform == "win32":
# <http://msdn.microsoft.com/en-us/library/aa364937%28VS.85%29.aspx>
GetDiskFreeSpaceExW = WINFUNCTYPE(
BOOL,
LPCWSTR, PULARGE_INTEGER, PULARGE_INTEGER, PULARGE_INTEGER,
BOOL, LPCWSTR, PULARGE_INTEGER, PULARGE_INTEGER, PULARGE_INTEGER,
use_last_error=True
)(("GetDiskFreeSpaceExW", windll.kernel32))
)(("GetDiskFreeSpaceExW", windll.kernel32))
have_GetDiskFreeSpaceExW = True
except Exception:
@ -403,8 +405,8 @@ def windows_getenv(name):
err = get_last_error()
if err == ERROR_ENVVAR_NOT_FOUND:
return None
raise OSError("Windows error %d attempting to read size of environment variable %r"
% (err, name))
raise OSError("WinError: %s\n attempting to read size of environment variable %r"
% (WinError(err), name))
if n == 1:
# Avoid an ambiguity between a zero-length string and an error in the return value of the
# call to GetEnvironmentVariableW below.
@ -416,8 +418,8 @@ def windows_getenv(name):
err = get_last_error()
if err == ERROR_ENVVAR_NOT_FOUND:
return None
raise OSError("Windows error %d attempting to read environment variable %r"
% (err, name))
raise OSError("WinError: %s\n attempting to read environment variable %r"
% (WinError(err), name))
if retval >= n:
raise OSError("Unexpected result %d (expected less than %d) from GetEnvironmentVariableW attempting to read environment variable %r"
% (retval, n, name))
@ -459,8 +461,8 @@ def get_disk_stats(whichdir, reserved_space=0):
byref(n_total),
byref(n_free_for_root))
if retval == 0:
raise OSError("Windows error %d attempting to get disk statistics for %r"
% (get_last_error(), whichdir))
raise OSError("WinError: %s\n attempting to get disk statistics for %r"
% (WinError(get_last_error()), whichdir))
free_for_nonroot = n_free_for_nonroot.value
total = n_total.value
free_for_root = n_free_for_root.value
@ -515,3 +517,157 @@ def get_available_space(whichdir, reserved_space):
except EnvironmentError:
log.msg("OS call to get disk statistics failed")
return 0
if sys.platform == "win32":
# <http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx>
CreateFileW = WINFUNCTYPE(
HANDLE, LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE,
use_last_error=True
)(("CreateFileW", windll.kernel32))
GENERIC_WRITE = 0x40000000
FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = 0xFFFFFFFF
# <http://msdn.microsoft.com/en-us/library/aa364439%28v=vs.85%29.aspx>
FlushFileBuffers = WINFUNCTYPE(
BOOL, HANDLE,
use_last_error=True
)(("FlushFileBuffers", windll.kernel32))
# <http://msdn.microsoft.com/en-us/library/ms724211%28v=vs.85%29.aspx>
CloseHandle = WINFUNCTYPE(
BOOL, HANDLE,
use_last_error=True
)(("CloseHandle", windll.kernel32))
# <http://social.msdn.microsoft.com/forums/en-US/netfxbcl/thread/4465cafb-f4ed-434f-89d8-c85ced6ffaa8/>
def flush_volume(path):
abspath = os.path.realpath(path)
if abspath.startswith("\\\\?\\"):
abspath = abspath[4 :]
drive = os.path.splitdrive(abspath)[0]
print "flushing %r" % (drive,)
hVolume = CreateFileW(u"\\\\.\\" + drive,
GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
None,
OPEN_EXISTING,
0,
None
)
if hVolume == INVALID_HANDLE_VALUE:
raise WinError(get_last_error())
if FlushFileBuffers(hVolume) == 0:
raise WinError(get_last_error())
CloseHandle(hVolume)
else:
def flush_volume(path):
# POSIX offers no cheap per-volume flush analogous to FlushFileBuffers;
# a full sync() would be heavyweight, so this is a no-op for now.
pass
class ConflictError(Exception):
pass
class UnableToUnlinkReplacementError(Exception):
pass
def reraise(wrapper):
    # Re-raise the current exception wrapped in 'wrapper', using the Python 2
    # three-argument raise so the original traceback is preserved.
    _, exc, tb = sys.exc_info()
    wrapper_exc = wrapper("%s: %s" % (exc.__class__.__name__, exc))
    raise wrapper_exc.__class__, wrapper_exc, tb
if sys.platform == "win32":
# <https://msdn.microsoft.com/en-us/library/windows/desktop/aa365512%28v=vs.85%29.aspx>
ReplaceFileW = WINFUNCTYPE(
BOOL, LPCWSTR, LPCWSTR, LPCWSTR, DWORD, LPVOID, LPVOID,
use_last_error=True
)(("ReplaceFileW", windll.kernel32))
REPLACEFILE_IGNORE_MERGE_ERRORS = 0x00000002
# <https://msdn.microsoft.com/en-us/library/windows/desktop/ms681382%28v=vs.85%29.aspx>
ERROR_FILE_NOT_FOUND = 2
def rename_no_overwrite(source_path, dest_path):
    # On Windows, os.rename refuses to overwrite an existing dest_path,
    # which gives exactly the no-overwrite semantics we need.
    os.rename(source_path, dest_path)
def replace_file(replaced_path, replacement_path, backup_path):
precondition_abspath(replaced_path)
precondition_abspath(replacement_path)
precondition_abspath(backup_path)
r = ReplaceFileW(replaced_path, replacement_path, backup_path,
REPLACEFILE_IGNORE_MERGE_ERRORS, None, None)
if r == 0:
# The UnableToUnlinkReplacementError case does not happen on Windows;
# all errors should be treated as signalling a conflict.
err = get_last_error()
if err != ERROR_FILE_NOT_FOUND:
raise ConflictError("WinError: %s" % (WinError(err),))
try:
rename_no_overwrite(replacement_path, replaced_path)
except EnvironmentError:
reraise(ConflictError)
else:
def rename_no_overwrite(source_path, dest_path):
# link will fail with EEXIST if there is already something at dest_path.
os.link(source_path, dest_path)
try:
os.unlink(source_path)
except EnvironmentError:
reraise(UnableToUnlinkReplacementError)
def replace_file(replaced_path, replacement_path, backup_path):
precondition_abspath(replaced_path)
precondition_abspath(replacement_path)
precondition_abspath(backup_path)
if not os.path.exists(replacement_path):
raise ConflictError("Replacement file not found: %r" % (replacement_path,))
try:
os.rename(replaced_path, backup_path)
except OSError as e:
if e.errno != ENOENT:
raise
try:
rename_no_overwrite(replacement_path, replaced_path)
except EnvironmentError:
reraise(ConflictError)
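Both implementations give the same contract, which the FileUtil tests above exercise directly; a hedged usage sketch (paths hypothetical):

    fileutil.write(u"/tmp/doc", "old")
    fileutil.write(u"/tmp/doc.new", "new")
    fileutil.replace_file(u"/tmp/doc", u"/tmp/doc.new", u"/tmp/doc.backup")
    # afterwards: doc == "new", doc.backup == "old", doc.new is gone.
    # A missing doc.new (or, on Windows, any ReplaceFileW failure other than
    # FILE_NOT_FOUND) raises ConflictError instead.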
PathInfo = namedtuple('PathInfo', 'isdir isfile islink exists size mtime ctime')
def get_pathinfo(path_u, now=None):
try:
statinfo = os.lstat(path_u)
mode = statinfo.st_mode
return PathInfo(isdir =stat.S_ISDIR(mode),
isfile=stat.S_ISREG(mode),
islink=stat.S_ISLNK(mode),
exists=True,
size =statinfo.st_size,
mtime =statinfo.st_mtime,
ctime =statinfo.st_ctime,
)
except OSError as e:
if e.errno == ENOENT:
if now is None:
now = time.time()
return PathInfo(isdir =False,
isfile=False,
islink=False,
exists=False,
size =None,
mtime =now,
ctime =now,
)
raise


@ -9,13 +9,18 @@ def initialize():
done = True
import codecs, re
from ctypes import WINFUNCTYPE, windll, POINTER, byref, c_int
from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, c_int, get_last_error
from ctypes.wintypes import BOOL, HANDLE, DWORD, UINT, LPWSTR, LPCWSTR, LPVOID
from allmydata.util import log
from allmydata.util.encodingutil import canonical_encoding
# <https://msdn.microsoft.com/en-us/library/ms680621%28VS.85%29.aspx>
SetErrorMode = WINFUNCTYPE(UINT, UINT)(("SetErrorMode", windll.kernel32))
SetErrorMode = WINFUNCTYPE(
UINT, UINT,
use_last_error=True
)(("SetErrorMode", windll.kernel32))
SEM_FAILCRITICALERRORS = 0x0001
SEM_NOOPENFILEERRORBOX = 0x8000
@ -50,13 +55,27 @@ def initialize():
# <https://msdn.microsoft.com/en-us/library/ms683167(VS.85).aspx>
# BOOL WINAPI GetConsoleMode(HANDLE hConsole, LPDWORD lpMode);
GetStdHandle = WINFUNCTYPE(HANDLE, DWORD)(("GetStdHandle", windll.kernel32))
GetStdHandle = WINFUNCTYPE(
HANDLE, DWORD,
use_last_error=True
)(("GetStdHandle", windll.kernel32))
STD_OUTPUT_HANDLE = DWORD(-11)
STD_ERROR_HANDLE = DWORD(-12)
GetFileType = WINFUNCTYPE(DWORD, DWORD)(("GetFileType", windll.kernel32))
GetFileType = WINFUNCTYPE(
DWORD, DWORD,
use_last_error=True
)(("GetFileType", windll.kernel32))
FILE_TYPE_CHAR = 0x0002
FILE_TYPE_REMOTE = 0x8000
GetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, POINTER(DWORD))(("GetConsoleMode", windll.kernel32))
GetConsoleMode = WINFUNCTYPE(
BOOL, HANDLE, POINTER(DWORD),
use_last_error=True
)(("GetConsoleMode", windll.kernel32))
INVALID_HANDLE_VALUE = DWORD(-1).value
def not_a_console(handle):
@ -88,11 +107,14 @@ def initialize():
real_stderr = False
if real_stdout or real_stderr:
# <https://msdn.microsoft.com/en-us/library/windows/desktop/ms687401%28v=vs.85%29.aspx>
# BOOL WINAPI WriteConsoleW(HANDLE hOutput, LPWSTR lpBuffer, DWORD nChars,
# LPDWORD lpCharsWritten, LPVOID lpReserved);
WriteConsoleW = WINFUNCTYPE(BOOL, HANDLE, LPWSTR, DWORD, POINTER(DWORD), LPVOID) \
(("WriteConsoleW", windll.kernel32))
WriteConsoleW = WINFUNCTYPE(
BOOL, HANDLE, LPWSTR, DWORD, POINTER(DWORD), LPVOID,
use_last_error=True
)(("WriteConsoleW", windll.kernel32))
class UnicodeOutput:
def __init__(self, hConsole, stream, fileno, name):
@ -139,8 +161,10 @@ def initialize():
# There is a shorter-than-documented limitation on the length of the string
# passed to WriteConsoleW (see #1232).
retval = WriteConsoleW(self._hConsole, text, min(remaining, 10000), byref(n), None)
if retval == 0 or n.value == 0:
raise IOError("WriteConsoleW returned %r, n.value = %r" % (retval, n.value))
if retval == 0:
raise IOError("WriteConsoleW failed with WinError: %s" % (WinError(get_last_error()),))
if n.value == 0:
raise IOError("WriteConsoleW returned %r, n.value = 0" % (retval,))
remaining -= n.value
if remaining == 0: break
text = text[n.value:]
@ -169,12 +193,23 @@ def initialize():
_complain("exception %r while fixing up sys.stdout and sys.stderr" % (e,))
# This works around <http://bugs.python.org/issue2128>.
GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32))
CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int)) \
(("CommandLineToArgvW", windll.shell32))
# <https://msdn.microsoft.com/en-us/library/windows/desktop/ms683156%28v=vs.85%29.aspx>
GetCommandLineW = WINFUNCTYPE(
LPWSTR,
use_last_error=True
)(("GetCommandLineW", windll.kernel32))
# <https://msdn.microsoft.com/en-us/library/windows/desktop/bb776391%28v=vs.85%29.aspx>
CommandLineToArgvW = WINFUNCTYPE(
POINTER(LPWSTR), LPCWSTR, POINTER(c_int),
use_last_error=True
)(("CommandLineToArgvW", windll.shell32))
argc = c_int(0)
argv_unicode = CommandLineToArgvW(GetCommandLineW(), byref(argc))
if argv_unicode is None:
raise WinError(get_last_error())
# Because of <http://bugs.python.org/issue8775> (and similar limitations in
# twisted), the 'bin/tahoe' script cannot invoke us with the actual Unicode arguments.
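Every prototype above now passes use_last_error=True, so failure paths can read the captured error code with ctypes.get_last_error() instead of racing other calls for GetLastError. The pattern in isolation (handle value hypothetical):

    from ctypes import WINFUNCTYPE, WinError, windll, get_last_error
    from ctypes.wintypes import BOOL, HANDLE

    CloseHandle = WINFUNCTYPE(
        BOOL, HANDLE,
        use_last_error=True
    )(("CloseHandle", windll.kernel32))

    if CloseHandle(some_handle) == 0:    # some_handle: a HANDLE obtained elsewhere
        raise WinError(get_last_error())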


@ -0,0 +1,285 @@
# Windows near-equivalent to twisted.internet.inotify
# This should only be imported on Windows.

import os, sys

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

from allmydata.util.fake_inotify import humanReadableMask, \
    IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE, \
    IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF, \
    IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW, \
    IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED
# Reference the imported names so that pyflakes does not report them as
# unused: they are re-exported as this module's public API.
[humanReadableMask,
    IN_WATCH_MASK, IN_ACCESS, IN_MODIFY, IN_ATTRIB, IN_CLOSE_NOWRITE, IN_CLOSE_WRITE,
    IN_OPEN, IN_MOVED_FROM, IN_MOVED_TO, IN_CREATE, IN_DELETE, IN_DELETE_SELF,
    IN_MOVE_SELF, IN_UNMOUNT, IN_Q_OVERFLOW, IN_IGNORED, IN_ONLYDIR, IN_DONT_FOLLOW,
    IN_MASK_ADD, IN_ISDIR, IN_ONESHOT, IN_CLOSE, IN_MOVED, IN_CHANGED]
from allmydata.util.assertutil import _assert, precondition
from allmydata.util.encodingutil import quote_output
from allmydata.util import log, fileutil
from allmydata.util.pollmixin import PollMixin

from ctypes import WINFUNCTYPE, WinError, windll, POINTER, byref, create_string_buffer, \
    addressof, get_last_error
from ctypes.wintypes import BOOL, HANDLE, DWORD, LPCWSTR, LPVOID
# <http://msdn.microsoft.com/en-us/library/gg258116%28v=vs.85%29.aspx>
FILE_LIST_DIRECTORY = 1

# <http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx>
CreateFileW = WINFUNCTYPE(
    HANDLE, LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE,
    use_last_error=True
)(("CreateFileW", windll.kernel32))

FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
FILE_SHARE_DELETE = 0x00000004

OPEN_EXISTING = 3

FILE_FLAG_BACKUP_SEMANTICS = 0x02000000

# <http://msdn.microsoft.com/en-us/library/ms724211%28v=vs.85%29.aspx>
CloseHandle = WINFUNCTYPE(
    BOOL, HANDLE,
    use_last_error=True
)(("CloseHandle", windll.kernel32))

# <http://msdn.microsoft.com/en-us/library/aa365465%28v=vs.85%29.aspx>
ReadDirectoryChangesW = WINFUNCTYPE(
    BOOL, HANDLE, LPVOID, DWORD, BOOL, DWORD, POINTER(DWORD), LPVOID, LPVOID,
    use_last_error=True
)(("ReadDirectoryChangesW", windll.kernel32))
FILE_NOTIFY_CHANGE_FILE_NAME = 0x00000001
FILE_NOTIFY_CHANGE_DIR_NAME = 0x00000002
FILE_NOTIFY_CHANGE_ATTRIBUTES = 0x00000004
#FILE_NOTIFY_CHANGE_SIZE = 0x00000008
FILE_NOTIFY_CHANGE_LAST_WRITE = 0x00000010
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
#FILE_NOTIFY_CHANGE_CREATION = 0x00000040
FILE_NOTIFY_CHANGE_SECURITY = 0x00000100
# <http://msdn.microsoft.com/en-us/library/aa364391%28v=vs.85%29.aspx>
FILE_ACTION_ADDED = 0x00000001
FILE_ACTION_REMOVED = 0x00000002
FILE_ACTION_MODIFIED = 0x00000003
FILE_ACTION_RENAMED_OLD_NAME = 0x00000004
FILE_ACTION_RENAMED_NEW_NAME = 0x00000005
_action_to_string = {
    FILE_ACTION_ADDED            : "FILE_ACTION_ADDED",
    FILE_ACTION_REMOVED          : "FILE_ACTION_REMOVED",
    FILE_ACTION_MODIFIED         : "FILE_ACTION_MODIFIED",
    FILE_ACTION_RENAMED_OLD_NAME : "FILE_ACTION_RENAMED_OLD_NAME",
    FILE_ACTION_RENAMED_NEW_NAME : "FILE_ACTION_RENAMED_NEW_NAME",
}

_action_to_inotify_mask = {
    FILE_ACTION_ADDED            : IN_CREATE,
    FILE_ACTION_REMOVED          : IN_DELETE,
    FILE_ACTION_MODIFIED         : IN_CHANGED,
    FILE_ACTION_RENAMED_OLD_NAME : IN_MOVED_FROM,
    FILE_ACTION_RENAMED_NEW_NAME : IN_MOVED_TO,
}
INVALID_HANDLE_VALUE = 0xFFFFFFFF

# Win32 BOOL values, passed as the bWatchSubtree argument to ReadDirectoryChangesW.
TRUE = 1
FALSE = 0
class Event(object):
    """
    * action:   a FILE_ACTION_* constant (not a bit mask)
    * filename: a Unicode string, giving the name relative to the watched directory
    """
    def __init__(self, action, filename):
        self.action = action
        self.filename = filename

    def __repr__(self):
        return "Event(%r, %r)" % (_action_to_string.get(self.action, self.action), self.filename)
class FileNotifyInformation(object):
    """
    I represent a buffer containing FILE_NOTIFY_INFORMATION structures, and can
    iterate over those structures, decoding them into Event objects.
    """
    def __init__(self, size=1024):
        self.size = size
        self.buffer = create_string_buffer(size)
        address = addressof(self.buffer)
        _assert(address & 3 == 0, "address 0x%X returned by create_string_buffer is not DWORD-aligned" % (address,))
        self.data = None

    def read_changes(self, hDirectory, recursive, filter):
        bytes_returned = DWORD(0)
        r = ReadDirectoryChangesW(hDirectory,
                                  self.buffer,
                                  self.size,
                                  recursive,
                                  filter,
                                  byref(bytes_returned),
                                  None,  # NULL -> no overlapped I/O
                                  None   # NULL -> no completion routine
                                 )
        if r == 0:
            raise WinError(get_last_error())
        self.data = self.buffer.raw[:bytes_returned.value]

    def __iter__(self):
        # Iterator implemented as generator: <http://docs.python.org/library/stdtypes.html#generator-types>
        pos = 0
        while True:
            bytes = self._read_dword(pos+8)
            s = Event(self._read_dword(pos+4),
                      self.data[pos+12 : pos+12+bytes].decode('utf-16-le'))
            next_entry_offset = self._read_dword(pos)
            yield s
            if next_entry_offset == 0:
                break
            pos = pos + next_entry_offset

    def _read_dword(self, i):
        # little-endian
        return ( ord(self.data[i])          |
                (ord(self.data[i+1]) <<  8) |
                (ord(self.data[i+2]) << 16) |
                (ord(self.data[i+3]) << 24))
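Each FILE_NOTIFY_INFORMATION record starts with three little-endian DWORDs — NextEntryOffset, Action, and FileNameLength (in bytes) — followed by the UTF-16-LE file name; a NextEntryOffset of zero marks the last record, which is what the generator above relies on. A quick sanity-check of that layout using the struct module (the buffer contents here are fabricated purely for illustration):

import struct

# One fabricated record: NextEntryOffset=0 (i.e. last record),
# Action=FILE_ACTION_ADDED, FileNameLength=6 bytes, name u"a.c".
raw = struct.pack("<LLL", 0, FILE_ACTION_ADDED, 6) + u"a.c".encode('utf-16-le')

next_offset, action, name_len = struct.unpack_from("<LLL", raw, 0)
name = raw[12 : 12+name_len].decode('utf-16-le')
assert (next_offset, action, name) == (0, FILE_ACTION_ADDED, u"a.c")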
def _open_directory(path_u):
    hDirectory = CreateFileW(path_u,
                             FILE_LIST_DIRECTORY,         # access rights
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                                          # don't prevent other processes from accessing
                             None,                        # no security descriptor
                             OPEN_EXISTING,               # directory must already exist
                             FILE_FLAG_BACKUP_SEMANTICS,  # necessary to open a directory
                             None                         # no template file
                            )
    if hDirectory == INVALID_HANDLE_VALUE:
        e = WinError(get_last_error())
        raise OSError("Opening directory %s gave WinError: %s" % (quote_output(path_u), e))
    return hDirectory
def simple_test():
    path_u = u"test"
    filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
    recursive = FALSE

    hDirectory = _open_directory(path_u)
    fni = FileNotifyInformation()
    print "Waiting..."
    while True:
        fni.read_changes(hDirectory, recursive, filter)
        print repr(fni.data)
        for info in fni:
            print info
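# The watcher thread moves through these states in order: it starts in
# NOT_STARTED, becomes STARTED once the thread is reading changes, STOPPING
# after stopReading() is called, and STOPPED once the directory handle has
# been closed.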
NOT_STARTED = "NOT_STARTED"
STARTED = "STARTED"
STOPPING = "STOPPING"
STOPPED = "STOPPED"

class INotify(PollMixin):
    def __init__(self):
        self._state = NOT_STARTED
        self._filter = None
        self._callbacks = None
        self._hDirectory = None
        self._path = None
        self._pending = set()
        self._pending_delay = 1.0
        self.recursive_includes_new_subdirectories = True

    def set_pending_delay(self, delay):
        self._pending_delay = delay

    def startReading(self):
        deferToThread(self._thread)
        return self.poll(lambda: self._state != NOT_STARTED)

    def stopReading(self):
        # FIXME race conditions
        if self._state != STOPPED:
            self._state = STOPPING

    def wait_until_stopped(self):
        # Writing to the watched directory generates an event, which wakes the
        # blocking ReadDirectoryChangesW call so that _thread notices STOPPING.
        fileutil.write(os.path.join(self._path.path, u".ignore-me"), "")
        return self.poll(lambda: self._state == STOPPED)

    def watch(self, path, mask=IN_WATCH_MASK, autoAdd=False, callbacks=None, recursive=False):
        precondition(self._state == NOT_STARTED, "watch() can only be called before startReading()", state=self._state)
        precondition(self._filter is None, "only one watch is supported")
        precondition(isinstance(autoAdd, bool), autoAdd=autoAdd)
        precondition(isinstance(recursive, bool), recursive=recursive)
        #precondition(autoAdd == recursive, "need autoAdd and recursive to be the same", autoAdd=autoAdd, recursive=recursive)

        self._path = path
        path_u = path.path
        if not isinstance(path_u, unicode):
            path_u = path_u.decode(sys.getfilesystemencoding())
            _assert(isinstance(path_u, unicode), path_u=path_u)

        self._filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE
        if mask & (IN_ACCESS | IN_CLOSE_NOWRITE | IN_OPEN):
            self._filter = self._filter | FILE_NOTIFY_CHANGE_LAST_ACCESS
        if mask & IN_ATTRIB:
            self._filter = self._filter | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_SECURITY

        self._recursive = TRUE if recursive else FALSE
        self._callbacks = callbacks or []
        self._hDirectory = _open_directory(path_u)

    def _thread(self):
        try:
            _assert(self._filter is not None, "no watch set")

            # To call Twisted or Tahoe APIs, use reactor.callFromThread as described in
            # <http://twistedmatrix.com/documents/current/core/howto/threading.html>.

            fni = FileNotifyInformation()

            while True:
                self._state = STARTED
                fni.read_changes(self._hDirectory, self._recursive, self._filter)
                for info in fni:
                    if self._state == STOPPING:
                        hDirectory = self._hDirectory
                        self._callbacks = None
                        self._hDirectory = None
                        CloseHandle(hDirectory)
                        self._state = STOPPED
                        return

                    path = self._path.preauthChild(info.filename)  # FilePath with Unicode path
                    #mask = _action_to_inotify_mask.get(info.action, IN_CHANGED)

                    # Coalesce events: the first event for a given path schedules
                    # a single delivery after _pending_delay; further events for
                    # that path within the window are dropped. Callbacks always
                    # run on the reactor thread.
                    def _maybe_notify(path):
                        if path not in self._pending:
                            self._pending.add(path)
                            def _do_callbacks():
                                self._pending.remove(path)
                                for cb in self._callbacks:
                                    try:
                                        cb(None, path, IN_CHANGED)
                                    except Exception, e:
                                        log.err(e)
                            reactor.callLater(self._pending_delay, _do_callbacks)
                    reactor.callFromThread(_maybe_notify, path)
        except Exception, e:
            log.err(e)
            self._state = STOPPED
            raise
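For context, a hedged sketch of how this class is driven (the directory path and callback are made-up examples, and a running Twisted reactor is assumed, since notifications arrive via callFromThread):

from twisted.python.filepath import FilePath

def _print_change(ignored, path, mask):
    # Callbacks receive (None, FilePath, inotify-style mask).
    print "changed: %r %s" % (path.path, humanReadableMask(mask))

notifier = INotify()
notifier.watch(FilePath(u"C:\\magic"), callbacks=[_print_change], recursive=False)
d = notifier.startReading()   # Deferred that fires once the watcher thread is running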