Mon Aug  9 16:32:44 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * interfaces.py: Add #993 interfaces

Mon Aug  9 16:35:35 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * frontends/sftpd.py: Modify the sftp frontend to work with the MDMF changes

Mon Aug  9 17:06:19 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/filenode.py: Make the immutable file node implement the same interfaces as the mutable one

Mon Aug  9 17:06:33 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * immutable/literal.py: implement the same interfaces as other filenodes

Fri Aug 13 16:49:57 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * scripts: tell 'tahoe put' about MDMF

Sat Aug 14 01:10:12 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * web: Alter the webapi to get along with and take advantage of the MDMF changes
  
  The main benefit that the webapi gets from MDMF, at least initially, is
  the ability to do a streaming download of an MDMF mutable file. It also
  exposes a way (through the PUT verb) to append to or otherwise modify
  (in-place) an MDMF mutable file.
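
  A hedged sketch of the resulting HTTP interface, in Python 2 (the
  gateway URL and cap are placeholders; the offset parameter matches
  the web/filenode.py hunks below, and the Range request is assumed to
  behave as it does for immutable downloads):

      import urllib2

      base = "http://127.0.0.1:3456/uri/URI:MDMF:xxx:yyy"  # placeholder cap

      # streaming download of the first kilobyte of the file
      req = urllib2.Request(base, headers={"Range": "bytes=0-1023"})
      print urllib2.urlopen(req).read()

      # in-place modification: PUT new bytes at a given offset
      req = urllib2.Request(base + "?offset=16384", data="some new bytes")
      req.get_method = lambda: "PUT"
      urllib2.urlopen(req)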

Sat Aug 14 15:57:11 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * client.py: learn how to create different kinds of mutable files
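
  A minimal sketch of the new call (client and contents are assumed to
  be in scope; the version constants and the version= argument appear
  in the patches elsewhere in this bundle, and the default format comes
  from the new "mutable.format" knob in the [client] section of
  tahoe.cfg):

      from allmydata.interfaces import SDMF_VERSION, MDMF_VERSION

      # create an MDMF file instead of the default SDMF one
      d = client.create_mutable_file(contents, version=MDMF_VERSION)
      d.addCallback(lambda node: node.get_uri())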

Wed Aug 18 17:32:16 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/checker.py and mutable/repair.py: Modify checker and repairer to work with MDMF
  
  The checker and repairer required minimal changes to work with the MDMF
  modifications made elsewhere. The checker duplicated a lot of the code
  that was already in the downloader, so I modified the downloader
  slightly to expose this functionality to the checker and removed the
  duplicated code. The repairer only required a minor change to deal with
  data representation.

Wed Aug 18 17:32:31 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode.py: add versions and partial-file updates to the mutable file node
  
  One of the goals of MDMF as a GSoC project is to lay the groundwork for
  LDMF, a format that will allow Tahoe-LAFS to deal with and encourage
  multiple versions of a single cap on the grid. In line with this, there
  is now a distinction between an overriding mutable file (which can be
  thought of as corresponding to the cap/unique identifier for that mutable
  file) and versions of the mutable file (which we can download, update,
  and so on). All download, upload, and modification operations end up
  happening on a particular version of a mutable file, but there are
  shortcut methods on the object representing the overriding mutable file
  that perform these operations on the best version of the mutable file
  (which is what code should be doing until we have LDMF and better
  support for other paradigms).
  
  Another goal of MDMF was to take advantage of segmentation to give
  callers more efficient partial file updates or appends. This patch
  implements methods that do that, too.
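
  A rough sketch of the intended calling convention, using the method
  names defined in the interfaces.py patch at the start of this bundle
  (node is assumed to be a filenode derived from a write cap, and data
  to be an IMutableUploadable, such as the MutableFileHandle wrapper
  from the publish patch below):

      # operate on an explicit version: write data at offset, appending
      # if the write runs past the current end of the file
      d = node.get_best_mutable_version()
      d.addCallback(lambda version: version.update(data, offset))

      # or use the shortcut methods on the overriding file node
      d2 = node.download_best_version()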
  

Wed Aug 18 17:33:42 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish.py: Modify the publish process to support MDMF
  
  The inner workings of the publishing process needed to be reworked to a
  large extent to cope with segmented mutable files, and to cope with
  partial-file updates of mutable files. This patch does that. It also
  introduces wrappers for uploadable data, allowing the use of
  filehandle-like objects as data sources, in addition to strings. This
  reduces memory consumption when dealing with large files through the
  webapi, and clarifies the update code there.
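
  A short sketch of the new wrapper (node is assumed to be a writeable
  mutable filenode; MutableFileHandle is the uploadable wrapper that
  this patch adds to mutable/publish.py):

      from allmydata.mutable.publish import MutableFileHandle

      # wrap an open filehandle so the publisher can read it in pieces,
      # instead of slurping the whole file into memory first
      uploadable = MutableFileHandle(open("big-file.bin", "rb"))
      d = node.overwrite(uploadable)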

Wed Aug 18 17:35:09 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker.py: Make nodemaker expose a way to create MDMF files

Sat Aug 14 15:56:44 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * docs: update docs to mention MDMF

Wed Aug 18 17:33:04 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout.py and interfaces.py: add MDMF writer and reader
  
  The MDMF writer is responsible for keeping state as plaintext is
  gradually processed into share data by the upload process. When the
  upload finishes, it will write all of its share data to a remote server,
  reporting its status back to the publisher.
  
  The MDMF reader is responsible for abstracting an MDMF file as it sits
  on the grid from the downloader; specifically, by receiving and
  responding to requests for arbitrary data within the MDMF file.
  
  The interfaces.py file has also been modified to contain an interface
  for the writer.

Wed Aug 18 17:34:09 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/retrieve.py: Modify the retrieval process to support MDMF
  
  The logic behind a mutable file download had to be adapted to work with
  segmented mutable files; this patch performs those adaptations. It also
  exposes some decoding and decrypting functionality to make partial-file
  updates a little easier, and supports efficient random-access downloads
  of parts of an MDMF file.
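
  A hedged sketch of a random-access read, combining the IReadable
  interface from this bundle with the download-to-memory consumer that
  its docstring points at (node is assumed to be a mutable filenode):

      from allmydata.util.consumer import MemoryConsumer

      d = node.get_best_readable_version()
      # fetch 1024 bytes starting at offset 4096, without downloading
      # the rest of the file
      d.addCallback(lambda version:
          version.read(MemoryConsumer(), offset=4096, size=1024))
      d.addCallback(lambda consumer: "".join(consumer.chunks))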

Wed Aug 18 17:34:39 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/servermap.py: Alter the servermap updater to work with MDMF files
  
  These modifications were mainly aimed at having the servermap updater
  use the unified MDMF + SDMF read interface whenever possible -- this
  reduces the complexity of the code, making it easier to
  read and maintain. To do this, I needed to modify the process of
  updating the servermap a little bit.
  
  To support partial-file updates, I also modified the servermap updater
  to fetch the block hash trees and certain segments of files while it
  performed a servermap update (this can be done without adding any new
  roundtrips because of batch-read functionality that the read proxy has).
  

Wed Aug 18 17:35:31 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * tests:
  
      - A lot of existing tests relied on aspects of the mutable file
        implementation that were changed. This patch updates those tests
        to work with the changes.
      - This patch also adds tests for new features.

Sun Feb 20 15:02:01 PST 2011  "Brian Warner <warner@lothar.com>"
  * resolve conflicts between 393-MDMF patches and trunk as of 1.8.2

Sun Feb 20 17:46:59 PST 2011  "Brian Warner <warner@lothar.com>"
  * mutable/filenode.py: fix create_mutable_file('string')

Sun Feb 20 21:56:00 PST 2011  "Brian Warner <warner@lothar.com>"
  * resolve more conflicts with current trunk

Sun Feb 20 22:10:04 PST 2011  "Brian Warner <warner@lothar.com>"
  * update MDMF code with StorageFarmBroker changes

Fri Feb 25 17:04:33 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode: Clean up servermap handling in MutableFileVersion
  
  We want to update the servermap before attempting to modify a file,
  which we now do. This introduced code duplication, which was addressed
  by refactoring the servermap update into its own method, and then
  eliminating duplicate servermap updates throughout the
  MutableFileVersion.

Sun Feb 27 15:16:43 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * web: Use the string "replace" to trigger whole-file replacement when processing an offset parameter.

Sun Feb 27 16:34:26 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * docs/configuration.rst: fix more conflicts between #393 and trunk

Sun Feb 27 17:06:37 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout: remove references to the salt hash tree.

Sun Feb 27 18:10:56 PST 2011  warner@lothar.com
  * test_mutable.py: add test to exercise fencepost bug

Mon Feb 28 00:33:27 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish: account for offsets on segment boundaries.

Mon Feb 28 19:08:07 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * tahoe-put: raise UsageError when given a nonsensical mutable type, move option validation code to the option parser.

Fri Mar  4 17:08:58 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * web: use None instead of False in the case of no offset, use object identity comparison to check whether or not an offset was specified.

Mon Mar  7 00:17:13 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode: remove incorrect comments about segment boundaries

Mon Mar  7 00:22:29 PST 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable: use integer division where appropriate

Sun May  1 15:41:25 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout.py: reorder on-disk format to put variable-length fields at the end of the share, after a predictably long preamble

Sun May  1 15:42:49 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * uri.py: Add MDMF cap
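
  Roughly, the new write cap follows the existing mutable-cap pattern
  (the key material below is a placeholder, not a real cap):

      from allmydata import uri

      # MDMF write caps get their own URI:MDMF prefix
      node_uri = uri.from_string("URI:MDMF:writekey:fingerprint")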

Sun May  1 15:45:23 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * nodemaker, mutable/filenode: train nodemaker and filenode to handle MDMF caps

Sun May 15 15:59:46 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/retrieve: fix typo in paused check

Sun May 15 16:00:08 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * scripts/tahoe_put.py: teach tahoe put about MDMF caps

Sun May 15 16:00:38 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/common.py: fix some MDMF-related bugs in common test fixtures

Sun May 15 16:00:54 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_cli: Alter existing MDMF tests to test for MDMF caps

Sun May 15 16:02:07 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable.py: write a test for pausing during retrieval, write support structure for that test

Sun May 15 16:03:26 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable.py: implement cap type checking

Sun May 15 16:03:58 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_web: add MDMF cap tests

Sun May 15 16:04:21 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * web/filenode.py: complain if a PUT is requested with a readonly cap

Sun May 15 16:04:44 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * web/info.py: Display mutable type information when describing a mutable file

Mon May 30 18:20:36 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * uri: teach mutable URI objects how to allow other objects to give them extension parameters

Mon May 30 18:22:01 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * interfaces: working update to interfaces.py for extension handling

Mon May 30 18:24:47 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish: tell filenodes about encoding parameters so they can be put in the cap

Mon May 30 18:25:57 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/servermap: caps imply protocol version, so the servermap doesn't need to tell the filenode what it is anymore.

Mon May 30 18:26:41 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/filenode: pass downloader hints between publisher, MutableFileNode, and MutableFileVersion as convenient
  
  We still need to work on making this more thorough; i.e., passing hints
  when other operations change encoding parameters.

Mon May 30 18:27:39 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test: change test fixtures to work with our new extension passing API; add, change, and delete tests as appropriate to reflect the fact that caps without hints are now the exception rather than the norm

Fri Jun 17 10:58:08 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * Add MDMF dirnodes

Fri Jun 17 10:59:50 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * Add tests for MDMF directories

Fri Jun 17 11:00:19 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * web: teach WUI and webapi to create MDMF directories
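
  Presumably this reuses the mutable-type argument introduced earlier
  in this bundle; a sketch in Python 2, where the pairing of t=mkdir
  with mutable-type is an assumption (the hunks for this patch are not
  shown here):

      import urllib2

      # POST to the gateway to create an MDMF directory (assumed params)
      url = "http://127.0.0.1:3456/uri?t=mkdir&mutable-type=mdmf"
      print urllib2.urlopen(urllib2.Request(url, data="")).read()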

Fri Jun 17 11:01:00 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_web: test webapi and WUI for MDMF directory handling

Fri Jun 17 11:01:37 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * scripts: teach CLI to make MDMF directories

Fri Jun 17 11:02:09 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_cli: test CLI's MDMF creation powers

Wed Jul 27 09:28:55 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/layout: Add tighter bounds on the sizes of certain share components

Wed Jul 27 09:29:55 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable: write tests that see what happens when there are 255 shares, amend existing tests to also test for this condition.

Thu Jul 28 10:16:01 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_mutable: add interoperability tests

Thu Jul 28 16:40:15 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * scripts/cli: resolve merge conflicts

Sat Jul 30 14:41:13 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable/publish: clean up error handling.

Sat Jul 30 15:02:08 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * mutable: address failure to publish when there are as many writers as k, add/fix tests for this

Sat Jul 30 15:03:00 PDT 2011  Kevan Carstensen <kevan@isnotajoke.com>
  * test/test_cli: rework a tahoe cp test that relied on an old webapi error message

New patches:

[interfaces.py: Add #993 interfaces
Kevan Carstensen <kevan@isnotajoke.com>**20100809233244
 Ignore-this: b58621ac5cc86f1b4b4149f9e6c6a1ce
] {
hunk ./src/allmydata/interfaces.py 499
 class MustNotBeUnknownRWError(CapConstraintError):
     """Cannot add an unknown child cap specified in a rw_uri field."""
 
+
+class IReadable(Interface):
+    """I represent a readable object -- either an immutable file, or a
+    specific version of a mutable file.
+    """
+
+    def is_readonly():
+        """Return True if this reference provides mutable access to the given
+        file or directory (i.e. if you can modify it), or False if not. Note
+        that even if this reference is read-only, someone else may hold a
+        read-write reference to it.
+
+        For an IReadable returned by get_best_readable_version(), this will
+        always return True, but for instances of subinterfaces such as
+        IMutableFileVersion, it may return False."""
+
+    def is_mutable():
+        """Return True if this file or directory is mutable (by *somebody*,
+        not necessarily you), False if it is immutable. Note that a file
+        might be mutable overall, but your reference to it might be
+        read-only. On the other hand, all references to an immutable file
+        will be read-only; there are no read-write references to an immutable
+        file."""
+
+    def get_storage_index():
+        """Return the storage index of the file."""
+
+    def get_size():
+        """Return the length (in bytes) of this readable object."""
+
+    def download_to_data():
+        """Download all of the file contents. I return a Deferred that fires
+        with the contents as a byte string."""
+
+    def read(consumer, offset=0, size=None):
+        """Download a portion (possibly all) of the file's contents, making
+        them available to the given IConsumer. Return a Deferred that fires
+        (with the consumer) when the consumer is unregistered (either because
+        the last byte has been given to it, or because the consumer threw an
+        exception during write(), possibly because it no longer wants to
+        receive data). The portion downloaded will start at 'offset' and
+        contain 'size' bytes (or the remainder of the file if size==None).
+
+        The consumer will be used in non-streaming mode: an IPullProducer
+        will be attached to it.
+
+        The consumer will not receive data right away: several network trips
+        must occur first. The order of events will be::
+
+         consumer.registerProducer(p, streaming)
+          (if streaming == False)::
+           consumer does p.resumeProducing()
+            consumer.write(data)
+           consumer does p.resumeProducing()
+            consumer.write(data).. (repeat until all data is written)
+         consumer.unregisterProducer()
+         deferred.callback(consumer)
+
+        If a download error occurs, or an exception is raised by
+        consumer.registerProducer() or consumer.write(), I will call
+        consumer.unregisterProducer() and then deliver the exception via
+        deferred.errback(). To cancel the download, the consumer should call
+        p.stopProducing(), which will result in an exception being delivered
+        via deferred.errback().
+
+        See src/allmydata/util/consumer.py for an example of a simple
+        download-to-memory consumer.
+        """
+
+
+class IWritable(Interface):
+    """
+    I define methods that callers can use to update SDMF and MDMF
+    mutable files on a Tahoe-LAFS grid.
+    """
+    # XXX: For the moment, we have only this. It is possible that we
+    #      want to move overwrite() and modify() in here too.
+    def update(data, offset):
+        """
+        I write the data from my data argument to the MDMF file,
+        starting at offset. I continue writing data until my data
+        argument is exhausted, appending data to the file as necessary.
+        """
+        # assert IMutableUploadable.providedBy(data)
+        # to append data: offset=node.get_size_of_best_version()
+        # do we want to support compacting MDMF?
+        # for an MDMF file, this can be done with O(data.get_size())
+        # memory. For an SDMF file, any modification takes
+        # O(node.get_size_of_best_version()).
+
+
+class IMutableFileVersion(IReadable):
+    """I provide access to a particular version of a mutable file. The
+    access is read/write if I was obtained from a filenode derived from
+    a write cap, or read-only if the filenode was derived from a read cap.
+    """
+
+    def get_sequence_number():
+        """Return the sequence number of this version."""
+
+    def get_servermap():
+        """Return the IMutableFileServerMap instance that was used to create
+        this object.
+        """
+
+    def get_writekey():
+        """Return this filenode's writekey, or None if the node does not have
+        write-capability. This may be used to assist with data structures
+        that need to make certain data available only to writers, such as the
+        read-write child caps in dirnodes. The recommended process is to have
+        reader-visible data be submitted to the filenode in the clear (where
+        it will be encrypted by the filenode using the readkey), but encrypt
+        writer-visible data using this writekey.
+        """
+
+    # TODO: Can this be overwrite instead of replace?
+    def replace(new_contents):
+        """Replace the contents of the mutable file, provided that no other
+        node has published (or is attempting to publish, concurrently) a
+        newer version of the file than this one.
+
+        I will avoid modifying any share that is different than the version
+        given by get_sequence_number(). However, if another node is writing
+        to the file at the same time as me, I may manage to update some shares
+        while they update others. If I see any evidence of this, I will signal
+        UncoordinatedWriteError, and the file will be left in an inconsistent
+        state (possibly the version you provided, possibly the old version,
+        possibly somebody else's version, and possibly a mix of shares from
+        all of these).
+
+        The recommended response to UncoordinatedWriteError is to either
+        return it to the caller (since they failed to coordinate their
+        writes), or to attempt some sort of recovery. It may be sufficient to
+        wait a random interval (with exponential backoff) and repeat your
+        operation. If I do not signal UncoordinatedWriteError, then I was
+        able to write the new version without incident.
+
+        I return a Deferred that fires (with a PublishStatus object) when the
+        update has completed.
+        """
+
+    def modify(modifier_cb):
+        """Modify the contents of the file, by downloading this version,
+        applying the modifier function (or bound method), then uploading
+        the new version. This will succeed as long as no other node
+        publishes a version between the download and the upload.
+        I return a Deferred that fires (with a PublishStatus object) when
+        the update is complete.
+
+        The modifier callable will be given three arguments: a string (with
+        the old contents), a 'first_time' boolean, and a servermap. As with
+        download_to_data(), the old contents will be from this version,
+        but the modifier can use the servermap to make other decisions
+        (such as refusing to apply the delta if there are multiple parallel
+        versions, or if there is evidence of a newer unrecoverable version).
+        'first_time' will be True the first time the modifier is called,
+        and False on any subsequent calls.
+
+        The callable should return a string with the new contents. The
+        callable must be prepared to be called multiple times, and must
+        examine the input string to see if the change that it wants to make
+        is already present in the old version. If it does not need to make
+        any changes, it can either return None, or return its input string.
+
+        If the modifier raises an exception, it will be returned in the
+        errback.
+        """
+
+
 # The hierarchy looks like this:
 #  IFilesystemNode
 #   IFileNode
hunk ./src/allmydata/interfaces.py 758
     def raise_error():
         """Raise any error associated with this node."""
 
+    # XXX: These may not be appropriate outside the context of an IReadable.
     def get_size():
         """Return the length (in bytes) of the data this node represents. For
         directory nodes, I return the size of the backing store. I return
hunk ./src/allmydata/interfaces.py 775
 class IFileNode(IFilesystemNode):
     """I am a node which represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
+    def get_best_readable_version():
+        """Return a Deferred that fires with an IReadable for the 'best'
+        available version of the file. The IReadable provides only read
+        access, even if this filenode was derived from a write cap.
 
hunk ./src/allmydata/interfaces.py 780
-class IImmutableFileNode(IFileNode):
-    def read(consumer, offset=0, size=None):
-        """Download a portion (possibly all) of the file's contents, making
-        them available to the given IConsumer. Return a Deferred that fires
-        (with the consumer) when the consumer is unregistered (either because
-        the last byte has been given to it, or because the consumer threw an
-        exception during write(), possibly because it no longer wants to
-        receive data). The portion downloaded will start at 'offset' and
-        contain 'size' bytes (or the remainder of the file if size==None).
-
-        The consumer will be used in non-streaming mode: an IPullProducer
-        will be attached to it.
+        For an immutable file, there is only one version. For a mutable
+        file, the 'best' version is the recoverable version with the
+        highest sequence number. If no uncoordinated writes have occurred,
+        and if enough shares are available, then this will be the most
+        recent version that has been uploaded. If no version is recoverable,
+        the Deferred will errback with an UnrecoverableFileError.
+        """
 
hunk ./src/allmydata/interfaces.py 788
-        The consumer will not receive data right away: several network trips
-        must occur first. The order of events will be::
+    def download_best_version():
+        """Download the contents of the version that would be returned
+        by get_best_readable_version(). This is equivalent to calling
+        download_to_data() on the IReadable given by that method.
 
hunk ./src/allmydata/interfaces.py 793
-         consumer.registerProducer(p, streaming)
-          (if streaming == False)::
-           consumer does p.resumeProducing()
-            consumer.write(data)
-           consumer does p.resumeProducing()
-            consumer.write(data).. (repeat until all data is written)
-         consumer.unregisterProducer()
-         deferred.callback(consumer)
+        I return a Deferred that fires with a byte string when the file
+        has been fully downloaded. To support streaming download, use
+        the 'read' method of IReadable. If no version is recoverable,
+        the Deferred will errback with an UnrecoverableFileError.
+        """
 
hunk ./src/allmydata/interfaces.py 799
-        If a download error occurs, or an exception is raised by
-        consumer.registerProducer() or consumer.write(), I will call
-        consumer.unregisterProducer() and then deliver the exception via
-        deferred.errback(). To cancel the download, the consumer should call
-        p.stopProducing(), which will result in an exception being delivered
-        via deferred.errback().
+    def get_size_of_best_version():
+        """Find the size of the version that would be returned by
+        get_best_readable_version().
 
hunk ./src/allmydata/interfaces.py 803
-        See src/allmydata/util/consumer.py for an example of a simple
-        download-to-memory consumer.
+        I return a Deferred that fires with an integer. If no version
+        is recoverable, the Deferred will errback with an
+        UnrecoverableFileError.
         """
 
hunk ./src/allmydata/interfaces.py 808
+
+class IImmutableFileNode(IFileNode, IReadable):
+    """I am a node representing an immutable file. Immutable files have
+    only one version"""
+
+
 class IMutableFileNode(IFileNode):
     """I provide access to a 'mutable file', which retains its identity
     regardless of what contents are put in it.
hunk ./src/allmydata/interfaces.py 873
     only be retrieved and updated all-at-once, as a single big string. Future
     versions of our mutable files will remove this restriction.
     """
-
-    def download_best_version():
-        """Download the 'best' available version of the file, meaning one of
-        the recoverable versions with the highest sequence number. If no
+    def get_best_mutable_version():
+        """Return a Deferred that fires with an IMutableFileVersion for
+        the 'best' available version of the file. The best version is
+        the recoverable version with the highest sequence number. If no
         uncoordinated writes have occurred, and if enough shares are
hunk ./src/allmydata/interfaces.py 878
-        available, then this will be the most recent version that has been
-        uploaded.
+        available, then this will be the most recent version that has
+        been uploaded.
 
hunk ./src/allmydata/interfaces.py 881
-        I update an internal servermap with MODE_READ, determine which
-        version of the file is indicated by
-        servermap.best_recoverable_version(), and return a Deferred that
-        fires with its contents. If no version is recoverable, the Deferred
-        will errback with UnrecoverableFileError.
-        """
-
-    def get_size_of_best_version():
-        """Find the size of the version that would be downloaded with
-        download_best_version(), without actually downloading the whole file.
-
-        I return a Deferred that fires with an integer.
+        If no version is recoverable, the Deferred will errback with an
+        UnrecoverableFileError.
         """
 
     def overwrite(new_contents):
hunk ./src/allmydata/interfaces.py 921
         errback.
         """
 
-
     def get_servermap(mode):
         """Return a Deferred that fires with an IMutableFileServerMap
         instance, updated using the given mode.
hunk ./src/allmydata/interfaces.py 974
         writer-visible data using this writekey.
         """
 
+    def set_version(version):
+        """Tahoe-LAFS supports SDMF and MDMF mutable files. By default,
+        we upload in SDMF for reasons of compatibility. If you want to
+        change this, set_version will let you do that.
+
+        To say that this file should be uploaded in SDMF, pass in a 0. To
+        say that the file should be uploaded as MDMF, pass in a 1.
+        """
+
+    def get_version():
+        """Returns the mutable file protocol version."""
+
 class NotEnoughSharesError(Exception):
     """Download was unable to get enough shares"""
 
hunk ./src/allmydata/interfaces.py 1822
         """The upload is finished, and whatever filehandle was in use may be
         closed."""
 
+
+class IMutableUploadable(Interface):
+    """
+    I represent content that is due to be uploaded to a mutable filecap.
+    """
+    # This is somewhat simpler than the IUploadable interface above
+    # because mutable files do not need to be concerned with possibly
+    # generating a CHK, nor with per-file keys. It is a subset of the
+    # methods in IUploadable, though, so we could just as well implement
+    # the mutable uploadables as IUploadables that don't happen to use
+    # those methods (with the understanding that the unused methods will
+    # never be called on such objects)
+    def get_size():
+        """
+        Returns a Deferred that fires with the size of the content held
+        by the uploadable.
+        """
+
+    def read(length):
+        """
+        Returns a list of strings which, when concatenated, are the next
+        length bytes of the file, or fewer if there are fewer bytes
+        between the current location and the end of the file.
+        """
+
+    def close():
+        """
+        The process that used the Uploadable is finished using it, so
+        the uploadable may be closed.
+        """
+
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
     attributes which can be read to determine the results of the upload. Some
}
[frontends/sftpd.py: Modify the sftp frontend to work with the MDMF changes
Kevan Carstensen <kevan@isnotajoke.com>**20100809233535
 Ignore-this: 2d25e2cfcd0d7bbcbba660c7e1da12f
] {
hunk ./src/allmydata/frontends/sftpd.py 33
 from allmydata.interfaces import IFileNode, IDirectoryNode, ExistingChildError, \
      NoSuchChildError, ChildOfWrongTypeError
 from allmydata.mutable.common import NotWriteableError
+from allmydata.mutable.publish import MutableFileHandle
 from allmydata.immutable.upload import FileHandle
 from allmydata.dirnode import update_metadata
 from allmydata.util.fileutil import EncryptedTemporaryFile
hunk ./src/allmydata/frontends/sftpd.py 667
         else:
             assert IFileNode.providedBy(filenode), filenode
 
-            if filenode.is_mutable():
-                self.async.addCallback(lambda ign: filenode.download_best_version())
-                def _downloaded(data):
-                    self.consumer = OverwriteableFileConsumer(len(data), tempfile_maker)
-                    self.consumer.write(data)
-                    self.consumer.finish()
-                    return None
-                self.async.addCallback(_downloaded)
-            else:
-                download_size = filenode.get_size()
-                assert download_size is not None, "download_size is None"
+            self.async.addCallback(lambda ignored: filenode.get_best_readable_version())
+
+            def _read(version):
+                if noisy: self.log("_read", level=NOISY)
+                download_size = version.get_size()
+                assert download_size is not None
+
                 self.consumer = OverwriteableFileConsumer(download_size, tempfile_maker)
hunk ./src/allmydata/frontends/sftpd.py 675
-                def _read(ign):
-                    if noisy: self.log("_read immutable", level=NOISY)
-                    filenode.read(self.consumer, 0, None)
-                self.async.addCallback(_read)
+
+                version.read(self.consumer, 0, None)
+            self.async.addCallback(_read)
 
         eventually(self.async.callback, None)
 
hunk ./src/allmydata/frontends/sftpd.py 821
                     assert parent and childname, (parent, childname, self.metadata)
                     d2.addCallback(lambda ign: parent.set_metadata_for(childname, self.metadata))
 
-                d2.addCallback(lambda ign: self.consumer.get_current_size())
-                d2.addCallback(lambda size: self.consumer.read(0, size))
-                d2.addCallback(lambda new_contents: self.filenode.overwrite(new_contents))
+                d2.addCallback(lambda ign: self.filenode.overwrite(MutableFileHandle(self.consumer.get_file())))
             else:
                 def _add_file(ign):
                     self.log("_add_file childname=%r" % (childname,), level=OPERATIONAL)
}
[immutable/filenode.py: Make the immutable file node implement the same interfaces as the mutable one
Kevan Carstensen <kevan@isnotajoke.com>**20100810000619
 Ignore-this: 93e536c0f8efb705310f13ff64621527
] {
hunk ./src/allmydata/immutable/filenode.py 8
 now = time.time
 from zope.interface import implements
 from twisted.internet import defer
-from twisted.internet.interfaces import IConsumer
 
hunk ./src/allmydata/immutable/filenode.py 9
-from allmydata.interfaces import IImmutableFileNode, IUploadResults
 from allmydata import uri
hunk ./src/allmydata/immutable/filenode.py 10
+from twisted.internet.interfaces import IConsumer
+from twisted.protocols import basic
+from foolscap.api import eventually
+from allmydata.interfaces import IImmutableFileNode, ICheckable, \
+     IDownloadTarget, IUploadResults
+from allmydata.util import dictutil, log, base32, consumer
+from allmydata.immutable.checker import Checker
 from allmydata.check_results import CheckResults, CheckAndRepairResults
 from allmydata.util.dictutil import DictOfSets
 from pycryptopp.cipher.aes import AES
hunk ./src/allmydata/immutable/filenode.py 285
         return self._cnode.check_and_repair(monitor, verify, add_lease)
     def check(self, monitor, verify=False, add_lease=False):
         return self._cnode.check(monitor, verify, add_lease)
+
+    def get_best_readable_version(self):
+        """
+        Return an IReadable of the best version of this file. Since
+        immutable files can have only one version, we just return the
+        current filenode.
+        """
+        return defer.succeed(self)
+
+
+    def download_best_version(self):
+        """
+        Download the best version of this file, returning its contents
+        as a bytestring. Since there is only one version of an immutable
+        file, we download and return the contents of this file.
+        """
+        d = consumer.download_to_data(self)
+        return d
+
+    # for an immutable file, download_to_data (specified in IReadable)
+    # is the same as download_best_version (specified in IFileNode). For
+    # mutable files, the difference is more meaningful, since they can
+    # have multiple versions.
+    download_to_data = download_best_version
+
+
+    # get_size() (IReadable), get_current_size() (IFilesystemNode), and
+    # get_size_of_best_version(IFileNode) are all the same for immutable
+    # files.
+    get_size_of_best_version = get_current_size
}
[immutable/literal.py: implement the same interfaces as other filenodes
Kevan Carstensen <kevan@isnotajoke.com>**20100810000633
 Ignore-this: b50dd5df2d34ecd6477b8499a27aef13
] hunk ./src/allmydata/immutable/literal.py 106
         d.addCallback(lambda lastSent: consumer)
         return d
 
+    # IReadable, IFileNode, IFilesystemNode
+    def get_best_readable_version(self):
+        return defer.succeed(self)
+
+
+    def download_best_version(self):
+        return defer.succeed(self.u.data)
+
+
+    download_to_data = download_best_version
+    get_size_of_best_version = get_current_size
+
[scripts: tell 'tahoe put' about MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20100813234957
 Ignore-this: c106b3384fc676bd3c0fb466d2a52b1b
] {
hunk ./src/allmydata/scripts/cli.py 167
     optFlags = [
         ("mutable", "m", "Create a mutable file instead of an immutable one."),
         ]
+    optParameters = [
+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
+        ]
 
     def parseArgs(self, arg1=None, arg2=None):
         # see Examples below
hunk ./src/allmydata/scripts/tahoe_put.py 21
     from_file = options.from_file
     to_file = options.to_file
     mutable = options['mutable']
+    mutable_type = False
+
+    if mutable:
+        mutable_type = options['mutable-type']
     if options['quiet']:
         verbosity = 0
     else:
hunk ./src/allmydata/scripts/tahoe_put.py 33
     stdout = options.stdout
     stderr = options.stderr
 
+    if mutable_type and mutable_type not in ('sdmf', 'mdmf'):
+        # Don't try to pass unsupported types to the webapi
+        print >>stderr, "error: %s is an invalid format" % mutable_type
+        return 1
+
     if nodeurl[-1] != "/":
         nodeurl += "/"
     if to_file:
hunk ./src/allmydata/scripts/tahoe_put.py 76
         url = nodeurl + "uri"
     if mutable:
         url += "?mutable=true"
+    if mutable_type:
+        assert mutable
+        url += "&mutable-type=%s" % mutable_type
+
     if from_file:
         infileobj = open(os.path.expanduser(from_file), "rb")
     else:
}
[web: Alter the webapi to get along with and take advantage of the MDMF changes
Kevan Carstensen <kevan@isnotajoke.com>**20100814081012
 Ignore-this: 96c2ed4e4a9f450fb84db5d711d10bd6
 
 The main benefit that the webapi gets from MDMF, at least initially, is
 the ability to do a streaming download of an MDMF mutable file. It also
 exposes a way (through the PUT verb) to append to or otherwise modify
 (in-place) an MDMF mutable file.
] {
hunk ./src/allmydata/web/common.py 12
 from allmydata.interfaces import ExistingChildError, NoSuchChildError, \
      FileTooLargeError, NotEnoughSharesError, NoSharesError, \
      EmptyPathnameComponentError, MustBeDeepImmutableError, \
-     MustBeReadonlyError, MustNotBeUnknownRWError
+     MustBeReadonlyError, MustNotBeUnknownRWError, SDMF_VERSION, MDMF_VERSION
 from allmydata.mutable.common import UnrecoverableFileError
 from allmydata.util import abbreviate
 from allmydata.util.encodingutil import to_str, quote_output
hunk ./src/allmydata/web/common.py 35
     else:
         return boolean_of_arg(replace)
 
+
+def parse_mutable_type_arg(arg):
+    if not arg:
+        return None # interpreted by the caller as "let the nodemaker decide"
+
+    arg = arg.lower()
+    assert arg in ("mdmf", "sdmf")
+
+    if arg == "mdmf":
+        return MDMF_VERSION
+
+    return SDMF_VERSION
+
+
+def parse_offset_arg(offset):
+    # XXX: This will raise a ValueError when invoked on something that
+    # is not an integer. Is that okay? Or do we want a better error
+    # message? Since this call is going to be used by programmers and
+    # their tools rather than users (through the wui), it is not
+    # inconsistent to return that, I guess.
+    offset = int(offset)
+    return offset
+
+
 def get_root(ctx_or_req):
     req = IRequest(ctx_or_req)
     # the addSlash=True gives us one extra (empty) segment
hunk ./src/allmydata/web/directory.py 19
 from allmydata.uri import from_string_dirnode
 from allmydata.interfaces import IDirectoryNode, IFileNode, IFilesystemNode, \
      IImmutableFileNode, IMutableFileNode, ExistingChildError, \
-     NoSuchChildError, EmptyPathnameComponentError
+     NoSuchChildError, EmptyPathnameComponentError, SDMF_VERSION, MDMF_VERSION
 from allmydata.monitor import Monitor, OperationCancelledError
 from allmydata import dirnode
 from allmydata.web.common import text_plain, WebError, \
hunk ./src/allmydata/web/directory.py 153
         if not t:
             # render the directory as HTML, using the docFactory and Nevow's
             # whole templating thing.
-            return DirectoryAsHTML(self.node)
+            return DirectoryAsHTML(self.node,
+                                   self.client.mutable_file_default)
 
         if t == "json":
             return DirectoryJSONMetadata(ctx, self.node)
hunk ./src/allmydata/web/directory.py 556
     docFactory = getxmlfile("directory.xhtml")
     addSlash = True
 
-    def __init__(self, node):
+    def __init__(self, node, default_mutable_format):
         rend.Page.__init__(self)
         self.node = node
 
hunk ./src/allmydata/web/directory.py 560
+        assert default_mutable_format in (MDMF_VERSION, SDMF_VERSION)
+        self.default_mutable_format = default_mutable_format
+
     def beforeRender(self, ctx):
         # attempt to get the dirnode's children, stashing them (or the
         # failure that results) for later use
hunk ./src/allmydata/web/directory.py 780
             ]]
         forms.append(T.div(class_="freeform-form")[mkdir])
 
+        # Build input elements for mutable file type. We do this outside
+        # of the list so we can check the appropriate format, based on
+        # the default configured in the client (which reflects the
+        # default configured in tahoe.cfg)
+        if self.default_mutable_format == MDMF_VERSION:
+            mdmf_input = T.input(type='radio', name='mutable-type',
+                                 id='mutable-type-mdmf', value='mdmf',
+                                 checked='checked')
+        else:
+            mdmf_input = T.input(type='radio', name='mutable-type',
+                                 id='mutable-type-mdmf', value='mdmf')
+
+        if self.default_mutable_format == SDMF_VERSION:
+            sdmf_input = T.input(type='radio', name='mutable-type',
+                                 id='mutable-type-sdmf', value='sdmf',
+                                 checked="checked")
+        else:
+            sdmf_input = T.input(type='radio', name='mutable-type',
+                                 id='mutable-type-sdmf', value='sdmf')
+
         upload = T.form(action=".", method="post",
                         enctype="multipart/form-data")[
             T.fieldset[
hunk ./src/allmydata/web/directory.py 812
             T.input(type="submit", value="Upload"),
             " Mutable?:",
             T.input(type="checkbox", name="mutable"),
+            sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
+            mdmf_input,
+            T.label(for_="mutable-type-mdmf")["MDMF (experimental)"],
             ]]
         forms.append(T.div(class_="freeform-form")[upload])
 
hunk ./src/allmydata/web/directory.py 850
                 kiddata = ("filenode", {'size': childnode.get_size(),
                                         'mutable': childnode.is_mutable(),
                                         })
+                if childnode.is_mutable() and \
+                    childnode.get_version() is not None:
+                    mutable_type = childnode.get_version()
+                    assert mutable_type in (SDMF_VERSION, MDMF_VERSION)
+
+                    if mutable_type == MDMF_VERSION:
+                        mutable_type = "mdmf"
+                    else:
+                        mutable_type = "sdmf"
+                    kiddata[1]['mutable-type'] = mutable_type
+
             elif IDirectoryNode.providedBy(childnode):
                 kiddata = ("dirnode", {'mutable': childnode.is_mutable()})
             else:
hunk ./src/allmydata/web/filenode.py 9
 from nevow import url, rend
 from nevow.inevow import IRequest
 
-from allmydata.interfaces import ExistingChildError
+from allmydata.interfaces import ExistingChildError, SDMF_VERSION, MDMF_VERSION
 from allmydata.monitor import Monitor
 from allmydata.immutable.upload import FileHandle
hunk ./src/allmydata/web/filenode.py 12
+from allmydata.mutable.publish import MutableFileHandle
+from allmydata.mutable.common import MODE_READ
 from allmydata.util import log, base32
 
 from allmydata.web.common import text_plain, WebError, RenderMixin, \
hunk ./src/allmydata/web/filenode.py 18
      boolean_of_arg, get_arg, should_create_intermediate_directories, \
-     MyExceptionHandler, parse_replace_arg
+     MyExceptionHandler, parse_replace_arg, parse_offset_arg, \
+     parse_mutable_type_arg
 from allmydata.web.check_results import CheckResults, \
      CheckAndRepairResults, LiteralCheckResults
 from allmydata.web.info import MoreInfo
hunk ./src/allmydata/web/filenode.py 29
         # a new file is being uploaded in our place.
         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
         if mutable:
-            req.content.seek(0)
-            data = req.content.read()
-            d = client.create_mutable_file(data)
+            mutable_type = parse_mutable_type_arg(get_arg(req,
+                                                          "mutable-type",
+                                                          None))
+            data = MutableFileHandle(req.content)
+            d = client.create_mutable_file(data, version=mutable_type)
             def _uploaded(newnode):
                 d2 = self.parentnode.set_node(self.name, newnode,
                                               overwrite=replace)
hunk ./src/allmydata/web/filenode.py 66
         d.addCallback(lambda res: childnode.get_uri())
         return d
 
-    def _read_data_from_formpost(self, req):
-        # SDMF: files are small, and we can only upload data, so we read
-        # the whole file into memory before uploading.
-        contents = req.fields["file"]
-        contents.file.seek(0)
-        data = contents.file.read()
-        return data
 
     def replace_me_with_a_formpost(self, req, client, replace):
         # create a new file, maybe mutable, maybe immutable
hunk ./src/allmydata/web/filenode.py 71
         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
 
+        # create an immutable file
+        contents = req.fields["file"]
         if mutable:
hunk ./src/allmydata/web/filenode.py 74
-            data = self._read_data_from_formpost(req)
-            d = client.create_mutable_file(data)
+            mutable_type = parse_mutable_type_arg(get_arg(req, "mutable-type",
+                                                          None))
+            uploadable = MutableFileHandle(contents.file)
+            d = client.create_mutable_file(uploadable, version=mutable_type)
             def _uploaded(newnode):
                 d2 = self.parentnode.set_node(self.name, newnode,
                                               overwrite=replace)
hunk ./src/allmydata/web/filenode.py 85
                 return d2
             d.addCallback(_uploaded)
             return d
-        # create an immutable file
-        contents = req.fields["file"]
+
         uploadable = FileHandle(contents.file, convergence=client.convergence)
         d = self.parentnode.add_file(self.name, uploadable, overwrite=replace)
         d.addCallback(lambda newnode: newnode.get_uri())
hunk ./src/allmydata/web/filenode.py 91
         return d
 
+
 class PlaceHolderNodeHandler(RenderMixin, rend.Page, ReplaceMeMixin):
     def __init__(self, client, parentnode, name):
         rend.Page.__init__(self)
hunk ./src/allmydata/web/filenode.py 174
             # properly. So we assume that at least the browser will agree
             # with itself, and echo back the same bytes that we were given.
             filename = get_arg(req, "filename", self.name) or "unknown"
-            if self.node.is_mutable():
-                # some day: d = self.node.get_best_version()
-                d = makeMutableDownloadable(self.node)
-            else:
-                d = defer.succeed(self.node)
+            d = self.node.get_best_readable_version()
             d.addCallback(lambda dn: FileDownloader(dn, filename))
             return d
         if t == "json":
hunk ./src/allmydata/web/filenode.py 178
-            if self.parentnode and self.name:
-                d = self.parentnode.get_metadata_for(self.name)
+            # We do this to make sure that fields like size and
+            # mutable-type (which depend on the file on the grid and not
+            # just on the cap) are filled in. The latter gets used in
+            # tests, in particular.
+            #
+            # TODO: Make it so that the servermap knows how to update in
+            # a mode specifically designed to fill in these fields, and
+            # then update it in that mode.
+            if self.node.is_mutable():
+                d = self.node.get_servermap(MODE_READ)
             else:
                 d = defer.succeed(None)
hunk ./src/allmydata/web/filenode.py 190
+            if self.parentnode and self.name:
+                d.addCallback(lambda ignored:
+                    self.parentnode.get_metadata_for(self.name))
+            else:
+                d.addCallback(lambda ignored: None)
             d.addCallback(lambda md: FileJSONMetadata(ctx, self.node, md))
             return d
         if t == "info":
hunk ./src/allmydata/web/filenode.py 211
         if t:
             raise WebError("GET file: bad t=%s" % t)
         filename = get_arg(req, "filename", self.name) or "unknown"
-        if self.node.is_mutable():
-            # some day: d = self.node.get_best_version()
-            d = makeMutableDownloadable(self.node)
-        else:
-            d = defer.succeed(self.node)
+        d = self.node.get_best_readable_version()
         d.addCallback(lambda dn: FileDownloader(dn, filename))
         return d
 
hunk ./src/allmydata/web/filenode.py 219
         req = IRequest(ctx)
         t = get_arg(req, "t", "").strip()
         replace = parse_replace_arg(get_arg(req, "replace", "true"))
+        offset = parse_offset_arg(get_arg(req, "offset", -1))
 
         if not t:
hunk ./src/allmydata/web/filenode.py 222
-            if self.node.is_mutable():
+            if self.node.is_mutable() and offset >= 0:
+                return self.update_my_contents(req, offset)
+
+            elif self.node.is_mutable():
                 return self.replace_my_contents(req)
             if not replace:
                 # this is the early trap: if someone else modifies the
hunk ./src/allmydata/web/filenode.py 232
                 # directory while we're uploading, the add_file(overwrite=)
                 # call in replace_me_with_a_child will do the late trap.
                 raise ExistingChildError()
+            if offset >= 0:
+                raise WebError("PUT to a file: append operation invoked "
+                               "on an immutable cap")
+
+
             assert self.parentnode and self.name
             return self.replace_me_with_a_child(req, self.client, replace)
         if t == "uri":
hunk ./src/allmydata/web/filenode.py 299
 
     def replace_my_contents(self, req):
         req.content.seek(0)
-        new_contents = req.content.read()
+        new_contents = MutableFileHandle(req.content)
         d = self.node.overwrite(new_contents)
         d.addCallback(lambda res: self.node.get_uri())
         return d
hunk ./src/allmydata/web/filenode.py 304
 
+
+    def update_my_contents(self, req, offset):
+        req.content.seek(0)
+        added_contents = MutableFileHandle(req.content)
+
+        d = self.node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(added_contents, offset))
+        d.addCallback(lambda ignored:
+            self.node.get_uri())
+        return d
+
+
     def replace_my_contents_with_a_formpost(self, req):
         # we have a mutable file. Get the data from the formpost, and replace
         # the mutable file's contents with it.
hunk ./src/allmydata/web/filenode.py 320
-        new_contents = self._read_data_from_formpost(req)
+        new_contents = req.fields['file']
+        new_contents = MutableFileHandle(new_contents.file)
+
         d = self.node.overwrite(new_contents)
         d.addCallback(lambda res: self.node.get_uri())
         return d
hunk ./src/allmydata/web/filenode.py 327
 
-class MutableDownloadable:
-    #implements(IDownloadable)
-    def __init__(self, size, node):
-        self.size = size
-        self.node = node
-    def get_size(self):
-        return self.size
-    def is_mutable(self):
-        return True
-    def read(self, consumer, offset=0, size=None):
-        d = self.node.download_best_version()
-        d.addCallback(self._got_data, consumer, offset, size)
-        return d
-    def _got_data(self, contents, consumer, offset, size):
-        start = offset
-        if size is not None:
-            end = offset+size
-        else:
-            end = self.size
-        # SDMF: we can write the whole file in one big chunk
-        consumer.write(contents[start:end])
-        return consumer
-
-def makeMutableDownloadable(n):
-    d = defer.maybeDeferred(n.get_size_of_best_version)
-    d.addCallback(MutableDownloadable, n)
-    return d
 
 class FileDownloader(rend.Page):
     # since we override the rendering process (to let the tahoe Downloader
hunk ./src/allmydata/web/filenode.py 509
     data[1]['mutable'] = filenode.is_mutable()
     if edge_metadata is not None:
         data[1]['metadata'] = edge_metadata
+
+    if filenode.is_mutable() and filenode.get_version() is not None:
+        mutable_type = filenode.get_version()
+        assert mutable_type in (MDMF_VERSION, SDMF_VERSION)
+        if mutable_type == MDMF_VERSION:
+            mutable_type = "mdmf"
+        else:
+            mutable_type = "sdmf"
+        data[1]['mutable-type'] = mutable_type
+
     return text_plain(simplejson.dumps(data, indent=1) + "\n", ctx)
 
 def FileURI(ctx, filenode):
hunk ./src/allmydata/web/root.py 15
 from allmydata import get_package_versions_string
 from allmydata import provisioning
 from allmydata.util import idlib, log
-from allmydata.interfaces import IFileNode
+from allmydata.interfaces import IFileNode, MDMF_VERSION, SDMF_VERSION
 from allmydata.web import filenode, directory, unlinked, status, operations
 from allmydata.web import reliability, storage
 from allmydata.web.common import abbreviate_size, getxmlfile, WebError, \
hunk ./src/allmydata/web/root.py 19
-     get_arg, RenderMixin, boolean_of_arg
+     get_arg, RenderMixin, boolean_of_arg, parse_mutable_type_arg
 
 
 class URIHandler(RenderMixin, rend.Page):
hunk ./src/allmydata/web/root.py 50
         if t == "":
             mutable = boolean_of_arg(get_arg(req, "mutable", "false").strip())
             if mutable:
-                return unlinked.PUTUnlinkedSSK(req, self.client)
+                version = parse_mutable_type_arg(get_arg(req, "mutable-type",
+                                                 None))
+                return unlinked.PUTUnlinkedSSK(req, self.client, version)
             else:
                 return unlinked.PUTUnlinkedCHK(req, self.client)
         if t == "mkdir":
hunk ./src/allmydata/web/root.py 70
         if t in ("", "upload"):
             mutable = bool(get_arg(req, "mutable", "").strip())
             if mutable:
-                return unlinked.POSTUnlinkedSSK(req, self.client)
+                version = parse_mutable_type_arg(get_arg(req, "mutable-type",
+                                                         None))
+                return unlinked.POSTUnlinkedSSK(req, self.client, version)
             else:
                 return unlinked.POSTUnlinkedCHK(req, self.client)
         if t == "mkdir":
hunk ./src/allmydata/web/root.py 329
 
     def render_upload_form(self, ctx, data):
         # this is a form where users can upload unlinked files
+        #
+        # for mutable files, users can choose the format by selecting
+        # MDMF or SDMF from a radio button. They can also configure a
+        # default format in tahoe.cfg, which they rightly expect us to
+        # obey. we convey to them that we are obeying their choice by
+        # ensuring that the one that they've chosen is selected in the
+        # interface.
+        if self.client.mutable_file_default == MDMF_VERSION:
+            mdmf_input = T.input(type='radio', name='mutable-type',
+                                 value='mdmf', id='mutable-type-mdmf',
+                                 checked='checked')
+        else:
+            mdmf_input = T.input(type='radio', name='mutable-type',
+                                 value='mdmf', id='mutable-type-mdmf')
+
+        if self.client.mutable_file_default == SDMF_VERSION:
+            sdmf_input = T.input(type='radio', name='mutable-type',
+                                 value='sdmf', id='mutable-type-sdmf',
+                                 checked='checked')
+        else:
+            sdmf_input = T.input(type='radio', name='mutable-type',
+                                 value='sdmf', id='mutable-type-sdmf')
+
+
         form = T.form(action="uri", method="post",
                       enctype="multipart/form-data")[
             T.fieldset[
hunk ./src/allmydata/web/root.py 361
                   T.input(type="file", name="file", class_="freeform-input-file")],
             T.input(type="hidden", name="t", value="upload"),
             T.div[T.input(type="checkbox", name="mutable"), T.label(for_="mutable")["Create mutable file"],
+                  sdmf_input, T.label(for_="mutable-type-sdmf")["SDMF"],
+                  mdmf_input,
+                  T.label(for_='mutable-type-mdmf')['MDMF (experimental)'],
                   " ", T.input(type="submit", value="Upload!")],
             ]]
         return T.div[form]
hunk ./src/allmydata/web/unlinked.py 7
 from twisted.internet import defer
 from nevow import rend, url, tags as T
 from allmydata.immutable.upload import FileHandle
+from allmydata.mutable.publish import MutableFileHandle
 from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \
      convert_children_json, WebError
 from allmydata.web import status
hunk ./src/allmydata/web/unlinked.py 20
     # that fires with the URI of the new file
     return d
 
-def PUTUnlinkedSSK(req, client):
+def PUTUnlinkedSSK(req, client, version):
     # SDMF: files are small, and we can only upload data
     req.content.seek(0)
hunk ./src/allmydata/web/unlinked.py 23
-    data = req.content.read()
-    d = client.create_mutable_file(data)
+    data = MutableFileHandle(req.content)
+    d = client.create_mutable_file(data, version=version)
     d.addCallback(lambda n: n.get_uri())
     return d
 
hunk ./src/allmydata/web/unlinked.py 83
                       ["/uri/" + res.uri])
         return d
 
-def POSTUnlinkedSSK(req, client):
+def POSTUnlinkedSSK(req, client, version):
     # "POST /uri", to create an unlinked file.
     # SDMF: files are small, and we can only upload data
hunk ./src/allmydata/web/unlinked.py 86
-    contents = req.fields["file"]
-    contents.file.seek(0)
-    data = contents.file.read()
-    d = client.create_mutable_file(data)
+    contents = req.fields["file"].file
+    data = MutableFileHandle(contents)
+    d = client.create_mutable_file(data, version=version)
     d.addCallback(lambda n: n.get_uri())
     return d
 
}
[client.py: learn how to create different kinds of mutable files
Kevan Carstensen <kevan@isnotajoke.com>**20100814225711
 Ignore-this: 61ff665bc050cba5f58bf2ed779d692b
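 
 A rough sketch of the new pieces (not literal patch content; MutableData
 is the string wrapper introduced in the publish.py patch below):
 
   # tahoe.cfg:
   #   [client]
   #   mutable.format = mdmf
 
   from allmydata.interfaces import MDMF_VERSION
   from allmydata.mutable.publish import MutableData
   d = client.create_mutable_file(MutableData("initial contents"),
                                  version=MDMF_VERSION)
   # version=None (the default) falls back to the mutable.format setting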
] {
hunk ./src/allmydata/client.py 25
 from allmydata.util.time_format import parse_duration, parse_date
 from allmydata.stats import StatsProvider
 from allmydata.history import History
-from allmydata.interfaces import IStatsProducer, RIStubClient
+from allmydata.interfaces import IStatsProducer, RIStubClient, \
+                                 SDMF_VERSION, MDMF_VERSION
 from allmydata.nodemaker import NodeMaker
 
 
hunk ./src/allmydata/client.py 357
                                    self.terminator,
                                    self.get_encoding_parameters(),
                                    self._key_generator)
+        default = self.get_config("client", "mutable.format", default="sdmf")
+        if default == "mdmf":
+            self.mutable_file_default = MDMF_VERSION
+        else:
+            self.mutable_file_default = SDMF_VERSION
 
     def get_history(self):
         return self.history
hunk ./src/allmydata/client.py 500
     def create_immutable_dirnode(self, children, convergence=None):
         return self.nodemaker.create_immutable_directory(children, convergence)
 
-    def create_mutable_file(self, contents=None, keysize=None):
-        return self.nodemaker.create_mutable_file(contents, keysize)
+    def create_mutable_file(self, contents=None, keysize=None, version=None):
+        if not version:
+            version = self.mutable_file_default
+        return self.nodemaker.create_mutable_file(contents, keysize,
+                                                  version=version)
 
     def upload(self, uploadable):
         uploader = self.getServiceNamed("uploader")
}
[mutable/checker.py and mutable/repair.py: Modify checker and repairer to work with MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20100819003216
 Ignore-this: d3bd3260742be8964877f0a53543b01b
 
 The checker and repairer required minimal changes to work with the MDMF
 modifications made elsewhere. The checker duplicated a lot of the code
 that was already in the downloader, so I modified the downloader
 slightly to expose this functionality to the checker and removed the
 duplicated code. The repairer only required a minor change to deal with
 data representation.
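 
 As a sketch (these are the calls the reworked checker makes; see the
 checker.py hunks below), verification now piggybacks on the downloader:
 
   r = Retrieve(self._node, servermap, self.best_version, verify=True)
   d = r.download()
   d.addCallback(self._process_bad_shares)
 
 With verify=True, Retrieve downloads all N shares rather than stopping
 after K, and verifies them without decrypting or decoding.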
] {
hunk ./src/allmydata/mutable/checker.py 2
 
-from twisted.internet import defer
-from twisted.python import failure
-from allmydata import hashtree
 from allmydata.uri import from_string
hunk ./src/allmydata/mutable/checker.py 3
-from allmydata.util import hashutil, base32, idlib, log
+from allmydata.util import base32, idlib, log
 from allmydata.check_results import CheckAndRepairResults, CheckResults
 
 from allmydata.mutable.common import MODE_CHECK, CorruptShareError
hunk ./src/allmydata/mutable/checker.py 8
 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
-from allmydata.mutable.layout import unpack_share, SIGNED_PREFIX_LENGTH
+from allmydata.mutable.retrieve import Retrieve # for verifying
 
 class MutableChecker:
 
hunk ./src/allmydata/mutable/checker.py 25
 
     def check(self, verify=False, add_lease=False):
         servermap = ServerMap()
+        # Updating the servermap in MODE_CHECK will stand a good chance
+        # of finding all of the shares, and getting a good idea of
+        # recoverability, etc, without verifying.
         u = ServermapUpdater(self._node, self._storage_broker, self._monitor,
                              servermap, MODE_CHECK, add_lease=add_lease)
         if self._history:
hunk ./src/allmydata/mutable/checker.py 51
         if num_recoverable:
             self.best_version = servermap.best_recoverable_version()
 
+        # The file is unhealthy and needs to be repaired if:
+        # - There are unrecoverable versions.
         if servermap.unrecoverable_versions():
             self.need_repair = True
hunk ./src/allmydata/mutable/checker.py 55
+        # - There isn't a recoverable version.
         if num_recoverable != 1:
             self.need_repair = True
hunk ./src/allmydata/mutable/checker.py 58
+        # - The best recoverable version is missing some shares.
         if self.best_version:
             available_shares = servermap.shares_available()
             (num_distinct_shares, k, N) = available_shares[self.best_version]
hunk ./src/allmydata/mutable/checker.py 69
 
     def _verify_all_shares(self, servermap):
         # read every byte of each share
+        #
+        # This logic is going to be very nearly the same as the
+        # downloader. I bet we could pass the downloader a flag that
+        # makes it do this, and piggyback onto that instead of
+        # duplicating a bunch of code.
+        # 
+        # Like:
+        #  r = Retrieve(blah, blah, blah, verify=True)
+        #  d = r.download()
+        #  (wait, wait, wait, d.callback)
+        #  
+        #  Then, when it has finished, we can check the servermap (which
+        #  we provided to Retrieve) to figure out which shares are bad,
+        #  since the Retrieve process will have updated the servermap as
+        #  it went along.
+        #
+        #  By passing the verify=True flag to the constructor, we are
+        #  telling the downloader a few things.
+        # 
+        #  1. It needs to download all N shares, not just K shares.
+        #  2. It doesn't need to decrypt or decode the shares, only
+        #     verify them.
         if not self.best_version:
             return
hunk ./src/allmydata/mutable/checker.py 93
-        versionmap = servermap.make_versionmap()
-        shares = versionmap[self.best_version]
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-         offsets_tuple) = self.best_version
-        offsets = dict(offsets_tuple)
-        readv = [ (0, offsets["EOF"]) ]
-        dl = []
-        for (shnum, peerid, timestamp) in shares:
-            ss = servermap.connections[peerid]
-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
-            d.addCallback(self._got_answer, peerid, servermap)
-            dl.append(d)
-        return defer.DeferredList(dl, fireOnOneErrback=True, consumeErrors=True)
 
hunk ./src/allmydata/mutable/checker.py 94
-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
-        # isolate the callRemote to a separate method, so tests can subclass
-        # Publish and override it
-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
+        r = Retrieve(self._node, servermap, self.best_version, verify=True)
+        d = r.download()
+        d.addCallback(self._process_bad_shares)
         return d
 
hunk ./src/allmydata/mutable/checker.py 99
-    def _got_answer(self, datavs, peerid, servermap):
-        for shnum,datav in datavs.items():
-            data = datav[0]
-            try:
-                self._got_results_one_share(shnum, peerid, data)
-            except CorruptShareError:
-                f = failure.Failure()
-                self.need_repair = True
-                self.bad_shares.append( (peerid, shnum, f) )
-                prefix = data[:SIGNED_PREFIX_LENGTH]
-                servermap.mark_bad_share(peerid, shnum, prefix)
-                ss = servermap.connections[peerid]
-                self.notify_server_corruption(ss, shnum, str(f.value))
-
-    def check_prefix(self, peerid, shnum, data):
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-         offsets_tuple) = self.best_version
-        got_prefix = data[:SIGNED_PREFIX_LENGTH]
-        if got_prefix != prefix:
-            raise CorruptShareError(peerid, shnum,
-                                    "prefix mismatch: share changed while we were reading it")
-
-    def _got_results_one_share(self, shnum, peerid, data):
-        self.check_prefix(peerid, shnum, data)
-
-        # the [seqnum:signature] pieces are validated by _compare_prefix,
-        # which checks their signature against the pubkey known to be
-        # associated with this file.
 
hunk ./src/allmydata/mutable/checker.py 100
-        (seqnum, root_hash, IV, k, N, segsize, datalen, pubkey, signature,
-         share_hash_chain, block_hash_tree, share_data,
-         enc_privkey) = unpack_share(data)
-
-        # validate [share_hash_chain,block_hash_tree,share_data]
-
-        leaves = [hashutil.block_hash(share_data)]
-        t = hashtree.HashTree(leaves)
-        if list(t) != block_hash_tree:
-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
-        share_hash_leaf = t[0]
-        t2 = hashtree.IncompleteHashTree(N)
-        # root_hash was checked by the signature
-        t2.set_hashes({0: root_hash})
-        try:
-            t2.set_hashes(hashes=share_hash_chain,
-                          leaves={shnum: share_hash_leaf})
-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
-                IndexError), e:
-            msg = "corrupt hashes: %s" % (e,)
-            raise CorruptShareError(peerid, shnum, msg)
-
-        # validate enc_privkey: only possible if we have a write-cap
-        if not self._node.is_readonly():
-            alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
-            alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
-            if alleged_writekey != self._node.get_writekey():
-                raise CorruptShareError(peerid, shnum, "invalid privkey")
+    def _process_bad_shares(self, bad_shares):
+        if bad_shares:
+            self.need_repair = True
+        self.bad_shares = bad_shares
 
hunk ./src/allmydata/mutable/checker.py 105
-    def notify_server_corruption(self, ss, shnum, reason):
-        ss.callRemoteOnly("advise_corrupt_share",
-                          "mutable", self._storage_index, shnum, reason)
 
     def _count_shares(self, smap, version):
         available_shares = smap.shares_available()
hunk ./src/allmydata/mutable/repairer.py 5
 from zope.interface import implements
 from twisted.internet import defer
 from allmydata.interfaces import IRepairResults, ICheckResults
+from allmydata.mutable.publish import MutableData
 
 class RepairResults:
     implements(IRepairResults)
hunk ./src/allmydata/mutable/repairer.py 108
             raise RepairRequiresWritecapError("Sorry, repair currently requires a writecap, to set the write-enabler properly.")
 
         d = self.node.download_version(smap, best_version, fetch_privkey=True)
+        d.addCallback(lambda data:
+            MutableData(data))
         d.addCallback(self.node.upload, smap)
         d.addCallback(self.get_results, smap)
         return d
}
[mutable/filenode.py: add versions and partial-file updates to the mutable file node
Kevan Carstensen <kevan@isnotajoke.com>**20100819003231
 Ignore-this: b7b5434201fdb9b48f902d7ab25ef45c
 
 One of the goals of MDMF as a GSoC project is to lay the groundwork for
 LDMF, a format that will allow Tahoe-LAFS to deal with and encourage
 multiple versions of a single cap on the grid. In line with this, there
 is now a distinction between an overriding mutable file (which can be
 thought to correspond to the cap/unique identifier for that mutable
 file) and versions of the mutable file (which we can download, update,
 and so on). All download, upload, and modification operations end up
 happening on a particular version of a mutable file, but there are
 shortcut methods on the object representing the overriding mutable file
 that perform these operations on the best version of the mutable file
 (which is what code should be doing until we have LDMF and better
 support for other paradigms).
 
 Another goal of MDMF was to take advantage of segmentation to give
 callers more efficient partial file updates or appends. This patch
 implements methods that do that, too.
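 
 A short sketch of the resulting API (a hypothetical caller; offset and
 the MutableData wrapper stand in for the caller's own values):
 
   d = node.get_best_mutable_version()
   # insert data at offset (an append, if offset == EOF), in place:
   d.addCallback(lambda mfv: mfv.update(MutableData("new data"), offset))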
 
] {
hunk ./src/allmydata/mutable/filenode.py 7
 from zope.interface import implements
 from twisted.internet import defer, reactor
 from foolscap.api import eventually
-from allmydata.interfaces import IMutableFileNode, \
-     ICheckable, ICheckResults, NotEnoughSharesError
-from allmydata.util import hashutil, log
+from allmydata.interfaces import IMutableFileNode, ICheckable, ICheckResults, \
+     NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION, IMutableUploadable, \
+     IMutableFileVersion, IWritable
+from allmydata.util import hashutil, log, consumer, deferredutil, mathutil
 from allmydata.util.assertutil import precondition
 from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
 from allmydata.monitor import Monitor
hunk ./src/allmydata/mutable/filenode.py 16
 from pycryptopp.cipher.aes import AES
 
-from allmydata.mutable.publish import Publish
+from allmydata.mutable.publish import Publish, MutableData, \
+                                      DEFAULT_MAX_SEGMENT_SIZE, \
+                                      TransformingUploadable
 from allmydata.mutable.common import MODE_READ, MODE_WRITE, UnrecoverableFileError, \
      ResponseCache, UncoordinatedWriteError
 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
hunk ./src/allmydata/mutable/filenode.py 70
         self._sharemap = {} # known shares, shnum-to-[nodeids]
         self._cache = ResponseCache()
         self._most_recent_size = None
+        # filled in after __init__ if we're being created for the first time;
+        # filled in by the servermap updater before publishing, otherwise.
+        # set to None in case neither of those things happens, or in
+        # case the servermap can't find any shares to tell us what to
+        # publish as.
+        self._protocol_version = None
 
         # all users of this MutableFileNode go through the serializer. This
         # takes advantage of the fact that Deferreds discard the callbacks
hunk ./src/allmydata/mutable/filenode.py 134
         return self._upload(initial_contents, None)
 
     def _get_initial_contents(self, contents):
-        if isinstance(contents, str):
-            return contents
         if contents is None:
hunk ./src/allmydata/mutable/filenode.py 135
-            return ""
+            return MutableData("")
+
+        if IMutableUploadable.providedBy(contents):
+            return contents
+
         assert callable(contents), "%s should be callable, not %s" % \
                (contents, type(contents))
         return contents(self)
hunk ./src/allmydata/mutable/filenode.py 209
 
     def get_size(self):
         return self._most_recent_size
+
     def get_current_size(self):
         d = self.get_size_of_best_version()
         d.addCallback(self._stash_size)
hunk ./src/allmydata/mutable/filenode.py 214
         return d
+
     def _stash_size(self, size):
         self._most_recent_size = size
         return size
hunk ./src/allmydata/mutable/filenode.py 273
             return cmp(self.__class__, them.__class__)
         return cmp(self._uri, them._uri)
 
-    def _do_serialized(self, cb, *args, **kwargs):
-        # note: to avoid deadlock, this callable is *not* allowed to invoke
-        # other serialized methods within this (or any other)
-        # MutableFileNode. The callable should be a bound method of this same
-        # MFN instance.
-        d = defer.Deferred()
-        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
-        # we need to put off d.callback until this Deferred is finished being
-        # processed. Otherwise the caller's subsequent activities (like,
-        # doing other things with this node) can cause reentrancy problems in
-        # the Deferred code itself
-        self._serializer.addBoth(lambda res: eventually(d.callback, res))
-        # add a log.err just in case something really weird happens, because
-        # self._serializer stays around forever, therefore we won't see the
-        # usual Unhandled Error in Deferred that would give us a hint.
-        self._serializer.addErrback(log.err)
-        return d
 
     #################################
     # ICheckable
hunk ./src/allmydata/mutable/filenode.py 298
 
 
     #################################
-    # IMutableFileNode
+    # IFileNode
+
+    def get_best_readable_version(self):
+        """
+        I return a Deferred that fires with a MutableFileVersion
+        representing the best readable version of the file that I
+        represent
+        """
+        return self.get_readable_version()
+
+
+    def get_readable_version(self, servermap=None, version=None):
+        """
+        I return a Deferred that fires with a MutableFileVersion for my
+        version argument, if there is a recoverable file of that version
+        on the grid. If there is no recoverable version, I fire with an
+        UnrecoverableFileError.
+
+        If a servermap is provided, I look in there for the requested
+        version. If no servermap is provided, I create and update a new
+        one.
+
+        If no version is provided, then I return a MutableFileVersion
+        representing the best recoverable version of the file.
+        """
+        d = self._get_version_from_servermap(MODE_READ, servermap, version)
+        def _build_version((servermap, their_version)):
+            assert their_version in servermap.recoverable_versions()
+            assert their_version in servermap.make_versionmap()
+
+            mfv = MutableFileVersion(self,
+                                     servermap,
+                                     their_version,
+                                     self._storage_index,
+                                     self._storage_broker,
+                                     self._readkey,
+                                     history=self._history)
+            assert mfv.is_readonly()
+            # our caller can use this to download the contents of the
+            # mutable file.
+            return mfv
+        return d.addCallback(_build_version)
+
+
+    def _get_version_from_servermap(self,
+                                    mode,
+                                    servermap=None,
+                                    version=None):
+        """
+        I return a Deferred that fires with (servermap, version).
+
+        This function performs validation and a servermap update. If it
+        returns (servermap, version), the caller can assume that:
+            - servermap was last updated in mode.
+            - version is recoverable, and corresponds to the servermap.
+
+        If version and servermap are provided to me, I will validate
+        that version exists in the servermap, and that the servermap was
+        updated correctly.
+
+        If version is not provided, but servermap is, I will validate
+        the servermap and return the best recoverable version that I can
+        find in the servermap.
+
+        If the version is provided but the servermap isn't, I will
+        obtain a servermap that has been updated in the correct mode and
+        validate that version is found and recoverable.
+
+        If neither servermap nor version are provided, I will obtain a
+        servermap updated in the correct mode, and return the best
+        recoverable version that I can find in there.
+        """
+        # XXX: wording ^^^^
+        if servermap and servermap.last_update_mode == mode:
+            d = defer.succeed(servermap)
+        else:
+            d = self._get_servermap(mode)
+
+        def _get_version(servermap, v):
+            if v and v not in servermap.recoverable_versions():
+                v = None
+            elif not v:
+                v = servermap.best_recoverable_version()
+            if not v:
+                raise UnrecoverableFileError("no recoverable versions")
+
+            return (servermap, v)
+        return d.addCallback(_get_version, version)
+
 
     def download_best_version(self):
hunk ./src/allmydata/mutable/filenode.py 389
+        """
+        I return a Deferred that fires with the contents of the best
+        version of this mutable file.
+        """
         return self._do_serialized(self._download_best_version)
hunk ./src/allmydata/mutable/filenode.py 394
+
+
     def _download_best_version(self):
hunk ./src/allmydata/mutable/filenode.py 397
-        servermap = ServerMap()
-        d = self._try_once_to_download_best_version(servermap, MODE_READ)
-        def _maybe_retry(f):
-            f.trap(NotEnoughSharesError)
-            # the download is worth retrying once. Make sure to use the
-            # old servermap, since it is what remembers the bad shares,
-            # but use MODE_WRITE to make it look for even more shares.
-            # TODO: consider allowing this to retry multiple times.. this
-            # approach will let us tolerate about 8 bad shares, I think.
-            return self._try_once_to_download_best_version(servermap,
-                                                           MODE_WRITE)
+        """
+        I am the serialized sibling of download_best_version.
+        """
+        d = self.get_best_readable_version()
+        d.addCallback(self._record_size)
+        d.addCallback(lambda version: version.download_to_data())
+
+        # It is possible that the download will fail because there
+        # aren't enough shares to be had. If so, we will try again after
+        # updating the servermap in MODE_WRITE, which may find more
+        # shares than updating in MODE_READ, as we just did. We can do
+        # this by getting the best mutable version and downloading from
+        # that -- the best mutable version will be a MutableFileVersion
+        # with a servermap that was last updated in MODE_WRITE, as we
+        # want. If this fails, then we give up.
+        def _maybe_retry(failure):
+            failure.trap(NotEnoughSharesError)
+
+            d = self.get_best_mutable_version()
+            d.addCallback(self._record_size)
+            d.addCallback(lambda version: version.download_to_data())
+            return d
+
         d.addErrback(_maybe_retry)
         return d
hunk ./src/allmydata/mutable/filenode.py 422
-    def _try_once_to_download_best_version(self, servermap, mode):
-        d = self._update_servermap(servermap, mode)
-        d.addCallback(self._once_updated_download_best_version, servermap)
-        return d
-    def _once_updated_download_best_version(self, ignored, servermap):
-        goal = servermap.best_recoverable_version()
-        if not goal:
-            raise UnrecoverableFileError("no recoverable versions")
-        return self._try_once_to_download_version(servermap, goal)
+
+
+    def _record_size(self, mfv):
+        """
+        I record the size of a mutable file version.
+        """
+        self._most_recent_size = mfv.get_size()
+        return mfv
+
 
     def get_size_of_best_version(self):
hunk ./src/allmydata/mutable/filenode.py 433
-        d = self.get_servermap(MODE_READ)
-        def _got_servermap(smap):
-            ver = smap.best_recoverable_version()
-            if not ver:
-                raise UnrecoverableFileError("no recoverable version")
-            return smap.size_of_version(ver)
-        d.addCallback(_got_servermap)
-        return d
+        """
+        I return the size of the best version of this mutable file.
 
hunk ./src/allmydata/mutable/filenode.py 436
+        This is equivalent to calling get_size() on the result of
+        get_best_readable_version().
+        """
+        d = self.get_best_readable_version()
+        return d.addCallback(lambda mfv: mfv.get_size())
+
+
+    #################################
+    # IMutableFileNode
+
+    def get_best_mutable_version(self, servermap=None):
+        """
+        I return a Deferred that fires with a MutableFileVersion
+        representing the best readable version of the file that I
+        represent. I am like get_best_readable_version, except that I
+        will try to make a writable version if I can.
+        """
+        return self.get_mutable_version(servermap=servermap)
+
+
+    def get_mutable_version(self, servermap=None, version=None):
+        """
+        I return a version of this mutable file. I return a Deferred
+        that fires with a MutableFileVersion
+
+        If version is provided, the Deferred will fire with a
+        MutableFileVersion initialized with that version. Otherwise, it
+        will fire with the best version that I can recover.
+
+        If servermap is provided, I will use that to find versions
+        instead of performing my own servermap update.
+        """
+        if self.is_readonly():
+            return self.get_readable_version(servermap=servermap,
+                                             version=version)
+
+        # get_mutable_version => write intent, so we require that the
+        # servermap is updated in MODE_WRITE
+        d = self._get_version_from_servermap(MODE_WRITE, servermap, version)
+        def _build_version((servermap, smap_version)):
+            # these should have been set by the servermap update.
+            assert self._secret_holder
+            assert self._writekey
+
+            mfv = MutableFileVersion(self,
+                                     servermap,
+                                     smap_version,
+                                     self._storage_index,
+                                     self._storage_broker,
+                                     self._readkey,
+                                     self._writekey,
+                                     self._secret_holder,
+                                     history=self._history)
+            assert not mfv.is_readonly()
+            return mfv
+
+        return d.addCallback(_build_version)
+
+
+    # XXX: I'm uncomfortable with the difference between upload and
+    #      overwrite, which, FWICT, is basically that you don't have to
+    #      do a servermap update before you overwrite. We split them up
+    #      that way anyway, so I guess there's no real difficulty in
+    #      offering both ways to callers, but it also makes the
+    #      public-facing API cluttered, and makes it hard to discern the
+    #      right way of doing things.
+
+    # In general, we leave it to callers to ensure that they aren't
+    # going to cause UncoordinatedWriteErrors when working with
+    # MutableFileVersions. We know that the next three operations
+    # (upload, overwrite, and modify) will all operate on the same
+    # version, so we say that only one of them can be going on at once,
+    # and serialize them to ensure that that actually happens, since as
+    # the caller in this situation it is our job to do that.
     def overwrite(self, new_contents):
hunk ./src/allmydata/mutable/filenode.py 511
+        """
+        I overwrite the contents of the best recoverable version of this
+        mutable file with new_contents. This is equivalent to calling
+        overwrite on the result of get_best_mutable_version with
+        new_contents as an argument. I return a Deferred that eventually
+        fires with the results of my replacement process.
+        """
         return self._do_serialized(self._overwrite, new_contents)
hunk ./src/allmydata/mutable/filenode.py 519
+
+
     def _overwrite(self, new_contents):
hunk ./src/allmydata/mutable/filenode.py 522
+        """
+        I am the serialized sibling of overwrite.
+        """
+        d = self.get_best_mutable_version()
+        d.addCallback(lambda mfv: mfv.overwrite(new_contents))
+        d.addCallback(self._did_upload, new_contents.get_size())
+        return d
+
+
+
+    def upload(self, new_contents, servermap):
+        """
+        I overwrite the contents of the best recoverable version of this
+        mutable file with new_contents, using servermap instead of
+        creating/updating our own servermap. I return a Deferred that
+        fires with the results of my upload.
+        """
+        return self._do_serialized(self._upload, new_contents, servermap)
+
+
+    def modify(self, modifier, backoffer=None):
+        """
+        I modify the contents of the best recoverable version of this
+        mutable file with the modifier. This is equivalent to calling
+        modify on the result of get_best_mutable_version. I return a
+        Deferred that eventually fires with an UploadResults instance
+        describing this process.
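+
+        A sketch of a modifier function (it is handed the old contents,
+        the servermap, and a first_time flag, and must return a new
+        string or None; see _modify_once in MutableFileVersion below):
+
+            def appender(old_contents, servermap, first_time):
+                return old_contents + "new data"
+            d = node.modify(appender)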
+        """
+        return self._do_serialized(self._modify, modifier, backoffer)
+
+
+    def _modify(self, modifier, backoffer):
+        """
+        I am the serialized sibling of modify.
+        """
+        d = self.get_best_mutable_version()
+        d.addCallback(lambda mfv: mfv.modify(modifier, backoffer))
+        return d
+
+
+    def download_version(self, servermap, version, fetch_privkey=False):
+        """
+        Download the specified version of this mutable file. I return a
+        Deferred that fires with the contents of the specified version
+        as a bytestring, or errbacks if the file is not recoverable.
+        """
+        d = self.get_readable_version(servermap, version)
+        return d.addCallback(lambda mfv: mfv.download_to_data(fetch_privkey))
+
+
+    def get_servermap(self, mode):
+        """
+        I return a servermap that has been updated in mode.
+
+        mode should be one of MODE_READ, MODE_WRITE, MODE_CHECK or
+        MODE_ANYTHING. See servermap.py for more on what these mean.
+        """
+        return self._do_serialized(self._get_servermap, mode)
+
+
+    def _get_servermap(self, mode):
+        """
+        I am a serialized twin to get_servermap.
+        """
         servermap = ServerMap()
hunk ./src/allmydata/mutable/filenode.py 587
-        d = self._update_servermap(servermap, mode=MODE_WRITE)
-        d.addCallback(lambda ignored: self._upload(new_contents, servermap))
+        d = self._update_servermap(servermap, mode)
+        # The servermap will tell us about the most recent size of the
+        # file, so we may as well set that so that callers might get
+        # more data about us.
+        if not self._most_recent_size:
+            d.addCallback(self._get_size_from_servermap)
+        return d
+
+
+    def _get_size_from_servermap(self, servermap):
+        """
+        I extract the size of the best version of this file and record
+        it in self._most_recent_size. I return the servermap that I was
+        given.
+        """
+        if servermap.recoverable_versions():
+            v = servermap.best_recoverable_version()
+            size = v[4] # verinfo[4] == size
+            self._most_recent_size = size
+        return servermap
+
+
+    def _update_servermap(self, servermap, mode):
+        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
+                             mode)
+        if self._history:
+            self._history.notify_mapupdate(u.get_status())
+        return u.update()
+
+
+    def set_version(self, version):
+        # I can be set in two ways:
+        #  1. When the node is created.
+        #  2. (for an existing share) when the Servermap is updated 
+        #     before I am read.
+        assert version in (MDMF_VERSION, SDMF_VERSION)
+        self._protocol_version = version
+
+
+    def get_version(self):
+        return self._protocol_version
+
+
+    def _do_serialized(self, cb, *args, **kwargs):
+        # note: to avoid deadlock, this callable is *not* allowed to invoke
+        # other serialized methods within this (or any other)
+        # MutableFileNode. The callable should be a bound method of this same
+        # MFN instance.
+        d = defer.Deferred()
+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
+        # we need to put off d.callback until this Deferred is finished being
+        # processed. Otherwise the caller's subsequent activities (like,
+        # doing other things with this node) can cause reentrancy problems in
+        # the Deferred code itself
+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
+        # add a log.err just in case something really weird happens, because
+        # self._serializer stays around forever, therefore we won't see the
+        # usual Unhandled Error in Deferred that would give us a hint.
+        self._serializer.addErrback(log.err)
         return d
 
 
hunk ./src/allmydata/mutable/filenode.py 649
+    def _upload(self, new_contents, servermap):
+        """
+        A MutableFileNode still has to have some way of getting
+        published initially, which is what I am here for. After that,
+        all publishing, updating, modifying and so on happens through
+        MutableFileVersions.
+        """
+        assert self._pubkey, "update_servermap must be called before publish"
+
+        p = Publish(self, self._storage_broker, servermap)
+        if self._history:
+            self._history.notify_publish(p.get_status(),
+                                         new_contents.get_size())
+        d = p.publish(new_contents)
+        d.addCallback(self._did_upload, new_contents.get_size())
+        return d
+
+
+    def _did_upload(self, res, size):
+        self._most_recent_size = size
+        return res
+
+
+class MutableFileVersion:
+    """
+    I represent a specific version (most likely the best version) of a
+    mutable file.
+
+    Since I implement IReadable, instances which hold a
+    reference to an instance of me are guaranteed the ability (absent
+    connection difficulties or unrecoverable versions) to read the file
+    that I represent. Depending on whether I was initialized with a
+    write capability or not, I may also provide callers the ability to
+    overwrite or modify the contents of the mutable file that I
+    reference.
+    """
+    implements(IMutableFileVersion, IWritable)
+
+    def __init__(self,
+                 node,
+                 servermap,
+                 version,
+                 storage_index,
+                 storage_broker,
+                 readcap,
+                 writekey=None,
+                 write_secrets=None,
+                 history=None):
+
+        self._node = node
+        self._servermap = servermap
+        self._version = version
+        self._storage_index = storage_index
+        self._write_secrets = write_secrets
+        self._history = history
+        self._storage_broker = storage_broker
+
+        #assert isinstance(readcap, IURI)
+        self._readcap = readcap
+
+        self._writekey = writekey
+        self._serializer = defer.succeed(None)
+
+
+    def get_sequence_number(self):
+        """
+        Get the sequence number of the mutable version that I represent.
+        """
+        return self._version[0] # verinfo[0] == the sequence number
+
+
+    # TODO: Terminology?
+    def get_writekey(self):
+        """
+        I return a writekey or None if I don't have a writekey.
+        """
+        return self._writekey
+
+
+    def overwrite(self, new_contents):
+        """
+        I overwrite the contents of this mutable file version with the
+        data in new_contents.
+        """
+        assert not self.is_readonly()
+
+        return self._do_serialized(self._overwrite, new_contents)
+
+
+    def _overwrite(self, new_contents):
+        assert IMutableUploadable.providedBy(new_contents)
+        assert self._servermap.last_update_mode == MODE_WRITE
+
+        return self._upload(new_contents)
+
+
     def modify(self, modifier, backoffer=None):
         """I use a modifier callback to apply a change to the mutable file.
         I implement the following pseudocode::
hunk ./src/allmydata/mutable/filenode.py 785
         backoffer should not invoke any methods on this MutableFileNode
         instance, and it needs to be highly conscious of deadlock issues.
         """
+        assert not self.is_readonly()
+
         return self._do_serialized(self._modify, modifier, backoffer)
hunk ./src/allmydata/mutable/filenode.py 788
+
+
     def _modify(self, modifier, backoffer):
hunk ./src/allmydata/mutable/filenode.py 791
-        servermap = ServerMap()
         if backoffer is None:
             backoffer = BackoffAgent().delay
hunk ./src/allmydata/mutable/filenode.py 793
-        return self._modify_and_retry(servermap, modifier, backoffer, True)
-    def _modify_and_retry(self, servermap, modifier, backoffer, first_time):
-        d = self._modify_once(servermap, modifier, first_time)
+        return self._modify_and_retry(modifier, backoffer, True)
+
+
+    def _modify_and_retry(self, modifier, backoffer, first_time):
+        """
+        I try to apply modifier to the contents of this version of the
+        mutable file. If I succeed, I return an UploadResults instance
+        describing my success. If I fail, I try again after waiting for
+        a little bit.
+        """
+        log.msg("doing modify")
+        d = self._modify_once(modifier, first_time)
         def _retry(f):
             f.trap(UncoordinatedWriteError)
             d2 = defer.maybeDeferred(backoffer, self, f)
hunk ./src/allmydata/mutable/filenode.py 809
             d2.addCallback(lambda ignored:
-                           self._modify_and_retry(servermap, modifier,
+                           self._modify_and_retry(modifier,
                                                   backoffer, False))
             return d2
         d.addErrback(_retry)
hunk ./src/allmydata/mutable/filenode.py 814
         return d
-    def _modify_once(self, servermap, modifier, first_time):
-        d = self._update_servermap(servermap, MODE_WRITE)
-        d.addCallback(self._once_updated_download_best_version, servermap)
+
+
+    def _modify_once(self, modifier, first_time):
+        """
+        I attempt to apply a modifier to the contents of the mutable
+        file.
+        """
+        # XXX: This is wrong -- we could get more servers if we updated
+        # in MODE_ANYTHING and possibly MODE_CHECK. Probably we want to
+        # assert that the last update wasn't MODE_READ
+        assert self._servermap.last_update_mode == MODE_WRITE
+
+        # download_to_data is serialized, so we have to call this to
+        # avoid deadlock.
+        d = self._try_to_download_data()
         def _apply(old_contents):
hunk ./src/allmydata/mutable/filenode.py 830
-            new_contents = modifier(old_contents, servermap, first_time)
+            new_contents = modifier(old_contents, self._servermap, first_time)
+            precondition((isinstance(new_contents, str) or
+                          new_contents is None),
+                         "Modifier function must return a string "
+                         "or None")
+
             if new_contents is None or new_contents == old_contents:
hunk ./src/allmydata/mutable/filenode.py 837
+                log.msg("no changes")
                 # no changes need to be made
                 if first_time:
                     return
hunk ./src/allmydata/mutable/filenode.py 845
                 # recovery when it observes UCWE, we need to do a second
                 # publish. See #551 for details. We'll basically loop until
                 # we managed an uncontested publish.
-                new_contents = old_contents
-            precondition(isinstance(new_contents, str),
-                         "Modifier function must return a string or None")
-            return self._upload(new_contents, servermap)
+                old_uploadable = MutableData(old_contents)
+                new_contents = old_uploadable
+            else:
+                new_contents = MutableData(new_contents)
+
+            return self._upload(new_contents)
         d.addCallback(_apply)
         return d
 
hunk ./src/allmydata/mutable/filenode.py 854
-    def get_servermap(self, mode):
-        return self._do_serialized(self._get_servermap, mode)
-    def _get_servermap(self, mode):
-        servermap = ServerMap()
-        return self._update_servermap(servermap, mode)
-    def _update_servermap(self, servermap, mode):
-        u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap,
-                             mode)
-        if self._history:
-            self._history.notify_mapupdate(u.get_status())
-        return u.update()
 
hunk ./src/allmydata/mutable/filenode.py 855
-    def download_version(self, servermap, version, fetch_privkey=False):
-        return self._do_serialized(self._try_once_to_download_version,
-                                   servermap, version, fetch_privkey)
-    def _try_once_to_download_version(self, servermap, version,
-                                      fetch_privkey=False):
-        r = Retrieve(self, servermap, version, fetch_privkey)
+    def is_readonly(self):
+        """
+        I return True if this MutableFileVersion provides no write
+        access to the file that it encapsulates, and False if it
+        provides the ability to modify the file.
+        """
+        return self._writekey is None
+
+
+    def is_mutable(self):
+        """
+        I return True, since mutable files are always mutable by
+        somebody.
+        """
+        return True
+
+
+    def get_storage_index(self):
+        """
+        I return the storage index of the reference that I encapsulate.
+        """
+        return self._storage_index
+
+
+    def get_size(self):
+        """
+        I return the length, in bytes, of this readable object.
+        """
+        return self._servermap.size_of_version(self._version)
+
+
+    def download_to_data(self, fetch_privkey=False):
+        """
+        I return a Deferred that fires with the contents of this
+        readable object as a byte string.
+
+        """
+        c = consumer.MemoryConsumer()
+        d = self.read(c, fetch_privkey=fetch_privkey)
+        d.addCallback(lambda mc: "".join(mc.chunks))
+        return d
+
+
+    def _try_to_download_data(self):
+        """
+        I am an unserialized cousin of download_to_data; I am called
+        from the children of modify() to download the data associated
+        with this mutable version.
+        """
+        c = consumer.MemoryConsumer()
+        # modify will almost certainly write, so we need the privkey.
+        d = self._read(c, fetch_privkey=True)
+        d.addCallback(lambda mc: "".join(mc.chunks))
+        return d
+
+
+    def read(self, consumer, offset=0, size=None, fetch_privkey=False):
+        """
+        I read a portion (possibly all) of the mutable file that I
+        reference into consumer.
+        """
+        return self._do_serialized(self._read, consumer, offset, size,
+                                   fetch_privkey)
+
+
+    def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
+        """
+        I am the serialized companion of read.
+        """
+        r = Retrieve(self._node, self._servermap, self._version, fetch_privkey)
         if self._history:
             self._history.notify_retrieve(r.get_status())
hunk ./src/allmydata/mutable/filenode.py 927
-        d = r.download()
-        d.addCallback(self._downloaded_version)
+        d = r.download(consumer, offset, size)
         return d
hunk ./src/allmydata/mutable/filenode.py 929
-    def _downloaded_version(self, data):
-        self._most_recent_size = len(data)
-        return data
 
hunk ./src/allmydata/mutable/filenode.py 930
-    def upload(self, new_contents, servermap):
-        return self._do_serialized(self._upload, new_contents, servermap)
-    def _upload(self, new_contents, servermap):
-        assert self._pubkey, "update_servermap must be called before publish"
-        p = Publish(self, self._storage_broker, servermap)
+
+    def _do_serialized(self, cb, *args, **kwargs):
+        # note: to avoid deadlock, this callable is *not* allowed to invoke
+        # other serialized methods within this (or any other)
+        # MutableFileNode. The callable should be a bound method of this same
+        # MFN instance.
+        d = defer.Deferred()
+        self._serializer.addCallback(lambda ignore: cb(*args, **kwargs))
+        # we need to put off d.callback until this Deferred is finished being
+        # processed. Otherwise the caller's subsequent activities (like,
+        # doing other things with this node) can cause reentrancy problems in
+        # the Deferred code itself
+        self._serializer.addBoth(lambda res: eventually(d.callback, res))
+        # add a log.err just in case something really weird happens, because
+        # self._serializer stays around forever, therefore we won't see the
+        # usual Unhandled Error in Deferred that would give us a hint.
+        self._serializer.addErrback(log.err)
+        return d
+
+
+    def _upload(self, new_contents):
+        #assert self._pubkey, "update_servermap must be called before publish"
+        p = Publish(self._node, self._storage_broker, self._servermap)
         if self._history:
hunk ./src/allmydata/mutable/filenode.py 954
-            self._history.notify_publish(p.get_status(), len(new_contents))
+            self._history.notify_publish(p.get_status(),
+                                         new_contents.get_size())
         d = p.publish(new_contents)
hunk ./src/allmydata/mutable/filenode.py 957
-        d.addCallback(self._did_upload, len(new_contents))
+        d.addCallback(self._did_upload, new_contents.get_size())
         return d
hunk ./src/allmydata/mutable/filenode.py 959
+
+
     def _did_upload(self, res, size):
         self._most_recent_size = size
         return res
hunk ./src/allmydata/mutable/filenode.py 964
+
+    def update(self, data, offset):
+        """
+        Do an update of this mutable file version by inserting data at
+        offset within the file. If offset is the EOF, this is an append
+        operation. I return a Deferred that fires with the results of
+        the update operation when it has completed.
+
+        In cases where update does not append any data, or where it does
+        not append so many blocks that the block count crosses a
+        power-of-two boundary, this operation will use roughly
+        O(data.get_size()) memory/bandwidth/CPU to perform the update.
+        Otherwise, it must download, re-encode, and upload the entire
+        file again, which will use O(filesize) resources.
+        """
+        return self._do_serialized(self._update, data, offset)
+
+
+    def _update(self, data, offset):
+        """
+        I update the mutable file version represented by this particular
+        IMutableVersion by inserting the given data at the given
+        offset. I return a Deferred that fires when this has been
+        completed.
+        """
+        # We have two cases here:
+        # 1. The new data will add few enough segments so that it does
+        #    not cross into the next power-of-two boundary.
+        # 2. It doesn't.
+        # 
+        # In the former case, we can modify the file in place. In the
+        # latter case, we need to re-encode the file.
+        new_size = data.get_size() + offset
+        old_size = self.get_size()
+        segment_size = self._version[3] # verinfo[3] == segment size
+        num_old_segments = mathutil.div_ceil(old_size,
+                                             segment_size)
+        num_new_segments = mathutil.div_ceil(new_size,
+                                             segment_size)
+        log.msg("got %d old segments, %d new segments" % \
+                        (num_old_segments, num_new_segments))
+
+        # We also do a whole file re-encode if the file is an SDMF file. 
+        if self._version[2]: # version[2] == SDMF salt, which MDMF lacks
+            log.msg("doing re-encode instead of in-place update")
+            return self._do_modify_update(data, offset)
+
+        log.msg("updating in place")
+        d = self._do_update_update(data, offset)
+        d.addCallback(self._decode_and_decrypt_segments, data, offset)
+        d.addCallback(self._build_uploadable_and_finish, data, offset)
+        return d
+
+
+    def _do_modify_update(self, data, offset):
+        """
+        I perform a file update by modifying the contents of the file
+        after downloading it, then reuploading it. I am less efficient
+        than _do_update_update, but am necessary for certain updates.
+        """
+        def m(old, servermap, first_time):
+            start = offset
+            rest = offset + data.get_size()
+            new = old[:start]
+            new += "".join(data.read(data.get_size()))
+            new += old[rest:]
+            return new
+        return self._modify(m, None)
+
+
+    def _do_update_update(self, data, offset):
+        """
+        I start the Servermap update that gets us the data we need to
+        continue the update process. I return a Deferred that fires when
+        the servermap update is done.
+        """
+        assert IMutableUploadable.providedBy(data)
+        assert self.is_mutable()
+        # offset == self.get_size() is valid and means that we are
+        # appending data to the file.
+        assert offset <= self.get_size()
+
+        # We'll need the segment that the data starts in, regardless of
+        # what we'll do later.
+        start_segment = mathutil.div_ceil(offset, DEFAULT_MAX_SEGMENT_SIZE)
+        start_segment -= 1
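+        # (For example, with the default 128KiB segments, offset 200000
+        #  gives div_ceil(200000, 131072) - 1 = 2 - 1 = 1: the write
+        #  starts in the second, index-1, segment.)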
+
+        # We only need the end segment if the data we append does not go
+        # beyond the current end-of-file.
+        end_segment = start_segment
+        if offset + data.get_size() < self.get_size():
+            end_data = offset + data.get_size()
+            end_segment = mathutil.div_ceil(end_data, DEFAULT_MAX_SEGMENT_SIZE)
+            end_segment -= 1
+        self._start_segment = start_segment
+        self._end_segment = end_segment
+
+        # Now ask for the servermap to be updated in MODE_WRITE with
+        # this update range.
+        u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
+                             self._servermap,
+                             mode=MODE_WRITE,
+                             update_range=(start_segment, end_segment))
+        return u.update()
+
+
+    def _decode_and_decrypt_segments(self, ignored, data, offset):
+        """
+        After the servermap update, I take the encrypted and encoded
+        data that the servermap fetched while doing its update and
+        transform it into decoded-and-decrypted plaintext that can be
+        used by the new uploadable. I return a Deferred that fires with
+        the segments.
+        """
+        r = Retrieve(self._node, self._servermap, self._version)
+        # decode: takes in our blocks and salts from the servermap,
+        # returns a Deferred that fires with the corresponding plaintext
+        # segments. Does not download -- simply takes advantage of
+        # existing infrastructure within the Retrieve class to avoid
+        # duplicating code.
+        sm = self._servermap
+        # XXX: If the methods in the servermap don't work as
+        # abstractions, you should rewrite them instead of going around
+        # them.
+        update_data = sm.update_data
+        start_segments = {} # shnum -> start segment
+        end_segments = {} # shnum -> end segment
+        blockhashes = {} # shnum -> blockhash tree
+        # use a name other than 'data' so we don't shadow the uploadable
+        # argument.
+        for (shnum, updates) in update_data.iteritems():
+            updates = [d[1] for d in updates if d[0] == self._version]
+
+            # Every entry should now be the update data for share shnum
+            # of a particular version of the mutable file, so all of the
+            # entries should be identical.
+            datum = updates[0]
+            assert filter(lambda x: x != datum, updates) == []
+
+            blockhashes[shnum] = datum[0]
+            start_segments[shnum] = datum[1]
+            end_segments[shnum] = datum[2]
+
+        d1 = r.decode(start_segments, self._start_segment)
+        d2 = r.decode(end_segments, self._end_segment)
+        d3 = defer.succeed(blockhashes)
+        return deferredutil.gatherResults([d1, d2, d3])
+
+
+    def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
+        """
+        After the process has the plaintext segments, I build the
+        TransformingUploadable that the publisher will eventually
+        re-upload to the grid. I then invoke the publisher with that
+        uploadable, and return a Deferred when the publish operation has
+        completed without issue.
+        """
+        u = TransformingUploadable(data, offset,
+                                   self._version[3],
+                                   segments_and_bht[0],
+                                   segments_and_bht[1])
+        p = Publish(self._node, self._storage_broker, self._servermap)
+        return p.update(u, offset, segments_and_bht[2], self._version)
}
[mutable/publish.py: Modify the publish process to support MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20100819003342
 Ignore-this: 2bb379974927e2e20cff75bae8302d1d
 
 The inner workings of the publishing process needed to be reworked to a
 large extent to cope with segmented mutable files, and to cope with
 partial-file updates of mutable files. This patch does that. It also
 introduces wrappers for uploadable data, allowing the use of
 filehandle-like objects as data sources, in addition to strings. This
 reduces memory inefficiency when dealing with large files through the
 webapi, and clarifies update code there.
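 
 For example (a sketch; MutableData and MutableFileHandle are the two
 wrappers this patch introduces):
 
   uploadable = MutableData("small contents")   # wraps a string
   uploadable = MutableFileHandle(req.content)  # wraps a filehandle
   d = client.create_mutable_file(uploadable)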
] {
hunk ./src/allmydata/mutable/publish.py 3
 
 
-import os, struct, time
+import os, time
+from StringIO import StringIO
 from itertools import count
 from zope.interface import implements
 from twisted.internet import defer
hunk ./src/allmydata/mutable/publish.py 9
 from twisted.python import failure
-from allmydata.interfaces import IPublishStatus
+from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION, \
+                                 IMutableUploadable
 from allmydata.util import base32, hashutil, mathutil, idlib, log
 from allmydata.util.dictutil import DictOfSets
 from allmydata import hashtree, codec
hunk ./src/allmydata/mutable/publish.py 21
 from allmydata.mutable.common import MODE_WRITE, MODE_CHECK, \
      UncoordinatedWriteError, NotEnoughServersError
 from allmydata.mutable.servermap import ServerMap
-from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \
-     unpack_checkstring, SIGNED_PREFIX
+from allmydata.mutable.layout import unpack_checkstring, MDMFSlotWriteProxy, \
+                                     SDMFSlotWriteProxy
+
+KiB = 1024
+DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB
+PUSHING_BLOCKS_STATE = 0
+PUSHING_EVERYTHING_ELSE_STATE = 1
+DONE_STATE = 2
 
 class PublishStatus:
     implements(IPublishStatus)
hunk ./src/allmydata/mutable/publish.py 118
         self._status.set_helper(False)
         self._status.set_progress(0.0)
         self._status.set_active(True)
+        self._version = self._node.get_version()
+        assert self._version in (SDMF_VERSION, MDMF_VERSION)
+
 
     def get_status(self):
         return self._status
hunk ./src/allmydata/mutable/publish.py 132
             kwargs["facility"] = "tahoe.mutable.publish"
         return log.msg(*args, **kwargs)
 
+
+    def update(self, data, offset, blockhashes, version):
+        """
+        I replace the contents of this file with the contents of data,
+        starting at offset. I return a Deferred that fires with None
+        when the replacement has been completed, or with an error if
+        something went wrong during the process.
+
+        Note that this process will not upload new shares. If the file
+        being updated is in need of repair, callers will have to repair
+        it on their own.
+        """
+        # How this works:
+        # 1. Make peer assignments. We'll assign each share that we know
+        # about on the grid to the peer that currently holds it; we will
+        # not place any new shares.
+        # 2. Set up encoding parameters. Most of these will stay the same
+        # -- datalength will change, as will some of the offsets.
+        # 3. Upload the new segments.
+        # 4. Be done.
+        assert IMutableUploadable.providedBy(data)
+
+        self.data = data
+
+        # XXX: Use the MutableFileVersion instead.
+        self.datalength = self._node.get_size()
+        if data.get_size() > self.datalength:
+            self.datalength = data.get_size()
+
+        self.log("starting update")
+        self.log("adding new data of length %d at offset %d" % \
+                    (data.get_size(), offset))
+        self.log("new data length is %d" % self.datalength)
+        self._status.set_size(self.datalength)
+        self._status.set_status("Started")
+        self._started = time.time()
+
+        self.done_deferred = defer.Deferred()
+
+        self._writekey = self._node.get_writekey()
+        assert self._writekey, "need write capability to publish"
+
+        # first, which servers will we publish to? We require that the
+        # servermap was updated in MODE_WRITE, so we can depend upon the
+        # peerlist computed by that process instead of computing our own.
+        assert self._servermap
+        assert self._servermap.last_update_mode in (MODE_WRITE, MODE_CHECK)
+        # we will push a version that is one larger than anything present
+        # in the grid, according to the servermap.
+        self._new_seqnum = self._servermap.highest_seqnum() + 1
+        self._status.set_servermap(self._servermap)
+
+        self.log(format="new seqnum will be %(seqnum)d",
+                 seqnum=self._new_seqnum, level=log.NOISY)
+
+        # We're updating an existing file, so all of the following
+        # should be available.
+        self.readkey = self._node.get_readkey()
+        self.required_shares = self._node.get_required_shares()
+        assert self.required_shares is not None
+        self.total_shares = self._node.get_total_shares()
+        assert self.total_shares is not None
+        self._status.set_encoding(self.required_shares, self.total_shares)
+
+        self._pubkey = self._node.get_pubkey()
+        assert self._pubkey
+        self._privkey = self._node.get_privkey()
+        assert self._privkey
+        self._encprivkey = self._node.get_encprivkey()
+
+        sb = self._storage_broker
+        full_peerlist = sb.get_servers_for_index(self._storage_index)
+        self.full_peerlist = full_peerlist # for use later, immutable
+        self.bad_peers = set() # peerids who have errbacked/refused requests
+
+        # This will set self.segment_size, self.num_segments, and
+        # self.fec. TODO: Does it know how to do the offset? Probably
+        # not. So do that part next.
+        self.setup_encoding_parameters(offset=offset)
+
+        # if we experience any surprises (writes which were rejected because
+        # our test vector did not match, or shares which we didn't expect to
+        # see), we set this flag and report an UncoordinatedWriteError at the
+        # end of the publish process.
+        self.surprised = False
+
+        # we keep track of three tables. The first is our goal: which share
+        # we want to see on which servers. This is initially populated by the
+        # existing servermap.
+        self.goal = set() # pairs of (peerid, shnum) tuples
+
+        # the second table is our list of outstanding queries: those which
+        # are in flight and may or may not be delivered, accepted, or
+        # acknowledged. Items are added to this table when the request is
+        # sent, and removed when the response returns (or errbacks).
+        self.outstanding = set() # (peerid, shnum) tuples
+
+        # the third is a table of successes: share which have actually been
+        # placed. These are populated when responses come back with success.
+        # When self.placed == self.goal, we're done.
+        self.placed = set() # (peerid, shnum) tuples
+
+        # we also keep a mapping from peerid to RemoteReference. Each time we
+        # pull a connection out of the full peerlist, we add it to this for
+        # use later.
+        self.connections = {}
+
+        self.bad_share_checkstrings = {}
+
+        # This is set at the last step of the publishing process.
+        self.versioninfo = ""
+
+        # we use the servermap to populate the initial goal: this way we will
+        # try to update each existing share in place. Since we're
+        # updating, we ignore damaged and missing shares -- callers must
+        # run a repair to recreate these.
+        for (peerid, shnum) in self._servermap.servermap:
+            self.goal.add( (peerid, shnum) )
+            self.connections[peerid] = self._servermap.connections[peerid]
+        self.writers = {}
+
+        # SDMF files are updated differently (the whole file is
+        # rewritten), so the in-place update path always uses MDMF.
+        self._version = MDMF_VERSION
+        writer_class = MDMFSlotWriteProxy
+
+        # For each (peerid, shnum) in self.goal, we make a
+        # write proxy for that peer. We'll use this to write
+        # shares to the peer.
+        for key in self.goal:
+            peerid, shnum = key
+            write_enabler = self._node.get_write_enabler(peerid)
+            renew_secret = self._node.get_renewal_secret(peerid)
+            cancel_secret = self._node.get_cancel_secret(peerid)
+            secrets = (write_enabler, renew_secret, cancel_secret)
+
+            self.writers[shnum] =  writer_class(shnum,
+                                                self.connections[peerid],
+                                                self._storage_index,
+                                                secrets,
+                                                self._new_seqnum,
+                                                self.required_shares,
+                                                self.total_shares,
+                                                self.segment_size,
+                                                self.datalength)
+            self.writers[shnum].peerid = peerid
+            assert (peerid, shnum) in self._servermap.servermap
+            old_versionid, old_timestamp = self._servermap.servermap[key]
+            (old_seqnum, old_root_hash, old_salt, old_segsize,
+             old_datalength, old_k, old_N, old_prefix,
+             old_offsets_tuple) = old_versionid
+            self.writers[shnum].set_checkstring(old_seqnum,
+                                                old_root_hash,
+                                                old_salt)
+
+        # Our remote shares will not have a complete checkstring until
+        # after we are done writing share data and have started to write
+        # blocks. In the meantime, we need to know what to look for when
+        # writing, so that we can detect UncoordinatedWriteErrors.
+        self._checkstring = self.writers.values()[0].get_checkstring()
+
+        # Now, we start pushing shares.
+        self._status.timings["setup"] = time.time() - self._started
+        # First, we encrypt, encode, and publish the shares that we need
+        # to encrypt, encode, and publish.
+
+        # Our update process fetched these for us. We need to update
+        # them in place as publishing happens.
+        self.blockhashes = {} # shnum -> [blockhashes]
+        for (i, bht) in blockhashes.iteritems():
+            # We need to extract the leaves from our old hash tree.
+            old_segcount = mathutil.div_ceil(version[4],
+                                             version[3])
+            h = hashtree.IncompleteHashTree(old_segcount)
+            bht = dict(enumerate(bht))
+            h.set_hashes(bht)
+            leaves = h[h.get_leaf_index(0):]
+            for j in xrange(self.num_segments - len(leaves)):
+                leaves.append(None)
+
+            assert len(leaves) >= self.num_segments
+            self.blockhashes[i] = leaves
+            # This list will now be the leaves that were set during the
+            # initial upload + enough empty hashes to make it a
+            # power-of-two. If we exceed a power of two boundary, we
+            # should be encoding the file over again, and should not be
+            # here. So, we should have:
+            #assert len(self.blockhashes[i]) == \
+            #    hashtree.roundup_pow2(self.num_segments), \
+            #        len(self.blockhashes[i])
+            # XXX: Except this doesn't work. Figure out why.
+
+        # These are filled in later, after we've modified the block hash
+        # tree suitably.
+        self.sharehash_leaves = None # eventually [sharehashes]
+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to 
+                              # validate the share]
+
+        self.log("Starting push")
+
+        self._state = PUSHING_BLOCKS_STATE
+        self._push()
+
+        return self.done_deferred
+
+
     def publish(self, newdata):
         """Publish the filenode's current contents.  Returns a Deferred that
         fires (with None) when the publish has done as much work as it's ever
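
For orientation, the new update() method is driven roughly as in the
following sketch, which mirrors _build_uploadable_and_finish from the
filenode patch above. This is a sketch only: node, storage_broker,
servermap, offset, segment_size, blockhashes, and version are assumed
to be in scope, and the uploadable classes are defined later in this
patch.

    u = TransformingUploadable(MutableData("replacement bytes"),
                               offset, segment_size,
                               start_segment_plaintext,
                               end_segment_plaintext)
    p = Publish(node, storage_broker, servermap)
    d = p.update(u, offset, blockhashes, version)
    d.addCallback(lambda ign: log.msg("in-place update pushed"))
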
hunk ./src/allmydata/mutable/publish.py 344
         simultaneous write.
         """
 
-        # 1: generate shares (SDMF: files are small, so we can do it in RAM)
-        # 2: perform peer selection, get candidate servers
-        #  2a: send queries to n+epsilon servers, to determine current shares
-        #  2b: based upon responses, create target map
-        # 3: send slot_testv_and_readv_and_writev messages
-        # 4: as responses return, update share-dispatch table
-        # 4a: may need to run recovery algorithm
-        # 5: when enough responses are back, we're done
+        # 0. Setup encoding parameters, encoder, and other such things.
+        # 1. Encrypt, encode, and publish segments.
+        assert IMutableUploadable.providedBy(newdata)
 
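
The reworked publish() consumes an IMutableUploadable instead of a raw
string. A minimal sketch (node, storage_broker, and servermap assumed
to be in scope; MutableData is defined later in this patch):

    p = Publish(node, storage_broker, servermap)
    d = p.publish(MutableData("file contents"))  # was: p.publish("file contents")
    d.addCallback(lambda ign: log.msg("publish complete"))
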
hunk ./src/allmydata/mutable/publish.py 348
-        self.log("starting publish, datalen is %s" % len(newdata))
-        self._status.set_size(len(newdata))
+        self.data = newdata
+        self.datalength = newdata.get_size()
+        #if self.datalength >= DEFAULT_MAX_SEGMENT_SIZE:
+        #    self._version = MDMF_VERSION
+        #else:
+        #    self._version = SDMF_VERSION
+
+        self.log("starting publish, datalen is %s" % self.datalength)
+        self._status.set_size(self.datalength)
         self._status.set_status("Started")
         self._started = time.time()
 
hunk ./src/allmydata/mutable/publish.py 405
         self.full_peerlist = full_peerlist # for use later, immutable
         self.bad_peers = set() # peerids who have errbacked/refused requests
 
-        self.newdata = newdata
-        self.salt = os.urandom(16)
-
+        # This will set self.segment_size, self.num_segments, and
+        # self.fec.
         self.setup_encoding_parameters()
 
         # if we experience any surprises (writes which were rejected because
hunk ./src/allmydata/mutable/publish.py 415
         # end of the publish process.
         self.surprised = False
 
-        # as a failsafe, refuse to iterate through self.loop more than a
-        # thousand times.
-        self.looplimit = 1000
-
         # we keep track of three tables. The first is our goal: which share
         # we want to see on which servers. This is initially populated by the
         # existing servermap.
hunk ./src/allmydata/mutable/publish.py 438
 
         self.bad_share_checkstrings = {}
 
+        # This is set at the last step of the publishing process.
+        self.versioninfo = ""
+
         # we use the servermap to populate the initial goal: this way we will
         # try to update each existing share in place.
         for (peerid, shnum) in self._servermap.servermap:
hunk ./src/allmydata/mutable/publish.py 454
             self.bad_share_checkstrings[key] = old_checkstring
             self.connections[peerid] = self._servermap.connections[peerid]
 
-        # create the shares. We'll discard these as they are delivered. SDMF:
-        # we're allowed to hold everything in memory.
+        # TODO: Make this part do peer selection.
+        self.update_goal()
+        self.writers = {}
+        if self._version == MDMF_VERSION:
+            writer_class = MDMFSlotWriteProxy
+        else:
+            writer_class = SDMFSlotWriteProxy
 
hunk ./src/allmydata/mutable/publish.py 462
+        # For each (peerid, shnum) in self.goal, we make a
+        # write proxy for that peer. We'll use this to write
+        # shares to the peer.
+        for key in self.goal:
+            peerid, shnum = key
+            write_enabler = self._node.get_write_enabler(peerid)
+            renew_secret = self._node.get_renewal_secret(peerid)
+            cancel_secret = self._node.get_cancel_secret(peerid)
+            secrets = (write_enabler, renew_secret, cancel_secret)
+
+            self.writers[shnum] =  writer_class(shnum,
+                                                self.connections[peerid],
+                                                self._storage_index,
+                                                secrets,
+                                                self._new_seqnum,
+                                                self.required_shares,
+                                                self.total_shares,
+                                                self.segment_size,
+                                                self.datalength)
+            self.writers[shnum].peerid = peerid
+            if (peerid, shnum) in self._servermap.servermap:
+                old_versionid, old_timestamp = self._servermap.servermap[key]
+                (old_seqnum, old_root_hash, old_salt, old_segsize,
+                 old_datalength, old_k, old_N, old_prefix,
+                 old_offsets_tuple) = old_versionid
+                self.writers[shnum].set_checkstring(old_seqnum,
+                                                    old_root_hash,
+                                                    old_salt)
+            elif (peerid, shnum) in self.bad_share_checkstrings:
+                old_checkstring = self.bad_share_checkstrings[(peerid, shnum)]
+                self.writers[shnum].set_checkstring(old_checkstring)
+
+        # Our remote shares will not have a complete checkstring until
+        # after we are done writing share data and have started to write
+        # blocks. In the meantime, we need to know what to look for when
+        # writing, so that we can detect UncoordinatedWriteErrors.
+        self._checkstring = self.writers.values()[0].get_checkstring()
+
+        # Now, we start pushing shares.
         self._status.timings["setup"] = time.time() - self._started
hunk ./src/allmydata/mutable/publish.py 502
-        d = self._encrypt_and_encode()
-        d.addCallback(self._generate_shares)
-        def _start_pushing(res):
-            self._started_pushing = time.time()
-            return res
-        d.addCallback(_start_pushing)
-        d.addCallback(self.loop) # trigger delivery
-        d.addErrback(self._fatal_error)
+        # First, we encrypt, encode, and publish the shares that we need
+        # to encrypt, encode, and publish.
+
+        # This will eventually hold the block hash chain for each share
+        # that we publish. We define it this way so that empty publishes
+        # will still have something to write to the remote slot.
+        self.blockhashes = dict([(i, []) for i in xrange(self.total_shares)])
+        for i in xrange(self.total_shares):
+            blocks = self.blockhashes[i]
+            for j in xrange(self.num_segments):
+                blocks.append(None)
+        self.sharehash_leaves = None # eventually [sharehashes]
+        self.sharehashes = {} # shnum -> [sharehash leaves necessary to 
+                              # validate the share]
+
+        self.log("Starting push")
+
+        self._state = PUSHING_BLOCKS_STATE
+        self._push()
 
         return self.done_deferred
 
hunk ./src/allmydata/mutable/publish.py 524
-    def setup_encoding_parameters(self):
-        segment_size = len(self.newdata)
+
+    def _update_status(self):
+        self._status.set_status("Sending Shares: %d placed out of %d, "
+                                "%d messages outstanding" %
+                                (len(self.placed),
+                                 len(self.goal),
+                                 len(self.outstanding)))
+        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
+
+
+    def setup_encoding_parameters(self, offset=0):
+        if self._version == MDMF_VERSION:
+            segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default
+        else:
+            segment_size = self.datalength # SDMF is only one segment
         # this must be a multiple of self.required_shares
         segment_size = mathutil.next_multiple(segment_size,
                                               self.required_shares)
hunk ./src/allmydata/mutable/publish.py 543
         self.segment_size = segment_size
+
+        # Calculate the starting segment for the upload.
         if segment_size:
hunk ./src/allmydata/mutable/publish.py 546
-            self.num_segments = mathutil.div_ceil(len(self.newdata),
+            self.num_segments = mathutil.div_ceil(self.datalength,
                                                   segment_size)
hunk ./src/allmydata/mutable/publish.py 548
+            self.starting_segment = mathutil.div_ceil(offset,
+                                                      segment_size)
+            self.starting_segment -= 1
+            if offset == 0:
+                self.starting_segment = 0
+
         else:
             self.num_segments = 0
hunk ./src/allmydata/mutable/publish.py 556
-        assert self.num_segments in [0, 1,] # SDMF restrictions
+            self.starting_segment = 0
+
+
+        self.log("building encoding parameters for file")
+        self.log("got segsize %d" % self.segment_size)
+        self.log("got %d segments" % self.num_segments)
+
+        if self._version == SDMF_VERSION:
+            assert self.num_segments in (0, 1) # SDMF
+        # calculate the tail segment size.
+
+        if segment_size and self.datalength:
+            self.tail_segment_size = self.datalength % segment_size
+            self.log("got tail segment size %d" % self.tail_segment_size)
+        else:
+            self.tail_segment_size = 0
+
+        if self.tail_segment_size == 0 and segment_size:
+            # The tail segment is the same size as the other segments.
+            self.tail_segment_size = segment_size
+
+        # Make FEC encoders
+        fec = codec.CRSEncoder()
+        fec.set_params(self.segment_size,
+                       self.required_shares, self.total_shares)
+        self.piece_size = fec.get_block_size()
+        self.fec = fec
+
+        if self.tail_segment_size == self.segment_size:
+            self.tail_fec = self.fec
+        else:
+            tail_fec = codec.CRSEncoder()
+            tail_fec.set_params(self.tail_segment_size,
+                                self.required_shares,
+                                self.total_shares)
+            self.tail_fec = tail_fec
+
+        self._current_segment = self.starting_segment
+        self.end_segment = self.num_segments - 1
+        # Now figure out where the last segment should be.
+        if self.data.get_size() != self.datalength:
+            end = self.data.get_size()
+            self.end_segment = mathutil.div_ceil(end,
+                                                 segment_size)
+            self.end_segment -= 1
+        self.log("got start segment %d" % self.starting_segment)
+        self.log("got end segment %d" % self.end_segment)
+
+
+    def _push(self, ignored=None):
+        """
+        I manage state transitions. In particular, I see that we still
+        have a good enough number of writers to complete the upload
+        successfully.
+        """
+        # Can we still successfully publish this file?
+        # TODO: Keep track of outstanding queries before aborting the
+        #       process.
+        if len(self.writers) <= self.required_shares or self.surprised:
+            return self._failure()
+
+        # Figure out what we need to do next. Each of these needs to
+        # return a deferred so that we don't block execution when this
+        # is first called in the upload method.
+        if self._state == PUSHING_BLOCKS_STATE:
+            return self.push_segment(self._current_segment)
+
+        elif self._state == PUSHING_EVERYTHING_ELSE_STATE:
+            return self.push_everything_else()
+
+        # If we make it to this point, we were successful in placing the
+        # file.
+        return self._done(None)
+
+
+    def push_segment(self, segnum):
+        if self.num_segments == 0 and self._version == SDMF_VERSION:
+            self._add_dummy_salts()
 
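
The segment arithmetic in setup_encoding_parameters above is easier to
follow with concrete numbers. A worked sketch with illustrative values
(k=3 required shares, N=10 total shares, the MDMF default segment
size), using the same mathutil/codec helpers as the code above:

    from allmydata import codec
    from allmydata.util import mathutil

    k, N = 3, 10
    datalength = 1000000
    segment_size = mathutil.next_multiple(128 * 1024, k)        # 131073
    num_segments = mathutil.div_ceil(datalength, segment_size)  # 8
    tail_segment_size = datalength % segment_size               # 82489
    # a write at offset 300000 starts in segment 2 (0-indexed)
    starting_segment = mathutil.div_ceil(300000, segment_size) - 1

    fec = codec.CRSEncoder()
    fec.set_params(segment_size, k, N)
    piece_size = fec.get_block_size()                           # 43691
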
hunk ./src/allmydata/mutable/publish.py 635
-    def _fatal_error(self, f):
-        self.log("error during loop", failure=f, level=log.UNUSUAL)
-        self._done(f)
+        if segnum > self.end_segment:
+            # We don't have any more segments to push.
+            self._state = PUSHING_EVERYTHING_ELSE_STATE
+            return self._push()
+
+        d = self._encode_segment(segnum)
+        d.addCallback(self._push_segment, segnum)
+        def _increment_segnum(ign):
+            self._current_segment += 1
+        # XXX: I don't think we need to do addBoth here -- any errbacks
+        # should be handled within push_segment.
+        d.addBoth(_increment_segnum)
+        d.addBoth(self._turn_barrier)
+        d.addBoth(self._push)
+
+
+    def _turn_barrier(self, result):
+        """
+        I help the publish process avoid the recursion limit issues
+        described in #237.
+        """
+        return fireEventually(result)
+
+
+    def _add_dummy_salts(self):
+        """
+        SDMF files need a salt even if they're empty, or the signature
+        won't make sense. This method adds a dummy salt to each of our
+        SDMF writers so that they can write the signature later.
+        """
+        salt = os.urandom(16)
+        assert self._version == SDMF_VERSION
+
+        for writer in self.writers.itervalues():
+            writer.put_salt(salt)
+
+
+    def _encode_segment(self, segnum):
+        """
+        I encrypt and encode the segment segnum.
+        """
+        started = time.time()
+
+        if segnum + 1 == self.num_segments:
+            segsize = self.tail_segment_size
+        else:
+            segsize = self.segment_size
+
+
+        self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments))
+        data = self.data.read(segsize)
+        # XXX: This is dumb. Why return a list?
+        data = "".join(data)
+
+        assert len(data) == segsize, len(data)
+
+        salt = os.urandom(16)
+
+        key = hashutil.ssk_readkey_data_hash(salt, self.readkey)
+        self._status.set_status("Encrypting")
+        enc = AES(key)
+        crypttext = enc.process(data)
+        assert len(crypttext) == len(data)
+
+        now = time.time()
+        self._status.timings["encrypt"] = now - started
+        started = now
+
+        # now apply FEC
+        if segnum + 1 == self.num_segments:
+            fec = self.tail_fec
+        else:
+            fec = self.fec
+
+        self._status.set_status("Encoding")
+        crypttext_pieces = [None] * self.required_shares
+        piece_size = fec.get_block_size()
+        for i in range(len(crypttext_pieces)):
+            offset = i * piece_size
+            piece = crypttext[offset:offset+piece_size]
+            piece = piece + "\x00"*(piece_size - len(piece)) # padding
+            crypttext_pieces[i] = piece
+            assert len(piece) == piece_size
+        d = fec.encode(crypttext_pieces)
+        def _done_encoding(res):
+            elapsed = time.time() - started
+            self._status.timings["encode"] = elapsed
+            return (res, salt)
+        d.addCallback(_done_encoding)
+        return d
+
+
+    def _push_segment(self, encoded_and_salt, segnum):
+        """
+        I push (data, salt) as segment number segnum.
+        """
+        results, salt = encoded_and_salt
+        shares, shareids = results
+        self._status.set_status("Pushing segment")
+        for i in xrange(len(shares)):
+            sharedata = shares[i]
+            shareid = shareids[i]
+            if self._version == MDMF_VERSION:
+                hashed = salt + sharedata
+            else:
+                hashed = sharedata
+            block_hash = hashutil.block_hash(hashed)
+            self.blockhashes[shareid][segnum] = block_hash
+            # find the writer for this share
+            writer = self.writers[shareid]
+            writer.put_block(sharedata, segnum, salt)
+
+
+    def push_everything_else(self):
+        """
+        I put everything else associated with a share.
+        """
+        self._pack_started = time.time()
+        self.push_encprivkey()
+        self.push_blockhashes()
+        self.push_sharehashes()
+        self.push_toplevel_hashes_and_signature()
+        d = self.finish_publishing()
+        def _change_state(ignored):
+            self._state = DONE_STATE
+        d.addCallback(_change_state)
+        d.addCallback(self._push)
+        return d
+
+
+    def push_encprivkey(self):
+        encprivkey = self._encprivkey
+        self._status.set_status("Pushing encrypted private key")
+        for writer in self.writers.itervalues():
+            writer.put_encprivkey(encprivkey)
+
+
+    def push_blockhashes(self):
+        self.sharehash_leaves = [None] * len(self.blockhashes)
+        self._status.set_status("Building and pushing block hash tree")
+        for shnum, blockhashes in self.blockhashes.iteritems():
+            t = hashtree.HashTree(blockhashes)
+            self.blockhashes[shnum] = list(t)
+            # set the leaf for future use.
+            self.sharehash_leaves[shnum] = t[0]
+
+            writer = self.writers[shnum]
+            writer.put_blockhashes(self.blockhashes[shnum])
+
+
+    def push_sharehashes(self):
+        self._status.set_status("Building and pushing share hash chain")
+        share_hash_tree = hashtree.HashTree(self.sharehash_leaves)
+        for shnum in xrange(len(self.sharehash_leaves)):
+            needed_indices = share_hash_tree.needed_hashes(shnum)
+            self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i])
+                                             for i in needed_indices] )
+            writer = self.writers[shnum]
+            writer.put_sharehashes(self.sharehashes[shnum])
+        self.root_hash = share_hash_tree[0]
+
+
+    def push_toplevel_hashes_and_signature(self):
+        # We need to do three things here:
+        #   - Push the root hash and salt hash
+        #   - Get the checkstring of the resulting layout; sign that.
+        #   - Push the signature
+        self._status.set_status("Pushing root hashes and signature")
+        for shnum in xrange(self.total_shares):
+            writer = self.writers[shnum]
+            writer.put_root_hash(self.root_hash)
+        self._update_checkstring()
+        self._make_and_place_signature()
+
+
+    def _update_checkstring(self):
+        """
+        After putting the root hash, MDMF files will have the
+        checkstring written to the storage server. This means that we
+        can update our copy of the checkstring so we can detect
+        uncoordinated writes. SDMF files will have the same checkstring,
+        so we need not do anything.
+        """
+        self._checkstring = self.writers.values()[0].get_checkstring()
+
+
+    def _make_and_place_signature(self):
+        """
+        I create and place the signature.
+        """
+        started = time.time()
+        self._status.set_status("Signing prefix")
+        signable = self.writers[0].get_signable()
+        self.signature = self._privkey.sign(signable)
+
+        for (shnum, writer) in self.writers.iteritems():
+            writer.put_signature(self.signature)
+        self._status.timings['sign'] = time.time() - started
+
+
+    def finish_publishing(self):
+        # We're almost done -- we just need to put the verification key
+        # and the offsets
+        started = time.time()
+        self._status.set_status("Pushing shares")
+        self._started_pushing = started
+        ds = []
+        verification_key = self._pubkey.serialize()
+
+
+        # TODO: Bad, since we remove from this same dict. We need to
+        # make a copy, or just use a non-iterated value.
+        for (shnum, writer) in self.writers.iteritems():
+            writer.put_verification_key(verification_key)
+            d = writer.finish_publishing()
+            # Add the (peerid, shnum) tuple to our list of outstanding
+            # queries. This gets used by _loop if some of our queries
+            # fail to place shares.
+            self.outstanding.add((writer.peerid, writer.shnum))
+            d.addCallback(self._got_write_answer, writer, started)
+            d.addErrback(self._connection_problem, writer)
+            ds.append(d)
+        self._record_verinfo()
+        self._status.timings['pack'] = time.time() - started
+        return defer.DeferredList(ds)
+
+
+    def _record_verinfo(self):
+        self.versioninfo = self.writers.values()[0].get_verinfo()
+
+
+    def _connection_problem(self, f, writer):
+        """
+        We ran into a connection problem while working with writer, and
+        need to deal with that.
+        """
+        self.log("found problem: %s" % str(f))
+        self._last_failure = f
+        del(self.writers[writer.shnum])
 
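
push_blockhashes and push_sharehashes above build a two-level Merkle
structure: one block hash tree per share, whose roots become the leaves
of a single share hash tree. A compressed sketch with hypothetical
block data, using the same hashtree/hashutil APIs as the code above:

    from allmydata import hashtree
    from allmydata.util import hashutil

    # one block hash tree per share (two shares, four blocks each)
    bht0 = hashtree.HashTree([hashutil.block_hash("s0 block %d" % i)
                              for i in range(4)])
    bht1 = hashtree.HashTree([hashutil.block_hash("s1 block %d" % i)
                              for i in range(4)])

    # the roots of the per-share trees are the share hash tree's leaves
    sht = hashtree.HashTree([bht0[0], bht1[0]])
    needed = sht.needed_hashes(0)  # hashes share 0 must store: set([2])
    root_hash = sht[0]             # what put_root_hash() receives
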
hunk ./src/allmydata/mutable/publish.py 875
-    def _update_status(self):
-        self._status.set_status("Sending Shares: %d placed out of %d, "
-                                "%d messages outstanding" %
-                                (len(self.placed),
-                                 len(self.goal),
-                                 len(self.outstanding)))
-        self._status.set_progress(1.0 * len(self.placed) / len(self.goal))
 
hunk ./src/allmydata/mutable/publish.py 876
-    def loop(self, ignored=None):
-        self.log("entering loop", level=log.NOISY)
-        if not self._running:
-            return
-
-        self.looplimit -= 1
-        if self.looplimit <= 0:
-            raise LoopLimitExceededError("loop limit exceeded")
-
-        if self.surprised:
-            # don't send out any new shares, just wait for the outstanding
-            # ones to be retired.
-            self.log("currently surprised, so don't send any new shares",
-                     level=log.NOISY)
-        else:
-            self.update_goal()
-            # how far are we from our goal?
-            needed = self.goal - self.placed - self.outstanding
-            self._update_status()
-
-            if needed:
-                # we need to send out new shares
-                self.log(format="need to send %(needed)d new shares",
-                         needed=len(needed), level=log.NOISY)
-                self._send_shares(needed)
-                return
-
-        if self.outstanding:
-            # queries are still pending, keep waiting
-            self.log(format="%(outstanding)d queries still outstanding",
-                     outstanding=len(self.outstanding),
-                     level=log.NOISY)
-            return
-
-        # no queries outstanding, no placements needed: we're done
-        self.log("no queries outstanding, no placements needed: done",
-                 level=log.OPERATIONAL)
-        now = time.time()
-        elapsed = now - self._started_pushing
-        self._status.timings["push"] = elapsed
-        return self._done(None)
-
     def log_goal(self, goal, message=""):
         logmsg = [message]
         for (shnum, peerid) in sorted([(s,p) for (p,s) in goal]):
hunk ./src/allmydata/mutable/publish.py 957
             self.log_goal(self.goal, "after update: ")
 
 
+    def _got_write_answer(self, answer, writer, started):
+        if not answer:
+            # SDMF writers only pretend to write when callers set their
+            # blocks, salts, and so on -- they actually just write once,
+            # at the end of the upload process. In fake writes, they
+            # return defer.succeed(None). If we see that, we shouldn't
+            # bother checking it.
+            return
 
hunk ./src/allmydata/mutable/publish.py 966
-    def _encrypt_and_encode(self):
-        # this returns a Deferred that fires with a list of (sharedata,
-        # sharenum) tuples. TODO: cache the ciphertext, only produce the
-        # shares that we care about.
-        self.log("_encrypt_and_encode")
-
-        self._status.set_status("Encrypting")
-        started = time.time()
-
-        key = hashutil.ssk_readkey_data_hash(self.salt, self.readkey)
-        enc = AES(key)
-        crypttext = enc.process(self.newdata)
-        assert len(crypttext) == len(self.newdata)
+        peerid = writer.peerid
+        lp = self.log("_got_write_answer from %s, share %d" %
+                      (idlib.shortnodeid_b2a(peerid), writer.shnum))
 
         now = time.time()
hunk ./src/allmydata/mutable/publish.py 971
-        self._status.timings["encrypt"] = now - started
-        started = now
-
-        # now apply FEC
-
-        self._status.set_status("Encoding")
-        fec = codec.CRSEncoder()
-        fec.set_params(self.segment_size,
-                       self.required_shares, self.total_shares)
-        piece_size = fec.get_block_size()
-        crypttext_pieces = [None] * self.required_shares
-        for i in range(len(crypttext_pieces)):
-            offset = i * piece_size
-            piece = crypttext[offset:offset+piece_size]
-            piece = piece + "\x00"*(piece_size - len(piece)) # padding
-            crypttext_pieces[i] = piece
-            assert len(piece) == piece_size
-
-        d = fec.encode(crypttext_pieces)
-        def _done_encoding(res):
-            elapsed = time.time() - started
-            self._status.timings["encode"] = elapsed
-            return res
-        d.addCallback(_done_encoding)
-        return d
-
-    def _generate_shares(self, shares_and_shareids):
-        # this sets self.shares and self.root_hash
-        self.log("_generate_shares")
-        self._status.set_status("Generating Shares")
-        started = time.time()
-
-        # we should know these by now
-        privkey = self._privkey
-        encprivkey = self._encprivkey
-        pubkey = self._pubkey
-
-        (shares, share_ids) = shares_and_shareids
-
-        assert len(shares) == len(share_ids)
-        assert len(shares) == self.total_shares
-        all_shares = {}
-        block_hash_trees = {}
-        share_hash_leaves = [None] * len(shares)
-        for i in range(len(shares)):
-            share_data = shares[i]
-            shnum = share_ids[i]
-            all_shares[shnum] = share_data
-
-            # build the block hash tree. SDMF has only one leaf.
-            leaves = [hashutil.block_hash(share_data)]
-            t = hashtree.HashTree(leaves)
-            block_hash_trees[shnum] = list(t)
-            share_hash_leaves[shnum] = t[0]
-        for leaf in share_hash_leaves:
-            assert leaf is not None
-        share_hash_tree = hashtree.HashTree(share_hash_leaves)
-        share_hash_chain = {}
-        for shnum in range(self.total_shares):
-            needed_hashes = share_hash_tree.needed_hashes(shnum)
-            share_hash_chain[shnum] = dict( [ (i, share_hash_tree[i])
-                                              for i in needed_hashes ] )
-        root_hash = share_hash_tree[0]
-        assert len(root_hash) == 32
-        self.log("my new root_hash is %s" % base32.b2a(root_hash))
-        self._new_version_info = (self._new_seqnum, root_hash, self.salt)
-
-        prefix = pack_prefix(self._new_seqnum, root_hash, self.salt,
-                             self.required_shares, self.total_shares,
-                             self.segment_size, len(self.newdata))
-
-        # now pack the beginning of the share. All shares are the same up
-        # to the signature, then they have divergent share hash chains,
-        # then completely different block hash trees + salt + share data,
-        # then they all share the same encprivkey at the end. The sizes
-        # of everything are the same for all shares.
-
-        sign_started = time.time()
-        signature = privkey.sign(prefix)
-        self._status.timings["sign"] = time.time() - sign_started
-
-        verification_key = pubkey.serialize()
-
-        final_shares = {}
-        for shnum in range(self.total_shares):
-            final_share = pack_share(prefix,
-                                     verification_key,
-                                     signature,
-                                     share_hash_chain[shnum],
-                                     block_hash_trees[shnum],
-                                     all_shares[shnum],
-                                     encprivkey)
-            final_shares[shnum] = final_share
-        elapsed = time.time() - started
-        self._status.timings["pack"] = elapsed
-        self.shares = final_shares
-        self.root_hash = root_hash
-
-        # we also need to build up the version identifier for what we're
-        # pushing. Extract the offsets from one of our shares.
-        assert final_shares
-        offsets = unpack_header(final_shares.values()[0])[-1]
-        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
-        verinfo = (self._new_seqnum, root_hash, self.salt,
-                   self.segment_size, len(self.newdata),
-                   self.required_shares, self.total_shares,
-                   prefix, offsets_tuple)
-        self.versioninfo = verinfo
-
-
-
-    def _send_shares(self, needed):
-        self.log("_send_shares")
-
-        # we're finally ready to send out our shares. If we encounter any
-        # surprises here, it's because somebody else is writing at the same
-        # time. (Note: in the future, when we remove the _query_peers() step
-        # and instead speculate about [or remember] which shares are where,
-        # surprises here are *not* indications of UncoordinatedWriteError,
-        # and we'll need to respond to them more gracefully.)
-
-        # needed is a set of (peerid, shnum) tuples. The first thing we do is
-        # organize it by peerid.
-
-        peermap = DictOfSets()
-        for (peerid, shnum) in needed:
-            peermap.add(peerid, shnum)
-
-        # the next thing is to build up a bunch of test vectors. The
-        # semantics of Publish are that we perform the operation if the world
-        # hasn't changed since the ServerMap was constructed (more or less).
-        # For every share we're trying to place, we create a test vector that
-        # tests to see if the server*share still corresponds to the
-        # map.
-
-        all_tw_vectors = {} # maps peerid to tw_vectors
-        sm = self._servermap.servermap
-
-        for key in needed:
-            (peerid, shnum) = key
-
-            if key in sm:
-                # an old version of that share already exists on the
-                # server, according to our servermap. We will create a
-                # request that attempts to replace it.
-                old_versionid, old_timestamp = sm[key]
-                (old_seqnum, old_root_hash, old_salt, old_segsize,
-                 old_datalength, old_k, old_N, old_prefix,
-                 old_offsets_tuple) = old_versionid
-                old_checkstring = pack_checkstring(old_seqnum,
-                                                   old_root_hash,
-                                                   old_salt)
-                testv = (0, len(old_checkstring), "eq", old_checkstring)
-
-            elif key in self.bad_share_checkstrings:
-                old_checkstring = self.bad_share_checkstrings[key]
-                testv = (0, len(old_checkstring), "eq", old_checkstring)
-
-            else:
-                # add a testv that requires the share not exist
-
-                # Unfortunately, foolscap-0.2.5 has a bug in the way inbound
-                # constraints are handled. If the same object is referenced
-                # multiple times inside the arguments, foolscap emits a
-                # 'reference' token instead of a distinct copy of the
-                # argument. The bug is that these 'reference' tokens are not
-                # accepted by the inbound constraint code. To work around
-                # this, we need to prevent python from interning the
-                # (constant) tuple, by creating a new copy of this vector
-                # each time.
-
-                # This bug is fixed in foolscap-0.2.6, and even though this
-                # version of Tahoe requires foolscap-0.3.1 or newer, we are
-                # supposed to be able to interoperate with older versions of
-                # Tahoe which are allowed to use older versions of foolscap,
-                # including foolscap-0.2.5 . In addition, I've seen other
-                # foolscap problems triggered by 'reference' tokens (see #541
-                # for details). So we must keep this workaround in place.
-
-                #testv = (0, 1, 'eq', "")
-                testv = tuple([0, 1, 'eq', ""])
-
-            testvs = [testv]
-            # the write vector is simply the share
-            writev = [(0, self.shares[shnum])]
-
-            if peerid not in all_tw_vectors:
-                all_tw_vectors[peerid] = {}
-                # maps shnum to (testvs, writevs, new_length)
-            assert shnum not in all_tw_vectors[peerid]
-
-            all_tw_vectors[peerid][shnum] = (testvs, writev, None)
-
-        # we read the checkstring back from each share, however we only use
-        # it to detect whether there was a new share that we didn't know
-        # about. The success or failure of the write will tell us whether
-        # there was a collision or not. If there is a collision, the first
-        # thing we'll do is update the servermap, which will find out what
-        # happened. We could conceivably reduce a roundtrip by using the
-        # readv checkstring to populate the servermap, but really we'd have
-        # to read enough data to validate the signatures too, so it wouldn't
-        # be an overall win.
-        read_vector = [(0, struct.calcsize(SIGNED_PREFIX))]
-
-        # ok, send the messages!
-        self.log("sending %d shares" % len(all_tw_vectors), level=log.NOISY)
-        started = time.time()
-        for (peerid, tw_vectors) in all_tw_vectors.items():
-
-            write_enabler = self._node.get_write_enabler(peerid)
-            renew_secret = self._node.get_renewal_secret(peerid)
-            cancel_secret = self._node.get_cancel_secret(peerid)
-            secrets = (write_enabler, renew_secret, cancel_secret)
-            shnums = tw_vectors.keys()
-
-            for shnum in shnums:
-                self.outstanding.add( (peerid, shnum) )
+        elapsed = now - started
 
hunk ./src/allmydata/mutable/publish.py 973
-            d = self._do_testreadwrite(peerid, secrets,
-                                       tw_vectors, read_vector)
-            d.addCallbacks(self._got_write_answer, self._got_write_error,
-                           callbackArgs=(peerid, shnums, started),
-                           errbackArgs=(peerid, shnums, started))
-            # tolerate immediate errback, like with DeadReferenceError
-            d.addBoth(fireEventually)
-            d.addCallback(self.loop)
-            d.addErrback(self._fatal_error)
+        self._status.add_per_server_time(peerid, elapsed)
 
hunk ./src/allmydata/mutable/publish.py 975
-        self._update_status()
-        self.log("%d shares sent" % len(all_tw_vectors), level=log.NOISY)
+        wrote, read_data = answer
 
hunk ./src/allmydata/mutable/publish.py 977
-    def _do_testreadwrite(self, peerid, secrets,
-                          tw_vectors, read_vector):
-        storage_index = self._storage_index
-        ss = self.connections[peerid]
+        surprise_shares = set(read_data.keys()) - set([writer.shnum])
 
hunk ./src/allmydata/mutable/publish.py 979
-        #print "SS[%s] is %s" % (idlib.shortnodeid_b2a(peerid), ss), ss.tracker.interfaceName
-        d = ss.callRemote("slot_testv_and_readv_and_writev",
-                          storage_index,
-                          secrets,
-                          tw_vectors,
-                          read_vector)
-        return d
+        # We need to remove from surprise_shares any shares that we
+        # knowingly sent to the same peer through our other writers.
 
hunk ./src/allmydata/mutable/publish.py 982
-    def _got_write_answer(self, answer, peerid, shnums, started):
-        lp = self.log("_got_write_answer from %s" %
-                      idlib.shortnodeid_b2a(peerid))
-        for shnum in shnums:
-            self.outstanding.discard( (peerid, shnum) )
+        # TODO: Precompute this.
+        known_shnums = [x.shnum for x in self.writers.values()
+                        if x.peerid == peerid]
+        surprise_shares -= set(known_shnums)
+        self.log("found the following surprise shares: %s" %
+                 str(surprise_shares))
 
hunk ./src/allmydata/mutable/publish.py 989
-        now = time.time()
-        elapsed = now - started
-        self._status.add_per_server_time(peerid, elapsed)
-
-        wrote, read_data = answer
-
-        surprise_shares = set(read_data.keys()) - set(shnums)
+        # Now surprise_shares contains all of the shares that we did not
+        # expect to be there.
 
         surprised = False
         for shnum in surprise_shares:
hunk ./src/allmydata/mutable/publish.py 996
             # read_data is a dict mapping shnum to checkstring (SIGNED_PREFIX)
             checkstring = read_data[shnum][0]
-            their_version_info = unpack_checkstring(checkstring)
-            if their_version_info == self._new_version_info:
+            # What we want to do here is to see if their (seqnum,
+            # roothash, salt) is the same as our (seqnum, roothash,
+            # salt), or the equivalent for MDMF. The best way to do this
+            # is to store a packed representation of our checkstring
+            # somewhere, then not bother unpacking the other
+            # checkstring.
+            if checkstring == self._checkstring:
                 # they have the right share, somehow
 
                 if (peerid,shnum) in self.goal:
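
The surprise test above reduces to a byte-string comparison because
both checkstrings stay in packed form; no unpacking is required.
Schematically (writer, read_data, and shnum as in the surrounding
code):

    ours = writer.get_checkstring()  # packed (seqnum, roothash, salt)
    theirs = read_data[shnum][0]     # packed checkstring from the server
    surprised = (theirs != ours)
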
hunk ./src/allmydata/mutable/publish.py 1081
             self.log("our testv failed, so the write did not happen",
                      parent=lp, level=log.WEIRD, umid="8sc26g")
             self.surprised = True
-            self.bad_peers.add(peerid) # don't ask them again
+            self.bad_peers.add(writer) # don't ask them again
             # use the checkstring to add information to the log message
             for (shnum,readv) in read_data.items():
                 checkstring = readv[0]
hunk ./src/allmydata/mutable/publish.py 1103
                 # if expected_version==None, then we didn't expect to see a
                 # share on that peer, and the 'surprise_shares' clause above
                 # will have logged it.
-            # self.loop() will take care of finding new homes
             return
 
hunk ./src/allmydata/mutable/publish.py 1105
-        for shnum in shnums:
-            self.placed.add( (peerid, shnum) )
-            # and update the servermap
-            self._servermap.add_new_share(peerid, shnum,
+        # and update the servermap
+        # self.versioninfo is set during the last phase of publishing.
+        # If we get there, we know that responses correspond to placed
+        # shares, and can safely execute these statements.
+        if self.versioninfo:
+            self.log("wrote successfully: adding new share to servermap")
+            self._servermap.add_new_share(peerid, writer.shnum,
                                           self.versioninfo, started)
hunk ./src/allmydata/mutable/publish.py 1113
-
-        # self.loop() will take care of checking to see if we're done
+            self.placed.add( (peerid, writer.shnum) )
+        self._update_status()
+        # the next method in the deferred chain will check to see if
+        # we're done and successful.
         return
 
hunk ./src/allmydata/mutable/publish.py 1119
-    def _got_write_error(self, f, peerid, shnums, started):
-        for shnum in shnums:
-            self.outstanding.discard( (peerid, shnum) )
-        self.bad_peers.add(peerid)
-        if self._first_write_error is None:
-            self._first_write_error = f
-        self.log(format="error while writing shares %(shnums)s to peerid %(peerid)s",
-                 shnums=list(shnums), peerid=idlib.shortnodeid_b2a(peerid),
-                 failure=f,
-                 level=log.UNUSUAL)
-        # self.loop() will take care of checking to see if we're done
-        return
-
 
     def _done(self, res):
         if not self._running:
hunk ./src/allmydata/mutable/publish.py 1126
         self._running = False
         now = time.time()
         self._status.timings["total"] = now - self._started
+
+        elapsed = now - self._started_pushing
+        self._status.timings['push'] = elapsed
+
         self._status.set_active(False)
hunk ./src/allmydata/mutable/publish.py 1131
-        if isinstance(res, failure.Failure):
-            self.log("Publish done, with failure", failure=res,
-                     level=log.WEIRD, umid="nRsR9Q")
-            self._status.set_status("Failed")
-        elif self.surprised:
-            self.log("Publish done, UncoordinatedWriteError", level=log.UNUSUAL)
-            self._status.set_status("UncoordinatedWriteError")
-            # deliver a failure
-            res = failure.Failure(UncoordinatedWriteError())
-            # TODO: recovery
-        else:
-            self.log("Publish done, success")
-            self._status.set_status("Finished")
-            self._status.set_progress(1.0)
+        self.log("Publish done, success")
+        self._status.set_status("Finished")
+        self._status.set_progress(1.0)
         eventually(self.done_deferred.callback, res)
 
hunk ./src/allmydata/mutable/publish.py 1136
+    def _failure(self):
+
+        if not self.surprised:
+            # We ran out of servers
+            self.log("Publish ran out of good servers, "
+                     "last failure was: %s" % str(self._last_failure))
+            e = NotEnoughServersError("Ran out of non-bad servers, "
+                                      "last failure was %s" %
+                                      str(self._last_failure))
+        else:
+            # We ran into shares that we didn't recognize, which means
+            # that we need to return an UncoordinatedWriteError.
+            self.log("Publish failed with UncoordinatedWriteError")
+            e = UncoordinatedWriteError()
+        f = failure.Failure(e)
+        eventually(self.done_deferred.callback, f)
+
+
+class MutableFileHandle:
+    """
+    I am a mutable uploadable built around a filehandle-like object,
+    usually either a StringIO instance or a handle to an actual file.
+    """
+    implements(IMutableUploadable)
+
+    def __init__(self, filehandle):
+        # The filehandle is defined as a generic file-like object that
+        # has these two methods. We don't care beyond that.
+        assert hasattr(filehandle, "read")
+        assert hasattr(filehandle, "close")
+
+        self._filehandle = filehandle
+        # We must start reading at the beginning of the file, or we risk
+        # encountering errors when the data read does not match the size
+        # reported to the uploader.
+        self._filehandle.seek(0)
+
+        # We have not yet read anything, so our position is 0.
+        self._marker = 0
+
+
+    def get_size(self):
+        """
+        I return the amount of data in my filehandle.
+        """
+        if not hasattr(self, "_size"):
+            old_position = self._filehandle.tell()
+            # Seek to the end of the file by seeking 0 bytes from the
+            # file's end
+            self._filehandle.seek(0, 2) # 2 == os.SEEK_END in 2.5+
+            self._size = self._filehandle.tell()
+            # Restore the previous position, in case this was called
+            # after a read.
+            self._filehandle.seek(old_position)
+            assert self._filehandle.tell() == old_position
+
+        assert hasattr(self, "_size")
+        return self._size
+
+
+    def pos(self):
+        """
+        I return the position of my read marker -- i.e., how much data I
+        have already read and returned to callers.
+        """
+        return self._marker
+
+
+    def read(self, length):
+        """
+        I return some data (up to length bytes) from my filehandle.
+
+        In most cases, I return length bytes, but sometimes I won't --
+        for example, if I am asked to read beyond the end of a file, or
+        an error occurs.
+        """
+        results = self._filehandle.read(length)
+        self._marker += len(results)
+        return [results]
+
+
+    def close(self):
+        """
+        I close the underlying filehandle. Any further operations on the
+        filehandle fail at this point.
+        """
+        self._filehandle.close()
+
+
+class MutableData(MutableFileHandle):
+    """
+    I am a mutable uploadable built around a string, which I then cast
+    into a StringIO and treat as a filehandle.
+    """
+
+    def __init__(self, s):
+        # Take a string and return a file-like uploadable.
+        assert isinstance(s, str)
+
+        MutableFileHandle.__init__(self, StringIO(s))
+
+
+class TransformingUploadable:
+    """
+    I am an IMutableUploadable that wraps another IMutableUploadable,
+    and some segments that are already on the grid. When I am called to
+    read, I handle merging of boundary segments.
+    """
+    implements(IMutableUploadable)
+
+
+    def __init__(self, data, offset, segment_size, start, end):
+        assert IMutableUploadable.providedBy(data)
+
+        self._newdata = data
+        self._offset = offset
+        self._segment_size = segment_size
+        self._start = start
+        self._end = end
+
+        self._read_marker = 0
+
+        self._first_segment_offset = offset % segment_size
+
+        num = self.log("TransformingUploadable: starting", parent=None)
+        self._log_number = num
+        self.log("got fso: %d" % self._first_segment_offset)
+        self.log("got offset: %d" % self._offset)
+
+
+    def log(self, *args, **kwargs):
+        if 'parent' not in kwargs:
+            kwargs['parent'] = self._log_number
+        if "facility" not in kwargs:
+            kwargs["facility"] = "tahoe.mutable.transforminguploadable"
+        return log.msg(*args, **kwargs)
+
+
+    def get_size(self):
+        return self._offset + self._newdata.get_size()
+
+
+    def read(self, length):
+        # We can get data from 3 sources here. 
+        #   1. The first of the segments provided to us.
+        #   2. The data that we're replacing things with.
+        #   3. The last of the segments provided to us.
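+        #
+        # For example (illustrative numbers): with segment_size == 4
+        # and offset == 6, _first_segment_offset is 6 % 4 == 2, so the
+        # first 2 bytes I return come from self._start (source 1), the
+        # new data comes next (source 2), and anything needed to fill
+        # out the tail segment comes from self._end (source 3).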
+
+        # Are we still returning old data from the start of the first segment?
+        self.log("reading %d bytes" % length)
+
+        old_start_data = ""
+        old_data_length = self._first_segment_offset - self._read_marker
+        if old_data_length > 0:
+            if old_data_length > length:
+                old_data_length = length
+            self.log("returning %d bytes of old start data" % old_data_length)
+
+            old_data_end = old_data_length + self._read_marker
+            old_start_data = self._start[self._read_marker:old_data_end]
+            length -= old_data_length
+        else:
+            # Clamp to zero; a negative value would throw off the
+            # calculations below.
+            old_data_length = 0
+
+        # Is there enough new data to satisfy this read? If not, we need
+        # to pad the end of the data with data from our last segment.
+        old_end_length = length - \
+            (self._newdata.get_size() - self._newdata.pos())
+        old_end_data = ""
+        if old_end_length > 0:
+            self.log("reading %d bytes of old end data" % old_end_length)
+
+            # TODO: We're not explicitly checking for tail segment size
+            # here. Is that a problem?
+            old_data_offset = (length - old_end_length + \
+                               old_data_length) % self._segment_size
+            self.log("reading at offset %d" % old_data_offset)
+            old_end = old_data_offset + old_end_length
+            old_end_data = self._end[old_data_offset:old_end]
+            length -= old_end_length
+            assert length == self._newdata.get_size() - self._newdata.pos()
+
+        self.log("reading %d bytes of new data" % length)
+        new_data = self._newdata.read(length)
+        new_data = "".join(new_data)
+
+        self._read_marker += len(old_start_data + new_data + old_end_data)
+
+        return old_start_data + new_data + old_end_data
 
hunk ./src/allmydata/mutable/publish.py 1327
+    def close(self):
+        pass
}
[nodemaker.py: Make nodemaker expose a way to create MDMF files
Kevan Carstensen <kevan@isnotajoke.com>**20100819003509
 Ignore-this: a6701746d6b992fc07bc0556a2b4a61d
] {
hunk ./src/allmydata/nodemaker.py 3
 import weakref
 from zope.interface import implements
-from allmydata.interfaces import INodeMaker
+from allmydata.util.assertutil import precondition
+from allmydata.interfaces import INodeMaker, SDMF_VERSION
 from allmydata.immutable.literal import LiteralFileNode
 from allmydata.immutable.filenode import ImmutableFileNode, CiphertextFileNode
 from allmydata.immutable.upload import Data
hunk ./src/allmydata/nodemaker.py 9
 from allmydata.mutable.filenode import MutableFileNode
+from allmydata.mutable.publish import MutableData
 from allmydata.dirnode import DirectoryNode, pack_children
 from allmydata.unknown import UnknownNode
 from allmydata import uri
hunk ./src/allmydata/nodemaker.py 92
             return self._create_dirnode(filenode)
         return None
 
-    def create_mutable_file(self, contents=None, keysize=None):
+    def create_mutable_file(self, contents=None, keysize=None,
+                            version=SDMF_VERSION):
         n = MutableFileNode(self.storage_broker, self.secret_holder,
                             self.default_encoding_parameters, self.history)
hunk ./src/allmydata/nodemaker.py 96
+        n.set_version(version)
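+        # Callers can now request MDMF explicitly, e.g. (a sketch;
+        # MDMF_VERSION would come from allmydata.interfaces):
+        #
+        #   nodemaker.create_mutable_file(MutableData(contents),
+        #                                 version=MDMF_VERSION)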
         d = self.key_generator.generate(keysize)
         d.addCallback(n.create_with_keys, contents)
         d.addCallback(lambda res: n)
hunk ./src/allmydata/nodemaker.py 103
         return d
 
     def create_new_mutable_directory(self, initial_children={}):
+        # mutable directories will always be SDMF for now, to help
+        # compatibility with older clients.
+        version = SDMF_VERSION
+        # initial_children must have metadata (i.e. {} instead of None)
+        for (name, (node, metadata)) in initial_children.iteritems():
+            precondition(isinstance(metadata, dict),
+                         "create_new_mutable_directory requires metadata to be a dict, not None", metadata)
+            node.raise_error()
         d = self.create_mutable_file(lambda n:
hunk ./src/allmydata/nodemaker.py 112
-                                     pack_children(initial_children, n.get_writekey()))
+                                     MutableData(pack_children(initial_children,
+                                                    n.get_writekey())),
+                                     version=version)
         d.addCallback(self._create_dirnode)
         return d
 
}
[docs: update docs to mention MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20100814225644
 Ignore-this: 1c3caa3cd44831007dcfbef297814308
] {
merger 0.0 (
hunk ./docs/configuration.rst 324
+Frontend Configuration
+======================
+
+The Tahoe client process can run a variety of frontend file-access protocols.
+You will use these to create and retrieve files from the virtual filesystem.
+Configuration details for each are documented in the following
+protocol-specific guides:
+
+HTTP
+
+    Tahoe runs a webserver by default on port 3456. This interface provides a
+    human-oriented "WUI", with pages to create, modify, and browse
+    directories and files, as well as a number of pages to check on the
+    status of your Tahoe node. It also provides a machine-oriented "WAPI",
+    with a REST-ful HTTP interface that can be used by other programs
+    (including the CLI tools). Please see `<frontends/webapi.rst>`_ for full
+    details, and the ``web.port`` and ``web.static`` config variables above.
+    The `<frontends/download-status.rst>`_ document also describes a few WUI
+    status pages.
+
+CLI
+
+    The main "bin/tahoe" executable includes subcommands for manipulating the
+    filesystem, uploading/downloading files, and creating/running Tahoe
+    nodes. See `<frontends/CLI.rst>`_ for details.
+
+FTP, SFTP
+
+    Tahoe can also run both FTP and SFTP servers, and map a username/password
+    pair to a top-level Tahoe directory. See `<frontends/FTP-and-SFTP.rst>`_
+    for instructions on configuring these services, and the ``[ftpd]`` and
+    ``[sftpd]`` sections of ``tahoe.cfg``.
+
merger 0.0 (
replace ./docs/configuration.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
merger 0.0 (
hunk ./docs/configuration.rst 384
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
hunk ./docs/configuration.rst 424
+mutable.format = sdmf or mdmf
+
+ This value tells Tahoe-LAFS what the default mutable file format should
+ be. If mutable.format=sdmf, then newly created mutable files will be in
+ the old SDMF format. This is desirable for clients that operate on
+ grids where some peers run older versions of Tahoe-LAFS, as these older
+ versions cannot read the new MDMF mutable file format. If
+ mutable.format = mdmf, then newly created mutable files will use the
+ new MDMF format, which supports efficient in-place modification and
+ streaming downloads. You can override this value using the
+ "mutable-type" query parameter in the webapi. If you do not specify a value
+ here, Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+ Note that this parameter only applies to mutable files. Mutable
+ directories, which are stored as mutable files, are not controlled by
+ this parameter and will always use SDMF. We may revisit this decision
+ in future versions of Tahoe-LAFS.
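+
+ For example, to make new mutable files MDMF by default, a tahoe.cfg
+ might contain (a sketch, shown alongside the other client options)::
+
+   [client]
+   mutable.format = mdmf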
)
)
)
hunk ./docs/frontends/webapi.rst 363
  writeable mutable file, that file's contents will be overwritten in-place. If
  it is a read-cap for a mutable file, an error will occur. If it is an
  immutable file, the old file will be discarded, and a new one will be put in
- its place.
+ its place. If the target file is a writable mutable file, you may also
+ specify an "offset" parameter -- a byte offset that determines where in
+ the mutable file the data from the HTTP request body is placed. This
+ operation is relatively efficient for MDMF mutable files, and is
+ relatively inefficient (but still supported) for SDMF mutable files.
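+ For example, ``PUT /uri/$FILECAP?offset=1024`` writes the request body
+ into the existing mutable file starting at byte 1024.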
 
  When creating a new file, if "mutable=true" is in the query arguments, the
  operation will create a mutable file instead of an immutable one.
hunk ./docs/frontends/webapi.rst 388
 
  If "mutable=true" is in the query arguments, the operation will create a
 mutable file, and return its write-cap in the HTTP response. The default is
- to create an immutable file, returning the read-cap as a response.
+ to create an immutable file, returning the read-cap as a response. If
+ you create a mutable file, you can also use the "mutable-type" query
+ parameter. If "mutable-type=sdmf", then the mutable file will be created
+ in the old SDMF mutable file format. This is desirable for files that
+ need to be read by old clients. If "mutable-type=mdmf", then the file
+ will be created in the new MDMF mutable file format. MDMF mutable files
+ can be downloaded more efficiently, and modified in-place efficiently,
+ but are not compatible with older versions of Tahoe-LAFS. If no
+ "mutable-type" argument is given, the file is created in whatever
+ format was configured in tahoe.cfg.
 
 Creating A New Directory
 ------------------------
hunk ./docs/frontends/webapi.rst 1082
  If a "mutable=true" argument is provided, the operation will create a
  mutable file, and the response body will contain the write-cap instead of
  the upload results page. The default is to create an immutable file,
- returning the upload results page as a response.
+ returning the upload results page as a response. If you create a
+ mutable file, you may choose to specify the format of that mutable file
+ with the "mutable-type" parameter. If "mutable-type=mdmf", then the
+ file will be created as an MDMF mutable file. If "mutable-type=sdmf",
+ then the file will be created as an SDMF mutable file. If no value is
+ specified, the file will be created in whatever format is specified in
+ tahoe.cfg.
 
 
 ``POST /uri/$DIRCAP/[SUBDIRS../]?t=upload``
}
[mutable/layout.py and interfaces.py: add MDMF writer and reader
Kevan Carstensen <kevan@isnotajoke.com>**20100819003304
 Ignore-this: 44400fec923987b62830da2ed5075fb4
 
 The MDMF writer is responsible for keeping state as plaintext is
 gradually processed into share data by the upload process. When the
 upload finishes, it will write all of its share data to a remote server,
 reporting its status back to the publisher.
 
 The MDMF reader is responsible for abstracting an MDMF file as it sits
 on the grid from the downloader; specifically, by receiving and
 responding to requests for arbitrary data within the MDMF file.
 
 The interfaces.py file has also been modified to contain an interface
 for the writer.
] {
hunk ./src/allmydata/interfaces.py 7
      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable
 
 HASH_SIZE=32
+SALT_SIZE=16
+
+SDMF_VERSION=0
+MDMF_VERSION=1
 
 Hash = StringConstraint(maxLength=HASH_SIZE,
                         minLength=HASH_SIZE)# binary format 32-byte SHA256 hash
hunk ./src/allmydata/interfaces.py 424
         """
 
 
+class IMutableSlotWriter(Interface):
+    """
+    The interface for a writer around a mutable slot on a remote server.
+    """
+    def set_checkstring(checkstring, *args):
+        """
+        Set the checkstring that I will pass to the remote server when
+        writing.
+
+            @param checkstring A packed checkstring to use.
+
+        Note that implementations can differ in which semantics they
+        wish to support for set_checkstring -- they can, for example,
+        build the checkstring themselves from its constituents, or
+        accept a pre-packed value.
+        """
+
+    def get_checkstring():
+        """
+        Get the checkstring that I think currently exists on the remote
+        server.
+        """
+
+    def put_block(data, segnum, salt):
+        """
+        Add a block and salt to the share.
+        """
+
+    def put_encprivkey(encprivkey):
+        """
+        Add the encrypted private key to the share.
+        """
+
+    def put_blockhashes(blockhashes=list):
+        """
+        Add the block hash tree to the share.
+        """
+
+    def put_sharehashes(sharehashes=dict):
+        """
+        Add the share hash chain to the share.
+        """
+
+    def get_signable():
+        """
+        Return the part of the share that needs to be signed.
+        """
+
+    def put_signature(signature):
+        """
+        Add the signature to the share.
+        """
+
+    def put_verification_key(verification_key):
+        """
+        Add the verification key to the share.
+        """
+
+    def finish_publishing():
+        """
+        Do anything necessary to finish writing the share to a remote
+        server. I require that no further publishing needs to take place
+        after this method has been called.
+        """
+
+
 class IURI(Interface):
     def init_from_string(uri):
         """Accept a string (as created by my to_string() method) and populate
hunk ./src/allmydata/mutable/layout.py 4
 
 import struct
 from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError
+from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \
+                                 MDMF_VERSION, IMutableSlotWriter
+from allmydata.util import mathutil, observer
+from twisted.python import failure
+from twisted.internet import defer
+from zope.interface import implements
+
+
+# These strings describe the format of the packed structs they help process
+# Here's what they mean:
+#
+#  PREFIX:
+#    >: Big-endian byte order; the most significant byte is first (leftmost).
+#    B: The version information; an 8-bit version identifier. Stored as
+#       an unsigned char. This is currently 0 (for SDMF); our
+#       modifications will turn it into 1 (for MDMF).
+#    Q: The sequence number; this is sort of like a revision history for
+#       mutable files; they start at 1 and increase as they are changed after
+#       being uploaded. Stored as an unsigned long long, which is 8 bytes in
+#       length.
+#  32s: The root hash of the share hash tree. We use sha-256d, so we use 32 
+#       characters = 32 bytes to store the value.
+#  16s: The salt for the readkey. This is a 16-byte random value, stored as
+#       16 characters.
+#
+#  SIGNED_PREFIX additions, things that are covered by the signature:
+#    B: The "k" encoding parameter. We store this as an 8-bit character, 
+#       which is convenient because our erasure coding scheme cannot 
+#       encode if you ask for more than 255 pieces.
+#    B: The "N" encoding parameter. Stored as an 8-bit character for the 
+#       same reasons as above.
+#    Q: The segment size of the uploaded file. This will essentially be the
+#       length of the file in SDMF. An unsigned long long, so we can store 
+#       files of quite large size.
+#    Q: The data length of the uploaded file. Modulo padding, this will be
+#       the same as the segment size field. Like the segment size field,
+#       it is an unsigned long long and can be quite large.
+#
+#   HEADER additions:
+#     L: The offset of the signature of this. An unsigned long.
+#     L: The offset of the share hash chain. An unsigned long.
+#     L: The offset of the block hash tree. An unsigned long.
+#     L: The offset of the share data. An unsigned long.
+#     Q: The offset of the encrypted private key. An unsigned long long, to
+#        account for the possibility of a lot of share data.
+#     Q: The offset of the EOF. An unsigned long long, to account for the
+#        possibility of a lot of share data.
+# 
+#  After all of these, we have the following:
+#    - The verification key: Occupies the space between the end of the header
+#      and the start of the signature (i.e. data[HEADER_LENGTH:o['signature']]).
+#    - The signature, which goes from the signature offset to the share hash
+#      chain offset.
+#    - The share hash chain, which goes from the share hash chain offset to
+#      the block hash tree offset.
+#    - The share data, which goes from the share data offset to the encrypted
+#      private key offset.
+#    - The encrypted private key, which goes from the encrypted private key
+#      offset until the end of the file.
+# 
+#  The block hash tree in this encoding has only one leaf (SDMF files have a
+#  single segment), so the offset of the share data will be 32 bytes more
+#  than the offset of the block hash tree. Given this, we may need to check
+#  to see how many bytes a reasonably sized block hash tree will take up.
 
 PREFIX = ">BQ32s16s" # each version has a different prefix
 SIGNED_PREFIX = ">BQ32s16s BBQQ" # this is covered by the signature
hunk ./src/allmydata/mutable/layout.py 73
 SIGNED_PREFIX_LENGTH = struct.calcsize(SIGNED_PREFIX)
 HEADER = ">BQ32s16s BBQQ LLLLQQ" # includes offsets
 HEADER_LENGTH = struct.calcsize(HEADER)
+OFFSETS = ">LLLLQQ"
+OFFSETS_LENGTH = struct.calcsize(OFFSETS)
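+
+# For reference, the sizes implied by these format strings (">" means
+# big-endian, no padding):
+#
+#   struct.calcsize(PREFIX)        == 57   (1 + 8 + 32 + 16)
+#   struct.calcsize(SIGNED_PREFIX) == 75   (57 + 1 + 1 + 8 + 8)
+#   struct.calcsize(HEADER)        == 107  (75 + 4*4 + 2*8)
+#   struct.calcsize(OFFSETS)       == 32   (4*4 + 2*8)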
 
hunk ./src/allmydata/mutable/layout.py 76
+# These are still used for some tests.
 def unpack_header(data):
     o = {}
     (version,
hunk ./src/allmydata/mutable/layout.py 92
      o['EOF']) = struct.unpack(HEADER, data[:HEADER_LENGTH])
     return (version, seqnum, root_hash, IV, k, N, segsize, datalen, o)
 
-def unpack_prefix_and_signature(data):
-    assert len(data) >= HEADER_LENGTH, len(data)
-    prefix = data[:SIGNED_PREFIX_LENGTH]
-
-    (version,
-     seqnum,
-     root_hash,
-     IV,
-     k, N, segsize, datalen,
-     o) = unpack_header(data)
-
-    if version != 0:
-        raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
-
-    if len(data) < o['share_hash_chain']:
-        raise NeedMoreDataError(o['share_hash_chain'],
-                                o['enc_privkey'], o['EOF']-o['enc_privkey'])
-
-    pubkey_s = data[HEADER_LENGTH:o['signature']]
-    signature = data[o['signature']:o['share_hash_chain']]
-
-    return (seqnum, root_hash, IV, k, N, segsize, datalen,
-            pubkey_s, signature, prefix)
-
 def unpack_share(data):
     assert len(data) >= HEADER_LENGTH
     o = {}
hunk ./src/allmydata/mutable/layout.py 139
             pubkey, signature, share_hash_chain, block_hash_tree,
             share_data, enc_privkey)
 
-def unpack_share_data(verinfo, hash_and_data):
-    (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, o_t) = verinfo
-
-    # hash_and_data starts with the share_hash_chain, so figure out what the
-    # offsets really are
-    o = dict(o_t)
-    o_share_hash_chain = 0
-    o_block_hash_tree = o['block_hash_tree'] - o['share_hash_chain']
-    o_share_data = o['share_data'] - o['share_hash_chain']
-    o_enc_privkey = o['enc_privkey'] - o['share_hash_chain']
-
-    share_hash_chain_s = hash_and_data[o_share_hash_chain:o_block_hash_tree]
-    share_hash_format = ">H32s"
-    hsize = struct.calcsize(share_hash_format)
-    assert len(share_hash_chain_s) % hsize == 0, len(share_hash_chain_s)
-    share_hash_chain = []
-    for i in range(0, len(share_hash_chain_s), hsize):
-        chunk = share_hash_chain_s[i:i+hsize]
-        (hid, h) = struct.unpack(share_hash_format, chunk)
-        share_hash_chain.append( (hid, h) )
-    share_hash_chain = dict(share_hash_chain)
-    block_hash_tree_s = hash_and_data[o_block_hash_tree:o_share_data]
-    assert len(block_hash_tree_s) % 32 == 0, len(block_hash_tree_s)
-    block_hash_tree = []
-    for i in range(0, len(block_hash_tree_s), 32):
-        block_hash_tree.append(block_hash_tree_s[i:i+32])
-
-    share_data = hash_and_data[o_share_data:o_enc_privkey]
-
-    return (share_hash_chain, block_hash_tree, share_data)
-
-
-def pack_checkstring(seqnum, root_hash, IV):
-    return struct.pack(PREFIX,
-                       0, # version,
-                       seqnum,
-                       root_hash,
-                       IV)
-
 def unpack_checkstring(checkstring):
     cs_len = struct.calcsize(PREFIX)
     version, seqnum, root_hash, IV = struct.unpack(PREFIX, checkstring[:cs_len])
hunk ./src/allmydata/mutable/layout.py 146
         raise UnknownVersionError("got mutable share version %d, but I only understand version 0" % version)
     return (seqnum, root_hash, IV)
 
-def pack_prefix(seqnum, root_hash, IV,
-                required_shares, total_shares,
-                segment_size, data_length):
-    prefix = struct.pack(SIGNED_PREFIX,
-                         0, # version,
-                         seqnum,
-                         root_hash,
-                         IV,
-
-                         required_shares,
-                         total_shares,
-                         segment_size,
-                         data_length,
-                         )
-    return prefix
 
 def pack_offsets(verification_key_length, signature_length,
                  share_hash_chain_length, block_hash_tree_length,
hunk ./src/allmydata/mutable/layout.py 192
                            encprivkey])
     return final_share
 
+def pack_prefix(seqnum, root_hash, IV,
+                required_shares, total_shares,
+                segment_size, data_length):
+    prefix = struct.pack(SIGNED_PREFIX,
+                         0, # version,
+                         seqnum,
+                         root_hash,
+                         IV,
+                         required_shares,
+                         total_shares,
+                         segment_size,
+                         data_length,
+                         )
+    return prefix
+
+
+class SDMFSlotWriteProxy:
+    """
+    I represent a remote write slot for an SDMF mutable file. I build a
+    share in memory, and then write it in one piece to the remote
+    server. This mimics how SDMF shares were built before MDMF (and the
+    new MDMF uploader), but provides that functionality in a way that
+    allows the MDMF uploader to be built without much special-casing for
+    file format, which makes the uploader code more readable.
+    """
+    implements(IMutableSlotWriter)
+    def __init__(self,
+                 shnum,
+                 rref, # a remote reference to a storage server
+                 storage_index,
+                 secrets, # (write_enabler, renew_secret, cancel_secret)
+                 seqnum, # the sequence number of the mutable file
+                 required_shares,
+                 total_shares,
+                 segment_size,
+                 data_length): # the length of the original file
+        self.shnum = shnum
+        self._rref = rref
+        self._storage_index = storage_index
+        self._secrets = secrets
+        self._seqnum = seqnum
+        self._required_shares = required_shares
+        self._total_shares = total_shares
+        self._segment_size = segment_size
+        self._data_length = data_length
+
+        # This is an SDMF file, so it should have only one segment, so, 
+        # modulo padding of the data length, the segment size and the
+        # data length should be the same.
+        expected_segment_size = mathutil.next_multiple(data_length,
+                                                       self._required_shares)
+        assert expected_segment_size == segment_size
+
+        self._block_size = self._segment_size / self._required_shares
+
+        # This is meant to mimic how SDMF files were built before MDMF
+        # entered the picture: we generate each share in its entirety,
+        # then push it off to the storage server in one write. When
+        # callers call set_*, they are just populating this dict.
+        # finish_publishing will stitch these pieces together into a
+        # coherent share, and then write the coherent share to the
+        # storage server.
+        self._share_pieces = {}
+
+        # This tells the write logic what checkstring to use when
+        # writing remote shares.
+        self._testvs = []
+
+        self._readvs = [(0, struct.calcsize(PREFIX))]
+
+
+    def set_checkstring(self, checkstring_or_seqnum,
+                              root_hash=None,
+                              salt=None):
+        """
+        Set the checkstring that I will pass to the remote server when
+        writing.
+
+            @param checkstring_or_seqnum: A packed checkstring to use,
+                   or a sequence number. If a sequence number, then
+                   root_hash and salt must also be provided, and I will
+                   pack the three into a checkstring; otherwise I treat
+                   this argument as the literal packed checkstring.
+
+        Note that implementations can differ in which semantics they
+        wish to support for set_checkstring -- they can, for example,
+        build the checkstring themselves from its constituents, or
+        accept a pre-packed value.
+        """
+        if root_hash and salt:
+            checkstring = struct.pack(PREFIX,
+                                      0,
+                                      checkstring_or_seqnum,
+                                      root_hash,
+                                      salt)
+        else:
+            checkstring = checkstring_or_seqnum
+        self._testvs = [(0, len(checkstring), "eq", checkstring)]
+
+
+    def get_checkstring(self):
+        """
+        Get the checkstring that I think currently exists on the remote
+        server.
+        """
+        if self._testvs:
+            return self._testvs[0][3]
+        return ""
+
+
+    def put_block(self, data, segnum, salt):
+        """
+        Add a block and salt to the share.
+        """
+        # SDMF files have only one segment
+        assert segnum == 0
+        assert len(data) == self._block_size
+        assert len(salt) == SALT_SIZE
+
+        self._share_pieces['sharedata'] = data
+        self._share_pieces['salt'] = salt
+
+        # TODO: Figure out something intelligent to return.
+        return defer.succeed(None)
+
+
+    def put_encprivkey(self, encprivkey):
+        """
+        Add the encrypted private key to the share.
+        """
+        self._share_pieces['encprivkey'] = encprivkey
+
+        return defer.succeed(None)
+
+
+    def put_blockhashes(self, blockhashes):
+        """
+        Add the block hash tree to the share.
+        """
+        assert isinstance(blockhashes, list)
+        for h in blockhashes:
+            assert len(h) == HASH_SIZE
+
+        # serialize the blockhashes, then set them.
+        blockhashes_s = "".join(blockhashes)
+        self._share_pieces['block_hash_tree'] = blockhashes_s
+
+        return defer.succeed(None)
+
+
+    def put_sharehashes(self, sharehashes):
+        """
+        Add the share hash chain to the share.
+        """
+        assert isinstance(sharehashes, dict)
+        for h in sharehashes.itervalues():
+            assert len(h) == HASH_SIZE
+
+        # serialize the sharehashes, then set them.
+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
+                                 for i in sorted(sharehashes.keys())])
+        self._share_pieces['share_hash_chain'] = sharehashes_s
+
+        return defer.succeed(None)
+
+
+    def put_root_hash(self, root_hash):
+        """
+        Add the root hash to the share.
+        """
+        assert len(root_hash) == HASH_SIZE
+
+        self._share_pieces['root_hash'] = root_hash
+
+        return defer.succeed(None)
+
+
+    def put_salt(self, salt):
+        """
+        Add a salt to an empty SDMF file.
+        """
+        assert len(salt) == SALT_SIZE
+
+        self._share_pieces['salt'] = salt
+        self._share_pieces['sharedata'] = ""
+
+
+    def get_signable(self):
+        """
+        Return the part of the share that needs to be signed.
+
+        SDMF writers need to sign the packed representation of the
+        first eight fields of the remote share, that is:
+            - version number (0)
+            - sequence number
+            - root of the share hash tree
+            - salt
+            - k
+            - n
+            - segsize
+            - datalen
+
+        This method is responsible for returning that to callers.
+        """
+        return struct.pack(SIGNED_PREFIX,
+                           0,
+                           self._seqnum,
+                           self._share_pieces['root_hash'],
+                           self._share_pieces['salt'],
+                           self._required_shares,
+                           self._total_shares,
+                           self._segment_size,
+                           self._data_length)
+
+
+    def put_signature(self, signature):
+        """
+        Add the signature to the share.
+        """
+        self._share_pieces['signature'] = signature
+
+        return defer.succeed(None)
+
+
+    def put_verification_key(self, verification_key):
+        """
+        Add the verification key to the share.
+        """
+        self._share_pieces['verification_key'] = verification_key
+
+        return defer.succeed(None)
+
+
+    def get_verinfo(self):
+        """
+        I return my verinfo tuple. This is used by the ServermapUpdater
+        to keep track of versions of mutable files.
+
+        The verinfo tuple for SDMF files contains:
+            - seqnum
+            - root hash
+            - the 16-byte IV (salt)
+            - segsize
+            - datalen
+            - k
+            - n
+            - prefix (the thing that you sign)
+            - a tuple of offsets
+
+        The verinfo tuple for MDMF files has the same structure, but
+        contains a blank in place of the IV, since MDMF stores a salt
+        per segment rather than a single IV.
+        """
+        return (self._seqnum,
+                self._share_pieces['root_hash'],
+                self._share_pieces['salt'],
+                self._segment_size,
+                self._data_length,
+                self._required_shares,
+                self._total_shares,
+                self.get_signable(),
+                self._get_offsets_tuple())
+
+    def _get_offsets_dict(self):
+        post_offset = HEADER_LENGTH
+        offsets = {}
+
+        verification_key_length = len(self._share_pieces['verification_key'])
+        o1 = offsets['signature'] = post_offset + verification_key_length
+
+        signature_length = len(self._share_pieces['signature'])
+        o2 = offsets['share_hash_chain'] = o1 + signature_length
+
+        share_hash_chain_length = len(self._share_pieces['share_hash_chain'])
+        o3 = offsets['block_hash_tree'] = o2 + share_hash_chain_length
+
+        block_hash_tree_length = len(self._share_pieces['block_hash_tree'])
+        o4 = offsets['share_data'] = o3 + block_hash_tree_length
+
+        share_data_length = len(self._share_pieces['sharedata'])
+        o5 = offsets['enc_privkey'] = o4 + share_data_length
+
+        encprivkey_length = len(self._share_pieces['encprivkey'])
+        offsets['EOF'] = o5 + encprivkey_length
+        return offsets
+
+
+    def _get_offsets_tuple(self):
+        offsets = self._get_offsets_dict()
+        return tuple([(key, value) for key, value in offsets.items()])
+
+
+    def _pack_offsets(self):
+        offsets = self._get_offsets_dict()
+        return struct.pack(">LLLLQQ",
+                           offsets['signature'],
+                           offsets['share_hash_chain'],
+                           offsets['block_hash_tree'],
+                           offsets['share_data'],
+                           offsets['enc_privkey'],
+                           offsets['EOF'])
+
+
+    def finish_publishing(self):
+        """
+        Do anything necessary to finish writing the share to a remote
+        server. I require that no further publishing needs to take place
+        after this method has been called.
+        """
+        for k in ["sharedata", "encprivkey", "signature", "verification_key",
+                  "share_hash_chain", "block_hash_tree"]:
+            assert k in self._share_pieces
+        # This is the only method that actually writes something to the
+        # remote server.
+        # First, we need to pack the share into data that we can write
+        # to the remote server in one write.
+        offsets = self._pack_offsets()
+        prefix = self.get_signable()
+        final_share = "".join([prefix,
+                               offsets,
+                               self._share_pieces['verification_key'],
+                               self._share_pieces['signature'],
+                               self._share_pieces['share_hash_chain'],
+                               self._share_pieces['block_hash_tree'],
+                               self._share_pieces['sharedata'],
+                               self._share_pieces['encprivkey']])
+
+        # Our only data vector is going to be writing the final share,
+        # in its entirety.
+        datavs = [(0, final_share)]
+
+        if not self._testvs:
+            # Our caller has not provided us with another checkstring
+            # yet, so we assume that we are writing a new share, and set
+            # a test vector that will allow a new share to be written.
+            self._testvs = []
+            self._testvs.append(tuple([0, 1, "eq", ""]))
+
+        tw_vectors = {}
+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
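+        # tw_vectors maps shnum to (test vector list, data vector list,
+        # new_length); e.g., for a brand-new share 0:
+        #   {0: ([(0, 1, "eq", "")], [(0, final_share)], None)}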
+        return self._rref.callRemote("slot_testv_and_readv_and_writev",
+                                     self._storage_index,
+                                     self._secrets,
+                                     tw_vectors,
+                                     # TODO is it useful to read something?
+                                     self._readvs)
+
+
+MDMFHEADER = ">BQ32sBBQQ QQQQQQ"
+MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"
+MDMFHEADERSIZE = struct.calcsize(MDMFHEADER)
+MDMFHEADERWITHOUTOFFSETSSIZE = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
+MDMFCHECKSTRING = ">BQ32s"
+MDMFSIGNABLEHEADER = ">BQ32sBBQQ"
+MDMFOFFSETS = ">QQQQQQ"
+MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS)
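+
+# For reference: MDMFHEADERSIZE works out to 107 bytes -- the 59-byte
+# signed header (">BQ32sBBQQ") plus six 8-byte offsets, matching the
+# offset table in the layout comment below.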
+
+class MDMFSlotWriteProxy:
+    """
+    I represent a remote write slot for an MDMF mutable file.
+
+    I abstract away from my caller the details of block and salt
+    management, and the implementation of the on-disk format for MDMF
+    shares.
+    """
+    implements(IMutableSlotWriter)
+
+    # Expected layout, MDMF:
+    # offset:     size:       name:
+    #-- signed part --
+    # 0           1           version number (01)
+    # 1           8           sequence number 
+    # 9           32          share tree root hash
+    # 41          1           The "k" encoding parameter
+    # 42          1           The "N" encoding parameter
+    # 43          8           The segment size of the uploaded file
+    # 51          8           The data length of the original plaintext
+    #-- end signed part --
+    # 59          8           The offset of the encrypted private key
+    # 67          8           The offset of the block hash tree
+    # 75          8           The offset of the share hash chain
+    # 83          8           The offset of the signature
+    # 91          8           The offset of the verification key
+    # 99          8           The offset of the EOF
+    #
+    # followed by salts and share data, the encrypted private key, the
+    # block hash tree, the salt hash tree, the share hash chain, a
+    # signature over the first eight fields, and a verification key.
+    # 
+    # The checkstring is the first three fields -- the version number,
+    # sequence number, and root hash. This is consistent
+    # in meaning to what we have with SDMF files, except now instead of
+    # using the literal salt, we use a value derived from all of the
+    # salts -- the share hash root.
+    # 
+    # The salt is stored before the block for each segment. The block
+    # hash tree is computed over the combination of block and salt for
+    # each segment. In this way, we get integrity checking for both
+    # block and salt with the current block hash tree arrangement.
+    # 
+    # The ordering of the offsets is different to reflect the dependencies
+    # that we'll run into with an MDMF file. The expected write flow is
+    # something like this:
+    #
+    #   0: Initialize with the sequence number, encoding parameters and
+    #      data length. From this, we can deduce the number of segments,
+    #      and where they should go. We can also figure out where the
+    #      encrypted private key should go, because we can figure out how
+    #      big the share data will be.
+    # 
+    #   1: Encrypt, encode, and upload the file in chunks. Do something
+    #      like 
+    #
+    #       put_block(data, segnum, salt)
+    #
+    #      to write a block and a salt to the disk. We can do both of
+    #      these operations now because we have enough of the offsets to
+    #      know where to put them.
+    # 
+    #   2: Put the encrypted private key. Use:
+    #
+    #        put_encprivkey(encprivkey)
+    #
+    #      Now that we know the length of the private key, we can fill
+    #      in the offset for the block hash tree.
+    #
+    #   3: We're now in a position to upload the block hash tree for
+    #      a share. Put that using something like:
+    #       
+    #        put_blockhashes(block_hash_tree)
+    #
+    #      Note that block_hash_tree is a list of hashes -- we'll take
+    #      care of the details of serializing that appropriately. When
+    #      we get the block hash tree, we are also in a position to
+    #      calculate the offset for the share hash chain, and fill that
+    #      into the offsets table.
+    #
+    #   4: At the same time, we're in a position to upload the salt hash
+    #      tree. This is a Merkle tree over all of the salts. We use a
+    #      Merkle tree so that we can validate each block,salt pair as
+    #      we download them later. We do this using 
+    #
+    #        put_salthashes(salt_hash_tree)
+    #
+    #      When you do this, I automatically put the root of the tree
+    #      (the hash at index 0 of the list) in its appropriate slot in
+    #      the signed prefix of the share.
+    #
+    #   5: We're now in a position to upload the share hash chain for
+    #      a share. Do that with something like:
+    #      
+    #        put_sharehashes(share_hash_chain) 
+    #
+    #      share_hash_chain should be a dictionary mapping shnums to 
+    #      32-byte hashes -- the wrapper handles serialization.
+    #      We'll know where to put the signature at this point, also.
+    #      The root of this tree will be put explicitly in the next
+    #      step.
+    # 
+    #      TODO: Why? Why not just include it in the tree here?
+    #
+    #   6: Before putting the signature, we must first put the
+    #      root_hash. Do this with:
+    # 
+    #        put_root_hash(root_hash).
+    #      
+    #      In terms of knowing where to put this value, it was always
+    #      possible to place it, but it makes sense semantically to
+    #      place it after the share hash tree, so that's why you do it
+    #      in this order.
+    #
+    #   7: With the root hash put, we can now sign the header. Use:
+    #
+    #        get_signable()
+    #
+    #      to get the part of the header that you want to sign, and use:
+    #       
+    #        put_signature(signature)
+    #
+    #      to write your signature to the remote server.
+    #
+    #   8: Add the verification key, and finish. Do:
+    #
+    #        put_verification_key(key) 
+    #
+    #      and 
+    #
+    #        finish_publishing()
+    #
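+    # A condensed sketch of that flow (illustrative only; deferred
+    # chaining and error handling are omitted, and blocks_and_salts,
+    # sign(), the hash-tree variables, and the keys are placeholders):
+    #
+    #   w = MDMFSlotWriteProxy(shnum, rref, storage_index, secrets,
+    #                          seqnum, k, n, segsize, datalen)
+    #   for segnum, (block, salt) in enumerate(blocks_and_salts):
+    #       w.put_block(block, segnum, salt)
+    #   w.put_encprivkey(encprivkey)
+    #   w.put_blockhashes(block_hash_tree)
+    #   w.put_sharehashes(share_hash_chain)
+    #   w.put_root_hash(root_hash)
+    #   w.put_signature(sign(w.get_signable()))
+    #   w.put_verification_key(verification_key)
+    #   d = w.finish_publishing()
+    #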
+    # Checkstring management:
+    # 
+    # To write to a mutable slot, we have to provide test vectors to ensure
+    # that we are writing to the same data that we think we are. These
+    # vectors allow us to detect uncoordinated writes; that is, writes
+    # where both we and some other shareholder are writing to the
+    # mutable slot, and to report those back to the parts of the program
+    # doing the writing. 
+    #
+    # With SDMF, this was easy -- all of the share data was written in
+    # one go, so it was easy to detect uncoordinated writes, and we only
+    # had to do it once. With MDMF, not all of the file is written at
+    # once.
+    #
+    # If a share is new, we write out as much of the header as we can
+    # before writing out anything else. This gives other writers a
+    # canary that they can use to detect uncoordinated writes, and, if
+    # they do the same thing, gives us the same canary. We then update
+    # the share. We won't be able to write out two fields of the header
+    # -- the share tree hash and the salt hash -- until we finish
+    # writing out the share. We only require the writer to provide the
+    # initial checkstring, and keep track of what it should be after
+    # updates ourselves.
+    #
+    # If we haven't written anything yet, then on the first write (which
+    # will probably be a block + salt of a share), we'll also write out
+    # the header. On subsequent passes, we'll expect to see the header.
+    # This changes in two places:
+    #
+    #   - When we write out the salt hash
+    #   - When we write out the root of the share hash tree
+    #
+    # since these values will change the header. It is possible that we 
+    # can just make those be written in one operation to minimize
+    # disruption.
+    def __init__(self,
+                 shnum,
+                 rref, # a remote reference to a storage server
+                 storage_index,
+                 secrets, # (write_enabler, renew_secret, cancel_secret)
+                 seqnum, # the sequence number of the mutable file
+                 required_shares,
+                 total_shares,
+                 segment_size,
+                 data_length): # the length of the original file
+        self.shnum = shnum
+        self._rref = rref
+        self._storage_index = storage_index
+        self._seqnum = seqnum
+        self._required_shares = required_shares
+        assert self.shnum >= 0 and self.shnum < total_shares
+        self._total_shares = total_shares
+        # We build up the offset table as we write things. It is the
+        # last thing we write to the remote server. 
+        self._offsets = {}
+        self._testvs = []
+        # This is a list of write vectors that will be sent to our
+        # remote server once we are directed to write things there.
+        self._writevs = []
+        self._secrets = secrets
+        # The segment size needs to be a multiple of the k parameter --
+        # any padding should have been carried out by the publisher
+        # already.
+        assert segment_size % required_shares == 0
+        self._segment_size = segment_size
+        self._data_length = data_length
+
+        # These are set later -- we define them here so that we can
+        # check for their existence easily
+
+        # This is the root of the share hash tree -- the Merkle tree
+        # over the roots of the block hash trees computed for shares in
+        # this upload.
+        self._root_hash = None
+
+        # We haven't yet written anything to the remote bucket. By
+        # setting this, we tell the _write method as much. The write
+        # method will then know that it also needs to add a write vector
+        # for the checkstring (or what we have of it) to the first write
+        # request. We'll then record that value for future use.  If
+        # we're expecting something to be there already, we need to call
+        # set_checkstring before we write anything to tell the first
+        # write about that.
+        self._written = False
+
+        # When writing data to the storage servers, we get a read vector
+        # for free. We'll read the checkstring, which will help us
+        # figure out what's gone wrong if a write fails.
+        self._readv = [(0, struct.calcsize(MDMFCHECKSTRING))]
+
+        # We calculate the number of segments because it tells us where
+        # the share data (blocks and salts) ends and the encrypted
+        # private key begins, and also because it provides a useful
+        # amount of bounds checking.
+        self._num_segments = mathutil.div_ceil(self._data_length,
+                                               self._segment_size)
+        self._block_size = self._segment_size / self._required_shares
+        # We also calculate the share size, to help us with block
+        # constraints later.
+        tail_size = self._data_length % self._segment_size
+        if not tail_size:
+            self._tail_block_size = self._block_size
+        else:
+            self._tail_block_size = mathutil.next_multiple(tail_size,
+                                                           self._required_shares)
+            self._tail_block_size /= self._required_shares
+
+        # We already know where the sharedata starts; right after the end
+        # of the header (which is defined as the signable part + the
+        # offsets). We can also calculate where the encrypted private
+        # key begins from what we now know.
+        self._actual_block_size = self._block_size + SALT_SIZE
+        data_size = self._actual_block_size * (self._num_segments - 1)
+        data_size += self._tail_block_size
+        data_size += SALT_SIZE
+        self._offsets['enc_privkey'] = MDMFHEADERSIZE
+        self._offsets['enc_privkey'] += data_size
+        # We'll wait for the rest. Callers can now call my "put_block" and
+        # "set_checkstring" methods.
+
+
+    def set_checkstring(self,
+                        seqnum_or_checkstring,
+                        root_hash=None,
+                        salt=None):
+        """
+        Set the checkstring for the given shnum.
+
+        This can be invoked in one of two ways.
+
+        With one argument, I assume that you are giving me a literal
+        checkstring -- e.g., the output of get_checkstring. I will then
+        set that checkstring as it is. This form is used by unit tests.
+
+        With two arguments, I assume that you are giving me a sequence
+        number and root hash to make a checkstring from. In that case, I
+        will build a checkstring and set it for you. This form is used
+        by the publisher.
+
+        By default, I assume that I am writing new shares to the grid.
+        If you don't explicitly set your own checkstring, I will use
+        one that requires that the remote share not exist. You will want
+        to use this method if you are updating a share in-place;
+        otherwise, writes will fail.
+        """
+        # You're allowed to overwrite checkstrings with this method;
+        # I assume that users know what they are doing when they call
+        # it.
+        if root_hash:
+            checkstring = struct.pack(MDMFCHECKSTRING,
+                                      1,
+                                      seqnum_or_checkstring,
+                                      root_hash)
+        else:
+            checkstring = seqnum_or_checkstring
+
+        if checkstring == "":
+            # We special-case the empty string: len("") == 0, but
+            # testing for an empty share on the storage server requires
+            # a read of length 1 that compares equal to "", which is
+            # what an empty checkstring means.
+            self._testvs = []
+        else:
+            self._testvs = []
+            self._testvs.append((0, len(checkstring), "eq", checkstring))
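+
+        # Usage sketch: pass through a packed value (e.g. the output of
+        # get_checkstring()) unchanged, or call me as
+        # set_checkstring(seqnum, root_hash) and I will pack it myself.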
+
+
+    def __repr__(self):
+        return "MDMFSlotWriteProxy for share %d" % self.shnum
+
+
+    def get_checkstring(self):
+        """
+        Given a share number, I return a representation of what the
+        checkstring for that share on the server will look like.
+
+        I am mostly used for tests.
+        """
+        if self._root_hash:
+            roothash = self._root_hash
+        else:
+            roothash = "\x00" * 32
+        return struct.pack(MDMFCHECKSTRING,
+                           1,
+                           self._seqnum,
+                           roothash)
+
+
+    def put_block(self, data, segnum, salt):
+        """
+        I queue a write vector for the data, salt, and segment number
+        provided to me. I return None, as I do not actually cause
+        anything to be written yet.
+        """
+        if segnum >= self._num_segments:
+            raise LayoutInvalid("I won't overwrite the private key")
+        if len(salt) != SALT_SIZE:
+            raise LayoutInvalid("I was given a salt of size %d, but "
+                                "I wanted a salt of size %d")
+        if segnum + 1 == self._num_segments:
+            if len(data) != self._tail_block_size:
+                raise LayoutInvalid("I was given the wrong size block to write")
+        elif len(data) != self._block_size:
+            raise LayoutInvalid("I was given the wrong size block to write")
+
+        # We want to write at len(MDMFHEADER) + segnum * block_size.
+
+        offset = MDMFHEADERSIZE + (self._actual_block_size * segnum)
+        data = salt + data
+
+        self._writevs.append(tuple([offset, data]))
+
+
+    def put_encprivkey(self, encprivkey):
+        """
+        I queue a write vector for the encrypted private key provided to
+        me.
+        """
+        assert self._offsets
+        assert self._offsets['enc_privkey']
+        # You shouldn't re-write the encprivkey after the block hash
+        # tree is written, since that could cause the private key to run
+        # into the block hash tree. When it writes the block hash
+        # tree, the block hash tree writing method records the offset
+        # of the share hash chain. So that's a good indicator of whether
+        # or not the block hash tree has been written.
+        if "share_hash_chain" in self._offsets:
+            raise LayoutInvalid("You must write this before the block hash tree")
+
+        self._offsets['block_hash_tree'] = self._offsets['enc_privkey'] + \
+            len(encprivkey)
+        self._writevs.append(tuple([self._offsets['enc_privkey'], encprivkey]))
+
+
+    def put_blockhashes(self, blockhashes):
+        """
+        I queue a write vector to put the block hash tree in blockhashes
+        onto the remote server.
+
+        The encrypted private key must be queued before the block hash
+        tree, since we need to know how large it is to know where the
+        block hash tree should go. The block hash tree must be put
+        before the share hash chain, since its size determines the
+        offset of the share hash chain.
+        """
+        assert self._offsets
+        assert isinstance(blockhashes, list)
+        if "block_hash_tree" not in self._offsets:
+            raise LayoutInvalid("You must put the encrypted private key "
+                                "before you put the block hash tree")
+        # If written, the share hash chain causes the signature offset
+        # to be defined.
+        if "signature" in self._offsets:
+            raise LayoutInvalid("You must put the block hash tree before "
+                                "you put the share hash chain")
+        blockhashes_s = "".join(blockhashes)
+        self._offsets['share_hash_chain'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
+
+        self._writevs.append(tuple([self._offsets['block_hash_tree'],
+                                  blockhashes_s]))
+
+
+    def put_sharehashes(self, sharehashes):
+        """
+        I queue a write vector to put the share hash chain in my
+        argument onto the remote server.
+
+        The block hash tree must be queued before the share hash chain,
+        since we need to know where the block hash tree ends before we
+        can know where the share hash chain starts. The share hash chain
+        must be put before the signature, since the length of the packed
+        share hash chain determines the offset of the signature. Also,
+        semantically, you must know the share hash chain before you can
+        generate a valid signature, since the root hash is derived from
+        it.
+        """
+        assert isinstance(sharehashes, dict)
+        if "share_hash_chain" not in self._offsets:
+            raise LayoutInvalid("You need to put the salt hash tree before "
+                                "you can put the share hash chain")
+        # The signature comes after the share hash chain. If the
+        # signature has already been queued, we must not write another
+        # share hash chain. The signature-writing method records the
+        # verification key offset when it runs, so we look for that.
+        if "verification_key" in self._offsets:
+            raise LayoutInvalid("You must write the share hash chain "
+                                "before you write the signature")
+        sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
+                                  for i in sorted(sharehashes.keys())])
+        self._offsets['signature'] = self._offsets['share_hash_chain'] + len(sharehashes_s)
+        self._writevs.append(tuple([self._offsets['share_hash_chain'],
+                            sharehashes_s]))
+
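+    # As a minimal sketch of the ">H32s" packing used above (the values
+    # here are hypothetical): each entry is a 2-byte node index followed
+    # by a 32-byte hash, 34 bytes in all.
+    #
+    #   import struct
+    #   entry = struct.pack(">H32s", 1, "a" * 32)
+    #   assert len(entry) == 34
+    #   index, hash_ = struct.unpack(">H32s", entry)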
+
+    def put_root_hash(self, roothash):
+        """
+        Put the root hash (the root of the share hash tree) in the
+        remote slot.
+        """
+        # It does not make sense to be able to put the root 
+        # hash without first putting the share hashes, since you need
+        # the share hashes to generate the root hash.
+        #
+        # The signature offset is defined by the routine that places the
+        # share hash chain, so its presence is a good indicator of
+        # whether or not the share hash chain has been queued.
+        if "signature" not in self._offsets:
+            raise LayoutInvalid("You need to put the share hash chain "
+                                "before you can put the root share hash")
+        if len(roothash) != HASH_SIZE:
+            raise LayoutInvalid("hashes and salts must be exactly %d bytes"
+                                 % HASH_SIZE)
+        self._root_hash = roothash
+        # To write this value, we update the checkstring on the remote
+        # server, which includes it (along with the sequence number).
+        checkstring = self.get_checkstring()
+        self._writevs.append(tuple([0, checkstring]))
+        # This write, if successful, changes the checkstring, so our
+        # internal checkstring will need to be brought into line with
+        # the one on the server when the queued vectors are written.
+
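+    # A minimal sketch of the checkstring write queued above, assuming
+    # MDMFCHECKSTRING is the ">BQ32s" (version, seqnum, root hash) format
+    # that the tests below construct by hand:
+    #
+    #   import struct
+    #   checkstring = struct.pack(">BQ32s", 1, 0, "a" * 32)
+    #   assert len(checkstring) == 41  # queued as a write at offset 0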
+
+    def get_signable(self):
+        """
+        Get the first seven fields of the mutable file; the parts that
+        are signed.
+        """
+        if not self._root_hash:
+            raise LayoutInvalid("You need to set the root hash "
+                                "before getting something to "
+                                "sign")
+        return struct.pack(MDMFSIGNABLEHEADER,
+                           1,
+                           self._seqnum,
+                           self._root_hash,
+                           self._required_shares,
+                           self._total_shares,
+                           self._segment_size,
+                           self._data_length)
+
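+    # A sketch of the signable prefix's size, assuming MDMFSIGNABLEHEADER
+    # is ">BQ32sBBQQ" (the SDMF signed prefix without its 16-byte salt;
+    # this format string is an assumption, not taken from this patch):
+    #
+    #   import struct
+    #   assert struct.calcsize(">BQ32sBBQQ") == 59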
+
+    def put_signature(self, signature):
+        """
+        I queue a write vector for the signature of the MDMF share.
+
+        I require that the root hash and share hash chain have been put
+        to the grid before I will write the signature to the grid.
+        """
+        if "signature" not in self._offsets:
+            raise LayoutInvalid("You must put the share hash chain "
+        # It does not make sense to put a signature without first
+        # putting the root hash and the salt hash (since otherwise
+        # the signature would be incomplete), so we don't allow that.
+                       "before putting the signature")
+        if not self._root_hash:
+            raise LayoutInvalid("You must complete the signed prefix "
+                                "before computing a signature")
+        # If we put the signature after we put the verification key, we
+        # could end up running into the verification key, and will
+        # probably screw up the offsets as well. So we don't allow that.
+        # The method that writes the verification key defines the EOF
+        # offset before writing the verification key, so look for that.
+        if "EOF" in self._offsets:
+            raise LayoutInvalid("You must write the signature before the verification key")
+
+        self._offsets['verification_key'] = self._offsets['signature'] + len(signature)
+        self._writevs.append(tuple([self._offsets['signature'], signature]))
+
+
+    def put_verification_key(self, verification_key):
+        """
+        I queue a write vector for the verification key.
+
+        I require that the signature have been written to the storage
+        server before I allow the verification key to be written to the
+        remote server.
+        """
+        if "verification_key" not in self._offsets:
+            raise LayoutInvalid("You must put the signature before you "
+                                "can put the verification key")
+        self._offsets['EOF'] = self._offsets['verification_key'] + len(verification_key)
+        self._writevs.append(tuple([self._offsets['verification_key'],
+                            verification_key]))
+
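+    # For reference, the full write ordering that the checks in the put_*
+    # methods above enforce, as exercised by the tests below (a sketch;
+    # construction of the proxy itself is elided):
+    #
+    #   for segnum in xrange(num_segments):
+    #       mw.put_block(block, segnum, salt)
+    #   mw.put_encprivkey(encprivkey)
+    #   mw.put_blockhashes(block_hash_tree)    # list of 32-byte hashes
+    #   mw.put_sharehashes(share_hash_chain)   # dict of {index: hash}
+    #   mw.put_root_hash(root_hash)
+    #   mw.put_signature(signature)
+    #   mw.put_verification_key(verification_key)
+    #   d = mw.finish_publishing()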
+
+    def _get_offsets_tuple(self):
+        return tuple([(key, value) for key, value in self._offsets.items()])
+
+
+    def get_verinfo(self):
+        return (self._seqnum,
+                self._root_hash,
+                self._required_shares,
+                self._total_shares,
+                self._segment_size,
+                self._data_length,
+                self.get_signable(),
+                self._get_offsets_tuple())
+
+
+    def finish_publishing(self):
+        """
+        I add a write vector for the offsets table, and then cause all
+        of the write vectors that I've dealt with so far to be published
+        to the remote server, ending the write process.
+        """
+        if "EOF" not in self._offsets:
+            raise LayoutInvalid("You must put the verification key before "
+                                "you can publish the offsets")
+        offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
+        offsets = struct.pack(MDMFOFFSETS,
+                              self._offsets['enc_privkey'],
+                              self._offsets['block_hash_tree'],
+                              self._offsets['share_hash_chain'],
+                              self._offsets['signature'],
+                              self._offsets['verification_key'],
+                              self._offsets['EOF'])
+        self._writevs.append(tuple([offsets_offset, offsets]))
+        encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING)
+        params = struct.pack(">BBQQ",
+                             self._required_shares,
+                             self._total_shares,
+                             self._segment_size,
+                             self._data_length)
+        self._writevs.append(tuple([encoding_parameters_offset, params]))
+        return self._write(self._writevs)
+
+
+    def _write(self, datavs, on_failure=None, on_success=None):
+        """I write the data vectors in datavs to the remote slot."""
+        tw_vectors = {}
+        if not self._testvs:
+            self._testvs = []
+            self._testvs.append(tuple([0, 1, "eq", ""]))
+        if not self._written:
+            # Write a new checkstring to the share when we write it, so
+            # that we have something to check later.
+            new_checkstring = self.get_checkstring()
+            datavs.append((0, new_checkstring))
+            def _first_write():
+                self._written = True
+                self._testvs = [(0, len(new_checkstring), "eq", new_checkstring)]
+            on_success = _first_write
+        tw_vectors[self.shnum] = (self._testvs, datavs, None)
+        d = self._rref.callRemote("slot_testv_and_readv_and_writev",
+                                  self._storage_index,
+                                  self._secrets,
+                                  tw_vectors,
+                                  self._readv)
+        def _result(results):
+            if isinstance(results, failure.Failure) or not results[0]:
+                # Do nothing; the write was unsuccessful.
+                if on_failure: on_failure()
+            else:
+                if on_success: on_success()
+            return results
+        d.addCallback(_result)
+        return d
+
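+    # A minimal sketch of the test-and-write structure sent over the wire
+    # above: a dict mapping share numbers to (testv, writev, new_length)
+    # tuples, where each test vector is (offset, length, operator,
+    # expected_data) and each write vector is (offset, data). The values
+    # here are hypothetical:
+    #
+    #   testvs = [(0, 41, "eq", old_checkstring)]
+    #   datavs = [(0, new_checkstring), (107, "some share data")]
+    #   tw_vectors = {0: (testvs, datavs, None)}  # share number 0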
+
+class MDMFSlotReadProxy:
+    """
+    I read from a mutable slot filled with data written in the MDMF data
+    format (which is described above).
+
+    I can be initialized with some amount of data, which I will use (if
+    it is valid) to eliminate some of the need to fetch it from servers.
+    """
+    def __init__(self,
+                 rref,
+                 storage_index,
+                 shnum,
+                 data=""):
+        # Start the initialization process.
+        self._rref = rref
+        self._storage_index = storage_index
+        self.shnum = shnum
+
+        # Before doing anything, the reader is probably going to want to
+        # verify that the signature is correct. To do that, they'll need
+        # the verification key, and the signature. To get those, we'll
+        # need the offset table. So fetch the offset table on the
+        # assumption that that will be the first thing that a reader is
+        # going to do.
+
+        # The fact that these encoding parameters are None tells us
+        # that we haven't yet fetched them from the remote share, so we
+        # should. We could just not set them, but the checks will be
+        # easier to read if we don't have to use hasattr.
+        self._version_number = None
+        self._sequence_number = None
+        self._root_hash = None
+        # Filled in if we're dealing with an SDMF file. Unused
+        # otherwise.
+        self._salt = None
+        self._required_shares = None
+        self._total_shares = None
+        self._segment_size = None
+        self._data_length = None
+        self._offsets = None
+
+        # If the user has chosen to initialize us with some data, we'll
+        # try to satisfy subsequent data requests with that data before
+        # asking the storage server for it.
+        self._data = data
+        # The filenode cache hands us None if there isn't any cached
+        # data, but the way we index the cached data requires a string,
+        # so convert None to "".
+        if self._data is None:
+            self._data = ""
+
+        self._queue_observers = observer.ObserverList()
+        self._queue_errbacks = observer.ObserverList()
+        self._readvs = []
+
+
+    def _maybe_fetch_offsets_and_header(self, force_remote=False):
+        """
+        I fetch the offset table and the header from the remote slot if
+        I don't already have them. If I do have them, I do nothing and
+        return an empty Deferred.
+        """
+        if self._offsets:
+            return defer.succeed(None)
+        # At this point, we may be either SDMF or MDMF. Fetching 107 
+        # bytes will be enough to get header and offsets for both SDMF and
+        # MDMF, though we'll be left with 4 more bytes than we
+        # need if this ends up being MDMF. This is probably less
+        # expensive than the cost of a second roundtrip.
+        readvs = [(0, 107)]
+        d = self._read(readvs, force_remote)
+        d.addCallback(self._process_encoding_parameters)
+        d.addCallback(self._process_offsets)
+        return d
+
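+    # The 107-byte figure above: the SDMF signed prefix plus the SDMF
+    # offsets table is exactly 107 bytes, which is also enough to cover
+    # the MDMF header and offsets table. A quick check of the SDMF half:
+    #
+    #   import struct
+    #   assert struct.calcsize(">BQ32s16sBBQQ") == 75  # signed prefix
+    #   assert struct.calcsize(">LLLLQQ") == 32        # offsets table
+    #   assert 75 + 32 == 107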
+
+    def _process_encoding_parameters(self, encoding_parameters):
+        assert self.shnum in encoding_parameters
+        encoding_parameters = encoding_parameters[self.shnum][0]
+        # The first byte is the version number. It will tell us what
+        # to do next.
+        (verno,) = struct.unpack(">B", encoding_parameters[:1])
+        if verno == MDMF_VERSION:
+            read_size = MDMFHEADERWITHOUTOFFSETSSIZE
+            (verno,
+             seqnum,
+             root_hash,
+             k,
+             n,
+             segsize,
+             datalen) = struct.unpack(MDMFHEADERWITHOUTOFFSETS,
+                                      encoding_parameters[:read_size])
+            if segsize == 0 and datalen == 0:
+                # Empty file, no segments.
+                self._num_segments = 0
+            else:
+                self._num_segments = mathutil.div_ceil(datalen, segsize)
+
+        elif verno == SDMF_VERSION:
+            read_size = SIGNED_PREFIX_LENGTH
+            (verno,
+             seqnum,
+             root_hash,
+             salt,
+             k,
+             n,
+             segsize,
+             datalen) = struct.unpack(">BQ32s16s BBQQ",
+                                encoding_parameters[:SIGNED_PREFIX_LENGTH])
+            self._salt = salt
+            if segsize == 0 and datalen == 0:
+                # empty file
+                self._num_segments = 0
+            else:
+                # non-empty SDMF files have one segment.
+                self._num_segments = 1
+        else:
+            raise UnknownVersionError("You asked me to read mutable file "
+                                      "version %d, but I only understand "
+                                      "%d and %d" % (verno, SDMF_VERSION,
+                                                     MDMF_VERSION))
+
+        self._version_number = verno
+        self._sequence_number = seqnum
+        self._root_hash = root_hash
+        self._required_shares = k
+        self._total_shares = n
+        self._segment_size = segsize
+        self._data_length = datalen
+
+        self._block_size = self._segment_size / self._required_shares
+        # We can upload empty files, and need to account for this fact
+        # so as to avoid zero-division and zero-modulo errors.
+        if datalen > 0:
+            tail_size = self._data_length % self._segment_size
+        else:
+            tail_size = 0
+        if not tail_size:
+            self._tail_block_size = self._block_size
+        else:
+            self._tail_block_size = mathutil.next_multiple(tail_size,
+                                                    self._required_shares)
+            self._tail_block_size /= self._required_shares
+
+        return encoding_parameters
+
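+    # A worked example of the tail-block arithmetic above, using the
+    # tail-segment share that the tests below construct (k=3, segsize=6,
+    # datalen=33):
+    #
+    #   num_segments    = div_ceil(33, 6)          # = 6
+    #   tail_size       = 33 % 6                   # = 3
+    #   tail_block_size = next_multiple(3, 3) / 3  # = 1 byte per share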
+
+    def _process_offsets(self, offsets):
+        if self._version_number == 0:
+            read_size = OFFSETS_LENGTH
+            read_offset = SIGNED_PREFIX_LENGTH
+            end = read_size + read_offset
+            (signature,
+             share_hash_chain,
+             block_hash_tree,
+             share_data,
+             enc_privkey,
+             EOF) = struct.unpack(">LLLLQQ",
+                                  offsets[read_offset:end])
+            self._offsets = {}
+            self._offsets['signature'] = signature
+            self._offsets['share_data'] = share_data
+            self._offsets['block_hash_tree'] = block_hash_tree
+            self._offsets['share_hash_chain'] = share_hash_chain
+            self._offsets['enc_privkey'] = enc_privkey
+            self._offsets['EOF'] = EOF
+
+        elif self._version_number == 1:
+            read_offset = MDMFHEADERWITHOUTOFFSETSSIZE
+            read_length = MDMFOFFSETS_LENGTH
+            end = read_offset + read_length
+            (encprivkey,
+             blockhashes,
+             sharehashes,
+             signature,
+             verification_key,
+             eof) = struct.unpack(MDMFOFFSETS,
+                                  offsets[read_offset:end])
+            self._offsets = {}
+            self._offsets['enc_privkey'] = encprivkey
+            self._offsets['block_hash_tree'] = blockhashes
+            self._offsets['share_hash_chain'] = sharehashes
+            self._offsets['signature'] = signature
+            self._offsets['verification_key'] = verification_key
+            self._offsets['EOF'] = eof
+
+
+    def get_block_and_salt(self, segnum, queue=False):
+        """
+        I return (block, salt), where block is the block data and
+        salt is the salt used to encrypt that segment.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        def _then(ignored):
+            if self._version_number == 1:
+                base_share_offset = MDMFHEADERSIZE
+            else:
+                base_share_offset = self._offsets['share_data']
+
+            if segnum + 1 > self._num_segments:
+                raise LayoutInvalid("Not a valid segment number")
+
+            if self._version_number == 0:
+                share_offset = base_share_offset + self._block_size * segnum
+            else:
+                share_offset = base_share_offset + (self._block_size + \
+                                                    SALT_SIZE) * segnum
+            if segnum + 1 == self._num_segments:
+                data = self._tail_block_size
+            else:
+                data = self._block_size
+
+            if self._version_number == 1:
+                data += SALT_SIZE
+
+            readvs = [(share_offset, data)]
+            return readvs
+        d.addCallback(_then)
+        d.addCallback(lambda readvs:
+            self._read(readvs, queue=queue))
+        def _process_results(results):
+            assert self.shnum in results
+            if self._version_number == 0:
+                # We only read the share data, but we know the salt from
+                # when we fetched the header
+                data = results[self.shnum]
+                if not data:
+                    data = ""
+                else:
+                    assert len(data) == 1
+                    data = data[0]
+                salt = self._salt
+            else:
+                data = results[self.shnum]
+                if not data:
+                    salt = data = ""
+                else:
+                    salt_and_data = results[self.shnum][0]
+                    salt = salt_and_data[:SALT_SIZE]
+                    data = salt_and_data[SALT_SIZE:]
+            return data, salt
+        d.addCallback(_process_results)
+        return d
+
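+    # A worked example of the MDMF offset arithmetic above, using the
+    # figures from the write tests below (2-byte blocks, 16-byte salts):
+    #
+    #   share_offset = MDMFHEADERSIZE + (2 + 16) * segnum
+    #
+    # so segment 0 starts right after the header, segment 1 starts 18
+    # bytes later, and so on.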
+
+    def get_blockhashes(self, needed=None, queue=False, force_remote=False):
+        """
+        I return the block hash tree
+
+        I take an optional argument, needed, which is a set of indices
+        that correspond to hashes I should fetch. If this argument is
+        missing, I will fetch the entire block hash tree; otherwise, I
+        may attempt to fetch fewer hashes, based on what needed says
+        that I should do. Note that I may fetch as many hashes as I
+        want, so long as the set of hashes that I do fetch is a superset
+        of the ones that I am asked for, so callers should be prepared
+        to tolerate additional hashes.
+        """
+        # TODO: Return only the parts of the block hash tree necessary
+        # to validate the blocknum provided?
+        # This is a good idea, but it is hard to implement correctly. It
+        # is bad to fetch any one block hash more than once, so we
+        # probably just want to fetch the whole thing at once and then
+        # serve it.
+        if needed == set([]):
+            return defer.succeed([])
+        d = self._maybe_fetch_offsets_and_header()
+        def _then(ignored):
+            blockhashes_offset = self._offsets['block_hash_tree']
+            if self._version_number == 1:
+                blockhashes_length = self._offsets['share_hash_chain'] - blockhashes_offset
+            else:
+                blockhashes_length = self._offsets['share_data'] - blockhashes_offset
+            readvs = [(blockhashes_offset, blockhashes_length)]
+            return readvs
+        d.addCallback(_then)
+        d.addCallback(lambda readvs:
+            self._read(readvs, queue=queue, force_remote=force_remote))
+        def _build_block_hash_tree(results):
+            assert self.shnum in results
+
+            rawhashes = results[self.shnum][0]
+            results = [rawhashes[i:i+HASH_SIZE]
+                       for i in range(0, len(rawhashes), HASH_SIZE)]
+            return results
+        d.addCallback(_build_block_hash_tree)
+        return d
+
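+    # A minimal sketch of the slicing done above (HASH_SIZE is 32; the
+    # raw string here is hypothetical):
+    #
+    #   rawhashes = "a" * (32 * 6)
+    #   hashes = [rawhashes[i:i+32]
+    #             for i in range(0, len(rawhashes), 32)]
+    #   assert len(hashes) == 6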
+
+    def get_sharehashes(self, needed=None, queue=False, force_remote=False):
+        """
+        I return the part of the share hash chain that is stored to
+        validate this share.
+
+        I take an optional argument, needed. Needed is a set of indices
+        that correspond to the hashes that I should fetch. If needed is
+        not present, I will fetch and return the entire share hash
+        chain. Otherwise, I may fetch and return any part of the share
+        hash chain that is a superset of the part that I am asked to
+        fetch. Callers should be prepared to deal with more hashes than
+        they've asked for.
+        """
+        if needed == set([]):
+            return defer.succeed([])
+        d = self._maybe_fetch_offsets_and_header()
+
+        def _make_readvs(ignored):
+            sharehashes_offset = self._offsets['share_hash_chain']
+            if self._version_number == 0:
+                sharehashes_length = self._offsets['block_hash_tree'] - sharehashes_offset
+            else:
+                sharehashes_length = self._offsets['signature'] - sharehashes_offset
+            readvs = [(sharehashes_offset, sharehashes_length)]
+            return readvs
+        d.addCallback(_make_readvs)
+        d.addCallback(lambda readvs:
+            self._read(readvs, queue=queue, force_remote=force_remote))
+        def _build_share_hash_chain(results):
+            assert self.shnum in results
+
+            sharehashes = results[self.shnum][0]
+            results = [sharehashes[i:i+(HASH_SIZE + 2)]
+                       for i in range(0, len(sharehashes), HASH_SIZE + 2)]
+            results = dict([struct.unpack(">H32s", data)
+                            for data in results])
+            return results
+        d.addCallback(_build_share_hash_chain)
+        return d
+
+
+    def get_encprivkey(self, queue=False):
+        """
+        I return the encrypted private key.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+
+        def _make_readvs(ignored):
+            privkey_offset = self._offsets['enc_privkey']
+            if self._version_number == 0:
+                privkey_length = self._offsets['EOF'] - privkey_offset
+            else:
+                privkey_length = self._offsets['block_hash_tree'] - privkey_offset
+            readvs = [(privkey_offset, privkey_length)]
+            return readvs
+        d.addCallback(_make_readvs)
+        d.addCallback(lambda readvs:
+            self._read(readvs, queue=queue))
+        def _process_results(results):
+            assert self.shnum in results
+            privkey = results[self.shnum][0]
+            return privkey
+        d.addCallback(_process_results)
+        return d
+
+
+    def get_signature(self, queue=False):
+        """
+        I return the signature of my share.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+
+        def _make_readvs(ignored):
+            signature_offset = self._offsets['signature']
+            if self._version_number == 1:
+                signature_length = self._offsets['verification_key'] - signature_offset
+            else:
+                signature_length = self._offsets['share_hash_chain'] - signature_offset
+            readvs = [(signature_offset, signature_length)]
+            return readvs
+        d.addCallback(_make_readvs)
+        d.addCallback(lambda readvs:
+            self._read(readvs, queue=queue))
+        def _process_results(results):
+            assert self.shnum in results
+            signature = results[self.shnum][0]
+            return signature
+        d.addCallback(_process_results)
+        return d
+
+
+    def get_verification_key(self, queue=False):
+        """
+        I return the verification key.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+
+        def _make_readvs(ignored):
+            if self._version_number == 1:
+                vk_offset = self._offsets['verification_key']
+                vk_length = self._offsets['EOF'] - vk_offset
+            else:
+                vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
+                vk_length = self._offsets['signature'] - vk_offset
+            readvs = [(vk_offset, vk_length)]
+            return readvs
+        d.addCallback(_make_readvs)
+        d.addCallback(lambda readvs:
+            self._read(readvs, queue=queue))
+        def _process_results(results):
+            assert self.shnum in results
+            verification_key = results[self.shnum][0]
+            return verification_key
+        d.addCallback(_process_results)
+        return d
+
+
+    def get_encoding_parameters(self):
+        """
+        I return (k, n, segsize, datalen)
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        d.addCallback(lambda ignored:
+            (self._required_shares,
+             self._total_shares,
+             self._segment_size,
+             self._data_length))
+        return d
+
+
+    def get_seqnum(self):
+        """
+        I return the sequence number for this share.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        d.addCallback(lambda ignored:
+            self._sequence_number)
+        return d
+
+
+    def get_root_hash(self):
+        """
+        I return the root of the block hash tree
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        d.addCallback(lambda ignored: self._root_hash)
+        return d
+
+
+    def get_checkstring(self):
+        """
+        I return the packed representation of the following:
+
+            - version number
+            - sequence number
+            - root hash
+            - salt (SDMF shares only)
+
+        which my users use as a checkstring to detect other writers.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        def _build_checkstring(ignored):
+            if self._salt:
+                checkstring = struct.pack(PREFIX,
+                                          self._version_number,
+                                          self._sequence_number,
+                                          self._root_hash,
+                                          self._salt)
+            else:
+                checkstring = struct.pack(MDMFCHECKSTRING,
+                                          self._version_number,
+                                          self._sequence_number,
+                                          self._root_hash)
+
+            return checkstring
+        d.addCallback(_build_checkstring)
+        return d
+
+
+    def get_prefix(self, force_remote):
+        d = self._maybe_fetch_offsets_and_header(force_remote)
+        d.addCallback(lambda ignored:
+            self._build_prefix())
+        return d
+
+
+    def _build_prefix(self):
+        # The prefix is another name for the part of the remote share
+        # that gets signed. It consists of everything up to and
+        # including the datalength, packed by struct.
+        if self._version_number == SDMF_VERSION:
+            return struct.pack(SIGNED_PREFIX,
+                           self._version_number,
+                           self._sequence_number,
+                           self._root_hash,
+                           self._salt,
+                           self._required_shares,
+                           self._total_shares,
+                           self._segment_size,
+                           self._data_length)
+
+        else:
+            return struct.pack(MDMFSIGNABLEHEADER,
+                           self._version_number,
+                           self._sequence_number,
+                           self._root_hash,
+                           self._required_shares,
+                           self._total_shares,
+                           self._segment_size,
+                           self._data_length)
+
+
+    def _get_offsets_tuple(self):
+        # The offsets are another component of the version information
+        # tuple. Despite this method's name, we return a copy of our
+        # offsets dictionary rather than a tuple.
+        return self._offsets.copy()
+
+
+    def get_verinfo(self):
+        """
+        I return my verinfo tuple. This is used by the ServermapUpdater
+        to keep track of versions of mutable files.
+
+        The verinfo tuple for MDMF files contains:
+            - seqnum
+            - root hash
+            - a blank (where SDMF carries its 16-byte salt)
+            - segsize
+            - datalen
+            - k
+            - n
+            - prefix (the thing that you sign)
+            - a tuple of offsets
+
+        We include the blank in MDMF so that version information tuples
+        have the same shape for both formats, which simplifies their
+        processing.
+
+        The verinfo tuple for SDMF files is the same, but carries the
+        16-byte IV (salt) in place of the blank.
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        def _build_verinfo(ignored):
+            if self._version_number == SDMF_VERSION:
+                salt_to_use = self._salt
+            else:
+                salt_to_use = None
+            return (self._sequence_number,
+                    self._root_hash,
+                    salt_to_use,
+                    self._segment_size,
+                    self._data_length,
+                    self._required_shares,
+                    self._total_shares,
+                    self._build_prefix(),
+                    self._get_offsets_tuple())
+        d.addCallback(_build_verinfo)
+        return d
+
+
+    def flush(self):
+        """
+        I flush my queue of read vectors.
+        """
+        d = self._read(self._readvs)
+        def _then(results):
+            self._readvs = []
+            if isinstance(results, failure.Failure):
+                self._queue_errbacks.notify(results)
+            else:
+                self._queue_observers.notify(results)
+            self._queue_observers = observer.ObserverList()
+            self._queue_errbacks = observer.ObserverList()
+        d.addBoth(_then)
+
+
+    def _read(self, readvs, force_remote=False, queue=False):
+        unsatisfiable = filter(lambda x: x[0] + x[1] > len(self._data), readvs)
+        # TODO: It's entirely possible to tweak this so that it just
+        # fulfills the requests that it can, and not demand that all
+        # requests are satisfiable before running it.
+        if not unsatisfiable and not force_remote:
+            results = [self._data[offset:offset+length]
+                       for (offset, length) in readvs]
+            results = {self.shnum: results}
+            return defer.succeed(results)
+        else:
+            if queue:
+                start = len(self._readvs)
+                self._readvs += readvs
+                end = len(self._readvs)
+                def _get_results(results, start, end):
+                    if not self.shnum in results:
+                        return {self.shnum: [""]}
+                    return {self.shnum: results[self.shnum][start:end]}
+                d = defer.Deferred()
+                d.addCallback(_get_results, start, end)
+                self._queue_observers.subscribe(d.callback)
+                self._queue_errbacks.subscribe(d.errback)
+                return d
+            return self._rref.callRemote("slot_readv",
+                                         self._storage_index,
+                                         [self.shnum],
+                                         readvs)
+
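+    # A minimal sketch of the cache-satisfaction logic above: if every
+    # requested range fits inside self._data, no remote call is made.
+    # The values here are hypothetical:
+    #
+    #   self._data = "0123456789"
+    #   self._read([(0, 4), (4, 4)])  # -> {self.shnum: ["0123", "4567"]}
+    #   self._read([(8, 4)])          # 8 + 4 > 10, so this falls
+    #                                 # through to remote slot_readv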
+
+    def is_sdmf(self):
+        """I tell my caller whether or not my remote file is SDMF or MDMF
+        """
+        d = self._maybe_fetch_offsets_and_header()
+        d.addCallback(lambda ignored:
+            self._version_number == 0)
+        return d
+
+
+class LayoutInvalid(Exception):
+    """
+    This isn't a valid MDMF mutable file
+    """
merger 0.0 (
hunk ./src/allmydata/test/test_storage.py 3
-from allmydata.util import log
-
merger 0.0 (
hunk ./src/allmydata/test/test_storage.py 3
-import time, os.path, stat, re, simplejson, struct
+from allmydata.util import log
+
+import mock
hunk ./src/allmydata/test/test_storage.py 3
-import time, os.path, stat, re, simplejson, struct
+import time, os.path, stat, re, simplejson, struct, shutil
)
)
hunk ./src/allmydata/test/test_storage.py 23
 from allmydata.storage.expirer import LeaseCheckingCrawler
 from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \
      ReadBucketProxy
-from allmydata.interfaces import BadWriteEnablerError
-from allmydata.test.common import LoggingServiceParent
+from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
+                                     LayoutInvalid, MDMFSIGNABLEHEADER, \
+                                     SIGNED_PREFIX, MDMFHEADER, \
+                                     MDMFOFFSETS, SDMFSlotWriteProxy
+from allmydata.interfaces import BadWriteEnablerError, MDMF_VERSION, \
+                                 SDMF_VERSION
+from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
 from allmydata.test.common_web import WebRenderingMixin
 from allmydata.web.storage import StorageStatus, remove_prefix
 
hunk ./src/allmydata/test/test_storage.py 107
 
 class RemoteBucket:
 
+    def __init__(self):
+        self.read_count = 0
+        self.write_count = 0
+
     def callRemote(self, methname, *args, **kwargs):
         def _call():
             meth = getattr(self.target, "remote_" + methname)
hunk ./src/allmydata/test/test_storage.py 115
             return meth(*args, **kwargs)
+
+        if methname == "slot_readv":
+            self.read_count += 1
+        if "writev" in methname:
+            self.write_count += 1
+
         return defer.maybeDeferred(_call)
 
hunk ./src/allmydata/test/test_storage.py 123
+
 class BucketProxy(unittest.TestCase):
     def make_bucket(self, name, size):
         basedir = os.path.join("storage", "BucketProxy", name)
hunk ./src/allmydata/test/test_storage.py 1306
         self.failUnless(os.path.exists(prefixdir), prefixdir)
         self.failIf(os.path.exists(bucketdir), bucketdir)
 
+
+class MDMFProxies(unittest.TestCase, ShouldFailMixin):
+    def setUp(self):
+        self.sparent = LoggingServiceParent()
+        self._lease_secret = itertools.count()
+        self.ss = self.create("MDMFProxies storage test server")
+        self.rref = RemoteBucket()
+        self.rref.target = self.ss
+        self.secrets = (self.write_enabler("we_secret"),
+                        self.renew_secret("renew_secret"),
+                        self.cancel_secret("cancel_secret"))
+        self.segment = "aaaaaa"
+        self.block = "aa"
+        self.salt = "a" * 16
+        self.block_hash = "a" * 32
+        self.block_hash_tree = [self.block_hash for i in xrange(6)]
+        self.share_hash = self.block_hash
+        self.share_hash_chain = dict([(i, self.share_hash) for i in xrange(6)])
+        self.signature = "foobarbaz"
+        self.verification_key = "vvvvvv"
+        self.encprivkey = "private"
+        self.root_hash = self.block_hash
+        self.salt_hash = self.root_hash
+        self.salt_hash_tree = [self.salt_hash for i in xrange(6)]
+        self.block_hash_tree_s = self.serialize_blockhashes(self.block_hash_tree)
+        self.share_hash_chain_s = self.serialize_sharehashes(self.share_hash_chain)
+        # blockhashes and salt hashes are serialized in the same way,
+        # only we lop off the first element and store that in the
+        # header.
+        self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])
+
+
+    def tearDown(self):
+        self.sparent.stopService()
+        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
+
+
+    def write_enabler(self, we_tag):
+        return hashutil.tagged_hash("we_blah", we_tag)
+
+
+    def renew_secret(self, tag):
+        return hashutil.tagged_hash("renew_blah", str(tag))
+
+
+    def cancel_secret(self, tag):
+        return hashutil.tagged_hash("cancel_blah", str(tag))
+
+
+    def workdir(self, name):
+        basedir = os.path.join("storage", "MutableServer", name)
+        return basedir
+
+
+    def create(self, name):
+        workdir = self.workdir(name)
+        ss = StorageServer(workdir, "\x00" * 20)
+        ss.setServiceParent(self.sparent)
+        return ss
+
+
+    def build_test_mdmf_share(self, tail_segment=False, empty=False):
+        # Start with the checkstring
+        data = struct.pack(">BQ32s",
+                           1,
+                           0,
+                           self.root_hash)
+        self.checkstring = data
+        # Next, the encoding parameters
+        if tail_segment:
+            data += struct.pack(">BBQQ",
+                                3,
+                                10,
+                                6,
+                                33)
+        elif empty:
+            data += struct.pack(">BBQQ",
+                                3,
+                                10,
+                                0,
+                                0)
+        else:
+            data += struct.pack(">BBQQ",
+                                3,
+                                10,
+                                6,
+                                36)
+        # Now we'll build the offsets.
+        sharedata = ""
+        if not tail_segment and not empty:
+            for i in xrange(6):
+                sharedata += self.salt + self.block
+        elif tail_segment:
+            for i in xrange(5):
+                sharedata += self.salt + self.block
+            sharedata += self.salt + "a"
+
+        # The encrypted private key comes after the shares + salts
+        offset_size = struct.calcsize(MDMFOFFSETS)
+        encrypted_private_key_offset = len(data) + offset_size + len(sharedata)
+        # The blockhashes come after the private key
+        blockhashes_offset = encrypted_private_key_offset + len(self.encprivkey)
+        # The sharehashes come after the block hashes
+        sharehashes_offset = blockhashes_offset + len(self.block_hash_tree_s)
+        # The signature comes after the share hash chain
+        signature_offset = sharehashes_offset + len(self.share_hash_chain_s)
+        # The verification key comes after the signature
+        verification_offset = signature_offset + len(self.signature)
+        # The EOF comes after the verification key
+        eof_offset = verification_offset + len(self.verification_key)
+        data += struct.pack(MDMFOFFSETS,
+                            encrypted_private_key_offset,
+                            blockhashes_offset,
+                            sharehashes_offset,
+                            signature_offset,
+                            verification_offset,
+                            eof_offset)
+        self.offsets = {}
+        self.offsets['enc_privkey'] = encrypted_private_key_offset
+        self.offsets['block_hash_tree'] = blockhashes_offset
+        self.offsets['share_hash_chain'] = sharehashes_offset
+        self.offsets['signature'] = signature_offset
+        self.offsets['verification_key'] = verification_offset
+        self.offsets['EOF'] = eof_offset
+        # Next, we'll add in the salts and share data,
+        data += sharedata
+        # the private key,
+        data += self.encprivkey
+        # the block hash tree,
+        data += self.block_hash_tree_s
+        # the share hash chain,
+        data += self.share_hash_chain_s
+        # the signature,
+        data += self.signature
+        # and the verification key
+        data += self.verification_key
+        return data
+
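+    # For reference, the on-disk MDMF share layout that the method above
+    # builds, in order:
+    #
+    #   header (checkstring + encoding parameters)
+    #   offsets table
+    #   salt + block, for each segment
+    #   encrypted private key
+    #   block hash tree
+    #   share hash chain
+    #   signature
+    #   verification key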
+
+    def write_test_share_to_server(self,
+                                   storage_index,
+                                   tail_segment=False,
+                                   empty=False):
+        """
+        I write some share data to self.ss for the read tests to read
+
+        If tail_segment=True, then I will write a share that has a
+        smaller tail segment than other segments.
+        """
+        write = self.ss.remote_slot_testv_and_readv_and_writev
+        data = self.build_test_mdmf_share(tail_segment, empty)
+        # Finally, we write the whole thing to the storage server in one
+        # pass.
+        testvs = [(0, 1, "eq", "")]
+        tws = {}
+        tws[0] = (testvs, [(0, data)], None)
+        readv = [(0, 1)]
+        results = write(storage_index, self.secrets, tws, readv)
+        self.failUnless(results[0])
+
+
+    def build_test_sdmf_share(self, empty=False):
+        if empty:
+            sharedata = ""
+        else:
+            sharedata = self.segment * 6
+        self.sharedata = sharedata
+        blocksize = len(sharedata) / 3
+        block = sharedata[:blocksize]
+        self.blockdata = block
+        prefix = struct.pack(">BQ32s16s BBQQ",
+                             0, # version,
+                             0,
+                             self.root_hash,
+                             self.salt,
+                             3,
+                             10,
+                             len(sharedata),
+                             len(sharedata),
+                            )
+        post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
+        signature_offset = post_offset + len(self.verification_key)
+        sharehashes_offset = signature_offset + len(self.signature)
+        blockhashes_offset = sharehashes_offset + len(self.share_hash_chain_s)
+        sharedata_offset = blockhashes_offset + len(self.block_hash_tree_s)
+        encprivkey_offset = sharedata_offset + len(block)
+        eof_offset = encprivkey_offset + len(self.encprivkey)
+        offsets = struct.pack(">LLLLQQ",
+                              signature_offset,
+                              sharehashes_offset,
+                              blockhashes_offset,
+                              sharedata_offset,
+                              encprivkey_offset,
+                              eof_offset)
+        final_share = "".join([prefix,
+                           offsets,
+                           self.verification_key,
+                           self.signature,
+                           self.share_hash_chain_s,
+                           self.block_hash_tree_s,
+                           block,
+                           self.encprivkey])
+        self.offsets = {}
+        self.offsets['signature'] = signature_offset
+        self.offsets['share_hash_chain'] = sharehashes_offset
+        self.offsets['block_hash_tree'] = blockhashes_offset
+        self.offsets['share_data'] = sharedata_offset
+        self.offsets['enc_privkey'] = encprivkey_offset
+        self.offsets['EOF'] = eof_offset
+        return final_share
+
+
+    def write_sdmf_share_to_server(self,
+                                   storage_index,
+                                   empty=False):
+        # Some tests need SDMF shares to verify that we can still
+        # read them. This method writes one, which resembles but is not
+        # identical to what the real SDMF publishing code would produce.
+        assert self.rref
+        write = self.ss.remote_slot_testv_and_readv_and_writev
+        share = self.build_test_sdmf_share(empty)
+        testvs = [(0, 1, "eq", "")]
+        tws = {}
+        tws[0] = (testvs, [(0, share)], None)
+        readv = []
+        results = write(storage_index, self.secrets, tws, readv)
+        self.failUnless(results[0])
+
+
+    def test_read(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        # Check that every method equals what we expect it to.
+        d = defer.succeed(None)
+        def _check_block_and_salt((block, salt)):
+            self.failUnlessEqual(block, self.block)
+            self.failUnlessEqual(salt, self.salt)
+
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mr.get_block_and_salt(i))
+            d.addCallback(_check_block_and_salt)
+
+        d.addCallback(lambda ignored:
+            mr.get_encprivkey())
+        d.addCallback(lambda encprivkey:
+            self.failUnlessEqual(self.encprivkey, encprivkey))
+
+        d.addCallback(lambda ignored:
+            mr.get_blockhashes())
+        d.addCallback(lambda blockhashes:
+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
+
+        d.addCallback(lambda ignored:
+            mr.get_sharehashes())
+        d.addCallback(lambda sharehashes:
+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
+
+        d.addCallback(lambda ignored:
+            mr.get_signature())
+        d.addCallback(lambda signature:
+            self.failUnlessEqual(signature, self.signature))
+
+        d.addCallback(lambda ignored:
+            mr.get_verification_key())
+        d.addCallback(lambda verification_key:
+            self.failUnlessEqual(verification_key, self.verification_key))
+
+        d.addCallback(lambda ignored:
+            mr.get_seqnum())
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(seqnum, 0))
+
+        d.addCallback(lambda ignored:
+            mr.get_root_hash())
+        d.addCallback(lambda root_hash:
+            self.failUnlessEqual(self.root_hash, root_hash))
+
+        d.addCallback(lambda ignored:
+            mr.get_seqnum())
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(0, seqnum))
+
+        d.addCallback(lambda ignored:
+            mr.get_encoding_parameters())
+        def _check_encoding_parameters((k, n, segsize, datalen)):
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segsize, 6)
+            self.failUnlessEqual(datalen, 36)
+        d.addCallback(_check_encoding_parameters)
+
+        d.addCallback(lambda ignored:
+            mr.get_checkstring())
+        d.addCallback(lambda checkstring:
+            self.failUnlessEqual(checkstring, self.checkstring))
+        return d
+
+
+    def test_read_with_different_tail_segment_size(self):
+        self.write_test_share_to_server("si1", tail_segment=True)
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = mr.get_block_and_salt(5)
+        def _check_tail_segment(results):
+            block, salt = results
+            self.failUnlessEqual(len(block), 1)
+            self.failUnlessEqual(block, "a")
+        d.addCallback(_check_tail_segment)
+        return d
+
+
+    def test_get_block_with_invalid_segnum(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "test invalid segnum",
+                            None,
+                            mr.get_block_and_salt, 7))
+        return d
+
+
+    def test_get_encoding_parameters_first(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = mr.get_encoding_parameters()
+        def _check_encoding_parameters((k, n, segment_size, datalen)):
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segment_size, 6)
+            self.failUnlessEqual(datalen, 36)
+        d.addCallback(_check_encoding_parameters)
+        return d
+
+
+    def test_get_seqnum_first(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = mr.get_seqnum()
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(seqnum, 0))
+        return d
+
+
+    def test_get_root_hash_first(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = mr.get_root_hash()
+        d.addCallback(lambda root_hash:
+            self.failUnlessEqual(root_hash, self.root_hash))
+        return d
+
+
+    def test_get_checkstring_first(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = mr.get_checkstring()
+        d.addCallback(lambda checkstring:
+            self.failUnlessEqual(checkstring, self.checkstring))
+        return d
+
+
+    def test_write_read_vectors(self):
+        # When writing for us, the storage server will return to us a
+        # read vector, along with its result. If a write fails because
+        # the test vectors failed, this read vector can help us to
+        # diagnose the problem. This test ensures that the read vector
+        # is working appropriately.
+        mw = self._make_new_mw("si1", 0)
+
+        for i in xrange(6):
+            mw.put_block(self.block, i, self.salt)
+        mw.put_encprivkey(self.encprivkey)
+        mw.put_blockhashes(self.block_hash_tree)
+        mw.put_sharehashes(self.share_hash_chain)
+        mw.put_root_hash(self.root_hash)
+        mw.put_signature(self.signature)
+        mw.put_verification_key(self.verification_key)
+        d = mw.finish_publishing()
+        def _then(results):
+            self.failUnlessEqual(len(results), 2)
+            result, readv = results
+            self.failUnless(result)
+            self.failIf(readv)
+            self.old_checkstring = mw.get_checkstring()
+            mw.set_checkstring("")
+        d.addCallback(_then)
+        d.addCallback(lambda ignored:
+            mw.finish_publishing())
+        def _then_again(results):
+            self.failUnlessEqual(len(results), 2)
+            result, readvs = results
+            self.failIf(result)
+            self.failUnlessIn(0, readvs)
+            readv = readvs[0][0]
+            self.failUnlessEqual(readv, self.old_checkstring)
+        d.addCallback(_then_again)
+        # The checkstring remains the same for the rest of the process.
+        return d
+
+
+    def test_blockhashes_after_share_hash_chain(self):
+        mw = self._make_new_mw("si1", 0)
+        d = defer.succeed(None)
+        # Put everything up to and including the share hash chain
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            mw.put_encprivkey(self.encprivkey))
+        d.addCallback(lambda ignored:
+            mw.put_blockhashes(self.block_hash_tree))
+        d.addCallback(lambda ignored:
+            mw.put_sharehashes(self.share_hash_chain))
+
+        # Now try to put the block hash tree again.
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "test repeat salthashes",
+                            None,
+                            mw.put_blockhashes, self.block_hash_tree))
+        return d
+
+
+    def test_encprivkey_after_blockhashes(self):
+        mw = self._make_new_mw("si1", 0)
+        d = defer.succeed(None)
+        # Put everything up to and including the block hash tree
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            mw.put_encprivkey(self.encprivkey))
+        d.addCallback(lambda ignored:
+            mw.put_blockhashes(self.block_hash_tree))
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "out of order private key",
+                            None,
+                            mw.put_encprivkey, self.encprivkey))
+        return d
+
+
+    def test_share_hash_chain_after_signature(self):
+        mw = self._make_new_mw("si1", 0)
+        d = defer.succeed(None)
+        # Put everything up to and including the signature
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            mw.put_encprivkey(self.encprivkey))
+        d.addCallback(lambda ignored:
+            mw.put_blockhashes(self.block_hash_tree))
+        d.addCallback(lambda ignored:
+            mw.put_sharehashes(self.share_hash_chain))
+        d.addCallback(lambda ignored:
+            mw.put_root_hash(self.root_hash))
+        d.addCallback(lambda ignored:
+            mw.put_signature(self.signature))
+        # Now try to put the share hash chain again. This should fail
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "out of order share hash chain",
+                            None,
+                            mw.put_sharehashes, self.share_hash_chain))
+        return d
+
+
+    def test_signature_after_verification_key(self):
+        mw = self._make_new_mw("si1", 0)
+        d = defer.succeed(None)
+        # Put everything up to and including the verification key.
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            mw.put_encprivkey(self.encprivkey))
+        d.addCallback(lambda ignored:
+            mw.put_blockhashes(self.block_hash_tree))
+        d.addCallback(lambda ignored:
+            mw.put_sharehashes(self.share_hash_chain))
+        d.addCallback(lambda ignored:
+            mw.put_root_hash(self.root_hash))
+        d.addCallback(lambda ignored:
+            mw.put_signature(self.signature))
+        d.addCallback(lambda ignored:
+            mw.put_verification_key(self.verification_key))
+        # Now try to put the signature again. This should fail
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "signature after verification",
+                            None,
+                            mw.put_signature, self.signature))
+        return d
+
+
+    def test_uncoordinated_write(self):
+        # Make two mutable writers, both pointing to the same storage
+        # server, both at the same storage index, and try writing to the
+        # same share.
+        mw1 = self._make_new_mw("si1", 0)
+        mw2 = self._make_new_mw("si1", 0)
+
+        def _check_success(results):
+            result, readvs = results
+            self.failUnless(result)
+
+        def _check_failure(results):
+            result, readvs = results
+            self.failIf(result)
+
+        def _write_share(mw):
+            for i in xrange(6):
+                mw.put_block(self.block, i, self.salt)
+            mw.put_encprivkey(self.encprivkey)
+            mw.put_blockhashes(self.block_hash_tree)
+            mw.put_sharehashes(self.share_hash_chain)
+            mw.put_root_hash(self.root_hash)
+            mw.put_signature(self.signature)
+            mw.put_verification_key(self.verification_key)
+            return mw.finish_publishing()
+        d = _write_share(mw1)
+        d.addCallback(_check_success)
+        d.addCallback(lambda ignored:
+            _write_share(mw2))
+        d.addCallback(_check_failure)
+        return d
+
+
+    def test_invalid_salt_size(self):
+        # Salts need to be 16 bytes in size. Writes that attempt to
+        # write more or less than this should be rejected.
+        mw = self._make_new_mw("si1", 0)
+        invalid_salt = "a" * 17 # 17 bytes
+        another_invalid_salt = "b" * 15 # 15 bytes
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "salt too big",
+                            None,
+                            mw.put_block, self.block, 0, invalid_salt))
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "salt too small",
+                            None,
+                            mw.put_block, self.block, 0,
+                            another_invalid_salt))
+        return d
+
+
+    def test_write_test_vectors(self):
+        # If we give the write proxy a bogus test vector at 
+        # any point during the process, it should fail to write when we 
+        # tell it to write.
+        def _check_failure(results):
+            self.failUnlessEqual(len(results), 2)
+            res, readv = results
+            self.failIf(res)
+
+        def _check_success(results):
+            self.failUnlessEqual(len(results), 2)
+            res, readv = results
+            self.failUnless(res)
+
+        mw = self._make_new_mw("si1", 0)
+        mw.set_checkstring("this is a lie")
+        for i in xrange(6):
+            mw.put_block(self.block, i, self.salt)
+        mw.put_encprivkey(self.encprivkey)
+        mw.put_blockhashes(self.block_hash_tree)
+        mw.put_sharehashes(self.share_hash_chain)
+        mw.put_root_hash(self.root_hash)
+        mw.put_signature(self.signature)
+        mw.put_verification_key(self.verification_key)
+        d = mw.finish_publishing()
+        d.addCallback(_check_failure)
+        d.addCallback(lambda ignored:
+            mw.set_checkstring(""))
+        d.addCallback(lambda ignored:
+            mw.finish_publishing())
+        d.addCallback(_check_success)
+        return d
+
+
+    def serialize_blockhashes(self, blockhashes):
+        return "".join(blockhashes)
+
+
+    def serialize_sharehashes(self, sharehashes):
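+        # Each entry is packed as a 2-byte share number followed by a
+        # 32-byte hash (">H32s"), which is why test_write expects the
+        # share hash chain to occupy (32 + 2) * 6 bytes on the server.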
+        ret = "".join([struct.pack(">H32s", i, sharehashes[i])
+                        for i in sorted(sharehashes.keys())])
+        return ret
+
+
+    def test_write(self):
+        # This translates to a file with 6 6-byte segments, and with 2-byte
+        # blocks.
+        mw = self._make_new_mw("si1", 0)
+        # Test writing some blocks.
+        read = self.ss.remote_slot_readv
+        expected_sharedata_offset = struct.calcsize(MDMFHEADER)
+        written_block_size = 2 + len(self.salt)
+        written_block = self.block + self.salt
+        for i in xrange(6):
+            mw.put_block(self.block, i, self.salt)
+
+        mw.put_encprivkey(self.encprivkey)
+        mw.put_blockhashes(self.block_hash_tree)
+        mw.put_sharehashes(self.share_hash_chain)
+        mw.put_root_hash(self.root_hash)
+        mw.put_signature(self.signature)
+        mw.put_verification_key(self.verification_key)
+        d = mw.finish_publishing()
+        def _check_publish(results):
+            self.failUnlessEqual(len(results), 2)
+            result, ign = results
+            self.failUnless(result, "publish failed")
+            for i in xrange(6):
+                self.failUnlessEqual(read("si1", [0], [(expected_sharedata_offset + (i * written_block_size), written_block_size)]),
+                                {0: [written_block]})
+
+            expected_private_key_offset = expected_sharedata_offset + \
+                                      len(written_block) * 6
+            self.failUnlessEqual(len(self.encprivkey), 7)
+            self.failUnlessEqual(read("si1", [0], [(expected_private_key_offset, 7)]),
+                                 {0: [self.encprivkey]})
+
+            expected_block_hash_offset = expected_private_key_offset + len(self.encprivkey)
+            self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6)
+            self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]),
+                                 {0: [self.block_hash_tree_s]})
+
+            expected_share_hash_offset = expected_block_hash_offset + len(self.block_hash_tree_s)
+            self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]),
+                                 {0: [self.share_hash_chain_s]})
+
+            self.failUnlessEqual(read("si1", [0], [(9, 32)]),
+                                 {0: [self.root_hash]})
+            expected_signature_offset = expected_share_hash_offset + len(self.share_hash_chain_s)
+            self.failUnlessEqual(len(self.signature), 9)
+            self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]),
+                                 {0: [self.signature]})
+
+            expected_verification_key_offset = expected_signature_offset + len(self.signature)
+            self.failUnlessEqual(len(self.verification_key), 6)
+            self.failUnlessEqual(read("si1", [0], [(expected_verification_key_offset, 6)]),
+                                 {0: [self.verification_key]})
+
+            signable = mw.get_signable()
+            verno, seq, roothash, k, n, segsize, datalen = \
+                                            struct.unpack(">BQ32sBBQQ",
+                                                          signable)
+            self.failUnlessEqual(verno, 1)
+            self.failUnlessEqual(seq, 0)
+            self.failUnlessEqual(roothash, self.root_hash)
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segsize, 6)
+            self.failUnlessEqual(datalen, 36)
+            expected_eof_offset = expected_verification_key_offset + len(self.verification_key)
+
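+            # The fixed-offset reads below assume the MDMF header
+            # layout implied by the ">BQ32sBBQQ" signable prefix above:
+            #    0: version number (1 byte)
+            #    1: sequence number (8 bytes)
+            #    9: root hash (32 bytes)
+            #   41: k (1 byte)
+            #   42: N (1 byte)
+            #   43: segment size (8 bytes)
+            #   51: data length (8 bytes)
+            #   59: offset table (six 8-byte entries, ending at 107)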
+            # Check the version number to make sure that it is correct.
+            expected_version_number = struct.pack(">B", 1)
+            self.failUnlessEqual(read("si1", [0], [(0, 1)]),
+                                 {0: [expected_version_number]})
+            # Check the sequence number to make sure that it is correct
+            expected_sequence_number = struct.pack(">Q", 0)
+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
+                                 {0: [expected_sequence_number]})
+            # Check that the encoding parameters (k, N, segment size, data
+            # length) are what they should be: 3, 10, 6, and 36, respectively.
+            expected_k = struct.pack(">B", 3)
+            self.failUnlessEqual(read("si1", [0], [(41, 1)]),
+                                 {0: [expected_k]})
+            expected_n = struct.pack(">B", 10)
+            self.failUnlessEqual(read("si1", [0], [(42, 1)]),
+                                 {0: [expected_n]})
+            expected_segment_size = struct.pack(">Q", 6)
+            self.failUnlessEqual(read("si1", [0], [(43, 8)]),
+                                 {0: [expected_segment_size]})
+            expected_data_length = struct.pack(">Q", 36)
+            self.failUnlessEqual(read("si1", [0], [(51, 8)]),
+                                 {0: [expected_data_length]})
+            expected_offset = struct.pack(">Q", expected_private_key_offset)
+            self.failUnlessEqual(read("si1", [0], [(59, 8)]),
+                                 {0: [expected_offset]})
+            expected_offset = struct.pack(">Q", expected_block_hash_offset)
+            self.failUnlessEqual(read("si1", [0], [(67, 8)]),
+                                 {0: [expected_offset]})
+            expected_offset = struct.pack(">Q", expected_share_hash_offset)
+            self.failUnlessEqual(read("si1", [0], [(75, 8)]),
+                                 {0: [expected_offset]})
+            expected_offset = struct.pack(">Q", expected_signature_offset)
+            self.failUnlessEqual(read("si1", [0], [(83, 8)]),
+                                 {0: [expected_offset]})
+            expected_offset = struct.pack(">Q", expected_verification_key_offset)
+            self.failUnlessEqual(read("si1", [0], [(91, 8)]),
+                                 {0: [expected_offset]})
+            expected_offset = struct.pack(">Q", expected_eof_offset)
+            self.failUnlessEqual(read("si1", [0], [(99, 8)]),
+                                 {0: [expected_offset]})
+        d.addCallback(_check_publish)
+        return d
+
+    def _make_new_mw(self, si, share, datalength=36):
+        # This is a file of size 36 bytes. Since it has a segment
+        # size of 6, we know that it has six 6-byte segments, which
+        # will be split into blocks of 2 bytes because our FEC k
+        # parameter is 3.
+        mw = MDMFSlotWriteProxy(share, self.rref, si, self.secrets, 0, 3, 10,
+                                6, datalength)
+        return mw
+
+
+    def test_write_rejected_with_too_many_blocks(self):
+        mw = self._make_new_mw("si0", 0)
+
+        # Try writing too many blocks. We should not be able to write
+        # more than 6 blocks into each share.
+        d = defer.succeed(None)
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "too many blocks",
+                            None,
+                            mw.put_block, self.block, 7, self.salt))
+        return d
+
+
+    def test_write_rejected_with_invalid_salt(self):
+        # Try writing an invalid salt. Salts are 16 bytes -- any more or
+        # less should cause an error.
+        mw = self._make_new_mw("si1", 0)
+        bad_salt = "a" * 17 # 17 bytes
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "test_invalid_salt",
+                            None, mw.put_block, self.block, 7, bad_salt))
+        return d
+
+
+    def test_write_rejected_with_invalid_root_hash(self):
+        # Try writing an invalid root hash. This should be SHA256d, and
+        # 32 bytes long as a result.
+        mw = self._make_new_mw("si2", 0)
+        # 17 bytes != 32 bytes
+        invalid_root_hash = "a" * 17
+        d = defer.succeed(None)
+        # Before this test can work, we need to put some blocks + salts,
+        # a block hash tree, and a share hash tree. Otherwise, we'll see
+        # failures that match what we are looking for, but are caused by
+        # the constraints imposed on operation ordering.
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            mw.put_encprivkey(self.encprivkey))
+        d.addCallback(lambda ignored:
+            mw.put_blockhashes(self.block_hash_tree))
+        d.addCallback(lambda ignored:
+            mw.put_sharehashes(self.share_hash_chain))
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "invalid root hash",
+                            None, mw.put_root_hash, invalid_root_hash))
+        return d
+
+
+    def test_write_rejected_with_invalid_blocksize(self):
+        # The blocksize implied by the writer that we get from
+        # _make_new_mw is 2 bytes -- anything more or less than this
+        # should be cause for failure, unless it is the tail segment,
+        # in which case a shorter block may be valid.
+        invalid_block = "a"
+        mw = self._make_new_mw("si3", 0, 33) # implies a tail segment with
+                                             # one byte blocks
+        # 1 byte != 2 bytes
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored, invalid_block=invalid_block:
+            self.shouldFail(LayoutInvalid, "test blocksize too small",
+                            None, mw.put_block, invalid_block, 0,
+                            self.salt))
+        invalid_block = invalid_block * 3
+        # 3 bytes != 2 bytes
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "test blocksize too large",
+                            None,
+                            mw.put_block, invalid_block, 0, self.salt))
+        for i in xrange(5):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        # Try to put an invalid tail segment
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "test invalid tail segment",
+                            None,
+                            mw.put_block, self.block, 5, self.salt))
+        valid_block = "a"
+        d.addCallback(lambda ignored:
+            mw.put_block(valid_block, 5, self.salt))
+        return d
+
+
+    def test_write_enforces_order_constraints(self):
+        # We require that the MDMFSlotWriteProxy be interacted with in a
+        # specific way.
+        # That way is:
+        # 0: __init__
+        # 1: write blocks and salts
+        # 2: Write the encrypted private key
+        # 3: Write the block hashes
+        # 4: Write the share hashes
+        # 5: Write the root hash and salt hash
+        # 6: Write the signature and verification key
+        # 7: Write the file.
+        # 
+        # Some of these can be performed out-of-order, and some can't.
+        # The dependencies that I want to test here are:
+        #  - Private key before block hashes
+        #  - share hashes and block hashes before root hash
+        #  - root hash before signature
+        #  - signature before verification key
+        mw0 = self._make_new_mw("si0", 0)
+        # Write some shares
+        d = defer.succeed(None)
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw0.put_block(self.block, i, self.salt))
+        # Try to write the block hashes before writing the encrypted
+        # private key
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "block hashes before key",
+                            None, mw0.put_blockhashes,
+                            self.block_hash_tree))
+
+        # Write the private key.
+        d.addCallback(lambda ignored:
+            mw0.put_encprivkey(self.encprivkey))
+
+
+        # Try to write the share hash chain without writing the block
+        # hash tree
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "share hash chain before "
+                                           "salt hash tree",
+                            None,
+                            mw0.put_sharehashes, self.share_hash_chain))
+
+        # Try to write the root hash without writing either the
+        # block hashes or the share hashes
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "root hash before share hashes",
+                            None,
+                            mw0.put_root_hash, self.root_hash))
+
+        # Now write the block hashes and try again
+        d.addCallback(lambda ignored:
+            mw0.put_blockhashes(self.block_hash_tree))
+
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "root hash before share hashes",
+                            None, mw0.put_root_hash, self.root_hash))
+
+        # We haven't yet put the root hash on the share, so we shouldn't
+        # be able to sign it.
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "signature before root hash",
+                            None, mw0.put_signature, self.signature))
+
+        d.addCallback(lambda ignored:
+            self.failUnlessRaises(LayoutInvalid, mw0.get_signable))
+
+        # ...and, since that fails, we also shouldn't be able to put the
+        # verification key.
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "key before signature",
+                            None, mw0.put_verification_key,
+                            self.verification_key))
+
+        # Now write the share hashes.
+        d.addCallback(lambda ignored:
+            mw0.put_sharehashes(self.share_hash_chain))
+        # We should be able to write the root hash now too
+        d.addCallback(lambda ignored:
+            mw0.put_root_hash(self.root_hash))
+
+        # We should still be unable to put the verification key
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "key before signature",
+                            None, mw0.put_verification_key,
+                            self.verification_key))
+
+        d.addCallback(lambda ignored:
+            mw0.put_signature(self.signature))
+
+        # We shouldn't be able to write the offsets to the remote server
+        # until the offset table is finished; IOW, until we have written
+        # the verification key.
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "offsets before verification key",
+                            None,
+                            mw0.finish_publishing))
+
+        d.addCallback(lambda ignored:
+            mw0.put_verification_key(self.verification_key))
+        return d
+
+
+    def test_end_to_end(self):
+        mw = self._make_new_mw("si1", 0)
+        # Write a share using the mutable writer, and make sure that the
+        # reader knows how to read everything back to us.
+        d = defer.succeed(None)
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mw.put_block(self.block, i, self.salt))
+        d.addCallback(lambda ignored:
+            mw.put_encprivkey(self.encprivkey))
+        d.addCallback(lambda ignored:
+            mw.put_blockhashes(self.block_hash_tree))
+        d.addCallback(lambda ignored:
+            mw.put_sharehashes(self.share_hash_chain))
+        d.addCallback(lambda ignored:
+            mw.put_root_hash(self.root_hash))
+        d.addCallback(lambda ignored:
+            mw.put_signature(self.signature))
+        d.addCallback(lambda ignored:
+            mw.put_verification_key(self.verification_key))
+        d.addCallback(lambda ignored:
+            mw.finish_publishing())
+
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        def _check_block_and_salt((block, salt)):
+            self.failUnlessEqual(block, self.block)
+            self.failUnlessEqual(salt, self.salt)
+
+        for i in xrange(6):
+            d.addCallback(lambda ignored, i=i:
+                mr.get_block_and_salt(i))
+            d.addCallback(_check_block_and_salt)
+
+        d.addCallback(lambda ignored:
+            mr.get_encprivkey())
+        d.addCallback(lambda encprivkey:
+            self.failUnlessEqual(self.encprivkey, encprivkey))
+
+        d.addCallback(lambda ignored:
+            mr.get_blockhashes())
+        d.addCallback(lambda blockhashes:
+            self.failUnlessEqual(self.block_hash_tree, blockhashes))
+
+        d.addCallback(lambda ignored:
+            mr.get_sharehashes())
+        d.addCallback(lambda sharehashes:
+            self.failUnlessEqual(self.share_hash_chain, sharehashes))
+
+        d.addCallback(lambda ignored:
+            mr.get_signature())
+        d.addCallback(lambda signature:
+            self.failUnlessEqual(signature, self.signature))
+
+        d.addCallback(lambda ignored:
+            mr.get_verification_key())
+        d.addCallback(lambda verification_key:
+            self.failUnlessEqual(verification_key, self.verification_key))
+
+        d.addCallback(lambda ignored:
+            mr.get_seqnum())
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(seqnum, 0))
+
+        d.addCallback(lambda ignored:
+            mr.get_root_hash())
+        d.addCallback(lambda root_hash:
+            self.failUnlessEqual(self.root_hash, root_hash))
+
+        d.addCallback(lambda ignored:
+            mr.get_encoding_parameters())
+        def _check_encoding_parameters((k, n, segsize, datalen)):
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segsize, 6)
+            self.failUnlessEqual(datalen, 36)
+        d.addCallback(_check_encoding_parameters)
+
+        d.addCallback(lambda ignored:
+            mr.get_checkstring())
+        d.addCallback(lambda checkstring:
+            self.failUnlessEqual(checkstring, mw.get_checkstring()))
+        return d
+
+
+    def test_is_sdmf(self):
+        # The MDMFSlotReadProxy should also know how to read SDMF files,
+        # since it will encounter them on the grid. Callers use the
+        # is_sdmf method to test this.
+        self.write_sdmf_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = mr.is_sdmf()
+        d.addCallback(lambda issdmf:
+            self.failUnless(issdmf))
+        return d
+
+
+    def test_reads_sdmf(self):
+        # The slot read proxy should, naturally, know how to tell us
+        # about data in the SDMF format
+        self.write_sdmf_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            mr.is_sdmf())
+        d.addCallback(lambda issdmf:
+            self.failUnless(issdmf))
+
+        # What do we need to read?
+        #  - The sharedata
+        #  - The salt
+        d.addCallback(lambda ignored:
+            mr.get_block_and_salt(0))
+        def _check_block_and_salt(results):
+            block, salt = results
+            # Our original file is 36 bytes long, so each share is 12
+            # bytes in size. The share data is composed entirely of the
+            # letter a; self.block contains two of them, so 6 * self.block
+            # is what we are looking for.
+            self.failUnlessEqual(block, self.block * 6)
+            self.failUnlessEqual(salt, self.salt)
+        d.addCallback(_check_block_and_salt)
+
+        #  - The blockhashes
+        d.addCallback(lambda ignored:
+            mr.get_blockhashes())
+        d.addCallback(lambda blockhashes:
+            self.failUnlessEqual(self.block_hash_tree,
+                                 blockhashes,
+                                 blockhashes))
+        #  - The sharehashes
+        d.addCallback(lambda ignored:
+            mr.get_sharehashes())
+        d.addCallback(lambda sharehashes:
+            self.failUnlessEqual(self.share_hash_chain,
+                                 sharehashes))
+        #  - The keys
+        d.addCallback(lambda ignored:
+            mr.get_encprivkey())
+        d.addCallback(lambda encprivkey:
+            self.failUnlessEqual(encprivkey, self.encprivkey, encprivkey))
+        d.addCallback(lambda ignored:
+            mr.get_verification_key())
+        d.addCallback(lambda verification_key:
+            self.failUnlessEqual(verification_key,
+                                 self.verification_key,
+                                 verification_key))
+        #  - The signature
+        d.addCallback(lambda ignored:
+            mr.get_signature())
+        d.addCallback(lambda signature:
+            self.failUnlessEqual(signature, self.signature, signature))
+
+        #  - The sequence number
+        d.addCallback(lambda ignored:
+            mr.get_seqnum())
+        d.addCallback(lambda seqnum:
+            self.failUnlessEqual(seqnum, 0, seqnum))
+
+        #  - The root hash
+        d.addCallback(lambda ignored:
+            mr.get_root_hash())
+        d.addCallback(lambda root_hash:
+            self.failUnlessEqual(root_hash, self.root_hash, root_hash))
+        return d
+
+
+    def test_only_reads_one_segment_sdmf(self):
+        # SDMF shares have only one segment, so it doesn't make sense to
+        # read more segments than that. The reader should know this and
+        # complain if we try to do that.
+        self.write_sdmf_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            mr.is_sdmf())
+        d.addCallback(lambda issdmf:
+            self.failUnless(issdmf))
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "test bad segment",
+                            None,
+                            mr.get_block_and_salt, 1))
+        return d
+
+
+    def test_read_with_prefetched_mdmf_data(self):
+        # The MDMFSlotReadProxy will prefill certain fields if you pass
+        # it data that you have already fetched. This is useful for
+        # cases like the Servermap, which prefetches ~2kb of data while
+        # finding out which shares are on the remote peer so that it
+        # doesn't waste round trips.
+        mdmf_data = self.build_test_mdmf_share()
+        self.write_test_share_to_server("si1")
+        def _make_mr(ignored, length):
+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:length])
+            return mr
+
+        d = defer.succeed(None)
+        # This should be enough to fill in both the encoding parameters
+        # and the table of offsets, which will complete the version
+        # information tuple.
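+        # (107 bytes = the 59-byte fixed MDMF header plus the six
+        # 8-byte entries of the offset table.)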
+        d.addCallback(_make_mr, 107)
+        d.addCallback(lambda mr:
+            mr.get_verinfo())
+        def _check_verinfo(verinfo):
+            self.failUnless(verinfo)
+            self.failUnlessEqual(len(verinfo), 9)
+            (seqnum,
+             root_hash,
+             salt_hash,
+             segsize,
+             datalen,
+             k,
+             n,
+             prefix,
+             offsets) = verinfo
+            self.failUnlessEqual(seqnum, 0)
+            self.failUnlessEqual(root_hash, self.root_hash)
+            self.failUnlessEqual(segsize, 6)
+            self.failUnlessEqual(datalen, 36)
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            expected_prefix = struct.pack(MDMFSIGNABLEHEADER,
+                                          1,
+                                          seqnum,
+                                          root_hash,
+                                          k,
+                                          n,
+                                          segsize,
+                                          datalen)
+            self.failUnlessEqual(expected_prefix, prefix)
+            self.failUnlessEqual(self.rref.read_count, 0)
+        d.addCallback(_check_verinfo)
+        # This is not enough data to read a block and a share, so the
+        # wrapper should attempt to read this from the remote server.
+        d.addCallback(_make_mr, 107)
+        d.addCallback(lambda mr:
+            mr.get_block_and_salt(0))
+        def _check_block_and_salt((block, salt)):
+            self.failUnlessEqual(block, self.block)
+            self.failUnlessEqual(salt, self.salt)
+            self.failUnlessEqual(self.rref.read_count, 1)
+        # This should be enough data to read one block.
+        d.addCallback(_make_mr, 249)
+        d.addCallback(lambda mr:
+            mr.get_block_and_salt(0))
+        d.addCallback(_check_block_and_salt)
+        return d
+
+
+    def test_read_with_prefetched_sdmf_data(self):
+        sdmf_data = self.build_test_sdmf_share()
+        self.write_sdmf_share_to_server("si1")
+        def _make_mr(ignored, length):
+            mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:length])
+            return mr
+
+        d = defer.succeed(None)
+        # This should be enough to get us the encoding parameters,
+        # offset table, and everything else we need to build a verinfo
+        # string.
+        d.addCallback(_make_mr, 107)
+        d.addCallback(lambda mr:
+            mr.get_verinfo())
+        def _check_verinfo(verinfo):
+            self.failUnless(verinfo)
+            self.failUnlessEqual(len(verinfo), 9)
+            (seqnum,
+             root_hash,
+             salt,
+             segsize,
+             datalen,
+             k,
+             n,
+             prefix,
+             offsets) = verinfo
+            self.failUnlessEqual(seqnum, 0)
+            self.failUnlessEqual(root_hash, self.root_hash)
+            self.failUnlessEqual(salt, self.salt)
+            self.failUnlessEqual(segsize, 36)
+            self.failUnlessEqual(datalen, 36)
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            expected_prefix = struct.pack(SIGNED_PREFIX,
+                                          0,
+                                          seqnum,
+                                          root_hash,
+                                          salt,
+                                          k,
+                                          n,
+                                          segsize,
+                                          datalen)
+            self.failUnlessEqual(expected_prefix, prefix)
+            self.failUnlessEqual(self.rref.read_count, 0)
+        d.addCallback(_check_verinfo)
+        # This shouldn't be enough to read any share data.
+        d.addCallback(_make_mr, 107)
+        d.addCallback(lambda mr:
+            mr.get_block_and_salt(0))
+        def _check_block_and_salt((block, salt)):
+            self.failUnlessEqual(block, self.block * 6)
+            self.failUnlessEqual(salt, self.salt)
+            # TODO: Fix the read routine so that it reads only the data
+            #       that it has cached if it can't read all of it.
+            self.failUnlessEqual(self.rref.read_count, 2)
+
+        # This should be enough to read share data.
+        d.addCallback(_make_mr, self.offsets['share_data'])
+        d.addCallback(lambda mr:
+            mr.get_block_and_salt(0))
+        d.addCallback(_check_block_and_salt)
+        return d
+
+
+    def test_read_with_empty_mdmf_file(self):
+        # Some tests upload a file with no contents to test things
+        # unrelated to the actual handling of the content of the file.
+        # The reader should behave intelligently in these cases.
+        self.write_test_share_to_server("si1", empty=True)
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        # We should be able to get the encoding parameters, and they
+        # should be correct.
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            mr.get_encoding_parameters())
+        def _check_encoding_parameters(params):
+            self.failUnlessEqual(len(params), 4)
+            k, n, segsize, datalen = params
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segsize, 0)
+            self.failUnlessEqual(datalen, 0)
+        d.addCallback(_check_encoding_parameters)
+
+        # We should not be able to fetch a block, since there are no
+        # blocks to fetch
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "get block on empty file",
+                            None,
+                            mr.get_block_and_salt, 0))
+        return d
+
+
+    def test_read_with_empty_sdmf_file(self):
+        self.write_sdmf_share_to_server("si1", empty=True)
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        # We should be able to get the encoding parameters, and they
+        # should be correct
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            mr.get_encoding_parameters())
+        def _check_encoding_parameters(params):
+            self.failUnlessEqual(len(params), 4)
+            k, n, segsize, datalen = params
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            self.failUnlessEqual(segsize, 0)
+            self.failUnlessEqual(datalen, 0)
+        d.addCallback(_check_encoding_parameters)
+
+        # It does not make sense to get a block in this format, so we
+        # should not be able to.
+        d.addCallback(lambda ignored:
+            self.shouldFail(LayoutInvalid, "get block on an empty file",
+                            None,
+                            mr.get_block_and_salt, 0))
+        return d
+
+
+    def test_verinfo_with_sdmf_file(self):
+        self.write_sdmf_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        # We should be able to get the version information.
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            mr.get_verinfo())
+        def _check_verinfo(verinfo):
+            self.failUnless(verinfo)
+            self.failUnlessEqual(len(verinfo), 9)
+            (seqnum,
+             root_hash,
+             salt,
+             segsize,
+             datalen,
+             k,
+             n,
+             prefix,
+             offsets) = verinfo
+            self.failUnlessEqual(seqnum, 0)
+            self.failUnlessEqual(root_hash, self.root_hash)
+            self.failUnlessEqual(salt, self.salt)
+            self.failUnlessEqual(segsize, 36)
+            self.failUnlessEqual(datalen, 36)
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            expected_prefix = struct.pack(">BQ32s16s BBQQ",
+                                          0,
+                                          seqnum,
+                                          root_hash,
+                                          salt,
+                                          k,
+                                          n,
+                                          segsize,
+                                          datalen)
+            self.failUnlessEqual(prefix, expected_prefix)
+            self.failUnlessEqual(offsets, self.offsets)
+        d.addCallback(_check_verinfo)
+        return d
+
+
+    def test_verinfo_with_mdmf_file(self):
+        self.write_test_share_to_server("si1")
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            mr.get_verinfo())
+        def _check_verinfo(verinfo):
+            self.failUnless(verinfo)
+            self.failUnlessEqual(len(verinfo), 9)
+            (seqnum,
+             root_hash,
+             IV,
+             segsize,
+             datalen,
+             k,
+             n,
+             prefix,
+             offsets) = verinfo
+            self.failUnlessEqual(seqnum, 0)
+            self.failUnlessEqual(root_hash, self.root_hash)
+            self.failIf(IV)
+            self.failUnlessEqual(segsize, 6)
+            self.failUnlessEqual(datalen, 36)
+            self.failUnlessEqual(k, 3)
+            self.failUnlessEqual(n, 10)
+            expected_prefix = struct.pack(">BQ32s BBQQ",
+                                          1,
+                                          seqnum,
+                                          root_hash,
+                                          k,
+                                          n,
+                                          segsize,
+                                          datalen)
+            self.failUnlessEqual(prefix, expected_prefix)
+            self.failUnlessEqual(offsets, self.offsets)
+        d.addCallback(_check_verinfo)
+        return d
+
+
+    def test_reader_queue(self):
+        self.write_test_share_to_server('si1')
+        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
+        d1 = mr.get_block_and_salt(0, queue=True)
+        d2 = mr.get_blockhashes(queue=True)
+        d3 = mr.get_sharehashes(queue=True)
+        d4 = mr.get_signature(queue=True)
+        d5 = mr.get_verification_key(queue=True)
+        dl = defer.DeferredList([d1, d2, d3, d4, d5])
+        mr.flush()
+        def _print(results):
+            self.failUnlessEqual(len(results), 5)
+            # We have one read for version information and offsets, and
+            # one for everything else.
+            self.failUnlessEqual(self.rref.read_count, 2)
+            block, salt = results[0][1] # results[0][0] is a boolean that
+                                        # says whether or not the operation
+                                        # worked.
+            self.failUnlessEqual(self.block, block)
+            self.failUnlessEqual(self.salt, salt)
+
+            blockhashes = results[1][1]
+            self.failUnlessEqual(self.block_hash_tree, blockhashes)
+
+            sharehashes = results[2][1]
+            self.failUnlessEqual(self.share_hash_chain, sharehashes)
+
+            signature = results[3][1]
+            self.failUnlessEqual(self.signature, signature)
+
+            verification_key = results[4][1]
+            self.failUnlessEqual(self.verification_key, verification_key)
+        dl.addCallback(_print)
+        return dl
+
+
+    def test_sdmf_writer(self):
+        # Go through the motions of writing an SDMF share to the storage
+        # server. Then read the storage server to see that the share got
+        # written in the way that we think it should have. 
+
+        # We do this first so that the necessary instance variables get
+        # set the way we want them for the tests below.
+        data = self.build_test_sdmf_share()
+        sdmfr = SDMFSlotWriteProxy(0,
+                                   self.rref,
+                                   "si1",
+                                   self.secrets,
+                                   0, 3, 10, 36, 36)
+        # Put the block and salt.
+        sdmfr.put_block(self.blockdata, 0, self.salt)
+
+        # Put the encprivkey
+        sdmfr.put_encprivkey(self.encprivkey)
+
+        # Put the block and share hash chains
+        sdmfr.put_blockhashes(self.block_hash_tree)
+        sdmfr.put_sharehashes(self.share_hash_chain)
+        sdmfr.put_root_hash(self.root_hash)
+
+        # Put the signature
+        sdmfr.put_signature(self.signature)
+
+        # Put the verification key
+        sdmfr.put_verification_key(self.verification_key)
+
+        # Now check to make sure that nothing has been written yet.
+        self.failUnlessEqual(self.rref.write_count, 0)
+
+        # Now finish publishing
+        d = sdmfr.finish_publishing()
+        def _then(ignored):
+            self.failUnlessEqual(self.rref.write_count, 1)
+            read = self.ss.remote_slot_readv
+            self.failUnlessEqual(read("si1", [0], [(0, len(data))]),
+                                 {0: [data]})
+        d.addCallback(_then)
+        return d
+
+
+    def test_sdmf_writer_preexisting_share(self):
+        data = self.build_test_sdmf_share()
+        self.write_sdmf_share_to_server("si1")
+
+        # Now there is a share on the storage server. To successfully
+        # write, we need to set the checkstring correctly. When we
+        # don't, no write should occur.
+        sdmfw = SDMFSlotWriteProxy(0,
+                                   self.rref,
+                                   "si1",
+                                   self.secrets,
+                                   1, 3, 10, 36, 36)
+        sdmfw.put_block(self.blockdata, 0, self.salt)
+
+        # Put the encprivkey
+        sdmfw.put_encprivkey(self.encprivkey)
+
+        # Put the block and share hash chains
+        sdmfw.put_blockhashes(self.block_hash_tree)
+        sdmfw.put_sharehashes(self.share_hash_chain)
+
+        # Put the root hash
+        sdmfw.put_root_hash(self.root_hash)
+
+        # Put the signature
+        sdmfw.put_signature(self.signature)
+
+        # Put the verification key
+        sdmfw.put_verification_key(self.verification_key)
+
+        # We shouldn't have a checkstring yet
+        self.failUnlessEqual(sdmfw.get_checkstring(), "")
+
+        d = sdmfw.finish_publishing()
+        def _then(results):
+            self.failIf(results[0])
+            # this is the correct checkstring
+            self._expected_checkstring = results[1][0][0]
+            return self._expected_checkstring
+
+        d.addCallback(_then)
+        d.addCallback(sdmfw.set_checkstring)
+        d.addCallback(lambda ignored:
+            sdmfw.get_checkstring())
+        d.addCallback(lambda checkstring:
+            self.failUnlessEqual(checkstring, self._expected_checkstring))
+        d.addCallback(lambda ignored:
+            sdmfw.finish_publishing())
+        def _then_again(results):
+            self.failUnless(results[0])
+            read = self.ss.remote_slot_readv
+            self.failUnlessEqual(read("si1", [0], [(1, 8)]),
+                                 {0: [struct.pack(">Q", 1)]})
+            self.failUnlessEqual(read("si1", [0], [(9, len(data) - 9)]),
+                                 {0: [data[9:]]})
+        d.addCallback(_then_again)
+        return d
+
+
 class Stats(unittest.TestCase):
 
     def setUp(self):
}
[mutable/retrieve.py: Modify the retrieval process to support MDMF
Kevan Carstensen <kevan@isnotajoke.com>**20100819003409
 Ignore-this: c03f4e41aaa0366a9bf44847f2caf9db
 
 The logic behind a mutable file download had to be adapted to work with
 segmented mutable files; this patch performs those adaptations. It also
 exposes some decoding and decrypting functionality to make partial-file
 updates a little easier, and supports efficient random-access downloads
 of parts of an MDMF file.
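 
 As a sketch of the new entry point (the precise signature is defined
 in the hunks below), a caller that wants only part of one version of
 a mutable file can now do something like:
 
   r = Retrieve(filenode, servermap, verinfo)
   d = r.download(consumer, offset=10, size=20)
 
 where consumer provides IConsumer and is handed just bytes 10 through
 29 of the plaintext.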
] {
hunk ./src/allmydata/mutable/retrieve.py 2
 
-import struct, time
+import time
 from itertools import count
 from zope.interface import implements
 from twisted.internet import defer
merger 0.0 (
hunk ./src/allmydata/mutable/retrieve.py 10
+from allmydata.util.dictutil import DictOfSets
hunk ./src/allmydata/mutable/retrieve.py 7
-from foolscap.api import DeadReferenceError, eventually, fireEventually
-from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError
-from allmydata.util import hashutil, idlib, log
+from twisted.internet.interfaces import IPushProducer, IConsumer
+from foolscap.api import eventually, fireEventually
+from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \
+                                 MDMF_VERSION, SDMF_VERSION
+from allmydata.util import hashutil, log, mathutil
)
hunk ./src/allmydata/mutable/retrieve.py 16
 from pycryptopp.publickey import rsa
 
 from allmydata.mutable.common import CorruptShareError, UncoordinatedWriteError
-from allmydata.mutable.layout import SIGNED_PREFIX, unpack_share_data
+from allmydata.mutable.layout import MDMFSlotReadProxy
 
 class RetrieveStatus:
     implements(IRetrieveStatus)
hunk ./src/allmydata/mutable/retrieve.py 83
     # times, and each will have a separate response chain. However the
     # Retrieve object will remain tied to a specific version of the file, and
     # will use a single ServerMap instance.
+    implements(IPushProducer)
 
hunk ./src/allmydata/mutable/retrieve.py 85
-    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False):
+    def __init__(self, filenode, servermap, verinfo, fetch_privkey=False,
+                 verify=False):
         self._node = filenode
         assert self._node.get_pubkey()
         self._storage_index = filenode.get_storage_index()
hunk ./src/allmydata/mutable/retrieve.py 104
         self.verinfo = verinfo
         # during repair, we may be called upon to grab the private key, since
         # it wasn't picked up during a verify=False checker run, and we'll
-        # need it for repair to generate the a new version.
-        self._need_privkey = fetch_privkey
-        if self._node.get_privkey():
+        # need it for repair to generate a new version.
+        self._need_privkey = fetch_privkey or verify
+        if self._node.get_privkey() and not verify:
             self._need_privkey = False
 
hunk ./src/allmydata/mutable/retrieve.py 109
+        if self._need_privkey:
+            # TODO: Evaluate the need for this. We'll use it if we want
+            # to limit how many queries are on the wire for the privkey
+            # at once.
+            self._privkey_query_markers = [] # one Marker for each time we've
+                                             # tried to get the privkey.
+
+        # verify means that we are using the downloader logic to verify all
+        # of our shares. This tells the downloader a few things.
+        # 
+        # 1. We need to download all of the shares.
+        # 2. We don't need to decode or decrypt the shares, since our
+        #    caller doesn't care about the plaintext, only the
+        #    information about which shares are or are not valid.
+        # 3. When we are validating readers, we need to validate the
+        #    signature on the prefix. (Do we? We already do this in the
+        #    servermap update.)
+        self._verify = bool(verify)
+
         self._status = RetrieveStatus()
         self._status.set_storage_index(self._storage_index)
         self._status.set_helper(False)
hunk ./src/allmydata/mutable/retrieve.py 139
          offsets_tuple) = self.verinfo
         self._status.set_size(datalength)
         self._status.set_encoding(k, N)
+        self.readers = {}
+        self._paused = False
+        self._pause_deferred = None
+        self._offset = None
+        self._read_length = None
+        self.log("got seqnum %d" % self.verinfo[0])
+
 
     def get_status(self):
         return self._status
hunk ./src/allmydata/mutable/retrieve.py 157
             kwargs["facility"] = "tahoe.mutable.retrieve"
         return log.msg(*args, **kwargs)
 
-    def download(self):
+
+    ###################
+    # IPushProducer
+
+    def pauseProducing(self):
+        """
+        I am called by my download target if we have produced too much
+        data for it to handle. I make the downloader stop producing new
+        data until my resumeProducing method is called.
+        """
+        if self._paused:
+            return
+
+        self._old_status = self._status.get_status()
+        self._status.set_status("Paused")
+
+        # fired when the download is unpaused
+        self._pause_deferred = defer.Deferred()
+        self._paused = True
+
+
+    def resumeProducing(self):
+        """
+        I am called by my download target once it is ready to begin
+        receiving data again.
+        """
+        if not self._paused:
+            return
+
+        self._paused = False
+        p = self._pause_deferred
+        self._pause_deferred = None
+        self._status.set_status(self._old_status)
+
+        eventually(p.callback, None)
+
+
+    def _check_for_paused(self, res):
+        """
+        I am called just before a write to the consumer. I return a
+        Deferred that eventually fires with the data that is to be
+        written to the consumer. If the download has not been paused,
+        the Deferred fires immediately. Otherwise, the Deferred fires
+        when the downloader is unpaused.
+        """
+        if self._paused:
+            d = defer.Deferred()
+            self._pause_deferred.addCallback(lambda ignored: d.callback(res))
+            return d
+        return defer.succeed(res)
+
+
+    def download(self, consumer=None, offset=0, size=None):
+        assert IConsumer.providedBy(consumer) or self._verify
+
+        if consumer:
+            self._consumer = consumer
+            # we provide IPushProducer, so streaming=True, per
+            # IConsumer.
+            self._consumer.registerProducer(self, streaming=True)
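+        # offset and size select a byte range of the plaintext: the
+        # download starts at `offset` and proceeds for `size` bytes,
+        # or through the end of the file if size is None.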
+
         self._done_deferred = defer.Deferred()
         self._started = time.time()
         self._status.set_status("Retrieving Shares")
hunk ./src/allmydata/mutable/retrieve.py 222
 
+        self._offset = offset
+        self._read_length = size
+
         # first, which servers can we use?
         versionmap = self.servermap.make_versionmap()
         shares = versionmap[self.verinfo]
hunk ./src/allmydata/mutable/retrieve.py 232
         self.remaining_sharemap = DictOfSets()
         for (shnum, peerid, timestamp) in shares:
             self.remaining_sharemap.add(shnum, peerid)
+            # If the servermap update fetched anything, it fetched at least 1
+            # KiB, so we ask for that much.
+            # TODO: Change the cache methods to allow us to fetch all of the
+            # data that they have, then change this method to do that.
+            any_cache, timestamp = self._node._read_from_cache(self.verinfo,
+                                                               shnum,
+                                                               0,
+                                                               1000)
+            ss = self.servermap.connections[peerid]
+            reader = MDMFSlotReadProxy(ss,
+                                       self._storage_index,
+                                       shnum,
+                                       any_cache)
+            reader.peerid = peerid
+            self.readers[shnum] = reader
+
 
         self.shares = {} # maps shnum to validated blocks
hunk ./src/allmydata/mutable/retrieve.py 250
+        self._active_readers = [] # list of active readers for this dl.
+        self._validated_readers = set() # set of readers that we have
+                                        # validated the prefix of
+        self._block_hash_trees = {} # shnum => hashtree
 
         # how many shares do we need?
hunk ./src/allmydata/mutable/retrieve.py 256
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         N,
+         prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 265
-        assert len(self.remaining_sharemap) >= k
-        # we start with the lowest shnums we have available, since FEC is
-        # faster if we're using "primary shares"
-        self.active_shnums = set(sorted(self.remaining_sharemap.keys())[:k])
-        for shnum in self.active_shnums:
-            # we use an arbitrary peer who has the share. If shares are
-            # doubled up (more than one share per peer), we could make this
-            # run faster by spreading the load among multiple peers. But the
-            # algorithm to do that is more complicated than I want to write
-            # right now, and a well-provisioned grid shouldn't have multiple
-            # shares per peer.
-            peerid = list(self.remaining_sharemap[shnum])[0]
-            self.get_data(shnum, peerid)
 
hunk ./src/allmydata/mutable/retrieve.py 266
-        # control flow beyond this point: state machine. Receiving responses
-        # from queries is the input. We might send out more queries, or we
-        # might produce a result.
 
hunk ./src/allmydata/mutable/retrieve.py 267
+        # We need one share hash tree for the entire file; its leaves
+        # are the roots of the block hash trees for the shares that
+        # comprise it, and its root is in the verinfo.
+        self.share_hash_tree = hashtree.IncompleteHashTree(N)
+        self.share_hash_tree.set_hashes({0: root_hash})
+
+        # This will set up both the segment decoder and the tail segment
+        # decoder, as well as a variety of other instance variables that
+        # the download process will use.
+        self._setup_encoding_parameters()
+        assert len(self.remaining_sharemap) >= k
+
+        self.log("starting download")
+        self._paused = False
+        self._started_fetching = time.time()
+
+        self._add_active_peers()
+        # The download process beyond this is a state machine.
+        # _add_active_peers will select the peers that we want to use
+        # for the download, and then attempt to start downloading. After
+        # each segment, it will check for doneness, reacting to broken
+        # peers and corrupt shares as necessary. If it runs out of good
+        # peers before downloading all of the segments, _done_deferred
+        # will errback.  Otherwise, it will eventually callback with the
+        # contents of the mutable file.
         return self._done_deferred
 
hunk ./src/allmydata/mutable/retrieve.py 294
-    def get_data(self, shnum, peerid):
-        self.log(format="sending sh#%(shnum)d request to [%(peerid)s]",
-                 shnum=shnum,
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        ss = self.servermap.connections[peerid]
-        started = time.time()
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+
+    def decode(self, blocks_and_salts, segnum):
+        """
+        I am a helper method that the mutable file update process uses
+        as a shortcut to decode and decrypt the segments that it needs
+        to fetch in order to perform a file update. I take in a
+        collection of blocks and salts, and pick some of those to make a
+        segment with. I return the plaintext associated with that
+        segment.
+        """
+        # shnum => block hash tree. Unused, but _setup_encoding_parameters
+        # will want to set this.
+        # XXX: Make it so that it won't set this if we're just decoding.
+        self._block_hash_trees = {}
+        self._setup_encoding_parameters()
+        # This is the form expected by decode.
+        blocks_and_salts = blocks_and_salts.items()
+        blocks_and_salts = [(True, [d]) for d in blocks_and_salts]
+
+        d = self._decode_blocks(blocks_and_salts, segnum)
+        d.addCallback(self._decrypt_segment)
+        return d
+
+
+    def _setup_encoding_parameters(self):
+        """
+        I set up the encoding parameters, including k, n, the number
+        of segments associated with this file, and the segment decoder.
+        """
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         n,
+         known_prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 332
-        offsets = dict(offsets_tuple)
+        self._required_shares = k
+        self._total_shares = n
+        self._segment_size = segsize
+        self._data_length = datalength
 
hunk ./src/allmydata/mutable/retrieve.py 337
-        # we read the checkstring, to make sure that the data we grab is from
-        # the right version.
-        readv = [ (0, struct.calcsize(SIGNED_PREFIX)) ]
+        if not IV:
+            self._version = MDMF_VERSION
+        else:
+            self._version = SDMF_VERSION
 
hunk ./src/allmydata/mutable/retrieve.py 342
-        # We also read the data, and the hashes necessary to validate them
-        # (share_hash_chain, block_hash_tree, share_data). We don't read the
-        # signature or the pubkey, since that was handled during the
-        # servermap phase, and we'll be comparing the share hash chain
-        # against the roothash that was validated back then.
+        if datalength and segsize:
+            self._num_segments = mathutil.div_ceil(datalength, segsize)
+            self._tail_data_size = datalength % segsize
+        else:
+            self._num_segments = 0
+            self._tail_data_size = 0
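+        # E.g. the 36-byte test file with 6-byte segments yields
+        # div_ceil(36, 6) = 6 segments and a tail_data_size of
+        # 36 % 6 = 0, which later code replaces with a full segment.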
 
hunk ./src/allmydata/mutable/retrieve.py 349
-        readv.append( (offsets['share_hash_chain'],
-                       offsets['enc_privkey'] - offsets['share_hash_chain'] ) )
+        self._segment_decoder = codec.CRSDecoder()
+        self._segment_decoder.set_params(segsize, k, n)
 
hunk ./src/allmydata/mutable/retrieve.py 352
-        # if we need the private key (for repair), we also fetch that
-        if self._need_privkey:
-            readv.append( (offsets['enc_privkey'],
-                           offsets['EOF'] - offsets['enc_privkey']) )
+        if not self._tail_data_size:
+            self._tail_data_size = segsize
+
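+        # Pad the tail out to the next multiple of k so that, like
+        # every other segment, it splits evenly into blocks across the
+        # k required shares: e.g. a 4-byte tail with k = 3 is treated
+        # as a 6-byte tail segment.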
+        self._tail_segment_size = mathutil.next_multiple(self._tail_data_size,
+                                                         self._required_shares)
+        if self._tail_segment_size == self._segment_size:
+            self._tail_decoder = self._segment_decoder
+        else:
+            self._tail_decoder = codec.CRSDecoder()
+            self._tail_decoder.set_params(self._tail_segment_size,
+                                          self._required_shares,
+                                          self._total_shares)
 
hunk ./src/allmydata/mutable/retrieve.py 365
-        m = Marker()
-        self._outstanding_queries[m] = (peerid, shnum, started)
+        self.log("got encoding parameters: "
+                 "k: %d "
+                 "n: %d "
+                 "%d segments of %d bytes each (%d byte tail segment)" % \
+                 (k, n, self._num_segments, self._segment_size,
+                  self._tail_segment_size))
 
         # ask the cache first
         got_from_cache = False
merger 0.0 (
hunk ./src/allmydata/mutable/retrieve.py 376
-            (data, timestamp) = self._node._read_from_cache(self.verinfo, shnum,
-                                                            offset, length)
+            data = self._node._read_from_cache(self.verinfo, shnum, offset, length)
hunk ./src/allmydata/mutable/retrieve.py 372
-        # ask the cache first
-        got_from_cache = False
-        datavs = []
-        for (offset, length) in readv:
-            (data, timestamp) = self._node._read_from_cache(self.verinfo, shnum,
-                                                            offset, length)
-            if data is not None:
-                datavs.append(data)
-        if len(datavs) == len(readv):
-            self.log("got data from cache")
-            got_from_cache = True
-            d = fireEventually({shnum: datavs})
-            # datavs is a dict mapping shnum to a pair of strings
+        for i in xrange(self._total_shares):
+            # So we don't have to do this later.
+            self._block_hash_trees[i] = hashtree.IncompleteHashTree(self._num_segments)
+
+        # Our last task is to tell the downloader where to start and
+        # where to stop. We use three parameters for that:
+        #   - self._start_segment: the segment that we need to start
+        #     downloading from. 
+        #   - self._current_segment: the next segment that we need to
+        #     download.
+        #   - self._last_segment: The last segment that we were asked to
+        #     download.
+        #
+        #  We say that the download is complete when
+        #  self._current_segment > self._last_segment. We use
+        #  self._start_segment and self._last_segment to know when to
+        #  strip things off of segments, and how much to strip.
+        if self._offset:
+            self.log("got offset: %d" % self._offset)
+            # our start segment is the first segment containing the
+            # offset we were given.
+            start = self._offset // self._segment_size
+            # (div_ceil(offset, segsize) - 1 is wrong when the offset
+            # falls exactly on a segment boundary; floor division
+            # always yields the segment containing the offset.)
+
+            assert start < self._num_segments
+            self._start_segment = start
+            self.log("got start segment: %d" % self._start_segment)
)
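
To make the start-segment arithmetic concrete, a small sketch with assumed numbers (not from the patch):

    # segment_size = 300; segments cover bytes [0,299], [300,599], ...
    assert 350 // 300 == 1   # offset 350 lives in segment 1
    assert 600 // 300 == 2   # a boundary offset starts a new segment
    # div_ceil(600, 300) - 1 would give 1 here, one segment too early,
    # which is why floor division is used above.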
hunk ./src/allmydata/mutable/retrieve.py 386
             d = fireEventually({shnum: datavs})
             # datavs is a dict mapping shnum to a pair of strings
         else:
-            d = self._do_read(ss, peerid, self._storage_index, [shnum], readv)
-        self.remaining_sharemap.discard(shnum, peerid)
+            self._start_segment = 0
 
hunk ./src/allmydata/mutable/retrieve.py 388
-        d.addCallback(self._got_results, m, peerid, started, got_from_cache)
-        d.addErrback(self._query_failed, m, peerid)
-        # errors that aren't handled by _query_failed (and errors caused by
-        # _query_failed) get logged, but we still want to check for doneness.
-        def _oops(f):
-            self.log(format="problem in _query_failed for sh#%(shnum)d to %(peerid)s",
-                     shnum=shnum,
-                     peerid=idlib.shortnodeid_b2a(peerid),
-                     failure=f,
-                     level=log.WEIRD, umid="W0xnQA")
-        d.addErrback(_oops)
-        d.addBoth(self._check_for_done)
-        # any error during _check_for_done means the download fails. If the
-        # download is successful, _check_for_done will fire _done by itself.
-        d.addErrback(self._done)
-        d.addErrback(log.err)
-        return d # purely for testing convenience
 
hunk ./src/allmydata/mutable/retrieve.py 389
-    def _do_read(self, ss, peerid, storage_index, shnums, readv):
-        # isolate the callRemote to a separate method, so tests can subclass
-        # Publish and override it
-        d = ss.callRemote("slot_readv", storage_index, shnums, readv)
-        return d
+        if self._read_length:
+            # our end segment is the last segment containing part of the
+            # segment that we were asked to read.
+            self.log("got read length %d" % self._read_length)
+            end_data = self._offset + self._read_length
+            end = mathutil.div_ceil(end_data,
+                                    self._segment_size)
+            end -= 1
+            assert end < self._num_segments
+            self._last_segment = end
+            self.log("got end segment: %d" % self._last_segment)
+        else:
+            self._last_segment = self._num_segments - 1
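
The end-segment arithmetic, worked through with the same assumed numbers:

    from allmydata.util import mathutil

    # read_length = 400 starting at offset 350, segment_size = 300:
    end_data = 350 + 400                   # -> 750, one past the last
                                           #    byte we need
    end = mathutil.div_ceil(750, 300) - 1  # -> 2
    # byte 749 lives in segment 2 (bytes 600..899), so segments 1..2
    # are downloaded and the excess is stripped off later.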
 
hunk ./src/allmydata/mutable/retrieve.py 403
-    def remove_peer(self, peerid):
-        for shnum in list(self.remaining_sharemap.keys()):
-            self.remaining_sharemap.discard(shnum, peerid)
+        self._current_segment = self._start_segment
 
hunk ./src/allmydata/mutable/retrieve.py 405
-    def _got_results(self, datavs, marker, peerid, started, got_from_cache):
-        now = time.time()
-        elapsed = now - started
-        if not got_from_cache:
-            self._status.add_fetch_timing(peerid, elapsed)
-        self.log(format="got results (%(shares)d shares) from [%(peerid)s]",
-                 shares=len(datavs),
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        self._outstanding_queries.pop(marker, None)
-        if not self._running:
-            return
+    def _add_active_peers(self):
+        """
+        I populate self._active_readers with enough active readers to
+        retrieve the contents of this mutable file. I am called before
+        downloading starts, and (eventually) after each validation
+        error, connection error, or other problem in the download.
+        """
+        # TODO: It would be cool to investigate other heuristics for
+        # reader selection. For instance, the cost (in time the user
+        # spends waiting for their file) of selecting a really slow peer
+        # that happens to have a primary share is probably more than
+        # selecting a really fast peer that doesn't have a primary
+        # share. Maybe the servermap could be extended to provide this
+        # information; it could keep track of latency information while
+        # it gathers more important data, and then this routine could
+        # use that to select active readers.
+        #
+        # (these and other questions would be easier to answer with a
+        #  robust, configurable tahoe-lafs simulator, which modeled node
+        #  failures, differences in node speed, and other characteristics
+        #  that we expect storage servers to have.  You could have
+        #  presets for really stable grids (like allmydata.com),
+        #  friendnets, make it easy to configure your own settings, and
+        #  then simulate the effect of big changes on these use cases
+        #  instead of just reasoning about what the effect might be. Out
+        #  of scope for MDMF, though.)
 
hunk ./src/allmydata/mutable/retrieve.py 432
-        # note that we only ask for a single share per query, so we only
-        # expect a single share back. On the other hand, we use the extra
-        # shares if we get them.. seems better than an assert().
+        # We need at least self._required_shares readers to download a
+        # segment.
+        if self._verify:
+            needed = self._total_shares
+        else:
+            needed = self._required_shares - len(self._active_readers)
+        # XXX: Why don't format= log messages work here?
+        self.log("adding %d peers to the active peers list" % needed)
 
hunk ./src/allmydata/mutable/retrieve.py 441
-        for shnum,datav in datavs.items():
-            (prefix, hash_and_data) = datav[:2]
-            try:
-                self._got_results_one_share(shnum, peerid,
-                                            prefix, hash_and_data)
-            except CorruptShareError, e:
-                # log it and give the other shares a chance to be processed
-                f = failure.Failure()
-                self.log(format="bad share: %(f_value)s",
-                         f_value=str(f.value), failure=f,
-                         level=log.WEIRD, umid="7fzWZw")
-                self.notify_server_corruption(peerid, shnum, str(e))
-                self.remove_peer(peerid)
-                self.servermap.mark_bad_share(peerid, shnum, prefix)
-                self._bad_shares.add( (peerid, shnum) )
-                self._status.problems[peerid] = f
-                self._last_failure = f
-                pass
-            if self._need_privkey and len(datav) > 2:
-                lp = None
-                self._try_to_validate_privkey(datav[2], peerid, shnum, lp)
-        # all done!
+        # We favor lower numbered shares, since FEC is faster with
+        # primary shares than with other shares, and lower-numbered
+        # shares are more likely to be primary than higher numbered
+        # shares.
+        active_shnums = set(self.remaining_sharemap.keys())
+        # We shouldn't consider adding shares that we already have; this
+        # will cause problems later.
+        active_shnums -= set([reader.shnum for reader in self._active_readers])
+        # sort after the set difference; building a set from a sorted
+        # list discards the ordering, so we must sort at the end.
+        active_shnums = sorted(active_shnums)[:needed]
+        if len(active_shnums) < needed and not self._verify:
+            # We don't have enough readers to retrieve the file; fail.
+            return self._failed()
 
hunk ./src/allmydata/mutable/retrieve.py 454
-    def notify_server_corruption(self, peerid, shnum, reason):
-        ss = self.servermap.connections[peerid]
-        ss.callRemoteOnly("advise_corrupt_share",
-                          "mutable", self._storage_index, shnum, reason)
+        for shnum in active_shnums:
+            self._active_readers.append(self.readers[shnum])
+            self.log("added reader for share %d" % shnum)
+        assert len(self._active_readers) >= self._required_shares
+        # Conceptually, this is part of the _add_active_peers step. It
+        # validates the prefixes of newly added readers to make sure
+        # that they match what we are expecting for self.verinfo. If
+        # validation is successful, _validate_active_prefixes will call
+        # _download_current_segment for us. If validation is
+        # unsuccessful, then _validate_prefixes will remove the peer and
+        # call _add_active_peers again, where we will attempt to rectify
+        # the problem by choosing another peer.
+        return self._validate_active_prefixes()
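
A minimal sketch of the selection policy above, with hypothetical share numbers:

    k = 3
    remaining = set([0, 1, 4, 7, 9])   # shares still on the grid
    active = set([1])                  # shares we are already reading
    needed = k - len(active)           # -> 2
    chosen = sorted(remaining - active)[:needed]
    assert chosen == [0, 4]            # lowest shnums win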
 
hunk ./src/allmydata/mutable/retrieve.py 468
-    def _got_results_one_share(self, shnum, peerid,
-                               got_prefix, got_hash_and_data):
-        self.log("_got_results: got shnum #%d from peerid %s"
-                 % (shnum, idlib.shortnodeid_b2a(peerid)))
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-         offsets_tuple) = self.verinfo
-        assert len(got_prefix) == len(prefix), (len(got_prefix), len(prefix))
-        if got_prefix != prefix:
-            msg = "someone wrote to the data since we read the servermap: prefix changed"
-            raise UncoordinatedWriteError(msg)
-        (share_hash_chain, block_hash_tree,
-         share_data) = unpack_share_data(self.verinfo, got_hash_and_data)
 
hunk ./src/allmydata/mutable/retrieve.py 469
-        assert isinstance(share_data, str)
-        # build the block hash tree. SDMF has only one leaf.
-        leaves = [hashutil.block_hash(share_data)]
-        t = hashtree.HashTree(leaves)
-        if list(t) != block_hash_tree:
-            raise CorruptShareError(peerid, shnum, "block hash tree failure")
-        share_hash_leaf = t[0]
-        t2 = hashtree.IncompleteHashTree(N)
-        # root_hash was checked by the signature
-        t2.set_hashes({0: root_hash})
-        try:
-            t2.set_hashes(hashes=share_hash_chain,
-                          leaves={shnum: share_hash_leaf})
-        except (hashtree.BadHashError, hashtree.NotEnoughHashesError,
-                IndexError), e:
-            msg = "corrupt hashes: %s" % (e,)
-            raise CorruptShareError(peerid, shnum, msg)
-        self.log(" data valid! len=%d" % len(share_data))
-        # each query comes down to this: placing validated share data into
-        # self.shares
-        self.shares[shnum] = share_data
+    def _validate_active_prefixes(self):
+        """
+        I check to make sure that the prefixes on the peers that I am
+        currently reading from match the prefix that we want to see, as
+        said in self.verinfo.
 
hunk ./src/allmydata/mutable/retrieve.py 475
-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
+        If I find that all of the active peers have acceptable prefixes,
+        I pass control to _download_current_segment, which will use
+        those peers to do cool things. If I find that some of the active
+        peers have unacceptable prefixes, I will remove them from active
+        peers (and from further consideration) and call
+        _add_active_peers to attempt to rectify the situation. I keep
+        track of which peers I have already validated so that I don't
+        need to do so again.
+        """
+        assert self._active_readers, "No more active readers"
 
hunk ./src/allmydata/mutable/retrieve.py 486
-        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
-        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
-        if alleged_writekey != self._node.get_writekey():
-            self.log("invalid privkey from %s shnum %d" %
-                     (idlib.nodeid_b2a(peerid)[:8], shnum),
-                     parent=lp, level=log.WEIRD, umid="YIw4tA")
-            return
+        ds = []
+        new_readers = set(self._active_readers) - self._validated_readers
+        self.log('validating %d newly-added active readers' % len(new_readers))
 
hunk ./src/allmydata/mutable/retrieve.py 490
-        # it's good
-        self.log("got valid privkey from shnum %d on peerid %s" %
-                 (shnum, idlib.shortnodeid_b2a(peerid)),
-                 parent=lp)
-        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
-        self._node._populate_encprivkey(enc_privkey)
-        self._node._populate_privkey(privkey)
-        self._need_privkey = False
+        for reader in new_readers:
+            # We force a remote read here -- otherwise, we are relying
+            # on cached data that we already verified as valid, and we
+            # won't detect an uncoordinated write that has occurred
+            # since the last servermap update.
+            d = reader.get_prefix(force_remote=True)
+            d.addCallback(self._try_to_validate_prefix, reader)
+            ds.append(d)
+        dl = defer.DeferredList(ds, consumeErrors=True)
+        def _check_results(results):
+            # Each result in results will be of the form (success, msg).
+            # We don't care about msg, but success will tell us whether
+            # or not the checkstring validated. If it didn't, we need to
+            # remove the offending (peer,share) from our active readers,
+            # and ensure that active readers is again populated.
+            bad_readers = []
+            for i, result in enumerate(results):
+                if not result[0]:
+                    reader = self._active_readers[i]
+                    f = result[1]
+                    assert isinstance(f, failure.Failure)
 
hunk ./src/allmydata/mutable/retrieve.py 512
-    def _query_failed(self, f, marker, peerid):
-        self.log(format="query to [%(peerid)s] failed",
-                 peerid=idlib.shortnodeid_b2a(peerid),
-                 level=log.NOISY)
-        self._status.problems[peerid] = f
-        self._outstanding_queries.pop(marker, None)
-        if not self._running:
-            return
-        self._last_failure = f
-        self.remove_peer(peerid)
-        level = log.WEIRD
-        if f.check(DeadReferenceError):
-            level = log.UNUSUAL
-        self.log(format="error during query: %(f_value)s",
-                 f_value=str(f.value), failure=f, level=level, umid="gOJB5g")
+                    self.log("The reader %s failed to "
+                             "properly validate: %s" % \
+                             (reader, str(f.value)))
+                    bad_readers.append((reader, f))
+                else:
+                    reader = self._active_readers[i]
+                    self.log("the reader %s checks out, so we'll use it" % \
+                             reader)
+                    self._validated_readers.add(reader)
+                    # Each time we validate a reader, we check to see if
+                    # we need the private key. If we do, we politely ask
+                    # for it and then continue computing. If we find
+                    # that we haven't gotten it at the end of
+                    # segment decoding, then we'll take more drastic
+                    # measures.
+                    if self._need_privkey and not self._node.is_readonly():
+                        d = reader.get_encprivkey()
+                        d.addCallback(self._try_to_validate_privkey, reader)
+            if bad_readers:
+                # We do them all at once, or else we screw up list indexing.
+                for (reader, f) in bad_readers:
+                    self._mark_bad_share(reader, f)
+                if self._verify:
+                    if len(self._active_readers) >= self._required_shares:
+                        return self._download_current_segment()
+                    else:
+                        return self._failed()
+                else:
+                    return self._add_active_peers()
+            else:
+                return self._download_current_segment()
+            # (_download_current_segment asserts that it has enough
+            # active readers to fetch shares.)
+        dl.addCallback(_check_results)
+        return dl
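
The (success, value) pairs examined in _check_results are the standard result shape of a Twisted DeferredList built with consumeErrors=True; a self-contained sketch of the pattern:

    from twisted.internet import defer
    from twisted.python import failure

    dl = defer.DeferredList([defer.succeed("good prefix"),
                             defer.fail(ValueError("bad prefix"))],
                            consumeErrors=True)
    def _check(results):
        # results == [(True, 'good prefix'), (False, <Failure>)]
        for success, value in results:
            if not success:
                assert isinstance(value, failure.Failure)
    dl.addCallback(_check)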
 
hunk ./src/allmydata/mutable/retrieve.py 548
-    def _check_for_done(self, res):
-        # exit paths:
-        #  return : keep waiting, no new queries
-        #  return self._send_more_queries(outstanding) : send some more queries
-        #  fire self._done(plaintext) : download successful
-        #  raise exception : download fails
 
hunk ./src/allmydata/mutable/retrieve.py 549
-        self.log(format="_check_for_done: running=%(running)s, decoding=%(decoding)s",
-                 running=self._running, decoding=self._decoding,
-                 level=log.NOISY)
-        if not self._running:
-            return
-        if self._decoding:
-            return
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+    def _try_to_validate_prefix(self, prefix, reader):
+        """
+        I check that the prefix returned by a candidate server for
+        retrieval matches the prefix that the servermap knows about
+        (and, hence, the prefix that was validated earlier). If it does,
+        I return True, which means that I approve of the use of the
+        candidate server for segment retrieval. If it doesn't, I return
+        False, which means that another server must be chosen.
+        """
+        (seqnum,
+         root_hash,
+         IV,
+         segsize,
+         datalength,
+         k,
+         N,
+         known_prefix,
          offsets_tuple) = self.verinfo
hunk ./src/allmydata/mutable/retrieve.py 567
+        if known_prefix != prefix:
+            self.log("prefix from share %d doesn't match" % reader.shnum)
+            raise UncoordinatedWriteError("Mismatched prefix -- this could "
+                                          "indicate an uncoordinated write")
+        # Otherwise, we're okay -- no issues.
 
hunk ./src/allmydata/mutable/retrieve.py 573
-        if len(self.shares) < k:
-            # we don't have enough shares yet
-            return self._maybe_send_more_queries(k)
-        if self._need_privkey:
-            # we got k shares, but none of them had a valid privkey. TODO:
-            # look further. Adding code to do this is a bit complicated, and
-            # I want to avoid that complication, and this should be pretty
-            # rare (k shares with bitflips in the enc_privkey but not in the
-            # data blocks). If we actually do get here, the subsequent repair
-            # will fail for lack of a privkey.
-            self.log("got k shares but still need_privkey, bummer",
-                     level=log.WEIRD, umid="MdRHPA")
 
hunk ./src/allmydata/mutable/retrieve.py 574
-        # we have enough to finish. All the shares have had their hashes
-        # checked, so if something fails at this point, we don't know how
-        # to fix it, so the download will fail.
+    def _remove_reader(self, reader):
+        """
+        At various points, we will wish to remove a peer from
+        consideration and/or use. These include, but are not necessarily
+        limited to:
 
hunk ./src/allmydata/mutable/retrieve.py 580
-        self._decoding = True # avoid reentrancy
-        self._status.set_status("decoding")
-        now = time.time()
-        elapsed = now - self._started
-        self._status.timings["fetch"] = elapsed
+            - A connection error.
+            - A mismatched prefix (that is, a prefix that does not match
+              our conception of the version information string).
+            - A failing block hash, salt hash, or share hash, which can
+              indicate disk failure/bit flips, or network trouble.
 
hunk ./src/allmydata/mutable/retrieve.py 586
-        d = defer.maybeDeferred(self._decode)
-        d.addCallback(self._decrypt, IV, self._node.get_readkey())
-        d.addBoth(self._done)
-        return d # purely for test convenience
+        This method will do that. I will make sure that the
+        (shnum,reader) combination represented by my reader argument is
+        not used for anything else during this download. I will not
+        advise the reader of any corruption, something that my callers
+        may wish to do on their own.
+        """
+        # TODO: When you're done writing this, see if this is ever
+        # actually used for something that _mark_bad_share isn't. I have
+        # a feeling that they will be used for very similar things, and
+        # that having them both here is just going to be an epic amount
+        # of code duplication.
+        #
+        # (well, okay, not epic, but meaningful)
+        self.log("removing reader %s" % reader)
+        # Remove the reader from _active_readers
+        self._active_readers.remove(reader)
+        # TODO: self.readers.remove(reader)?
+        for shnum in list(self.remaining_sharemap.keys()):
+            self.remaining_sharemap.discard(shnum, reader.peerid)
 
hunk ./src/allmydata/mutable/retrieve.py 606
-    def _maybe_send_more_queries(self, k):
-        # we don't have enough shares yet. Should we send out more queries?
-        # There are some number of queries outstanding, each for a single
-        # share. If we can generate 'needed_shares' additional queries, we do
-        # so. If we can't, then we know this file is a goner, and we raise
-        # NotEnoughSharesError.
-        self.log(format=("_maybe_send_more_queries, have=%(have)d, k=%(k)d, "
-                         "outstanding=%(outstanding)d"),
-                 have=len(self.shares), k=k,
-                 outstanding=len(self._outstanding_queries),
-                 level=log.NOISY)
 
hunk ./src/allmydata/mutable/retrieve.py 607
-        remaining_shares = k - len(self.shares)
-        needed = remaining_shares - len(self._outstanding_queries)
-        if not needed:
-            # we have enough queries in flight already
+    def _mark_bad_share(self, reader, f):
+        """
+        I mark the (peerid, shnum) encapsulated by my reader argument as
+        a bad share, which means that it will not be used anywhere else.
 
hunk ./src/allmydata/mutable/retrieve.py 612
-            # TODO: but if they've been in flight for a long time, and we
-            # have reason to believe that new queries might respond faster
-            # (i.e. we've seen other queries come back faster, then consider
-            # sending out new queries. This could help with peers which have
-            # silently gone away since the servermap was updated, for which
-            # we're still waiting for the 15-minute TCP disconnect to happen.
-            self.log("enough queries are in flight, no more are needed",
-                     level=log.NOISY)
-            return
+        There are several reasons to want to mark something as a bad
+        share. These include:
+
+            - A connection error to the peer.
+            - A mismatched prefix (that is, a prefix that does not match
+              our local conception of the version information string).
+            - A failing block hash, salt hash, share hash, or other
+              integrity check.
 
hunk ./src/allmydata/mutable/retrieve.py 621
-        outstanding_shnums = set([shnum
-                                  for (peerid, shnum, started)
-                                  in self._outstanding_queries.values()])
-        # prefer low-numbered shares, they are more likely to be primary
-        available_shnums = sorted(self.remaining_sharemap.keys())
-        for shnum in available_shnums:
-            if shnum in outstanding_shnums:
-                # skip ones that are already in transit
-                continue
-            if shnum not in self.remaining_sharemap:
-                # no servers for that shnum. note that DictOfSets removes
-                # empty sets from the dict for us.
-                continue
-            peerid = list(self.remaining_sharemap[shnum])[0]
-            # get_data will remove that peerid from the sharemap, and add the
-            # query to self._outstanding_queries
-            self._status.set_status("Retrieving More Shares")
-            self.get_data(shnum, peerid)
-            needed -= 1
-            if not needed:
+        This method will ensure that readers that we wish to mark bad
+        (for these reasons or other reasons) are not used for the rest
+        of the download. Additionally, it will attempt to tell the
+        remote peer (with no guarantee of success) that its share is
+        corrupt.
+        """
+        self.log("marking share %d on server %s as bad" % \
+                 (reader.shnum, reader))
+        prefix = self.verinfo[-2]
+        self.servermap.mark_bad_share(reader.peerid,
+                                      reader.shnum,
+                                      prefix)
+        self._remove_reader(reader)
+        self._bad_shares.add((reader.peerid, reader.shnum, f))
+        self._status.problems[reader.peerid] = f
+        self._last_failure = f
+        self.notify_server_corruption(reader.peerid, reader.shnum,
+                                      str(f.value))
+
+
+    def _download_current_segment(self):
+        """
+        I download, validate, decode, decrypt, and assemble the segment
+        that this Retrieve is currently responsible for downloading.
+        """
+        assert len(self._active_readers) >= self._required_shares
+        if self._current_segment <= self._last_segment:
+            d = self._process_segment(self._current_segment)
+        else:
+            d = defer.succeed(None)
+        d.addBoth(self._turn_barrier)
+        d.addCallback(self._check_for_done)
+        return d
+
+
+    def _turn_barrier(self, result):
+        """
+        I help the download process avoid the recursion limit issues
+        discussed in #237.
+        """
+        return fireEventually(result)
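
A sketch of why the turn barrier matters (process_segment is a hypothetical stand-in for the per-segment work): without the barrier, each segment's Deferred fires synchronously into the next, and a file with many segments can recurse past Python's limit. fireEventually reschedules the chain on a fresh reactor turn, letting the stack unwind.

    from foolscap.api import fireEventually

    def process_all_segments(segnums):
        # process_segment is a hypothetical per-segment worker
        d = fireEventually(None)
        for segnum in segnums:
            d.addCallback(lambda ign, s=segnum: process_segment(s))
            d.addCallback(fireEventually)   # unwind the stack here
        return d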
+
+
+    def _process_segment(self, segnum):
+        """
+        I download, validate, decode, and decrypt one segment of the
+        file that this Retrieve is retrieving. This means coordinating
+        the process of getting k blocks of that file, validating them,
+        assembling them into one segment with the decoder, and then
+        decrypting them.
+        """
+        self.log("processing segment %d" % segnum)
+
+        # TODO: The old code uses a marker. Should this code do that
+        # too? What did the Marker do?
+        assert len(self._active_readers) >= self._required_shares
+
+        # We need to ask each of our active readers for its block and
+        # salt. We will then validate those. If validation is
+        # successful, we will assemble the results into plaintext.
+        ds = []
+        for reader in self._active_readers:
+            started = time.time()
+            d = reader.get_block_and_salt(segnum, queue=True)
+            d2 = self._get_needed_hashes(reader, segnum)
+            dl = defer.DeferredList([d, d2], consumeErrors=True)
+            dl.addCallback(self._validate_block, segnum, reader, started)
+            dl.addErrback(self._validation_or_decoding_failed, [reader])
+            ds.append(dl)
+            reader.flush()
+        dl = defer.DeferredList(ds)
+        if self._verify:
+            dl.addCallback(lambda ignored: "")
+            dl.addCallback(self._set_segment)
+        else:
+            dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum)
+        return dl
+
+
+    def _maybe_decode_and_decrypt_segment(self, blocks_and_salts, segnum):
+        """
+        I take the results of fetching and validating the blocks from a
+        callback chain in another method. If the results are such that
+        they tell me that validation and fetching succeeded without
+        incident, I will proceed with decoding and decryption.
+        Otherwise, I will do nothing.
+        """
+        self.log("trying to decode and decrypt segment %d" % segnum)
+        failures = False
+        for block_and_salt in blocks_and_salts:
+            if not block_and_salt[0] or block_and_salt[1] is None:
+                self.log("some validation operations failed; not proceeding")
+                failures = True
                 break
hunk ./src/allmydata/mutable/retrieve.py 715
+        if not failures:
+            self.log("everything looks ok, building segment %d" % segnum)
+            d = self._decode_blocks(blocks_and_salts, segnum)
+            d.addCallback(self._decrypt_segment)
+            d.addErrback(self._validation_or_decoding_failed,
+                         self._active_readers)
+            # check to see whether we've been paused before writing
+            # anything.
+            d.addCallback(self._check_for_paused)
+            d.addCallback(self._set_segment)
+            return d
+        else:
+            return defer.succeed(None)
+
+
+    def _set_segment(self, segment):
+        """
+        Given a plaintext segment, I register that segment with the
+        target that is handling the file download.
+        """
+        self.log("got plaintext for segment %d" % self._current_segment)
+        if self._current_segment == self._start_segment:
+            # We're on the first segment. It's possible that we want
+            # only some part of the end of this segment, and that we
+            # just downloaded the whole thing to get that part. If so,
+            # we need to account for that and give the reader just the
+            # data that they want.
+            n = self._offset % self._segment_size
+            self.log("stripping %d bytes off of the first segment" % n)
+            self.log("original segment length: %d" % len(segment))
+            segment = segment[n:]
+            self.log("new segment length: %d" % len(segment))
+
+        if self._current_segment == self._last_segment and self._read_length is not None:
+            # We're on the last segment. It's possible that we only want
+            # part of the beginning of this segment, and that we
+            # downloaded the whole thing anyway. Make sure to give the
+            # caller only the portion of the segment that they want to
+            # receive.
+            extra = self._read_length
+            if self._start_segment != self._last_segment:
+                extra -= self._segment_size - \
+                            (self._offset % self._segment_size)
+            extra %= self._segment_size
+            self.log("original segment length: %d" % len(segment))
+            segment = segment[:extra]
+            self.log("new segment length: %d" % len(segment))
+            self.log("only taking %d bytes of the last segment" % extra)
+
+        if not self._verify:
+            self._consumer.write(segment)
+        else:
+            # we don't care about the plaintext if we are doing a verify.
+            segment = None
+        self._current_segment += 1
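
A worked example of the two stripping steps above, with illustrative numbers:

    # segment_size = 300, offset = 350, read_length = 400
    # -> start_segment = 1, last_segment = 2
    n = 350 % 300              # -> 50 bytes stripped off the front, so
                               #    segment 1 yields bytes 350..599
    extra = 400 - (300 - 50)   # -> 150; then extra %= 300 -> 150, so
                               #    segment 2 yields bytes 600..749
    assert (300 - n) + extra == 400   # exactly what was asked for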
 
hunk ./src/allmydata/mutable/retrieve.py 771
-        # at this point, we have as many outstanding queries as we can. If
-        # needed!=0 then we might not have enough to recover the file.
-        if needed:
-            format = ("ran out of peers: "
-                      "have %(have)d shares (k=%(k)d), "
-                      "%(outstanding)d queries in flight, "
-                      "need %(need)d more, "
-                      "found %(bad)d bad shares")
-            args = {"have": len(self.shares),
-                    "k": k,
-                    "outstanding": len(self._outstanding_queries),
-                    "need": needed,
-                    "bad": len(self._bad_shares),
-                    }
-            self.log(format=format,
-                     level=log.WEIRD, umid="ezTfjw", **args)
-            err = NotEnoughSharesError("%s, last failure: %s" %
-                                      (format % args, self._last_failure))
-            if self._bad_shares:
-                self.log("We found some bad shares this pass. You should "
-                         "update the servermap and try again to check "
-                         "more peers",
-                         level=log.WEIRD, umid="EFkOlA")
-                err.servermap = self.servermap
-            raise err
 
hunk ./src/allmydata/mutable/retrieve.py 772
+    def _validation_or_decoding_failed(self, f, readers):
+        """
+        I am called when a block or a salt fails to correctly validate, or when
+        the decryption or decoding operation fails for some reason.  I react to
+        this failure by notifying the remote server of corruption, and then
+        removing the remote peer from further activity.
+        """
+        assert isinstance(readers, list)
+        bad_shnums = [reader.shnum for reader in readers]
+
+        self.log("validation or decoding failed on share(s) %s, peer(s) %s, "
+                 "segment %d: %s" % \
+                 (bad_shnums, readers, self._current_segment, str(f)))
+        for reader in readers:
+            self._mark_bad_share(reader, f)
         return
 
hunk ./src/allmydata/mutable/retrieve.py 789
-    def _decode(self):
-        started = time.time()
-        (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-         offsets_tuple) = self.verinfo
 
hunk ./src/allmydata/mutable/retrieve.py 790
-        # shares_dict is a dict mapping shnum to share data, but the codec
-        # wants two lists.
-        shareids = []; shares = []
-        for shareid, share in self.shares.items():
+    def _validate_block(self, results, segnum, reader, started):
+        """
+        I validate a block from one share on a remote server.
+        """
+        # Grab the part of the block hash tree that is necessary to
+        # validate this block, then generate the block hash root.
+        self.log("validating share %d for segment %d" % (reader.shnum,
+                                                             segnum))
+        self._status.add_fetch_timing(reader.peerid, time.time() - started)
+        self._status.set_status("Validating blocks for segment %d" % segnum)
+        # Did we fail to fetch either of the things that we were
+        # supposed to? Fail if so.
+        if not results[0][0] or not results[1][0]:
+            # handled by the errback handler.
+
+            # These all get batched into one query, so the resulting
+            # failure should be the same for all of them, so we can just
+            # use the first one that we find.
+            f = [r[1] for r in results if not r[0]][0]
+            assert isinstance(f, failure.Failure)
+            raise CorruptShareError(reader.peerid,
+                                    reader.shnum,
+                                    "Connection error: %s" % str(f))
+
+        block_and_salt, block_and_sharehashes = results
+        block, salt = block_and_salt[1]
+        blockhashes, sharehashes = block_and_sharehashes[1]
+
+        blockhashes = dict(enumerate(blockhashes[1]))
+        self.log("the reader gave me the following blockhashes: %s" % \
+                 blockhashes.keys())
+        self.log("the reader gave me the following sharehashes: %s" % \
+                 sharehashes[1].keys())
+        bht = self._block_hash_trees[reader.shnum]
+
+        if bht.needed_hashes(segnum, include_leaf=True):
+            try:
+                bht.set_hashes(blockhashes)
+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
+                    IndexError), e:
+                raise CorruptShareError(reader.peerid,
+                                        reader.shnum,
+                                        "block hash tree failure: %s" % e)
+
+        if self._version == MDMF_VERSION:
+            blockhash = hashutil.block_hash(salt + block)
+        else:
+            blockhash = hashutil.block_hash(block)
+        # If this works without an error, then validation is
+        # successful.
+        try:
+            bht.set_hashes(leaves={segnum: blockhash})
+        except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
+                IndexError), e:
+            raise CorruptShareError(reader.peerid,
+                                    reader.shnum,
+                                    "block hash tree failure: %s" % e)
+
+        # Reaching this point means that we know that this segment
+        # is correct. Now we need to check to see whether the share
+        # hash chain is also correct. 
+        # SDMF wrote share hash chains that didn't contain the
+        # leaves, which would be produced from the block hash tree.
+        # So we need to validate the block hash tree first. If
+        # successful, then bht[0] will contain the root for the
+        # shnum, which will be a leaf in the share hash tree, which
+        # will allow us to validate the rest of the tree.
+        if self.share_hash_tree.needed_hashes(reader.shnum,
+                                              include_leaf=True) or \
+           self._verify:
+            try:
+                self.share_hash_tree.set_hashes(hashes=sharehashes[1],
+                                            leaves={reader.shnum: bht[0]})
+            except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \
+                    IndexError), e:
+                raise CorruptShareError(reader.peerid,
+                                        reader.shnum,
+                                        "corrupt hashes: %s" % e)
+
+        self.log('share %d is valid for segment %d' % (reader.shnum,
+                                                       segnum))
+        return {reader.shnum: (block, salt)}
+
+
+    def _get_needed_hashes(self, reader, segnum):
+        """
+        I get the hashes needed to validate segnum from the reader, then return
+        to my caller when this is done.
+        """
+        bht = self._block_hash_trees[reader.shnum]
+        needed = bht.needed_hashes(segnum, include_leaf=True)
+        # The root of the block hash tree is also a leaf in the share
+        # hash tree. So we don't need to fetch it from the remote
+        # server. In the case of files with one segment, this means that
+        # we won't fetch any block hash tree from the remote server,
+        # since the hash of each share of the file is the entire block
+        # hash tree, and is a leaf in the share hash tree. This is fine,
+        # since any share corruption will be detected in the share hash
+        # tree.
+        #needed.discard(0)
+        self.log("getting blockhashes for segment %d, share %d: %s" % \
+                 (segnum, reader.shnum, str(needed)))
+        d1 = reader.get_blockhashes(needed, queue=True, force_remote=True)
+        if self.share_hash_tree.needed_hashes(reader.shnum):
+            need = self.share_hash_tree.needed_hashes(reader.shnum)
+            self.log("also need sharehashes for share %d: %s" % (reader.shnum,
+                                                                 str(need)))
+            d2 = reader.get_sharehashes(need, queue=True, force_remote=True)
+        else:
+            d2 = defer.succeed({}) # the logic in the next method
+                                   # expects a dict
+        dl = defer.DeferredList([d1, d2], consumeErrors=True)
+        return dl
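
The two-level structure fetched here can be sketched in isolation (assuming allmydata.hashtree, with hypothetical sizes): the root of each share's block hash tree doubles as that share's leaf in the file-wide share hash tree.

    from allmydata import hashtree

    bht = hashtree.IncompleteHashTree(4)    # 4 segments in this share
    sht = hashtree.IncompleteHashTree(10)   # 10 shares in the file
    # to validate block 2 of share 5, we need these hash positions:
    block_hashes_needed = bht.needed_hashes(2, include_leaf=True)
    share_hashes_needed = sht.needed_hashes(5)
    # once bht is filled in, bht[0] (its root) becomes leaf 5 of sht:
    #   sht.set_hashes(hashes=fetched_share_hashes, leaves={5: bht[0]})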
+
+
+    def _decode_blocks(self, blocks_and_salts, segnum):
+        """
+        I take a list of k blocks and salts, and decode that into a
+        single encrypted segment.
+        """
+        d = {}
+        # We want to merge our dictionaries to the form
+        # {shnum: (block, salt)}
+        #
+        # The dictionaries come from _validate_block in that form, so
+        # we just need to merge them.
+        for block_and_salt in blocks_and_salts:
+            d.update(block_and_salt[1])
+
+        # All of these blocks should have the same salt; in SDMF, it is
+        # the file-wide IV, while in MDMF it is the per-segment salt. In
+        # either case, we just need to get one of them and use it.
+        #
+        # d.items()[0] is like (shnum, (block, salt))
+        # d.items()[0][1] is like (block, salt)
+        # d.items()[0][1][1] is the salt.
+        salt = d.items()[0][1][1]
+        # Next, extract just the blocks from the dict. We'll use the
+        # salt in the next step.
+        share_and_shareids = [(k, v[0]) for k, v in d.items()]
+        d2 = dict(share_and_shareids)
+        shareids = []
+        shares = []
+        for shareid, share in d2.items():
             shareids.append(shareid)
             shares.append(share)
 
hunk ./src/allmydata/mutable/retrieve.py 938
-        assert len(shareids) >= k, len(shareids)
+        self._status.set_status("Decoding")
+        started = time.time()
+        assert len(shareids) >= self._required_shares, len(shareids)
         # zfec really doesn't want extra shares
hunk ./src/allmydata/mutable/retrieve.py 942
-        shareids = shareids[:k]
-        shares = shares[:k]
-
-        fec = codec.CRSDecoder()
-        fec.set_params(segsize, k, N)
-
-        self.log("params %s, we have %d shares" % ((segsize, k, N), len(shares)))
-        self.log("about to decode, shareids=%s" % (shareids,))
-        d = defer.maybeDeferred(fec.decode, shares, shareids)
-        def _done(buffers):
-            self._status.timings["decode"] = time.time() - started
-            self.log(" decode done, %d buffers" % len(buffers))
+        shareids = shareids[:self._required_shares]
+        shares = shares[:self._required_shares]
+        self.log("decoding segment %d" % segnum)
+        if segnum == self._num_segments - 1:
+            d = defer.maybeDeferred(self._tail_decoder.decode, shares, shareids)
+        else:
+            d = defer.maybeDeferred(self._segment_decoder.decode, shares, shareids)
+        def _process(buffers):
             segment = "".join(buffers)
hunk ./src/allmydata/mutable/retrieve.py 951
+            self.log(format="now decoding segment %(segnum)s of %(numsegs)s",
+                     segnum=segnum,
+                     numsegs=self._num_segments,
+                     level=log.NOISY)
             self.log(" joined length %d, datalength %d" %
hunk ./src/allmydata/mutable/retrieve.py 956
-                     (len(segment), datalength))
-            segment = segment[:datalength]
+                     (len(segment), self._data_length))
+            if segnum == self._num_segments - 1:
+                size_to_use = self._tail_data_size
+            else:
+                size_to_use = self._segment_size
+            segment = segment[:size_to_use]
             self.log(" segment len=%d" % len(segment))
hunk ./src/allmydata/mutable/retrieve.py 963
-            return segment
-        def _err(f):
-            self.log(" decode failed: %s" % f)
-            return f
-        d.addCallback(_done)
-        d.addErrback(_err)
+            self._status.timings.setdefault("decode", 0)
+            self._status.timings['decode'] = time.time() - started
+            return segment, salt
+        d.addCallback(_process)
         return d
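
A minimal usage sketch of the decoder configured above (shares and shareids are assumed to hold k validated blocks and their share numbers):

    from allmydata import codec

    dec = codec.CRSDecoder()
    dec.set_params(300, 3, 10)   # segsize=300, k=3, n=10
    d = dec.decode(shares[:3], shareids[:3])
    # decode returns a Deferred that fires with a list of buffers;
    # joining and truncating them yields the (encrypted) segment.
    d.addCallback(lambda buffers: "".join(buffers)[:300])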
 
hunk ./src/allmydata/mutable/retrieve.py 969
-    def _decrypt(self, crypttext, IV, readkey):
+
+    def _decrypt_segment(self, segment_and_salt):
+        """
+        I take a single segment and its salt, and decrypt it. I return
+        the plaintext of the segment that is in my argument.
+        """
+        segment, salt = segment_and_salt
         self._status.set_status("decrypting")
hunk ./src/allmydata/mutable/retrieve.py 977
+        self.log("decrypting segment %d" % self._current_segment)
         started = time.time()
hunk ./src/allmydata/mutable/retrieve.py 979
-        key = hashutil.ssk_readkey_data_hash(IV, readkey)
+        key = hashutil.ssk_readkey_data_hash(salt, self._node.get_readkey())
         decryptor = AES(key)
hunk ./src/allmydata/mutable/retrieve.py 981
-        plaintext = decryptor.process(crypttext)
-        self._status.timings["decrypt"] = time.time() - started
+        plaintext = decryptor.process(segment)
+        self._status.timings.setdefault("decrypt", 0)
+        self._status.timings['decrypt'] = time.time() - started
         return plaintext
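
The per-segment key derivation above, in isolation (salt, readkey, and ciphertext are assumed inputs, and AES is assumed to be pycryptopp's, as this codebase uses): in SDMF the salt is the file-wide IV, while in MDMF each segment carries its own salt, so each segment is decrypted under its own key.

    from pycryptopp.cipher.aes import AES
    from allmydata.util import hashutil

    # salt, readkey, ciphertext: assumed inputs
    key = hashutil.ssk_readkey_data_hash(salt, readkey)
    plaintext = AES(key).process(ciphertext)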
 
hunk ./src/allmydata/mutable/retrieve.py 986
-    def _done(self, res):
-        if not self._running:
+
+    def notify_server_corruption(self, peerid, shnum, reason):
+        ss = self.servermap.connections[peerid]
+        ss.callRemoteOnly("advise_corrupt_share",
+                          "mutable", self._storage_index, shnum, reason)
+
+
+    def _try_to_validate_privkey(self, enc_privkey, reader):
+        alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
+        alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
+        if alleged_writekey != self._node.get_writekey():
+            self.log("invalid privkey from %s shnum %d" %
+                     (reader, reader.shnum),
+                     level=log.WEIRD, umid="YIw4tA")
+            if self._verify:
+                self.servermap.mark_bad_share(reader.peerid, reader.shnum,
+                                              self.verinfo[-2])
+                e = CorruptShareError(reader.peerid,
+                                      reader.shnum,
+                                      "invalid privkey")
+                f = failure.Failure(e)
+                self._bad_shares.add((reader.peerid, reader.shnum, f))
             return
hunk ./src/allmydata/mutable/retrieve.py 1009
+
+        # it's good
+        self.log("got valid privkey from shnum %d on reader %s" %
+                 (reader.shnum, reader))
+        privkey = rsa.create_signing_key_from_string(alleged_privkey_s)
+        self._node._populate_encprivkey(enc_privkey)
+        self._node._populate_privkey(privkey)
+        self._need_privkey = False
+
+
+    def _check_for_done(self, res):
+        """
+        I check to see if this Retrieve object has successfully finished
+        its work.
+
+        I can exit in the following ways:
+            - If there are no more segments to download, then I exit by
+              causing self._done_deferred to fire with the plaintext
+              content requested by the caller.
+            - If there are still segments to be downloaded, and there
+              are enough active readers (readers which have not broken
+              and have not given us corrupt data) to continue
+              downloading, I send control back to
+              _download_current_segment.
+            - If there are still segments to be downloaded but there are
+              not enough active peers to download them, I ask
+              _add_active_peers to add more peers. If it is successful,
+              it will call _download_current_segment. If there are not
+              enough peers to retrieve the file, then that will cause
+              _done_deferred to errback.
+        """
+        self.log("checking for doneness")
+        if self._current_segment > self._last_segment:
+            # No more segments to download, we're done.
+            self.log("got plaintext, done")
+            return self._done()
+
+        if len(self._active_readers) >= self._required_shares:
+            # More segments to download, but we have enough good peers
+            # in self._active_readers that we can do that without issue,
+            # so go nab the next segment.
+            self.log("not done yet: on segment %d of %d" % \
+                     (self._current_segment + 1, self._num_segments))
+            return self._download_current_segment()
+
+        self.log("not done yet: on segment %d of %d, need to add peers" % \
+                 (self._current_segment + 1, self._num_segments))
+        return self._add_active_peers()
+
+
+    def _done(self):
+        """
+        I am called by _check_for_done when the download process has
+        finished successfully. After making some useful logging
+        statements, I return the decrypted contents to the owner of this
+        Retrieve object through self._done_deferred.
+        """
         self._running = False
         self._status.set_active(False)
hunk ./src/allmydata/mutable/retrieve.py 1068
-        self._status.timings["total"] = time.time() - self._started
-        # res is either the new contents, or a Failure
-        if isinstance(res, failure.Failure):
-            self.log("Retrieve done, with failure", failure=res,
-                     level=log.UNUSUAL)
-            self._status.set_status("Failed")
+        now = time.time()
+        self._status.timings['total'] = now - self._started
+        self._status.timings['fetch'] = now - self._started_fetching
+
+        if self._verify:
+            ret = list(self._bad_shares)
+            self.log("done verifying, found %d bad shares" % len(ret))
         else:
hunk ./src/allmydata/mutable/retrieve.py 1076
-            self.log("Retrieve done, success!")
-            self._status.set_status("Finished")
-            self._status.set_progress(1.0)
-            # remember the encoding parameters, use them again next time
-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-             offsets_tuple) = self.verinfo
-            self._node._populate_required_shares(k)
-            self._node._populate_total_shares(N)
-        eventually(self._done_deferred.callback, res)
+            # TODO: upload status here?
+            ret = self._consumer
+            self._consumer.unregisterProducer()
+        eventually(self._done_deferred.callback, ret)
+
 
hunk ./src/allmydata/mutable/retrieve.py 1082
+    def _failed(self):
+        """
+        I am called by _add_active_peers when there are not enough
+        active peers left to complete the download. After making some
+        useful logging statements, I return an exception to that effect
+        to the caller of this Retrieve object through
+        self._done_deferred.
+        """
+        self._running = False
+        self._status.set_active(False)
+        now = time.time()
+        self._status.timings['total'] = now - self._started
+        self._status.timings['fetch'] = now - self._started_fetching
+
+        if self._verify:
+            ret = list(self._bad_shares)
+        else:
+            format = ("ran out of peers: "
+                      "have %(have)d of %(total)d segments, "
+                      "found %(bad)d bad shares, "
+                      "encoding %(k)d-of-%(n)d")
+            args = {"have": self._current_segment,
+                    "total": self._num_segments,
+                    "k": self._required_shares,
+                    "n": self._total_shares,
+                    "bad": len(self._bad_shares)}
+            e = NotEnoughSharesError("%s, last failure: %s" % \
+                                     (format % args, str(self._last_failure)))
+            f = failure.Failure(e)
+            ret = f
+        eventually(self._done_deferred.callback, ret)
}
[mutable/servermap.py: Alter the servermap updater to work with MDMF files
Kevan Carstensen <kevan@isnotajoke.com>**20100819003439
 Ignore-this: 7e408303194834bd59a2f27efab3bdb
 
  These modifications were made chiefly so that the servermap updater
  uses the unified MDMF + SDMF read interface whenever possible -- this
  reduces the complexity of the code, making it easier to read and
  maintain. To do this, I needed to modify the process of updating the
  servermap a little bit.
 
 To support partial-file updates, I also modified the servermap updater
 to fetch the block hash trees and certain segments of files while it
 performed a servermap update (this can be done without adding any new
 roundtrips because of batch-read functionality that the read proxy has).
 
] {
hunk ./src/allmydata/mutable/servermap.py 2
 
-import sys, time
+import sys, time, struct
 from zope.interface import implements
 from itertools import count
 from twisted.internet import defer
merger 0.0 (
hunk ./src/allmydata/mutable/servermap.py 9
+from allmydata.util.dictutil import DictOfSets
hunk ./src/allmydata/mutable/servermap.py 7
-from foolscap.api import DeadReferenceError, RemoteException, eventually
-from allmydata.util import base32, hashutil, idlib, log
+from foolscap.api import DeadReferenceError, RemoteException, eventually, \
+                         fireEventually
+from allmydata.util import base32, hashutil, idlib, log, deferredutil
)
merger 0.0 (
hunk ./src/allmydata/mutable/servermap.py 14
-     DictOfSets, CorruptShareError, NeedMoreDataError
+     CorruptShareError, NeedMoreDataError
hunk ./src/allmydata/mutable/servermap.py 14
-     DictOfSets, CorruptShareError, NeedMoreDataError
-from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \
-     SIGNED_PREFIX_LENGTH
+     DictOfSets, CorruptShareError
+from allmydata.mutable.layout import SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy
)
hunk ./src/allmydata/mutable/servermap.py 123
         self.bad_shares = {} # maps (peerid,shnum) to old checkstring
         self.last_update_mode = None
         self.last_update_time = 0
+        self.update_data = {} # (verinfo,shnum) => data
 
     def copy(self):
         s = ServerMap()
hunk ./src/allmydata/mutable/servermap.py 254
         """Return a set of versionids, one for each version that is currently
         recoverable."""
         versionmap = self.make_versionmap()
-
         recoverable_versions = set()
         for (verinfo, shares) in versionmap.items():
             (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
hunk ./src/allmydata/mutable/servermap.py 339
         return False
 
 
+    def get_update_data_for_share_and_verinfo(self, shnum, verinfo):
+        """
+        I return the update data for the given shnum and verinfo.
+        """
+        update_data = self.update_data[shnum]
+        update_datum = [i[1] for i in update_data if i[0] == verinfo][0]
+        return update_datum
+
+
+    def set_update_data_for_share_and_verinfo(self, shnum, verinfo, data):
+        """
+        I record the given update data for the given shnum and verinfo.
+        """
+        self.update_data.setdefault(shnum, []).append((verinfo, data))
+
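
A usage sketch of the structure these two methods maintain (verinfo_a and the data string are hypothetical): update_data maps each shnum to a list of (verinfo, data) pairs, so one share can carry update data for several versions.

    sm = ServerMap()
    # verinfo_a and "bht+segments" are hypothetical placeholder values
    sm.set_update_data_for_share_and_verinfo(3, verinfo_a, "bht+segments")
    assert sm.get_update_data_for_share_and_verinfo(3, verinfo_a) == "bht+segments"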
+
 class ServermapUpdater:
     def __init__(self, filenode, storage_broker, monitor, servermap,
hunk ./src/allmydata/mutable/servermap.py 357
-                 mode=MODE_READ, add_lease=False):
+                 mode=MODE_READ, add_lease=False, update_range=None):
         """I update a servermap, locating a sufficient number of useful
         shares and remembering where they are located.
 
hunk ./src/allmydata/mutable/servermap.py 382
         self._servers_responded = set()
 
         # how much data should we read?
+        # SDMF:
         #  * if we only need the checkstring, then [0:75]
         #  * if we need to validate the checkstring sig, then [543ish:799ish]
         #  * if we need the verification key, then [107:436ish]
merger 0.0 (
hunk ./src/allmydata/mutable/servermap.py 392
-        # read 2000 bytes, which also happens to read enough actual data to
-        # pre-fetch a 9-entry dirnode.
+        # read 4000 bytes, which also happens to read enough actual data to
+        # pre-fetch an 18-entry dirnode.
hunk ./src/allmydata/mutable/servermap.py 390
-        # A future version of the SMDF slot format should consider using
-        # fixed-size slots so we can retrieve less data. For now, we'll just
-        # read 2000 bytes, which also happens to read enough actual data to
-        # pre-fetch a 9-entry dirnode.
+        # MDMF:
+        #  * Checkstring? [0:72]
+        #  * If we want to validate the checkstring, then [0:72], [143:?] --
+        #    the offset table will tell us for sure.
+        #  * If we need the verification key, we have to consult the offset
+        #    table as well.
+        # At this point, we don't know which we are. Our filenode can
+        # tell us, but it might be lying -- in some cases, we're
+        # responsible for telling it which kind of file it is.
)
hunk ./src/allmydata/mutable/servermap.py 399
             # we use unpack_prefix_and_signature, so we need 1k
             self._read_size = 1000
         self._need_privkey = False
+
         if mode == MODE_WRITE and not self._node.get_privkey():
             self._need_privkey = True
         # check+repair: repair requires the privkey, so if we didn't happen
hunk ./src/allmydata/mutable/servermap.py 406
         # to ask for it during the check, we'll have problems doing the
         # publish.
 
+        self.fetch_update_data = False
+        if mode == MODE_WRITE and update_range:
+            # We're updating the servermap in preparation for an
+            # in-place file update, so we need to fetch some additional
+            # data from each share that we find.
+            assert len(update_range) == 2
+
+            self.start_segment = update_range[0]
+            self.end_segment = update_range[1]
+            self.fetch_update_data = True
+
         prefix = si_b2a(self._storage_index)[:5]
         self._log_number = log.msg(format="SharemapUpdater(%(si)s): starting (%(mode)s)",
                                    si=prefix, mode=mode)
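A caller preparing an in-place update passes the first and last segment numbers it intends to touch, and the updater prefetches the matching blocks and block hash trees while it maps the grid. A hypothetical invocation (the filenode and storage-broker setup, and the segment numbers, are assumed):

    u = ServermapUpdater(filenode, storage_broker, Monitor(), ServerMap(),
                         mode=MODE_WRITE, update_range=(first_seg, last_seg))
    d = u.update()  # fires with the populated ServerMap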
merger 0.0 (
hunk ./src/allmydata/mutable/servermap.py 455
-        full_peerlist = sb.get_servers_for_index(self._storage_index)
+        full_peerlist = [(s.get_serverid(), s.get_rref())
+                         for s in sb.get_servers_for_psi(self._storage_index)]
hunk ./src/allmydata/mutable/servermap.py 455
+        # All of the peers, permuted by the storage index, as usual.
)
hunk ./src/allmydata/mutable/servermap.py 461
         self._good_peers = set() # peers who had some shares
         self._empty_peers = set() # peers who don't have any shares
         self._bad_peers = set() # peers to whom our queries failed
+        self._readers = {} # peerid -> dict(shnum -> reader), filled in
+                           # after responses come in.
 
         k = self._node.get_required_shares()
hunk ./src/allmydata/mutable/servermap.py 465
+        # A node built from a bare cap may not know its encoding
+        # parameters yet; fall back to guesses if so.
         if k is None:
             # make a guess
             k = 3
hunk ./src/allmydata/mutable/servermap.py 478
         self.num_peers_to_query = k + self.EPSILON
 
         if self.mode == MODE_CHECK:
+            # We want to query all of the peers.
             initial_peers_to_query = dict(full_peerlist)
             must_query = set(initial_peers_to_query.keys())
             self.extra_peers = []
hunk ./src/allmydata/mutable/servermap.py 486
             # we're planning to replace all the shares, so we want a good
             # chance of finding them all. We will keep searching until we've
             # seen epsilon that don't have a share.
+            # We don't query all of the peers because that could take a while.
             self.num_peers_to_query = N + self.EPSILON
             initial_peers_to_query, must_query = self._build_initial_querylist()
             self.required_num_empty_peers = self.EPSILON
hunk ./src/allmydata/mutable/servermap.py 496
             # might also avoid the round trip required to read the encrypted
             # private key.
 
-        else:
+        else: # MODE_READ, MODE_ANYTHING
+            # 2k peers is good enough.
             initial_peers_to_query, must_query = self._build_initial_querylist()
 
         # this is a set of peers that we are required to get responses from:
hunk ./src/allmydata/mutable/servermap.py 512
         # before we can consider ourselves finished, and self.extra_peers
         # contains the overflow (peers that we should tap if we don't get
         # enough responses)
+        # must_query should always be a subset of
+        # initial_peers_to_query.
+        assert set(must_query).issubset(set(initial_peers_to_query))
 
         self._send_initial_requests(initial_peers_to_query)
         self._status.timings["initial_queries"] = time.time() - self._started
hunk ./src/allmydata/mutable/servermap.py 571
         # errors that aren't handled by _query_failed (and errors caused by
         # _query_failed) get logged, but we still want to check for doneness.
         d.addErrback(log.err)
-        d.addBoth(self._check_for_done)
         d.addErrback(self._fatal_error)
hunk ./src/allmydata/mutable/servermap.py 572
+        d.addCallback(self._check_for_done)
         return d
 
     def _do_read(self, ss, peerid, storage_index, shnums, readv):
hunk ./src/allmydata/mutable/servermap.py 591
         d = ss.callRemote("slot_readv", storage_index, shnums, readv)
         return d
 
+
+    def _got_corrupt_share(self, e, shnum, peerid, data, lp):
+        """
+        I am called when a remote server returns a corrupt share in
+        response to one of our queries. By corrupt, I mean a share
+        without a valid signature. I then record the failure, notify the
+        server of the corruption, and record the share as bad.
+        """
+        f = failure.Failure(e)
+        self.log(format="bad share: %(f_value)s", f_value=str(f),
+                 failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
+        # Notify the server that its share is corrupt.
+        self.notify_server_corruption(peerid, shnum, str(e))
+        # By flagging this as a bad peer, we won't count any of
+        # the other shares on that peer as valid, though if we
+        # happen to find a valid version string amongst those
+        # shares, we'll keep track of it so that we don't need
+        # to validate the signature on those again.
+        self._bad_peers.add(peerid)
+        self._last_failure = f
+        # XXX: Use the reader for this?
+        checkstring = data[:SIGNED_PREFIX_LENGTH]
+        self._servermap.mark_bad_share(peerid, shnum, checkstring)
+        self._servermap.problems.append(f)
+
+
+    def _cache_good_sharedata(self, verinfo, shnum, now, data):
+        """
+        If one of my queries returns successfully (which means that we
+        were able to and successfully did validate the signature), I
+        cache the data that we initially fetched from the storage
+        server. This will help reduce the number of roundtrips that need
+        to occur when the file is downloaded, or when the file is
+        updated.
+        """
+        if verinfo:
+            self._node._add_to_cache(verinfo, shnum, 0, data, now)
+
+
     def _got_results(self, datavs, peerid, readsize, stuff, started):
         lp = self.log(format="got result from [%(peerid)s], %(numshares)d shares",
                       peerid=idlib.shortnodeid_b2a(peerid),
hunk ./src/allmydata/mutable/servermap.py 633
-                      numshares=len(datavs),
-                      level=log.NOISY)
+                      numshares=len(datavs))
         now = time.time()
         elapsed = now - started
hunk ./src/allmydata/mutable/servermap.py 636
-        self._queries_outstanding.discard(peerid)
-        self._servermap.reachable_peers.add(peerid)
-        self._must_query.discard(peerid)
-        self._queries_completed += 1
+        def _done_processing(ignored=None):
+            self._queries_outstanding.discard(peerid)
+            self._servermap.reachable_peers.add(peerid)
+            self._must_query.discard(peerid)
+            self._queries_completed += 1
         if not self._running:
hunk ./src/allmydata/mutable/servermap.py 642
-            self.log("but we're not running, so we'll ignore it", parent=lp,
-                     level=log.NOISY)
+            self.log("but we're not running, so we'll ignore it", parent=lp)
+            _done_processing()
             self._status.add_per_server_time(peerid, "late", started, elapsed)
             return
         self._status.add_per_server_time(peerid, "query", started, elapsed)
hunk ./src/allmydata/mutable/servermap.py 653
         else:
             self._empty_peers.add(peerid)
 
-        last_verinfo = None
-        last_shnum = None
+        ss, storage_index = stuff
+        ds = []
+
         for shnum,datav in datavs.items():
             data = datav[0]
             try:
merger 0.0 (
hunk ./src/allmydata/mutable/servermap.py 662
-                self._node._add_to_cache(verinfo, shnum, 0, data, now)
+                self._node._add_to_cache(verinfo, shnum, 0, data)
hunk ./src/allmydata/mutable/servermap.py 658
-            try:
-                verinfo = self._got_results_one_share(shnum, data, peerid, lp)
-                last_verinfo = verinfo
-                last_shnum = shnum
-                self._node._add_to_cache(verinfo, shnum, 0, data, now)
-            except CorruptShareError, e:
-                # log it and give the other shares a chance to be processed
-                f = failure.Failure()
-                self.log(format="bad share: %(f_value)s", f_value=str(f.value),
-                         failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
-                self.notify_server_corruption(peerid, shnum, str(e))
-                self._bad_peers.add(peerid)
-                self._last_failure = f
-                checkstring = data[:SIGNED_PREFIX_LENGTH]
-                self._servermap.mark_bad_share(peerid, shnum, checkstring)
-                self._servermap.problems.append(f)
-                pass
+            reader = MDMFSlotReadProxy(ss,
+                                       storage_index,
+                                       shnum,
+                                       data)
+            self._readers.setdefault(peerid, dict())[shnum] = reader
+            # our goal, with each response, is to validate the version
+            # information and share data as best we can at this point --
+            # we do this by validating the signature. To do this, we
+            # need to do the following:
+            #   - If we don't already have the public key, fetch the
+            #     public key. We use this to validate the signature.
+            if not self._node.get_pubkey():
+                # fetch and set the public key.
+                d = reader.get_verification_key(queue=True)
+                d.addCallback(lambda results, shnum=shnum, peerid=peerid:
+                    self._try_to_set_pubkey(results, peerid, shnum, lp))
+                # XXX: Make self._pubkey_query_failed?
+                d.addErrback(lambda error, shnum=shnum, peerid=peerid:
+                    self._got_corrupt_share(error, shnum, peerid, data, lp))
+            else:
+                # we already have the public key.
+                d = defer.succeed(None)
)
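MDMFSlotReadProxy batches its reads: calls made with queue=True only register read vectors, and nothing goes over the wire until flush() is called, so the several per-share fetches below coalesce into a single remote read. The pattern, roughly:

    d1 = reader.get_verification_key(queue=True)  # queued, not yet sent
    d2 = reader.get_signature(queue=True)         # queued on the same batch
    reader.flush()  # issue one remote read covering both requests
    # d1 and d2 fire once the combined response arrives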
hunk ./src/allmydata/mutable/servermap.py 676
                 self._servermap.problems.append(f)
                 pass
 
-        self._status.timings["cumulative_verify"] += (time.time() - now)
+            # Neither of these two branches returns anything of
+            # consequence, so the first entry in our deferred list will
+            # be None.
 
hunk ./src/allmydata/mutable/servermap.py 680
-        if self._need_privkey and last_verinfo:
-            # send them a request for the privkey. We send one request per
-            # server.
-            lp2 = self.log("sending privkey request",
-                           parent=lp, level=log.NOISY)
-            (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
-             offsets_tuple) = last_verinfo
-            o = dict(offsets_tuple)
+            # - Next, we need the version information. We almost
+            #   certainly got this by reading the first thousand or so
+            #   bytes of the share on the storage server, so we
+            #   shouldn't need to fetch anything at this step.
+            d2 = reader.get_verinfo()
+            d2.addErrback(lambda error, shnum=shnum, peerid=peerid:
+                self._got_corrupt_share(error, shnum, peerid, data, lp))
+            # - Next, we need the signature. For an SDMF share, it is
+            #   likely that we fetched this when doing our initial fetch
+            #   to get the version information. In MDMF, this lives at
+            #   the end of the share, so unless the file is quite small,
+            #   we'll need to do a remote fetch to get it.
+            d3 = reader.get_signature(queue=True)
+            d3.addErrback(lambda error, shnum=shnum, peerid=peerid:
+                self._got_corrupt_share(error, shnum, peerid, data, lp))
+            #  Once we have all three of these responses, we can move on
+            #  to validating the signature.
 
hunk ./src/allmydata/mutable/servermap.py 698
-            self._queries_outstanding.add(peerid)
-            readv = [ (o['enc_privkey'], (o['EOF'] - o['enc_privkey'])) ]
-            ss = self._servermap.connections[peerid]
-            privkey_started = time.time()
-            d = self._do_read(ss, peerid, self._storage_index,
-                              [last_shnum], readv)
-            d.addCallback(self._got_privkey_results, peerid, last_shnum,
-                          privkey_started, lp2)
-            d.addErrback(self._privkey_query_failed, peerid, last_shnum, lp2)
-            d.addErrback(log.err)
-            d.addCallback(self._check_for_done)
-            d.addErrback(self._fatal_error)
+            # Does the node already have a privkey? If not, we'll try to
+            # fetch it here.
+            if self._need_privkey:
+                d4 = reader.get_encprivkey(queue=True)
+                d4.addCallback(lambda results, shnum=shnum, peerid=peerid:
+                    self._try_to_validate_privkey(results, peerid, shnum, lp))
+                d4.addErrback(lambda error, shnum=shnum, peerid=peerid:
+                    self._privkey_query_failed(error, peerid, shnum, lp))
+            else:
+                d4 = defer.succeed(None)
+
+
+            if self.fetch_update_data:
+                # fetch the block hash tree and first + last segment, as
+                # configured earlier.
+                # Then record them in the servermap via
+                # set_update_data_for_share_and_verinfo.
+                ds = []
+                # XXX: We do this above, too. Is there a good way to
+                # make the two routines share the value without
+                # introducing more roundtrips?
+                ds.append(reader.get_verinfo())
+                ds.append(reader.get_blockhashes(queue=True))
+                ds.append(reader.get_block_and_salt(self.start_segment,
+                                                    queue=True))
+                ds.append(reader.get_block_and_salt(self.end_segment,
+                                                    queue=True))
+                d5 = deferredutil.gatherResults(ds)
+                d5.addCallback(self._got_update_results_one_share, shnum)
+            else:
+                d5 = defer.succeed(None)
 
hunk ./src/allmydata/mutable/servermap.py 730
+            dl = defer.DeferredList([d, d2, d3, d4, d5])
+            dl.addBoth(self._turn_barrier)
+            reader.flush()
+            dl.addCallback(lambda results, shnum=shnum, peerid=peerid:
+                self._got_signature_one_share(results, shnum, peerid, lp))
+            dl.addErrback(lambda error, shnum=shnum, data=data:
+               self._got_corrupt_share(error, shnum, peerid, data, lp))
+            dl.addCallback(lambda verinfo, shnum=shnum, peerid=peerid, data=data:
+                self._cache_good_sharedata(verinfo, shnum, now, data))
+            ds.append(dl)
+        # dl is a deferred list that will fire when all of the shares
+        # that we found on this peer are done processing. When dl
+        # fires, processing for this peer is done, so we can mark the
+        # query as complete (which is what _done_processing does).
+        dl = defer.DeferredList(ds, fireOnOneErrback=True)
+        # Are we done? Done means that there are no more queries to
+        # send, that there are no outstanding queries, and that we
+        # aren't still processing any responses. If we
+        # are done, self._check_for_done will cause the done deferred
+        # that we returned to our caller to fire, which tells them that
+        # they have a complete servermap, and that we won't be touching
+        # the servermap anymore.
+        dl.addCallback(_done_processing)
+        dl.addCallback(self._check_for_done)
+        dl.addErrback(self._fatal_error)
         # all done!
         self.log("_got_results done", parent=lp, level=log.NOISY)
hunk ./src/allmydata/mutable/servermap.py 757
+        return dl
+
+
+    def _turn_barrier(self, result):
+        """
+        I help the servermap updater avoid the recursion limit issues
+        discussed in #237.
+        """
+        return fireEventually(result)
+
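fireEventually returns a Deferred that fires on a later reactor turn, so placing it between callbacks breaks what would otherwise be one deep synchronous callback chain (the #237 recursion-limit failure). A self-contained illustration of the technique:

    from twisted.internet import defer
    from foolscap.api import fireEventually

    d = defer.succeed(0)
    for _ in range(10000):
        # without the barrier, 10000 synchronous callbacks can exhaust
        # the recursion limit; with it, each step gets a fresh turn
        d.addCallback(lambda res: fireEventually(res + 1))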
+
+    def _try_to_set_pubkey(self, pubkey_s, peerid, shnum, lp):
+        if self._node.get_pubkey():
+            return # don't go through this again if we don't have to
+        fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
+        assert len(fingerprint) == 32
+        if fingerprint != self._node.get_fingerprint():
+            raise CorruptShareError(peerid, shnum,
+                                "pubkey doesn't match fingerprint")
+        self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
+        assert self._node.get_pubkey()
+
 
     def notify_server_corruption(self, peerid, shnum, reason):
         ss = self._servermap.connections[peerid]
hunk ./src/allmydata/mutable/servermap.py 785
         ss.callRemoteOnly("advise_corrupt_share",
                           "mutable", self._storage_index, shnum, reason)
 
-    def _got_results_one_share(self, shnum, data, peerid, lp):
+
+    def _got_signature_one_share(self, results, shnum, peerid, lp):
+        # It is our job to give versioninfo to our caller. We need to
+        # raise CorruptShareError if the share is corrupt for any
+        # reason, something that our caller will handle.
         self.log(format="_got_results: got shnum #%(shnum)d from peerid %(peerid)s",
                  shnum=shnum,
                  peerid=idlib.shortnodeid_b2a(peerid),
hunk ./src/allmydata/mutable/servermap.py 795
                  level=log.NOISY,
                  parent=lp)
+        if not self._running:
+            # We can't process the results, since we can't touch the
+            # servermap anymore.
+            self.log("but we're not running anymore.")
+            return None
 
hunk ./src/allmydata/mutable/servermap.py 801
-        # this might raise NeedMoreDataError, if the pubkey and signature
-        # live at some weird offset. That shouldn't happen, so I'm going to
-        # treat it as a bad share.
-        (seqnum, root_hash, IV, k, N, segsize, datalength,
-         pubkey_s, signature, prefix) = unpack_prefix_and_signature(data)
-
-        if not self._node.get_pubkey():
-            fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
-            assert len(fingerprint) == 32
-            if fingerprint != self._node.get_fingerprint():
-                raise CorruptShareError(peerid, shnum,
-                                        "pubkey doesn't match fingerprint")
-            self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s))
-
-        if self._need_privkey:
-            self._try_to_extract_privkey(data, peerid, shnum, lp)
-
-        (ig_version, ig_seqnum, ig_root_hash, ig_IV, ig_k, ig_N,
-         ig_segsize, ig_datalen, offsets) = unpack_header(data)
+        _, verinfo, signature, __, ___ = results
+        (seqnum,
+         root_hash,
+         saltish,
+         segsize,
+         datalen,
+         k,
+         n,
+         prefix,
+         offsets) = verinfo[1]
         offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
 
hunk ./src/allmydata/mutable/servermap.py 813
-        verinfo = (seqnum, root_hash, IV, segsize, datalength, k, N, prefix,
+        # XXX: get_verinfo should return the offsets as a hashable
+        # tuple itself; this conversion belongs in there.
+        verinfo = (seqnum,
+                   root_hash,
+                   saltish,
+                   segsize,
+                   datalen,
+                   k,
+                   n,
+                   prefix,
                    offsets_tuple)
hunk ./src/allmydata/mutable/servermap.py 824
+        # This tuple uniquely identifies a version of the file on the
+        # grid; we use it to keep track of the versions that we've
+        # already seen.
 
         if verinfo not in self._valid_versions:
hunk ./src/allmydata/mutable/servermap.py 828
-            # it's a new pair. Verify the signature.
-            valid = self._node.get_pubkey().verify(prefix, signature)
+            # This is a new version tuple, and we need to validate it
+            # against the public key before keeping track of it.
+            assert self._node.get_pubkey()
+            valid = self._node.get_pubkey().verify(prefix, signature[1])
             if not valid:
hunk ./src/allmydata/mutable/servermap.py 833
-                raise CorruptShareError(peerid, shnum, "signature is invalid")
+                raise CorruptShareError(peerid, shnum,
+                                        "signature is invalid")
 
hunk ./src/allmydata/mutable/servermap.py 836
-            # ok, it's a valid verinfo. Add it to the list of validated
-            # versions.
-            self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
-                     % (seqnum, base32.b2a(root_hash)[:4],
-                        idlib.shortnodeid_b2a(peerid), shnum,
-                        k, N, segsize, datalength),
-                     parent=lp)
-            self._valid_versions.add(verinfo)
-        # We now know that this is a valid candidate verinfo.
+        # ok, it's a valid verinfo. Add it to the list of validated
+        # versions.
+        self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d"
+                 % (seqnum, base32.b2a(root_hash)[:4],
+                    idlib.shortnodeid_b2a(peerid), shnum,
+                    k, n, segsize, datalen),
+                    parent=lp)
+        self._valid_versions.add(verinfo)
+        # We now know that this is a valid candidate verinfo. Whether
+        # this particular share is usable is settled by the check
+        # below; at this point, we just know that if we see this
+        # verinfo again, its signature checks out and we can skip the
+        # signature-validation step.
 
hunk ./src/allmydata/mutable/servermap.py 850
+        # (peerid, shnum) are bound in the method invocation.
         if (peerid, shnum) in self._servermap.bad_shares:
             # we've been told that the rest of the data in this share is
             # unusable, so don't add it to the servermap.
hunk ./src/allmydata/mutable/servermap.py 863
         self._servermap.add_new_share(peerid, shnum, verinfo, timestamp)
         # and the versionmap
         self.versionmap.add(verinfo, (shnum, peerid, timestamp))
+
+        # It's our job to set the protocol version of our parent
+        # filenode if it isn't already set.
+        if not self._node.get_version():
+            # The first byte of the prefix is the version.
+            v = struct.unpack(">B", prefix[:1])[0]
+            self.log("got version %d" % v)
+            self._node.set_version(v)
+
         return verinfo
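Each verinfo built here is a 9-tuple, with the offsets dict flattened into a tuple of pairs so that the whole value stays hashable (and therefore usable in sets and as a dict key). Consumers can recover the mapping directly:

    (seqnum, root_hash, saltish, segsize, datalen,
     k, n, prefix, offsets_tuple) = verinfo
    offsets = dict(offsets_tuple)  # back to a name -> offset mapping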
 
hunk ./src/allmydata/mutable/servermap.py 874
-    def _deserialize_pubkey(self, pubkey_s):
-        verifier = rsa.create_verifying_key_from_string(pubkey_s)
-        return verifier
 
hunk ./src/allmydata/mutable/servermap.py 875
-    def _try_to_extract_privkey(self, data, peerid, shnum, lp):
-        try:
-            r = unpack_share(data)
-        except NeedMoreDataError, e:
-            # this share won't help us. oh well.
-            offset = e.encprivkey_offset
-            length = e.encprivkey_length
-            self.log("shnum %d on peerid %s: share was too short (%dB) "
-                     "to get the encprivkey; [%d:%d] ought to hold it" %
-                     (shnum, idlib.shortnodeid_b2a(peerid), len(data),
-                      offset, offset+length),
-                     parent=lp)
-            # NOTE: if uncoordinated writes are taking place, someone might
-            # change the share (and most probably move the encprivkey) before
-            # we get a chance to do one of these reads and fetch it. This
-            # will cause us to see a NotEnoughSharesError(unable to fetch
-            # privkey) instead of an UncoordinatedWriteError . This is a
-            # nuisance, but it will go away when we move to DSA-based mutable
-            # files (since the privkey will be small enough to fit in the
-            # write cap).
+    def _got_update_results_one_share(self, results, share):
+        """
+        I record the update data fetched from the given share in the
+        servermap, for later use during an in-place update.
+        """
+        assert len(results) == 4
+        verinfo, blockhashes, start, end = results
+        (seqnum,
+         root_hash,
+         saltish,
+         segsize,
+         datalen,
+         k,
+         n,
+         prefix,
+         offsets) = verinfo
+        offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] )
 
hunk ./src/allmydata/mutable/servermap.py 892
-            return
+        # XXX: get_verinfo should return the offsets as a hashable
+        # tuple itself; this conversion belongs in there.
+        verinfo = (seqnum,
+                   root_hash,
+                   saltish,
+                   segsize,
+                   datalen,
+                   k,
+                   n,
+                   prefix,
+                   offsets_tuple)
 
hunk ./src/allmydata/mutable/servermap.py 904
-        (seqnum, root_hash, IV, k, N, segsize, datalen,
-         pubkey, signature, share_hash_chain, block_hash_tree,
-         share_data, enc_privkey) = r
+        update_data = (blockhashes, start, end)
+        self._servermap.set_update_data_for_share_and_verinfo(share,
+                                                              verinfo,
+                                                              update_data)
 
hunk ./src/allmydata/mutable/servermap.py 909
-        return self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
+
+    def _deserialize_pubkey(self, pubkey_s):
+        verifier = rsa.create_verifying_key_from_string(pubkey_s)
+        return verifier
 
hunk ./src/allmydata/mutable/servermap.py 914
-    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
 
hunk ./src/allmydata/mutable/servermap.py 915
+    def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp):
+        """
+        Given an encrypted private key from a remote server, I decrypt
+        it, derive the writekey, and validate that against the writekey
+        stored in my node. If it is valid, then I set the privkey and
+        encprivkey properties of the node.
+        """
         alleged_privkey_s = self._node._decrypt_privkey(enc_privkey)
         alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s)
         if alleged_writekey != self._node.get_writekey():
hunk ./src/allmydata/mutable/servermap.py 993
         self._queries_completed += 1
         self._last_failure = f
 
-    def _got_privkey_results(self, datavs, peerid, shnum, started, lp):
-        now = time.time()
-        elapsed = now - started
-        self._status.add_per_server_time(peerid, "privkey", started, elapsed)
-        self._queries_outstanding.discard(peerid)
-        if not self._need_privkey:
-            return
-        if shnum not in datavs:
-            self.log("privkey wasn't there when we asked it",
-                     level=log.WEIRD, umid="VA9uDQ")
-            return
-        datav = datavs[shnum]
-        enc_privkey = datav[0]
-        self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp)
 
     def _privkey_query_failed(self, f, peerid, shnum, lp):
         self._queries_outstanding.discard(peerid)
hunk ./src/allmydata/mutable/servermap.py 1007
         self._servermap.problems.append(f)
         self._last_failure = f
 
+
     def _check_for_done(self, res):
         # exit paths:
         #  return self._send_more_queries(outstanding) : send some more queries
hunk ./src/allmydata/mutable/servermap.py 1013
         #  return self._done() : all done
         #  return : keep waiting, no new queries
-
         lp = self.log(format=("_check_for_done, mode is '%(mode)s', "
                               "%(outstanding)d queries outstanding, "
                               "%(extra)d extra peers available, "
hunk ./src/allmydata/mutable/servermap.py 1204
 
     def _done(self):
         if not self._running:
+            self.log("not running; we're already done")
             return
         self._running = False
         now = time.time()
hunk ./src/allmydata/mutable/servermap.py 1219
         self._servermap.last_update_time = self._started
         # the servermap will not be touched after this
         self.log("servermap: %s" % self._servermap.summarize_versions())
+
         eventually(self._done_deferred.callback, self._servermap)
 
     def _fatal_error(self, f):
}
[tests:
Kevan Carstensen <kevan@isnotajoke.com>**20100819003531
 Ignore-this: 314e8bbcce532ea4d5d2cecc9f31cca0
 
     - A lot of existing tests relied on aspects of the mutable file
       implementation that were changed. This patch updates those tests
       to work with the changes.
     - This patch also adds tests for new features.
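    - As an illustration of the main mechanical change: tests that
      used to hand raw strings to the mutable-file APIs now wrap them
      in MutableData first. A sketch (the client setup is assumed):
  
          from allmydata.mutable.publish import MutableData
  
          d = client.create_mutable_file(MutableData("contents"))
          d.addCallback(lambda n: n.overwrite(MutableData("new contents")))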
] {
hunk ./src/allmydata/test/common.py 11
 from foolscap.api import flushEventualQueue, fireEventually
 from allmydata import uri, dirnode, client
 from allmydata.introducer.server import IntroducerNode
-from allmydata.interfaces import IMutableFileNode, IImmutableFileNode, \
-     FileTooLargeError, NotEnoughSharesError, ICheckable
+from allmydata.interfaces import IMutableFileNode, IImmutableFileNode,\
+                                 NotEnoughSharesError, ICheckable, \
+                                 IMutableUploadable, SDMF_VERSION, \
+                                 MDMF_VERSION
 from allmydata.check_results import CheckResults, CheckAndRepairResults, \
      DeepCheckResults, DeepCheckAndRepairResults
 from allmydata.mutable.common import CorruptShareError
hunk ./src/allmydata/test/common.py 19
 from allmydata.mutable.layout import unpack_header
+from allmydata.mutable.publish import MutableData
 from allmydata.storage.server import storage_index_to_dir
 from allmydata.storage.mutable import MutableShareFile
 from allmydata.util import hashutil, log, fileutil, pollmixin
hunk ./src/allmydata/test/common.py 153
         consumer.write(data[start:end])
         return consumer
 
+
+    def get_best_readable_version(self):
+        return defer.succeed(self)
+
+
+    def download_to_data(self):
+        return download_to_data(self)
+
+
+    # alias: for this fake node, the best version is the only version
+    download_best_version = download_to_data
+
+
+    def get_size_of_best_version(self):
+        return defer.succeed(self.get_size())
+
+
 def make_chk_file_cap(size):
     return uri.CHKFileURI(key=os.urandom(16),
                           uri_extension_hash=os.urandom(32),
hunk ./src/allmydata/test/common.py 193
     MUTABLE_SIZELIMIT = 10000
     all_contents = {}
     bad_shares = {}
+    file_types = {} # storage index => MDMF_VERSION or SDMF_VERSION
 
     def __init__(self, storage_broker, secret_holder,
                  default_encoding_parameters, history):
hunk ./src/allmydata/test/common.py 200
         self.init_from_cap(make_mutable_file_cap())
     def create(self, contents, key_generator=None, keysize=None):
         initial_contents = self._get_initial_contents(contents)
-        if len(initial_contents) > self.MUTABLE_SIZELIMIT:
-            raise FileTooLargeError("SDMF is limited to one segment, and "
-                                    "%d > %d" % (len(initial_contents),
-                                                 self.MUTABLE_SIZELIMIT))
-        self.all_contents[self.storage_index] = initial_contents
+        data = initial_contents.read(initial_contents.get_size())
+        data = "".join(data)
+        self.all_contents[self.storage_index] = data
         return defer.succeed(self)
     def _get_initial_contents(self, contents):
hunk ./src/allmydata/test/common.py 205
-        if isinstance(contents, str):
-            return contents
         if contents is None:
hunk ./src/allmydata/test/common.py 206
-            return ""
+            return MutableData("")
+
+        if IMutableUploadable.providedBy(contents):
+            return contents
+
         assert callable(contents), "%s should be callable, not %s" % \
                (contents, type(contents))
         return contents(self)
hunk ./src/allmydata/test/common.py 258
     def get_storage_index(self):
         return self.storage_index
 
+    def get_servermap(self, mode):
+        return defer.succeed(None)
+
+    def set_version(self, version):
+        assert version in (SDMF_VERSION, MDMF_VERSION)
+        self.file_types[self.storage_index] = version
+
+    def get_version(self):
+        assert self.storage_index in self.file_types
+        return self.file_types[self.storage_index]
+
     def check(self, monitor, verify=False, add_lease=False):
         r = CheckResults(self.my_uri, self.storage_index)
         is_bad = self.bad_shares.get(self.storage_index, None)
hunk ./src/allmydata/test/common.py 327
         return d
 
     def download_best_version(self):
+        return defer.succeed(self._download_best_version())
+
+
+    def _download_best_version(self, ignored=None):
         if isinstance(self.my_uri, uri.LiteralFileURI):
hunk ./src/allmydata/test/common.py 332
-            return defer.succeed(self.my_uri.data)
+            return self.my_uri.data
         if self.storage_index not in self.all_contents:
hunk ./src/allmydata/test/common.py 334
-            return defer.fail(NotEnoughSharesError(None, 0, 3))
-        return defer.succeed(self.all_contents[self.storage_index])
+            raise NotEnoughSharesError(None, 0, 3)
+        return self.all_contents[self.storage_index]
+
 
     def overwrite(self, new_contents):
hunk ./src/allmydata/test/common.py 339
-        if len(new_contents) > self.MUTABLE_SIZELIMIT:
-            raise FileTooLargeError("SDMF is limited to one segment, and "
-                                    "%d > %d" % (len(new_contents),
-                                                 self.MUTABLE_SIZELIMIT))
         assert not self.is_readonly()
hunk ./src/allmydata/test/common.py 340
-        self.all_contents[self.storage_index] = new_contents
+        new_data = new_contents.read(new_contents.get_size())
+        new_data = "".join(new_data)
+        self.all_contents[self.storage_index] = new_data
         return defer.succeed(None)
     def modify(self, modifier):
         # this does not implement FileTooLargeError, but the real one does
hunk ./src/allmydata/test/common.py 350
     def _modify(self, modifier):
         assert not self.is_readonly()
         old_contents = self.all_contents[self.storage_index]
-        self.all_contents[self.storage_index] = modifier(old_contents, None, True)
+        new_data = modifier(old_contents, None, True)
+        self.all_contents[self.storage_index] = new_data
         return None
 
hunk ./src/allmydata/test/common.py 354
+    # As actually implemented, MutableFileNode and MutableFileVersion
+    # are distinct. However, nothing in the webapi uses that
+    # distinction yet -- it just uses the unified download interface
+    # provided by get_best_readable_version and read. When we start
+    # doing cooler things like LDMF, we will want to revise this code to
+    # be less simplistic.
+    def get_best_readable_version(self):
+        return defer.succeed(self)
+
+
+    def get_best_mutable_version(self):
+        return defer.succeed(self)
+
+    # Ditto for this, which is an implementation of IWritable.
+    # XXX: Declare that the same is implemented.
+    def update(self, data, offset):
+        assert not self.is_readonly()
+        def modifier(old, servermap, first_time):
+            new = old[:offset] + "".join(data.read(data.get_size()))
+            new += old[len(new):]
+            return new
+        return self.modify(modifier)
+
+
+    def read(self, consumer, offset=0, size=None):
+        data = self._download_best_version()
+        data = data[offset:]
+        if size:
+            data = data[:size]
+        consumer.write(data)
+        return defer.succeed(consumer)
+
+
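The fake node's update and read methods mirror the real interfaces closely enough for the webapi tests to drive them. A sketch of such a test (ListConsumer is a stand-in, not part of the patch):

    class ListConsumer:
        # minimal IConsumer stand-in that just collects written chunks
        def __init__(self):
            self.chunks = []
        def write(self, data):
            self.chunks.append(data)

    c = ListConsumer()
    d = fake_node.read(c, offset=10, size=100)  # fake_node: a FakeMutableFileNode
    d.addCallback(lambda ign: "".join(c.chunks))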
 def make_mutable_file_cap():
     return uri.WriteableSSKFileURI(writekey=os.urandom(16),
                                    fingerprint=os.urandom(32))
hunk ./src/allmydata/test/test_checker.py 11
 from allmydata.test.no_network import GridTestMixin
 from allmydata.immutable.upload import Data
 from allmydata.test.common_web import WebRenderingMixin
+from allmydata.mutable.publish import MutableData
 
 class FakeClient:
     def get_storage_broker(self):
hunk ./src/allmydata/test/test_checker.py 291
         def _stash_immutable(ur):
             self.imm = c0.create_node_from_uri(ur.uri)
         d.addCallback(_stash_immutable)
-        d.addCallback(lambda ign: c0.create_mutable_file("contents"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(MutableData("contents")))
         def _stash_mutable(node):
             self.mut = node
         d.addCallback(_stash_mutable)
hunk ./src/allmydata/test/test_cli.py 13
 from allmydata.util import fileutil, hashutil, base32
 from allmydata import uri
 from allmydata.immutable import upload
+from allmydata.mutable.publish import MutableData
 from allmydata.dirnode import normalize
 
 # Test that the scripts can be imported.
hunk ./src/allmydata/test/test_cli.py 707
 
         d = self.do_cli("create-alias", etudes_arg)
         def _check_create_unicode((rc, out, err)):
-            self.failUnlessReallyEqual(rc, 0)
+            #self.failUnlessReallyEqual(rc, 0)
             self.failUnlessReallyEqual(err, "")
             self.failUnlessIn("Alias %s created" % quote_output(u"\u00E9tudes"), out)
 
hunk ./src/allmydata/test/test_cli.py 1012
         d.addCallback(lambda (rc,out,err): self.failUnlessReallyEqual(out, DATA2))
         return d
 
+    def test_mutable_type(self):
+        self.basedir = "cli/Put/mutable_type"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("create-alias", "tahoe")
+        d.addCallback(lambda ignored:
+            self.do_cli("put", "--mutable", "--mutable-type=mdmf",
+                        fn1, "tahoe:uploaded.txt"))
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:uploaded.txt"))
+        d.addCallback(lambda (rc, json, err): self.failUnlessIn("mdmf", json))
+        d.addCallback(lambda ignored:
+            self.do_cli("put", "--mutable", "--mutable-type=sdmf",
+                        fn1, "tahoe:uploaded2.txt"))
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:uploaded2.txt"))
+        d.addCallback(lambda (rc, json, err):
+            self.failUnlessIn("sdmf", json))
+        return d
+
+    def test_mutable_type_unlinked(self):
+        self.basedir = "cli/Put/mutable_type_unlinked"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
+        d.addCallback(lambda (rc, cap, err):
+            self.do_cli("ls", "--json", cap))
+        d.addCallback(lambda (rc, json, err): self.failUnlessIn("mdmf", json))
+        d.addCallback(lambda ignored:
+            self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1))
+        d.addCallback(lambda (rc, cap, err):
+            self.do_cli("ls", "--json", cap))
+        d.addCallback(lambda (rc, json, err):
+            self.failUnlessIn("sdmf", json))
+        return d
+
+    def test_mutable_type_invalid_format(self):
+        self.basedir = "cli/Put/mutable_type_invalid_format"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=ldmf", fn1)
+        def _check_failure((rc, out, err)):
+            self.failIfEqual(rc, 0)
+            self.failUnlessIn("invalid", err)
+        d.addCallback(_check_failure)
+        return d
+
     def test_put_with_nonexistent_alias(self):
         # when invoked with an alias that doesn't exist, 'tahoe put'
         # should output a useful error message, not a stack trace
hunk ./src/allmydata/test/test_cli.py 2532
         self.set_up_grid()
         c0 = self.g.clients[0]
         DATA = "data" * 100
-        d = c0.create_mutable_file(DATA)
+        DATA_uploadable = MutableData(DATA)
+        d = c0.create_mutable_file(DATA_uploadable)
         def _stash_uri(n):
             self.uri = n.get_uri()
         d.addCallback(_stash_uri)
hunk ./src/allmydata/test/test_cli.py 2634
                                            upload.Data("literal",
                                                         convergence="")))
         d.addCallback(_stash_uri, "small")
-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"1"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(MutableData(DATA+"1")))
         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
         d.addCallback(_stash_uri, "mutable")
 
hunk ./src/allmydata/test/test_cli.py 2653
         # root/small
         # root/mutable
 
+        # We haven't broken anything yet, so this should all be healthy.
         d.addCallback(lambda ign: self.do_cli("deep-check", "--verbose",
                                               self.rooturi))
         def _check2((rc, out, err)):
hunk ./src/allmydata/test/test_cli.py 2668
                             in lines, out)
         d.addCallback(_check2)
 
+        # Similarly, all of these results should be as we expect them to
+        # be for a healthy file layout.
         d.addCallback(lambda ign: self.do_cli("stats", self.rooturi))
         def _check_stats((rc, out, err)):
             self.failUnlessReallyEqual(err, "")
hunk ./src/allmydata/test/test_cli.py 2685
             self.failUnlessIn(" 317-1000 : 1    (1000 B, 1000 B)", lines)
         d.addCallback(_check_stats)
 
+        # Now we break things.
         def _clobber_shares(ignored):
             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
             self.failUnlessReallyEqual(len(shares), 10)
hunk ./src/allmydata/test/test_cli.py 2710
 
         d.addCallback(lambda ign:
                       self.do_cli("deep-check", "--verbose", self.rooturi))
+        # This should reveal the missing share, but not the corrupt
+        # share, since we didn't tell the deep check operation to also
+        # verify.
         def _check3((rc, out, err)):
             self.failUnlessReallyEqual(err, "")
             self.failUnlessReallyEqual(rc, 0)
hunk ./src/allmydata/test/test_cli.py 2761
                                   "--verbose", "--verify", "--repair",
                                   self.rooturi))
         def _check6((rc, out, err)):
+            # We've just repaired the directory. There is no reason for
+            # that repair to be unsuccessful.
             self.failUnlessReallyEqual(err, "")
             self.failUnlessReallyEqual(rc, 0)
             lines = out.splitlines()
hunk ./src/allmydata/test/test_deepcheck.py 9
 from twisted.internet import threads # CLI tests use deferToThread
 from allmydata.immutable import upload
 from allmydata.mutable.common import UnrecoverableFileError
+from allmydata.mutable.publish import MutableData
 from allmydata.util import idlib
 from allmydata.util import base32
 from allmydata.scripts import runner
hunk ./src/allmydata/test/test_deepcheck.py 38
         self.basedir = "deepcheck/MutableChecker/good"
         self.set_up_grid()
         CONTENTS = "a little bit of data"
-        d = self.g.clients[0].create_mutable_file(CONTENTS)
+        CONTENTS_uploadable = MutableData(CONTENTS)
+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
         def _created(node):
             self.node = node
             self.fileurl = "uri/" + urllib.quote(node.get_uri())
hunk ./src/allmydata/test/test_deepcheck.py 61
         self.basedir = "deepcheck/MutableChecker/corrupt"
         self.set_up_grid()
         CONTENTS = "a little bit of data"
-        d = self.g.clients[0].create_mutable_file(CONTENTS)
+        CONTENTS_uploadable = MutableData(CONTENTS)
+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
         def _stash_and_corrupt(node):
             self.node = node
             self.fileurl = "uri/" + urllib.quote(node.get_uri())
hunk ./src/allmydata/test/test_deepcheck.py 99
         self.basedir = "deepcheck/MutableChecker/delete_share"
         self.set_up_grid()
         CONTENTS = "a little bit of data"
-        d = self.g.clients[0].create_mutable_file(CONTENTS)
+        CONTENTS_uploadable = MutableData(CONTENTS)
+        d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable)
         def _stash_and_delete(node):
             self.node = node
             self.fileurl = "uri/" + urllib.quote(node.get_uri())
hunk ./src/allmydata/test/test_deepcheck.py 223
             self.root = n
             self.root_uri = n.get_uri()
         d.addCallback(_created_root)
-        d.addCallback(lambda ign: c0.create_mutable_file("mutable file contents"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(MutableData("mutable file contents")))
         d.addCallback(lambda n: self.root.set_node(u"mutable", n))
         def _created_mutable(n):
             self.mutable = n
hunk ./src/allmydata/test/test_deepcheck.py 965
     def create_mangled(self, ignored, name):
         nodetype, mangletype = name.split("-", 1)
         if nodetype == "mutable":
-            d = self.g.clients[0].create_mutable_file("mutable file contents")
+            mutable_uploadable = MutableData("mutable file contents")
+            d = self.g.clients[0].create_mutable_file(mutable_uploadable)
             d.addCallback(lambda n: self.root.set_node(unicode(name), n))
         elif nodetype == "large":
             large = upload.Data("Lots of data\n" * 1000 + name + "\n", None)
hunk ./src/allmydata/test/test_dirnode.py 1304
     implements(IMutableFileNode)
     counter = 0
     def __init__(self, initial_contents=""):
-        self.data = self._get_initial_contents(initial_contents)
+        data = self._get_initial_contents(initial_contents)
+        self.data = data.read(data.get_size())
+        self.data = "".join(self.data)
+
         counter = FakeMutableFile.counter
         FakeMutableFile.counter += 1
         writekey = hashutil.ssk_writekey_hash(str(counter))
hunk ./src/allmydata/test/test_dirnode.py 1354
         pass
 
     def modify(self, modifier):
-        self.data = modifier(self.data, None, True)
+        data = modifier(self.data, None, True)
+        self.data = data
         return defer.succeed(None)
 
 class FakeNodeMaker(NodeMaker):
hunk ./src/allmydata/test/test_dirnode.py 1359
-    def create_mutable_file(self, contents="", keysize=None):
+    def create_mutable_file(self, contents="", keysize=None, version=None):
         return defer.succeed(FakeMutableFile(contents))
 
 class FakeClient2(Client):
hunk ./src/allmydata/test/test_filenode.py 98
         def _check_segment(res):
             self.failUnlessEqual(res, DATA[1:1+5])
         d.addCallback(_check_segment)
+        d.addCallback(lambda ignored: fn1.get_best_readable_version())
+        d.addCallback(lambda fn2: self.failUnlessEqual(fn1, fn2))
+        d.addCallback(lambda ignored:
+            fn1.get_size_of_best_version())
+        d.addCallback(lambda size:
+            self.failUnlessEqual(size, len(DATA)))
+        d.addCallback(lambda ignored:
+            fn1.download_to_data())
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, DATA))
+        d.addCallback(lambda ignored:
+            fn1.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, DATA))
 
         return d
 
hunk ./src/allmydata/test/test_hung_server.py 10
 from allmydata.util.consumer import download_to_data
 from allmydata.immutable import upload
 from allmydata.mutable.common import UnrecoverableFileError
+from allmydata.mutable.publish import MutableData
 from allmydata.storage.common import storage_index_to_dir
 from allmydata.test.no_network import GridTestMixin
 from allmydata.test.common import ShouldFailMixin
hunk ./src/allmydata/test/test_hung_server.py 110
         self.servers = self.servers[5:] + self.servers[:5]
 
         if mutable:
-            d = nm.create_mutable_file(mutable_plaintext)
+            uploadable = MutableData(mutable_plaintext)
+            d = nm.create_mutable_file(uploadable)
             def _uploaded_mutable(node):
                 self.uri = node.get_uri()
                 self.shares = self.find_uri_shares(self.uri)
hunk ./src/allmydata/test/test_immutable.py 267
         d.addCallback(_after_attempt)
         return d
 
+    def test_download_to_data(self):
+        d = self.n.download_to_data()
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, common.TEST_DATA))
+        return d
 
hunk ./src/allmydata/test/test_immutable.py 273
+
+    def test_download_best_version(self):
+        d = self.n.download_best_version()
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, common.TEST_DATA))
+        return d
+
+
+    def test_get_best_readable_version(self):
+        d = self.n.get_best_readable_version()
+        d.addCallback(lambda n2:
+            self.failUnlessEqual(n2, self.n))
+        return d
+
+    def test_get_size_of_best_version(self):
+        d = self.n.get_size_of_best_version()
+        d.addCallback(lambda size:
+            self.failUnlessEqual(size, len(common.TEST_DATA)))
+        return d
+
+
 # XXX extend these tests to show bad behavior of various kinds from servers:
 # raising exception from each remove_foo() method, for example
 
hunk ./src/allmydata/test/test_mutable.py 2
 
-import struct
+import os
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.internet import defer, reactor
hunk ./src/allmydata/test/test_mutable.py 8
 from allmydata import uri, client
 from allmydata.nodemaker import NodeMaker
-from allmydata.util import base32
+from allmydata.util import base32, consumer
 from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \
      ssk_pubkey_fingerprint_hash
hunk ./src/allmydata/test/test_mutable.py 11
+from allmydata.util.deferredutil import gatherResults
 from allmydata.interfaces import IRepairResults, ICheckAndRepairResults, \
hunk ./src/allmydata/test/test_mutable.py 13
-     NotEnoughSharesError
+     NotEnoughSharesError, SDMF_VERSION, MDMF_VERSION
 from allmydata.monitor import Monitor
 from allmydata.test.common import ShouldFailMixin
 from allmydata.test.no_network import GridTestMixin
hunk ./src/allmydata/test/test_mutable.py 27
      NeedMoreDataError, UnrecoverableFileError, UncoordinatedWriteError, \
      NotEnoughServersError, CorruptShareError
 from allmydata.mutable.retrieve import Retrieve
-from allmydata.mutable.publish import Publish
+from allmydata.mutable.publish import Publish, MutableFileHandle, \
+                                      MutableData, \
+                                      DEFAULT_MAX_SEGMENT_SIZE
 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
hunk ./src/allmydata/test/test_mutable.py 31
-from allmydata.mutable.layout import unpack_header, unpack_share
+from allmydata.mutable.layout import unpack_header, MDMFSlotReadProxy
 from allmydata.mutable.repairer import MustForceRepairError
 
 import allmydata.test.common_util as testutil
hunk ./src/allmydata/test/test_mutable.py 100
         self.storage = storage
         self.queries = 0
     def callRemote(self, methname, *args, **kwargs):
+        self.queries += 1
         def _call():
             meth = getattr(self, methname)
             return meth(*args, **kwargs)
hunk ./src/allmydata/test/test_mutable.py 107
         d = fireEventually()
         d.addCallback(lambda res: _call())
         return d
+
     def callRemoteOnly(self, methname, *args, **kwargs):
hunk ./src/allmydata/test/test_mutable.py 109
+        self.queries += 1
         d = self.callRemote(methname, *args, **kwargs)
         d.addBoth(lambda ignore: None)
         pass
hunk ./src/allmydata/test/test_mutable.py 157
             chr(ord(original[byte_offset]) ^ 0x01) +
             original[byte_offset+1:])
 
+def add_two(original, byte_offset):
+    # It isn't enough to simply flip a bit in the version number,
+    # because both 0 and 1 are valid version numbers; XORing with two
+    # maps each of them to an invalid value (2 or 3).
+    return (original[:byte_offset] +
+            chr(ord(original[byte_offset]) ^ 0x02) +
+            original[byte_offset+1:])
+
 def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0):
     # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a
     # list of shnums to corrupt.
hunk ./src/allmydata/test/test_mutable.py 167
+    ds = []
     for peerid in s._peers:
         shares = s._peers[peerid]
         for shnum in shares:
hunk ./src/allmydata/test/test_mutable.py 175
                 and shnum not in shnums_to_corrupt):
                 continue
             data = shares[shnum]
-            (version,
-             seqnum,
-             root_hash,
-             IV,
-             k, N, segsize, datalen,
-             o) = unpack_header(data)
-            if isinstance(offset, tuple):
-                offset1, offset2 = offset
-            else:
-                offset1 = offset
-                offset2 = 0
-            if offset1 == "pubkey":
-                real_offset = 107
-            elif offset1 in o:
-                real_offset = o[offset1]
-            else:
-                real_offset = offset1
-            real_offset = int(real_offset) + offset2 + offset_offset
-            assert isinstance(real_offset, int), offset
-            shares[shnum] = flip_bit(data, real_offset)
-    return res
+            # We're feeding the reader all of the share data, so it
+            # won't need to use the rref that we didn't provide, nor the
+            # storage index that we didn't provide. We do this because
+            # the reader will work for both MDMF and SDMF.
+            reader = MDMFSlotReadProxy(None, None, shnum, data)
+            # We need to get the offsets for the next part.
+            d = reader.get_verinfo()
+            def _do_corruption(verinfo, data, shnum):
+                (seqnum,
+                 root_hash,
+                 IV,
+                 segsize,
+                 datalen,
+                 k, n, prefix, o) = verinfo
+                if isinstance(offset, tuple):
+                    offset1, offset2 = offset
+                else:
+                    offset1 = offset
+                    offset2 = 0
+                if offset1 == "pubkey" and IV:
+                    real_offset = 107
+                elif offset1 == "share_data" and not IV:
+                    real_offset = 107
+                elif offset1 in o:
+                    real_offset = o[offset1]
+                else:
+                    real_offset = offset1
+                real_offset = int(real_offset) + offset2 + offset_offset
+                assert isinstance(real_offset, int), offset
+                if offset1 == 0: # verbyte
+                    f = add_two
+                else:
+                    f = flip_bit
+                shares[shnum] = f(data, real_offset)
+            d.addCallback(_do_corruption, data, shnum)
+            ds.append(d)
+    dl = defer.DeferredList(ds)
+    dl.addCallback(lambda ignored: res)
+    return dl
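Note that corrupt() now returns a Deferred, since it must read each share's offset table through MDMFSlotReadProxy before it can damage the right bytes; callers therefore chain it instead of calling it synchronously. Typical use in these tests, roughly:

    d.addCallback(corrupt, self._storage, "signature")  # damage every signature
    # corrupt() passes its first argument through, so the chain continues:
    d.addCallback(lambda res: attempt_download_expecting_failure())  # hypothetical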
 
 def make_storagebroker(s=None, num_peers=10):
     if not s:
hunk ./src/allmydata/test/test_mutable.py 256
             self.failUnlessEqual(len(shnums), 1)
         d.addCallback(_created)
         return d
+    test_create.timeout = 15
+
+
+    def test_create_mdmf(self):
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
+            sb = self.nodemaker.storage_broker
+            peer0 = sorted(sb.get_all_serverids())[0]
+            shnums = self._storage._peers[peer0].keys()
+            self.failUnlessEqual(len(shnums), 1)
+        d.addCallback(_created)
+        return d
+
 
     def test_serialize(self):
         n = MutableFileNode(None, None, {"k": 3, "n": 10}, None)
hunk ./src/allmydata/test/test_mutable.py 301
             d.addCallback(lambda smap: smap.dump(StringIO()))
             d.addCallback(lambda sio:
                           self.failUnless("3-of-10" in sio.getvalue()))
-            d.addCallback(lambda res: n.overwrite("contents 1"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
hunk ./src/allmydata/test/test_mutable.py 308
             d.addCallback(lambda res: n.get_size_of_best_version())
             d.addCallback(lambda size:
                           self.failUnlessEqual(size, len("contents 1")))
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
hunk ./src/allmydata/test/test_mutable.py 312
-            d.addCallback(lambda smap: n.upload("contents 3", smap))
+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
hunk ./src/allmydata/test/test_mutable.py 324
             # mapupdate-to-retrieve data caching (i.e. make the shares larger
             # than the default readsize, which is 2000 bytes). A 15kB file
             # will have 5kB shares.
-            d.addCallback(lambda res: n.overwrite("large size file" * 1000))
+            d.addCallback(lambda res: n.overwrite(MutableData("large size file" * 1000)))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res:
                           self.failUnlessEqual(res, "large size file" * 1000))
hunk ./src/allmydata/test/test_mutable.py 332
         d.addCallback(_created)
         return d
 
+
+    def test_upload_and_download_mdmf(self):
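+        # Exercise the full MDMF round trip: publish, then overwrite
+        # twice with contents large enough to span multiple segments,
+        # downloading and checking the results after each write.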
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            d = defer.succeed(None)
+            d.addCallback(lambda ignored:
+                n.get_servermap(MODE_READ))
+            def _then(servermap):
+                dumped = servermap.dump(StringIO())
+                self.failUnlessIn("3-of-10", dumped.getvalue())
+            d.addCallback(_then)
+            # Now overwrite the contents with some new contents. We want
+            # to make them big enough to force the file to be uploaded
+            # in more than one segment.
+            big_contents = "contents1" * 100000 # about 900 KiB
+            big_contents_uploadable = MutableData(big_contents)
+            d.addCallback(lambda ignored:
+                n.overwrite(big_contents_uploadable))
+            d.addCallback(lambda ignored:
+                n.download_best_version())
+            d.addCallback(lambda data:
+                self.failUnlessEqual(data, big_contents))
+            # Overwrite the contents again with some new contents. As
+            # before, they need to be big enough to force multiple
+            # segments, so that we make the downloader deal with
+            # multiple segments.
+            bigger_contents = "contents2" * 1000000 # about 9 MiB
+            bigger_contents_uploadable = MutableData(bigger_contents)
+            d.addCallback(lambda ignored:
+                n.overwrite(bigger_contents_uploadable))
+            d.addCallback(lambda ignored:
+                n.download_best_version())
+            d.addCallback(lambda data:
+                self.failUnlessEqual(data, bigger_contents))
+            return d
+        d.addCallback(_created)
+        return d
+
+
+    def test_mdmf_write_count(self):
+        # Publishing an MDMF file should only cause one write for each
+        # share that is to be published. Otherwise, we introduce
+        # undesirable semantics that are a regression from SDMF.
+        upload = MutableData("MDMF" * 100000) # about 400 KiB
+        d = self.nodemaker.create_mutable_file(upload,
+                                               version=MDMF_VERSION)
+        def _check_server_write_counts(ignored):
+            sb = self.nodemaker.storage_broker
+            peers = sb.test_servers.values()
+            for peer in peers:
+                self.failUnlessEqual(peer.queries, 1)
+        d.addCallback(_check_server_write_counts)
+        return d
+
+
     def test_create_with_initial_contents(self):
hunk ./src/allmydata/test/test_mutable.py 388
-        d = self.nodemaker.create_mutable_file("contents 1")
+        upload1 = MutableData("contents 1")
+        d = self.nodemaker.create_mutable_file(upload1)
         def _created(n):
             d = n.download_best_version()
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
hunk ./src/allmydata/test/test_mutable.py 393
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            upload2 = MutableData("contents 2")
+            d.addCallback(lambda res: n.overwrite(upload2))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
             return d
hunk ./src/allmydata/test/test_mutable.py 400
         d.addCallback(_created)
         return d
+    test_create_with_initial_contents.timeout = 15
+
+
+    def test_create_mdmf_with_initial_contents(self):
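+        # create_mutable_file should accept initial contents for an
+        # MDMF file, even contents large enough to span several
+        # segments.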
+        initial_contents = "foobarbaz" * 131072 # about 1.1 MiB
+        initial_contents_uploadable = MutableData(initial_contents)
+        d = self.nodemaker.create_mutable_file(initial_contents_uploadable,
+                                               version=MDMF_VERSION)
+        def _created(n):
+            d = n.download_best_version()
+            d.addCallback(lambda data:
+                self.failUnlessEqual(data, initial_contents))
+            uploadable2 = MutableData(initial_contents + "foobarbaz")
+            d.addCallback(lambda ignored:
+                n.overwrite(uploadable2))
+            d.addCallback(lambda ignored:
+                n.download_best_version())
+            d.addCallback(lambda data:
+                self.failUnlessEqual(data, initial_contents +
+                                           "foobarbaz"))
+            return d
+        d.addCallback(_created)
+        return d
+    test_create_mdmf_with_initial_contents.timeout = 20
+
 
     def test_response_cache_memory_leak(self):
         d = self.nodemaker.create_mutable_file("contents")
hunk ./src/allmydata/test/test_mutable.py 451
             key = n.get_writekey()
             self.failUnless(isinstance(key, str), key)
             self.failUnlessEqual(len(key), 16) # AES key size
-            return data
+            return MutableData(data)
         d = self.nodemaker.create_mutable_file(_make_contents)
         def _created(n):
             return n.download_best_version()
hunk ./src/allmydata/test/test_mutable.py 459
         d.addCallback(lambda data2: self.failUnlessEqual(data2, data))
         return d
 
+
+    def test_create_mdmf_with_initial_contents_function(self):
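+        # As test_create_with_initial_contents_function, but for MDMF:
+        # the content callable is invoked with the new node and must
+        # return an uploadable.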
+        data = "initial contents" * 100000
+        def _make_contents(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            key = n.get_writekey()
+            self.failUnless(isinstance(key, str), key)
+            self.failUnlessEqual(len(key), 16)
+            return MutableData(data)
+        d = self.nodemaker.create_mutable_file(_make_contents,
+                                               version=MDMF_VERSION)
+        d.addCallback(lambda n:
+            n.download_best_version())
+        d.addCallback(lambda data2:
+            self.failUnlessEqual(data2, data))
+        return d
+
+
     def test_create_with_too_large_contents(self):
         BIG = "a" * (self.OLD_MAX_SEGMENT_SIZE + 1)
hunk ./src/allmydata/test/test_mutable.py 479
-        d = self.nodemaker.create_mutable_file(BIG)
+        BIG_uploadable = MutableData(BIG)
+        d = self.nodemaker.create_mutable_file(BIG_uploadable)
         def _created(n):
hunk ./src/allmydata/test/test_mutable.py 482
-            d = n.overwrite(BIG)
+            other_BIG_uploadable = MutableData(BIG)
+            d = n.overwrite(other_BIG_uploadable)
             return d
         d.addCallback(_created)
         return d
hunk ./src/allmydata/test/test_mutable.py 497
 
     def test_modify(self):
         def _modifier(old_contents, servermap, first_time):
-            return old_contents + "line2"
+            new_contents = old_contents + "line2"
+            return new_contents
         def _non_modifier(old_contents, servermap, first_time):
             return old_contents
         def _none_modifier(old_contents, servermap, first_time):
hunk ./src/allmydata/test/test_mutable.py 506
         def _error_modifier(old_contents, servermap, first_time):
             raise ValueError("oops")
         def _toobig_modifier(old_contents, servermap, first_time):
-            return "b" * (self.OLD_MAX_SEGMENT_SIZE+1)
+            new_content = "b" * (self.OLD_MAX_SEGMENT_SIZE + 1)
+            return new_content
         calls = []
         def _ucw_error_modifier(old_contents, servermap, first_time):
             # simulate an UncoordinatedWriteError once
hunk ./src/allmydata/test/test_mutable.py 514
             calls.append(1)
             if len(calls) <= 1:
                 raise UncoordinatedWriteError("simulated")
-            return old_contents + "line3"
+            new_contents = old_contents + "line3"
+            return new_contents
         def _ucw_error_non_modifier(old_contents, servermap, first_time):
             # simulate an UncoordinatedWriteError once, and don't actually
             # modify the contents on subsequent invocations
hunk ./src/allmydata/test/test_mutable.py 524
                 raise UncoordinatedWriteError("simulated")
             return old_contents
 
-        d = self.nodemaker.create_mutable_file("line1")
+        initial_contents = "line1"
+        d = self.nodemaker.create_mutable_file(MutableData(initial_contents))
         def _created(n):
             d = n.modify(_modifier)
             d.addCallback(lambda res: n.download_best_version())
hunk ./src/allmydata/test/test_mutable.py 582
             return d
         d.addCallback(_created)
         return d
+    test_modify.timeout = 15
+
 
     def test_modify_backoffer(self):
         def _modifier(old_contents, servermap, first_time):
hunk ./src/allmydata/test/test_mutable.py 609
         giveuper._delay = 0.1
         giveuper.factor = 1
 
-        d = self.nodemaker.create_mutable_file("line1")
+        d = self.nodemaker.create_mutable_file(MutableData("line1"))
         def _created(n):
             d = n.modify(_modifier)
             d.addCallback(lambda res: n.download_best_version())
hunk ./src/allmydata/test/test_mutable.py 659
             d.addCallback(lambda smap: smap.dump(StringIO()))
             d.addCallback(lambda sio:
                           self.failUnless("3-of-10" in sio.getvalue()))
-            d.addCallback(lambda res: n.overwrite("contents 1"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 1")))
             d.addCallback(lambda res: self.failUnlessIdentical(res, None))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
hunk ./src/allmydata/test/test_mutable.py 663
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
hunk ./src/allmydata/test/test_mutable.py 667
-            d.addCallback(lambda smap: n.upload("contents 3", smap))
+            d.addCallback(lambda smap: n.upload(MutableData("contents 3"), smap))
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3"))
             d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING))
hunk ./src/allmydata/test/test_mutable.py 680
         return d
 
 
-class MakeShares(unittest.TestCase):
-    def test_encrypt(self):
-        nm = make_nodemaker()
-        CONTENTS = "some initial contents"
-        d = nm.create_mutable_file(CONTENTS)
-        def _created(fn):
-            p = Publish(fn, nm.storage_broker, None)
-            p.salt = "SALT" * 4
-            p.readkey = "\x00" * 16
-            p.newdata = CONTENTS
-            p.required_shares = 3
-            p.total_shares = 10
-            p.setup_encoding_parameters()
-            return p._encrypt_and_encode()
+    def test_size_after_servermap_update(self):
+        # a mutable file node should have something to say about how big
+        # it is after a servermap update is performed, since this tells
+        # us how large the best version of that mutable file is.
+        d = self.nodemaker.create_mutable_file()
+        def _created(n):
+            self.n = n
+            return n.get_servermap(MODE_READ)
+        d.addCallback(_created)
+        d.addCallback(lambda ignored:
+            self.failUnlessEqual(self.n.get_size(), 0))
+        d.addCallback(lambda ignored:
+            self.n.overwrite(MutableData("foobarbaz")))
+        d.addCallback(lambda ignored:
+            self.failUnlessEqual(self.n.get_size(), 9))
+        d.addCallback(lambda ignored:
+            self.nodemaker.create_mutable_file(MutableData("foobarbaz")))
+        d.addCallback(_created)
+        d.addCallback(lambda ignored:
+            self.failUnlessEqual(self.n.get_size(), 9))
+        return d
+
+
+class PublishMixin:
+    def publish_one(self):
+        # publish a file and create shares, which can then be manipulated
+        # later.
+        self.CONTENTS = "New contents go here" * 1000
+        self.uploadable = MutableData(self.CONTENTS)
+        self._storage = FakeStorage()
+        self._nodemaker = make_nodemaker(self._storage)
+        self._storage_broker = self._nodemaker.storage_broker
+        d = self._nodemaker.create_mutable_file(self.uploadable)
+        def _created(node):
+            self._fn = node
+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
         d.addCallback(_created)
hunk ./src/allmydata/test/test_mutable.py 717
-        def _done(shares_and_shareids):
-            (shares, share_ids) = shares_and_shareids
-            self.failUnlessEqual(len(shares), 10)
-            for sh in shares:
-                self.failUnless(isinstance(sh, str))
-                self.failUnlessEqual(len(sh), 7)
-            self.failUnlessEqual(len(share_ids), 10)
-        d.addCallback(_done)
         return d
 
hunk ./src/allmydata/test/test_mutable.py 719
-    def test_generate(self):
-        nm = make_nodemaker()
-        CONTENTS = "some initial contents"
-        d = nm.create_mutable_file(CONTENTS)
-        def _created(fn):
-            self._fn = fn
-            p = Publish(fn, nm.storage_broker, None)
-            self._p = p
-            p.newdata = CONTENTS
-            p.required_shares = 3
-            p.total_shares = 10
-            p.setup_encoding_parameters()
-            p._new_seqnum = 3
-            p.salt = "SALT" * 4
-            # make some fake shares
-            shares_and_ids = ( ["%07d" % i for i in range(10)], range(10) )
-            p._privkey = fn.get_privkey()
-            p._encprivkey = fn.get_encprivkey()
-            p._pubkey = fn.get_pubkey()
-            return p._generate_shares(shares_and_ids)
+    def publish_mdmf(self):
+        # like publish_one, except that the result is guaranteed to be
+        # an MDMF file.
+        # self.CONTENTS should be large enough to span more than one
+        # segment.
+        self.CONTENTS = "This is an MDMF file" * 100000
+        self.uploadable = MutableData(self.CONTENTS)
+        self._storage = FakeStorage()
+        self._nodemaker = make_nodemaker(self._storage)
+        self._storage_broker = self._nodemaker.storage_broker
+        d = self._nodemaker.create_mutable_file(self.uploadable, version=MDMF_VERSION)
+        def _created(node):
+            self._fn = node
+            self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
         d.addCallback(_created)
hunk ./src/allmydata/test/test_mutable.py 733
-        def _generated(res):
-            p = self._p
-            final_shares = p.shares
-            root_hash = p.root_hash
-            self.failUnlessEqual(len(root_hash), 32)
-            self.failUnless(isinstance(final_shares, dict))
-            self.failUnlessEqual(len(final_shares), 10)
-            self.failUnlessEqual(sorted(final_shares.keys()), range(10))
-            for i,sh in final_shares.items():
-                self.failUnless(isinstance(sh, str))
-                # feed the share through the unpacker as a sanity-check
-                pieces = unpack_share(sh)
-                (u_seqnum, u_root_hash, IV, k, N, segsize, datalen,
-                 pubkey, signature, share_hash_chain, block_hash_tree,
-                 share_data, enc_privkey) = pieces
-                self.failUnlessEqual(u_seqnum, 3)
-                self.failUnlessEqual(u_root_hash, root_hash)
-                self.failUnlessEqual(k, 3)
-                self.failUnlessEqual(N, 10)
-                self.failUnlessEqual(segsize, 21)
-                self.failUnlessEqual(datalen, len(CONTENTS))
-                self.failUnlessEqual(pubkey, p._pubkey.serialize())
-                sig_material = struct.pack(">BQ32s16s BBQQ",
-                                           0, p._new_seqnum, root_hash, IV,
-                                           k, N, segsize, datalen)
-                self.failUnless(p._pubkey.verify(sig_material, signature))
-                #self.failUnlessEqual(signature, p._privkey.sign(sig_material))
-                self.failUnless(isinstance(share_hash_chain, dict))
-                self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++
-                for shnum,share_hash in share_hash_chain.items():
-                    self.failUnless(isinstance(shnum, int))
-                    self.failUnless(isinstance(share_hash, str))
-                    self.failUnlessEqual(len(share_hash), 32)
-                self.failUnless(isinstance(block_hash_tree, list))
-                self.failUnlessEqual(len(block_hash_tree), 1) # very small tree
-                self.failUnlessEqual(IV, "SALT"*4)
-                self.failUnlessEqual(len(share_data), len("%07d" % 1))
-                self.failUnlessEqual(enc_privkey, self._fn.get_encprivkey())
-        d.addCallback(_generated)
         return d
 
hunk ./src/allmydata/test/test_mutable.py 735
-    # TODO: when we publish to 20 peers, we should get one share per peer on 10
-    # when we publish to 3 peers, we should get either 3 or 4 shares per peer
-    # when we publish to zero peers, we should get a NotEnoughSharesError
 
hunk ./src/allmydata/test/test_mutable.py 736
-class PublishMixin:
-    def publish_one(self):
-        # publish a file and create shares, which can then be manipulated
-        # later.
-        self.CONTENTS = "New contents go here" * 1000
+    def publish_sdmf(self):
+        # like publish_one, except that the result is guaranteed to be
+        # an SDMF file
+        self.CONTENTS = "This is an SDMF file" * 1000
+        self.uploadable = MutableData(self.CONTENTS)
         self._storage = FakeStorage()
         self._nodemaker = make_nodemaker(self._storage)
         self._storage_broker = self._nodemaker.storage_broker
hunk ./src/allmydata/test/test_mutable.py 744
-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
+        d = self._nodemaker.create_mutable_file(self.uploadable, version=SDMF_VERSION)
         def _created(node):
             self._fn = node
             self._fn2 = self._nodemaker.create_from_cap(node.get_uri())
hunk ./src/allmydata/test/test_mutable.py 751
         d.addCallback(_created)
         return d
 
-    def publish_multiple(self):
+
+    def publish_multiple(self, version=SDMF_VERSION):
         self.CONTENTS = ["Contents 0",
                          "Contents 1",
                          "Contents 2",
hunk ./src/allmydata/test/test_mutable.py 758
                          "Contents 3a",
                          "Contents 3b"]
+        self.uploadables = [MutableData(d) for d in self.CONTENTS]
         self._copied_shares = {}
         self._storage = FakeStorage()
         self._nodemaker = make_nodemaker(self._storage)
hunk ./src/allmydata/test/test_mutable.py 762
-        d = self._nodemaker.create_mutable_file(self.CONTENTS[0]) # seqnum=1
+        d = self._nodemaker.create_mutable_file(self.uploadables[0], version=version) # seqnum=1
         def _created(node):
             self._fn = node
             # now create multiple versions of the same file, and accumulate
hunk ./src/allmydata/test/test_mutable.py 769
             # their shares, so we can mix and match them later.
             d = defer.succeed(None)
             d.addCallback(self._copy_shares, 0)
-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[1])) #s2
+            d.addCallback(lambda res: node.overwrite(self.uploadables[1])) #s2
             d.addCallback(self._copy_shares, 1)
hunk ./src/allmydata/test/test_mutable.py 771
-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[2])) #s3
+            d.addCallback(lambda res: node.overwrite(self.uploadables[2])) #s3
             d.addCallback(self._copy_shares, 2)
hunk ./src/allmydata/test/test_mutable.py 773
-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[3])) #s4a
+            d.addCallback(lambda res: node.overwrite(self.uploadables[3])) #s4a
             d.addCallback(self._copy_shares, 3)
             # now we replace all the shares with version s3, and upload a new
             # version to get s4b.
hunk ./src/allmydata/test/test_mutable.py 779
             rollback = dict([(i,2) for i in range(10)])
             d.addCallback(lambda res: self._set_versions(rollback))
-            d.addCallback(lambda res: node.overwrite(self.CONTENTS[4])) #s4b
+            d.addCallback(lambda res: node.overwrite(self.uploadables[4])) #s4b
             d.addCallback(self._copy_shares, 4)
             # we leave the storage in state 4
             return d
hunk ./src/allmydata/test/test_mutable.py 786
         d.addCallback(_created)
         return d
 
+
     def _copy_shares(self, ignored, index):
         shares = self._storage._peers
         # we need a deep copy
hunk ./src/allmydata/test/test_mutable.py 810
                     shares[peerid][shnum] = oldshares[index][peerid][shnum]
 
 
+
+
 class Servermap(unittest.TestCase, PublishMixin):
     def setUp(self):
         return self.publish_one()
hunk ./src/allmydata/test/test_mutable.py 816
 
-    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None):
+    def make_servermap(self, mode=MODE_CHECK, fn=None, sb=None,
+                       update_range=None):
         if fn is None:
             fn = self._fn
         if sb is None:
hunk ./src/allmydata/test/test_mutable.py 823
             sb = self._storage_broker
         smu = ServermapUpdater(fn, sb, Monitor(),
-                               ServerMap(), mode)
+                               ServerMap(), mode, update_range=update_range)
         d = smu.update()
         return d
 
hunk ./src/allmydata/test/test_mutable.py 889
         # create a new file, which is large enough to knock the privkey out
         # of the early part of the file
         LARGE = "These are Larger contents" * 200 # about 5KB
-        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE))
+        LARGE_uploadable = MutableData(LARGE)
+        d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE_uploadable))
         def _created(large_fn):
             large_fn2 = self._nodemaker.create_from_cap(large_fn.get_uri())
             return self.make_servermap(MODE_WRITE, large_fn2)
hunk ./src/allmydata/test/test_mutable.py 898
         d.addCallback(lambda sm: self.failUnlessOneRecoverable(sm, 10))
         return d
 
+
     def test_mark_bad(self):
         d = defer.succeed(None)
         ms = self.make_servermap
hunk ./src/allmydata/test/test_mutable.py 944
         self._storage._peers = {} # delete all shares
         ms = self.make_servermap
         d = defer.succeed(None)
 
         d.addCallback(lambda res: ms(mode=MODE_CHECK))
         d.addCallback(lambda sm: self.failUnlessNoneRecoverable(sm))
 
hunk ./src/allmydata/test/test_mutable.py 996
         return d
 
 
+    def test_servermapupdater_finds_mdmf_files(self):
+        # setUp already published an MDMF file for us. We just need to
+        # make sure that when we run the ServermapUpdater, the file is
+        # reported to have one recoverable version.
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            self.publish_mdmf())
+        d.addCallback(lambda ignored:
+            self.make_servermap(mode=MODE_CHECK))
+        # Calling make_servermap also updates the servermap in the mode
+        # that we specify, so we just need to see what it says.
+        def _check_servermap(sm):
+            self.failUnlessEqual(len(sm.recoverable_versions()), 1)
+        d.addCallback(_check_servermap)
+        return d
+
+
+    def test_fetch_update(self):
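+        # A MODE_WRITE mapupdate with an update_range should also
+        # fetch the update data for each share: one entry per
+        # recoverable version.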
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            self.publish_mdmf())
+        d.addCallback(lambda ignored:
+            self.make_servermap(mode=MODE_WRITE, update_range=(1, 2)))
+        def _check_servermap(sm):
+            # 10 shares
+            self.failUnlessEqual(len(sm.update_data), 10)
+            # one version
+            for data in sm.update_data.itervalues():
+                self.failUnlessEqual(len(data), 1)
+        d.addCallback(_check_servermap)
+        return d
+
+
+    def test_servermapupdater_finds_sdmf_files(self):
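+        # As above, but for an SDMF file: the ServermapUpdater should
+        # find exactly one recoverable version.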
+        d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            self.publish_sdmf())
+        d.addCallback(lambda ignored:
+            self.make_servermap(mode=MODE_CHECK))
+        d.addCallback(lambda servermap:
+            self.failUnlessEqual(len(servermap.recoverable_versions()), 1))
+        return d
+
 
 class Roundtrip(unittest.TestCase, testutil.ShouldFailMixin, PublishMixin):
     def setUp(self):
hunk ./src/allmydata/test/test_mutable.py 1079
         if version is None:
             version = servermap.best_recoverable_version()
         r = Retrieve(self._fn, servermap, version)
-        return r.download()
+        c = consumer.MemoryConsumer()
+        d = r.download(consumer=c)
+        d.addCallback(lambda mc: "".join(mc.chunks))
+        return d
+
 
     def test_basic(self):
         d = self.make_servermap()
hunk ./src/allmydata/test/test_mutable.py 1160
         return d
     test_no_servers_download.timeout = 15
 
+
     def _test_corrupt_all(self, offset, substring,
hunk ./src/allmydata/test/test_mutable.py 1162
-                          should_succeed=False, corrupt_early=True,
-                          failure_checker=None):
+                          should_succeed=False,
+                          corrupt_early=True,
+                          failure_checker=None,
+                          fetch_privkey=False):
         d = defer.succeed(None)
         if corrupt_early:
             d.addCallback(corrupt, self._storage, offset)
hunk ./src/allmydata/test/test_mutable.py 1182
                     self.failUnlessIn(substring, "".join(allproblems))
                 return servermap
             if should_succeed:
-                d1 = self._fn.download_version(servermap, ver)
+                d1 = self._fn.download_version(servermap, ver,
+                                               fetch_privkey)
                 d1.addCallback(lambda new_contents:
                                self.failUnlessEqual(new_contents, self.CONTENTS))
             else:
hunk ./src/allmydata/test/test_mutable.py 1190
                 d1 = self.shouldFail(NotEnoughSharesError,
                                      "_corrupt_all(offset=%s)" % (offset,),
                                      substring,
-                                     self._fn.download_version, servermap, ver)
+                                     self._fn.download_version, servermap,
+                                                                ver,
+                                                                fetch_privkey)
             if failure_checker:
                 d1.addCallback(failure_checker)
             d1.addCallback(lambda res: servermap)
hunk ./src/allmydata/test/test_mutable.py 1201
         return d
 
     def test_corrupt_all_verbyte(self):
-        # when the version byte is not 0, we hit an UnknownVersionError error
-        # in unpack_share().
+        # when the version byte is not 0 or 1, we hit an
+        # UnknownVersionError in unpack_share().
         d = self._test_corrupt_all(0, "UnknownVersionError")
         def _check_servermap(servermap):
             # and the dump should mention the problems
hunk ./src/allmydata/test/test_mutable.py 1208
             s = StringIO()
             dump = servermap.dump(s).getvalue()
-            self.failUnless("10 PROBLEMS" in dump, dump)
+            self.failUnless("30 PROBLEMS" in dump, dump)
         d.addCallback(_check_servermap)
         return d
 
hunk ./src/allmydata/test/test_mutable.py 1278
         return self._test_corrupt_all("enc_privkey", None, should_succeed=True)
 
 
+    def test_corrupt_all_encprivkey_late(self):
+        # this should work for the same reason as above, but we
+        # corrupt after the servermap update to exercise the
+        # error-handling code.
+        # We need to remove the privkey from the node, or the retrieve
+        # process won't know to update it.
+        self._fn._privkey = None
+        return self._test_corrupt_all("enc_privkey",
+                                      None, # this shouldn't fail
+                                      should_succeed=True,
+                                      corrupt_early=False,
+                                      fetch_privkey=True)
+
+
     def test_corrupt_all_seqnum_late(self):
         # corrupting the seqnum between mapupdate and retrieve should result
         # in NotEnoughSharesError, since each share will look invalid
hunk ./src/allmydata/test/test_mutable.py 1298
         def _check(res):
             f = res[0]
             self.failUnless(f.check(NotEnoughSharesError))
-            self.failUnless("someone wrote to the data since we read the servermap" in str(f))
+            self.failUnless("uncoordinated write" in str(f))
         return self._test_corrupt_all(1, "ran out of peers",
                                       corrupt_early=False,
                                       failure_checker=_check)
hunk ./src/allmydata/test/test_mutable.py 1342
                             in str(servermap.problems[0]))
             ver = servermap.best_recoverable_version()
             r = Retrieve(self._fn, servermap, ver)
-            return r.download()
+            c = consumer.MemoryConsumer()
+            return r.download(c)
         d.addCallback(_do_retrieve)
hunk ./src/allmydata/test/test_mutable.py 1345
+        d.addCallback(lambda mc: "".join(mc.chunks))
         d.addCallback(lambda new_contents:
                       self.failUnlessEqual(new_contents, self.CONTENTS))
         return d
hunk ./src/allmydata/test/test_mutable.py 1350
 
-    def test_corrupt_some(self):
-        # corrupt the data of first five shares (so the servermap thinks
-        # they're good but retrieve marks them as bad), so that the
-        # MODE_READ set of 6 will be insufficient, forcing node.download to
-        # retry with more servers.
-        corrupt(None, self._storage, "share_data", range(5))
-        d = self.make_servermap()
+
+    def _test_corrupt_some(self, offset, mdmf=False):
+        if mdmf:
+            d = self.publish_mdmf()
+        else:
+            d = defer.succeed(None)
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, offset, range(5)))
+        d.addCallback(lambda ignored:
+            self.make_servermap())
         def _do_retrieve(servermap):
             ver = servermap.best_recoverable_version()
             self.failUnless(ver)
hunk ./src/allmydata/test/test_mutable.py 1366
             return self._fn.download_best_version()
         d.addCallback(_do_retrieve)
         d.addCallback(lambda new_contents:
-                      self.failUnlessEqual(new_contents, self.CONTENTS))
+            self.failUnlessEqual(new_contents, self.CONTENTS))
         return d
 
hunk ./src/allmydata/test/test_mutable.py 1369
+
+    def test_corrupt_some(self):
+        # corrupt the data of first five shares (so the servermap thinks
+        # they're good but retrieve marks them as bad), so that the
+        # MODE_READ set of 6 will be insufficient, forcing node.download to
+        # retry with more servers.
+        return self._test_corrupt_some("share_data")
+
+
     def test_download_fails(self):
hunk ./src/allmydata/test/test_mutable.py 1379
-        corrupt(None, self._storage, "signature")
-        d = self.shouldFail(UnrecoverableFileError, "test_download_anyway",
+        d = corrupt(None, self._storage, "signature")
+        d.addCallback(lambda ignored:
+            self.shouldFail(UnrecoverableFileError, "test_download_anyway",
                             "no recoverable versions",
hunk ./src/allmydata/test/test_mutable.py 1383
-                            self._fn.download_best_version)
+                            self._fn.download_best_version))
         return d
 
 
hunk ./src/allmydata/test/test_mutable.py 1387
+
+    def test_corrupt_mdmf_block_hash_tree(self):
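+        # Corrupting the block hash tree in every share should cause
+        # the download to fail with a block hash tree failure.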
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
+                                   "block hash tree failure",
+                                   corrupt_early=False,
+                                   should_succeed=False))
+        return d
+
+
+    def test_corrupt_mdmf_block_hash_tree_late(self):
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            self._test_corrupt_all(("block_hash_tree", 12 * 32),
+                                   "block hash tree failure",
+                                   corrupt_early=True,
+                                   should_succeed=False))
+        return d
+
+
+    def test_corrupt_mdmf_share_data(self):
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            # TODO: Find out what the block size is and corrupt a
+            # specific block, rather than just guessing.
+            self._test_corrupt_all(("share_data", 12 * 40),
+                                    "block hash tree failure",
+                                    corrupt_early=True,
+                                    should_succeed=False))
+        return d
+
+
+    def test_corrupt_some_mdmf(self):
+        return self._test_corrupt_some(("share_data", 12 * 40),
+                                       mdmf=True)
+
+
 class CheckerMixin:
     def check_good(self, r, where):
         self.failUnless(r.is_healthy(), where)
hunk ./src/allmydata/test/test_mutable.py 1455
         d.addCallback(self.check_good, "test_check_good")
         return d
 
+    def test_check_mdmf_good(self):
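+        # A freshly published MDMF file should be reported as healthy.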
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
+        d.addCallback(self.check_good, "test_check_mdmf_good")
+        return d
+
     def test_check_no_shares(self):
         for shares in self._storage._peers.values():
             shares.clear()
hunk ./src/allmydata/test/test_mutable.py 1469
         d.addCallback(self.check_bad, "test_check_no_shares")
         return d
 
+    def test_check_mdmf_no_shares(self):
+        d = self.publish_mdmf()
+        def _then(ignored):
+            for shares in self._storage._peers.values():
+                shares.clear()
+        d.addCallback(_then)
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
+        d.addCallback(self.check_bad, "test_check_mdmf_no_shares")
+        return d
+
     def test_check_not_enough_shares(self):
         for shares in self._storage._peers.values():
             for shnum in shares.keys():
hunk ./src/allmydata/test/test_mutable.py 1489
         d.addCallback(self.check_bad, "test_check_not_enough_shares")
         return d
 
+    def test_check_mdmf_not_enough_shares(self):
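+        # Deleting every share except share 0 leaves too few shares to
+        # recover the file, so the checker should report it as bad.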
+        d = self.publish_mdmf()
+        def _then(ignored):
+            for shares in self._storage._peers.values():
+                for shnum in shares.keys():
+                    if shnum > 0:
+                        del shares[shnum]
+        d.addCallback(_then)
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
+        d.addCallback(self.check_bad, "test_check_mdmf_not_enougH_shares")
+        return d
+
+
     def test_check_all_bad_sig(self):
hunk ./src/allmydata/test/test_mutable.py 1504
-        corrupt(None, self._storage, 1) # bad sig
-        d = self._fn.check(Monitor())
+        d = corrupt(None, self._storage, 1) # bad sig
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
         d.addCallback(self.check_bad, "test_check_all_bad_sig")
         return d
 
hunk ./src/allmydata/test/test_mutable.py 1510
+    def test_check_mdmf_all_bad_sig(self):
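+        # A corrupted signature on every share of an MDMF file should
+        # be caught by the checker.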
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, 1))
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
+        d.addCallback(self.check_bad, "test_check_mdmf_all_bad_sig")
+        return d
+
     def test_check_all_bad_blocks(self):
hunk ./src/allmydata/test/test_mutable.py 1520
-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
         # the Checker won't notice this.. it doesn't look at actual data
hunk ./src/allmydata/test/test_mutable.py 1522
-        d = self._fn.check(Monitor())
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
         d.addCallback(self.check_good, "test_check_all_bad_blocks")
         return d
 
hunk ./src/allmydata/test/test_mutable.py 1527
+
+    def test_check_mdmf_all_bad_blocks(self):
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, "share_data"))
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor()))
+        d.addCallback(self.check_good, "test_check_mdmf_all_bad_blocks")
+        return d
+
     def test_verify_good(self):
         d = self._fn.check(Monitor(), verify=True)
         d.addCallback(self.check_good, "test_verify_good")
hunk ./src/allmydata/test/test_mutable.py 1541
         return d
+    test_verify_good.timeout = 15
 
     def test_verify_all_bad_sig(self):
hunk ./src/allmydata/test/test_mutable.py 1544
-        corrupt(None, self._storage, 1) # bad sig
-        d = self._fn.check(Monitor(), verify=True)
+        d = corrupt(None, self._storage, 1) # bad sig
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
         d.addCallback(self.check_bad, "test_verify_all_bad_sig")
         return d
 
hunk ./src/allmydata/test/test_mutable.py 1551
     def test_verify_one_bad_sig(self):
-        corrupt(None, self._storage, 1, [9]) # bad sig
-        d = self._fn.check(Monitor(), verify=True)
+        d = corrupt(None, self._storage, 1, [9]) # bad sig
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
         d.addCallback(self.check_bad, "test_verify_one_bad_sig")
         return d
 
hunk ./src/allmydata/test/test_mutable.py 1558
     def test_verify_one_bad_block(self):
-        corrupt(None, self._storage, "share_data", [9]) # bad blocks
+        d = corrupt(None, self._storage, "share_data", [9]) # bad blocks
         # the Verifier *will* notice this, since it examines every byte
hunk ./src/allmydata/test/test_mutable.py 1560
-        d = self._fn.check(Monitor(), verify=True)
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
         d.addCallback(self.check_bad, "test_verify_one_bad_block")
         d.addCallback(self.check_expected_failure,
                       CorruptShareError, "block hash tree failure",
hunk ./src/allmydata/test/test_mutable.py 1569
         return d
 
     def test_verify_one_bad_sharehash(self):
-        corrupt(None, self._storage, "share_hash_chain", [9], 5)
-        d = self._fn.check(Monitor(), verify=True)
+        d = corrupt(None, self._storage, "share_hash_chain", [9], 5)
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
         d.addCallback(self.check_bad, "test_verify_one_bad_sharehash")
         d.addCallback(self.check_expected_failure,
                       CorruptShareError, "corrupt hashes",
hunk ./src/allmydata/test/test_mutable.py 1579
         return d
 
     def test_verify_one_bad_encprivkey(self):
-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
-        d = self._fn.check(Monitor(), verify=True)
+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
         d.addCallback(self.check_bad, "test_verify_one_bad_encprivkey")
         d.addCallback(self.check_expected_failure,
                       CorruptShareError, "invalid privkey",
hunk ./src/allmydata/test/test_mutable.py 1589
         return d
 
     def test_verify_one_bad_encprivkey_uncheckable(self):
-        corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
+        d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey
         readonly_fn = self._fn.get_readonly()
         # a read-only node has no way to validate the privkey
hunk ./src/allmydata/test/test_mutable.py 1592
-        d = readonly_fn.check(Monitor(), verify=True)
+        d.addCallback(lambda ignored:
+            readonly_fn.check(Monitor(), verify=True))
         d.addCallback(self.check_good,
                       "test_verify_one_bad_encprivkey_uncheckable")
         return d
hunk ./src/allmydata/test/test_mutable.py 1598
 
+
+    def test_verify_mdmf_good(self):
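+        # An uncorrupted MDMF file should pass a full verification.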
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
+        d.addCallback(self.check_good, "test_verify_mdmf_good")
+        return d
+
+
+    def test_verify_mdmf_one_bad_block(self):
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, "share_data", [1]))
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
+        # We should find one bad block here
+        d.addCallback(self.check_bad, "test_verify_mdmf_one_bad_block")
+        d.addCallback(self.check_expected_failure,
+                      CorruptShareError, "block hash tree failure",
+                      "test_verify_mdmf_one_bad_block")
+        return d
+
+
+    def test_verify_mdmf_bad_encprivkey(self):
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, "enc_privkey", [1]))
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_encprivkey")
+        d.addCallback(self.check_expected_failure,
+                      CorruptShareError, "privkey",
+                      "test_verify_mdmf_bad_encprivkey")
+        return d
+
+
+    def test_verify_mdmf_bad_sig(self):
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, 1, [1]))
+        d.addCallback(lambda ignored:
+            self._fn.check(Monitor(), verify=True))
+        d.addCallback(self.check_bad, "test_verify_mdmf_bad_sig")
+        return d
+
+
+    def test_verify_mdmf_bad_encprivkey_uncheckable(self):
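+        # A read-only node has no way to validate the privkey, so the
+        # verifier should ignore the corrupted privkey and report the
+        # file as healthy.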
+        d = self.publish_mdmf()
+        d.addCallback(lambda ignored:
+            corrupt(None, self._storage, "enc_privkey", [1]))
+        d.addCallback(lambda ignored:
+            self._fn.get_readonly())
+        d.addCallback(lambda fn:
+            fn.check(Monitor(), verify=True))
+        d.addCallback(self.check_good,
+                      "test_verify_mdmf_bad_encprivkey_uncheckable")
+        return d
+
+
 class Repair(unittest.TestCase, PublishMixin, ShouldFailMixin):
 
     def get_shares(self, s):
hunk ./src/allmydata/test/test_mutable.py 1722
         current_shares = self.old_shares[-1]
         self.failUnlessEqual(old_shares, current_shares)
 
+
     def test_unrepairable_0shares(self):
         d = self.publish_one()
         def _delete_all_shares(ign):
hunk ./src/allmydata/test/test_mutable.py 1737
         d.addCallback(_check)
         return d
 
+    def test_mdmf_unrepairable_0shares(self):
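+        # Repairing an MDMF file with no remaining shares should fail,
+        # just as it does for SDMF.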
+        d = self.publish_mdmf()
+        def _delete_all_shares(ign):
+            shares = self._storage._peers
+            for peerid in shares:
+                shares[peerid] = {}
+        d.addCallback(_delete_all_shares)
+        d.addCallback(lambda ign: self._fn.check(Monitor()))
+        d.addCallback(lambda check_results: self._fn.repair(check_results))
+        d.addCallback(lambda crr: self.failIf(crr.get_successful()))
+        return d
+
+
     def test_unrepairable_1share(self):
         d = self.publish_one()
         def _delete_all_shares(ign):
hunk ./src/allmydata/test/test_mutable.py 1766
         d.addCallback(_check)
         return d
 
+    def test_mdmf_unrepairable_1share(self):
+        d = self.publish_mdmf()
+        def _delete_all_shares(ign):
+            shares = self._storage._peers
+            for peerid in shares:
+                for shnum in list(shares[peerid]):
+                    if shnum > 0:
+                        del shares[peerid][shnum]
+        d.addCallback(_delete_all_shares)
+        d.addCallback(lambda ign: self._fn.check(Monitor()))
+        d.addCallback(lambda check_results: self._fn.repair(check_results))
+        def _check(crr):
+            self.failUnlessEqual(crr.get_successful(), False)
+        d.addCallback(_check)
+        return d
+
+    def test_repairable_5shares(self):
+        d = self.publish_one()
+        def _delete_some_shares(ign):
+            shares = self._storage._peers
+            for peerid in shares:
+                for shnum in list(shares[peerid]):
+                    if shnum > 4:
+                        del shares[peerid][shnum]
+        d.addCallback(_delete_some_shares)
+        d.addCallback(lambda ign: self._fn.check(Monitor()))
+        d.addCallback(lambda check_results: self._fn.repair(check_results))
+        def _check(crr):
+            self.failUnlessEqual(crr.get_successful(), True)
+        d.addCallback(_check)
+        return d
+
+    def test_mdmf_repairable_5shares(self):
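+        # An MDMF file that has lost some shares but is still
+        # recoverable should be repairable back to full health.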
+        d = self.publish_mdmf()
+        def _delete_some_shares(ign):
+            shares = self._storage._peers
+            for peerid in shares:
+                for shnum in list(shares[peerid]):
+                    if shnum > 5:
+                        del shares[peerid][shnum]
+        d.addCallback(_delete_some_shares)
+        d.addCallback(lambda ign: self._fn.check(Monitor()))
+        def _check(cr):
+            self.failIf(cr.is_healthy())
+            self.failUnless(cr.is_recoverable())
+            return cr
+        d.addCallback(_check)
+        d.addCallback(lambda check_results: self._fn.repair(check_results))
+        def _check1(crr):
+            self.failUnlessEqual(crr.get_successful(), True)
+        d.addCallback(_check1)
+        return d
+
+
     def test_merge(self):
         self.old_shares = []
         d = self.publish_multiple()
hunk ./src/allmydata/test/test_mutable.py 1934
 class MultipleEncodings(unittest.TestCase):
     def setUp(self):
         self.CONTENTS = "New contents go here"
+        self.uploadable = MutableData(self.CONTENTS)
         self._storage = FakeStorage()
         self._nodemaker = make_nodemaker(self._storage, num_peers=20)
         self._storage_broker = self._nodemaker.storage_broker
hunk ./src/allmydata/test/test_mutable.py 1938
-        d = self._nodemaker.create_mutable_file(self.CONTENTS)
+        d = self._nodemaker.create_mutable_file(self.uploadable)
         def _created(node):
             self._fn = node
         d.addCallback(_created)
hunk ./src/allmydata/test/test_mutable.py 1944
         return d
 
-    def _encode(self, k, n, data):
+    def _encode(self, k, n, data, version=SDMF_VERSION):
         # encode 'data' into a peerid->shares dict.
 
         fn = self._fn
hunk ./src/allmydata/test/test_mutable.py 1960
         # and set the encoding parameters to something completely different
         fn2._required_shares = k
         fn2._total_shares = n
+        # Normally a servermap update would occur before a publish.
+        # Here, it doesn't, so we have to do it ourselves.
+        fn2.set_version(version)
 
         s = self._storage
         s._peers = {} # clear existing storage
hunk ./src/allmydata/test/test_mutable.py 1967
         p2 = Publish(fn2, self._storage_broker, None)
-        d = p2.publish(data)
+        uploadable = MutableData(data)
+        d = p2.publish(uploadable)
         def _published(res):
             shares = s._peers
             s._peers = {}
hunk ./src/allmydata/test/test_mutable.py 2235
         self.basedir = "mutable/Problems/test_publish_surprise"
         self.set_up_grid()
         nm = self.g.clients[0].nodemaker
-        d = nm.create_mutable_file("contents 1")
+        d = nm.create_mutable_file(MutableData("contents 1"))
         def _created(n):
             d = defer.succeed(None)
             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
hunk ./src/allmydata/test/test_mutable.py 2245
             d.addCallback(_got_smap1)
             # then modify the file, leaving the old map untouched
             d.addCallback(lambda res: log.msg("starting winning write"))
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
             # now attempt to modify the file with the old servermap. This
             # will look just like an uncoordinated write, in which every
             # single share got updated between our mapupdate and our publish
hunk ./src/allmydata/test/test_mutable.py 2254
                           self.shouldFail(UncoordinatedWriteError,
                                           "test_publish_surprise", None,
                                           n.upload,
-                                          "contents 2a", self.old_map))
+                                          MutableData("contents 2a"), self.old_map))
             return d
         d.addCallback(_created)
         return d
hunk ./src/allmydata/test/test_mutable.py 2263
         self.basedir = "mutable/Problems/test_retrieve_surprise"
         self.set_up_grid()
         nm = self.g.clients[0].nodemaker
-        d = nm.create_mutable_file("contents 1")
+        d = nm.create_mutable_file(MutableData("contents 1"))
         def _created(n):
             d = defer.succeed(None)
             d.addCallback(lambda res: n.get_servermap(MODE_READ))
hunk ./src/allmydata/test/test_mutable.py 2273
             d.addCallback(_got_smap1)
             # then modify the file, leaving the old map untouched
             d.addCallback(lambda res: log.msg("starting winning write"))
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
             # now attempt to retrieve the old version with the old servermap.
             # This will look like someone has changed the file since we
             # updated the servermap.
hunk ./src/allmydata/test/test_mutable.py 2282
             d.addCallback(lambda res:
                           self.shouldFail(NotEnoughSharesError,
                                           "test_retrieve_surprise",
-                                          "ran out of peers: have 0 shares (k=3)",
+                                          "ran out of peers: have 0 of 1",
                                           n.download_version,
                                           self.old_map,
                                           self.old_map.best_recoverable_version(),
hunk ./src/allmydata/test/test_mutable.py 2291
         d.addCallback(_created)
         return d
 
+
     def test_unexpected_shares(self):
         # upload the file, take a servermap, shut down one of the servers,
         # upload it again (causing shares to appear on a new server), then
hunk ./src/allmydata/test/test_mutable.py 2301
         self.basedir = "mutable/Problems/test_unexpected_shares"
         self.set_up_grid()
         nm = self.g.clients[0].nodemaker
-        d = nm.create_mutable_file("contents 1")
+        d = nm.create_mutable_file(MutableData("contents 1"))
         def _created(n):
             d = defer.succeed(None)
             d.addCallback(lambda res: n.get_servermap(MODE_WRITE))
hunk ./src/allmydata/test/test_mutable.py 2313
                 self.g.remove_server(peer0)
                 # then modify the file, leaving the old map untouched
                 log.msg("starting winning write")
-                return n.overwrite("contents 2")
+                return n.overwrite(MutableData("contents 2"))
             d.addCallback(_got_smap1)
             # now attempt to modify the file with the old servermap. This
             # will look just like an uncoordinated write, in which every
hunk ./src/allmydata/test/test_mutable.py 2323
                           self.shouldFail(UncoordinatedWriteError,
                                           "test_surprise", None,
                                           n.upload,
-                                          "contents 2a", self.old_map))
+                                          MutableData("contents 2a"), self.old_map))
             return d
         d.addCallback(_created)
         return d
hunk ./src/allmydata/test/test_mutable.py 2327
+    test_unexpected_shares.timeout = 15
 
     def test_bad_server(self):
         # Break one server, then create the file: the initial publish should
hunk ./src/allmydata/test/test_mutable.py 2361
         d.addCallback(_break_peer0)
         # now "create" the file, using the pre-established key, and let the
         # initial publish finally happen
-        d.addCallback(lambda res: nm.create_mutable_file("contents 1"))
+        d.addCallback(lambda res: nm.create_mutable_file(MutableData("contents 1")))
         # that ought to work
         def _got_node(n):
             d = n.download_best_version()
hunk ./src/allmydata/test/test_mutable.py 2370
             def _break_peer1(res):
                 self.g.break_server(self.server1.get_serverid())
             d.addCallback(_break_peer1)
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
             # that ought to work too
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
hunk ./src/allmydata/test/test_mutable.py 2402
         peerids = [s.get_serverid() for s in sb.get_connected_servers()]
         self.g.break_server(peerids[0])
 
-        d = nm.create_mutable_file("contents 1")
+        d = nm.create_mutable_file(MutableData("contents 1"))
         def _created(n):
             d = n.download_best_version()
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1"))
hunk ./src/allmydata/test/test_mutable.py 2410
             def _break_second_server(res):
                 self.g.break_server(peerids[1])
             d.addCallback(_break_second_server)
-            d.addCallback(lambda res: n.overwrite("contents 2"))
+            d.addCallback(lambda res: n.overwrite(MutableData("contents 2")))
             # that ought to work too
             d.addCallback(lambda res: n.download_best_version())
             d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2"))
hunk ./src/allmydata/test/test_mutable.py 2429
         d = self.shouldFail(NotEnoughServersError,
                             "test_publish_all_servers_bad",
                             "Ran out of non-bad servers",
-                            nm.create_mutable_file, "contents")
+                            nm.create_mutable_file, MutableData("contents"))
         return d
 
     def test_publish_no_servers(self):
hunk ./src/allmydata/test/test_mutable.py 2441
         d = self.shouldFail(NotEnoughServersError,
                             "test_publish_no_servers",
                             "Ran out of non-bad servers",
-                            nm.create_mutable_file, "contents")
+                            nm.create_mutable_file, MutableData("contents"))
         return d
     test_publish_no_servers.timeout = 30
 
hunk ./src/allmydata/test/test_mutable.py 2459
         # we need some contents that are large enough to push the privkey out
         # of the early part of the file
         LARGE = "These are Larger contents" * 2000 # about 50KB
-        d = nm.create_mutable_file(LARGE)
+        LARGE_uploadable = MutableData(LARGE)
+        d = nm.create_mutable_file(LARGE_uploadable)
         def _created(n):
             self.uri = n.get_uri()
             self.n2 = nm.create_from_cap(self.uri)
hunk ./src/allmydata/test/test_mutable.py 2495
         self.basedir = "mutable/Problems/test_privkey_query_missing"
         self.set_up_grid(num_servers=20)
         nm = self.g.clients[0].nodemaker
-        LARGE = "These are Larger contents" * 2000 # about 50KB
+        LARGE = "These are Larger contents" * 2000 # about 50KiB
+        LARGE_uploadable = MutableData(LARGE)
         nm._node_cache = DevNullDictionary() # disable the nodecache
 
hunk ./src/allmydata/test/test_mutable.py 2499
-        d = nm.create_mutable_file(LARGE)
+        d = nm.create_mutable_file(LARGE_uploadable)
         def _created(n):
             self.uri = n.get_uri()
             self.n2 = nm.create_from_cap(self.uri)
hunk ./src/allmydata/test/test_mutable.py 2509
         d.addCallback(_created)
         d.addCallback(lambda res: self.n2.get_servermap(MODE_WRITE))
         return d
+
+
+    def test_block_and_hash_query_error(self):
+        # This tests for what happens when a query to a remote server
+        # fails in either the hash validation step or the block getting
+        # step (because of batching, this is the same actual query).
+        # We need the storage server to keep answering up until the
+        # point that its prefix is validated, then suddenly die. This
+        # exercises some exception handling code in Retrieve.
+        self.basedir = "mutable/Problems/test_block_and_hash_query_error"
+        self.set_up_grid(num_servers=20)
+        nm = self.g.clients[0].nodemaker
+        CONTENTS = "contents" * 2000
+        CONTENTS_uploadable = MutableData(CONTENTS)
+        d = nm.create_mutable_file(CONTENTS_uploadable)
+        def _created(node):
+            self._node = node
+        d.addCallback(_created)
+        d.addCallback(lambda ignored:
+            self._node.get_servermap(MODE_READ))
+        def _then(servermap):
+            # we have our servermap. Now we set up the servers like the
+            # tests above -- the first one that gets a read call should
+            # start throwing errors, but only after returning its prefix
+            # for validation. Since we'll download without fetching the
+            # private key, the next query to the remote server will be
+            # for either a block and salt or for hashes, either of which
+            # will exercise the error handling code.
+            killer = FirstServerGetsKilled()
+            for (serverid, ss) in nm.storage_broker.get_all_servers():
+                ss.post_call_notifier = killer.notify
+            ver = servermap.best_recoverable_version()
+            assert ver
+            return self._node.download_version(servermap, ver)
+        d.addCallback(_then)
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, CONTENTS))
+        return d
+
+
+class FileHandle(unittest.TestCase):
+    def setUp(self):
+        self.test_data = "Test Data" * 50000
+        self.sio = StringIO(self.test_data)
+        self.uploadable = MutableFileHandle(self.sio)
+
+
+    def test_filehandle_read(self):
+        self.basedir = "mutable/FileHandle/test_filehandle_read"
+        chunk_size = 10
+        for i in xrange(0, len(self.test_data), chunk_size):
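+            # read() returns its data as a list of strings, so join the
+            # pieces before comparing.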
+            data = self.uploadable.read(chunk_size)
+            data = "".join(data)
+            start = i
+            end = i + chunk_size
+            self.failUnlessEqual(data, self.test_data[start:end])
+
+
+    def test_filehandle_get_size(self):
+        self.basedir = "mutable/FileHandle/test_filehandle_get_size"
+        actual_size = len(self.test_data)
+        size = self.uploadable.get_size()
+        self.failUnlessEqual(size, actual_size)
+
+
+    def test_filehandle_get_size_out_of_order(self):
+        # We should be able to call get_size whenever we want without
+        # disturbing the location of the seek pointer.
+        chunk_size = 100
+        data = self.uploadable.read(chunk_size)
+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
+
+        # Now get the size.
+        size = self.uploadable.get_size()
+        self.failUnlessEqual(size, len(self.test_data))
+
+        # Now get more data. We should be right where we left off.
+        more_data = self.uploadable.read(chunk_size)
+        start = chunk_size
+        end = chunk_size * 2
+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
+
+
+    def test_filehandle_file(self):
+        # Make sure that the MutableFileHandle works on a file as well
+        # as a StringIO object, since in some cases it will be asked to
+        # deal with files.
+        self.basedir = self.mktemp()
+        # self.mktemp() only returns a path; it does not create the
+        # directory, so we create it ourselves.
+        os.mkdir(self.basedir)
+        f_path = os.path.join(self.basedir, "test_file")
+        f = open(f_path, "w")
+        f.write(self.test_data)
+        f.close()
+        f = open(f_path, "r")
+
+        uploadable = MutableFileHandle(f)
+
+        data = uploadable.read(len(self.test_data))
+        self.failUnlessEqual("".join(data), self.test_data)
+        size = uploadable.get_size()
+        self.failUnlessEqual(size, len(self.test_data))
+
+
+    def test_close(self):
+        # Make sure that the MutableFileHandle closes its handle when
+        # told to do so.
+        self.uploadable.close()
+        self.failUnless(self.sio.closed)
+
+
+class DataHandle(unittest.TestCase):
+    def setUp(self):
+        self.test_data = "Test Data" * 50000
+        self.uploadable = MutableData(self.test_data)
+
+
+    def test_datahandle_read(self):
+        chunk_size = 10
+        for i in xrange(0, len(self.test_data), chunk_size):
+            data = self.uploadable.read(chunk_size)
+            data = "".join(data)
+            start = i
+            end = i + chunk_size
+            self.failUnlessEqual(data, self.test_data[start:end])
+
+
+    def test_datahandle_get_size(self):
+        actual_size = len(self.test_data)
+        size = self.uploadable.get_size()
+        self.failUnlessEqual(size, actual_size)
+
+
+    def test_datahandle_get_size_out_of_order(self):
+        # We should be able to call get_size whenever we want without
+        # disturbing the location of the seek pointer.
+        chunk_size = 100
+        data = self.uploadable.read(chunk_size)
+        self.failUnlessEqual("".join(data), self.test_data[:chunk_size])
+
+        # Now get the size.
+        size = self.uploadable.get_size()
+        self.failUnlessEqual(size, len(self.test_data))
+
+        # Now get more data. We should be right where we left off.
+        more_data = self.uploadable.read(chunk_size)
+        start = chunk_size
+        end = chunk_size * 2
+        self.failUnlessEqual("".join(more_data), self.test_data[start:end])
+
+
+class Version(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin, \
+              PublishMixin):
+    def setUp(self):
+        GridTestMixin.setUp(self)
+        self.basedir = self.mktemp()
+        self.set_up_grid()
+        self.c = self.g.clients[0]
+        self.nm = self.c.nodemaker
+        self.data = "test data" * 100000 # about 900 KiB; MDMF
+        self.small_data = "test data" * 10 # about 90 B; SDMF
+        return self.do_upload()
+
+
+    def do_upload(self):
+        d1 = self.nm.create_mutable_file(MutableData(self.data),
+                                         version=MDMF_VERSION)
+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
+        dl = gatherResults([d1, d2])
+        def _then((n1, n2)):
+            assert isinstance(n1, MutableFileNode)
+            assert isinstance(n2, MutableFileNode)
+
+            self.mdmf_node = n1
+            self.sdmf_node = n2
+        dl.addCallback(_then)
+        return dl
+
+
+    def test_get_readonly_mutable_version(self):
+        # Attempting to get a mutable version of a mutable file from a
+        # filenode initialized with a readcap should return a readonly
+        # version of that same node.
+        ro = self.mdmf_node.get_readonly()
+        d = ro.get_best_mutable_version()
+        d.addCallback(lambda version:
+            self.failUnless(version.is_readonly()))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_readonly())
+        d.addCallback(lambda version:
+            self.failUnless(version.is_readonly()))
+        return d
+
+
+    def test_get_sequence_number(self):
+        d = self.mdmf_node.get_best_readable_version()
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_sequence_number(), 1))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_best_readable_version())
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_sequence_number(), 1))
+        # Now update. Afterward, the sequence number should be 2 in
+        # both cases.
+        def _do_update(ignored):
+            new_data = MutableData("foo bar baz" * 100000)
+            new_small_data = MutableData("foo bar baz" * 10)
+            d1 = self.mdmf_node.overwrite(new_data)
+            d2 = self.sdmf_node.overwrite(new_small_data)
+            dl = gatherResults([d1, d2])
+            return dl
+        d.addCallback(_do_update)
+        d.addCallback(lambda ignored:
+            self.mdmf_node.get_best_readable_version())
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_sequence_number(), 2))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_best_readable_version())
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_sequence_number(), 2))
+        return d
+
+
+    def test_get_writekey(self):
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_writekey(),
+                                 self.mdmf_node.get_writekey()))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_best_mutable_version())
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_writekey(),
+                                 self.sdmf_node.get_writekey()))
+        return d
+
+
+    def test_get_storage_index(self):
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_storage_index(),
+                                 self.mdmf_node.get_storage_index()))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_best_mutable_version())
+        d.addCallback(lambda bv:
+            self.failUnlessEqual(bv.get_storage_index(),
+                                 self.sdmf_node.get_storage_index()))
+        return d
+
+
+    def test_get_readonly_version(self):
+        d = self.mdmf_node.get_best_readable_version()
+        d.addCallback(lambda bv:
+            self.failUnless(bv.is_readonly()))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_best_readable_version())
+        d.addCallback(lambda bv:
+            self.failUnless(bv.is_readonly()))
+        return d
+
+
+    def test_get_mutable_version(self):
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda bv:
+            self.failIf(bv.is_readonly()))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.get_best_mutable_version())
+        d.addCallback(lambda bv:
+            self.failIf(bv.is_readonly()))
+        return d
+
+
+    def test_toplevel_overwrite(self):
+        new_data = MutableData("foo bar baz" * 100000)
+        new_small_data = MutableData("foo bar baz" * 10)
+        d = self.mdmf_node.overwrite(new_data)
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, "foo bar baz" * 100000))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.overwrite(new_small_data))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, "foo bar baz" * 10))
+        return d
+
+
+    def test_toplevel_modify(self):
+        def modifier(old_contents, servermap, first_time):
+            return old_contents + "modified"
+        d = self.mdmf_node.modify(modifier)
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessIn("modified", data))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.modify(modifier))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessIn("modified", data))
+        return d
+
+
+    def test_version_modify(self):
+        # TODO: When we can publish multiple versions, alter this test
+        # to modify a version other than the best usable version, then
+        # check that the best recoverable version is the one we modified.
+        def modifier(old_contents, servermap, first_time):
+            return old_contents + "modified"
+        d = self.mdmf_node.modify(modifier)
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessIn("modified", data))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.modify(modifier))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessIn("modified", data))
+        return d
+
+
+    def test_download_version(self):
+        d = self.publish_multiple()
+        # We want to have two recoverable versions on the grid.
+        d.addCallback(lambda res:
+                      self._set_versions({0:0,2:0,4:0,6:0,8:0,
+                                          1:1,3:1,5:1,7:1,9:1}))
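+        # (The map assigns a version index to each share number: even
+        # shares hold version 0, odd shares hold version 1.)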
+        # Now try to download each version. We should get the plaintext
+        # associated with that version.
+        d.addCallback(lambda ignored:
+            self._fn.get_servermap(mode=MODE_READ))
+        def _got_servermap(smap):
+            versions = smap.recoverable_versions()
+            assert len(versions) == 2
+
+            self.servermap = smap
+            self.version1, self.version2 = versions
+            assert self.version1 != self.version2
+
+            self.version1_seqnum = self.version1[0]
+            self.version2_seqnum = self.version2[0]
+            self.version1_index = self.version1_seqnum - 1
+            self.version2_index = self.version2_seqnum - 1
+
+        d.addCallback(_got_servermap)
+        d.addCallback(lambda ignored:
+            self._fn.download_version(self.servermap, self.version1))
+        d.addCallback(lambda results:
+            self.failUnlessEqual(self.CONTENTS[self.version1_index],
+                                 results))
+        d.addCallback(lambda ignored:
+            self._fn.download_version(self.servermap, self.version2))
+        d.addCallback(lambda results:
+            self.failUnlessEqual(self.CONTENTS[self.version2_index],
+                                 results))
+        return d
+
+
+    def test_download_nonexistent_version(self):
+        d = self.mdmf_node.get_servermap(mode=MODE_WRITE)
+        def _set_servermap(servermap):
+            self.servermap = servermap
+        d.addCallback(_set_servermap)
+        d.addCallback(lambda ignored:
+           self.shouldFail(UnrecoverableFileError, "nonexistent version",
+                           None,
+                           self.mdmf_node.download_version, self.servermap,
+                           "not a version"))
+        return d
+
+
+    def test_partial_read(self):
+        # read only a few bytes at a time, and see that the results are
+        # what we expect.
+        d = self.mdmf_node.get_best_readable_version()
+        def _read_data(version):
+            c = consumer.MemoryConsumer()
+            d2 = defer.succeed(None)
+            for i in xrange(0, len(self.data), 10000):
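+                # Bind i at definition time with a default argument; a
+                # plain closure would see only the final value of i.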
+                d2.addCallback(lambda ignored, i=i: version.read(c, i, 10000))
+            d2.addCallback(lambda ignored:
+                self.failUnlessEqual(self.data, "".join(c.chunks)))
+            return d2
+        d.addCallback(_read_data)
+        return d
+
+
+    def test_read(self):
+        d = self.mdmf_node.get_best_readable_version()
+        def _read_data(version):
+            c = consumer.MemoryConsumer()
+            d2 = defer.succeed(None)
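+            # With no offset or size arguments, read() streams the
+            # entire file into the consumer.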
+            d2.addCallback(lambda ignored: version.read(c))
+            d2.addCallback(lambda ignored:
+                self.failUnlessEqual("".join(c.chunks), self.data))
+            return d2
+        d.addCallback(_read_data)
+        return d
+
+
+    def test_download_best_version(self):
+        d = self.mdmf_node.download_best_version()
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, self.data))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.download_best_version())
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, self.small_data))
+        return d
+
+
+class Update(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
+    def setUp(self):
+        GridTestMixin.setUp(self)
+        self.basedir = self.mktemp()
+        self.set_up_grid()
+        self.c = self.g.clients[0]
+        self.nm = self.c.nodemaker
+        self.data = "test data" * 100000 # about 900 KiB; MDMF
+        self.small_data = "test data" * 10 # about 90 B; SDMF
+        return self.do_upload()
+
+
+    def do_upload(self):
+        d1 = self.nm.create_mutable_file(MutableData(self.data),
+                                         version=MDMF_VERSION)
+        d2 = self.nm.create_mutable_file(MutableData(self.small_data))
+        dl = gatherResults([d1, d2])
+        def _then((n1, n2)):
+            assert isinstance(n1, MutableFileNode)
+            assert isinstance(n2, MutableFileNode)
+
+            self.mdmf_node = n1
+            self.sdmf_node = n2
+        dl.addCallback(_then)
+        return dl
+
+
+    def test_append(self):
+        # We should be able to append data to the end of a mutable
+        # file and get what we expect.
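+        # update(data, offset) splices data into the file at the given
+        # offset; using the old length as the offset makes this an
+        # append.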
+        new_data = self.data + "appended"
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData("appended"), len(self.data)))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
+    test_append.timeout = 15
+
+
+    def test_replace(self):
+        # We should be able to replace data in the middle of a mutable
+        # file and get what we expect back.
+        new_data = self.data[:100]
+        new_data += "appended"
+        new_data += self.data[108:]
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData("appended"), 100))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
+
+
+    def test_replace_and_extend(self):
+        # We should be able to replace data in the middle of a mutable
+        # file and extend that mutable file and get what we expect.
+        new_data = self.data[:100]
+        new_data += "modified " * 100000
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData("modified " * 100000), 100))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
+
+
+    def test_append_power_of_two(self):
+        # If we attempt to extend a mutable file so that its segment
+        # count crosses a power-of-two boundary, the update operation
+        # should know how to reencode the file.
+
+        # Note that the data populating self.mdmf_node is about 900 KiB
+        # long -- that is 7 segments at the default segment size. So we
+        # need to add 2 segments' worth of data to push it over a
+        # power-of-two boundary.
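+        # (Assuming the default 128 KiB segment size, 900000 bytes is 7
+        # segments; two more bring the total to 9, crossing the
+        # 8 == 2**3 boundary.)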
+        segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
+        new_data = self.data + (segment * 2)
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData(segment * 2), len(self.data)))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
+    test_append_power_of_two.timeout = 15
+
+
+    def test_update_sdmf(self):
+        # Running update on a single-segment file should still work.
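+        # (SDMF files are limited to a single segment.)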
+        new_data = self.small_data + "appended"
+        d = self.sdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData("appended"), len(self.small_data)))
+        d.addCallback(lambda ignored:
+            self.sdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
+
+    def test_replace_in_last_segment(self):
+        # The wrapper should know how to handle the tail segment
+        # appropriately.
+        replace_offset = len(self.data) - 100
+        new_data = self.data[:replace_offset] + "replaced"
+        rest_offset = replace_offset + len("replaced")
+        new_data += self.data[rest_offset:]
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData("replaced"), replace_offset))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
+
+
+    def test_multiple_segment_replace(self):
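+        # Replace everything from the start of the third segment with
+        # two fresh segments plus "replaced", leaving the tail of the
+        # original data intact.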
+        replace_offset = 2 * DEFAULT_MAX_SEGMENT_SIZE
+        new_data = self.data[:replace_offset]
+        new_segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
+        new_data += 2 * new_segment
+        new_data += "replaced"
+        rest_offset = len(new_data)
+        new_data += self.data[rest_offset:]
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData((2 * new_segment) + "replaced"),
+                      replace_offset))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, new_data))
+        return d
hunk ./src/allmydata/test/test_sftp.py 32
 
 from allmydata.util.consumer import download_to_data
 from allmydata.immutable import upload
+from allmydata.mutable import publish
 from allmydata.test.no_network import GridTestMixin
 from allmydata.test.common import ShouldFailMixin
 from allmydata.test.common_util import ReallyEqualMixin
hunk ./src/allmydata/test/test_sftp.py 80
         return d
 
     def _set_up_tree(self):
-        d = self.client.create_mutable_file("mutable file contents")
+        u = publish.MutableData("mutable file contents")
+        d = self.client.create_mutable_file(u)
         d.addCallback(lambda node: self.root.set_node(u"mutable", node))
         def _created_mutable(n):
             self.mutable = n
hunk ./src/allmydata/test/test_sftp.py 1330
         d.addCallback(lambda ign: self.failUnlessEqual(sftpd.all_heisenfiles, {}))
         d.addCallback(lambda ign: self.failUnlessEqual(self.handler._heisenfiles, {}))
         return d
+    test_makeDirectory.timeout = 15
 
     def test_execCommand_and_openShell(self):
         class MockProtocol:
hunk ./src/allmydata/test/test_storage.py 27
                                      LayoutInvalid, MDMFSIGNABLEHEADER, \
                                      SIGNED_PREFIX, MDMFHEADER, \
                                      MDMFOFFSETS, SDMFSlotWriteProxy
-from allmydata.interfaces import BadWriteEnablerError, MDMF_VERSION, \
-                                 SDMF_VERSION
+from allmydata.interfaces import BadWriteEnablerError
 from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
 from allmydata.test.common_web import WebRenderingMixin
 from allmydata.web.storage import StorageStatus, remove_prefix
hunk ./src/allmydata/test/test_system.py 26
 from allmydata.monitor import Monitor
 from allmydata.mutable.common import NotWriteableError
 from allmydata.mutable import layout as mutable_layout
+from allmydata.mutable.publish import MutableData
 from foolscap.api import DeadReferenceError
 from twisted.python.failure import Failure
 from twisted.web.client import getPage
hunk ./src/allmydata/test/test_system.py 467
     def test_mutable(self):
         self.basedir = "system/SystemTest/test_mutable"
         DATA = "initial contents go here."  # 25 bytes % 3 != 0
+        DATA_uploadable = MutableData(DATA)
         NEWDATA = "new contents yay"
hunk ./src/allmydata/test/test_system.py 469
+        NEWDATA_uploadable = MutableData(NEWDATA)
         NEWERDATA = "this is getting old"
hunk ./src/allmydata/test/test_system.py 471
+        NEWERDATA_uploadable = MutableData(NEWERDATA)
 
         d = self.set_up_nodes(use_key_generator=True)
 
hunk ./src/allmydata/test/test_system.py 478
         def _create_mutable(res):
             c = self.clients[0]
             log.msg("starting create_mutable_file")
-            d1 = c.create_mutable_file(DATA)
+            d1 = c.create_mutable_file(DATA_uploadable)
             def _done(res):
                 log.msg("DONE: %s" % (res,))
                 self._mutable_node_1 = res
hunk ./src/allmydata/test/test_system.py 565
             self.failUnlessEqual(res, DATA)
             # replace the data
             log.msg("starting replace1")
-            d1 = newnode.overwrite(NEWDATA)
+            d1 = newnode.overwrite(NEWDATA_uploadable)
             d1.addCallback(lambda res: newnode.download_best_version())
             return d1
         d.addCallback(_check_download_3)
hunk ./src/allmydata/test/test_system.py 579
             newnode2 = self.clients[3].create_node_from_uri(uri)
             self._newnode3 = self.clients[3].create_node_from_uri(uri)
             log.msg("starting replace2")
-            d1 = newnode1.overwrite(NEWERDATA)
+            d1 = newnode1.overwrite(NEWERDATA_uploadable)
             d1.addCallback(lambda res: newnode2.download_best_version())
             return d1
         d.addCallback(_check_download_4)
hunk ./src/allmydata/test/test_system.py 649
         def _check_empty_file(res):
             # make sure we can create empty files, this usually screws up the
             # segsize math
-            d1 = self.clients[2].create_mutable_file("")
+            d1 = self.clients[2].create_mutable_file(MutableData(""))
             d1.addCallback(lambda newnode: newnode.download_best_version())
             d1.addCallback(lambda res: self.failUnlessEqual("", res))
             return d1
hunk ./src/allmydata/test/test_system.py 680
                                  self.key_generator_svc.key_generator.pool_size + size_delta)
 
         d.addCallback(check_kg_poolsize, 0)
-        d.addCallback(lambda junk: self.clients[3].create_mutable_file('hello, world'))
+        d.addCallback(lambda junk:
+            self.clients[3].create_mutable_file(MutableData('hello, world')))
         d.addCallback(check_kg_poolsize, -1)
         d.addCallback(lambda junk: self.clients[3].create_dirnode())
         d.addCallback(check_kg_poolsize, -2)
hunk ./src/allmydata/test/test_web.py 28
 from allmydata.util.encodingutil import to_str
 from allmydata.test.common import FakeCHKFileNode, FakeMutableFileNode, \
      create_chk_filenode, WebErrorMixin, ShouldFailMixin, make_mutable_file_uri
-from allmydata.interfaces import IMutableFileNode
+from allmydata.interfaces import IMutableFileNode, SDMF_VERSION, MDMF_VERSION
 from allmydata.mutable import servermap, publish, retrieve
 import allmydata.test.common_util as testutil
 from allmydata.test.no_network import GridTestMixin
hunk ./src/allmydata/test/test_web.py 57
         return FakeCHKFileNode(cap)
     def _create_mutable(self, cap):
         return FakeMutableFileNode(None, None, None, None).init_from_cap(cap)
-    def create_mutable_file(self, contents="", keysize=None):
+    def create_mutable_file(self, contents="", keysize=None,
+                            version=SDMF_VERSION):
         n = FakeMutableFileNode(None, None, None, None)
hunk ./src/allmydata/test/test_web.py 60
+        n.set_version(version)
         return n.create(contents)
 
 class FakeUploader(service.Service):
hunk ./src/allmydata/test/test_web.py 162
         self.nodemaker = FakeNodeMaker(None, self._secret_holder, None,
                                        self.uploader, None,
                                        None, None)
+        self.mutable_file_default = SDMF_VERSION
 
     def startService(self):
         return service.MultiService.startService(self)
hunk ./src/allmydata/test/test_web.py 781
                              self.PUT, base + "/@@name=/blah.txt", "")
         return d
 
+
     def test_GET_DIRURL_named_bad(self):
         base = "/file/%s" % urllib.quote(self._foo_uri)
         d = self.shouldFail2(error.Error, "test_PUT_DIRURL_named_bad",
hunk ./src/allmydata/test/test_web.py 897
                                                       self.NEWFILE_CONTENTS))
         return d
 
+    def test_PUT_NEWFILEURL_unlinked_mdmf(self):
+        # this should get us a few segments of an MDMF mutable file,
+        # which we can then test for.
+        contents = self.NEWFILE_CONTENTS * 300000
+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
+                     contents)
+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
+        d.addCallback(lambda json: self.failUnlessIn("mdmf", json))
+        return d
+
+    def test_PUT_NEWFILEURL_unlinked_sdmf(self):
+        contents = self.NEWFILE_CONTENTS * 300000
+        d = self.PUT("/uri?mutable=true&mutable-type=sdmf",
+                     contents)
+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
+        d.addCallback(lambda json: self.failUnlessIn("sdmf", json))
+        return d
+
     def test_PUT_NEWFILEURL_range_bad(self):
         headers = {"content-range": "bytes 1-10/%d" % len(self.NEWFILE_CONTENTS)}
         target = self.public_url + "/foo/new.txt"
hunk ./src/allmydata/test/test_web.py 947
         return d
 
     def test_PUT_NEWFILEURL_mutable_toobig(self):
-        d = self.shouldFail2(error.Error, "test_PUT_NEWFILEURL_mutable_toobig",
-                             "413 Request Entity Too Large",
-                             "SDMF is limited to one segment, and 10001 > 10000",
-                             self.PUT,
-                             self.public_url + "/foo/new.txt?mutable=true",
-                             "b" * (self.s.MUTABLE_SIZELIMIT+1))
+        # The SDMF size limit has been removed, so uploading a large
+        # mutable file should now succeed.
+        d = self.PUT(self.public_url + "/foo/new.txt?mutable=true",
+                     "b" * (self.s.MUTABLE_SIZELIMIT + 1))
         return d
 
     def test_PUT_NEWFILEURL_replace(self):
hunk ./src/allmydata/test/test_web.py 1045
         d.addCallback(_check1)
         return d
 
+    def test_GET_FILEURL_json_mutable_type(self):
+        # The JSON should include mutable-type, which says whether the
+        # file is SDMF or MDMF.
+        d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
+                     self.NEWFILE_CONTENTS * 300000)
+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
+        def _got_json(json, version):
+            data = simplejson.loads(json)
+            assert "filenode" == data[0]
+            data = data[1]
+            assert isinstance(data, dict)
+
+            self.failUnlessIn("mutable-type", data)
+            self.failUnlessEqual(data['mutable-type'], version)
+
+        d.addCallback(_got_json, "mdmf")
+        # Now make an SDMF file and check that it is reported correctly.
+        d.addCallback(lambda ignored:
+            self.PUT("/uri?mutable=true&mutable-type=sdmf",
+                      self.NEWFILE_CONTENTS * 300000))
+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
+        d.addCallback(_got_json, "sdmf")
+        return d
+
     def test_GET_FILEURL_json_missing(self):
         d = self.GET(self.public_url + "/foo/missing?json")
         d.addBoth(self.should404, "test_GET_FILEURL_json_missing")
hunk ./src/allmydata/test/test_web.py 1107
         d.addBoth(self.should404, "test_GET_FILEURL_uri_missing")
         return d
 
-    def test_GET_DIRECTORY_html_banner(self):
+    def test_GET_DIRECTORY_html(self):
         d = self.GET(self.public_url + "/foo", followRedirect=True)
         def _check(res):
             self.failUnlessIn('<div class="toolbar-item"><a href="../../..">Return to Welcome page</a></div>',res)
hunk ./src/allmydata/test/test_web.py 1111
+            self.failUnlessIn("mutable-type-mdmf", res)
+            self.failUnlessIn("mutable-type-sdmf", res)
         d.addCallback(_check)
         return d
 
hunk ./src/allmydata/test/test_web.py 1116
+    def test_GET_root_html(self):
+        # make sure that we have the option to upload an unlinked
+        # mutable file in SDMF and MDMF formats.
+        d = self.GET("/")
+        def _got_html(html):
+            # These are radio buttons that allow the user to toggle
+            # whether a particular mutable file is MDMF or SDMF.
+            self.failUnlessIn("mutable-type-mdmf", html)
+            self.failUnlessIn("mutable-type-sdmf", html)
+        d.addCallback(_got_html)
+        return d
+
+    def test_mutable_type_defaults(self):
+        # The checked="checked" attribute of the inputs corresponding to
+        # the mutable-type parameter should change as expected with the
+        # value configured in tahoe.cfg.
+        #
+        # By default, the value configured with the client is
+        # SDMF_VERSION, so that should be checked.
+        assert self.s.mutable_file_default == SDMF_VERSION
+
+        d = self.GET("/")
+        def _got_html(html, value):
+            i = 'input checked="checked" type="radio" id="mutable-type-%s"'
+            self.failUnlessIn(i % value, html)
+        d.addCallback(_got_html, "sdmf")
+        d.addCallback(lambda ignored:
+            self.GET(self.public_url + "/foo", followRedirect=True))
+        d.addCallback(_got_html, "sdmf")
+        # Now switch the configuration value to MDMF. The MDMF radio
+        # buttons should now be checked on these pages.
+        def _swap_values(ignored):
+            self.s.mutable_file_default = MDMF_VERSION
+        d.addCallback(_swap_values)
+        d.addCallback(lambda ignored: self.GET("/"))
+        d.addCallback(_got_html, "mdmf")
+        d.addCallback(lambda ignored:
+            self.GET(self.public_url + "/foo", followRedirect=True))
+        d.addCallback(_got_html, "mdmf")
+        return d
+
     def test_GET_DIRURL(self):
         # the addSlash means we get a redirect here
         # from /uri/$URI/foo/ , we need ../../../ to get back to the root
hunk ./src/allmydata/test/test_web.py 1246
         d.addCallback(self.failUnlessIsFooJSON)
         return d
 
+    def test_GET_DIRURL_json_mutable_type(self):
+        d = self.PUT(self.public_url + \
+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
+                     self.NEWFILE_CONTENTS * 300000)
+        d.addCallback(lambda ignored:
+            self.PUT(self.public_url + \
+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
+                     self.NEWFILE_CONTENTS * 300000))
+        # Now we have an MDMF and SDMF file in the directory. If we GET
+        # its JSON, we should see their encodings.
+        d.addCallback(lambda ignored:
+            self.GET(self.public_url + "/foo?t=json"))
+        def _got_json(json):
+            data = simplejson.loads(json)
+            assert data[0] == "dirnode"
+
+            data = data[1]
+            kids = data['children']
+
+            mdmf_data = kids['mdmf.txt'][1]
+            self.failUnlessIn("mutable-type", mdmf_data)
+            self.failUnlessEqual(mdmf_data['mutable-type'], "mdmf")
+
+            sdmf_data = kids['sdmf.txt'][1]
+            self.failUnlessIn("mutable-type", sdmf_data)
+            self.failUnlessEqual(sdmf_data['mutable-type'], "sdmf")
+        d.addCallback(_got_json)
+        return d
+
 
     def test_POST_DIRURL_manifest_no_ophandle(self):
         d = self.shouldFail2(error.Error,
hunk ./src/allmydata/test/test_web.py 1829
         return d
 
     def test_POST_upload_no_link_mutable_toobig(self):
-        d = self.shouldFail2(error.Error,
-                             "test_POST_upload_no_link_mutable_toobig",
-                             "413 Request Entity Too Large",
-                             "SDMF is limited to one segment, and 10001 > 10000",
-                             self.POST,
-                             "/uri", t="upload", mutable="true",
-                             file=("new.txt",
-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
+        # The SDMF size limit is no longer in place, so we should be
+        # able to upload mutable files that are as large as we want them
+        # to be.
+        d = self.POST("/uri", t="upload", mutable="true",
+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
         return d
 
hunk ./src/allmydata/test/test_web.py 1836
+
+    def test_POST_upload_mutable_type_unlinked(self):
+        d = self.POST("/uri?t=upload&mutable=true&mutable-type=sdmf",
+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
+        def _got_json(json, version):
+            data = simplejson.loads(json)
+            data = data[1]
+
+            self.failUnlessIn("mutable-type", data)
+            self.failUnlessEqual(data['mutable-type'], version)
+        d.addCallback(_got_json, "sdmf")
+        d.addCallback(lambda ignored:
+            self.POST("/uri?t=upload&mutable=true&mutable-type=mdmf",
+                      file=('mdmf.txt', self.NEWFILE_CONTENTS * 300000)))
+        d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
+        d.addCallback(_got_json, "mdmf")
+        return d
+
+    def test_POST_upload_mutable_type(self):
+        d = self.POST(self.public_url + \
+                      "/foo?t=upload&mutable=true&mutable-type=sdmf",
+                      file=("sdmf.txt", self.NEWFILE_CONTENTS * 300000))
+        fn = self._foo_node
+        def _got_cap(filecap, filename):
+            filenameu = unicode(filename)
+            self.failUnlessURIMatchesRWChild(filecap, fn, filenameu)
+            return self.GET(self.public_url + "/foo/%s?t=json" % filename)
+        d.addCallback(_got_cap, "sdmf.txt")
+        def _got_json(json, version):
+            data = simplejson.loads(json)
+            data = data[1]
+
+            self.failUnlessIn("mutable-type", data)
+            self.failUnlessEqual(data['mutable-type'], version)
+        d.addCallback(_got_json, "sdmf")
+        d.addCallback(lambda ignored:
+            self.POST(self.public_url + \
+                      "/foo?t=upload&mutable=true&mutable-type=mdmf",
+                      file=("mdmf.txt", self.NEWFILE_CONTENTS * 300000)))
+        d.addCallback(_got_cap, "mdmf.txt")
+        d.addCallback(_got_json, "mdmf")
+        return d
+
     def test_POST_upload_mutable(self):
         # this creates a mutable file
         d = self.POST(self.public_url + "/foo", t="upload", mutable="true",
hunk ./src/allmydata/test/test_web.py 2004
             self.failUnlessReallyEqual(headers["content-type"], ["text/plain"])
         d.addCallback(_got_headers)
 
-        # make sure that size errors are displayed correctly for overwrite
-        d.addCallback(lambda res:
-                      self.shouldFail2(error.Error,
-                                       "test_POST_upload_mutable-toobig",
-                                       "413 Request Entity Too Large",
-                                       "SDMF is limited to one segment, and 10001 > 10000",
-                                       self.POST,
-                                       self.public_url + "/foo", t="upload",
-                                       mutable="true",
-                                       file=("new.txt",
-                                             "b" * (self.s.MUTABLE_SIZELIMIT+1)),
-                                       ))
-
+        # make sure that outdated size limits aren't enforced anymore.
+        d.addCallback(lambda ignored:
+            self.POST(self.public_url + "/foo", t="upload",
+                      mutable="true",
+                      file=("new.txt",
+                            "b" * (self.s.MUTABLE_SIZELIMIT+1))))
         d.addErrback(self.dump_error)
         return d
 
hunk ./src/allmydata/test/test_web.py 2014
     def test_POST_upload_mutable_toobig(self):
-        d = self.shouldFail2(error.Error,
-                             "test_POST_upload_mutable_toobig",
-                             "413 Request Entity Too Large",
-                             "SDMF is limited to one segment, and 10001 > 10000",
-                             self.POST,
-                             self.public_url + "/foo",
-                             t="upload", mutable="true",
-                             file=("new.txt",
-                                   "b" * (self.s.MUTABLE_SIZELIMIT+1)) )
+        # SDMF had a size limit that was removed a while ago. MDMF has
+        # never had a size limit. Make sure that we do not encounter
+        # errors when trying to upload large mutable files, since
+        # nothing in the code should prohibit them anymore.
+        d = self.POST(self.public_url + "/foo",
+                      t="upload", mutable="true",
+                      file=("new.txt", "b" * (self.s.MUTABLE_SIZELIMIT + 1)))
         return d
 
     def dump_error(self, f):
hunk ./src/allmydata/test/test_web.py 3024
                                                       contents))
         return d
 
+    def test_PUT_NEWFILEURL_mdmf(self):
+        new_contents = self.NEWFILE_CONTENTS * 300000
+        d = self.PUT(self.public_url + \
+                     "/foo/mdmf.txt?mutable=true&mutable-type=mdmf",
+                     new_contents)
+        d.addCallback(lambda ignored:
+            self.GET(self.public_url + "/foo/mdmf.txt?t=json"))
+        def _got_json(json):
+            data = simplejson.loads(json)
+            data = data[1]
+            self.failUnlessIn("mutable-type", data)
+            self.failUnlessEqual(data['mutable-type'], "mdmf")
+        d.addCallback(_got_json)
+        return d
+
+    def test_PUT_NEWFILEURL_sdmf(self):
+        new_contents = self.NEWFILE_CONTENTS * 300000
+        d = self.PUT(self.public_url + \
+                     "/foo/sdmf.txt?mutable=true&mutable-type=sdmf",
+                     new_contents)
+        d.addCallback(lambda ignored:
+            self.GET(self.public_url + "/foo/sdmf.txt?t=json"))
+        def _got_json(json):
+            data = simplejson.loads(json)
+            data = data[1]
+            self.failUnlessIn("mutable-type", data)
+            self.failUnlessEqual(data['mutable-type'], "sdmf")
+        d.addCallback(_got_json)
+        return d
+
     def test_PUT_NEWFILEURL_uri_replace(self):
         contents, n, new_uri = self.makefile(8)
         d = self.PUT(self.public_url + "/foo/bar.txt?t=uri", new_uri)
hunk ./src/allmydata/test/test_web.py 3175
         d.addCallback(_done)
         return d
 
+
+    def test_PUT_update_at_offset(self):
+        file_contents = "test file" * 100000 # about 900 KiB
+        d = self.PUT("/uri?mutable=true", file_contents)
+        def _then(filecap):
+            self.filecap = filecap
+            new_data = file_contents[:100]
+            new = "replaced and so on"
+            new_data += new
+            new_data += file_contents[len(new_data):]
+            assert len(new_data) == len(file_contents)
+            self.new_data = new_data
+        d.addCallback(_then)
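+        # A PUT to the file's URI with an offset parameter modifies the
+        # file in place starting at that byte, rather than replacing
+        # the whole file.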
+        d.addCallback(lambda ignored:
+            self.PUT("/uri/%s?replace=True&offset=100" % self.filecap,
+                     "replaced and so on"))
+        def _get_data(filecap):
+            n = self.s.create_node_from_uri(filecap)
+            return n.download_best_version()
+        d.addCallback(_get_data)
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, self.new_data))
+        # Now try appending things to the file
+        d.addCallback(lambda ignored:
+            self.PUT("/uri/%s?offset=%d" % (self.filecap, len(self.new_data)),
+                     "puppies" * 100))
+        d.addCallback(_get_data)
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, self.new_data + ("puppies" * 100)))
+        return d
+
+
+    def test_PUT_update_at_offset_immutable(self):
+        file_contents = "Test file" * 100000
+        d = self.PUT("/uri", file_contents)
+        def _then(filecap):
+            self.filecap = filecap
+        d.addCallback(_then)
+        d.addCallback(lambda ignored:
+            self.shouldHTTPError("test immutable update",
+                                 400, "Bad Request",
+                                 "immutable",
+                                 self.PUT,
+                                 "/uri/%s?offset=50" % self.filecap,
+                                 "foo"))
+        return d
+
+
     def test_bad_method(self):
         url = self.webish_url + self.public_url + "/foo/bar.txt"
         d = self.shouldHTTPError("test_bad_method",
hunk ./src/allmydata/test/test_web.py 3492
         def _stash_mutable_uri(n, which):
             self.uris[which] = n.get_uri()
             assert isinstance(self.uris[which], str)
-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
         d.addCallback(_stash_mutable_uri, "corrupt")
         d.addCallback(lambda ign:
                       c0.upload(upload.Data("literal", convergence="")))
hunk ./src/allmydata/test/test_web.py 3639
         def _stash_mutable_uri(n, which):
             self.uris[which] = n.get_uri()
             assert isinstance(self.uris[which], str)
-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(publish.MutableData(DATA+"3")))
         d.addCallback(_stash_mutable_uri, "corrupt")
 
         def _compute_fileurls(ignored):
hunk ./src/allmydata/test/test_web.py 4302
         def _stash_mutable_uri(n, which):
             self.uris[which] = n.get_uri()
             assert isinstance(self.uris[which], str)
-        d.addCallback(lambda ign: c0.create_mutable_file(DATA+"2"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(publish.MutableData(DATA+"2")))
         d.addCallback(_stash_mutable_uri, "mutable")
 
         def _compute_fileurls(ignored):
hunk ./src/allmydata/test/test_web.py 4402
                                                         convergence="")))
         d.addCallback(_stash_uri, "small")
 
-        d.addCallback(lambda ign: c0.create_mutable_file("mutable"))
+        d.addCallback(lambda ign:
+            c0.create_mutable_file(publish.MutableData("mutable")))
         d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn))
         d.addCallback(_stash_uri, "mutable")
 
}
[resolve conflicts between 393-MDMF patches and trunk as of 1.8.2
"Brian Warner <warner@lothar.com>"**20110220230201
 Ignore-this: 9bbf5d26c994e8069202331dcb4cdd95
] {
merger 0.0 (
merger 0.0 (
merger 0.0 (
replace ./docs/configuration.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
merger 0.0 (
hunk ./docs/configuration.rst 384
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
hunk ./docs/configuration.rst 424
+mutable.format = sdmf or mdmf
+
+ This value tells Tahoe-LAFS what the default mutable file format should
+ be. If mutable.format=sdmf, then newly created mutable files will be in
+ the old SDMF format. This is desirable for clients that operate on
+ grids where some peers run older versions of Tahoe-LAFS, as these older
+ versions cannot read the new MDMF mutable file format. If
+ mutable.format = mdmf, then newly created mutable files will use the
+ new MDMF format, which supports efficient in-place modification and
+ streaming downloads. You can override this value using a special
+ mutable-type parameter in the webapi. If you do not specify a value
+ here, Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+ Note that this parameter only applies to mutable files. Mutable
+ directories, which are stored as mutable files, are not controlled by
+ this parameter and will always use SDMF. We may revisit this decision
+ in future versions of Tahoe-LAFS.
)
)
hunk ./docs/configuration.rst 324
+Frontend Configuration
+======================
+
+The Tahoe client process can run a variety of frontend file-access protocols.
+You will use these to create and retrieve files from the virtual filesystem.
+Configuration details for each are documented in the following
+protocol-specific guides:
+
+HTTP
+
+    Tahoe runs a webserver by default on port 3456. This interface provides a
+    human-oriented "WUI", with pages to create, modify, and browse
+    directories and files, as well as a number of pages to check on the
+    status of your Tahoe node. It also provides a machine-oriented "WAPI",
+    with a REST-ful HTTP interface that can be used by other programs
+    (including the CLI tools). Please see `<frontends/webapi.rst>`_ for full
+    details, and the ``web.port`` and ``web.static`` config variables above.
+    The `<frontends/download-status.rst>`_ document also describes a few WUI
+    status pages.
+
+CLI
+
+    The main "bin/tahoe" executable includes subcommands for manipulating the
+    filesystem, uploading/downloading files, and creating/running Tahoe
+    nodes. See `<frontends/CLI.rst>`_ for details.
+
+FTP, SFTP
+
+    Tahoe can also run both FTP and SFTP servers, and map a username/password
+    pair to a top-level Tahoe directory. See `<frontends/FTP-and-SFTP.rst>`_
+    for instructions on configuring these services, and the ``[ftpd]`` and
+    ``[sftpd]`` sections of ``tahoe.cfg``.
+
)
hunk ./docs/configuration.rst 324
+``mutable.format = sdmf or mdmf``
+
+    This value tells Tahoe what the default mutable file format should
+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
+    in the old SDMF format. This is desirable for clients that operate on
+    grids where some peers run older versions of Tahoe, as these older
+    versions cannot read the new MDMF mutable file format. If
+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
+    the new MDMF format, which supports efficient in-place modification and
+    streaming downloads. You can override this value using a special
+    mutable-type parameter in the webapi. If you do not specify a value here,
+    Tahoe will use SDMF for all newly-created mutable files.
+
+    Note that this parameter only applies to mutable files. Mutable
+    directories, which are stored as mutable files, are not controlled by
+    this parameter and will always use SDMF. We may revisit this decision
+    in future versions of Tahoe-LAFS.
+
)
merger 0.0 (
merger 0.0 (
hunk ./docs/configuration.rst 324
+``mutable.format = sdmf or mdmf``
+
+    This value tells Tahoe what the default mutable file format should
+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
+    in the old SDMF format. This is desirable for clients that operate on
+    grids where some peers run older versions of Tahoe, as these older
+    versions cannot read the new MDMF mutable file format. If
+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
+    the new MDMF format, which supports efficient in-place modification and
+    streaming downloads. You can override this value using a special
+    mutable-type parameter in the webapi. If you do not specify a value here,
+    Tahoe will use SDMF for all newly-created mutable files.
+
+    Note that this parameter only applies to mutable files. Mutable
+    directories, which are stored as mutable files, are not controlled by
+    this parameter and will always use SDMF. We may revisit this decision
+    in future versions of Tahoe-LAFS.
+
merger 0.0 (
merger 0.0 (
replace ./docs/configuration.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
merger 0.0 (
hunk ./docs/configuration.rst 384
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
hunk ./docs/configuration.rst 424
+mutable.format = sdmf or mdmf
+
+ This value tells Tahoe-LAFS what the default mutable file format should
+ be. If mutable.format=sdmf, then newly created mutable files will be in
+ the old SDMF format. This is desirable for clients that operate on
+ grids where some peers run older versions of Tahoe-LAFS, as these older
+ versions cannot read the new MDMF mutable file format. If
+ mutable.format = mdmf, then newly created mutable files will use the
+ new MDMF format, which supports efficient in-place modification and
+ streaming downloads. You can override this value using a special
+ mutable-type parameter in the webapi. If you do not specify a value
+ here, Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+ Note that this parameter only applies to mutable files. Mutable
+ directories, which are stored as mutable files, are not controlled by
+ this parameter and will always use SDMF. We may revisit this decision
+ in future versions of Tahoe-LAFS.
)
)
hunk ./docs/configuration.rst 324
+Frontend Configuration
+======================
+
+The Tahoe client process can run a variety of frontend file-access protocols.
+You will use these to create and retrieve files from the virtual filesystem.
+Configuration details for each are documented in the following
+protocol-specific guides:
+
+HTTP
+
+    Tahoe runs a webserver by default on port 3456. This interface provides a
+    human-oriented "WUI", with pages to create, modify, and browse
+    directories and files, as well as a number of pages to check on the
+    status of your Tahoe node. It also provides a machine-oriented "WAPI",
+    with a REST-ful HTTP interface that can be used by other programs
+    (including the CLI tools). Please see `<frontends/webapi.rst>`_ for full
+    details, and the ``web.port`` and ``web.static`` config variables above.
+    The `<frontends/download-status.rst>`_ document also describes a few WUI
+    status pages.
+
+CLI
+
+    The main "bin/tahoe" executable includes subcommands for manipulating the
+    filesystem, uploading/downloading files, and creating/running Tahoe
+    nodes. See `<frontends/CLI.rst>`_ for details.
+
+FTP, SFTP
+
+    Tahoe can also run both FTP and SFTP servers, and map a username/password
+    pair to a top-level Tahoe directory. See `<frontends/FTP-and-SFTP.rst>`_
+    for instructions on configuring these services, and the ``[ftpd]`` and
+    ``[sftpd]`` sections of ``tahoe.cfg``.
+
)
)
hunk ./docs/configuration.rst 402
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
)
merger 0.0 (
merger 0.0 (
hunk ./docs/configuration.rst 402
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
merger 0.0 (
hunk ./docs/configuration.rst 324
+``mutable.format = sdmf or mdmf``
+
+    This value tells Tahoe what the default mutable file format should
+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
+    in the old SDMF format. This is desirable for clients that operate on
+    grids where some peers run older versions of Tahoe, as these older
+    versions cannot read the new MDMF mutable file format. If
+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
+    the new MDMF format, which supports efficient in-place modification and
+    streaming downloads. You can override this value using a special
+    mutable-type parameter in the webapi. If you do not specify a value here,
+    Tahoe will use SDMF for all newly-created mutable files.
+
+    Note that this parameter only applies to mutable files. Mutable
+    directories, which are stored as mutable files, are not controlled by
+    this parameter and will always use SDMF. We may revisit this decision
+    in future versions of Tahoe-LAFS.
+
merger 0.0 (
merger 0.0 (
replace ./docs/configuration.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
merger 0.0 (
hunk ./docs/configuration.rst 384
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
hunk ./docs/configuration.rst 424
+mutable.format = sdmf or mdmf
+
+ This value tells Tahoe-LAFS what the default mutable file format should
+ be. If mutable.format=sdmf, then newly created mutable files will be in
+ the old SDMF format. This is desirable for clients that operate on
+ grids where some peers run older versions of Tahoe-LAFS, as these older
+ versions cannot read the new MDMF mutable file format. If
+ mutable.format = mdmf, then newly created mutable files will use the
+ new MDMF format, which supports efficient in-place modification and
+ streaming downloads. You can override this value using a special
+ mutable-type parameter in the webapi. If you do not specify a value
+ here, Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+ Note that this parameter only applies to mutable files. Mutable
+ directories, which are stored as mutable files, are not controlled by
+ this parameter and will always use SDMF. We may revisit this decision
+ in future versions of Tahoe-LAFS.
)
)
hunk ./docs/configuration.rst 324
+Frontend Configuration
+======================
+
+The Tahoe client process can run a variety of frontend file-access protocols.
+You will use these to create and retrieve files from the virtual filesystem.
+Configuration details for each are documented in the following
+protocol-specific guides:
+
+HTTP
+
+    Tahoe runs a webserver by default on port 3456. This interface provides a
+    human-oriented "WUI", with pages to create, modify, and browse
+    directories and files, as well as a number of pages to check on the
+    status of your Tahoe node. It also provides a machine-oriented "WAPI",
+    with a REST-ful HTTP interface that can be used by other programs
+    (including the CLI tools). Please see `<frontends/webapi.rst>`_ for full
+    details, and the ``web.port`` and ``web.static`` config variables above.
+    The `<frontends/download-status.rst>`_ document also describes a few WUI
+    status pages.
+
+CLI
+
+    The main "bin/tahoe" executable includes subcommands for manipulating the
+    filesystem, uploading/downloading files, and creating/running Tahoe
+    nodes. See `<frontends/CLI.rst>`_ for details.
+
+FTP, SFTP
+
+    Tahoe can also run both FTP and SFTP servers, and map a username/password
+    pair to a top-level Tahoe directory. See `<frontends/FTP-and-SFTP.rst>`_
+    for instructions on configuring these services, and the ``[ftpd]`` and
+    ``[sftpd]`` sections of ``tahoe.cfg``.
+
)
)
)
replace ./docs/configuration.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
)
hunk ./src/allmydata/mutable/retrieve.py 7
 from zope.interface import implements
 from twisted.internet import defer
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, eventually, fireEventually
-from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError
-from allmydata.util import hashutil, idlib, log
+from twisted.internet.interfaces import IPushProducer, IConsumer
+from foolscap.api import eventually, fireEventually
+from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \
+                                 MDMF_VERSION, SDMF_VERSION
+from allmydata.util import hashutil, log, mathutil
+from allmydata.util.dictutil import DictOfSets
 from allmydata import hashtree, codec
 from allmydata.storage.server import si_b2a
 from pycryptopp.cipher.aes import AES
hunk ./src/allmydata/mutable/retrieve.py 239
             # KiB, so we ask for that much.
             # TODO: Change the cache methods to allow us to fetch all of the
             # data that they have, then change this method to do that.
-            any_cache, timestamp = self._node._read_from_cache(self.verinfo,
-                                                               shnum,
-                                                               0,
-                                                               1000)
+            any_cache = self._node._read_from_cache(self.verinfo, shnum,
+                                                    0, 1000)
             ss = self.servermap.connections[peerid]
             reader = MDMFSlotReadProxy(ss,
                                        self._storage_index,
hunk ./src/allmydata/mutable/retrieve.py 373
                  (k, n, self._num_segments, self._segment_size,
                   self._tail_segment_size))
 
-        # ask the cache first
-        got_from_cache = False
-        datavs = []
-        for (offset, length) in readv:
-            (data, timestamp) = self._node._read_from_cache(self.verinfo, shnum,
-                                                            offset, length)
-            if data is not None:
-                datavs.append(data)
-        if len(datavs) == len(readv):
-            self.log("got data from cache")
-            got_from_cache = True
-            d = fireEventually({shnum: datavs})
-            # datavs is a dict mapping shnum to a pair of strings
+        for i in xrange(self._total_shares):
+            # Build each share's block hash tree up front, so we don't
+            # have to do this later.
+            self._block_hash_trees[i] = hashtree.IncompleteHashTree(self._num_segments)
+
+        # Our last task is to tell the downloader where to start and
+        # where to stop. We use three parameters for that:
+        #   - self._start_segment: the segment that we need to start
+        #     downloading from. 
+        #   - self._current_segment: the next segment that we need to
+        #     download.
+        #   - self._last_segment: The last segment that we were asked to
+        #     download.
+        #
+        #  We say that the download is complete when
+        #  self._current_segment > self._last_segment. We use
+        #  self._start_segment and self._last_segment to know when to
+        #  strip things off of segments, and how much to strip.
+        if self._offset:
+            self.log("got offset: %d" % self._offset)
+            # our start segment is the first segment containing the
+            # offset we were given. 
+            start = mathutil.div_ceil(self._offset,
+                                      self._segment_size)
+            # this gets us the first segment after self._offset. Then
+            # our start segment is the one before it.
+            start -= 1
+
+            assert start < self._num_segments
+            self._start_segment = start
+            self.log("got start segment: %d" % self._start_segment)
         else:
             self._start_segment = 0
 
hunk ./src/allmydata/mutable/servermap.py 7
 from itertools import count
 from twisted.internet import defer
 from twisted.python import failure
-from foolscap.api import DeadReferenceError, RemoteException, eventually
-from allmydata.util import base32, hashutil, idlib, log
+from foolscap.api import DeadReferenceError, RemoteException, eventually, \
+                         fireEventually
+from allmydata.util import base32, hashutil, idlib, log, deferredutil
+from allmydata.util.dictutil import DictOfSets
 from allmydata.storage.server import si_b2a
 from allmydata.interfaces import IServermapUpdaterStatus
 from pycryptopp.publickey import rsa
hunk ./src/allmydata/mutable/servermap.py 16
 
 from allmydata.mutable.common import MODE_CHECK, MODE_ANYTHING, MODE_WRITE, MODE_READ, \
-     DictOfSets, CorruptShareError, NeedMoreDataError
-from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \
-     SIGNED_PREFIX_LENGTH
+     CorruptShareError
+from allmydata.mutable.layout import SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy
 
 class UpdateStatus:
     implements(IServermapUpdaterStatus)
hunk ./src/allmydata/mutable/servermap.py 391
         #  * if we need the encrypted private key, we want [-1216ish:]
         #   * but we can't read from negative offsets
         #   * the offset table tells us the 'ish', also the positive offset
-        # A future version of the SMDF slot format should consider using
-        # fixed-size slots so we can retrieve less data. For now, we'll just
-        # read 2000 bytes, which also happens to read enough actual data to
-        # pre-fetch a 9-entry dirnode.
+        # MDMF:
+        #  * Checkstring? [0:72]
+        #  * If we want to validate the checkstring, then [0:72], [143:?] --
+        #    the offset table will tell us for sure.
+        #  * If we need the verification key, we have to consult the offset
+        #    table as well.
+        # At this point, we don't know which we are. Our filenode can
+        # tell us, but it might be lying -- in some cases, we're
+        # responsible for telling it which kind of file it is.
         self._read_size = 4000
         if mode == MODE_CHECK:
             # we use unpack_prefix_and_signature, so we need 1k
hunk ./src/allmydata/mutable/servermap.py 633
         updated.
         """
         if verinfo:
-            self._node._add_to_cache(verinfo, shnum, 0, data, now)
+            self._node._add_to_cache(verinfo, shnum, 0, data)
 
 
     def _got_results(self, datavs, peerid, readsize, stuff, started):
hunk ./src/allmydata/mutable/servermap.py 664
 
         for shnum,datav in datavs.items():
             data = datav[0]
-            try:
-                verinfo = self._got_results_one_share(shnum, data, peerid, lp)
-                last_verinfo = verinfo
-                last_shnum = shnum
-                self._node._add_to_cache(verinfo, shnum, 0, data, now)
-            except CorruptShareError, e:
-                # log it and give the other shares a chance to be processed
-                f = failure.Failure()
-                self.log(format="bad share: %(f_value)s", f_value=str(f.value),
-                         failure=f, parent=lp, level=log.WEIRD, umid="h5llHg")
-                self.notify_server_corruption(peerid, shnum, str(e))
-                self._bad_peers.add(peerid)
-                self._last_failure = f
-                checkstring = data[:SIGNED_PREFIX_LENGTH]
-                self._servermap.mark_bad_share(peerid, shnum, checkstring)
-                self._servermap.problems.append(f)
-                pass
+            reader = MDMFSlotReadProxy(ss,
+                                       storage_index,
+                                       shnum,
+                                       data)
+            self._readers.setdefault(peerid, dict())[shnum] = reader
+            # our goal, with each response, is to validate the version
+            # information and share data as best we can at this point --
+            # we do this by validating the signature. To do this, we
+            # need to do the following:
+            #   - If we don't already have the public key, fetch the
+            #     public key. We use this to validate the signature.
+            if not self._node.get_pubkey():
+                # fetch and set the public key.
+                d = reader.get_verification_key(queue=True)
+                d.addCallback(lambda results, shnum=shnum, peerid=peerid:
+                    self._try_to_set_pubkey(results, peerid, shnum, lp))
+                # XXX: Make self._pubkey_query_failed?
+                d.addErrback(lambda error, shnum=shnum, peerid=peerid:
+                    self._got_corrupt_share(error, shnum, peerid, data, lp))
+            else:
+                # we already have the public key.
+                d = defer.succeed(None)
 
             # Neither of these two branches return anything of
             # consequence, so the first entry in our deferredlist will
hunk ./src/allmydata/test/test_storage.py 1
-import time, os.path, platform, stat, re, simplejson, struct
+import time, os.path, platform, stat, re, simplejson, struct, shutil
 
hunk ./src/allmydata/test/test_storage.py 3
-import time, os.path, stat, re, simplejson, struct
+import mock
 
 from twisted.trial import unittest
 
}
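
The read-proxy pattern that the servermap hunk above adopts, sketched with
names taken from the hunk; flush() is an assumption here, standing for
whatever call issues the queued read vectors as a single remote request:

    reader = MDMFSlotReadProxy(ss, storage_index, shnum, data)
    d = reader.get_verification_key(queue=True)
    d.addCallback(self._try_to_set_pubkey, peerid, shnum, lp)
    # ...queue any other reads needed for validation here...
    reader.flush()  # assumed: sends all queued reads in one batch
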
[mutable/filenode.py: fix create_mutable_file('string')
"Brian Warner <warner@lothar.com>"**20110221014659
 Ignore-this: dc6bdad761089f0199681eeb784f1001
] hunk ./src/allmydata/mutable/filenode.py 137
         if contents is None:
             return MutableData("")
 
+        if isinstance(contents, str):
+            return MutableData(contents)
+
         if IMutableUploadable.providedBy(contents):
             return contents
 
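
The effect of this fix, sketched with an assumed nodemaker handle: callers
no longer need to wrap plain strings in an uploadable themselves.

    from allmydata.mutable.publish import MutableData

    # previously callers had to wrap strings by hand:
    d = nodemaker.create_mutable_file(MutableData("initial contents"))
    # with this patch a bare string is coerced to MutableData internally:
    d = nodemaker.create_mutable_file("initial contents")
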
[resolve more conflicts with current trunk
"Brian Warner <warner@lothar.com>"**20110221055600
 Ignore-this: 77ad038a478dbf5d9b34f7a68159a3e0
] hunk ./src/allmydata/mutable/servermap.py 461
         self._queries_completed = 0
 
         sb = self._storage_broker
-        full_peerlist = sb.get_servers_for_index(self._storage_index)
+        # All of the peers, permuted by the storage index, as usual.
+        full_peerlist = [(s.get_serverid(), s.get_rref())
+                         for s in sb.get_servers_for_psi(self._storage_index)]
         self.full_peerlist = full_peerlist # for use later, immutable
         self.extra_peers = full_peerlist[:] # peers are removed as we use them
         self._good_peers = set() # peers who had some shares
[update MDMF code with StorageFarmBroker changes
"Brian Warner <warner@lothar.com>"**20110221061004
 Ignore-this: a693b201d31125b391cebe0412ddd027
] {
hunk ./src/allmydata/mutable/publish.py 203
         self._encprivkey = self._node.get_encprivkey()
 
         sb = self._storage_broker
-        full_peerlist = sb.get_servers_for_index(self._storage_index)
+        full_peerlist = [(s.get_serverid(), s.get_rref())
+                         for s in sb.get_servers_for_psi(self._storage_index)]
         self.full_peerlist = full_peerlist # for use later, immutable
         self.bad_peers = set() # peerids who have errbacked/refused requests
 
hunk ./src/allmydata/test/test_mutable.py 2538
             # for either a block and salt or for hashes, either of which
             # will exercise the error handling code.
             killer = FirstServerGetsKilled()
-            for (serverid, ss) in nm.storage_broker.get_all_servers():
-                ss.post_call_notifier = killer.notify
+            for s in nm.storage_broker.get_connected_servers():
+                s.get_rref().post_call_notifier = killer.notify
             ver = servermap.best_recoverable_version()
             assert ver
             return self._node.download_version(servermap, ver)
}
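
For reference, the StorageFarmBroker change these hunks adapt to:
get_servers_for_index() returned (peerid, rref) pairs directly, while
get_servers_for_psi() returns server objects, so callers now rebuild the
pairs themselves:

    # old broker API:
    #   full_peerlist = sb.get_servers_for_index(self._storage_index)
    # new broker API returns server objects, not (peerid, rref) pairs:
    full_peerlist = [(s.get_serverid(), s.get_rref())
                     for s in sb.get_servers_for_psi(self._storage_index)]
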
[mutable/filenode: Clean up servermap handling in MutableFileVersion
Kevan Carstensen <kevan@isnotajoke.com>**20110226010433
 Ignore-this: 2257c9f65502098789f5ea355b94f130
 
 We want to update the servermap before attempting to modify a file,
 which we now do. This introduced code duplication, which was addressed
 by refactoring the servermap update into its own method, and then
 eliminating duplicate servermap updates throughout the
 MutableFileVersion.
] {
hunk ./src/allmydata/mutable/filenode.py 19
 from allmydata.mutable.publish import Publish, MutableData,\
                                       DEFAULT_MAX_SEGMENT_SIZE, \
                                       TransformingUploadable
-from allmydata.mutable.common import MODE_READ, MODE_WRITE, UnrecoverableFileError, \
+from allmydata.mutable.common import MODE_READ, MODE_WRITE, MODE_CHECK, UnrecoverableFileError, \
      ResponseCache, UncoordinatedWriteError
 from allmydata.mutable.servermap import ServerMap, ServermapUpdater
 from allmydata.mutable.retrieve import Retrieve
hunk ./src/allmydata/mutable/filenode.py 807
         a little bit.
         """
         log.msg("doing modify")
-        d = self._modify_once(modifier, first_time)
+        if first_time:
+            d = self._update_servermap()
+        else:
+            # We ran into trouble; do MODE_CHECK so we're a little more
+            # careful on subsequent tries.
+            d = self._update_servermap(mode=MODE_CHECK)
+
+        d.addCallback(lambda ignored:
+            self._modify_once(modifier, first_time))
         def _retry(f):
             f.trap(UncoordinatedWriteError)
hunk ./src/allmydata/mutable/filenode.py 818
+            # Uh oh, it broke. We're allowed to trust the servermap for our
+            # first try, but after that we need to update it. It's
+            # possible that we've failed due to a race with another
+            # uploader, and if the race is to converge correctly, we
+            # need to know about that upload.
             d2 = defer.maybeDeferred(backoffer, self, f)
             d2.addCallback(lambda ignored:
                            self._modify_and_retry(modifier,
hunk ./src/allmydata/mutable/filenode.py 837
         I attempt to apply a modifier to the contents of the mutable
         file.
         """
-        # XXX: This is wrong -- we could get more servers if we updated
-        # in MODE_ANYTHING and possibly MODE_CHECK. Probably we want to
-        # assert that the last update wasn't MODE_READ
-        assert self._servermap.last_update_mode == MODE_WRITE
+        assert self._servermap.last_update_mode != MODE_READ
 
         # download_to_data is serialized, so we have to call this to
         # avoid deadlock.
hunk ./src/allmydata/mutable/filenode.py 1076
 
         # Now ask for the servermap to be updated in MODE_WRITE with
         # this update range.
-        u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
-                             self._servermap,
-                             mode=MODE_WRITE,
-                             update_range=(start_segment, end_segment))
-        return u.update()
+        return self._update_servermap(update_range=(start_segment,
+                                                    end_segment))
 
 
     def _decode_and_decrypt_segments(self, ignored, data, offset):
hunk ./src/allmydata/mutable/filenode.py 1135
                                    segments_and_bht[1])
         p = Publish(self._node, self._storage_broker, self._servermap)
         return p.update(u, offset, segments_and_bht[2], self._version)
+
+
+    def _update_servermap(self, mode=MODE_WRITE, update_range=None):
+        """
+        I update the servermap. I return a Deferred that fires when the
+        servermap update is done.
+        """
+        if update_range:
+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
+                                 self._servermap,
+                                 mode=mode,
+                                 update_range=update_range)
+        else:
+            u = ServermapUpdater(self._node, self._storage_broker, Monitor(),
+                                 self._servermap,
+                                 mode=mode)
+        return u.update()
}
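
The retry policy introduced above, condensed (errback handling elided):
trust the existing servermap on the first attempt, and re-scan in the more
careful MODE_CHECK once an UncoordinatedWriteError has been seen:

    def _modify_and_retry(self, modifier, first_time):
        if first_time:
            d = self._update_servermap()             # MODE_WRITE by default
        else:
            d = self._update_servermap(mode=MODE_CHECK)
        d.addCallback(lambda ign: self._modify_once(modifier, first_time))
        return d
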
[web: Use the string "replace" to trigger whole-file replacement when processing an offset parameter.
Kevan Carstensen <kevan@isnotajoke.com>**20110227231643
 Ignore-this: 5bbf0b90d68efe20d4c531bb98a8321a
] {
hunk ./docs/frontends/webapi.rst 360
  To use the /uri/$FILECAP form, $FILECAP must be a write-cap for a mutable file.
 
  In the /uri/$DIRCAP/[SUBDIRS../]FILENAME form, if the target file is a
- writeable mutable file, that file's contents will be overwritten in-place. If
- it is a read-cap for a mutable file, an error will occur. If it is an
- immutable file, the old file will be discarded, and a new one will be put in
- its place. If the target file is a writable mutable file, you may also
- specify an "offset" parameter -- a byte offset that determines where in
- the mutable file the data from the HTTP request body is placed. This
- operation is relatively efficient for MDMF mutable files, and is
- relatively inefficient (but still supported) for SDMF mutable files.
+ writeable mutable file, that file's contents will be overwritten
+ in-place. If it is a read-cap for a mutable file, an error will occur.
+ If it is an immutable file, the old file will be discarded, and a new
+ one will be put in its place. If the target file is a writable mutable
+ file, you may also specify an "offset" parameter -- a byte offset that
+ determines where in the mutable file the data from the HTTP request
+ body is placed. This operation is relatively efficient for MDMF mutable
+ files, and is relatively inefficient (but still supported) for SDMF
+ mutable files. If no offset parameter is specified, then the entire
+ file is replaced with the data from the HTTP request body. For an
+ immutable file, the "offset" parameter is not valid.
 
  When creating a new file, if "mutable=true" is in the query arguments, the
  operation will create a mutable file instead of an immutable one.
hunk ./src/allmydata/test/test_web.py 3206
             self.failUnlessEqual(results, self.new_data + ("puppies" * 100)))
         return d
 
+    def test_PUT_update_at_invalid_offset(self):
+        file_contents = "test file" * 100000 # about 900 KiB
+        d = self.PUT("/uri?mutable=true", file_contents)
+        def _then(filecap):
+            self.filecap = filecap
+        d.addCallback(_then)
+        # Negative offsets should cause an error.
+        d.addCallback(lambda ignored:
+            self.shouldHTTPError("test mutable invalid offset negative",
+                                 400, "Bad Request",
+                                 "Invalid offset",
+                                 self.PUT,
+                                 "/uri/%s?offset=-1" % self.filecap,
+                                 "foo"))
+        return d
 
     def test_PUT_update_at_offset_immutable(self):
         file_contents = "Test file" * 100000
hunk ./src/allmydata/web/common.py 55
     # message? Since this call is going to be used by programmers and
     # their tools rather than users (through the wui), it is not
     # inconsistent to return that, I guess.
-    offset = int(offset)
-    return offset
+    return int(offset)
 
 
 def get_root(ctx_or_req):
hunk ./src/allmydata/web/filenode.py 219
         req = IRequest(ctx)
         t = get_arg(req, "t", "").strip()
         replace = parse_replace_arg(get_arg(req, "replace", "true"))
-        offset = parse_offset_arg(get_arg(req, "offset", -1))
+        offset = parse_offset_arg(get_arg(req, "offset", False))
 
         if not t:
hunk ./src/allmydata/web/filenode.py 222
-            if self.node.is_mutable() and offset >= 0:
-                return self.update_my_contents(req, offset)
-
-            elif self.node.is_mutable():
-                return self.replace_my_contents(req)
             if not replace:
                 # this is the early trap: if someone else modifies the
                 # directory while we're uploading, the add_file(overwrite=)
hunk ./src/allmydata/web/filenode.py 227
                 # call in replace_me_with_a_child will do the late trap.
                 raise ExistingChildError()
-            if offset >= 0:
-                raise WebError("PUT to a file: append operation invoked "
-                               "on an immutable cap")
 
hunk ./src/allmydata/web/filenode.py 228
+            if self.node.is_mutable():
+                if offset == False:
+                    return self.replace_my_contents(req)
+
+                if offset >= 0:
+                    return self.update_my_contents(req, offset)
+
+                raise WebError("PUT to a mutable file: Invalid offset")
+
+            else:
+                if offset != False:
+                    raise WebError("PUT to a file: append operation invoked "
+                                   "on an immutable cap")
+
+                assert self.parentnode and self.name
+                return self.replace_me_with_a_child(req, self.client, replace)
 
hunk ./src/allmydata/web/filenode.py 245
-            assert self.parentnode and self.name
-            return self.replace_me_with_a_child(req, self.client, replace)
         if t == "uri":
             if not replace:
                 raise ExistingChildError()
}
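
An example of the offset semantics documented above, using the stdlib
httplib module against a local node; the filecap value is hypothetical:

    import httplib

    cap = "URI:SSK:aaaa:bbbb"   # hypothetical mutable-file writecap
    conn = httplib.HTTPConnection("127.0.0.1", 3456)

    # overwrite four bytes in place, starting at byte 500:
    conn.request("PUT", "/uri/%s?offset=500" % cap, "NNNN")
    print conn.getresponse().read()

    # no offset parameter: the whole file is replaced
    conn.request("PUT", "/uri/%s" % cap, "entirely new contents")
    print conn.getresponse().read()
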
[docs/configuration.rst: fix more conflicts between #393 and trunk
Kevan Carstensen <kevan@isnotajoke.com>**20110228003426
 Ignore-this: 7917effdeecab00d634a06f1df8fe2cf
] {
replace ./docs/configuration.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
hunk ./docs/configuration.rst 324
     (Mutable files use a different share placement algorithm that does not
     currently consider this parameter.)
 
+``mutable.format = sdmf or mdmf``
+
+    This value tells Tahoe-LAFS what the default mutable file format should
+    be. If ``mutable.format=sdmf``, then newly created mutable files will be
+    in the old SDMF format. This is desirable for clients that operate on
+    grids where some peers run older versions of Tahoe-LAFS, as these older
+    versions cannot read the new MDMF mutable file format. If
+    ``mutable.format`` is ``mdmf``, then newly created mutable files will use
+    the new MDMF format, which supports efficient in-place modification and
+    streaming downloads. You can override this value using a special
+    mutable-type parameter in the webapi. If you do not specify a value here,
+    Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+    Note that this parameter only applies to mutable files. Mutable
+    directories, which are stored as mutable files, are not controlled by
+    this parameter and will always use SDMF. We may revisit this decision
+    in future versions of Tahoe-LAFS.
+
+
+Frontend Configuration
+======================
+
+The Tahoe client process can run a variety of frontend file-access protocols.
+You will use these to create and retrieve files from the virtual filesystem.
+Configuration details for each are documented in the following
+protocol-specific guides:
+
+HTTP
+
+    Tahoe runs a webserver by default on port 3456. This interface provides a
+    human-oriented "WUI", with pages to create, modify, and browse
+    directories and files, as well as a number of pages to check on the
+    status of your Tahoe node. It also provides a machine-oriented "WAPI",
+    with a REST-ful HTTP interface that can be used by other programs
+    (including the CLI tools). Please see `<frontends/webapi.rst>`_ for full
+    details, and the ``web.port`` and ``web.static`` config variables above.
+    The `<frontends/download-status.rst>`_ document also describes a few WUI
+    status pages.
+
+CLI
+
+    The main "bin/tahoe" executable includes subcommands for manipulating the
+    filesystem, uploading/downloading files, and creating/running Tahoe
+    nodes. See `<frontends/CLI.rst>`_ for details.
+
+FTP, SFTP
+
+    Tahoe can also run both FTP and SFTP servers, and map a username/password
+    pair to a top-level Tahoe directory. See `<frontends/FTP-and-SFTP.rst>`_
+    for instructions on configuring these services, and the ``[ftpd]`` and
+    ``[sftpd]`` sections of ``tahoe.cfg``.
+
 
 Storage Server Configuration
 ============================
hunk ./docs/configuration.rst 436
     `<garbage-collection.rst>`_ for full details.
 
 
-shares.needed = (int, optional) aka "k", default 3
-shares.total = (int, optional) aka "N", N >= k, default 10
-shares.happy = (int, optional) 1 <= happy <= N, default 7
-
- These three values set the default encoding parameters. Each time a new file
- is uploaded, erasure-coding is used to break the ciphertext into separate
- pieces. There will be "N" (i.e. shares.total) pieces created, and the file
- will be recoverable if any "k" (i.e. shares.needed) pieces are retrieved.
- The default values are 3-of-10 (i.e. shares.needed = 3, shares.total = 10).
- Setting k to 1 is equivalent to simple replication (uploading N copies of
- the file).
-
- These values control the tradeoff between storage overhead, performance, and
- reliability. To a first approximation, a 1MB file will use (1MB*N/k) of
- backend storage space (the actual value will be a bit more, because of other
- forms of overhead). Up to N-k shares can be lost before the file becomes
- unrecoverable, so assuming there are at least N servers, up to N-k servers
- can be offline without losing the file. So large N/k ratios are more
- reliable, and small N/k ratios use less disk space. Clearly, k must never be
- smaller than N.
-
- Large values of N will slow down upload operations slightly, since more
- servers must be involved, and will slightly increase storage overhead due to
- the hash trees that are created. Large values of k will cause downloads to
- be marginally slower, because more servers must be involved. N cannot be
- larger than 256, because of the 8-bit erasure-coding algorithm that Tahoe-LAFS
- uses.
-
- shares.happy allows you control over the distribution of your immutable file.
- For a successful upload, shares are guaranteed to be initially placed on
- at least 'shares.happy' distinct servers, the correct functioning of any
- k of which is sufficient to guarantee the availability of the uploaded file.
- This value should not be larger than the number of servers on your grid.
- 
- A value of shares.happy <= k is allowed, but does not provide any redundancy
- if some servers fail or lose shares.
-
- (Mutable files use a different share placement algorithm that does not 
-  consider this parameter.)
-
-
-== Storage Server Configuration ==
-
-[storage]
-enabled = (boolean, optional)
-
- If this is True, the node will run a storage server, offering space to other
- clients. If it is False, the node will not run a storage server, meaning
- that no shares will be stored on this node. Use False this for clients who
- do not wish to provide storage service. The default value is True.
-
-readonly = (boolean, optional)
-
- If True, the node will run a storage server but will not accept any shares,
- making it effectively read-only. Use this for storage servers which are
- being decommissioned: the storage/ directory could be mounted read-only,
- while shares are moved to other servers. Note that this currently only
- affects immutable shares. Mutable shares (used for directories) will be
- written and modified anyway. See ticket #390 for the current status of this
- bug. The default value is False.
-
-reserved_space = (str, optional)
-
- If provided, this value defines how much disk space is reserved: the storage
- server will not accept any share which causes the amount of free disk space
- to drop below this value. (The free space is measured by a call to statvfs(2)
- on Unix, or GetDiskFreeSpaceEx on Windows, and is the space available to the
- user account under which the storage server runs.)
-
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the same
- thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same thing.
-
-expire.enabled =
-expire.mode =
-expire.override_lease_duration =
-expire.cutoff_date =
-expire.immutable =
-expire.mutable =
-
- These settings control garbage-collection, in which the server will delete
- shares that no longer have an up-to-date lease on them. Please see the
- neighboring "garbage-collection.txt" document for full details.
-
-
-== Running A Helper ==
+Running A Helper
+================
 
 A "helper" is a regular client node that also offers the "upload helper"
 service.
}
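
A minimal tahoe.cfg excerpt exercising the new option; the [client]
section is an assumption here, matching where the other mutable.* and
shares.* settings live:

    [client]
    # newly created mutable files will use the MDMF format:
    mutable.format = mdmf
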
[mutable/layout: remove references to the salt hash tree.
Kevan Carstensen <kevan@isnotajoke.com>**20110228010637
 Ignore-this: b3b2963ba4d0b42c78b6bba219d4deb5
] {
hunk ./src/allmydata/mutable/layout.py 577
     # 99          8           The offset of the EOF
     #
     # followed by salts and share data, the encrypted private key, the
-    # block hash tree, the salt hash tree, the share hash chain, a
-    # signature over the first eight fields, and a verification key.
+    # block hash tree, the share hash chain, a signature over the first
+    # eight fields, and a verification key.
     # 
     # The checkstring is the first three fields -- the version number,
     # sequence number, root hash and root salt hash. This is consistent
hunk ./src/allmydata/mutable/layout.py 628
     #      calculate the offset for the share hash chain, and fill that
     #      into the offsets table.
     #
-    #   4: At the same time, we're in a position to upload the salt hash
-    #      tree. This is a Merkle tree over all of the salts. We use a
-    #      Merkle tree so that we can validate each block,salt pair as
-    #      we download them later. We do this using 
-    #
-    #        put_salthashes(salt_hash_tree)
-    #
-    #      When you do this, I automatically put the root of the tree
-    #      (the hash at index 0 of the list) in its appropriate slot in
-    #      the signed prefix of the share.
-    #
-    #   5: We're now in a position to upload the share hash chain for
+    #   4: We're now in a position to upload the share hash chain for
     #      a share. Do that with something like:
     #      
     #        put_sharehashes(share_hash_chain) 
hunk ./src/allmydata/mutable/layout.py 639
     #      The root of this tree will be put explicitly in the next
     #      step.
     # 
-    #      TODO: Why? Why not just include it in the tree here?
-    #
-    #   6: Before putting the signature, we must first put the
+    #   5: Before putting the signature, we must first put the
     #      root_hash. Do this with:
     # 
     #        put_root_hash(root_hash).
hunk ./src/allmydata/mutable/layout.py 872
             raise LayoutInvalid("I was given the wrong size block to write")
 
         # We want to write at len(MDMFHEADER) + segnum * block_size.
-
         offset = MDMFHEADERSIZE + (self._actual_block_size * segnum)
         data = salt + data
 
hunk ./src/allmydata/mutable/layout.py 889
         # tree is written, since that could cause the private key to run
         # into the block hash tree. Before it writes the block hash
         # tree, the block hash tree writing method writes the offset of
-        # the salt hash tree. So that's a good indicator of whether or
+        # the share hash chain. So that's a good indicator of whether or
         # not the block hash tree has been written.
         if "share_hash_chain" in self._offsets:
             raise LayoutInvalid("You must write this before the block hash tree")
hunk ./src/allmydata/mutable/layout.py 907
         The encrypted private key must be queued before the block hash
         tree, since we need to know how large it is to know where the
         block hash tree should go. The block hash tree must be put
-        before the salt hash tree, since its size determines the
+        before the share hash chain, since its size determines the
         offset of the share hash chain.
         """
         assert self._offsets
hunk ./src/allmydata/mutable/layout.py 932
         I queue a write vector to put the share hash chain in my
         argument onto the remote server.
 
-        The salt hash tree must be queued before the share hash chain,
-        since we need to know where the salt hash tree ends before we
+        The block hash tree must be queued before the share hash chain,
+        since we need to know where the block hash tree ends before we
         can know where the share hash chain starts. The share hash chain
         must be put before the signature, since the length of the packed
         share hash chain determines the offset of the signature. Also,
hunk ./src/allmydata/mutable/layout.py 937
-        semantically, you must know what the root of the salt hash tree
+        semantically, you must know what the root of the block hash tree
         is before you can generate a valid signature.
         """
         assert isinstance(sharehashes, dict)
hunk ./src/allmydata/mutable/layout.py 942
         if "share_hash_chain" not in self._offsets:
-            raise LayoutInvalid("You need to put the salt hash tree before "
+            raise LayoutInvalid("You need to put the block hash tree before "
                                 "you can put the share hash chain")
         # The signature comes after the share hash chain. If the
         # signature has already been written, we must not write another
}
[test_mutable.py: add test to exercise fencepost bug
warner@lothar.com**20110228021056
 Ignore-this: d2f9cf237ce6db42fb250c8ad71a4fc3
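 
 The suspect offsets sit just before and exactly on the 128 KiB
 segment boundaries, where off-by-one errors live. A quick
 illustration of the boundary arithmetic (assumed helper, not part of
 the test):
 
   SEGSIZE = 128 * 1024
   def touched_segments(offset, length):
       # first and last segment indices touched by a write
       return (offset // SEGSIZE, (offset + length - 1) // SEGSIZE)
 
   touched_segments(SEGSIZE - 1, 2)  # => (0, 1): straddles the boundary
   touched_segments(SEGSIZE, 2)      # => (1, 1): entirely in segment 1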
] {
hunk ./src/allmydata/test/test_mutable.py 2
 
-import os
+import os, re
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.internet import defer, reactor
hunk ./src/allmydata/test/test_mutable.py 2931
         self.set_up_grid()
         self.c = self.g.clients[0]
         self.nm = self.c.nodemaker
-        self.data = "test data" * 100000 # about 900 KiB; MDMF
+        self.data = "testdata " * 100000 # about 900 KiB; MDMF
         self.small_data = "test data" * 10 # about 90 B; SDMF
         return self.do_upload()
 
hunk ./src/allmydata/test/test_mutable.py 2981
             self.failUnlessEqual(results, new_data))
         return d
 
+    def test_replace_segstart1(self):
+        offset = 128*1024+1
+        new_data = "NNNN"
+        expected = self.data[:offset]+new_data+self.data[offset+4:]
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv:
+            mv.update(MutableData(new_data), offset))
+        d.addCallback(lambda ignored:
+            self.mdmf_node.download_best_version())
+        def _check(results):
+            if results != expected:
+                print
+                print "got: %s ... %s" % (results[:20], results[-20:])
+                print "exp: %s ... %s" % (expected[:20], expected[-20:])
+                self.fail("results != expected")
+        d.addCallback(_check)
+        return d
+
+    def _check_differences(self, got, expected):
+        # displaying arbitrary file corruption is tricky for a
+        # 1MB file of repeating data,, so look for likely places
+        # with problems and display them separately
+        gotmods = [mo.span() for mo in re.finditer('([A-Z]+)', got)]
+        expmods = [mo.span() for mo in re.finditer('([A-Z]+)', expected)]
+        gotspans = ["%d:%d=%s" % (start,end,got[start:end])
+                    for (start,end) in gotmods]
+        expspans = ["%d:%d=%s" % (start,end,expected[start:end])
+                    for (start,end) in expmods]
+        #print "expecting: %s" % expspans
+
+        SEGSIZE = 128*1024
+        if got != expected:
+            print "differences:"
+            for segnum in range(len(expected)//SEGSIZE):
+                start = segnum * SEGSIZE
+                end = (segnum+1) * SEGSIZE
+                got_ends = "%s .. %s" % (got[start:start+20], got[end-20:end])
+                exp_ends = "%s .. %s" % (expected[start:start+20], expected[end-20:end])
+                if got_ends != exp_ends:
+                    print "expected[%d]: %s" % (start, exp_ends)
+                    print "got     [%d]: %s" % (start, got_ends)
+            if expspans != gotspans:
+                print "expected: %s" % expspans
+                print "got     : %s" % gotspans
+            open("EXPECTED","wb").write(expected)
+            open("GOT","wb").write(got)
+            print "wrote data to EXPECTED and GOT"
+            self.fail("didn't get expected data")
+
+
+    def test_replace_locations(self):
+        # exercise fencepost conditions
+        expected = self.data
+        SEGSIZE = 128*1024
+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
+        d = defer.succeed(None)
+        for offset in suspects:
+            new_data = letters.next()*2 # "AA", then "BB", etc
+            expected = expected[:offset]+new_data+expected[offset+2:]
+            d.addCallback(lambda ign:
+                          self.mdmf_node.get_best_mutable_version())
+            def _modify(mv, offset=offset, new_data=new_data):
+                # capture the current 'offset' and 'new_data' via
+                # default arguments
+                md = MutableData(new_data)
+                return mv.update(md, offset)
+            d.addCallback(_modify)
+            d.addCallback(lambda ignored:
+                          self.mdmf_node.download_best_version())
+            d.addCallback(self._check_differences, expected)
+        return d
+
 
     def test_replace_and_extend(self):
         # We should be able to replace data in the middle of a mutable
}
[mutable/publish: account for offsets on segment boundaries.
Kevan Carstensen <kevan@isnotajoke.com>**20110228083327
 Ignore-this: c8758a0580fcc15a22c2f8582d758a6b
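 
 The starting segment was previously computed as div_ceil(offset,
 segment_size) - 1, which is off by one when the offset lands exactly
 on a segment boundary. A worked example (illustrative values):
 
   from allmydata.util import mathutil
   segsize = 128 * 1024
 
   # offset exactly on a boundary: the data starts in segment 1
   mathutil.div_ceil(segsize, segsize) - 1      # => 0 (old code, wrong)
   mathutil.div_ceil(segsize, segsize)          # => 1 (patched: no
                                                # decrement, since
                                                # segsize % segsize == 0)
   # offset inside segment 1: the decrement still applies
   mathutil.div_ceil(segsize + 1, segsize) - 1  # => 1 (correct)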
] {
hunk ./src/allmydata/mutable/filenode.py 17
 from pycryptopp.cipher.aes import AES
 
 from allmydata.mutable.publish import Publish, MutableData,\
-                                      DEFAULT_MAX_SEGMENT_SIZE, \
                                       TransformingUploadable
 from allmydata.mutable.common import MODE_READ, MODE_WRITE, MODE_CHECK, UnrecoverableFileError, \
      ResponseCache, UncoordinatedWriteError
hunk ./src/allmydata/mutable/filenode.py 1058
         # appending data to the file.
         assert offset <= self.get_size()
 
+        segsize = self._version[3]
         # We'll need the segment that the data starts in, regardless of
         # what we'll do later.
hunk ./src/allmydata/mutable/filenode.py 1061
-        start_segment = mathutil.div_ceil(offset, DEFAULT_MAX_SEGMENT_SIZE)
+        start_segment = mathutil.div_ceil(offset, segsize)
         start_segment -= 1
 
         # We only need the end segment if the data we append does not go
hunk ./src/allmydata/mutable/filenode.py 1069
         end_segment = start_segment
         if offset + data.get_size() < self.get_size():
             end_data = offset + data.get_size()
-            end_segment = mathutil.div_ceil(end_data, DEFAULT_MAX_SEGMENT_SIZE)
+            end_segment = mathutil.div_ceil(end_data, segsize)
             end_segment -= 1
         self._start_segment = start_segment
         self._end_segment = end_segment
hunk ./src/allmydata/mutable/publish.py 551
                                                   segment_size)
             self.starting_segment = mathutil.div_ceil(offset,
                                                       segment_size)
-            self.starting_segment -= 1
+            if offset % segment_size != 0:
+                self.starting_segment -= 1
             if offset == 0:
                 self.starting_segment = 0
 
}
[tahoe-put: raise UsageError when given a nonsensical mutable type, move option validation code to the option parser.
Kevan Carstensen <kevan@isnotajoke.com>**20110301030807
 Ignore-this: 2dc19d8bd741842eff458ca553d0bf2a
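 
 Doing the check in the option parser means the error surfaces through
 Twisted's usage machinery (exit with a usage message) instead of a
 webapi round trip. A standalone sketch of the pattern (hypothetical
 option class, not the real PutOptions):
 
   from twisted.python import usage
 
   class FormatOptions(usage.Options):
       optParameters = [("mutable-type", None, None, "sdmf or mdmf")]
       def postOptions(self):
           if self['mutable-type'] and \
              self['mutable-type'] not in ("sdmf", "mdmf"):
               raise usage.UsageError("%s is an invalid format"
                                      % self['mutable-type'])
 
   # FormatOptions().parseOptions(["--mutable-type=ldmf"])
   # raises usage.UsageError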
] {
hunk ./src/allmydata/scripts/cli.py 186
         if self.from_file == u"-":
             self.from_file = None
 
+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
+
+
     def getSynopsis(self):
         return "Usage:  %s put [options] LOCAL_FILE REMOTE_FILE" % (self.command_name,)
 
hunk ./src/allmydata/scripts/tahoe_put.py 33
     stdout = options.stdout
     stderr = options.stderr
 
-    if mutable_type and mutable_type not in ('sdmf', 'mdmf'):
-        # Don't try to pass unsupported types to the webapi
-        print >>stderr, "error: %s is an invalid format" % mutable_type
-        return 1
-
     if nodeurl[-1] != "/":
         nodeurl += "/"
     if to_file:
hunk ./src/allmydata/test/test_cli.py 1053
         return d
 
     def test_mutable_type_invalid_format(self):
-        self.basedir = "cli/Put/mutable_type_invalid_format"
-        self.set_up_grid()
-        data = "data" * 100000
-        fn1 = os.path.join(self.basedir, "data")
-        fileutil.write(fn1, data)
-        d = self.do_cli("put", "--mutable", "--mutable-type=ldmf", fn1)
-        def _check_failure((rc, out, err)):
-            self.failIfEqual(rc, 0)
-            self.failUnlessIn("invalid", err)
-        d.addCallback(_check_failure)
-        return d
+        o = cli.PutOptions()
+        self.failUnlessRaises(usage.UsageError,
+                              o.parseOptions,
+                              ["--mutable", "--mutable-type=ldmf"])
 
     def test_put_with_nonexistent_alias(self):
         # when invoked with an alias that doesn't exist, 'tahoe put'
}
[web: use None instead of False in the case of no offset, use object identity comparison to check whether or not an offset was specified.
Kevan Carstensen <kevan@isnotajoke.com>**20110305010858
 Ignore-this: 14b7550ca95ce423c9b0b7f6f14ffd2f
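 
 The motivation: in Python, 0 == False is true, so a legitimate
 "?offset=0" was indistinguishable from "no offset given" under the
 old equality check. For example:
 
   offset = 0        # caller explicitly asked to write at the start
   offset == False   # => True; the old test wrongly took the
                     #    "no offset" branch
   offset is None    # => False; identity keeps offset=0 distinct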
] {
hunk ./src/allmydata/test/test_mutable.py 2981
             self.failUnlessEqual(results, new_data))
         return d
 
+    def test_replace_beginning(self):
+        # We should be able to replace data at the beginning of the file
+        # without truncating the file
+        B = "beginning"
+        new_data = B + self.data[len(B):]
+        d = self.mdmf_node.get_best_mutable_version()
+        d.addCallback(lambda mv: mv.update(MutableData(B), 0))
+        d.addCallback(lambda ignored: self.mdmf_node.download_best_version())
+        d.addCallback(lambda results: self.failUnlessEqual(results, new_data))
+        return d
+
     def test_replace_segstart1(self):
         offset = 128*1024+1
         new_data = "NNNN"
hunk ./src/allmydata/test/test_web.py 3204
         d.addCallback(_get_data)
         d.addCallback(lambda results:
             self.failUnlessEqual(results, self.new_data + ("puppies" * 100)))
+        # and try replacing the beginning of the file
+        d.addCallback(lambda ignored:
+            self.PUT("/uri/%s?offset=0" % self.filecap, "begin"))
+        d.addCallback(_get_data)
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results, "begin"+self.new_data[len("begin"):]+("puppies"*100)))
         return d
 
     def test_PUT_update_at_invalid_offset(self):
hunk ./src/allmydata/web/common.py 55
     # message? Since this call is going to be used by programmers and
     # their tools rather than users (through the wui), it is not
     # inconsistent to return that, I guess.
-    return int(offset)
+    if offset is not None:
+        offset = int(offset)
+
+    return offset
 
 
 def get_root(ctx_or_req):
hunk ./src/allmydata/web/filenode.py 219
         req = IRequest(ctx)
         t = get_arg(req, "t", "").strip()
         replace = parse_replace_arg(get_arg(req, "replace", "true"))
-        offset = parse_offset_arg(get_arg(req, "offset", False))
+        offset = parse_offset_arg(get_arg(req, "offset", None))
 
         if not t:
             if not replace:
hunk ./src/allmydata/web/filenode.py 229
                 raise ExistingChildError()
 
             if self.node.is_mutable():
-                if offset == False:
+                if offset is None:
                     return self.replace_my_contents(req)
 
                 if offset >= 0:
hunk ./src/allmydata/web/filenode.py 238
                 raise WebError("PUT to a mutable file: Invalid offset")
 
             else:
-                if offset != False:
+                if offset is not None:
                     raise WebError("PUT to a file: append operation invoked "
                                    "on an immutable cap")
 
}
[mutable/filenode: remove incorrect comments about segment boundaries
Kevan Carstensen <kevan@isnotajoke.com>**20110307081713
 Ignore-this: 7008644c3d9588815000a86edbf9c568
] {
hunk ./src/allmydata/mutable/filenode.py 1001
         offset. I return a Deferred that fires when this has been
         completed.
         """
-        # We have two cases here:
-        # 1. The new data will add few enough segments so that it does
-        #    not cross into the next power-of-two boundary.
-        # 2. It doesn't.
-        # 
-        # In the former case, we can modify the file in place. In the
-        # latter case, we need to re-encode the file.
         new_size = data.get_size() + offset
         old_size = self.get_size()
         segment_size = self._version[3]
hunk ./src/allmydata/mutable/filenode.py 1011
         log.msg("got %d old segments, %d new segments" % \
                         (num_old_segments, num_new_segments))
 
-        # We also do a whole file re-encode if the file is an SDMF file. 
+        # We do a whole file re-encode if the file is an SDMF file. 
         if self._version[2]: # version[2] == SDMF salt, which MDMF lacks
             log.msg("doing re-encode instead of in-place update")
             return self._do_modify_update(data, offset)
hunk ./src/allmydata/mutable/filenode.py 1016
 
+        # Otherwise, we can replace just the parts that are changing.
         log.msg("updating in place")
         d = self._do_update_update(data, offset)
         d.addCallback(self._decode_and_decrypt_segments, data, offset)
}
[mutable: use integer division where appropriate
Kevan Carstensen <kevan@isnotajoke.com>**20110307082229
 Ignore-this: a8767e89d919c9f2a5d5fef3953d53f9
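 
 The rule of thumb: floor division maps a byte offset to the segment
 that contains it, while div_ceil maps a byte length to the number of
 segments needed to hold it. A worked example (illustrative values):
 
   from allmydata.util import mathutil
   segsize = 128 * 1024
   datalength = 2 * segsize + 5            # two full segments plus a tail
 
   datalength // segsize                   # => 2: counts full segments only
   mathutil.div_ceil(datalength, segsize)  # => 3: includes the tail segment
 
   offset = segsize                        # first byte of segment 1
   offset // segsize                       # => 1: segment containing offset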
] {
hunk ./src/allmydata/mutable/filenode.py 1055
         segsize = self._version[3]
         # We'll need the segment that the data starts in, regardless of
         # what we'll do later.
-        start_segment = mathutil.div_ceil(offset, segsize)
-        start_segment -= 1
+        start_segment = offset // segsize
 
         # We only need the end segment if the data we append does not go
         # beyond the current end-of-file.
hunk ./src/allmydata/mutable/filenode.py 1062
         end_segment = start_segment
         if offset + data.get_size() < self.get_size():
             end_data = offset + data.get_size()
-            end_segment = mathutil.div_ceil(end_data, segsize)
-            end_segment -= 1
+            end_segment = end_data // segsize
+
         self._start_segment = start_segment
         self._end_segment = end_segment
 
hunk ./src/allmydata/mutable/publish.py 547
 
         # Calculate the starting segment for the upload.
         if segment_size:
+            # We use div_ceil instead of integer division here because
+            # it is semantically correct.
+            # If datalength isn't an even multiple of segment_size,
+            # datalength // segment_size counts only the full segments
+            # and ignores the trailing partial segment. That's not what
+            # we want, because it leaves out the extra data. div_ceil
+            # gives us the right number of segments for the data that
+            # we're given.
             self.num_segments = mathutil.div_ceil(self.datalength,
                                                   segment_size)
hunk ./src/allmydata/mutable/publish.py 558
-            self.starting_segment = mathutil.div_ceil(offset,
-                                                      segment_size)
-            if offset % segment_size != 0:
-                self.starting_segment -= 1
-            if offset == 0:
-                self.starting_segment = 0
+
+            self.starting_segment = offset // segment_size
 
         else:
             self.num_segments = 0
hunk ./src/allmydata/mutable/publish.py 604
         self.end_segment = self.num_segments - 1
         # Now figure out where the last segment should be.
         if self.data.get_size() != self.datalength:
+            # We're updating a few segments in the middle of a mutable
+            # file, so we don't want to republish the whole thing. (We
+            # don't have enough data to do that even if we wanted to.)
             end = self.data.get_size()
hunk ./src/allmydata/mutable/publish.py 609
-            self.end_segment = mathutil.div_ceil(end,
-                                                 segment_size)
-            self.end_segment -= 1
+            self.end_segment = end // segment_size
+            if end % segment_size == 0:
+                self.end_segment -= 1
+
         self.log("got start segment %d" % self.starting_segment)
         self.log("got end segment %d" % self.end_segment)
 
}
[mutable/layout.py: reorder on-disk format to put variable-length fields at the end of the share, after a predictably sized preamble
Kevan Carstensen <kevan@isnotajoke.com>**20110501224125
 Ignore-this: 8b2c5d29b8984dfe675c1a2ada5205cf
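 
 With fixed-size slots reserved for the variable-length fields, every
 offset up to the share data becomes a constant. A quick check of the
 arithmetic (constants as defined in the hunks below; HASH_SIZE is
 assumed to be 32):
 
   import struct
   MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"
   HASH_SIZE = 32
   PRIVATE_KEY_SIZE = 2000
   SIGNATURE_SIZE = 10000
   VERIFICATION_KEY_SIZE = 2000
   SHARE_HASH_CHAIN_SIZE = HASH_SIZE * 256
 
   header = struct.calcsize(MDMFHEADER)    # => 123 bytes
   share_data_offset = (header + PRIVATE_KEY_SIZE + SIGNATURE_SIZE +
                        VERIFICATION_KEY_SIZE + SHARE_HASH_CHAIN_SIZE)
   # => 22315; the same for every MDMF share, so a smart downloader
   # can fetch the whole preamble in one predictably-sized read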
] {
hunk ./src/allmydata/mutable/layout.py 539
                                      self._readvs)
 
 
-MDMFHEADER = ">BQ32sBBQQ QQQQQQ"
+MDMFHEADER = ">BQ32sBBQQ QQQQQQQQ"
 MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ"
 MDMFHEADERSIZE = struct.calcsize(MDMFHEADER)
 MDMFHEADERWITHOUTOFFSETSSIZE = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
hunk ./src/allmydata/mutable/layout.py 545
 MDMFCHECKSTRING = ">BQ32s"
 MDMFSIGNABLEHEADER = ">BQ32sBBQQ"
-MDMFOFFSETS = ">QQQQQQ"
+MDMFOFFSETS = ">QQQQQQQQ"
 MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS)
hunk ./src/allmydata/mutable/layout.py 547
+# XXX Fix this.
+PRIVATE_KEY_SIZE = 2000
+SIGNATURE_SIZE = 10000
+VERIFICATION_KEY_SIZE = 2000
+# We know we won't ever have more than 256 shares.
+# XXX: This, too, can be 
+SHARE_HASH_CHAIN_SIZE = HASH_SIZE * 256
 
 class MDMFSlotWriteProxy:
     implements(IMutableSlotWriter)
hunk ./src/allmydata/mutable/layout.py 577
     # 51          8           The data length of the original plaintext
     #-- end signed part --
     # 59          8           The offset of the encrypted private key
-    # 83          8           The offset of the signature
-    # 91          8           The offset of the verification key
-    # 67          8           The offset of the block hash tree
-    # 75          8           The offset of the share hash chain 
-    # 99          8           The offset of the EOF
-    #
-    # followed by salts and share data, the encrypted private key, the
-    # block hash tree, the share hash chain, a signature over the first
-    # eight fields, and a verification key.
+    # 67          8           The offset of the share hash chain
+    # 75          8           The offset of the signature
+    # 83          8           The offset of the verification key
+    # 91          8           The offset of the end of the verification key
+    # 99          8           The offset of the share data
+    # 107         8           The offset of the block hash tree
+    # 115         8           The offset of EOF
     # 
hunk ./src/allmydata/mutable/layout.py 585
+    # followed by the encrypted private key, share hash chain,
+    # signature, verification key, share data, and block hash tree.
+    # We order the fields that way to make smart downloaders --
+    # downloaders which preemptively read a big part of the share --
+    # possible.
+    #
     # The checkstring is the first three fields -- the version number,
     # sequence number, root hash and root salt hash. This is consistent
     # in meaning to what we have with SDMF files, except now instead of
hunk ./src/allmydata/mutable/layout.py 792
         data_size += self._tail_block_size
         data_size += SALT_SIZE
         self._offsets['enc_privkey'] = MDMFHEADERSIZE
-        self._offsets['enc_privkey'] += data_size
-        # We'll wait for the rest. Callers can now call my "put_block" and
-        # "set_checkstring" methods.
+
+        # We don't define offsets for these because we want them to be
+        # tightly packed -- this allows us to ignore the responsibility
+        # of padding individual values, and of removing that padding
+        # later. So nonconstant_start is where we start writing
+        # nonconstant data.
+        nonconstant_start = self._offsets['enc_privkey']
+        nonconstant_start += PRIVATE_KEY_SIZE
+        nonconstant_start += SIGNATURE_SIZE
+        nonconstant_start += VERIFICATION_KEY_SIZE
+        nonconstant_start += SHARE_HASH_CHAIN_SIZE
+
+        self._offsets['share_data'] = nonconstant_start
+
+        # Finally, we know how big the share data will be, so we can
+        # figure out where the block hash tree needs to go.
+        # XXX: But this will go away if Zooko wants to make it so that
+        # you don't need to know the size of the file before you start
+        # uploading it.
+        self._offsets['block_hash_tree'] = self._offsets['share_data'] + \
+                    data_size
+
+        # Done. We can now start writing.
 
 
     def set_checkstring(self,
hunk ./src/allmydata/mutable/layout.py 891
         anything to be written yet.
         """
         if segnum >= self._num_segments:
-            raise LayoutInvalid("I won't overwrite the private key")
+            raise LayoutInvalid("I won't overwrite the block hash tree")
         if len(salt) != SALT_SIZE:
             raise LayoutInvalid("I was given a salt of size %d, but "
                                 "I wanted a salt of size %d")
hunk ./src/allmydata/mutable/layout.py 902
             raise LayoutInvalid("I was given the wrong size block to write")
 
         # We want to write at len(MDMFHEADER) + segnum * block_size.
-        offset = MDMFHEADERSIZE + (self._actual_block_size * segnum)
+        offset = self._offsets['share_data'] + \
+            (self._actual_block_size * segnum)
         data = salt + data
 
         self._writevs.append(tuple([offset, data]))
hunk ./src/allmydata/mutable/layout.py 922
         # tree, the block hash tree writing method writes the offset of
         # the share hash chain. So that's a good indicator of whether or
         # not the block hash tree has been written.
-        if "share_hash_chain" in self._offsets:
-            raise LayoutInvalid("You must write this before the block hash tree")
+        if "signature" in self._offsets:
+            raise LayoutInvalid("You can't put the encrypted private key "
+                                "after putting the share hash chain")
+
+        self._offsets['share_hash_chain'] = self._offsets['enc_privkey'] + \
+                len(encprivkey)
 
hunk ./src/allmydata/mutable/layout.py 929
-        self._offsets['block_hash_tree'] = self._offsets['enc_privkey'] + \
-            len(encprivkey)
         self._writevs.append(tuple([self._offsets['enc_privkey'], encprivkey]))
 
 
hunk ./src/allmydata/mutable/layout.py 944
         offset of the share hash chain.
         """
         assert self._offsets
+        assert "block_hash_tree" in self._offsets
+
         assert isinstance(blockhashes, list)
hunk ./src/allmydata/mutable/layout.py 947
-        if "block_hash_tree" not in self._offsets:
-            raise LayoutInvalid("You must put the encrypted private key "
-                                "before you put the block hash tree")
-        # If written, the share hash chain causes the signature offset
-        # to be defined.
-        if "signature" in self._offsets:
-            raise LayoutInvalid("You must put the block hash tree before "
-                                "you put the share hash chain")
+
         blockhashes_s = "".join(blockhashes)
hunk ./src/allmydata/mutable/layout.py 949
-        self._offsets['share_hash_chain'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
+        self._offsets['EOF'] = self._offsets['block_hash_tree'] + len(blockhashes_s)
 
         self._writevs.append(tuple([self._offsets['block_hash_tree'],
                                   blockhashes_s]))
hunk ./src/allmydata/mutable/layout.py 969
         is before you can generate a valid signature.
         """
         assert isinstance(sharehashes, dict)
+        assert self._offsets
         if "share_hash_chain" not in self._offsets:
hunk ./src/allmydata/mutable/layout.py 971
-            raise LayoutInvalid("You need to put the block hash tree before "
-                                "you can put the share hash chain")
+            raise LayoutInvalid("You must put the block hash tree before "
+                                "putting the share hash chain")
+
         # The signature comes after the share hash chain. If the
         # signature has already been written, we must not write another
         # share hash chain. The signature writes the verification key
hunk ./src/allmydata/mutable/layout.py 984
                                 "before you write the signature")
         sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i])
                                   for i in sorted(sharehashes.keys())])
-        self._offsets['signature'] = self._offsets['share_hash_chain'] + len(sharehashes_s)
+        self._offsets['signature'] = self._offsets['share_hash_chain'] + \
+            len(sharehashes_s)
         self._writevs.append(tuple([self._offsets['share_hash_chain'],
                             sharehashes_s]))
 
hunk ./src/allmydata/mutable/layout.py 1002
         # Signature is defined by the routine that places the share hash
         # chain, so it's a good thing to look for in finding out whether
         # or not the share hash chain exists on the remote server.
-        if "signature" not in self._offsets:
-            raise LayoutInvalid("You need to put the share hash chain "
-                                "before you can put the root share hash")
         if len(roothash) != HASH_SIZE:
             raise LayoutInvalid("hashes and salts must be exactly %d bytes"
                                  % HASH_SIZE)
hunk ./src/allmydata/mutable/layout.py 1053
         # If we put the signature after we put the verification key, we
         # could end up running into the verification key, and will
         # probably screw up the offsets as well. So we don't allow that.
+        if "verification_key_end" in self._offsets:
+            raise LayoutInvalid("You can't put the signature after the "
+                                "verification key")
         # The method that writes the verification key defines the EOF
         # offset before writing the verification key, so look for that.
hunk ./src/allmydata/mutable/layout.py 1058
-        if "EOF" in self._offsets:
-            raise LayoutInvalid("You must write the signature before the verification key")
-
-        self._offsets['verification_key'] = self._offsets['signature'] + len(signature)
+        self._offsets['verification_key'] = self._offsets['signature'] +\
+            len(signature)
         self._writevs.append(tuple([self._offsets['signature'], signature]))
 
 
hunk ./src/allmydata/mutable/layout.py 1074
         if "verification_key" not in self._offsets:
             raise LayoutInvalid("You must put the signature before you "
                                 "can put the verification key")
-        self._offsets['EOF'] = self._offsets['verification_key'] + len(verification_key)
+
+        self._offsets['verification_key_end'] = \
+            self._offsets['verification_key'] + len(verification_key)
         self._writevs.append(tuple([self._offsets['verification_key'],
                             verification_key]))
 
hunk ./src/allmydata/mutable/layout.py 1102
         of the write vectors that I've dealt with so far to be published
         to the remote server, ending the write process.
         """
-        if "EOF" not in self._offsets:
+        if "verification_key_end" not in self._offsets:
             raise LayoutInvalid("You must put the verification key before "
                                 "you can publish the offsets")
         offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS)
hunk ./src/allmydata/mutable/layout.py 1108
         offsets = struct.pack(MDMFOFFSETS,
                               self._offsets['enc_privkey'],
-                              self._offsets['block_hash_tree'],
                               self._offsets['share_hash_chain'],
                               self._offsets['signature'],
                               self._offsets['verification_key'],
hunk ./src/allmydata/mutable/layout.py 1111
+                              self._offsets['verification_key_end'],
+                              self._offsets['share_data'],
+                              self._offsets['block_hash_tree'],
                               self._offsets['EOF'])
         self._writevs.append(tuple([offsets_offset, offsets]))
         encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING)
hunk ./src/allmydata/mutable/layout.py 1227
         # MDMF, though we'll be left with 4 more bytes than we
         # need if this ends up being MDMF. This is probably less
         # expensive than the cost of a second roundtrip.
-        readvs = [(0, 107)]
+        readvs = [(0, 123)]
         d = self._read(readvs, force_remote)
         d.addCallback(self._process_encoding_parameters)
         d.addCallback(self._process_offsets)
hunk ./src/allmydata/mutable/layout.py 1330
             read_length = MDMFOFFSETS_LENGTH
             end = read_offset + read_length
             (encprivkey,
-             blockhashes,
              sharehashes,
              signature,
              verification_key,
hunk ./src/allmydata/mutable/layout.py 1333
+             verification_key_end,
+             sharedata,
+             blockhashes,
              eof) = struct.unpack(MDMFOFFSETS,
                                   offsets[read_offset:end])
             self._offsets = {}
hunk ./src/allmydata/mutable/layout.py 1344
             self._offsets['share_hash_chain'] = sharehashes
             self._offsets['signature'] = signature
             self._offsets['verification_key'] = verification_key
+            self._offsets['verification_key_end']= \
+                verification_key_end
             self._offsets['EOF'] = eof
hunk ./src/allmydata/mutable/layout.py 1347
+            self._offsets['share_data'] = sharedata
 
 
     def get_block_and_salt(self, segnum, queue=False):
hunk ./src/allmydata/mutable/layout.py 1357
         """
         d = self._maybe_fetch_offsets_and_header()
         def _then(ignored):
-            if self._version_number == 1:
-                base_share_offset = MDMFHEADERSIZE
-            else:
-                base_share_offset = self._offsets['share_data']
+            base_share_offset = self._offsets['share_data']
 
             if segnum + 1 > self._num_segments:
                 raise LayoutInvalid("Not a valid segment number")
hunk ./src/allmydata/mutable/layout.py 1430
         def _then(ignored):
             blockhashes_offset = self._offsets['block_hash_tree']
             if self._version_number == 1:
-                blockhashes_length = self._offsets['share_hash_chain'] - blockhashes_offset
+                blockhashes_length = self._offsets['EOF'] - blockhashes_offset
             else:
                 blockhashes_length = self._offsets['share_data'] - blockhashes_offset
             readvs = [(blockhashes_offset, blockhashes_length)]
hunk ./src/allmydata/mutable/layout.py 1501
             if self._version_number == 0:
                 privkey_length = self._offsets['EOF'] - privkey_offset
             else:
-                privkey_length = self._offsets['block_hash_tree'] - privkey_offset
+                privkey_length = self._offsets['share_hash_chain'] - privkey_offset
             readvs = [(privkey_offset, privkey_length)]
             return readvs
         d.addCallback(_make_readvs)
hunk ./src/allmydata/mutable/layout.py 1549
         def _make_readvs(ignored):
             if self._version_number == 1:
                 vk_offset = self._offsets['verification_key']
-                vk_length = self._offsets['EOF'] - vk_offset
+                vk_length = self._offsets['verification_key_end'] - vk_offset
             else:
                 vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ")
                 vk_length = self._offsets['signature'] - vk_offset
hunk ./src/allmydata/test/test_storage.py 26
 from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
                                      LayoutInvalid, MDMFSIGNABLEHEADER, \
                                      SIGNED_PREFIX, MDMFHEADER, \
-                                     MDMFOFFSETS, SDMFSlotWriteProxy
+                                     MDMFOFFSETS, SDMFSlotWriteProxy, \
+                                     PRIVATE_KEY_SIZE, \
+                                     SIGNATURE_SIZE, \
+                                     VERIFICATION_KEY_SIZE, \
+                                     SHARE_HASH_CHAIN_SIZE
 from allmydata.interfaces import BadWriteEnablerError
 from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
 from allmydata.test.common_web import WebRenderingMixin
hunk ./src/allmydata/test/test_storage.py 1408
 
         # The encrypted private key comes after the shares + salts
         offset_size = struct.calcsize(MDMFOFFSETS)
-        encrypted_private_key_offset = len(data) + offset_size + len(sharedata)
-        # The blockhashes come after the private key
-        blockhashes_offset = encrypted_private_key_offset + len(self.encprivkey)
-        # The sharehashes come after the salt hashes
-        sharehashes_offset = blockhashes_offset + len(self.block_hash_tree_s)
-        # The signature comes after the share hash chain
+        encrypted_private_key_offset = len(data) + offset_size
+        # The share hash chain comes after the private key
+        sharehashes_offset = encrypted_private_key_offset + \
+            len(self.encprivkey)
+
+        # The signature comes after the share hash chain.
         signature_offset = sharehashes_offset + len(self.share_hash_chain_s)
hunk ./src/allmydata/test/test_storage.py 1415
-        # The verification key comes after the signature
-        verification_offset = signature_offset + len(self.signature)
-        # The EOF comes after the verification key
-        eof_offset = verification_offset + len(self.verification_key)
+
+        verification_key_offset = signature_offset + len(self.signature)
+        verification_key_end = verification_key_offset + \
+            len(self.verification_key)
+
+        share_data_offset = offset_size
+        share_data_offset += PRIVATE_KEY_SIZE
+        share_data_offset += SIGNATURE_SIZE
+        share_data_offset += VERIFICATION_KEY_SIZE
+        share_data_offset += SHARE_HASH_CHAIN_SIZE
+
+        blockhashes_offset = share_data_offset + len(sharedata)
+        eof_offset = blockhashes_offset + len(self.block_hash_tree_s)
+
         data += struct.pack(MDMFOFFSETS,
                             encrypted_private_key_offset,
hunk ./src/allmydata/test/test_storage.py 1431
-                            blockhashes_offset,
                             sharehashes_offset,
                             signature_offset,
hunk ./src/allmydata/test/test_storage.py 1433
-                            verification_offset,
+                            verification_key_offset,
+                            verification_key_end,
+                            share_data_offset,
+                            blockhashes_offset,
                             eof_offset)
hunk ./src/allmydata/test/test_storage.py 1438
+
         self.offsets = {}
         self.offsets['enc_privkey'] = encrypted_private_key_offset
         self.offsets['block_hash_tree'] = blockhashes_offset
hunk ./src/allmydata/test/test_storage.py 1444
         self.offsets['share_hash_chain'] = sharehashes_offset
         self.offsets['signature'] = signature_offset
-        self.offsets['verification_key'] = verification_offset
+        self.offsets['verification_key'] = verification_key_offset
+        self.offsets['share_data'] = share_data_offset
+        self.offsets['verification_key_end'] = verification_key_end
         self.offsets['EOF'] = eof_offset
hunk ./src/allmydata/test/test_storage.py 1448
-        # Next, we'll add in the salts and share data,
-        data += sharedata
+
         # the private key,
         data += self.encprivkey
hunk ./src/allmydata/test/test_storage.py 1451
-        # the block hash tree,
-        data += self.block_hash_tree_s
-        # the share hash chain,
+        # the sharehashes
         data += self.share_hash_chain_s
         # the signature,
         data += self.signature
hunk ./src/allmydata/test/test_storage.py 1457
         # and the verification key
         data += self.verification_key
+        # Then we'll pad with spaces until we get to the share data
+        # offset.
+        padding = " " * (share_data_offset - len(data))
+        data += padding
+
+        # Then the share data
+        data += sharedata
+        # the blockhashes
+        data += self.block_hash_tree_s
         return data
 
 
hunk ./src/allmydata/test/test_storage.py 1729
         return d
 
 
-    def test_blockhashes_after_share_hash_chain(self):
+    def test_private_key_after_share_hash_chain(self):
         mw = self._make_new_mw("si1", 0)
         d = defer.succeed(None)
hunk ./src/allmydata/test/test_storage.py 1732
-        # Put everything up to and including the share hash chain
         for i in xrange(6):
             d.addCallback(lambda ignored, i=i:
                 mw.put_block(self.block, i, self.salt))
hunk ./src/allmydata/test/test_storage.py 1738
         d.addCallback(lambda ignored:
             mw.put_encprivkey(self.encprivkey))
         d.addCallback(lambda ignored:
-            mw.put_blockhashes(self.block_hash_tree))
-        d.addCallback(lambda ignored:
             mw.put_sharehashes(self.share_hash_chain))
 
hunk ./src/allmydata/test/test_storage.py 1740
-        # Now try to put the block hash tree again.
+        # Now try to put the private key again.
         d.addCallback(lambda ignored:
hunk ./src/allmydata/test/test_storage.py 1742
-            self.shouldFail(LayoutInvalid, "test repeat salthashes",
-                            None,
-                            mw.put_blockhashes, self.block_hash_tree))
-        return d
-
-
-    def test_encprivkey_after_blockhashes(self):
-        mw = self._make_new_mw("si1", 0)
-        d = defer.succeed(None)
-        # Put everything up to and including the block hash tree
-        for i in xrange(6):
-            d.addCallback(lambda ignored, i=i:
-                mw.put_block(self.block, i, self.salt))
-        d.addCallback(lambda ignored:
-            mw.put_encprivkey(self.encprivkey))
-        d.addCallback(lambda ignored:
-            mw.put_blockhashes(self.block_hash_tree))
-        d.addCallback(lambda ignored:
-            self.shouldFail(LayoutInvalid, "out of order private key",
+            self.shouldFail(LayoutInvalid, "test repeat private key",
                             None,
                             mw.put_encprivkey, self.encprivkey))
         return d
hunk ./src/allmydata/test/test_storage.py 1748
 
 
-    def test_share_hash_chain_after_signature(self):
-        mw = self._make_new_mw("si1", 0)
-        d = defer.succeed(None)
-        # Put everything up to and including the signature
-        for i in xrange(6):
-            d.addCallback(lambda ignored, i=i:
-                mw.put_block(self.block, i, self.salt))
-        d.addCallback(lambda ignored:
-            mw.put_encprivkey(self.encprivkey))
-        d.addCallback(lambda ignored:
-            mw.put_blockhashes(self.block_hash_tree))
-        d.addCallback(lambda ignored:
-            mw.put_sharehashes(self.share_hash_chain))
-        d.addCallback(lambda ignored:
-            mw.put_root_hash(self.root_hash))
-        d.addCallback(lambda ignored:
-            mw.put_signature(self.signature))
-        # Now try to put the share hash chain again. This should fail
-        d.addCallback(lambda ignored:
-            self.shouldFail(LayoutInvalid, "out of order share hash chain",
-                            None,
-                            mw.put_sharehashes, self.share_hash_chain))
-        return d
-
-
     def test_signature_after_verification_key(self):
         mw = self._make_new_mw("si1", 0)
         d = defer.succeed(None)
hunk ./src/allmydata/test/test_storage.py 1877
         mw = self._make_new_mw("si1", 0)
         # Test writing some blocks.
         read = self.ss.remote_slot_readv
-        expected_sharedata_offset = struct.calcsize(MDMFHEADER)
+        expected_private_key_offset = struct.calcsize(MDMFHEADER)
+        expected_sharedata_offset = struct.calcsize(MDMFHEADER) + \
+                                    PRIVATE_KEY_SIZE + \
+                                    SIGNATURE_SIZE + \
+                                    VERIFICATION_KEY_SIZE + \
+                                    SHARE_HASH_CHAIN_SIZE
         written_block_size = 2 + len(self.salt)
         written_block = self.block + self.salt
         for i in xrange(6):
hunk ./src/allmydata/test/test_storage.py 1903
                 self.failUnlessEqual(read("si1", [0], [(expected_sharedata_offset + (i * written_block_size), written_block_size)]),
                                 {0: [written_block]})
 
-            expected_private_key_offset = expected_sharedata_offset + \
-                                      len(written_block) * 6
             self.failUnlessEqual(len(self.encprivkey), 7)
             self.failUnlessEqual(read("si1", [0], [(expected_private_key_offset, 7)]),
                                  {0: [self.encprivkey]})
hunk ./src/allmydata/test/test_storage.py 1907
 
-            expected_block_hash_offset = expected_private_key_offset + len(self.encprivkey)
+            expected_block_hash_offset = expected_sharedata_offset + \
+                        (6 * written_block_size)
             self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6)
             self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]),
                                  {0: [self.block_hash_tree_s]})
hunk ./src/allmydata/test/test_storage.py 1913
 
-            expected_share_hash_offset = expected_block_hash_offset + len(self.block_hash_tree_s)
+            expected_share_hash_offset = expected_private_key_offset + len(self.encprivkey)
             self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]),
                                  {0: [self.share_hash_chain_s]})
 
hunk ./src/allmydata/test/test_storage.py 1919
             self.failUnlessEqual(read("si1", [0], [(9, 32)]),
                                  {0: [self.root_hash]})
-            expected_signature_offset = expected_share_hash_offset + len(self.share_hash_chain_s)
+            expected_signature_offset = expected_share_hash_offset + \
+                len(self.share_hash_chain_s)
             self.failUnlessEqual(len(self.signature), 9)
             self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]),
                                  {0: [self.signature]})
hunk ./src/allmydata/test/test_storage.py 1941
             self.failUnlessEqual(n, 10)
             self.failUnlessEqual(segsize, 6)
             self.failUnlessEqual(datalen, 36)
-            expected_eof_offset = expected_verification_key_offset + len(self.verification_key)
+            expected_eof_offset = expected_block_hash_offset + \
+                len(self.block_hash_tree_s)
 
             # Check the version number to make sure that it is correct.
             expected_version_number = struct.pack(">B", 1)
hunk ./src/allmydata/test/test_storage.py 1969
             expected_offset = struct.pack(">Q", expected_private_key_offset)
             self.failUnlessEqual(read("si1", [0], [(59, 8)]),
                                  {0: [expected_offset]})
-            expected_offset = struct.pack(">Q", expected_block_hash_offset)
+            expected_offset = struct.pack(">Q", expected_share_hash_offset)
             self.failUnlessEqual(read("si1", [0], [(67, 8)]),
                                  {0: [expected_offset]})
hunk ./src/allmydata/test/test_storage.py 1972
-            expected_offset = struct.pack(">Q", expected_share_hash_offset)
+            expected_offset = struct.pack(">Q", expected_signature_offset)
             self.failUnlessEqual(read("si1", [0], [(75, 8)]),
                                  {0: [expected_offset]})
hunk ./src/allmydata/test/test_storage.py 1975
-            expected_offset = struct.pack(">Q", expected_signature_offset)
+            expected_offset = struct.pack(">Q", expected_verification_key_offset)
             self.failUnlessEqual(read("si1", [0], [(83, 8)]),
                                  {0: [expected_offset]})
hunk ./src/allmydata/test/test_storage.py 1978
-            expected_offset = struct.pack(">Q", expected_verification_key_offset)
+            expected_offset = struct.pack(">Q", expected_verification_key_offset + len(self.verification_key))
             self.failUnlessEqual(read("si1", [0], [(91, 8)]),
                                  {0: [expected_offset]})
hunk ./src/allmydata/test/test_storage.py 1981
-            expected_offset = struct.pack(">Q", expected_eof_offset)
+            expected_offset = struct.pack(">Q", expected_sharedata_offset)
             self.failUnlessEqual(read("si1", [0], [(99, 8)]),
                                  {0: [expected_offset]})
hunk ./src/allmydata/test/test_storage.py 1984
+            expected_offset = struct.pack(">Q", expected_block_hash_offset)
+            self.failUnlessEqual(read("si1", [0], [(107, 8)]),
+                                 {0: [expected_offset]})
+            expected_offset = struct.pack(">Q", expected_eof_offset)
+            self.failUnlessEqual(read("si1", [0], [(115, 8)]),
+                                 {0: [expected_offset]})
         d.addCallback(_check_publish)
         return d
 
hunk ./src/allmydata/test/test_storage.py 2117
         for i in xrange(6):
             d.addCallback(lambda ignored, i=i:
                 mw0.put_block(self.block, i, self.salt))
-        # Try to write the block hashes before writing the encrypted
-        # private key
-        d.addCallback(lambda ignored:
-            self.shouldFail(LayoutInvalid, "block hashes before key",
-                            None, mw0.put_blockhashes,
-                            self.block_hash_tree))
-
-        # Write the private key.
-        d.addCallback(lambda ignored:
-            mw0.put_encprivkey(self.encprivkey))
-
 
hunk ./src/allmydata/test/test_storage.py 2118
-        # Try to write the share hash chain without writing the block
-        # hash tree
+        # Try to write the share hash chain without writing the
+        # encrypted private key
         d.addCallback(lambda ignored:
             self.shouldFail(LayoutInvalid, "share hash chain before "
hunk ./src/allmydata/test/test_storage.py 2122
-                                           "salt hash tree",
+                                           "private key",
                             None,
                             mw0.put_sharehashes, self.share_hash_chain))
hunk ./src/allmydata/test/test_storage.py 2125
-
-        # Try to write the root hash and without writing either the
-        # block hashes or the or the share hashes
+        # Write the private key.
         d.addCallback(lambda ignored:
hunk ./src/allmydata/test/test_storage.py 2127
-            self.shouldFail(LayoutInvalid, "root hash before share hashes",
-                            None,
-                            mw0.put_root_hash, self.root_hash))
+            mw0.put_encprivkey(self.encprivkey))
 
         # Now write the block hashes and try again
         d.addCallback(lambda ignored:
hunk ./src/allmydata/test/test_storage.py 2133
             mw0.put_blockhashes(self.block_hash_tree))
 
-        d.addCallback(lambda ignored:
-            self.shouldFail(LayoutInvalid, "root hash before share hashes",
-                            None, mw0.put_root_hash, self.root_hash))
-
         # We haven't yet put the root hash on the share, so we shouldn't
         # be able to sign it.
         d.addCallback(lambda ignored:
hunk ./src/allmydata/test/test_storage.py 2378
         # This should be enough to fill in both the encoding parameters
         # and the table of offsets, which will complete the version
         # information tuple.
-        d.addCallback(_make_mr, 107)
+        d.addCallback(_make_mr, 123)
         d.addCallback(lambda mr:
             mr.get_verinfo())
         def _check_verinfo(verinfo):
hunk ./src/allmydata/test/test_storage.py 2412
         d.addCallback(_check_verinfo)
         # This is not enough data to read a block and a share, so the
         # wrapper should attempt to read this from the remote server.
-        d.addCallback(_make_mr, 107)
+        d.addCallback(_make_mr, 123)
         d.addCallback(lambda mr:
             mr.get_block_and_salt(0))
         def _check_block_and_salt((block, salt)):
hunk ./src/allmydata/test/test_storage.py 2420
             self.failUnlessEqual(salt, self.salt)
             self.failUnlessEqual(self.rref.read_count, 1)
         # This should be enough data to read one block.
-        d.addCallback(_make_mr, 249)
+        d.addCallback(_make_mr, 123 + PRIVATE_KEY_SIZE + SIGNATURE_SIZE + VERIFICATION_KEY_SIZE + SHARE_HASH_CHAIN_SIZE + 140)
         d.addCallback(lambda mr:
             mr.get_block_and_salt(0))
         d.addCallback(_check_block_and_salt)
hunk ./src/allmydata/test/test_storage.py 2438
         # This should be enough to get us the encoding parameters,
         # offset table, and everything else we need to build a verinfo
         # string.
-        d.addCallback(_make_mr, 107)
+        d.addCallback(_make_mr, 123)
         d.addCallback(lambda mr:
             mr.get_verinfo())
         def _check_verinfo(verinfo):
hunk ./src/allmydata/test/test_storage.py 2473
             self.failUnlessEqual(self.rref.read_count, 0)
         d.addCallback(_check_verinfo)
         # This shouldn't be enough to read any share data.
-        d.addCallback(_make_mr, 107)
+        d.addCallback(_make_mr, 123)
         d.addCallback(lambda mr:
             mr.get_block_and_salt(0))
         def _check_block_and_salt((block, salt)):
}
[uri.py: Add MDMF cap
Kevan Carstensen <kevan@isnotajoke.com>**20110501224249
 Ignore-this: a6d1046d33f5cc811c5e8b10af925f33
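 
 MDMF caps follow the shape of the existing SSK caps, with room for
 colon-separated extension fields after the fingerprint. An
 illustrative (made-up) pair of cap strings:
 
   URI:MDMF:<writekey>:<fingerprint>            # bare writecap
   URI:MDMF:<writekey>:<fingerprint>:131073:3   # with extension params
 
 get_extension_params() returns the trailing fields (['131073', '3']
 in the second case), and attenuating the cap to a readcap preserves
 them.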
] {
hunk ./src/allmydata/interfaces.py 546
 
 class IMutableFileURI(Interface):
     """I am a URI which represents a mutable filenode."""
+    def get_extension_params():
+        """Return the extension parameters in the URI"""
 
 class IDirectoryURI(Interface):
     pass
hunk ./src/allmydata/test/test_uri.py 2
 
+import re
 from twisted.trial import unittest
 from allmydata import uri
 from allmydata.util import hashutil, base32
hunk ./src/allmydata/test/test_uri.py 259
         uri.CHKFileURI.init_from_string(fileURI)
 
 class Mutable(testutil.ReallyEqualMixin, unittest.TestCase):
-    def test_pack(self):
-        writekey = "\x01" * 16
-        fingerprint = "\x02" * 32
+    def setUp(self):
+        self.writekey = "\x01" * 16
+        self.fingerprint = "\x02" * 32
+        self.readkey = hashutil.ssk_readkey_hash(self.writekey)
+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
 
hunk ./src/allmydata/test/test_uri.py 265
-        u = uri.WriteableSSKFileURI(writekey, fingerprint)
-        self.failUnlessReallyEqual(u.writekey, writekey)
-        self.failUnlessReallyEqual(u.fingerprint, fingerprint)
+    def test_pack(self):
+        u = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
+        self.failUnlessReallyEqual(u.writekey, self.writekey)
+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
         self.failIf(u.is_readonly())
         self.failUnless(u.is_mutable())
         self.failUnless(IURI.providedBy(u))
hunk ./src/allmydata/test/test_uri.py 281
         self.failUnlessReallyEqual(u, u_h)
 
         u2 = uri.from_string(u.to_string())
-        self.failUnlessReallyEqual(u2.writekey, writekey)
-        self.failUnlessReallyEqual(u2.fingerprint, fingerprint)
+        self.failUnlessReallyEqual(u2.writekey, self.writekey)
+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
         self.failIf(u2.is_readonly())
         self.failUnless(u2.is_mutable())
         self.failUnless(IURI.providedBy(u2))
hunk ./src/allmydata/test/test_uri.py 297
         self.failUnless(isinstance(u2imm, uri.UnknownURI), u2imm)
 
         u3 = u2.get_readonly()
-        readkey = hashutil.ssk_readkey_hash(writekey)
-        self.failUnlessReallyEqual(u3.fingerprint, fingerprint)
+        readkey = hashutil.ssk_readkey_hash(self.writekey)
+        self.failUnlessReallyEqual(u3.fingerprint, self.fingerprint)
         self.failUnlessReallyEqual(u3.readkey, readkey)
         self.failUnless(u3.is_readonly())
         self.failUnless(u3.is_mutable())
hunk ./src/allmydata/test/test_uri.py 317
         u3_h = uri.ReadonlySSKFileURI.init_from_human_encoding(he)
         self.failUnlessReallyEqual(u3, u3_h)
 
-        u4 = uri.ReadonlySSKFileURI(readkey, fingerprint)
-        self.failUnlessReallyEqual(u4.fingerprint, fingerprint)
+        u4 = uri.ReadonlySSKFileURI(readkey, self.fingerprint)
+        self.failUnlessReallyEqual(u4.fingerprint, self.fingerprint)
         self.failUnlessReallyEqual(u4.readkey, readkey)
         self.failUnless(u4.is_readonly())
         self.failUnless(u4.is_mutable())
hunk ./src/allmydata/test/test_uri.py 350
         self.failUnlessReallyEqual(u5, u5_h)
 
 
+    def test_writable_mdmf_cap(self):
+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
+        cap = u1.to_string()
+        u = uri.WritableMDMFFileURI.init_from_string(cap)
+
+        self.failUnless(IMutableFileURI.providedBy(u))
+        self.failUnlessReallyEqual(u.fingerprint, self.fingerprint)
+        self.failUnlessReallyEqual(u.writekey, self.writekey)
+        self.failUnless(u.is_mutable())
+        self.failIf(u.is_readonly())
+        self.failUnlessEqual(cap, u.to_string())
+
+        # Now get a readonly cap from the writable cap, and test that it
+        # degrades gracefully.
+        ru = u.get_readonly()
+        self.failUnlessReallyEqual(self.readkey, ru.readkey)
+        self.failUnlessReallyEqual(self.fingerprint, ru.fingerprint)
+        self.failUnless(ru.is_mutable())
+        self.failUnless(ru.is_readonly())
+
+        # Now get a verifier cap.
+        vu = ru.get_verify_cap()
+        self.failUnlessReallyEqual(self.storage_index, vu.storage_index)
+        self.failUnlessReallyEqual(self.fingerprint, vu.fingerprint)
+        self.failUnless(IVerifierURI.providedBy(vu))
+
+    def test_readonly_mdmf_cap(self):
+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
+        cap = u1.to_string()
+        u2 = uri.ReadonlyMDMFFileURI.init_from_string(cap)
+
+        self.failUnlessReallyEqual(u2.fingerprint, self.fingerprint)
+        self.failUnlessReallyEqual(u2.readkey, self.readkey)
+        self.failUnless(u2.is_readonly())
+        self.failUnless(u2.is_mutable())
+
+        vu = u2.get_verify_cap()
+        self.failUnlessEqual(vu.storage_index, self.storage_index)
+        self.failUnlessEqual(vu.fingerprint, self.fingerprint)
+
+    def test_create_writable_mdmf_cap_from_readcap(self):
+        # we shouldn't be able to create a writable MDMF cap given only a
+        # readcap.
+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
+        cap = u1.to_string()
+        self.failUnlessRaises(uri.BadURIError,
+                              uri.WritableMDMFFileURI.init_from_string,
+                              cap)
+
+    def test_create_writable_mdmf_cap_from_verifycap(self):
+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        cap = u1.to_string()
+        self.failUnlessRaises(uri.BadURIError,
+                              uri.WritableMDMFFileURI.init_from_string,
+                              cap)
+
+    def test_create_readonly_mdmf_cap_from_verifycap(self):
+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        cap = u1.to_string()
+        self.failUnlessRaises(uri.BadURIError,
+                              uri.ReadonlyMDMFFileURI.init_from_string,
+                              cap)
+
+    def test_mdmf_verifier_cap(self):
+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        self.failUnless(u1.is_readonly())
+        self.failIf(u1.is_mutable())
+        self.failUnlessReallyEqual(self.storage_index, u1.storage_index)
+        self.failUnlessReallyEqual(self.fingerprint, u1.fingerprint)
+
+        cap = u1.to_string()
+        u2 = uri.MDMFVerifierURI.init_from_string(cap)
+        self.failUnless(u2.is_readonly())
+        self.failIf(u2.is_mutable())
+        self.failUnlessReallyEqual(self.storage_index, u2.storage_index)
+        self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
+
+        u3 = u2.get_readonly()
+        self.failUnlessReallyEqual(u3, u2)
+
+        u4 = u2.get_verify_cap()
+        self.failUnlessReallyEqual(u4, u2)
+
+    def test_mdmf_cap_extra_information(self):
+        # MDMF caps can be arbitrarily extended after the fingerprint
+        # and key/storage index fields. 
+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
+        self.failUnlessEqual([], u1.get_extension_params())
+
+        cap = u1.to_string()
+        # Now let's append some fields. Say, 131073 (the segment size)
+        # and 3 (the "k" encoding parameter).
+        expected_extensions = []
+        for e in ('131073', '3'):
+            cap += (":%s" % e)
+            expected_extensions.append(e)
+
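+            # Re-parse the cap after each appended field so that both the
+            # one-extension and two-extension cases are covered.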
+            u2 = uri.WritableMDMFFileURI.init_from_string(cap)
+            self.failUnlessReallyEqual(self.writekey, u2.writekey)
+            self.failUnlessReallyEqual(self.fingerprint, u2.fingerprint)
+            self.failIf(u2.is_readonly())
+            self.failUnless(u2.is_mutable())
+
+            c2 = u2.to_string()
+            u2n = uri.WritableMDMFFileURI.init_from_string(c2)
+            self.failUnlessReallyEqual(u2, u2n)
+
+            # We should get the extra back when we ask for it.
+            self.failUnlessEqual(expected_extensions, u2.get_extension_params())
+
+            # These should be preserved through cap attenuation, too.
+            u3 = u2.get_readonly()
+            self.failUnlessReallyEqual(self.readkey, u3.readkey)
+            self.failUnlessReallyEqual(self.fingerprint, u3.fingerprint)
+            self.failUnless(u3.is_readonly())
+            self.failUnless(u3.is_mutable())
+            self.failUnlessEqual(expected_extensions, u3.get_extension_params())
+
+            c3 = u3.to_string()
+            u3n = uri.ReadonlyMDMFFileURI.init_from_string(c3)
+            self.failUnlessReallyEqual(u3, u3n)
+
+            u4 = u3.get_verify_cap()
+            self.failUnlessReallyEqual(self.storage_index, u4.storage_index)
+            self.failUnlessReallyEqual(self.fingerprint, u4.fingerprint)
+            self.failUnless(u4.is_readonly())
+            self.failIf(u4.is_mutable())
+
+            c4 = u4.to_string()
+            u4n = uri.MDMFVerifierURI.init_from_string(c4)
+            self.failUnlessReallyEqual(u4n, u4)
+
+            self.failUnlessEqual(expected_extensions, u4.get_extension_params())
+
+
+    def test_sdmf_cap_extra_information(self):
+        # For interface consistency, we define a method to get
+        # extensions for SDMF files as well. This method must always
+        # return no extensions, since SDMF files were not created with
+        # extensions and cannot be modified to include extensions
+        # without breaking older clients.
+        u1 = uri.WriteableSSKFileURI(self.writekey, self.fingerprint)
+        cap = u1.to_string()
+        u2 = uri.WriteableSSKFileURI.init_from_string(cap)
+        self.failUnlessEqual([], u2.get_extension_params())
+
+    def test_extension_character_range(self):
+        # As written now, we shouldn't put things other than numbers in
+        # the extension fields.
+        writecap = uri.WritableMDMFFileURI(self.writekey, self.fingerprint).to_string()
+        readcap  = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint).to_string()
+        vcap     = uri.MDMFVerifierURI(self.storage_index, self.fingerprint).to_string()
+        self.failUnlessRaises(uri.BadURIError,
+                              uri.WritableMDMFFileURI.init_from_string,
+                              ("%s:invalid" % writecap))
+        self.failUnlessRaises(uri.BadURIError,
+                              uri.ReadonlyMDMFFileURI.init_from_string,
+                              ("%s:invalid" % readcap))
+        self.failUnlessRaises(uri.BadURIError,
+                              uri.MDMFVerifierURI.init_from_string,
+                              ("%s:invalid" % vcap))
+
+
+    def test_mdmf_valid_human_encoding(self):
+        # What's a human encoding? Well, it's of the form:
+        base = "https://127.0.0.1:3456/uri/"
+        # With a cap on the end. For each of the cap types, we need to
+        # test that a valid cap (with and without the traditional
+        # separators) is recognized and accepted by the classes.
+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
+                                     ['131073', '3'])
+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
+                                     ['131073', '3'])
+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
+                                 ['131073', '3'])
+
+        # These will yield six different caps.
+        for o in (w1, w2, r1, r2, v1, v2):
+            url = base + o.to_string()
+            o1 = o.__class__.init_from_human_encoding(url)
+            self.failUnlessReallyEqual(o1, o)
+
+            # Note that our cap will, by default, have : as separators. 
+            # But it's expected that users from, e.g., the WUI, will
+            # have %3A as a separator. We need to make sure that the
+            # initialization routine handles that, too.
+            cap = o.to_string()
+            cap = re.sub(":", "%3A", cap)
+            url = base + cap
+            o2 = o.__class__.init_from_human_encoding(url)
+            self.failUnlessReallyEqual(o2, o)
+
+
+    def test_mdmf_human_encoding_invalid_base(self):
+        # What's a human encoding? Well, it's of the form:
+        base = "https://127.0.0.1:3456/foo/bar/bazuri/"
+        # With a cap on the end. This base is not a valid human
+        # encoding, so each of the cap classes should reject a cap
+        # presented with it.
+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
+                                     ['131073', '3'])
+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
+                                     ['131073', '3'])
+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
+                                 ['131073', '3'])
+
+        # These will yield six different caps.
+        for o in (w1, w2, r1, r2, v1, v2):
+            url = base + o.to_string()
+            self.failUnlessRaises(uri.BadURIError,
+                                  o.__class__.init_from_human_encoding,
+                                  url)
+
+    def test_mdmf_human_encoding_invalid_cap(self):
+        base = "https://127.0.0.1:3456/uri/"
+        # The base is valid here, but the caps are corrupted; each of
+        # the cap classes should reject them.
+        w1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
+        w2 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
+                                     ['131073', '3'])
+        r1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
+        r2 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
+                                     ['131073', '3'])
+        v1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        v2 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
+                                 ['131073', '3'])
+
+        # These will yield six different caps.
+        for o in (w1, w2, r1, r2, v1, v2):
+            # not exhaustive, obviously...
+            url = base + o.to_string() + "foobarbaz"
+            url2 = base + "foobarbaz" + o.to_string()
+            url3 = base + o.to_string()[:25] + "foo" + o.to_string()[25:]
+            for u in (url, url2, url3):
+                self.failUnlessRaises(uri.BadURIError,
+                                      o.__class__.init_from_human_encoding,
+                                      u)
+
+    def test_mdmf_from_string(self):
+        # Make sure that the from_string utility function works with
+        # MDMF caps.
+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint)
+        cap = u1.to_string()
+        self.failUnless(uri.is_uri(cap))
+        u2 = uri.from_string(cap)
+        self.failUnlessReallyEqual(u1, u2)
+        u3 = uri.from_string_mutable_filenode(cap)
+        self.failUnlessEqual(u3, u1)
+
+        # XXX: We should refactor the extension field into setUp
+        u1 = uri.WritableMDMFFileURI(self.writekey, self.fingerprint,
+                                     ['131073', '3'])
+        cap = u1.to_string()
+        self.failUnless(uri.is_uri(cap))
+        u2 = uri.from_string(cap)
+        self.failUnlessReallyEqual(u1, u2)
+        u3 = uri.from_string_mutable_filenode(cap)
+        self.failUnlessEqual(u3, u1)
+
+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint)
+        cap = u1.to_string()
+        self.failUnless(uri.is_uri(cap))
+        u2 = uri.from_string(cap)
+        self.failUnlessReallyEqual(u1, u2)
+        u3 = uri.from_string_mutable_filenode(cap)
+        self.failUnlessEqual(u3, u1)
+
+        u1 = uri.ReadonlyMDMFFileURI(self.readkey, self.fingerprint,
+                                     ['131073', '3'])
+        cap = u1.to_string()
+        self.failUnless(uri.is_uri(cap))
+        u2 = uri.from_string(cap)
+        self.failUnlessReallyEqual(u1, u2)
+        u3 = uri.from_string_mutable_filenode(cap)
+        self.failUnlessEqual(u3, u1)
+
+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint)
+        cap = u1.to_string()
+        self.failUnless(uri.is_uri(cap))
+        u2 = uri.from_string(cap)
+        self.failUnlessReallyEqual(u1, u2)
+        u3 = uri.from_string_verifier(cap)
+        self.failUnlessEqual(u3, u1)
+
+        u1 = uri.MDMFVerifierURI(self.storage_index, self.fingerprint,
+                                 ['131073', '3'])
+        cap = u1.to_string()
+        self.failUnless(uri.is_uri(cap))
+        u2 = uri.from_string(cap)
+        self.failUnlessReallyEqual(u1, u2)
+        u3 = uri.from_string_verifier(cap)
+        self.failUnlessEqual(u3, u1)
+
+
 class Dirnode(testutil.ReallyEqualMixin, unittest.TestCase):
     def test_pack(self):
         writekey = "\x01" * 16
hunk ./src/allmydata/uri.py 31
 SEP='(?::|%3A)'
 NUMBER='([0-9]+)'
 NUMBER_IGNORE='(?:[0-9]+)'
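+# An MDMF cap may be followed by extra ':'-separated numeric extension
+# fields; this matches that optional suffix (or the empty string).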
+OPTIONAL_EXTENSION_FIELD = '((?:' + SEP + '[0-9]+)*)'
 
 # "human-encoded" URIs are allowed to come with a leading
 # 'http://127.0.0.1:(8123|3456)/uri/' that will be ignored.
hunk ./src/allmydata/uri.py 297
     def get_verify_cap(self):
         return SSKVerifierURI(self.storage_index, self.fingerprint)
 
+    def get_extension_params(self):
+        return []
 
 class ReadonlySSKFileURI(_BaseURI):
     implements(IURI, IMutableFileURI)
hunk ./src/allmydata/uri.py 354
     def get_verify_cap(self):
         return SSKVerifierURI(self.storage_index, self.fingerprint)
 
+    def get_extension_params(self):
+        return []
 
 class SSKVerifierURI(_BaseURI):
     implements(IVerifierURI)
hunk ./src/allmydata/uri.py 401
     def get_verify_cap(self):
         return self
 
+    def get_extension_params(self):
+        return []
+
+class WritableMDMFFileURI(_BaseURI):
+    implements(IURI, IMutableFileURI)
+
+    BASE_STRING='URI:MDMF:'
+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
+
+    def __init__(self, writekey, fingerprint, params=None):
+        self.writekey = writekey
+        self.readkey = hashutil.ssk_readkey_hash(writekey)
+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
+        assert len(self.storage_index) == 16
+        self.fingerprint = fingerprint
+        self.extension = params or []
+
+    @classmethod
+    def init_from_human_encoding(cls, uri):
+        mo = cls.HUMAN_RE.search(uri)
+        if not mo:
+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
+        params = filter(lambda x: x != '', re.split(SEP, mo.group(3)))
+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
+
+    @classmethod
+    def init_from_string(cls, uri):
+        mo = cls.STRING_RE.search(uri)
+        if not mo:
+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
+        params = mo.group(3)
+        params = filter(lambda x: x != '', params.split(":"))
+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
+
+    def to_string(self):
+        assert isinstance(self.writekey, str)
+        assert isinstance(self.fingerprint, str)
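+        # e.g. 'URI:MDMF:<writekey b32>:<fingerprint b32>[:ext1:ext2...]'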
+        ret = 'URI:MDMF:%s:%s' % (base32.b2a(self.writekey),
+                                  base32.b2a(self.fingerprint))
+        if self.extension:
+            ret += ":"
+            ret += ":".join(self.extension)
+
+        return ret
+
+    def __repr__(self):
+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
+
+    def abbrev(self):
+        return base32.b2a(self.writekey[:5])
+
+    def abbrev_si(self):
+        return base32.b2a(self.storage_index)[:5]
+
+    def is_readonly(self):
+        return False
+
+    def is_mutable(self):
+        return True
+
+    def get_readonly(self):
+        return ReadonlyMDMFFileURI(self.readkey, self.fingerprint, self.extension)
+
+    def get_verify_cap(self):
+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
+
+    def get_extension_params(self):
+        return self.extension
+
+class ReadonlyMDMFFileURI(_BaseURI):
+    implements(IURI, IMutableFileURI)
+
+    BASE_STRING='URI:MDMF-RO:'
+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-RO'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
+
+    def __init__(self, readkey, fingerprint, params=None):
+        self.readkey = readkey
+        self.storage_index = hashutil.ssk_storage_index_hash(self.readkey)
+        assert len(self.storage_index) == 16
+        self.fingerprint = fingerprint
+        self.extension = params or []
+
+    @classmethod
+    def init_from_human_encoding(cls, uri):
+        mo = cls.HUMAN_RE.search(uri)
+        if not mo:
+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
+        params = mo.group(3)
+        params = filter(lambda x: x != '', re.split(SEP, params))
+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
+
+    @classmethod
+    def init_from_string(cls, uri):
+        mo = cls.STRING_RE.search(uri)
+        if not mo:
+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
+
+        params = mo.group(3)
+        params = filter(lambda x: x != '', params.split(":"))
+        return cls(base32.a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
+
+    def to_string(self):
+        assert isinstance(self.readkey, str)
+        assert isinstance(self.fingerprint, str)
+        ret = 'URI:MDMF-RO:%s:%s' % (base32.b2a(self.readkey),
+                                     base32.b2a(self.fingerprint))
+        if self.extension:
+            ret += ":"
+            ret += ":".join(self.extension)
+
+        return ret
+
+    def __repr__(self):
+        return "<%s %s>" % (self.__class__.__name__, self.abbrev())
+
+    def abbrev(self):
+        return base32.b2a(self.readkey[:5])
+
+    def abbrev_si(self):
+        return base32.b2a(self.storage_index)[:5]
+
+    def is_readonly(self):
+        return True
+
+    def is_mutable(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
+    def get_verify_cap(self):
+        return MDMFVerifierURI(self.storage_index, self.fingerprint, self.extension)
+
+    def get_extension_params(self):
+        return self.extension
+
+class MDMFVerifierURI(_BaseURI):
+    implements(IVerifierURI)
+
+    BASE_STRING='URI:MDMF-Verifier:'
+    STRING_RE=re.compile('^'+BASE_STRING+BASE32STR_128bits+':'+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
+    HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'MDMF-Verifier'+SEP+BASE32STR_128bits+SEP+BASE32STR_256bits+OPTIONAL_EXTENSION_FIELD+'$')
+
+    def __init__(self, storage_index, fingerprint, params=None):
+        assert len(storage_index) == 16
+        self.storage_index = storage_index
+        self.fingerprint = fingerprint
+        self.extension = params or []
+
+    @classmethod
+    def init_from_human_encoding(cls, uri):
+        mo = cls.HUMAN_RE.search(uri)
+        if not mo:
+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
+        params = mo.group(3)
+        params = filter(lambda x: x != '', re.split(SEP, params))
+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
+
+    @classmethod
+    def init_from_string(cls, uri):
+        mo = cls.STRING_RE.search(uri)
+        if not mo:
+            raise BadURIError("'%s' doesn't look like a %s cap" % (uri, cls))
+        params = mo.group(3)
+        params = filter(lambda x: x != '', params.split(":"))
+        return cls(si_a2b(mo.group(1)), base32.a2b(mo.group(2)), params)
+
+    def to_string(self):
+        assert isinstance(self.storage_index, str)
+        assert isinstance(self.fingerprint, str)
+        ret = 'URI:MDMF-Verifier:%s:%s' % (si_b2a(self.storage_index),
+                                           base32.b2a(self.fingerprint))
+        if self.extension:
+            ret += ':'
+            ret += ":".join(self.extension)
+
+        return ret
+
+    def is_readonly(self):
+        return True
+
+    def is_mutable(self):
+        return False
+
+    def get_readonly(self):
+        return self
+
+    def get_verify_cap(self):
+        return self
+
+    def get_extension_params(self):
+        return self.extension
+
 class _DirectoryBaseURI(_BaseURI):
     implements(IURI, IDirnodeURI)
     def __init__(self, filenode_uri=None):
hunk ./src/allmydata/uri.py 831
             kind = "URI:SSK-RO readcap to a mutable file"
         elif s.startswith('URI:SSK-Verifier:'):
             return SSKVerifierURI.init_from_string(s)
+        elif s.startswith('URI:MDMF:'):
+            return WritableMDMFFileURI.init_from_string(s)
+        elif s.startswith('URI:MDMF-RO:'):
+            return ReadonlyMDMFFileURI.init_from_string(s)
+        elif s.startswith('URI:MDMF-Verifier:'):
+            return MDMFVerifierURI.init_from_string(s)
         elif s.startswith('URI:DIR2:'):
             if can_be_writeable:
                 return DirectoryURI.init_from_string(s)
}
[nodemaker, mutable/filenode: train nodemaker and filenode to handle MDMF caps
Kevan Carstensen <kevan@isnotajoke.com>**20110501224523
 Ignore-this: 1f3b4581eb583e7bb93d234182bda395
] {
hunk ./src/allmydata/mutable/filenode.py 12
      IMutableFileVersion, IWritable
 from allmydata.util import hashutil, log, consumer, deferredutil, mathutil
 from allmydata.util.assertutil import precondition
-from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
+from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI, \
+                          WritableMDMFFileURI, ReadonlyMDMFFileURI
 from allmydata.monitor import Monitor
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/mutable/filenode.py 75
         # set to this default value in case neither of those things happen,
         # or in case the servermap can't find any shares to tell us what
         # to publish as.
-        # TODO: Set this back to None, and find out why the tests fail
-        #       with it set to None.
+        # XXX: Version should come in via the constructor.
         self._protocol_version = None
 
         # all users of this MutableFileNode go through the serializer. This
hunk ./src/allmydata/mutable/filenode.py 95
         # verification key, nor things like 'k' or 'N'. If and when someone
         # wants to get our contents, we'll pull from shares and fill those
         # in.
-        assert isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI))
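+        # The cap type tells us which mutable-file protocol version to
+        # use for this filenode.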
+        if isinstance(filecap, (WritableMDMFFileURI, ReadonlyMDMFFileURI)):
+            self._protocol_version = MDMF_VERSION
+        elif isinstance(filecap, (ReadonlySSKFileURI, WriteableSSKFileURI)):
+            self._protocol_version = SDMF_VERSION
+
         self._uri = filecap
         self._writekey = None
hunk ./src/allmydata/mutable/filenode.py 102
-        if isinstance(filecap, WriteableSSKFileURI):
+
+        if not filecap.is_readonly() and filecap.is_mutable():
             self._writekey = self._uri.writekey
         self._readkey = self._uri.readkey
         self._storage_index = self._uri.storage_index
hunk ./src/allmydata/mutable/filenode.py 131
         self._writekey = hashutil.ssk_writekey_hash(privkey_s)
         self._encprivkey = self._encrypt_privkey(self._writekey, privkey_s)
         self._fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
-        self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
+        if self._protocol_version == MDMF_VERSION:
+            self._uri = WritableMDMFFileURI(self._writekey, self._fingerprint)
+        else:
+            self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
         self._readkey = self._uri.readkey
         self._storage_index = self._uri.storage_index
         initial_contents = self._get_initial_contents(contents)
hunk ./src/allmydata/nodemaker.py 82
             return self._create_immutable(cap)
         if isinstance(cap, uri.CHKFileVerifierURI):
             return self._create_immutable_verifier(cap)
-        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI)):
+        if isinstance(cap, (uri.ReadonlySSKFileURI, uri.WriteableSSKFileURI,
+                            uri.WritableMDMFFileURI, uri.ReadonlyMDMFFileURI)):
             return self._create_mutable(cap)
         if isinstance(cap, (uri.DirectoryURI,
                             uri.ReadonlyDirectoryURI,
hunk ./src/allmydata/test/test_mutable.py 196
                     offset2 = 0
                 if offset1 == "pubkey" and IV:
                     real_offset = 107
-                elif offset1 == "share_data" and not IV:
-                    real_offset = 107
                 elif offset1 in o:
                     real_offset = o[offset1]
                 else:
hunk ./src/allmydata/test/test_mutable.py 270
         return d
 
 
+    def test_mdmf_filenode_cap(self):
+        # Test that an MDMF filenode, once created, returns an MDMF URI.
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            cap = n.get_cap()
+            self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
+            rcap = n.get_readcap()
+            self.failUnless(isinstance(rcap, uri.ReadonlyMDMFFileURI))
+            vcap = n.get_verify_cap()
+            self.failUnless(isinstance(vcap, uri.MDMFVerifierURI))
+        d.addCallback(_created)
+        return d
+
+
+    def test_create_from_mdmf_writecap(self):
+        # Test that the nodemaker is capable of creating an MDMF
+        # filenode given an MDMF cap.
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            s = n.get_uri()
+            self.failUnless(s.startswith("URI:MDMF"))
+            n2 = self.nodemaker.create_from_cap(s)
+            self.failUnless(isinstance(n2, MutableFileNode))
+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
+            self.failUnlessEqual(n.get_uri(), n2.get_uri())
+        d.addCallback(_created)
+        return d
+
+
+    def test_create_from_mdmf_writecap_with_extensions(self):
+        # Test that the nodemaker is capable of creating an MDMF
+        # filenode when given a writecap with extension parameters in
+        # it.
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            s = n.get_uri()
+            s2 = "%s:3:131073" % s
+            n2 = self.nodemaker.create_from_cap(s2)
+
+            self.failUnlessEqual(n2.get_storage_index(), n.get_storage_index())
+            self.failUnlessEqual(n.get_writekey(), n2.get_writekey())
+        d.addCallback(_created)
+        return d
+
+
+    def test_create_from_mdmf_readcap(self):
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            s = n.get_readonly_uri()
+            n2 = self.nodemaker.create_from_cap(s)
+            self.failUnless(isinstance(n2, MutableFileNode))
+
+            # Check that it's a readonly node
+            self.failUnless(n2.is_readonly())
+        d.addCallback(_created)
+        return d
+
+
+    def test_create_from_mdmf_readcap_with_extensions(self):
+        # We should be able to create an MDMF filenode from a readcap
+        # with extension parameters without anything breaking.
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            s = n.get_readonly_uri()
+            s = "%s:3:131073" % s
+
+            n2 = self.nodemaker.create_from_cap(s)
+            self.failUnless(isinstance(n2, MutableFileNode))
+            self.failUnless(n2.is_readonly())
+            self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
+        d.addCallback(_created)
+        return d
+
+
+    def test_internal_version_from_cap(self):
+        # MutableFileNodes and MutableFileVersions have an internal
+        # switch that tells them whether they're dealing with an SDMF or
+        # MDMF mutable file when they start doing stuff. We want to make
+        # sure that this is set appropriately given an MDMF cap.
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.uri = n.get_uri()
+            self.failUnlessEqual(n._protocol_version, MDMF_VERSION)
+
+            n2 = self.nodemaker.create_from_cap(self.uri)
+            self.failUnlessEqual(n2._protocol_version, MDMF_VERSION)
+        d.addCallback(_created)
+        return d
+
+
     def test_serialize(self):
         n = MutableFileNode(None, None, {"k": 3, "n": 10}, None)
         calls = []
hunk ./src/allmydata/test/test_mutable.py 464
         return d
 
 
+    def test_download_from_mdmf_cap(self):
+        # We should be able to download an MDMF file given its cap
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(node):
+            self.uri = node.get_uri()
+
+            return node.overwrite(MutableData("contents1" * 100000))
+        def _then(ignored):
+            node = self.nodemaker.create_from_cap(self.uri)
+            return node.download_best_version()
+        def _downloaded(data):
+            self.failUnlessEqual(data, "contents1" * 100000)
+        d.addCallback(_created)
+        d.addCallback(_then)
+        d.addCallback(_downloaded)
+        return d
+
+
     def test_mdmf_write_count(self):
         # Publishing an MDMF file should only cause one write for each
         # share that is to be published. Otherwise, we introduce
hunk ./src/allmydata/test/test_mutable.py 1735
     def test_verify_mdmf_bad_encprivkey(self):
         d = self.publish_mdmf()
         d.addCallback(lambda ignored:
-            corrupt(None, self._storage, "enc_privkey", [1]))
+            corrupt(None, self._storage, "enc_privkey", [0]))
         d.addCallback(lambda ignored:
             self._fn.check(Monitor(), verify=True))
         d.addCallback(self.check_bad, "test_verify_mdmf_bad_encprivkey")
hunk ./src/allmydata/test/test_mutable.py 2843
         return d
 
 
+    def test_version_extension_api(self):
+        # We need to define an API by which an uploader can set the
+        # extension parameters, and by which a downloader can retrieve
+        # extensions.
+        self.failUnless(False)
+
+
+    def test_extensions_from_cap(self):
+        self.failUnless(False)
+
+
+    def test_extensions_from_upload(self):
+        self.failUnless(False)
+
+
+    def test_cap_after_upload(self):
+        self.failUnless(False)
+
+
     def test_get_writekey(self):
         d = self.mdmf_node.get_best_mutable_version()
         d.addCallback(lambda bv:
}
[mutable/retrieve: fix typo in paused check
Kevan Carstensen <kevan@isnotajoke.com>**20110515225946
 Ignore-this: a9c7f3bdbab2f8248f8b6a64f574e7c4
] hunk ./src/allmydata/mutable/retrieve.py 207
         """
         if self._paused:
             d = defer.Deferred()
-            self._pause_defered.addCallback(lambda ignored: d.callback(res))
+            self._pause_deferred.addCallback(lambda ignored: d.callback(res))
             return d
         return defer.succeed(res)
 
[scripts/tahoe_put.py: teach tahoe put about MDMF caps
Kevan Carstensen <kevan@isnotajoke.com>**20110515230008
 Ignore-this: 1522f434f651683c924e37251a3c1bfd
] hunk ./src/allmydata/scripts/tahoe_put.py 49
         #  DIRCAP:./subdir/foo : DIRCAP/subdir/foo
         #  MUTABLE-FILE-WRITECAP : filecap
 
-        # FIXME: this shouldn't rely on a particular prefix.
-        if to_file.startswith("URI:SSK:"):
+        # FIXME: don't hardcode cap format.
+        if to_file.startswith("URI:MDMF:") or to_file.startswith("URI:SSK:"):
             url = nodeurl + "uri/%s" % urllib.quote(to_file)
         else:
             try:
[test/common.py: fix some MDMF-related bugs in common test fixtures
Kevan Carstensen <kevan@isnotajoke.com>**20110515230038
 Ignore-this: ab5ffe4789bb5e6ed5f54b91b760bac9
] {
hunk ./src/allmydata/test/common.py 199
                  default_encoding_parameters, history):
         self.init_from_cap(make_mutable_file_cap())
     def create(self, contents, key_generator=None, keysize=None):
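+        # __init__ handed us an SDMF cap by default; if this storage
+        # index is registered as an MDMF file, swap in an MDMF cap
+        # before populating the contents.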
+        if self.file_types[self.storage_index] == MDMF_VERSION and \
+            isinstance(self.my_uri, (uri.ReadonlySSKFileURI,
+                                 uri.WriteableSSKFileURI)):
+            self.init_from_cap(make_mdmf_mutable_file_cap())
         initial_contents = self._get_initial_contents(contents)
         data = initial_contents.read(initial_contents.get_size())
         data = "".join(data)
hunk ./src/allmydata/test/common.py 220
         return contents(self)
     def init_from_cap(self, filecap):
         assert isinstance(filecap, (uri.WriteableSSKFileURI,
-                                    uri.ReadonlySSKFileURI))
+                                    uri.ReadonlySSKFileURI,
+                                    uri.WritableMDMFFileURI,
+                                    uri.ReadonlyMDMFFileURI))
         self.my_uri = filecap
         self.storage_index = self.my_uri.get_storage_index()
hunk ./src/allmydata/test/common.py 225
+        if isinstance(filecap, (uri.WritableMDMFFileURI,
+                                uri.ReadonlyMDMFFileURI)):
+            self.file_types[self.storage_index] = MDMF_VERSION
+
+        else:
+            self.file_types[self.storage_index] = SDMF_VERSION
+
         return self
     def get_cap(self):
         return self.my_uri
hunk ./src/allmydata/test/common.py 249
         return self.my_uri.get_readonly().to_string()
     def get_verify_cap(self):
         return self.my_uri.get_verify_cap()
+    def get_repair_cap(self):
+        if self.my_uri.is_readonly():
+            return None
+        return self.my_uri
     def is_readonly(self):
         return self.my_uri.is_readonly()
     def is_mutable(self):
hunk ./src/allmydata/test/common.py 406
 def make_mutable_file_cap():
     return uri.WriteableSSKFileURI(writekey=os.urandom(16),
                                    fingerprint=os.urandom(32))
-def make_mutable_file_uri():
-    return make_mutable_file_cap().to_string()
+
+def make_mdmf_mutable_file_cap():
+    return uri.WritableMDMFFileURI(writekey=os.urandom(16),
+                                   fingerprint=os.urandom(32))
+
+def make_mutable_file_uri(mdmf=False):
+    if mdmf:
+        cap = make_mdmf_mutable_file_cap()
+    else:
+        cap = make_mutable_file_cap()
+
+    return cap.to_string()
 
 def make_verifier_uri():
     return uri.SSKVerifierURI(storage_index=os.urandom(16),
hunk ./src/allmydata/test/common.py 423
                               fingerprint=os.urandom(32)).to_string()
 
+def create_mutable_filenode(contents, mdmf=False):
+    # XXX: All of these arguments are kind of stupid. 
+    if mdmf:
+        cap = make_mdmf_mutable_file_cap()
+    else:
+        cap = make_mutable_file_cap()
+
+    filenode = FakeMutableFileNode(None, None, None, None)
+    filenode.init_from_cap(cap)
+    FakeMutableFileNode.all_contents[filenode.storage_index] = contents
+    return filenode
+
+
 class FakeDirectoryNode(dirnode.DirectoryNode):
     """This offers IDirectoryNode, but uses a FakeMutableFileNode for the
     backing store, so it doesn't go to the grid. The child data is still
}
[test/test_cli: Alter existing MDMF tests to test for MDMF caps
Kevan Carstensen <kevan@isnotajoke.com>**20110515230054
 Ignore-this: a90d089e1afb0f261710083c2be6b2fa
] {
hunk ./src/allmydata/test/test_cli.py 13
 from allmydata.util import fileutil, hashutil, base32
 from allmydata import uri
 from allmydata.immutable import upload
+from allmydata.interfaces import MDMF_VERSION, SDMF_VERSION
 from allmydata.mutable.publish import MutableData
 from allmydata.dirnode import normalize
 
hunk ./src/allmydata/test/test_cli.py 33
 from allmydata.test.common_util import StallMixin, ReallyEqualMixin
 from allmydata.test.no_network import GridTestMixin
 from twisted.internet import threads # CLI tests use deferToThread
+from twisted.internet import defer # List uses a DeferredList in one place.
 from twisted.python import usage
 
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/test/test_cli.py 1014
         d.addCallback(lambda (rc,out,err): self.failUnlessReallyEqual(out, DATA2))
         return d
 
+    def _check_mdmf_json(self, (rc, json, err)):
+        self.failUnlessEqual(rc, 0)
+        self.failUnlessEqual(err, "")
+        self.failUnlessIn('"mutable-type": "mdmf"', json)
+        # We also want a valid MDMF cap to be in the json.
+        self.failUnlessIn("URI:MDMF", json)
+        self.failUnlessIn("URI:MDMF-RO", json)
+        self.failUnlessIn("URI:MDMF-Verifier", json)
+
+    def _check_sdmf_json(self, (rc, json, err)):
+        self.failUnlessEqual(rc, 0)
+        self.failUnlessEqual(err, "")
+        self.failUnlessIn('"mutable-type": "sdmf"', json)
+        # We also want to see the appropriate SDMF caps.
+        self.failUnlessIn("URI:SSK", json)
+        self.failUnlessIn("URI:SSK-RO", json)
+        self.failUnlessIn("URI:SSK-Verifier", json)
+
     def test_mutable_type(self):
         self.basedir = "cli/Put/mutable_type"
         self.set_up_grid()
hunk ./src/allmydata/test/test_cli.py 1044
                         fn1, "tahoe:uploaded.txt"))
         d.addCallback(lambda ignored:
             self.do_cli("ls", "--json", "tahoe:uploaded.txt"))
-        d.addCallback(lambda (rc, json, err): self.failUnlessIn("mdmf", json))
+        d.addCallback(self._check_mdmf_json)
         d.addCallback(lambda ignored:
             self.do_cli("put", "--mutable", "--mutable-type=sdmf",
                         fn1, "tahoe:uploaded2.txt"))
hunk ./src/allmydata/test/test_cli.py 1050
         d.addCallback(lambda ignored:
             self.do_cli("ls", "--json", "tahoe:uploaded2.txt"))
-        d.addCallback(lambda (rc, json, err):
-            self.failUnlessIn("sdmf", json))
+        d.addCallback(self._check_sdmf_json)
         return d
 
     def test_mutable_type_unlinked(self):
hunk ./src/allmydata/test/test_cli.py 1062
         d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
         d.addCallback(lambda (rc, cap, err):
             self.do_cli("ls", "--json", cap))
-        d.addCallback(lambda (rc, json, err): self.failUnlessIn("mdmf", json))
+        d.addCallback(self._check_mdmf_json)
         d.addCallback(lambda ignored:
             self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1))
         d.addCallback(lambda (rc, cap, err):
hunk ./src/allmydata/test/test_cli.py 1067
             self.do_cli("ls", "--json", cap))
-        d.addCallback(lambda (rc, json, err):
-            self.failUnlessIn("sdmf", json))
+        d.addCallback(self._check_sdmf_json)
         return d
 
hunk ./src/allmydata/test/test_cli.py 1070
+    def test_put_to_mdmf_cap(self):
+        self.basedir = "cli/Put/put_to_mdmf_cap"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=mdmf", fn1)
+        def _got_cap((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.cap = out
+        d.addCallback(_got_cap)
+        # Now try to write something to the cap using put.
+        data2 = "data2" * 100000
+        fn2 = os.path.join(self.basedir, "data2")
+        fileutil.write(fn2, data2)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", fn2, self.cap))
+        def _got_put((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessIn(self.cap, out)
+        d.addCallback(_got_put)
+        # Now get the cap. We should see the data we just put there.
+        d.addCallback(lambda ignored:
+            self.do_cli("get", self.cap))
+        def _got_data((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(out, data2)
+        d.addCallback(_got_data)
+        return d
+
+    def test_put_to_sdmf_cap(self):
+        self.basedir = "cli/Put/put_to_sdmf_cap"
+        self.set_up_grid()
+        data = "data" * 100000
+        fn1 = os.path.join(self.basedir, "data")
+        fileutil.write(fn1, data)
+        d = self.do_cli("put", "--mutable", "--mutable-type=sdmf", fn1)
+        def _got_cap((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.cap = out
+        d.addCallback(_got_cap)
+        # Now try to write something to the cap using put.
+        data2 = "data2" * 100000
+        fn2 = os.path.join(self.basedir, "data2")
+        fileutil.write(fn2, data2)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", fn2, self.cap))
+        def _got_put((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessIn(self.cap, out)
+        d.addCallback(_got_put)
+        # Now get the cap. We should see the data we just put there.
+        d.addCallback(lambda ignored:
+            self.do_cli("get", self.cap))
+        def _got_data((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(out, data2)
+        d.addCallback(_got_data)
+        return d
+
     def test_mutable_type_invalid_format(self):
         o = cli.PutOptions()
         self.failUnlessRaises(usage.UsageError,
hunk ./src/allmydata/test/test_cli.py 1363
         d.addCallback(_check)
         return d
 
+    def _create_directory_structure(self):
+        # Create a simple directory structure that we can use for MDMF,
+        # SDMF, and immutable testing.
+        assert self.g
+
+        client = self.g.clients[0]
+        # Create a dirnode
+        d = client.create_dirnode()
+        def _got_rootnode(n):
+            # Add a few nodes.
+            self._dircap = n.get_uri()
+            nm = n._nodemaker
+            # The uploaders may run at the same time, so we need two
+            # MutableData instances or they'll fight over offsets &c and
+            # break.
+            mutable_data = MutableData("data" * 100000)
+            mutable_data2 = MutableData("data" * 100000)
+            # Add both kinds of mutable node.
+            d1 = nm.create_mutable_file(mutable_data,
+                                        version=MDMF_VERSION)
+            d2 = nm.create_mutable_file(mutable_data2,
+                                        version=SDMF_VERSION)
+            # Add an immutable node. We do this through the directory,
+            # with add_file.
+            immutable_data = upload.Data("immutable data" * 100000,
+                                         convergence="")
+            d3 = n.add_file(u"immutable", immutable_data)
+            ds = [d1, d2, d3]
+            dl = defer.DeferredList(ds)
+            def _made_files((r1, r2, r3)):
+                self.failUnless(r1[0])
+                self.failUnless(r2[0])
+                self.failUnless(r3[0])
+
+                # r1, r2, and r3 contain nodes.
+                mdmf_node = r1[1]
+                sdmf_node = r2[1]
+                imm_node = r3[1]
+
+                self._mdmf_uri = mdmf_node.get_uri()
+                self._mdmf_readonly_uri = mdmf_node.get_readonly_uri()
+                self._sdmf_uri = sdmf_node.get_uri()
+                self._sdmf_readonly_uri = sdmf_node.get_readonly_uri()
+                self._imm_uri = imm_node.get_uri()
+
+                d1 = n.set_node(u"mdmf", mdmf_node)
+                d2 = n.set_node(u"sdmf", sdmf_node)
+                return defer.DeferredList([d1, d2])
+            # We can now list the directory by listing self._dircap.
+            dl.addCallback(_made_files)
+            return dl
+        d.addCallback(_got_rootnode)
+        return d
+
+    def test_list_mdmf(self):
+        # 'tahoe ls' should include MDMF files.
+        self.basedir = "cli/List/list_mdmf"
+        self.set_up_grid()
+        d = self._create_directory_structure()
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", self._dircap))
+        def _got_ls((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(err, "")
+            self.failUnlessIn("immutable", out)
+            self.failUnlessIn("mdmf", out)
+            self.failUnlessIn("sdmf", out)
+        d.addCallback(_got_ls)
+        return d
+
+    def test_list_mdmf_json(self):
+        # 'tahoe ls' should include MDMF caps when invoked with MDMF
+        # caps.
+        self.basedir = "cli/List/list_mdmf_json"
+        self.set_up_grid()
+        d = self._create_directory_structure()
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", self._dircap))
+        def _got_json((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(err, "")
+            self.failUnlessIn(self._mdmf_uri, out)
+            self.failUnlessIn(self._mdmf_readonly_uri, out)
+            self.failUnlessIn(self._sdmf_uri, out)
+            self.failUnlessIn(self._sdmf_readonly_uri, out)
+            self.failUnlessIn(self._imm_uri, out)
+            self.failUnlessIn('"mutable-type": "sdmf"', out)
+            self.failUnlessIn('"mutable-type": "mdmf"', out)
+        d.addCallback(_got_json)
+        return d
+
 
 class Mv(GridTestMixin, CLITestMixin, unittest.TestCase):
     def test_mv_behavior(self):
}
[test/test_mutable.py: write a test for pausing during retrieval, write support structure for that test
Kevan Carstensen <kevan@isnotajoke.com>**20110515230207
 Ignore-this: 8884ef3ad5be59dbc870ed14002ac45
] {
hunk ./src/allmydata/test/test_mutable.py 6
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.internet import defer, reactor
+from twisted.internet.interfaces import IConsumer
+from zope.interface import implements
 from allmydata import uri, client
 from allmydata.nodemaker import NodeMaker
 from allmydata.util import base32, consumer
hunk ./src/allmydata/test/test_mutable.py 466
         return d
 
 
+    def test_retrieve_pause(self):
+        # We should make sure that the retriever is able to pause
+        # correctly.
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(node):
+            self.node = node
+
+            return node.overwrite(MutableData("contents1" * 100000))
+        d.addCallback(_created)
+        # Now we'll retrieve it into a pausing consumer.
+        d.addCallback(lambda ignored:
+            self.node.get_best_mutable_version())
+        def _got_version(version):
+            self.c = PausingConsumer()
+            return version.read(self.c)
+        d.addCallback(_got_version)
+        d.addCallback(lambda ignored:
+            self.failUnlessEqual(self.c.data, "contents1" * 100000))
+        return d
+    test_retrieve_pause.timeout = 25
+
+
     def test_download_from_mdmf_cap(self):
         # We should be able to download an MDMF file given its cap
         d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
hunk ./src/allmydata/test/test_mutable.py 944
                     index = versionmap[shnum]
                     shares[peerid][shnum] = oldshares[index][peerid][shnum]
 
+class PausingConsumer:
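+    """A test IConsumer that pauses its producer after the first write,
+    then resumes it a little later to exercise the retriever's
+    pause/resume logic."""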
+    implements(IConsumer)
+    def __init__(self):
+        self.data = ""
+        self.already_paused = False
+
+    def registerProducer(self, producer, streaming):
+        self.producer = producer
+        self.producer.resumeProducing()
 
hunk ./src/allmydata/test/test_mutable.py 954
+    def unregisterProducer(self):
+        self.producer = None
+
+    def _unpause(self, ignored):
+        self.producer.resumeProducing()
+
+    def write(self, data):
+        self.data += data
+        if not self.already_paused:
+            self.producer.pauseProducing()
+            self.already_paused = True
+            reactor.callLater(15, self._unpause, None)
 
 
 class Servermap(unittest.TestCase, PublishMixin):
}
[test/test_mutable.py: implement cap type checking
Kevan Carstensen <kevan@isnotajoke.com>**20110515230326
 Ignore-this: 64cf51b809605061047c8a1b02f5e212
] hunk ./src/allmydata/test/test_mutable.py 2904
 
 
     def test_cap_after_upload(self):
-        self.failUnless(False)
+        # If we create a new mutable file and upload things to it, and
+        # it's an MDMF file, we should get an MDMF cap back from that
+        # file and should be able to use that.
+        # That's essentially what MDMF node is, so just check that.
+        mdmf_uri = self.mdmf_node.get_uri()
+        cap = uri.from_string(mdmf_uri)
+        self.failUnless(isinstance(cap, uri.WritableMDMFFileURI))
+        readonly_mdmf_uri = self.mdmf_node.get_readonly_uri()
+        cap = uri.from_string(readonly_mdmf_uri)
+        self.failUnless(isinstance(cap, uri.ReadonlyMDMFFileURI))
 
 
     def test_get_writekey(self):
[test/test_web: add MDMF cap tests
Kevan Carstensen <kevan@isnotajoke.com>**20110515230358
 Ignore-this: ace5af3bdc9b65c3f6964c8fe056816
] {
hunk ./src/allmydata/test/test_web.py 27
 from allmydata.util.netstring import split_netstring
 from allmydata.util.encodingutil import to_str
 from allmydata.test.common import FakeCHKFileNode, FakeMutableFileNode, \
-     create_chk_filenode, WebErrorMixin, ShouldFailMixin, make_mutable_file_uri
+     create_chk_filenode, WebErrorMixin, ShouldFailMixin, \
+     make_mutable_file_uri, create_mutable_filenode
 from allmydata.interfaces import IMutableFileNode, SDMF_VERSION, MDMF_VERSION
 from allmydata.mutable import servermap, publish, retrieve
 import allmydata.test.common_util as testutil
hunk ./src/allmydata/test/test_web.py 208
             foo.set_uri(u"bar.txt", self._bar_txt_uri, self._bar_txt_uri)
             self._bar_txt_verifycap = n.get_verify_cap().to_string()
 
+            # sdmf
+            # XXX: Do we ever use this?
+            self.BAZ_CONTENTS, n, self._baz_txt_uri, self._baz_txt_readonly_uri = self.makefile_mutable(0)
+
+            foo.set_uri(u"baz.txt", self._baz_txt_uri, self._baz_txt_readonly_uri)
+
+            # mdmf
+            self.QUUX_CONTENTS, n, self._quux_txt_uri, self._quux_txt_readonly_uri = self.makefile_mutable(0, mdmf=True)
+            assert self._quux_txt_uri.startswith("URI:MDMF")
+            foo.set_uri(u"quux.txt", self._quux_txt_uri, self._quux_txt_readonly_uri)
+
             foo.set_uri(u"empty", res[3][1].get_uri(),
                         res[3][1].get_readonly_uri())
             sub_uri = res[4][1].get_uri()
hunk ./src/allmydata/test/test_web.py 250
             # public/
             # public/foo/
             # public/foo/bar.txt
+            # public/foo/baz.txt
+            # public/foo/quux.txt
             # public/foo/blockingfile
             # public/foo/empty/
             # public/foo/sub/
hunk ./src/allmydata/test/test_web.py 272
         n = create_chk_filenode(contents)
         return contents, n, n.get_uri()
 
+    def makefile_mutable(self, number, mdmf=False):
+        contents = "contents of mutable file %s\n" % number
+        n = create_mutable_filenode(contents, mdmf)
+        return contents, n, n.get_uri(), n.get_readonly_uri()
+
     def tearDown(self):
         return self.s.stopService()
 
hunk ./src/allmydata/test/test_web.py 283
     def failUnlessIsBarDotTxt(self, res):
         self.failUnlessReallyEqual(res, self.BAR_CONTENTS, res)
 
+    def failUnlessIsQuuxDotTxt(self, res):
+        self.failUnlessReallyEqual(res, self.QUUX_CONTENTS, res)
+
+    def failUnlessIsBazDotTxt(self, res):
+        self.failUnlessReallyEqual(res, self.BAZ_CONTENTS, res)
+
     def failUnlessIsBarJSON(self, res):
         data = simplejson.loads(res)
         self.failUnless(isinstance(data, list))
hunk ./src/allmydata/test/test_web.py 300
         self.failUnlessReallyEqual(to_str(data[1]["verify_uri"]), self._bar_txt_verifycap)
         self.failUnlessReallyEqual(data[1]["size"], len(self.BAR_CONTENTS))
 
+    def failUnlessIsQuuxJSON(self, res):
+        data = simplejson.loads(res)
+        self.failUnless(isinstance(data, list))
+        self.failUnlessEqual(data[0], "filenode")
+        self.failUnless(isinstance(data[1], dict))
+        metadata = data[1]
+        return self.failUnlessIsQuuxDotTxtMetadata(metadata)
+
+    def failUnlessIsQuuxDotTxtMetadata(self, metadata):
+        self.failUnless(metadata['mutable'])
+        self.failUnless("rw_uri" in metadata)
+        self.failUnlessEqual(metadata['rw_uri'], self._quux_txt_uri)
+        self.failUnless("ro_uri" in metadata)
+        self.failUnlessEqual(metadata['ro_uri'], self._quux_txt_readonly_uri)
+        self.failUnlessReallyEqual(metadata['size'], len(self.QUUX_CONTENTS))
+
     def failUnlessIsFooJSON(self, res):
         data = simplejson.loads(res)
         self.failUnless(isinstance(data, list))
hunk ./src/allmydata/test/test_web.py 329
 
         kidnames = sorted([unicode(n) for n in data[1]["children"]])
         self.failUnlessEqual(kidnames,
-                             [u"bar.txt", u"blockingfile", u"empty",
-                              u"n\u00fc.txt", u"sub"])
+                             [u"bar.txt", u"baz.txt", u"blockingfile",
+                              u"empty", u"n\u00fc.txt", u"quux.txt", u"sub"])
         kids = dict( [(unicode(name),value)
                       for (name,value)
                       in data[1]["children"].iteritems()] )
hunk ./src/allmydata/test/test_web.py 351
                                    self._bar_txt_metadata["tahoe"]["linkcrtime"])
         self.failUnlessReallyEqual(to_str(kids[u"n\u00fc.txt"][1]["ro_uri"]),
                                    self._bar_txt_uri)
+        self.failUnlessIn("quux.txt", kids)
+        self.failUnlessReallyEqual(to_str(kids[u"quux.txt"][1]["rw_uri"]),
+                                   self._quux_txt_uri)
+        self.failUnlessReallyEqual(to_str(kids[u"quux.txt"][1]["ro_uri"]),
+                                   self._quux_txt_readonly_uri)
 
     def GET(self, urlpath, followRedirect=False, return_response=False,
             **kwargs):
hunk ./src/allmydata/test/test_web.py 870
         d.addCallback(self.failUnlessIsBarDotTxt)
         return d
 
+    def test_GET_FILE_URI_mdmf(self):
+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
+        d = self.GET(base)
+        d.addCallback(self.failUnlessIsQuuxDotTxt)
+        return d
+
+    def test_GET_FILE_URI_mdmf_extensions(self):
+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
+        d = self.GET(base)
+        d.addCallback(self.failUnlessIsQuuxDotTxt)
+        return d
+
+    def test_GET_FILE_URI_mdmf_readonly(self):
+        base = "/uri/%s" % urllib.quote(self._quux_txt_readonly_uri)
+        d = self.GET(base)
+        d.addCallback(self.failUnlessIsQuuxDotTxt)
+        return d
+
     def test_GET_FILE_URI_badchild(self):
         base = "/uri/%s/boguschild" % urllib.quote(self._bar_txt_uri)
         errmsg = "Files have no children, certainly not named 'boguschild'"
hunk ./src/allmydata/test/test_web.py 904
                              self.PUT, base, "")
         return d
 
+    def test_PUT_FILE_URI_mdmf(self):
+        base = "/uri/%s" % urllib.quote(self._quux_txt_uri)
+        self._quux_new_contents = "new_contents"
+        d = self.GET(base)
+        d.addCallback(lambda res:
+            self.failUnlessIsQuuxDotTxt(res))
+        d.addCallback(lambda ignored:
+            self.PUT(base, self._quux_new_contents))
+        d.addCallback(lambda ignored:
+            self.GET(base))
+        d.addCallback(lambda res:
+            self.failUnlessReallyEqual(res, self._quux_new_contents))
+        return d
+
+    def test_PUT_FILE_URI_mdmf_extensions(self):
+        base = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
+        self._quux_new_contents = "new_contents"
+        d = self.GET(base)
+        d.addCallback(lambda res: self.failUnlessIsQuuxDotTxt(res))
+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
+        d.addCallback(lambda ignored: self.GET(base))
+        d.addCallback(lambda res: self.failUnlessEqual(self._quux_new_contents,
+                                                       res))
+        return d
+
+    def test_PUT_FILE_URI_mdmf_readonly(self):
+        # We're not allowed to PUT things to a readonly cap.
+        base = "/uri/%s" % self._quux_txt_readonly_uri
+        d = self.GET(base)
+        d.addCallback(lambda res:
+            self.failUnlessIsQuuxDotTxt(res))
+        # A PUT to a read-only cap should fail cleanly with a
+        # 400 Bad Request, not the internal 500 error we used to get.
+        d.addCallback(lambda ignored:
+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_mdmf_readonly",
+                             "400 Bad Request", "read-only cap",
+                             self.PUT, base, "new data"))
+        return d
+
+    def test_PUT_FILE_URI_sdmf_readonly(self):
+        # We're not allowed to PUT things to a readonly cap.
+        base = "/uri/%s" % self._baz_txt_readonly_uri
+        d = self.GET(base)
+        d.addCallback(lambda res:
+            self.failUnlessIsBazDotTxt(res))
+        d.addCallback(lambda ignored:
+            self.shouldFail2(error.Error, "test_PUT_FILE_URI_sdmf_readonly",
+                             "400 Bad Request", "read-only cap",
+                             self.PUT, base, "new_data"))
+        return d
+
     # TODO: version of this with a Unicode filename
     def test_GET_FILEURL_save(self):
         d = self.GET(self.public_url + "/foo/bar.txt?filename=bar.txt&save=true",
hunk ./src/allmydata/test/test_web.py 970
         d.addBoth(self.should404, "test_GET_FILEURL_missing")
         return d
 
+    def test_GET_FILEURL_info_mdmf(self):
+        d = self.GET("/uri/%s?t=info" % self._quux_txt_uri)
+        def _got(res):
+            self.failUnlessIn("mutable file (mdmf)", res)
+            self.failUnlessIn(self._quux_txt_uri, res)
+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
+        d.addCallback(_got)
+        return d
+
+    def test_GET_FILEURL_info_mdmf_readonly(self):
+        d = self.GET("/uri/%s?t=info" % self._quux_txt_readonly_uri)
+        def _got(res):
+            self.failUnlessIn("mutable file (mdmf)", res)
+            self.failIfIn(self._quux_txt_uri, res)
+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
+        d.addCallback(_got)
+        return d
+
+    def test_GET_FILEURL_info_sdmf(self):
+        d = self.GET("/uri/%s?t=info" % self._baz_txt_uri)
+        def _got(res):
+            self.failUnlessIn("mutable file (sdmf)", res)
+            self.failUnlessIn(self._baz_txt_uri, res)
+        d.addCallback(_got)
+        return d
+
+    def test_GET_FILEURL_info_mdmf_extensions(self):
+        d = self.GET("/uri/%s:3:131073?t=info" % self._quux_txt_uri)
+        def _got(res):
+            self.failUnlessIn("mutable file (mdmf)", res)
+            self.failUnlessIn(self._quux_txt_uri, res)
+            self.failUnlessIn(self._quux_txt_readonly_uri, res)
+        d.addCallback(_got)
+        return d
+
     def test_PUT_overwrite_only_files(self):
         # create a directory, put a file in that directory.
         contents, n, filecap = self.makefile(8)
hunk ./src/allmydata/test/test_web.py 1052
         contents = self.NEWFILE_CONTENTS * 300000
         d = self.PUT("/uri?mutable=true&mutable-type=mdmf",
                      contents)
+        def _got_filecap(filecap):
+            self.failUnless(filecap.startswith("URI:MDMF"))
+            return filecap
+        d.addCallback(_got_filecap)
         d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
         d.addCallback(lambda json: self.failUnlessIn("mdmf", json))
         return d
hunk ./src/allmydata/test/test_web.py 1222
         d.addCallback(_got_json, "sdmf")
         return d
 
+    def test_GET_FILEURL_json_mdmf_extensions(self):
+        # A GET invoked against a URL that includes an MDMF cap with
+        # extensions should fetch the same JSON information as a GET
+        # invoked against a bare cap.
+        self._quux_txt_uri = "%s:3:131073" % self._quux_txt_uri
+        self._quux_txt_readonly_uri = "%s:3:131073" % self._quux_txt_readonly_uri
+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
+        d.addCallback(self.failUnlessIsQuuxJSON)
+        return d
+
+    def test_GET_FILEURL_json_mdmf(self):
+        d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
+        d.addCallback(self.failUnlessIsQuuxJSON)
+        return d
+
     def test_GET_FILEURL_json_missing(self):
         d = self.GET(self.public_url + "/foo/missing?json")
         d.addBoth(self.should404, "test_GET_FILEURL_json_missing")
hunk ./src/allmydata/test/test_web.py 1281
             self.failUnlessIn('<div class="toolbar-item"><a href="../../..">Return to Welcome page</a></div>',res)
             self.failUnlessIn("mutable-type-mdmf", res)
             self.failUnlessIn("mutable-type-sdmf", res)
+            self.failUnlessIn("quux", res)
         d.addCallback(_check)
         return d
 
hunk ./src/allmydata/test/test_web.py 1539
         d.addCallback(self.get_operation_results, "127", "json")
         def _got_json(stats):
             expected = {"count-immutable-files": 3,
-                        "count-mutable-files": 0,
+                        "count-mutable-files": 2,
                         "count-literal-files": 0,
hunk ./src/allmydata/test/test_web.py 1541
-                        "count-files": 3,
+                        "count-files": 5,
                         "count-directories": 3,
                         "size-immutable-files": 57,
                         "size-literal-files": 0,
hunk ./src/allmydata/test/test_web.py 1547
                         #"size-directories": 1912, # varies
                         #"largest-directory": 1590,
-                        "largest-directory-children": 5,
+                        "largest-directory-children": 7,
                         "largest-immutable-file": 19,
                         }
             for k,v in expected.iteritems():
hunk ./src/allmydata/test/test_web.py 1564
         def _check(res):
             self.failUnless(res.endswith("\n"))
             units = [simplejson.loads(t) for t in res[:-1].split("\n")]
-            self.failUnlessReallyEqual(len(units), 7)
+            self.failUnlessReallyEqual(len(units), 9)
             self.failUnlessEqual(units[-1]["type"], "stats")
             first = units[0]
             self.failUnlessEqual(first["path"], [])
hunk ./src/allmydata/test/test_web.py 1575
             self.failIfEqual(baz["storage-index"], None)
             self.failIfEqual(baz["verifycap"], None)
             self.failIfEqual(baz["repaircap"], None)
+            # XXX: Add quux and baz to this test.
             return
         d.addCallback(_check)
         return d
hunk ./src/allmydata/test/test_web.py 2021
         d.addCallback(lambda ignored:
             self.POST("/uri?t=upload&mutable=true&mutable-type=mdmf",
                       file=('mdmf.txt', self.NEWFILE_CONTENTS * 300000)))
+        def _got_filecap(filecap):
+            self.failUnless(filecap.startswith("URI:MDMF"))
+            return filecap
+        d.addCallback(_got_filecap)
         d.addCallback(lambda filecap: self.GET("/uri/%s?t=json" % filecap))
         d.addCallback(_got_json, "mdmf")
         return d
hunk ./src/allmydata/test/test_web.py 2038
             filenameu = unicode(filename)
             self.failUnlessURIMatchesRWChild(filecap, fn, filenameu)
             return self.GET(self.public_url + "/foo/%s?t=json" % filename)
+        def _got_mdmf_cap(filecap):
+            self.failUnless(filecap.startswith("URI:MDMF"))
+            return filecap
         d.addCallback(_got_cap, "sdmf.txt")
         def _got_json(json, version):
             data = simplejson.loads(json)
hunk ./src/allmydata/test/test_web.py 2053
             self.POST(self.public_url + \
                       "/foo?t=upload&mutable=true&mutable-type=mdmf",
                       file=("mdmf.txt", self.NEWFILE_CONTENTS * 300000)))
+        d.addCallback(_got_mdmf_cap)
         d.addCallback(_got_cap, "mdmf.txt")
         d.addCallback(_got_json, "mdmf")
         return d
hunk ./src/allmydata/test/test_web.py 2287
         # make sure that nothing was added
         d.addCallback(lambda res:
                       self.failUnlessNodeKeysAre(self._foo_node,
-                                                 [u"bar.txt", u"blockingfile",
-                                                  u"empty", u"n\u00fc.txt",
+                                                 [u"bar.txt", u"baz.txt", u"blockingfile",
+                                                  u"empty", u"n\u00fc.txt", u"quux.txt",
                                                   u"sub"]))
         return d
 
hunk ./src/allmydata/test/test_web.py 2410
         d.addCallback(_check3)
         return d
 
+    def test_POST_FILEURL_mdmf_check(self):
+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
+        d = self.POST(quux_url, t="check")
+        def _check(res):
+            self.failUnlessIn("Healthy", res)
+        d.addCallback(_check)
+        quux_extension_url = "/uri/%s" % urllib.quote("%s:3:131073" % self._quux_txt_uri)
+        d.addCallback(lambda ignored:
+            self.POST(quux_extension_url, t="check"))
+        d.addCallback(_check)
+        return d
+
+    def test_POST_FILEURL_mdmf_check_and_repair(self):
+        quux_url = "/uri/%s" % urllib.quote(self._quux_txt_uri)
+        d = self.POST(quux_url, t="check", repair="true")
+        def _check(res):
+            self.failUnlessIn("Healthy", res)
+        d.addCallback(_check)
+        quux_extension_url = "/uri/%s" %\
+            urllib.quote("%s:3:131073" % self._quux_txt_uri)
+        d.addCallback(lambda ignored:
+            self.POST(quux_extension_url, t="check", repair="true"))
+        d.addCallback(_check)
+        return d
+
     def wait_for_operation(self, ignored, ophandle):
         url = "/operations/" + ophandle
         url += "?t=status&output=JSON"
hunk ./src/allmydata/test/test_web.py 2480
         d.addCallback(self.wait_for_operation, "123")
         def _check_json(data):
             self.failUnlessReallyEqual(data["finished"], True)
-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
-            self.failUnlessReallyEqual(data["count-objects-healthy"], 8)
+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
+            self.failUnlessReallyEqual(data["count-objects-healthy"], 10)
         d.addCallback(_check_json)
         d.addCallback(self.get_operation_results, "123", "html")
         def _check_html(res):
hunk ./src/allmydata/test/test_web.py 2485
-            self.failUnless("Objects Checked: <span>8</span>" in res)
-            self.failUnless("Objects Healthy: <span>8</span>" in res)
+            self.failUnless("Objects Checked: <span>10</span>" in res)
+            self.failUnless("Objects Healthy: <span>10</span>" in res)
         d.addCallback(_check_html)
 
         d.addCallback(lambda res:
hunk ./src/allmydata/test/test_web.py 2515
         d.addCallback(self.wait_for_operation, "124")
         def _check_json(data):
             self.failUnlessReallyEqual(data["finished"], True)
-            self.failUnlessReallyEqual(data["count-objects-checked"], 8)
-            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 8)
+            self.failUnlessReallyEqual(data["count-objects-checked"], 10)
+            self.failUnlessReallyEqual(data["count-objects-healthy-pre-repair"], 10)
             self.failUnlessReallyEqual(data["count-objects-unhealthy-pre-repair"], 0)
             self.failUnlessReallyEqual(data["count-corrupt-shares-pre-repair"], 0)
             self.failUnlessReallyEqual(data["count-repairs-attempted"], 0)
hunk ./src/allmydata/test/test_web.py 2522
             self.failUnlessReallyEqual(data["count-repairs-successful"], 0)
             self.failUnlessReallyEqual(data["count-repairs-unsuccessful"], 0)
-            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 8)
+            self.failUnlessReallyEqual(data["count-objects-healthy-post-repair"], 10)
             self.failUnlessReallyEqual(data["count-objects-unhealthy-post-repair"], 0)
             self.failUnlessReallyEqual(data["count-corrupt-shares-post-repair"], 0)
         d.addCallback(_check_json)
hunk ./src/allmydata/test/test_web.py 2528
         d.addCallback(self.get_operation_results, "124", "html")
         def _check_html(res):
-            self.failUnless("Objects Checked: <span>8</span>" in res)
+            self.failUnless("Objects Checked: <span>10</span>" in res)
 
hunk ./src/allmydata/test/test_web.py 2530
-            self.failUnless("Objects Healthy (before repair): <span>8</span>" in res)
+            self.failUnless("Objects Healthy (before repair): <span>10</span>" in res)
             self.failUnless("Objects Unhealthy (before repair): <span>0</span>" in res)
             self.failUnless("Corrupt Shares (before repair): <span>0</span>" in res)
 
hunk ./src/allmydata/test/test_web.py 2538
             self.failUnless("Repairs Successful: <span>0</span>" in res)
             self.failUnless("Repairs Unsuccessful: <span>0</span>" in res)
 
-            self.failUnless("Objects Healthy (after repair): <span>8</span>" in res)
+            self.failUnless("Objects Healthy (after repair): <span>10</span>" in res)
             self.failUnless("Objects Unhealthy (after repair): <span>0</span>" in res)
             self.failUnless("Corrupt Shares (after repair): <span>0</span>" in res)
         d.addCallback(_check_html)
hunk ./src/allmydata/test/test_web.py 2668
         filecap3 = node3.get_readonly_uri()
         node4 = self.s.create_node_from_uri(make_mutable_file_uri())
         dircap = DirectoryNode(node4, None, None).get_uri()
+        mdmfcap = make_mutable_file_uri(mdmf=True)
         litdircap = "URI:DIR2-LIT:ge3dumj2mewdcotyfqydulbshj5x2lbm"
         emptydircap = "URI:DIR2-LIT:"
         newkids = {u"child-imm":        ["filenode", {"rw_uri": filecap1,
hunk ./src/allmydata/test/test_web.py 2685
                                                       "ro_uri": self._make_readonly(dircap)}],
                    u"dirchild-lit":     ["dirnode",  {"ro_uri": litdircap}],
                    u"dirchild-empty":   ["dirnode",  {"ro_uri": emptydircap}],
+                   u"child-mutable-mdmf": ["filenode", {"rw_uri": mdmfcap,
+                                                        "ro_uri": self._make_readonly(mdmfcap)}],
                    }
         return newkids, {'filecap1': filecap1,
                          'filecap2': filecap2,
hunk ./src/allmydata/test/test_web.py 2696
                          'unknown_immcap': unknown_immcap,
                          'dircap': dircap,
                          'litdircap': litdircap,
-                         'emptydircap': emptydircap}
+                         'emptydircap': emptydircap,
+                         'mdmfcap': mdmfcap}
 
     def _create_immutable_children(self):
         contents, n, filecap1 = self.makefile(12)
hunk ./src/allmydata/test/test_web.py 3243
             data = data[1]
             self.failUnlessIn("mutable-type", data)
             self.failUnlessEqual(data['mutable-type'], "mdmf")
+            self.failUnless(data['rw_uri'].startswith("URI:MDMF"))
+            self.failUnless(data['ro_uri'].startswith("URI:MDMF"))
         d.addCallback(_got_json)
         return d
 
}
[web/filenode.py: complain if a PUT is requested with a readonly cap
Kevan Carstensen <kevan@isnotajoke.com>**20110515230421
 Ignore-this: e2f05201f3b008e157062ed187eacbb9
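 
 Previously, a PUT aimed at a read-only cap slipped past the web layer
 and died deeper in the replace/update machinery, surfacing as a 500
 error. The new guard fails fast instead. A minimal sketch of the
 check (request plumbing elided):
 
   if self.node.is_mutable() and self.node.is_readonly():
       raise WebError("PUT to a mutable file: replace or update"
                      " requested with read-only cap")
 
 The accompanying web tests expect this to surface as 400 Bad Request.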
] hunk ./src/allmydata/web/filenode.py 229
                 raise ExistingChildError()
 
             if self.node.is_mutable():
+                # Are we a readonly filenode? We shouldn't allow callers
+                # to try to replace us if we are.
+                if self.node.is_readonly():
+                    raise WebError("PUT to a mutable file: replace or update"
+                                   " requested with read-only cap")
                 if offset is None:
                     return self.replace_my_contents(req)
 
[web/info.py: Display mutable type information when describing a mutable file
Kevan Carstensen <kevan@isnotajoke.com>**20110515230444
 Ignore-this: ce5ad22b494effe6c15e49471fae0d99
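 
 The t=info page used to say just "mutable file" for every mutable
 node; it now appends the protocol in parentheses, e.g. "mutable file
 (mdmf)". Sketch of the added logic:
 
   if node.get_version() == MDMF_VERSION:
       return "mutable file (mdmf)"
   else:
       return "mutable file (sdmf)"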
] {
hunk ./src/allmydata/web/info.py 8
 from nevow.inevow import IRequest
 
 from allmydata.util import base32
-from allmydata.interfaces import IDirectoryNode, IFileNode
+from allmydata.interfaces import IDirectoryNode, IFileNode, MDMF_VERSION, SDMF_VERSION
 from allmydata.web.common import getxmlfile
 from allmydata.mutable.common import UnrecoverableFileError # TODO: move
 
hunk ./src/allmydata/web/info.py 31
             si = node.get_storage_index()
             if si:
                 if node.is_mutable():
-                    return "mutable file"
+                    ret = "mutable file"
+                    if node.get_version() == MDMF_VERSION:
+                        ret += " (mdmf)"
+                    else:
+                        ret += " (sdmf)"
+                    return ret
                 return "immutable file"
             return "immutable LIT file"
         return "unknown"
}
[uri: teach mutable URI objects how to allow other objects to give them extension parameters
Kevan Carstensen <kevan@isnotajoke.com>**20110531012036
 Ignore-this: 96c06cee1efe5a92a5ed8d87ca09a7dd
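 
 get_extension_params() already let callers read extension fields out
 of a cap; this adds the write half. The SDMF classes accept the call
 and ignore it, so callers need not special-case cap types. For an
 MDMF URI object u (hypothetical values):
 
   u.set_extension_params([3, 131073])
   u.get_extension_params()   # => ['3', '131073']
 
 Parameters are stringified on the way in because they are ultimately
 serialized into the cap string.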
] {
hunk ./src/allmydata/uri.py 300
     def get_extension_params(self):
         return []
 
+    def set_extension_params(self, params):
+        pass
+
 class ReadonlySSKFileURI(_BaseURI):
     implements(IURI, IMutableFileURI)
 
hunk ./src/allmydata/uri.py 360
     def get_extension_params(self):
         return []
 
+    def set_extension_params(self, params):
+        pass
+
 class SSKVerifierURI(_BaseURI):
     implements(IVerifierURI)
 
hunk ./src/allmydata/uri.py 410
     def get_extension_params(self):
         return []
 
+    def set_extension_params(self, params):
+        pass
+
 class WritableMDMFFileURI(_BaseURI):
     implements(IURI, IMutableFileURI)
 
hunk ./src/allmydata/uri.py 480
     def get_extension_params(self):
         return self.extension
 
+    def set_extension_params(self, params):
+        params = map(str, params)
+        self.extension = params
+
 class ReadonlyMDMFFileURI(_BaseURI):
     implements(IURI, IMutableFileURI)
 
hunk ./src/allmydata/uri.py 552
     def get_extension_params(self):
         return self.extension
 
+    def set_extension_params(self, params):
+        params = map(str, params)
+        self.extension = params
+
 class MDMFVerifierURI(_BaseURI):
     implements(IVerifierURI)
 
}
[interfaces: working update to interfaces.py for extension handling
Kevan Carstensen <kevan@isnotajoke.com>**20110531012201
 Ignore-this: 559c43cbf14eec7ac163ebd00c0b7a36
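 
 set_extension_params() joins get_extension_params() on the URI
 interface, and set_version() leaves IMutableFileNode: the protocol
 version is now fixed at creation time (via create_with_keys and
 friends) or implied by the cap, rather than poked in afterwards.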
] {
hunk ./src/allmydata/interfaces.py 549
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
+    def set_extension_params(params):
+        """Set the extension parameters that should be in the URI"""
+
 class IDirectoryURI(Interface):
     pass
 
hunk ./src/allmydata/interfaces.py 1049
         writer-visible data using this writekey.
         """
 
-    def set_version(version):
-        """Tahoe-LAFS supports SDMF and MDMF mutable files. By default,
-        we upload in SDMF for reasons of compatibility. If you want to
-        change this, set_version will let you do that.
-
-        To say that this file should be uploaded in SDMF, pass in a 0. To
-        say that the file should be uploaded as MDMF, pass in a 1.
-        """
-
     def get_version():
         """Returns the mutable file protocol version."""
 
}
[mutable/publish: tell filenodes about encoding parameters so they can be put in the cap
Kevan Carstensen <kevan@isnotajoke.com>**20110531012447
 Ignore-this: cf19f07a6913208a327604457466f2f2
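 
 Concretely, when a publish finishes successfully the publisher hands
 its encoding parameters back to the filenode (sketch, as in the hunk
 below):
 
   hints = {'segsize': self.segment_size, 'k': self.required_shares}
   self._node.set_downloader_hints(hints)
 
 The filenode serializes these into the cap's extension fields, which
 is how freshly-published MDMF caps come to carry k and segsize.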
] hunk ./src/allmydata/mutable/publish.py 1146
         self.log("Publish done, success")
         self._status.set_status("Finished")
         self._status.set_progress(1.0)
+        # Get k and segsize, then give them to the caller.
+        hints = {}
+        hints['segsize'] = self.segment_size
+        hints['k'] = self.required_shares
+        self._node.set_downloader_hints(hints)
         eventually(self.done_deferred.callback, res)
 
     def _failure(self):
[mutable/servermap: caps imply protocol version, so the servermap doesn't need to tell the filenode what it is anymore.
Kevan Carstensen <kevan@isnotajoke.com>**20110531012557
 Ignore-this: 9925f5dde5452db92cdbc4a7d6adf1c1
] hunk ./src/allmydata/mutable/servermap.py 877
         # and the versionmap
         self.versionmap.add(verinfo, (shnum, peerid, timestamp))
 
-        # It's our job to set the protocol version of our parent
-        # filenode if it isn't already set.
-        if not self._node.get_version():
-            # The first byte of the prefix is the version.
-            v = struct.unpack(">B", prefix[:1])[0]
-            self.log("got version %d" % v)
-            self._node.set_version(v)
-
         return verinfo
 
 
[mutable/filenode: pass downloader hints between publisher, MutableFileNode, and MutableFileVersion as convenient
Kevan Carstensen <kevan@isnotajoke.com>**20110531012641
 Ignore-this: 672c586891abfa38397bcdf90b64ca72
 
 We still need to work on making this more thorough; i.e., passing hints
 when other operations change encoding parameters.
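 
 The flow is: a cap with extensions populates _downloader_hints in
 init_from_cap; get_best_readable_version() and
 get_best_mutable_version() hand those hints to each
 MutableFileVersion via set_downloader_hints(); and after a publish,
 the node's set_downloader_hints() pushes fresh parameters back into
 the cap with set_extension_params(). For example, a cap ending in
 ":3:131073" yields (sketch):
 
   self._downloader_hints = {'k': 3, 'segsize': 131073}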
] {
hunk ./src/allmydata/mutable/filenode.py 75
         # set to this default value in case neither of those things happen,
         # or in case the servermap can't find any shares to tell us what
         # to publish as.
-        # XXX: Version should come in via the constructor.
         self._protocol_version = None
 
         # all users of this MutableFileNode go through the serializer. This
hunk ./src/allmydata/mutable/filenode.py 83
         # forever without consuming more and more memory.
         self._serializer = defer.succeed(None)
 
+        # Starting with MDMF, we can get these from caps if they're
+        # there. Leave them alone for now; they'll be filled in by my
+        # init_from_cap method if necessary.
+        self._downloader_hints = {}
+
     def __repr__(self):
         if hasattr(self, '_uri'):
             return "<%s %x %s %s>" % (self.__class__.__name__, id(self), self.is_readonly() and 'RO' or 'RW', self._uri.abbrev())
hunk ./src/allmydata/mutable/filenode.py 120
         # if possible, otherwise by the first peer that Publish talks to.
         self._privkey = None
         self._encprivkey = None
+
+        # Starting with MDMF, caps can carry arbitrary extension
+        # parameters. If we were initialized with a cap that had
+        # extensions, we want to remember them so we can tell
+        # MutableFileVersions about them.
+        extensions = self._uri.get_extension_params()
+        if extensions:
+            extensions = map(int, extensions)
+            suspected_k, suspected_segsize = extensions
+            self._downloader_hints['k'] = suspected_k
+            self._downloader_hints['segsize'] = suspected_segsize
+
         return self
 
hunk ./src/allmydata/mutable/filenode.py 134
-    def create_with_keys(self, (pubkey, privkey), contents):
+    def create_with_keys(self, (pubkey, privkey), contents,
+                         version=SDMF_VERSION):
         """Call this to create a brand-new mutable file. It will create the
         shares, find homes for them, and upload the initial contents (created
         with the same rules as IClient.create_mutable_file() ). Returns a
hunk ./src/allmydata/mutable/filenode.py 148
         self._writekey = hashutil.ssk_writekey_hash(privkey_s)
         self._encprivkey = self._encrypt_privkey(self._writekey, privkey_s)
         self._fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s)
-        if self._protocol_version == MDMF_VERSION:
+        if version == MDMF_VERSION:
             self._uri = WritableMDMFFileURI(self._writekey, self._fingerprint)
hunk ./src/allmydata/mutable/filenode.py 150
-        else:
+            self._protocol_version = version
+        elif version == SDMF_VERSION:
             self._uri = WriteableSSKFileURI(self._writekey, self._fingerprint)
hunk ./src/allmydata/mutable/filenode.py 153
+            self._protocol_version = version
         self._readkey = self._uri.readkey
         self._storage_index = self._uri.storage_index
         initial_contents = self._get_initial_contents(contents)
hunk ./src/allmydata/mutable/filenode.py 365
                                      self._readkey,
                                      history=self._history)
             assert mfv.is_readonly()
+            mfv.set_downloader_hints(self._downloader_hints)
             # our caller can use this to download the contents of the
             # mutable file.
             return mfv
hunk ./src/allmydata/mutable/filenode.py 520
                                      self._secret_holder,
                                      history=self._history)
             assert not mfv.is_readonly()
+            mfv.set_downloader_hints(self._downloader_hints)
             return mfv
 
         return d.addCallback(_build_version)
hunk ./src/allmydata/mutable/filenode.py 549
         new_contents as an argument. I return a Deferred that eventually
         fires with the results of my replacement process.
         """
+        # TODO: Update downloader hints.
         return self._do_serialized(self._overwrite, new_contents)
 
 
hunk ./src/allmydata/mutable/filenode.py 563
         return d
 
 
-
     def upload(self, new_contents, servermap):
         """
         I overwrite the contents of the best recoverable version of this
hunk ./src/allmydata/mutable/filenode.py 570
         creating/updating our own servermap. I return a Deferred that
         fires with the results of my upload.
         """
+        # TODO: Update downloader hints
         return self._do_serialized(self._upload, new_contents, servermap)
 
 
hunk ./src/allmydata/mutable/filenode.py 582
         Deferred that eventually fires with an UploadResults instance
         describing this process.
         """
+        # TODO: Update downloader hints.
         return self._do_serialized(self._modify, modifier, backoffer)
 
 
hunk ./src/allmydata/mutable/filenode.py 650
         return u.update()
 
 
-    def set_version(self, version):
+    #def set_version(self, version):
         # I can be set in two ways:
         #  1. When the node is created.
         #  2. (for an existing share) when the Servermap is updated 
hunk ./src/allmydata/mutable/filenode.py 655
         #     before I am read.
-        assert version in (MDMF_VERSION, SDMF_VERSION)
-        self._protocol_version = version
+    #    assert version in (MDMF_VERSION, SDMF_VERSION)
+    #    self._protocol_version = version
 
 
     def get_version(self):
hunk ./src/allmydata/mutable/filenode.py 691
         """
         assert self._pubkey, "update_servermap must be called before publish"
 
+        # Define IPublishInvoker with a set_downloader_hints method?
+        # Then have the publisher call that method when it's done publishing?
         p = Publish(self, self._storage_broker, servermap)
         if self._history:
             self._history.notify_publish(p.get_status(),
hunk ./src/allmydata/mutable/filenode.py 702
         return d
 
 
+    def set_downloader_hints(self, hints):
+        self._downloader_hints = hints
+        # init_from_cap expects k first and segsize second, so build
+        # the list explicitly instead of relying on dict ordering.
+        extensions = [hints["k"], hints["segsize"]]
+        self._uri.set_extension_params(extensions)
+
+
     def _did_upload(self, res, size):
         self._most_recent_size = size
         return res
hunk ./src/allmydata/mutable/filenode.py 769
         return self._writekey
 
 
+    def set_downloader_hints(self, hints):
+        """
+        I set the downloader hints.
+        """
+        assert isinstance(hints, dict)
+
+        self._downloader_hints = hints
+
+
+    def get_downloader_hints(self):
+        """
+        I return the downloader hints.
+        """
+        return self._downloader_hints
+
+
     def overwrite(self, new_contents):
         """
         I overwrite the contents of this mutable file version with the
hunk ./src/allmydata/nodemaker.py 97
                             version=SDMF_VERSION):
         n = MutableFileNode(self.storage_broker, self.secret_holder,
                             self.default_encoding_parameters, self.history)
-        n.set_version(version)
         d = self.key_generator.generate(keysize)
hunk ./src/allmydata/nodemaker.py 98
-        d.addCallback(n.create_with_keys, contents)
+        d.addCallback(n.create_with_keys, contents, version=version)
         d.addCallback(lambda res: n)
         return d
 
}
[test: change test fixtures to work with our new extension passing API; add, change, and delete tests as appropriate to reflect the fact that caps without hints are now the exception rather than the norm
Kevan Carstensen <kevan@isnotajoke.com>**20110531012739
 Ignore-this: 30ebf79b5f6c17f40fa4385de12070a0
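 
 "Bare" caps, as these tests use the term, are MDMF caps with the
 trailing k and segsize extension fields stripped, e.g. (hypothetical
 values):
 
   URI:MDMF:writekey:fingerprint:3:131073   # with hints
   URI:MDMF:writekey:fingerprint            # bare
 
 Nodes must keep working when handed the bare form, so several tests
 split the cap on ":" and drop the last two fields before exercising
 GET, PUT, t=json, and t=info against it.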
] {
hunk ./src/allmydata/test/common.py 198
     def __init__(self, storage_broker, secret_holder,
                  default_encoding_parameters, history):
         self.init_from_cap(make_mutable_file_cap())
-    def create(self, contents, key_generator=None, keysize=None):
-        if self.file_types[self.storage_index] == MDMF_VERSION and \
+        self._k = default_encoding_parameters['k']
+        self._segsize = default_encoding_parameters['max_segment_size']
+    def create(self, contents, key_generator=None, keysize=None,
+               version=SDMF_VERSION):
+        if version == MDMF_VERSION and \
             isinstance(self.my_uri, (uri.ReadonlySSKFileURI,
                                  uri.WriteableSSKFileURI)):
             self.init_from_cap(make_mdmf_mutable_file_cap())
hunk ./src/allmydata/test/common.py 206
+        self.file_types[self.storage_index] = version
         initial_contents = self._get_initial_contents(contents)
         data = initial_contents.read(initial_contents.get_size())
         data = "".join(data)
hunk ./src/allmydata/test/common.py 211
         self.all_contents[self.storage_index] = data
+        self.my_uri.set_extension_params([self._k, self._segsize])
         return defer.succeed(self)
     def _get_initial_contents(self, contents):
         if contents is None:
hunk ./src/allmydata/test/common.py 283
     def get_servermap(self, mode):
         return defer.succeed(None)
 
-    def set_version(self, version):
-        assert version in (SDMF_VERSION, MDMF_VERSION)
-        self.file_types[self.storage_index] = version
-
     def get_version(self):
         assert self.storage_index in self.file_types
         return self.file_types[self.storage_index]
hunk ./src/allmydata/test/common.py 361
         new_data = new_contents.read(new_contents.get_size())
         new_data = "".join(new_data)
         self.all_contents[self.storage_index] = new_data
+        self.my_uri.set_extension_params([self._k, self._segsize])
         return defer.succeed(None)
     def modify(self, modifier):
         # this does not implement FileTooLargeError, but the real one does
hunk ./src/allmydata/test/common.py 371
         old_contents = self.all_contents[self.storage_index]
         new_data = modifier(old_contents, None, True)
         self.all_contents[self.storage_index] = new_data
+        self.my_uri.set_extension_params([self._k, self._segsize])
         return None
 
     # As actually implemented, MutableFilenode and MutableFileVersion
hunk ./src/allmydata/test/common.py 433
     else:
         cap = make_mutable_file_cap()
 
-    filenode = FakeMutableFileNode(None, None, None, None)
+    encoding_params = {}
+    encoding_params['k'] = 3
+    encoding_params['max_segment_size'] = 128*1024
+
+    filenode = FakeMutableFileNode(None, None, encoding_params, None)
     filenode.init_from_cap(cap)
hunk ./src/allmydata/test/common.py 439
-    FakeMutableFileNode.all_contents[filenode.storage_index] = contents
+    if mdmf:
+        filenode.create(MutableData(contents), version=MDMF_VERSION)
+    else:
+        filenode.create(MutableData(contents), version=SDMF_VERSION)
     return filenode
 
 
hunk ./src/allmydata/test/test_cli.py 1098
             self.failUnlessEqual(rc, 0)
             self.failUnlessEqual(out, data2)
         d.addCallback(_got_data)
+        # Now strip the extension information off of the cap and try
+        # to put something to it.
+        def _make_bare_cap(ignored):
+            cap = self.cap.split(":")
+            cap = ":".join(cap[:len(cap) - 2])
+            self.cap = cap
+        d.addCallback(_make_bare_cap)
+        data3 = "data3" * 100000
+        fn3 = os.path.join(self.basedir, "data3")
+        fileutil.write(fn3, data3)
+        d.addCallback(lambda ignored:
+            self.do_cli("put", fn3, self.cap))
+        d.addCallback(lambda ignored:
+            self.do_cli("get", self.cap))
+        def _got_data3((rc, out, err)):
+            self.failUnlessEqual(rc, 0)
+            self.failUnlessEqual(out, data3)
+        d.addCallback(_got_data3)
         return d
 
     def test_put_to_sdmf_cap(self):
hunk ./src/allmydata/test/test_mutable.py 311
         def _created(n):
             self.failUnless(isinstance(n, MutableFileNode))
             s = n.get_uri()
-            s2 = "%s:3:131073" % s
-            n2 = self.nodemaker.create_from_cap(s2)
+            # The cap we get back should already carry the extension
+            # parameters, and a node recreated from it should see them.
+            self.failUnlessIn(":3:131073", s)
+            n2 = self.nodemaker.create_from_cap(s)
 
             self.failUnlessEqual(n2.get_storage_index(), n.get_storage_index())
             self.failUnlessEqual(n.get_writekey(), n2.get_writekey())
hunk ./src/allmydata/test/test_mutable.py 318
+            hints = n2._downloader_hints
+            self.failUnlessEqual(hints['k'], 3)
+            self.failUnlessEqual(hints['segsize'], 131073)
         d.addCallback(_created)
         return d
 
hunk ./src/allmydata/test/test_mutable.py 346
         def _created(n):
             self.failUnless(isinstance(n, MutableFileNode))
             s = n.get_readonly_uri()
-            s = "%s:3:131073" % s
+            self.failUnlessIn(":3:131073", s)
 
             n2 = self.nodemaker.create_from_cap(s)
             self.failUnless(isinstance(n2, MutableFileNode))
hunk ./src/allmydata/test/test_mutable.py 352
             self.failUnless(n2.is_readonly())
             self.failUnlessEqual(n.get_storage_index(), n2.get_storage_index())
+            hints = n2._downloader_hints
+            self.failUnlessEqual(hints["k"], 3)
+            self.failUnlessEqual(hints["segsize"], 131073)
         d.addCallback(_created)
         return d
 
hunk ./src/allmydata/test/test_mutable.py 514
         return d
 
 
+    def test_create_and_download_from_bare_mdmf_cap(self):
+        # MDMF caps carry extension parameters by default. We need to
+        # make sure that nodes still work without them.
+        contents = MutableData("contents" * 100000)
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION,
+                                               contents=contents)
+        def _created(node):
+            uri = node.get_uri()
+            self._created = node
+            self.failUnlessIn(":3:131073", uri)
+            # Now strip that off the end of the uri, then try creating
+            # and downloading the node again.
+            bare_uri = uri.replace(":3:131073", "")
+            assert ":3:131073" not in bare_uri
+
+            return self.nodemaker.create_from_cap(bare_uri)
+        d.addCallback(_created)
+        def _created_bare(node):
+            self.failUnlessEqual(node.get_writekey(),
+                                 self._created.get_writekey())
+            self.failUnlessEqual(node.get_readkey(),
+                                 self._created.get_readkey())
+            self.failUnlessEqual(node.get_storage_index(),
+                                 self._created.get_storage_index())
+            return node.download_best_version()
+        d.addCallback(_created_bare)
+        d.addCallback(lambda data:
+            self.failUnlessEqual(data, "contents" * 100000))
+        return d
+
+
     def test_mdmf_write_count(self):
         # Publishing an MDMF file should only cause one write for each
         # share that is to be published. Otherwise, we introduce
hunk ./src/allmydata/test/test_mutable.py 2155
         # and set the encoding parameters to something completely different
         fn2._required_shares = k
         fn2._total_shares = n
-        # Normally a servermap update would occur before a publish.
-        # Here, it doesn't, so we have to do it ourselves.
-        fn2.set_version(version)
 
         s = self._storage
         s._peers = {} # clear existing storage
hunk ./src/allmydata/test/test_mutable.py 2928
         # We need to define an API by which an uploader can set the
         # extension parameters, and by which a downloader can retrieve
         # extensions.
-        self.failUnless(False)
+        d = self.mdmf_node.get_best_mutable_version()
+        def _got_version(version):
+            hints = version.get_downloader_hints()
+            # Should contain the default encoding parameters.
+            self.failUnlessIn("k", hints)
+            self.failUnlessEqual(hints['k'], 3)
+            self.failUnlessIn('segsize', hints)
+            self.failUnlessEqual(hints['segsize'], 131073)
+        d.addCallback(_got_version)
+        return d
 
 
     def test_extensions_from_cap(self):
hunk ./src/allmydata/test/test_mutable.py 2941
-        self.failUnless(False)
+        # If we initialize a mutable file with a cap that has extension
+        # parameters in it and then grab the extension parameters using
+        # our API, we should see that they're set correctly.
+        mdmf_uri = self.mdmf_node.get_uri()
+        new_node = self.nm.create_from_cap(mdmf_uri)
+        d = new_node.get_best_mutable_version()
+        def _got_version(version):
+            hints = version.get_downloader_hints()
+            self.failUnlessIn("k", hints)
+            self.failUnlessEqual(hints["k"], 3)
+            self.failUnlessIn("segsize", hints)
+            self.failUnlessEqual(hints["segsize"], 131073)
+        d.addCallback(_got_version)
+        return d
 
 
     def test_extensions_from_upload(self):
hunk ./src/allmydata/test/test_mutable.py 2958
-        self.failUnless(False)
+        # If we create a new mutable file with some contents, we should
+        # get back an MDMF cap with the right hints in place.
+        contents = "foo bar baz" * 100000
+        d = self.nm.create_mutable_file(contents, version=MDMF_VERSION)
+        def _got_mutable_file(n):
+            rw_uri = n.get_uri()
+            expected_k = str(self.c.DEFAULT_ENCODING_PARAMETERS['k'])
+            self.failUnlessIn(expected_k, rw_uri)
+            # XXX: Get this more intelligently.
+            self.failUnlessIn("131073", rw_uri)
+
+            ro_uri = n.get_readonly_uri()
+            self.failUnlessIn(expected_k, ro_uri)
+            self.failUnlessIn("131073", ro_uri)
+        d.addCallback(_got_mutable_file)
+        return d
 
 
     def test_cap_after_upload(self):
hunk ./src/allmydata/test/test_web.py 52
         return stats
 
 class FakeNodeMaker(NodeMaker):
+    encoding_params = {
+        'k': 3,
+        'n': 10,
+        'happy': 7,
+        'max_segment_size': 128*1024  # 128 KiB
+    }
     def _create_lit(self, cap):
         return FakeCHKFileNode(cap)
     def _create_immutable(self, cap):
hunk ./src/allmydata/test/test_web.py 63
         return FakeCHKFileNode(cap)
     def _create_mutable(self, cap):
-        return FakeMutableFileNode(None, None, None, None).init_from_cap(cap)
+        return FakeMutableFileNode(None,
+                                   None,
+                                   self.encoding_params, None).init_from_cap(cap)
     def create_mutable_file(self, contents="", keysize=None,
                             version=SDMF_VERSION):
hunk ./src/allmydata/test/test_web.py 68
-        n = FakeMutableFileNode(None, None, None, None)
-        n.set_version(version)
-        return n.create(contents)
+        n = FakeMutableFileNode(None, None, self.encoding_params, None)
+        return n.create(contents, version=version)
 
 class FakeUploader(service.Service):
     name = "uploader"
hunk ./src/allmydata/test/test_web.py 307
         self.failUnlessReallyEqual(to_str(data[1]["verify_uri"]), self._bar_txt_verifycap)
         self.failUnlessReallyEqual(data[1]["size"], len(self.BAR_CONTENTS))
 
-    def failUnlessIsQuuxJSON(self, res):
+    def failUnlessIsQuuxJSON(self, res, readonly=False):
         data = simplejson.loads(res)
         self.failUnless(isinstance(data, list))
         self.failUnlessEqual(data[0], "filenode")
hunk ./src/allmydata/test/test_web.py 313
         self.failUnless(isinstance(data[1], dict))
         metadata = data[1]
-        return self.failUnlessIsQuuxDotTxtMetadata(metadata)
+        return self.failUnlessIsQuuxDotTxtMetadata(metadata, readonly)
 
hunk ./src/allmydata/test/test_web.py 315
-    def failUnlessIsQuuxDotTxtMetadata(self, metadata):
+    def failUnlessIsQuuxDotTxtMetadata(self, metadata, readonly):
         self.failUnless(metadata['mutable'])
hunk ./src/allmydata/test/test_web.py 317
-        self.failUnless("rw_uri" in metadata)
-        self.failUnlessEqual(metadata['rw_uri'], self._quux_txt_uri)
+        if readonly:
+            self.failIf("rw_uri" in metadata)
+        else:
+            self.failUnless("rw_uri" in metadata)
+            self.failUnlessEqual(metadata['rw_uri'], self._quux_txt_uri)
         self.failUnless("ro_uri" in metadata)
         self.failUnlessEqual(metadata['ro_uri'], self._quux_txt_readonly_uri)
         self.failUnlessReallyEqual(metadata['size'], len(self.QUUX_CONTENTS))
hunk ./src/allmydata/test/test_web.py 892
         d.addCallback(self.failUnlessIsQuuxDotTxt)
         return d
 
+    def test_GET_FILE_URI_mdmf_bare_cap(self):
+        cap_elements = self._quux_txt_uri.split(":")
+        # 6 == expected cap length with two extensions.
+        self.failUnlessEqual(len(cap_elements), 6)
+
+        # Now lop off the extension parameters and stitch everything
+        # back together
+        quux_uri = ":".join(cap_elements[:len(cap_elements) - 2])
+
+        # Now GET that. We should get back quux.
+        base = "/uri/%s" % urllib.quote(quux_uri)
+        d = self.GET(base)
+        d.addCallback(self.failUnlessIsQuuxDotTxt)
+        return d
+
     def test_GET_FILE_URI_mdmf_readonly(self):
         base = "/uri/%s" % urllib.quote(self._quux_txt_readonly_uri)
         d = self.GET(base)
hunk ./src/allmydata/test/test_web.py 954
                                                        res))
         return d
 
+    def test_PUT_FILE_URI_mdmf_bare_cap(self):
+        elements = self._quux_txt_uri.split(":")
+        self.failUnlessEqual(len(elements), 6)
+
+        quux_uri = ":".join(elements[:len(elements) - 2])
+        base = "/uri/%s" % urllib.quote(quux_uri)
+        self._quux_new_contents = "new_contents" * 50000
+
+        d = self.GET(base)
+        d.addCallback(self.failUnlessIsQuuxDotTxt)
+        d.addCallback(lambda ignored: self.PUT(base, self._quux_new_contents))
+        d.addCallback(lambda ignored: self.GET(base))
+        d.addCallback(lambda res:
+            self.failUnlessEqual(res, self._quux_new_contents))
+        return d
+
     def test_PUT_FILE_URI_mdmf_readonly(self):
         # We're not allowed to PUT things to a readonly cap.
         base = "/uri/%s" % self._quux_txt_readonly_uri
hunk ./src/allmydata/test/test_web.py 1046
         d.addCallback(_got)
         return d
 
+    def test_GET_FILEURL_info_mdmf_bare_cap(self):
+        elements = self._quux_txt_uri.split(":")
+        self.failUnlessEqual(len(elements), 6)
+
+        quux_uri = ":".join(elements[:len(elements) - 2])
+        base = "/uri/%s?t=info" % urllib.quote(quux_uri)
+        d = self.GET(base)
+        def _got(res):
+            self.failUnlessIn("mutable file (mdmf)", res)
+            self.failUnlessIn(quux_uri, res)
+        d.addCallback(_got)
+        return d
+
     def test_PUT_overwrite_only_files(self):
         # create a directory, put a file in that directory.
         contents, n, filecap = self.makefile(8)
hunk ./src/allmydata/test/test_web.py 1286
         d.addCallback(self.failUnlessIsQuuxJSON)
         return d
 
+    def test_GET_FILEURL_json_mdmf_bare_cap(self):
+        elements = self._quux_txt_uri.split(":")
+        self.failUnlessEqual(len(elements), 6)
+
+        quux_uri = ":".join(elements[:len(elements) - 2])
+        # so failUnlessIsQuuxJSON will work.
+        self._quux_txt_uri = quux_uri
+
+        # we need to alter the readonly URI in the same way, again so
+        # failUnlessIsQuuxJSON will work
+        elements = self._quux_txt_readonly_uri.split(":")
+        self.failUnlessEqual(len(elements), 6)
+        quux_ro_uri = ":".join(elements[:len(elements) - 2])
+        self._quux_txt_readonly_uri = quux_ro_uri
+
+        base = "/uri/%s?t=json" % urllib.quote(quux_uri)
+        d = self.GET(base)
+        d.addCallback(self.failUnlessIsQuuxJSON)
+        return d
+
+    def test_GET_FILEURL_json_mdmf_bare_readonly_cap(self):
+        elements = self._quux_txt_readonly_uri.split(":")
+        self.failUnlessEqual(len(elements), 6)
+
+        quux_readonly_uri = ":".join(elements[:len(elements) - 2])
+        # so failUnlessIsQuuxJSON will work
+        self._quux_txt_readonly_uri = quux_readonly_uri
+        base = "/uri/%s?t=json" % quux_readonly_uri
+        d = self.GET(base)
+        # XXX: We may need to make a method that knows how to check for
+        # readonly JSON, or else alter that one so that it knows how to
+        # do that.
+        d.addCallback(self.failUnlessIsQuuxJSON, readonly=True)
+        return d
+
     def test_GET_FILEURL_json_mdmf(self):
         d = self.GET("/uri/%s?t=json" % urllib.quote(self._quux_txt_uri))
         d.addCallback(self.failUnlessIsQuuxJSON)
}
[Add MDMF dirnodes
Kevan Carstensen <kevan@isnotajoke.com>**20110617175808
 Ignore-this: e7d184ece57b272be0e5a3917cc7642a
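 
 MDMF directories get their own cap prefixes, parallel to the existing
 DIR2 family:
 
   URI:DIR2-MDMF:           (write)
   URI:DIR2-MDMF-RO:        (read-only)
   URI:DIR2-MDMF-Verifier:  (verify)
 
 wrap_dirnode_cap() picks the wrapper from the filenode cap type, and
 create_dirnode()/create_subdirectory() grow version/mutable_version
 arguments so callers can ask for an MDMF-backed directory, e.g.
 (sketch):
 
   d = client.create_dirnode(version=MDMF_VERSION)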
] {
hunk ./src/allmydata/client.py 493
         # may get an opaque node if there were any problems.
         return self.nodemaker.create_from_cap(write_uri, read_uri, deep_immutable=deep_immutable, name=name)
 
-    def create_dirnode(self, initial_children={}):
-        d = self.nodemaker.create_new_mutable_directory(initial_children)
+    def create_dirnode(self, initial_children={}, version=SDMF_VERSION):
+        d = self.nodemaker.create_new_mutable_directory(initial_children, version=version)
         return d
 
     def create_immutable_dirnode(self, children, convergence=None):
hunk ./src/allmydata/dirnode.py 14
 from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \
      IImmutableFileNode, IMutableFileNode, \
      ExistingChildError, NoSuchChildError, ICheckable, IDeepCheckable, \
-     MustBeDeepImmutableError, CapConstraintError, ChildOfWrongTypeError
+     MustBeDeepImmutableError, CapConstraintError, ChildOfWrongTypeError, \
+     SDMF_VERSION, MDMF_VERSION
 from allmydata.check_results import DeepCheckResults, \
      DeepCheckAndRepairResults
 from allmydata.monitor import Monitor
hunk ./src/allmydata/dirnode.py 617
         d.addCallback(lambda res: deleter.old_child)
         return d
 
+    # XXX: Too many arguments? Worthwhile to break into mutable/immutable?
     def create_subdirectory(self, namex, initial_children={}, overwrite=True,
hunk ./src/allmydata/dirnode.py 619
-                            mutable=True, metadata=None):
+                            mutable=True, mutable_version=None, metadata=None):
         name = normalize(namex)
         if self.is_readonly():
             return defer.fail(NotWriteableError())
hunk ./src/allmydata/dirnode.py 624
         if mutable:
-            d = self._nodemaker.create_new_mutable_directory(initial_children)
+            if mutable_version:
+                d = self._nodemaker.create_new_mutable_directory(initial_children,
+                                                                 version=mutable_version)
+            else:
+                d = self._nodemaker.create_new_mutable_directory(initial_children)
         else:
hunk ./src/allmydata/dirnode.py 630
+            # A mutable version doesn't make sense for immutable directories.
+            assert mutable_version is None
             d = self._nodemaker.create_immutable_directory(initial_children)
         def _created(child):
             entries = {name: (child, metadata)}
hunk ./src/allmydata/nodemaker.py 88
         if isinstance(cap, (uri.DirectoryURI,
                             uri.ReadonlyDirectoryURI,
                             uri.ImmutableDirectoryURI,
-                            uri.LiteralDirectoryURI)):
+                            uri.LiteralDirectoryURI,
+                            uri.MDMFDirectoryURI,
+                            uri.ReadonlyMDMFDirectoryURI)):
             filenode = self._create_from_single_cap(cap.get_filenode_cap())
             return self._create_dirnode(filenode)
         return None
hunk ./src/allmydata/nodemaker.py 104
         d.addCallback(lambda res: n)
         return d
 
-    def create_new_mutable_directory(self, initial_children={}):
-        # mutable directories will always be SDMF for now, to help
-        # compatibility with older clients.
-        version = SDMF_VERSION
+    def create_new_mutable_directory(self, initial_children={},
+                                     version=SDMF_VERSION):
         # initial_children must have metadata (i.e. {} instead of None)
         for (name, (node, metadata)) in initial_children.iteritems():
             precondition(isinstance(metadata, dict),
hunk ./src/allmydata/uri.py 750
         return None
 
 
+class MDMFDirectoryURI(_DirectoryBaseURI):
+    implements(IDirectoryURI)
+
+    BASE_STRING='URI:DIR2-MDMF:'
+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF'+SEP)
+    INNER_URI_CLASS=WritableMDMFFileURI
+
+    def __init__(self, filenode_uri=None):
+        if filenode_uri:
+            assert not filenode_uri.is_readonly()
+        _DirectoryBaseURI.__init__(self, filenode_uri)
+
+    def is_readonly(self):
+        return False
+
+    def get_readonly(self):
+        return ReadonlyMDMFDirectoryURI(self._filenode_uri.get_readonly())
+
+    def get_verify_cap(self):
+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
+
+
+class ReadonlyMDMFDirectoryURI(_DirectoryBaseURI):
+    implements(IReadonlyDirectoryURI)
+
+    BASE_STRING='URI:DIR2-MDMF-RO:'
+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-RO'+SEP)
+    INNER_URI_CLASS=ReadonlyMDMFFileURI
+
+    def __init__(self, filenode_uri=None):
+        if filenode_uri:
+            assert filenode_uri.is_readonly()
+        _DirectoryBaseURI.__init__(self, filenode_uri)
+
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
+    def get_verify_cap(self):
+        return MDMFDirectoryURIVerifier(self._filenode_uri.get_verify_cap())
+
 def wrap_dirnode_cap(filecap):
     if isinstance(filecap, WriteableSSKFileURI):
         return DirectoryURI(filecap)
hunk ./src/allmydata/uri.py 804
         return ImmutableDirectoryURI(filecap)
     if isinstance(filecap, LiteralFileURI):
         return LiteralDirectoryURI(filecap)
+    if isinstance(filecap, WritableMDMFFileURI):
+        return MDMFDirectoryURI(filecap)
+    if isinstance(filecap, ReadonlyMDMFFileURI):
+        return ReadonlyMDMFDirectoryURI(filecap)
     assert False, "cannot interpret as a directory cap: %s" % filecap.__class__
 
hunk ./src/allmydata/uri.py 810
+class MDMFDirectoryURIVerifier(_DirectoryBaseURI):
+    implements(IVerifierURI)
+
+    BASE_STRING='URI:DIR2-MDMF-Verifier:'
+    BASE_STRING_RE=re.compile('^'+BASE_STRING)
+    BASE_HUMAN_RE=re.compile('^'+OPTIONALHTTPLEAD+'URI'+SEP+'DIR2-MDMF-Verifier'+SEP)
+    INNER_URI_CLASS=MDMFVerifierURI
+
+    def __init__(self, filenode_uri=None):
+        if filenode_uri:
+            assert IVerifierURI.providedBy(filenode_uri)
+        self._filenode_uri = filenode_uri
+
+    def get_filenode_cap(self):
+        return self._filenode_uri
+
+    def is_mutable(self):
+        return False
 
 class DirectoryURIVerifier(_DirectoryBaseURI):
     implements(IVerifierURI)
hunk ./src/allmydata/uri.py 935
             return ImmutableDirectoryURI.init_from_string(s)
         elif s.startswith('URI:DIR2-LIT:'):
             return LiteralDirectoryURI.init_from_string(s)
+        elif s.startswith('URI:DIR2-MDMF:'):
+            if can_be_writeable:
+                return MDMFDirectoryURI.init_from_string(s)
+            kind = "URI:DIR2-MDMF directory writecap"
+        elif s.startswith('URI:DIR2-MDMF-RO:'):
+            if can_be_mutable:
+                return ReadonlyMDMFDirectoryURI.init_from_string(s)
+            kind = "URI:DIR2-MDMF-RO readcap to a mutable directory"
         elif s.startswith('x-tahoe-future-test-writeable:') and not can_be_writeable:
             # For testing how future writeable caps would behave in read-only contexts.
             kind = "x-tahoe-future-test-writeable: testing cap"
}
[Add tests for MDMF directories
Kevan Carstensen <kevan@isnotajoke.com>**20110617175950
 Ignore-this: 27882fd4cf827030d7574bd4b2b8cb77
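 
 test_basic is refactored into a parameterized _do_create_test()
 helper that runs the same creation, listing, and deep-stats checks
 against either backing format, asserting the DIR2-MDMF cap prefixes
 when mdmf is true. Presumably (the caller falls outside this excerpt)
 an MDMF variant invokes it as:
 
   self._do_create_test(mdmf=True)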
] {
hunk ./src/allmydata/test/test_dirnode.py 14
 from allmydata.interfaces import IImmutableFileNode, IMutableFileNode, \
      ExistingChildError, NoSuchChildError, MustNotBeUnknownRWError, \
      MustBeDeepImmutableError, MustBeReadonlyError, \
-     IDeepCheckResults, IDeepCheckAndRepairResults
+     IDeepCheckResults, IDeepCheckAndRepairResults, \
+     MDMF_VERSION, SDMF_VERSION
 from allmydata.mutable.filenode import MutableFileNode
 from allmydata.mutable.common import UncoordinatedWriteError
 from allmydata.util import hashutil, base32
hunk ./src/allmydata/test/test_dirnode.py 61
               testutil.ReallyEqualMixin, testutil.ShouldFailMixin, testutil.StallMixin, ErrorMixin):
     timeout = 480 # It occasionally takes longer than 240 seconds on Francois's arm box.
 
-    def test_basic(self):
-        self.basedir = "dirnode/Dirnode/test_basic"
-        self.set_up_grid()
+    def _do_create_test(self, mdmf=False):
         c = self.g.clients[0]
hunk ./src/allmydata/test/test_dirnode.py 63
-        d = c.create_dirnode()
-        def _done(res):
-            self.failUnless(isinstance(res, dirnode.DirectoryNode))
-            self.failUnless(res.is_mutable())
-            self.failIf(res.is_readonly())
-            self.failIf(res.is_unknown())
-            self.failIf(res.is_allowed_in_immutable_directory())
-            res.raise_error()
-            rep = str(res)
-            self.failUnless("RW-MUT" in rep)
-        d.addCallback(_done)
+
+        self.expected_manifest = []
+        self.expected_verifycaps = set()
+        self.expected_storage_indexes = set()
+
+        if mdmf:
+            d = c.create_dirnode(version=MDMF_VERSION)
+        else:
+            d = c.create_dirnode()
+        def _then(n):
+            # /
+            self.rootnode = n
+            backing_node = n._node
+            if mdmf:
+                self.failUnlessEqual(backing_node.get_version(),
+                                     MDMF_VERSION)
+            else:
+                self.failUnlessEqual(backing_node.get_version(),
+                                     SDMF_VERSION)
+            self.failUnless(n.is_mutable())
+            u = n.get_uri()
+            self.failUnless(u)
+            cap_formats = []
+            if mdmf:
+                cap_formats = ["URI:DIR2-MDMF:",
+                               "URI:DIR2-MDMF-RO:",
+                               "URI:DIR2-MDMF-Verifier:"]
+            else:
+                cap_formats = ["URI:DIR2:",
+                               "URI:DIR2-RO",
+                               "URI:DIR2-Verifier:"]
+            rw, ro, v = cap_formats
+            self.failUnless(u.startswith(rw), u)
+            u_ro = n.get_readonly_uri()
+            self.failUnless(u_ro.startswith(ro), u_ro)
+            u_v = n.get_verify_cap().to_string()
+            self.failUnless(u_v.startswith(v), u_v)
+            u_r = n.get_repair_cap().to_string()
+            self.failUnlessReallyEqual(u_r, u)
+            self.expected_manifest.append( ((), u) )
+            self.expected_verifycaps.add(u_v)
+            si = n.get_storage_index()
+            self.expected_storage_indexes.add(base32.b2a(si))
+            expected_si = n._uri.get_storage_index()
+            self.failUnlessReallyEqual(si, expected_si)
+
+            d = n.list()
+            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
+            d.addCallback(lambda res: n.has_child(u"missing"))
+            d.addCallback(lambda res: self.failIf(res))
+
+            fake_file_uri = make_mutable_file_uri()
+            other_file_uri = make_mutable_file_uri()
+            m = c.nodemaker.create_from_cap(fake_file_uri)
+            ffu_v = m.get_verify_cap().to_string()
+            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
+            self.expected_verifycaps.add(ffu_v)
+            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
+            d.addCallback(lambda res: n.set_uri(u"child",
+                                                fake_file_uri, fake_file_uri))
+            d.addCallback(lambda res:
+                          self.shouldFail(ExistingChildError, "set_uri-no",
+                                          "child 'child' already exists",
+                                          n.set_uri, u"child",
+                                          other_file_uri, other_file_uri,
+                                          overwrite=False))
+            # /
+            # /child = mutable
+
+            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
+
+            # /
+            # /child = mutable
+            # /subdir = directory
+            def _created(subdir):
+                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
+                self.subdir = subdir
+                new_v = subdir.get_verify_cap().to_string()
+                assert isinstance(new_v, str)
+                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
+                self.expected_verifycaps.add(new_v)
+                si = subdir.get_storage_index()
+                self.expected_storage_indexes.add(base32.b2a(si))
+            d.addCallback(_created)
+
+            d.addCallback(lambda res:
+                          self.shouldFail(ExistingChildError, "mkdir-no",
+                                          "child 'subdir' already exists",
+                                          n.create_subdirectory, u"subdir",
+                                          overwrite=False))
+
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()),
+                                                     set([u"child", u"subdir"])))
+
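+            # deep-stats should see one mutable file ('child') and two
+            # directories (the root and 'subdir'), whatever the format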
+            d.addCallback(lambda res: n.start_deep_stats().when_done())
+            def _check_deepstats(stats):
+                self.failUnless(isinstance(stats, dict))
+                expected = {"count-immutable-files": 0,
+                            "count-mutable-files": 1,
+                            "count-literal-files": 0,
+                            "count-files": 1,
+                            "count-directories": 2,
+                            "size-immutable-files": 0,
+                            "size-literal-files": 0,
+                            #"size-directories": 616, # varies
+                            #"largest-directory": 616,
+                            "largest-directory-children": 2,
+                            "largest-immutable-file": 0,
+                            }
+                for k,v in expected.iteritems():
+                    self.failUnlessReallyEqual(stats[k], v,
+                                               "stats[%s] was %s, not %s" %
+                                               (k, stats[k], v))
+                self.failUnless(stats["size-directories"] > 500,
+                                stats["size-directories"])
+                self.failUnless(stats["largest-directory"] > 500,
+                                stats["largest-directory"])
+                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
+            d.addCallback(_check_deepstats)
+
+            d.addCallback(lambda res: n.build_manifest().when_done())
+            def _check_manifest(res):
+                manifest = res["manifest"]
+                self.failUnlessReallyEqual(sorted(manifest),
+                                           sorted(self.expected_manifest))
+                stats = res["stats"]
+                _check_deepstats(stats)
+                self.failUnlessReallyEqual(self.expected_verifycaps,
+                                           res["verifycaps"])
+                self.failUnlessReallyEqual(self.expected_storage_indexes,
+                                           res["storage-index"])
+            d.addCallback(_check_manifest)
+
+            def _add_subsubdir(res):
+                return self.subdir.create_subdirectory(u"subsubdir")
+            d.addCallback(_add_subsubdir)
+            # /
+            # /child = mutable
+            # /subdir = directory
+            # /subdir/subsubdir = directory
+            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
+            d.addCallback(lambda subsubdir:
+                          self.failUnless(isinstance(subsubdir,
+                                                     dirnode.DirectoryNode)))
+            d.addCallback(lambda res: n.get_child_at_path(u""))
+            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
+                                                                 n.get_uri()))
+
+            d.addCallback(lambda res: n.get_metadata_for(u"child"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()),
+                                               set(["tahoe"])))
+
+            d.addCallback(lambda res:
+                          self.shouldFail(NoSuchChildError, "gcamap-no",
+                                          "nope",
+                                          n.get_child_and_metadata_at_path,
+                                          u"subdir/nope"))
+            d.addCallback(lambda res:
+                          n.get_child_and_metadata_at_path(u""))
+            def _check_child_and_metadata1(res):
+                child, metadata = res
+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
+                # edge-metadata needs at least one path segment
+                self.failUnlessEqual(set(metadata.keys()), set([]))
+            d.addCallback(_check_child_and_metadata1)
+            d.addCallback(lambda res:
+                          n.get_child_and_metadata_at_path(u"child"))
+
+            def _check_child_and_metadata2(res):
+                child, metadata = res
+                self.failUnlessReallyEqual(child.get_uri(),
+                                           fake_file_uri)
+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
+            d.addCallback(_check_child_and_metadata2)
+
+            d.addCallback(lambda res:
+                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
+            def _check_child_and_metadata3(res):
+                child, metadata = res
+                self.failUnless(isinstance(child, dirnode.DirectoryNode))
+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
+            d.addCallback(_check_child_and_metadata3)
+
+            # set_uri + metadata
+            # it should be possible to add a child without any metadata
+            d.addCallback(lambda res: n.set_uri(u"c2",
+                                                fake_file_uri, fake_file_uri,
+                                                {}))
+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+
+            # You can't override the link timestamps.
+            d.addCallback(lambda res: n.set_uri(u"c2",
+                                                fake_file_uri, fake_file_uri,
+                                                { 'tahoe': {'linkcrtime': "bogus"}}))
+            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
+            def _has_good_linkcrtime(metadata):
+                self.failUnless('tahoe' in metadata)
+                self.failUnless('linkcrtime' in metadata['tahoe'])
+                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
+            d.addCallback(_has_good_linkcrtime)
+
+            # if we don't set any defaults, the child should get timestamps
+            d.addCallback(lambda res: n.set_uri(u"c3",
+                                                fake_file_uri, fake_file_uri))
+            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+
+            # we can also add specific metadata at set_uri() time
+            d.addCallback(lambda res: n.set_uri(u"c4",
+                                                fake_file_uri, fake_file_uri,
+                                                {"key": "value"}))
+            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
+            d.addCallback(lambda metadata:
+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
+                                              (metadata['key'] == "value"), metadata))
+
+            d.addCallback(lambda res: n.delete(u"c2"))
+            d.addCallback(lambda res: n.delete(u"c3"))
+            d.addCallback(lambda res: n.delete(u"c4"))
+
+            # set_node + metadata
+            # it should be possible to add a child without any metadata except for timestamps
+            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
+            d.addCallback(lambda res: c.create_dirnode())
+            d.addCallback(lambda n2:
+                          self.shouldFail(ExistingChildError, "set_node-no",
+                                          "child 'd2' already exists",
+                                          n.set_node, u"d2", n2,
+                                          overwrite=False))
+            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+
+            # if we don't set any defaults, the child should get timestamps
+            d.addCallback(lambda res: n.set_node(u"d3", n))
+            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+
+            # we can also add specific metadata at set_node() time
+            d.addCallback(lambda res: n.set_node(u"d4", n,
+                                                {"key": "value"}))
+            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
+            d.addCallback(lambda metadata:
+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
+                                          (metadata["key"] == "value"), metadata))
+
+            d.addCallback(lambda res: n.delete(u"d2"))
+            d.addCallback(lambda res: n.delete(u"d3"))
+            d.addCallback(lambda res: n.delete(u"d4"))
+
+            # metadata through set_children()
+            d.addCallback(lambda res:
+                          n.set_children({
+                              u"e1": (fake_file_uri, fake_file_uri),
+                              u"e2": (fake_file_uri, fake_file_uri, {}),
+                              u"e3": (fake_file_uri, fake_file_uri,
+                                      {"key": "value"}),
+                              }))
+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
+            d.addCallback(lambda res:
+                          self.shouldFail(ExistingChildError, "set_children-no",
+                                          "child 'e1' already exists",
+                                          n.set_children,
+                                          { u"e1": (other_file_uri,
+                                                    other_file_uri),
+                                            u"new": (other_file_uri,
+                                                     other_file_uri),
+                                            },
+                                          overwrite=False))
+            # and 'new' should not have been created
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children: self.failIf(u"new" in children))
+            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
+            d.addCallback(lambda metadata:
+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
+                                          (metadata["key"] == "value"), metadata))
+
+            d.addCallback(lambda res: n.delete(u"e1"))
+            d.addCallback(lambda res: n.delete(u"e2"))
+            d.addCallback(lambda res: n.delete(u"e3"))
+
+            # metadata through set_nodes()
+            d.addCallback(lambda res:
+                          n.set_nodes({ u"f1": (n, None),
+                                        u"f2": (n, {}),
+                                        u"f3": (n, {"key": "value"}),
+                                        }))
+            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
+            d.addCallback(lambda res:
+                          self.shouldFail(ExistingChildError, "set_nodes-no",
+                                          "child 'f1' already exists",
+                                          n.set_nodes, { u"f1": (n, None),
+                                                         u"new": (n, None), },
+                                          overwrite=False))
+            # and 'new' should not have been created
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children: self.failIf(u"new" in children))
+            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
+            d.addCallback(lambda metadata:
+                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
+                                          (metadata["key"] == "value"), metadata))
+
+            d.addCallback(lambda res: n.delete(u"f1"))
+            d.addCallback(lambda res: n.delete(u"f2"))
+            d.addCallback(lambda res: n.delete(u"f3"))
+
+
+            d.addCallback(lambda res:
+                          n.set_metadata_for(u"child",
+                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
+            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
+            d.addCallback(lambda metadata:
+                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
+                                          metadata["tags"] == ["web2.0-compatible"] and
+                                          "bad" not in metadata["tahoe"], metadata))
+
+            d.addCallback(lambda res:
+                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
+                                          n.set_metadata_for, u"nosuch", {}))
+
+
+            def _start(res):
+                self._start_timestamp = time.time()
+            d.addCallback(_start)
+            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
+            # floats to hundredths (it uses str(num) instead of repr(num)).
+            # simplejson-1.7.3 does not have this bug. To prevent this bug
+            # from causing the test to fail, stall for more than a few
+            # hundredths of a second.
+            d.addCallback(self.stall, 0.1)
+            d.addCallback(lambda res: n.add_file(u"timestamps",
+                                                 upload.Data("stamp me", convergence="some convergence string")))
+            d.addCallback(self.stall, 0.1)
+            def _stop(res):
+                self._stop_timestamp = time.time()
+            d.addCallback(_stop)
+
+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
+            def _check_timestamp1(metadata):
+                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
+                tahoe_md = metadata["tahoe"]
+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
+
+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
+                                                  self._start_timestamp)
+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
+                                                  tahoe_md["linkcrtime"])
+                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
+                                                  self._start_timestamp)
+                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
+                                                  tahoe_md["linkmotime"])
+                # Our current timestamp rules say that replacing an existing
+                # child should preserve the 'linkcrtime' but update the
+                # 'linkmotime'
+                self._old_linkcrtime = tahoe_md["linkcrtime"]
+                self._old_linkmotime = tahoe_md["linkmotime"]
+            d.addCallback(_check_timestamp1)
+            d.addCallback(self.stall, 2.0) # accommodate low-res timestamps
+            d.addCallback(lambda res: n.set_node(u"timestamps", n))
+            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
+            def _check_timestamp2(metadata):
+                self.failUnlessIn("tahoe", metadata)
+                tahoe_md = metadata["tahoe"]
+                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
+
+                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
+                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
+                return n.delete(u"timestamps")
+            d.addCallback(_check_timestamp2)
+
+            d.addCallback(lambda res: n.delete(u"subdir"))
+            d.addCallback(lambda old_child:
+                          self.failUnlessReallyEqual(old_child.get_uri(),
+                                                     self.subdir.get_uri()))
+
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()),
+                                                     set([u"child"])))
+
+            uploadable1 = upload.Data("some data", convergence="converge")
+            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
+            d.addCallback(lambda newnode:
+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
+            uploadable2 = upload.Data("some data", convergence="stuff")
+            d.addCallback(lambda res:
+                          self.shouldFail(ExistingChildError, "add_file-no",
+                                          "child 'newfile' already exists",
+                                          n.add_file, u"newfile",
+                                          uploadable2,
+                                          overwrite=False))
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()),
+                                                     set([u"child", u"newfile"])))
+            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
+            d.addCallback(lambda metadata:
+                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
+
+            uploadable3 = upload.Data("some data", convergence="converge")
+            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
+                                                 uploadable3,
+                                                 {"key": "value"}))
+            d.addCallback(lambda newnode:
+                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
+            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
+            d.addCallback(lambda metadata:
+                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
+                                              (metadata['key'] == "value"), metadata))
+            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
+
+            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
+            def _created2(subdir2):
+                self.subdir2 = subdir2
+                # put something in the way, to make sure it gets overwritten
+                return subdir2.add_file(u"child", upload.Data("overwrite me",
+                                                              "converge"))
+            d.addCallback(_created2)
+
+            d.addCallback(lambda res:
+                          n.move_child_to(u"child", self.subdir2))
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()),
+                                                     set([u"newfile", u"subdir2"])))
+            d.addCallback(lambda res: self.subdir2.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()),
+                                                     set([u"child"])))
+            d.addCallback(lambda res: self.subdir2.get(u"child"))
+            d.addCallback(lambda child:
+                          self.failUnlessReallyEqual(child.get_uri(),
+                                                     fake_file_uri))
+
+            # move it back, using new_child_name=
+            d.addCallback(lambda res:
+                          self.subdir2.move_child_to(u"child", n, u"newchild"))
+            d.addCallback(lambda res: n.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()),
+                                                     set([u"newchild", u"newfile",
+                                                          u"subdir2"])))
+            d.addCallback(lambda res: self.subdir2.list())
+            d.addCallback(lambda children:
+                          self.failUnlessReallyEqual(set(children.keys()), set([])))
+
+            # now make sure that we honor overwrite=False
+            d.addCallback(lambda res:
+                          self.subdir2.set_uri(u"newchild",
+                                               other_file_uri, other_file_uri))
+
+            d.addCallback(lambda res:
+                          self.shouldFail(ExistingChildError, "move_child_to-no",
+                                          "child 'newchild' already exists",
+                                          n.move_child_to, u"newchild",
+                                          self.subdir2,
+                                          overwrite=False))
+            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
+            d.addCallback(lambda child:
+                          self.failUnlessReallyEqual(child.get_uri(),
+                                                     other_file_uri))
+
+
+            # Setting the no-write field should diminish a mutable cap to read-only
+            # (for both files and directories).
+
+            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
+            d.addCallback(lambda ign: n.get(u"mutable"))
+            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
+            d.addCallback(lambda ign: n.get(u"mutable"))
+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
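+            # setting no-write a second time should be a no-op: the
+            # child stays read-only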
+            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
+            d.addCallback(lambda ign: n.get(u"mutable"))
+            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
+
+            d.addCallback(lambda ign: n.get(u"subdir2"))
+            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
+            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
+            d.addCallback(lambda ign: n.get(u"subdir2"))
+            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
+
+            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
+                                                metadata={"no-write": True}))
+            d.addCallback(lambda ign: n.get(u"mutable_ro"))
+            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
+
+            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
+            d.addCallback(lambda ign: n.get(u"subdir_ro"))
+            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
+
+            return d
+
+        d.addCallback(_then)
+
+        d.addErrback(self.explain_error)
         return d
 
hunk ./src/allmydata/test/test_dirnode.py 581
-    def test_initial_children(self):
-        self.basedir = "dirnode/Dirnode/test_initial_children"
-        self.set_up_grid()
+
+    def _do_initial_children_test(self, mdmf=False):
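+        # Create a directory pre-populated with kids (as MDMF when
+        # mdmf=True) and verify that the children come back intact.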
         c = self.g.clients[0]
         nm = c.nodemaker
 
hunk ./src/allmydata/test/test_dirnode.py 597
                 u"empty_litdir": (nm.create_from_cap(empty_litdir_uri), {}),
                 u"tiny_litdir": (nm.create_from_cap(tiny_litdir_uri), {}),
                 }
-        d = c.create_dirnode(kids)
-        
+        if mdmf:
+            d = c.create_dirnode(kids, version=MDMF_VERSION)
+        else:
+            d = c.create_dirnode(kids)
         def _created(dn):
             self.failUnless(isinstance(dn, dirnode.DirectoryNode))
hunk ./src/allmydata/test/test_dirnode.py 604
+            backing_node = dn._node
+            if mdmf:
+                self.failUnlessEqual(backing_node.get_version(),
+                                     MDMF_VERSION)
+            else:
+                self.failUnlessEqual(backing_node.get_version(),
+                                     SDMF_VERSION)
             self.failUnless(dn.is_mutable())
             self.failIf(dn.is_readonly())
             self.failIf(dn.is_unknown())
hunk ./src/allmydata/test/test_dirnode.py 619
             rep = str(dn)
             self.failUnless("RW-MUT" in rep)
             return dn.list()
-        d.addCallback(_created)
-        
+
         def _check_kids(children):
             self.failUnlessReallyEqual(set(children.keys()),
                                        set([one_nfc, u"two", u"mut", u"fut", u"fro",
hunk ./src/allmydata/test/test_dirnode.py 623
-                                            u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
+                                        u"fut-unic", u"fro-unic", u"empty_litdir", u"tiny_litdir"]))
             one_node, one_metadata = children[one_nfc]
             two_node, two_metadata = children[u"two"]
             mut_node, mut_metadata = children[u"mut"]
hunk ./src/allmydata/test/test_dirnode.py 683
             d2.addCallback(lambda children: children[u"short"][0].read(MemAccum()))
             d2.addCallback(lambda accum: self.failUnlessReallyEqual(accum.data, "The end."))
             return d2
-
+        d.addCallback(_created)
         d.addCallback(_check_kids)
 
         d.addCallback(lambda ign: nm.create_new_mutable_directory(kids))
hunk ./src/allmydata/test/test_dirnode.py 707
                                       bad_kids2))
         return d
 
+    def _do_basic_test(self, mdmf=False):
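+        # Minimal sanity check: the created node should be a mutable,
+        # writable, known DirectoryNode for both SDMF and MDMF.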
+        c = self.g.clients[0]
+        if mdmf:
+            d = c.create_dirnode(version=MDMF_VERSION)
+        else:
+            d = c.create_dirnode()
+        def _done(res):
+            self.failUnless(isinstance(res, dirnode.DirectoryNode))
+            self.failUnless(res.is_mutable())
+            self.failIf(res.is_readonly())
+            self.failIf(res.is_unknown())
+            self.failIf(res.is_allowed_in_immutable_directory())
+            res.raise_error()
+            rep = str(res)
+            self.failUnless("RW-MUT" in rep)
+        d.addCallback(_done)
+        return d
+
+    def test_basic(self):
+        self.basedir = "dirnode/Dirnode/test_basic"
+        self.set_up_grid()
+        return self._do_basic_test()
+
+    def test_basic_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_basic_mdmf"
+        self.set_up_grid()
+        return self._do_basic_test(mdmf=True)
+
+    def test_initial_children(self):
+        self.basedir = "dirnode/Dirnode/test_initial_children"
+        self.set_up_grid()
+        return self._do_initial_children_test()
+
     def test_immutable(self):
         self.basedir = "dirnode/Dirnode/test_immutable"
         self.set_up_grid()
hunk ./src/allmydata/test/test_dirnode.py 1025
         d.addCallback(_done)
         return d
 
-    def _test_deepcheck_create(self):
+    def _test_deepcheck_create(self, version=SDMF_VERSION):
         # create a small tree with a loop, and some non-directories
         #  root/
         #  root/subdir/
hunk ./src/allmydata/test/test_dirnode.py 1033
         #  root/subdir/link -> root
         #  root/rodir
         c = self.g.clients[0]
-        d = c.create_dirnode()
+        d = c.create_dirnode(version=version)
         def _created_root(rootnode):
             self._rootnode = rootnode
hunk ./src/allmydata/test/test_dirnode.py 1036
+            self.failUnlessEqual(rootnode._node.get_version(), version)
             return rootnode.create_subdirectory(u"subdir")
         d.addCallback(_created_root)
         def _created_subdir(subdir):
hunk ./src/allmydata/test/test_dirnode.py 1075
         d.addCallback(_check_results)
         return d
 
+    def test_deepcheck_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_deepcheck_mdmf"
+        self.set_up_grid()
+        d = self._test_deepcheck_create(MDMF_VERSION)
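+        # same tree as the SDMF deepcheck test, so the same counters:
+        # all four objects should check out healthy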
+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
+        def _check_results(r):
+            self.failUnless(IDeepCheckResults.providedBy(r))
+            c = r.get_counters()
+            self.failUnlessReallyEqual(c,
+                                       {"count-objects-checked": 4,
+                                        "count-objects-healthy": 4,
+                                        "count-objects-unhealthy": 0,
+                                        "count-objects-unrecoverable": 0,
+                                        "count-corrupt-shares": 0,
+                                        })
+            self.failIf(r.get_corrupt_shares())
+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
+        d.addCallback(_check_results)
+        return d
+
     def test_deepcheck_and_repair(self):
         self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair"
         self.set_up_grid()
hunk ./src/allmydata/test/test_dirnode.py 1124
         d.addCallback(_check_results)
         return d
 
+    def test_deepcheck_and_repair_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_deepcheck_and_repair_mdmf"
+        self.set_up_grid()
+        d = self._test_deepcheck_create(version=MDMF_VERSION)
+        d.addCallback(lambda rootnode:
+                      rootnode.start_deep_check_and_repair().when_done())
+        def _check_results(r):
+            self.failUnless(IDeepCheckAndRepairResults.providedBy(r))
+            c = r.get_counters()
+            self.failUnlessReallyEqual(c,
+                                       {"count-objects-checked": 4,
+                                        "count-objects-healthy-pre-repair": 4,
+                                        "count-objects-unhealthy-pre-repair": 0,
+                                        "count-objects-unrecoverable-pre-repair": 0,
+                                        "count-corrupt-shares-pre-repair": 0,
+                                        "count-objects-healthy-post-repair": 4,
+                                        "count-objects-unhealthy-post-repair": 0,
+                                        "count-objects-unrecoverable-post-repair": 0,
+                                        "count-corrupt-shares-post-repair": 0,
+                                        "count-repairs-attempted": 0,
+                                        "count-repairs-successful": 0,
+                                        "count-repairs-unsuccessful": 0,
+                                        })
+            self.failIf(r.get_corrupt_shares())
+            self.failIf(r.get_remaining_corrupt_shares())
+            self.failUnlessReallyEqual(len(r.get_all_results()), 4)
+        d.addCallback(_check_results)
+        return d
+
     def _mark_file_bad(self, rootnode):
         self.delete_shares_numbered(rootnode.get_uri(), [0])
         return rootnode
hunk ./src/allmydata/test/test_dirnode.py 1176
         d.addCallback(_check_results)
         return d
 
-    def test_readonly(self):
-        self.basedir = "dirnode/Dirnode/test_readonly"
+    def test_deepcheck_problems_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_deepcheck_problems_mdmf"
         self.set_up_grid()
hunk ./src/allmydata/test/test_dirnode.py 1179
+        d = self._test_deepcheck_create(version=MDMF_VERSION)
+        d.addCallback(lambda rootnode: self._mark_file_bad(rootnode))
+        d.addCallback(lambda rootnode: rootnode.start_deep_check().when_done())
+        def _check_results(r):
+            c = r.get_counters()
+            self.failUnlessReallyEqual(c,
+                                       {"count-objects-checked": 4,
+                                        "count-objects-healthy": 3,
+                                        "count-objects-unhealthy": 1,
+                                        "count-objects-unrecoverable": 0,
+                                        "count-corrupt-shares": 0,
+                                        })
+            #self.failUnlessReallyEqual(len(r.get_problems()), 1) # TODO
+        d.addCallback(_check_results)
+        return d
+
+    def _do_readonly_test(self, version=SDMF_VERSION):
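+        # Run the read-only directory checks against the requested
+        # mutable-file format.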
         c = self.g.clients[0]
         nm = c.nodemaker
         filecap = make_chk_file_uri(1234)
hunk ./src/allmydata/test/test_dirnode.py 1202
         filenode = nm.create_from_cap(filecap)
         uploadable = upload.Data("some data", convergence="some convergence string")
 
-        d = c.create_dirnode()
+        d = c.create_dirnode(version=version)
         def _created(rw_dn):
hunk ./src/allmydata/test/test_dirnode.py 1204
+            backing_node = rw_dn._node
+            self.failUnlessEqual(backing_node.get_version(), version)
             d2 = rw_dn.set_uri(u"child", filecap, filecap)
             d2.addCallback(lambda res: rw_dn)
             return d2
hunk ./src/allmydata/test/test_dirnode.py 1245
         d.addCallback(_listed)
         return d
 
+    def test_readonly(self):
+        self.basedir = "dirnode/Dirnode/test_readonly"
+        self.set_up_grid()
+        return self._do_readonly_test()
+
+    def test_readonly_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_readonly_mdmf"
+        self.set_up_grid()
+        return self._do_readonly_test(version=MDMF_VERSION)
+
     def failUnlessGreaterThan(self, a, b):
         self.failUnless(a > b, "%r should be > %r" % (a, b))
 
hunk ./src/allmydata/test/test_dirnode.py 1264
     def test_create(self):
         self.basedir = "dirnode/Dirnode/test_create"
         self.set_up_grid()
-        c = self.g.clients[0]
-
-        self.expected_manifest = []
-        self.expected_verifycaps = set()
-        self.expected_storage_indexes = set()
-
-        d = c.create_dirnode()
-        def _then(n):
-            # /
-            self.rootnode = n
-            self.failUnless(n.is_mutable())
-            u = n.get_uri()
-            self.failUnless(u)
-            self.failUnless(u.startswith("URI:DIR2:"), u)
-            u_ro = n.get_readonly_uri()
-            self.failUnless(u_ro.startswith("URI:DIR2-RO:"), u_ro)
-            u_v = n.get_verify_cap().to_string()
-            self.failUnless(u_v.startswith("URI:DIR2-Verifier:"), u_v)
-            u_r = n.get_repair_cap().to_string()
-            self.failUnlessReallyEqual(u_r, u)
-            self.expected_manifest.append( ((), u) )
-            self.expected_verifycaps.add(u_v)
-            si = n.get_storage_index()
-            self.expected_storage_indexes.add(base32.b2a(si))
-            expected_si = n._uri.get_storage_index()
-            self.failUnlessReallyEqual(si, expected_si)
-
-            d = n.list()
-            d.addCallback(lambda res: self.failUnlessEqual(res, {}))
-            d.addCallback(lambda res: n.has_child(u"missing"))
-            d.addCallback(lambda res: self.failIf(res))
-
-            fake_file_uri = make_mutable_file_uri()
-            other_file_uri = make_mutable_file_uri()
-            m = c.nodemaker.create_from_cap(fake_file_uri)
-            ffu_v = m.get_verify_cap().to_string()
-            self.expected_manifest.append( ((u"child",) , m.get_uri()) )
-            self.expected_verifycaps.add(ffu_v)
-            self.expected_storage_indexes.add(base32.b2a(m.get_storage_index()))
-            d.addCallback(lambda res: n.set_uri(u"child",
-                                                fake_file_uri, fake_file_uri))
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "set_uri-no",
-                                          "child 'child' already exists",
-                                          n.set_uri, u"child",
-                                          other_file_uri, other_file_uri,
-                                          overwrite=False))
-            # /
-            # /child = mutable
-
-            d.addCallback(lambda res: n.create_subdirectory(u"subdir"))
-
-            # /
-            # /child = mutable
-            # /subdir = directory
-            def _created(subdir):
-                self.failUnless(isinstance(subdir, dirnode.DirectoryNode))
-                self.subdir = subdir
-                new_v = subdir.get_verify_cap().to_string()
-                assert isinstance(new_v, str)
-                self.expected_manifest.append( ((u"subdir",), subdir.get_uri()) )
-                self.expected_verifycaps.add(new_v)
-                si = subdir.get_storage_index()
-                self.expected_storage_indexes.add(base32.b2a(si))
-            d.addCallback(_created)
-
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "mkdir-no",
-                                          "child 'subdir' already exists",
-                                          n.create_subdirectory, u"subdir",
-                                          overwrite=False))
-
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child", u"subdir"])))
-
-            d.addCallback(lambda res: n.start_deep_stats().when_done())
-            def _check_deepstats(stats):
-                self.failUnless(isinstance(stats, dict))
-                expected = {"count-immutable-files": 0,
-                            "count-mutable-files": 1,
-                            "count-literal-files": 0,
-                            "count-files": 1,
-                            "count-directories": 2,
-                            "size-immutable-files": 0,
-                            "size-literal-files": 0,
-                            #"size-directories": 616, # varies
-                            #"largest-directory": 616,
-                            "largest-directory-children": 2,
-                            "largest-immutable-file": 0,
-                            }
-                for k,v in expected.iteritems():
-                    self.failUnlessReallyEqual(stats[k], v,
-                                               "stats[%s] was %s, not %s" %
-                                               (k, stats[k], v))
-                self.failUnless(stats["size-directories"] > 500,
-                                stats["size-directories"])
-                self.failUnless(stats["largest-directory"] > 500,
-                                stats["largest-directory"])
-                self.failUnlessReallyEqual(stats["size-files-histogram"], [])
-            d.addCallback(_check_deepstats)
-
-            d.addCallback(lambda res: n.build_manifest().when_done())
-            def _check_manifest(res):
-                manifest = res["manifest"]
-                self.failUnlessReallyEqual(sorted(manifest),
-                                           sorted(self.expected_manifest))
-                stats = res["stats"]
-                _check_deepstats(stats)
-                self.failUnlessReallyEqual(self.expected_verifycaps,
-                                           res["verifycaps"])
-                self.failUnlessReallyEqual(self.expected_storage_indexes,
-                                           res["storage-index"])
-            d.addCallback(_check_manifest)
-
-            def _add_subsubdir(res):
-                return self.subdir.create_subdirectory(u"subsubdir")
-            d.addCallback(_add_subsubdir)
-            # /
-            # /child = mutable
-            # /subdir = directory
-            # /subdir/subsubdir = directory
-            d.addCallback(lambda res: n.get_child_at_path(u"subdir/subsubdir"))
-            d.addCallback(lambda subsubdir:
-                          self.failUnless(isinstance(subsubdir,
-                                                     dirnode.DirectoryNode)))
-            d.addCallback(lambda res: n.get_child_at_path(u""))
-            d.addCallback(lambda res: self.failUnlessReallyEqual(res.get_uri(),
-                                                                 n.get_uri()))
-
-            d.addCallback(lambda res: n.get_metadata_for(u"child"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()),
-                                               set(["tahoe"])))
-
-            d.addCallback(lambda res:
-                          self.shouldFail(NoSuchChildError, "gcamap-no",
-                                          "nope",
-                                          n.get_child_and_metadata_at_path,
-                                          u"subdir/nope"))
-            d.addCallback(lambda res:
-                          n.get_child_and_metadata_at_path(u""))
-            def _check_child_and_metadata1(res):
-                child, metadata = res
-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
-                # edge-metadata needs at least one path segment
-                self.failUnlessEqual(set(metadata.keys()), set([]))
-            d.addCallback(_check_child_and_metadata1)
-            d.addCallback(lambda res:
-                          n.get_child_and_metadata_at_path(u"child"))
-
-            def _check_child_and_metadata2(res):
-                child, metadata = res
-                self.failUnlessReallyEqual(child.get_uri(),
-                                           fake_file_uri)
-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
-            d.addCallback(_check_child_and_metadata2)
-
-            d.addCallback(lambda res:
-                          n.get_child_and_metadata_at_path(u"subdir/subsubdir"))
-            def _check_child_and_metadata3(res):
-                child, metadata = res
-                self.failUnless(isinstance(child, dirnode.DirectoryNode))
-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
-            d.addCallback(_check_child_and_metadata3)
-
-            # set_uri + metadata
-            # it should be possible to add a child without any metadata
-            d.addCallback(lambda res: n.set_uri(u"c2",
-                                                fake_file_uri, fake_file_uri,
-                                                {}))
-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # You can't override the link timestamps.
-            d.addCallback(lambda res: n.set_uri(u"c2",
-                                                fake_file_uri, fake_file_uri,
-                                                { 'tahoe': {'linkcrtime': "bogus"}}))
-            d.addCallback(lambda res: n.get_metadata_for(u"c2"))
-            def _has_good_linkcrtime(metadata):
-                self.failUnless(metadata.has_key('tahoe'))
-                self.failUnless(metadata['tahoe'].has_key('linkcrtime'))
-                self.failIfEqual(metadata['tahoe']['linkcrtime'], 'bogus')
-            d.addCallback(_has_good_linkcrtime)
-
-            # if we don't set any defaults, the child should get timestamps
-            d.addCallback(lambda res: n.set_uri(u"c3",
-                                                fake_file_uri, fake_file_uri))
-            d.addCallback(lambda res: n.get_metadata_for(u"c3"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # we can also add specific metadata at set_uri() time
-            d.addCallback(lambda res: n.set_uri(u"c4",
-                                                fake_file_uri, fake_file_uri,
-                                                {"key": "value"}))
-            d.addCallback(lambda res: n.get_metadata_for(u"c4"))
-            d.addCallback(lambda metadata:
-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                              (metadata['key'] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"c2"))
-            d.addCallback(lambda res: n.delete(u"c3"))
-            d.addCallback(lambda res: n.delete(u"c4"))
-
-            # set_node + metadata
-            # it should be possible to add a child without any metadata except for timestamps
-            d.addCallback(lambda res: n.set_node(u"d2", n, {}))
-            d.addCallback(lambda res: c.create_dirnode())
-            d.addCallback(lambda n2:
-                          self.shouldFail(ExistingChildError, "set_node-no",
-                                          "child 'd2' already exists",
-                                          n.set_node, u"d2", n2,
-                                          overwrite=False))
-            d.addCallback(lambda res: n.get_metadata_for(u"d2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # if we don't set any defaults, the child should get timestamps
-            d.addCallback(lambda res: n.set_node(u"d3", n))
-            d.addCallback(lambda res: n.get_metadata_for(u"d3"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            # we can also add specific metadata at set_node() time
-            d.addCallback(lambda res: n.set_node(u"d4", n,
-                                                {"key": "value"}))
-            d.addCallback(lambda res: n.get_metadata_for(u"d4"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                          (metadata["key"] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"d2"))
-            d.addCallback(lambda res: n.delete(u"d3"))
-            d.addCallback(lambda res: n.delete(u"d4"))
-
-            # metadata through set_children()
-            d.addCallback(lambda res:
-                          n.set_children({
-                              u"e1": (fake_file_uri, fake_file_uri),
-                              u"e2": (fake_file_uri, fake_file_uri, {}),
-                              u"e3": (fake_file_uri, fake_file_uri,
-                                      {"key": "value"}),
-                              }))
-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "set_children-no",
-                                          "child 'e1' already exists",
-                                          n.set_children,
-                                          { u"e1": (other_file_uri,
-                                                    other_file_uri),
-                                            u"new": (other_file_uri,
-                                                     other_file_uri),
-                                            },
-                                          overwrite=False))
-            # and 'new' should not have been created
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children: self.failIf(u"new" in children))
-            d.addCallback(lambda res: n.get_metadata_for(u"e1"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"e2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"e3"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                          (metadata["key"] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"e1"))
-            d.addCallback(lambda res: n.delete(u"e2"))
-            d.addCallback(lambda res: n.delete(u"e3"))
-
-            # metadata through set_nodes()
-            d.addCallback(lambda res:
-                          n.set_nodes({ u"f1": (n, None),
-                                        u"f2": (n, {}),
-                                        u"f3": (n, {"key": "value"}),
-                                        }))
-            d.addCallback(lambda n2: self.failUnlessIdentical(n2, n))
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "set_nodes-no",
-                                          "child 'f1' already exists",
-                                          n.set_nodes, { u"f1": (n, None),
-                                                         u"new": (n, None), },
-                                          overwrite=False))
-            # and 'new' should not have been created
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children: self.failIf(u"new" in children))
-            d.addCallback(lambda res: n.get_metadata_for(u"f1"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"f2"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"f3"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                          (metadata["key"] == "value"), metadata))
-
-            d.addCallback(lambda res: n.delete(u"f1"))
-            d.addCallback(lambda res: n.delete(u"f2"))
-            d.addCallback(lambda res: n.delete(u"f3"))
-
-
-            d.addCallback(lambda res:
-                          n.set_metadata_for(u"child",
-                                             {"tags": ["web2.0-compatible"], "tahoe": {"bad": "mojo"}}))
-            d.addCallback(lambda n1: n1.get_metadata_for(u"child"))
-            d.addCallback(lambda metadata:
-                          self.failUnless((set(metadata.keys()) == set(["tags", "tahoe"])) and
-                                          metadata["tags"] == ["web2.0-compatible"] and
-                                          "bad" not in metadata["tahoe"], metadata))
-
-            d.addCallback(lambda res:
-                          self.shouldFail(NoSuchChildError, "set_metadata_for-nosuch", "",
-                                          n.set_metadata_for, u"nosuch", {}))
-
-
-            def _start(res):
-                self._start_timestamp = time.time()
-            d.addCallback(_start)
-            # simplejson-1.7.1 (as shipped on Ubuntu 'gutsy') rounds all
-            # floats to hundredeths (it uses str(num) instead of repr(num)).
-            # simplejson-1.7.3 does not have this bug. To prevent this bug
-            # from causing the test to fail, stall for more than a few
-            # hundrededths of a second.
-            d.addCallback(self.stall, 0.1)
-            d.addCallback(lambda res: n.add_file(u"timestamps",
-                                                 upload.Data("stamp me", convergence="some convergence string")))
-            d.addCallback(self.stall, 0.1)
-            def _stop(res):
-                self._stop_timestamp = time.time()
-            d.addCallback(_stop)
-
-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
-            def _check_timestamp1(metadata):
-                self.failUnlessEqual(set(metadata.keys()), set(["tahoe"]))
-                tahoe_md = metadata["tahoe"]
-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
-
-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkcrtime"],
-                                                  self._start_timestamp)
-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
-                                                  tahoe_md["linkcrtime"])
-                self.failUnlessGreaterOrEqualThan(tahoe_md["linkmotime"],
-                                                  self._start_timestamp)
-                self.failUnlessGreaterOrEqualThan(self._stop_timestamp,
-                                                  tahoe_md["linkmotime"])
-                # Our current timestamp rules say that replacing an existing
-                # child should preserve the 'linkcrtime' but update the
-                # 'linkmotime'
-                self._old_linkcrtime = tahoe_md["linkcrtime"]
-                self._old_linkmotime = tahoe_md["linkmotime"]
-            d.addCallback(_check_timestamp1)
-            d.addCallback(self.stall, 2.0) # accomodate low-res timestamps
-            d.addCallback(lambda res: n.set_node(u"timestamps", n))
-            d.addCallback(lambda res: n.get_metadata_for(u"timestamps"))
-            def _check_timestamp2(metadata):
-                self.failUnlessIn("tahoe", metadata)
-                tahoe_md = metadata["tahoe"]
-                self.failUnlessEqual(set(tahoe_md.keys()), set(["linkcrtime", "linkmotime"]))
-
-                self.failUnlessReallyEqual(tahoe_md["linkcrtime"], self._old_linkcrtime)
-                self.failUnlessGreaterThan(tahoe_md["linkmotime"], self._old_linkmotime)
-                return n.delete(u"timestamps")
-            d.addCallback(_check_timestamp2)
-
-            d.addCallback(lambda res: n.delete(u"subdir"))
-            d.addCallback(lambda old_child:
-                          self.failUnlessReallyEqual(old_child.get_uri(),
-                                                     self.subdir.get_uri()))
-
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child"])))
-
-            uploadable1 = upload.Data("some data", convergence="converge")
-            d.addCallback(lambda res: n.add_file(u"newfile", uploadable1))
-            d.addCallback(lambda newnode:
-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
-            uploadable2 = upload.Data("some data", convergence="stuff")
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "add_file-no",
-                                          "child 'newfile' already exists",
-                                          n.add_file, u"newfile",
-                                          uploadable2,
-                                          overwrite=False))
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child", u"newfile"])))
-            d.addCallback(lambda res: n.get_metadata_for(u"newfile"))
-            d.addCallback(lambda metadata:
-                          self.failUnlessEqual(set(metadata.keys()), set(["tahoe"])))
-
-            uploadable3 = upload.Data("some data", convergence="converge")
-            d.addCallback(lambda res: n.add_file(u"newfile-metadata",
-                                                 uploadable3,
-                                                 {"key": "value"}))
-            d.addCallback(lambda newnode:
-                          self.failUnless(IImmutableFileNode.providedBy(newnode)))
-            d.addCallback(lambda res: n.get_metadata_for(u"newfile-metadata"))
-            d.addCallback(lambda metadata:
-                              self.failUnless((set(metadata.keys()) == set(["key", "tahoe"])) and
-                                              (metadata['key'] == "value"), metadata))
-            d.addCallback(lambda res: n.delete(u"newfile-metadata"))
-
-            d.addCallback(lambda res: n.create_subdirectory(u"subdir2"))
-            def _created2(subdir2):
-                self.subdir2 = subdir2
-                # put something in the way, to make sure it gets overwritten
-                return subdir2.add_file(u"child", upload.Data("overwrite me",
-                                                              "converge"))
-            d.addCallback(_created2)
-
-            d.addCallback(lambda res:
-                          n.move_child_to(u"child", self.subdir2))
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"newfile", u"subdir2"])))
-            d.addCallback(lambda res: self.subdir2.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"child"])))
-            d.addCallback(lambda res: self.subdir2.get(u"child"))
-            d.addCallback(lambda child:
-                          self.failUnlessReallyEqual(child.get_uri(),
-                                                     fake_file_uri))
-
-            # move it back, using new_child_name=
-            d.addCallback(lambda res:
-                          self.subdir2.move_child_to(u"child", n, u"newchild"))
-            d.addCallback(lambda res: n.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()),
-                                                     set([u"newchild", u"newfile",
-                                                          u"subdir2"])))
-            d.addCallback(lambda res: self.subdir2.list())
-            d.addCallback(lambda children:
-                          self.failUnlessReallyEqual(set(children.keys()), set([])))
-
-            # now make sure that we honor overwrite=False
-            d.addCallback(lambda res:
-                          self.subdir2.set_uri(u"newchild",
-                                               other_file_uri, other_file_uri))
-
-            d.addCallback(lambda res:
-                          self.shouldFail(ExistingChildError, "move_child_to-no",
-                                          "child 'newchild' already exists",
-                                          n.move_child_to, u"newchild",
-                                          self.subdir2,
-                                          overwrite=False))
-            d.addCallback(lambda res: self.subdir2.get(u"newchild"))
-            d.addCallback(lambda child:
-                          self.failUnlessReallyEqual(child.get_uri(),
-                                                     other_file_uri))
-
-
-            # Setting the no-write field should diminish a mutable cap to read-only
-            # (for both files and directories).
-
-            d.addCallback(lambda ign: n.set_uri(u"mutable", other_file_uri, other_file_uri))
-            d.addCallback(lambda ign: n.get(u"mutable"))
-            d.addCallback(lambda mutable: self.failIf(mutable.is_readonly(), mutable))
-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"mutable"))
-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
-            d.addCallback(lambda ign: n.set_metadata_for(u"mutable", {"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"mutable"))
-            d.addCallback(lambda mutable: self.failUnless(mutable.is_readonly(), mutable))
-
-            d.addCallback(lambda ign: n.get(u"subdir2"))
-            d.addCallback(lambda subdir2: self.failIf(subdir2.is_readonly()))
-            d.addCallback(lambda ign: n.set_metadata_for(u"subdir2", {"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"subdir2"))
-            d.addCallback(lambda subdir2: self.failUnless(subdir2.is_readonly(), subdir2))
-
-            d.addCallback(lambda ign: n.set_uri(u"mutable_ro", other_file_uri, other_file_uri,
-                                                metadata={"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"mutable_ro"))
-            d.addCallback(lambda mutable_ro: self.failUnless(mutable_ro.is_readonly(), mutable_ro))
-
-            d.addCallback(lambda ign: n.create_subdirectory(u"subdir_ro", metadata={"no-write": True}))
-            d.addCallback(lambda ign: n.get(u"subdir_ro"))
-            d.addCallback(lambda subdir_ro: self.failUnless(subdir_ro.is_readonly(), subdir_ro))
-
-            return d
-
-        d.addCallback(_then)
-
-        d.addErrback(self.explain_error)
-        return d
+        return self._do_create_test()
 
     def test_update_metadata(self):
         (t1, t2, t3) = (626644800.0, 634745640.0, 892226160.0)
hunk ./src/allmydata/test/test_dirnode.py 1283
         self.failUnlessEqual(md4, {"bool": True, "number": 42,
                                    "tahoe":{"linkcrtime": t1, "linkmotime": t1}})
 
-    def test_create_subdirectory(self):
-        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
-        self.set_up_grid()
+    def _do_create_subdirectory_test(self, version=SDMF_VERSION):
         c = self.g.clients[0]
         nm = c.nodemaker
 
hunk ./src/allmydata/test/test_dirnode.py 1287
-        d = c.create_dirnode()
+        d = c.create_dirnode(version=version)
         def _then(n):
             # /
             self.rootnode = n
hunk ./src/allmydata/test/test_dirnode.py 1297
             kids = {u"kid1": (nm.create_from_cap(fake_file_uri), {}),
                     u"kid2": (nm.create_from_cap(other_file_uri), md),
                     }
-            d = n.create_subdirectory(u"subdir", kids)
+            d = n.create_subdirectory(u"subdir", kids,
+                                      mutable_version=version)
             def _check(sub):
                 d = n.get_child_at_path(u"subdir")
                 d.addCallback(lambda sub2: self.failUnlessReallyEqual(sub2.get_uri(),
hunk ./src/allmydata/test/test_dirnode.py 1314
         d.addCallback(_then)
         return d
 
+    def test_create_subdirectory(self):
+        self.basedir = "dirnode/Dirnode/test_create_subdirectory"
+        self.set_up_grid()
+        return self._do_create_subdirectory_test()
+
+    def test_create_subdirectory_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_create_subdirectory_mdmf"
+        self.set_up_grid()
+        return self._do_create_subdirectory_test(version=MDMF_VERSION)
+
+    def test_create_mdmf(self):
+        self.basedir = "dirnode/Dirnode/test_create_mdmf"
+        self.set_up_grid()
+        return self._do_create_test(mdmf=True)
+
+    def test_mdmf_initial_children(self):
+        self.basedir = "dirnode/Dirnode/test_mdmf_initial_children"
+        self.set_up_grid()
+        return self._do_initial_children_test(mdmf=True)
+
 class MinimalFakeMutableFile:
     def get_writekey(self):
         return "writekey"
hunk ./src/allmydata/test/test_dirnode.py 1706
             self.failUnless(n.get_readonly_uri().startswith("imm."), i)
 
 
+
 class DeepStats(testutil.ReallyEqualMixin, unittest.TestCase):
     timeout = 240 # It takes longer than 120 seconds on Francois's arm box.
     def test_stats(self):
hunk ./src/allmydata/test/test_uri.py 795
         self.failUnlessReallyEqual(u1.get_storage_index(), None)
         self.failUnlessReallyEqual(u1.abbrev_si(), "<LIT>")
 
+    def test_mdmf(self):
+        writekey = "\x01" * 16
+        fingerprint = "\x02" * 32
+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
+        d1 = uri.MDMFDirectoryURI(uri1)
+        self.failIf(d1.is_readonly())
+        self.failUnless(d1.is_mutable())
+        self.failUnless(IURI.providedBy(d1))
+        self.failUnless(IDirnodeURI.providedBy(d1))
+        d1_uri = d1.to_string()
+
+        d2 = uri.from_string(d1_uri)
+        self.failUnlessIsInstance(d2, uri.MDMFDirectoryURI)
+        self.failIf(d2.is_readonly())
+        self.failUnless(d2.is_mutable())
+        self.failUnless(IURI.providedBy(d2))
+        self.failUnless(IDirnodeURI.providedBy(d2))
+
+        # It doesn't make sense to ask for a deep immutable URI for a
+        # mutable directory, and we should get back a result to that
+        # effect.
+        d3 = uri.from_string(d2.to_string(), deep_immutable=True)
+        self.failUnlessIsInstance(d3, uri.UnknownURI)
+
+    def test_mdmf_with_extensions(self):
+        writekey = "\x01" * 16
+        fingerprint = "\x02" * 32
+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
+        d1 = uri.MDMFDirectoryURI(uri1)
+        d1_uri = d1.to_string()
+        # Add some extensions, verify that the URI is interpreted
+        # correctly.
+        d1_uri += ":3:131073"
+        uri2 = uri.from_string(d1_uri)
+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
+        self.failUnless(IURI.providedBy(uri2))
+        self.failUnless(IDirnodeURI.providedBy(uri2))
+        self.failUnless(uri2.is_mutable())
+        self.failIf(uri2.is_readonly())
+
+        d2_uri = uri2.to_string()
+        self.failUnlessIn(":3:131073", d2_uri)
+
+        # Now attenuate, verify that the extensions persist
+        ro_uri = uri2.get_readonly()
+        self.failUnlessIsInstance(ro_uri, uri.ReadonlyMDMFDirectoryURI)
+        self.failUnless(ro_uri.is_mutable())
+        self.failUnless(ro_uri.is_readonly())
+        self.failUnless(IURI.providedBy(ro_uri))
+        self.failUnless(IDirnodeURI.providedBy(ro_uri))
+        ro_uri_str = ro_uri.to_string()
+        self.failUnlessIn(":3:131073", ro_uri_str)
+
+    def test_mdmf_attenuation(self):
+        writekey = "\x01" * 16
+        fingerprint = "\x02" * 32
+
+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
+        d1 = uri.MDMFDirectoryURI(uri1)
+        self.failUnless(d1.is_mutable())
+        self.failIf(d1.is_readonly())
+        self.failUnless(IURI.providedBy(d1))
+        self.failUnless(IDirnodeURI.providedBy(d1))
+
+        d1_uri = d1.to_string()
+        d1_uri_from_fn = uri.MDMFDirectoryURI(d1.get_filenode_cap()).to_string()
+        self.failUnlessEqual(d1_uri_from_fn, d1_uri)
+
+        uri2 = uri.from_string(d1_uri)
+        self.failUnlessIsInstance(uri2, uri.MDMFDirectoryURI)
+        self.failUnless(IURI.providedBy(uri2))
+        self.failUnless(IDirnodeURI.providedBy(uri2))
+        self.failUnless(uri2.is_mutable())
+        self.failIf(uri2.is_readonly())
+
+        ro = uri2.get_readonly()
+        self.failUnlessIsInstance(ro, uri.ReadonlyMDMFDirectoryURI)
+        self.failUnless(ro.is_mutable())
+        self.failUnless(ro.is_readonly())
+        self.failUnless(IURI.providedBy(ro))
+        self.failUnless(IDirnodeURI.providedBy(ro))
+
+        ro_uri = ro.to_string()
+        n = uri.from_string(ro_uri, deep_immutable=True)
+        self.failUnlessIsInstance(n, uri.UnknownURI)
+
+        fn_cap = ro.get_filenode_cap()
+        fn_ro_cap = fn_cap.get_readonly()
+        d3 = uri.ReadonlyMDMFDirectoryURI(fn_ro_cap)
+        self.failUnlessEqual(ro.to_string(), d3.to_string())
+        self.failUnless(ro.is_mutable())
+        self.failUnless(ro.is_readonly())
+
+    def test_mdmf_verifier(self):
+        # The verify cap of an MDMF directory should be immutable, and
+        # should be the same whether it is derived from the writecap or
+        # from the readcap.
+        writekey = "\x01" * 16
+        fingerprint = "\x02" * 32
+        uri1 = uri.WritableMDMFFileURI(writekey, fingerprint)
+        d1 = uri.MDMFDirectoryURI(uri1)
+        v1 = d1.get_verify_cap()
+        self.failUnlessIsInstance(v1, uri.MDMFDirectoryURIVerifier)
+        self.failIf(v1.is_mutable())
+
+        d2 = uri.from_string(d1.to_string())
+        v2 = d2.get_verify_cap()
+        self.failUnlessIsInstance(v2, uri.MDMFDirectoryURIVerifier)
+        self.failIf(v2.is_mutable())
+        self.failUnlessEqual(v2.to_string(), v1.to_string())
+
+        # Now attenuate and make sure that works correctly.
+        r3 = d2.get_readonly()
+        v3 = r3.get_verify_cap()
+        self.failUnlessIsInstance(v3, uri.MDMFDirectoryURIVerifier)
+        self.failIf(v3.is_mutable())
+        self.failUnlessEqual(v3.to_string(), v1.to_string())
+        r4 = uri.from_string(r3.to_string())
+        v4 = r4.get_verify_cap()
+        self.failUnlessIsInstance(v4, uri.MDMFDirectoryURIVerifier)
+        self.failIf(v4.is_mutable())
+        self.failUnlessEqual(v4.to_string(), v3.to_string())
+
}
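
(A minimal sketch, separate from the patches themselves, of the cap
relationships that the new test_uri cases assert. The writekey and
fingerprint values are the same placeholder test vectors used above,
and the trailing ":3:131073" extension fields are treated as opaque
hints that must survive parsing and attenuation.)

    from allmydata import uri

    writekey = "\x01" * 16                  # placeholder test vector
    fingerprint = "\x02" * 32               # placeholder test vector
    d1 = uri.MDMFDirectoryURI(uri.WritableMDMFFileURI(writekey, fingerprint))
    ro = d1.get_readonly()          # write cap -> read cap (still mutable)
    v1 = d1.get_verify_cap()        # write cap -> verify cap (immutable)
    # Attenuation is consistent: the same verify cap falls out no matter
    # which cap it is derived from, and caps round-trip through strings.
    assert v1.to_string() == ro.get_verify_cap().to_string()
    assert isinstance(uri.from_string(d1.to_string()), uri.MDMFDirectoryURI)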
[web: teach WUI and webapi to create MDMF directories
Kevan Carstensen <kevan@isnotajoke.com>**20110617180019
 Ignore-this: 956a60542a26c2d5118085ab9e3c470e
] {
hunk ./src/allmydata/web/common.py 41
         return None # interpreted by the caller as "let the nodemaker decide"
 
     arg = arg.lower()
-    assert arg in ("mdmf", "sdmf")
-
     if arg == "mdmf":
         return MDMF_VERSION
hunk ./src/allmydata/web/common.py 43
+    elif arg == "sdmf":
+        return SDMF_VERSION
 
hunk ./src/allmydata/web/common.py 46
-    return SDMF_VERSION
+    return "invalid"
 
 
 def parse_offset_arg(offset):
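
(For reference: after this change, parse_mutable_type_arg has a
three-way contract that every caller below relies on:

    None   -> None           (no argument; let the nodemaker decide)
    "mdmf" -> MDMF_VERSION
    "sdmf" -> SDMF_VERSION
    other  -> "invalid"      (callers turn this into a 400 Bad Request)

Returning a sentinel rather than raising keeps the function usable from
handlers that want to choose their own error response.)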
hunk ./src/allmydata/web/directory.py 26
      IOpHandleTable, NeedOperationHandleError, \
      boolean_of_arg, get_arg, get_root, parse_replace_arg, \
      should_create_intermediate_directories, \
-     getxmlfile, RenderMixin, humanize_failure, convert_children_json
+     getxmlfile, RenderMixin, humanize_failure, convert_children_json, \
+     parse_mutable_type_arg
 from allmydata.web.filenode import ReplaceMeMixin, \
      FileNodeHandler, PlaceHolderNodeHandler
 from allmydata.web.check_results import CheckResults, \
hunk ./src/allmydata/web/directory.py 112
                     mutable = True
                     if t == "mkdir-immutable":
                         mutable = False
+
+                    mt = None
+                    if mutable:
+                        arg = get_arg(req, "mutable-type", None)
+                        mt = parse_mutable_type_arg(arg)
+                        if mt == "invalid":
+                            raise WebError("Unknown type: %s" % arg,
+                                           http.BAD_REQUEST)
                     d = self.node.create_subdirectory(name, kids,
hunk ./src/allmydata/web/directory.py 121
-                                                      mutable=mutable)
+                                                      mutable=mutable,
+                                                      mutable_version=mt)
                     d.addCallback(make_handler_for,
                                   self.client, self.node, name)
                     return d
hunk ./src/allmydata/web/directory.py 253
         name = name.decode("utf-8")
         replace = boolean_of_arg(get_arg(req, "replace", "true"))
         kids = {}
-        d = self.node.create_subdirectory(name, kids, overwrite=replace)
+        arg = get_arg(req, "mutable-type", None)
+        mt = parse_mutable_type_arg(arg)
+        if mt == "invalid":
+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
+        elif mt is not None:
+            d = self.node.create_subdirectory(name, kids, overwrite=replace,
+                                              mutable_version=mt)
+        else:
+            d = self.node.create_subdirectory(name, kids, overwrite=replace)
         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
         return d
 
hunk ./src/allmydata/web/directory.py 277
         req.content.seek(0)
         kids_json = req.content.read()
         kids = convert_children_json(self.client.nodemaker, kids_json)
-        d = self.node.create_subdirectory(name, kids, overwrite=False)
+        arg = get_arg(req, "mutable-type", None)
+        mt = parse_mutable_type_arg(arg)
+        if mt == "invalid":
+            raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
+        elif mt is not None:
+            d = self.node.create_subdirectory(name, kids, overwrite=False,
+                                              mutable_version=mt)
+        else:
+            d = self.node.create_subdirectory(name, kids, overwrite=False)
         d.addCallback(lambda child: child.get_uri()) # TODO: urlencode
         return d
 
hunk ./src/allmydata/web/directory.py 786
 
         return ctx.tag
 
+    # XXX: Duplicated from root.py.
     def render_forms(self, ctx, data):
         forms = []
 
hunk ./src/allmydata/web/directory.py 795
         if self.dirnode_children is None:
             return T.div["No upload forms: directory is unreadable"]
 
+        mdmf_directory_input = T.input(type='radio', name='mutable-type',
+                                       id='mutable-directory-mdmf',
+                                       value='mdmf')
+        sdmf_directory_input = T.input(type='radio', name='mutable-type',
+                                       id='mutable-directory-sdmf',
+                                       value='sdmf', checked='checked')
         mkdir = T.form(action=".", method="post",
                        enctype="multipart/form-data")[
             T.fieldset[
hunk ./src/allmydata/web/directory.py 809
             T.legend(class_="freeform-form-label")["Create a new directory in this directory"],
             "New directory name: ",
             T.input(type="text", name="name"), " ",
+            T.label(for_='mutable-directory-sdmf')["SDMF"],
+            sdmf_directory_input,
+            T.label(for_='mutable-directory-mdmf')["MDMF"],
+            mdmf_directory_input,
             T.input(type="submit", value="Create"),
             ]]
         forms.append(T.div(class_="freeform-form")[mkdir])
hunk ./src/allmydata/web/filenode.py 29
         # a new file is being uploaded in our place.
         mutable = boolean_of_arg(get_arg(req, "mutable", "false"))
         if mutable:
-            mutable_type = parse_mutable_type_arg(get_arg(req,
-                                                          "mutable-type",
-                                                          None))
+            arg = get_arg(req, "mutable-type", None)
+            mutable_type = parse_mutable_type_arg(arg)
+            if mutable_type == "invalid":
+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
+
             data = MutableFileHandle(req.content)
             d = client.create_mutable_file(data, version=mutable_type)
             def _uploaded(newnode):
hunk ./src/allmydata/web/filenode.py 76
         # create an immutable file
         contents = req.fields["file"]
         if mutable:
-            mutable_type = parse_mutable_type_arg(get_arg(req, "mutable-type",
-                                                          None))
+            arg = get_arg(req, "mutable-type", None)
+            mutable_type = parse_mutable_type_arg(arg)
+            if mutable_type == "invalid":
+                raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
             uploadable = MutableFileHandle(contents.file)
             d = client.create_mutable_file(uploadable, version=mutable_type)
             def _uploaded(newnode):
hunk ./src/allmydata/web/root.py 50
         if t == "":
             mutable = boolean_of_arg(get_arg(req, "mutable", "false").strip())
             if mutable:
-                version = parse_mutable_type_arg(get_arg(req, "mutable-type",
-                                                 None))
+                arg = get_arg(req, "mutable-type", None)
+                version = parse_mutable_type_arg(arg)
+                if version == "invalid":
+                    errmsg = "Unknown type: %s" % arg
+                    raise WebError(errmsg, http.BAD_REQUEST)
+
                 return unlinked.PUTUnlinkedSSK(req, self.client, version)
             else:
                 return unlinked.PUTUnlinkedCHK(req, self.client)
hunk ./src/allmydata/web/root.py 74
         if t in ("", "upload"):
             mutable = bool(get_arg(req, "mutable", "").strip())
             if mutable:
-                version = parse_mutable_type_arg(get_arg(req, "mutable-type",
-                                                         None))
+                arg = get_arg(req, "mutable-type", None)
+                version = parse_mutable_type_arg(arg)
+                if version == "invalid":
+                    raise WebError("Unknown type: %s" % arg, http.BAD_REQUEST)
                 return unlinked.POSTUnlinkedSSK(req, self.client, version)
             else:
                 return unlinked.POSTUnlinkedCHK(req, self.client)
hunk ./src/allmydata/web/root.py 376
 
     def render_mkdir_form(self, ctx, data):
         # this is a form where users can create new directories
+        mdmf_input = T.input(type='radio', name='mutable-type',
+                             value='mdmf', id='mutable-directory-mdmf')
+        sdmf_input = T.input(type='radio', name='mutable-type',
+                             value='sdmf', id='mutable-directory-sdmf',
+                             checked='checked')
         form = T.form(action="uri", method="post",
                       enctype="multipart/form-data")[
             T.fieldset[
hunk ./src/allmydata/web/root.py 385
             T.legend(class_="freeform-form-label")["Create a directory"],
+            T.label(for_='mutable-directory-sdmf')["SDMF"],
+            sdmf_input,
+            T.label(for_='mutable-directory-mdmf')["MDMF"],
+            mdmf_input,
             T.input(type="hidden", name="t", value="mkdir"),
             T.input(type="hidden", name="redirect_to_result", value="true"),
             T.input(type="submit", value="Create a directory"),
hunk ./src/allmydata/web/unlinked.py 9
 from allmydata.immutable.upload import FileHandle
 from allmydata.mutable.publish import MutableFileHandle
 from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \
-     convert_children_json, WebError
+     convert_children_json, WebError, parse_mutable_type_arg
 from allmydata.web import status
 
 def PUTUnlinkedCHK(req, client):
hunk ./src/allmydata/web/unlinked.py 30
 
 def PUTUnlinkedCreateDirectory(req, client):
     # "PUT /uri?t=mkdir", to create an unlinked directory.
-    d = client.create_dirnode()
+    arg = get_arg(req, "mutable-type", None)
+    mt = parse_mutable_type_arg(arg)
+    if mt == "invalid":
+        msg = "Unknown type: %s" % arg
+        raise WebError(msg, http.BAD_REQUEST)
+    elif mt is not None:
+        d = client.create_dirnode(version=mt)
+    else:
+        d = client.create_dirnode()
     d.addCallback(lambda dirnode: dirnode.get_uri())
     # XXX add redirect_to_result
     return d
hunk ./src/allmydata/web/unlinked.py 115
             raise WebError("t=mkdir does not accept children=, "
                            "try t=mkdir-with-children instead",
                            http.BAD_REQUEST)
-    d = client.create_dirnode()
+    arg = get_arg(req, "mutable-type", None)
+    mt = parse_mutable_type_arg(arg)
+    if mt == "invalid":
+        msg = "Unknown type: %s" % arg
+        raise WebError(msg, http.BAD_REQUEST)
+    elif mt is not None:
+        d = client.create_dirnode(version=mt)
+    else:
+        d = client.create_dirnode()
     redirect = get_arg(req, "redirect_to_result", "false")
     if boolean_of_arg(redirect):
         def _then_redir(res):
}
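
(Illustrative usage, assuming a node on the default webapi port; the
$DIRCAP below is a placeholder. With the handlers above in place, an
MDMF directory can be created with an ordinary POST; urlopen() issues a
POST because a body, even an empty one, is supplied.)

    import urllib2

    url = ("http://127.0.0.1:3456/uri/$DIRCAP/newdir"
           "?t=mkdir&mutable-type=mdmf")
    writecap = urllib2.urlopen(url, data="").read()
    # Omitting mutable-type falls back to the nodemaker's default;
    # mutable-type=bogus yields "400 Bad Request: Unknown type: bogus".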
[test/test_web: test webapi and WUI for MDMF directory handling
Kevan Carstensen <kevan@isnotajoke.com>**20110617180100
 Ignore-this: 63ed7832872fd35eb7319cf6a6f251b
] {
hunk ./src/allmydata/test/test_web.py 1122
         d.addCallback(lambda json: self.failUnlessIn("sdmf", json))
         return d
 
+    def test_PUT_NEWFILEURL_unlinked_bad_mutable_type(self):
+        contents = self.NEWFILE_CONTENTS * 300000
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.PUT, "/uri?mutable=true&mutable-type=foo",
+                                    contents)
+
     def test_PUT_NEWFILEURL_range_bad(self):
         headers = {"content-range": "bytes 1-10/%d" % len(self.NEWFILE_CONTENTS)}
         target = self.public_url + "/foo/new.txt"
hunk ./src/allmydata/test/test_web.py 1365
             self.failUnless(CSS_STYLE.search(res), res)
         d.addCallback(_check)
         return d
-    
+
     def test_GET_FILEURL_uri_missing(self):
         d = self.GET(self.public_url + "/foo/missing?t=uri")
         d.addBoth(self.should404, "test_GET_FILEURL_uri_missing")
hunk ./src/allmydata/test/test_web.py 1375
         d = self.GET(self.public_url + "/foo", followRedirect=True)
         def _check(res):
             self.failUnlessIn('<div class="toolbar-item"><a href="../../..">Return to Welcome page</a></div>',res)
+            # These are radio buttons that allow a user to toggle
+            # whether a particular mutable file is SDMF or MDMF.
             self.failUnlessIn("mutable-type-mdmf", res)
             self.failUnlessIn("mutable-type-sdmf", res)
hunk ./src/allmydata/test/test_web.py 1379
+            # Similarly, these toggle whether a particular directory
+            # should be MDMF or SDMF.
+            self.failUnlessIn("mutable-directory-mdmf", res)
+            self.failUnlessIn("mutable-directory-sdmf", res)
             self.failUnlessIn("quux", res)
         d.addCallback(_check)
         return d
hunk ./src/allmydata/test/test_web.py 1396
             # whether a particular mutable file is MDMF or SDMF.
             self.failUnlessIn("mutable-type-mdmf", html)
             self.failUnlessIn("mutable-type-sdmf", html)
+            # We should also have the ability to create a mutable directory.
+            self.failUnlessIn("mkdir", html)
+            # ...and we should have the ability to say whether that's an
+            # MDMF or SDMF directory
+            self.failUnlessIn("mutable-directory-mdmf", html)
+            self.failUnlessIn("mutable-directory-sdmf", html)
         d.addCallback(_got_html)
         return d
 
hunk ./src/allmydata/test/test_web.py 1710
         d.addCallback(self.failUnlessNodeKeysAre, [])
         return d
 
+    def test_PUT_NEWDIRURL_mdmf(self):
+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
+        d.addCallback(lambda res:
+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
+        return d
+
+    def test_PUT_NEWDIRURL_sdmf(self):
+        d = self.PUT(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf",
+                     "")
+        d.addCallback(lambda res:
+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
+        return d
+
+    def test_PUT_NEWDIRURL_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                             400, "Bad Request", "Unknown type: foo",
+                             self.PUT, self.public_url + \
+                             "/foo/newdir=?t=mkdir&mutable-type=foo", "")
+
     def test_POST_NEWDIRURL(self):
         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir", "")
         d.addCallback(lambda res:
hunk ./src/allmydata/test/test_web.py 1743
         d.addCallback(self.failUnlessNodeKeysAre, [])
         return d
 
+    def test_POST_NEWDIRURL_mdmf(self):
+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=mdmf", "")
+        d.addCallback(lambda res:
+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
+        return d
+
+    def test_POST_NEWDIRURL_sdmf(self):
+        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir&mutable-type=sdmf", "")
+        d.addCallback(lambda res:
+            self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
+        return d
+
+    def test_POST_NEWDIRURL_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST2, self.public_url + \
+                                    "/foo/newdir?t=mkdir&mutable-type=foo", "")
+
     def test_POST_NEWDIRURL_emptyname(self):
         # an empty pathname component (i.e. a double-slash) is disallowed
         d = self.shouldFail2(error.Error, "test_POST_NEWDIRURL_emptyname",
hunk ./src/allmydata/test/test_web.py 1775
                              self.POST, self.public_url + "//?t=mkdir")
         return d
 
-    def test_POST_NEWDIRURL_initial_children(self):
+    def _do_POST_NEWDIRURL_initial_children_test(self, version=None):
         (newkids, caps) = self._create_initial_children()
hunk ./src/allmydata/test/test_web.py 1777
-        d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-with-children",
+        query = "/foo/newdir?t=mkdir-with-children"
+        if version == MDMF_VERSION:
+            query += "&mutable-type=mdmf"
+        elif version == SDMF_VERSION:
+            query += "&mutable-type=sdmf"
+        else:
+            version = SDMF_VERSION # the default; checked below
+        d = self.POST2(self.public_url + query,
                        simplejson.dumps(newkids))
         def _check(uri):
             n = self.s.create_node_from_uri(uri.strip())
hunk ./src/allmydata/test/test_web.py 1789
             d2 = self.failUnlessNodeKeysAre(n, newkids.keys())
+            self.failUnlessEqual(n._node.get_version(), version)
             d2.addCallback(lambda ign:
                            self.failUnlessROChildURIIs(n, u"child-imm",
                                                        caps['filecap1']))
hunk ./src/allmydata/test/test_web.py 1827
         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
         return d
 
+    def test_POST_NEWDIRURL_initial_children(self):
+        return self._do_POST_NEWDIRURL_initial_children_test()
+
+    def test_POST_NEWDIRURL_initial_children_mdmf(self):
+        return self._do_POST_NEWDIRURL_initial_children_test(MDMF_VERSION)
+
+    def test_POST_NEWDIRURL_initial_children_sdmf(self):
+        return self._do_POST_NEWDIRURL_initial_children_test(SDMF_VERSION)
+
+    def test_POST_NEWDIRURL_initial_children_bad_mutable_type(self):
+        (newkids, caps) = self._create_initial_children()
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST2, self.public_url + \
+                                    "/foo/newdir?t=mkdir-with-children&mutable-type=foo",
+                                    simplejson.dumps(newkids))
+
     def test_POST_NEWDIRURL_immutable(self):
         (newkids, caps) = self._create_immutable_children()
         d = self.POST2(self.public_url + "/foo/newdir?t=mkdir-immutable",
hunk ./src/allmydata/test/test_web.py 1944
         d.addCallback(self.failUnlessNodeKeysAre, [])
         return d
 
+    def test_PUT_NEWDIRURL_mkdirs_mdmf(self):
+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=mdmf", "")
+        d.addCallback(lambda ignored:
+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
+        d.addCallback(lambda ignored:
+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda ignored:
+            self._foo_node.get_child_at_path(u"subdir"))
+        def _got_subdir(subdir):
+            # XXX: What we want?
+            #self.failUnlessEqual(subdir._node.get_version(), MDMF_VERSION)
+            self.failUnlessNodeHasChild(subdir, u"newdir")
+            return subdir.get_child_at_path(u"newdir")
+        d.addCallback(_got_subdir)
+        d.addCallback(lambda newdir:
+            self.failUnlessEqual(newdir._node.get_version(), MDMF_VERSION))
+        return d
+
+    def test_PUT_NEWDIRURL_mkdirs_sdmf(self):
+        d = self.PUT(self.public_url + "/foo/subdir/newdir?t=mkdir&mutable-type=sdmf", "")
+        d.addCallback(lambda ignored:
+            self.failUnlessNodeHasChild(self._foo_node, u"subdir"))
+        d.addCallback(lambda ignored:
+            self.failIfNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda ignored:
+            self._foo_node.get_child_at_path(u"subdir"))
+        def _got_subdir(subdir):
+            # XXX: Should intermediate directories inherit the
+            #      requested mutable type?
+            #self.failUnlessEqual(subdir._node.get_version(), SDMF_VERSION)
+            self.failUnlessNodeHasChild(subdir, u"newdir")
+            return subdir.get_child_at_path(u"newdir")
+        d.addCallback(_got_subdir)
+        d.addCallback(lambda newdir:
+            self.failUnlessEqual(newdir._node.get_version(), SDMF_VERSION))
+        return d
+
+    def test_PUT_NEWDIRURL_mkdirs_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.PUT, self.public_url + \
+                                    "/foo/subdir/newdir?t=mkdir&mutable-type=foo",
+                                    "")
+
     def test_DELETE_DIRURL(self):
         d = self.DELETE(self.public_url + "/foo")
         d.addCallback(lambda res:
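
(A design note on test_PUT_NEWDIRURL_mkdirs_mdmf/_sdmf above: the
commented-out assertions deliberately leave open whether intermediate
directories created by t=mkdir should inherit the requested
mutable-type. As written, only the leaf directory's format is
asserted.)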
hunk ./src/allmydata/test/test_web.py 2254
         d.addCallback(_got_json, "mdmf")
         return d
 
+    def test_POST_upload_mutable_type_unlinked_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST,
+                                    "/uri?5=upload&mutable=true&mutable-type=foo",
+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
+
     def test_POST_upload_mutable_type(self):
         d = self.POST(self.public_url + \
                       "/foo?t=upload&mutable=true&mutable-type=sdmf",
hunk ./src/allmydata/test/test_web.py 2290
         d.addCallback(_got_json, "mdmf")
         return d
 
+    def test_POST_upload_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST, self.public_url + \
+                                    "/foo?t=upload&mutable=true&mutable-type=foo",
+                                    file=("foo.txt", self.NEWFILE_CONTENTS * 300000))
+
     def test_POST_upload_mutable(self):
         # this creates a mutable file
         d = self.POST(self.public_url + "/foo", t="upload", mutable="true",
hunk ./src/allmydata/test/test_web.py 2796
         d.addCallback(self.failUnlessNodeKeysAre, [])
         return d
 
+    def test_POST_mkdir_mdmf(self):
+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=mdmf")
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
+        return d
+
+    def test_POST_mkdir_sdmf(self):
+        d = self.POST(self.public_url + "/foo?t=mkdir&name=newdir&mutable-type=sdmf")
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
+        return d
+
+    def test_POST_mkdir_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST, self.public_url + \
+                                    "/foo?t=mkdir&name=newdir&mutable-type=foo")
+
     def test_POST_mkdir_initial_children(self):
         (newkids, caps) = self._create_initial_children()
         d = self.POST2(self.public_url +
hunk ./src/allmydata/test/test_web.py 2829
         d.addCallback(self.failUnlessROChildURIIs, u"child-imm", caps['filecap1'])
         return d
 
+    def test_POST_mkdir_initial_children_mdmf(self):
+        (newkids, caps) = self._create_initial_children()
+        d = self.POST2(self.public_url +
+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=mdmf",
+                       simplejson.dumps(newkids))
+        d.addCallback(lambda res:
+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), MDMF_VERSION))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
+                       caps['filecap1'])
+        return d
+
+    # XXX: Duplication.
+    def test_POST_mkdir_initial_children_sdmf(self):
+        (newkids, caps) = self._create_initial_children()
+        d = self.POST2(self.public_url +
+                       "/foo?t=mkdir-with-children&name=newdir&mutable-type=sdmf",
+                       simplejson.dumps(newkids))
+        d.addCallback(lambda res:
+                      self.failUnlessNodeHasChild(self._foo_node, u"newdir"))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(lambda node:
+            self.failUnlessEqual(node._node.get_version(), SDMF_VERSION))
+        d.addCallback(lambda res: self._foo_node.get(u"newdir"))
+        d.addCallback(self.failUnlessROChildURIIs, u"child-imm",
+                       caps['filecap1'])
+        return d
+
+    def test_POST_mkdir_initial_children_bad_mutable_type(self):
+        (newkids, caps) = self._create_initial_children()
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST, self.public_url + \
+                                    "/foo?t=mkdir-with-children&name=newdir&mutable-type=foo",
+                                    simplejson.dumps(newkids))
+
     def test_POST_mkdir_immutable(self):
         (newkids, caps) = self._create_immutable_children()
         d = self.POST2(self.public_url +
hunk ./src/allmydata/test/test_web.py 2924
         d.addCallback(_after_mkdir)
         return d
 
+    def test_POST_mkdir_no_parentdir_noredirect_mdmf(self):
+        d = self.POST("/uri?t=mkdir&mutable-type=mdmf")
+        def _after_mkdir(res):
+            u = uri.from_string(res)
+            # Check that this is an MDMF writecap
+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
+        d.addCallback(_after_mkdir)
+        return d
+
+    def test_POST_mkdir_no_parentdir_noredirect_sdmf(self):
+        d = self.POST("/uri?t=mkdir&mutable-type=sdmf")
+        def _after_mkdir(res):
+            u = uri.from_string(res)
+            self.failUnlessIsInstance(u, uri.DirectoryURI)
+        d.addCallback(_after_mkdir)
+        return d
+
+    def test_POST_mkdir_no_parentdir_noredirect_bad_mutable_type(self):
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.POST,
+                                    "/uri?t=mkdir&mutable-type=foo")
+
     def test_POST_mkdir_no_parentdir_noredirect2(self):
         # make sure form-based arguments (as on the welcome page) still work
         d = self.POST("/uri", t="mkdir")
hunk ./src/allmydata/test/test_web.py 3584
         d.addCallback(_got_json)
         return d
 
+    def test_PUT_NEWFILEURL_bad_mutable_type(self):
+        new_contents = self.NEWFILE_CONTENTS * 300000
+        return self.shouldHTTPError("test bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.PUT, self.public_url + \
+                                    "/foo/foo.txt?mutable=true&mutable-type=foo",
+                                    new_contents)
+
     def test_PUT_NEWFILEURL_uri_replace(self):
         contents, n, new_uri = self.makefile(8)
         d = self.PUT(self.public_url + "/foo/bar.txt?t=uri", new_uri)
hunk ./src/allmydata/test/test_web.py 3701
         d.addCallback(self.failUnlessIsEmptyJSON)
         return d
 
+    def test_PUT_mkdir_mdmf(self):
+        d = self.PUT("/uri?t=mkdir&mutable-type=mdmf", "")
+        def _got(res):
+            u = uri.from_string(res)
+            # Check that this is an MDMF writecap
+            self.failUnlessIsInstance(u, uri.MDMFDirectoryURI)
+        d.addCallback(_got)
+        return d
+
+    def test_PUT_mkdir_sdmf(self):
+        d = self.PUT("/uri?t=mkdir&mutable-type=sdmf", "")
+        def _got(res):
+            u = uri.from_string(res)
+            self.failUnlessIsInstance(u, uri.DirectoryURI)
+        d.addCallback(_got)
+        return d
+
+    def test_PUT_mkdir_bad_mutable_type(self):
+        return self.shouldHTTPError("bad mutable type",
+                                    400, "Bad Request", "Unknown type: foo",
+                                    self.PUT, "/uri?t=mkdir&mutable-type=foo",
+                                    "")
+
     def test_POST_check(self):
         d = self.POST(self.public_url + "/foo", t="check", name="bar.txt")
         def _done(res):
}
[scripts: teach CLI to make MDMF directories
Kevan Carstensen <kevan@isnotajoke.com>**20110617180137
 Ignore-this: 5dc968bd22278033b534a561f230a4f
] {
hunk ./src/allmydata/scripts/cli.py 53
 
 
 class MakeDirectoryOptions(VDriveOptions):
+    optParameters = [
+        ("mutable-type", None, False, "Create a mutable file in the given format. Valid formats are 'sdmf' for SDMF and 'mdmf' for MDMF"),
+        ]
+
     def parseArgs(self, where=""):
         self.where = argv_to_unicode(where)
 
merger 0.0 (
hunk ./src/allmydata/scripts/cli.py 59
+
+    def getSynopsis(self):
+        return "Usage:  %s mkdir [options] [REMOTE_DIR]" % (self.command_name,)
+
hunk ./src/allmydata/scripts/cli.py 59
+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
)
hunk ./src/allmydata/scripts/tahoe_mkdir.py 25
     if not where or not path:
         # create a new unlinked directory
         url = nodeurl + "uri?t=mkdir"
+        if options["mutable-type"]:
+            url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
         resp = do_http("POST", url)
         rc = check_http_error(resp, stderr)
         if rc:
hunk ./src/allmydata/scripts/tahoe_mkdir.py 42
     # path must be "/".join([s.encode("utf-8") for s in segments])
     url = nodeurl + "uri/%s/%s?t=mkdir" % (urllib.quote(rootcap),
                                            urllib.quote(path))
+    if options['mutable-type']:
+        url += "&mutable-type=%s" % urllib.quote(options['mutable-type'])
+
     resp = do_http("POST", url)
     check_http_error(resp, stderr)
     new_uri = resp.read().strip()
}
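
(Illustrative transcript, with cap strings elided and the alias and
directory names arbitrary; the new option threads straight through to
the webapi's mutable-type= argument:)

    % tahoe mkdir --mutable-type=mdmf tahoe:bar
    URI:DIR2-MDMF:...
    % tahoe mkdir --mutable-type=sdmf tahoe:foo
    URI:DIR2:...

Omitting --mutable-type preserves the old behavior: the option defaults
to a false value, so the query parameter is never added.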
[test/test_cli: test CLI's MDMF creation powers
Kevan Carstensen <kevan@isnotajoke.com>**20110617180209
 Ignore-this: d4b493b266446b2be3ce3c5f2505577d
] hunk ./src/allmydata/test/test_cli.py 3156
 
         return d
 
+    def test_mkdir_mutable_type(self):
+        self.basedir = os.path.dirname(self.mktemp())
+        self.set_up_grid()
+        d = self.do_cli("create-alias", "tahoe")
+        d.addCallback(lambda ignored:
+            self.do_cli("mkdir", "--mutable-type=sdmf", "tahoe:foo"))
+        def _check((rc, out, err), st):
+            self.failUnlessReallyEqual(rc, 0)
+            self.failUnlessReallyEqual(err, "")
+            self.failUnlessIn(st, out)
+            return out
+        def _stash_dircap(cap):
+            self._dircap = cap
+            u = uri.from_string(cap)
+            fn_uri = u.get_filenode_cap()
+            self._filecap = fn_uri.to_string()
+        d.addCallback(_check, "URI:DIR2")
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:foo"))
+        d.addCallback(_check, "URI:DIR2")
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", self._filecap))
+        d.addCallback(_check, '"mutable-type": "sdmf"')
+        d.addCallback(lambda ignored:
+            self.do_cli("mkdir", "--mutable-type=mdmf", "tahoe:bar"))
+        d.addCallback(_check, "URI:DIR2-MDMF")
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", "tahoe:bar"))
+        d.addCallback(_check, "URI:DIR2-MDMF")
+        d.addCallback(lambda ignored:
+            self.do_cli("ls", "--json", self._filecap))
+        d.addCallback(_check, '"mutable-type": "mdmf"')
+        return d
+
+    def test_mkdir_mutable_type_unlinked(self):
+        self.basedir = os.path.dirname(self.mktemp())
+        self.set_up_grid()
+        d = self.do_cli("mkdir", "--mutable-type=sdmf")
+        def _check((rc, out, err), st):
+            self.failUnlessReallyEqual(rc, 0)
+            self.failUnlessReallyEqual(err, "")
+            self.failUnlessIn(st, out)
+            return out
+        d.addCallback(_check, "URI:DIR2")
+        def _stash_dircap(cap):
+            self._dircap = cap
+            # Now we're going to feed the cap into uri.from_string...
+            u = uri.from_string(cap)
+            # ...grab the underlying filenode uri.
+            fn_uri = u.get_filenode_cap()
+            # ...and stash that.
+            self._filecap = fn_uri.to_string()
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda res: self.do_cli("ls", "--json",
+                                              self._filecap))
+        d.addCallback(_check, '"mutable-type": "sdmf"')
+        d.addCallback(lambda res: self.do_cli("mkdir", "--mutable-type=mdmf"))
+        d.addCallback(_check, "URI:DIR2-MDMF")
+        d.addCallback(_stash_dircap)
+        d.addCallback(lambda res: self.do_cli("ls", "--json",
+                                              self._filecap))
+        d.addCallback(_check, '"mutable-type": "mdmf"')
+        return d
+
+    def test_mkdir_bad_mutable_type(self):
+        o = cli.MakeDirectoryOptions()
+        self.failUnlessRaises(usage.UsageError,
+                              o.parseOptions,
+                              ["--mutable", "--mutable-type=ldmf"])
+
     def test_mkdir_unicode(self):
         self.basedir = os.path.dirname(self.mktemp())
         self.set_up_grid()
[mutable/layout: Add tighter bounds on the sizes of certain share components
Kevan Carstensen <kevan@isnotajoke.com>**20110727162855
 Ignore-this: a75cb54bcccb6ff78d1112c4c87eed1d
] {
hunk ./src/allmydata/mutable/layout.py 2
 
-import struct
+import struct, math
 from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError
 from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \
                                  MDMF_VERSION, IMutableSlotWriter
hunk ./src/allmydata/mutable/layout.py 547
 MDMFSIGNABLEHEADER = ">BQ32sBBQQ"
 MDMFOFFSETS = ">QQQQQQQQ"
 MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS)
-# XXX Fix this.
-PRIVATE_KEY_SIZE = 2000
-SIGNATURE_SIZE = 10000
-VERIFICATION_KEY_SIZE = 2000
-# We know we won't ever have more than 256 shares.
-# XXX: This, too, can be 
-SHARE_HASH_CHAIN_SIZE = HASH_SIZE * 256
+
+PRIVATE_KEY_SIZE = 1220
+SIGNATURE_SIZE = 260
+VERIFICATION_KEY_SIZE = 292
+# We know we won't have more than 256 shares, so a proof chain never
+# needs more than ceil(lg 256) == 8 share-hash nodes to validate a
+# share. Each node is packed as ">H32s": a 2-byte share number plus a
+# HASH_SIZE-byte hash.
+SHARE_HASH_CHAIN_SIZE = (2 + HASH_SIZE) * int(math.ceil(math.log(256, 2)))
 
 class MDMFSlotWriteProxy:
     implements(IMutableSlotWriter)
hunk ./src/allmydata/mutable/layout.py 1078
 
         self._offsets['verification_key_end'] = \
             self._offsets['verification_key'] + len(verification_key)
+        assert self._offsets['verification_key_end'] <= self._offsets['share_data']
         self._writevs.append(tuple([self._offsets['verification_key'],
                             verification_key]))
 
}
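
(A sanity check on the new bound, separate from the patch itself: the
share hash chain stores (share number, hash) nodes packed as ">H32s",
i.e. 2 + HASH_SIZE bytes each, and a hash tree over at most 256 shares
needs no more than ceil(lg 256) == 8 such nodes in any proof:)

    import math
    HASH_SIZE = 32
    nodes = int(math.ceil(math.log(256, 2)))   # == 8
    print (2 + HASH_SIZE) * nodes              # == 272, SHARE_HASH_CHAIN_SIZE

The RSA-related constants (PRIVATE_KEY_SIZE, SIGNATURE_SIZE,
VERIFICATION_KEY_SIZE) are likewise fixed upper bounds, with a little
slack, on the serialized sizes of the 2048-bit keys and signatures that
mutable files use.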
[test/test_mutable: write tests that see what happens when there are 255 shares, amend existing tests to also test for this condition.
Kevan Carstensen <kevan@isnotajoke.com>**20110727162955
 Ignore-this: 282ce3d8bffd41561b0820d4fd721448
] {
hunk ./src/allmydata/test/test_mutable.py 271
         d.addCallback(_created)
         return d
 
+
+    def test_max_shares(self):
+        self.nodemaker.default_encoding_parameters['n'] = 255
+        d = self.nodemaker.create_mutable_file(version=SDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
+            sb = self.nodemaker.storage_broker
+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
+                              in sb.get_all_serverids()])
+            self.failUnlessEqual(num_shares, 255)
+            self._node = n
+            return n
+        d.addCallback(_created)
+        # Now we upload some contents
+        d.addCallback(lambda n:
+            n.overwrite(MutableData("contents" * 50000)))
+        # ...then download contents
+        d.addCallback(lambda ignored:
+            self._node.download_best_version())
+        # ...and check to make sure everything went okay.
+        d.addCallback(lambda contents:
+            self.failUnlessEqual("contents" * 50000, contents))
+        return d
+
+    def test_max_shares_mdmf(self):
+        # Test how files behave when there are 255 shares.
+        self.nodemaker.default_encoding_parameters['n'] = 255
+        d = self.nodemaker.create_mutable_file(version=MDMF_VERSION)
+        def _created(n):
+            self.failUnless(isinstance(n, MutableFileNode))
+            self.failUnlessEqual(n.get_storage_index(), n._storage_index)
+            sb = self.nodemaker.storage_broker
+            num_shares = sum([len(self._storage._peers[x].keys()) for x \
+                              in sb.get_all_serverids()])
+            self.failUnlessEqual(num_shares, 255)
+            self._node = n
+            return n
+        d.addCallback(_created)
+        d.addCallback(lambda n:
+            n.overwrite(MutableData("contents" * 50000)))
+        d.addCallback(lambda ignored:
+            self._node.download_best_version())
+        d.addCallback(lambda contents:
+            self.failUnlessEqual(contents, "contents" * 50000))
+        return d
 
     def test_mdmf_filenode_cap(self):
         # Test that an MDMF filenode, once created, returns an MDMF URI.
hunk ./src/allmydata/test/test_mutable.py 3250
             self.mdmf_node = n1
             self.sdmf_node = n2
         dl.addCallback(_then)
-        return dl
+        # Make SDMF and MDMF mutable file nodes that have 255 shares.
+        def _make_max_shares(ign):
+            self.nm.default_encoding_parameters['n'] = 255
+            self.nm.default_encoding_parameters['k'] = 127
+            d1 = self.nm.create_mutable_file(MutableData(self.data),
+                                             version=MDMF_VERSION)
+            d2 = self.nm.create_mutable_file(MutableData(self.small_data))
+            return gatherResults([d1, d2])
+        dl.addCallback(_make_max_shares)
+        def _stash((n1, n2)):
+            assert isinstance(n1, MutableFileNode)
+            assert isinstance(n2, MutableFileNode)
 
hunk ./src/allmydata/test/test_mutable.py 3264
+            self.mdmf_max_shares_node = n1
+            self.sdmf_max_shares_node = n2
+        dl.addCallback(_stash)
+        return dl
 
     def test_append(self):
         # We should be able to append data to the middle of a mutable
hunk ./src/allmydata/test/test_mutable.py 3273
         # file and get what we expect.
         new_data = self.data + "appended"
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData("appended"), len(self.data)))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        dl = []
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData("appended"), len(self.data)))
+            # bind node with a default argument so that each iteration
+            # downloads from its own node.
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
+            dl.append(d)
+        # wait on both chains; returning only the last Deferred would
+        # leave the first node's assertions unwaited-for.
+        d = gatherResults(dl)
         return d
hunk ./src/allmydata/test/test_mutable.py 3282
-    test_append.timeout = 15
-
 
     def test_replace(self):
         # We should be able to replace data in the middle of a mutable
hunk ./src/allmydata/test/test_mutable.py 3289
         new_data = self.data[:100]
         new_data += "appended"
         new_data += self.data[108:]
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData("appended"), 100))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData("appended"), 100))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
         return d
 
     def test_replace_beginning(self):
hunk ./src/allmydata/test/test_mutable.py 3304
         # without truncating the file
         B = "beginning"
         new_data = B + self.data[len(B):]
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv: mv.update(MutableData(B), 0))
-        d.addCallback(lambda ignored: self.mdmf_node.download_best_version())
-        d.addCallback(lambda results: self.failUnlessEqual(results, new_data))
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv: mv.update(MutableData(B), 0))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results: self.failUnlessEqual(results, new_data))
         return d
 
     def test_replace_segstart1(self):
hunk ./src/allmydata/test/test_mutable.py 3316
         offset = 128*1024+1
         new_data = "NNNN"
         expected = self.data[:offset]+new_data+self.data[offset+4:]
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData(new_data), offset))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        def _check(results):
-            if results != expected:
-                print
-                print "got: %s ... %s" % (results[:20], results[-20:])
-                print "exp: %s ... %s" % (expected[:20], expected[-20:])
-                self.fail("results != expected")
-        d.addCallback(_check)
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData(new_data), offset))
+            # close over 'node'.
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            def _check(results):
+                if results != expected:
+                    print
+                    print "got: %s ... %s" % (results[:20], results[-20:])
+                    print "exp: %s ... %s" % (expected[:20], expected[-20:])
+                    self.fail("results != expected")
+            d.addCallback(_check)
         return d
 
     def _check_differences(self, got, expected):
hunk ./src/allmydata/test/test_mutable.py 3386
             d.addCallback(self._check_differences, expected)
         return d
 
+    def test_replace_locations_max_shares(self):
+        # exercise fencepost conditions
+        expected = self.data
+        SEGSIZE = 128*1024
+        suspects = range(SEGSIZE-3, SEGSIZE+1)+range(2*SEGSIZE-3, 2*SEGSIZE+1)
+        letters = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
+        d = defer.succeed(None)
+        for offset in suspects:
+            new_data = letters.next()*2 # "AA", then "BB", etc.
+            expected = expected[:offset]+new_data+expected[offset+2:]
+            d.addCallback(lambda ign:
+                          self.mdmf_max_shares_node.get_best_mutable_version())
+            def _modify(mv, offset=offset, new_data=new_data):
+                # close over 'offset' and 'new_data'.
+                md = MutableData(new_data)
+                return mv.update(md, offset)
+            d.addCallback(_modify)
+            d.addCallback(lambda ignored:
+                          self.mdmf_max_shares_node.download_best_version())
+            d.addCallback(self._check_differences, expected)
+        return d
 
     def test_replace_and_extend(self):
         # We should be able to replace data in the middle of a mutable
hunk ./src/allmydata/test/test_mutable.py 3413
         # file and extend that mutable file and get what we expect.
         new_data = self.data[:100]
         new_data += "modified " * 100000
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData("modified " * 100000), 100))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData("modified " * 100000), 100))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
         return d
 
 
hunk ./src/allmydata/test/test_mutable.py 3435
         # power-of-two boundary.
         segment = "a" * DEFAULT_MAX_SEGMENT_SIZE
         new_data = self.data + (segment * 2)
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData(segment * 2), len(self.data)))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData(segment * 2), len(self.data)))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
         return d
     test_append_power_of_two.timeout = 15
 
hunk ./src/allmydata/test/test_mutable.py 3450
     def test_update_sdmf(self):
         # Running update on a single-segment file should still work.
         new_data = self.small_data + "appended"
-        d = self.sdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData("appended"), len(self.small_data)))
-        d.addCallback(lambda ignored:
-            self.sdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        for node in (self.sdmf_node, self.sdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData("appended"), len(self.small_data)))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
         return d
 
     def test_replace_in_last_segment(self):
hunk ./src/allmydata/test/test_mutable.py 3467
         new_data = self.data[:replace_offset] + "replaced"
         rest_offset = replace_offset + len("replaced")
         new_data += self.data[rest_offset:]
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData("replaced"), replace_offset))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData("replaced"), replace_offset))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
         return d
 
 
hunk ./src/allmydata/test/test_mutable.py 3486
         new_data += "replaced"
         rest_offset = len(new_data)
         new_data += self.data[rest_offset:]
-        d = self.mdmf_node.get_best_mutable_version()
-        d.addCallback(lambda mv:
-            mv.update(MutableData((2 * new_segment) + "replaced"),
-                      replace_offset))
-        d.addCallback(lambda ignored:
-            self.mdmf_node.download_best_version())
-        d.addCallback(lambda results:
-            self.failUnlessEqual(results, new_data))
+        for node in (self.mdmf_node, self.mdmf_max_shares_node):
+            d = node.get_best_mutable_version()
+            d.addCallback(lambda mv:
+                mv.update(MutableData((2 * new_segment) + "replaced"),
+                          replace_offset))
+            d.addCallback(lambda ignored, node=node:
+                node.download_best_version())
+            d.addCallback(lambda results:
+                self.failUnlessEqual(results, new_data))
         return d
}
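
For reference, the partial-update pattern these tests exercise looks like this from calling code. This is a minimal sketch, not part of any patch in this bundle; it assumes `node` is a mutable file node obtained elsewhere (e.g. from the nodemaker, as in the tests above), and that MutableData is importable from allmydata.mutable.publish, as introduced earlier in this series.

    from allmydata.mutable.publish import MutableData

    def splice(node, offset, new_bytes):
        # Fetch the best recoverable version, write new_bytes in-place
        # at the given offset (extending the file if the write runs
        # past the current end), then read the whole file back.
        d = node.get_best_mutable_version()
        d.addCallback(lambda mv: mv.update(MutableData(new_bytes), offset))
        d.addCallback(lambda ign: node.download_best_version())
        return d
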
[test/test_mutable: add interoperability tests
Kevan Carstensen <kevan@isnotajoke.com>**20110728171601
 Ignore-this: e7afc73f26b87df32766f200d51c1069
] {
hunk ./src/allmydata/test/test_mutable.py 2
 
-import os, re
+import os, re, base64
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.internet import defer, reactor
hunk ./src/allmydata/test/test_mutable.py 10
 from zope.interface import implements
 from allmydata import uri, client
 from allmydata.nodemaker import NodeMaker
-from allmydata.util import base32, consumer
+from allmydata.util import base32, consumer, fileutil
 from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \
      ssk_pubkey_fingerprint_hash
 from allmydata.util.deferredutil import gatherResults
hunk ./src/allmydata/test/test_mutable.py 22
 from foolscap.api import eventually, fireEventually
 from foolscap.logging import log
 from allmydata.storage_client import StorageFarmBroker
+from allmydata.storage.common import storage_index_to_dir, si_b2a
 
 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
 from allmydata.mutable.common import ResponseCache, \
hunk ./src/allmydata/test/test_mutable.py 3497
             d.addCallback(lambda results:
                 self.failUnlessEqual(results, new_data))
         return d
+
+class Interoperability(GridTestMixin, unittest.TestCase, testutil.ShouldFailMixin):
+    sdmf_old_shares = {}
+    sdmf_old_shares[0] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAQ/EX4eC/1+hGOQ/h4EiKUkqxdsfzdcPlDvd11SGWZ0VHsUclZChTzuBAU2zLTXm+cG8IFhO50ly6Ey/DB44NtMKVaVzO0nU8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[1] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAhgcAb/adrQFrhlrRNoRpvjDuxmFebA4F0qCyqWssm61AAP7FHJWQoU87gQFNsy015vnBvCBYTudJcuhMvwweODbTD8Rfh4L/X6EY5D+HgSIpSSrF2x/N1w+UO93XVIZZnRUeePDXEwhqYDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[2] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewASoSCFpVj4utEE+eVFM146xfgC6DX39GaQ2zT3YKsWX3GiLwKtGffwqV7IlZIcBEVqMfTXSTZsY+dZm1MxxCZH0Zd33VY0yggDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[3] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcABOOLy8EETxh7h7/z9d62EiPu9CNpRrCOLxUhn+JUS+DuAAd8jdiCodW233N1acXhZGnulDKR3hiNsMdEIsijRPemewARoi8CrRn38KleyJWSHARFajH010k2bGPnWZtTMcQmR9GhIIWlWPi60QT55UUzXjrF+ALoNff0ZpDbNPdgqxZfcSNSplrHqtsDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[4] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwAUMA7/aVz7Mb1em0eks+biC8ZuVUhuAEkTVOAF4YulIjE8JlfW0dS1XKk62u0586QxiN38NTsluUDx8EAPTL66yRsfb1f3rRIDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[5] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAoIM8M4XulprmLd4gGMobS2Bv9CmwB5LpK/ySHE1QWjdwATPCZX1tHUtVypOtrtOfOkMYjd/DU7JblA8fBAD0y+uskwDv9pXPsxvV6bR6Sz5uILxm5VSG4ASRNU4AXhi6UiMUKZHBmcmEgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[6] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAWDSFSPvKzcFzRcuRlVgKUf0HBce1MCF8SwpUbPPEyfVJty4xLZ7DvNU/Eh/R6BarsVAagVXdp+GtEu0+fok7nilT4LchmHo8DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[7] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgACtTh7+7gs/l5w1lOkgbF6w7rkXLNslK7L2KYF4SPFLUcAA6dlE140Fc7FgB77PeM5Phv+bypQEYtyfLQHxd+OxlG3AAlyHZU7RfTJjbHu1gjabWZsTu+7nAeRVG6/ZSd4iMQ1ZgAVbcuMS2ew7zVPxIf0egWq7FQGoFV3afhrRLtPn6JO54oNIVI+8rNwXNFy5GVWApR/QcFx7UwIXxLClRs88TJ9UtLnNF4/mM0DE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[8] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAYmqKY7A9vQChuYa17fYSyKerIb3682jxiIneQvCMWCK5WcuI4PMeIsUAj8yxdxHvV+a9vtSCEsDVvymrrooDKX1GK98t37yoDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_shares[9] = "VGFob2UgbXV0YWJsZSBjb250YWluZXIgdjEKdQlEA47ESLbTdKdpLJXCpBxd5OH239tl5hvAiz1dvGdE5rIOpf8cbfxbPcwNF+Y5dM92uBVbmV6KAAAAAAAAB/wAAAAAAAAJ0AAAAAFOWSw7jSx7WXzaMpdleJYXwYsRCV82jNA5oex9m2YhXSnb2POh+vvC1LE1NAfRc9GOb2zQG84Xdsx1Jub2brEeKkyt0sRIttN0p2kslcKkHF3k4fbf22XmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABamJprL6ecrsOoFKdrXUmWveLq8nzEGDOjFnyK9detI3noX3uyK2MwSnFdAfyN0tuAwoAAAAAAAAAFQAAAAAAAAAVAAABjwAAAo8AAAMXAAADNwAAAAAAAAM+AAAAAAAAB/wwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQC1IkainlJF12IBXBQdpRK1zXB7a26vuEYqRmQM09YjC6sQjCs0F2ICk8n9m/2Kw4l16eIEboB2Au9pODCE+u/dEAakEFh4qidTMn61rbGUbsLK8xzuWNW22ezzz9/nPia0HDrulXt51/FYtfnnAuD1RJGXJv/8tDllE9FL/18TzlH4WuB6Fp8FTgv7QdbZAfWJHDGFIpVCJr1XxOCsSZNFJIqGwZnD2lsChiWw5OJDbKd8otqN1hIbfHyMyfMOJ/BzRzvZXaUt4Dv5nf93EmQDWClxShRwpuX/NkZ5B2K9OFonFTbOCexm/MjMAdCBqebKKaiHFkiknUCn9eJQpZ5bAgERgV50VKj+AVTDfgTpqfO2vfo4wrufi6ZBb8QV7hllhUFBjYogQ9C96dnS7skv0s+cqFuUjwMILr5/rsbEmEMGvl0T0ytyAbtlXuowEFVj/YORNknM4yjY72YUtEPTlMpk0Cis7aIgTvu5qWMPER26PMApZuRqiwRsGIkaJIvOVOTHHjFYe3/YzdMkc7OZtqRMfQLtwVl2/zKQQV8b/a9vaT6q3mRLRd4P3esaAFe/+7sR/t+9tmB+a8kxtKM6kmaVQJMbXJZ4aoHGfeLX0m35Rcvu2Bmph7QfSDjk/eaE3q55zYSoGWShmlhlw4Kwg84sMuhmcVhLvo0LovR8bKmbdgABUSzNKiMx0E91q51/WH6ASL0fDEOLef9oxuyBX5F5cpoABojmWkDX3k3FKfgNHIeptE3lxB8HHzxDfSD250psyfNCAAwGsKbMxbmI2NpdTozZ3SICrySwgGkatA1gsDOJmOnTzgAXVnLiODzHiLFAI/MsXcR71fmvb7UghLA1b8pq66KAyl+aopjsD29AKG5hrXt9hLIp6shvfrzaPGIid5C8IxYIrjgBj1YohGgDE0Wua7Lx6Bnad5n91qmHAnwSEJE5YIhQM634omd6cq9Wk4seJCUIn+ucoknrpxp0IR9QMxpKSMRHRUg2K8ZegnY3YqFunRZKCfsq9ufQEKgjZN12AFqi551KPBdn4/3V5HK6xTv0P4robSsE/BvuIfByvRf/W7ZrDx+CFC4EEcsBOACOZCrkhhqd5TkYKbe9RA+vs56+9N5qZGurkxcoKviiyEncxvTuShD65DK/6x6kMDMgQv/EdZDI3x9GtHTnRBYXwDGnPJ19w+q2zC3e2XarbxTGYQIPEC5mYx0gAA0sbjf018NGfwBhl6SB54iGsa8uLvR3jHv6OSRJgwxL6j7P0Ts4Hv2EtO12P0Lv21pwi3JC1O/WviSrKCvrQD5lMHL9Uym3hwFi2zu0mqwZvxOAbGy7kfOPXkLYKOHTZLthzKj3PsdjeceWBfYIvPGKYcd6wDr36d1aXSYS4IWeApTS2AQ2lu0DUcgSefAvsA8NkgOklvJY1cjTMSg6j6cxQo48Bvl8RAWGLbr4h2S/8KwDGxwLsSv0Gop/gnFc3GzCsmL0EkEyHHWkCA8YRXCghfW80KLDV495ff7yF5oiwK56GniqowZ3RG9Jxp5MXoJQgsLV1VMQFMAmsY69yz8eoxRH3wl9L0dMyndLulhWWzNwPMQ2I0yAWdzA/pksVmwTJTFenB3MHCiWc5rEwJ3yofe6NZZnZQrYyL9r1TNnVwfTwRUiykPiLSk4x9Mi6DX7RamDAxc8u3gDVfjPsTOTagBOEGUWlGAL54KE/E6sgCQ5DEAt12chk8AxbjBFLPgV+/idrzS0lZHOL+IVBI9D0i3Bq1yZcSIqcjZB0M3IbxbPm4gLAYOWEiTUN2ecsEHHg9nt6rhgffVoqSbCCFPbpC0xf7WOC3+BQORIZECOCC7cUAciXq3xn+GuxpFE40RWRJeKAK7bBQ21X89ABIXlQFkFddZ9kRvlZ2Pnl0oeF+2pjnZu0Yc2czNfZEQF2P7BKIdLrgMgxG89snxAY8qAYTCKyQw6xTG87wkjDcpy1wzsZLP3WsOuO7cAm7b27xU0jRKq8Cw4d1hDoyRG+RdS53F8RFJzVMaNNYgxU2tfRwUvXpTRXiOheeRVvh25+YGVnjakUXjx/dSDnOw4ETHGHD+7styDkeSfc3BdSZxswzc6OehgMI+xsCxeeRym15QUm9hxvg8X7Bfz/0WulgFwgzrm11TVynZYOmvyHpiZKoqQyQyKahIrfhwuchCr7lMsZ4a+umIkNkKxCLZnI+T7jd+eGFMgKItjz3kTTxRl3IhaJG3LbPmwRUJynMxQKdMi4Uf0qy0U7+i8hIJ9m50QXc+3tw2bwDSbx22XYJ9Wf14gxx5G5SPTb1JVCbhe4fxNt91xIxCow2zk62tzbYfRe6dfmDmgYHkv2PIEtMJZK8iKLDjFfu2ZUxsKT2A5g1q17og6o9MeXeuFS3mzJXJYFQZd+3UzlFR9qwkFkby9mg5y4XSeMvRLOHPt/H/r5SpEqBE6a9MadZYt61FBV152CUEzd43ihXtrAa0XH9HdsiySBcWI1SpM3mv9rRP0DiLjMUzHw/K1D8TE2f07zW4t/9kvE11tFj/NpICixQAAAAA="
+    sdmf_old_cap = "URI:SSK:gmjgofw6gan57gwpsow6gtrz3e:5adm6fayxmu3e4lkmfvt6lkkfix34ai2wop2ioqr4bgvvhiol3kq"
+    sdmf_old_contents = "This is a test file.\n"
+    def copy_sdmf_shares(self):
+        # Short-circuit the upload process by writing share data directly.
+        servernums = self.g.servers_by_number.keys()
+        assert len(servernums) == 10
+
+        assignments = zip(self.sdmf_old_shares.keys(), servernums)
+        # Get the storage index.
+        cap = uri.from_string(self.sdmf_old_cap)
+        si = cap.get_storage_index()
+
+        # Now execute each assignment by writing the share data to storage.
+        for (share, servernum) in assignments:
+            sharedata = base64.b64decode(self.sdmf_old_shares[share])
+            storedir = self.get_serverdir(servernum)
+            storage_path = os.path.join(storedir, "shares",
+                                        storage_index_to_dir(si))
+            fileutil.make_dirs(storage_path)
+            fileutil.write(os.path.join(storage_path, "%d" % share),
+                           sharedata)
+        # ...and verify that the shares are there.
+        shares = self.find_uri_shares(self.sdmf_old_cap)
+        assert len(shares) == 10
+
+    def test_new_downloader_can_read_old_shares(self):
+        self.basedir = "mutable/Interoperability/new_downloader_can_read_old_shares"
+        self.set_up_grid()
+        self.copy_sdmf_shares()
+        nm = self.g.clients[0].nodemaker
+        n = nm.create_from_cap(self.sdmf_old_cap)
+        d = n.download_best_version()
+        d.addCallback(self.failUnlessEqual, self.sdmf_old_contents)
+        return d
}
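
The sdmf_old_shares fixtures above are base64-encoded SDMF share containers captured from an older version of Tahoe-LAFS. A quick standalone sanity check (a sketch, not part of the patch) is to decode one and confirm that it begins with the v1 container magic:

    import base64
    from allmydata.test.test_mutable import Interoperability

    blob = base64.b64decode(Interoperability.sdmf_old_shares[0])
    # Each fixture decodes to a share whose container header starts
    # with the SDMF v1 magic string.
    assert blob.startswith("Tahoe mutable container v1\n")
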
[scripts/cli: resolve merge conflicts
Kevan Carstensen <kevan@isnotajoke.com>**20110728234015
 Ignore-this: 5c9d3ee5e4cac5d3c31779cdcf475c8c
] hunk ./src/allmydata/scripts/cli.py 59
 
     def parseArgs(self, where=""):
         self.where = argv_to_unicode(where)
+
+        if self['mutable-type'] and self['mutable-type'] not in ("sdmf", "mdmf"):
+            raise usage.UsageError("%s is an invalid format" % self['mutable-type'])
+
+    def getSynopsis(self):
+        return "Usage:  %s mkdir [options] [REMOTE_DIR]" % (self.command_name,)
+
     longdesc = """Create a new directory, either unlinked or as a subdirectory."""
 
 class AddAliasOptions(VDriveOptions):
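
The guard added to parseArgs above accepts only the two known mutable formats and turns anything else into a usage error before the command runs. A standalone sketch of the same check (a hypothetical helper, shown only to illustrate the behavior):

    from twisted.python import usage

    def check_mutable_type(value):
        # None (option not given) means "use the default"; any other
        # value must be one of the recognized format names.
        if value and value not in ("sdmf", "mdmf"):
            raise usage.UsageError("%s is an invalid format" % value)

    check_mutable_type("mdmf")  # accepted
    check_mutable_type(None)    # accepted: fall back to the default
    # check_mutable_type("ldmf") would raise usage.UsageError
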
[mutable/publish: clean up error handling.
Kevan Carstensen <kevan@isnotajoke.com>**20110730214113
 Ignore-this: e2480b276ed2abc3e9111bb02c8c74b7
] {
hunk ./src/allmydata/mutable/publish.py 640
 
         # If we make it to this point, we were successful in placing the
         # file.
-        return self._done(None)
+        return self._done()
 
 
     def push_segment(self, segnum):
hunk ./src/allmydata/mutable/publish.py 658
             self._current_segment += 1
         # XXX: I don't think we need to do addBoth here -- any errBacks
         # should be handled within push_segment.
-        d.addBoth(_increment_segnum)
-        d.addBoth(self._turn_barrier)
-        d.addBoth(self._push)
+        d.addCallback(_increment_segnum)
+        d.addCallback(self._turn_barrier)
+        d.addCallback(self._push)
+        d.addErrback(self._failure)
 
 
     def _turn_barrier(self, result):
hunk ./src/allmydata/mutable/publish.py 1133
         return
 
 
-    def _done(self, res):
+    def _done(self):
         if not self._running:
             return
         self._running = False
hunk ./src/allmydata/mutable/publish.py 1152
         hints['segsize'] = self.segment_size
         hints['k'] = self.required_shares
         self._node.set_downloader_hints(hints)
-        eventually(self.done_deferred.callback, res)
+        eventually(self.done_deferred.callback, None)
 
hunk ./src/allmydata/mutable/publish.py 1154
-    def _failure(self):
+    def _failure(self, f=None):
+        if f:
+            self._last_failure = f
 
         if not self.surprised:
             # We ran out of servers
}
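
The substance of this cleanup is the switch from addBoth to addCallback plus a trailing addErrback: addBoth runs its function for successes and failures alike, so a failure raised by push_segment would previously have been handed to _increment_segnum, whereas addCallback is skipped while the chain holds a failure, letting the error travel straight to _failure. A minimal, self-contained illustration of that Deferred behavior (generic Twisted, not Tahoe-specific):

    from twisted.internet import defer

    def boom(ign):
        raise RuntimeError("segment push failed")

    d = defer.succeed(None)
    d.addCallback(boom)
    # Skipped: addCallback does not run while the chain holds a failure.
    d.addCallback(lambda res: "never reached")
    # The failure lands here instead.
    d.addErrback(lambda f: f.getErrorMessage())
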
[mutable: address failure to publish when there are as many writers as k, add/fix tests for this
Kevan Carstensen <kevan@isnotajoke.com>**20110730220208
 Ignore-this: 51900a9abbfca54137d2855fa7a2f81b
] {
hunk ./src/allmydata/mutable/publish.py 112
         self._log_number = num
         self._running = True
         self._first_write_error = None
+        self._last_failure = None
 
         self._status = PublishStatus()
         self._status.set_storage_index(self._storage_index)
hunk ./src/allmydata/mutable/publish.py 627
         # Can we still successfully publish this file?
         # TODO: Keep track of outstanding queries before aborting the
         #       process.
-        if len(self.writers) <= self.required_shares or self.surprised:
+        if len(self.writers) < self.required_shares or self.surprised:
             return self._failure()
 
         # Figure out what we need to do next. Each of these needs to
hunk ./src/allmydata/mutable/publish.py 1161
 
         if not self.surprised:
             # We ran out of servers
-            self.log("Publish ran out of good servers, "
-                     "last failure was: %s" % str(self._last_failure))
-            e = NotEnoughServersError("Ran out of non-bad servers, "
-                                      "last failure was %s" %
-                                      str(self._last_failure))
+            msg = "Publish ran out of good servers"
+            if self._last_failure:
+                msg += ", last failure was: %s" % str(self._last_failure)
+            self.log(msg)
+            e = NotEnoughServersError(msg)
+
         else:
             # We ran into shares that we didn't recognize, which means
             # that we need to return an UncoordinatedWriteError.
hunk ./src/allmydata/test/test_mutable.py 272
         d.addCallback(_created)
         return d
 
+    def test_single_share(self):
+        # Make sure that we tolerate publishing a single share.
+        self.nodemaker.default_encoding_parameters['k'] = 1
+        self.nodemaker.default_encoding_parameters['happy'] = 1
+        self.nodemaker.default_encoding_parameters['n'] = 1
+        d = defer.succeed(None)
+        for v in (SDMF_VERSION, MDMF_VERSION):
+            d.addCallback(lambda ignored:
+                self.nodemaker.create_mutable_file(version=v))
+            def _created(n):
+                self.failUnless(isinstance(n, MutableFileNode))
+                self._node = n
+                return n
+            d.addCallback(_created)
+            d.addCallback(lambda n:
+                n.overwrite(MutableData("Contents" * 50000)))
+            d.addCallback(lambda ignored:
+                self._node.download_best_version())
+            d.addCallback(lambda contents:
+                self.failUnlessEqual(contents, "Contents" * 50000))
+        return d
+
     def test_max_shares(self):
         self.nodemaker.default_encoding_parameters['n'] = 255
         d = self.nodemaker.create_mutable_file(version=SDMF_VERSION)
hunk ./src/allmydata/test/test_mutable.py 2688
 
         d = self.shouldFail(NotEnoughServersError,
                             "test_publish_all_servers_bad",
-                            "Ran out of non-bad servers",
+                            "ran out of good servers",
                             nm.create_mutable_file, MutableData("contents"))
         return d
 
}
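
The one-character fix above (<= becomes <) is a fencepost correction: k surviving writers are exactly enough to place the k shares needed for recovery, so the publish should abort only when fewer than k remain. Spelled out for the test_single_share case (k = happy = n = 1), as a sketch:

    k = 1                          # required_shares
    writers = ["w0"]               # exactly k writers survive
    assert len(writers) <= k       # old check: would have aborted here
    assert not (len(writers) < k)  # new check: the publish proceeds
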
[test/test_cli: rework a tahoe cp test that relied on an old webapi error message
Kevan Carstensen <kevan@isnotajoke.com>**20110730220300
 Ignore-this: e9c37ad8f4dd1f5bf362fa9227e302a1
] hunk ./src/allmydata/test/test_cli.py 2150
             self.do_cli("cp", replacement_file_path, "tahoe:test_file.txt"))
         def _check_error_message((rc, out, err)):
             self.failUnlessEqual(rc, 1)
-            self.failUnlessIn("need write capability to publish", err)
+            self.failUnlessIn("replace or update requested with read-only cap", err)
         d.addCallback(_check_error_message)
         # Make extra sure that that didn't work.
         d.addCallback(lambda ignored:

Context:

[test_cli.py: use to_str on fields loaded using simplejson.loads in new tests. refs #1304
david-sarah@jacaranda.org**20110730032521
 Ignore-this: d1d6dfaefd1b4e733181bf127c79c00b
] 
[cli: make 'tahoe cp' overwrite mutable files in-place
Kevan Carstensen <kevan@isnotajoke.com>**20110729202039
 Ignore-this: b2ad21a19439722f05c49bfd35b01855
] 
[SFTP: write an error message to standard error for unrecognized shell commands. Change the existing message for shell sessions to be written to standard error, and refactor some duplicated code. Also change the lines of the error messages to end in CRLF, and take into account Kevan's review comments. fixes #1442, #1446
david-sarah@jacaranda.org**20110729233102
 Ignore-this: d2f2bb4664f25007d1602bf7333e2cdd
] 
[src/allmydata/scripts/cli.py: fix pyflakes warning.
david-sarah@jacaranda.org**20110728021402
 Ignore-this: 94050140ddb99865295973f49927c509
] 
[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
david-sarah@jacaranda.org**20110724225440
 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
] 
[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
david-sarah@jacaranda.org**20110629185356
 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
] 
[docs/man/tahoe.1: add man page. fixes #1420
david-sarah@jacaranda.org**20110724171728
 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
] 
[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
david-sarah@jacaranda.org**20110721234941
 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
] 
[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
david-sarah@jacaranda.org**20110722000320
 Ignore-this: 55cd558b791526113db3f83c00ec328a
] 
[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
david-sarah@jacaranda.org**20110721233658
 Ignore-this: 81b41745477163c9b39c0b59db91cc62
] 
[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
david-sarah@jacaranda.org**20110722035402
 Ignore-this: 5d03f544c4154f088e26c7107494bf39
] 
[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
david-sarah@jacaranda.org**20110722024907
 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
] 
[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
david-sarah@jacaranda.org**20110718005949
 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
] 
[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
david-sarah@jacaranda.org**20110717194315
 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
] 
[README.txt: say that quickstart.rst is in the docs directory.
david-sarah@jacaranda.org**20110717192400
 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
] 
[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
zooko@zooko.com**20110717114226
 Ignore-this: df222120d41447ce4102616921626c82
 fixes #1383
] 
[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
david-sarah@jacaranda.org**20110716181813
 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
] 
[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
 Ignore-this: be7b7eb81c03700b739daa1027d72b35
] 
[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
zooko@zooko.com**20110712153229
 Ignore-this: 723c4f9e2211027c79d711715d972c5
 Also remove a couple of vestigial references to figleaf, which is long gone.
 fixes #1409 (remove contrib/fuse)
] 
[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
] 
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
] 
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
] 
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
] 
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
] 
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
] 
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
] 
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
] 
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently neither of the two authors (stercor, terrell), none of the three reviewers (warner, davidsarah, terrell), nor the one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
] 
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
] 
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
] 
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
] 
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
] 
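 A sketch of the reporting rule described above (hypothetical helper; the
 real logic is in storage/server.py's get_latencies): a percentile is
 reported only when there are enough observations to pick it out
 unambiguously, and None otherwise:

    from fractions import Fraction

    def percentile(samples, p):
        # The p-th percentile (e.g. p=0.99) needs at least as many samples
        # as the denominator of p in lowest terms: the 99th percentile is
        # meaningless with fewer than 100 observations, so report None.
        needed = Fraction(p).limit_denominator(1000).denominator
        if len(samples) < needed:
            return None
        return sorted(samples)[int(len(samples) * p)]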
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
] 
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
] 
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
] 
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
] 
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
] 
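 The shape of the fix, sketched (the try/except body and variable names
 here are illustrative): catching only the anticipated failures lets
 genuine bugs, like the AssertionError mentioned above, propagate loudly
 instead of being swallowed:

    try:
        check_requirement(req, vers_and_locs)
    except (PackagingError, ImportError), e:
        # Expected failures: a requirement that isn't satisfied, or a
        # package that can't be imported at all.
        errors.append("%s: %s" % (req, e))
    # Anything else (e.g. AssertionError) is a bug and now propagates.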
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
] 
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
] 
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
] 
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
] 
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
] 
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
] 
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
] 
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
] 
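 For context (a sketch, not the project's actual validation code): in
 k-of-N erasure coding, any k of the N shares suffice to reconstruct the
 file, so k may equal N but must never exceed it:

    def validate_encoding_parameters(k, n):
        # N shares are produced; any k of them reconstruct the data.
        if not (1 <= k <= n):
            raise ValueError("need 1 <= k <= N, got k=%d, N=%d" % (k, n))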
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
] 
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
] 
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
] 
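 A sketch of the technique (hypothetical helper name; the real change is
 in allmydata/__init__.py): when the import fails, keep the message plus
 the innermost traceback entry so the versions string shows where the
 failure happened:

    import sys, traceback

    def describe_import_error(e):
        # Must be called inside the `except ImportError` block, while
        # sys.exc_info() still refers to this exception.
        filename, lineno, func, src = traceback.extract_tb(sys.exc_info()[2])[-1]
        return "ImportError: %r from %s:%d (%s): %r" % (
            str(e), filename, lineno, func, src)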
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
] 
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
] 
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
] 
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
] 
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
] 
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
] 
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might have been spreading out the total number of shares ever
 requested, not just those requested simultaneously. (Note that SegmentFetcher
 is scoped to a single segment, so the effect doesn't last very long.)
] 
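 The bug in miniature (a sketch, assuming _shares_from_server maps a
 serverid to the set of that server's shares currently in flight):

    def _block_request_terminated(self, share):
        # Buggy version indexed the dict by share number, so the server's
        # entry was never cleaned up and its apparent load only ever grew:
        #   self._shares_from_server.pop(share._shnum, None)
        # Fixed version removes the share under its server's key:
        serverid = share._server.get_serverid()
        self._shares_from_server[serverid].discard(share)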
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
] 
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
] 
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
] 
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
] 
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
] 
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and MockIServer stubs
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
] 
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
] 
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
] 
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
] 
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
] 
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
] 
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
] 
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
] 
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
] 
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
] 
[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
] 
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but simply ran out of time; it seems more likely that it had hung. But if we raise the timeout to an even more extravagant number and the test still fails, then we can be even more certain that it was hung and was never going to finish.
] 
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
] 
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
] 
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
] 
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
] 
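 Both pitfalls in miniature (illustrative snippet only):

    try:
        import no_such_module          # hypothetical failing import
    except ImportError, e:
        # Bug: exception classes have no .name attribute; the correct
        # spelling is .__name__ (here, "ImportError"). And %r, unlike %s,
        # makes quoting and invisible characters survive into the output.
        print "error: %s: %r" % (e.__class__.__name__, str(e))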
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
] 
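 The shape of the refactoring, sketched (method names taken from the patch
 description; exact signatures may differ):

    # Before: callers carried raw (peerid, rref) tuples everywhere.
    #   for (peerid, rref) in broker.get_all_servers(): ...
    # After: callers hold an IServer and extract details only at the edge.
    for server in broker.get_connected_servers():
        serverid = server.get_serverid()  # extracted at the last moment,
        rref = server.get_rref()          # only where actually needed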
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101] 
Patch bundle hash:
84ad39c67cb591e9201475eb2e54ecf4cca51738