14 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 19:30:08 BST 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 19:33:23 BST 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
     the StubClient. This object doesn't actually offer any services, but the
     announcement helps the Introducer keep track of which clients are
     subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
     RemoteInterface for the StubClient."""
 
 class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
         (binary) storage index string, and 'shnum' is the integer share
         number. 'reason' is a human-readable explanation of the problem,
         probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
         include a hash of the public key (the same value that appears in the
         mutable-file verify-cap), since the current share format does not
         store that on disk.
hunk ./src/allmydata/interfaces.py 413
           remote_host: the IAddress, if connected, otherwise None
 
         This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
         """
 
     def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515
 
     # TODO: rename to get_read_cap()
     def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
         this one. If is_readonly() is True, this returns self."""
 
     def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
         passing into init_from_string."""
 
 class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""
 
 class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
     def get_size():
         """Return the length (in bytes) of the file that I represent."""
 
hunk ./src/allmydata/interfaces.py 553
     pass
 
 class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
hunk ./src/allmydata/interfaces.py 856
         """
 
 class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
     container, like IDirectoryNode."""
     def get_best_readable_version():
         """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
     multiple versions of a file present in the grid, some of which might be
     unrecoverable (i.e. have fewer than 'k' shares). These versions are
     loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
     with seqnum=N-1.
 
     The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
         as a guide to where the shares are located.
 
         I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
         updated with MODE_ANYTHING or MODE_READ may not know about shares for
         all versions (those modes stop querying servers as soon as they can
         fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
     """Upload was unable to satisfy 'servers_of_happiness'"""
 
 class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
     be identically present in all shares."""
 
 class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
     exists, and overwrite= was set to False."""
 
 class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""
 
 class ChildOfWrongTypeError(Exception):
     """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         """
 
hunk ./src/allmydata/interfaces.py 1478
         if you initially thought you were going to use 10 peers, started
         encoding, and then two of the peers dropped out: you could use
         desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.
 
         For each call, encode() will return a Deferred that fires with two
         lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
         required to be of the same length.  The i'th element of their_shareids
         is required to be the shareid of the i'th buffer in some_shares.
 
-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
         sequence will contain all of the segments of the original data, in
         order. The sum of the lengths of all of the buffers will be the
         'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
         Encoding parameters can be set in three ways. 1: The Encoder class
         provides defaults (3/7/10). 2: the Encoder can be constructed with
         an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
         set_params((k,d,n)) can be called.
 
         If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
         produced, so that the segment hashes can be generated with only a
         single pass.
 
-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:
 
          tuple(segment_hashes[first:last])
 
hunk ./src/allmydata/interfaces.py 1796
     def get_plaintext_hash():
         """OBSOLETE; Get the hash of the whole plaintext.
 
-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
         whole plaintext, obtained from hashutil.plaintext_hash(data).
         """
 
hunk ./src/allmydata/interfaces.py 1856
         be used to encrypt the data. The key will also be hashed to derive
         the StorageIndex.
 
-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
         contents and the serialized_encoding_parameters to form the key
         (which of course requires a full pass over the data). Uploadables can
         use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
         automatically.
 
-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
         make multiple passes over the data) can simply return a
         strongly-random 16 byte string.
 
hunk ./src/allmydata/interfaces.py 1872
 
     def read(length):
         """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
         next 'length' bytes of data. If EOF is near, this may provide fewer
         than 'length' bytes. The total number of bytes provided by read()
         before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919
 
     def read(length):
         """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
         length bytes of the file, or fewer if there are fewer bytes
         between the current location and the end of the file.
         """
hunk ./src/allmydata/interfaces.py 1932
 
 class IUploadResults(Interface):
     """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
     of these are functional, some are timing information. All of these may be
     None.
 
hunk ./src/allmydata/interfaces.py 1965
 
 class IDownloadResults(Interface):
     """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::
 
      .file_size : the size of the file, in bytes
      .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
 class IUploader(Interface):
     def upload(uploadable):
         """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""
 
     def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
         kind of lease that is obtained (which account number to claim, etc).
 
         TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
         figuring out why files are unhealthy so corrective action can be
         taken.
         """
hunk ./src/allmydata/interfaces.py 2056
         will be put in the check-and-repair results. The Deferred will not
         fire until the repair is complete.
 
-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
         ICheckAndRepairResults."""
 
 class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                               that was found to be corrupt. Each share
                               locator is a list of (serverid, storage_index,
                               sharenum).
-         count-incompatible-shares: the number of shares which are of a share
+         count-incompatible-shares: the number of shares that are of a share
                                     format unknown to this checker
          list-incompatible-shares: a list of 'share locators', one for each
                                    share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                    format. Each share locator is a list of
                                    (serverid, storage_index, sharenum).
          servers-responding: list of (binary) storage server identifiers,
-                             one for each server which responded to the share
+                             one for each server that responded to the share
                              query (even if they said they didn't have
                              shares, and even if they said they did have
                              shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
         will use the data in the checker results to guide the repair process,
         such as which servers provided bad data and should therefore be
         avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
         ICheckable.check() method::
 
          d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
         methods to create new objects. I return synchronously."""
 
     def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
         with the IMutableFileNode instance when it is ready. If contents= is
         provided (a bytestring), it will be used as the initial contents of
         the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
         usual."""
 
     def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
         fire with the IDirectoryNode instance when it is ready. If
         initial_children= is provided (a dict mapping unicode child name to
         (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452
 
 class IClientStatus(Interface):
     def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
         started uploads."""
 
     def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
         currently has an object available (tracked with weakrefs). This is
         intended for debugging purposes."""
     def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689
 
     def provide(provider=RIStatsProvider, nickname=str):
         """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                          periodically by the gatherer to collect stats.
         @param nickname: a name useful to identify the provided client
         """
hunk ./src/allmydata/interfaces.py 2722
 
 class IValidatedThingProxy(Interface):
     def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
         eventually fired with self if the thing is valid or errbacked if it
         can't be acquired or validated."""
 
}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+
+from twisted.application import service
+
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.bucket import BucketReader
+
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+
+class ShareSet(object):
+    """
+    This class implements shareset logic that could work for all backends, but
+    might be useful to override for efficiency.
+    """
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def renew_lease(self, renew_secret, new_expiration_time):
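+        # Renew the lease on every share in this shareset. If the shareset
+        # has no shares, there is no such lease, so raise IndexError.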
+        found_shares = False
+        for share in self.get_shares():
+            found_shares = True
+            share.renew_lease(renew_secret, new_expiration_time)
+
+        if not found_shares:
+            raise IndexError("no such lease to renew")
+
+    def get_leases(self):
+        # Since all shares get the same lease data, we just grab the leases
+        # from the first share.
+        try:
+            sf = self.get_shares().next()
+            return sf.get_leases()
+        except StopIteration:
+            return iter([])
+
+    def add_or_renew_lease(self, lease_info):
+        # This implementation assumes that lease data is duplicated in
+        # all shares of a shareset, which might not be true for all backends.
+        for share in self.get_shares():
+            share.add_or_renew_lease(lease_info)
+
+    def make_bucket_reader(self, storageserver, share):
+        return BucketReader(storageserver, share)
+
+    def testv_and_readv_and_writev(self, storageserver, secrets,
+                                   test_and_write_vectors, read_vector,
+                                   expiration_time):
+        # The implementation here depends on the following helper methods,
+        # which must be provided by subclasses:
+        #
+        # def _clean_up_after_unlink(self):
+        #     """clean up resources associated with the shareset after some
+        #     shares might have been deleted"""
+        #
+        # def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        #     """create a mutable share with the given shnum and write_enabler"""
+
+        # secrets might be a triple with cancel_secret in secrets[2], but if
+        # so we ignore the cancel_secret.
+        write_enabler = secrets[0]
+        renew_secret = secrets[1]
+
+        si_s = self.get_storage_index_string()
+        shares = {}
+        for share in self.get_shares():
+            # XXX is it correct to ignore immutable shares? Maybe get_shares should
+            # have a parameter saying what type it's expecting.
+            if share.sharetype == "mutable":
+                share.check_write_enabler(write_enabler, si_s)
+                shares[share.get_shnum()] = share
+
+        # write_enabler is good for all existing shares
+
+        # now evaluate test vectors
+        testv_is_good = True
+        for sharenum in test_and_write_vectors:
+            (testv, datav, new_length) = test_and_write_vectors[sharenum]
+            if sharenum in shares:
+                if not shares[sharenum].check_testv(testv):
+                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    testv_is_good = False
+                    break
+            else:
+                # compare the vectors against an empty share, in which all
+                # reads return empty strings
+                if not EmptyShare().check_testv(testv):
+                    self.log("testv failed (empty): [%d] %r" % (sharenum,
+                                                                testv))
+                    testv_is_good = False
+                    break
+
+        # gather the read vectors, before we do any writes
+        read_data = {}
+        for shnum, share in shares.items():
+            read_data[shnum] = share.readv(read_vector)
+
+        ownerid = 1 # TODO
+        lease_info = LeaseInfo(ownerid, renew_secret,
+                               expiration_time, storageserver.get_serverid())
+
+        if testv_is_good:
+            # now apply the write vectors
+            for shnum in test_and_write_vectors:
+                (testv, datav, new_length) = test_and_write_vectors[shnum]
+                if new_length == 0:
+                    if shnum in shares:
+                        shares[shnum].unlink()
+                else:
+                    if shnum not in shares:
+                        # allocate a new share
+                        share = self._create_mutable_share(storageserver, shnum, write_enabler)
+                        shares[shnum] = share
+                    shares[shnum].writev(datav, new_length)
+                    # and update the lease
+                    shares[shnum].add_or_renew_lease(lease_info)
+
+            if new_length == 0:
+                self._clean_up_after_unlink()
+
+        return (testv_is_good, read_data)
+
+    def readv(self, wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        wanted_shnums list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+        datavs = {}
+        for share in self.get_shares():
+            shnum = share.get_shnum()
+            if not wanted_shnums or shnum in wanted_shnums:
+                datavs[shnum] = share.readv(read_vector)
+
+        return datavs
+
+
+def testv_compare(a, op, b):
+    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
+    if op == "lt":
+        return a < b
+    if op == "le":
+        return a <= b
+    if op == "eq":
+        return a == b
+    if op == "ne":
+        return a != b
+    if op == "ge":
+        return a >= b
+    if op == "gt":
+        return a > b
+    # never reached
+
+
+class EmptyShare:
+    def check_testv(self, testv):
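+        # An empty share returns "" for every read, so a test vector can only
+        # pass if its comparisons succeed against the empty string.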
+        test_good = True
+        for (offset, length, operator, specimen) in testv:
+            data = ""
+            if not testv_compare(data, operator, specimen):
+                test_good = False
+                break
+        return test_good
+
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
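+    # Map a storage index to its share directory: a two-character base-32
+    # prefix directory containing a directory named after the full base-32
+    # storage index.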
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
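+    # Sniff the container type by comparing the first bytes of the file
+    # against the mutable container magic; anything else is treated as an
+    # immutable share.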
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
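+        # Remove and recreate the incoming directory, discarding any partial
+        # uploads left over from a previous run.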
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
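+        # Return the sharesets under this prefix directory, sorted by their
+        # base-32 storage index string; a missing or unlistable prefix
+        # directory yields an empty list.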
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
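+            # This platform has no API for disk statistics (see the warning
+            # in _setup_storage), so assume the disk is writeable.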
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
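+        # The new share writes into the incoming directory and is moved to
+        # its final home by ImmutableDiskShare.close(). If discard_storage
+        # is set, the BucketWriter throws out the data instead of storing it.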
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
 # and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
     sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+
 
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
         self._data_offset = 0xc
 
+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
         if actuallength == 0:
             return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
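+        # Writes go to the incoming file; close() moves it to the final home.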
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184
 
     def _read_num_leases(self, f):
         f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
 
+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()
 
     def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
         raise IndexError("unable to renew non-existent lease")
 
     def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                              lease_info.expiration_time)
         except IndexError:
             self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        self._sharefile = None
-        self.closed = True
-        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
-        self.ss.bucket_writer_closed(self, filelen)
-        self.ss.add_latency("close", time.time() - start)
-        self.ss.count("close")
-
-    def _disconnected(self):
-        if not self.closed:
-            self._abort()
-
-    def remote_abort(self):
-        log.msg("storage: aborting sharefile %s" % self.incominghome,
-                facility="tahoe.storage", level=log.UNUSUAL)
-        if not self.closed:
-            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
-        self._abort()
-        self.ss.count("abort")
-
-    def _abort(self):
-        if self.closed:
-            return
-
-        os.remove(self.incominghome)
-        # if we were the last share to be moved, remove the incoming/
-        # directory that was our parent
-        parentdir = os.path.split(self.incominghome)[0]
-        if not os.listdir(parentdir):
-            os.rmdir(parentdir)
-        self._sharefile = None
-
-        # We are now considered closed for further writing. We must tell
-        # the storage server about this so that it stops expecting us to
-        # use the space it allocated for us earlier.
-        self.closed = True
-        self.ss.bucket_writer_closed(self, 0)
-
-
-class BucketReader(Referenceable):
-    implements(RIBucketReader)
-
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
-        self.ss = ss
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
-
-    def __repr__(self):
-        return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
-                               self.shnum)
-
-    def remote_read(self, offset, length):
-        start = time.time()
-        data = self._share_file.read_share_data(offset, length)
-        self.ss.add_latency("read", time.time() - start)
-        self.ss.count("read")
-        return data
-
-    def remote_advise_corrupt_share(self, reason):
-        return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
-                                                   self.shnum,
-                                                   reason)
hunk ./src/allmydata/storage/backends/disk/mutable.py 1
-import os, stat, struct
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 2
-from allmydata.interfaces import BadWriteEnablerError
-from allmydata.util import idlib, log
+import struct
+
+from zope.interface import implements
+
+from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError
+from allmydata.util import fileutil, idlib, log
 from allmydata.util.assertutil import precondition
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/mutable.py 10
-from allmydata.storage.lease import LeaseInfo
-from allmydata.storage.common import UnknownMutableContainerVersionError, \
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
      DataTooLargeError
hunk ./src/allmydata/storage/backends/disk/mutable.py 13
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.backends.base import testv_compare
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 16
-# the MutableShareFile is like the ShareFile, but used for mutable data. It
-# has a different layout. See docs/mutable.txt for more details.
+
+# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data.
+# It has a different layout. See docs/mutable.rst for more details.
 
 # #   offset    size    name
 # 1   0         32      magic verstr "tahoe mutable container v1" plus binary
hunk ./src/allmydata/storage/backends/disk/mutable.py 31
 #                        4    4   expiration timestamp
 #                        8   32   renewal token
 #                        40  32   cancel token
-#                        72  20   nodeid which accepted the tokens
+#                        72  20   nodeid that accepted the tokens
 # 7   468       (a)     data
 # 8   ??        4       count of extra leases
 # 9   ??        n*92    extra leases
hunk ./src/allmydata/storage/backends/disk/mutable.py 37
 
 
-# The struct module doc says that L's are 4 bytes in size., and that Q's are
+# The struct module doc says that L's are 4 bytes in size, and that Q's are
 # 8 bytes in size. Since compatibility depends upon this, double-check it.
 assert struct.calcsize(">L") == 4, struct.calcsize(">L")
 assert struct.calcsize(">Q") == 8, struct.calcsize(">Q")
hunk ./src/allmydata/storage/backends/disk/mutable.py 42
 
-class MutableShareFile:
+
+class MutableDiskShare(object):
+    implements(IStoredMutableShare)
 
     sharetype = "mutable"
     DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s")
hunk ./src/allmydata/storage/backends/disk/mutable.py 54
     assert LEASE_SIZE == 92
     DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE
     assert DATA_OFFSET == 468, DATA_OFFSET
+
     # our sharefiles start with a recognizable string, plus some random
     # binary data to reduce the chance that a regular text file will look
     # like a sharefile.
hunk ./src/allmydata/storage/backends/disk/mutable.py 63
     MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary
     # TODO: decide upon a policy for max share size
 
-    def __init__(self, filename, parent=None):
-        self.home = filename
-        if os.path.exists(self.home):
+    def __init__(self, storageindex, shnum, home, parent=None):
+        self._storageindex = storageindex
+        self._shnum = shnum
+        self._home = home
+        if self._home.exists():
             # we don't cache anything, just check the magic
hunk ./src/allmydata/storage/backends/disk/mutable.py 69
-            f = open(self.home, 'rb')
-            data = f.read(self.HEADER_SIZE)
-            (magic,
-             write_enabler_nodeid, write_enabler,
-             data_length, extra_least_offset) = \
-             struct.unpack(">32s20s32sQQ", data)
-            if magic != self.MAGIC:
-                msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
-                      (filename, magic, self.MAGIC)
-                raise UnknownMutableContainerVersionError(msg)
+            f = self._home.open('rb')
+            try:
+                data = f.read(self.HEADER_SIZE)
+                (magic,
+                 write_enabler_nodeid, write_enabler,
+                 data_length, extra_lease_offset) = \
+                 struct.unpack(">32s20s32sQQ", data)
+                if magic != self.MAGIC:
+                    msg = "sharefile %s had magic '%r' but we wanted '%r'" % \
+                          (quote_filepath(self._home), magic, self.MAGIC)
+                    raise UnknownMutableContainerVersionError(msg)
+            finally:
+                f.close()
         self.parent = parent # for logging
 
     def log(self, *args, **kwargs):
hunk ./src/allmydata/storage/backends/disk/mutable.py 87
         return self.parent.log(*args, **kwargs)
 
-    def create(self, my_nodeid, write_enabler):
-        assert not os.path.exists(self.home)
+    def create(self, serverid, write_enabler):
+        assert not self._home.exists()
         data_length = 0
         extra_lease_offset = (self.HEADER_SIZE
                               + 4 * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/mutable.py 95
                               + data_length)
         assert extra_lease_offset == self.DATA_OFFSET # true at creation
         num_extra_leases = 0
-        f = open(self.home, 'wb')
-        header = struct.pack(">32s20s32sQQ",
-                             self.MAGIC, my_nodeid, write_enabler,
-                             data_length, extra_lease_offset,
-                             )
-        leases = ("\x00"*self.LEASE_SIZE) * 4
-        f.write(header + leases)
-        # data goes here, empty after creation
-        f.write(struct.pack(">L", num_extra_leases))
-        # extra leases go here, none at creation
-        f.close()
+        f = self._home.open('wb')
+        try:
+            header = struct.pack(">32s20s32sQQ",
+                                 self.MAGIC, serverid, write_enabler,
+                                 data_length, extra_lease_offset,
+                                 )
+            leases = ("\x00"*self.LEASE_SIZE) * 4
+            f.write(header + leases)
+            # data goes here, empty after creation
+            f.write(struct.pack(">L", num_extra_leases))
+            # extra leases go here, none at creation
+        finally:
+            f.close()
+
+    def __repr__(self):
+        return ("<MutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def get_used_space(self):
+        return fileutil.get_used_space(self._home)
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
 
     def unlink(self):
hunk ./src/allmydata/storage/backends/disk/mutable.py 123
-        os.unlink(self.home)
+        self._home.remove()
 
     def _read_data_length(self, f):
         f.seek(self.DATA_LENGTH_OFFSET)
hunk ./src/allmydata/storage/backends/disk/mutable.py 291
 
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
-        f = open(self.home, 'rb')
-        for i, lease in self._enumerate_leases(f):
-            yield lease
-        f.close()
+        f = self._home.open('rb')
+        try:
+            for i, lease in self._enumerate_leases(f):
+                yield lease
+        finally:
+            f.close()
 
     def _enumerate_leases(self, f):
         for i in range(self._get_num_lease_slots(f)):
hunk ./src/allmydata/storage/backends/disk/mutable.py 303
             try:
                 data = self._read_lease_record(f, i)
                 if data is not None:
-                    yield i,data
+                    yield i, data
             except IndexError:
                 return
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 307
+    # These lease operations are intended for use by disk_backend.py.
+    # Other non-test clients should not depend on the fact that the disk
+    # backend stores leases in share files.
+
     def add_lease(self, lease_info):
         precondition(lease_info.owner_num != 0) # 0 means "no lease here"
hunk ./src/allmydata/storage/backends/disk/mutable.py 313
-        f = open(self.home, 'rb+')
-        num_lease_slots = self._get_num_lease_slots(f)
-        empty_slot = self._get_first_empty_lease_slot(f)
-        if empty_slot is not None:
-            self._write_lease_record(f, empty_slot, lease_info)
-        else:
-            self._write_lease_record(f, num_lease_slots, lease_info)
-        f.close()
+        f = self._home.open('rb+')
+        try:
+            num_lease_slots = self._get_num_lease_slots(f)
+            empty_slot = self._get_first_empty_lease_slot(f)
+            if empty_slot is not None:
+                self._write_lease_record(f, empty_slot, lease_info)
+            else:
+                self._write_lease_record(f, num_lease_slots, lease_info)
+        finally:
+            f.close()
 
     def renew_lease(self, renew_secret, new_expire_time):
         accepting_nodeids = set()
hunk ./src/allmydata/storage/backends/disk/mutable.py 326
-        f = open(self.home, 'rb+')
-        for (leasenum,lease) in self._enumerate_leases(f):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    self._write_lease_record(f, leasenum, lease)
-                f.close()
-                return
-            accepting_nodeids.add(lease.nodeid)
-        f.close()
+        f = self._home.open('rb+')
+        try:
+            for (leasenum, lease) in self._enumerate_leases(f):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        self._write_lease_record(f, leasenum, lease)
+                    return
+                accepting_nodeids.add(lease.nodeid)
+        finally:
+            f.close()
         # Return the accepting_nodeids set, to give the client a chance to
hunk ./src/allmydata/storage/backends/disk/mutable.py 340
-        # update the leases on a share which has been migrated from its
+        # update the leases on a share that has been migrated from its
         # original server to a new one.
         msg = ("Unable to renew non-existent lease. I have leases accepted by"
                " nodeids: ")
hunk ./src/allmydata/storage/backends/disk/mutable.py 357
         except IndexError:
             self.add_lease(lease_info)
 
-    def cancel_lease(self, cancel_secret):
-        """Remove any leases with the given cancel_secret. If the last lease
-        is cancelled, the file will be removed. Return the number of bytes
-        that were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret."""
-
-        accepting_nodeids = set()
-        modified = 0
-        remaining = 0
-        blank_lease = LeaseInfo(owner_num=0,
-                                renew_secret="\x00"*32,
-                                cancel_secret="\x00"*32,
-                                expiration_time=0,
-                                nodeid="\x00"*20)
-        f = open(self.home, 'rb+')
-        for (leasenum,lease) in self._enumerate_leases(f):
-            accepting_nodeids.add(lease.nodeid)
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                self._write_lease_record(f, leasenum, blank_lease)
-                modified += 1
-            else:
-                remaining += 1
-        if modified:
-            freed_space = self._pack_leases(f)
-            f.close()
-            if not remaining:
-                freed_space += os.stat(self.home)[stat.ST_SIZE]
-                self.unlink()
-            return freed_space
-
-        msg = ("Unable to cancel non-existent lease. I have leases "
-               "accepted by nodeids: ")
-        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
-                         for anid in accepting_nodeids])
-        msg += " ."
-        raise IndexError(msg)
-
-    def _pack_leases(self, f):
-        # TODO: reclaim space from cancelled leases
-        return 0
-
     def _read_write_enabler_and_nodeid(self, f):
         f.seek(0)
         data = f.read(self.HEADER_SIZE)
hunk ./src/allmydata/storage/backends/disk/mutable.py 369
 
     def readv(self, readv):
         datav = []
-        f = open(self.home, 'rb')
-        for (offset, length) in readv:
-            datav.append(self._read_share_data(f, offset, length))
-        f.close()
+        f = self._home.open('rb')
+        try:
+            for (offset, length) in readv:
+                datav.append(self._read_share_data(f, offset, length))
+        finally:
+            f.close()
         return datav
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 377
-#    def remote_get_length(self):
-#        f = open(self.home, 'rb')
-#        data_length = self._read_data_length(f)
-#        f.close()
-#        return data_length
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        f = self._home.open('rb')
+        try:
+            data_length = self._read_data_length(f)
+        finally:
+            f.close()
+        return data_length
 
     def check_write_enabler(self, write_enabler, si_s):
hunk ./src/allmydata/storage/backends/disk/mutable.py 389
-        f = open(self.home, 'rb+')
-        (real_write_enabler, write_enabler_nodeid) = \
-                             self._read_write_enabler_and_nodeid(f)
-        f.close()
+        f = self._home.open('rb+')
+        try:
+            (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f)
+        finally:
+            f.close()
         # avoid a timing attack
         #if write_enabler != real_write_enabler:
         if not constant_time_compare(write_enabler, real_write_enabler):
hunk ./src/allmydata/storage/backends/disk/mutable.py 410
 
     def check_testv(self, testv):
         test_good = True
-        f = open(self.home, 'rb+')
-        for (offset, length, operator, specimen) in testv:
-            data = self._read_share_data(f, offset, length)
-            if not testv_compare(data, operator, specimen):
-                test_good = False
-                break
-        f.close()
+        f = self._home.open('rb+')
+        try:
+            for (offset, length, operator, specimen) in testv:
+                data = self._read_share_data(f, offset, length)
+                if not testv_compare(data, operator, specimen):
+                    test_good = False
+                    break
+        finally:
+            f.close()
         return test_good
 
     def writev(self, datav, new_length):
hunk ./src/allmydata/storage/backends/disk/mutable.py 422
-        f = open(self.home, 'rb+')
-        for (offset, data) in datav:
-            self._write_share_data(f, offset, data)
-        if new_length is not None:
-            cur_length = self._read_data_length(f)
-            if new_length < cur_length:
-                self._write_data_length(f, new_length)
-                # TODO: if we're going to shrink the share file when the
-                # share data has shrunk, then call
-                # self._change_container_size() here.
-        f.close()
-
-def testv_compare(a, op, b):
-    assert op in ("lt", "le", "eq", "ne", "ge", "gt")
-    if op == "lt":
-        return a < b
-    if op == "le":
-        return a <= b
-    if op == "eq":
-        return a == b
-    if op == "ne":
-        return a != b
-    if op == "ge":
-        return a >= b
-    if op == "gt":
-        return a > b
-    # never reached
+        f = self._home.open('rb+')
+        try:
+            for (offset, data) in datav:
+                self._write_share_data(f, offset, data)
+            if new_length is not None:
+                cur_length = self._read_data_length(f)
+                if new_length < cur_length:
+                    self._write_data_length(f, new_length)
+                    # TODO: if we're going to shrink the share file when the
+                    # share data has shrunk, then call
+                    # self._change_container_size() here.
+        finally:
+            f.close()
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 436
-class EmptyShare:
+    def close(self):
+        pass
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 439
-    def check_testv(self, testv):
-        test_good = True
-        for (offset, length, operator, specimen) in testv:
-            data = ""
-            if not testv_compare(data, operator, specimen):
-                test_good = False
-                break
-        return test_good
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 440
-def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent):
-    ms = MutableShareFile(filename, parent)
-    ms.create(my_nodeid, write_enabler)
+def create_mutable_disk_share(fp, serverid, write_enabler, parent):
+    ms = MutableDiskShare(fp, parent)
+    ms.create(serverid, write_enabler)
     del ms
hunk ./src/allmydata/storage/backends/disk/mutable.py 444
-    return MutableShareFile(filename, parent)
-
+    return MutableDiskShare(fp, parent)
addfile ./src/allmydata/storage/backends/null/__init__.py
addfile ./src/allmydata/storage/backends/null/null_backend.py
hunk ./src/allmydata/storage/backends/null/null_backend.py 2
 
+import os, struct
+
+from zope.interface import implements
+
+from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare
+from allmydata.util.assertutil import precondition
+from allmydata.util.hashutil import constant_time_compare
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.common import si_b2a
+from allmydata.storage.lease import LeaseInfo
+
+
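+# The null backend accepts allocations but stores no shares: all written data
+# is discarded. It exists for tests and as executable documentation of the
+# backend interface.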
+class NullBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self, reserved_space):
+        return None
+
+    def get_sharesets_for_prefix(self, prefix):
+        pass
+
+    def get_shareset(self, storageindex):
+        return NullShareSet(storageindex)
+
+    def fill_in_space_stats(self, stats):
+        pass
+
+    def set_storage_server(self, ss):
+        self.ss = ss
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        pass
+
+
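+# A shareset that never holds any shares: get_share always returns None, and
+# bucket writers created here discard the data written to them.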
+class NullShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex):
+        self.storageindex = storageindex
+
+    def get_overhead(self):
+        return 0
+
+    def get_incoming_shnums(self):
+        return frozenset()
+
+    def get_shares(self):
+        pass
+
+    def get_share(self, shnum):
+        return None
+
+    def get_storage_index(self):
+        return self.storageindex
+
+    def get_storage_index_string(self):
+        return si_b2a(self.storageindex)
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        immutableshare = ImmutableNullShare()
+        return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary)
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        return MutableNullShare()
+
+    def _clean_up_after_unlink(self):
+        pass
+
+
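+# A stand-in immutable share for the null backend: write_share_data discards
+# the data it is given.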
+class ImmutableNullShare:
+    implements(IStoredShare)
+    sharetype = "immutable"
+
+    def __init__(self):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        pass
+
+    def get_shnum(self):
+        return self.shnum
+
+    def unlink(self):
+        os.unlink(self.fname)
+
+    def read_share_data(self, offset, length):
+        precondition(offset >= 0)
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
+        seekpos = self._data_offset+offset
+        fsize = os.path.getsize(self.fname)
+        actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528
+        if actuallength == 0:
+            return ""
+        f = open(self.fname, 'rb')
+        f.seek(seekpos)
+        return f.read(actuallength)
+
+    def write_share_data(self, offset, data):
+        pass
+
+    def _write_lease_record(self, f, lease_number, lease_info):
+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
+        f.seek(offset)
+        assert f.tell() == offset
+        f.write(lease_info.to_immutable_data())
+
+    def _read_num_leases(self, f):
+        f.seek(0x08)
+        (num_leases,) = struct.unpack(">L", f.read(4))
+        return num_leases
+
+    def _write_num_leases(self, f, num_leases):
+        f.seek(0x08)
+        f.write(struct.pack(">L", num_leases))
+
+    def _truncate_leases(self, f, num_leases):
+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+    def get_leases(self):
+        """Yields a LeaseInfo instance for all leases."""
+        f = open(self.fname, 'rb')
+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+        f.seek(self._lease_offset)
+        for i in range(num_leases):
+            data = f.read(self.LEASE_SIZE)
+            if data:
+                yield LeaseInfo().from_immutable_data(data)
+
+    def add_lease(self, lease):
+        pass
+
+    def renew_lease(self, renew_secret, new_expire_time):
+        for i,lease in enumerate(self.get_leases()):
+            if constant_time_compare(lease.renew_secret, renew_secret):
+                # yup. See if we need to update the owner time.
+                if new_expire_time > lease.expiration_time:
+                    # yes
+                    lease.expiration_time = new_expire_time
+                    f = open(self.fname, 'rb+')
+                    self._write_lease_record(f, i, lease)
+                    f.close()
+                return
+        raise IndexError("unable to renew non-existent lease")
+
+    def add_or_renew_lease(self, lease_info):
+        try:
+            self.renew_lease(lease_info.renew_secret,
+                             lease_info.expiration_time)
+        except IndexError:
+            self.add_lease(lease_info)
+
+
+class MutableNullShare:
+    implements(IStoredMutableShare)
+    sharetype = "mutable"
+
+    """ XXX: TODO """
addfile ./src/allmydata/storage/bucket.py
hunk ./src/allmydata/storage/bucket.py 1
+
+import time
+
+from foolscap.api import Referenceable
+
+from zope.interface import implements
+from allmydata.interfaces import RIBucketWriter, RIBucketReader
+from allmydata.util import base32, log
+from allmydata.util.assertutil import precondition
+
+
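+# BucketWriter and BucketReader wrap a single backend share object and expose
+# the RIBucketWriter and RIBucketReader remote interfaces to clients.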
+class BucketWriter(Referenceable):
+    implements(RIBucketWriter)
+
+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
+        self.ss = ss
+        self._max_size = max_size # don't allow the client to write more than this
+        self._canary = canary
+        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
+        self.closed = False
+        self.throw_out_all_data = False
+        self._share = immutableshare
+        # also, add our lease to the file now, so that other ones can be
+        # added by simultaneous uploaders
+        self._share.add_lease(lease_info)
+
+    def allocated_size(self):
+        return self._max_size
+
+    def remote_write(self, offset, data):
+        start = time.time()
+        precondition(not self.closed)
+        if self.throw_out_all_data:
+            return
+        self._share.write_share_data(offset, data)
+        self.ss.add_latency("write", time.time() - start)
+        self.ss.count("write")
+
+    def remote_close(self):
+        precondition(not self.closed)
+        start = time.time()
+
+        self._share.close()
+        filelen = self._share.stat()
+        self._share = None
+
+        self.closed = True
+        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
+
+        self.ss.bucket_writer_closed(self, filelen)
+        self.ss.add_latency("close", time.time() - start)
+        self.ss.count("close")
+
+    def _disconnected(self):
+        if not self.closed:
+            self._abort()
+
+    def remote_abort(self):
+        log.msg("storage: aborting write to share %r" % self._share,
+                facility="tahoe.storage", level=log.UNUSUAL)
+        if not self.closed:
+            self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
+        self._abort()
+        self.ss.count("abort")
+
+    def _abort(self):
+        if self.closed:
+            return
+        self._share.unlink()
+        self._share = None
+
+        # We are now considered closed for further writing. We must tell
+        # the storage server about this so that it stops expecting us to
+        # use the space it allocated for us earlier.
+        self.closed = True
+        self.ss.bucket_writer_closed(self, 0)
+
+
+class BucketReader(Referenceable):
+    implements(RIBucketReader)
+
+    def __init__(self, ss, share):
+        self.ss = ss
+        self._share = share
+        self.storageindex = share.storageindex
+        self.shnum = share.shnum
+
+    def __repr__(self):
+        return "<%s %s %s>" % (self.__class__.__name__,
+                               base32.b2a_l(self.storageindex[:8], 60),
+                               self.shnum)
+
+    def remote_read(self, offset, length):
+        start = time.time()
+        data = self._share.read_share_data(offset, length)
+        self.ss.add_latency("read", time.time() - start)
+        self.ss.count("read")
+        return data
+
+    def remote_advise_corrupt_share(self, reason):
+        return self.ss.remote_advise_corrupt_share("immutable",
+                                                   self.storageindex,
+                                                   self.shnum,
+                                                   reason)
addfile ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/test/test_backends.py 1
+import os, stat
+from twisted.trial import unittest
+from allmydata.util.log import msg
+from allmydata.test.common_util import ReallyEqualMixin
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir
+from allmydata.storage.backends.null.null_backend import NullBackend
+
+# The following share file content was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'. The total size of this input
+# is 85 bytes.
+shareversionnumber = '\x00\x00\x00\x01'
+sharedatalength = '\x00\x00\x00\x01'
+numberofleases = '\x00\x00\x00\x01'
+shareinputdata = 'a'
+ownernumber = '\x00\x00\x00\x00'
+renewsecret  = 'x'*32
+cancelsecret = 'y'*32
+expirationtime = '\x00(\xde\x80'
+nextlease = ''
+containerdata = shareversionnumber + sharedatalength + numberofleases
+client_data = shareinputdata + ownernumber + renewsecret + \
+    cancelsecret + expirationtime + nextlease
+share_data = containerdata + client_data
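+# That is: a 12-byte container header, 1 byte of share data, and a 72-byte
+# lease record (12 + 1 + 72 = 85 bytes).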
+testnodeid = 'testnodeidxxxxxxxxxx'
+
+
+class MockFileSystem(unittest.TestCase):
+    """ I simulate a filesystem that the code under test can use. I simulate
+    just the parts of the filesystem that the current implementation of Disk
+    backend needs. """
+    def setUp(self):
+        # Make patcher, patch, and effects for disk-using functions.
+        msg( "%s.setUp()" % (self,))
+        self.mockedfilepaths = {}
+        # keys are pathnames, values are MockFilePath objects. This is necessary because
+        # MockFilePath behavior sometimes depends on the filesystem. Where it does,
+        # self.mockedfilepaths has the relevant information.
+        self.storedir = MockFilePath('teststoredir', self.mockedfilepaths)
+        self.basedir = self.storedir.child('shares')
+        self.baseincdir = self.basedir.child('incoming')
+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.shareincomingname = self.sharedirincomingname.child('0')
+        self.sharefinalname = self.sharedirfinalname.child('0')
+
+        # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler,
+        # or LeaseCheckingCrawler.
+
+        self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath)
+        self.FilePathFake.__enter__()
+
+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler')
+        FakeBCC = self.BCountingCrawler.__enter__()
+        FakeBCC.side_effect = self.call_FakeBCC
+
+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler')
+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
+        FakeLCC.side_effect = self.call_FakeLCC
+
+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
+        GetSpace = self.get_available_space.__enter__()
+        GetSpace.side_effect = self.call_get_available_space
+
+        self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat')
+        getsize = self.statforsize.__enter__()
+        getsize.side_effect = self.call_statforsize
+
+    def call_FakeBCC(self, StateFile):
+        return MockBCC()
+
+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
+        return MockLCC()
+
+    def call_get_available_space(self, storedir, reservedspace):
+        # The input vector has an input size of 85.
+        return 85 - reservedspace
+
+    def call_statforsize(self, fakefpname):
+        return self.mockedfilepaths[fakefpname].fileobject.size()
+
+    def tearDown(self):
+        msg( "%s.tearDown()" % (self,))
+        self.FilePathFake.__exit__()
+        self.mockedfilepaths = {}
+
+
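+# A minimal in-memory stand-in for the FilePath objects used by the disk
+# backend. All instances share the mockedfilepaths dict passed to the
+# constructor, which maps path strings to MockFilePath objects.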
+class MockFilePath:
+    def __init__(self, pathstring, ffpathsenvironment, existence=False):
+        #  I can't just make the values MockFileObjects because they may be directories.
+        self.mockedfilepaths = ffpathsenvironment
+        self.path = pathstring
+        self.existence = existence
+        if not self.mockedfilepaths.has_key(self.path):
+            #  The first MockFilePath object is special
+            self.mockedfilepaths[self.path] = self
+            self.fileobject = None
+        else:
+            self.fileobject = self.mockedfilepaths[self.path].fileobject
+        self.spawn = {}
+        self.antecedent = os.path.dirname(self.path)
+
+    def setContent(self, contentstring):
+        # This method rewrites the data in the file that corresponds to its path
+        # name whether it preexisted or not.
+        self.fileobject = MockFileObject(contentstring)
+        self.existence = True
+        self.mockedfilepaths[self.path].fileobject = self.fileobject
+        self.mockedfilepaths[self.path].existence = self.existence
+        self.setparents()
+
+    def create(self):
+        # This method chokes if there's a pre-existing file!
+        if self.mockedfilepaths[self.path].fileobject:
+            raise OSError
+        else:
+            self.existence = True
+            self.mockedfilepaths[self.path].fileobject = self.fileobject
+            self.mockedfilepaths[self.path].existence = self.existence
+            self.setparents()
+
+    def open(self, mode='r'):
+        # XXX Makes no use of mode.
+        if not self.mockedfilepaths[self.path].fileobject:
+            # If there's no fileobject there already then make one and put it there.
+            self.fileobject = MockFileObject()
+            self.existence = True
+            self.mockedfilepaths[self.path].fileobject = self.fileobject
+            self.mockedfilepaths[self.path].existence = self.existence
+        else:
+            # Otherwise get a ref to it.
+            self.fileobject = self.mockedfilepaths[self.path].fileobject
+            self.existence = self.mockedfilepaths[self.path].existence
+        return self.fileobject.open(mode)
+
+    def child(self, childstring):
+        arg2child = os.path.join(self.path, childstring)
+        child = MockFilePath(arg2child, self.mockedfilepaths)
+        return child
+
+    def children(self):
+        childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)]
+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
+        childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()]
+        self.spawn = frozenset(childrenfromffs)
+        return self.spawn
+
+    def parent(self):
+        if self.mockedfilepaths.has_key(self.antecedent):
+            parent = self.mockedfilepaths[self.antecedent]
+        else:
+            parent = MockFilePath(self.antecedent, self.mockedfilepaths)
+        return parent
+
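+    # Return the path strings of all ancestors of this path, nearest first.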
+    def parents(self):
+        antecedents = []
+        def f(fps, antecedents):
+            newfps = os.path.split(fps)[0]
+            if newfps:
+                antecedents.append(newfps)
+                f(newfps, antecedents)
+        f(self.path, antecedents)
+        return antecedents
+
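+    # Make sure every ancestor of this path is recorded in the mocked filesystem.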
+    def setparents(self):
+        for fps in self.parents():
+            if not self.mockedfilepaths.has_key(fps):
+                self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True)
+
+    def basename(self):
+        return os.path.split(self.path)[1]
+
+    def moveTo(self, newffp):
+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from FilePath.moveTo.
+        if self.mockedfilepaths[newffp.path].exists():
+            raise OSError
+        else:
+            self.mockedfilepaths[newffp.path] = self
+            self.path = newffp.path
+
+    def getsize(self):
+        return self.fileobject.getsize()
+
+    def exists(self):
+        return self.existence
+
+    def isdir(self):
+        return True
+
+    def makedirs(self):
+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
+        pass
+
+    def remove(self):
+        pass
+
+
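+# An in-memory file object supporting the subset of the file API that the
+# code under test uses (open/read/write/seek/tell/close), plus size helpers.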
+class MockFileObject:
+    def __init__(self, contentstring=''):
+        self.buffer = contentstring
+        self.pos = 0
+    def open(self, mode='r'):
+        return self
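+    # Writes past the current end of the buffer pad the gap with NUL bytes
+    # before splicing the new data in at the current position.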
+    def write(self, instring):
+        begin = self.pos
+        padlen = begin - len(self.buffer)
+        if padlen > 0:
+            self.buffer += '\x00' * padlen
+        end = self.pos + len(instring)
+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+        self.pos = end
+    def close(self):
+        self.pos = 0
+    def seek(self, pos):
+        self.pos = pos
+    def read(self, numberbytes):
+        return self.buffer[self.pos:self.pos+numberbytes]
+    def tell(self):
+        return self.pos
+    def size(self):
+        # XXX This method: (A) is not found on a real file object, and (B) is part of a rough mock-up of filepath.stat!
+        # XXX We shall hopefully use a getsize method soon, but must consult first.
+        # Hmm... perhaps we need to sometimes stat the path when there's no MockFileObject present?
+        return {stat.ST_SIZE:len(self.buffer)}
+    def getsize(self):
+        return len(self.buffer)
+
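+# Crawler stand-ins: these tests only need setServiceParent to be a no-op.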
+class MockBCC:
+    def setServiceParent(self, Parent):
+        pass
+
+
+class MockLCC:
+    def setServiceParent(self, Parent):
+        pass
+
+
+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
+    """ NullBackend is just for testing and executable documentation, so
+    this test is actually a test of StorageServer in which we're using
+    NullBackend as helper code for the test, rather than a test of
+    NullBackend. """
+    def setUp(self):
+        self.ss = StorageServer(testnodeid, NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """
+        Write a new share. This tests that StorageServer's remote_allocate_buckets
+        generates the correct return types when given test-vector arguments. That
+        bs is of the correct type is verified by attempting to invoke remote_write
+        on bs[0].
+        """
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+
+class TestServerConstruction(MockFileSystem, ReallyEqualMixin):
+    def test_create_server_disk_backend(self):
+        """ This tests whether a server instance can be constructed with a
+        filesystem backend. To pass the test, it mustn't use the filesystem
+        outside of its configured storedir. """
+        StorageServer(testnodeid, DiskBackend(self.storedir))
+
+
+class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin):
+    """ This tests both the StorageServer and the Disk backend together. """
+    def setUp(self):
+        MockFileSystem.setUp(self)
+        try:
+            self.backend = DiskBackend(self.storedir)
+            self.ss = StorageServer(testnodeid, self.backend)
+
+            self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1)
+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
+        except:
+            MockFileSystem.tearDown(self)
+            raise
+
+    @mock.patch('time.time')
+    @mock.patch('allmydata.util.fileutil.get_available_space')
+    def test_out_of_space(self, mockget_available_space, mocktime):
+        mocktime.return_value = 0
+
+        def call_get_available_space(dir, reserve):
+            return 0
+
+        mockget_available_space.side_effect = call_get_available_space
+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        self.failUnlessReallyEqual(bsc, {})
+
+    @mock.patch('time.time')
+    def test_write_and_read_share(self, mocktime):
+        """
+        Write a new share, read it, and test the server's (and disk backend's)
+        handling of simultaneous and successive attempts to write the same
+        share.
+        """
+        mocktime.return_value = 0
+        # Inspect incoming and fail unless it's empty.
+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
+
+        self.failUnlessReallyEqual(incomingset, frozenset())
+
+        # Populate incoming with the sharenum: 0.
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
+
+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
+
+
+
+        # Attempt to create a second share writer with the same sharenum.
+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
+
+        # Show that no sharewriter results from a remote_allocate_buckets
+        # with the same si and sharenum, until BucketWriter.remote_close()
+        # has been called.
+        self.failIf(bsa)
+
+        # Test allocated size.
+        spaceint = self.ss.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)
+
+        # Write 'a' to shnum 0. Only tested together with close and read.
+        bs[0].remote_write(0, 'a')
+
+        # Preclose: Inspect final, failUnless nothing there.
+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
+        bs[0].remote_close()
+
+        # Postclose: (Omnibus) failUnless written data is in final.
+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
+        contents = sharesinfinal[0].read_share_data(0, 73)
+        self.failUnlessReallyEqual(contents, client_data)
+
+        # Exercise the case that the share we're asking to allocate is
+        # already (completely) uploaded.
+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+
+    def test_read_old_share(self):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+        # Construct a file with the appropriate contents in the mock filesystem.
+        datalen = len(share_data)
+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0))
+        finalhome.setContent(share_data)
+
+        # Now begin the test.
+        bs = self.ss.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs['0']
+        # These should match by definition; the next two cases cover behavior that is not (completely) unambiguous.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[Pluggable backends -- all other changes. refs #999
david-sarah@jacaranda.org**20110919233256
 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
] {
hunk ./src/allmydata/client.py 245
             sharetypes.append("immutable")
         if self.get_config("storage", "expire.mutable", True, boolean=True):
             sharetypes.append("mutable")
-        expiration_sharetypes = tuple(sharetypes)
 
hunk ./src/allmydata/client.py 246
+        expiration_policy = {
+            'enabled': expire,
+            'mode': mode,
+            'override_lease_duration': o_l_d,
+            'cutoff_date': cutoff_date,
+            'sharetypes': tuple(sharetypes),
+        }
         ss = StorageServer(storedir, self.nodeid,
                            reserved_space=reserved,
                            discard_storage=discard,
hunk ./src/allmydata/client.py 258
                            readonly_storage=readonly,
                            stats_provider=self.stats_provider,
-                           expiration_enabled=expire,
-                           expiration_mode=mode,
-                           expiration_override_lease_duration=o_l_d,
-                           expiration_cutoff_date=cutoff_date,
-                           expiration_sharetypes=expiration_sharetypes)
+                           expiration_policy=expiration_policy)
         self.add_service(ss)
 
         d = self.when_tub_ready()
hunk ./src/allmydata/immutable/offloaded.py 306
         if os.path.exists(self._encoding_file):
             self.log("ciphertext already present, bypassing fetch",
                      level=log.UNUSUAL)
+            # XXX the following comment is probably stale, since
+            # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist.
+            #
             # we'll still need the plaintext hashes (when
             # LocalCiphertextReader.get_plaintext_hashtree_leaves() is
             # called), and currently the easiest way to get them is to ask
hunk ./src/allmydata/immutable/upload.py 765
             self._status.set_progress(1, progress)
         return cryptdata
 
-
     def get_plaintext_hashtree_leaves(self, first, last, num_segments):
hunk ./src/allmydata/immutable/upload.py 766
+        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
+        plaintext segments, i.e. get the tagged hashes of the given segments.
+        The segment size is expected to be generated by the
+        IEncryptedUploadable before any plaintext is read or ciphertext
+        produced, so that the segment hashes can be generated with only a
+        single pass.
+
+        This returns a Deferred that fires with a sequence of hashes, using:
+
+         tuple(segment_hashes[first:last])
+
+        'num_segments' is used to assert that the number of segments that the
+        IEncryptedUploadable handled matches the number of segments that the
+        encoder was expecting.
+
+        This method must not be called until the final byte has been read
+        from read_encrypted(). Once this method is called, read_encrypted()
+        can never be called again.
+        """
         # this is currently unused, but will live again when we fix #453
         if len(self._plaintext_segment_hashes) < num_segments:
             # close out the last one
hunk ./src/allmydata/immutable/upload.py 803
         return defer.succeed(tuple(self._plaintext_segment_hashes[first:last]))
 
     def get_plaintext_hash(self):
+        """OBSOLETE; Get the hash of the whole plaintext.
+
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
+        whole plaintext, obtained from hashutil.plaintext_hash(data).
+        """
+        # this is currently unused, but will live again when we fix #453
         h = self._plaintext_hasher.digest()
         return defer.succeed(h)
 
hunk ./src/allmydata/interfaces.py 29
 Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes
 Offset = Number
 ReadSize = int # the 'int' constraint is 2**31 == 2Gib -- large files are processed in not-so-large increments
-WriteEnablerSecret = Hash # used to protect mutable bucket modifications
-LeaseRenewSecret = Hash # used to protect bucket lease renewal requests
-LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests
+WriteEnablerSecret = Hash # used to protect mutable share modifications
+LeaseRenewSecret = Hash # used to protect lease renewal requests
+LeaseCancelSecret = Hash # used to protect lease cancellation requests
 
 class RIStubClient(RemoteInterface):
     """Each client publishes a service announcement for a dummy object called
hunk ./src/allmydata/interfaces.py 106
                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
                          allocated_size=Offset, canary=Referenceable):
         """
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shareset to be created or
                               increfed.
         @param sharenums: these are the share numbers (probably between 0 and
                           99) that the sender is proposing to store on this
hunk ./src/allmydata/interfaces.py 111
                           server.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect lease renewal.
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 115
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
-        @param canary: If the canary is lost before close(), the bucket is
+        @param cancel_secret: ignored
+        @param canary: If the canary is lost before close(), the allocation is
                        deleted.
         @return: tuple of (alreadygot, allocated), where alreadygot is what we
                  already have and allocated is what we hereby agree to accept.
hunk ./src/allmydata/interfaces.py 129
                   renew_secret=LeaseRenewSecret,
                   cancel_secret=LeaseCancelSecret):
         """
-        Add a new lease on the given bucket. If the renew_secret matches an
+        Add a new lease on the given shareset. If the renew_secret matches an
         existing lease, that lease will be renewed instead. If there is no
hunk ./src/allmydata/interfaces.py 131
-        bucket for the given storage_index, return silently. (note that in
+        shareset for the given storage_index, return silently. (Note that in
         tahoe-1.3.0 and earlier, IndexError was raised if there was no
hunk ./src/allmydata/interfaces.py 133
-        bucket)
+        shareset.)
         """
         return Any() # returns None now, but future versions might change
 
hunk ./src/allmydata/interfaces.py 139
     def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret):
         """
-        Renew the lease on a given bucket, resetting the timer to 31 days.
-        Some networks will use this, some will not. If there is no bucket for
+        Renew the lease on a given shareset, resetting the timer to 31 days.
+        Some networks will use this, some will not. If there is no shareset for
         the given storage_index, IndexError will be raised.
 
         For mutable shares, if the given renew_secret does not match an
hunk ./src/allmydata/interfaces.py 146
         existing lease, IndexError will be raised with a note listing the
         server-nodeids on the existing leases, so leases on migrated shares
-        can be renewed or cancelled. For immutable shares, IndexError
-        (without the note) will be raised.
+        can be renewed. For immutable shares, IndexError (without the note)
+        will be raised.
         """
         return Any()
 
hunk ./src/allmydata/interfaces.py 154
     def get_buckets(storage_index=StorageIndex):
         return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS)
 
-
-
     def slot_readv(storage_index=StorageIndex,
                    shares=ListOf(int), readv=ReadVector):
         """Read a vector from the numbered shares associated with the given
hunk ./src/allmydata/interfaces.py 163
 
     def slot_testv_and_readv_and_writev(storage_index=StorageIndex,
                                         secrets=TupleOf(WriteEnablerSecret,
-                                                        LeaseRenewSecret,
-                                                        LeaseCancelSecret),
+                                                        LeaseRenewSecret),
                                         tw_vectors=TestAndWriteVectorsForShares,
                                         r_vector=ReadVector,
                                         ):
hunk ./src/allmydata/interfaces.py 167
-        """General-purpose test-and-set operation for mutable slots. Perform
-        a bunch of comparisons against the existing shares. If they all pass,
-        then apply a bunch of write vectors to those shares. Then use the
-        read vectors to extract data from all the shares and return the data.
+        """
+        General-purpose atomic test-read-and-set operation for mutable slots.
+        Perform a bunch of comparisons against the existing shares. If they
+        all pass: use the read vectors to extract data from all the shares,
+        then apply a bunch of write vectors to those shares. Return the read
+        data, which does not include any modifications made by the writes.
 
         This method is, um, large. The goal is to allow clients to update all
         the shares associated with a mutable file in a single round trip.
hunk ./src/allmydata/interfaces.py 177
 
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shareset to be created or
                               increfed.
         @param write_enabler: a secret that is stored along with the slot.
                               Writes are accepted from any caller who can
hunk ./src/allmydata/interfaces.py 183
                               present the matching secret. A different secret
                               should be used for each slot*server pair.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect lease renewal.
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 187
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
+        @param cancel_secret: ignored
 
hunk ./src/allmydata/interfaces.py 189
-        The 'secrets' argument is a tuple of (write_enabler, renew_secret,
-        cancel_secret). The first is required to perform any write. The
-        latter two are used when allocating new shares. To simply acquire a
-        new lease on existing shares, use an empty testv and an empty writev.
+        The 'secrets' argument is a tuple with (write_enabler, renew_secret).
+        The write_enabler is required to perform any write. The renew_secret
+        is used when allocating new shares.
 
         Each share can have a separate test vector (i.e. a list of
         comparisons to perform). If all vectors for all shares pass, then all
hunk ./src/allmydata/interfaces.py 280
         store that on disk.
         """
 
-class IStorageBucketWriter(Interface):
+
+class IStorageBackend(Interface):
     """
hunk ./src/allmydata/interfaces.py 283
-    Objects of this kind live on the client side.
+    Objects of this kind live on the server side and are used by the
+    storage server object.
     """
hunk ./src/allmydata/interfaces.py 286
-    def put_block(segmentnum=int, data=ShareData):
-        """@param data: For most segments, this data will be 'blocksize'
-        bytes in length. The last segment might be shorter.
-        @return: a Deferred that fires (with None) when the operation completes
+    def get_available_space():
+        """
+        Returns available space for share storage in bytes, or
+        None if this information is not available or if the available
+        space is unlimited.
+
+        If the backend is configured for read-only mode then this will
+        return 0.
+        """
+
+    def get_sharesets_for_prefix(prefix):
+        """
+        Generates IShareSet objects for all storage indices matching the
+        given prefix for which this backend holds shares.
+        """
+
+    def get_shareset(storageindex):
+        """
+        Get an IShareSet object for the given storage index.
+        """
+
+    def advise_corrupt_share(storageindex, sharetype, shnum, reason):
+        """
+        Clients who discover hash failures in shares that they have
+        downloaded from me will use this method to inform me about the
+        failures. I will record their concern so that my operator can
+        manually inspect the shares in question.
+
+        'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer
+        share number. 'reason' is a human-readable explanation of the problem,
+        probably including some expected hash values and the computed ones
+        that did not match. Corruption advisories for mutable shares should
+        include a hash of the public key (the same value that appears in the
+        mutable-file verify-cap), since the current share format does not
+        store that on disk.
+
+        @param storageindex=str
+        @param sharetype=str
+        @param shnum=int
+        @param reason=str
+        """
+
+
+class IShareSet(Interface):
+    def get_storage_index():
+        """
+        Returns the storage index for this shareset.
+        """
+
+    def get_storage_index_string():
+        """
+        Returns the base32-encoded storage index for this shareset.
+        """
+
+    def get_overhead():
+        """
+        Returns the storage overhead, in bytes, of this shareset (exclusive
+        of the space used by its shares).
+        """
+
+    def get_shares():
+        """
+        Generates the IStoredShare objects held in this shareset.
+        """
+
+    def has_incoming(shnum):
+        """
+        Returns True if this shareset has an incoming (partial) share with this number, otherwise False.
+        """
+
+    def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        """
+        Create a bucket writer that can be used to write data to a given share.
+
+        @param storageserver=RIStorageServer
+        @param shnum=int: A share number in this shareset
+        @param max_space_per_bucket=int: The maximum space allocated for the
+                 share, in bytes
+        @param lease_info=LeaseInfo: The initial lease information
+        @param canary=Referenceable: If the canary is lost before close(), the
+                 bucket is deleted.
+        @return an IStorageBucketWriter for the given share
+        """
+
+    def make_bucket_reader(storageserver, share):
+        """
+        Create a bucket reader that can be used to read data from a given share.
+
+        @param storageserver=RIStorageServer
+        @param share=IStoredShare
+        @return an IStorageBucketReader for the given share
+        """
+
+    def readv(wanted_shnums, read_vector):
+        """
+        Read a vector from the numbered shares in this shareset. An empty
+        wanted_shnums list means to return data from all known shares.
+
+        @param wanted_shnums=ListOf(int)
+        @param read_vector=ReadVector
+        @return DictOf(int, ReadData): shnum -> results, with one key per share
+        """
+
+    def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time):
+        """
+        General-purpose atomic test-read-and-set operation for mutable slots.
+        Perform a bunch of comparisons against the existing shares in this
+        shareset. If they all pass: use the read vectors to extract data from
+        all the shares, then apply a bunch of write vectors to those shares.
+        Return the read data, which does not include any modifications made by
+        the writes.
+
+        See the similar method in RIStorageServer for more detail.
+
+        @param storageserver=RIStorageServer
+        @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...])
+        @param test_and_write_vectors=TestAndWriteVectorsForShares
+        @param read_vector=ReadVector
+        @param expiration_time=int
+        @return TupleOf(bool, DictOf(int, ReadData))
+        """
+
+    def add_or_renew_lease(lease_info):
+        """
+        Add a new lease on the shares in this shareset. If the renew_secret
+        matches an existing lease, that lease will be renewed instead. If
+        there are no shares in this shareset, return silently.
+
+        @param lease_info=LeaseInfo
+        """
+
+    def renew_lease(renew_secret, new_expiration_time):
+        """
+        Renew a lease on the shares in this shareset, resetting the timer
+        to 31 days. Some grids will use this, some will not. If there are no
+        shares in this shareset, IndexError will be raised.
+
+        For mutable shares, if the given renew_secret does not match an
+        existing lease, IndexError will be raised with a note listing the
+        server-nodeids on the existing leases, so leases on migrated shares
+        can be renewed. For immutable shares, IndexError (without the note)
+        will be raised.
+
+        @param renew_secret=LeaseRenewSecret
+        """
+
+
+class IStoredShare(Interface):
+    """
+    This object may hold all of the data of a single share. It is intended
+    for lazy evaluation, such that in many use cases substantially less than
+    all of the share data will be accessed.
+    """
+    def close():
+        """
+        Complete writing to this share.
+        """
+
+    def get_storage_index():
+        """
+        Returns the storage index.
+        """
+
+    def get_shnum():
+        """
+        Returns the share number.
+        """
+
+    def get_data_length():
+        """
+        Returns the data length in bytes.
+        """
+
+    def get_size():
+        """
+        Returns the size of the share in bytes.
+        """
+
+    def get_used_space():
+        """
+        Returns the amount of backend storage including overhead, in bytes, used
+        by this share.
+        """
+
+    def unlink():
+        """
+        Signal that this share can be removed from the backend storage. This does
+        not guarantee that the share data will be immediately inaccessible, or
+        that it will be securely erased.
+        """
+
+    def readv(read_vector):
+        """
+        Given a read vector (a list of (offset, length) pairs), return the
+        corresponding read results from this share's data, as a list of
+        strings.
+        """
+
+
+class IStoredMutableShare(IStoredShare):
+    def check_write_enabler(write_enabler, si_s):
+        """
+        Check that the given write_enabler matches the one stored in this
+        share; if it does not, an error such as BadWriteEnablerError is
+        raised. si_s is the base32-encoded storage index, used in error
+        reporting.
         """
 
hunk ./src/allmydata/interfaces.py 489
-    def put_plaintext_hashes(hashes=ListOf(Hash)):
+    def check_testv(test_vector):
+        """
+        Return True if this share's data satisfies the given test vector (a
+        list of (offset, length, operator, specimen) comparisons), otherwise
+        False.
+        """
+
+    def writev(datav, new_length):
+        """
+        Apply the given write vector (a list of (offset, data) pairs) to this
+        share's data. new_length, if not None, gives the intended new length
+        of the share data.
+        """
+
+
+class IStorageBucketWriter(Interface):
+    """
+    Objects of this kind live on the client side.
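+
+    A writer is typically used by storing each block with put_block(), then
+    the various hash trees, and finally the URI extension block; this
+    ordering describes the usual upload flow rather than a requirement
+    imposed by this interface.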
+    """
+    def put_block(segmentnum, data):
         """
hunk ./src/allmydata/interfaces.py 506
+        @param segmentnum=int
+        @param data=ShareData: For most segments, this data will be 'blocksize'
+        bytes in length. The last segment might be shorter.
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 512
-    def put_crypttext_hashes(hashes=ListOf(Hash)):
+    def put_crypttext_hashes(hashes):
         """
hunk ./src/allmydata/interfaces.py 514
+        @param hashes=ListOf(Hash)
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 518
-    def put_block_hashes(blockhashes=ListOf(Hash)):
+    def put_block_hashes(blockhashes):
         """
hunk ./src/allmydata/interfaces.py 520
+        @param blockhashes=ListOf(Hash)
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 524
-    def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))):
+    def put_share_hashes(sharehashes):
         """
hunk ./src/allmydata/interfaces.py 526
+        @param sharehashes=ListOf(TupleOf(int, Hash))
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 530
-    def put_uri_extension(data=URIExtensionData):
+    def put_uri_extension(data):
         """This block of data contains integrity-checking information (hashes
         of plaintext, crypttext, and shares), as well as encoding parameters
         that are necessary to recover the data. This is a serialized dict
hunk ./src/allmydata/interfaces.py 535
         mapping strings to other strings. The hash of this data is kept in
-        the URI and verified before any of the data is used. All buckets for
-        a given file contain identical copies of this data.
+        the URI and verified before any of the data is used. All share
+        containers for a given file contain identical copies of this data.
 
         The serialization format is specified with the following pseudocode:
         for k in sorted(dict.keys()):
hunk ./src/allmydata/interfaces.py 543
             assert re.match(r'^[a-zA-Z_\-]+$', k)
             write(k + ':' + netstring(dict[k]))
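+
+        For example (illustrative), an entry mapping 'needed_shares' to '3'
+        would be written as 'needed_shares:1:3,', since netstring(s) is
+        '%d:%s,' % (len(s), s).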
 
+        @param data=URIExtensionData
         @return: a Deferred that fires (with None) when the operation completes
         """
 
hunk ./src/allmydata/interfaces.py 558
 
 class IStorageBucketReader(Interface):
 
-    def get_block_data(blocknum=int, blocksize=int, size=int):
+    def get_block_data(blocknum, blocksize, size):
         """Most blocks will be the same size. The last block might be shorter
         than the others.
 
hunk ./src/allmydata/interfaces.py 562
+        @param blocknum=int
+        @param blocksize=int
+        @param size=int
         @return: ShareData
         """
 
hunk ./src/allmydata/interfaces.py 573
         @return: ListOf(Hash)
         """
 
-    def get_block_hashes(at_least_these=SetOf(int)):
+    def get_block_hashes(at_least_these=()):
         """
hunk ./src/allmydata/interfaces.py 575
+        @param at_least_these=SetOf(int)
         @return: ListOf(Hash)
         """
 
hunk ./src/allmydata/interfaces.py 579
-    def get_share_hashes(at_least_these=SetOf(int)):
+    def get_share_hashes():
         """
         @return: ListOf(TupleOf(int, Hash))
         """
hunk ./src/allmydata/interfaces.py 611
         @return: unicode nickname, or None
         """
 
-    # methods moved from IntroducerClient, need review
-    def get_all_connections():
-        """Return a frozenset of (nodeid, service_name, rref) tuples, one for
-        each active connection we've established to a remote service. This is
-        mostly useful for unit tests that need to wait until a certain number
-        of connections have been made."""
-
-    def get_all_connectors():
-        """Return a dict that maps from (nodeid, service_name) to a
-        RemoteServiceConnector instance for all services that we are actively
-        trying to connect to. Each RemoteServiceConnector has the following
-        public attributes::
-
-          service_name: the type of service provided, like 'storage'
-          announcement_time: when we first heard about this service
-          last_connect_time: when we last established a connection
-          last_loss_time: when we last lost a connection
-
-          version: the peer's version, from the most recent connection
-          oldest_supported: the peer's oldest supported version, same
-
-          rref: the RemoteReference, if connected, otherwise None
-          remote_host: the IAddress, if connected, otherwise None
-
-        This method is intended for monitoring interfaces, such as a web page
-        that describes connecting and connected peers.
-        """
-
-    def get_all_peerids():
-        """Return a frozenset of all peerids to whom we have a connection (to
-        one or more services) established. Mostly useful for unit tests."""
-
-    def get_all_connections_for(service_name):
-        """Return a frozenset of (nodeid, service_name, rref) tuples, one
-        for each active connection that provides the given SERVICE_NAME."""
-
-    def get_permuted_peers(service_name, key):
-        """Returns an ordered list of (peerid, rref) tuples, selecting from
-        the connections that provide SERVICE_NAME, using a hash-based
-        permutation keyed by KEY. This randomizes the service list in a
-        repeatable way, to distribute load over many peers.
-        """
-
 
 class IMutableSlotWriter(Interface):
     """
hunk ./src/allmydata/interfaces.py 616
     The interface for a writer around a mutable slot on a remote server.
     """
-    def set_checkstring(checkstring, *args):
+    def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None):
         """
         Set the checkstring that I will pass to the remote server when
         writing.
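+
+        This is typically called either with a single precomputed checkstring,
+        or with a (seqnum, root_hash, salt) triple from which the checkstring
+        is assembled.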
hunk ./src/allmydata/interfaces.py 640
         Add a block and salt to the share.
         """
 
-    def put_encprivey(encprivkey):
+    def put_encprivkey(encprivkey):
         """
         Add the encrypted private key to the share.
         """
hunk ./src/allmydata/interfaces.py 645
 
-    def put_blockhashes(blockhashes=list):
+    def put_blockhashes(blockhashes):
         """
hunk ./src/allmydata/interfaces.py 647
+        @param blockhashes=list
         Add the block hash tree to the share.
         """
 
hunk ./src/allmydata/interfaces.py 651
-    def put_sharehashes(sharehashes=dict):
+    def put_sharehashes(sharehashes):
         """
hunk ./src/allmydata/interfaces.py 653
+        @param sharehashes=dict
         Add the share hash chain to the share.
         """
 
hunk ./src/allmydata/interfaces.py 739
     def get_extension_params():
         """Return the extension parameters in the URI"""
 
-    def set_extension_params():
+    def set_extension_params(params):
         """Set the extension parameters that should be in the URI"""
 
 class IDirectoryURI(Interface):
hunk ./src/allmydata/interfaces.py 879
         writer-visible data using this writekey.
         """
 
-    # TODO: Can this be overwrite instead of replace?
-    def replace(new_contents):
-        """Replace the contents of the mutable file, provided that no other
+    def overwrite(new_contents):
+        """Overwrite the contents of the mutable file, provided that no other
         node has published (or is attempting to publish, concurrently) a
         newer version of the file than this one.
 
hunk ./src/allmydata/interfaces.py 1346
         is empty, the metadata will be an empty dictionary.
         """
 
-    def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True):
+    def set_uri(name, writecap, readcap, metadata=None, overwrite=True):
         """I add a child (by writecap+readcap) at the specific name. I return
         a Deferred that fires when the operation finishes. If overwrite= is
         True, I will replace any existing child of the same name, otherwise
hunk ./src/allmydata/interfaces.py 1745
     Block Hash, and the encoding parameters, both of which must be included
     in the URI.
 
-    I do not choose shareholders, that is left to the IUploader. I must be
-    given a dict of RemoteReferences to storage buckets that are ready and
-    willing to receive data.
+    I do not choose shareholders, that is left to the IUploader.
     """
 
     def set_size(size):
hunk ./src/allmydata/interfaces.py 1752
         """Specify the number of bytes that will be encoded. This must be
         performed before get_serialized_params() can be called.
         """
+
     def set_params(params):
         """Override the default encoding parameters. 'params' is a tuple of
         (k,d,n), where 'k' is the number of required shares, 'd' is the
hunk ./src/allmydata/interfaces.py 1848
     download, validate, decode, and decrypt data from them, writing the
     results to an output file.
 
-    I do not locate the shareholders, that is left to the IDownloader. I must
-    be given a dict of RemoteReferences to storage buckets that are ready to
-    send data.
+    I do not locate the shareholders, that is left to the IDownloader.
     """
 
     def setup(outfile):
hunk ./src/allmydata/interfaces.py 1950
         resuming an interrupted upload (where we need to compute the
         plaintext hashes, but don't need the redundant encrypted data)."""
 
-    def get_plaintext_hashtree_leaves(first, last, num_segments):
-        """OBSOLETE; Get the leaf nodes of a merkle hash tree over the
-        plaintext segments, i.e. get the tagged hashes of the given segments.
-        The segment size is expected to be generated by the
-        IEncryptedUploadable before any plaintext is read or ciphertext
-        produced, so that the segment hashes can be generated with only a
-        single pass.
-
-        This returns a Deferred that fires with a sequence of hashes, using:
-
-         tuple(segment_hashes[first:last])
-
-        'num_segments' is used to assert that the number of segments that the
-        IEncryptedUploadable handled matches the number of segments that the
-        encoder was expecting.
-
-        This method must not be called until the final byte has been read
-        from read_encrypted(). Once this method is called, read_encrypted()
-        can never be called again.
-        """
-
-    def get_plaintext_hash():
-        """OBSOLETE; Get the hash of the whole plaintext.
-
-        This returns a Deferred that fires with a tagged SHA-256 hash of the
-        whole plaintext, obtained from hashutil.plaintext_hash(data).
-        """
-
     def close():
         """Just like IUploadable.close()."""
 
hunk ./src/allmydata/interfaces.py 2144
         returns a Deferred that fires with an IUploadResults instance, from
         which the URI of the file can be obtained as results.uri ."""
 
-    def upload_ssk(write_capability, new_version, uploadable):
-        """TODO: how should this work?"""
-
 class ICheckable(Interface):
     def check(monitor, verify=False, add_lease=False):
         """Check up on my health, optionally repairing any problems.
hunk ./src/allmydata/interfaces.py 2505
 
 class IRepairResults(Interface):
     """I contain the results of a repair operation."""
-    def get_successful(self):
+    def get_successful():
         """Returns a boolean: True if the repair made the file healthy, False
         if not. Repair failure generally indicates a file that has been
         damaged beyond repair."""
hunk ./src/allmydata/interfaces.py 2577
     Tahoe process will typically have a single NodeMaker, but unit tests may
     create simplified/mocked forms for testing purposes.
     """
-    def create_from_cap(writecap, readcap=None, **kwargs):
+    def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"):
         """I create an IFilesystemNode from the given writecap/readcap. I can
         only provide nodes for existing file/directory objects: use my other
         methods to create new objects. I return synchronously."""
hunk ./src/allmydata/monitor.py 30
 
     # the following methods are provided for the operation code
 
-    def is_cancelled(self):
+    def is_cancelled():
         """Returns True if the operation has been cancelled. If True,
         operation code should stop creating new work, and attempt to stop any
         work already in progress."""
hunk ./src/allmydata/monitor.py 35
 
-    def raise_if_cancelled(self):
+    def raise_if_cancelled():
         """Raise OperationCancelledError if the operation has been cancelled.
         Operation code that has a robust error-handling path can simply call
         this periodically."""
hunk ./src/allmydata/monitor.py 40
 
-    def set_status(self, status):
+    def set_status(status):
         """Sets the Monitor's 'status' object to an arbitrary value.
         Different operations will store different sorts of status information
         here. Operation code should use get+modify+set sequences to update
hunk ./src/allmydata/monitor.py 46
         this."""
 
-    def get_status(self):
+    def get_status():
         """Return the status object. If the operation failed, this will be a
         Failure instance."""
 
hunk ./src/allmydata/monitor.py 50
-    def finish(self, status):
+    def finish(status):
         """Call this when the operation is done, successful or not. The
         Monitor's lifetime is influenced by the completion of the operation
         it is monitoring. The Monitor's 'status' value will be set with the
hunk ./src/allmydata/monitor.py 63
 
     # the following methods are provided for the initiator of the operation
 
-    def is_finished(self):
+    def is_finished():
         """Return a boolean, True if the operation is done (whether
         successful or failed), False if it is still running."""
 
hunk ./src/allmydata/monitor.py 67
-    def when_done(self):
+    def when_done():
         """Return a Deferred that fires when the operation is complete. It
         will fire with the operation status, the same value as returned by
         get_status()."""
hunk ./src/allmydata/monitor.py 72
 
-    def cancel(self):
+    def cancel():
         """Cancel the operation as soon as possible. is_cancelled() will
         start returning True after this is called."""
 
hunk ./src/allmydata/mutable/filenode.py 753
         self._writekey = writekey
         self._serializer = defer.succeed(None)
 
-
     def get_sequence_number(self):
         """
         Get the sequence number of the mutable version that I represent.
hunk ./src/allmydata/mutable/filenode.py 759
         """
         return self._version[0] # verinfo[0] == the sequence number
 
+    def get_servermap(self):
+        return self._servermap
 
hunk ./src/allmydata/mutable/filenode.py 762
-    # TODO: Terminology?
     def get_writekey(self):
         """
         I return a writekey or None if I don't have a writekey.
hunk ./src/allmydata/mutable/filenode.py 768
         """
         return self._writekey
 
-
     def set_downloader_hints(self, hints):
         """
         I set the downloader hints.
hunk ./src/allmydata/mutable/filenode.py 776
 
         self._downloader_hints = hints
 
-
     def get_downloader_hints(self):
         """
         I return the downloader hints.
hunk ./src/allmydata/mutable/filenode.py 782
         """
         return self._downloader_hints
 
-
     def overwrite(self, new_contents):
         """
         I overwrite the contents of this mutable file version with the
hunk ./src/allmydata/mutable/filenode.py 791
 
         return self._do_serialized(self._overwrite, new_contents)
 
-
     def _overwrite(self, new_contents):
         assert IMutableUploadable.providedBy(new_contents)
         assert self._servermap.last_update_mode == MODE_WRITE
hunk ./src/allmydata/mutable/filenode.py 797
 
         return self._upload(new_contents)
 
-
     def modify(self, modifier, backoffer=None):
         """I use a modifier callback to apply a change to the mutable file.
         I implement the following pseudocode::
hunk ./src/allmydata/mutable/filenode.py 841
 
         return self._do_serialized(self._modify, modifier, backoffer)
 
-
     def _modify(self, modifier, backoffer):
         if backoffer is None:
             backoffer = BackoffAgent().delay
hunk ./src/allmydata/mutable/filenode.py 846
         return self._modify_and_retry(modifier, backoffer, True)
 
-
     def _modify_and_retry(self, modifier, backoffer, first_time):
         """
         I try to apply modifier to the contents of this version of the
hunk ./src/allmydata/mutable/filenode.py 878
         d.addErrback(_retry)
         return d
 
-
     def _modify_once(self, modifier, first_time):
         """
         I attempt to apply a modifier to the contents of the mutable
hunk ./src/allmydata/mutable/filenode.py 913
         d.addCallback(_apply)
         return d
 
-
     def is_readonly(self):
         """
         I return True if this MutableFileVersion provides no write
hunk ./src/allmydata/mutable/filenode.py 921
         """
         return self._writekey is None
 
-
     def is_mutable(self):
         """
         I return True, since mutable files are always mutable by
hunk ./src/allmydata/mutable/filenode.py 928
         """
         return True
 
-
     def get_storage_index(self):
         """
         I return the storage index of the reference that I encapsulate.
hunk ./src/allmydata/mutable/filenode.py 934
         """
         return self._storage_index
 
-
     def get_size(self):
         """
         I return the length, in bytes, of this readable object.
hunk ./src/allmydata/mutable/filenode.py 940
         """
         return self._servermap.size_of_version(self._version)
 
-
     def download_to_data(self, fetch_privkey=False):
         """
         I return a Deferred that fires with the contents of this
hunk ./src/allmydata/mutable/filenode.py 951
         d.addCallback(lambda mc: "".join(mc.chunks))
         return d
 
-
     def _try_to_download_data(self):
         """
         I am an unserialized cousin of download_to_data; I am called
hunk ./src/allmydata/mutable/filenode.py 963
         d.addCallback(lambda mc: "".join(mc.chunks))
         return d
 
-
     def read(self, consumer, offset=0, size=None, fetch_privkey=False):
         """
         I read a portion (possibly all) of the mutable file that I
hunk ./src/allmydata/mutable/filenode.py 971
         return self._do_serialized(self._read, consumer, offset, size,
                                    fetch_privkey)
 
-
     def _read(self, consumer, offset=0, size=None, fetch_privkey=False):
         """
         I am the serialized companion of read.
hunk ./src/allmydata/mutable/filenode.py 981
         d = r.download(consumer, offset, size)
         return d
 
-
     def _do_serialized(self, cb, *args, **kwargs):
         # note: to avoid deadlock, this callable is *not* allowed to invoke
         # other serialized methods within this (or any other)
hunk ./src/allmydata/mutable/filenode.py 999
         self._serializer.addErrback(log.err)
         return d
 
-
     def _upload(self, new_contents):
         #assert self._pubkey, "update_servermap must be called before publish"
         p = Publish(self._node, self._storage_broker, self._servermap)
hunk ./src/allmydata/mutable/filenode.py 1009
         d.addCallback(self._did_upload, new_contents.get_size())
         return d
 
-
     def _did_upload(self, res, size):
         self._most_recent_size = size
         return res
hunk ./src/allmydata/mutable/filenode.py 1029
         """
         return self._do_serialized(self._update, data, offset)
 
-
     def _update(self, data, offset):
         """
         I update the mutable file version represented by this particular
hunk ./src/allmydata/mutable/filenode.py 1058
         d.addCallback(self._build_uploadable_and_finish, data, offset)
         return d
 
-
     def _do_modify_update(self, data, offset):
         """
         I perform a file update by modifying the contents of the file
hunk ./src/allmydata/mutable/filenode.py 1073
             return new
         return self._modify(m, None)
 
-
     def _do_update_update(self, data, offset):
         """
         I start the Servermap update that gets us the data we need to
hunk ./src/allmydata/mutable/filenode.py 1108
         return self._update_servermap(update_range=(start_segment,
                                                     end_segment))
 
-
     def _decode_and_decrypt_segments(self, ignored, data, offset):
         """
         After the servermap update, I take the encrypted and encoded
hunk ./src/allmydata/mutable/filenode.py 1148
         d3 = defer.succeed(blockhashes)
         return deferredutil.gatherResults([d1, d2, d3])
 
-
     def _build_uploadable_and_finish(self, segments_and_bht, data, offset):
         """
         After the process has the plaintext segments, I build the
hunk ./src/allmydata/mutable/filenode.py 1163
         p = Publish(self._node, self._storage_broker, self._servermap)
         return p.update(u, offset, segments_and_bht[2], self._version)
 
-
     def _update_servermap(self, mode=MODE_WRITE, update_range=None):
         """
         I update the servermap. I return a Deferred that fires when the
hunk ./src/allmydata/storage/common.py 1
-
-import os.path
 from allmydata.util import base32
 
 class DataTooLargeError(Exception):
hunk ./src/allmydata/storage/common.py 5
     pass
+
 class UnknownMutableContainerVersionError(Exception):
     pass
hunk ./src/allmydata/storage/common.py 8
+
 class UnknownImmutableContainerVersionError(Exception):
     pass
 
hunk ./src/allmydata/storage/common.py 18
 
 def si_a2b(ascii_storageindex):
     return base32.a2b(ascii_storageindex)
-
-def storage_index_to_dir(storageindex):
-    sia = si_b2a(storageindex)
-    return os.path.join(sia[:2], sia)
hunk ./src/allmydata/storage/crawler.py 2
 
-import os, time, struct
+import time, struct
 import cPickle as pickle
 from twisted.internet import reactor
 from twisted.application import service
hunk ./src/allmydata/storage/crawler.py 6
+
+from allmydata.util.assertutil import precondition
+from allmydata.interfaces import IStorageBackend
 from allmydata.storage.common import si_b2a
hunk ./src/allmydata/storage/crawler.py 10
-from allmydata.util import fileutil
+
 
 class TimeSliceExceeded(Exception):
     pass
hunk ./src/allmydata/storage/crawler.py 15
 
+
 class ShareCrawler(service.MultiService):
hunk ./src/allmydata/storage/crawler.py 17
-    """A ShareCrawler subclass is attached to a StorageServer, and
-    periodically walks all of its shares, processing each one in some
-    fashion. This crawl is rate-limited, to reduce the IO burden on the host,
-    since large servers can easily have a terabyte of shares, in several
-    million files, which can take hours or days to read.
+    """
+    An instance of a subclass of ShareCrawler is attached to a storage
+    backend, and periodically walks the backend's shares, processing them
+    in some fashion. This crawl is rate-limited to reduce the I/O burden on
+    the host, since large servers can easily have a terabyte of shares in
+    several million files, which can take hours or days to read.
 
     Once the crawler starts a cycle, it will proceed at a rate limited by the
     allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor
hunk ./src/allmydata/storage/crawler.py 33
     long enough to ensure that 'minimum_cycle_time' elapses between the start
     of two consecutive cycles.
 
-    We assume that the normal upload/download/get_buckets traffic of a tahoe
+    We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS
     grid will cause the prefixdir contents to be mostly cached in the kernel,
hunk ./src/allmydata/storage/crawler.py 35
-    or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
-    buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
+    or that the number of sharesets in each prefixdir will be small enough to
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 million
+    sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
 
hunk ./src/allmydata/storage/crawler.py 41
-    To use a crawler, create a subclass which implements the process_bucket()
-    method. It will be called with a prefixdir and a base32 storage index
-    string. process_bucket() must run synchronously. Any keys added to
-    self.state will be preserved. Override add_initial_state() to set up
-    initial state keys. Override finished_cycle() to perform additional
-    processing when the cycle is complete. Any status that the crawler
-    produces should be put in the self.state dictionary. Status renderers
-    (like a web page which describes the accomplishments of your crawler)
-    will use crawler.get_state() to retrieve this dictionary; they can
-    present the contents as they see fit.
+    To implement a crawler, create a subclass that implements the
+    process_shareset() method. It will be called with a prefixdir and an
+    object providing the IShareSet interface. process_shareset() must run
+    synchronously. Any keys added to self.state will be preserved. Override
+    add_initial_state() to set up initial state keys. Override
+    finished_cycle() to perform additional processing when the cycle is
+    complete. Any status that the crawler produces should be put in the
+    self.state dictionary. Status renderers (like a web page describing the
+    accomplishments of your crawler) will use crawler.get_state() to retrieve
+    this dictionary; they can present the contents as they see fit.
 
hunk ./src/allmydata/storage/crawler.py 52
-    Then create an instance, with a reference to a StorageServer and a
-    filename where it can store persistent state. The statefile is used to
-    keep track of how far around the ring the process has travelled, as well
-    as timing history to allow the pace to be predicted and controlled. The
-    statefile will be updated and written to disk after each time slice (just
-    before the crawler yields to the reactor), and also after each cycle is
-    finished, and also when stopService() is called. Note that this means
-    that a crawler which is interrupted with SIGKILL while it is in the
-    middle of a time slice will lose progress: the next time the node is
-    started, the crawler will repeat some unknown amount of work.
+    Then create an instance, with a reference to a backend object providing
+    the IStorageBackend interface, and a filename where it can store
+    persistent state. The statefile is used to keep track of how far around
+    the ring the process has travelled, as well as timing history to allow
+    the pace to be predicted and controlled. The statefile will be updated
+    and written to disk after each time slice (just before the crawler yields
+    to the reactor), and also after each cycle is finished, and also when
+    stopService() is called. Note that this means that a crawler that is
+    interrupted with SIGKILL while it is in the middle of a time slice will
+    lose progress: the next time the node is started, the crawler will repeat
+    some unknown amount of work.
 
     The crawler instance must be started with startService() before it will
hunk ./src/allmydata/storage/crawler.py 65
-    do any work. To make it stop doing work, call stopService().
+    do any work. To make it stop doing work, call stopService(). A crawler
+    is usually a child service of a StorageServer, although it should not
+    depend on that.
+
+    For historical reasons, some dictionary key names use the term "bucket"
+    for what is now preferably called a "shareset" (the set of shares that a
+    server holds under a given storage index).
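+
+    A minimal subclass might look roughly like this (illustrative sketch
+    only; the 'share-count' statistics key is made up):
+
+      class ShareCountingCrawler(ShareCrawler):
+          def add_initial_state(self):
+              self.state.setdefault("share-count", 0)
+
+          def process_shareset(self, cycle, prefix, shareset):
+              self.state["share-count"] += len(list(shareset.get_shares()))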
     """
 
     slow_start = 300 # don't start crawling for 5 minutes after startup
hunk ./src/allmydata/storage/crawler.py 80
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
+        precondition(IStorageBackend.providedBy(backend), backend)
         service.MultiService.__init__(self)
hunk ./src/allmydata/storage/crawler.py 83
+        self.backend = backend
+        self.statefp = statefp
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 87
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 91
         self.timer = None
-        self.bucket_cache = (None, [])
+        self.shareset_cache = (None, [])
         self.current_sleep_time = None
         self.next_wake_time = None
         self.last_prefix_finished_time = None
hunk ./src/allmydata/storage/crawler.py 154
                 left = len(self.prefixes) - self.last_complete_prefix_index
                 remaining = left * self.last_prefix_elapsed_time
                 # TODO: remainder of this prefix: we need to estimate the
-                # per-bucket time, probably by measuring the time spent on
-                # this prefix so far, divided by the number of buckets we've
+                # per-shareset time, probably by measuring the time spent on
+                # this prefix so far, divided by the number of sharesets we've
                 # processed.
             d["estimated-cycle-complete-time-left"] = remaining
             # it's possible to call get_progress() from inside a crawler's
hunk ./src/allmydata/storage/crawler.py 175
         state dictionary.
 
         If we are not currently sleeping (i.e. get_state() was called from
-        inside the process_prefixdir, process_bucket, or finished_cycle()
+        inside the process_prefixdir, process_shareset, or finished_cycle()
         methods, or if startService has not yet been called on this crawler),
         these two keys will be None.
 
hunk ./src/allmydata/storage/crawler.py 188
     def load_state(self):
         # we use this to store state for both the crawler's internals and
         # anything the subclass-specific code needs. The state is stored
-        # after each bucket is processed, after each prefixdir is processed,
+        # after each shareset is processed, after each prefixdir is processed,
         # and after a cycle is complete. The internal keys we use are:
         #  ["version"]: int, always 1
         #  ["last-cycle-finished"]: int, or None if we have not yet finished
hunk ./src/allmydata/storage/crawler.py 202
         #                            are sleeping between cycles, or if we
         #                            have not yet finished any prefixdir since
         #                            a cycle was started
-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
-        #                            of the last bucket to be processed, or
-        #                            None if we are sleeping between cycles
+        #  ["last-complete-bucket"]: str, base32 storage index of the last
+        #                            shareset to be processed, or None if we
+        #                            are sleeping between cycles
         try:
hunk ./src/allmydata/storage/crawler.py 206
-            f = open(self.statefile, "rb")
-            state = pickle.load(f)
-            f.close()
+            state = pickle.loads(self.statefp.getContent())
         except EnvironmentError:
             state = {"version": 1,
                      "last-cycle-finished": None,
hunk ./src/allmydata/storage/crawler.py 242
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefile + ".tmp"
-        f = open(tmpfile, "wb")
-        pickle.dump(self.state, f)
-        f.close()
-        fileutil.move_into_place(tmpfile, self.statefile)
+        self.statefp.setContent(pickle.dumps(self.state))
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
hunk ./src/allmydata/storage/crawler.py 284
         sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
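+        # (Arithmetic example: with a 1.0s slice and an allowed_cpu_percentage
+        # of 0.10, sleep_time comes to (1.0 / 0.10) - 1.0 = 9.0 seconds.)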
         # if the math gets weird, or a timequake happens, don't sleep
         # forever. Note that this means that, while a cycle is running, we
-        # will process at least one bucket every 5 minutes, no matter how
-        # long that bucket takes.
+        # will process at least one shareset every 5 minutes, no matter how
+        # long that shareset takes.
         sleep_time = max(0.0, min(sleep_time, 299))
         if finished_cycle:
             # how long should we sleep between cycles? Don't run faster than
hunk ./src/allmydata/storage/crawler.py 315
         for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
             # if we want to yield earlier, just raise TimeSliceExceeded()
             prefix = self.prefixes[i]
-            prefixdir = os.path.join(self.sharedir, prefix)
-            if i == self.bucket_cache[0]:
-                buckets = self.bucket_cache[1]
+            if i == self.shareset_cache[0]:
+                sharesets = self.shareset_cache[1]
             else:
hunk ./src/allmydata/storage/crawler.py 318
-                try:
-                    buckets = os.listdir(prefixdir)
-                    buckets.sort()
-                except EnvironmentError:
-                    buckets = []
-                self.bucket_cache = (i, buckets)
-            self.process_prefixdir(cycle, prefix, prefixdir,
-                                   buckets, start_slice)
+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
+                self.shareset_cache = (i, sharesets)
+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
             self.last_complete_prefix_index = i
 
             now = time.time()
hunk ./src/allmydata/storage/crawler.py 345
         self.finished_cycle(cycle)
         self.save_state()
 
-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
-        """This gets a list of bucket names (i.e. storage index strings,
+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
+        """
+        This gets a list of shareset names (i.e. storage index strings,
         base32-encoded) in sorted order.
 
         You can override this if your crawler doesn't care about the actual
hunk ./src/allmydata/storage/crawler.py 352
         shares, for example a crawler which merely keeps track of how many
-        buckets are being managed by this server.
+        sharesets are being managed by this server.
 
hunk ./src/allmydata/storage/crawler.py 354
-        Subclasses which *do* care about actual bucket should leave this
-        method along, and implement process_bucket() instead.
+        Subclasses that *do* care about the actual sharesets should leave
+        this method alone, and implement process_shareset() instead.
         """
 
hunk ./src/allmydata/storage/crawler.py 358
-        for bucket in buckets:
-            if bucket <= self.state["last-complete-bucket"]:
+        for shareset in sharesets:
+            base32si = shareset.get_storage_index_string()
+            if base32si <= self.state["last-complete-bucket"]:
                 continue
hunk ./src/allmydata/storage/crawler.py 362
-            self.process_bucket(cycle, prefix, prefixdir, bucket)
-            self.state["last-complete-bucket"] = bucket
+            self.process_shareset(cycle, prefix, shareset)
+            self.state["last-complete-bucket"] = base32si
             if time.time() >= start_slice + self.cpu_slice:
                 raise TimeSliceExceeded()
 
hunk ./src/allmydata/storage/crawler.py 370
     # the remaining methods are explictly for subclasses to implement.
 
     def started_cycle(self, cycle):
-        """Notify a subclass that the crawler is about to start a cycle.
+        """
+        Notify a subclass that the crawler is about to start a cycle.
 
         This method is for subclasses to override. No upcall is necessary.
         """
hunk ./src/allmydata/storage/crawler.py 377
         pass
 
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        """Examine a single bucket. Subclasses should do whatever they want
+    def process_shareset(self, cycle, prefix, shareset):
+        """
+        Examine a single shareset. Subclasses should do whatever they want
         to do to the shares therein, then update self.state as necessary.
 
         If the crawler is never interrupted by SIGKILL, this method will be
hunk ./src/allmydata/storage/crawler.py 383
-        called exactly once per share (per cycle). If it *is* interrupted,
+        called exactly once per shareset (per cycle). If it *is* interrupted,
         then the next time the node is started, some amount of work will be
         duplicated, according to when self.save_state() was last called. By
         default, save_state() is called at the end of each timeslice, and
hunk ./src/allmydata/storage/crawler.py 391
 
         To reduce the chance of duplicate work (i.e. to avoid adding multiple
         records to a database), you can call save_state() at the end of your
-        process_bucket() method. This will reduce the maximum duplicated work
-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
-        per bucket (and some disk writes), which will count against your
-        allowed_cpu_percentage, and which may be considerable if
-        process_bucket() runs quickly.
+        process_shareset() method. This will reduce the maximum duplicated
+        work to one shareset per SIGKILL. It will also add overhead, probably
+        1-20ms per shareset (and some disk writes), which will count against
+        your allowed_cpu_percentage, and which may be considerable if
+        process_shareset() runs quickly.
 
         This method is for subclasses to override. No upcall is necessary.
         """
hunk ./src/allmydata/storage/crawler.py 402
         pass
 
     def finished_prefix(self, cycle, prefix):
-        """Notify a subclass that the crawler has just finished processing a
-        prefix directory (all buckets with the same two-character/10bit
+        """
+        Notify a subclass that the crawler has just finished processing a
+        prefix directory (all sharesets with the same two-character/10-bit
         prefix). To impose a limit on how much work might be duplicated by a
         SIGKILL that occurs during a timeslice, you can call
         self.save_state() here, but be aware that it may represent a
hunk ./src/allmydata/storage/crawler.py 415
         pass
 
     def finished_cycle(self, cycle):
-        """Notify subclass that a cycle (one complete traversal of all
+        """
+        Notify subclass that a cycle (one complete traversal of all
         prefixdirs) has just finished. 'cycle' is the number of the cycle
         that just finished. This method should perform summary work and
         update self.state to publish information to status displays.
hunk ./src/allmydata/storage/crawler.py 433
         pass
 
     def yielding(self, sleep_time):
-        """The crawler is about to sleep for 'sleep_time' seconds. This
+        """
+        The crawler is about to sleep for 'sleep_time' seconds. This
         method is mostly for the convenience of unit tests.
 
         This method is for subclasses to override. No upcall is necessary.
hunk ./src/allmydata/storage/crawler.py 443
 
 
 class BucketCountingCrawler(ShareCrawler):
-    """I keep track of how many buckets are being managed by this server.
-    This is equivalent to the number of distributed files and directories for
-    which I am providing storage. The actual number of files+directories in
-    the full grid is probably higher (especially when there are more servers
-    than 'N', the number of generated shares), because some files+directories
-    will have shares on other servers instead of me. Also note that the
-    number of buckets will differ from the number of shares in small grids,
-    when more than one share is placed on a single server.
+    """
+    I keep track of how many sharesets, each corresponding to a storage index,
+    are being managed by this server. This is equivalent to the number of
+    distributed files and directories for which I am providing storage. The
+    actual number of files and directories in the full grid is probably higher
+    (especially when there are more servers than 'N', the number of generated
+    shares), because some files and directories will have shares on other
+    servers instead of me. Also note that the number of sharesets will differ
+    from the number of shares in small grids, when more than one share is
+    placed on a single server.
     """
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
hunk ./src/allmydata/storage/crawler.py 457
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, backend, statefp, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, backend, statefp)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/crawler.py 471
         self.state.setdefault("last-complete-bucket-count", None)
         self.state.setdefault("storage-index-samples", {})
 
-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
         # we override process_prefixdir() because we don't want to look at
hunk ./src/allmydata/storage/crawler.py 473
-        # the individual buckets. We'll save state after each one. On my
+        # the individual sharesets. We'll save state after each one. On my
         # laptop, a mostly-empty storage server can process about 70
         # prefixdirs in a 1.0s slice.
         if cycle not in self.state["bucket-counts"]:
hunk ./src/allmydata/storage/crawler.py 478
             self.state["bucket-counts"][cycle] = {}
-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
         if prefix in self.prefixes[:self.num_sample_prefixes]:
hunk ./src/allmydata/storage/crawler.py 480
-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
 
     def finished_cycle(self, cycle):
         last_counts = self.state["bucket-counts"].get(cycle, [])
hunk ./src/allmydata/storage/crawler.py 486
         if len(last_counts) == len(self.prefixes):
             # great, we have a whole cycle.
-            num_buckets = sum(last_counts.values())
-            self.state["last-complete-bucket-count"] = num_buckets
+            num_sharesets = sum(last_counts.values())
+            self.state["last-complete-bucket-count"] = num_sharesets
             # get rid of old counts
             for old_cycle in list(self.state["bucket-counts"].keys()):
                 if old_cycle != cycle:
hunk ./src/allmydata/storage/crawler.py 494
                     del self.state["bucket-counts"][old_cycle]
         # get rid of old samples too
         for prefix in list(self.state["storage-index-samples"].keys()):
-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
             if old_cycle != cycle:
                 del self.state["storage-index-samples"][prefix]
hunk ./src/allmydata/storage/crawler.py 497
-
hunk ./src/allmydata/storage/expirer.py 1
-import time, os, pickle, struct
+
+import time, pickle, struct
+from twisted.python import log as twlog
+
 from allmydata.storage.crawler import ShareCrawler
hunk ./src/allmydata/storage/expirer.py 6
-from allmydata.storage.shares import get_share_file
-from allmydata.storage.common import UnknownMutableContainerVersionError, \
+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
      UnknownImmutableContainerVersionError
hunk ./src/allmydata/storage/expirer.py 8
-from twisted.python import log as twlog
+
 
 class LeaseCheckingCrawler(ShareCrawler):
     """I examine the leases on all shares, determining which are still valid
hunk ./src/allmydata/storage/expirer.py 17
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 21
-      num-buckets, num-shares, sum of share sizes, real disk usage
+      num-storage-indices, num-shares, sum of share sizes, real disk usage
       ('real disk usage' means we use stat(fn).st_blocks*512 and include any
        space used by the directory)
      what it would have been with the original lease expiration time
hunk ./src/allmydata/storage/expirer.py 32
 
     Space recovered during the last 10 cycles  <-- saved in separate pickle
 
-    Shares/buckets examined:
+    Shares/storage-indices examined:
      this cycle-so-far
      prediction of rest of cycle
      during last 10 cycles <-- separate pickle
hunk ./src/allmydata/storage/expirer.py 42
     Histogram of leases-per-share:
      this-cycle-to-date
      last 10 cycles <-- separate pickle
-    Histogram of lease ages, buckets = 1day
+    Histogram of lease ages, in 1-day-wide bins
      cycle-to-date
      last 10 cycles <-- separate pickle
 
hunk ./src/allmydata/storage/expirer.py 53
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
-                 expiration_enabled, mode,
-                 override_lease_duration, # used if expiration_mode=="age"
-                 cutoff_date, # used if expiration_mode=="cutoff-date"
-                 sharetypes):
-        self.historyfile = historyfile
-        self.expiration_enabled = expiration_enabled
-        self.mode = mode
+    def __init__(self, backend, statefp, historyfp, expiration_policy):
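+        # expiration_policy is a dict with the keys read below; as an
+        # illustrative example:
+        #   {'enabled': False, 'mode': 'age', 'override_lease_duration': None,
+        #    'cutoff_date': None, 'sharetypes': ('mutable', 'immutable')}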
+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
+        self.historyfp = historyfp
+        ShareCrawler.__init__(self, backend, statefp)
+
+        self.expiration_enabled = expiration_policy['enabled']
+        self.mode = expiration_policy['mode']
         self.override_lease_duration = None
         self.cutoff_date = None
         if self.mode == "age":
hunk ./src/allmydata/storage/expirer.py 63
-            assert isinstance(override_lease_duration, (int, type(None)))
-            self.override_lease_duration = override_lease_duration # seconds
+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
         elif self.mode == "cutoff-date":
hunk ./src/allmydata/storage/expirer.py 66
-            assert isinstance(cutoff_date, int) # seconds-since-epoch
-            assert cutoff_date is not None
-            self.cutoff_date = cutoff_date
+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
+            self.cutoff_date = expiration_policy['cutoff_date']
         else:
hunk ./src/allmydata/storage/expirer.py 69
-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
-        self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
+        self.sharetypes_to_expire = expiration_policy['sharetypes']
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/expirer.py 84
             self.state["cycle-to-date"].setdefault(k, so_far[k])
 
         # initialize history
-        if not os.path.exists(self.historyfile):
+        if not self.historyfp.exists():
             history = {} # cyclenum -> dict
hunk ./src/allmydata/storage/expirer.py 86
-            f = open(self.historyfile, "wb")
-            pickle.dump(history, f)
-            f.close()
+            self.historyfp.setContent(pickle.dumps(history))
 
     def create_empty_cycle_dict(self):
         recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/expirer.py 99
 
     def create_empty_recovered_dict(self):
         recovered = {}
+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
         for a in ("actual", "original", "configured", "examined"):
             for b in ("buckets", "shares", "sharebytes", "diskbytes"):
                 recovered[a+"-"+b] = 0
hunk ./src/allmydata/storage/expirer.py 110
     def started_cycle(self, cycle):
         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
 
-    def stat(self, fn):
-        return os.stat(fn)
-
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        bucketdir = os.path.join(prefixdir, storage_index_b32)
-        s = self.stat(bucketdir)
+    def process_storage_index(self, cycle, prefix, container):
         would_keep_shares = []
         wks = None
hunk ./src/allmydata/storage/expirer.py 113
+        sharetype = None
 
hunk ./src/allmydata/storage/expirer.py 115
-        for fn in os.listdir(bucketdir):
-            try:
-                shnum = int(fn)
-            except ValueError:
-                continue # non-numeric means not a sharefile
-            sharefile = os.path.join(bucketdir, fn)
+        for share in container.get_shares():
+            sharetype = share.sharetype
             try:
hunk ./src/allmydata/storage/expirer.py 118
-                wks = self.process_share(sharefile)
+                wks = self.process_share(share)
             except (UnknownMutableContainerVersionError,
                     UnknownImmutableContainerVersionError,
                     struct.error):
hunk ./src/allmydata/storage/expirer.py 122
-                twlog.msg("lease-checker error processing %s" % sharefile)
+                twlog.msg("lease-checker error processing %r" % (share,))
                 twlog.err()
hunk ./src/allmydata/storage/expirer.py 124
-                which = (storage_index_b32, shnum)
+                which = (si_b2a(share.storageindex), share.get_shnum())
                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
                 wks = (1, 1, 1, "unknown")
             would_keep_shares.append(wks)
hunk ./src/allmydata/storage/expirer.py 129
 
-        sharetype = None
+        container_type = None
         if wks:
hunk ./src/allmydata/storage/expirer.py 131
-            # use the last share's sharetype as the buckettype
-            sharetype = wks[3]
+            # use the last share's sharetype as the container type
+            container_type = wks[3]
         rec = self.state["cycle-to-date"]["space-recovered"]
         self.increment(rec, "examined-buckets", 1)
         if sharetype:
hunk ./src/allmydata/storage/expirer.py 136
-            self.increment(rec, "examined-buckets-"+sharetype, 1)
+            self.increment(rec, "examined-buckets-"+container_type, 1)
+
+        container_diskbytes = container.get_overhead()
 
hunk ./src/allmydata/storage/expirer.py 140
-        try:
-            bucket_diskbytes = s.st_blocks * 512
-        except AttributeError:
-            bucket_diskbytes = 0 # no stat().st_blocks on windows
         if sum([wks[0] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 141
-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
+            self.increment_container_space("original", container_diskbytes, sharetype)
         if sum([wks[1] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 143
-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
+            self.increment_container_space("configured", container_diskbytes, sharetype)
         if sum([wks[2] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 145
-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
+            self.increment_container_space("actual", container_diskbytes, sharetype)
 
hunk ./src/allmydata/storage/expirer.py 147
-    def process_share(self, sharefilename):
-        # first, find out what kind of a share it is
-        sf = get_share_file(sharefilename)
-        sharetype = sf.sharetype
+    def process_share(self, share):
+        sharetype = share.sharetype
         now = time.time()
hunk ./src/allmydata/storage/expirer.py 150
-        s = self.stat(sharefilename)
+        sharebytes = share.get_size()
+        diskbytes = share.get_used_space()
 
         num_leases = 0
         num_valid_leases_original = 0
hunk ./src/allmydata/storage/expirer.py 158
         num_valid_leases_configured = 0
         expired_leases_configured = []
 
-        for li in sf.get_leases():
+        for li in share.get_leases():
             num_leases += 1
             original_expiration_time = li.get_expiration_time()
             grant_renew_time = li.get_grant_renew_time_time()
hunk ./src/allmydata/storage/expirer.py 171
 
             #  expired-or-not according to our configured age limit
             expired = False
-            if self.mode == "age":
-                age_limit = original_expiration_time
-                if self.override_lease_duration is not None:
-                    age_limit = self.override_lease_duration
-                if age > age_limit:
-                    expired = True
-            else:
-                assert self.mode == "cutoff-date"
-                if grant_renew_time < self.cutoff_date:
-                    expired = True
-            if sharetype not in self.sharetypes_to_expire:
-                expired = False
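+            # Only share types covered by the expiration policy are candidates
+            # for expiry; all other shares keep their leases.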
+            if sharetype in self.sharetypes_to_expire:
+                if self.mode == "age":
+                    age_limit = original_expiration_time
+                    if self.override_lease_duration is not None:
+                        age_limit = self.override_lease_duration
+                    if age > age_limit:
+                        expired = True
+                else:
+                    assert self.mode == "cutoff-date"
+                    if grant_renew_time < self.cutoff_date:
+                        expired = True
 
             if expired:
                 expired_leases_configured.append(li)
hunk ./src/allmydata/storage/expirer.py 190
 
         so_far = self.state["cycle-to-date"]
         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
-        self.increment_space("examined", s, sharetype)
+        self.increment_space("examined", diskbytes, sharetype)
 
         would_keep_share = [1, 1, 1, sharetype]
 
hunk ./src/allmydata/storage/expirer.py 196
         if self.expiration_enabled:
             for li in expired_leases_configured:
-                sf.cancel_lease(li.cancel_secret)
+                share.cancel_lease(li.cancel_secret)
 
         if num_valid_leases_original == 0:
             would_keep_share[0] = 0
hunk ./src/allmydata/storage/expirer.py 200
-            self.increment_space("original", s, sharetype)
+            self.increment_space("original", sharebytes, diskbytes, sharetype)
 
         if num_valid_leases_configured == 0:
             would_keep_share[1] = 0
hunk ./src/allmydata/storage/expirer.py 204
-            self.increment_space("configured", s, sharetype)
+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
             if self.expiration_enabled:
                 would_keep_share[2] = 0
hunk ./src/allmydata/storage/expirer.py 207
-                self.increment_space("actual", s, sharetype)
+                self.increment_space("actual", sharebytes, diskbytes, sharetype)
 
         return would_keep_share
 
hunk ./src/allmydata/storage/expirer.py 211
-    def increment_space(self, a, s, sharetype):
-        sharebytes = s.st_size
-        try:
-            # note that stat(2) says that st_blocks is 512 bytes, and that
-            # st_blksize is "optimal file sys I/O ops blocksize", which is
-            # independent of the block-size that st_blocks uses.
-            diskbytes = s.st_blocks * 512
-        except AttributeError:
-            # the docs say that st_blocks is only on linux. I also see it on
-            # MacOS. But it isn't available on windows.
-            diskbytes = sharebytes
+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
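+        # 'sharebytes' and 'diskbytes' are now supplied by the caller (taken
+        # from the share's get_size() and get_used_space()), rather than being
+        # derived from a stat() of the share file.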
         so_far_sr = self.state["cycle-to-date"]["space-recovered"]
         self.increment(so_far_sr, a+"-shares", 1)
         self.increment(so_far_sr, a+"-sharebytes", sharebytes)
hunk ./src/allmydata/storage/expirer.py 221
             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
 
-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
+    def increment_container_space(self, a, container_diskbytes, container_type):
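+        # Accumulate per-container (bucket) statistics; 'container_type' is
+        # the sharetype of the last share examined in the container.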
         rec = self.state["cycle-to-date"]["space-recovered"]
hunk ./src/allmydata/storage/expirer.py 223
-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
+        self.increment(rec, a+"-diskbytes", container_diskbytes)
         self.increment(rec, a+"-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 225
-        if sharetype:
-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
-            self.increment(rec, a+"-buckets-"+sharetype, 1)
+        if container_type:
+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
+            self.increment(rec, a+"-buckets-"+container_type, 1)
 
     def increment(self, d, k, delta=1):
         if k not in d:
hunk ./src/allmydata/storage/expirer.py 281
         # copy() needs to become a deepcopy
         h["space-recovered"] = s["space-recovered"].copy()
 
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.loads(self.historyfp.getContent())
         history[cycle] = h
         while len(history) > 10:
             oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 286
             del history[oldcycles[0]]
-        f = open(self.historyfile, "wb")
-        pickle.dump(history, f)
-        f.close()
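+        # Write the pickled history via the FilePath API instead of a raw file.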
+        self.historyfp.setContent(pickle.dumps(history))
 
     def get_state(self):
         """In addition to the crawler state described in
hunk ./src/allmydata/storage/expirer.py 355
         progress = self.get_progress()
 
         state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.loads(self.historyfp.getContent())
         state["history"] = history
 
         if not progress["cycle-in-progress"]:
hunk ./src/allmydata/storage/lease.py 3
 import struct, time
 
+
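+# Assumed usage: raised when an operation such as lease renewal refers to a
+# lease that is not present on the share.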
+class NonExistentLeaseError(Exception):
+    pass
+
 class LeaseInfo:
     def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None,
                  expiration_time=None, nodeid=None):
hunk ./src/allmydata/storage/lease.py 21
 
     def get_expiration_time(self):
         return self.expiration_time
+
     def get_grant_renew_time_time(self):
         # hack, based upon fixed 31day expiration period
         return self.expiration_time - 31*24*60*60
hunk ./src/allmydata/storage/lease.py 25
+
     def get_age(self):
         return time.time() - self.get_grant_renew_time_time()
 
hunk ./src/allmydata/storage/lease.py 36
          self.expiration_time) = struct.unpack(">L32s32sL", data)
         self.nodeid = None
         return self
+
     def to_immutable_data(self):
         return struct.pack(">L32s32sL",
                            self.owner_num,
hunk ./src/allmydata/storage/lease.py 49
                            int(self.expiration_time),
                            self.renew_secret, self.cancel_secret,
                            self.nodeid)
+
     def from_mutable_data(self, data):
         (self.owner_num,
          self.expiration_time,
hunk ./src/allmydata/storage/server.py 1
-import os, re, weakref, struct, time
+import weakref, time
 
 from foolscap.api import Referenceable
 from twisted.application import service
hunk ./src/allmydata/storage/server.py 7
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
-from allmydata.util import fileutil, idlib, log, time_format
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
+from allmydata.util.assertutil import precondition
+from allmydata.util import idlib, log
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 12
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import si_a2b, si_b2a
+[si_a2b]  # hush pyflakes
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/server.py 15
-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
-     create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
-from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
hunk ./src/allmydata/storage/server.py 16
-
-# storage/
-# storage/shares/incoming
-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
-# storage/shares/$START/$STORAGEINDEX
-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
-
-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
-# base-32 chars).
-
-# $SHARENUM matches this regex:
-NUM_RE=re.compile("^[0-9]+$")
-
+from allmydata.storage.crawler import BucketCountingCrawler
 
 
 class StorageServer(service.MultiService, Referenceable):
hunk ./src/allmydata/storage/server.py 21
     implements(RIStorageServer, IStatsProducer)
+
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
hunk ./src/allmydata/storage/server.py 24
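+    # Default policy used when no expiration_policy argument is given to
+    # __init__ (see _setup_lease_checker below).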
+    DEFAULT_EXPIRATION_POLICY = {
+        'enabled': False,
+        'mode': 'age',
+        'override_lease_duration': None,
+        'cutoff_date': None,
+        'sharetypes': ('mutable', 'immutable'),
+    }
 
hunk ./src/allmydata/storage/server.py 32
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, serverid, backend, statedir,
                  stats_provider=None,
hunk ./src/allmydata/storage/server.py 34
-                 expiration_enabled=False,
-                 expiration_mode="age",
-                 expiration_override_lease_duration=None,
-                 expiration_cutoff_date=None,
-                 expiration_sharetypes=("mutable", "immutable")):
+                 expiration_policy=None):
         service.MultiService.__init__(self)
hunk ./src/allmydata/storage/server.py 36
-        assert isinstance(nodeid, str)
-        assert len(nodeid) == 20
-        self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
+        precondition(IStorageBackend.providedBy(backend), backend)
+        precondition(isinstance(serverid, str), serverid)
+        precondition(len(serverid) == 20, serverid)
+
+        self._serverid = serverid
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 44
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 45
+        self.backend = backend
+        self.backend.setServiceParent(self)
+        self._statedir = statedir
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 50
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 61
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
+        self._setup_bucket_counter()
+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)
 
     def __repr__(self):
hunk ./src/allmydata/storage/server.py 65
-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)
 
hunk ./src/allmydata/storage/server.py 67
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
+    def _setup_bucket_counter(self):
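+        # The bucket counter now crawls the backend rather than the local
+        # filesystem; its state file lives under the server's state directory.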
+        statefp = self._statedir.child("bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
         self.bucket_counter.setServiceParent(self)
 
hunk ./src/allmydata/storage/server.py 72
+    def _setup_lease_checker(self, expiration_policy):
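+        # Both the lease checker's state and its history are kept under the
+        # server's state directory.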
+        statefp = self._statedir.child("lease_checker.state")
+        historyfp = self._statedir.child("lease_checker.history")
+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
+        self.lease_checker.setServiceParent(self)
+
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 92
         """Return a dict, indexed by category, that contains a dict of
         latency numbers for each category. If there are sufficient samples
         for unambiguous interpretation, each dict will contain the
-        following keys: mean, 01_0_percentile, 10_0_percentile,
+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
         99_0_percentile, 99_9_percentile.  If there are insufficient
         samples for a given percentile to be interpreted unambiguously
hunk ./src/allmydata/storage/server.py 114
             else:
                 stats["mean"] = None
 
-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
+                             (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100), \
                              (0.999, "99_9_percentile", 1000)]
 
             for percentile, percentilestring, minnumtoobserve in orderstatlist:
hunk ./src/allmydata/storage/server.py 133
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
+    def get_serverid(self):
+        return self._serverid
 
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
hunk ./src/allmydata/storage/server.py 138
-        # contains numeric values.
+        # contains numeric or None values.
         stats = { 'storage_server.allocated': self.allocated_size(), }
hunk ./src/allmydata/storage/server.py 140
-        stats['storage_server.reserved_space'] = self.reserved_space
         for category,ld in self.get_latencies().items():
             for name,v in ld.items():
                 stats['storage_server.latencies.%s.%s' % (category, name)] = v
hunk ./src/allmydata/storage/server.py 144
 
-        try:
-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
-            writeable = disk['avail'] > 0
-
-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
-            stats['storage_server.disk_total'] = disk['total']
-            stats['storage_server.disk_used'] = disk['used']
-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
-            stats['storage_server.disk_avail'] = disk['avail']
-        except AttributeError:
-            writeable = True
-        except EnvironmentError:
-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
-            writeable = False
-
-        if self.readonly_storage:
-            stats['storage_server.disk_avail'] = 0
-            writeable = False
+        self.backend.fill_in_space_stats(stats)
 
hunk ./src/allmydata/storage/server.py 146
-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
         s = self.bucket_counter.get_state()
         bucket_count = s.get("last-complete-bucket-count")
         if bucket_count:
hunk ./src/allmydata/storage/server.py 153
         return stats
 
     def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
+        return self.backend.get_available_space()
 
     def allocated_size(self):
         space = 0
hunk ./src/allmydata/storage/server.py 162
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 178
                     }
         return version
 
-    def remote_allocate_buckets(self, storage_index,
+    def remote_allocate_buckets(self, storageindex,
                                 renew_secret, cancel_secret,
                                 sharenums, allocated_size,
                                 canary, owner_num=0):
hunk ./src/allmydata/storage/server.py 182
+        # cancel_secret is no longer used.
         # owner_num is not for clients to set, but rather it should be
hunk ./src/allmydata/storage/server.py 184
-        # curried into the PersonalStorageServer instance that is dedicated
-        # to a particular owner.
+        # curried into a StorageServer instance dedicated to a particular
+        # owner.
         start = time.time()
         self.count("allocate")
hunk ./src/allmydata/storage/server.py 188
-        alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
hunk ./src/allmydata/storage/server.py 189
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 190
+        si_s = si_b2a(storageindex)
         log.msg("storage: allocate_buckets %s" % si_s)
 
hunk ./src/allmydata/storage/server.py 193
-        # in this implementation, the lease information (including secrets)
-        # goes into the share files themselves. It could also be put into a
-        # separate database. Note that the lease should not be added until
-        # the BucketWriter has been closed.
+        # Note that the lease should not be added until the BucketWriter
+        # has been closed.
         expire_time = time.time() + 31*24*60*60
hunk ./src/allmydata/storage/server.py 196
-        lease_info = LeaseInfo(owner_num,
-                               renew_secret, cancel_secret,
-                               expire_time, self.my_nodeid)
+        lease_info = LeaseInfo(owner_num, renew_secret,
+                               expiration_time=expire_time, nodeid=self._serverid)
 
         max_space_per_bucket = allocated_size
 
hunk ./src/allmydata/storage/server.py 201
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
hunk ./src/allmydata/storage/server.py 204
-            # this is a bit conservative, since some of this allocated_size()
-            # has already been written to disk, where it will show up in
+            # This is a bit conservative, since some of this allocated_size()
+            # has already been written to the backend, where it will show up in
             # get_available_space.
             remaining_space -= self.allocated_size()
hunk ./src/allmydata/storage/server.py 208
-        # self.readonly_storage causes remaining_space <= 0
+            # If the backend is read-only, remaining_space will be <= 0.
+
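+        # The shareset represents all shares (and incoming shares) that this
+        # server holds for the given storage index.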
+        shareset = self.backend.get_shareset(storageindex)
 
hunk ./src/allmydata/storage/server.py 212
-        # fill alreadygot with all shares that we have, not just the ones
+        # Fill alreadygot with all shares that we have, not just the ones
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
hunk ./src/allmydata/storage/server.py 215
-        # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
-            alreadygot.add(shnum)
-            sf = ShareFile(fn)
-            sf.add_or_renew_lease(lease_info)
+        # file, they'll want us to hold leases for all the shares of it.
+        #
+        # XXX should we be making the assumption here that lease info is
+        # duplicated in all shares?
+        alreadygot = set()
+        for share in shareset.get_shares():
+            share.add_or_renew_lease(lease_info)
+            alreadygot.add(share.shnum)
 
hunk ./src/allmydata/storage/server.py 224
-        for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
-                # great! we already have it. easy.
-                pass
-            elif os.path.exists(incominghome):
+        for shnum in sharenums - alreadygot:
+            if shareset.has_incoming(shnum):
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 232
                 # uploader will use different storage servers.
                 pass
             elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
+                bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket,
+                                                 lease_info, canary)
                 bucketwriters[shnum] = bw
                 self._active_writers[bw] = 1
                 if limited:
hunk ./src/allmydata/storage/server.py 239
                     remaining_space -= max_space_per_bucket
             else:
-                # bummer! not enough space to accept this bucket
+                # Bummer! Not enough space to accept this share.
                 pass
 
hunk ./src/allmydata/storage/server.py 242
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
-
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
 
hunk ./src/allmydata/storage/server.py 245
-    def _iter_share_files(self, storage_index):
-        for shnum, filename in self._get_bucket_shares(storage_index):
-            f = open(filename, 'rb')
-            header = f.read(32)
-            f.close()
-            if header[:32] == MutableShareFile.MAGIC:
-                sf = MutableShareFile(filename, self)
-                # note: if the share has been migrated, the renew_lease()
-                # call will throw an exception, with information to help the
-                # client update the lease.
-            elif header[:4] == struct.pack(">L", 1):
-                sf = ShareFile(filename)
-            else:
-                continue # non-sharefile
-            yield sf
-
-    def remote_add_lease(self, storage_index, renew_secret, cancel_secret,
+    def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
                          owner_num=1):
hunk ./src/allmydata/storage/server.py 247
+        # cancel_secret is no longer used.
         start = time.time()
         self.count("add-lease")
         new_expire_time = time.time() + 31*24*60*60
hunk ./src/allmydata/storage/server.py 251
-        lease_info = LeaseInfo(owner_num,
-                               renew_secret, cancel_secret,
-                               new_expire_time, self.my_nodeid)
-        for sf in self._iter_share_files(storage_index):
-            sf.add_or_renew_lease(lease_info)
-        self.add_latency("add-lease", time.time() - start)
-        return None
+        lease_info = LeaseInfo(owner_num, renew_secret,
+                               expiration_time=new_expire_time, nodeid=self._serverid)
 
hunk ./src/allmydata/storage/server.py 254
-    def remote_renew_lease(self, storage_index, renew_secret):
+        try:
+            shareset = self.backend.get_shareset(storageindex)
+            for share in shareset.get_shares():
+                share.add_or_renew_lease(lease_info)
+        finally:
+            self.add_latency("add-lease", time.time() - start)
+
+    def remote_renew_lease(self, storageindex, renew_secret):
         start = time.time()
         self.count("renew")
hunk ./src/allmydata/storage/server.py 262
-        new_expire_time = time.time() + 31*24*60*60
-        found_buckets = False
-        for sf in self._iter_share_files(storage_index):
-            found_buckets = True
-            sf.renew_lease(renew_secret, new_expire_time)
-        self.add_latency("renew", time.time() - start)
-        if not found_buckets:
-            raise IndexError("no such lease to renew")
+
+        try:
+            shareset = self.backend.get_shareset(storageindex)
+            new_expiration_time = start + 31*24*60*60   # one month from now
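+            # The shareset is expected to raise an error if there is no lease
+            # matching renew_secret (this replaces the old IndexError path).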
+            shareset.renew_lease(renew_secret, new_expiration_time)
+        finally:
+            self.add_latency("renew", time.time() - start)
 
     def bucket_writer_closed(self, bw, consumed_size):
         if self.stats_provider:
hunk ./src/allmydata/storage/server.py 275
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
-
-    def remote_get_buckets(self, storage_index):
+    def remote_get_buckets(self, storageindex):
         start = time.time()
         self.count("get")
hunk ./src/allmydata/storage/server.py 278
-        si_s = si_b2a(storage_index)
+        si_s = si_b2a(storageindex)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
hunk ./src/allmydata/storage/server.py 281
-        for shnum, filename in self._get_bucket_shares(storage_index):
-            bucketreaders[shnum] = BucketReader(self, filename,
-                                                storage_index, shnum)
-        self.add_latency("get", time.time() - start)
-        return bucketreaders
 
hunk ./src/allmydata/storage/server.py 282
-    def get_leases(self, storage_index):
-        """Provide an iterator that yields all of the leases attached to this
-        bucket. Each lease is returned as a LeaseInfo instance.
+        try:
+            shareset = self.backend.get_shareset(storageindex)
+            for share in shareset.get_shares():
+                bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share)
+            return bucketreaders
+        finally:
+            self.add_latency("get", time.time() - start)
 
hunk ./src/allmydata/storage/server.py 290
-        This method is not for client use.
+    def get_leases(self, storageindex):
         """
hunk ./src/allmydata/storage/server.py 292
+        Provide an iterator that yields all of the leases attached to this
+        bucket. Each lease is returned as a LeaseInfo instance.
 
hunk ./src/allmydata/storage/server.py 295
-        # since all shares get the same lease data, we just grab the leases
-        # from the first share
-        try:
-            shnum, filename = self._get_bucket_shares(storage_index).next()
-            sf = ShareFile(filename)
-            return sf.get_leases()
-        except StopIteration:
-            return iter([])
+        This method is not for client use. XXX do we need it at all?
+        """
+        return self.backend.get_shareset(storageindex).get_leases()
 
hunk ./src/allmydata/storage/server.py 299
-    def remote_slot_testv_and_readv_and_writev(self, storage_index,
+    def remote_slot_testv_and_readv_and_writev(self, storageindex,
                                                secrets,
                                                test_and_write_vectors,
                                                read_vector):
hunk ./src/allmydata/storage/server.py 305
         start = time.time()
         self.count("writev")
-        si_s = si_b2a(storage_index)
+        si_s = si_b2a(storageindex)
         log.msg("storage: slot_writev %s" % si_s)
hunk ./src/allmydata/storage/server.py 307
-        si_dir = storage_index_to_dir(storage_index)
-        (write_enabler, renew_secret, cancel_secret) = secrets
-        # shares exist if there is a file for them
-        bucketdir = os.path.join(self.sharedir, si_dir)
-        shares = {}
-        if os.path.isdir(bucketdir):
-            for sharenum_s in os.listdir(bucketdir):
-                try:
-                    sharenum = int(sharenum_s)
-                except ValueError:
-                    continue
-                filename = os.path.join(bucketdir, sharenum_s)
-                msf = MutableShareFile(filename, self)
-                msf.check_write_enabler(write_enabler, si_s)
-                shares[sharenum] = msf
-        # write_enabler is good for all existing shares.
-
-        # Now evaluate test vectors.
-        testv_is_good = True
-        for sharenum in test_and_write_vectors:
-            (testv, datav, new_length) = test_and_write_vectors[sharenum]
-            if sharenum in shares:
-                if not shares[sharenum].check_testv(testv):
-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
-                    testv_is_good = False
-                    break
-            else:
-                # compare the vectors against an empty share, in which all
-                # reads return empty strings.
-                if not EmptyShare().check_testv(testv):
-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
-                                                                testv))
-                    testv_is_good = False
-                    break
-
-        # now gather the read vectors, before we do any writes
-        read_data = {}
-        for sharenum, share in shares.items():
-            read_data[sharenum] = share.readv(read_vector)
-
-        ownerid = 1 # TODO
-        expire_time = time.time() + 31*24*60*60   # one month
-        lease_info = LeaseInfo(ownerid,
-                               renew_secret, cancel_secret,
-                               expire_time, self.my_nodeid)
-
-        if testv_is_good:
-            # now apply the write vectors
-            for sharenum in test_and_write_vectors:
-                (testv, datav, new_length) = test_and_write_vectors[sharenum]
-                if new_length == 0:
-                    if sharenum in shares:
-                        shares[sharenum].unlink()
-                else:
-                    if sharenum not in shares:
-                        # allocate a new share
-                        allocated_size = 2000 # arbitrary, really
-                        share = self._allocate_slot_share(bucketdir, secrets,
-                                                          sharenum,
-                                                          allocated_size,
-                                                          owner_num=0)
-                        shares[sharenum] = share
-                    shares[sharenum].writev(datav, new_length)
-                    # and update the lease
-                    shares[sharenum].add_or_renew_lease(lease_info)
-
-            if new_length == 0:
-                # delete empty bucket directories
-                if not os.listdir(bucketdir):
-                    os.rmdir(bucketdir)
 
hunk ./src/allmydata/storage/server.py 308
+        try:
+            shareset = self.backend.get_shareset(storageindex)
+            expiration_time = start + 31*24*60*60   # one month from now
+            return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors,
+                                                       read_vector, expiration_time)
+        finally:
+            self.add_latency("writev", time.time() - start)
 
hunk ./src/allmydata/storage/server.py 316
-        # all done
-        self.add_latency("writev", time.time() - start)
-        return (testv_is_good, read_data)
-
-    def _allocate_slot_share(self, bucketdir, secrets, sharenum,
-                             allocated_size, owner_num=0):
-        (write_enabler, renew_secret, cancel_secret) = secrets
-        my_nodeid = self.my_nodeid
-        fileutil.make_dirs(bucketdir)
-        filename = os.path.join(bucketdir, "%d" % sharenum)
-        share = create_mutable_sharefile(filename, my_nodeid, write_enabler,
-                                         self)
-        return share
-
-    def remote_slot_readv(self, storage_index, shares, readv):
+    def remote_slot_readv(self, storageindex, shares, readv):
         start = time.time()
         self.count("readv")
hunk ./src/allmydata/storage/server.py 319
-        si_s = si_b2a(storage_index)
-        lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
-                     facility="tahoe.storage", level=log.OPERATIONAL)
-        si_dir = storage_index_to_dir(storage_index)
-        # shares exist if there is a file for them
-        bucketdir = os.path.join(self.sharedir, si_dir)
-        if not os.path.isdir(bucketdir):
+        si_s = si_b2a(storageindex)
+        log.msg("storage: slot_readv %s %s" % (si_s, shares),
+                facility="tahoe.storage", level=log.OPERATIONAL)
+
+        try:
+            shareset = self.backend.get_shareset(storageindex)
+            return shareset.readv(self, shares, readv)
+        finally:
             self.add_latency("readv", time.time() - start)
hunk ./src/allmydata/storage/server.py 328
-            return {}
-        datavs = {}
-        for sharenum_s in os.listdir(bucketdir):
-            try:
-                sharenum = int(sharenum_s)
-            except ValueError:
-                continue
-            if sharenum in shares or not shares:
-                filename = os.path.join(bucketdir, sharenum_s)
-                msf = MutableShareFile(filename, self)
-                datavs[sharenum] = msf.readv(readv)
-        log.msg("returning shares %s" % (datavs.keys(),),
-                facility="tahoe.storage", level=log.NOISY, parent=lp)
-        self.add_latency("readv", time.time() - start)
-        return datavs
 
hunk ./src/allmydata/storage/server.py 329
-    def remote_advise_corrupt_share(self, share_type, storage_index, shnum,
-                                    reason):
-        fileutil.make_dirs(self.corruption_advisory_dir)
-        now = time_format.iso_utc(sep="T")
-        si_s = si_b2a(storage_index)
-        # windows can't handle colons in the filename
-        fn = os.path.join(self.corruption_advisory_dir,
-                          "%s--%s-%d" % (now, si_s, shnum)).replace(":","")
-        f = open(fn, "w")
-        f.write("report: Share Corruption\n")
-        f.write("type: %s\n" % share_type)
-        f.write("storage_index: %s\n" % si_s)
-        f.write("share_number: %d\n" % shnum)
-        f.write("\n")
-        f.write(reason)
-        f.write("\n")
-        f.close()
-        log.msg(format=("client claims corruption in (%(share_type)s) " +
-                        "%(si)s-%(shnum)d: %(reason)s"),
-                share_type=share_type, si=si_s, shnum=shnum, reason=reason,
-                level=log.SCARY, umid="SGx2fA")
-        return None
+    def remote_advise_corrupt_share(self, share_type, storage_index, shnum, reason):
+        self.backend.advise_corrupt_share(share_type, storage_index, shnum, reason)
hunk ./src/allmydata/test/common.py 20
 from allmydata.mutable.common import CorruptShareError
 from allmydata.mutable.layout import unpack_header
 from allmydata.mutable.publish import MutableData
-from allmydata.storage.mutable import MutableShareFile
+from allmydata.storage.backends.disk.mutable import MutableDiskShare
 from allmydata.util import hashutil, log, fileutil, pollmixin
 from allmydata.util.assertutil import precondition
 from allmydata.util.consumer import download_to_data
hunk ./src/allmydata/test/common.py 1297
 
 def _corrupt_mutable_share_data(data, debug=False):
     prefix = data[:32]
-    assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC)
-    data_offset = MutableShareFile.DATA_OFFSET
+    assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC)
+    data_offset = MutableDiskShare.DATA_OFFSET
     sharetype = data[data_offset:data_offset+1]
     assert sharetype == "\x00", "non-SDMF mutable shares not supported"
     (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
hunk ./src/allmydata/test/no_network.py 21
 from twisted.application import service
 from twisted.internet import defer, reactor
 from twisted.python.failure import Failure
+from twisted.python.filepath import FilePath
 from foolscap.api import Referenceable, fireEventually, RemoteException
 from base64 import b32encode
hunk ./src/allmydata/test/no_network.py 24
+
 from allmydata import uri as tahoe_uri
 from allmydata.client import Client
hunk ./src/allmydata/test/no_network.py 27
-from allmydata.storage.server import StorageServer, storage_index_to_dir
+from allmydata.storage.server import StorageServer
+from allmydata.storage.backends.disk.disk_backend import DiskBackend
 from allmydata.util import fileutil, idlib, hashutil
 from allmydata.util.hashutil import sha1
 from allmydata.test.common_web import HTTPClientGETFactory
hunk ./src/allmydata/test/no_network.py 155
             seed = server.get_permutation_seed()
             return sha1(peer_selection_index + seed).digest()
         return sorted(self.get_connected_servers(), key=_permuted)
+
     def get_connected_servers(self):
         return self.client._servers
hunk ./src/allmydata/test/no_network.py 158
+
     def get_nickname_for_serverid(self, serverid):
         return None
 
hunk ./src/allmydata/test/no_network.py 162
+    def get_known_servers(self):
+        return self.get_connected_servers()
+
+    def get_all_serverids(self):
+        return self.client.get_all_serverids()
+
+
 class NoNetworkClient(Client):
     def create_tub(self):
         pass
hunk ./src/allmydata/test/no_network.py 262
 
     def make_server(self, i, readonly=False):
         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
-        serverdir = os.path.join(self.basedir, "servers",
-                                 idlib.shortnodeid_b2a(serverid), "storage")
-        fileutil.make_dirs(serverdir)
-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
-                           readonly_storage=readonly)
+        storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage")
+
+        # The backend will make the storage directory and any necessary parents.
+        backend = DiskBackend(storagedir, readonly=readonly)
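+        # 'storagedir' is also passed as the server's state directory, so
+        # crawler state files are kept alongside the backend's storage.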
+        ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats())
         ss._no_network_server_number = i
         return ss
 
hunk ./src/allmydata/test/no_network.py 276
         middleman = service.MultiService()
         middleman.setServiceParent(self)
         ss.setServiceParent(middleman)
-        serverid = ss.my_nodeid
+        serverid = ss.get_serverid()
         self.servers_by_number[i] = ss
         wrapper = wrap_storage_server(ss)
         self.wrappers_by_id[serverid] = wrapper
hunk ./src/allmydata/test/no_network.py 295
         # it's enough to remove the server from c._servers (we don't actually
         # have to detach and stopService it)
         for i,ss in self.servers_by_number.items():
-            if ss.my_nodeid == serverid:
+            if ss.get_serverid() == serverid:
                 del self.servers_by_number[i]
                 break
         del self.wrappers_by_id[serverid]
hunk ./src/allmydata/test/no_network.py 345
     def get_clientdir(self, i=0):
         return self.g.clients[i].basedir
 
+    def get_server(self, i):
+        return self.g.servers_by_number[i]
+
     def get_serverdir(self, i):
hunk ./src/allmydata/test/no_network.py 349
-        return self.g.servers_by_number[i].storedir
+        return self.g.servers_by_number[i].backend.storedir
+
+    def remove_server(self, i):
+        self.g.remove_server(self.g.servers_by_number[i].get_serverid())
 
     def iterate_servers(self):
         for i in sorted(self.g.servers_by_number.keys()):
hunk ./src/allmydata/test/no_network.py 357
             ss = self.g.servers_by_number[i]
-            yield (i, ss, ss.storedir)
+            yield (i, ss, ss.backend.storedir)
 
     def find_uri_shares(self, uri):
         si = tahoe_uri.from_string(uri).get_storage_index()
hunk ./src/allmydata/test/no_network.py 361
-        prefixdir = storage_index_to_dir(si)
         shares = []
         for i,ss in self.g.servers_by_number.items():
hunk ./src/allmydata/test/no_network.py 363
-            serverid = ss.my_nodeid
-            basedir = os.path.join(ss.sharedir, prefixdir)
-            if not os.path.exists(basedir):
-                continue
-            for f in os.listdir(basedir):
-                try:
-                    shnum = int(f)
-                    shares.append((shnum, serverid, os.path.join(basedir, f)))
-                except ValueError:
-                    pass
+            for share in ss.backend.get_shareset(si).get_shares():
+                shares.append((share.get_shnum(), ss.get_serverid(), share._home))
         return sorted(shares)
 
hunk ./src/allmydata/test/no_network.py 367
+    def count_leases(self, uri):
+        """Return (filename, leasecount) pairs in arbitrary order."""
+        si = tahoe_uri.from_string(uri).get_storage_index()
+        lease_counts = []
+        for i,ss in self.g.servers_by_number.items():
+            for share in ss.backend.get_shareset(si).get_shares():
+                num_leases = len(list(share.get_leases()))
+                lease_counts.append( (share._home.path, num_leases) )
+        return lease_counts
+
     def copy_shares(self, uri):
         shares = {}
hunk ./src/allmydata/test/no_network.py 379
-        for (shnum, serverid, sharefile) in self.find_uri_shares(uri):
-            shares[sharefile] = open(sharefile, "rb").read()
+        for (shnum, serverid, sharefp) in self.find_uri_shares(uri):
+            shares[sharefp.path] = sharefp.getContent()
         return shares
 
hunk ./src/allmydata/test/no_network.py 383
+    def copy_share(self, from_share, uri, to_server):
+        si = tahoe_uri.from_string(uri).get_storage_index()
+        (i_shnum, i_serverid, i_sharefp) = from_share
+        shares_dir = to_server.backend.get_shareset(si)._sharehomedir
+        i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
+
     def restore_all_shares(self, shares):
hunk ./src/allmydata/test/no_network.py 390
-        for sharefile, data in shares.items():
-            open(sharefile, "wb").write(data)
+        for sharepath, data in shares.items():
+            FilePath(sharepath).setContent(data)
 
hunk ./src/allmydata/test/no_network.py 393
-    def delete_share(self, (shnum, serverid, sharefile)):
-        os.unlink(sharefile)
+    def delete_share(self, (shnum, serverid, sharefp)):
+        sharefp.remove()
 
     def delete_shares_numbered(self, uri, shnums):
hunk ./src/allmydata/test/no_network.py 397
-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
             if i_shnum in shnums:
hunk ./src/allmydata/test/no_network.py 399
-                os.unlink(i_sharefile)
+                i_sharefp.remove()
 
hunk ./src/allmydata/test/no_network.py 401
-    def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function):
-        sharedata = open(sharefile, "rb").read()
-        corruptdata = corruptor_function(sharedata)
-        open(sharefile, "wb").write(corruptdata)
+    def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False):
+        sharedata = sharefp.getContent()
+        corruptdata = corruptor_function(sharedata, debug=debug)
+        sharefp.setContent(corruptdata)
 
     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
hunk ./src/allmydata/test/no_network.py 407
-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
             if i_shnum in shnums:
hunk ./src/allmydata/test/no_network.py 409
-                sharedata = open(i_sharefile, "rb").read()
-                corruptdata = corruptor(sharedata, debug=debug)
-                open(i_sharefile, "wb").write(corruptdata)
+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
 
     def corrupt_all_shares(self, uri, corruptor, debug=False):
hunk ./src/allmydata/test/no_network.py 412
-        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
-            sharedata = open(i_sharefile, "rb").read()
-            corruptdata = corruptor(sharedata, debug=debug)
-            open(i_sharefile, "wb").write(corruptdata)
+        for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri):
+            self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug)
 
     def GET(self, urlpath, followRedirect=False, return_response=False,
             method="GET", clientnum=0, **kwargs):
hunk ./src/allmydata/test/test_download.py 6
 # a previous run. This asserts that the current code is capable of decoding
 # shares from a previous version.
 
-import os
 from twisted.trial import unittest
 from twisted.internet import defer, reactor
 from allmydata import uri
hunk ./src/allmydata/test/test_download.py 9
-from allmydata.storage.server import storage_index_to_dir
 from allmydata.util import base32, fileutil, spans, log, hashutil
 from allmydata.util.consumer import download_to_data, MemoryConsumer
 from allmydata.immutable import upload, layout
hunk ./src/allmydata/test/test_download.py 85
         u = upload.Data(plaintext, None)
         d = self.c0.upload(u)
         f = open("stored_shares.py", "w")
-        def _created_immutable(ur):
-            # write the generated shares and URI to a file, which can then be
-            # incorporated into this one next time.
-            f.write('immutable_uri = "%s"\n' % ur.uri)
-            f.write('immutable_shares = {\n')
-            si = uri.from_string(ur.uri).get_storage_index()
-            si_dir = storage_index_to_dir(si)
+
+        def _write_py(u):
+            si = uri.from_string(u).get_storage_index()
             for (i,ss,ssdir) in self.iterate_servers():
hunk ./src/allmydata/test/test_download.py 89
-                sharedir = os.path.join(ssdir, "shares", si_dir)
                 shares = {}
hunk ./src/allmydata/test/test_download.py 90
-                for fn in os.listdir(sharedir):
-                    shnum = int(fn)
-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
-                    shares[shnum] = sharedata
-                fileutil.rm_dir(sharedir)
+                shareset = ss.backend.get_shareset(si)
+                for share in shareset.get_shares():
+                    sharedata = share._home.getContent()
+                    shares[share.get_shnum()] = sharedata
+
+                fileutil.fp_remove(shareset._sharehomedir)
                 if shares:
                     f.write(' %d: { # client[%d]\n' % (i, i))
                     for shnum in sorted(shares.keys()):
hunk ./src/allmydata/test/test_download.py 103
                                 (shnum, base32.b2a(shares[shnum])))
                     f.write('    },\n')
             f.write('}\n')
-            f.write('\n')
 
hunk ./src/allmydata/test/test_download.py 104
+        def _created_immutable(ur):
+            # write the generated shares and URI to a file, which can then be
+            # incorporated into this one next time.
+            f.write('immutable_uri = "%s"\n' % ur.uri)
+            f.write('immutable_shares = {\n')
+            _write_py(ur.uri)
+            f.write('\n')
         d.addCallback(_created_immutable)
 
         d.addCallback(lambda ignored:
hunk ./src/allmydata/test/test_download.py 118
         def _created_mutable(n):
             f.write('mutable_uri = "%s"\n' % n.get_uri())
             f.write('mutable_shares = {\n')
-            si = uri.from_string(n.get_uri()).get_storage_index()
-            si_dir = storage_index_to_dir(si)
-            for (i,ss,ssdir) in self.iterate_servers():
-                sharedir = os.path.join(ssdir, "shares", si_dir)
-                shares = {}
-                for fn in os.listdir(sharedir):
-                    shnum = int(fn)
-                    sharedata = open(os.path.join(sharedir, fn), "rb").read()
-                    shares[shnum] = sharedata
-                fileutil.rm_dir(sharedir)
-                if shares:
-                    f.write(' %d: { # client[%d]\n' % (i, i))
-                    for shnum in sorted(shares.keys()):
-                        f.write('  %d: base32.a2b("%s"),\n' %
-                                (shnum, base32.b2a(shares[shnum])))
-                    f.write('    },\n')
-            f.write('}\n')
-
-            f.close()
+            _write_py(n.get_uri())
         d.addCallback(_created_mutable)
 
         def _done(ignored):
hunk ./src/allmydata/test/test_download.py 123
             f.close()
-        d.addCallback(_done)
+        d.addBoth(_done)
 
         return d
 
hunk ./src/allmydata/test/test_download.py 127
+    def _write_shares(self, fileuri, shares):
+        si = uri.from_string(fileuri).get_storage_index()
+        for i in shares:
+            shares_for_server = shares[i]
+            for shnum in shares_for_server:
+                share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
+                fileutil.fp_make_dirs(share_dir)
+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
+
     def load_shares(self, ignored=None):
         # this uses the data generated by create_shares() to populate the
         # storage servers with pre-generated shares
hunk ./src/allmydata/test/test_download.py 139
-        si = uri.from_string(immutable_uri).get_storage_index()
-        si_dir = storage_index_to_dir(si)
-        for i in immutable_shares:
-            shares = immutable_shares[i]
-            for shnum in shares:
-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
-                fileutil.make_dirs(dn)
-                fn = os.path.join(dn, str(shnum))
-                f = open(fn, "wb")
-                f.write(shares[shnum])
-                f.close()
-
-        si = uri.from_string(mutable_uri).get_storage_index()
-        si_dir = storage_index_to_dir(si)
-        for i in mutable_shares:
-            shares = mutable_shares[i]
-            for shnum in shares:
-                dn = os.path.join(self.get_serverdir(i), "shares", si_dir)
-                fileutil.make_dirs(dn)
-                fn = os.path.join(dn, str(shnum))
-                f = open(fn, "wb")
-                f.write(shares[shnum])
-                f.close()
+        self._write_shares(immutable_uri, immutable_shares)
+        self._write_shares(mutable_uri, mutable_shares)
 
     def download_immutable(self, ignored=None):
         n = self.c0.create_node_from_uri(immutable_uri)
hunk ./src/allmydata/test/test_download.py 183
 
         self.load_shares()
         si = uri.from_string(immutable_uri).get_storage_index()
-        si_dir = storage_index_to_dir(si)
 
         n = self.c0.create_node_from_uri(immutable_uri)
         d = download_to_data(n)
hunk ./src/allmydata/test/test_download.py 198
                 for clientnum in immutable_shares:
                     for shnum in immutable_shares[clientnum]:
                         if s._shnum == shnum:
-                            fn = os.path.join(self.get_serverdir(clientnum),
-                                              "shares", si_dir, str(shnum))
-                            os.unlink(fn)
+                            share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
+                            share_dir.child(str(shnum)).remove()
         d.addCallback(_clobber_some_shares)
         d.addCallback(lambda ign: download_to_data(n))
         d.addCallback(_got_data)
hunk ./src/allmydata/test/test_download.py 212
                 for shnum in immutable_shares[clientnum]:
                     if shnum == save_me:
                         continue
-                    fn = os.path.join(self.get_serverdir(clientnum),
-                                      "shares", si_dir, str(shnum))
-                    if os.path.exists(fn):
-                        os.unlink(fn)
+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
+                    fileutil.fp_remove(share_dir.child(str(shnum)))
             # now the download should fail with NotEnoughSharesError
             return self.shouldFail(NotEnoughSharesError, "1shares", None,
                                    download_to_data, n)
hunk ./src/allmydata/test/test_download.py 223
             # delete the last remaining share
             for clientnum in immutable_shares:
                 for shnum in immutable_shares[clientnum]:
-                    fn = os.path.join(self.get_serverdir(clientnum),
-                                      "shares", si_dir, str(shnum))
-                    if os.path.exists(fn):
-                        os.unlink(fn)
+                    share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
+                    share_dir.child(str(shnum)).remove()
             # now a new download should fail with NoSharesError. We want a
             # new ImmutableFileNode so it will forget about the old shares.
             # If we merely called create_node_from_uri() without first
hunk ./src/allmydata/test/test_download.py 801
         # will report two shares, and the ShareFinder will handle the
         # duplicate by attaching both to the same CommonShare instance.
         si = uri.from_string(immutable_uri).get_storage_index()
-        si_dir = storage_index_to_dir(si)
-        sh0_file = [sharefile
-                    for (shnum, serverid, sharefile)
-                    in self.find_uri_shares(immutable_uri)
-                    if shnum == 0][0]
-        sh0_data = open(sh0_file, "rb").read()
+        sh0_fp = [sharefp for (shnum, serverid, sharefp)
+                          in self.find_uri_shares(immutable_uri)
+                          if shnum == 0][0]
+        sh0_data = sh0_fp.getContent()
         for clientnum in immutable_shares:
             if 0 in immutable_shares[clientnum]:
                 continue
hunk ./src/allmydata/test/test_download.py 808
-            cdir = self.get_serverdir(clientnum)
-            target = os.path.join(cdir, "shares", si_dir, "0")
-            outf = open(target, "wb")
-            outf.write(sh0_data)
-            outf.close()
+            cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
+            fileutil.fp_make_dirs(cdir)
+            cdir.child("0").setContent(sh0_data)
 
         d = self.download_immutable()
         return d
hunk ./src/allmydata/test/test_encode.py 134
         d.addCallback(_try)
         return d
 
-    def get_share_hashes(self, at_least_these=()):
+    def get_share_hashes(self):
         d = self._start()
         def _try(unused=None):
             if self.mode == "bad sharehash":
hunk ./src/allmydata/test/test_hung_server.py 3
 # -*- coding: utf-8 -*-
 
-import os, shutil
 from twisted.trial import unittest
 from twisted.internet import defer
hunk ./src/allmydata/test/test_hung_server.py 5
-from allmydata import uri
+
 from allmydata.util.consumer import download_to_data
 from allmydata.immutable import upload
 from allmydata.mutable.common import UnrecoverableFileError
hunk ./src/allmydata/test/test_hung_server.py 10
 from allmydata.mutable.publish import MutableData
-from allmydata.storage.common import storage_index_to_dir
 from allmydata.test.no_network import GridTestMixin
 from allmydata.test.common import ShouldFailMixin
 from allmydata.util.pollmixin import PollMixin
hunk ./src/allmydata/test/test_hung_server.py 18
 immutable_plaintext = "data" * 10000
 mutable_plaintext = "muta" * 10000
 
+
 class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin,
                              unittest.TestCase):
     # Many of these tests take around 60 seconds on François's ARM buildslave:
hunk ./src/allmydata/test/test_hung_server.py 31
     timeout = 240
 
     def _break(self, servers):
-        for (id, ss) in servers:
-            self.g.break_server(id)
+        for ss in servers:
+            self.g.break_server(ss.get_serverid())
 
     def _hang(self, servers, **kwargs):
hunk ./src/allmydata/test/test_hung_server.py 35
-        for (id, ss) in servers:
-            self.g.hang_server(id, **kwargs)
+        for ss in servers:
+            self.g.hang_server(ss.get_serverid(), **kwargs)
 
     def _unhang(self, servers, **kwargs):
hunk ./src/allmydata/test/test_hung_server.py 39
-        for (id, ss) in servers:
-            self.g.unhang_server(id, **kwargs)
+        for ss in servers:
+            self.g.unhang_server(ss.get_serverid(), **kwargs)
 
     def _hang_shares(self, shnums, **kwargs):
         # hang all servers who are holding the given shares
hunk ./src/allmydata/test/test_hung_server.py 52
                     hung_serverids.add(i_serverid)
 
     def _delete_all_shares_from(self, servers):
-        serverids = [id for (id, ss) in servers]
-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
+        serverids = [ss.get_serverid() for ss in servers]
+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
hunk ./src/allmydata/test/test_hung_server.py 55
-                os.unlink(i_sharefile)
+                i_sharefp.remove()
 
     def _corrupt_all_shares_in(self, servers, corruptor_func):
hunk ./src/allmydata/test/test_hung_server.py 58
-        serverids = [id for (id, ss) in servers]
-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
+        serverids = [ss.get_serverid() for ss in servers]
+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
hunk ./src/allmydata/test/test_hung_server.py 61
-                self._corrupt_share((i_shnum, i_sharefile), corruptor_func)
+                self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
 
     def _copy_all_shares_from(self, from_servers, to_server):
hunk ./src/allmydata/test/test_hung_server.py 64
-        serverids = [id for (id, ss) in from_servers]
-        for (i_shnum, i_serverid, i_sharefile) in self.shares:
+        serverids = [ss.get_serverid() for ss in from_servers]
+        for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
hunk ./src/allmydata/test/test_hung_server.py 67
-                self._copy_share((i_shnum, i_sharefile), to_server)
+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
 
hunk ./src/allmydata/test/test_hung_server.py 69
-    def _copy_share(self, share, to_server):
-        (sharenum, sharefile) = share
-        (id, ss) = to_server
-        shares_dir = os.path.join(ss.original.storedir, "shares")
-        si = uri.from_string(self.uri).get_storage_index()
-        si_dir = os.path.join(shares_dir, storage_index_to_dir(si))
-        if not os.path.exists(si_dir):
-            os.makedirs(si_dir)
-        new_sharefile = os.path.join(si_dir, str(sharenum))
-        shutil.copy(sharefile, new_sharefile)
         self.shares = self.find_uri_shares(self.uri)
hunk ./src/allmydata/test/test_hung_server.py 70
-        # Make sure that the storage server has the share.
-        self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
-                        in self.shares)
-
-    def _corrupt_share(self, share, corruptor_func):
-        (sharenum, sharefile) = share
-        data = open(sharefile, "rb").read()
-        newdata = corruptor_func(data)
-        os.unlink(sharefile)
-        wf = open(sharefile, "wb")
-        wf.write(newdata)
-        wf.close()
 
     def _set_up(self, mutable, testdir, num_clients=1, num_servers=10):
         self.mutable = mutable
hunk ./src/allmydata/test/test_hung_server.py 82
 
         self.c0 = self.g.clients[0]
         nm = self.c0.nodemaker
-        self.servers = sorted([(s.get_serverid(), s.get_rref())
-                               for s in nm.storage_broker.get_connected_servers()])
+        unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()]
+        self.servers = [ss for (id, ss) in sorted(unsorted)]
         self.servers = self.servers[5:] + self.servers[:5]
 
         if mutable:
hunk ./src/allmydata/test/test_hung_server.py 244
             # stuck-but-not-overdue, and 4 live requests. All 4 live requests
             # will retire before the download is complete and the ShareFinder
             # is shut off. That will leave 4 OVERDUE and 1
-            # stuck-but-not-overdue, for a total of 5 requests in in
+            # stuck-but-not-overdue, for a total of 5 requests in
             # _sf.pending_requests
             for t in self._sf.overdue_timers.values()[:4]:
                 t.reset(-1.0)
hunk ./src/allmydata/test/test_mutable.py 21
 from foolscap.api import eventually, fireEventually
 from foolscap.logging import log
 from allmydata.storage_client import StorageFarmBroker
-from allmydata.storage.common import storage_index_to_dir
 from allmydata.scripts import debug
 
 from allmydata.mutable.filenode import MutableFileNode, BackoffAgent
hunk ./src/allmydata/test/test_mutable.py 3670
         # Now execute each assignment by writing the storage.
         for (share, servernum) in assignments:
             sharedata = base64.b64decode(self.sdmf_old_shares[share])
-            storedir = self.get_serverdir(servernum)
-            storage_path = os.path.join(storedir, "shares",
-                                        storage_index_to_dir(si))
-            fileutil.make_dirs(storage_path)
-            fileutil.write(os.path.join(storage_path, "%d" % share),
-                           sharedata)
+            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
+            fileutil.fp_make_dirs(storage_dir)
+            storage_dir.child("%d" % share).setContent(sharedata)
         # ...and verify that the shares are there.
         shares = self.find_uri_shares(self.sdmf_old_cap)
         assert len(shares) == 10
hunk ./src/allmydata/test/test_provisioning.py 13
 from nevow import inevow
 from zope.interface import implements
 
-class MyRequest:
+class MockRequest:
     implements(inevow.IRequest)
     pass
 
hunk ./src/allmydata/test/test_provisioning.py 26
     def test_load(self):
         pt = provisioning.ProvisioningTool()
         self.fields = {}
-        #r = MyRequest()
+        #r = MockRequest()
         #r.fields = self.fields
         #ctx = RequestContext()
         #unfilled = pt.renderSynchronously(ctx)
hunk ./src/allmydata/test/test_repairer.py 537
         # happiness setting.
         def _delete_some_servers(ignored):
             for i in xrange(7):
-                self.g.remove_server(self.g.servers_by_number[i].my_nodeid)
+                self.remove_server(i)
 
             assert len(self.g.servers_by_number) == 3
 
hunk ./src/allmydata/test/test_storage.py 14
 from allmydata import interfaces
 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
 from allmydata.storage.server import StorageServer
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import BucketWriter, BucketReader
-from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \
+from allmydata.storage.backends.disk.mutable import MutableDiskShare
+from allmydata.storage.bucket import BucketWriter, BucketReader
+from allmydata.storage.common import DataTooLargeError, \
      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.crawler import BucketCountingCrawler
hunk ./src/allmydata/test/test_storage.py 474
         w[0].remote_write(0, "\xff"*10)
         w[0].remote_close()
 
-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
-        f = open(fn, "rb+")
+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
+        f = fp.open("rb+")
         f.seek(0)
         f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
         f.close()
hunk ./src/allmydata/test/test_storage.py 814
     def test_bad_magic(self):
         ss = self.create("test_bad_magic")
         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
-        fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0")
-        f = open(fn, "rb+")
+        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
+        f = fp.open("rb+")
         f.seek(0)
         f.write("BAD MAGIC")
         f.close()
hunk ./src/allmydata/test/test_storage.py 842
 
         # Trying to make the container too large (by sending a write vector
         # whose offset is too high) will raise an exception.
-        TOOBIG = MutableShareFile.MAX_SIZE + 10
+        TOOBIG = MutableDiskShare.MAX_SIZE + 10
         self.failUnlessRaises(DataTooLargeError,
                               rstaraw, "si1", secrets,
                               {0: ([], [(TOOBIG,data)], None)},
hunk ./src/allmydata/test/test_storage.py 1229
 
         # create a random non-numeric file in the bucket directory, to
         # exercise the code that's supposed to ignore those.
-        bucket_dir = os.path.join(self.workdir("test_leases"),
-                                  "shares", storage_index_to_dir("si1"))
-        f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w")
-        f.write("you ought to be ignoring me\n")
-        f.close()
+        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
+        bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
 
hunk ./src/allmydata/test/test_storage.py 1232
-        s0 = MutableShareFile(os.path.join(bucket_dir, "0"))
+        s0 = MutableDiskShare("si1", 0, bucket_dir.child("0"))
         self.failUnlessEqual(len(list(s0.get_leases())), 1)
 
         # add-lease on a missing storage index is silently ignored
hunk ./src/allmydata/test/test_storage.py 3118
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         # add a non-sharefile to exercise another code path
-        fn = os.path.join(ss.sharedir,
-                          storage_index_to_dir(immutable_si_0),
-                          "not-a-share")
-        f = open(fn, "wb")
-        f.write("I am not a share.\n")
-        f.close()
+        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
+        fp.setContent("I am not a share.\n")
 
         # this is before the crawl has started, so we're not in a cycle yet
         initial_state = lc.get_state()
hunk ./src/allmydata/test/test_storage.py 3282
     def test_expire_age(self):
         basedir = "storage/LeaseCrawler/expire_age"
         fileutil.make_dirs(basedir)
-        # setting expiration_time to 2000 means that any lease which is more
-        # than 2000s old will be expired.
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
-                                       expiration_enabled=True,
-                                       expiration_mode="age",
-                                       expiration_override_lease_duration=2000)
+        # setting 'override_lease_duration' to 2000 means that any lease that
+        # is more than 2000 seconds old will be expired.
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'age',
+            'override_lease_duration': 2000,
+            'sharetypes': ('mutable', 'immutable'),
+        }
+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
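
The hunks above replace the old expiration_* keyword arguments with a single expiration_policy dict. As a hedged illustration (not part of the patch), the two modes exercised by these tests can be written as follows, using only keys that already appear in the tests:

    # Sketch only: mirrors the policy dicts used in the tests above.
    import time

    # 'age' mode: expire any lease older than override_lease_duration seconds.
    age_policy = {
        'enabled': True,
        'mode': 'age',
        'override_lease_duration': 2000,
        'sharetypes': ('mutable', 'immutable'),
    }

    # 'cutoff-date' mode: expire leases older than the given cutoff timestamp.
    cutoff_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'cutoff_date': int(time.time() - 2000),
        'sharetypes': ('immutable',),
    }
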
hunk ./src/allmydata/test/test_storage.py 3423
     def test_expire_cutoff_date(self):
         basedir = "storage/LeaseCrawler/expire_cutoff_date"
         fileutil.make_dirs(basedir)
-        # setting cutoff-date to 2000 seconds ago means that any lease which
-        # is more than 2000s old will be expired.
+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
+        # is more than 2000 seconds old will be expired.
         now = time.time()
         then = int(now - 2000)
hunk ./src/allmydata/test/test_storage.py 3427
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20,
-                                       expiration_enabled=True,
-                                       expiration_mode="cutoff-date",
-                                       expiration_cutoff_date=then)
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'cutoff-date',
+            'cutoff_date': then,
+            'sharetypes': ('mutable', 'immutable'),
+        }
+        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3575
     def test_only_immutable(self):
         basedir = "storage/LeaseCrawler/only_immutable"
         fileutil.make_dirs(basedir)
+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
+        # is more than 2000 seconds old will be expired.
         now = time.time()
         then = int(now - 2000)
hunk ./src/allmydata/test/test_storage.py 3579
-        ss = StorageServer(basedir, "\x00" * 20,
-                           expiration_enabled=True,
-                           expiration_mode="cutoff-date",
-                           expiration_cutoff_date=then,
-                           expiration_sharetypes=("immutable",))
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'cutoff-date',
+            'cutoff_date': then,
+            'sharetypes': ('immutable',),
+        }
+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
         lc = ss.lease_checker
         lc.slow_start = 0
         webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3636
     def test_only_mutable(self):
         basedir = "storage/LeaseCrawler/only_mutable"
         fileutil.make_dirs(basedir)
+        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
+        # is more than 2000 seconds old will be expired.
         now = time.time()
         then = int(now - 2000)
hunk ./src/allmydata/test/test_storage.py 3640
-        ss = StorageServer(basedir, "\x00" * 20,
-                           expiration_enabled=True,
-                           expiration_mode="cutoff-date",
-                           expiration_cutoff_date=then,
-                           expiration_sharetypes=("mutable",))
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'cutoff-date',
+            'cutoff_date': then,
+            'sharetypes': ('mutable',),
+        }
+        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
         lc = ss.lease_checker
         lc.slow_start = 0
         webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3819
     def test_no_st_blocks(self):
         basedir = "storage/LeaseCrawler/no_st_blocks"
         fileutil.make_dirs(basedir)
-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20,
-                                        expiration_mode="age",
-                                        expiration_override_lease_duration=-1000)
-        # a negative expiration_time= means the "configured-"
+        # A negative 'override_lease_duration' means that the "configured-"
         # space-recovered counts will be non-zero, since all shares will have
hunk ./src/allmydata/test/test_storage.py 3821
-        # expired by then
+        # expired by then.
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'age',
+            'override_lease_duration': -1000,
+            'sharetypes': ('mutable', 'immutable'),
+        }
+        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
 
         # make it start sooner than usual.
         lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3877
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
         first = min(self.sis)
         first_b32 = base32.b2a(first)
-        fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0")
-        f = open(fn, "rb+")
+        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
+        f = fp.open("rb+")
         f.seek(0)
         f.write("BAD MAGIC")
         f.close()
hunk ./src/allmydata/test/test_storage.py 3890
 
         # also create an empty bucket
         empty_si = base32.b2a("\x04"*16)
-        empty_bucket_dir = os.path.join(ss.sharedir,
-                                        storage_index_to_dir(empty_si))
-        fileutil.make_dirs(empty_bucket_dir)
+        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
+        fileutil.fp_make_dirs(empty_bucket_dir)
 
         ss.setServiceParent(self.s)
 
hunk ./src/allmydata/test/test_system.py 10
 
 import allmydata
 from allmydata import uri
-from allmydata.storage.mutable import MutableShareFile
+from allmydata.storage.backends.disk.mutable import MutableDiskShare
 from allmydata.storage.server import si_a2b
 from allmydata.immutable import offloaded, upload
 from allmydata.immutable.literal import LiteralFileNode
hunk ./src/allmydata/test/test_system.py 421
         return shares
 
     def _corrupt_mutable_share(self, filename, which):
-        msf = MutableShareFile(filename)
+        msf = MutableDiskShare(filename)
         datav = msf.readv([ (0, 1000000) ])
         final_share = datav[0]
         assert len(final_share) < 1000000 # ought to be truncated
hunk ./src/allmydata/test/test_upload.py 22
 from allmydata.util.happinessutil import servers_of_happiness, \
                                          shares_by_server, merge_servers
 from allmydata.storage_client import StorageFarmBroker
-from allmydata.storage.server import storage_index_to_dir
 
 MiB = 1024*1024
 
hunk ./src/allmydata/test/test_upload.py 821
 
     def _copy_share_to_server(self, share_number, server_number):
         ss = self.g.servers_by_number[server_number]
-        # Copy share i from the directory associated with the first
-        # storage server to the directory associated with this one.
-        assert self.g, "I tried to find a grid at self.g, but failed"
-        assert self.shares, "I tried to find shares at self.shares, but failed"
-        old_share_location = self.shares[share_number][2]
-        new_share_location = os.path.join(ss.storedir, "shares")
-        si = uri.from_string(self.uri).get_storage_index()
-        new_share_location = os.path.join(new_share_location,
-                                          storage_index_to_dir(si))
-        if not os.path.exists(new_share_location):
-            os.makedirs(new_share_location)
-        new_share_location = os.path.join(new_share_location,
-                                          str(share_number))
-        if old_share_location != new_share_location:
-            shutil.copy(old_share_location, new_share_location)
-        shares = self.find_uri_shares(self.uri)
-        # Make sure that the storage server has the share.
-        self.failUnless((share_number, ss.my_nodeid, new_share_location)
-                        in shares)
+        self.copy_share(self.shares[share_number], self.uri, ss)
 
     def _setup_grid(self):
         """
hunk ./src/allmydata/test/test_upload.py 1103
                 self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
         # Remove the first server, and add a placeholder with share 0
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=4, share_number=0))
         # Now try uploading.
hunk ./src/allmydata/test/test_upload.py 1134
         d.addCallback(lambda ign:
             self._add_server(server_number=4))
         d.addCallback(_copy_shares)
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1196
                 self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
         # Remove server 0, and add another in its place
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=4, share_number=0,
                                         readonly=True))
hunk ./src/allmydata/test/test_upload.py 1237
             for i in xrange(1, 10):
                 self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         def _reset_encoding_parameters(ign, happy=4):
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
hunk ./src/allmydata/test/test_upload.py 1273
         # remove the original server
         # (necessary to ensure that the Tahoe2ServerSelector will distribute
         #  all the shares)
-        def _remove_server(ign):
-            server = self.g.servers_by_number[0]
-            self.g.remove_server(server.my_nodeid)
-        d.addCallback(_remove_server)
+        d.addCallback(lambda ign: self.remove_server(0))
         # This should succeed; we still have 4 servers, and the
         # happiness of the upload is 4.
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1285
         d.addCallback(lambda ign:
             self._setup_and_upload())
         d.addCallback(_do_server_setup)
-        d.addCallback(_remove_server)
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(lambda ign:
             self.shouldFail(UploadUnhappinessError,
                             "test_dropped_servers_in_encoder",
hunk ./src/allmydata/test/test_upload.py 1307
             self._add_server_with_share(4, 7, readonly=True)
             self._add_server_with_share(5, 8, readonly=True)
         d.addCallback(_do_server_setup_2)
-        d.addCallback(_remove_server)
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(lambda ign:
             self._do_upload_with_broken_servers(1))
         d.addCallback(_set_basedir)
hunk ./src/allmydata/test/test_upload.py 1314
         d.addCallback(lambda ign:
             self._setup_and_upload())
         d.addCallback(_do_server_setup_2)
-        d.addCallback(_remove_server)
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(lambda ign:
             self.shouldFail(UploadUnhappinessError,
                             "test_dropped_servers_in_encoder",
hunk ./src/allmydata/test/test_upload.py 1528
             for i in xrange(1, 10):
                 self._copy_share_to_server(i, 1)
         d.addCallback(_copy_shares)
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         def _prepare_client(ign):
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
hunk ./src/allmydata/test/test_upload.py 1550
         def _setup(ign):
             for i in xrange(1, 11):
                 self._add_server(server_number=i)
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+            self.remove_server(0)
             c = self.g.clients[0]
             # We set happy to an unsatisfiable value so that we can check the
             # counting in the exception message. The same progress message
hunk ./src/allmydata/test/test_upload.py 1577
                 self._add_server(server_number=i)
             self._add_server(server_number=11, readonly=True)
             self._add_server(server_number=12, readonly=True)
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+            self.remove_server(0)
             c = self.g.clients[0]
             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
             return c
hunk ./src/allmydata/test/test_upload.py 1605
             # the first one that the selector sees.
             for i in xrange(10):
                 self._copy_share_to_server(i, 9)
-            # Remove server 0, and its contents
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+            self.remove_server(0)
             # Make happiness unsatisfiable
             c = self.g.clients[0]
             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
hunk ./src/allmydata/test/test_upload.py 1625
         def _then(ign):
             for i in xrange(1, 11):
                 self._add_server(server_number=i, readonly=True)
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+            self.remove_server(0)
             c = self.g.clients[0]
             c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
hunk ./src/allmydata/test/test_upload.py 1661
             self._add_server(server_number=4, readonly=True))
         d.addCallback(lambda ign:
             self._add_server(server_number=5, readonly=True))
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         def _reset_encoding_parameters(ign, happy=4):
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
hunk ./src/allmydata/test/test_upload.py 1696
         d.addCallback(lambda ign:
             self._add_server(server_number=2))
         def _break_server_2(ign):
-            serverid = self.g.servers_by_number[2].my_nodeid
+            serverid = self.get_server(2).get_serverid()
             self.g.break_server(serverid)
         d.addCallback(_break_server_2)
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1705
             self._add_server(server_number=4, readonly=True))
         d.addCallback(lambda ign:
             self._add_server(server_number=5, readonly=True))
-        d.addCallback(lambda ign:
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign: self.remove_server(0))
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
hunk ./src/allmydata/test/test_upload.py 1816
             # Copy shares
             self._copy_share_to_server(1, 1)
             self._copy_share_to_server(2, 1)
-            # Remove server 0
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+            self.remove_server(0)
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
             return client
hunk ./src/allmydata/test/test_upload.py 1930
                                         readonly=True)
             self._add_server_with_share(server_number=4, share_number=3,
                                         readonly=True)
-            # Remove server 0.
-            self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+            self.remove_server(0)
             # Set the client appropriately
             c = self.g.clients[0]
             c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
hunk ./src/allmydata/test/test_util.py 9
 from twisted.trial import unittest
 from twisted.internet import defer, reactor
 from twisted.python.failure import Failure
+from twisted.python.filepath import FilePath
 from twisted.python import log
 from pycryptopp.hash.sha256 import SHA256 as _hash
 
hunk ./src/allmydata/test/test_util.py 508
                 os.chdir(saved_cwd)
 
     def test_disk_stats(self):
-        avail = fileutil.get_available_space('.', 2**14)
+        avail = fileutil.get_available_space(FilePath('.'), 2**14)
         if avail == 0:
             raise unittest.SkipTest("This test will spuriously fail if there is no disk space left.")
 
hunk ./src/allmydata/test/test_util.py 512
-        disk = fileutil.get_disk_stats('.', 2**13)
+        disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
         self.failUnless(disk['total'] > 0, disk['total'])
         self.failUnless(disk['used'] > 0, disk['used'])
         self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
hunk ./src/allmydata/test/test_util.py 521
 
     def test_disk_stats_avail_nonnegative(self):
         # This test will spuriously fail if you have more than 2^128
-        # bytes of available space on your filesystem.
-        disk = fileutil.get_disk_stats('.', 2**128)
+        # bytes of available space on your filesystem (lucky you).
+        disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
         self.failUnlessEqual(disk['avail'], 0)
 
 class PollMixinTests(unittest.TestCase):
hunk ./src/allmydata/test/test_web.py 12
 from twisted.python import failure, log
 from nevow import rend
 from allmydata import interfaces, uri, webish, dirnode
-from allmydata.storage.shares import get_share_file
 from allmydata.storage_client import StorageFarmBroker
 from allmydata.immutable import upload
 from allmydata.immutable.downloader.status import DownloadStatus
hunk ./src/allmydata/test/test_web.py 4111
             good_shares = self.find_uri_shares(self.uris["good"])
             self.failUnlessReallyEqual(len(good_shares), 10)
             sick_shares = self.find_uri_shares(self.uris["sick"])
-            os.unlink(sick_shares[0][2])
+            sick_shares[0][2].remove()
             dead_shares = self.find_uri_shares(self.uris["dead"])
             for i in range(1, 10):
hunk ./src/allmydata/test/test_web.py 4114
-                os.unlink(dead_shares[i][2])
+                dead_shares[i][2].remove()
             c_shares = self.find_uri_shares(self.uris["corrupt"])
             cso = CorruptShareOptions()
             cso.stdout = StringIO()
hunk ./src/allmydata/test/test_web.py 4118
-            cso.parseOptions([c_shares[0][2]])
+            cso.parseOptions([c_shares[0][2].path])
             corrupt_share(cso)
         d.addCallback(_clobber_shares)
 
hunk ./src/allmydata/test/test_web.py 4253
             good_shares = self.find_uri_shares(self.uris["good"])
             self.failUnlessReallyEqual(len(good_shares), 10)
             sick_shares = self.find_uri_shares(self.uris["sick"])
-            os.unlink(sick_shares[0][2])
+            sick_shares[0][2].remove()
             dead_shares = self.find_uri_shares(self.uris["dead"])
             for i in range(1, 10):
hunk ./src/allmydata/test/test_web.py 4256
-                os.unlink(dead_shares[i][2])
+                dead_shares[i][2].remove()
             c_shares = self.find_uri_shares(self.uris["corrupt"])
             cso = CorruptShareOptions()
             cso.stdout = StringIO()
hunk ./src/allmydata/test/test_web.py 4260
-            cso.parseOptions([c_shares[0][2]])
+            cso.parseOptions([c_shares[0][2].path])
             corrupt_share(cso)
         d.addCallback(_clobber_shares)
 
hunk ./src/allmydata/test/test_web.py 4319
 
         def _clobber_shares(ignored):
             sick_shares = self.find_uri_shares(self.uris["sick"])
-            os.unlink(sick_shares[0][2])
+            sick_shares[0][2].remove()
         d.addCallback(_clobber_shares)
 
         d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
hunk ./src/allmydata/test/test_web.py 4811
             good_shares = self.find_uri_shares(self.uris["good"])
             self.failUnlessReallyEqual(len(good_shares), 10)
             sick_shares = self.find_uri_shares(self.uris["sick"])
-            os.unlink(sick_shares[0][2])
+            sick_shares[0][2].remove()
             #dead_shares = self.find_uri_shares(self.uris["dead"])
             #for i in range(1, 10):
hunk ./src/allmydata/test/test_web.py 4814
-            #    os.unlink(dead_shares[i][2])
+            #    dead_shares[i][2].remove()
 
             #c_shares = self.find_uri_shares(self.uris["corrupt"])
             #cso = CorruptShareOptions()
hunk ./src/allmydata/test/test_web.py 4819
             #cso.stdout = StringIO()
-            #cso.parseOptions([c_shares[0][2]])
+            #cso.parseOptions([c_shares[0][2].path])
             #corrupt_share(cso)
         d.addCallback(_clobber_shares)
 
hunk ./src/allmydata/test/test_web.py 4870
         d.addErrback(self.explain_web_error)
         return d
 
-    def _count_leases(self, ignored, which):
-        u = self.uris[which]
-        shares = self.find_uri_shares(u)
-        lease_counts = []
-        for shnum, serverid, fn in shares:
-            sf = get_share_file(fn)
-            num_leases = len(list(sf.get_leases()))
-            lease_counts.append( (fn, num_leases) )
-        return lease_counts
-
-    def _assert_leasecount(self, lease_counts, expected):
+    def _assert_leasecount(self, ignored, which, expected):
+        lease_counts = self.count_leases(self.uris[which])
         for (fn, num_leases) in lease_counts:
             if num_leases != expected:
                 self.fail("expected %d leases, have %d, on %s" %
hunk ./src/allmydata/test/test_web.py 4903
                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
         d.addCallback(_compute_fileurls)
 
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "two")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "one", 1)
+        d.addCallback(self._assert_leasecount, "two", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
         def _got_html_good(res):
hunk ./src/allmydata/test/test_web.py 4913
             self.failIf("Not Healthy" in res, res)
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "two")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "one", 1)
+        d.addCallback(self._assert_leasecount, "two", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         # this CHECK uses the original client, which uses the same
         # lease-secrets, so it will just renew the original lease
hunk ./src/allmydata/test/test_web.py 4922
         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "two")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "one", 1)
+        d.addCallback(self._assert_leasecount, "two", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         # this CHECK uses an alternate client, which adds a second lease
         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
hunk ./src/allmydata/test/test_web.py 4930
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 2)
-        d.addCallback(self._count_leases, "two")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "one", 2)
+        d.addCallback(self._assert_leasecount, "two", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
         d.addCallback(_got_html_good)
hunk ./src/allmydata/test/test_web.py 4937
 
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 2)
-        d.addCallback(self._count_leases, "two")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "one", 2)
+        d.addCallback(self._assert_leasecount, "two", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
                       clientnum=1)
hunk ./src/allmydata/test/test_web.py 4945
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 2)
-        d.addCallback(self._count_leases, "two")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 2)
+        d.addCallback(self._assert_leasecount, "one", 2)
+        d.addCallback(self._assert_leasecount, "two", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 2)
 
         d.addErrback(self.explain_web_error)
         return d
hunk ./src/allmydata/test/test_web.py 4989
             self.failUnlessReallyEqual(len(units), 4+1)
         d.addCallback(_done)
 
-        d.addCallback(self._count_leases, "root")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "root", 1)
+        d.addCallback(self._assert_leasecount, "one", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
         d.addCallback(_done)
hunk ./src/allmydata/test/test_web.py 4996
 
-        d.addCallback(self._count_leases, "root")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 1)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 1)
+        d.addCallback(self._assert_leasecount, "root", 1)
+        d.addCallback(self._assert_leasecount, "one", 1)
+        d.addCallback(self._assert_leasecount, "mutable", 1)
 
         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
                       clientnum=1)
hunk ./src/allmydata/test/test_web.py 5004
         d.addCallback(_done)
 
-        d.addCallback(self._count_leases, "root")
-        d.addCallback(self._assert_leasecount, 2)
-        d.addCallback(self._count_leases, "one")
-        d.addCallback(self._assert_leasecount, 2)
-        d.addCallback(self._count_leases, "mutable")
-        d.addCallback(self._assert_leasecount, 2)
+        d.addCallback(self._assert_leasecount, "root", 2)
+        d.addCallback(self._assert_leasecount, "one", 2)
+        d.addCallback(self._assert_leasecount, "mutable", 2)
 
         d.addErrback(self.explain_web_error)
         return d
merger 0.0 (
hunk ./src/allmydata/uri.py 829
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
+
hunk ./src/allmydata/uri.py 829
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
+
)
merger 0.0 (
hunk ./src/allmydata/uri.py 848
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
hunk ./src/allmydata/uri.py 848
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
)
hunk ./src/allmydata/util/encodingutil.py 221
 def quote_path(path, quotemarks=True):
     return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)
 
+def quote_filepath(fp, quotemarks=True, encoding=None):
+    path = fp.path
+    if isinstance(path, str):
+        try:
+            path = path.decode(filesystem_encoding)
+        except UnicodeDecodeError:
+            return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
+
+    return quote_output(path, quotemarks=quotemarks, encoding=encoding)
+
 
 def unicode_platform():
     """
hunk ./src/allmydata/util/fileutil.py 5
 Futz with files like a pro.
 """
 
-import sys, exceptions, os, stat, tempfile, time, binascii
+import errno, sys, exceptions, os, stat, tempfile, time, binascii
+
+from allmydata.util.assertutil import precondition
 
 from twisted.python import log
hunk ./src/allmydata/util/fileutil.py 10
+from twisted.python.filepath import FilePath, UnlistableError
 
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/util/fileutil.py 189
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
-def rm_dir(dirname):
+def fp_make_dirs(dirfp):
+    """
+    An idempotent version of FilePath.makedirs().  If the dir already
+    exists, do nothing and return without raising an exception.  If this
+    call creates the dir, return without raising an exception.  If there is
+    an error that prevents creation or if the directory gets deleted after
+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
+    exists, raise an exception.
+    """
+    log.msg( "xxx 0 %s" % (dirfp,))
+    tx = None
+    try:
+        dirfp.makedirs()
+    except OSError, x:
+        tx = x
+
+    if not dirfp.isdir():
+        if tx:
+            raise tx
+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
+
+def fp_rmdir_if_empty(dirfp):
+    """ Remove the directory if it is empty. """
+    try:
+        os.rmdir(dirfp.path)
+    except OSError, e:
+        if e.errno != errno.ENOTEMPTY:
+            raise
+    else:
+        dirfp.changed()
+
+def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
     already gone, do nothing and return without raising an exception.  If this
hunk ./src/allmydata/util/fileutil.py 239
             else:
                 remove(fullname)
         os.rmdir(dirname)
-    except Exception, le:
-        # Ignore "No such file or directory"
-        if (not isinstance(le, OSError)) or le.args[0] != 2:
+    except EnvironmentError, le:
+        # Ignore "No such file or directory", collect any other exception.
+        if le.args[0] != errno.ENOENT:
             excs.append(le)
hunk ./src/allmydata/util/fileutil.py 243
+    except Exception, le:
+        excs.append(le)
 
     # Okay, now we've recursively removed everything, ignoring any "No
     # such file or directory" errors, and collecting any other errors.
hunk ./src/allmydata/util/fileutil.py 256
             raise OSError, "Failed to remove dir for unknown reason."
         raise OSError, excs
 
+def fp_remove(fp):
+    """
+    An idempotent version of shutil.rmtree().  If the file/dir is already
+    gone, do nothing and return without raising an exception.  If this call
+    removes the file/dir, return without raising an exception.  If there is
+    an error that prevents removal, or if a file or directory at the same
+    path gets created again by someone else after this deletes it and before
+    this checks that it is gone, raise an exception.
+    """
+    try:
+        fp.remove()
+    except UnlistableError, e:
+        if e.originalException.errno != errno.ENOENT:
+            raise
+    except OSError, e:
+        if e.errno != errno.ENOENT:
+            raise
+
+def rm_dir(dirname):
+    # Renamed to be like shutil.rmtree and unlike rmdir.
+    return rmtree(dirname)
 
 def remove_if_possible(f):
     try:
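
Taken together, fp_make_dirs, fp_rmdir_if_empty and fp_remove are the FilePath counterparts of the existing string-path helpers. A minimal usage sketch, assuming only the functions added in the hunks above:

    # Sketch only: exercises the FilePath-based helpers added above.
    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    sharedir = FilePath("storage/shares/aa/aaaaaaaa")
    fileutil.fp_make_dirs(sharedir)            # no-op if it already exists
    sharedir.child("0").setContent("share data")
    fileutil.fp_remove(sharedir.child("0"))    # no-op if it is already gone
    fileutil.fp_rmdir_if_empty(sharedir)       # removed only because it is now empty
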
hunk ./src/allmydata/util/fileutil.py 387
         import traceback
         traceback.print_exc()
 
-def get_disk_stats(whichdir, reserved_space=0):
+def get_disk_stats(whichdirfp, reserved_space=0):
     """Return disk statistics for the storage disk, in the form of a dict
     with the following fields.
       total:            total bytes on disk
hunk ./src/allmydata/util/fileutil.py 408
     you can pass how many bytes you would like to leave unused on this
     filesystem as reserved_space.
     """
+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
 
     if have_GetDiskFreeSpaceExW:
         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
hunk ./src/allmydata/util/fileutil.py 419
         n_free_for_nonroot = c_ulonglong(0)
         n_total            = c_ulonglong(0)
         n_free_for_root    = c_ulonglong(0)
-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
-                                               byref(n_total),
-                                               byref(n_free_for_root))
+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
+                                                      byref(n_total),
+                                                      byref(n_free_for_root))
         if retval == 0:
             raise OSError("Windows error %d attempting to get disk statistics for %r"
hunk ./src/allmydata/util/fileutil.py 424
-                          % (GetLastError(), whichdir))
+                          % (GetLastError(), whichdirfp.path))
         free_for_nonroot = n_free_for_nonroot.value
         total            = n_total.value
         free_for_root    = n_free_for_root.value
hunk ./src/allmydata/util/fileutil.py 433
         # <http://docs.python.org/library/os.html#os.statvfs>
         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
-        s = os.statvfs(whichdir)
+        s = os.statvfs(whichdirfp.path)
 
         # on my mac laptop:
         #  statvfs(2) is a wrapper around statfs(2).
hunk ./src/allmydata/util/fileutil.py 460
              'avail': avail,
            }
 
-def get_available_space(whichdir, reserved_space):
+def get_available_space(whichdirfp, reserved_space):
     """Returns available space for share storage in bytes, or None if no
     API to get this information is available.
 
hunk ./src/allmydata/util/fileutil.py 472
     you can pass how many bytes you would like to leave unused on this
     filesystem as reserved_space.
     """
+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
     try:
hunk ./src/allmydata/util/fileutil.py 474
-        return get_disk_stats(whichdir, reserved_space)['avail']
+        return get_disk_stats(whichdirfp, reserved_space)['avail']
     except AttributeError:
         return None
hunk ./src/allmydata/util/fileutil.py 477
-    except EnvironmentError:
-        log.msg("OS call to get disk statistics failed")
+
+
+def get_used_space(fp):
+    if fp is None:
         return 0
hunk ./src/allmydata/util/fileutil.py 482
+    try:
+        s = os.stat(fp.path)
+    except EnvironmentError:
+        if not fp.exists():
+            return 0
+        raise
+    else:
+        # POSIX defines st_blocks (originally a BSDism):
+        #   <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
+        # but does not require stat() to give it a "meaningful value"
+        #   <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
+        # and says:
+        #   "The unit for the st_blocks member of the stat structure is not defined
+        #    within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
+        #    It may differ on a file system basis. There is no correlation between
+        #    values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
+        #    structure members."
+        #
+        # The Linux docs define it as "the number of blocks allocated to the file,
+        # [in] 512-byte units." It is also defined that way on MacOS X. Python does
+        # not set the attribute on Windows.
+        #
+        # We consider platforms that define st_blocks but give it a wrong value, or
+        # measure it in a unit other than 512 bytes, to be broken. See also
+        # <http://bugs.python.org/issue12350>.
+
+        if hasattr(s, 'st_blocks'):
+            return s.st_blocks * 512
+        else:
+            return s.st_size
}
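
The preceding hunks change get_disk_stats and get_available_space to take FilePath arguments, and add get_used_space, which reports st_blocks * 512 where the platform provides it and st_size otherwise. A usage sketch under those assumptions (not part of either patch):

    # Sketch only: assumes the FilePath-taking signatures shown above.
    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedir = FilePath(".")
    avail = fileutil.get_available_space(storedir, reserved_space=2**20)
    used = fileutil.get_used_space(storedir.child("no-such-share"))  # 0: file is absent
    print "available=%r bytes, used=%r bytes" % (avail, used)
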
[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
david-sarah@jacaranda.org**20110920033803
 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
] {
hunk ./src/allmydata/client.py 9
 from twisted.internet import reactor, defer
 from twisted.application import service
 from twisted.application.internet import TimerService
+from twisted.python.filepath import FilePath
 from foolscap.api import Referenceable
 from pycryptopp.publickey import rsa
 
hunk ./src/allmydata/client.py 15
 import allmydata
 from allmydata.storage.server import StorageServer
+from allmydata.storage.backends.disk.disk_backend import DiskBackend
 from allmydata import storage_client
 from allmydata.immutable.upload import Uploader
 from allmydata.immutable.offloaded import Helper
hunk ./src/allmydata/client.py 213
             return
         readonly = self.get_config("storage", "readonly", False, boolean=True)
 
-        storedir = os.path.join(self.basedir, self.STOREDIR)
+        storedir = FilePath(self.basedir).child(self.STOREDIR)
 
         data = self.get_config("storage", "reserved_space", None)
         reserved = None
hunk ./src/allmydata/client.py 255
             'cutoff_date': cutoff_date,
             'sharetypes': tuple(sharetypes),
         }
-        ss = StorageServer(storedir, self.nodeid,
-                           reserved_space=reserved,
-                           discard_storage=discard,
-                           readonly_storage=readonly,
+
+        backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
+                              discard_storage=discard)
+        ss = StorageServer(self.nodeid, backend, storedir,
                            stats_provider=self.stats_provider,
                            expiration_policy=expiration_policy)
         self.add_service(ss)
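
The hunk above replaces the monolithic StorageServer construction with a DiskBackend plus a slimmer StorageServer. A hedged sketch of the same wiring outside the client, using only the constructor calls visible in this hunk (the placeholder nodeid and the assumption that the remaining keyword arguments are optional are not from the patch):

    # Sketch only: mirrors the constructor calls in the hunk above.
    from twisted.python.filepath import FilePath
    from allmydata.storage.backends.disk.disk_backend import DiskBackend
    from allmydata.storage.server import StorageServer

    storedir = FilePath("mynode/storage")
    nodeid = "\x00" * 20   # placeholder 20-byte server id
    backend = DiskBackend(storedir, readonly=False, reserved_space=2**20,
                          discard_storage=False)
    ss = StorageServer(nodeid, backend, storedir)
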
hunk ./src/allmydata/interfaces.py 348
 
     def get_shares():
         """
-        Generates the IStoredShare objects held in this shareset.
+        Generates IStoredShare objects for all completed shares in this shareset.
         """
 
     def has_incoming(shnum):
hunk ./src/allmydata/storage/backends/base.py 69
         # def _create_mutable_share(self, storageserver, shnum, write_enabler):
         #     """create a mutable share with the given shnum and write_enabler"""
 
-        # secrets might be a triple with cancel_secret in secrets[2], but if
-        # so we ignore the cancel_secret.
         write_enabler = secrets[0]
         renew_secret = secrets[1]
hunk ./src/allmydata/storage/backends/base.py 71
+        cancel_secret = '\x00'*32
+        if len(secrets) > 2:
+            cancel_secret = secrets[2]
 
         si_s = self.get_storage_index_string()
         shares = {}
hunk ./src/allmydata/storage/backends/base.py 110
             read_data[shnum] = share.readv(read_vector)
 
         ownerid = 1 # TODO
-        lease_info = LeaseInfo(ownerid, renew_secret,
+        lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
                                expiration_time, storageserver.get_serverid())
 
         if testv_is_good:
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
     return newfp.child(sia)
 
 
-def get_share(fp):
+def get_share(storageindex, shnum, fp):
     f = fp.open('rb')
     try:
         prefix = f.read(32)
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
         f.close()
 
     if prefix == MutableDiskShare.MAGIC:
-        return MutableDiskShare(fp)
+        return MutableDiskShare(storageindex, shnum, fp)
     else:
         # assume it's immutable
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
-        return ImmutableDiskShare(fp)
+        return ImmutableDiskShare(storageindex, shnum, fp)
 
 
 class DiskBackend(Backend):
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
                 if not NUM_RE.match(shnumstr):
                     continue
                 sharehome = self._sharehomedir.child(shnumstr)
-                yield self.get_share(sharehome)
+                yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
         except UnlistableError:
             # There is no shares directory at all.
             pass
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
         return self._incominghomedir.child(str(shnum)).exists()
 
     def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
-        sharehome = self._sharehomedir.child(str(shnum))
+        finalhome = self._sharehomedir.child(str(shnum))
         incominghome = self._incominghomedir.child(str(shnum))
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
-        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
-                                   max_size=max_space_per_bucket, create=True)
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
+                                   max_size=max_space_per_bucket)
         bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
         if self._discard_storage:
             bw.throw_out_all_data = True
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
         fileutil.fp_make_dirs(self._sharehomedir)
         sharehome = self._sharehomedir.child(str(shnum))
         serverid = storageserver.get_serverid()
-        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+        return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)
 
     def _clean_up_after_unlink(self):
         fileutil.fp_rmdir_if_empty(self._sharehomedir)
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
     LEASE_SIZE = struct.calcsize(">L32s32sL")
 
 
-    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than
-        max_size to be written to me. If create=True then max_size
-        must not be None. """
-        precondition((max_size is not None) or (not create), max_size, create)
+    def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
+        """
+        If max_size is not None then I won't allow more than max_size to be written to me.
+        If finalhome is not None (meaning that we are creating the share) then max_size
+        must not be None.
+        """
+        precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
         self._storageindex = storageindex
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 57
-        self._incominghome = incominghome
-        self._home = finalhome
+
+        # If we are creating the share, _finalhome refers to the final path and
+        # _home to the incoming path. Otherwise, _finalhome is None.
+        self._finalhome = finalhome
+        self._home = home
         self._shnum = shnum
hunk ./src/allmydata/storage/backends/disk/immutable.py 63
-        if create:
-            # touch the file, so later callers will see that we're working on
+
+        if self._finalhome is not None:
+            # Touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 67
-            assert not finalhome.exists()
-            fp_make_dirs(self._incominghome.parent())
+            assert not self._finalhome.exists()
+            fp_make_dirs(self._home.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 78
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
+            self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 101
                 % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
 
     def close(self):
-        fileutil.fp_make_dirs(self._home.parent())
-        self._incominghome.moveTo(self._home)
-        try:
-            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
-            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
-        pass
+        fileutil.fp_make_dirs(self._finalhome.parent())
+        self._home.moveTo(self._finalhome)
+
+        # self._home is like storage/shares/incoming/ab/abcde/4 .
+        # We try to delete the parent (.../ab/abcde) to avoid leaving
+        # these directories lying around forever, but the delete might
+        # fail if we're working on another share for the same storage
+        # index (like ab/abcde/5). The alternative approach would be to
+        # use a hierarchy of objects (PrefixHolder, BucketHolder,
+        # ShareWriter), each of which is responsible for a single
+        # directory on disk, and have them use reference counting of
+        # their children to know when they should do the rmdir. This
+        # approach is simpler, but relies on os.rmdir (used by
+        # fp_rmdir_if_empty) refusing to delete a non-empty directory.
+        # Do *not* use fileutil.fp_remove() here!
+        parent = self._home.parent()
+        fileutil.fp_rmdir_if_empty(parent)
+
+        # we also delete the grandparent (prefix) directory, .../ab ,
+        # again to avoid leaving directories lying around. This might
+        # fail if there is another bucket open that shares a prefix (like
+        # ab/abfff).
+        fileutil.fp_rmdir_if_empty(parent.parent())
+
+        # we leave the great-grandparent (incoming/) directory in place.
+
+        # allow lease changes after closing.
+        self._home = self._finalhome
+        self._finalhome = None
 
     def get_used_space(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 132
-        return (fileutil.get_used_space(self._home) +
-                fileutil.get_used_space(self._incominghome))
+        return (fileutil.get_used_space(self._finalhome) +
+                fileutil.get_used_space(self._home))
 
     def get_storage_index(self):
         return self._storageindex
hunk ./src/allmydata/storage/backends/disk/immutable.py 175
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = self._incominghome.open(mode='rb+')
+        f = self._home.open(mode='rb+')
         try:
             real_offset = self._data_offset+offset
             f.seek(real_offset)
hunk ./src/allmydata/storage/backends/disk/immutable.py 205
 
     # These lease operations are intended for use by disk_backend.py.
     # Other clients should not depend on the fact that the disk backend
-    # stores leases in share files.
+    # stores leases in share files. XXX bucket.py also relies on this.
 
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 221
             f.close()
 
     def add_lease(self, lease_info):
-        f = self._incominghome.open(mode='rb')
+        f = self._home.open(mode='rb+')
         try:
             num_leases = self._read_num_leases(f)
hunk ./src/allmydata/storage/backends/disk/immutable.py 224
-        finally:
-            f.close()
-        f = self._home.open(mode='wb+')
-        try:
             self._write_lease_record(f, num_leases, lease_info)
             self._write_num_leases(f, num_leases+1)
         finally:
hunk ./src/allmydata/storage/backends/disk/mutable.py 440
         pass
 
 
-def create_mutable_disk_share(fp, serverid, write_enabler, parent):
-    ms = MutableDiskShare(fp, parent)
+def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent):
+    ms = MutableDiskShare(storageindex, shnum, fp, parent)
     ms.create(serverid, write_enabler)
     del ms
hunk ./src/allmydata/storage/backends/disk/mutable.py 444
-    return MutableDiskShare(fp, parent)
+    return MutableDiskShare(storageindex, shnum, fp, parent)
hunk ./src/allmydata/storage/bucket.py 44
         start = time.time()
 
         self._share.close()
-        filelen = self._share.stat()
+        # XXX should this be self._share.get_used_space() ?
+        consumed_size = self._share.get_size()
         self._share = None
 
         self.closed = True
hunk ./src/allmydata/storage/bucket.py 51
         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
 
-        self.ss.bucket_writer_closed(self, filelen)
+        self.ss.bucket_writer_closed(self, consumed_size)
         self.ss.add_latency("close", time.time() - start)
         self.ss.count("close")
 
hunk ./src/allmydata/storage/server.py 182
                                 renew_secret, cancel_secret,
                                 sharenums, allocated_size,
                                 canary, owner_num=0):
-        # cancel_secret is no longer used.
         # owner_num is not for clients to set, but rather it should be
         # curried into a StorageServer instance dedicated to a particular
         # owner.
hunk ./src/allmydata/storage/server.py 195
         # Note that the lease should not be added until the BucketWriter
         # has been closed.
         expire_time = time.time() + 31*24*60*60
-        lease_info = LeaseInfo(owner_num, renew_secret,
+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
                                expire_time, self._serverid)
 
         max_space_per_bucket = allocated_size
hunk ./src/allmydata/test/no_network.py 349
         return self.g.servers_by_number[i]
 
     def get_serverdir(self, i):
-        return self.g.servers_by_number[i].backend.storedir
+        return self.g.servers_by_number[i].backend._storedir
 
     def remove_server(self, i):
         self.g.remove_server(self.g.servers_by_number[i].get_serverid())
hunk ./src/allmydata/test/no_network.py 357
     def iterate_servers(self):
         for i in sorted(self.g.servers_by_number.keys()):
             ss = self.g.servers_by_number[i]
-            yield (i, ss, ss.backend.storedir)
+            yield (i, ss, ss.backend._storedir)
 
     def find_uri_shares(self, uri):
         si = tahoe_uri.from_string(uri).get_storage_index()
hunk ./src/allmydata/test/no_network.py 384
         return shares
 
     def copy_share(self, from_share, uri, to_server):
-        si = uri.from_string(self.uri).get_storage_index()
+        si = tahoe_uri.from_string(uri).get_storage_index()
         (i_shnum, i_serverid, i_sharefp) = from_share
         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
hunk ./src/allmydata/test/test_download.py 127
 
         return d
 
-    def _write_shares(self, uri, shares):
-        si = uri.from_string(uri).get_storage_index()
+    def _write_shares(self, fileuri, shares):
+        si = uri.from_string(fileuri).get_storage_index()
         for i in shares:
             shares_for_server = shares[i]
             for shnum in shares_for_server:
hunk ./src/allmydata/test/test_hung_server.py 36
 
     def _hang(self, servers, **kwargs):
         for ss in servers:
-            self.g.hang_server(ss.get_serverid(), **kwargs)
+            self.g.hang_server(ss.original.get_serverid(), **kwargs)
 
     def _unhang(self, servers, **kwargs):
         for ss in servers:
hunk ./src/allmydata/test/test_hung_server.py 40
-            self.g.unhang_server(ss.get_serverid(), **kwargs)
+            self.g.unhang_server(ss.original.get_serverid(), **kwargs)
 
     def _hang_shares(self, shnums, **kwargs):
         # hang all servers who are holding the given shares
hunk ./src/allmydata/test/test_hung_server.py 52
                     hung_serverids.add(i_serverid)
 
     def _delete_all_shares_from(self, servers):
-        serverids = [ss.get_serverid() for ss in servers]
+        serverids = [ss.original.get_serverid() for ss in servers]
         for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
                 i_sharefp.remove()
hunk ./src/allmydata/test/test_hung_server.py 58
 
     def _corrupt_all_shares_in(self, servers, corruptor_func):
-        serverids = [ss.get_serverid() for ss in servers]
+        serverids = [ss.original.get_serverid() for ss in servers]
         for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
                 self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func)
hunk ./src/allmydata/test/test_hung_server.py 64
 
     def _copy_all_shares_from(self, from_servers, to_server):
-        serverids = [ss.get_serverid() for ss in from_servers]
+        serverids = [ss.original.get_serverid() for ss in from_servers]
         for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
                 self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
hunk ./src/allmydata/test/test_mutable.py 2991
             fso = debug.FindSharesOptions()
             storage_index = base32.b2a(n.get_storage_index())
             fso.si_s = storage_index
-            fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir)))
+            fso.nodedirs = [unicode(storedir.parent().path)
                             for (i,ss,storedir)
                             in self.iterate_servers()]
             fso.stdout = StringIO()
hunk ./src/allmydata/test/test_upload.py 818
         if share_number is not None:
             self._copy_share_to_server(share_number, server_number)
 
-
     def _copy_share_to_server(self, share_number, server_number):
         ss = self.g.servers_by_number[server_number]
hunk ./src/allmydata/test/test_upload.py 820
-        self.copy_share(self.shares[share_number], ss)
+        self.copy_share(self.shares[share_number], self.uri, ss)
 
     def _setup_grid(self):
         """
}
[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
david-sarah@jacaranda.org**20110920171737
 Ignore-this: 5947e864682a43cb04e557334cda7c19
] {
adddir ./docs/backends
addfile ./docs/backends/S3.rst
hunk ./docs/backends/S3.rst 1
+====================================================
+Storing Shares in Amazon Simple Storage Service (S3)
+====================================================
+
+S3 is a commercial storage service provided by Amazon, described at
+`<https://aws.amazon.com/s3/>`_.
+
+The Tahoe-LAFS storage server can be configured to store its shares in
+an S3 bucket, rather than on local filesystem. To enable this, add the
+following keys to the server's ``tahoe.cfg`` file:
+
+``[storage]``
+
+``backend = s3``
+
+    This turns off the local filesystem backend and enables use of S3.
+
+``s3.access_key_id = (string, required)``
+``s3.secret_access_key = (string, required)``
+
+    These two settings give the storage server permission to access your
+    Amazon Web Services account, allowing it to upload and download shares
+    to and from S3.
+
+``s3.bucket = (string, required)``
+
+    This controls which bucket will be used to hold shares. The Tahoe-LAFS
+    storage server will only modify and access objects in the configured S3
+    bucket.
+
+``s3.url = (URL string, optional)``
+
+    This URL tells the storage server how to access the S3 service. It
+    defaults to ``http://s3.amazonaws.com``, but by setting it to something
+    else, you may be able to use some other S3-like service if it is
+    sufficiently compatible.
+
+``s3.max_space = (str, optional)``
+
+    This tells the server to limit how much space can be used in the S3
+    bucket. Before each share is uploaded, the server will ask S3 for the
+    current bucket usage, and will only accept the share if it does not cause
+    the usage to grow above this limit.
+
+    The string contains a number, with an optional case-insensitive scale
+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
+    thing.
+
+    If ``s3.max_space`` is omitted, the default behavior is to allow
+    unlimited usage.
+
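+For example, a ``[storage]`` section enabling the S3 backend might look
+like this (the access keys and bucket name below are only hypothetical
+placeholders)::
+
+    [storage]
+    enabled = true
+    backend = s3
+    s3.access_key_id = AKIAIOSFODNN7EXAMPLE
+    s3.secret_access_key = EXAMPLESECRETACCESSKEY
+    s3.bucket = example-tahoe-shares
+    s3.max_space = 100G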
+
+Once configured, the WUI "storage server" page will provide information about
+how much space is being used and how many shares are being stored.
+
+
+Issues
+------
+
+Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
+is configured to store shares in S3 rather than on local disk, some common
+operations may behave differently:
+
+* Lease crawling/expiration is not yet implemented. As a result, shares will
+  be retained forever, and the Storage Server status web page will not show
+  information about the number of mutable/immutable shares present.
+
+* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
+  each share upload, causing the upload process to run slightly slower and
+  incur more S3 request charges.
addfile ./docs/backends/disk.rst
hunk ./docs/backends/disk.rst 1
+====================================
+Storing Shares on a Local Filesystem
+====================================
+
+The "disk" backend stores shares on the local filesystem. Versions of
+Tahoe-LAFS <= 1.9.0 always stored shares in this way.
+
+``[storage]``
+
+``backend = disk``
+
+    This enables use of the disk backend, and is the default.
+
+``reserved_space = (str, optional)``
+
+    If provided, this value defines how much disk space is reserved: the
+    storage server will not accept any share that causes the amount of free
+    disk space to drop below this value. (The free space is measured by a
+    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
+    space available to the user account under which the storage server runs.)
+
+    This string contains a number, with an optional case-insensitive scale
+    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
+    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
+    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
+    thing.
+
+    "``tahoe create-node``" generates a tahoe.cfg with
+    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
+    reservation to suit your needs.
+
+``expire.enabled =``
+
+``expire.mode =``
+
+``expire.override_lease_duration =``
+
+``expire.cutoff_date =``
+
+``expire.immutable =``
+
+``expire.mutable =``
+
+    These settings control garbage collection, causing the server to
+    delete shares that no longer have an up-to-date lease on them. Please
+    see `<garbage-collection.rst>`_ for full details.
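+
+For example, a ``[storage]`` section enabling the disk backend with a
+space reservation might look like this::
+
+    [storage]
+    enabled = true
+    backend = disk
+    reserved_space = 1G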
hunk ./docs/configuration.rst 412
     <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current
     status of this bug. The default value is ``False``.
 
-``reserved_space = (str, optional)``
+``backend = (string, optional)``
 
hunk ./docs/configuration.rst 414
-    If provided, this value defines how much disk space is reserved: the
-    storage server will not accept any share that causes the amount of free
-    disk space to drop below this value. (The free space is measured by a
-    call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the
-    space available to the user account under which the storage server runs.)
+    Storage servers can store their shares in different "backends". Clients
+    need not be aware of which backend is used by a server. The default
+    value is ``disk``.
 
hunk ./docs/configuration.rst 418
-    This string contains a number, with an optional case-insensitive scale
-    suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
-    "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
-    same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
-    thing.
+``backend = disk``
 
hunk ./docs/configuration.rst 420
-    "``tahoe create-node``" generates a tahoe.cfg with
-    "``reserved_space=1G``", but you may wish to raise, lower, or remove the
-    reservation to suit your needs.
+    The default is to store shares on the local filesystem (in
+    BASEDIR/storage/shares/). For configuration details (including how to
+    reserve a minimum amount of free space), see `<backends/disk.rst>`_.
 
hunk ./docs/configuration.rst 424
-``expire.enabled =``
+``backend = s3``
 
hunk ./docs/configuration.rst 426
-``expire.mode =``
-
-``expire.override_lease_duration =``
-
-``expire.cutoff_date =``
-
-``expire.immutable =``
-
-``expire.mutable =``
-
-    These settings control garbage collection, in which the server will
-    delete shares that no longer have an up-to-date lease on them. Please see
-    `<garbage-collection.rst>`_ for full details.
+    The storage server can store all shares in an Amazon Simple Storage
+    Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.
 
 
 Running A Helper
}
[Fix some incorrect attribute accesses. refs #999
david-sarah@jacaranda.org**20110921031207
 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
] {
hunk ./src/allmydata/client.py 258
 
         backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
                               discard_storage=discard)
-        ss = StorageServer(nodeid, backend, storedir,
+        ss = StorageServer(self.nodeid, backend, storedir,
                            stats_provider=self.stats_provider,
                            expiration_policy=expiration_policy)
         self.add_service(ss)
hunk ./src/allmydata/interfaces.py 449
         Returns the storage index.
         """
 
+    def get_storage_index_string():
+        """
+        Returns the base32-encoded storage index.
+        """
+
     def get_shnum():
         """
         Returns the share number.
hunk ./src/allmydata/storage/backends/disk/immutable.py 138
     def get_storage_index(self):
         return self._storageindex
 
+    def get_storage_index_string(self):
+        return si_b2a(self._storageindex)
+
     def get_shnum(self):
         return self._shnum
 
hunk ./src/allmydata/storage/backends/disk/mutable.py 119
     def get_storage_index(self):
         return self._storageindex
 
+    def get_storage_index_string(self):
+        return si_b2a(self._storageindex)
+
     def get_shnum(self):
         return self._shnum
 
hunk ./src/allmydata/storage/bucket.py 86
     def __init__(self, ss, share):
         self.ss = ss
         self._share = share
-        self.storageindex = share.storageindex
-        self.shnum = share.shnum
+        self.storageindex = share.get_storage_index()
+        self.shnum = share.get_shnum()
 
     def __repr__(self):
         return "<%s %s %s>" % (self.__class__.__name__,
hunk ./src/allmydata/storage/expirer.py 6
 from twisted.python import log as twlog
 
 from allmydata.storage.crawler import ShareCrawler
-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
+from allmydata.storage.common import UnknownMutableContainerVersionError, \
      UnknownImmutableContainerVersionError
 
 
hunk ./src/allmydata/storage/expirer.py 124
                     struct.error):
                 twlog.msg("lease-checker error processing %r" % (share,))
                 twlog.err()
-                which = (si_b2a(share.storageindex), share.get_shnum())
+                which = (share.get_storage_index_string(), share.get_shnum())
                 self.state["cycle-to-date"]["corrupt-shares"].append(which)
                 wks = (1, 1, 1, "unknown")
             would_keep_shares.append(wks)
hunk ./src/allmydata/storage/server.py 221
         alreadygot = set()
         for share in shareset.get_shares():
             share.add_or_renew_lease(lease_info)
-            alreadygot.add(share.shnum)
+            alreadygot.add(share.get_shnum())
 
         for shnum in sharenums - alreadygot:
             if shareset.has_incoming(shnum):
hunk ./src/allmydata/storage/server.py 324
 
         try:
             shareset = self.backend.get_shareset(storageindex)
-            return shareset.readv(self, shares, readv)
+            return shareset.readv(shares, readv)
         finally:
             self.add_latency("readv", time.time() - start)
 
hunk ./src/allmydata/storage/shares.py 1
-#! /usr/bin/python
-
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import ShareFile
-
-def get_share_file(filename):
-    f = open(filename, "rb")
-    prefix = f.read(32)
-    f.close()
-    if prefix == MutableShareFile.MAGIC:
-        return MutableShareFile(filename)
-    # otherwise assume it's immutable
-    return ShareFile(filename)
-
rmfile ./src/allmydata/storage/shares.py
hunk ./src/allmydata/test/no_network.py 387
         si = tahoe_uri.from_string(uri).get_storage_index()
         (i_shnum, i_serverid, i_sharefp) = from_share
         shares_dir = to_server.backend.get_shareset(si)._sharehomedir
+        fileutil.fp_make_dirs(shares_dir)
         i_sharefp.copyTo(shares_dir.child(str(i_shnum)))
 
     def restore_all_shares(self, shares):
hunk ./src/allmydata/test/no_network.py 391
-        for share, data in shares.items():
-            share.home.setContent(data)
+        for sharepath, data in shares.items():
+            FilePath(sharepath).setContent(data)
 
     def delete_share(self, (shnum, serverid, sharefp)):
         sharefp.remove()
hunk ./src/allmydata/test/test_upload.py 744
         servertoshnums = {} # k: server, v: set(shnum)
 
         for i, c in self.g.servers_by_number.iteritems():
-            for (dirp, dirns, fns) in os.walk(c.sharedir):
+            for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
                 for fn in fns:
                     try:
                         sharenum = int(fn)
}
[docs/backends/S3.rst: remove Issues section. refs #999
david-sarah@jacaranda.org**20110921031625
 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
] hunk ./docs/backends/S3.rst 57
 
 Once configured, the WUI "storage server" page will provide information about
 how much space is being used and how many shares are being stored.
-
-
-Issues
-------
-
-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
-is configured to store shares in S3 rather than on local disk, some common
-operations may behave differently:
-
-* Lease crawling/expiration is not yet implemented. As a result, shares will
-  be retained forever, and the Storage Server status web page will not show
-  information about the number of mutable/immutable shares present.
-
-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
-  each share upload, causing the upload process to run slightly slower and
-  incur more S3 request charges.
[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
david-sarah@jacaranda.org**20110921031705
 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
] {
hunk ./docs/backends/S3.rst 38
     else, you may be able to use some other S3-like service if it is
     sufficiently compatible.
 
-``s3.max_space = (str, optional)``
+``s3.max_space = (quantity of space, optional)``
 
     This tells the server to limit how much space can be used in the S3
     bucket. Before each share is uploaded, the server will ask S3 for the
hunk ./docs/backends/disk.rst 14
 
     This enables use of the disk backend, and is the default.
 
-``reserved_space = (str, optional)``
+``reserved_space = (quantity of space, optional)``
 
     If provided, this value defines how much disk space is reserved: the
     storage server will not accept any share that causes the amount of free
}
[More fixes to tests needed for pluggable backends. refs #999
david-sarah@jacaranda.org**20110921184649
 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
] {
hunk ./src/allmydata/scripts/debug.py 8
 from twisted.python import usage, failure
 from twisted.internet import defer
 from twisted.scripts import trial as twisted_trial
+from twisted.python.filepath import FilePath
 
 
 class DumpOptions(usage.Options):
hunk ./src/allmydata/scripts/debug.py 38
         self['filename'] = argv_to_abspath(filename)
 
 def dump_share(options):
-    from allmydata.storage.mutable import MutableShareFile
+    from allmydata.storage.backends.disk.disk_backend import get_share
     from allmydata.util.encodingutil import quote_output
 
     out = options.stdout
hunk ./src/allmydata/scripts/debug.py 46
     # check the version, to see if we have a mutable or immutable share
     print >>out, "share filename: %s" % quote_output(options['filename'])
 
-    f = open(options['filename'], "rb")
-    prefix = f.read(32)
-    f.close()
-    if prefix == MutableShareFile.MAGIC:
-        return dump_mutable_share(options)
-    # otherwise assume it's immutable
-    return dump_immutable_share(options)
-
-def dump_immutable_share(options):
-    from allmydata.storage.immutable import ShareFile
+    share = get_share("", 0, FilePath(options['filename']))
+    if share.sharetype == "mutable":
+        return dump_mutable_share(options, share)
+    else:
+        assert share.sharetype == "immutable", share.sharetype
+        return dump_immutable_share(options, share)
 
hunk ./src/allmydata/scripts/debug.py 53
+def dump_immutable_share(options, share):
     out = options.stdout
hunk ./src/allmydata/scripts/debug.py 55
-    f = ShareFile(options['filename'])
     if not options["leases-only"]:
hunk ./src/allmydata/scripts/debug.py 56
-        dump_immutable_chk_share(f, out, options)
-    dump_immutable_lease_info(f, out)
+        dump_immutable_chk_share(share, out, options)
+    dump_immutable_lease_info(share, out)
     print >>out
     return 0
 
hunk ./src/allmydata/scripts/debug.py 166
     return when
 
 
-def dump_mutable_share(options):
-    from allmydata.storage.mutable import MutableShareFile
+def dump_mutable_share(options, m):
     from allmydata.util import base32, idlib
     out = options.stdout
hunk ./src/allmydata/scripts/debug.py 169
-    m = MutableShareFile(options['filename'])
     f = open(options['filename'], "rb")
     WE, nodeid = m._read_write_enabler_and_nodeid(f)
     num_extra_leases = m._read_num_extra_leases(f)
hunk ./src/allmydata/scripts/debug.py 641
     /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9
     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
     """
-    from allmydata.storage.server import si_a2b, storage_index_to_dir
-    from allmydata.util.encodingutil import listdir_unicode
+    from allmydata.storage.server import si_a2b
+    from allmydata.storage.backends.disk_backend import si_si2dir
+    from allmydata.util.encodingutil import quote_filepath
 
     out = options.stdout
hunk ./src/allmydata/scripts/debug.py 646
-    sharedir = storage_index_to_dir(si_a2b(options.si_s))
-    for d in options.nodedirs:
-        d = os.path.join(d, "storage/shares", sharedir)
-        if os.path.exists(d):
-            for shnum in listdir_unicode(d):
-                print >>out, os.path.join(d, shnum)
+    si = si_a2b(options.si_s)
+    for nodedir in options.nodedirs:
+        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
+        if sharedir.exists():
+            for sharefp in sharedir.children():
+                print >>out, quote_filepath(sharefp, quotemarks=False)
 
     return 0
 
hunk ./src/allmydata/scripts/debug.py 878
         print >>err, "Error processing %s" % quote_output(si_dir)
         failure.Failure().printTraceback(err)
 
+
 class CorruptShareOptions(usage.Options):
     def getSynopsis(self):
         return "Usage: tahoe debug corrupt-share SHARE_FILENAME"
hunk ./src/allmydata/scripts/debug.py 902
 Obviously, this command should not be used in normal operation.
 """
         return t
+
     def parseArgs(self, filename):
         self['filename'] = filename
 
hunk ./src/allmydata/scripts/debug.py 907
 def corrupt_share(options):
+    do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset'])
+
+def do_corrupt_share(out, fp, offset="block-random"):
     import random
hunk ./src/allmydata/scripts/debug.py 911
-    from allmydata.storage.mutable import MutableShareFile
-    from allmydata.storage.immutable import ShareFile
+    from allmydata.storage.backends.disk.mutable import MutableDiskShare
+    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
     from allmydata.mutable.layout import unpack_header
     from allmydata.immutable.layout import ReadBucketProxy
hunk ./src/allmydata/scripts/debug.py 915
-    out = options.stdout
-    fn = options['filename']
-    assert options["offset"] == "block-random", "other offsets not implemented"
+
+    assert offset == "block-random", "other offsets not implemented"
+
     # first, what kind of share is it?
 
     def flip_bit(start, end):
hunk ./src/allmydata/scripts/debug.py 924
         offset = random.randrange(start, end)
         bit = random.randrange(0, 8)
         print >>out, "[%d..%d):  %d.b%d" % (start, end, offset, bit)
-        f = open(fn, "rb+")
-        f.seek(offset)
-        d = f.read(1)
-        d = chr(ord(d) ^ 0x01)
-        f.seek(offset)
-        f.write(d)
-        f.close()
+        f = fp.open("rb+")
+        try:
+            f.seek(offset)
+            d = f.read(1)
+            d = chr(ord(d) ^ 0x01)
+            f.seek(offset)
+            f.write(d)
+        finally:
+            f.close()
 
hunk ./src/allmydata/scripts/debug.py 934
-    f = open(fn, "rb")
-    prefix = f.read(32)
-    f.close()
-    if prefix == MutableShareFile.MAGIC:
-        # mutable
-        m = MutableShareFile(fn)
-        f = open(fn, "rb")
-        f.seek(m.DATA_OFFSET)
-        data = f.read(2000)
-        # make sure this slot contains an SMDF share
-        assert data[0] == "\x00", "non-SDMF mutable shares not supported"
+    f = fp.open("rb")
+    try:
+        prefix = f.read(32)
+    finally:
         f.close()
hunk ./src/allmydata/scripts/debug.py 939
+    if prefix == MutableDiskShare.MAGIC:
+        # mutable
+        m = MutableDiskShare("", 0, fp)
+        f = fp.open("rb")
+        try:
+            f.seek(m.DATA_OFFSET)
+            data = f.read(2000)
+            # make sure this slot contains an SDMF share
+            assert data[0] == "\x00", "non-SDMF mutable shares not supported"
+        finally:
+            f.close()
 
         (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
          ig_datalen, offsets) = unpack_header(data)
hunk ./src/allmydata/scripts/debug.py 960
         flip_bit(start, end)
     else:
         # otherwise assume it's immutable
-        f = ShareFile(fn)
+        f = ImmutableDiskShare("", 0, fp)
         bp = ReadBucketProxy(None, None, '')
         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
         start = f._data_offset + offsets["data"]
hunk ./src/allmydata/storage/backends/base.py 92
             (testv, datav, new_length) = test_and_write_vectors[sharenum]
             if sharenum in shares:
                 if not shares[sharenum].check_testv(testv):
-                    self.log("testv failed: [%d]: %r" % (sharenum, testv))
+                    storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
                     testv_is_good = False
                     break
             else:
hunk ./src/allmydata/storage/backends/base.py 99
                 # compare the vectors against an empty share, in which all
                 # reads return empty strings
                 if not EmptyShare().check_testv(testv):
-                    self.log("testv failed (empty): [%d] %r" % (sharenum,
-                                                                testv))
+                    storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
                     testv_is_good = False
                     break
 
hunk ./src/allmydata/test/test_cli.py 2892
             # delete one, corrupt a second
             shares = self.find_uri_shares(self.uri)
             self.failUnlessReallyEqual(len(shares), 10)
-            os.unlink(shares[0][2])
-            cso = debug.CorruptShareOptions()
-            cso.stdout = StringIO()
-            cso.parseOptions([shares[1][2]])
+            shares[0][2].remove()
+            stdout = StringIO()
+            sharefile = shares[1][2]
             storage_index = uri.from_string(self.uri).get_storage_index()
             self._corrupt_share_line = "  server %s, SI %s, shnum %d" % \
                                        (base32.b2a(shares[1][1]),
hunk ./src/allmydata/test/test_cli.py 2900
                                         base32.b2a(storage_index),
                                         shares[1][0])
-            debug.corrupt_share(cso)
+            debug.do_corrupt_share(stdout, sharefile)
         d.addCallback(_clobber_shares)
 
         d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
hunk ./src/allmydata/test/test_cli.py 3017
         def _clobber_shares(ignored):
             shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
             self.failUnlessReallyEqual(len(shares), 10)
-            os.unlink(shares[0][2])
+            shares[0][2].remove()
 
             shares = self.find_uri_shares(self.uris["mutable"])
hunk ./src/allmydata/test/test_cli.py 3020
-            cso = debug.CorruptShareOptions()
-            cso.stdout = StringIO()
-            cso.parseOptions([shares[1][2]])
+            stdout = StringIO()
+            sharefile = shares[1][2]
             storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
             self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
                                        (base32.b2a(shares[1][1]),
hunk ./src/allmydata/test/test_cli.py 3027
                                         base32.b2a(storage_index),
                                         shares[1][0])
-            debug.corrupt_share(cso)
+            debug.do_corrupt_share(stdout, sharefile)
         d.addCallback(_clobber_shares)
 
         # root
hunk ./src/allmydata/test/test_client.py 90
                            "enabled = true\n" + \
                            "reserved_space = 1000\n")
         c = client.Client(basedir)
-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)
 
     def test_reserved_2(self):
         basedir = "client.Basic.test_reserved_2"
hunk ./src/allmydata/test/test_client.py 101
                            "enabled = true\n" + \
                            "reserved_space = 10K\n")
         c = client.Client(basedir)
-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)
 
     def test_reserved_3(self):
         basedir = "client.Basic.test_reserved_3"
hunk ./src/allmydata/test/test_client.py 112
                            "enabled = true\n" + \
                            "reserved_space = 5mB\n")
         c = client.Client(basedir)
-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
                              5*1000*1000)
 
     def test_reserved_4(self):
hunk ./src/allmydata/test/test_client.py 124
                            "enabled = true\n" + \
                            "reserved_space = 78Gb\n")
         c = client.Client(basedir)
-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
                              78*1000*1000*1000)
 
     def test_reserved_bad(self):
hunk ./src/allmydata/test/test_client.py 136
                            "enabled = true\n" + \
                            "reserved_space = bogus\n")
         c = client.Client(basedir)
-        self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
+        self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)
 
     def _permute(self, sb, key):
         return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
hunk ./src/allmydata/test/test_crawler.py 7
 from twisted.trial import unittest
 from twisted.application import service
 from twisted.internet import defer
+from twisted.python.filepath import FilePath
 from foolscap.api import eventually, fireEventually
 
 from allmydata.util import fileutil, hashutil, pollmixin
hunk ./src/allmydata/test/test_crawler.py 13
 from allmydata.storage.server import StorageServer, si_b2a
 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
+from allmydata.storage.backends.disk.disk_backend import DiskBackend
 
 from allmydata.test.test_storage import FakeCanary
 from allmydata.test.common_util import StallMixin
hunk ./src/allmydata/test/test_crawler.py 115
 
     def test_immediate(self):
         self.basedir = "crawler/Basic/immediate"
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 116
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         sis = [self.write(i, ss, serverid) for i in range(10)]
hunk ./src/allmydata/test/test_crawler.py 122
-        statefile = os.path.join(self.basedir, "statefile")
+        statefp = fp.child("statefile")
 
hunk ./src/allmydata/test/test_crawler.py 124
-        c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
+        c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
         c.load_state()
 
         c.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 137
         self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))
 
         # check that a new crawler picks up on the state file properly
-        c2 = BucketEnumeratingCrawler(ss, statefile)
+        c2 = BucketEnumeratingCrawler(backend, statefp)
         c2.load_state()
 
         c2.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 145
 
     def test_service(self):
         self.basedir = "crawler/Basic/service"
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 146
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         sis = [self.write(i, ss, serverid) for i in range(10)]
hunk ./src/allmydata/test/test_crawler.py 153
 
-        statefile = os.path.join(self.basedir, "statefile")
-        c = BucketEnumeratingCrawler(ss, statefile)
+        statefp = fp.child("statefile")
+        c = BucketEnumeratingCrawler(backend, statefp)
         c.setServiceParent(self.s)
 
         # it should be legal to call get_state() and get_progress() right
hunk ./src/allmydata/test/test_crawler.py 174
 
     def test_paced(self):
         self.basedir = "crawler/Basic/paced"
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 175
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         # put four buckets in each prefixdir
hunk ./src/allmydata/test/test_crawler.py 186
             for tail in range(4):
                 sis.append(self.write(i, ss, serverid, tail))
 
-        statefile = os.path.join(self.basedir, "statefile")
+        statefp = fp.child("statefile")
 
hunk ./src/allmydata/test/test_crawler.py 188
-        c = PacedCrawler(ss, statefile)
+        c = PacedCrawler(backend, statefp)
         c.load_state()
         try:
             c.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 213
         del c
 
         # start a new crawler, it should start from the beginning
-        c = PacedCrawler(ss, statefile)
+        c = PacedCrawler(backend, statefp)
         c.load_state()
         try:
             c.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 226
         c.cpu_slice = PacedCrawler.cpu_slice
 
         # a third crawler should pick up from where it left off
-        c2 = PacedCrawler(ss, statefile)
+        c2 = PacedCrawler(backend, statefp)
         c2.all_buckets = c.all_buckets[:]
         c2.load_state()
         c2.countdown = -1
hunk ./src/allmydata/test/test_crawler.py 237
 
         # now stop it at the end of a bucket (countdown=4), to exercise a
         # different place that checks the time
-        c = PacedCrawler(ss, statefile)
+        c = PacedCrawler(backend, statefp)
         c.load_state()
         c.countdown = 4
         try:
hunk ./src/allmydata/test/test_crawler.py 256
 
         # stop it again at the end of the bucket, check that a new checker
         # picks up correctly
-        c = PacedCrawler(ss, statefile)
+        c = PacedCrawler(backend, statefp)
         c.load_state()
         c.countdown = 4
         try:
hunk ./src/allmydata/test/test_crawler.py 266
         # that should stop at the end of one of the buckets.
         c.save_state()
 
-        c2 = PacedCrawler(ss, statefile)
+        c2 = PacedCrawler(backend, statefp)
         c2.all_buckets = c.all_buckets[:]
         c2.load_state()
         c2.countdown = -1
hunk ./src/allmydata/test/test_crawler.py 277
 
     def test_paced_service(self):
         self.basedir = "crawler/Basic/paced_service"
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 278
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         sis = [self.write(i, ss, serverid) for i in range(10)]
hunk ./src/allmydata/test/test_crawler.py 285
 
-        statefile = os.path.join(self.basedir, "statefile")
-        c = PacedCrawler(ss, statefile)
+        statefp = fp.child("statefile")
+        c = PacedCrawler(backend, statefp)
 
         did_check_progress = [False]
         def check_progress():
hunk ./src/allmydata/test/test_crawler.py 345
         # and read the stdout when it runs.
 
         self.basedir = "crawler/Basic/cpu_usage"
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 346
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         for i in range(10):
hunk ./src/allmydata/test/test_crawler.py 354
             self.write(i, ss, serverid)
 
-        statefile = os.path.join(self.basedir, "statefile")
-        c = ConsumingCrawler(ss, statefile)
+        statefp = fp.child("statefile")
+        c = ConsumingCrawler(backend, statefp)
         c.setServiceParent(self.s)
 
         # this will run as fast as it can, consuming about 50ms per call to
hunk ./src/allmydata/test/test_crawler.py 391
 
     def test_empty_subclass(self):
         self.basedir = "crawler/Basic/empty_subclass"
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 392
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         for i in range(10):
hunk ./src/allmydata/test/test_crawler.py 400
             self.write(i, ss, serverid)
 
-        statefile = os.path.join(self.basedir, "statefile")
-        c = ShareCrawler(ss, statefile)
+        statefp = fp.child("statefile")
+        c = ShareCrawler(backend, statefp)
         c.slow_start = 0
         c.setServiceParent(self.s)
 
hunk ./src/allmydata/test/test_crawler.py 417
         d.addCallback(_done)
         return d
 
-
     def test_oneshot(self):
         self.basedir = "crawler/Basic/oneshot"
hunk ./src/allmydata/test/test_crawler.py 419
-        fileutil.make_dirs(self.basedir)
         serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 420
-        ss = StorageServer(self.basedir, serverid)
+        fp = FilePath(self.basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer(serverid, backend, fp)
         ss.setServiceParent(self.s)
 
         for i in range(30):
hunk ./src/allmydata/test/test_crawler.py 428
             self.write(i, ss, serverid)
 
-        statefile = os.path.join(self.basedir, "statefile")
-        c = OneShotCrawler(ss, statefile)
+        statefp = fp.child("statefile")
+        c = OneShotCrawler(backend, statefp)
         c.setServiceParent(self.s)
 
         d = c.finished_d
hunk ./src/allmydata/test/test_crawler.py 447
             self.failUnlessEqual(s["current-cycle"], None)
         d.addCallback(_check)
         return d
-
hunk ./src/allmydata/test/test_deepcheck.py 23
      ShouldFailMixin
 from allmydata.test.common_util import StallMixin
 from allmydata.test.no_network import GridTestMixin
+from allmydata.scripts import debug
+
 
 timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.
 
hunk ./src/allmydata/test/test_deepcheck.py 905
         d.addErrback(self.explain_error)
         return d
 
-
-
     def set_up_damaged_tree(self):
         # 6.4s
 
hunk ./src/allmydata/test/test_deepcheck.py 989
 
         return d
 
-    def _run_cli(self, argv):
-        stdout, stderr = StringIO(), StringIO()
-        # this can only do synchronous operations
-        assert argv[0] == "debug"
-        runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
-        return stdout.getvalue()
-
     def _delete_some_shares(self, node):
         self.delete_shares_numbered(node.get_uri(), [0,1])
 
hunk ./src/allmydata/test/test_deepcheck.py 995
     def _corrupt_some_shares(self, node):
         for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
             if shnum in (0,1):
-                self._run_cli(["debug", "corrupt-share", sharefile])
+                debug.do_corrupt_share(StringIO(), sharefile)
 
     def _delete_most_shares(self, node):
         self.delete_shares_numbered(node.get_uri(), range(1,10))
hunk ./src/allmydata/test/test_deepcheck.py 1000
 
-
     def check_is_healthy(self, cr, where):
         try:
             self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
hunk ./src/allmydata/test/test_download.py 134
             for shnum in shares_for_server:
                 share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
                 fileutil.fp_make_dirs(share_dir)
-                share_dir.child(str(shnum)).setContent(shares[shnum])
+                share_dir.child(str(shnum)).setContent(shares_for_server[shnum])
 
     def load_shares(self, ignored=None):
         # this uses the data generated by create_shares() to populate the
hunk ./src/allmydata/test/test_hung_server.py 32
 
     def _break(self, servers):
         for ss in servers:
-            self.g.break_server(ss.get_serverid())
+            self.g.break_server(ss.original.get_serverid())
 
     def _hang(self, servers, **kwargs):
         for ss in servers:
hunk ./src/allmydata/test/test_hung_server.py 67
         serverids = [ss.original.get_serverid() for ss in from_servers]
         for (i_shnum, i_serverid, i_sharefp) in self.shares:
             if i_serverid in serverids:
-                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
+                self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)
 
         self.shares = self.find_uri_shares(self.uri)
 
hunk ./src/allmydata/test/test_mutable.py 3670
         # Now execute each assignment by writing the storage.
         for (share, servernum) in assignments:
             sharedata = base64.b64decode(self.sdmf_old_shares[share])
-            storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
+            storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
             fileutil.fp_make_dirs(storage_dir)
             storage_dir.child("%d" % share).setContent(sharedata)
         # ...and verify that the shares are there.
hunk ./src/allmydata/test/test_no_network.py 10
 from allmydata.immutable.upload import Data
 from allmydata.util.consumer import download_to_data
 
+
 class Harness(unittest.TestCase):
     def setUp(self):
         self.s = service.MultiService()
hunk ./src/allmydata/test/test_storage.py 1
-import time, os.path, platform, stat, re, simplejson, struct, shutil
+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
 
 import mock
 
hunk ./src/allmydata/test/test_storage.py 6
 from twisted.trial import unittest
-
 from twisted.internet import defer
 from twisted.application import service
hunk ./src/allmydata/test/test_storage.py 8
+from twisted.python.filepath import FilePath
 from foolscap.api import fireEventually
hunk ./src/allmydata/test/test_storage.py 10
-import itertools
+
 from allmydata import interfaces
 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
 from allmydata.storage.server import StorageServer
hunk ./src/allmydata/test/test_storage.py 14
+from allmydata.storage.backends.disk.disk_backend import DiskBackend
 from allmydata.storage.backends.disk.mutable import MutableDiskShare
 from allmydata.storage.bucket import BucketWriter, BucketReader
 from allmydata.storage.common import DataTooLargeError, \
hunk ./src/allmydata/test/test_storage.py 310
         return self.sparent.stopService()
 
     def workdir(self, name):
-        basedir = os.path.join("storage", "Server", name)
-        return basedir
+        return FilePath("storage").child("Server").child(name)
 
     def create(self, name, reserved_space=0, klass=StorageServer):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 314
-        ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
+        backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
+        ss = klass("\x00" * 20, backend, workdir,
                    stats_provider=FakeStatsProvider())
         ss.setServiceParent(self.sparent)
         return ss
hunk ./src/allmydata/test/test_storage.py 1386
 
     def tearDown(self):
         self.sparent.stopService()
-        shutil.rmtree(self.workdir("MDMFProxies storage test server"))
+        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
 
 
     def write_enabler(self, we_tag):
hunk ./src/allmydata/test/test_storage.py 2781
         return self.sparent.stopService()
 
     def workdir(self, name):
-        basedir = os.path.join("storage", "Server", name)
-        return basedir
+        return FilePath("storage").child("Server").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 2785
-        ss = StorageServer(workdir, "\x00" * 20)
+        backend = DiskBackend(workdir)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
         return ss
 
hunk ./src/allmydata/test/test_storage.py 4061
         }
 
         basedir = "storage/WebStatus/status_right_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
-        expecteddir = ss.sharedir
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
+        ss = StorageServer("\x00" * 20, backend, fp)
+        expecteddir = backend._sharedir
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
hunk ./src/allmydata/test/test_storage.py 4084
 
     def test_readonly(self):
         basedir = "storage/WebStatus/readonly"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp, readonly=True)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
hunk ./src/allmydata/test/test_storage.py 4096
 
     def test_reserved(self):
         basedir = "storage/WebStatus/reserved"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
-        ss.setServiceParent(self.s)
-        w = StorageStatus(ss)
-        html = w.renderSynchronously()
-        self.failUnlessIn("<h1>Storage Server Status</h1>", html)
-        s = remove_tags(html)
-        self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
-
-    def test_huge_reserved(self):
-        basedir = "storage/WebStatus/reserved"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp, readonly=False, reserved_space=10e6)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
hunk ./src/allmydata/test/test_upload.py 3
 # -*- coding: utf-8 -*-
 
-import os, shutil
+import os
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.python.failure import Failure
hunk ./src/allmydata/test/test_upload.py 14
 from allmydata import uri, monitor, client
 from allmydata.immutable import upload, encode
 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
-from allmydata.util import log
+from allmydata.util import log, fileutil
 from allmydata.util.assertutil import precondition
 from allmydata.util.deferredutil import DeferredListShouldSucceed
 from allmydata.test.no_network import GridTestMixin
hunk ./src/allmydata/test/test_upload.py 972
                                         readonly=True))
         # Remove the first share from server 0.
         def _remove_share_0_from_server_0():
-            share_location = self.shares[0][2]
-            os.remove(share_location)
+            self.shares[0][2].remove()
         d.addCallback(lambda ign:
             _remove_share_0_from_server_0())
         # Set happy = 4 in the client.
hunk ./src/allmydata/test/test_upload.py 1847
             self._copy_share_to_server(3, 1)
             storedir = self.get_serverdir(0)
             # remove the storedir, wiping out any existing shares
-            shutil.rmtree(storedir)
+            fileutil.fp_remove(storedir)
             # create an empty storedir to replace the one we just removed
hunk ./src/allmydata/test/test_upload.py 1849
-            os.mkdir(storedir)
+            storedir.mkdir()
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
hunk ./src/allmydata/test/test_upload.py 1888
             self._copy_share_to_server(3, 1)
             storedir = self.get_serverdir(0)
             # remove the storedir, wiping out any existing shares
-            shutil.rmtree(storedir)
+            fileutil.fp_remove(storedir)
             # create an empty storedir to replace the one we just removed
hunk ./src/allmydata/test/test_upload.py 1890
-            os.mkdir(storedir)
+            storedir.mkdir()
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
hunk ./src/allmydata/test/test_web.py 4870
         d.addErrback(self.explain_web_error)
         return d
 
-    def _assert_leasecount(self, ignored, which, expected):
+    def _assert_leasecount(self, which, expected):
         lease_counts = self.count_leases(self.uris[which])
         for (fn, num_leases) in lease_counts:
             if num_leases != expected:
hunk ./src/allmydata/test/test_web.py 4903
                 self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
         d.addCallback(_compute_fileurls)
 
-        d.addCallback(self._assert_leasecount, "one", 1)
-        d.addCallback(self._assert_leasecount, "two", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         d.addCallback(self.CHECK, "one", "t=check") # no add-lease
         def _got_html_good(res):
hunk ./src/allmydata/test/test_web.py 4913
             self.failIf("Not Healthy" in res, res)
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._assert_leasecount, "one", 1)
-        d.addCallback(self._assert_leasecount, "two", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         # this CHECK uses the original client, which uses the same
         # lease-secrets, so it will just renew the original lease
hunk ./src/allmydata/test/test_web.py 4922
         d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._assert_leasecount, "one", 1)
-        d.addCallback(self._assert_leasecount, "two", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         # this CHECK uses an alternate client, which adds a second lease
         d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
hunk ./src/allmydata/test/test_web.py 4930
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._assert_leasecount, "one", 2)
-        d.addCallback(self._assert_leasecount, "two", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
         d.addCallback(_got_html_good)
hunk ./src/allmydata/test/test_web.py 4937
 
-        d.addCallback(self._assert_leasecount, "one", 2)
-        d.addCallback(self._assert_leasecount, "two", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
                       clientnum=1)
hunk ./src/allmydata/test/test_web.py 4945
         d.addCallback(_got_html_good)
 
-        d.addCallback(self._assert_leasecount, "one", 2)
-        d.addCallback(self._assert_leasecount, "two", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 2)
+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
+        d.addCallback(lambda ign: self._assert_leasecount("two", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
 
         d.addErrback(self.explain_web_error)
         return d
hunk ./src/allmydata/test/test_web.py 4989
             self.failUnlessReallyEqual(len(units), 4+1)
         d.addCallback(_done)
 
-        d.addCallback(self._assert_leasecount, "root", 1)
-        d.addCallback(self._assert_leasecount, "one", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
         d.addCallback(_done)
hunk ./src/allmydata/test/test_web.py 4996
 
-        d.addCallback(self._assert_leasecount, "root", 1)
-        d.addCallback(self._assert_leasecount, "one", 1)
-        d.addCallback(self._assert_leasecount, "mutable", 1)
+        d.addCallback(lambda ign: self._assert_leasecount("root", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("one", 1))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 1))
 
         d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
                       clientnum=1)
hunk ./src/allmydata/test/test_web.py 5004
         d.addCallback(_done)
 
-        d.addCallback(self._assert_leasecount, "root", 2)
-        d.addCallback(self._assert_leasecount, "one", 2)
-        d.addCallback(self._assert_leasecount, "mutable", 2)
+        d.addCallback(lambda ign: self._assert_leasecount("root", 2))
+        d.addCallback(lambda ign: self._assert_leasecount("one", 2))
+        d.addCallback(lambda ign: self._assert_leasecount("mutable", 2))
 
         d.addErrback(self.explain_web_error)
         return d
}
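Editorial note (illustrative sketch, not part of the darcs bundle): the test_web.py hunks above wrap _assert_leasecount in a lambda because Twisted's Deferred.addCallback passes the previous callback's result as the first positional argument. Once the old 'ignored' parameter is dropped from _assert_leasecount, that result has to be discarded explicitly. A minimal sketch of the pattern, using hypothetical stand-in names:

    from twisted.internet import defer

    def _assert_leasecount(which, expected):
        # stand-in for the real assertion against count_leases()
        print "would check that %r has %d lease(s)" % (which, expected)

    d = defer.succeed("result-of-previous-step")
    # d.addCallback(_assert_leasecount, "one", 1) would invoke
    # _assert_leasecount("result-of-previous-step", "one", 1), which no longer
    # matches the signature; the lambda discards the chained result instead.
    d.addCallback(lambda ign: _assert_leasecount("one", 1))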
[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
david-sarah@jacaranda.org**20110921221421
 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
] {
hunk ./src/allmydata/scripts/debug.py 642
     /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
     """
     from allmydata.storage.server import si_a2b
-    from allmydata.storage.backends.disk_backend import si_si2dir
+    from allmydata.storage.backends.disk.disk_backend import si_si2dir
     from allmydata.util.encodingutil import quote_filepath
 
     out = options.stdout
hunk ./src/allmydata/scripts/debug.py 648
     si = si_a2b(options.si_s)
     for nodedir in options.nodedirs:
-        sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
+        sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
         if sharedir.exists():
             for sharefp in sharedir.children():
                 print >>out, quote_filepath(sharefp, quotemarks=False)
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
         incominghome = self._incominghomedir.child(str(shnum))
         immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
                                    max_size=max_space_per_bucket)
-        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        bw = BucketWriter(storageserver, immsh, lease_info, canary)
         if self._discard_storage:
             bw.throw_out_all_data = True
         return bw
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
     def unlink(self):
         self._home.remove()
 
+    def get_allocated_size(self):
+        return self._max_size
+
     def get_size(self):
         return self._home.getsize()
 
hunk ./src/allmydata/storage/bucket.py 15
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
 
-    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
+    def __init__(self, ss, immutableshare, lease_info, canary):
         self.ss = ss
hunk ./src/allmydata/storage/bucket.py 17
-        self._max_size = max_size # don't allow the client to write more than this
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
         self.closed = False
hunk ./src/allmydata/storage/bucket.py 27
         self._share.add_lease(lease_info)
 
     def allocated_size(self):
-        return self._max_size
+        return self._share.get_allocated_size()
 
     def remote_write(self, offset, data):
         start = time.time()
hunk ./src/allmydata/storage/crawler.py 480
             self.state["bucket-counts"][cycle] = {}
         self.state["bucket-counts"][cycle][prefix] = len(sharesets)
         if prefix in self.prefixes[:self.num_sample_prefixes]:
-            self.state["storage-index-samples"][prefix] = (cycle, sharesets)
+            si_strings = [shareset.get_storage_index_string() for shareset in sharesets]
+            self.state["storage-index-samples"][prefix] = (cycle, si_strings)
 
     def finished_cycle(self, cycle):
         last_counts = self.state["bucket-counts"].get(cycle, [])
hunk ./src/allmydata/storage/expirer.py 281
         # copy() needs to become a deepcopy
         h["space-recovered"] = s["space-recovered"].copy()
 
-        history = pickle.load(self.historyfp.getContent())
+        history = pickle.loads(self.historyfp.getContent())
         history[cycle] = h
         while len(history) > 10:
             oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 355
         progress = self.get_progress()
 
         state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.load(self.historyfp.getContent())
+        history = pickle.loads(self.historyfp.getContent())
         state["history"] = history
 
         if not progress["cycle-in-progress"]:
hunk ./src/allmydata/test/test_download.py 199
                     for shnum in immutable_shares[clientnum]:
                         if s._shnum == shnum:
                             share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
-                            share_dir.child(str(shnum)).remove()
+                            fileutil.fp_remove(share_dir.child(str(shnum)))
         d.addCallback(_clobber_some_shares)
         d.addCallback(lambda ign: download_to_data(n))
         d.addCallback(_got_data)
hunk ./src/allmydata/test/test_download.py 224
             for clientnum in immutable_shares:
                 for shnum in immutable_shares[clientnum]:
                     share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
-                    share_dir.child(str(shnum)).remove()
+                    fileutil.fp_remove(share_dir.child(str(shnum)))
             # now a new download should fail with NoSharesError. We want a
             # new ImmutableFileNode so it will forget about the old shares.
             # If we merely called create_node_from_uri() without first
hunk ./src/allmydata/test/test_repairer.py 415
         def _test_corrupt(ignored):
             olddata = {}
             shares = self.find_uri_shares(self.uri)
-            for (shnum, serverid, sharefile) in shares:
-                olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
+            for (shnum, serverid, sharefp) in shares:
+                olddata[ (shnum, serverid) ] = sharefp.getContent()
             for sh in shares:
                 self.corrupt_share(sh, common._corrupt_uri_extension)
hunk ./src/allmydata/test/test_repairer.py 419
-            for (shnum, serverid, sharefile) in shares:
-                newdata = open(sharefile, "rb").read()
+            for (shnum, serverid, sharefp) in shares:
+                newdata = sharefp.getContent()
                 self.failIfEqual(olddata[ (shnum, serverid) ], newdata)
         d.addCallback(_test_corrupt)
 
hunk ./src/allmydata/test/test_storage.py 63
 
 class Bucket(unittest.TestCase):
     def make_workdir(self, name):
-        basedir = os.path.join("storage", "Bucket", name)
-        incoming = os.path.join(basedir, "tmp", "bucket")
-        final = os.path.join(basedir, "bucket")
-        fileutil.make_dirs(basedir)
-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
+        basedir = FilePath("storage").child("Bucket").child(name)
+        tmpdir = basedir.child("tmp")
+        tmpdir.makedirs()
+        incoming = tmpdir.child("bucket")
+        final = basedir.child("bucket")
         return incoming, final
 
     def bucket_writer_closed(self, bw, consumed):
hunk ./src/allmydata/test/test_storage.py 87
 
     def test_create(self):
         incoming, final = self.make_workdir("test_create")
-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
-                          FakeCanary())
+        share = ImmutableDiskShare("", 0, incoming, final, 200)
+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         bw.remote_write(0, "a"*25)
         bw.remote_write(25, "b"*25)
         bw.remote_write(50, "c"*25)
hunk ./src/allmydata/test/test_storage.py 97
 
     def test_readwrite(self):
         incoming, final = self.make_workdir("test_readwrite")
-        bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
-                          FakeCanary())
+        share = ImmutableDiskShare("", 0, incoming, 200)
+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         bw.remote_write(0, "a"*25)
         bw.remote_write(25, "b"*25)
         bw.remote_write(50, "c"*7) # last block may be short
hunk ./src/allmydata/test/test_storage.py 140
 
         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
 
-        fileutil.write(final, share_file_data)
+        final.setContent(share_file_data)
 
         mockstorageserver = mock.Mock()
 
hunk ./src/allmydata/test/test_storage.py 179
 
 class BucketProxy(unittest.TestCase):
     def make_bucket(self, name, size):
-        basedir = os.path.join("storage", "BucketProxy", name)
-        incoming = os.path.join(basedir, "tmp", "bucket")
-        final = os.path.join(basedir, "bucket")
-        fileutil.make_dirs(basedir)
-        fileutil.make_dirs(os.path.join(basedir, "tmp"))
-        bw = BucketWriter(self, incoming, final, size, self.make_lease(),
-                          FakeCanary())
+        basedir = FilePath("storage").child("BucketProxy").child(name)
+        tmpdir = basedir.child("tmp")
+        tmpdir.makedirs()
+        incoming = tmpdir.child("bucket")
+        final = basedir.child("bucket")
+        share = ImmutableDiskShare("", 0, incoming, final, size)
+        bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         rb = RemoteBucket()
         rb.target = bw
         return bw, rb, final
hunk ./src/allmydata/test/test_storage.py 206
         pass
 
     def test_create(self):
-        bw, rb, sharefname = self.make_bucket("test_create", 500)
+        bw, rb, sharefp = self.make_bucket("test_create", 500)
         bp = WriteBucketProxy(rb, None,
                               data_size=300,
                               block_size=10,
hunk ./src/allmydata/test/test_storage.py 237
                         for i in (1,9,13)]
         uri_extension = "s" + "E"*498 + "e"
 
-        bw, rb, sharefname = self.make_bucket(name, sharesize)
+        bw, rb, sharefp = self.make_bucket(name, sharesize)
         bp = wbp_class(rb, None,
                        data_size=95,
                        block_size=25,
hunk ./src/allmydata/test/test_storage.py 258
 
         # now read everything back
         def _start_reading(res):
-            br = BucketReader(self, sharefname)
+            br = BucketReader(self, sharefp)
             rb = RemoteBucket()
             rb.target = br
             server = NoNetworkServer("abc", None)
hunk ./src/allmydata/test/test_storage.py 373
         for i, wb in writers.items():
             wb.remote_write(0, "%10d" % i)
             wb.remote_close()
-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
-                                "shares")
-        children_of_storedir = set(os.listdir(storedir))
+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
+        children_of_storedir = sorted([child.basename() for child in storedir.children()])
 
         # Now store another one under another storageindex that has leading
         # chars the same as the first storageindex.
hunk ./src/allmydata/test/test_storage.py 382
         for i, wb in writers.items():
             wb.remote_write(0, "%10d" % i)
             wb.remote_close()
-        storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
-                                "shares")
-        new_children_of_storedir = set(os.listdir(storedir))
+        storedir = self.workdir("test_dont_overfill_dirs").child("shares")
+        new_children_of_storedir = sorted([child.basename() for child in storedir.children()])
         self.failUnlessEqual(children_of_storedir, new_children_of_storedir)
 
     def test_remove_incoming(self):
hunk ./src/allmydata/test/test_storage.py 390
         ss = self.create("test_remove_incoming")
         already, writers = self.allocate(ss, "vid", range(3), 10)
         for i,wb in writers.items():
+            incoming_share_home = wb._share._home
             wb.remote_write(0, "%10d" % i)
             wb.remote_close()
hunk ./src/allmydata/test/test_storage.py 393
-        incoming_share_dir = wb.incominghome
-        incoming_bucket_dir = os.path.dirname(incoming_share_dir)
-        incoming_prefix_dir = os.path.dirname(incoming_bucket_dir)
-        incoming_dir = os.path.dirname(incoming_prefix_dir)
-        self.failIf(os.path.exists(incoming_bucket_dir), incoming_bucket_dir)
-        self.failIf(os.path.exists(incoming_prefix_dir), incoming_prefix_dir)
-        self.failUnless(os.path.exists(incoming_dir), incoming_dir)
+        incoming_bucket_dir = incoming_share_home.parent()
+        incoming_prefix_dir = incoming_bucket_dir.parent()
+        incoming_dir = incoming_prefix_dir.parent()
+        self.failIf(incoming_bucket_dir.exists(), incoming_bucket_dir)
+        self.failIf(incoming_prefix_dir.exists(), incoming_prefix_dir)
+        self.failUnless(incoming_dir.exists(), incoming_dir)
 
     def test_abort(self):
         # remote_abort, when called on a writer, should make sure that
hunk ./src/allmydata/test/test_upload.py 1849
             # remove the storedir, wiping out any existing shares
             fileutil.fp_remove(storedir)
             # create an empty storedir to replace the one we just removed
-            storedir.mkdir()
+            storedir.makedirs()
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
hunk ./src/allmydata/test/test_upload.py 1890
             # remove the storedir, wiping out any existing shares
             fileutil.fp_remove(storedir)
             # create an empty storedir to replace the one we just removed
-            storedir.mkdir()
+            storedir.makedirs()
             client = self.g.clients[0]
             client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
}
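Editorial note (illustrative sketch, not part of the darcs bundle): the patch above removes max_space_per_bucket from BucketWriter and has the writer ask its share for the allocated size instead, via the new get_allocated_size() accessor. A simplified outline of that relationship, assuming hypothetical stripped-down classes:

    class ImmutableShare(object):
        def __init__(self, max_size):
            self._max_size = max_size      # upper bound enforced on writes

        def get_allocated_size(self):
            return self._max_size

    class BucketWriter(object):
        def __init__(self, share):
            self._share = share            # no separate max_size argument

        def allocated_size(self):
            return self._share.get_allocated_size()

    writer = BucketWriter(ImmutableShare(max_size=200))
    assert writer.allocated_size() == 200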
[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
david-sarah@jacaranda.org**20110921222038
 Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
] {
hunk ./src/allmydata/uri.py 829
     def is_mutable(self):
         return False
 
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
+
 class DirectoryURIVerifier(_DirectoryBaseURI):
     implements(IVerifierURI)
 
hunk ./src/allmydata/uri.py 855
     def is_mutable(self):
         return False
 
+    def is_readonly(self):
+        return True
+
+    def get_readonly(self):
+        return self
+
 
 class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
     implements(IVerifierURI)
}
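Editorial note (illustrative sketch, not part of the darcs bundle): the uri.py patch above gives verifier caps the same is_readonly()/get_readonly() accessors as the other cap types, so code that normalizes a cap to its read-only form no longer needs to special-case verifiers. A hypothetical usage sketch:

    def to_readonly(cap):
        # For verifier caps this now simply returns the cap itself.
        if cap.is_readonly():
            return cap
        return cap.get_readonly()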
[Fix some more test failures. refs #999
david-sarah@jacaranda.org**20110922045451
 Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7
] {
hunk ./src/allmydata/scripts/debug.py 42
     from allmydata.util.encodingutil import quote_output
 
     out = options.stdout
+    filename = options['filename']
 
     # check the version, to see if we have a mutable or immutable share
hunk ./src/allmydata/scripts/debug.py 45
-    print >>out, "share filename: %s" % quote_output(options['filename'])
+    print >>out, "share filename: %s" % quote_output(filename)
 
hunk ./src/allmydata/scripts/debug.py 47
-    share = get_share("", 0, fp)
+    share = get_share("", 0, FilePath(filename))
     if share.sharetype == "mutable":
         return dump_mutable_share(options, share)
     else:
hunk ./src/allmydata/storage/backends/disk/mutable.py 85
         self.parent = parent # for logging
 
     def log(self, *args, **kwargs):
-        return self.parent.log(*args, **kwargs)
+        if self.parent:
+            return self.parent.log(*args, **kwargs)
 
     def create(self, serverid, write_enabler):
         assert not self._home.exists()
hunk ./src/allmydata/storage/common.py 6
 class DataTooLargeError(Exception):
     pass
 
-class UnknownMutableContainerVersionError(Exception):
+class UnknownContainerVersionError(Exception):
     pass
 
hunk ./src/allmydata/storage/common.py 9
-class UnknownImmutableContainerVersionError(Exception):
+class UnknownMutableContainerVersionError(UnknownContainerVersionError):
+    pass
+
+class UnknownImmutableContainerVersionError(UnknownContainerVersionError):
     pass
 
 
hunk ./src/allmydata/storage/crawler.py 208
         try:
             state = pickle.loads(self.statefp.getContent())
         except EnvironmentError:
+            if self.statefp.exists():
+                raise
             state = {"version": 1,
                      "last-cycle-finished": None,
                      "current-cycle": None,
hunk ./src/allmydata/storage/server.py 24
 
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
+    BucketCounterClass = BucketCountingCrawler
     DEFAULT_EXPIRATION_POLICY = {
         'enabled': False,
         'mode': 'age',
hunk ./src/allmydata/storage/server.py 70
 
     def _setup_bucket_counter(self):
         statefp = self._statedir.child("bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
+        self.bucket_counter = self.BucketCounterClass(self.backend, statefp)
         self.bucket_counter.setServiceParent(self)
 
     def _setup_lease_checker(self, expiration_policy):
hunk ./src/allmydata/storage/server.py 224
             share.add_or_renew_lease(lease_info)
             alreadygot.add(share.get_shnum())
 
-        for shnum in sharenums - alreadygot:
+        for shnum in set(sharenums) - alreadygot:
             if shareset.has_incoming(shnum):
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
hunk ./src/allmydata/storage/server.py 247
 
     def remote_add_lease(self, storageindex, renew_secret, cancel_secret,
                          owner_num=1):
-        # cancel_secret is no longer used.
         start = time.time()
         self.count("add-lease")
         new_expire_time = time.time() + 31*24*60*60
hunk ./src/allmydata/storage/server.py 250
-        lease_info = LeaseInfo(owner_num, renew_secret,
+        lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret,
                                new_expire_time, self._serverid)
 
         try:
hunk ./src/allmydata/storage/server.py 254
-            self.backend.add_or_renew_lease(lease_info)
+            shareset = self.backend.get_shareset(storageindex)
+            shareset.add_or_renew_lease(lease_info)
         finally:
             self.add_latency("add-lease", time.time() - start)
 
hunk ./src/allmydata/test/test_crawler.py 3
 
 import time
-import os.path
+
 from twisted.trial import unittest
 from twisted.application import service
 from twisted.internet import defer
hunk ./src/allmydata/test/test_crawler.py 10
 from twisted.python.filepath import FilePath
 from foolscap.api import eventually, fireEventually
 
-from allmydata.util import fileutil, hashutil, pollmixin
+from allmydata.util import hashutil, pollmixin
 from allmydata.storage.server import StorageServer, si_b2a
 from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
 from allmydata.storage.backends.disk.disk_backend import DiskBackend
hunk ./src/allmydata/test/test_mutable.py 3025
             cso.stderr = StringIO()
             debug.catalog_shares(cso)
             shares = cso.stdout.getvalue().splitlines()
+            self.failIf(len(shares) < 1, shares)
             oneshare = shares[0] # all shares should be MDMF
             self.failIf(oneshare.startswith("UNKNOWN"), oneshare)
             self.failUnless(oneshare.startswith("MDMF"), oneshare)
hunk ./src/allmydata/test/test_storage.py 1
-import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools
+import time, os.path, platform, re, simplejson, struct, itertools
 
 import mock
 
hunk ./src/allmydata/test/test_storage.py 15
 from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
 from allmydata.storage.server import StorageServer
 from allmydata.storage.backends.disk.disk_backend import DiskBackend
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
 from allmydata.storage.backends.disk.mutable import MutableDiskShare
 from allmydata.storage.bucket import BucketWriter, BucketReader
hunk ./src/allmydata/test/test_storage.py 18
-from allmydata.storage.common import DataTooLargeError, \
+from allmydata.storage.common import DataTooLargeError, UnknownContainerVersionError, \
      UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.crawler import BucketCountingCrawler
hunk ./src/allmydata/test/test_storage.py 88
 
     def test_create(self):
         incoming, final = self.make_workdir("test_create")
-        share = ImmutableDiskShare("", 0, incoming, final, 200)
+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         bw.remote_write(0, "a"*25)
         bw.remote_write(25, "b"*25)
hunk ./src/allmydata/test/test_storage.py 98
 
     def test_readwrite(self):
         incoming, final = self.make_workdir("test_readwrite")
-        share = ImmutableDiskShare("", 0, incoming, 200)
+        share = ImmutableDiskShare("", 0, incoming, final, max_size=200)
         bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
         bw.remote_write(0, "a"*25)
         bw.remote_write(25, "b"*25)
hunk ./src/allmydata/test/test_storage.py 106
         bw.remote_close()
 
         # now read from it
-        br = BucketReader(self, bw.finalhome)
+        br = BucketReader(self, share)
         self.failUnlessEqual(br.remote_read(0, 25), "a"*25)
         self.failUnlessEqual(br.remote_read(25, 25), "b"*25)
         self.failUnlessEqual(br.remote_read(50, 7), "c"*7)
hunk ./src/allmydata/test/test_storage.py 131
         ownernumber = struct.pack('>L', 0)
         renewsecret  = 'THIS LETS ME RENEW YOUR FILE....'
         assert len(renewsecret) == 32
-        cancelsecret = 'THIS LETS ME KILL YOUR FILE HAHA'
+        cancelsecret = 'THIS USED TO LET ME KILL YR FILE'
         assert len(cancelsecret) == 32
         expirationtime = struct.pack('>L', 60*60*24*31) # 31 days in seconds
 
hunk ./src/allmydata/test/test_storage.py 142
         incoming, final = self.make_workdir("test_read_past_end_of_share_data")
 
         final.setContent(share_file_data)
+        share = ImmutableDiskShare("", 0, final)
 
         mockstorageserver = mock.Mock()
 
hunk ./src/allmydata/test/test_storage.py 147
         # Now read from it.
-        br = BucketReader(mockstorageserver, final)
+        br = BucketReader(mockstorageserver, share)
 
         self.failUnlessEqual(br.remote_read(0, len(share_data)), share_data)
 
hunk ./src/allmydata/test/test_storage.py 260
 
         # now read everything back
         def _start_reading(res):
-            br = BucketReader(self, sharefp)
+            share = ImmutableDiskShare("", 0, sharefp)
+            br = BucketReader(self, share)
             rb = RemoteBucket()
             rb.target = br
             server = NoNetworkServer("abc", None)
hunk ./src/allmydata/test/test_storage.py 346
         if 'cygwin' in syslow or 'windows' in syslow or 'darwin' in syslow:
             raise unittest.SkipTest("If your filesystem doesn't support efficient sparse files then it is very expensive (Mac OS X and Windows don't support efficient sparse files).")
 
-        avail = fileutil.get_available_space('.', 512*2**20)
+        avail = fileutil.get_available_space(FilePath('.'), 512*2**20)
         if avail <= 4*2**30:
             raise unittest.SkipTest("This test will spuriously fail if you have less than 4 GiB free on your filesystem.")
 
hunk ./src/allmydata/test/test_storage.py 476
         w[0].remote_write(0, "\xff"*10)
         w[0].remote_close()
 
-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
         f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 478
-        f.seek(0)
-        f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
-        f.close()
+        try:
+            f.seek(0)
+            f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1
+        finally:
+            f.close()
 
         ss.remote_get_buckets("allocate")
 
hunk ./src/allmydata/test/test_storage.py 575
 
     def test_seek(self):
         basedir = self.workdir("test_seek_behavior")
-        fileutil.make_dirs(basedir)
-        filename = os.path.join(basedir, "testfile")
-        f = open(filename, "wb")
-        f.write("start")
-        f.close()
+        basedir.makedirs()
+        fp = basedir.child("testfile")
+        fp.setContent("start")
+
         # mode="w" allows seeking-to-create-holes, but truncates pre-existing
         # files. mode="a" preserves previous contents but does not allow
         # seeking-to-create-holes. mode="r+" allows both.
hunk ./src/allmydata/test/test_storage.py 582
-        f = open(filename, "rb+")
-        f.seek(100)
-        f.write("100")
-        f.close()
-        filelen = os.stat(filename)[stat.ST_SIZE]
+        f = fp.open("rb+")
+        try:
+            f.seek(100)
+            f.write("100")
+        finally:
+            f.close()
+        fp.restat()
+        filelen = fp.getsize()
         self.failUnlessEqual(filelen, 100+3)
hunk ./src/allmydata/test/test_storage.py 591
-        f2 = open(filename, "rb")
-        self.failUnlessEqual(f2.read(5), "start")
-
+        f2 = fp.open("rb")
+        try:
+            self.failUnlessEqual(f2.read(5), "start")
+        finally:
+            f2.close()
 
     def test_leases(self):
         ss = self.create("test_leases")
hunk ./src/allmydata/test/test_storage.py 693
 
     def test_readonly(self):
         workdir = self.workdir("test_readonly")
-        ss = StorageServer(workdir, "\x00" * 20, readonly_storage=True)
+        backend = DiskBackend(workdir, readonly=True)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
 
         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
hunk ./src/allmydata/test/test_storage.py 710
 
     def test_discard(self):
         # discard is really only used for other tests, but we test it anyways
+        # XXX replace this with a null backend test
         workdir = self.workdir("test_discard")
hunk ./src/allmydata/test/test_storage.py 712
-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
 
         already,writers = self.allocate(ss, "vid", [0,1,2], 75)
hunk ./src/allmydata/test/test_storage.py 731
 
     def test_advise_corruption(self):
         workdir = self.workdir("test_advise_corruption")
-        ss = StorageServer(workdir, "\x00" * 20, discard_storage=True)
+        backend = DiskBackend(workdir, readonly=False, discard_storage=True)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
 
         si0_s = base32.b2a("si0")
hunk ./src/allmydata/test/test_storage.py 738
         ss.remote_advise_corrupt_share("immutable", "si0", 0,
                                        "This share smells funny.\n")
-        reportdir = os.path.join(workdir, "corruption-advisories")
-        reports = os.listdir(reportdir)
+        reportdir = workdir.child("corruption-advisories")
+        reports = [child.basename() for child in reportdir.children()]
         self.failUnlessEqual(len(reports), 1)
         report_si0 = reports[0]
hunk ./src/allmydata/test/test_storage.py 742
-        self.failUnlessIn(si0_s, report_si0)
-        f = open(os.path.join(reportdir, report_si0), "r")
-        report = f.read()
-        f.close()
+        self.failUnlessIn(si0_s, str(report_si0))
+        report = reportdir.child(report_si0).getContent()
+
         self.failUnlessIn("type: immutable", report)
         self.failUnlessIn("storage_index: %s" % si0_s, report)
         self.failUnlessIn("share_number: 0", report)
hunk ./src/allmydata/test/test_storage.py 762
         self.failUnlessEqual(set(b.keys()), set([1]))
         b[1].remote_advise_corrupt_share("This share tastes like dust.\n")
 
-        reports = os.listdir(reportdir)
+        reports = [child.basename() for child in reportdir.children()]
         self.failUnlessEqual(len(reports), 2)
hunk ./src/allmydata/test/test_storage.py 764
-        report_si1 = [r for r in reports if si1_s in r][0]
-        f = open(os.path.join(reportdir, report_si1), "r")
-        report = f.read()
-        f.close()
+        report_si1 = [r for r in reports if si1_s in str(r)][0]
+        report = reportdir.child(report_si1).getContent()
+
         self.failUnlessIn("type: immutable", report)
         self.failUnlessIn("storage_index: %s" % si1_s, report)
         self.failUnlessIn("share_number: 1", report)
hunk ./src/allmydata/test/test_storage.py 783
         return self.sparent.stopService()
 
     def workdir(self, name):
-        basedir = os.path.join("storage", "MutableServer", name)
-        return basedir
+        return FilePath("storage").child("MutableServer").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 787
-        ss = StorageServer(workdir, "\x00" * 20)
+        backend = DiskBackend(workdir)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
         return ss
 
hunk ./src/allmydata/test/test_storage.py 810
         cancel_secret = self.cancel_secret(lease_tag)
         rstaraw = ss.remote_slot_testv_and_readv_and_writev
         testandwritev = dict( [ (shnum, ([], [], None) )
-                         for shnum in sharenums ] )
+                                for shnum in sharenums ] )
         readv = []
         rc = rstaraw(storage_index,
                      (write_enabler, renew_secret, cancel_secret),
hunk ./src/allmydata/test/test_storage.py 824
     def test_bad_magic(self):
         ss = self.create("test_bad_magic")
         self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10)
-        fp = ss.backend.get_shareset("si1").sharehomedir.child("0")
+        fp = ss.backend.get_shareset("si1")._sharehomedir.child("0")
         f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 826
-        f.seek(0)
-        f.write("BAD MAGIC")
-        f.close()
+        try:
+            f.seek(0)
+            f.write("BAD MAGIC")
+        finally:
+            f.close()
         read = ss.remote_slot_readv
hunk ./src/allmydata/test/test_storage.py 832
-        e = self.failUnlessRaises(UnknownMutableContainerVersionError,
+
+        # This used to test for UnknownMutableContainerVersionError,
+        # but the current code raises UnknownImmutableContainerVersionError.
+        # (It changed because remote_slot_readv now works with either
+        # mutable or immutable shares.) Since the share file doesn't have
+        # the mutable magic, it's not clear that this is wrong.
+        # For now, accept either exception.
+        e = self.failUnlessRaises(UnknownContainerVersionError,
                                   read, "si1", [0], [(0,10)])
hunk ./src/allmydata/test/test_storage.py 841
-        self.failUnlessIn(" had magic ", str(e))
+        self.failUnlessIn(" had ", str(e))
         self.failUnlessIn(" but we wanted ", str(e))
 
     def test_container_size(self):
hunk ./src/allmydata/test/test_storage.py 1248
 
         # create a random non-numeric file in the bucket directory, to
         # exercise the code that's supposed to ignore those.
-        bucket_dir = ss.backend.get_shareset("si1").sharehomedir
+        bucket_dir = ss.backend.get_shareset("si1")._sharehomedir
         bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n")
 
hunk ./src/allmydata/test/test_storage.py 1251
-        s0 = MutableDiskShare(os.path.join(bucket_dir, "0"))
+        s0 = MutableDiskShare("", 0, bucket_dir.child("0"))
         self.failUnlessEqual(len(list(s0.get_leases())), 1)
 
         # add-lease on a missing storage index is silently ignored
hunk ./src/allmydata/test/test_storage.py 1365
         # note: this is a detail of the storage server implementation, and
         # may change in the future
         prefix = si[:2]
-        prefixdir = os.path.join(self.workdir("test_remove"), "shares", prefix)
-        bucketdir = os.path.join(prefixdir, si)
-        self.failUnless(os.path.exists(prefixdir), prefixdir)
-        self.failIf(os.path.exists(bucketdir), bucketdir)
+        prefixdir = self.workdir("test_remove").child("shares").child(prefix)
+        bucketdir = prefixdir.child(si)
+        self.failUnless(prefixdir.exists(), prefixdir)
+        self.failIf(bucketdir.exists(), bucketdir)
 
 
 class MDMFProxies(unittest.TestCase, ShouldFailMixin):
hunk ./src/allmydata/test/test_storage.py 1420
 
 
     def workdir(self, name):
-        basedir = os.path.join("storage", "MutableServer", name)
-        return basedir
-
+        return FilePath("storage").child("MDMFProxies").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 1424
-        ss = StorageServer(workdir, "\x00" * 20)
+        backend = DiskBackend(workdir)
+        ss = StorageServer("\x00" * 20, backend, workdir)
         ss.setServiceParent(self.sparent)
         return ss
 
hunk ./src/allmydata/test/test_storage.py 2798
         return self.sparent.stopService()
 
     def workdir(self, name):
-        return FilePath("storage").child("Server").child(name)
+        return FilePath("storage").child("Stats").child(name)
 
     def create(self, name):
         workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 2886
             d.callback(None)
 
 class MyStorageServer(StorageServer):
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = MyBucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
+    BucketCounterClass = MyBucketCountingCrawler
+
 
 class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
 
hunk ./src/allmydata/test/test_storage.py 2899
 
     def test_bucket_counter(self):
         basedir = "storage/BucketCounter/bucket_counter"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
         # to make sure we capture the bucket-counting-crawler in the middle
         # of a cycle, we reach in and reduce its maximum slice time to 0. We
         # also make it start sooner than usual.
hunk ./src/allmydata/test/test_storage.py 2958
 
     def test_bucket_counter_cleanup(self):
         basedir = "storage/BucketCounter/bucket_counter_cleanup"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
         # to make sure we capture the bucket-counting-crawler in the middle
         # of a cycle, we reach in and reduce its maximum slice time to 0.
         ss.bucket_counter.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3002
 
     def test_bucket_counter_eta(self):
         basedir = "storage/BucketCounter/bucket_counter_eta"
-        fileutil.make_dirs(basedir)
-        ss = MyStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = MyStorageServer("\x00" * 20, backend, fp)
         ss.bucket_counter.slow_start = 0
         # these will be fired inside finished_prefix()
         hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
hunk ./src/allmydata/test/test_storage.py 3125
 
     def test_basic(self):
         basedir = "storage/LeaseCrawler/basic"
-        fileutil.make_dirs(basedir)
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3141
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         # add a non-sharefile to exercise another code path
-        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
         fp.setContent("I am not a share.\n")
 
         # this is before the crawl has started, so we're not in a cycle yet
hunk ./src/allmydata/test/test_storage.py 3264
             self.failUnlessEqual(rec["configured-sharebytes"], 0)
 
             def _get_sharefile(si):
-                return list(ss._iter_share_files(si))[0]
+                return list(ss.backend.get_shareset(si).get_shares())[0]
             def count_leases(si):
                 return len(list(_get_sharefile(si).get_leases()))
             self.failUnlessEqual(count_leases(immutable_si_0), 1)
hunk ./src/allmydata/test/test_storage.py 3296
         for i,lease in enumerate(sf.get_leases()):
             if lease.renew_secret == renew_secret:
                 lease.expiration_time = new_expire_time
-                f = open(sf.home, 'rb+')
-                sf._write_lease_record(f, i, lease)
-                f.close()
+                f = sf._home.open('rb+')
+                try:
+                    sf._write_lease_record(f, i, lease)
+                finally:
+                    f.close()
                 return
         raise IndexError("unable to renew non-existent lease")
 
hunk ./src/allmydata/test/test_storage.py 3306
     def test_expire_age(self):
         basedir = "storage/LeaseCrawler/expire_age"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'override_lease_duration' to 2000 means that any lease that
         # is more than 2000 seconds old will be expired.
         expiration_policy = {
hunk ./src/allmydata/test/test_storage.py 3317
             'override_lease_duration': 2000,
             'sharetypes': ('mutable', 'immutable'),
         }
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3330
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3332
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3355
 
         sf0 = _get_sharefile(immutable_si_0)
         self.backdate_lease(sf0, self.renew_secrets[0], now - 1000)
-        sf0_size = os.stat(sf0.home).st_size
+        sf0_size = sf0.get_size()
 
         # immutable_si_1 gets an extra lease
         sf1 = _get_sharefile(immutable_si_1)
hunk ./src/allmydata/test/test_storage.py 3363
 
         sf2 = _get_sharefile(mutable_si_2)
         self.backdate_lease(sf2, self.renew_secrets[3], now - 1000)
-        sf2_size = os.stat(sf2.home).st_size
+        sf2_size = sf2.get_size()
 
         # mutable_si_3 gets an extra lease
         sf3 = _get_sharefile(mutable_si_3)
hunk ./src/allmydata/test/test_storage.py 3450
 
     def test_expire_cutoff_date(self):
         basedir = "storage/LeaseCrawler/expire_cutoff_date"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
         # is more than 2000 seconds old will be expired.
         now = time.time()
hunk ./src/allmydata/test/test_storage.py 3463
             'cutoff_date': then,
             'sharetypes': ('mutable', 'immutable'),
         }
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3476
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3478
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3505
 
         sf0 = _get_sharefile(immutable_si_0)
         self.backdate_lease(sf0, self.renew_secrets[0], new_expiration_time)
-        sf0_size = os.stat(sf0.home).st_size
+        sf0_size = sf0.get_size()
 
         # immutable_si_1 gets an extra lease
         sf1 = _get_sharefile(immutable_si_1)
hunk ./src/allmydata/test/test_storage.py 3513
 
         sf2 = _get_sharefile(mutable_si_2)
         self.backdate_lease(sf2, self.renew_secrets[3], new_expiration_time)
-        sf2_size = os.stat(sf2.home).st_size
+        sf2_size = sf2.get_size()
 
         # mutable_si_3 gets an extra lease
         sf3 = _get_sharefile(mutable_si_3)
hunk ./src/allmydata/test/test_storage.py 3605
 
     def test_only_immutable(self):
         basedir = "storage/LeaseCrawler/only_immutable"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
         # is more than 2000 seconds old will be expired.
         now = time.time()
hunk ./src/allmydata/test/test_storage.py 3618
             'cutoff_date': then,
             'sharetypes': ('immutable',),
         }
-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
         lc = ss.lease_checker
         lc.slow_start = 0
         webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3629
         new_expiration_time = now - 3000 + 31*24*60*60
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3631
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3668
 
     def test_only_mutable(self):
         basedir = "storage/LeaseCrawler/only_mutable"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # setting 'cutoff_date' to 2000 seconds ago means that any lease that
         # is more than 2000 seconds old will be expired.
         now = time.time()
hunk ./src/allmydata/test/test_storage.py 3681
             'cutoff_date': then,
             'sharetypes': ('mutable',),
         }
-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
         lc = ss.lease_checker
         lc.slow_start = 0
         webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3692
         new_expiration_time = now - 3000 + 31*24*60*60
 
         def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
         def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3694
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
         def count_leases(si):
             return len(list(_get_sharefile(si).get_leases()))
 
hunk ./src/allmydata/test/test_storage.py 3731
 
     def test_bad_mode(self):
         basedir = "storage/LeaseCrawler/bad_mode"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'bogus',
+            'override_lease_duration': None,
+            'cutoff_date': None,
+            'sharetypes': ('mutable', 'immutable'),
+        }
         e = self.failUnlessRaises(ValueError,
hunk ./src/allmydata/test/test_storage.py 3742
-                                  StorageServer, basedir, "\x00" * 20,
-                                  expiration_mode="bogus")
+                                  StorageServer, "\x00" * 20, backend, fp,
+                                  expiration_policy=expiration_policy)
         self.failUnlessIn("GC mode 'bogus' must be 'age' or 'cutoff-date'", str(e))
 
     def test_parse_duration(self):
hunk ./src/allmydata/test/test_storage.py 3767
 
     def test_limited_history(self):
         basedir = "storage/LeaseCrawler/limited_history"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3801
 
     def test_unpredictable_future(self):
         basedir = "storage/LeaseCrawler/unpredictable_future"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
         # make it start sooner than usual.
         lc = ss.lease_checker
         lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3866
 
     def test_no_st_blocks(self):
         basedir = "storage/LeaseCrawler/no_st_blocks"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
         # A negative 'override_lease_duration' means that the "configured-"
         # space-recovered counts will be non-zero, since all shares will have
         # expired by then.
hunk ./src/allmydata/test/test_storage.py 3878
             'override_lease_duration': -1000,
             'sharetypes': ('mutable', 'immutable'),
         }
-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
 
         # make it start sooner than usual.
         lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3911
             UnknownImmutableContainerVersionError,
             ]
         basedir = "storage/LeaseCrawler/share_corruption"
-        fileutil.make_dirs(basedir)
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
         w = StorageStatus(ss)
         # make it start sooner than usual.
         lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3928
         [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
         first = min(self.sis)
         first_b32 = base32.b2a(first)
-        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
         f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 3930
-        f.seek(0)
-        f.write("BAD MAGIC")
-        f.close()
+        try:
+            f.seek(0)
+            f.write("BAD MAGIC")
+        finally:
+            f.close()
         # if get_share_file() doesn't see the correct mutable magic, it
         # assumes the file is an immutable share, and then
         # immutable.ShareFile sees a bad version. So regardless of which kind
hunk ./src/allmydata/test/test_storage.py 3943
 
         # also create an empty bucket
         empty_si = base32.b2a("\x04"*16)
-        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
         fileutil.fp_make_dirs(empty_bucket_dir)
 
         ss.setServiceParent(self.s)
hunk ./src/allmydata/test/test_storage.py 4031
 
     def test_status(self):
         basedir = "storage/WebStatus/status"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         d = self.render1(w)
hunk ./src/allmydata/test/test_storage.py 4065
         # Some platforms may have no disk stats API. Make sure the code can handle that
         # (test runs on all platforms).
         basedir = "storage/WebStatus/status_no_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
hunk ./src/allmydata/test/test_storage.py 4085
         # If the API to get disk stats exists but a call to it fails, then the status should
         # show that no shares will be accepted, and get_available_space() should be 0.
         basedir = "storage/WebStatus/status_bad_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
         html = w.renderSynchronously()
}
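
For orientation, the test setup these hunks converge on is roughly the following sketch. It is illustrative only: the DiskBackend(FilePath) and StorageServer(serverid, backend, statedir, expiration_policy=...) signatures are taken from the calls above, but the StorageServer import path, the helper names, and the particular policy values are assumptions rather than something this bundle establishes.

    from twisted.python.filepath import FilePath

    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    def make_server_with_expiry(basedir, serverid="\x00" * 20):
        # Hypothetical helper mirroring the per-test setup in the hunks above.
        fp = FilePath(basedir)
        backend = DiskBackend(fp)
        expiration_policy = {
            'enabled': True,
            'mode': 'age',                      # 'age' or 'cutoff-date'
            'override_lease_duration': None,
            'cutoff_date': None,
            'sharetypes': ('mutable', 'immutable'),
        }
        return StorageServer(serverid, backend, fp,
                             expiration_policy=expiration_policy)

    def count_shares(ss, si):
        # Shares for a storage index are now reached via the backend's shareset,
        # rather than via StorageServer._iter_share_files().
        return len(list(ss.backend.get_shareset(si).get_shares()))
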
[Fix most of the crawler tests. refs #999
david-sarah@jacaranda.org**20110922183008
 Ignore-this: 116c0848008f3989ba78d87c07ec783c
] {
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 160
         self._discard_storage = discard_storage
 
     def get_overhead(self):
-        return (fileutil.get_disk_usage(self._sharehomedir) +
-                fileutil.get_disk_usage(self._incominghomedir))
+        return (fileutil.get_used_space(self._sharehomedir) +
+                fileutil.get_used_space(self._incominghomedir))
 
     def get_shares(self):
         """
hunk ./src/allmydata/storage/crawler.py 2
 
-import time, struct
-import cPickle as pickle
+import time, pickle, struct
 from twisted.internet import reactor
 from twisted.application import service
 
hunk ./src/allmydata/storage/crawler.py 205
         #                            shareset to be processed, or None if we
         #                            are sleeping between cycles
         try:
-            state = pickle.loads(self.statefp.getContent())
+            pickled = self.statefp.getContent()
         except EnvironmentError:
             if self.statefp.exists():
                 raise
hunk ./src/allmydata/storage/crawler.py 215
                      "last-complete-prefix": None,
                      "last-complete-bucket": None,
                      }
+        else:
+            state = pickle.loads(pickled)
+
         state.setdefault("current-cycle-start-time", time.time()) # approximate
         self.state = state
         lcp = state["last-complete-prefix"]
hunk ./src/allmydata/storage/crawler.py 246
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        self.statefp.setContent(pickle.dumps(self.state))
+        pickled = pickle.dumps(self.state)
+        self.statefp.setContent(pickled)
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
hunk ./src/allmydata/storage/expirer.py 86
         # initialize history
         if not self.historyfp.exists():
             history = {} # cyclenum -> dict
-            self.historyfp.setContent(pickle.dumps(history))
+            pickled = pickle.dumps(history)
+            self.historyfp.setContent(pickled)
 
     def create_empty_cycle_dict(self):
         recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/expirer.py 111
     def started_cycle(self, cycle):
         self.state["cycle-to-date"] = self.create_empty_cycle_dict()
 
-    def process_storage_index(self, cycle, prefix, container):
+    def process_shareset(self, cycle, prefix, shareset):
         would_keep_shares = []
         wks = None
hunk ./src/allmydata/storage/expirer.py 114
-        sharetype = None
 
hunk ./src/allmydata/storage/expirer.py 115
-        for share in container.get_shares():
-            sharetype = share.sharetype
+        for share in shareset.get_shares():
             try:
                 wks = self.process_share(share)
             except (UnknownMutableContainerVersionError,
hunk ./src/allmydata/storage/expirer.py 128
                 wks = (1, 1, 1, "unknown")
             would_keep_shares.append(wks)
 
-        container_type = None
+        shareset_type = None
         if wks:
hunk ./src/allmydata/storage/expirer.py 130
-            # use the last share's sharetype as the container type
-            container_type = wks[3]
+            # use the last share's type as the shareset type
+            shareset_type = wks[3]
         rec = self.state["cycle-to-date"]["space-recovered"]
         self.increment(rec, "examined-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 134
-        if sharetype:
-            self.increment(rec, "examined-buckets-"+container_type, 1)
+        if shareset_type:
+            self.increment(rec, "examined-buckets-"+shareset_type, 1)
 
hunk ./src/allmydata/storage/expirer.py 137
-        container_diskbytes = container.get_overhead()
+        shareset_diskbytes = shareset.get_overhead()
 
         if sum([wks[0] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 140
-            self.increment_container_space("original", container_diskbytes, sharetype)
+            self.increment_shareset_space("original", shareset_diskbytes, shareset_type)
         if sum([wks[1] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 142
-            self.increment_container_space("configured", container_diskbytes, sharetype)
+            self.increment_shareset_space("configured", shareset_diskbytes, shareset_type)
         if sum([wks[2] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 144
-            self.increment_container_space("actual", container_diskbytes, sharetype)
+            self.increment_shareset_space("actual", shareset_diskbytes, shareset_type)
 
     def process_share(self, share):
         sharetype = share.sharetype
hunk ./src/allmydata/storage/expirer.py 189
 
         so_far = self.state["cycle-to-date"]
         self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
-        self.increment_space("examined", diskbytes, sharetype)
+        self.increment_space("examined", sharebytes, diskbytes, sharetype)
 
         would_keep_share = [1, 1, 1, sharetype]
 
hunk ./src/allmydata/storage/expirer.py 220
             self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
             self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)
 
-    def increment_container_space(self, a, container_diskbytes, container_type):
+    def increment_shareset_space(self, a, shareset_diskbytes, shareset_type):
         rec = self.state["cycle-to-date"]["space-recovered"]
hunk ./src/allmydata/storage/expirer.py 222
-        self.increment(rec, a+"-diskbytes", container_diskbytes)
+        self.increment(rec, a+"-diskbytes", shareset_diskbytes)
         self.increment(rec, a+"-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 224
-        if container_type:
-            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
-            self.increment(rec, a+"-buckets-"+container_type, 1)
+        if shareset_type:
+            self.increment(rec, a+"-diskbytes-"+shareset_type, shareset_diskbytes)
+            self.increment(rec, a+"-buckets-"+shareset_type, 1)
 
     def increment(self, d, k, delta=1):
         if k not in d:
hunk ./src/allmydata/storage/expirer.py 280
         # copy() needs to become a deepcopy
         h["space-recovered"] = s["space-recovered"].copy()
 
-        history = pickle.loads(self.historyfp.getContent())
+        pickled = self.historyfp.getContent()
+        history = pickle.loads(pickled)
         history[cycle] = h
         while len(history) > 10:
             oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 286
             del history[oldcycles[0]]
-        self.historyfp.setContent(pickle.dumps(history))
+        repickled = pickle.dumps(history)
+        self.historyfp.setContent(repickled)
 
     def get_state(self):
         """In addition to the crawler state described in
hunk ./src/allmydata/storage/expirer.py 356
         progress = self.get_progress()
 
         state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.loads(self.historyfp.getContent())
+        pickled = self.historyfp.getContent()
+        history = pickle.loads(pickled)
         state["history"] = history
 
         if not progress["cycle-in-progress"]:
hunk ./src/allmydata/test/test_crawler.py 25
         ShareCrawler.__init__(self, *args, **kwargs)
         self.all_buckets = []
         self.finished_d = defer.Deferred()
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        self.all_buckets.append(storage_index_b32)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        self.all_buckets.append(shareset.get_storage_index_string())
+
     def finished_cycle(self, cycle):
         eventually(self.finished_d.callback, None)
 
hunk ./src/allmydata/test/test_crawler.py 41
         self.all_buckets = []
         self.finished_d = defer.Deferred()
         self.yield_cb = None
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        self.all_buckets.append(storage_index_b32)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        self.all_buckets.append(shareset.get_storage_index_string())
         self.countdown -= 1
         if self.countdown == 0:
             # force a timeout. We restore it in yielding()
hunk ./src/allmydata/test/test_crawler.py 66
         self.accumulated = 0.0
         self.cycles = 0
         self.last_yield = 0.0
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
+
+    def process_shareset(self, cycle, prefix, shareset):
         start = time.time()
         time.sleep(0.05)
         elapsed = time.time() - start
hunk ./src/allmydata/test/test_crawler.py 85
         ShareCrawler.__init__(self, *args, **kwargs)
         self.counter = 0
         self.finished_d = defer.Deferred()
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
+
+    def process_shareset(self, cycle, prefix, shareset):
         self.counter += 1
     def finished_cycle(self, cycle):
         self.finished_d.callback(None)
hunk ./src/allmydata/test/test_storage.py 3041
 
 class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
     stop_after_first_bucket = False
-    def process_bucket(self, *args, **kwargs):
-        LeaseCheckingCrawler.process_bucket(self, *args, **kwargs)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        LeaseCheckingCrawler.process_shareset(self, cycle, prefix, shareset)
         if self.stop_after_first_bucket:
             self.stop_after_first_bucket = False
             self.cpu_slice = -1.0
hunk ./src/allmydata/test/test_storage.py 3051
         if not self.stop_after_first_bucket:
             self.cpu_slice = 500
 
+class InstrumentedStorageServer(StorageServer):
+    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
+
+
 class BrokenStatResults:
     pass
 class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
hunk ./src/allmydata/test/test_storage.py 3069
             setattr(bsr, attrname, getattr(s, attrname))
         return bsr
 
-class InstrumentedStorageServer(StorageServer):
-    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
 class No_ST_BLOCKS_StorageServer(StorageServer):
     LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
 
}
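
The crawler hunks above replace the per-bucket override point process_bucket(cycle, prefix, prefixdir, storage_index_b32) with process_shareset(cycle, prefix, shareset). A subclass written against the new interface looks roughly like this sketch; the ShareCrawler import path is taken from the hunk headers, and the rest mirrors the test crawlers above rather than defining any new API.

    from twisted.internet import defer

    from allmydata.storage.crawler import ShareCrawler

    class EnumeratingCrawler(ShareCrawler):
        """Record the storage index of every shareset visited in a cycle."""
        def __init__(self, *args, **kwargs):
            ShareCrawler.__init__(self, *args, **kwargs)
            self.all_buckets = []
            self.finished_d = defer.Deferred()

        def process_shareset(self, cycle, prefix, shareset):
            # The crawler now hands subclasses a shareset object instead of a
            # bucket directory path and base32 storage index string.
            self.all_buckets.append(shareset.get_storage_index_string())

        def finished_cycle(self, cycle):
            self.finished_d.callback(None)
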
[Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
david-sarah@jacaranda.org**20110922183323
 Ignore-this: a11fb0dd0078ff627cb727fc769ec848
] {
hunk ./src/allmydata/storage/backends/disk/immutable.py 260
         except IndexError:
             self.add_lease(lease_info)
 
+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i, lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+
+        space_freed = 0
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            if len(leases) > 0:
+                f = self._home.open('rb+')
+                try:
+                    for i, lease in enumerate(leases):
+                        self._write_lease_record(f, i, lease)
+                    self._write_num_leases(f, len(leases))
+                    self._truncate_leases(f, len(leases))
+                finally:
+                    f.close()
+                space_freed = self.LEASE_SIZE * num_leases_removed
+            else:
+                space_freed = fileutil.get_used_space(self._home)
+                self.unlink()
+        return space_freed
+
hunk ./src/allmydata/storage/backends/disk/mutable.py 361
         except IndexError:
             self.add_lease(lease_info)
 
+    def cancel_lease(self, cancel_secret):
+        """Remove any leases with the given cancel_secret. If the last lease
+        is cancelled, the file will be removed. Return the number of bytes
+        that were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret."""
+
+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
+
+        accepting_nodeids = set()
+        modified = 0
+        remaining = 0
+        blank_lease = LeaseInfo(owner_num=0,
+                                renew_secret="\x00"*32,
+                                cancel_secret="\x00"*32,
+                                expiration_time=0,
+                                nodeid="\x00"*20)
+        f = self._home.open('rb+')
+        try:
+            for (leasenum, lease) in self._enumerate_leases(f):
+                accepting_nodeids.add(lease.nodeid)
+                if constant_time_compare(lease.cancel_secret, cancel_secret):
+                    self._write_lease_record(f, leasenum, blank_lease)
+                    modified += 1
+                else:
+                    remaining += 1
+            if modified:
+                freed_space = self._pack_leases(f)
+        finally:
+            f.close()
+
+        if modified > 0:
+            if remaining == 0:
+                freed_space = fileutil.get_used_space(self._home)
+                self.unlink()
+            return freed_space
+
+        msg = ("Unable to cancel non-existent lease. I have leases "
+               "accepted by nodeids: ")
+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
+                         for anid in accepting_nodeids])
+        msg += " ."
+        raise IndexError(msg)
+
+    def _pack_leases(self, f):
+        # TODO: reclaim space from cancelled leases
+        return 0
+
     def _read_write_enabler_and_nodeid(self, f):
         f.seek(0)
         data = f.read(self.HEADER_SIZE)
}
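
Both reinstated methods share one contract: cancel any lease identified by the given cancel secret, return the number of bytes freed (possibly by deleting the share file when the last lease goes away), and raise IndexError when no lease matches. A minimal sketch of a caller honouring that contract follows; the share object and secret here are placeholders, not APIs defined by this bundle.

    def cancel_one_lease(share, cancel_secret):
        """Cancel a lease on `share` (an ImmutableDiskShare or MutableDiskShare
        as patched above). Return the number of bytes freed, or 0 if no lease
        carried the given 32-byte cancel secret."""
        try:
            return share.cancel_lease(cancel_secret)
        except IndexError:
            # No matching lease; nothing was freed.
            return 0
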

Context:

[test_mutable.py: skip test_publish_surprise_mdmf, which is causing an error. refs #1534, #393
david-sarah@jacaranda.org**20110920183319
 Ignore-this: 6fb020e09e8de437cbcc2c9f57835b31
] 
[test/test_mutable: write publish surprise test for MDMF, rename existing test_publish_surprise to clarify that it is for SDMF
kevan@isnotajoke.com**20110918003657
 Ignore-this: 722c507e8f5b537ff920e0555951059a
] 
[test/test_mutable: refactor publish surprise test into common test fixture, rewrite test_publish_surprise to use test fixture
kevan@isnotajoke.com**20110918003533
 Ignore-this: 6f135888d400a99a09b5f9a4be443b6e
] 
[mutable/publish: add errback immediately after write, don't consume errors from other parts of the publisher
kevan@isnotajoke.com**20110917234708
 Ignore-this: 12bf6b0918a5dc5ffc30ece669fad51d
] 
[.darcs-boringfile: minor cleanups.
david-sarah@jacaranda.org**20110920154918
 Ignore-this: cab78e30d293da7e2832207dbee2ffeb
] 
[uri.py: fix two interface violations in verifier URI classes. refs #1474
david-sarah@jacaranda.org**20110920030156
 Ignore-this: 454ddd1419556cb1d7576d914cb19598
] 
[Make platform-detection code tolerate linux-3.0, patch by zooko.
Brian Warner <warner@lothar.com>**20110915202620
 Ignore-this: af63cf9177ae531984dea7a1cad03762
 
 Otherwise address-autodetection can't find ifconfig. refs #1536
] 
[test_web.py: fix a bug in _count_leases that was causing us to check only the lease count of one share file, not of all share files as intended.
david-sarah@jacaranda.org**20110915185126
 Ignore-this: d96632bc48d770b9b577cda1bbd8ff94
] 
[docs: insert a newline at the beginning of known_issues.rst to see if this makes it render more nicely in trac
zooko@zooko.com**20110914064728
 Ignore-this: aca15190fa22083c5d4114d3965f5d65
] 
[docs: remove the coding: utf-8 declaration at the top of known_issues.rst, since the trac rendering doesn't hide it
zooko@zooko.com**20110914055713
 Ignore-this: 941ed32f83ead377171aa7a6bd198fcf
] 
[docs: more cleanup of known_issues.rst -- now it passes "rst2html --verbose" without comment
zooko@zooko.com**20110914055419
 Ignore-this: 5505b3d76934bd97d0312cc59ed53879
] 
[docs: more formatting improvements to known_issues.rst
zooko@zooko.com**20110914051639
 Ignore-this: 9ae9230ec9a38a312cbacaf370826691
] 
[docs: reformatting of known_issues.rst
zooko@zooko.com**20110914050240
 Ignore-this: b8be0375079fb478be9d07500f9aaa87
] 
[docs: fix formatting error in docs/known_issues.rst
zooko@zooko.com**20110914045909
 Ignore-this: f73fe74ad2b9e655aa0c6075acced15a
] 
[merge Tahoe-LAFS v1.8.3 release announcement with trunk
zooko@zooko.com**20110913210544
 Ignore-this: 163f2c3ddacca387d7308e4b9332516e
] 
[docs: release notes for Tahoe-LAFS v1.8.3
zooko@zooko.com**20110913165826
 Ignore-this: 84223604985b14733a956d2fbaeb4e9f
] 
[tests: bump up the timeout in this test that fails on FreeStorm's CentOS in order to see if it is just very slow
zooko@zooko.com**20110913024255
 Ignore-this: 6a86d691e878cec583722faad06fb8e4
] 
[interfaces: document that the 'fills-holes-with-zero-bytes' key should be used to detect whether a storage server has that behavior. refs #1528
david-sarah@jacaranda.org**20110913002843
 Ignore-this: 1a00a6029d40f6792af48c5578c1fd69
] 
[CREDITS: more CREDITS for Kevan and David-Sarah
zooko@zooko.com**20110912223357
 Ignore-this: 4ea8f0d6f2918171d2f5359c25ad1ada
] 
[merge NEWS about the mutable file bounds fixes with NEWS about work-in-progress
zooko@zooko.com**20110913205521
 Ignore-this: 4289a4225f848d6ae6860dd39bc92fa8
] 
[doc: add NEWS item about fixes to potential palimpsest issues in mutable files
zooko@zooko.com**20110912223329
 Ignore-this: 9d63c95ddf95c7d5453c94a1ba4d406a
 ref. #1528
] 
[merge the NEWS about the security fix (#1528) with the work-in-progress NEWS
zooko@zooko.com**20110913205153
 Ignore-this: 88e88a2ad140238c62010cf7c66953fc
] 
[doc: add NEWS entry about the issue which allows unauthorized deletion of shares
zooko@zooko.com**20110912223246
 Ignore-this: 77e06d09103d2ef6bb51ea3e5d6e80b0
 ref. #1528
] 
[doc: add entry in known_issues.rst about the issue which allows unauthorized deletion of shares
zooko@zooko.com**20110912223135
 Ignore-this: b26c6ea96b6c8740b93da1f602b5a4cd
 ref. #1528
] 
[storage: more paranoid handling of bounds and palimpsests in mutable share files
zooko@zooko.com**20110912222655
 Ignore-this: a20782fa423779ee851ea086901e1507
 * storage server ignores requests to extend shares by sending a new_length
 * storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
 * storage server zeroes out lease info at the old location when moving it to a new location
 ref. #1528
] 
[storage: test that the storage server ignores requests to extend shares by sending a new_length, and that the storage server fills exposed holes with 0 to avoid "palimpsest" exposure of previous contents
zooko@zooko.com**20110912222554
 Ignore-this: 61ebd7b11250963efdf5b1734a35271
 ref. #1528
] 
[immutable: prevent clients from reading past the end of share data, which would allow them to learn the cancellation secret
zooko@zooko.com**20110912222458
 Ignore-this: da1ebd31433ea052087b75b2e3480c25
 Declare explicitly that we prevent this problem in the server's version dict.
 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
] 
[storage: remove the storage server's "remote_cancel_lease" function
zooko@zooko.com**20110912222331
 Ignore-this: 1c32dee50e0981408576daffad648c50
 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
] 
[storage: test that the storage server does *not* have a "remote_cancel_lease" function
zooko@zooko.com**20110912222324
 Ignore-this: 21c652009704652d35f34651f98dd403
 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
 ref. #1528
] 
[immutable: test whether the server allows clients to read past the end of share data, which would allow them to learn the cancellation secret
zooko@zooko.com**20110912221201
 Ignore-this: 376e47b346c713d37096531491176349
 Also test whether the server explicitly declares that it prevents this problem.
 ref #1528
] 
[Retrieve._activate_enough_peers: rewrite Verify logic
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 9367c11e1eacbf025f75ce034030d717
] 
[Retrieve: implement/test stopProducing
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 47b2c3df7dc69835e0a066ca12e3c178
] 
[move DownloadStopped from download.common to interfaces
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 8572acd3bb16e50341dbed8eb1d90a50
] 
[retrieve.py: remove vestigial self._validated_readers
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: faab2ec14e314a53a2ffb714de626e2d
] 
[Retrieve: rewrite flow-control: use a top-level loop() to catch all errors
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: e162d2cd53b3d3144fc6bc757e2c7714
 
 This ought to close the potential for dropped errors and hanging downloads.
 Verify needs to be examined, I may have broken it, although all tests pass.
] 
[Retrieve: merge _validate_active_prefixes into _add_active_peers
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: d3ead31e17e69394ae7058eeb5beaf4c
] 
[Retrieve: remove the initial prefix-is-still-good check
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: da66ee51c894eaa4e862e2dffb458acc
 
 This check needs to be done with each fetch from the storage server, to
 detect when someone has changed the share (i.e. our servermap goes stale).
 Doing it just once at the beginning of retrieve isn't enough: a write might
 occur after the first segment but before the second, etc.
 
 _try_to_validate_prefix() was not removed: it will be used by the future
 check-with-each-fetch code.
 
 test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
 fails until this check is brought back. (the corruption it applies only
 touches the prefix, not the block data, so the check-less retrieve actually
 tolerates it). Don't forget to re-enable it once the check is brought back.
] 
[MDMFSlotReadProxy: remove the queue
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 96673cb8dda7a87a423de2f4897d66d2
 
 This is a neat trick to reduce Foolscap overhead, but the need for an
 explicit flush() complicates the Retrieve path and makes it prone to
 lost-progress bugs.
 
 Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
 same share in a row, a limitation exposed by turning off the queue.
] 
[rearrange Retrieve: first step, shouldn't change order of execution
Brian Warner <warner@lothar.com>**20110909181149
 Ignore-this: e3006368bfd2802b82ea45c52409e8d6
] 
[CLI: test_cli.py -- remove an unnecessary call in test_mkdir_mutable_type. refs #1527
david-sarah@jacaranda.org**20110906183730
 Ignore-this: 122e2ffbee84861c32eda766a57759cf
] 
[CLI: improve test for 'tahoe mkdir --mutable-type='. refs #1527
david-sarah@jacaranda.org**20110906183020
 Ignore-this: f1d4598e6c536f0a2b15050b3bc0ef9d
] 
[CLI: make the --mutable-type option value for 'tahoe put' and 'tahoe mkdir' case-insensitive, and change --help for these commands accordingly. fixes #1527
david-sarah@jacaranda.org**20110905020922
 Ignore-this: 75a6df0a2df9c467d8c010579e9a024e
] 
[cli: make --mutable-type imply --mutable in 'tahoe put'
Kevan Carstensen <kevan@isnotajoke.com>**20110903190920
 Ignore-this: 23336d3c43b2a9554e40c2a11c675e93
] 
[SFTP: add a comment about a subtle interaction between OverwriteableFileConsumer and GeneralSFTPFile, and test the case it is commenting on.
david-sarah@jacaranda.org**20110903222304
 Ignore-this: 980c61d4dd0119337f1463a69aeebaf0
] 
[improve the storage/mutable.py asserts even more
warner@lothar.com**20110901160543
 Ignore-this: 5b2b13c49bc4034f96e6e3aaaa9a9946
] 
[storage/mutable.py: special characters in struct.foo arguments indicate standard as opposed to native sizes, we should be using these characters in these asserts
wilcoxjg@gmail.com**20110901084144
 Ignore-this: 28ace2b2678642e4d7269ddab8c67f30
] 
[docs/write_coordination.rst: fix formatting and add more specific warning about access via sshfs.
david-sarah@jacaranda.org**20110831232148
 Ignore-this: cd9c851d3eb4e0a1e088f337c291586c
] 
[test_mutable.Version: consolidate some tests, reduce runtime from 19s to 15s
warner@lothar.com**20110831050451
 Ignore-this: 64815284d9e536f8f3798b5f44cf580c
] 
[mutable/retrieve: handle the case where self._read_length is 0.
Kevan Carstensen <kevan@isnotajoke.com>**20110830210141
 Ignore-this: fceafbe485851ca53f2774e5a4fd8d30
 
 Note that the downloader will still fetch a segment for a zero-length
 read, which is wasteful. Fixing that isn't specifically required to fix
 #1512, but it should probably be fixed before 1.9.
] 
[NEWS: added summary of all changes since 1.8.2. Needs editing.
Brian Warner <warner@lothar.com>**20110830163205
 Ignore-this: 273899b37a899fc6919b74572454b8b2
] 
[test_mutable.Update: only upload the files needed for each test. refs #1500
Brian Warner <warner@lothar.com>**20110829072717
 Ignore-this: 4d2ab4c7523af9054af7ecca9c3d9dc7
 
 This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
 It also fixes a couple of places where a Deferred was being dropped, which
 would cause two tests to run in parallel and also confuse error reporting.
] 
[Let Uploader retain History instead of passing it into upload(). Fixes #1079.
Brian Warner <warner@lothar.com>**20110829063246
 Ignore-this: 3902c58ec12bd4b2d876806248e19f17
 
 This consistently records all immutable uploads in the Recent Uploads And
 Downloads page, regardless of code path. Previously, certain webapi upload
 operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
 object and were left out.
] 
[Fix mutable publish/retrieve timing status displays. Fixes #1505.
Brian Warner <warner@lothar.com>**20110828232221
 Ignore-this: 4080ce065cf481b2180fd711c9772dd6
 
 publish:
 * encrypt and encode times are cumulative, not just current-segment
 
 retrieve:
 * same for decrypt and decode times
 * update "current status" to include segment number
 * set status to Finished/Failed when download is complete
 * set progress to 1.0 when complete
 
 More improvements to consider:
 * progress is currently 0% or 100%: should calculate how many segments are
   involved (remembering retrieve can be less than the whole file) and set it
   to a fraction
 * "fetch" time is fuzzy: what we want is to know how much of the delay is not
   our own fault, but since we do decode/decrypt work while waiting for more
   shares, it's not straightforward
] 
[Teach 'tahoe debug catalog-shares' about MDMF. Closes #1507.
Brian Warner <warner@lothar.com>**20110828080931
 Ignore-this: 56ef2951db1a648353d7daac6a04c7d1
] 
[debug.py: remove some dead comments
Brian Warner <warner@lothar.com>**20110828074556
 Ignore-this: 40e74040dd4d14fd2f4e4baaae506b31
] 
[hush pyflakes
Brian Warner <warner@lothar.com>**20110828074254
 Ignore-this: bef9d537a969fa82fe4decc4ba2acb09
] 
[MutableFileNode.set_downloader_hints: never depend upon order of dict.values()
Brian Warner <warner@lothar.com>**20110828074103
 Ignore-this: caaf1aa518dbdde4d797b7f335230faa
 
 The old code was calculating the "extension parameters" (a list) from the
 downloader hints (a dictionary) with hints.values(), which is not stable, and
 would result in corrupted filecaps (with the 'k' and 'segsize' hints
 occasionally swapped). The new code always uses [k,segsize].
] 
[layout.py: fix MDMF share layout documentation
Brian Warner <warner@lothar.com>**20110828073921
 Ignore-this: 3f13366fed75b5e31b51ae895450a225
] 
[teach 'tahoe debug dump-share' about MDMF and offsets. refs #1507
Brian Warner <warner@lothar.com>**20110828073834
 Ignore-this: 3a9d2ef9c47a72bf1506ba41199a1dea
] 
[test_mutable.Version.test_debug: use splitlines() to fix buildslaves
Brian Warner <warner@lothar.com>**20110828064728
 Ignore-this: c7f6245426fc80b9d1ae901d5218246a
 
 Any slave running in a directory with spaces in the name was miscounting
 shares, causing the test to fail.
] 
[test_mutable.Version: exercise 'tahoe debug find-shares' on MDMF. refs #1507
Brian Warner <warner@lothar.com>**20110828005542
 Ignore-this: cb20bea1c28bfa50a72317d70e109672
 
 Also changes NoNetworkGrid to put shares in storage/shares/ .
] 
[test_mutable.py: oops, missed a .todo
Brian Warner <warner@lothar.com>**20110828002118
 Ignore-this: fda09ae86481352b7a627c278d2a3940
] 
[test_mutable: merge davidsarah's patch with my Version refactorings
warner@lothar.com**20110827235707
 Ignore-this: b5aaf481c90d99e33827273b5d118fd0
] 
[Make the immutable/read-only constraint checking for MDMF URIs identical to that for SSK URIs. refs #393
david-sarah@jacaranda.org**20110823012720
 Ignore-this: e1f59d7ff2007c81dbef2aeb14abd721
] 
[Additional tests for MDMF URIs and for zero-length files. refs #393
david-sarah@jacaranda.org**20110823011532
 Ignore-this: a7cc0c09d1d2d72413f9cd227c47a9d5
] 
[Additional tests for zero-length partial reads and updates to mutable versions. refs #393
david-sarah@jacaranda.org**20110822014111
 Ignore-this: 5fc6f4d06e11910124e4a277ec8a43ea
] 
[test_mutable.Version: factor out some expensive uploads, save 25% runtime
Brian Warner <warner@lothar.com>**20110827232737
 Ignore-this: ea37383eb85ea0894b254fe4dfb45544
] 
[SDMF: update filenode with correct k/N after Retrieve. Fixes #1510.
Brian Warner <warner@lothar.com>**20110827225031
 Ignore-this: b50ae6e1045818c400079f118b4ef48
 
 Without this, we get a regression when modifying a mutable file that was
 created with more shares (larger N) than our current tahoe.cfg . The
 modification attempt creates new versions of the (0,1,..,newN-1) shares, but
 leaves the old versions of the (newN,..,oldN-1) shares alone (and throws a
 assertion error in SDMFSlotWriteProxy.finish_publishing in the process).
 
 The mixed versions that result (some shares with e.g. N=10, some with N=20,
 such that both versions are recoverable) cause problems for the Publish code,
 even before MDMF landed. Might be related to refs #1390 and refs #1042.
] 
[layout.py: annotate assertion to figure out 'tahoe backup' failure
Brian Warner <warner@lothar.com>**20110827195253
 Ignore-this: 9b92b954e3ed0d0f80154fff1ff674e5
] 
[Add 'tahoe debug dump-cap' support for MDMF, DIR2-CHK, DIR2-MDMF. refs #1507.
Brian Warner <warner@lothar.com>**20110827195048
 Ignore-this: 61c6af5e33fc88e0251e697a50addb2c
 
 This also adds tests for all those cases, and fixes an omission in uri.py
 that broke parsing of DIR2-MDMF-Verifier and DIR2-CHK-Verifier.
] 
[MDMF: more writable/writeable consistentifications
warner@lothar.com**20110827190602
 Ignore-this: 22492a9e20c1819ddb12091062888b55
] 
[MDMF: s/Writable/Writeable/g, for consistency with existing SDMF code
warner@lothar.com**20110827183357
 Ignore-this: 9dd312acedbdb2fc2f7bef0d0fb17c0b
] 
[setup.cfg: remove no-longer-supported test_mac_diskimage alias. refs #1479
david-sarah@jacaranda.org**20110826230345
 Ignore-this: 40e908b8937322a290fb8012bfcad02a
] 
[test_mutable.Update: increase timeout from 120s to 400s, slaves are failing
Brian Warner <warner@lothar.com>**20110825230140
 Ignore-this: 101b1924a30cdbda9b2e419e95ca15ec
] 
[tests: fix check_memory test
zooko@zooko.com**20110825201116
 Ignore-this: 4d66299fa8cb61d2ca04b3f45344d835
 fixes #1503
] 
[TAG allmydata-tahoe-1.9.0a1
warner@lothar.com**20110825161122
 Ignore-this: 3cbf49f00dbda58189f893c427f65605
] 
Patch bundle hash:
7e9fd7ca66bba646aab82d2886530d0caa025f44