Sat Oct 17 18:30:13 PDT 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.

Fri Oct 30 02:19:08 PDT 2009  "Kevan Carstensen" <kevan@isnotajoke.com>
  * Refactor some behavior into a mixin, and add tests for the behavior described in #778

Tue Nov  3 19:36:02 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter tests to use the new form of set_shareholders

Tue Nov  3 19:42:32 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Minor tweak to an existing test -- make the first server read-write, instead of read-only

Wed Nov  4 03:13:24 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add a test for upload.shares_by_server

Wed Nov  4 03:28:49 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add more tests for comment:53 in ticket #778

Sun Nov  8 16:37:35 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers

Mon Nov 16 11:23:34 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Re-work 'test_upload.py' to be more readable; add more tests for #778

Sun Nov 22 17:20:08 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add tests for the behavior described in #834.

Fri Dec  4 20:34:53 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.

Thu Jan  7 10:13:25 PST 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter various unit tests to work with the new happy behavior

Fri May  7 15:00:51 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Revisions of the #778 tests, per reviewers' comments
  
  - Fix comments and confusing naming.
  - Add tests for the new error messages suggested by David-Sarah
    and Zooko.
  - Alter existing tests for new error messages.
  - Make sure that the tests continue to work with the trunk.
  - Add a test for a mutual disjointedness assertion that I added to
    upload.servers_of_happiness.
  - Fix the comments to correctly reflect read-only behavior.
  - Add a test for an edge case in should_add_server.
  - Add an assertion to make sure that share redistribution works as it
    should.
  - Alter tests to work with revised servers_of_happiness semantics.
  - Remove tests for should_add_server, since that function no longer exists.
  - Alter tests to know about merge_peers, and to use it before calling 
    servers_of_happiness.
  - Add tests for merge_peers.
  - Add Zooko's puzzles to the tests.
  - Edit encoding tests to expect the new kind of failure message.
  

New patches:

[Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.
Kevan Carstensen <kevan@isnotajoke.com>**20091018013013
 Ignore-this: e12cd7c4ddeb65305c5a7e08df57c754
] {
hunk ./src/allmydata/test/no_network.py 219
             c.setServiceParent(self)
             self.clients.append(c)
 
-    def make_server(self, i):
+    def make_server(self, i, readonly=False):
         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
         serverdir = os.path.join(self.basedir, "servers",
                                  idlib.shortnodeid_b2a(serverid))
hunk ./src/allmydata/test/no_network.py 224
         fileutil.make_dirs(serverdir)
-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats())
+        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
+                           readonly_storage=readonly)
         return ss
 
     def add_server(self, i, ss):
}
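
For reference, a minimal sketch (not part of the patch itself) of how a
GridTestMixin-based test can exercise the new readonly flag; the server
number and test name are illustrative, and GridTestMixin/self.g come from
allmydata/test/no_network.py as used by the tests further below:

    from twisted.trial import unittest
    from no_network import GridTestMixin   # as imported in test_upload.py

    class ReadonlyServerExample(GridTestMixin, unittest.TestCase):
        def test_add_readonly_server(self):
            self.basedir = self.mktemp()
            self.set_up_grid(num_clients=1, num_servers=1)
            # make_server() now takes readonly=...; the resulting
            # StorageServer is created with readonly_storage=True and will
            # refuse to accept new shares.
            ss = self.g.make_server(1, readonly=True)
            self.g.add_server(1, ss)
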
[Refactor some behavior into a mixin, and add tests for the behavior described in #778
"Kevan Carstensen" <kevan@isnotajoke.com>**20091030091908
 Ignore-this: a6f9797057ca135579b249af3b2b66ac
] {
hunk ./src/allmydata/test/test_upload.py 2
 
-import os
+import os, shutil
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.python.failure import Failure
hunk ./src/allmydata/test/test_upload.py 12
 
 import allmydata # for __full_version__
 from allmydata import uri, monitor, client
-from allmydata.immutable import upload
+from allmydata.immutable import upload, encode
 from allmydata.interfaces import FileTooLargeError, NoSharesError, \
      NotEnoughSharesError
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/test/test_upload.py 20
 from no_network import GridTestMixin
 from common_util import ShouldFailMixin
 from allmydata.storage_client import StorageFarmBroker
+from allmydata.storage.server import storage_index_to_dir
 
 MiB = 1024*1024
 
hunk ./src/allmydata/test/test_upload.py 91
 class ServerError(Exception):
     pass
 
+class SetDEPMixin:
+    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
+        p = {"k": k,
+             "happy": happy,
+             "n": n,
+             "max_segment_size": max_segsize,
+             }
+        self.node.DEFAULT_ENCODING_PARAMETERS = p
+
 class FakeStorageServer:
     def __init__(self, mode):
         self.mode = mode
hunk ./src/allmydata/test/test_upload.py 247
     u = upload.FileHandle(fh, convergence=None)
     return uploader.upload(u)
 
-class GoodServer(unittest.TestCase, ShouldFailMixin):
+class GoodServer(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
     def setUp(self):
         self.node = FakeClient(mode="good")
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 254
         self.u.running = True
         self.u.parent = self.node
 
-    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
-        p = {"k": k,
-             "happy": happy,
-             "n": n,
-             "max_segment_size": max_segsize,
-             }
-        self.node.DEFAULT_ENCODING_PARAMETERS = p
-
     def _check_small(self, newuri, size):
         u = uri.from_string(newuri)
         self.failUnless(isinstance(u, uri.LiteralFileURI))
hunk ./src/allmydata/test/test_upload.py 377
         d.addCallback(self._check_large, SIZE_LARGE)
         return d
 
-class ServerErrors(unittest.TestCase, ShouldFailMixin):
+class ServerErrors(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
     def make_node(self, mode, num_servers=10):
         self.node = FakeClient(mode, num_servers)
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 677
         d.addCallback(_done)
         return d
 
-class EncodingParameters(GridTestMixin, unittest.TestCase):
+class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
+    ShouldFailMixin):
+    def _do_upload_with_broken_servers(self, servers_to_break):
+        """
+        I act like a normal upload, but before I send the results of
+        Tahoe2PeerSelector to the Encoder, I break the first servers_to_break
+        PeerTrackers in the used_peers part of the return result.
+        """
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        broker = self.g.clients[0].storage_broker
+        sh     = self.g.clients[0]._secret_holder
+        data = upload.Data("data" * 10000, convergence="")
+        data.encoding_param_k = 3
+        data.encoding_param_happy = 4
+        data.encoding_param_n = 10
+        uploadable = upload.EncryptAnUploadable(data)
+        encoder = encode.Encoder()
+        encoder.set_encrypted_uploadable(uploadable)
+        status = upload.UploadStatus()
+        selector = upload.Tahoe2PeerSelector("dglev", "test", status)
+        storage_index = encoder.get_param("storage_index")
+        share_size = encoder.get_param("share_size")
+        block_size = encoder.get_param("block_size")
+        num_segments = encoder.get_param("num_segments")
+        d = selector.get_shareholders(broker, sh, storage_index,
+                                      share_size, block_size, num_segments,
+                                      10, 4)
+        def _have_shareholders((used_peers, already_peers)):
+            assert servers_to_break <= len(used_peers)
+            for index in xrange(servers_to_break):
+                server = list(used_peers)[index]
+                for share in server.buckets.keys():
+                    server.buckets[share].abort()
+            buckets = {}
+            for peer in used_peers:
+                buckets.update(peer.buckets)
+            encoder.set_shareholders(buckets)
+            d = encoder.start()
+            return d
+        d.addCallback(_have_shareholders)
+        return d
+
+    def _add_server_with_share(self, server_number, share_number=None,
+                               readonly=False):
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        assert self.shares, "I tried to find shares at self.shares, but failed"
+        ss = self.g.make_server(server_number, readonly)
+        self.g.add_server(server_number, ss)
+        if share_number:
+            # Copy share i from the directory associated with the first 
+            # storage server to the directory associated with this one.
+            old_share_location = self.shares[share_number][2]
+            new_share_location = os.path.join(ss.storedir, "shares")
+            si = uri.from_string(self.uri).get_storage_index()
+            new_share_location = os.path.join(new_share_location,
+                                              storage_index_to_dir(si))
+            if not os.path.exists(new_share_location):
+                os.makedirs(new_share_location)
+            new_share_location = os.path.join(new_share_location,
+                                              str(share_number))
+            shutil.copy(old_share_location, new_share_location)
+            shares = self.find_shares(self.uri)
+            # Make sure that the storage server has the share.
+            self.failUnless((share_number, ss.my_nodeid, new_share_location)
+                            in shares)
+
+    def _setup_and_upload(self):
+        """
+        I set up a NoNetworkGrid with a single server and client,
+        upload a file to it, store its uri in self.uri, and store its
+        sharedata in self.shares.
+        """
+        self.set_up_grid(num_clients=1, num_servers=1)
+        client = self.g.clients[0]
+        client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
+        data = upload.Data("data" * 10000, convergence="")
+        self.data = data
+        d = client.upload(data)
+        def _store_uri(ur):
+            self.uri = ur.uri
+        d.addCallback(_store_uri)
+        d.addCallback(lambda ign:
+            self.find_shares(self.uri))
+        def _store_shares(shares):
+            self.shares = shares
+        d.addCallback(_store_shares)
+        return d
+
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
hunk ./src/allmydata/test/test_upload.py 784
         d.addCallback(_check)
         return d
 
+    def _setUp(self, ns):
+        # Used by test_happy_semantics and test_preexisting_share_behavior
+        # to set up the grid.
+        self.node = FakeClient(mode="good", num_servers=ns)
+        self.u = upload.Uploader()
+        self.u.running = True
+        self.u.parent = self.node
+
+    def test_happy_semantics(self):
+        self._setUp(2)
+        DATA = upload.Data("kittens" * 10000, convergence="")
+        # These parameters are unsatisfiable with the client that we've made
+        # -- we'll use them to test that the semnatics work correctly.
+        self.set_encoding_parameters(k=3, happy=5, n=10)
+        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 2 servers "
+                            "(5 were requested)",
+                            self.u.upload, DATA)
+        # Let's reset the client to have 10 servers
+        d.addCallback(lambda ign:
+            self._setUp(10))
+        # These parameters are satisfiable with the client we've made.
+        d.addCallback(lambda ign:
+            self.set_encoding_parameters(k=3, happy=5, n=10))
+        # this should work
+        d.addCallback(lambda ign:
+            self.u.upload(DATA))
+        # Let's reset the client to have 7 servers
+        # (this is less than n, but more than h)
+        d.addCallback(lambda ign:
+            self._setUp(7))
+        # These encoding parameters should still be satisfiable with our 
+        # client setup
+        d.addCallback(lambda ign:
+            self.set_encoding_parameters(k=3, happy=5, n=10))
+        # This, then, should work.
+        d.addCallback(lambda ign:
+            self.u.upload(DATA))
+        return d
+
+    def test_problem_layouts(self):
+        self.basedir = self.mktemp()
+        # This scenario is at 
+        # http://allmydata.org/trac/tahoe/ticket/778#comment:52
+        #
+        # The scenario in comment:52 proposes that we have a layout
+        # like:
+        # server 1: share 1
+        # server 2: share 1
+        # server 3: share 1
+        # server 4: shares 2 - 10
+        # To get access to the shares, we will first upload to one 
+        # server, which will then have shares 1 - 10. We'll then 
+        # add three new servers, configure them to not accept any new
+        # shares, then write share 1 directly into the serverdir of each.
+        # Then each of servers 1 - 3 will report that they have share 1, 
+        # and will not accept any new share, while server 4 will report that
+        # it has shares 2 - 10 and will accept new shares.
+        # We'll then set 'happy' = 4, and see that an upload fails
+        # (as it should)
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(1, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(2, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(3, 0, True))
+        # Remove the first share from server 0.
+        def _remove_share_0():
+            share_location = self.shares[0][2]
+            os.remove(share_location)
+        d.addCallback(lambda ign:
+            _remove_share_0())
+        # Set happy = 4 in the client.
+        def _prepare():
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(lambda ign:
+            _prepare())
+        # Uploading data should fail
+        d.addCallback(lambda client:
+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 1 servers "
+                            "(4 were requested)",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+
+
+        # This scenario is at
+        # http://allmydata.org/trac/tahoe/ticket/778#comment:53
+        #
+        # Set up the grid to have one server
+        def _change_basedir(ign):
+            self.basedir = self.mktemp()
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        # We want to have a layout like this:
+        # server 1: share 1
+        # server 2: share 2
+        # server 3: share 3
+        # server 4: shares 1 - 10
+        # (this is an expansion of Zooko's example because it is easier
+        #  to code, but it will fail in the same way)
+        # To start, we'll create a server with shares 1-10 of the data 
+        # we're about to upload.
+        # Next, we'll add three new servers to our NoNetworkGrid. We'll add
+        # one share from our initial upload to each of these.
+        # The counterintuitive ordering of the share numbers is to deal with 
+        # the permuting of these servers -- distributing the shares this 
+        # way ensures that the Tahoe2PeerSelector sees them in the order 
+        # described above.
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1))
+        # So, we now have the following layout:
+        # server 0: shares 1 - 10
+        # server 1: share 0
+        # server 2: share 1
+        # server 3: share 2
+        # We want to change the 'happy' parameter in the client to 4. 
+        # We then want to feed the upload process a list of peers that
+        # server 0 is at the front of, so we trigger Zooko's scenario.
+        # Ideally, a reupload of our original data should work.
+        def _reset_encoding_parameters(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        # We need this to get around the fact that the old Data 
+        # instance already has a happy parameter set.
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
+    def test_dropped_servers_in_encoder(self):
+        def _set_basedir(ign=None):
+            self.basedir = self.mktemp()
+        _set_basedir()
+        d = self._setup_and_upload();
+        # Add 5 servers, with one share each from the original
+        # Add a readonly server
+        def _do_server_setup(ign):
+            self._add_server_with_share(1, 1, True)
+            self._add_server_with_share(2)
+            self._add_server_with_share(3)
+            self._add_server_with_share(4)
+            self._add_server_with_share(5)
+        d.addCallback(_do_server_setup)
+        # remove the original server
+        # (necessary to ensure that the Tahoe2PeerSelector will distribute
+        #  all the shares)
+        def _remove_server(ign):
+            server = self.g.servers_by_number[0]
+            self.g.remove_server(server.my_nodeid)
+        d.addCallback(_remove_server)
+        # This should succeed.
+        d.addCallback(lambda ign:
+            self._do_upload_with_broken_servers(1))
+        # Now, do the same thing over again, but drop 2 servers instead
+        # of 1. This should fail.
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(_do_server_setup)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self.shouldFail(NotEnoughSharesError,
+                            "test_dropped_server_in_encoder", "",
+                            self._do_upload_with_broken_servers, 2))
+        return d
+
+
+    def test_servers_with_unique_shares(self):
+        # servers_with_unique_shares expects a dict of 
+        # shnum => peerid as a preexisting shares argument.
+        test1 = {
+                 1 : "server1",
+                 2 : "server2",
+                 3 : "server3",
+                 4 : "server4"
+                }
+        unique_servers = upload.servers_with_unique_shares(test1)
+        self.failUnlessEqual(4, len(unique_servers))
+        for server in ["server1", "server2", "server3", "server4"]:
+            self.failUnlessIn(server, unique_servers)
+        test1[4] = "server1"
+        # Now there should only be 3 unique servers.
+        unique_servers = upload.servers_with_unique_shares(test1)
+        self.failUnlessEqual(3, len(unique_servers))
+        for server in ["server1", "server2", "server3"]:
+            self.failUnlessIn(server, unique_servers)
+        # servers_with_unique_shares expects a set of PeerTracker
+        # instances as a used_peers argument, but only uses the peerid
+        # instance variable to assess uniqueness. So we feed it some fake
+        # PeerTrackers whose only important characteristic is that they 
+        # have peerid set to something.
+        class FakePeerTracker:
+            pass
+        trackers = []
+        for server in ["server5", "server6", "server7", "server8"]:
+            t = FakePeerTracker()
+            t.peerid = server
+            trackers.append(t)
+        # Recall that there are 3 unique servers in test1. Since none of
+        # those overlap with the ones in trackers, we should get 7 back
+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
+        self.failUnlessEqual(7, len(unique_servers))
+        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
+        expected_servers.remove("server4")
+        for server in expected_servers:
+            self.failUnlessIn(server, unique_servers)
+        # Now add an overlapping server to trackers.
+        t = FakePeerTracker()
+        t.peerid = "server1"
+        trackers.append(t)
+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
+        self.failUnlessEqual(7, len(unique_servers))
+        for server in expected_servers:
+            self.failUnlessIn(server, unique_servers)
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
}
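
To make the mixin's role concrete, here is a hedged sketch of a test case
picking up SetDEPMixin instead of defining set_encoding_parameters()
locally; FakeClient, upload, and ShouldFailMixin are the doubles and
imports already present in test_upload.py, and the class and test names
here are illustrative:

    class ExampleServerTest(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
        def setUp(self):
            self.node = FakeClient(mode="good")
            self.u = upload.Uploader()
            self.u.running = True
            self.u.parent = self.node

        def test_with_happy_five(self):
            # 3-of-10 encoding, requiring shares on at least 5 servers.
            self.set_encoding_parameters(k=3, happy=5, n=10)
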
[Alter tests to use the new form of set_shareholders
Kevan Carstensen <kevan@isnotajoke.com>**20091104033602
 Ignore-this: 3deac11fc831618d11441317463ef830
] {
hunk ./src/allmydata/test/test_encode.py 301
                      (NUM_SEGMENTS-1)*segsize, len(data), NUM_SEGMENTS*segsize)
 
             shareholders = {}
+            servermap = {}
             for shnum in range(NUM_SHARES):
                 peer = FakeBucketReaderWriterProxy()
                 shareholders[shnum] = peer
hunk ./src/allmydata/test/test_encode.py 305
+                servermap[shnum] = str(shnum)
                 all_shareholders.append(peer)
hunk ./src/allmydata/test/test_encode.py 307
-            e.set_shareholders(shareholders)
+            e.set_shareholders(shareholders, servermap)
             return e.start()
         d.addCallback(_ready)
 
merger 0.0 (
hunk ./src/allmydata/test/test_encode.py 462
-            all_peers = []
hunk ./src/allmydata/test/test_encode.py 463
+            servermap = {}
)
hunk ./src/allmydata/test/test_encode.py 467
                 mode = bucket_modes.get(shnum, "good")
                 peer = FakeBucketReaderWriterProxy(mode)
                 shareholders[shnum] = peer
-            e.set_shareholders(shareholders)
+                servermap[shnum] = str(shnum)
+            e.set_shareholders(shareholders, servermap)
             return e.start()
         d.addCallback(_ready)
         def _sent(res):
hunk ./src/allmydata/test/test_upload.py 711
                 for share in server.buckets.keys():
                     server.buckets[share].abort()
             buckets = {}
+            servermap = already_peers.copy()
             for peer in used_peers:
                 buckets.update(peer.buckets)
hunk ./src/allmydata/test/test_upload.py 714
-            encoder.set_shareholders(buckets)
+                for bucket in peer.buckets:
+                    servermap[bucket] = peer.peerid
+            encoder.set_shareholders(buckets, servermap)
             d = encoder.start()
             return d
         d.addCallback(_have_shareholders)
hunk ./src/allmydata/test/test_upload.py 933
         _set_basedir()
         d = self._setup_and_upload();
         # Add 5 servers, with one share each from the original
-        # Add a readonly server
         def _do_server_setup(ign):
             self._add_server_with_share(1, 1, True)
             self._add_server_with_share(2)
}
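
The shape of the new call is easier to see outside the diff. A hedged
sketch, using the same names that appear in the hunks above (used_peers,
already_peers, and encoder come from the surrounding test code):

    # set_shareholders() now takes a second argument mapping share number
    # to the peerid that holds or will receive that share.
    buckets = {}                        # shnum -> bucket writer proxy
    servermap = already_peers.copy()    # shnum -> peerid of existing shares
    for peer in used_peers:
        buckets.update(peer.buckets)
        for shnum in peer.buckets:
            servermap[shnum] = peer.peerid
    encoder.set_shareholders(buckets, servermap)
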
[Minor tweak to an existing test -- make the first server read-write, instead of read-only
Kevan Carstensen <kevan@isnotajoke.com>**20091104034232
 Ignore-this: a951a46c93f7f58dd44d93d8623b2aee
] hunk ./src/allmydata/test/test_upload.py 934
         d = self._setup_and_upload();
         # Add 5 servers, with one share each from the original
         def _do_server_setup(ign):
-            self._add_server_with_share(1, 1, True)
+            self._add_server_with_share(1, 1)
             self._add_server_with_share(2)
             self._add_server_with_share(3)
             self._add_server_with_share(4)
[Add a test for upload.shares_by_server
Kevan Carstensen <kevan@isnotajoke.com>**20091104111324
 Ignore-this: f9802e82d6982a93e00f92e0b276f018
] hunk ./src/allmydata/test/test_upload.py 1013
             self.failUnlessIn(server, unique_servers)
 
 
+    def test_shares_by_server(self):
+        test = {
+                    1 : "server1",
+                    2 : "server2",
+                    3 : "server3",
+                    4 : "server4"
+               }
+        shares_by_server = upload.shares_by_server(test)
+        self.failUnlessEqual(set([1]), shares_by_server["server1"])
+        self.failUnlessEqual(set([2]), shares_by_server["server2"])
+        self.failUnlessEqual(set([3]), shares_by_server["server3"])
+        self.failUnlessEqual(set([4]), shares_by_server["server4"])
+        test1 = {
+                    1 : "server1",
+                    2 : "server1",
+                    3 : "server1",
+                    4 : "server2",
+                    5 : "server2"
+                }
+        shares_by_server = upload.shares_by_server(test1)
+        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
+        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
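
Put another way, shares_by_server() inverts a {share number: peerid}
mapping into {peerid: set of share numbers}. A quick sketch of the
behaviour the new test asserts (the string peerids are illustrative,
exactly as in the test):

    from allmydata.immutable import upload

    existing = {1: "server1", 2: "server1", 3: "server1",
                4: "server2", 5: "server2"}
    by_server = upload.shares_by_server(existing)
    assert by_server["server1"] == set([1, 2, 3])
    assert by_server["server2"] == set([4, 5])
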
[Add more tests for comment:53 in ticket #778
Kevan Carstensen <kevan@isnotajoke.com>**20091104112849
 Ignore-this: 3bb2edd299a944cc9586e14d5d83ec8c
] {
hunk ./src/allmydata/test/test_upload.py 722
         d.addCallback(_have_shareholders)
         return d
 
-    def _add_server_with_share(self, server_number, share_number=None,
-                               readonly=False):
+    def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         assert self.shares, "I tried to find shares at self.shares, but failed"
         ss = self.g.make_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 727
         self.g.add_server(server_number, ss)
+
+    def _add_server_with_share(self, server_number, share_number=None,
+                               readonly=False):
+        self._add_server(server_number, readonly)
         if share_number:
hunk ./src/allmydata/test/test_upload.py 732
-            # Copy share i from the directory associated with the first 
-            # storage server to the directory associated with this one.
-            old_share_location = self.shares[share_number][2]
-            new_share_location = os.path.join(ss.storedir, "shares")
-            si = uri.from_string(self.uri).get_storage_index()
-            new_share_location = os.path.join(new_share_location,
-                                              storage_index_to_dir(si))
-            if not os.path.exists(new_share_location):
-                os.makedirs(new_share_location)
-            new_share_location = os.path.join(new_share_location,
-                                              str(share_number))
-            shutil.copy(old_share_location, new_share_location)
-            shares = self.find_shares(self.uri)
-            # Make sure that the storage server has the share.
-            self.failUnless((share_number, ss.my_nodeid, new_share_location)
-                            in shares)
+            self._copy_share_to_server(share_number, server_number)
+
+    def _copy_share_to_server(self, share_number, server_number):
+        ss = self.g.servers_by_number[server_number]
+        # Copy share i from the directory associated with the first 
+        # storage server to the directory associated with this one.
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        assert self.shares, "I tried to find shares at self.shares, but failed"
+        old_share_location = self.shares[share_number][2]
+        new_share_location = os.path.join(ss.storedir, "shares")
+        si = uri.from_string(self.uri).get_storage_index()
+        new_share_location = os.path.join(new_share_location,
+                                          storage_index_to_dir(si))
+        if not os.path.exists(new_share_location):
+            os.makedirs(new_share_location)
+        new_share_location = os.path.join(new_share_location,
+                                          str(share_number))
+        shutil.copy(old_share_location, new_share_location)
+        shares = self.find_shares(self.uri)
+        # Make sure that the storage server has the share.
+        self.failUnless((share_number, ss.my_nodeid, new_share_location)
+                        in shares)
+
 
     def _setup_and_upload(self):
         """
hunk ./src/allmydata/test/test_upload.py 917
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=3, share_number=1))
         # So, we now have the following layout:
-        # server 0: shares 1 - 10
+        # server 0: shares 0 - 9
         # server 1: share 0
         # server 2: share 1
         # server 3: share 2
hunk ./src/allmydata/test/test_upload.py 934
         # instance already has a happy parameter set.
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+
+
+        # This scenario is basically comment:53, but with the order reversed;
+        # this means that the Tahoe2PeerSelector sees
+        # server 0: shares 1-10
+        # server 1: share 1
+        # server 2: share 2
+        # server 3: share 3
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2))
+        # Copy all of the other shares to server number 2
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
+        d.addCallback(_copy_shares)
+        # Remove the first server, and add a placeholder with share 0
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=0, share_number=0))
+        # Now try uploading. 
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        # Try the same thing, but with empty servers after the first one
+        # We want to make sure that Tahoe2PeerSelector will redistribute
+        # shares as necessary, not simply discover an existing layout.
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=3))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=1))
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=0))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        # Try the following layout
+        # server 0: shares 1-10
+        # server 1: share 1, read-only
+        # server 2: share 2, read-only
+        # server 3: share 3, read-only
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2,
+                                        readonly=True))
+        # Copy all of the other shares to server number 2
+        d.addCallback(_copy_shares)
+        # Remove server 0, and add another in its place
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=0, share_number=0,
+                                        readonly=True))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
         return d
 
 
}
[Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers
Kevan Carstensen <kevan@isnotajoke.com>**20091109003735
 Ignore-this: 12f9b4cff5752fca7ed32a6ebcff6446
] hunk ./src/allmydata/test/test_upload.py 1125
         self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
 
 
+    def test_existing_share_detection(self):
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        # Our final setup should look like this:
+        # server 1: shares 1 - 10, read-only
+        # server 2: empty
+        # server 3: empty
+        # server 4: empty
+        # The purpose of this test is to make sure that the peer selector
+        # knows about the shares on server 1, even though it is read-only.
+        # It used to simply filter these out, which would cause the test
+        # to fail when servers_of_happiness = 4.
+        d.addCallback(lambda ign:
+            self._add_server_with_share(1, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(3))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(4))
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 1)
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _prepare_client(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_prepare_client)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
[Re-work 'test_upload.py' to be more readable; add more tests for #778
Kevan Carstensen <kevan@isnotajoke.com>**20091116192334
 Ignore-this: 7e8565f92fe51dece5ae28daf442d659
] {
hunk ./src/allmydata/test/test_upload.py 722
         d.addCallback(_have_shareholders)
         return d
 
+
     def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         assert self.shares, "I tried to find shares at self.shares, but failed"
hunk ./src/allmydata/test/test_upload.py 729
         ss = self.g.make_server(server_number, readonly)
         self.g.add_server(server_number, ss)
 
+
     def _add_server_with_share(self, server_number, share_number=None,
                                readonly=False):
         self._add_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 733
-        if share_number:
+        if share_number is not None:
             self._copy_share_to_server(share_number, server_number)
 
hunk ./src/allmydata/test/test_upload.py 736
+
     def _copy_share_to_server(self, share_number, server_number):
         ss = self.g.servers_by_number[server_number]
         # Copy share i from the directory associated with the first 
hunk ./src/allmydata/test/test_upload.py 752
             os.makedirs(new_share_location)
         new_share_location = os.path.join(new_share_location,
                                           str(share_number))
-        shutil.copy(old_share_location, new_share_location)
+        if old_share_location != new_share_location:
+            shutil.copy(old_share_location, new_share_location)
         shares = self.find_shares(self.uri)
         # Make sure that the storage server has the share.
         self.failUnless((share_number, ss.my_nodeid, new_share_location)
hunk ./src/allmydata/test/test_upload.py 782
         d.addCallback(_store_shares)
         return d
 
+
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
hunk ./src/allmydata/test/test_upload.py 802
         d.addCallback(_check)
         return d
 
+
     def _setUp(self, ns):
         # Used by test_happy_semantics and test_preexisting_share_behavior
         # to set up the grid.
hunk ./src/allmydata/test/test_upload.py 811
         self.u.running = True
         self.u.parent = self.node
 
+
     def test_happy_semantics(self):
         self._setUp(2)
         DATA = upload.Data("kittens" * 10000, convergence="")
hunk ./src/allmydata/test/test_upload.py 844
             self.u.upload(DATA))
         return d
 
-    def test_problem_layouts(self):
-        self.basedir = self.mktemp()
+
+    def test_problem_layout_comment_52(self):
+        def _basedir():
+            self.basedir = self.mktemp()
+        _basedir()
         # This scenario is at 
         # http://allmydata.org/trac/tahoe/ticket/778#comment:52
         #
hunk ./src/allmydata/test/test_upload.py 890
         # Uploading data should fail
         d.addCallback(lambda client:
             self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
-                            "shares could only be placed on 1 servers "
+                            "shares could only be placed on 2 servers "
                             "(4 were requested)",
                             client.upload, upload.Data("data" * 10000,
                                                        convergence="")))
hunk ./src/allmydata/test/test_upload.py 895
 
+        # Do comment:52, but like this:
+        # server 2: empty
+        # server 3: share 0, read-only
+        # server 1: share 0, read-only
+        # server 0: shares 0-9
+        d.addCallback(lambda ign:
+            _basedir())
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=0,
+                                        readonly=True))
+        def _prepare2():
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
+            return client
+        d.addCallback(lambda ign:
+            _prepare2())
+        d.addCallback(lambda client:
+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 2 servers "
+                            "(3 were requested)",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+        return d
+
 
hunk ./src/allmydata/test/test_upload.py 927
+    def test_problem_layout_comment_53(self):
         # This scenario is at
         # http://allmydata.org/trac/tahoe/ticket/778#comment:53
         #
hunk ./src/allmydata/test/test_upload.py 934
         # Set up the grid to have one server
         def _change_basedir(ign):
             self.basedir = self.mktemp()
-        d.addCallback(_change_basedir)
-        d.addCallback(lambda ign:
-            self._setup_and_upload())
-        # We want to have a layout like this:
-        # server 1: share 1
-        # server 2: share 2
-        # server 3: share 3
-        # server 4: shares 1 - 10
-        # (this is an expansion of Zooko's example because it is easier
-        #  to code, but it will fail in the same way)
-        # To start, we'll create a server with shares 1-10 of the data 
-        # we're about to upload.
+        _change_basedir(None)
+        d = self._setup_and_upload()
+        # We start by uploading all of the shares to one server (which has 
+        # already been done above).
         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
         # one share from our initial upload to each of these.
         # The counterintuitive ordering of the share numbers is to deal with 
hunk ./src/allmydata/test/test_upload.py 952
             self._add_server_with_share(server_number=3, share_number=1))
         # So, we now have the following layout:
         # server 0: shares 0 - 9
-        # server 1: share 0
-        # server 2: share 1
-        # server 3: share 2
+        # server 1: share 2
+        # server 2: share 0
+        # server 3: share 1
         # We want to change the 'happy' parameter in the client to 4. 
hunk ./src/allmydata/test/test_upload.py 956
-        # We then want to feed the upload process a list of peers that
-        # server 0 is at the front of, so we trigger Zooko's scenario.
+        # The Tahoe2PeerSelector will see the peers permuted as:
+        # 2, 3, 1, 0
         # Ideally, a reupload of our original data should work.
hunk ./src/allmydata/test/test_upload.py 959
-        def _reset_encoding_parameters(ign):
+        def _reset_encoding_parameters(ign, happy=4):
             client = self.g.clients[0]
hunk ./src/allmydata/test/test_upload.py 961
-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
             return client
         d.addCallback(_reset_encoding_parameters)
hunk ./src/allmydata/test/test_upload.py 964
-        # We need this to get around the fact that the old Data 
-        # instance already has a happy parameter set.
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
 
hunk ./src/allmydata/test/test_upload.py 970
 
         # This scenario is basically comment:53, but with the order reversed;
         # this means that the Tahoe2PeerSelector sees
-        # server 0: shares 1-10
-        # server 1: share 1
-        # server 2: share 2
-        # server 3: share 3
+        # server 2: shares 1-10
+        # server 3: share 1
+        # server 1: share 2
+        # server 4: share 3
         d.addCallback(_change_basedir)
         d.addCallback(lambda ign:
             self._setup_and_upload())
hunk ./src/allmydata/test/test_upload.py 992
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server_with_share(server_number=0, share_number=0))
+            self._add_server_with_share(server_number=4, share_number=0))
         # Now try uploading. 
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
hunk ./src/allmydata/test/test_upload.py 1013
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server(server_number=0))
+            self._add_server(server_number=4))
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1017
+        return d
+
+
+    def test_happiness_with_some_readonly_peers(self):
         # Try the following layout
hunk ./src/allmydata/test/test_upload.py 1022
-        # server 0: shares 1-10
-        # server 1: share 1, read-only
-        # server 2: share 2, read-only
-        # server 3: share 3, read-only
-        d.addCallback(_change_basedir)
-        d.addCallback(lambda ign:
-            self._setup_and_upload())
+        # server 2: shares 0-9
+        # server 4: share 0, read-only
+        # server 3: share 1, read-only
+        # server 1: share 2, read-only
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=2, share_number=0))
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1037
             self._add_server_with_share(server_number=1, share_number=2,
                                         readonly=True))
         # Copy all of the other shares to server number 2
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
         # Remove server 0, and add another in its place
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1045
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server_with_share(server_number=0, share_number=0,
+            self._add_server_with_share(server_number=4, share_number=0,
                                         readonly=True))
hunk ./src/allmydata/test/test_upload.py 1047
+        def _reset_encoding_parameters(ign, happy=4):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
+    def test_happiness_with_all_readonly_peers(self):
+        # server 3: share 1, read-only
+        # server 1: share 2, read-only
+        # server 2: shares 0-9, read-only
+        # server 4: share 0, read-only
+        # The idea with this test is to make sure that the survey of
+        # read-only peers doesn't undercount servers of happiness
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=4, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0,
+                                        readonly=True))
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _reset_encoding_parameters(ign, happy=4):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
+            return client
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1099
             self.basedir = self.mktemp()
         _set_basedir()
         d = self._setup_and_upload();
-        # Add 5 servers, with one share each from the original
+        # Add 5 servers
         def _do_server_setup(ign):
hunk ./src/allmydata/test/test_upload.py 1101
-            self._add_server_with_share(1, 1)
+            self._add_server_with_share(1)
             self._add_server_with_share(2)
             self._add_server_with_share(3)
             self._add_server_with_share(4)
hunk ./src/allmydata/test/test_upload.py 1126
         d.addCallback(_remove_server)
         d.addCallback(lambda ign:
             self.shouldFail(NotEnoughSharesError,
-                            "test_dropped_server_in_encoder", "",
+                            "test_dropped_servers_in_encoder",
+                            "lost too many servers during upload "
+                            "(still have 3, want 4)",
+                            self._do_upload_with_broken_servers, 2))
+        # Now do the same thing over again, but make some of the servers
+        # readonly, break some of the ones that aren't, and make sure that
+        # happiness accounting is preserved.
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        def _do_server_setup_2(ign):
+            self._add_server_with_share(1)
+            self._add_server_with_share(2)
+            self._add_server_with_share(3)
+            self._add_server_with_share(4, 7, readonly=True)
+            self._add_server_with_share(5, 8, readonly=True)
+        d.addCallback(_do_server_setup_2)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self._do_upload_with_broken_servers(1))
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(_do_server_setup_2)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self.shouldFail(NotEnoughSharesError,
+                            "test_dropped_servers_in_encoder",
+                            "lost too many servers during upload "
+                            "(still have 3, want 4)",
                             self._do_upload_with_broken_servers, 2))
         return d
 
hunk ./src/allmydata/test/test_upload.py 1179
         self.failUnlessEqual(3, len(unique_servers))
         for server in ["server1", "server2", "server3"]:
             self.failUnlessIn(server, unique_servers)
-        # servers_with_unique_shares expects a set of PeerTracker
-        # instances as a used_peers argument, but only uses the peerid
-        # instance variable to assess uniqueness. So we feed it some fake
-        # PeerTrackers whose only important characteristic is that they 
-        # have peerid set to something.
+        # servers_with_unique_shares expects to receive some object with
+        # a peerid attribute. So we make a FakePeerTracker whose only
+        # job is to have a peerid attribute.
         class FakePeerTracker:
             pass
         trackers = []
hunk ./src/allmydata/test/test_upload.py 1185
-        for server in ["server5", "server6", "server7", "server8"]:
+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
             t = FakePeerTracker()
             t.peerid = server
hunk ./src/allmydata/test/test_upload.py 1188
+            t.buckets = [i]
             trackers.append(t)
         # Recall that there are 3 unique servers in test1. Since none of
         # those overlap with the ones in trackers, we should get 7 back
hunk ./src/allmydata/test/test_upload.py 1201
         # Now add an overlapping server to trackers.
         t = FakePeerTracker()
         t.peerid = "server1"
+        t.buckets = [1]
         trackers.append(t)
         unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
         self.failUnlessEqual(7, len(unique_servers))
hunk ./src/allmydata/test/test_upload.py 1207
         for server in expected_servers:
             self.failUnlessIn(server, unique_servers)
+        test = {}
+        unique_servers = upload.servers_with_unique_shares(test)
+        self.failUnlessEqual(0, len(unique_servers))
 
 
     def test_shares_by_server(self):
hunk ./src/allmydata/test/test_upload.py 1213
-        test = {
-                    1 : "server1",
-                    2 : "server2",
-                    3 : "server3",
-                    4 : "server4"
-               }
+        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
         shares_by_server = upload.shares_by_server(test)
         self.failUnlessEqual(set([1]), shares_by_server["server1"])
         self.failUnlessEqual(set([2]), shares_by_server["server2"])
hunk ./src/allmydata/test/test_upload.py 1267
         return d
 
 
+    def test_should_add_server(self):
+        shares = dict([(i, "server%d" % i) for i in xrange(10)])
+        self.failIf(upload.should_add_server(shares, "server1", 4))
+        shares[4] = "server1"
+        self.failUnless(upload.should_add_server(shares, "server4", 4))
+        shares = {}
+        self.failUnless(upload.should_add_server(shares, "server1", 1))
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
}
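
For reference, a hedged summary of the observable behaviour of the two
helpers exercised by the reworked tests (this restates what the
assertions above check, not how upload.py implements them):

    from allmydata.immutable import upload

    # servers_with_unique_shares() returns the distinct servers holding
    # shares, given a {shnum: peerid} dict and, optionally, a set of
    # tracker-like objects carrying .peerid and .buckets attributes.
    existing = {1: "server1", 2: "server2", 3: "server3", 4: "server1"}
    assert len(upload.servers_with_unique_shares(existing)) == 3

    # should_add_server(existing, peerid, shnum) reports whether placing
    # share shnum on peerid would improve the spread of shares; with no
    # shares placed yet it always returns True.
    assert upload.should_add_server({}, "server1", 1)
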
[Add tests for the behavior described in #834.
Kevan Carstensen <kevan@isnotajoke.com>**20091123012008
 Ignore-this: d8e0aa0f3f7965ce9b5cea843c6d6f9f
] {
hunk ./src/allmydata/test/test_encode.py 12
 from allmydata.util.assertutil import _assert
 from allmydata.util.consumer import MemoryConsumer
 from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \
-     NotEnoughSharesError, IStorageBroker
+     NotEnoughSharesError, IStorageBroker, UploadHappinessError
 from allmydata.monitor import Monitor
 import common_util as testutil
 
hunk ./src/allmydata/test/test_encode.py 794
         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
         def _done(res):
             self.failUnless(isinstance(res, Failure))
-            self.failUnless(res.check(NotEnoughSharesError), res)
+            self.failUnless(res.check(UploadHappinessError), res)
         d.addBoth(_done)
         return d
 
hunk ./src/allmydata/test/test_encode.py 805
         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
         def _done(res):
             self.failUnless(isinstance(res, Failure))
-            self.failUnless(res.check(NotEnoughSharesError))
+            self.failUnless(res.check(UploadHappinessError))
         d.addBoth(_done)
         return d
hunk ./src/allmydata/test/test_upload.py 13
 import allmydata # for __full_version__
 from allmydata import uri, monitor, client
 from allmydata.immutable import upload, encode
-from allmydata.interfaces import FileTooLargeError, NoSharesError, \
-     NotEnoughSharesError
+from allmydata.interfaces import FileTooLargeError, UploadHappinessError
 from allmydata.util.assertutil import precondition
 from allmydata.util.deferredutil import DeferredListShouldSucceed
 from no_network import GridTestMixin
hunk ./src/allmydata/test/test_upload.py 402
 
     def test_first_error_all(self):
         self.make_node("first-fail")
-        d = self.shouldFail(NoSharesError, "first_error_all",
+        d = self.shouldFail(UploadHappinessError, "first_error_all",
                             "peer selection failed",
                             upload_data, self.u, DATA)
         def _check((f,)):
hunk ./src/allmydata/test/test_upload.py 434
 
     def test_second_error_all(self):
         self.make_node("second-fail")
-        d = self.shouldFail(NotEnoughSharesError, "second_error_all",
+        d = self.shouldFail(UploadHappinessError, "second_error_all",
                             "peer selection failed",
                             upload_data, self.u, DATA)
         def _check((f,)):
hunk ./src/allmydata/test/test_upload.py 452
         self.u.parent = self.node
 
     def _should_fail(self, f):
-        self.failUnless(isinstance(f, Failure) and f.check(NoSharesError), f)
+        self.failUnless(isinstance(f, Failure) and f.check(UploadHappinessError), f)
 
     def test_data_large(self):
         data = DATA
hunk ./src/allmydata/test/test_upload.py 817
         # These parameters are unsatisfiable with the client that we've made
         # -- we'll use them to test that the semantics work correctly.
         self.set_encoding_parameters(k=3, happy=5, n=10)
-        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+        d = self.shouldFail(UploadHappinessError, "test_happy_semantics",
                             "shares could only be placed on 2 servers "
                             "(5 were requested)",
                             self.u.upload, DATA)
hunk ./src/allmydata/test/test_upload.py 888
             _prepare())
         # Uploading data should fail
         d.addCallback(lambda client:
-            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+            self.shouldFail(UploadHappinessError, "test_happy_semantics",
                             "shares could only be placed on 2 servers "
                             "(4 were requested)",
                             client.upload, upload.Data("data" * 10000,
hunk ./src/allmydata/test/test_upload.py 918
         d.addCallback(lambda ign:
             _prepare2())
         d.addCallback(lambda client:
-            self.shouldFail(NotEnoughSharesError, "test_happy_sematics",
+            self.shouldFail(UploadHappinessError, "test_happy_sematics",
                             "shares could only be placed on 2 servers "
                             "(3 were requested)",
                             client.upload, upload.Data("data" * 10000,
hunk ./src/allmydata/test/test_upload.py 1124
         d.addCallback(_do_server_setup)
         d.addCallback(_remove_server)
         d.addCallback(lambda ign:
-            self.shouldFail(NotEnoughSharesError,
+            self.shouldFail(UploadHappinessError,
                             "test_dropped_servers_in_encoder",
                             "lost too many servers during upload "
                             "(still have 3, want 4)",
hunk ./src/allmydata/test/test_upload.py 1151
         d.addCallback(_do_server_setup_2)
         d.addCallback(_remove_server)
         d.addCallback(lambda ign:
-            self.shouldFail(NotEnoughSharesError,
+            self.shouldFail(UploadHappinessError,
                             "test_dropped_servers_in_encoder",
                             "lost too many servers during upload "
                             "(still have 3, want 4)",
hunk ./src/allmydata/test/test_upload.py 1275
         self.failUnless(upload.should_add_server(shares, "server1", 1))
 
 
+    def test_exception_messages_during_peer_selection(self):
+        # server 1: readonly, no shares
+        # server 2: readonly, no shares
+        # server 3: readonly, no shares
+        # server 4: readonly, no shares
+        # server 5: readonly, no shares
+        # This will fail, but we want to make sure that the log messages
+        # are informative about why it has failed.
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=4, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=5, readonly=True))
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _reset_encoding_parameters(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
+                            "peer selection failed for <Tahoe2PeerSelector "
+                            "for upload dglev>: placed 0 shares out of 10 "
+                            "total (10 homeless), want to place on 4 servers,"
+                            " sent 5 queries to 5 peers, 0 queries placed "
+                            "some shares, 5 placed none "
+                            "(of which 5 placed none due to the server being "
+                            "full and 0 placed none due to an error)",
+                            client.upload,
+                            upload.Data("data" * 10000, convergence="")))
+
+
+        # server 1: readonly, no shares
+        # server 2: broken, no shares
+        # server 3: readonly, no shares
+        # server 4: readonly, no shares
+        # server 5: readonly, no shares
+        def _reset(ign):
+            self.basedir = self.mktemp()
+        d.addCallback(_reset)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2))
+        def _break_server_2(ign):
+            server = self.g.servers_by_number[2].my_nodeid
+            # We have to break the server in servers_by_id, 
+            # because the ones in servers_by_number isn't wrapped,
+            # and doesn't look at its broken attribute
+            self.g.servers_by_id[server].broken = True
+        d.addCallback(_break_server_2)
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=4, readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=5, readonly=True))
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _reset_encoding_parameters(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
+                            "peer selection failed for <Tahoe2PeerSelector "
+                            "for upload dglev>: placed 0 shares out of 10 "
+                            "total (10 homeless), want to place on 4 servers,"
+                            " sent 5 queries to 5 peers, 0 queries placed "
+                            "some shares, 5 placed none "
+                            "(of which 4 placed none due to the server being "
+                            "full and 1 placed none due to an error)",
+                            client.upload,
+                            upload.Data("data" * 10000, convergence="")))
+        return d
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
}
[Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.
Kevan Carstensen <kevan@isnotajoke.com>**20091205043453
 Ignore-this: 83f4bc50c697d21b5f4e2a4cd91862ca
] {
replace ./src/allmydata/test/test_encode.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
replace ./src/allmydata/test/test_upload.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
}
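
The two "replace" lines in the patch above are darcs token-replace primitives
rather than ordinary hunks: each one rewrites every whole token -- a maximal
run of the characters in the bracketed class -- that equals
UploadHappinessError into UploadUnhappinessError throughout the named file.
As a rough sketch only (not part of the bundle; the helper name is made up),
the effect is similar to:

    import re

    def token_replace(text, token_chars, old, new):
        # Replace `old` only where it appears as a whole token, i.e. not
        # embedded inside a longer run of token characters.
        pattern = re.compile(r'(?<![%s])%s(?![%s])'
                             % (token_chars, re.escape(old), token_chars))
        return pattern.sub(new, text)

    for fname in ["src/allmydata/test/test_encode.py",
                  "src/allmydata/test/test_upload.py"]:
        text = token_replace(open(fname).read(), "A-Za-z_0-9",
                             "UploadHappinessError", "UploadUnhappinessError")
        # (a real script would write `text` back out to fname)
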
[Alter various unit tests to work with the new happy behavior
Kevan Carstensen <kevan@isnotajoke.com>**20100107181325
 Ignore-this: 132032bbf865e63a079f869b663be34a
] {
hunk ./src/allmydata/test/common.py 941
             # We need multiple segments to test crypttext hash trees that are
             # non-trivial (i.e. they have more than just one hash in them).
             cl0.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12
+            # Tests that need to test servers of happiness using this should
+            # set their own value for happy -- the default (7) is too
+            # high for the small grids most of these tests set up.
+            cl0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
             d2 = cl0.upload(immutable.upload.Data(TEST_DATA, convergence=""))
             def _after_upload(u):
                 filecap = u.uri
hunk ./src/allmydata/test/test_checker.py 283
         self.basedir = "checker/AddLease/875"
         self.set_up_grid(num_servers=1)
         c0 = self.g.clients[0]
+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
         self.uris = {}
         DATA = "data" * 100
         d = c0.upload(Data(DATA, convergence=""))
hunk ./src/allmydata/test/test_system.py 93
         d = self.set_up_nodes()
         def _check_connections(res):
             for c in self.clients:
+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
                 all_peerids = c.get_storage_broker().get_all_serverids()
                 self.failUnlessEqual(len(all_peerids), self.numclients)
                 sb = c.storage_broker
hunk ./src/allmydata/test/test_system.py 205
                                                       add_to_sparent=True))
         def _added(extra_node):
             self.extra_node = extra_node
+            self.extra_node.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
         d.addCallback(_added)
 
         HELPER_DATA = "Data that needs help to upload" * 1000
hunk ./src/allmydata/test/test_system.py 705
         self.basedir = "system/SystemTest/test_filesystem"
         self.data = LARGE_DATA
         d = self.set_up_nodes(use_stats_gatherer=True)
+        def _new_happy_semantics(ign):
+            for c in self.clients:
+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
+        d.addCallback(_new_happy_semantics)
         d.addCallback(self._test_introweb)
         d.addCallback(self.log, "starting publish")
         d.addCallback(self._do_publish1)
hunk ./src/allmydata/test/test_system.py 1129
         d.addCallback(self.failUnlessEqual, "new.txt contents")
         # and again with something large enough to use multiple segments,
         # and hopefully trigger pauseProducing too
+        def _new_happy_semantics(ign):
+            for c in self.clients:
+                # these parameters apparently get reset somewhere, so
+                # set 'happy' low again here.
+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
+        d.addCallback(_new_happy_semantics)
         d.addCallback(lambda res: self.PUT(public + "/subdir3/big.txt",
                                            "big" * 500000)) # 1.5MB
         d.addCallback(lambda res: self.GET(public + "/subdir3/big.txt"))
hunk ./src/allmydata/test/test_upload.py 178
 
 class FakeClient:
     DEFAULT_ENCODING_PARAMETERS = {"k":25,
-                                   "happy": 75,
+                                   "happy": 25,
                                    "n": 100,
                                    "max_segment_size": 1*MiB,
                                    }
hunk ./src/allmydata/test/test_upload.py 316
         data = self.get_data(SIZE_LARGE)
         segsize = int(SIZE_LARGE / 2.5)
         # we want 3 segments, since that's not a power of two
-        self.set_encoding_parameters(25, 75, 100, segsize)
+        self.set_encoding_parameters(25, 25, 100, segsize)
         d = upload_data(self.u, data)
         d.addCallback(extract_uri)
         d.addCallback(self._check_large, SIZE_LARGE)
hunk ./src/allmydata/test/test_upload.py 395
     def test_first_error(self):
         mode = dict([(0,"good")] + [(i,"first-fail") for i in range(1,10)])
         self.make_node(mode)
+        self.set_encoding_parameters(k=25, happy=1, n=50)
         d = upload_data(self.u, DATA)
         d.addCallback(extract_uri)
         d.addCallback(self._check_large, SIZE_LARGE)
hunk ./src/allmydata/test/test_upload.py 513
 
         self.make_client()
         data = self.get_data(SIZE_LARGE)
-        self.set_encoding_parameters(50, 75, 100)
+        # if there are 50 peers, then happy needs to be <= 50
+        self.set_encoding_parameters(50, 50, 100)
         d = upload_data(self.u, data)
         d.addCallback(extract_uri)
         d.addCallback(self._check_large, SIZE_LARGE)
hunk ./src/allmydata/test/test_upload.py 560
 
         self.make_client()
         data = self.get_data(SIZE_LARGE)
-        self.set_encoding_parameters(100, 150, 200)
+        # if there are 50 peers, then happy should be no more than 50 if
+        # we want this to work.
+        self.set_encoding_parameters(100, 50, 200)
         d = upload_data(self.u, data)
         d.addCallback(extract_uri)
         d.addCallback(self._check_large, SIZE_LARGE)
hunk ./src/allmydata/test/test_upload.py 580
 
         self.make_client(3)
         data = self.get_data(SIZE_LARGE)
-        self.set_encoding_parameters(3, 5, 10)
+        self.set_encoding_parameters(3, 3, 10)
         d = upload_data(self.u, data)
         d.addCallback(extract_uri)
         d.addCallback(self._check_large, SIZE_LARGE)
hunk ./src/allmydata/test/test_web.py 4073
         self.basedir = "web/Grid/exceptions"
         self.set_up_grid(num_clients=1, num_servers=2)
         c0 = self.g.clients[0]
+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 2
         self.fileurls = {}
         DATA = "data" * 100
         d = c0.create_dirnode()
}
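
The changes above all follow one pattern: with the new behavior, an upload
only succeeds if its share placement achieves a happiness of at least 'happy'
distinct servers, so any test grid with fewer servers than the default
happiness threshold (7, per the comment in the common.py hunk) has to lower
'happy' before uploading. A minimal sketch of that adjustment, assuming a
client object carrying the DEFAULT_ENCODING_PARAMETERS dict shown in these
hunks (the helper name is hypothetical):

    def lower_happiness_for_small_grid(client, num_servers):
        # 'happy' is the servers-of-happiness threshold the uploader
        # enforces; it cannot be met on a grid with fewer servers, so cap
        # it at the number of servers the test actually starts.
        params = client.DEFAULT_ENCODING_PARAMETERS
        params['happy'] = min(params['happy'], num_servers)
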
[Revisions of the #778 tests, per reviewers' comments
Kevan Carstensen <kevan@isnotajoke.com>**20100507220051
 Ignore-this: c15abb92ad0176037ff612d758c8f3ce
 
 - Fix comments and confusing naming.
 - Add tests for the new error messages suggested by David-Sarah
   and Zooko.
 - Alter existing tests for new error messages.
 - Make sure that the tests continue to work with the trunk.
 - Add a test for a mutual disjointedness assertion that I added to
   upload.servers_of_happiness.
 - Fix the comments to correctly reflect read-onlyness
 - Add a test for an edge case in should_add_server
 - Add an assertion to make sure that share redistribution works as it 
   should
 - Alter tests to work with revised servers_of_happiness semantics
 - Remove tests for should_add_server, since that function no longer exists.
 - Alter tests to know about merge_peers, and to use it before calling 
   servers_of_happiness.
 - Add tests for merge_peers.
 - Add Zooko's puzzles to the tests.
 - Edit encoding tests to expect the new kind of failure message.
 
] {
hunk ./src/allmydata/test/test_encode.py 28
 class FakeBucketReaderWriterProxy:
     implements(IStorageBucketWriter, IStorageBucketReader)
     # these are used for both reading and writing
-    def __init__(self, mode="good"):
+    def __init__(self, mode="good", peerid="peer"):
         self.mode = mode
         self.blocks = {}
         self.plaintext_hashes = []
hunk ./src/allmydata/test/test_encode.py 36
         self.block_hashes = None
         self.share_hashes = None
         self.closed = False
+        self.peerid = peerid
 
     def get_peerid(self):
hunk ./src/allmydata/test/test_encode.py 39
-        return "peerid"
+        return self.peerid
 
     def _start(self):
         if self.mode == "lost-early":
hunk ./src/allmydata/test/test_encode.py 306
             for shnum in range(NUM_SHARES):
                 peer = FakeBucketReaderWriterProxy()
                 shareholders[shnum] = peer
-                servermap[shnum] = str(shnum)
+                servermap.setdefault(shnum, set()).add(peer.get_peerid())
                 all_shareholders.append(peer)
             e.set_shareholders(shareholders, servermap)
             return e.start()
hunk ./src/allmydata/test/test_encode.py 463
         def _ready(res):
             k,happy,n = e.get_param("share_counts")
             assert n == NUM_SHARES # else we'll be completely confused
-            all_peers = []
+            servermap = {}
             for shnum in range(NUM_SHARES):
                 mode = bucket_modes.get(shnum, "good")
hunk ./src/allmydata/test/test_encode.py 466
-                peer = FakeBucketReaderWriterProxy(mode)
+                peer = FakeBucketReaderWriterProxy(mode, "peer%d" % shnum)
                 shareholders[shnum] = peer
hunk ./src/allmydata/test/test_encode.py 468
-                servermap[shnum] = str(shnum)
+                servermap.setdefault(shnum, set()).add(peer.get_peerid())
             e.set_shareholders(shareholders, servermap)
             return e.start()
         d.addCallback(_ready)
hunk ./src/allmydata/test/test_upload.py 16
 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
 from allmydata.util.assertutil import precondition
 from allmydata.util.deferredutil import DeferredListShouldSucceed
+from allmydata.util.happinessutil import servers_of_happiness, \
+                                         shares_by_server, merge_peers
 from no_network import GridTestMixin
 from common_util import ShouldFailMixin
 from allmydata.storage_client import StorageFarmBroker
hunk ./src/allmydata/test/test_upload.py 708
         num_segments = encoder.get_param("num_segments")
         d = selector.get_shareholders(broker, sh, storage_index,
                                       share_size, block_size, num_segments,
-                                      10, 4)
+                                      10, 3, 4)
         def _have_shareholders((used_peers, already_peers)):
             assert servers_to_break <= len(used_peers)
             for index in xrange(servers_to_break):
hunk ./src/allmydata/test/test_upload.py 720
             for peer in used_peers:
                 buckets.update(peer.buckets)
                 for bucket in peer.buckets:
-                    servermap[bucket] = peer.peerid
+                    servermap.setdefault(bucket, set()).add(peer.peerid)
             encoder.set_shareholders(buckets, servermap)
             d = encoder.start()
             return d
hunk ./src/allmydata/test/test_upload.py 764
         self.failUnless((share_number, ss.my_nodeid, new_share_location)
                         in shares)
 
+    def _setup_grid(self):
+        """
+        I set up a NoNetworkGrid with a single server and client.
+        """
+        self.set_up_grid(num_clients=1, num_servers=1)
 
     def _setup_and_upload(self):
         """
hunk ./src/allmydata/test/test_upload.py 776
         upload a file to it, store its uri in self.uri, and store its
         sharedata in self.shares.
         """
-        self.set_up_grid(num_clients=1, num_servers=1)
+        self._setup_grid()
         client = self.g.clients[0]
         client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
         data = upload.Data("data" * 10000, convergence="")
hunk ./src/allmydata/test/test_upload.py 814
 
 
     def _setUp(self, ns):
-        # Used by test_happy_semantics and test_prexisting_share_behavior
+        # Used by test_happy_semantics and test_preexisting_share_behavior
         # to set up the grid.
         self.node = FakeClient(mode="good", num_servers=ns)
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 825
     def test_happy_semantics(self):
         self._setUp(2)
         DATA = upload.Data("kittens" * 10000, convergence="")
-        # These parameters are unsatisfiable with the client that we've made
-        # -- we'll use them to test that the semnatics work correctly.
+        # These parameters are unsatisfiable with only 2 servers.
         self.set_encoding_parameters(k=3, happy=5, n=10)
         d = self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
hunk ./src/allmydata/test/test_upload.py 828
-                            "shares could only be placed on 2 servers "
-                            "(5 were requested)",
+                            "shares could only be placed or found on 2 "
+                            "server(s). We were asked to place shares on "
+                            "at least 5 server(s) such that any 3 of them "
+                            "have enough shares to recover the file",
                             self.u.upload, DATA)
         # Let's reset the client to have 10 servers
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 836
             self._setUp(10))
-        # These parameters are satisfiable with the client we've made.
+        # These parameters are satisfiable with 10 servers.
         d.addCallback(lambda ign:
             self.set_encoding_parameters(k=3, happy=5, n=10))
hunk ./src/allmydata/test/test_upload.py 839
-        # this should work
         d.addCallback(lambda ign:
             self.u.upload(DATA))
         # Let's reset the client to have 7 servers
hunk ./src/allmydata/test/test_upload.py 845
         # (this is less than n, but more than h)
         d.addCallback(lambda ign:
             self._setUp(7))
-        # These encoding parameters should still be satisfiable with our 
-        # client setup
+        # These parameters are satisfiable with 7 servers.
         d.addCallback(lambda ign:
             self.set_encoding_parameters(k=3, happy=5, n=10))
hunk ./src/allmydata/test/test_upload.py 848
-        # This, then, should work.
         d.addCallback(lambda ign:
             self.u.upload(DATA))
         return d
hunk ./src/allmydata/test/test_upload.py 862
         #
         # The scenario in comment:52 proposes that we have a layout
         # like:
-        # server 1: share 1
-        # server 2: share 1
-        # server 3: share 1
-        # server 4: shares 2 - 10
+        # server 0: shares 1 - 9
+        # server 1: share 0, read-only
+        # server 2: share 0, read-only
+        # server 3: share 0, read-only
         # To get access to the shares, we will first upload to one 
hunk ./src/allmydata/test/test_upload.py 867
-        # server, which will then have shares 1 - 10. We'll then 
+        # server, which will then have shares 0 - 9. We'll then 
         # add three new servers, configure them to not accept any new
hunk ./src/allmydata/test/test_upload.py 869
-        # shares, then write share 1 directly into the serverdir of each.
-        # Then each of servers 1 - 3 will report that they have share 1, 
-        # and will not accept any new share, while server 4 will report that
-        # it has shares 2 - 10 and will accept new shares.
+        # shares, then write share 0 directly into the serverdir of each,
+        # and then remove share 0 from server 0 in the same way.
+        # Then each of servers 1 - 3 will report that they have share 0, 
+        # and will not accept any new share, while server 0 will report that
+        # it has shares 1 - 9 and will accept new shares.
         # We'll then set 'happy' = 4, and see that an upload fails
         # (as it should)
         d = self._setup_and_upload()
hunk ./src/allmydata/test/test_upload.py 878
         d.addCallback(lambda ign:
-            self._add_server_with_share(1, 0, True))
+            self._add_server_with_share(server_number=1, share_number=0,
+                                        readonly=True))
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 881
-            self._add_server_with_share(2, 0, True))
+            self._add_server_with_share(server_number=2, share_number=0,
+                                        readonly=True))
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 884
-            self._add_server_with_share(3, 0, True))
+            self._add_server_with_share(server_number=3, share_number=0,
+                                        readonly=True))
         # Remove the first share from server 0.
hunk ./src/allmydata/test/test_upload.py 887
-        def _remove_share_0():
+        def _remove_share_0_from_server_0():
             share_location = self.shares[0][2]
             os.remove(share_location)
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 891
-            _remove_share_0())
+            _remove_share_0_from_server_0())
         # Set happy = 4 in the client.
         def _prepare():
             client = self.g.clients[0]
hunk ./src/allmydata/test/test_upload.py 901
             _prepare())
         # Uploading data should fail
         d.addCallback(lambda client:
-            self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
-                            "shares could only be placed on 2 servers "
-                            "(4 were requested)",
+            self.shouldFail(UploadUnhappinessError,
+                            "test_problem_layout_comment_52_test_1",
+                            "shares could be placed or found on 4 server(s), "
+                            "but they are not spread out evenly enough to "
+                            "ensure that any 3 of these servers would have "
+                            "enough shares to recover the file. "
+                            "We were asked to place shares on at "
+                            "least 4 servers such that any 3 of them have "
+                            "enough shares to recover the file",
                             client.upload, upload.Data("data" * 10000,
                                                        convergence="")))
 
hunk ./src/allmydata/test/test_upload.py 932
                                         readonly=True))
         def _prepare2():
             client = self.g.clients[0]
-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
             return client
         d.addCallback(lambda ign:
             _prepare2())
hunk ./src/allmydata/test/test_upload.py 937
         d.addCallback(lambda client:
-            self.shouldFail(UploadUnhappinessError, "test_happy_sematics",
-                            "shares could only be placed on 2 servers "
-                            "(3 were requested)",
+            self.shouldFail(UploadUnhappinessError,
+                            "test_problem_layout_comment_52_test_2",
+                            "shares could only be placed on 3 server(s) such "
+                            "that any 3 of them have enough shares to recover "
+                            "the file, but we were asked to place shares on "
+                            "at least 4 such servers.",
                             client.upload, upload.Data("data" * 10000,
                                                        convergence="")))
         return d
hunk ./src/allmydata/test/test_upload.py 956
         def _change_basedir(ign):
             self.basedir = self.mktemp()
         _change_basedir(None)
-        d = self._setup_and_upload()
-        # We start by uploading all of the shares to one server (which has 
-        # already been done above).
+        # We start by uploading all of the shares to one server.
         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
         # one share from our initial upload to each of these.
         # The counterintuitive ordering of the share numbers is to deal with 
hunk ./src/allmydata/test/test_upload.py 962
         # the permuting of these servers -- distributing the shares this 
         # way ensures that the Tahoe2PeerSelector sees them in the order 
-        # described above.
+        # described below.
+        d = self._setup_and_upload()
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=1, share_number=2))
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 975
         # server 1: share 2
         # server 2: share 0
         # server 3: share 1
-        # We want to change the 'happy' parameter in the client to 4. 
+        # We change the 'happy' parameter in the client to 4. 
         # The Tahoe2PeerSelector will see the peers permuted as:
         # 2, 3, 1, 0
         # Ideally, a reupload of our original data should work.
hunk ./src/allmydata/test/test_upload.py 988
             client.upload(upload.Data("data" * 10000, convergence="")))
 
 
-        # This scenario is basically comment:53, but with the order reversed;
-        # this means that the Tahoe2PeerSelector sees
-        # server 2: shares 1-10
-        # server 3: share 1
-        # server 1: share 2
-        # server 4: share 3
+        # This scenario is basically comment:53, but changed so that the 
+        # Tahoe2PeerSelector sees the server with all of the shares before
+        # any of the other servers.
+        # The layout is:
+        # server 2: shares 0 - 9
+        # server 3: share 0 
+        # server 1: share 1 
+        # server 4: share 2
+        # The Tahoe2PeerSelector sees the peers permuted as:
+        # 2, 3, 1, 4
+        # Note that server 0 has been replaced by server 4; this makes it 
+        # easier to ensure that the last server seen by Tahoe2PeerSelector
+        # has only one share. 
         d.addCallback(_change_basedir)
         d.addCallback(lambda ign:
             self._setup_and_upload())
hunk ./src/allmydata/test/test_upload.py 1012
             self._add_server_with_share(server_number=1, share_number=2))
         # Copy all of the other shares to server number 2
         def _copy_shares(ign):
-            for i in xrange(1, 10):
+            for i in xrange(0, 10):
                 self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
         # Remove the first server, and add a placeholder with share 0
hunk ./src/allmydata/test/test_upload.py 1024
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+
+
         # Try the same thing, but with empty servers after the first one
         # We want to make sure that Tahoe2PeerSelector will redistribute
         # shares as necessary, not simply discover an existing layout.
hunk ./src/allmydata/test/test_upload.py 1029
+        # The layout is:
+        # server 2: shares 0 - 9
+        # server 3: empty
+        # server 1: empty
+        # server 4: empty
         d.addCallback(_change_basedir)
         d.addCallback(lambda ign:
             self._setup_and_upload())
hunk ./src/allmydata/test/test_upload.py 1043
             self._add_server(server_number=3))
         d.addCallback(lambda ign:
             self._add_server(server_number=1))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=4))
         d.addCallback(_copy_shares)
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
hunk ./src/allmydata/test/test_upload.py 1048
-        d.addCallback(lambda ign:
-            self._add_server(server_number=4))
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1051
+        # Make sure that only as many shares as necessary to satisfy
+        # servers of happiness were pushed.
+        d.addCallback(lambda results:
+            self.failUnlessEqual(results.pushed_shares, 3))
         return d
 
 
hunk ./src/allmydata/test/test_upload.py 1133
 
 
     def test_dropped_servers_in_encoder(self):
+        # The Encoder does its own "servers_of_happiness" check if it 
+        # happens to lose a bucket during an upload (it assumes that 
+        # the layout presented to it satisfies "servers_of_happiness"
+        # until a failure occurs)
+        # 
+        # This test simulates an upload where servers break after peer
+        # selection, but before they are written to.
         def _set_basedir(ign=None):
             self.basedir = self.mktemp()
         _set_basedir()
hunk ./src/allmydata/test/test_upload.py 1146
         d = self._setup_and_upload();
         # Add 5 servers
         def _do_server_setup(ign):
-            self._add_server_with_share(1)
-            self._add_server_with_share(2)
-            self._add_server_with_share(3)
-            self._add_server_with_share(4)
-            self._add_server_with_share(5)
+            self._add_server_with_share(server_number=1)
+            self._add_server_with_share(server_number=2)
+            self._add_server_with_share(server_number=3)
+            self._add_server_with_share(server_number=4)
+            self._add_server_with_share(server_number=5)
         d.addCallback(_do_server_setup)
         # remove the original server
         # (necessary to ensure that the Tahoe2PeerSelector will distribute
hunk ./src/allmydata/test/test_upload.py 1159
             server = self.g.servers_by_number[0]
             self.g.remove_server(server.my_nodeid)
         d.addCallback(_remove_server)
-        # This should succeed.
+        # This should succeed; we still have 4 servers, and the 
+        # happiness of the upload is 4.
         d.addCallback(lambda ign:
             self._do_upload_with_broken_servers(1))
         # Now, do the same thing over again, but drop 2 servers instead
hunk ./src/allmydata/test/test_upload.py 1164
-        # of 1. This should fail.
+        # of 1. This should fail, because servers_of_happiness is 4 and 
+        # we can't satisfy that.
         d.addCallback(_set_basedir)
         d.addCallback(lambda ign:
             self._setup_and_upload())
hunk ./src/allmydata/test/test_upload.py 1174
         d.addCallback(lambda ign:
             self.shouldFail(UploadUnhappinessError,
                             "test_dropped_servers_in_encoder",
-                            "lost too many servers during upload "
-                            "(still have 3, want 4)",
+                            "shares could only be placed on 3 server(s) "
+                            "such that any 3 of them have enough shares to "
+                            "recover the file, but we were asked to place "
+                            "shares on at least 4",
                             self._do_upload_with_broken_servers, 2))
         # Now do the same thing over again, but make some of the servers
         # readonly, break some of the ones that aren't, and make sure that
hunk ./src/allmydata/test/test_upload.py 1203
         d.addCallback(lambda ign:
             self.shouldFail(UploadUnhappinessError,
                             "test_dropped_servers_in_encoder",
-                            "lost too many servers during upload "
-                            "(still have 3, want 4)",
+                            "shares could only be placed on 3 server(s) "
+                            "such that any 3 of them have enough shares to "
+                            "recover the file, but we were asked to place "
+                            "shares on at least 4",
                             self._do_upload_with_broken_servers, 2))
         return d
 
hunk ./src/allmydata/test/test_upload.py 1211
 
-    def test_servers_with_unique_shares(self):
-        # servers_with_unique_shares expects a dict of 
-        # shnum => peerid as a preexisting shares argument.
+    def test_merge_peers(self):
+        # merge_peers merges a set of used_peers (peer trackers) into a
+        # dict of shnum -> set(peerid) mappings.
+        shares = {
+                    1 : set(["server1"]),
+                    2 : set(["server2"]),
+                    3 : set(["server3"]),
+                    4 : set(["server4", "server5"]),
+                    5 : set(["server1", "server2"]),
+                 }
+        # given an empty set of used_peers, it should just return the
+        # first argument unchanged.
+        self.failUnlessEqual(shares, merge_peers(shares, set([])))
+        class FakePeerTracker:
+            pass
+        trackers = []
+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
+            t = FakePeerTracker()
+            t.peerid = server
+            t.buckets = [i]
+            trackers.append(t)
+        expected = {
+                    1 : set(["server1"]),
+                    2 : set(["server2"]),
+                    3 : set(["server3"]),
+                    4 : set(["server4", "server5"]),
+                    5 : set(["server1", "server2", "server5"]),
+                    6 : set(["server6"]),
+                    7 : set(["server7"]),
+                    8 : set(["server8"]),
+                   }
+        self.failUnlessEqual(expected, merge_peers(shares, set(trackers)))
+        shares2 = {}
+        expected = {
+                    5 : set(["server5"]),
+                    6 : set(["server6"]),
+                    7 : set(["server7"]),
+                    8 : set(["server8"]),
+                   }
+        self.failUnlessEqual(expected, merge_peers(shares2, set(trackers)))
+        shares3 = {}
+        trackers = []
+        expected = {}
+        for (i, server) in [(i, "server%d" % i) for i in xrange(10)]:
+            shares3[i] = set([server])
+            t = FakePeerTracker()
+            t.peerid = server
+            t.buckets = [i]
+            trackers.append(t)
+            expected[i] = set([server])
+        self.failUnlessEqual(expected, merge_peers(shares3, set(trackers)))
+
+
+    def test_servers_of_happiness_utility_function(self):
+        # These tests are concerned with the servers_of_happiness()
+        # utility function, and its underlying matching algorithm. Other
+        # aspects of the servers_of_happiness behavior are tested
+        # elsewhere. These tests exist to ensure that
+        # servers_of_happiness doesn't under- or overcount the happiness
+        # value for given inputs.
+
+        # servers_of_happiness expects a dict of 
+        # shnum => set(peerids) as a preexisting shares argument.
         test1 = {
hunk ./src/allmydata/test/test_upload.py 1275
-                 1 : "server1",
-                 2 : "server2",
-                 3 : "server3",
-                 4 : "server4"
+                 1 : set(["server1"]),
+                 2 : set(["server2"]),
+                 3 : set(["server3"]),
+                 4 : set(["server4"])
                 }
hunk ./src/allmydata/test/test_upload.py 1280
-        unique_servers = upload.servers_with_unique_shares(test1)
-        self.failUnlessEqual(4, len(unique_servers))
-        for server in ["server1", "server2", "server3", "server4"]:
-            self.failUnlessIn(server, unique_servers)
-        test1[4] = "server1"
-        # Now there should only be 3 unique servers.
-        unique_servers = upload.servers_with_unique_shares(test1)
-        self.failUnlessEqual(3, len(unique_servers))
-        for server in ["server1", "server2", "server3"]:
-            self.failUnlessIn(server, unique_servers)
-        # servers_with_unique_shares expects to receive some object with
-        # a peerid attribute. So we make a FakePeerTracker whose only
-        # job is to have a peerid attribute.
+        happy = servers_of_happiness(test1)
+        self.failUnlessEqual(4, happy)
+        test1[4] = set(["server1"])
+        # We've added a duplicate server, so now servers_of_happiness
+        # should be 3 instead of 4.
+        happy = servers_of_happiness(test1)
+        self.failUnlessEqual(3, happy)
+        # The second argument of merge_peers should be a set of 
+        # objects with peerid and buckets as attributes. In actual use, 
+        # these will be PeerTracker instances, but for testing it is fine 
+        # to make a FakePeerTracker whose only job is to hold those
+        # attributes.
         class FakePeerTracker:
             pass
         trackers = []
hunk ./src/allmydata/test/test_upload.py 1300
             t.peerid = server
             t.buckets = [i]
             trackers.append(t)
-        # Recall that there are 3 unique servers in test1. Since none of
-        # those overlap with the ones in trackers, we should get 7 back
-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
-        self.failUnlessEqual(7, len(unique_servers))
-        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
-        expected_servers.remove("server4")
-        for server in expected_servers:
-            self.failUnlessIn(server, unique_servers)
-        # Now add an overlapping server to trackers.
+        # Recall that test1 is a server layout with servers_of_happiness
+        # = 3.  Since there isn't any overlap between the shnum ->
+        # set([peerid]) correspondences in test1 and those in trackers,
+        # the result here should be 7.
+        test2 = merge_peers(test1, set(trackers))
+        happy = servers_of_happiness(test2)
+        self.failUnlessEqual(7, happy)
+        # Now add an overlapping server to trackers. This is redundant,
+        # so it should not cause the previously reported happiness value
+        # to change.
         t = FakePeerTracker()
         t.peerid = "server1"
         t.buckets = [1]
hunk ./src/allmydata/test/test_upload.py 1314
         trackers.append(t)
-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
-        self.failUnlessEqual(7, len(unique_servers))
-        for server in expected_servers:
-            self.failUnlessIn(server, unique_servers)
+        test2 = merge_peers(test1, set(trackers))
+        happy = servers_of_happiness(test2)
+        self.failUnlessEqual(7, happy)
         test = {}
hunk ./src/allmydata/test/test_upload.py 1318
-        unique_servers = upload.servers_with_unique_shares(test)
-        self.failUnlessEqual(0, len(test))
+        happy = servers_of_happiness(test)
+        self.failUnlessEqual(0, happy)
+        # Test a more substantial overlap between the trackers and the 
+        # existing assignments.
+        test = {
+            1 : set(['server1']),
+            2 : set(['server2']),
+            3 : set(['server3']),
+            4 : set(['server4']),
+        }
+        trackers = []
+        t = FakePeerTracker()
+        t.peerid = 'server5'
+        t.buckets = [4]
+        trackers.append(t)
+        t = FakePeerTracker()
+        t.peerid = 'server6'
+        t.buckets = [3, 5]
+        trackers.append(t)
+        # The value returned by servers_of_happiness is the size 
+        # of a maximum matching in the bipartite graph that
+        # servers_of_happiness() makes between peerids and share
+        # numbers. It should find something like this:
+        # (server 1, share 1)
+        # (server 2, share 2)
+        # (server 3, share 3)
+        # (server 5, share 4)
+        # (server 6, share 5)
+        # 
+        # and, since there are 5 edges in this matching, it should
+        # return 5.
+        test2 = merge_peers(test, set(trackers))
+        happy = servers_of_happiness(test2)
+        self.failUnlessEqual(5, happy)
+        # Zooko's first puzzle: 
+        # (from http://allmydata.org/trac/tahoe-lafs/ticket/778#comment:156)
+        #
+        # server 1: shares 0, 1
+        # server 2: shares 1, 2
+        # server 3: share 2
+        #
+        # This should yield happiness of 3.
+        test = {
+            0 : set(['server1']),
+            1 : set(['server1', 'server2']),
+            2 : set(['server2', 'server3']),
+        }
+        self.failUnlessEqual(3, servers_of_happiness(test))
+        # Zooko's second puzzle:        
+        # (from http://allmydata.org/trac/tahoe-lafs/ticket/778#comment:158)
+        # 
+        # server 1: shares 0, 1
+        # server 2: share 1
+        # 
+        # This should yield happiness of 2.
+        test = {
+            0 : set(['server1']),
+            1 : set(['server1', 'server2']),
+        }
+        self.failUnlessEqual(2, servers_of_happiness(test))
 
 
     def test_shares_by_server(self):
hunk ./src/allmydata/test/test_upload.py 1381
-        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
-        shares_by_server = upload.shares_by_server(test)
-        self.failUnlessEqual(set([1]), shares_by_server["server1"])
-        self.failUnlessEqual(set([2]), shares_by_server["server2"])
-        self.failUnlessEqual(set([3]), shares_by_server["server3"])
-        self.failUnlessEqual(set([4]), shares_by_server["server4"])
+        test = dict([(i, set(["server%d" % i])) for i in xrange(1, 5)])
+        sbs = shares_by_server(test)
+        self.failUnlessEqual(set([1]), sbs["server1"])
+        self.failUnlessEqual(set([2]), sbs["server2"])
+        self.failUnlessEqual(set([3]), sbs["server3"])
+        self.failUnlessEqual(set([4]), sbs["server4"])
         test1 = {
hunk ./src/allmydata/test/test_upload.py 1388
-                    1 : "server1",
-                    2 : "server1",
-                    3 : "server1",
-                    4 : "server2",
-                    5 : "server2"
+                    1 : set(["server1"]),
+                    2 : set(["server1"]),
+                    3 : set(["server1"]),
+                    4 : set(["server2"]),
+                    5 : set(["server2"])
                 }
hunk ./src/allmydata/test/test_upload.py 1394
-        shares_by_server = upload.shares_by_server(test1)
-        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
-        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
+        sbs = shares_by_server(test1)
+        self.failUnlessEqual(set([1, 2, 3]), sbs["server1"])
+        self.failUnlessEqual(set([4, 5]), sbs["server2"])
+        # This should fail unless the peerid part of the mapping is a set
+        test2 = {1: "server1"}
+        self.shouldFail(AssertionError,
+                       "test_shares_by_server",
+                       "",
+                       shares_by_server, test2)
 
 
     def test_existing_share_detection(self):
hunk ./src/allmydata/test/test_upload.py 1409
         self.basedir = self.mktemp()
         d = self._setup_and_upload()
         # Our final setup should look like this:
-        # server 1: shares 1 - 10, read-only
+        # server 1: shares 0 - 9, read-only
         # server 2: empty
         # server 3: empty
         # server 4: empty
hunk ./src/allmydata/test/test_upload.py 1441
         return d
 
 
-    def test_should_add_server(self):
-        shares = dict([(i, "server%d" % i) for i in xrange(10)])
-        self.failIf(upload.should_add_server(shares, "server1", 4))
-        shares[4] = "server1"
-        self.failUnless(upload.should_add_server(shares, "server4", 4))
-        shares = {}
-        self.failUnless(upload.should_add_server(shares, "server1", 1))
-
-
     def test_exception_messages_during_peer_selection(self):
hunk ./src/allmydata/test/test_upload.py 1442
-        # server 1: readonly, no shares
-        # server 2: readonly, no shares
-        # server 3: readonly, no shares
-        # server 4: readonly, no shares
-        # server 5: readonly, no shares
+        # server 1: read-only, no shares
+        # server 2: read-only, no shares
+        # server 3: read-only, no shares
+        # server 4: read-only, no shares
+        # server 5: read-only, no shares
         # This will fail, but we want to make sure that the log messages
         # are informative about why it has failed.
         self.basedir = self.mktemp()
hunk ./src/allmydata/test/test_upload.py 1472
             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
                             "peer selection failed for <Tahoe2PeerSelector "
                             "for upload dglev>: placed 0 shares out of 10 "
-                            "total (10 homeless), want to place on 4 servers,"
-                            " sent 5 queries to 5 peers, 0 queries placed "
+                            "total (10 homeless), want to place shares on at "
+                            "least 4 servers such that any 3 of them have "
+                            "enough shares to recover the file, "
+                            "sent 5 queries to 5 peers, 0 queries placed "
                             "some shares, 5 placed none "
                             "(of which 5 placed none due to the server being "
                             "full and 0 placed none due to an error)",
hunk ./src/allmydata/test/test_upload.py 1483
                             upload.Data("data" * 10000, convergence="")))
 
 
-        # server 1: readonly, no shares
+        # server 1: read-only, no shares
         # server 2: broken, no shares
hunk ./src/allmydata/test/test_upload.py 1485
-        # server 3: readonly, no shares
-        # server 4: readonly, no shares
-        # server 5: readonly, no shares
+        # server 3: read-only, no shares
+        # server 4: read-only, no shares
+        # server 5: read-only, no shares
         def _reset(ign):
             self.basedir = self.mktemp()
         d.addCallback(_reset)
hunk ./src/allmydata/test/test_upload.py 1500
         def _break_server_2(ign):
             server = self.g.servers_by_number[2].my_nodeid
             # We have to break the server in servers_by_id, 
-            # because the ones in servers_by_number isn't wrapped,
-            # and doesn't look at its broken attribute
+            # because the one in servers_by_number isn't wrapped,
+            # and doesn't look at its broken attribute when answering
+            # queries.
             self.g.servers_by_id[server].broken = True
         d.addCallback(_break_server_2)
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1513
             self._add_server_with_share(server_number=5, readonly=True))
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
-        def _reset_encoding_parameters(ign):
+        def _reset_encoding_parameters(ign, happy=4):
             client = self.g.clients[0]
hunk ./src/allmydata/test/test_upload.py 1515
-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
             return client
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
hunk ./src/allmydata/test/test_upload.py 1522
             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
                             "peer selection failed for <Tahoe2PeerSelector "
                             "for upload dglev>: placed 0 shares out of 10 "
-                            "total (10 homeless), want to place on 4 servers,"
-                            " sent 5 queries to 5 peers, 0 queries placed "
+                            "total (10 homeless), want to place shares on at "
+                            "least 4 servers such that any 3 of them have "
+                            "enough shares to recover the file, "
+                            "sent 5 queries to 5 peers, 0 queries placed "
                             "some shares, 5 placed none "
                             "(of which 4 placed none due to the server being "
                             "full and 1 placed none due to an error)",
hunk ./src/allmydata/test/test_upload.py 1531
                             client.upload,
                             upload.Data("data" * 10000, convergence="")))
+        # server 0, server 1 = empty, accepting shares
+        # This should place all of the shares, but still fail with happy=4.
+        # We want to make sure that the exception message is worded correctly.
+        d.addCallback(_reset)
+        d.addCallback(lambda ign:
+            self._setup_grid())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
+                            "shares could only be placed or found on 2 "
+                            "server(s). We were asked to place shares on at "
+                            "least 4 server(s) such that any 3 of them have "
+                            "enough shares to recover the file.",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+        # servers 0 - 4 = empty, accepting shares
+        # This too should place all the shares, and this too should fail,
+        # but since the effective happiness is more than the k encoding
+        # parameter, it should trigger a different error message than the one
+        # above.
+        d.addCallback(_reset)
+        d.addCallback(lambda ign:
+            self._setup_grid())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=4))
+        d.addCallback(_reset_encoding_parameters, happy=7)
+        d.addCallback(lambda client:
+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
+                            "shares could only be placed on 5 server(s) such "
+                            "that any 3 of them have enough shares to recover "
+                            "the file, but we were asked to place shares on "
+                            "at least 7 such servers.",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+        # server 0: shares 0 - 9
+        # server 1: share 0, read-only
+        # server 2: share 0, read-only
+        # server 3: share 0, read-only
+        # This should place all of the shares, but fail with happy=7.
+        # Since the number of servers with shares is more than the number
+        # necessary to reconstitute the file, this will trigger a different
+        # error message than either of those above. 
+        d.addCallback(_reset)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=0,
+                                        readonly=True))
+        d.addCallback(_reset_encoding_parameters, happy=7)
+        d.addCallback(lambda client:
+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
+                            "shares could be placed or found on 4 server(s), "
+                            "but they are not spread out evenly enough to "
+                            "ensure that any 3 of these servers would have "
+                            "enough shares to recover the file. We were asked "
+                            "to place shares on at least 7 servers such that "
+                            "any 3 of them have enough shares to recover the "
+                            "file",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
         return d
 
 
}
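
The comments in test_merge_peers and test_servers_of_happiness_utility_function
above describe the two happinessutil helpers only through their inputs and
expected outputs. The sketch below restates both ideas in one place; the names
merge_peers_sketch and servers_of_happiness_sketch are stand-ins, not the real
allmydata.util.happinessutil code. Merging folds the planned placements (peer
trackers carrying peerid and buckets attributes) into the map of shares
already found on the grid, and happiness is the size of a maximum matching
between share numbers and peerids, computed here with a standard
augmenting-path search.

    def merge_peers_sketch(existing, used_peers):
        # existing: { shnum : set(peerids) } for shares already on the grid.
        # used_peers: iterable of tracker-like objects with .peerid and
        # .buckets (the shnums the upload plans to place on that peer).
        merged = dict((shnum, set(peers)) for (shnum, peers) in existing.items())
        for tracker in used_peers:
            for shnum in tracker.buckets:
                merged.setdefault(shnum, set()).add(tracker.peerid)
        return merged

    def servers_of_happiness_sketch(sharemap):
        # sharemap: { shnum : set(peerids) }. Returns the size of a maximum
        # matching between share numbers and peerids: the largest number of
        # (share, peer) pairs in which no share and no peer appears twice.
        match_of_peer = {}  # peerid -> shnum currently matched to it

        def augment(shnum, visited):
            # Try to assign `shnum` a peer, re-matching already-assigned
            # shares along an augmenting path if necessary.
            for peer in sharemap.get(shnum, set()):
                if peer in visited:
                    continue
                visited.add(peer)
                if (peer not in match_of_peer
                    or augment(match_of_peer[peer], visited)):
                    match_of_peer[peer] = shnum
                    return True
            return False

        return sum(1 for shnum in sharemap if augment(shnum, set()))

    # Zooko's first puzzle from the tests above: expected happiness is 3.
    puzzle = {
        0: set(["server1"]),
        1: set(["server1", "server2"]),
        2: set(["server2", "server3"]),
    }
    assert servers_of_happiness_sketch(puzzle) == 3

On Zooko's second puzzle (shares 0 and 1 on server1, share 1 also on server2)
the same sketch returns 2, which is the value the test expects.
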

Context:

[Dependency on Windmill test framework is not needed yet.
david-sarah@jacaranda.org**20100504161043
 Ignore-this: be088712bec650d4ef24766c0026ebc8
] 
[tests: pass z to tar so that BSD tar will know to ungzip
zooko@zooko.com**20100504090628
 Ignore-this: 1339e493f255e8fc0b01b70478f23a09
] 
[setup: update comments and URLs in setup.cfg
zooko@zooko.com**20100504061653
 Ignore-this: f97692807c74bcab56d33100c899f829
] 
[setup: reorder and extend the show-tool-versions script, the better to glean information about our new buildslaves
zooko@zooko.com**20100504045643
 Ignore-this: 836084b56b8d4ee8f1de1f4efb706d36
] 
[CLI: Support for https url in option --node-url
Francois Deppierraz <francois@ctrlaltdel.ch>**20100430185609
 Ignore-this: 1717176b4d27c877e6bc67a944d9bf34
 
 This patch modifies the regular expression used for verifying of '--node-url'
 parameter.  Support for accessing a Tahoe gateway over HTTPS was already
 present, thanks to Python's urllib.
 
] 
[backupdb.did_create_directory: use REPLACE INTO, not INSERT INTO + ignore error
Brian Warner <warner@lothar.com>**20100428050803
 Ignore-this: 1fca7b8f364a21ae413be8767161e32f
 
 This handles the case where we upload a new tahoe directory for a
 previously-processed local directory, possibly creating a new dircap (if the
 metadata had changed). Now we replace the old dirhash->dircap record. The
 previous behavior left the old record in place (with the old dircap and
 timestamps), so we'd never stop creating new directories and never converge
 on a null backup.
] 
["tahoe webopen": add --info flag, to get ?t=info
Brian Warner <warner@lothar.com>**20100424233003
 Ignore-this: 126b0bb6db340fabacb623d295eb45fa
 
 Also fix some trailing whitespace.
] 
[docs: install.html http-equiv refresh to quickstart.html
zooko@zooko.com**20100421165708
 Ignore-this: 52b4b619f9dde5886ae2cd7f1f3b734b
] 
[docs: install.html -> quickstart.html
zooko@zooko.com**20100421155757
 Ignore-this: 6084e203909306bed93efb09d0e6181d
 It is not called "installing" because that implies that it is going to change the configuration of your operating system. It is not called "building" because that implies that you need developer tools like a compiler. Also I added a stern warning against looking at the "InstallDetails" wiki page, which I have renamed to "AdvancedInstall".
] 
[Fix another typo in tahoe_storagespace munin plugin
david-sarah@jacaranda.org**20100416220935
 Ignore-this: ad1f7aa66b554174f91dfb2b7a3ea5f3
] 
[Add dependency on windmill >= 1.3
david-sarah@jacaranda.org**20100416190404
 Ignore-this: 4437a7a464e92d6c9012926b18676211
] 
[licensing: phrase the OpenSSL-exemption in the vocabulary of copyright instead of computer technology, and replicate the exemption from the GPL to the TGPPL
zooko@zooko.com**20100414232521
 Ignore-this: a5494b2f582a295544c6cad3f245e91
] 
[munin-tahoe_storagespace
freestorm77@gmail.com**20100221203626
 Ignore-this: 14d6d6a587afe1f8883152bf2e46b4aa
 
 Plugin configuration rename
 
] 
[setup: add licensing declaration for setuptools (noticed by the FSF compliance folks)
zooko@zooko.com**20100309184415
 Ignore-this: 2dfa7d812d65fec7c72ddbf0de609ccb
] 
[setup: fix error in licensing declaration from Shawn Willden, as noted by the FSF compliance division
zooko@zooko.com**20100309163736
 Ignore-this: c0623d27e469799d86cabf67921a13f8
] 
[CREDITS to Jacob Appelbaum
zooko@zooko.com**20100304015616
 Ignore-this: 70db493abbc23968fcc8db93f386ea54
] 
[desert-island-build-with-proper-versions
jacob@appelbaum.net**20100304013858] 
[docs: a few small edits to try to guide newcomers through the docs
zooko@zooko.com**20100303231902
 Ignore-this: a6aab44f5bf5ad97ea73e6976bc4042d
 These edits were suggested by my watching over Jake Appelbaum's shoulder as he completely ignored/skipped/missed install.html and also as he decided that debian.txt wouldn't help him with basic installation. Then I threw in a few docs edits that have been sitting around in my sandbox asking to be committed for months.
] 
[TAG allmydata-tahoe-1.6.1
david-sarah@jacaranda.org**20100228062314
 Ignore-this: eb5f03ada8ea953ee7780e7fe068539
] 
Patch bundle hash:
a703df8b85684b2a84e19d5f1e7d055c7ee1d317