11 patches for repository /Users/warner2/stuff/tahoe/trunk: Wed Jun 15 10:48:38 PDT 2011 warner@lothar.com * apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts Wed Jun 15 10:49:19 PDT 2011 warner@lothar.com * upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass Wed Jun 15 10:49:38 PDT 2011 warner@lothar.com * replace IServer.name() with get_name(), and get_longname() Wed Jun 15 10:50:11 PDT 2011 warner@lothar.com * test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s Wed Jun 15 10:50:45 PDT 2011 warner@lothar.com * remove now-unused ShareManglingMixin Wed Jun 15 10:51:04 PDT 2011 warner@lothar.com * remove get_serverid from DownloadStatus.add_dyhb_sent and customers Wed Jun 15 10:51:27 PDT 2011 warner@lothar.com * remove get_serverid from DownloadStatus.add_request_sent and customers Wed Jun 15 10:51:57 PDT 2011 warner@lothar.com * web/status.py: remove spurious whitespace, no code changes Wed Jun 15 10:52:22 PDT 2011 warner@lothar.com * DownloadStatus.add_known_share wants to be used by Finder, web.status Wed Jun 15 10:52:45 PDT 2011 warner@lothar.com * remove nodeid from WriteBucketProxy classes and customers Wed Jun 15 10:53:03 PDT 2011 warner@lothar.com * remove get_serverid() from ReadBucketProxy and customers, including Checker and debug.py dump-share commands New patches: [apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts warner@lothar.com**20110615174838 Ignore-this: 859d5f8acdeb4b4bb555986fe5ea1301 ] { hunk ./src/allmydata/storage_client.py 127 return sorted(self.get_connected_servers(), key=_permuted) def get_all_serverids(self): - serverids = set() - serverids.update(self.servers.keys()) - return frozenset(serverids) + return frozenset(self.servers.keys()) def get_connected_servers(self): hunk ./src/allmydata/storage_client.py 130 - return frozenset([s for s in self.get_known_servers() - if 
s.get_rref()]) + return frozenset([s for s in self.servers.values() if s.get_rref()]) def get_known_servers(self): hunk ./src/allmydata/storage_client.py 133 - return sorted(self.servers.values(), key=lambda s: s.get_serverid()) + return frozenset(self.servers.values()) def get_nickname_for_serverid(self, serverid): if serverid in self.servers: hunk ./src/allmydata/web/root.py 254 def data_services(self, ctx, data): sb = self.client.get_storage_broker() - return sb.get_known_servers() + return sorted(sb.get_known_servers(), key=lambda s: s.get_serverid()) def render_service_row(self, ctx, server): nodeid = server.get_serverid() } [upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass warner@lothar.com**20110615174919 Ignore-this: 30ae7ee20c0afc1c73ef43aa861a7be7 ] { hunk ./src/allmydata/immutable/upload.py 176 num_segments, total_shares, needed_shares, servers_of_happiness): """ - @return: (upload_trackers, already_servers), where upload_trackers is - a set of ServerTracker instances that have agreed to hold + @return: (upload_trackers, already_serverids), where upload_trackers + is a set of ServerTracker instances that have agreed to hold some shares for us (the shareids are stashed inside the hunk ./src/allmydata/immutable/upload.py 179 - ServerTracker), and already_servers is a dict mapping shnum - to a set of serverids which claim to already have the share. + ServerTracker), and already_serverids is a dict mapping + shnum to a set of serverids for servers which claim to + already have the share. 
""" if self._status: hunk ./src/allmydata/immutable/upload.py 192 self.needed_shares = needed_shares self.homeless_shares = set(range(total_shares)) - self.contacted_trackers = [] # servers worth asking again - self.contacted_trackers2 = [] # servers that we have asked again - self._started_second_pass = False self.use_trackers = set() # ServerTrackers that have shares assigned # to them self.preexisting_shares = {} # shareid => set(serverids) holding shareid hunk ./src/allmydata/immutable/upload.py 250 renew, cancel) trackers.append(st) return trackers - self.uncontacted_trackers = _make_trackers(writable_servers) + + # We assign each servers/trackers into one three lists. They all + # start in the "first pass" list. During the first pass, as we ask + # each one to hold a share, we move their tracker to the "second + # pass" list, until the first-pass list is empty. Then during the + # second pass, as we ask each to hold more shares, we move their + # tracker to the "next pass" list, until the second-pass list is + # empty. Then we move everybody from the next-pass list back to the + # second-pass list and repeat the "second" pass (really the third, + # fourth, etc pass), until all shares are assigned, or we've run out + # of potential servers. 
+ self.first_pass_trackers = _make_trackers(writable_servers) + self.second_pass_trackers = [] # servers worth asking again + self.next_pass_trackers = [] # servers that we have asked again + self._started_second_pass = False # We don't try to allocate shares to these servers, since they've # said that they're incapable of storing shares of the size that we'd hunk ./src/allmydata/immutable/upload.py 371 shares_to_spread = sum([len(list(sharelist)) - 1 for (server, sharelist) in shares.items()]) - if delta <= len(self.uncontacted_trackers) and \ + if delta <= len(self.first_pass_trackers) and \ shares_to_spread >= delta: items = shares.items() while len(self.homeless_shares) < delta: hunk ./src/allmydata/immutable/upload.py 407 self.log(servmsg, level=log.INFREQUENT) return self._failed("%s (%s)" % (failmsg, self._get_progress_message())) - if self.uncontacted_trackers: - tracker = self.uncontacted_trackers.pop(0) + if self.first_pass_trackers: + tracker = self.first_pass_trackers.pop(0) # TODO: don't pre-convert all serverids to ServerTrackers assert isinstance(tracker, ServerTracker) hunk ./src/allmydata/immutable/upload.py 423 len(self.homeless_shares))) d = tracker.query(shares_to_ask) d.addBoth(self._got_response, tracker, shares_to_ask, - self.contacted_trackers) + self.second_pass_trackers) return d hunk ./src/allmydata/immutable/upload.py 425 - elif self.contacted_trackers: + elif self.second_pass_trackers: # ask a server that we've already asked. 
if not self._started_second_pass: self.log("starting second pass", hunk ./src/allmydata/immutable/upload.py 432 level=log.NOISY) self._started_second_pass = True num_shares = mathutil.div_ceil(len(self.homeless_shares), - len(self.contacted_trackers)) - tracker = self.contacted_trackers.pop(0) + len(self.second_pass_trackers)) + tracker = self.second_pass_trackers.pop(0) shares_to_ask = set(sorted(self.homeless_shares)[:num_shares]) self.homeless_shares -= shares_to_ask self.query_count += 1 hunk ./src/allmydata/immutable/upload.py 444 len(self.homeless_shares))) d = tracker.query(shares_to_ask) d.addBoth(self._got_response, tracker, shares_to_ask, - self.contacted_trackers2) + self.next_pass_trackers) return d hunk ./src/allmydata/immutable/upload.py 446 - elif self.contacted_trackers2: + elif self.next_pass_trackers: # we've finished the second-or-later pass. Move all the remaining hunk ./src/allmydata/immutable/upload.py 448 - # servers back into self.contacted_trackers for the next pass. - self.contacted_trackers.extend(self.contacted_trackers2) - self.contacted_trackers2[:] = [] + # servers back into self.second_pass_trackers for the next pass. + self.second_pass_trackers.extend(self.next_pass_trackers) + self.next_pass_trackers[:] = [] return self._loop() else: # no more servers. If we haven't placed enough shares, we fail. 
hunk ./src/allmydata/immutable/upload.py 485 self.error_count += 1 self.bad_query_count += 1 self.homeless_shares |= shares_to_ask - if (self.uncontacted_trackers - or self.contacted_trackers - or self.contacted_trackers2): + if (self.first_pass_trackers + or self.second_pass_trackers + or self.next_pass_trackers): # there is still hope, so just loop pass else: hunk ./src/allmydata/immutable/upload.py 938 d.addCallback(_done) return d - def set_shareholders(self, (upload_trackers, already_servers), encoder): + def set_shareholders(self, holders, encoder): """ @param upload_trackers: a sequence of ServerTracker objects that have agreed to hold some shares for us (the hunk ./src/allmydata/immutable/upload.py 943 shareids are stashed inside the ServerTracker) - @paran already_servers: a dict mapping sharenum to a set of serverids - that claim to already have this share + + @param already_serverids: a dict mapping sharenum to a set of + serverids for servers that claim to already + have this share """ hunk ./src/allmydata/immutable/upload.py 948 - msgtempl = "set_shareholders; upload_trackers is %s, already_servers is %s" + (upload_trackers, already_serverids) = holders + msgtempl = "set_shareholders; upload_trackers is %s, already_serverids is %s" values = ([', '.join([str_shareloc(k,v) for k,v in st.buckets.iteritems()]) hunk ./src/allmydata/immutable/upload.py 952 - for st in upload_trackers], already_servers) + for st in upload_trackers], already_serverids) self.log(msgtempl % values, level=log.OPERATIONAL) # record already-present shares in self._results hunk ./src/allmydata/immutable/upload.py 955 - self._results.preexisting_shares = len(already_servers) + self._results.preexisting_shares = len(already_serverids) self._server_trackers = {} # k: shnum, v: instance of ServerTracker for tracker in upload_trackers: hunk ./src/allmydata/immutable/upload.py 961 assert isinstance(tracker, ServerTracker) buckets = {} - servermap = 
already_serverids.copy() for tracker in upload_trackers: buckets.update(tracker.buckets) for shnum in tracker.buckets: } [replace IServer.name() with get_name(), and get_longname() warner@lothar.com**20110615174938 Ignore-this: 4c1ddc68e63a7e3fba96c0f52e2edba5 ] { hunk ./src/allmydata/control.py 103 if not everyone_left: return results server = everyone_left.pop(0) - server_name = server.longname() + server_name = server.get_longname() connection = server.get_rref() start = time.time() d = connection.callRemote("get_buckets", "\x00"*16) hunk ./src/allmydata/immutable/checker.py 506 cancel_secret = self._get_cancel_secret(lease_seed) d2 = rref.callRemote("add_lease", storageindex, renew_secret, cancel_secret) - d2.addErrback(self._add_lease_failed, s.name(), storageindex) + d2.addErrback(self._add_lease_failed, s.get_name(), storageindex) d = rref.callRemote("get_buckets", storageindex) def _wrap_results(res): hunk ./src/allmydata/immutable/downloader/finder.py 90 # internal methods def loop(self): - pending_s = ",".join([rt.server.name() + pending_s = ",".join([rt.server.get_name() for rt in self.pending_requests]) # sort? 
self.log(format="ShareFinder loop: running=%(running)s" " hungry=%(hungry)s, pending=%(pending)s", hunk ./src/allmydata/immutable/downloader/finder.py 135 def send_request(self, server): req = RequestToken(server) self.pending_requests.add(req) - lp = self.log(format="sending DYHB to [%(name)s]", name=server.name(), + lp = self.log(format="sending DYHB to [%(name)s]", name=server.get_name(), level=log.NOISY, umid="Io7pyg") time_sent = now() d_ev = self._download_status.add_dyhb_sent(server.get_serverid(), hunk ./src/allmydata/immutable/downloader/finder.py 171 d_ev.finished(shnums, time_received) dyhb_rtt = time_received - time_sent if not buckets: - self.log(format="no shares from [%(name)s]", name=server.name(), + self.log(format="no shares from [%(name)s]", name=server.get_name(), level=log.NOISY, parent=lp, umid="U7d4JA") return shnums_s = ",".join([str(shnum) for shnum in shnums]) hunk ./src/allmydata/immutable/downloader/finder.py 176 self.log(format="got shnums [%(shnums)s] from [%(name)s]", - shnums=shnums_s, name=server.name(), + shnums=shnums_s, name=server.get_name(), level=log.NOISY, parent=lp, umid="0fcEZw") shares = [] for shnum, bucket in buckets.iteritems(): hunk ./src/allmydata/immutable/downloader/finder.py 223 def _got_error(self, f, server, req, d_ev, lp): d_ev.finished("error", now()) self.log(format="got error from [%(name)s]", - name=server.name(), failure=f, + name=server.get_name(), failure=f, level=log.UNUSUAL, parent=lp, umid="zUKdCw") hunk ./src/allmydata/immutable/downloader/share.py 96 self.had_corruption = False # for unit tests def __repr__(self): - return "Share(sh%d-on-%s)" % (self._shnum, self._server.name()) + return "Share(sh%d-on-%s)" % (self._shnum, self._server.get_name()) def is_alive(self): # XXX: reconsider. 
If the share sees a single error, should it remain hunk ./src/allmydata/immutable/downloader/share.py 792 log.msg(format="error requesting %(start)d+%(length)d" " from %(server)s for si %(si)s", start=start, length=length, - server=self._server.name(), si=self._si_prefix, + server=self._server.get_name(), si=self._si_prefix, failure=f, parent=lp, level=log.UNUSUAL, umid="BZgAJw") # retire our observers, assuming we won't be able to make any # further progress hunk ./src/allmydata/immutable/offloaded.py 67 # buckets is a dict: maps shum to an rref of the server who holds it shnums_s = ",".join([str(shnum) for shnum in buckets]) self.log("got_response: [%s] has %d shares (%s)" % - (server.name(), len(buckets), shnums_s), + (server.get_name(), len(buckets), shnums_s), level=log.NOISY) self._found_shares.update(buckets.keys()) for k in buckets: hunk ./src/allmydata/immutable/upload.py 96 def __repr__(self): return ("" - % (self._server.name(), si_b2a(self.storage_index)[:5])) + % (self._server.get_name(), si_b2a(self.storage_index)[:5])) def get_serverid(self): return self._server.get_serverid() hunk ./src/allmydata/immutable/upload.py 100 - def name(self): - return self._server.name() + def get_name(self): + return self._server.get_name() def query(self, sharenums): rref = self._server.get_rref() hunk ./src/allmydata/immutable/upload.py 289 self.num_servers_contacted += 1 self.query_count += 1 self.log("asking server %s for any existing shares" % - (tracker.name(),), level=log.NOISY) + (tracker.get_name(),), level=log.NOISY) dl = defer.DeferredList(ds) dl.addCallback(lambda ign: self._loop()) return dl hunk ./src/allmydata/immutable/upload.py 303 serverid = tracker.get_serverid() if isinstance(res, failure.Failure): self.log("%s got error during existing shares check: %s" - % (tracker.name(), res), level=log.UNUSUAL) + % (tracker.get_name(), res), level=log.UNUSUAL) self.error_count += 1 self.bad_query_count += 1 else: hunk ./src/allmydata/immutable/upload.py 311 if 
buckets: self.serverids_with_shares.add(serverid) self.log("response to get_buckets() from server %s: alreadygot=%s" - % (tracker.name(), tuple(sorted(buckets))), + % (tracker.get_name(), tuple(sorted(buckets))), level=log.NOISY) for bucket in buckets: self.preexisting_shares.setdefault(bucket, set()).add(serverid) hunk ./src/allmydata/immutable/upload.py 419 if self._status: self._status.set_status("Contacting Servers [%s] (first query)," " %d shares left.." - % (tracker.name(), + % (tracker.get_name(), len(self.homeless_shares))) d = tracker.query(shares_to_ask) d.addBoth(self._got_response, tracker, shares_to_ask, hunk ./src/allmydata/immutable/upload.py 440 if self._status: self._status.set_status("Contacting Servers [%s] (second query)," " %d shares left.." - % (tracker.name(), + % (tracker.get_name(), len(self.homeless_shares))) d = tracker.query(shares_to_ask) d.addBoth(self._got_response, tracker, shares_to_ask, hunk ./src/allmydata/immutable/upload.py 501 else: (alreadygot, allocated) = res self.log("response to allocate_buckets() from server %s: alreadygot=%s, allocated=%s" - % (tracker.name(), + % (tracker.get_name(), tuple(sorted(alreadygot)), tuple(sorted(allocated))), level=log.NOISY) progress = False hunk ./src/allmydata/storage_client.py 193 self._trigger_cb = None def __repr__(self): - return "" % self.name() + return "" % self.get_name() def get_serverid(self): return self._tubid def get_permutation_seed(self): hunk ./src/allmydata/storage_client.py 202 if self.rref: return self.rref.version return None - def name(self): # keep methodname short + def get_name(self): # keep methodname short return self.serverid_s hunk ./src/allmydata/storage_client.py 204 - def longname(self): + def get_longname(self): return idlib.nodeid_b2a(self._tubid) def get_lease_seed(self): return self._tubid hunk ./src/allmydata/storage_client.py 231 def _got_connection(self, rref): lp = log.msg(format="got connection to %(name)s, getting versions", - name=self.name(), + 
name=self.get_name(), facility="tahoe.storage_broker", umid="coUECQ") if self._trigger_cb: eventually(self._trigger_cb) hunk ./src/allmydata/storage_client.py 239 d = add_version_to_remote_reference(rref, default) d.addCallback(self._got_versioned_service, lp) d.addErrback(log.err, format="storageclient._got_connection", - name=self.name(), umid="Sdq3pg") + name=self.get_name(), umid="Sdq3pg") def _got_versioned_service(self, rref, lp): log.msg(format="%(name)s provided version info %(version)s", hunk ./src/allmydata/storage_client.py 243 - name=self.name(), version=rref.version, + name=self.get_name(), version=rref.version, facility="tahoe.storage_broker", umid="SWmJYg", level=log.NOISY, parent=lp) hunk ./src/allmydata/storage_client.py 256 return self.rref def _lost(self): - log.msg(format="lost connection to %(name)s", name=self.name(), + log.msg(format="lost connection to %(name)s", name=self.get_name(), facility="tahoe.storage_broker", umid="zbRllw") self.last_loss_time = time.time() self.rref = None hunk ./src/allmydata/test/no_network.py 125 self.serverid = serverid self.rref = rref def __repr__(self): - return "" % self.name() + return "" % self.get_name() def get_serverid(self): return self.serverid def get_permutation_seed(self): hunk ./src/allmydata/test/no_network.py 132 return self.serverid def get_lease_seed(self): return self.serverid - def name(self): + def get_name(self): return idlib.shortnodeid_b2a(self.serverid) hunk ./src/allmydata/test/no_network.py 134 - def longname(self): + def get_longname(self): return idlib.nodeid_b2a(self.serverid) def get_nickname(self): return "nickname" hunk ./src/allmydata/test/test_download.py 1285 self._server = server self._dyhb_rtt = rtt def __repr__(self): - return "sh%d-on-%s" % (self._shnum, self._server.name()) + return "sh%d-on-%s" % (self._shnum, self._server.get_name()) class MySegmentFetcher(SegmentFetcher): def __init__(self, *args, **kwargs): hunk ./src/allmydata/test/test_download.py 1343 def 
_check2(ign): self.failUnless(node.failed) self.failUnless(node.failed.check(NotEnoughSharesError)) - sname = serverA.name() + sname = serverA.get_name() self.failUnlessIn("complete= pending=sh0-on-%s overdue= unused=" % sname, str(node.failed)) d.addCallback(_check2) hunk ./src/allmydata/test/test_download.py 1565 def _check4(ign): self.failUnless(node.failed) self.failUnless(node.failed.check(NotEnoughSharesError)) - sname = servers["peer-2"].name() + sname = servers["peer-2"].get_name() self.failUnlessIn("complete=sh0 pending= overdue=sh2-on-%s unused=" % sname, str(node.failed)) d.addCallback(_check4) hunk ./src/allmydata/test/test_immutable.py 106 return self.serverid def get_rref(self): return self.rref - def name(self): + def get_name(self): return "name-%s" % self.serverid def get_version(self): return self.rref.version hunk ./src/allmydata/web/check_results.py 154 shareids.reverse() shareids_s = [ T.tt[shareid, " "] for shareid in sorted(shareids) ] servermap.append(T.tr[T.td[T.div(class_="nickname")[nickname], - T.div(class_="nodeid")[T.tt[s.name()]]], + T.div(class_="nodeid")[T.tt[s.get_name()]]], T.td[shareids_s], ]) num_shares_left -= len(shareids) hunk ./src/allmydata/web/root.py 259 def render_service_row(self, ctx, server): nodeid = server.get_serverid() - ctx.fillSlots("peerid", server.longname()) + ctx.fillSlots("peerid", server.get_longname()) ctx.fillSlots("nickname", server.get_nickname()) rhost = server.get_remote_host() if rhost: } [test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s warner@lothar.com**20110615175011 Ignore-this: 99644e8a104d413d1eaa5ba1cbad1965 ] { hunk ./src/allmydata/test/test_immutable.py 1 -from allmydata.test import common -from allmydata.interfaces import NotEnoughSharesError -from allmydata.util.consumer import download_to_data -from allmydata import uri -from twisted.internet import defer -from twisted.trial import unittest import random hunk ./src/allmydata/test/test_immutable.py 3 +from 
twisted.trial import unittest +from twisted.internet import defer +import mock from foolscap.api import eventually hunk ./src/allmydata/test/test_immutable.py 7 + +from allmydata.test import common +from allmydata.test.no_network import GridTestMixin +from allmydata.test.common import TEST_DATA +from allmydata import uri from allmydata.util import log hunk ./src/allmydata/test/test_immutable.py 13 +from allmydata.util.consumer import download_to_data hunk ./src/allmydata/test/test_immutable.py 15 +from allmydata.interfaces import NotEnoughSharesError +from allmydata.immutable.upload import Data from allmydata.immutable.downloader import finder hunk ./src/allmydata/test/test_immutable.py 19 -import mock - class MockNode(object): def __init__(self, check_reneging, check_fetch_failed): self.got = 0 hunk ./src/allmydata/test/test_immutable.py 135 return mocknode.when_finished() -class Test(common.ShareManglingMixin, common.ShouldFailMixin, unittest.TestCase): + +class Test(GridTestMixin, unittest.TestCase, common.ShouldFailMixin): + def startup(self, basedir): + self.basedir = basedir + self.set_up_grid(num_clients=2, num_servers=5) + c1 = self.g.clients[1] + # We need multiple segments to test crypttext hash trees that are + # non-trivial (i.e. they have more than just one hash in them). + c1.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12 + # Tests that need to test servers of happiness using this should + # set their own value for happy -- the default (7) breaks stuff. 
+ c1.DEFAULT_ENCODING_PARAMETERS['happy'] = 1 + d = c1.upload(Data(TEST_DATA, convergence="")) + def _after_upload(ur): + self.uri = ur.uri + self.filenode = self.g.clients[0].create_node_from_uri(ur.uri) + return self.uri + d.addCallback(_after_upload) + return d + + def _stash_shares(self, shares): + self.shares = shares + + def _download_and_check_plaintext(self, ign=None): + num_reads = self._count_reads() + d = download_to_data(self.filenode) + def _after_download(result): + self.failUnlessEqual(result, TEST_DATA) + return self._count_reads() - num_reads + d.addCallback(_after_download) + return d + + def _shuffled(self, num_shnums): + shnums = range(10) + random.shuffle(shnums) + return shnums[:num_shnums] + + def _count_reads(self): + return sum([s.stats_provider.get_stats() ['counters'].get('storage_server.read', 0) + for s in self.g.servers_by_number.values()]) + + + def _count_allocates(self): + return sum([s.stats_provider.get_stats() ['counters'].get('storage_server.allocate', 0) + for s in self.g.servers_by_number.values()]) + + def _count_writes(self): + return sum([s.stats_provider.get_stats() ['counters'].get('storage_server.write', 0) + for s in self.g.servers_by_number.values()]) + def test_test_code(self): # The following process of stashing the shares, running # replace_shares, and asserting that the new set of shares equals the hunk ./src/allmydata/test/test_immutable.py 189 # old is more to test this test code than to test the Tahoe code... 
- d = defer.succeed(None) - d.addCallback(self.find_all_shares) - stash = [None] - def _stash_it(res): - stash[0] = res - return res - d.addCallback(_stash_it) + d = self.startup("immutable/Test/code") + d.addCallback(self.copy_shares) + d.addCallback(self._stash_shares) + d.addCallback(self._download_and_check_plaintext) # The following process of deleting 8 of the shares and asserting # that you can't download it is more to test this test code than to hunk ./src/allmydata/test/test_immutable.py 197 # test the Tahoe code... - def _then_delete_8(unused=None): - self.replace_shares(stash[0], storage_index=self.uri.get_storage_index()) - for i in range(8): - self._delete_a_share() + def _then_delete_8(ign): + self.restore_all_shares(self.shares) + self.delete_shares_numbered(self.uri, range(8)) d.addCallback(_then_delete_8) hunk ./src/allmydata/test/test_immutable.py 201 - - def _then_download(unused=None): - d2 = download_to_data(self.n) - - def _after_download_callb(result): - self.fail() # should have gotten an errback instead - return result - def _after_download_errb(failure): - failure.trap(NotEnoughSharesError) - return None # success! - d2.addCallbacks(_after_download_callb, _after_download_errb) - return d2 - d.addCallback(_then_download) - + d.addCallback(lambda ign: + self.shouldFail(NotEnoughSharesError, "download-2", + "ran out of shares", + download_to_data, self.filenode)) return d def test_download(self): hunk ./src/allmydata/test/test_immutable.py 212 tested by test code in other modules, but this module is also going to test some more specific things about immutable download.) 
""" - d = defer.succeed(None) - before_download_reads = self._count_reads() - def _after_download(unused=None): - after_download_reads = self._count_reads() - #print before_download_reads, after_download_reads - self.failIf(after_download_reads-before_download_reads > 41, - (after_download_reads, before_download_reads)) + d = self.startup("immutable/Test/download") d.addCallback(self._download_and_check_plaintext) hunk ./src/allmydata/test/test_immutable.py 214 + def _after_download(ign): + num_reads = self._count_reads() + #print num_reads + self.failIf(num_reads > 41, num_reads) d.addCallback(_after_download) return d hunk ./src/allmydata/test/test_immutable.py 224 def test_download_from_only_3_remaining_shares(self): """ Test download after 7 random shares (of the 10) have been removed.""" - d = defer.succeed(None) - def _then_delete_7(unused=None): - for i in range(7): - self._delete_a_share() - before_download_reads = self._count_reads() - d.addCallback(_then_delete_7) - def _after_download(unused=None): - after_download_reads = self._count_reads() - #print before_download_reads, after_download_reads - self.failIf(after_download_reads-before_download_reads > 41, (after_download_reads, before_download_reads)) + d = self.startup("immutable/Test/download_from_only_3_remaining_shares") + d.addCallback(lambda ign: + self.delete_shares_numbered(self.uri, range(7))) d.addCallback(self._download_and_check_plaintext) hunk ./src/allmydata/test/test_immutable.py 228 + def _after_download(num_reads): + #print num_reads + self.failIf(num_reads > 41, num_reads) d.addCallback(_after_download) return d hunk ./src/allmydata/test/test_immutable.py 237 def test_download_from_only_3_shares_with_good_crypttext_hash(self): """ Test download after 7 random shares (of the 10) have had their crypttext hash tree corrupted.""" - d = defer.succeed(None) - def _then_corrupt_7(unused=None): - shnums = range(10) - random.shuffle(shnums) - for i in shnums[:7]: - self._corrupt_a_share(None, 
common._corrupt_offset_of_block_hashes_to_truncate_crypttext_hashes, i) - #before_download_reads = self._count_reads() - d.addCallback(_then_corrupt_7) + d = self.startup("download_from_only_3_shares_with_good_crypttext_hash") + def _corrupt_7(ign): + c = common._corrupt_offset_of_block_hashes_to_truncate_crypttext_hashes + self.corrupt_shares_numbered(self.uri, self._shuffled(7), c) + d.addCallback(_corrupt_7) d.addCallback(self._download_and_check_plaintext) return d hunk ./src/allmydata/test/test_immutable.py 248 def test_download_abort_if_too_many_missing_shares(self): """ Test that download gives up quickly when it realizes there aren't enough shares out there.""" - for i in range(8): - self._delete_a_share() - d = self.shouldFail(NotEnoughSharesError, "delete 8", None, - download_to_data, self.n) + d = self.startup("download_abort_if_too_many_missing_shares") + d.addCallback(lambda ign: + self.delete_shares_numbered(self.uri, range(8))) + d.addCallback(lambda ign: + self.shouldFail(NotEnoughSharesError, "delete 8", + "Last failure: None", + download_to_data, self.filenode)) # the new downloader pipelines a bunch of read requests in parallel, # so don't bother asserting anything about the number of reads return d hunk ./src/allmydata/test/test_immutable.py 264 enough uncorrupted shares out there. 
It should be able to tell because the corruption occurs in the sharedata version number, which it checks first.""" - d = defer.succeed(None) - def _then_corrupt_8(unused=None): - shnums = range(10) - random.shuffle(shnums) - for shnum in shnums[:8]: - self._corrupt_a_share(None, common._corrupt_sharedata_version_number, shnum) - d.addCallback(_then_corrupt_8) - - before_download_reads = self._count_reads() - def _attempt_to_download(unused=None): - d2 = download_to_data(self.n) + d = self.startup("download_abort_if_too_many_corrupted_shares") + def _corrupt_8(ign): + c = common._corrupt_sharedata_version_number + self.corrupt_shares_numbered(self.uri, self._shuffled(8), c) + d.addCallback(_corrupt_8) + def _try_download(ign): + start_reads = self._count_reads() + d2 = self.shouldFail(NotEnoughSharesError, "corrupt 8", + "LayoutInvalid", + download_to_data, self.filenode) + def _check_numreads(ign): + num_reads = self._count_reads() - start_reads + #print num_reads hunk ./src/allmydata/test/test_immutable.py 278 - def _callb(res): - self.fail("Should have gotten an error from attempt to download, not %r" % (res,)) - def _errb(f): - self.failUnless(f.check(NotEnoughSharesError)) - d2.addCallbacks(_callb, _errb) + # To pass this test, you are required to give up before + # reading all of the share data. Actually, we could give up + # sooner than 45 reads, but currently our download code does + # 45 reads. This test then serves as a "performance + # regression detector" -- if you change download code so that + # it takes *more* reads, then this test will fail. + self.failIf(num_reads > 45, num_reads) + d2.addCallback(_check_numreads) return d2 hunk ./src/allmydata/test/test_immutable.py 287 - - d.addCallback(_attempt_to_download) - - def _after_attempt(unused=None): - after_download_reads = self._count_reads() - #print before_download_reads, after_download_reads - # To pass this test, you are required to give up before reading - # all of the share data. 
Actually, we could give up sooner than - # 45 reads, but currently our download code does 45 reads. This - # test then serves as a "performance regression detector" -- if - # you change download code so that it takes *more* reads, then - # this test will fail. - self.failIf(after_download_reads-before_download_reads > 45, - (after_download_reads, before_download_reads)) - d.addCallback(_after_attempt) + d.addCallback(_try_download) return d } [remove now-unused ShareManglingMixin warner@lothar.com**20110615175045 Ignore-this: abf3f361b6789eca18522b72b0d53eb3 ] { hunk ./src/allmydata/test/common.py 17 DeepCheckResults, DeepCheckAndRepairResults from allmydata.mutable.common import CorruptShareError from allmydata.mutable.layout import unpack_header -from allmydata.storage.server import storage_index_to_dir from allmydata.storage.mutable import MutableShareFile from allmydata.util import hashutil, log, fileutil, pollmixin from allmydata.util.assertutil import precondition hunk ./src/allmydata/test/common.py 20 -from allmydata.util.consumer import download_to_data from allmydata.stats import StatsGathererService from allmydata.key_generator import KeyGeneratorService import allmydata.test.common_util as testutil hunk ./src/allmydata/test/common.py 918 TEST_DATA="\x02"*(immutable.upload.Uploader.URI_LIT_SIZE_THRESHOLD+1) -class ShareManglingMixin(SystemTestMixin): - - def setUp(self): - # Set self.basedir to a temp dir which has the name of the current - # test method in its name. - self.basedir = self.mktemp() - - d = defer.maybeDeferred(SystemTestMixin.setUp, self) - d.addCallback(lambda x: self.set_up_nodes()) - - def _upload_a_file(ignored): - cl0 = self.clients[0] - # We need multiple segments to test crypttext hash trees that are - # non-trivial (i.e. they have more than just one hash in them). 
- cl0.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12 - # Tests that need to test servers of happiness using this should - # set their own value for happy -- the default (7) breaks stuff. - cl0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1 - d2 = cl0.upload(immutable.upload.Data(TEST_DATA, convergence="")) - def _after_upload(u): - filecap = u.uri - self.n = self.clients[1].create_node_from_uri(filecap) - self.uri = uri.CHKFileURI.init_from_string(filecap) - return cl0.create_node_from_uri(filecap) - d2.addCallback(_after_upload) - return d2 - d.addCallback(_upload_a_file) - - def _stash_it(filenode): - self.filenode = filenode - d.addCallback(_stash_it) - return d - - def find_all_shares(self, unused=None): - """Locate shares on disk. Returns a dict that maps - (clientnum,sharenum) to a string that contains the share container - (copied directly from the disk, containing leases etc). You can - modify this dict and then call replace_shares() to modify the shares. - """ - shares = {} # k: (i, sharenum), v: data - - for i, c in enumerate(self.clients): - sharedir = c.getServiceNamed("storage").sharedir - for (dirp, dirns, fns) in os.walk(sharedir): - for fn in fns: - try: - sharenum = int(fn) - except TypeError: - # Whoops, I guess that's not a share file then. - pass - else: - data = open(os.path.join(sharedir, dirp, fn), "rb").read() - shares[(i, sharenum)] = data - - return shares - - def replace_shares(self, newshares, storage_index): - """Replace shares on disk. Takes a dictionary in the same form - as find_all_shares() returns.""" - - for i, c in enumerate(self.clients): - sharedir = c.getServiceNamed("storage").sharedir - for (dirp, dirns, fns) in os.walk(sharedir): - for fn in fns: - try: - sharenum = int(fn) - except TypeError: - # Whoops, I guess that's not a share file then. 
- pass - else: - pathtosharefile = os.path.join(sharedir, dirp, fn) - os.unlink(pathtosharefile) - for ((clientnum, sharenum), newdata) in newshares.iteritems(): - if clientnum == i: - fullsharedirp=os.path.join(sharedir, storage_index_to_dir(storage_index)) - fileutil.make_dirs(fullsharedirp) - wf = open(os.path.join(fullsharedirp, str(sharenum)), "wb") - wf.write(newdata) - wf.close() - - def _delete_a_share(self, unused=None, sharenum=None): - """ Delete one share. """ - - shares = self.find_all_shares() - ks = shares.keys() - if sharenum is not None: - k = [ key for key in shares.keys() if key[1] == sharenum ][0] - else: - k = random.choice(ks) - del shares[k] - self.replace_shares(shares, storage_index=self.uri.get_storage_index()) - - return unused - - def _corrupt_a_share(self, unused, corruptor_func, sharenum): - shares = self.find_all_shares() - ks = [ key for key in shares.keys() if key[1] == sharenum ] - assert ks, (shares.keys(), sharenum) - k = ks[0] - shares[k] = corruptor_func(shares[k]) - self.replace_shares(shares, storage_index=self.uri.get_storage_index()) - return corruptor_func - - def _corrupt_all_shares(self, unused, corruptor_func): - """ All shares on disk will be corrupted by corruptor_func. """ - shares = self.find_all_shares() - for k in shares.keys(): - self._corrupt_a_share(unused, corruptor_func, k[1]) - return corruptor_func - - def _corrupt_a_random_share(self, unused, corruptor_func): - """ Exactly one share on disk will be corrupted by corruptor_func. 
""" - shares = self.find_all_shares() - ks = shares.keys() - k = random.choice(ks) - self._corrupt_a_share(unused, corruptor_func, k[1]) - return k[1] - - def _count_reads(self): - sum_of_read_counts = 0 - for thisclient in self.clients: - counters = thisclient.stats_provider.get_stats()['counters'] - sum_of_read_counts += counters.get('storage_server.read', 0) - return sum_of_read_counts - - def _count_allocates(self): - sum_of_allocate_counts = 0 - for thisclient in self.clients: - counters = thisclient.stats_provider.get_stats()['counters'] - sum_of_allocate_counts += counters.get('storage_server.allocate', 0) - return sum_of_allocate_counts - - def _count_writes(self): - sum_of_write_counts = 0 - for thisclient in self.clients: - counters = thisclient.stats_provider.get_stats()['counters'] - sum_of_write_counts += counters.get('storage_server.write', 0) - return sum_of_write_counts - - def _download_and_check_plaintext(self, unused=None): - d = download_to_data(self.n) - def _after_download(result): - self.failUnlessEqual(result, TEST_DATA) - d.addCallback(_after_download) - return d - class ShouldFailMixin: def shouldFail(self, expected_failure, which, substring, callable, *args, **kwargs): } [remove get_serverid from DownloadStatus.add_dyhb_sent and customers warner@lothar.com**20110615175104 Ignore-this: 6f4776aec7152bd46c93d21a2b1e81a ] { hunk ./src/allmydata/immutable/downloader/finder.py 138 lp = self.log(format="sending DYHB to [%(name)s]", name=server.get_name(), level=log.NOISY, umid="Io7pyg") time_sent = now() - d_ev = self._download_status.add_dyhb_sent(server.get_serverid(), - time_sent) + d_ev = self._download_status.add_dyhb_sent(server, time_sent) # TODO: get the timer from a Server object, it knows best self.overdue_timers[req] = reactor.callLater(self.OVERDUE_TIMEOUT, self.overdue, req) hunk ./src/allmydata/immutable/downloader/status.py 43 self.helper = False self.started = None # self.dyhb_requests tracks "do you have a share" requests and - 
# responses. It maps serverid to a tuple of: + # responses. It maps an IServer instance to a tuple of: # send time # tuple of response shnums (None if response hasn't arrived, "error") # response time (None if response hasn't arrived yet) hunk ./src/allmydata/immutable/downloader/status.py 81 self.problems = [] - def add_dyhb_sent(self, serverid, when): + def add_dyhb_sent(self, server, when): r = (when, None, None) hunk ./src/allmydata/immutable/downloader/status.py 83 - if serverid not in self.dyhb_requests: - self.dyhb_requests[serverid] = [] - self.dyhb_requests[serverid].append(r) - tag = (serverid, len(self.dyhb_requests[serverid])-1) + if server not in self.dyhb_requests: + self.dyhb_requests[server] = [] + self.dyhb_requests[server].append(r) + tag = (server, len(self.dyhb_requests[server])-1) return DYHBEvent(self, tag) def add_dyhb_finished(self, tag, shnums, when): hunk ./src/allmydata/immutable/downloader/status.py 91 # received="error" on error, else tuple(shnums) - (serverid, index) = tag - r = self.dyhb_requests[serverid][index] + (server, index) = tag + r = self.dyhb_requests[server][index] (sent, _, _) = r r = (sent, shnums, when) hunk ./src/allmydata/immutable/downloader/status.py 95 - self.dyhb_requests[serverid][index] = r + self.dyhb_requests[server][index] = r def add_request_sent(self, serverid, shnum, start, length, when): r = (shnum, start, length, when, None, None) hunk ./src/allmydata/test/test_web.py 77 def get_helper_info(self): return (None, False) +class FakeIServer: + def get_name(self): return "short" + def get_longname(self): return "long" + def get_serverid(self): return "binary-serverid" + def build_one_ds(): ds = DownloadStatus("storage_index", 1234) now = time.time() hunk ./src/allmydata/test/test_web.py 86 + serverA = FakeIServer() + serverB = FakeIServer() ds.add_segment_request(0, now) # segnum, when, start,len, decodetime ds.add_segment_delivery(0, now+1, 0, 100, 0.5) hunk ./src/allmydata/test/test_web.py 101 
ds.add_segment_request(4, now) ds.add_segment_delivery(4, now, 0, 140, 0.5) - e = ds.add_dyhb_sent("serverid_a", now) + e = ds.add_dyhb_sent(serverA, now) e.finished([1,2], now+1) hunk ./src/allmydata/test/test_web.py 103 - e = ds.add_dyhb_sent("serverid_b", now+2) # left unfinished + e = ds.add_dyhb_sent(serverB, now+2) # left unfinished e = ds.add_read_event(0, 120, now) e.update(60, 0.5, 0.1) # bytes, decrypttime, pausetime hunk ./src/allmydata/web/status.py 367 req.setHeader("content-type", "text/plain") data = {} dyhb_events = [] - for serverid,requests in self.download_status.dyhb_requests.iteritems(): + for server,requests in self.download_status.dyhb_requests.iteritems(): for req in requests: hunk ./src/allmydata/web/status.py 369 - dyhb_events.append( (base32.b2a(serverid),) + req ) + dyhb_events.append( (server.get_longname(),) + req ) dyhb_events.sort(key=lambda req: req[1]) data["dyhb"] = dyhb_events request_events = [] hunk ./src/allmydata/web/status.py 392 t[T.tr[T.th["serverid"], T.th["sent"], T.th["received"], T.th["shnums"], T.th["RTT"]]] dyhb_events = [] - for serverid,requests in self.download_status.dyhb_requests.iteritems(): + for server,requests in self.download_status.dyhb_requests.iteritems(): for req in requests: hunk ./src/allmydata/web/status.py 394 - dyhb_events.append( (serverid,) + req ) + dyhb_events.append( (server,) + req ) dyhb_events.sort(key=lambda req: req[1]) for d_ev in dyhb_events: hunk ./src/allmydata/web/status.py 397 - (serverid, sent, shnums, received) = d_ev - serverid_s = idlib.shortnodeid_b2a(serverid) + (server, sent, shnums, received) = d_ev rtt = None if received is not None: rtt = received - sent hunk ./src/allmydata/web/status.py 403 if not shnums: shnums = ["-"] - t[T.tr(style="background: %s" % self.color(serverid))[ - [T.td[serverid_s], T.td[srt(sent)], T.td[srt(received)], + color = self.color(server.get_serverid()) + t[T.tr(style="background: %s" % color)[ + [T.td[server.get_name()], T.td[srt(sent)], 
T.td[srt(received)], T.td[",".join([str(shnum) for shnum in shnums])], T.td[self.render_time(None, rtt)], ]]] } [remove get_serverid from DownloadStatus.add_request_sent and customers warner@lothar.com**20110615175127 Ignore-this: 92a77e724f17bc450aca7944276f4fc9 ] { hunk ./src/allmydata/immutable/downloader/share.py 729 share=repr(self), start=start, length=length, level=log.NOISY, parent=self._lp, umid="sgVAyA") - req_ev = ds.add_request_sent(self._server.get_serverid(), - self._shnum, + req_ev = ds.add_request_sent(self._server, self._shnum, start, length, now()) d = self._send_request(start, length) d.addCallback(self._got_data, start, length, req_ev, lp) hunk ./src/allmydata/immutable/downloader/status.py 50 self.dyhb_requests = {} # self.requests tracks share-data requests and responses. It maps - # serverid to a tuple of: + # IServer instance to a tuple of: # shnum, # start,length, (of data requested) # send time hunk ./src/allmydata/immutable/downloader/status.py 97 r = (sent, shnums, when) self.dyhb_requests[server][index] = r - def add_request_sent(self, serverid, shnum, start, length, when): + def add_request_sent(self, server, shnum, start, length, when): r = (shnum, start, length, when, None, None) hunk ./src/allmydata/immutable/downloader/status.py 99 - if serverid not in self.requests: - self.requests[serverid] = [] - self.requests[serverid].append(r) - tag = (serverid, len(self.requests[serverid])-1) + if server not in self.requests: + self.requests[server] = [] + self.requests[server].append(r) + tag = (server, len(self.requests[server])-1) return RequestEvent(self, tag) def add_request_finished(self, tag, received, when): hunk ./src/allmydata/immutable/downloader/status.py 107 # received="error" on error, else len(data) - (serverid, index) = tag - r = self.requests[serverid][index] + (server, index) = tag + r = self.requests[server][index] (shnum, start, length, sent, _, _) = r r = (shnum, start, length, sent, received, when) hunk 
./src/allmydata/immutable/downloader/status.py 111 - self.requests[serverid][index] = r + self.requests[server][index] = r def add_segment_request(self, segnum, when): if self.started is None: hunk ./src/allmydata/test/test_web.py 110 e.finished(now+1) e = ds.add_read_event(120, 30, now+2) # left unfinished - e = ds.add_request_sent("serverid_a", 1, 100, 20, now) + e = ds.add_request_sent(serverA, 1, 100, 20, now) e.finished(20, now+1) hunk ./src/allmydata/test/test_web.py 112 - e = ds.add_request_sent("serverid_a", 1, 120, 30, now+1) # left unfinished + e = ds.add_request_sent(serverA, 1, 120, 30, now+1) # left unfinished # make sure that add_read_event() can come first too ds1 = DownloadStatus("storage_index", 1234) hunk ./src/allmydata/web/status.py 373 dyhb_events.sort(key=lambda req: req[1]) data["dyhb"] = dyhb_events request_events = [] - for serverid,requests in self.download_status.requests.iteritems(): + for server,requests in self.download_status.requests.iteritems(): for req in requests: hunk ./src/allmydata/web/status.py 375 - request_events.append( (base32.b2a(serverid),) + req ) + request_events.append( (server.get_longname(),) + req ) request_events.sort(key=lambda req: (req[4],req[1])) data["requests"] = request_events data["segment"] = self.download_status.segment_events hunk ./src/allmydata/web/status.py 466 T.td[segtime], T.td[speed]]] elif etype == "error": t[T.tr[T.td["error"], T.td["seg%d" % segnum]]] - + l[T.h2["Segment Events:"], t] l[T.br(clear="all")] hunk ./src/allmydata/web/status.py 475 T.th["txtime"], T.th["rxtime"], T.th["received"], T.th["RTT"]]] reqtime = (None, None) request_events = [] - for serverid,requests in self.download_status.requests.iteritems(): + for server,requests in self.download_status.requests.iteritems(): for req in requests: hunk ./src/allmydata/web/status.py 477 - request_events.append( (serverid,) + req ) + request_events.append( (server,) + req ) request_events.sort(key=lambda req: (req[4],req[1])) for r_ev in 
request_events: hunk ./src/allmydata/web/status.py 480 - (peerid, shnum, start, length, sent, receivedlen, received) = r_ev + (server, shnum, start, length, sent, receivedlen, received) = r_ev rtt = None if received is not None: rtt = received - sent hunk ./src/allmydata/web/status.py 484 - peerid_s = idlib.shortnodeid_b2a(peerid) - t[T.tr(style="background: %s" % self.color(peerid))[ - T.td[peerid_s], T.td[shnum], + color = self.color(server.get_serverid()) + t[T.tr(style="background: %s" % color)[ + T.td[server.get_name()], T.td[shnum], T.td["[%d:+%d]" % (start, length)], T.td[srt(sent)], T.td[srt(received)], T.td[receivedlen], T.td[self.render_time(None, rtt)], hunk ./src/allmydata/web/status.py 491 ]] - + l[T.h2["Requests:"], t] l[T.br(clear="all")] } [web/status.py: remove spurious whitespace, no code changes warner@lothar.com**20110615175157 Ignore-this: b59b929dea09be4a5e48b8e15c2d44a ] { hunk ./src/allmydata/web/status.py 387 return srt = self.short_relative_time l = T.div() - + t = T.table(align="left", class_="status-download-events") t[T.tr[T.th["serverid"], T.th["sent"], T.th["received"], T.th["shnums"], T.th["RTT"]]] hunk ./src/allmydata/web/status.py 409 T.td[",".join([str(shnum) for shnum in shnums])], T.td[self.render_time(None, rtt)], ]]] - l[T.h2["DYHB Requests:"], t] l[T.br(clear="all")] hunk ./src/allmydata/web/status.py 411 - + t = T.table(align="left",class_="status-download-events") t[T.tr[T.th["range"], T.th["start"], T.th["finish"], T.th["got"], T.th["time"], T.th["decrypttime"], T.th["pausedtime"], hunk ./src/allmydata/web/status.py 431 T.td[bytes], T.td[rtt], T.td[decrypt], T.td[paused], T.td[speed], ]] - l[T.h2["Read Events:"], t] l[T.br(clear="all")] hunk ./src/allmydata/web/status.py 433 - + t = T.table(align="left",class_="status-download-events") t[T.tr[T.th["type"], T.th["segnum"], T.th["when"], T.th["range"], T.th["decodetime"], T.th["segtime"], T.th["speed"]]] hunk ./src/allmydata/web/status.py 448 T.td["-"], T.td["-"], 
T.td["-"]]] - reqtime = (segnum, when) elif etype == "delivery": if reqtime[0] == segnum: } [DownloadStatus.add_known_share wants to be used by Finder, web.status warner@lothar.com**20110615175222 Ignore-this: 10bc40413e7a4980a96e16a55a84800f ] hunk ./src/allmydata/immutable/downloader/status.py 146 r = (start, length, requesttime, finishtime, bytes, decrypt, paused) self.read_events[tag] = r - def add_known_share(self, serverid, shnum): - self.known_shares.append( (serverid, shnum) ) + def add_known_share(self, server, shnum): # XXX use me + self.known_shares.append( (server, shnum) ) def add_problem(self, p): self.problems.append(p) [remove nodeid from WriteBucketProxy classes and customers warner@lothar.com**20110615175245 Ignore-this: f239f65c4d8b99d555de16c10db71fec ] { hunk ./src/allmydata/immutable/downloader/share.py 125 # use the upload-side code to get this as accurate as possible ht = IncompleteHashTree(N) num_share_hashes = len(ht.needed_hashes(0, include_leaf=True)) - wbp = make_write_bucket_proxy(None, share_size, r["block_size"], - r["num_segments"], num_share_hashes, 0, - None) + wbp = make_write_bucket_proxy(None, None, share_size, r["block_size"], + r["num_segments"], num_share_hashes, 0) self._fieldsize = wbp.fieldsize self._fieldstruct = wbp.fieldstruct self.guessed_offsets = wbp._offsets hunk ./src/allmydata/immutable/layout.py 79 FORCE_V2 = False # set briefly by unit tests to make small-sized V2 shares -def make_write_bucket_proxy(rref, data_size, block_size, num_segments, - num_share_hashes, uri_extension_size_max, nodeid): +def make_write_bucket_proxy(rref, server, + data_size, block_size, num_segments, + num_share_hashes, uri_extension_size_max): # Use layout v1 for small files, so they'll be readable by older versions # (" % nodeid_s + return "" % self._server.get_name() def put_header(self): return self._write(0, self._offset_data) hunk ./src/allmydata/immutable/layout.py 248 return self._rref.callRemoteOnly("abort") + def 
get_servername(self): + return self._server.get_name() def get_peerid(self): hunk ./src/allmydata/immutable/layout.py 251 - if self._nodeid: - return self._nodeid - return None + return self._server.get_serverid() class WriteBucketProxy_v2(WriteBucketProxy): fieldsize = 8 hunk ./src/allmydata/immutable/upload.py 80 self.buckets = {} # k: shareid, v: IRemoteBucketWriter self.sharesize = sharesize - wbp = layout.make_write_bucket_proxy(None, sharesize, + wbp = layout.make_write_bucket_proxy(None, None, sharesize, blocksize, num_segments, num_share_hashes, hunk ./src/allmydata/immutable/upload.py 83 - EXTENSION_SIZE, server.get_serverid()) + EXTENSION_SIZE) self.wbp_class = wbp.__class__ # to create more of them self.allocated_size = wbp.get_allocated_size() self.blocksize = blocksize hunk ./src/allmydata/immutable/upload.py 123 #log.msg("%s._got_reply(%s)" % (self, (alreadygot, buckets))) b = {} for sharenum, rref in buckets.iteritems(): - bp = self.wbp_class(rref, self.sharesize, + bp = self.wbp_class(rref, self._server, self.sharesize, self.blocksize, self.num_segments, self.num_share_hashes, hunk ./src/allmydata/immutable/upload.py 127 - EXTENSION_SIZE, - self._server.get_serverid()) + EXTENSION_SIZE) b[sharenum] = bp self.buckets.update(b) return (alreadygot, set(b.keys())) hunk ./src/allmydata/immutable/upload.py 151 def str_shareloc(shnum, bucketwriter): - return "%s: %s" % (shnum, idlib.shortnodeid_b2a(bucketwriter._nodeid),) + return "%s: %s" % (shnum, bucketwriter.get_servername(),) class Tahoe2ServerSelector(log.PrefixingLogMixin): hunk ./src/allmydata/immutable/upload.py 207 num_share_hashes = len(ht.needed_hashes(0, include_leaf=True)) # figure out how much space to ask for - wbp = layout.make_write_bucket_proxy(None, share_size, 0, num_segments, - num_share_hashes, EXTENSION_SIZE, - None) + wbp = layout.make_write_bucket_proxy(None, None, + share_size, 0, num_segments, + num_share_hashes, EXTENSION_SIZE) allocated_size = wbp.get_allocated_size() 
all_servers = storage_broker.get_servers_for_psi(storage_index) if not all_servers: hunk ./src/allmydata/test/test_storage.py 138 def test_create(self): bw, rb, sharefname = self.make_bucket("test_create", 500) - bp = WriteBucketProxy(rb, + bp = WriteBucketProxy(rb, None, data_size=300, block_size=10, num_segments=5, hunk ./src/allmydata/test/test_storage.py 143 num_share_hashes=3, - uri_extension_size_max=500, nodeid=None) + uri_extension_size_max=500) self.failUnless(interfaces.IStorageBucketWriter.providedBy(bp), bp) def _do_test_readwrite(self, name, header_size, wbp_class, rbp_class): hunk ./src/allmydata/test/test_storage.py 169 uri_extension = "s" + "E"*498 + "e" bw, rb, sharefname = self.make_bucket(name, sharesize) - bp = wbp_class(rb, + bp = wbp_class(rb, None, data_size=95, block_size=25, num_segments=4, hunk ./src/allmydata/test/test_storage.py 174 num_share_hashes=3, - uri_extension_size_max=len(uri_extension), - nodeid=None) + uri_extension_size_max=len(uri_extension)) d = bp.put_header() d.addCallback(lambda res: bp.put_block(0, "a"*25)) } [remove get_serverid() from ReadBucketProxy and customers, including Checker warner@lothar.com**20110615175303 Ignore-this: b90a83b7fd88c897c1f9920ff2f65246 and debug.py dump-share commands ] { hunk ./src/allmydata/immutable/checker.py 500 rref = s.get_rref() lease_seed = s.get_lease_seed() - serverid = s.get_serverid() if self._add_lease: renew_secret = self._get_renewal_secret(lease_seed) cancel_secret = self._get_cancel_secret(lease_seed) hunk ./src/allmydata/immutable/checker.py 509 d = rref.callRemote("get_buckets", storageindex) def _wrap_results(res): - return (res, serverid, True) + return (res, True) def _trap_errs(f): level = log.WEIRD hunk ./src/allmydata/immutable/checker.py 518 self.log("failure from server on 'get_buckets' the REMOTE failure was:", facility="tahoe.immutable.checker", failure=f, level=level, umid="AX7wZQ") - return ({}, serverid, False) + return ({}, False) 
d.addCallbacks(_wrap_results, _trap_errs) return d hunk ./src/allmydata/immutable/checker.py 557 level=log.WEIRD, umid="hEGuQg") - def _download_and_verify(self, serverid, sharenum, bucket): + def _download_and_verify(self, server, sharenum, bucket): """Start an attempt to download and verify every block in this bucket and return a deferred that will eventually fire once the attempt completes. hunk ./src/allmydata/immutable/checker.py 577 results.""" vcap = self._verifycap - b = layout.ReadBucketProxy(bucket, serverid, vcap.get_storage_index()) + b = layout.ReadBucketProxy(bucket, server, vcap.get_storage_index()) veup = ValidatedExtendedURIProxy(b, vcap) d = veup.start() hunk ./src/allmydata/immutable/checker.py 660 def _verify_server_shares(self, s): """ Return a deferred which eventually fires with a tuple of - (set(sharenum), serverid, set(corruptsharenum), + (set(sharenum), server, set(corruptsharenum), set(incompatiblesharenum), success) showing all the shares verified to be served by this server, and all the corrupt shares served by the server, and all the incompatible shares served by the server. 
In case hunk ./src/allmydata/immutable/checker.py 684 d = self._get_buckets(s, self._verifycap.get_storage_index()) def _got_buckets(result): - bucketdict, serverid, success = result + bucketdict, success = result shareverds = [] for (sharenum, bucket) in bucketdict.items(): hunk ./src/allmydata/immutable/checker.py 688 - d = self._download_and_verify(serverid, sharenum, bucket) + d = self._download_and_verify(s, sharenum, bucket) shareverds.append(d) dl = deferredutil.gatherResults(shareverds) hunk ./src/allmydata/immutable/checker.py 705 corrupt.add(sharenum) elif whynot == 'incompatible': incompatible.add(sharenum) - return (verified, serverid, corrupt, incompatible, success) + return (verified, s, corrupt, incompatible, success) dl.addCallback(collect) return dl hunk ./src/allmydata/immutable/checker.py 712 def _err(f): f.trap(RemoteException, DeadReferenceError) - return (set(), s.get_serverid(), set(), set(), False) + return (set(), s, set(), set(), False) d.addCallbacks(_got_buckets, _err) return d hunk ./src/allmydata/immutable/checker.py 719 def _check_server_shares(self, s): """Return a deferred which eventually fires with a tuple of - (set(sharenum), serverid, set(), set(), responded) showing all the + (set(sharenum), server, set(), set(), responded) showing all the shares claimed to be served by this server. 
In case the server is hunk ./src/allmydata/immutable/checker.py 721 - disconnected then it fires with (set() serverid, set(), set(), False) + disconnected then it fires with (set(), server, set(), set(), False) (a server disconnecting when we ask it for buckets is the same, for our purposes, as a server that says it has none, except that we want to track and report whether or not each server responded.)""" hunk ./src/allmydata/immutable/checker.py 726 def _curry_empty_corrupted(res): - buckets, serverid, responded = res - return (set(buckets), serverid, set(), set(), responded) + buckets, responded = res + return (set(buckets), s, set(), set(), responded) d = self._get_buckets(s, self._verifycap.get_storage_index()) d.addCallback(_curry_empty_corrupted) return d hunk ./src/allmydata/immutable/checker.py 743 corruptsharelocators = [] # (serverid, storageindex, sharenum) incompatiblesharelocators = [] # (serverid, storageindex, sharenum) - for theseverifiedshares, thisserverid, thesecorruptshares, theseincompatibleshares, thisresponded in results: + for theseverifiedshares, thisserver, thesecorruptshares, theseincompatibleshares, thisresponded in results: + thisserverid = thisserver.get_serverid() servers.setdefault(thisserverid, set()).update(theseverifiedshares) for sharenum in theseverifiedshares: verifiedshares.setdefault(sharenum, set()).add(thisserverid) hunk ./src/allmydata/immutable/layout.py 6 from twisted.internet import defer from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \ FileTooLargeError, HASH_SIZE -from allmydata.util import mathutil, idlib, observer, pipeline +from allmydata.util import mathutil, observer, pipeline from allmydata.util.assertutil import precondition from allmydata.storage.server import si_b2a hunk ./src/allmydata/immutable/layout.py 297 MAX_UEB_SIZE = 2000 # actual size is closer to 419, but varies by a few bytes - def __init__(self, rref, peerid, storage_index): + def __init__(self, rref, server, 
storage_index): self._rref = rref hunk ./src/allmydata/immutable/layout.py 299 - self._peerid = peerid - peer_id_s = idlib.shortnodeid_b2a(peerid) - storage_index_s = si_b2a(storage_index) - self._reprstr = "" % (id(self), peer_id_s, storage_index_s) + self._server = server + self._storage_index = storage_index self._started = False # sent request to server self._ready = observer.OneShotObserverList() # got response from server hunk ./src/allmydata/immutable/layout.py 305 def get_peerid(self): - return self._peerid + return self._server.get_serverid() def __repr__(self): hunk ./src/allmydata/immutable/layout.py 308 - return self._reprstr + return "" % \ + (id(self), self._server.get_name(), si_b2a(self._storage_index)) def _start_if_needed(self): """ Returns a deferred that will be fired when I'm ready to return hunk ./src/allmydata/immutable/offloaded.py 88 self.log("no readers, so no UEB", level=log.NOISY) return b,server = self._readers.pop() - rbp = ReadBucketProxy(b, server.get_serverid(), si_b2a(self._storage_index)) + rbp = ReadBucketProxy(b, server, si_b2a(self._storage_index)) d = rbp.get_uri_extension() d.addCallback(self._got_uri_extension) d.addErrback(self._ueb_error) hunk ./src/allmydata/scripts/debug.py 71 from allmydata.util.encodingutil import quote_output, to_str # use a ReadBucketProxy to parse the bucket and find the uri extension - bp = ReadBucketProxy(None, '', '') + bp = ReadBucketProxy(None, None, '') offsets = bp._parse_offsets(f.read_share_data(0, 0x44)) print >>out, "%20s: %d" % ("version", bp._version) seek = offsets['uri_extension'] hunk ./src/allmydata/scripts/debug.py 613 class ImmediateReadBucketProxy(ReadBucketProxy): def __init__(self, sf): self.sf = sf - ReadBucketProxy.__init__(self, "", "", "") + ReadBucketProxy.__init__(self, None, None, "") def __repr__(self): return "" def _read(self, offset, size): hunk ./src/allmydata/scripts/debug.py 771 else: # otherwise assume it's immutable f = ShareFile(fn) - bp = ReadBucketProxy(None, 
'', '') + bp = ReadBucketProxy(None, None, '') offsets = bp._parse_offsets(f.read_share_data(0, 0x24)) start = f._data_offset + offsets["data"] end = f._data_offset + offsets["plaintext_hash_tree"] hunk ./src/allmydata/test/test_storage.py 26 from allmydata.interfaces import BadWriteEnablerError from allmydata.test.common import LoggingServiceParent from allmydata.test.common_web import WebRenderingMixin +from allmydata.test.no_network import NoNetworkServer from allmydata.web.storage import StorageStatus, remove_prefix class Marker: hunk ./src/allmydata/test/test_storage.py 193 br = BucketReader(self, sharefname) rb = RemoteBucket() rb.target = br - rbp = rbp_class(rb, peerid="abc", storage_index="") + server = NoNetworkServer("abc", None) + rbp = rbp_class(rb, server, storage_index="") self.failUnlessIn("to peer", repr(rbp)) self.failUnless(interfaces.IStorageBucketReader.providedBy(rbp), rbp) } Context: [Rename test_package_initialization.py to (much shorter) test_import.py . Brian Warner **20110611190234 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822 The former name was making my 'ls' listings hard to read, by forcing them down to just two columns. ] [tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430] zooko@zooko.com**20110611163741 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20. fixes #1412 ] [wui: right-align the size column in the WUI zooko@zooko.com**20110611153758 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell. 
 fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py: get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py: modified the return type of RIStatsProvider.get_stats to
 allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py: test_latencies now expects None in output categories
 that contain too few samples for the associated percentile to be
 unambiguously reported.
 fixes #1392
]
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused
 AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to
 quickstart.rst.
 Fix some broken internal references within
 docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner **20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences
 of this bug. It was probably benign and really hard to detect. I think it
 would cause us to incorrectly believe that we're pulling too many shares
 from a server, and thus prefer a different server rather than asking for
 a second share from the first server. The diversity code is intended to
 spread out the number of shares simultaneously being requested from each
 server, but with this bug, it might be spreading out the total number of
 shares requested at all, not just simultaneously. (note that
 SegmentFetcher is scoped to a single segment, so the effect doesn't last
 very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 test_download.py: create+check MyShare instances better, make sure they
 share Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
]
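The diversity bug described in the fetcher.py patch above can be sketched in miniature. This is an illustrative model, not the actual allmydata code: the class and method names below (`request_sent`, `block_terminated`, `load`) are simplified stand-ins, and only the `_shares_from_server` bookkeeping mirrors the real module. The point is that the per-server request map must be indexed by serverid, not by shnum, when a block terminates:

```python
# Hypothetical sketch of SegmentFetcher's per-server bookkeeping
# (simplified; not the real allmydata.immutable.downloader.fetcher).

class SegmentFetcher:
    def __init__(self):
        # serverid -> set of shnums with requests currently outstanding
        self._shares_from_server = {}

    def request_sent(self, serverid, shnum):
        # record that we have asked this server for this share
        self._shares_from_server.setdefault(serverid, set()).add(shnum)

    def block_terminated(self, serverid, shnum):
        # The bug: popping by shnum, e.g.
        #   self._shares_from_server.pop(shnum, None)
        # The fix: index by serverid, and drop the entry only when no
        # shares remain outstanding for that server.
        outstanding = self._shares_from_server.get(serverid)
        if outstanding is not None:
            outstanding.discard(shnum)
            if not outstanding:
                del self._shares_from_server[serverid]

    def load(self, serverid):
        # diversity heuristic: prefer servers with fewer simultaneous
        # outstanding requests
        return len(self._shares_from_server.get(serverid, ()))
```

With the buggy shnum-indexed pop, `load()` would keep overcounting a server's simultaneous requests after its blocks completed, steering the fetcher away from re-using that server, which matches the "spreading out the total number of shares requested, not just simultaneously" effect described above.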
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 No behavioral changes, just updating variable/method names and log
 messages. The effects outside these three files should be minimal: some
 exception messages changed (to say "server" instead of "peer"), and some
 internal class names were changed. A few things still use "peer" to
 minimize external changes, like UploadResults.timings["peer_selection"]
 and happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time.
 It seems more likely that it had gotten hung. But if we raise the timeout
 to an even more extravagant number then we can be even more certain that
 the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner **20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner "**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner **20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]

Patch bundle hash:
2494631bc26ff8c52b1d2a57bfb49f0079da50e7
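The StorageFarmBroker refactor in the context above, combined with the first patch in this bundle (`get_known_servers()` returns a frozenset, callers sort), can be sketched as a standalone model. This is a simplified stand-in, not the real allmydata.storage_client module: the `Server` stub and `sorted_servers` helper are hypothetical, and only the broker method bodies follow the hunks shown in the patch:

```python
# Simplified model of the refactored StorageFarmBroker API: IServer-like
# objects are passed around instead of (peerid, rref) tuples, the broker
# hands out unordered frozensets, and callers that need a stable order
# (e.g. web/root.py's data_services) sort by serverid themselves.

class Server:
    """Hypothetical stand-in for an IServer instance."""
    def __init__(self, serverid, rref=None):
        self._serverid = serverid
        self._rref = rref  # remote reference; None while disconnected

    def get_serverid(self):
        return self._serverid

    def get_rref(self):
        return self._rref

class StorageFarmBroker:
    def __init__(self):
        self.servers = {}  # serverid -> Server

    def add_server(self, server):
        self.servers[server.get_serverid()] = server

    def get_all_serverids(self):
        return frozenset(self.servers.keys())

    def get_known_servers(self):
        # unordered: callers sort if they need a stable order
        return frozenset(self.servers.values())

    def get_connected_servers(self):
        return frozenset(s for s in self.servers.values() if s.get_rref())

def sorted_servers(broker):
    # caller-side sort, as in web/root.py's data_services()
    return sorted(broker.get_known_servers(), key=lambda s: s.get_serverid())
```

Returning a frozenset makes the "no inherent order" contract explicit at the API boundary, so each caller decides its own ordering instead of the broker imposing one.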