support multiple storage backends, including amazon s3 #999
The focus of this ticket is (now) adapting the existing codebase to use multiple backends, rather than supporting any particular backend.
We already have one backend -- the filesystem backend -- which I think should be a plugin in the same sense that the others will be plugins (i.e.: other code in tahoe-lafs can interact with a filesystem plugin without caring very much about how or where it is storing its files -- otherwise it doesn't seem very extensible). If you accept this, then we'd need to figure out what a backend plugin should look like.
There is backend-independent logic in the current server implementation that we wouldn't want to duplicate in every other backend implementation. To address this, we could start by refactoring the existing code that reads or writes shares on disk, to use a local backend implementation supporting an IStorageProvider interface (probably a fairly simplistic filesystem-ish API).
(This involves changing the code in source:src/allmydata/storage/server.py that reads from local disk in its _iter_share_files() method (source:src/allmydata/storage/server.py@4164#L359), and also changing storage/shares.py (source:src/allmydata/storage/shares.py@3762), storage/immutable.py (source:src/allmydata/storage/immutable.py@3871#L39), and storage/mutable.py (source:src/allmydata/storage/mutable.py@3815#L34), which write shares to local disk.)
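As a rough illustration of what such a filesystem-ish interface could look like, here is a minimal sketch for discussion only; every method name below is an assumption, not the interface the patches eventually settled on:

    # Hypothetical sketch of a simplistic, filesystem-ish IStorageProvider.
    # Tahoe-LAFS already depends on zope.interface, so the sketch uses it too;
    # the method names are assumptions for discussion, not settled API.
    from zope.interface import Interface

    class IStorageProvider(Interface):
        """Hides where shares actually live (local disk, S3, ...) from the
        backend-agnostic storage server."""

        def enumerate_shares(storage_index):
            """Return an iterable of share numbers held for this storage index."""

        def get_share(storage_index, shnum):
            """Return an object from which the given share's bytes can be read."""

        def put_share(storage_index, shnum, data):
            """Write (or replace) the bytes of the given share."""

        def get_available_space():
            """Return remaining space in bytes, or None if unlimited/unknown."""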
At this point all the existing tests should still pass, since we haven't actually changed the behaviour.
Then we have to add the ability to configure new storage providers. This involves figuring out how to map user configuration choices to what actually happens when a node is started, and how the credentials needed to log into a particular storage backend should be specified. The skeletal RIStorageServer would instantiate its IStorageProvider based on what the user configured, and use it to write/read data, get statistics, and so on.
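For instance, the configuration step might boil down to a small factory that maps the configured backend name to a provider object at node startup. This is a self-contained sketch, not Tahoe code; the section and option names and the DiskProvider class are assumptions:

    # Sketch only (not Tahoe code): a hypothetical mapping from the user's
    # configuration to a backend provider object at node startup.
    from configparser import ConfigParser

    class DiskProvider:
        """Placeholder for the local-filesystem backend."""
        def __init__(self, sharedir):
            self.sharedir = sharedir

    def create_provider(cfg: ConfigParser):
        kind = cfg.get("storage", "backend", fallback="disk")
        if kind == "disk":
            return DiskProvider(cfg.get("storage", "sharedir", fallback="storage/shares"))
        # other kinds ("s3", ...) would be looked up here, along with whatever
        # credentials they need (more likely read from separate private files
        # than from the main config file)
        raise ValueError("unknown storage backend: %r" % (kind,))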
Naturally, all of this would require a decent amount of documentation and testing, too.
Once we have all of this worked out, the rest of this project (probably to be handled in other tickets) would be identifying what other backends we'd want in tahoe-lafs, then documenting, implementing, and testing them. We already have Amazon S3 and Rackspace as targets -- users of tahoe-lafs will probably have their own suggestions, and more backends will come up with more research.
See the RAIC diagram.
(this is an email I sent to zooko a while ago with my thoughts on how this should be implemented; a sketch of the server/provider split it proposes follows after the email:)
First, I'll summarize, to make sure that I understand what you had in
mind. Please correct me if you disagree with any of this.
The "redundant array of inexpensive clouds" idea means extending the
current storage server in tahoe-lafs to support storage backends that
aren't what we have now (writing shares to the local filesystem). Well
actually, the redundant array of inexpensive clouds idea means doing
that, then implementing plugins for popular existing cloud storage
services -- Amazon S3 and Rackspace are two that you've mentioned, but
there are probably others (if we end up going through with this, I'll
probably email tahoe-dev so I can get an idea of what else is out
there/what else people want to see supported, in addition to my own
research).
The benefit (or at least the benefit that seems clear to me from your
explanation -- perhaps there are others that are more obvious if you run
a big tahoe-lafs installation like allmydata.com, or if you're more
familiar with tahoe-lafs than I am) is decoupling the ability of a
tahoe-lafs node to store files from its physical filesystem. So if, say,
allmydata.com were to start running tahoe-lafs nodes using S3 as a
backend, and their grid was filled, they could create more space on the
grid by buying more S3 buckets, rather than upgrading physical servers
or adding new servers (I've never used S3, but I would bet that it is
easier to buy more S3 buckets than to upgrade servers). Or, if you
wanted to create a grid without purchasing a bunch of servers, you could
run a bunch of nodes on one machine (I was thinking vmware images, but
then I started wondering whether it was even necessary to have that
level of separation between tahoe-lafs nodes -- is it? but that's not
really on topic), each mapping to a different S3 bucket or buckets.
Am I missing anything (aside from more examples)?
It seems like -- at least for S3 -- you could already sort of do this.
There are projects like s3fs, which provide a FUSE interface to an
S3 bucket (though the last file for it is more than a year old; it
seems like there should be other projects like that, too) (edit: this is actually wrong -- I just hadn't found the Google Code project, which is at http://code.google.com/p/s3fs/). Using
that, you could mount your S3 bucket somewhere in the filesystem of your
server, then kajigger the basedir of the tahoe-lafs node so that it
rests in that area of the filesystem, or otherwise configure the
tahoe-lafs node to save files there. This requires more work than what
we'd eventually want with "redundant array of inexpensive clouds", of
course, and (depending on how well FUSE or other S3 interfaces play) may
only work on tahoe-lafs nodes running one Unix or another, but if an
operator got it working, it seems like they'd have most of the benefit
outlined above without any further work on my/our part.
(not that I mind working on this, of course, but I figured it would be
worthwhile to mention that)
In any case, I think implementing this would come down to two basic parts.
The first part would be adapting the existing codebase to use multiple
backends.
We already have one backend -- the filesystem backend -- which I think
should be a plugin in the same sense that the others will be plugins
(i.e.: other code in tahoe-lafs can interact with a filesystem plugin
without caring very much about how or where it is storing its files --
otherwise it doesn't seem very extensible). If you accept this, then
we'd need to figure out what a backend plugin should look like. Maybe we
can make each plugin implement RIStorageServer, and leave it at that.
Then we might not need to do very much work on the existing server to
make it work with the rest of the (new) system. However, it's possible
that there is backend-independent logic in the current server
implementation that we wouldn't want to duplicate in every other backend
implementation. To address this, we could instead make a sort of
backend-agnostic storage server that implements RIStorageServer, then
make another interface for backends to implement, say IStorageProvider.
The skeletal RIStorageServer would instantiate its IStorageProvider
based on what the user configured, and use it to write/read data, get
statistics, and so on. Then IStorageProvider would be a fairly
simplistic filesystem-ish API.
The other part of preparation would be figuring out how to map user
configuration choices to what actually happens when a node is started.
Also, we'd want to figure out how (if?) we need to do anything special
with the credentials that users might need to log in to their storage
backend. I'll have a better idea of how I'd implement this once I look
at the way it works for other things that users configure.
Naturally, all of this would require a decent amount of documentation
and testing, too.
(I'm open to other ideas, of course -- these are just what came to my mind)
Once we have all of this worked out, the rest of this project would be
identifying what other backends we'd want in tahoe-lafs, then
documenting, implementing, and testing those. We already have Amazon S3
and Rackspace as targets -- users of tahoe-lafs will probably have their
own suggestions, and more backends will come up with more research.
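As a rough sketch of the "backend-agnostic storage server" split described in the email above: the server keeps the protocol-facing and backend-independent logic, and delegates all share I/O to whatever provider it was given. The method names and signatures here are simplified placeholders, not the real RIStorageServer remote methods:

    # Sketch of the server/provider split; names and signatures are simplified.
    class StorageServer:
        def __init__(self, provider):
            # provider: anything implementing the IStorageProvider sketch above
            self._provider = provider

        def get_buckets(self, storage_index):
            # stats, lease accounting, etc. would live here; only the actual
            # share access goes through the provider
            return dict((shnum, self._provider.get_share(storage_index, shnum))
                        for shnum in self._provider.enumerate_shares(storage_index))

        def allocate_buckets(self, storage_index, sharenums, allocated_size):
            remaining = self._provider.get_available_space()
            if remaining is not None and remaining < allocated_size * len(sharenums):
                return {}  # simplified: refuse everything if space looks short
            return dict((shnum, self._provider.put_share(storage_index, shnum, b""))
                        for shnum in sharenums)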
Generalizing this to include support for multiple backends (since I don't think we want to do it in a way that would only support S3 and local disk).
Changed title from "amazon s3 backend" to "support multiple storage backends, including amazon s3"; fix typo.
Update description to reflect kevan's suggested approach.
Attachment storagemocktest01.darcs.patch (6013 bytes) added
Attachment sservertests.darcs.patch (9054 bytes) added
Here is an incomplete patch for others (arc) to look at or improve.
Attachment for-arctic.darcs.patch (28992 bytes) added
Attachment for-arctic-2.darcs.patch (630265 bytes) added
Attachment workingonbackend01.darcs.patch (46623 bytes) added
Implements tests of read and write for the nullbackend
Attachment snapshotofbackendimplementation.darcs.patch (96411 bytes) added
just so I don't lose it all...
Attachment checkpoint3.darcs.patch (99326 bytes) added
another checkpoint
Attachment checkpoint4.darcs.patch (111935 bytes) added
Attachment checkpoint5.darcs.patch (124608 bytes) added
more precise tests in TestServerFSBackend
Attachment checkpoint6.darcs.patch (130227 bytes) added
backing myself up, some comments cleaned in interfaces, new tests in test_backends
Attachment checkpoint7.darcs.patch (130662 bytes) added
tiny change, now tests that allocated returns correct value
Attachment checkpoint8.darcs.patch (132043 bytes) added
The null backend test is useful for testing what happens when there's no effective limit on the backend
Attachment checkpoint9.darcs.patch (140783 bytes) added
checkpoint 9
Attachment checkpoint10.darcs.patch (144949 bytes) added
Completed coverage of remote_allocate_buckets
Attachment checkpoint11.darcs.patch (152965 bytes) added
(JACP) Just Another CheckPoint
Attachment consistentifysi.darcs.patch (161829 bytes) added
Renames all storage_index (word tokens) to storageindex in storage/server.py
Attachment checkpoint12.darcs.patch (170830 bytes) added
no longer trying to mock FS in TestServerFSBackend
Attachment jacp13.darcs.patch (192631 bytes) added
Attachment jacp14.darcs.patch (205520 bytes) added
Attachment jacp15.darcs.patch (210813 bytes) added
I'll review this.
Attachment work-in-progress-on-tests-from-pair-programming-with-Zancas.darcs.patch (227416 bytes) added
Attachment work-in-progress-2011-07-14_21_23.darcs.patch (235017 bytes) added
Attachment work-in-progress-2011-07-15_19_15.darcs.patch (255454 bytes) added
Attachment work-in-progress-2011-07-20_06_05Z.darcs.patch (283324 bytes) added
Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:76377? Currently, storage directories with Unicode paths are intended to be supported on Windows.
Replying to davidsarah:
Replying to [arch_o_median]comment:13:
http://twistedmatrix.com/trac/ticket/2366
Replying to [arch_o_median]comment:14:
(Is replying to myself bad form?) OK, so I can't tell how (or whether) 2366 was resolved. Should I get a Twisted login so I can ask about it on that ticket?... I await direction.
I did some investigation about non-ASCII filename handling in filepath and in Tahoe-LAFS and posted my notes on Twisted #5203.
Attachment jacp16Zancas20110722.darcs.patch (301848 bytes) added
Attachment jacp17Zancas20110723.darcs.patch (309840 bytes) added
Attachment jacp18Zancas20110723.darcs.patch (321159 bytes) added
After some chatting with zooko and warner in IRC, I've tentatively decided to use composition to inform the base Crawler object about the backend it is associated with. I'm not sure, but I think passing the whole Core object might be appropriate.
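A minimal sketch of that composition idea (the enumerate_shares_for_prefix name is an assumption, just to show the shape):

    # The crawler holds a reference to its backend instead of reaching into
    # the filesystem directly; subclasses override process_share().
    class ShareCrawler:
        def __init__(self, backend, statefile):
            self.backend = backend      # composition, not inheritance
            self.statefile = statefile

        def process_prefix(self, prefix):
            for storage_index, shnum in self.backend.enumerate_shares_for_prefix(prefix):
                self.process_share(storage_index, shnum)

        def process_share(self, storage_index, shnum):
            pass  # overridden by e.g. the lease crawler or bucket counter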
Attachment jacp19Zancas20110727.darcs.patch (347272 bytes) added
Attachment jacp20Zancas20110728.darcs.patch (358454 bytes) added
I'm confused about leases. When I look at the constructor for an immutable share file in a 'pristine' repository (or in my latest version, for that matter), I see that in the "create" clause of the constructor, a Python string holding a big-endian '0' is used for the number of leases.
http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/storage/immutable.py#L63
This is confusing because in my test vector data (created some time ago) I have '1' as the initial number of leases. My guess is that I somehow got a bum test-vector value, but it'd be nice to hear from an architect that immutable share files really should start life with '0' leases!
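For reference, a minimal sketch of reading that header field, assuming the v1 immutable share layout written by storage/immutable.py (a 12-byte header of three big-endian 32-bit words: version, data size, number of leases). A freshly created share writes 0 there; the count is only bumped when a lease is actually added later:

    # Sketch: read the lease count from an immutable share file, assuming the
    # ">LLL" v1 header (version, data size, number of leases).
    import struct

    def read_lease_count(sharefile_path):
        with open(sharefile_path, "rb") as f:
            version, data_size, num_leases = struct.unpack(">LLL", f.read(12))
        return num_leases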
Attachment FinishFPWRTest_Zancas20110728.darcs.patch (375575 bytes) added
Patch passes allmydata.test.test_backends.TestServerAndFSBackend.test_write_and_read_share
Cool! Will review.
Attachment readoldshpasses_Zancas20110729.darcs.patch (380678 bytes) added
TestServerAndFSBackend.test_read_old_share passes
Attachment TestServerandFSBackPasses_Zancas20110729.darcs.patch (392691 bytes) added
TestServerAndFSBackend passes all (3) tests
Attachment test_backendpasses_Zancas20110729.darcs.patch (399923 bytes) added
5 test_backend tests pass
Attachment JACP20_Zancas20110801.darcs.patch (414808 bytes) added
uggg... bugs...
Attachment jacp22_test_backendpasses_Zancas20110802.darcs.patch (425317 bytes) added
the 5 tests pass... so what?
Ticket 1465 more succinctly organizes the same code contained in these patches.
Attachment backends-configuration-docs.darcs.patch (172619 bytes) added
I added backends-configuration-docs.darcs.patch which contains documentation of the configuration options for the backends feature. I like Brian Warner's approach to development where he writes the docs first, even before the tests. (He writes tests second.) I encourage anyone working on this ticket to read (and possibly improve/fix/extend) these docs!
Review of backends-configuration-docs.darcs.patch:

s3.rst:
- Add a short introduction saying what S3 is and why anyone might want to use it.
- It's a bit inconsistent that the value of the backend option is uppercase "S3", but the other option names are lowercase "s3_". Also, I would make it "s3.", since that's similar to the use of "." to group other related options.
- Should the s3_url option include the scheme name, i.e. defaulting to http://s3.amazonaws.com? We might want to support https in future (although there would be more to configure if we check certificates).
- In the description of s3_max_space, copy the paragraph starting "This string contains a number" from disk.rst rather than referring to it.
- "enabling ``s3_max_space`` causes an extra S3 usage query to be sent for each share upload, causing the upload process to run slightly slower and incur more S3 request charges."

disk.rst:
- "Storing Shares in local filesystem" -> "Storing Shares on a Local Filesystem"
- Use backend = disk, not backend = local filesystem, and say that it is the default.

configuration.rst:
- "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
- "including how to limit the space that will be consumed" -> "including how to reserve a minimum amount of free space"
I closed the subsidiary ticket #1465 as "fixed". The current patch set for this ticket as of this writing is attachment:20110829passespyflakes.darcs.patch (from that ticket) plus attachment:backends-configuration-docs.darcs.patch.
Attachment pluggable-backends-davidsarah.darcs.patch (213441 bytes) added
This is just a "flat" recording of my refactoring of pluggable backends. I'll do a better recording tomorrow, and explain the refactoring.
Attachment pluggable-backends-davidsarah-v2.darcs.patch (305880 bytes) added
This is still just a flat recording (a lot more changes to tests were needed than I anticipated).
Attachment pluggable-backends-davidsarah-v3.darcs.patch (346554 bytes) added
Bleeding edge pluggable backends code from David-Sarah. refs #999
Attachment pluggable-backends-davidsarah-v4.darcs.patch (295378 bytes) added
Rerecording of pluggable-backends-davidsarah-v3.darcs.patch that should fix the darcs performance problem when applied to trunk.
Attachment pluggable-backends-davidsarah-v5.darcs.patch (315537 bytes) added
Work-in-progress, includes fix to bug involving BucketWriter. refs #999
Replying to davidsarah:
Attachment backends-configuration-docs-v2.darcs.patch (24325 bytes) added
docs: document the configuration options for the new backends scheme. This takes into account /tahoe-lafs/trac-2024-07-25/issues/6061#comment:-1 and is rerecorded to avoid darcs context problems.
Replying to [zancas]comment:28:
They are? I don't think so. How would they find out about the backend type?
backends-configuration-docs-v2.darcs.patch looks good to me. One thing I would change is to remove the "Issues" section about the costs of querying S3 objects and the effects on our crawler/lease-renewal scheme. I'm not sure that this branch will eventually land without a lease-checker implemented, so that part is making a statement that might be wrong. Also I'm not really sure the costs of querying S3 objects are worth mentioning. The current S3 pricing has 10,000 GET requests for $0.01. Let's remove that documentation for now and add in documentation when we understand better what the actual limitations or costs will be.
Replying to [zancas]comment:28:
The doc meant that client nodes need not be aware of backend type. Although the current hack to wire up a StorageServer to a backend in pluggable-backends-davidsarah-v5.darcs.patch is in allmydata/client.py, that code isn't actually run by clients, it is run only when setting up a storage server.
Replying to [davidsarah]comment:31:
Ugh, I should never use the term "node" :-/. I meant the code that acts as a storage protocol client.
Attachment pluggable-backends-davidsarah-v6.darcs.patch (329873 bytes) added
v6. Tests are looking in much better shape now -- still some problems with path vs FilePath and other stale assumptions in the test framework, but the disk backend basically works now.
Attachment trace-exceptions-option.darcs.patch (19736 bytes) added
Add --trace-exceptions option to trace raised exceptions on stderr. refs #999
Attachment pluggable-backends-davidsarah-v7.darcs.patch (368250 bytes) added
Latest snapshot, more tests passing.
Attachment snapshot-backend-config-parse.patch (6145 bytes) added
snapshot of work in progress
Attachment pluggable-backends-davidsarah-v8.darcs.patch (384096 bytes) added
v8 snapshot. More tests pass.
Attachment pluggable-backends-davidsarah-v9.darcs.patch (420760 bytes) added
Still more test fixes.
Josh wrote, re: pluggable-backends-davidsarah-v8.darcs.patch:
No, load_state uses pickle.loads(self.statefp.getContent()), which is correct. The state handling is a red herring for the test_crawlers failure, I think.
In v9, allmydata.test.test_storage.LeaseCrawler.test_basic is hanging due to an infinite recursion in pickle.py. Run the test with trace-exceptions-option.darcs.patch applied to see the recursion. I'm on the case...
Replying to davidsarah:
That was another red herring; there was an innocuous exception in pickle.py that was happening in each iteration of whatever other code is livelocking.
Attachment pluggable-backends-davidsarah-v10.darcs.patch (435928 bytes) added
Fix most of the crawler tests. Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
Attachment pluggable-backends-davidsarah-v11.darcs.patch (498823 bytes) added
Includes a fix for iterating over a dict while removing entries from it in mutable/publish.py, some cosmetic changes, and a start on the S3 backend.
Attachment pluggable-backends-davidsarah-v12.darcs.patch (528987 bytes) added
Updates to null and S3 backends.
Attachment passtest_status_bad_disk_stats.darcs.patch (512142 bytes) added
contains changes in v12
Attachment pluggable-backends-davidsarah-v13.darcs.patch (555578 bytes) added
Includes fixes to test_status_bad_disk_stats and test_no_st_blocks in test_storage.py, and more work on the S3 backend.
Attachment pluggable-backends-davidsarah-v14.darcs.patch (602336 bytes) added
Work in progress for asyncifying the backend interface (necessary to call txaws methods that return Deferreds). This is incomplete so lots of tests fail. refs #999
In v13, test_storage.LeaseCrawler.test_share_corruption fails. However, this is a test that is known to have race conditions -- it used to fail when logging was enabled (#923), and we tried to fix that in changeset:3b1b0147a867759c, but in a way that in retrospect didn't really address the cause of the race condition. The problem is that it's trying to check for a particular instantaneous state of the lease crawler while it is running, which is inherently race-prone.
I suggest we not worry about this test for the current LAE iteration.
Attachment pluggable-backends-davidsarah-v13a.darcs.patch (544869 bytes) added
This does not include the asyncification changes from v14, but does include a couple of fixes for failures in test_system.
Attachment pluggable-backends-davidsarah-v15.darcs.patch (703824 bytes) added
bleeding edge of asyncification work
0 /home/arc/sandbox/working 550 $ darcs apply pluggable-backends-davidsarah-v15.darcs.patch
darcs failed: Bad patch bundle!
2 /home/arc/sandbox/working 551 $
Attachment pluggable-backends-davidsarah-v16.darcs.patch (841151 bytes) added
Latest asyncified patch. About 90% of tests pass.
Attachment s3-v13a-to-v16.diff (26727 bytes) added
Differences, just in the S3 backend, between v13a and v16.
Attachment split_s3share_classes_and_prune_unused_methods.diff (10735 bytes) added
Attachment split_s3share_classes_and_prune_unused_methods.dpatch (20675 bytes) added
Attachment configure-backends-incomplete.dpatch (26608 bytes) added
Attachment pluggable-backends-davidsarah-v17.darcs.patch (861252 bytes) added
Completes the splitting of IStoredShare into IShareForReading and IShareForWriting. Does not include configuration changes.
Attachment test_backends.py (7734 bytes) added
Snapshot of test_backends.py in David-Sarah's tree
Attachment pluggable-backends-davidsarah-v18.darcs.patch (879876 bytes) added
Includes backend configuration (rerecorded from zooko's patch), and other minor fixes.
Attachment asyncify-tests.dpatch (13774 bytes) added
Attachment pluggable-backends-davidsarah-v19.darcs.patch (919584 bytes) added
Include missing files for real and mock S3 backends. Also some fixes to tests, scripts/debug.py, and config parsing.
In [5373/ticket999-S3-backend]:
In [5374/ticket999-S3-backend]:
In [5375/ticket999-S3-backend]:
In [5376/ticket999-S3-backend]:
In [5379/ticket999-S3-backend]:
In [5380/ticket999-S3-backend]:
In [5382/ticket999-S3-backend]:
Attachment debug-mutable-hash-validation-failure.dpatch (35380 bytes) added
In [5387/ticket999-S3-backend]:
In [5388/ticket999-S3-backend]:
In [5391/ticket999-S3-backend]:
In [5392/ticket999-S3-backend]:
Attachment pluggable-backends-davidsarah-v20.darcs.patch (1101695 bytes) added
Fix various bugs and tests. v20
Re: pluggable-backends-davidsarah-v20.darcs.patch, I made a mistake in recording it that will cause a conflict with the ticket999-S3-backend branch. I'll attach a fixed version.
In [5400/ticket999-S3-backend]:
In [5402/ticket999-S3-backend]:
In [5403/ticket999-S3-backend]:
In [5404/ticket999-S3-backend]:
In [5405/ticket999-S3-backend]:
In [5406/ticket999-S3-backend]:
In [5407/ticket999-S3-backend]:
In [5408/ticket999-S3-backend]:
In [5409/ticket999-S3-backend]:
In [5410/ticket999-S3-backend]:
In [5411/ticket999-S3-backend]:
In [5412/ticket999-S3-backend]:
In [5414/ticket999-S3-backend]:
Please ignore pluggable-backends-davidsarah-v20.darcs.patch; the equivalent of that patch is on the ticket999-S3-backend branch now.
In [5415/ticket999-S3-backend]:
[5415/ticket999-S3-backend] fixes all but one of the tests in test_mutable.py.
In [5416/ticket999-S3-backend]:
In [5417/ticket999-S3-backend]:
In [5419/ticket999-S3-backend]:
In [5421/ticket999-S3-backend]:
In [5422/ticket999-S3-backend]:
In [5423/ticket999-S3-backend]:
In [5424/ticket999-S3-backend]:
In [5425/ticket999-S3-backend]:
In [5426/ticket999-S3-backend]:
In [5427/ticket999-S3-backend]:
In [5429/ticket999-S3-backend]:
In [5430/ticket999-S3-backend]:
In [5431/ticket999-S3-backend]:
In [5432/ticket999-S3-backend]:
In [5433/ticket999-S3-backend]:
In [5434/ticket999-S3-backend]:
In [5445/ticket999-S3-backend]:
In [5446/ticket999-S3-backend]:
In [5447/ticket999-S3-backend]:
In [5448/ticket999-S3-backend]:
In [5449/ticket999-S3-backend]:
In [5450/ticket999-S3-backend]:
In [5451/ticket999-S3-backend]:
In [5452/ticket999-S3-backend]:
In [5453/ticket999-S3-backend]:
In [5454/ticket999-S3-backend]:
In [5455/ticket999-S3-backend]:
In [5456/ticket999-S3-backend]:
In [5457/ticket999-S3-backend]:
In [5458/ticket999-S3-backend]:
In [5459/ticket999-S3-backend]:
In [5461/ticket999-S3-backend]:
In [5462/ticket999-S3-backend]:
In [5463/ticket999-S3-backend]:
In [5464/ticket999-S3-backend]:
In [5465/ticket999-S3-backend]:
In [5466/ticket999-S3-backend]:
In [5467/ticket999-S3-backend]:
In [5468/ticket999-S3-backend]:
In [5469/ticket999-S3-backend]:
In [5470/ticket999-S3-backend]:
In [5471/ticket999-S3-backend]:
In [5472/ticket999-S3-backend]:
In [5473/ticket999-S3-backend]:
In [5474/ticket999-S3-backend]:
In [5475/ticket999-S3-backend]:
In [5476/ticket999-S3-backend]:
In [5477/ticket999-S3-backend]:
In [5478/ticket999-S3-backend]:
In [5479/ticket999-S3-backend]:
In [5480/ticket999-S3-backend]:
In [5479/ticket999-S3-backend], there's also a fix to a preexisting bug in test_storage.LeaseCrawler.test_unpredictable_future, where it was checking the s["estimated-remaining-cycle"]["space-recovered"] key twice, rather than both that key and s["estimated-current-cycle"]["space-recovered"] as intended.
In [5481/ticket999-S3-backend]:
In [5482/ticket999-S3-backend]:
In [5483/ticket999-S3-backend]:
In [5484/ticket999-S3-backend]:
In [5485/ticket999-S3-backend]:
In [5486/ticket999-S3-backend]:
In [5487/ticket999-S3-backend]:
In [5488/ticket999-S3-backend]:
In [5489/ticket999-S3-backend]:
In [5490/ticket999-S3-backend]:
In [5491/ticket999-S3-backend]:
In [5492/ticket999-S3-backend]:
In [5493/ticket999-S3-backend]:
In [5494/ticket999-S3-backend]:
In [5495/ticket999-S3-backend]:
In [5514/ticket999-S3-backend]:
In [5515/ticket999-S3-backend]:
In [5516/ticket999-S3-backend]:
In [5517/ticket999-S3-backend]:
In [5518/ticket999-S3-backend]:
In [5519/ticket999-S3-backend]:
In [5520/ticket999-S3-backend]:
In [5521/ticket999-S3-backend]:
In [5522/ticket999-S3-backend]:
In [5523/ticket999-S3-backend]:
In [5524/ticket999-S3-backend]:
In [5525/ticket999-S3-backend]:
In [5526/ticket999-S3-backend]:
Further work on this functionality will be in ticket #1569.