Amazon S3 compatible frontend #917

Open
opened 2010-01-19 23:50:09 +00:00 by francois · 5 comments
francois commented 2010-01-19 23:50:09 +00:00
Owner

Allowing existing S3 clients to access files on a Tahoe grid sounds cool.

Would it be possible to build such an S3-compatible frontend for Tahoe? Should it be implemented as a new frontend or as a proxy between the WAPI and S3 clients?

A few interesting references:

* [S3 API documentation](http://docs.amazonwebservices.com/AmazonS3/latest/)
* [A Twisted API for accessing Amazon Web Services](https://launchpad.net/txaws)
tahoe-lafs added the
code-frontend
minor
enhancement
1.5.0
labels 2010-01-19 23:50:09 +00:00
tahoe-lafs added this to the eventually milestone 2010-01-19 23:50:09 +00:00

zooko commented
Owner
Yes! Very cool! The way to do it is to make a variant of source:src/allmydata/storage/server.py which doesn't read from local disk in its [_iter_share_files()]source:src/allmydata/storage/server.py@4164#L359, but instead reads the files from its S3 bucket (it is an S3 client and a Tahoe-LAFS storage server). Likewise variants of [storage/shares.py]source:src/allmydata/storage/shares.py@3762, [storage/immutable.py]source:src/allmydata/storage/immutable.py@3871#L39, and [storage/mutable.py]source:src/allmydata/storage/mutable.py@3815#L34 which write their data out to S3 instead of to their local filesystem.

Probably one should first start by abstracting out just the "does this go to local disk, S3, Rackspace Cloudfiles, etc" part from all the other functionality in those four files... :-)
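The abstraction step suggested above could look something like the following sketch: a small backend interface that pulls the "where do shares live?" decision out of the storage server, with a disk implementation (the current behaviour) and an S3 implementation side by side. All class and method names here are illustrative assumptions, not actual Tahoe-LAFS APIs, and `s3_client` stands in for any object with `get_object`/`put_object` methods (e.g. one built on txaws).

```python
# Hypothetical sketch: abstract the storage medium behind a ShareBackend
# interface. Names are illustrative, not Tahoe-LAFS APIs.
import os

class ShareBackend:
    """Reads and writes share data, independent of the storage medium."""
    def get_share(self, storage_index, shnum):
        raise NotImplementedError
    def put_share(self, storage_index, shnum, data):
        raise NotImplementedError

class DiskBackend(ShareBackend):
    """Current behaviour: shares live on the local filesystem."""
    def __init__(self, basedir):
        self.basedir = basedir
    def _path(self, storage_index, shnum):
        return os.path.join(self.basedir, storage_index, str(shnum))
    def get_share(self, storage_index, shnum):
        with open(self._path(storage_index, shnum), "rb") as f:
            return f.read()
    def put_share(self, storage_index, shnum, data):
        path = self._path(storage_index, shnum)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

class S3Backend(ShareBackend):
    """Variant: shares live as objects in an S3 bucket, keyed by
    storage index and share number."""
    def __init__(self, s3_client, bucket):
        self.s3 = s3_client
        self.bucket = bucket
    def _key(self, storage_index, shnum):
        return "shares/%s/%d" % (storage_index, shnum)
    def get_share(self, storage_index, shnum):
        return self.s3.get_object(self.bucket, self._key(storage_index, shnum))
    def put_share(self, storage_index, shnum, data):
        self.s3.put_object(self.bucket, self._key(storage_index, shnum), data)
```

With something like this in place, server.py, shares.py, immutable.py, and mutable.py would go through the backend object instead of touching the filesystem directly, and a Cloudfiles (or any other) backend becomes one more subclass.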

davidsarah commented 2010-01-20 07:17:02 +00:00
Author
Owner

Replying to zooko:

Yes! Very cool! The way to do it is to make a variant of source:src/allmydata/storage/server.py which doesn't read from local disk in its [_iter_share_files()]source:src/allmydata/storage/server.py@4164#L359, but instead reads the files from its S3 bucket (it is an S3 client and a Tahoe-LAFS storage server).

Er, that would be an S3-compatible storage *backend*, no? "Allowing existing S3 clients to access files on a Tahoe grid" is definitely the frontend.

Note that S3 is supposed to be a RESTful web interface (and basically is, with some quirks), so an HTTP client that doesn't make too many assumptions should be able to access either S3 servers or Tahoe webapi servers already. It may be that we can add operations to the webapi to make it more closely compatible with S3, but I doubt that we can exactly emulate S3's access control model.
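To illustrate the point that both are "just HTTP" while the naming and access-control models differ: below is a minimal comparison of the two request shapes. The hostnames, paths, cap string, and Authorization value are all hypothetical placeholders, not working credentials or real caps.

```python
# Illustrative only: the same generic HTTP machinery can target either
# service, but naming and authentication differ.
import urllib.request

# S3 names an object by bucket and key; access control rides in a signed
# Authorization header (the value below is a fake placeholder):
s3_req = urllib.request.Request(
    "https://mybucket.s3.amazonaws.com/backups/photo.jpg",
    headers={"Authorization": "AWS AKIAEXAMPLE:fakesignature="})

# The Tahoe webapi names an object by its capability URI; the cap itself
# carries the authority, so no separate credential header is needed:
tahoe_req = urllib.request.Request(
    "http://127.0.0.1:3456/uri/URI%3ACHK%3Aexamplecap")
```

The gap an S3-compatible frontend would have to bridge is exactly that difference: translating bucket/key names plus signed credentials into cap-based lookups.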


Yes, you're right. I'm so tired I don't know my frontend from my backend.

An S3-compatible *backend* is interesting because of this: Redundant Array of Inexpensive Clouds: <http://allmydata.org/~zooko/RAIC.png> . :-)

An S3-compatible *frontend* is interesting because lots of people have tools and knowledge about how to store things with the S3 API, and they could more easily retarget those things to a Tahoe-LAFS grid.

francois commented 2010-01-20 08:21:35 +00:00
Author
Owner

Replying to zooko:

Yes, you're right. I'm so tired I don't know my frontend from my backend.

An S3-compatible backend is interesting because of this: Redundant Array of Inexpensive Clouds: http://allmydata.org/~zooko/RAIC.png . :-)

I think that backend is already addressed by #510.

An S3-compatible frontend is interesting because lots of people have tools and knowledge about how to store things with the S3 API, and they could more easily retarget those things to a Tahoe-LAFS grid.

Yeah, that's the frontend I was talking about.

Many people are investing time in building good S3 clients nowadays. And some of them have probably already addressed many of the problems we have with 'tahoe backup', the FUSE frontends, and so on.

davidsarah commented 2012-10-10 23:15:46 +00:00
Author
Owner

S3 clients will assume that files are accessible under specified names, rather than cap URIs. I'm not really sure that S3's security model is very compatible with Tahoe's, unless you have a file mapping (Access Key ID, secret key) pairs to root caps, like the SFTP and FTP frontends.
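Such a credential map could follow the spirit of the accounts file used by the SFTP/FTP frontends. The following sketch assumes a hypothetical one-triple-per-line format, `access_key_id secret_key rootcap`; this format and both function names are illustration only, not an existing Tahoe-LAFS feature.

```python
# Hypothetical sketch: map S3-style (Access Key ID, secret key) pairs to
# root caps, in the spirit of the SFTP/FTP frontends' accounts file.

def load_accounts(lines):
    """Parse account lines into {access_key_id: (secret_key, rootcap)}."""
    accounts = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        access_key_id, secret_key, rootcap = line.split(None, 2)
        accounts[access_key_id] = (secret_key, rootcap)
    return accounts

def lookup_rootcap(accounts, access_key_id, secret_key):
    """Return the root cap for valid credentials, or None on any mismatch."""
    entry = accounts.get(access_key_id)
    if entry is None or entry[0] != secret_key:
        return None
    return entry[1]
```

The frontend would then resolve each authenticated request's bucket/key path relative to that account's root cap, rather than exposing raw cap URIs to the S3 client.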

Reference: tahoe-lafs/trac-2024-07-25#917