Convert docs/frontends and docs/specifications to reStructuredText format
(not including file moves).

parent ee4e0d8106
commit 2100273ce3
The Tahoe CLI commands
======================

1. `Overview`_
2. `CLI Command Overview`_
3. `Node Management`_
4. `Filesystem Manipulation`_

   1. `Starting Directories`_
   2. `Command Syntax Summary`_
   3. `Command Examples`_

5. `Storage Grid Maintenance`_
6. `Debugging`_
Overview
========

Tahoe provides a single executable named "``tahoe``", which can be used to
create and manage client/server nodes, manipulate the filesystem, and perform
several debugging/maintenance tasks.

This executable lives in the source tree at "``bin/tahoe``". Once you've done a
build (by running "make"), ``bin/tahoe`` can be run in-place: if it discovers
that it is being run from within a Tahoe source tree, it will modify sys.path
as necessary to use all the source code and dependent libraries contained in
that tree.

If you've installed Tahoe (using "``make install``", or by installing a binary
package), then the tahoe executable will be available somewhere else, perhaps
in ``/usr/bin/tahoe``. In this case, it will use your platform's normal
PYTHONPATH search paths to find the tahoe code and other libraries.
CLI Command Overview
====================

The "``tahoe``" tool provides access to three categories of commands.

* node management: create a client/server node, start/stop/restart it
* filesystem manipulation: list files, upload, download, delete, rename
* debugging: unpack cap-strings, examine share files

To get a list of all commands, just run "``tahoe``" with no additional
arguments. "``tahoe --help``" might also provide something useful.

Running "``tahoe --version``" will display a list of version strings, starting
with the "allmydata" module (which contains the majority of the Tahoe
functionality) and including versions for a number of dependent libraries,
like Twisted, Foolscap, pycryptopp, and zfec.

Node Management
===============

"``tahoe create-node [NODEDIR]``" is the basic make-a-new-node command. It
creates a new directory and populates it with files that will allow the
"``tahoe start``" command to use it later on. This command creates nodes that
have client functionality (upload/download files), web API services
(controlled by the 'webport' file), and storage services (unless
"``--no-storage``" is specified).
NODEDIR defaults to ``~/.tahoe/``, and newly-created nodes default to
publishing a web server on port 3456 (limited to the loopback interface, at
127.0.0.1, to restrict access to other programs on the same host). All of the
other "``tahoe``" subcommands use corresponding defaults.

"``tahoe create-client [NODEDIR]``" creates a node with no storage service.
That is, it behaves like "``tahoe create-node --no-storage [NODEDIR]``".
(This is a change from versions prior to 1.6.0.)

"``tahoe create-introducer [NODEDIR]``" is used to create the Introducer node.
This node provides introduction services and nothing else. When started, this
node will produce an ``introducer.furl``, which should be published to all
clients.

"``tahoe create-key-generator [NODEDIR]``" is used to create a special
"key-generation" service, which allows a client to offload their RSA key
generation to a separate process. Since RSA key generation takes several
seconds, and must be done each time a directory is created, moving it to a
separate process allows the first process (perhaps a busy wapi server) to
continue servicing other requests. The key generator exports a FURL that can
be copied into a node to enable this functionality.

"``tahoe run [NODEDIR]``" will start a previously-created node in the foreground.

"``tahoe start [NODEDIR]``" will launch a previously-created node. It will launch
the node into the background, using the standard Twisted "``twistd``"
daemon-launching tool. On some platforms (including Windows) this command is
unable to run a daemon in the background; in that case it behaves in the
same way as "``tahoe run``".

"``tahoe stop [NODEDIR]``" will shut down a running node.

"``tahoe restart [NODEDIR]``" will stop and then restart a running node. This is
most often used by developers who have just modified the code and want to
start using their changes.

Filesystem Manipulation
=======================

These commands let you examine a Tahoe filesystem, providing basic
list/upload/download/delete/rename/mkdir functionality. They can be used as

As of Tahoe v1.7, passing non-ASCII characters to the CLI should work,
except on Windows. The command-line arguments are assumed to use the
character encoding specified by the current locale.
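The locale assumption can be illustrated with a short sketch. This is a
hypothetical helper, not Tahoe's real argument handling (which, especially on
Windows, is more involved):

```python
import locale

def argv_to_unicode(argbytes):
    """Decode one command-line argument using the locale's preferred
    encoding, as the CLI assumes; fall back to UTF-8 if the locale
    reports nothing usable (sketch only)."""
    encoding = locale.getpreferredencoding(False) or "utf-8"
    return argbytes.decode(encoding)
```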

Starting Directories
--------------------

As described in architecture.txt, the Tahoe distributed filesystem consists
of a collection of directories and files, each of which has a "read-cap" or a

connected together into a directed graph.

To use this collection of files and directories, you need to choose a
starting point: some specific directory that we will refer to as a
"starting directory". For a given starting directory, the "``ls
[STARTING_DIR]:``" command would list the contents of this directory,
the "``ls [STARTING_DIR]:dir1``" command would look inside this directory
for a child named "dir1" and list its contents, "``ls
[STARTING_DIR]:dir1/subdir2``" would look two levels deep, etc.

Note that there is no real global "root" directory, but instead each
starting directory provides a different, possibly overlapping
perspective on the graph of files and directories.
Each tahoe node remembers a list of starting points, named "aliases",
in a file named ``~/.tahoe/private/aliases``. These aliases are short UTF-8
encoded strings that stand in for a directory read- or write- cap. If
you use the command line "``ls``" without any "[STARTING_DIR]:" argument,
then it will use the default alias, which is "tahoe", therefore "``tahoe
ls``" has the same effect as "``tahoe ls tahoe:``". The same goes for the
other commands which can reasonably use a default alias: get, put,
mkdir, mv, and rm.
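The aliases file holds one "name: cap" entry per line, so it could be parsed
with something like the following sketch (comment and whitespace handling here
are assumptions, not guarantees about the real parser):

```python
def parse_aliases(text):
    """Parse aliases-file contents: one 'name: cap' entry per line.
    Blank lines and '#' comments are skipped (an assumed convention)."""
    aliases = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # split on the FIRST colon only; caps themselves contain colons
        name, _, cap = line.partition(":")
        if cap:
            aliases[name.strip()] = cap.strip()
    return aliases
```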

For backwards compatibility with Tahoe-1.0, if the "tahoe": alias is not
found in ~/.tahoe/private/aliases, the CLI will use the contents of
~/.tahoe/private/root_dir.cap instead. Tahoe-1.0 had only a single starting
point, and stored it in this root_dir.cap file, so Tahoe-1.1 will use it if
necessary. However, once you've set a "tahoe:" alias with "``tahoe set-alias``",
that will override anything in the old root_dir.cap file.

The Tahoe CLI commands use the same filename syntax as scp and rsync

alias, you can use that alias as an argument to commands.

The best way to get started with Tahoe is to create a node, start it, then
use the following command to create a new directory and set it as your
"tahoe:" alias::

  tahoe create-alias tahoe

After that you can use "``tahoe ls tahoe:``" and
"``tahoe cp local.txt tahoe:``", and both will refer to the directory that
you've just created.

SECURITY NOTE: For users of shared systems
``````````````````````````````````````````

Another way to achieve the same effect as the above "``tahoe create-alias``"
command is::

  tahoe add-alias tahoe `tahoe mkdir`

However, command-line arguments are visible to other users (through the
'ps' command, or the Windows Process Explorer tool), so if you are using a
tahoe node on a shared host, your login neighbors will be able to see (and
capture) any directory caps that you set up with the "``tahoe add-alias``"
command.

The "``tahoe create-alias``" command avoids this problem by creating a new
directory and putting the cap into your aliases file for you. Alternatively,
you can edit the NODEDIR/private/aliases file directly, by adding a line like
this::

  fun: URI:DIR2:ovjy4yhylqlfoqg2vcze36dhde:4d4f47qko2xm5g7osgo2yyidi5m4muyo2vjjy53q4vjju2u55mfa

other arguments you type there, but not the caps that Tahoe uses to permit
access to your files and directories.
Command Syntax Summary
----------------------

``tahoe add-alias alias cap``

``tahoe create-alias alias``

``tahoe list-aliases``

``tahoe mkdir``

``tahoe mkdir [alias:]path``

``tahoe ls [alias:][path]``

``tahoe webopen [alias:][path]``

``tahoe put [--mutable] [localfrom:-]``

``tahoe put [--mutable] [localfrom:-] [alias:]to``

``tahoe put [--mutable] [localfrom:-] [alias:]subdir/to``

``tahoe put [--mutable] [localfrom:-] dircap:to``

``tahoe put [--mutable] [localfrom:-] dircap:./subdir/to``

``tahoe put [localfrom:-] mutable-file-writecap``

``tahoe get [alias:]from [localto:-]``

``tahoe cp [-r] [alias:]frompath [alias:]topath``

``tahoe rm [alias:]what``

``tahoe mv [alias:]from [alias:]to``

``tahoe ln [alias:]from [alias:]to``

``tahoe backup localfrom [alias:]to``
Command Examples
----------------

``tahoe mkdir``

This creates a new empty unlinked directory, and prints its write-cap to
stdout. The new directory is not attached to anything else.

``tahoe add-alias fun DIRCAP``

An example would be::

  tahoe add-alias fun URI:DIR2:ovjy4yhylqlfoqg2vcze36dhde:4d4f47qko2xm5g7osgo2yyidi5m4muyo2vjjy53q4vjju2u55mfa

directory. Use "``tahoe add-alias tahoe DIRCAP``" to set the contents of the
default "tahoe:" alias.

``tahoe create-alias fun``

This combines "``tahoe mkdir``" and "``tahoe add-alias``" into a single step.

``tahoe list-aliases``

This displays a table of all configured aliases.

``tahoe mkdir subdir``

``tahoe mkdir /subdir``

These both create a new empty directory and attach it to your root with the
name "subdir".

``tahoe ls``

``tahoe ls /``

``tahoe ls tahoe:``

``tahoe ls tahoe:/``

All four list the root directory of your personal virtual filesystem.

``tahoe ls subdir``

This lists a subdirectory of your filesystem.

``tahoe webopen``

``tahoe webopen tahoe:``

``tahoe webopen tahoe:subdir/``

``tahoe webopen subdir/``

This uses the python 'webbrowser' module to cause a local web browser to
open to the web page for the given directory. This page offers interfaces to
add, download, rename, and delete files in the directory. If not given an
alias or path, opens "tahoe:", the root dir of the default alias.
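A webopen-style command could construct the URL it hands to the 'webbrowser'
module roughly like this. The ``/uri/<cap>/<path>`` layout and the default
port 3456 are assumptions based on the node defaults described earlier, not a
documented guarantee:

```python
import urllib.parse

def wui_url_for(cap, path="", base="http://127.0.0.1:3456"):
    """Build an assumed web-UI URL for a directory cap and optional
    child path; the cap is percent-encoded since it contains colons."""
    url = base + "/uri/" + urllib.parse.quote(cap, safe="")
    if path:
        url += "/" + urllib.parse.quote(path)
    return url
```

``webbrowser.open(wui_url_for(cap, "subdir"))`` would then open the page in
the user's default browser.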

``tahoe put file.txt``

``tahoe put ./file.txt``

``tahoe put /tmp/file.txt``

``tahoe put ~/file.txt``

These upload the local file into the grid, and print the new read-cap to
stdout. The uploaded file is not attached to any directory. All one-argument
forms of "``tahoe put``" perform an unlinked upload.

``tahoe put -``

``tahoe put``

These also perform an unlinked upload, but the data to be uploaded is taken
from stdin.

``tahoe put file.txt uploaded.txt``

``tahoe put file.txt tahoe:uploaded.txt``

These upload the local file and add it to your root with the name
"uploaded.txt".

``tahoe put file.txt subdir/foo.txt``

``tahoe put - subdir/foo.txt``

``tahoe put file.txt tahoe:subdir/foo.txt``

``tahoe put file.txt DIRCAP:./foo.txt``

``tahoe put file.txt DIRCAP:./subdir/foo.txt``

These upload the named file and attach it to a subdirectory of the given
root directory, under the name "foo.txt". Note that to use a directory

than ":", to help the CLI parser figure out where the dircap ends. When the
source file is named "-", the contents are taken from stdin.
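The parsing rule above — an alias ends at the first ":", while a raw dircap
needs ":./" so the parser can tell where the cap ends — can be sketched as
follows. This is a hypothetical helper covering only the alias and ":./"
forms, not Tahoe's actual parser:

```python
def split_target(arg, aliases):
    """Split a CLI target like 'fun:subdir/file' into (cap, path).
    Dircaps contain colons, so raw caps use ':./' as the separator;
    with no recognized alias, fall back to the default 'tahoe:' alias
    and treat the whole argument as a path (sketch only)."""
    if ":./" in arg:
        cap, _, path = arg.partition(":./")
        return cap, path
    name, sep, path = arg.partition(":")
    if sep and name in aliases:
        return aliases[name], path
    return aliases.get("tahoe", ""), arg
```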

``tahoe put file.txt --mutable``

Create a new mutable file, fill it with the contents of file.txt, and print
the new write-cap to stdout.

``tahoe put file.txt MUTABLE-FILE-WRITECAP``

Replace the contents of the given mutable file with the contents of file.txt
and print the same write-cap to stdout.

``tahoe cp file.txt tahoe:uploaded.txt``

``tahoe cp file.txt tahoe:``

``tahoe cp file.txt tahoe:/``

``tahoe cp ./file.txt tahoe:``

These upload the local file and add it to your root with the name
"uploaded.txt".

``tahoe cp tahoe:uploaded.txt downloaded.txt``

``tahoe cp tahoe:uploaded.txt ./downloaded.txt``

``tahoe cp tahoe:uploaded.txt /tmp/downloaded.txt``

``tahoe cp tahoe:uploaded.txt ~/downloaded.txt``

These download the named file from your tahoe root, and put the result on
your local filesystem.

``tahoe cp tahoe:uploaded.txt fun:stuff.txt``

This copies a file from your tahoe root to a different virtual directory,
set up earlier with "``tahoe add-alias fun DIRCAP``".
``tahoe rm uploaded.txt``

``tahoe rm tahoe:uploaded.txt``

This deletes a file from your tahoe root.
``tahoe mv uploaded.txt renamed.txt``

``tahoe mv tahoe:uploaded.txt tahoe:renamed.txt``

These rename a file within your tahoe root directory.
``tahoe mv uploaded.txt fun:``

``tahoe mv tahoe:uploaded.txt fun:``

``tahoe mv tahoe:uploaded.txt fun:uploaded.txt``

These move a file from your tahoe root directory to the virtual directory
set up earlier with "``tahoe add-alias fun DIRCAP``".
``tahoe backup ~ work:backups``

This command performs a full versioned backup of every file and directory
underneath your "~" home directory, placing an immutable timestamped

should delete the stale backupdb.sqlite file, to force "``tahoe backup``" to
upload all files to the new grid.

``tahoe backup --exclude=*~ ~ work:backups``

Same as above, but this time the backup process will ignore any
filename that ends with '~'. '--exclude' will accept any standard

attention that the pattern will be matched against any level of the
directory tree; it's still impossible to specify absolute path exclusions.
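The any-level matching described above can be sketched with shell-style
wildcards. This is an illustrative approximation using Python's ``fnmatch``;
the real '--exclude' semantics may differ in detail:

```python
import fnmatch

def is_excluded(relpath, patterns):
    """Check a relative path against exclusion patterns, testing every
    path component so a pattern can match at any level of the tree
    (assumed behavior; absolute-path exclusions are not supported)."""
    parts = relpath.split("/")
    return any(fnmatch.fnmatch(part, pat)
               for part in parts for pat in patterns)
```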

``tahoe backup --exclude-from=/path/to/filename ~ work:backups``

'--exclude-from' is similar to '--exclude', but reads exclusion
patterns from '/path/to/filename', one per line.

``tahoe backup --exclude-vcs ~ work:backups``

This command will ignore any known file or directory that's used by
version control systems to store metadata. The excluded names are:

* CVS
* RCS

* .hgignore
* _darcs

Storage Grid Maintenance
========================

``tahoe manifest tahoe:``

``tahoe manifest --storage-index tahoe:``

``tahoe manifest --verify-cap tahoe:``

``tahoe manifest --repair-cap tahoe:``

``tahoe manifest --raw tahoe:``
||||||
This performs a recursive walk of the given directory, visiting every file
|
This performs a recursive walk of the given directory, visiting every file
|
||||||
and directory that can be reached from that point. It then emits one line to
|
and directory that can be reached from that point. It then emits one line to
|
||||||
|
@ -441,46 +502,47 @@ tahoe manifest --raw tahoe:
|
||||||
strings, and cap strings. The last line of the --raw output will be a JSON
|
strings, and cap strings. The last line of the --raw output will be a JSON
|
||||||
encoded deep-stats dictionary.
|
encoded deep-stats dictionary.
|
||||||
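The recursive walk that a manifest performs can be sketched as follows. This is a toy stand-in using nested dicts for dirnodes and placeholder cap strings, not the real Tahoe node API:

```python
def walk_manifest(node, path=()):
    # Recursively yield one (path, cap) pair per reachable object.
    # 'node' is a nested dict {name: cap-string-or-subdict}; the cap
    # strings below are illustrative placeholders.
    for name, child in sorted(node.items()):
        if isinstance(child, dict):
            yield "/".join(path + (name,)) + "/", "<dircap>"
            yield from walk_manifest(child, path + (name,))
        else:
            yield "/".join(path + (name,)), child

tree = {"docs": {"readme.txt": "URI:CHK:aaa"}, "notes.txt": "URI:CHK:bbb"}
manifest = list(walk_manifest(tree))
```

Emitting one line per yielded pair gives the manifest-style output described above.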
|
|
||||||
tahoe stats tahoe:
|
``tahoe stats tahoe:``
|
||||||
|
|
||||||
This performs a recursive walk of the given directory, visiting every file
|
This performs a recursive walk of the given directory, visiting every file
|
||||||
and directory that can be reached from that point. It gathers statistics on
|
and directory that can be reached from that point. It gathers statistics on
|
||||||
the sizes of the objects it encounters, and prints a summary to stdout.
|
the sizes of the objects it encounters, and prints a summary to stdout.
|
||||||
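A simplified sketch of the statistics-gathering side: collect the sizes encountered during the walk and summarize them. The real ``tahoe stats`` output has many more fields; the field names here are assumptions for illustration:

```python
def deep_stats(sizes):
    # Summarize object sizes gathered during a recursive walk
    # (a minimal sketch; not the real deep-stats dictionary).
    return {
        "count": len(sizes),
        "total": sum(sizes),
        "largest": max(sizes) if sizes else 0,
    }

stats = deep_stats([120, 4096, 33])
```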
|
|
||||||
|
|
||||||
== Debugging ==
|
Debugging
|
||||||
|
=========
|
||||||
|
|
||||||
For a list of all debugging commands, use "tahoe debug".
|
For a list of all debugging commands, use "tahoe debug".
|
||||||
|
|
||||||
"tahoe debug find-shares STORAGEINDEX NODEDIRS.." will look through one or
|
"``tahoe debug find-shares STORAGEINDEX NODEDIRS..``" will look through one or
|
||||||
more storage nodes for the share files that are providing storage for the
|
more storage nodes for the share files that are providing storage for the
|
||||||
given storage index.
|
given storage index.
|
||||||
|
|
||||||
"tahoe debug catalog-shares NODEDIRS.." will look through one or more storage
|
"``tahoe debug catalog-shares NODEDIRS..``" will look through one or more
|
||||||
nodes and locate every single share they contain. It produces a report on
|
storage nodes and locate every single share they contain. It produces a report
|
||||||
stdout with one line per share, describing what kind of share it is, the
|
on stdout with one line per share, describing what kind of share it is, the
|
||||||
storage index, the size of the file is used for, etc. It may be useful to
|
storage index, the size of the file it is used for, etc. It may be useful to
|
||||||
concatenate these reports from all storage hosts and use it to look for
|
concatenate these reports from all storage hosts and use it to look for
|
||||||
anomalies.
|
anomalies.
|
||||||
|
|
||||||
"tahoe debug dump-share SHAREFILE" will take the name of a single share file
|
"``tahoe debug dump-share SHAREFILE``" will take the name of a single share file
|
||||||
(as found by "tahoe find-shares") and print a summary of its contents to
|
(as found by "tahoe find-shares") and print a summary of its contents to
|
||||||
stdout. This includes a list of leases, summaries of the hash tree, and
|
stdout. This includes a list of leases, summaries of the hash tree, and
|
||||||
information from the UEB (URI Extension Block). For mutable file shares, it
|
information from the UEB (URI Extension Block). For mutable file shares, it
|
||||||
will describe which version (seqnum and root-hash) is being stored in this
|
will describe which version (seqnum and root-hash) is being stored in this
|
||||||
share.
|
share.
|
||||||
|
|
||||||
"tahoe debug dump-cap CAP" will take a URI (a file read-cap, or a directory
|
"``tahoe debug dump-cap CAP``" will take a URI (a file read-cap, or a directory
|
||||||
read- or write- cap) and unpack it into separate pieces. The most useful
|
read- or write- cap) and unpack it into separate pieces. The most useful
|
||||||
aspect of this command is to reveal the storage index for any given URI. This
|
aspect of this command is to reveal the storage index for any given URI. This
|
||||||
can be used to locate the share files that are holding the encoded+encrypted
|
can be used to locate the share files that are holding the encoded+encrypted
|
||||||
data for this file.
|
data for this file.
|
||||||
|
|
||||||
"tahoe debug repl" will launch an interactive python interpreter in which the
|
"``tahoe debug repl``" will launch an interactive python interpreter in which
|
||||||
Tahoe packages and modules are available on sys.path (e.g. by using 'import
|
the Tahoe packages and modules are available on sys.path (e.g. by using 'import
|
||||||
allmydata'). This is most useful from a source tree: it simply sets the
|
allmydata'). This is most useful from a source tree: it simply sets the
|
||||||
PYTHONPATH correctly and runs the 'python' executable.
|
PYTHONPATH correctly and runs the 'python' executable.
|
||||||
|
|
||||||
"tahoe debug corrupt-share SHAREFILE" will flip a bit in the given sharefile.
|
"``tahoe debug corrupt-share SHAREFILE``" will flip a bit in the given
|
||||||
This can be used to test the client-side verification/repair code. Obviously
|
sharefile. This can be used to test the client-side verification/repair code.
|
||||||
this command should not be used during normal operation.
|
Obviously, this command should not be used during normal operation.
|
||||||
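The bit-flip described above can be sketched in a few lines of Python. The offset and bit chosen here are arbitrary for illustration; the real command understands the actual share-file layout:

```python
import os
import tempfile

def corrupt_share(path, byte_offset=32, bit=0):
    # Flip one bit of the file at 'path' in place (a minimal sketch of
    # a corrupt-share style tool; offsets are arbitrary, not the real
    # share layout).
    with open(path, "r+b") as f:
        f.seek(byte_offset)
        b = f.read(1)[0]
        f.seek(byte_offset)
        f.write(bytes([b ^ (1 << bit)]))

# Demonstrate on a throwaway 64-byte file of zeros.
fd, path = tempfile.mkstemp()
os.write(fd, bytes(64))
os.close(fd)
corrupt_share(path)
with open(path, "rb") as f:
    data = f.read()
```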
|
|
|
@ -1,15 +1,19 @@
|
||||||
= Tahoe-LAFS FTP and SFTP Frontends =
|
=================================
|
||||||
|
Tahoe-LAFS FTP and SFTP Frontends
|
||||||
|
=================================
|
||||||
|
|
||||||
1. FTP/SFTP Background
|
1. `FTP/SFTP Background`_
|
||||||
2. Tahoe-LAFS Support
|
2. `Tahoe-LAFS Support`_
|
||||||
3. Creating an Account File
|
3. `Creating an Account File`_
|
||||||
4. Configuring FTP Access
|
4. `Configuring FTP Access`_
|
||||||
5. Configuring SFTP Access
|
5. `Configuring SFTP Access`_
|
||||||
6. Dependencies
|
6. `Dependencies`_
|
||||||
7. Immutable and mutable files
|
7. `Immutable and mutable files`_
|
||||||
|
8. `Known Issues`_
|
||||||
|
|
||||||
|
|
||||||
== FTP/SFTP Background ==
|
FTP/SFTP Background
|
||||||
|
===================
|
||||||
|
|
||||||
FTP is the venerable internet file-transfer protocol, first developed in
|
FTP is the venerable internet file-transfer protocol, first developed in
|
||||||
1971. The FTP server usually listens on port 21. A separate connection is
|
1971. The FTP server usually listens on port 21. A separate connection is
|
||||||
|
@ -26,8 +30,8 @@ Both FTP and SFTP were developed assuming a UNIX-like server, with accounts
|
||||||
and passwords, octal file modes (user/group/other, read/write/execute), and
|
and passwords, octal file modes (user/group/other, read/write/execute), and
|
||||||
ctime/mtime timestamps.
|
ctime/mtime timestamps.
|
||||||
|
|
||||||
|
Tahoe-LAFS Support
|
||||||
== Tahoe-LAFS Support ==
|
==================
|
||||||
|
|
||||||
All Tahoe-LAFS client nodes can run a frontend FTP server, allowing regular FTP
|
All Tahoe-LAFS client nodes can run a frontend FTP server, allowing regular FTP
|
||||||
clients (like /usr/bin/ftp, ncftp, and countless others) to access the
|
clients (like /usr/bin/ftp, ncftp, and countless others) to access the
|
||||||
|
@ -49,12 +53,12 @@ HTTP-based login mechanism, backed by simple PHP script and a database. The
|
||||||
latter form is used by allmydata.com to provide secure access to customer
|
latter form is used by allmydata.com to provide secure access to customer
|
||||||
rootcaps.
|
rootcaps.
|
||||||
|
|
||||||
|
Creating an Account File
|
||||||
== Creating an Account File ==
|
========================
|
||||||
|
|
||||||
To use the first form, create a file (probably in
|
To use the first form, create a file (probably in
|
||||||
BASEDIR/private/ftp.accounts) in which each non-comment/non-blank line is a
|
BASEDIR/private/ftp.accounts) in which each non-comment/non-blank line is a
|
||||||
space-separated line of (USERNAME, PASSWORD, ROOTCAP), like so:
|
space-separated line of (USERNAME, PASSWORD, ROOTCAP), like so::
|
||||||
|
|
||||||
% cat BASEDIR/private/ftp.accounts
|
% cat BASEDIR/private/ftp.accounts
|
||||||
# This is a password line, (username, password, rootcap)
|
# This is a password line, (username, password, rootcap)
|
||||||
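A minimal sketch of parsing this accounts-file format: comment and blank lines are skipped, and each remaining line splits into three space-separated fields. The sample rootcap below is a placeholder, not a real cap:

```python
def parse_accounts(text):
    # Parse an ftp.accounts-style file into {username: (password, rootcap)}.
    accounts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        username, password, rootcap = line.split(None, 2)
        accounts[username] = (password, rootcap)
    return accounts

sample = """\
# This is a password line, (username, password, rootcap)
alice sekrit URI:DIR2:xxxx:yyyy
"""
accounts = parse_accounts(sample)
```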
|
@ -69,11 +73,11 @@ these strings.
|
||||||
Now add an 'accounts.file' directive to your tahoe.cfg file, as described
|
Now add an 'accounts.file' directive to your tahoe.cfg file, as described
|
||||||
in the next sections.
|
in the next sections.
|
||||||
|
|
||||||
|
Configuring FTP Access
|
||||||
== Configuring FTP Access ==
|
======================
|
||||||
|
|
||||||
To enable the FTP server with an accounts file, add the following lines to
|
To enable the FTP server with an accounts file, add the following lines to
|
||||||
the BASEDIR/tahoe.cfg file:
|
the BASEDIR/tahoe.cfg file::
|
||||||
|
|
||||||
[ftpd]
|
[ftpd]
|
||||||
enabled = true
|
enabled = true
|
||||||
|
@ -85,7 +89,7 @@ interface only. The "accounts.file" pathname will be interpreted
|
||||||
relative to the node's BASEDIR.
|
relative to the node's BASEDIR.
|
||||||
|
|
||||||
To enable the FTP server with an account server instead, provide the URL of
|
To enable the FTP server with an account server instead, provide the URL of
|
||||||
that server in an "accounts.url" directive:
|
that server in an "accounts.url" directive::
|
||||||
|
|
||||||
[ftpd]
|
[ftpd]
|
||||||
enabled = true
|
enabled = true
|
||||||
|
@ -100,8 +104,8 @@ if you connect to the FTP server remotely. The examples above include
|
||||||
":interface=127.0.0.1" in the "port" option, which causes the server to only
|
":interface=127.0.0.1" in the "port" option, which causes the server to only
|
||||||
accept connections from localhost.
|
accept connections from localhost.
|
||||||
|
|
||||||
|
Configuring SFTP Access
|
||||||
== Configuring SFTP Access ==
|
=======================
|
||||||
|
|
||||||
The Tahoe-LAFS SFTP server requires a host keypair, just like the regular SSH
|
The Tahoe-LAFS SFTP server requires a host keypair, just like the regular SSH
|
||||||
server. It is important to give each server a distinct keypair, to prevent
|
server. It is important to give each server a distinct keypair, to prevent
|
||||||
|
@ -122,16 +126,16 @@ policy by including ":interface=127.0.0.1" in the "port" option, which
|
||||||
causes the server to only accept connections from localhost.
|
causes the server to only accept connections from localhost.
|
||||||
|
|
||||||
You will use directives in the tahoe.cfg file to tell the SFTP code where to
|
You will use directives in the tahoe.cfg file to tell the SFTP code where to
|
||||||
find these keys. To create one, use the ssh-keygen tool (which comes with the
|
find these keys. To create one, use the ``ssh-keygen`` tool (which comes with
|
||||||
standard openssh client distribution):
|
the standard openssh client distribution)::
|
||||||
|
|
||||||
% cd BASEDIR
|
% cd BASEDIR
|
||||||
% ssh-keygen -f private/ssh_host_rsa_key
|
% ssh-keygen -f private/ssh_host_rsa_key
|
||||||
|
|
||||||
The server private key file must not have a passphrase.
|
The server private key file must not have a passphrase.
|
||||||
|
|
||||||
Then, to enable the SFTP server with an accounts file, add the following
|
Then, to enable the SFTP server with an accounts file, add the following
|
||||||
lines to the BASEDIR/tahoe.cfg file:
|
lines to the BASEDIR/tahoe.cfg file::
|
||||||
|
|
||||||
[sftpd]
|
[sftpd]
|
||||||
enabled = true
|
enabled = true
|
||||||
|
@ -144,7 +148,7 @@ The SFTP server will listen on the given port number and on the loopback
|
||||||
interface only. The "accounts.file" pathname will be interpreted
|
interface only. The "accounts.file" pathname will be interpreted
|
||||||
relative to the node's BASEDIR.
|
relative to the node's BASEDIR.
|
||||||
|
|
||||||
Or, to use an account server instead, do this:
|
Or, to use an account server instead, do this::
|
||||||
|
|
||||||
[sftpd]
|
[sftpd]
|
||||||
enabled = true
|
enabled = true
|
||||||
|
@ -158,13 +162,13 @@ isn't very useful except for testing.
|
||||||
|
|
||||||
For further information on SFTP compatibility and known issues with various
|
For further information on SFTP compatibility and known issues with various
|
||||||
clients and with the sshfs filesystem, see
|
clients and with the sshfs filesystem, see
|
||||||
<http://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend>.
|
http://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend .
|
||||||
|
|
||||||
|
Dependencies
|
||||||
|
============
|
||||||
|
|
||||||
== Dependencies ==
|
The Tahoe-LAFS SFTP server requires the Twisted "Conch" component (a "conch" is
|
||||||
|
a twisted shell, get it?). Many Linux distributions package the Conch code
|
||||||
The Tahoe-LAFS SFTP server requires the Twisted "Conch" component (a "conch" is a
|
|
||||||
twisted shell, get it?). Many Linux distributions package the Conch code
|
|
||||||
separately: debian puts it in the "python-twisted-conch" package. Conch
|
separately: debian puts it in the "python-twisted-conch" package. Conch
|
||||||
requires the "pycrypto" package, which is a Python+C implementation of many
|
requires the "pycrypto" package, which is a Python+C implementation of many
|
||||||
cryptographic functions (the debian package is named "python-crypto").
|
cryptographic functions (the debian package is named "python-crypto").
|
||||||
|
@ -183,8 +187,8 @@ http://twistedmatrix.com/trac/ticket/3462 . The Tahoe-LAFS node will refuse to
|
||||||
start the FTP server unless it detects the necessary support code in Twisted.
|
start the FTP server unless it detects the necessary support code in Twisted.
|
||||||
This patch is not needed for SFTP.
|
This patch is not needed for SFTP.
|
||||||
|
|
||||||
|
Immutable and Mutable Files
|
||||||
== Immutable and Mutable Files ==
|
===========================
|
||||||
|
|
||||||
All files created via SFTP (and FTP) are immutable files. However, files
|
All files created via SFTP (and FTP) are immutable files. However, files
|
||||||
can only be created in writeable directories, which allows the directory
|
can only be created in writeable directories, which allows the directory
|
||||||
|
@ -211,22 +215,26 @@ read-only.
|
||||||
If SFTP is used to write to an existing mutable file, it will publish a
|
If SFTP is used to write to an existing mutable file, it will publish a
|
||||||
new version when the file handle is closed.
|
new version when the file handle is closed.
|
||||||
|
|
||||||
|
Known Issues
|
||||||
|
============
|
||||||
|
|
||||||
== Known Issues ==
|
Mutable files are not supported by the FTP frontend (`ticket #680
|
||||||
|
<http://tahoe-lafs.org/trac/tahoe-lafs/ticket/680>`_). Currently, a directory
|
||||||
Mutable files are not supported by the FTP frontend (ticket #680). Currently,
|
containing mutable files cannot even be listed over FTP.
|
||||||
a directory containing mutable files cannot even be listed over FTP.
|
|
||||||
|
|
||||||
The FTP frontend sometimes fails to report errors, for example if an upload
|
The FTP frontend sometimes fails to report errors, for example if an upload
|
||||||
fails because it does meet the "servers of happiness" threshold (ticket #1081).
|
fails because it does not meet the "servers of happiness" threshold (`ticket #1081
|
||||||
Upload errors also may not be reported when writing files using SFTP via sshfs
|
<http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1081>`_). Upload errors also may not
|
||||||
(ticket #1059).
|
be reported when writing files using SFTP via sshfs (`ticket #1059
|
||||||
|
<http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1059>`_).
|
||||||
|
|
||||||
Non-ASCII filenames are not supported by FTP (ticket #682). They can be used
|
Non-ASCII filenames are not supported by FTP (`ticket #682
|
||||||
with SFTP only if the client encodes filenames as UTF-8 (ticket #1089).
|
<http://tahoe-lafs.org/trac/tahoe-lafs/ticket/682>`_). They can be used
|
||||||
|
with SFTP only if the client encodes filenames as UTF-8 (`ticket #1089
|
||||||
|
<http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1089>`_).
|
||||||
|
|
||||||
The gateway node may incur a memory leak when accessing many files via SFTP
|
The gateway node may incur a memory leak when accessing many files via SFTP
|
||||||
(ticket #1045).
|
(`ticket #1045 <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1045>`_).
|
||||||
|
|
||||||
For other known issues in SFTP, see
|
For other known issues in SFTP, see
|
||||||
<http://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend>.
|
<http://tahoe-lafs.org/trac/tahoe-lafs/wiki/SftpFrontend>.
|
||||||
|
|
|
@ -1,3 +1,11 @@
|
||||||
|
===============
|
||||||
|
Download status
|
||||||
|
===============
|
||||||
|
|
||||||
|
|
||||||
|
Introduction
|
||||||
|
============
|
||||||
|
|
||||||
The WUI will display the "status" of uploads and downloads.
|
The WUI will display the "status" of uploads and downloads.
|
||||||
|
|
||||||
The Welcome Page has a link entitled "Recent Uploads and Downloads"
|
The Welcome Page has a link entitled "Recent Uploads and Downloads"
|
||||||
|
@ -18,53 +26,110 @@ http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1169#comment:1
|
||||||
Then Zooko lightly edited it while copying it into the docs/
|
Then Zooko lightly edited it while copying it into the docs/
|
||||||
directory.
|
directory.
|
||||||
|
|
||||||
-------
|
What's involved in a download?
|
||||||
|
==============================
|
||||||
|
|
||||||
First, what's involved in a download?:
|
Downloads are triggered by read() calls, each with a starting offset (defaults
|
||||||
|
to 0) and a length (defaults to the whole file). A regular webapi GET request
|
||||||
|
will result in a whole-file read() call.
|
||||||
|
|
||||||
downloads are triggered by read() calls, each with a starting offset (defaults to 0) and a length (defaults to the whole file). A regular webapi GET request will result in a whole-file read() call
|
Each read() call turns into an ordered sequence of get_segment() calls. A
|
||||||
each read() call turns into an ordered sequence of get_segment() calls. A whole-file read will fetch all segments, in order, but partial reads or multiple simultaneous reads will result in random-access of segments. Segment reads always return ciphertext: the layer above that (in read()) is responsible for decryption.
|
whole-file read will fetch all segments, in order, but partial reads or
|
||||||
before we can satisfy any segment reads, we need to find some shares. ("DYHB" is an abbreviation for "Do You Have Block", and is the message we send to storage servers to ask them if they have any shares for us. The name is historical, from Mojo Nation/Mnet/Mountain View, but nicely distinctive. Tahoe-LAFS's actual message name is remote_get_buckets().). Responses come back eventually, or don't.
|
multiple simultaneous reads will result in random-access of segments. Segment
|
||||||
Once we get enough positive DYHB responses, we have enough shares to start downloading. We send "block requests" for various pieces of the share. Responses come back eventually, or don't.
|
reads always return ciphertext: the layer above that (in read()) is responsible
|
||||||
When we get enough block-request responses for a given segment, we can decode the data and satisfy the segment read.
|
for decryption.
|
||||||
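The read()-to-segment mapping described above can be sketched as follows, assuming fixed-size segments (131072 bytes, i.e. 128 KiB, is Tahoe's default maximum segment size; this is a simplification, not the real downloader code):

```python
def segments_for_read(offset, length, segment_size):
    # Return the ordered list of segment indices a read(offset, length)
    # must fetch. A whole-file read touches every segment.
    if length == 0:
        return []
    first = offset // segment_size
    last = (offset + length - 1) // segment_size
    return list(range(first, last + 1))

segs = segments_for_read(offset=100_000, length=300_000, segment_size=131_072)
```

A partial read starting or ending mid-segment still fetches the whole segment and then uses only part of the data, as described below.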
When the segment read completes, some or all of the segment data is used to satisfy the read() call (if the read call started or ended in the middle of a segment, we'll only use part of the data, otherwise we'll use all of it).
|
|
||||||
|
|
||||||
With that background, here is the data currently on the download-status page:
|
Before we can satisfy any segment reads, we need to find some shares. ("DYHB"
|
||||||
|
is an abbreviation for "Do You Have Block", and is the message we send to
|
||||||
|
storage servers to ask them if they have any shares for us. The name is
|
||||||
|
historical, from Mojo Nation/Mnet/Mountain View, but nicely distinctive.
|
||||||
|
Tahoe-LAFS's actual message name is remote_get_buckets().). Responses come
|
||||||
|
back eventually, or don't.
|
||||||
|
|
||||||
"DYHB Requests": this shows every Do-You-Have-Block query sent to storage servers and their results. Each line shows the following:
|
Once we get enough positive DYHB responses, we have enough shares to start
|
||||||
the serverid to which the request was sent
|
downloading. We send "block requests" for various pieces of the share.
|
||||||
the time at which the request was sent. Note that all timestamps are relative to the start of the first read() call and indicated with a "+" sign
|
Responses come back eventually, or don't.
|
||||||
the time at which the response was received (if ever)
|
|
||||||
the share numbers that the server has, if any
|
|
||||||
the elapsed time taken by the request
|
|
||||||
also, each line is colored according to the serverid. This color is also used in the "Requests" section below.
|
|
||||||
|
|
||||||
"Read Events": this shows all the FileNode read() calls and their overall results. Each line shows:
|
When we get enough block-request responses for a given segment, we can decode
|
||||||
the range of the file that was requested (as [OFFSET:+LENGTH]). A whole-file GET will start at 0 and read the entire file.
|
the data and satisfy the segment read.
|
||||||
the time at which the read() was made
|
|
||||||
the time at which the request finished, either because the last byte of data was returned to the read() caller, or because they cancelled the read by calling stopProducing (i.e. closing the HTTP connection)
|
|
||||||
the number of bytes returned to the caller so far
|
|
||||||
the time spent on the read, so far
|
|
||||||
the total time spent in AES decryption
|
|
||||||
total time spend paused by the client (pauseProducing), generally because the HTTP connection filled up, which most streaming media players will do to limit how much data they have to buffer
|
|
||||||
effective speed of the read(), not including paused time
|
|
||||||
|
|
||||||
"Segment Events": this shows each get_segment() call and its resolution. This table is not well organized, and my post-1.8.0 work will clean it up a lot. In its present form, it records "request" and "delivery" events separately, indicated by the "type" column.
|
When the segment read completes, some or all of the segment data is used to
|
||||||
Each request shows the segment number being requested and the time at which the get_segment() call was made
|
satisfy the read() call (if the read call started or ended in the middle of a
|
||||||
Each delivery shows:
|
segment, we'll only use part of the data, otherwise we'll use all of it).
|
||||||
segment number
|
|
||||||
range of file data (as [OFFSET:+SIZE]) delivered
|
|
||||||
elapsed time spent doing ZFEC decoding
|
|
||||||
overall elapsed time fetching the segment
|
|
||||||
effective speed of the segment fetch
|
|
||||||
|
|
||||||
"Requests": this shows every block-request sent to the storage servers. Each line shows:
|
Data on the download-status page
|
||||||
the server to which the request was sent
|
================================
|
||||||
which share number it is referencing
|
|
||||||
the portion of the share data being requested (as [OFFSET:+SIZE])
|
|
||||||
the time the request was sent
|
|
||||||
the time the response was received (if ever)
|
|
||||||
the amount of data that was received (which might be less than SIZE if we tried to read off the end of the share)
|
|
||||||
the elapsed time for the request (RTT=Round-Trip-Time)
|
|
||||||
|
|
||||||
Also note that each Request line is colored according to the serverid it was sent to. And all timestamps are shown relative to the start of the first read() call: for example the first DYHB message was sent at +0.001393s about 1.4 milliseconds after the read() call started everything off.
|
DYHB Requests
|
||||||
|
-------------
|
||||||
|
|
||||||
|
This shows every Do-You-Have-Block query sent to storage servers and their
|
||||||
|
results. Each line shows the following:
|
||||||
|
|
||||||
|
* the serverid to which the request was sent
|
||||||
|
* the time at which the request was sent. Note that all timestamps are
|
||||||
|
relative to the start of the first read() call and indicated with a "+" sign
|
||||||
|
* the time at which the response was received (if ever)
|
||||||
|
* the share numbers that the server has, if any
|
||||||
|
* the elapsed time taken by the request
|
||||||
|
|
||||||
|
Also, each line is colored according to the serverid. This color is also used
|
||||||
|
in the "Requests" section below.
|
||||||
|
|
||||||
|
Read Events
|
||||||
|
-----------
|
||||||
|
|
||||||
|
This shows all the FileNode read() calls and their overall results. Each line
|
||||||
|
shows:
|
||||||
|
|
||||||
|
* the range of the file that was requested (as [OFFSET:+LENGTH]). A whole-file
|
||||||
|
GET will start at 0 and read the entire file.
|
||||||
|
* the time at which the read() was made
|
||||||
|
* the time at which the request finished, either because the last byte of data
|
||||||
|
was returned to the read() caller, or because they cancelled the read by
|
||||||
|
calling stopProducing (i.e. closing the HTTP connection)
|
||||||
|
* the number of bytes returned to the caller so far
|
||||||
|
* the time spent on the read, so far
|
||||||
|
* the total time spent in AES decryption
|
||||||
|
* total time spent paused by the client (pauseProducing), generally because the
|
||||||
|
HTTP connection filled up, which most streaming media players will do to
|
||||||
|
limit how much data they have to buffer
|
||||||
|
* effective speed of the read(), not including paused time
|
||||||
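The "effective speed" figure, as described, excludes time spent paused by the client; a minimal sketch of that metric (not the actual status-page code):

```python
def effective_read_speed(bytes_returned, elapsed_s, paused_s):
    # Bytes per second over the time the read was actually active,
    # i.e. total elapsed time minus time paused via pauseProducing.
    active = elapsed_s - paused_s
    return bytes_returned / active if active > 0 else 0.0

speed = effective_read_speed(1_000_000, elapsed_s=5.0, paused_s=3.0)
```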
|
|
||||||
|
Segment Events
|
||||||
|
--------------
|
||||||
|
|
||||||
|
This shows each get_segment() call and its resolution. This table is not well
|
||||||
|
organized, and my post-1.8.0 work will clean it up a lot. In its present form,
|
||||||
|
it records "request" and "delivery" events separately, indicated by the "type"
|
||||||
|
column.
|
||||||
|
|
||||||
|
Each request shows the segment number being requested and the time at which the
|
||||||
|
get_segment() call was made.
|
||||||
|
|
||||||
|
Each delivery shows:
|
||||||
|
|
||||||
|
* segment number
|
||||||
|
* range of file data (as [OFFSET:+SIZE]) delivered
|
||||||
|
* elapsed time spent doing ZFEC decoding
|
||||||
|
* overall elapsed time fetching the segment
|
||||||
|
* effective speed of the segment fetch
|
||||||
|
|
||||||
|
Requests
|
||||||
|
--------
|
||||||
|
|
||||||
|
This shows every block-request sent to the storage servers. Each line shows:
|
||||||
|
|
||||||
|
* the server to which the request was sent
|
||||||
|
* which share number it is referencing
|
||||||
|
* the portion of the share data being requested (as [OFFSET:+SIZE])
|
||||||
|
* the time the request was sent
|
||||||
|
* the time the response was received (if ever)
|
||||||
|
* the amount of data that was received (which might be less than SIZE if we
|
||||||
|
tried to read off the end of the share)
|
||||||
|
* the elapsed time for the request (RTT=Round-Trip-Time)
|
||||||
|
|
||||||
|
Also note that each Request line is colored according to the serverid it was
|
||||||
|
sent to. And all timestamps are shown relative to the start of the first
|
||||||
|
read() call: for example, the first DYHB message was sent at +0.001393s, about
|
||||||
|
1.4 milliseconds after the read() call started everything off.
|
||||||
|
|
File diff suppressed because it is too large
|
@ -1,5 +1,6 @@
|
||||||
|
===================
|
||||||
"URI Extension Block"
|
URI Extension Block
|
||||||
|
===================
|
||||||
|
|
||||||
This block is a serialized dictionary with string keys and string values
|
This block is a serialized dictionary with string keys and string values
|
||||||
(some of which represent numbers, some of which are SHA-256 hashes). All
|
(some of which represent numbers, some of which are SHA-256 hashes). All
|
||||||
|
@ -13,7 +14,7 @@ clients who do not wish to do incremental validation) can be performed solely
|
||||||
with the data from this block.
|
with the data from this block.
|
||||||
|
|
||||||
At the moment, this data block contains the following keys (and an estimate
|
At the moment, this data block contains the following keys (and an estimate
|
||||||
on their sizes):
|
on their sizes)::
|
||||||
|
|
||||||
size 5
|
size 5
|
||||||
segment_size 7
|
segment_size 7
|
||||||
|
@ -42,7 +43,7 @@ files, regardless of file size. Therefore hash trees (which have a size that
|
||||||
depends linearly upon the number of segments) are stored elsewhere in the
|
depends linearly upon the number of segments) are stored elsewhere in the
|
||||||
bucket, with only the hash tree root stored in this data block.
|
bucket, with only the hash tree root stored in this data block.
|
||||||
|
|
||||||
This block will be serialized as follows:
|
This block will be serialized as follows::
|
||||||
|
|
||||||
assert that all keys match ^[a-zA-z_\-]+$
|
assert that all keys match ^[a-zA-Z_\-]+$
|
||||||
sort all the keys lexicographically
|
sort all the keys lexicographically
|
||||||
|
@ -51,7 +52,7 @@ This block will be serialized as follows:
|
||||||
write(netstring(data[k]))
|
write(netstring(data[k]))
|
||||||
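A Python sketch of the serialization scheme above. It assumes the key pattern is intended to be ``[a-zA-Z_-]``, and uses the classic netstring encoding ``<len>:<bytes>,``; the sample keys and values are illustrative:

```python
import re

def netstring(s):
    # Classic netstring encoding of a bytes value: "<len>:<bytes>,".
    return b"%d:%s," % (len(s), s)

def serialize_ueb(data):
    # Check key characters, sort keys lexicographically, then emit
    # "key:" followed by a netstring of each value, per the scheme above.
    out = b""
    for k in sorted(data):
        assert re.match(rb"^[a-zA-Z_\-]+$", k)
        out += k + b":" + netstring(data[k])
    return out

blob = serialize_ueb({b"size": b"12345", b"segment_size": b"131072"})
```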
|
|
||||||
|
|
||||||
Serialized size:
|
Serialized size::
|
||||||
|
|
||||||
dense binary (but decimal) packing: 160+46=206
|
dense binary (but decimal) packing: 160+46=206
|
||||||
including 'key:' (185) and netstring (6*3+7*4=46) on values: 231
|
including 'key:' (185) and netstring (6*3+7*4=46) on values: 231
|
||||||
|
|
|
@ -1,5 +1,6 @@
|
||||||
|
==========================
|
||||||
= Tahoe-LAFS Directory Nodes =
|
Tahoe-LAFS Directory Nodes
|
||||||
|
==========================
|
||||||
|
|
||||||
As explained in the architecture docs, Tahoe-LAFS can be roughly viewed as
|
As explained in the architecture docs, Tahoe-LAFS can be roughly viewed as
|
||||||
a collection of three layers. The lowest layer is the key-value store: it
|
a collection of three layers. The lowest layer is the key-value store: it
|
||||||
|
@ -13,12 +14,30 @@ friends.
|
||||||
|
|
||||||
This document examines the middle layer, the "filesystem".
|
This document examines the middle layer, the "filesystem".
|
||||||
|
|
||||||
== Key-value Store Primitives ==
|
1. `Key-value Store Primitives`_
|
||||||
|
2. `Filesystem goals`_
|
||||||
|
3. `Dirnode goals`_
|
||||||
|
4. `Dirnode secret values`_
|
||||||
|
5. `Dirnode storage format`_
|
||||||
|
6. `Dirnode sizes, mutable-file initial read sizes`_
|
||||||
|
7. `Design Goals, redux`_
|
||||||
|
|
||||||
|
1. `Confidentiality leaks in the storage servers`_
|
||||||
|
2. `Integrity failures in the storage servers`_
|
||||||
|
3. `Improving the efficiency of dirnodes`_
|
||||||
|
4. `Dirnode expiration and leases`_
|
||||||
|
|
||||||
|
8. `Starting Points: root dirnodes`_
|
||||||
|
9. `Mounting and Sharing Directories`_
|
||||||
|
10. `Revocation`_
|
||||||
|
|
||||||
|
Key-value Store Primitives
|
||||||
|
==========================
|
||||||
|
|
||||||
In the lowest layer (key-value store), there are two operations that reference
|
In the lowest layer (key-value store), there are two operations that reference
|
||||||
immutable data (which we refer to as "CHK URIs" or "CHK read-capabilities" or
|
immutable data (which we refer to as "CHK URIs" or "CHK read-capabilities" or
|
||||||
"CHK read-caps"). One puts data into the grid (but only if it doesn't exist
|
"CHK read-caps"). One puts data into the grid (but only if it doesn't exist
|
||||||
already), the other retrieves it:
|
already), the other retrieves it::
|
||||||
|
|
||||||
chk_uri = put(data)
|
chk_uri = put(data)
|
||||||
data = get(chk_uri)
|
data = get(chk_uri)
|
||||||
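A toy stand-in for these two immutable primitives, deriving the URI from a hash of the data so that identical data maps to the same key. Real CHK caps also carry an encryption key and erasure-coding parameters; the URI format here is a placeholder:

```python
import hashlib

class ToyCHKStore:
    # Content-addressed store: put() derives the key from the data
    # (so re-uploading identical data is a no-op), get() retrieves by key.
    def __init__(self):
        self._blobs = {}

    def put(self, data):
        chk_uri = "URI:CHK:" + hashlib.sha256(data).hexdigest()[:16]
        self._blobs.setdefault(chk_uri, data)
        return chk_uri

    def get(self, chk_uri):
        return self._blobs[chk_uri]

store = ToyCHKStore()
uri = store.put(b"hello grid")
```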
|
@ -26,13 +45,14 @@ already), the other retrieves it:
|
||||||
We also have three operations which reference mutable data (which we refer to
|
We also have three operations which reference mutable data (which we refer to
|
||||||
as "mutable slots", or "mutable write-caps and read-caps", or sometimes "SSK
|
as "mutable slots", or "mutable write-caps and read-caps", or sometimes "SSK
|
||||||
slots"). One creates a slot with some initial contents, a second replaces the
|
slots"). One creates a slot with some initial contents, a second replaces the
|
||||||
contents of a pre-existing slot, and the third retrieves the contents:
|
contents of a pre-existing slot, and the third retrieves the contents::
|
||||||
|
|
||||||
mutable_uri = create(initial_data)
|
mutable_uri = create(initial_data)
|
||||||
replace(mutable_uri, new_data)
|
replace(mutable_uri, new_data)
|
||||||
data = get(mutable_uri)
|
data = get(mutable_uri)
|
||||||
|
|
||||||
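A toy in-memory sketch may make these five primitives concrete. Everything here is illustrative only: real CHK caps are derived from the encryption key and various hashes, the mutable-slot identifier is really a key-pair-derived capability, and ``put``/``get``/``create``/``replace`` are the shorthand names from the text above, not Tahoe's actual API.

```python
import hashlib
import os

# Illustrative in-memory stand-ins for the five grid primitives above.
_immutable = {}   # chk_uri -> data
_mutable = {}     # mutable_uri -> current data

def put(data):
    # Content-addressed: the same data always yields the same cap, and
    # it is only stored "if it doesn't exist already".
    chk_uri = "URI:CHK:" + hashlib.sha256(data).hexdigest()
    _immutable.setdefault(chk_uri, data)
    return chk_uri

def create(initial_data):
    # A mutable slot gets a fresh random identifier, not a content hash.
    mutable_uri = "URI:SSK:" + os.urandom(8).hex()
    _mutable[mutable_uri] = initial_data
    return mutable_uri

def replace(mutable_uri, new_data):
    _mutable[mutable_uri] = new_data

def get(uri):
    # One lookup operation serves both immutable and mutable caps.
    return _immutable[uri] if uri in _immutable else _mutable[uri]
```

Note the asymmetry this model preserves: retrieving a CHK cap always returns the same bytes, while retrieving a mutable cap returns whatever was most recently written.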
Filesystem Goals
================

The main goal for the middle (filesystem) layer is to give users a way to
organize the data that they have uploaded into the grid. The traditional way

The directory structure is therefore a directed graph of nodes, in which each
node might be a directory node or a file node. All file nodes are terminal
nodes.

Dirnode Goals
=============

What properties might be desirable for these directory nodes? In no
particular order:

1. functional. Code which does not work doesn't count.
2. easy to document, explain, and understand
3. confidential: it should not be possible for others to see the contents of
   a directory
4. integrity: it should not be possible for others to modify the contents
   of a directory
5. available: directories should survive host failure, just like files do
6. efficient: in storage, communication bandwidth, number of round-trips
7. easy to delegate individual directories in a flexible way
8. updateness: everybody looking at a directory should see the same contents
9. monotonicity: everybody looking at a directory should see the same
   sequence of updates

Some of these goals are mutually exclusive. For example, availability and
consistency are opposing, so it is not possible to achieve #5 and #8 at the

version 1 and other shares of version 2). In extreme cases of simultaneous
update, mutable files might suffer from non-monotonicity.


Dirnode secret values
=====================

As mentioned before, dirnodes are simply a special way to interpret the
contents of a mutable file, so the secret keys and capability strings

URI:DIR2-RO:buxjqykt637u61nnmjg7s8zkny:ar8r5j99a4mezdojejmsfp4fj1zeky9gjigyrid4u
is a read-capability URI, both for the same dirnode.


Dirnode storage format
======================

Each dirnode is stored in a single mutable file, distributed in the Tahoe-LAFS
grid. The contents of this file are a serialized list of netstrings, one per

other users who have read-only access to 'foo' will be unable to decrypt its
rwcap slot, this limits those users to read-only access to 'bar' as well,
thus providing the transitive readonlyness that we desire.

Dirnode sizes, mutable-file initial read sizes
==============================================

How big are dirnodes? When reading dirnode data out of mutable files, how
large should our initial read be? If we guess exactly, we can read a dirnode

Assuming child names are between 10 and 99 characters long, how long are the
various pieces of a dirnode?

::

 netstring(name) ~= 4+len(name)
 chk-cap = 97 (for 4-char filesizes)
 dir-rw-cap = 88
 JSON({ctime=float,mtime=float,'tahoe':{linkcrtime=float,linkmotime=float}}): 137
 netstring(metadata) = 4+137 = 141

so a CHK entry is::

 5+ 4+len(name) + 4+97 + 5+16+97+32 + 4+137
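This arithmetic can be checked directly. A hedged sketch: the netstring overhead figure follows from the wire format, while the grouping of the ``5+16+97+32`` term is copied verbatim from the formula above, not derived here.

```python
def netstring(data: bytes) -> bytes:
    # "<decimal length>:<payload>," adds 4 bytes of overhead for any
    # payload whose length has two digits, hence ~4+len(name) above.
    return b"%d:%s," % (len(data), data)

assert netstring(b"hello") == b"5:hello,"
assert len(netstring(b"x" * 15)) == 4 + 15

# The CHK-entry formula above, evaluated for a 15-byte child name:
entry = 5 + (4 + 15) + (4 + 97) + (5 + 16 + 97 + 32) + (4 + 137)
assert entry == 416
```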
And a 15-byte filename gives a 416-byte entry. When the entry points at a
subdirectory instead of a file, the entry is a little bit smaller. So an
empty directory uses 0 bytes, a directory with one child uses about 416

get 139ish bytes of data in each share per child.

The pubkey, signature, and hashes form the first 935ish bytes of the
container, then comes our data, then about 1216 bytes of encprivkey. So if we
read the first::

 1kB: we get 65bytes of dirnode data : only empty directories
 2kB: 1065bytes: about 8
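The arithmetic behind this table can be verified, assuming (per the surrounding figures) a roughly 935-byte header, roughly 139 bytes of per-child data in each share, and that the "1kB"/"2kB" rows mean 1000 and 2000 bytes:

```python
# All figures are the approximate ones quoted in the text.
header = 935       # pubkey + signature + hashes ("935ish bytes")
per_child = 139    # "139ish bytes of data in each share per child"

assert 1000 - header == 65      # the 1kB row: room for empty dirs only
assert 2000 - header == 1065    # the 2kB row
children = (2000 - header) / per_child
assert 7 < children < 8         # i.e. "about 8" children
```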
we read the mutable file, which should give good performance (one RTT) for
small directories.


Design Goals, redux
===================

How well does this design meet the goals?

1. functional: YES: the code works and has extensive unit tests
2. documentable: YES: this document is the existence proof
3. confidential: YES: see below
4. integrity: MOSTLY: a coalition of storage servers can rollback individual
   mutable files, but not a single one. No server can
   substitute fake data as genuine.
5. availability: YES: as long as 'k' storage servers are present and have
   the same version of the mutable file, the dirnode will
   be available.
6. efficient: MOSTLY:
   network: single dirnode lookup is very efficient, since clients can
            fetch specific keys rather than being required to get or set
            the entire dirnode each time. Traversing many directories
            takes a lot of roundtrips, and these can't be collapsed with
            promise-pipelining because the intermediate values must only
            be visible to the client. Modifying many dirnodes at once
            (e.g. importing a large pre-existing directory tree) is pretty
            slow, since each graph edge must be created independently.
   storage: each child has a separate IV, which makes them larger than
            if all children were aggregated into a single encrypted string
7. delegation: VERY: each dirnode is a completely independent object,
   to which clients can be granted separate read-write or
   read-only access
8. updateness: VERY: with only a single point of access, and no caching,
   each client operation starts by fetching the current
   value, so there are no opportunities for staleness
9. monotonicity: VERY: the single point of access also protects against
   retrograde motion


Confidentiality leaks in the storage servers
--------------------------------------------

Dirnodes (and the mutable files upon which they are based) are very private
against other clients: traffic between the client and the storage servers is

attacker may be able to build up a graph with the same shape as the plaintext
filesystem, but with unlabeled edges and unknown file contents.


Integrity failures in the storage servers
-----------------------------------------

The mutable file's integrity mechanism (RSA signature on the hash of the file
contents) prevents the storage server from modifying the dirnode's contents

version number. This ensures that one or two misbehaving storage servers
cannot cause this rollback on their own.


Improving the efficiency of dirnodes
------------------------------------

The current mutable-file-based dirnode scheme suffers from certain
inefficiencies. A very large directory (with thousands or millions of

mutable file scheme which will use shared parameters to reduce the
directory-creation effort to a bare minimum (picking a random number instead
of generating two random primes).

When a backup program is run for the first time, it needs to copy a large
amount of data from a pre-existing filesystem into reliable storage. This
means that a large and complex directory structure needs to be duplicated in

encryption keys for each component directory, to get the benefits of both
schemes at once.


Dirnode expiration and leases
-----------------------------

Dirnodes are created any time a client wishes to add a new directory. How
long do they live? What's to keep them from sticking around forever, taking

prompts the client to send out lease-cancellation messages, allowing the data
to be deleted.


Starting Points: root dirnodes
==============================

Any client can record the URI of a directory node in some external form (say,
in a local file) and use it as the starting point of later traversal. Each
Tahoe-LAFS user is expected to create a new (unattached) dirnode when they first
start using the grid, and record its URI for later use.

Mounting and Sharing Directories
================================

The biggest benefit of this dirnode approach is that sharing individual
directories is almost trivial. Alice creates a subdirectory that she wants to

indicate whether they want to grant read-write or read-only access to the
recipient. The recipient then needs an interface to drag the new folder into
their vdrive and give it a home.

Revocation
==========

When Alice decides that she no longer wants Bob to be able to access the
shared directory, what should she do? Suppose she's shared this folder with
=============
File Encoding
=============

When the client wishes to upload an immutable file, the first step is to
decide upon an encryption key. There are two methods: convergent or random.

Anybody who knows a Storage Index can retrieve the associated ciphertext:
ciphertexts are not secret.

.. image:: file-encoding1.svg

The ciphertext file is then broken up into segments. The last segment is
likely to be shorter than the rest. Each segment is erasure-coded into a

aka landlord, aka storage node, aka peer). The "share" held by each remote
shareholder is nominally just a collection of these blocks. The file will
be recoverable when a certain number of shares have been retrieved.

.. image:: file-encoding2.svg

The blocks are hashed as they are generated and transmitted. These
block hashes are put into a Merkle hash tree. When the last share has been

The root of this block hash tree is called the "block root hash" and
used in the next step.

.. image:: file-encoding3.svg
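The block-hash-tree construction can be sketched as follows. This is illustrative only: it omits Tahoe's per-purpose hash tags (described in the Hashes section) and its exact rules for padding a non-power-of-two number of leaves.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # SHA-256d, i.e. SHA-256 applied twice (see the Hashes section).
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def block_root_hash(blocks):
    # Hash every block, then combine the hashes pairwise until a single
    # root remains; that root is the share's "block root hash".
    layer = [sha256d(b) for b in blocks]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])   # pair up an odd hash with itself
        layer = [sha256d(left + right)
                 for left, right in zip(layer[0::2], layer[1::2])]
    return layer[0]
```

Because each internal node commits to its children, any single corrupted block changes the root, which is what lets a downloader validate blocks one at a time.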
There is a higher-level Merkle tree called the "share hash tree". Its leaves
are the block root hashes from each share. The root of this tree is called

The URI (also known as the immutable-file read-cap, since possessing it
grants the holder the capability to read the file's plaintext) is then
represented as a (relatively) short printable string like so::

 URI:CHK:auxet66ynq55naiy2ay7cgrshm:6rudoctmbxsmbg7gwtjlimd6umtwrrsxkjzthuldsmo4nnfoc6fa:3:10:1000000

.. image:: file-encoding4.svg
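Splitting the example read-cap on ``:`` exposes its fields. The reading of the trailing numbers as shares-needed (k), total shares (N), and file size, and of the two base32 strings as a 16-byte key and a 32-byte hash, are assumptions made here for illustration:

```python
# The example read-cap from the text, pulled apart field by field.
uri = ("URI:CHK:auxet66ynq55naiy2ay7cgrshm:"
       "6rudoctmbxsmbg7gwtjlimd6umtwrrsxkjzthuldsmo4nnfoc6fa:3:10:1000000")
scheme, kind, key, uri_hash, k, n, size = uri.split(":")

assert (scheme, kind) == ("URI", "CHK")
assert len(key) == 26        # 16 bytes base32-encoded (assumed)
assert len(uri_hash) == 52   # 32 bytes base32-encoded (assumed)
assert (int(k), int(n), int(size)) == (3, 10, 1000000)
```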
During download, when a peer begins to transmit a share, it first transmits
all of the parts of the share hash tree that are necessary to validate its

that are necessary to validate the first block. Then it transmits the
first block. It then continues this loop: transmitting any portions of the
block hash tree to validate block#N, then sending block#N.

.. image:: file-encoding5.svg

So the "share" that is sent to the remote peer actually consists of three
pieces, sent in a specific order as they become available, and retrieved

peers) into decoding, to produce the first segment of crypttext, which is
then decrypted to produce the first segment of plaintext, which is finally
delivered to the user.

.. image:: file-encoding6.svg

Hashes
======

All hashes use SHA-256d, as defined in Practical Cryptography (by Ferguson
and Schneier). All hashes use a single-purpose tag, e.g. the hash that
converts an encryption key into a storage index is defined as follows::

 SI = SHA256d(netstring("allmydata_immutable_key_to_storage_index_v1") + key)
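This definition can be transcribed almost literally. A sketch: whether Tahoe truncates the resulting 32-byte digest to form the actual storage index is not covered by this excerpt, so the full digest is returned here.

```python
import hashlib

def netstring(data: bytes) -> bytes:
    # "<decimal length>:<payload>," framing for the tag.
    return b"%d:%s," % (len(data), data)

def sha256d(data: bytes) -> bytes:
    # SHA-256 applied twice, per the text.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def storage_index(key: bytes) -> bytes:
    # SI = SHA256d(netstring(tag) + key), exactly as written above.
    tag = netstring(b"allmydata_immutable_key_to_storage_index_v1")
    return sha256d(tag + key)
```

The tag is baked into the hash input, so a storage index can never collide with a hash computed for any other tagged purpose.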
Using SHA-256d (instead of plain SHA-256) guards against length-extension
attacks. Using the tag protects our Merkle trees against attacks in which the
hash of a leaf is confused with a hash of two children (allowing an attacker
to generate corrupted data that nevertheless appears to be valid), and is
simply good "cryptographic hygiene". The `"Chosen Protocol Attack" by Kelsey,
Schneier, and Wagner <http://www.schneier.com/paper-chosen-protocol.html>`_ is
relevant. Putting the tag in a netstring guards against attacks that seek to
confuse the end of the tag with the beginning of the subsequent value.
=============
Mutable Files
=============

This describes the "RSA-based mutable files" which were shipped in Tahoe v0.8.0.

1. `Consistency vs. Availability`_
2. `The Prime Coordination Directive: "Don't Do That"`_
3. `Small Distributed Mutable Files`_

   1. `SDMF slots overview`_
   2. `Server Storage Protocol`_
   3. `Code Details`_
   4. `SMDF Slot Format`_
   5. `Recovery`_

4. `Medium Distributed Mutable Files`_
5. `Large Distributed Mutable Files`_
6. `TODO`_

Mutable File Slots are places with a stable identifier that can hold data
that changes over time. In contrast to CHK slots, for which the

shares cannot read or modify them: the worst they can do is deny service (by
deleting or corrupting the shares), or attempt a rollback attack (which can
only succeed with the cooperation of at least k servers).

Consistency vs. Availability
============================

There is an age-old battle between consistency and availability. Epic papers
have been written, elaborate proofs have been established, and generations of

effective ways to merge multiple versions, so inconsistency is not
necessarily a problem (i.e. directory nodes can usually merge multiple "add
child" operations).

The Prime Coordination Directive: "Don't Do That"
=================================================

The current rule for applications which run on top of Tahoe is "do not
perform simultaneous uncoordinated writes". That means you need non-tahoe
means to make sure that two parties are not trying to modify the same mutable
slot at the same time. For example:

* don't give the read-write URI to anyone else. Dirnodes in a private
  directory generally satisfy this case, as long as you don't use two
  clients on the same account at the same time
* if you give a read-write URI to someone else, stop using it yourself. An
  inbox would be a good example of this.
* if you give a read-write URI to someone else, call them on the phone
  before you write into it
* build an automated mechanism to have your agents coordinate writes.
  For example, we expect a future release to include a FURL for a
  "coordination server" in the dirnodes. The rule can be that you must
  contact the coordination server and obtain a lock/lease on the file
  before you're allowed to modify it.

If you do not follow this rule, Bad Things will happen. The worst-case Bad
Thing is that the entire file will be lost. A less-bad Bad Thing is that one

run. The Prime Coordination Directive therefore applies to inter-node
conflicts, not intra-node ones.


Small Distributed Mutable Files
===============================

SDMF slots are suitable for small (<1MB) files that are edited by rewriting
the entire file. The three operations are:

The first use of SDMF slots will be to hold directories (dirnodes), which map
encrypted child names to rw-URI/ro-URI pairs.

SDMF slots overview
-------------------

Each SDMF slot is created with a public/private key pair. The public key is
known as the "verification key", while the private key is called the

The read-write URI consists of the write key and the verification key hash.
The read-only URI contains the read key and the verification key hash. The
verify-only URI contains the storage index and the verification key hash.

::

 URI:SSK-RW:b2a(writekey):b2a(verification_key_hash)
 URI:SSK-RO:b2a(readkey):b2a(verification_key_hash)
 URI:SSK-Verify:b2a(storage_index):b2a(verification_key_hash)
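For illustration, these strings can be assembled like so. Both the reading of ``b2a`` as unpadded lowercase base32 and the key/hash byte lengths are assumptions made here, not stated in this excerpt:

```python
import base64

def b2a(data: bytes) -> str:
    # Assumed encoding: lowercase base32 with the '=' padding stripped.
    return base64.b32encode(data).decode("ascii").rstrip("=").lower()

writekey = b"\x01" * 16                # assumed 16-byte write key
verification_key_hash = b"\x02" * 32   # assumed 32-byte hash

rw_uri = "URI:SSK-RW:%s:%s" % (b2a(writekey), b2a(verification_key_hash))
assert rw_uri.startswith("URI:SSK-RW:")
assert len(b2a(writekey)) == 26        # 16 bytes -> 26 base32 chars
```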
@ -158,64 +179,75 @@ write enabler with anyone else.

The SDMF slot structure will be described in more detail below. The important
pieces are:

* a sequence number
* a root hash "R"
* the encoding parameters (including k, N, file size, segment size)
* a signed copy of [seqnum,R,encoding_params], using the signature key
* the verification key (not encrypted)
* the share hash chain (part of a Merkle tree over the share hashes)
* the block hash tree (Merkle tree over blocks of share data)
* the share data itself (erasure-coding of read-key-encrypted file data)
* the signature key, encrypted with the write key

The access pattern for read is:

* hash read-key to get storage index
* use storage index to locate 'k' shares with identical 'R' values

  * either get one share, read 'k' from it, then read k-1 shares
  * or read, say, 5 shares, discover k, either get more or be finished
  * or copy k into the URIs

* read verification key
* hash verification key, compare against verification key hash
* read seqnum, R, encoding parameters, signature
* verify signature against verification key
* read share data, compute block-hash Merkle tree and root "r"
* read share hash chain (leading from "r" to "R")
* validate share hash chain up to the root "R"
* submit share data to erasure decoding
* decrypt decoded data with read-key
* submit plaintext to application
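The integrity checks in the read path above can be sketched compactly. This is
an illustrative outline only: it substitutes plain SHA-256 for Tahoe's tagged
hashes, takes a caller-supplied ``verify_sig`` callable, and simplifies the
Merkle combining rules, so none of the names below are real Tahoe APIs:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # stand-in for Tahoe's tagged SHA-256d hashes (tags omitted here)
    return hashlib.sha256(b"".join(parts)).digest()

def check_share(readkey, verification_key, vk_hash, signed_header, signature,
                verify_sig, root_r, chain_to_R, expected_R):
    storage_index = h(readkey)                 # locates the shares
    assert h(verification_key) == vk_hash      # pubkey matches the URI
    assert verify_sig(verification_key, signed_header, signature)
    # walk the share hash chain from this share's root "r" up to "R"
    node = root_r
    for sibling in chain_to_R:
        node = h(node, sibling)                # combining order simplified
    assert node == expected_R
    return storage_index
```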
The access pattern for write is:

* hash write-key to get read-key, hash read-key to get storage index
* use the storage index to locate at least one share
* read verification key and encrypted signature key
* decrypt signature key using write-key
* hash signature key, compare against write-key
* hash verification key, compare against verification key hash
* encrypt plaintext from application with read-key

  * application can encrypt some data with the write-key to make it only
    available to writers (use this for transitive read-onlyness of dirnodes)

* erasure-code crypttext to form shares
* split shares into blocks
* compute Merkle tree of blocks, giving root "r" for each share
* compute Merkle tree of shares, find root "R" for the file as a whole
* create share data structures, one per server:

  * use seqnum which is one higher than the old version
  * share hash chain has log(N) hashes, different for each server
  * signed data is the same for each server

* now we have N shares and need homes for them
* walk through peers

  * if share is not already present, allocate-and-set
  * otherwise, try to modify existing share:

    * send testv_and_writev operation to each one
    * testv says to accept share if their(seqnum+R) <= our(seqnum+R)

* count how many servers wind up with which versions (histogram over R)
* keep going until N servers have the same version, or we run out of servers
* if any servers wound up with a different version, report error to
  application
* if we ran out of servers, initiate recovery process (described below)

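The "their(seqnum+R) <= our(seqnum+R)" test above orders versions first by
sequence number, breaking ties with the root hash. A minimal sketch of that
comparison (the function names are illustrative, not Tahoe's):

```python
def version_key(seqnum: int, root_hash: bytes) -> tuple:
    # order first by sequence number, then break ties with the root hash "R"
    return (seqnum, root_hash)

def accept_write(their_seqnum, their_R, our_seqnum, our_R) -> bool:
    # overwrite older versions and lower-R peers of the same seqnum,
    # but never a strictly newer version
    return version_key(their_seqnum, their_R) <= version_key(our_seqnum, our_R)
```

So a stale share (lower seqnum) is always replaced, a simultaneous writer of
the same seqnum loses if its R is lower, and a newer seqnum is left alone.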
Server Storage Protocol
-----------------------

The storage servers will provide a mutable slot container which is oblivious
to the details of the data being contained inside it. Each storage index
@ -228,7 +260,7 @@ as the filename.

The container holds space for a container magic number (for versioning), the
write enabler, the nodeid which accepted the write enabler (used for share
migration, described below), a small number of lease structures, the embedded
data itself, and expansion space for additional lease structures::

 #   offset    size    name
 1   0         32      magic verstr "tahoe mutable container v1" plus binary
@ -270,53 +302,63 @@ portions of the container are inaccessible to the clients.

The two methods provided by the storage server on these "MutableSlot" share
objects are:

* readv(ListOf(offset=int, length=int))

  * returns a list of bytestrings, of the various requested lengths
  * offset < 0 is interpreted relative to the end of the data
  * spans which hit the end of the data will return truncated data

* testv_and_writev(write_enabler, test_vector, write_vector)

  * this is a test-and-set operation which performs the given tests and only
    applies the desired writes if all tests succeed. This is used to detect
    simultaneous writers, and to reduce the chance that an update will lose
    data recently written by some other party (written after the last time
    this slot was read).
  * test_vector=ListOf(TupleOf(offset, length, opcode, specimen))
  * the opcode is a string, from the set [gt, ge, eq, le, lt, ne]
  * each element of the test vector is read from the slot's data and
    compared against the specimen using the desired (in)equality. If all
    tests evaluate True, the write is performed
  * write_vector=ListOf(TupleOf(offset, newdata))

    * offset < 0 is not yet defined, it probably means relative to the
      end of the data, which probably means append, but we haven't nailed
      it down quite yet
    * write vectors are executed in order, which specifies the results of
      overlapping writes

  * return value:

    * error: OutOfSpace
    * error: something else (io error, out of memory, whatever)
    * (True, old_test_data): the write was accepted (test_vector passed)
    * (False, old_test_data): the write was rejected (test_vector failed)
    * both 'accepted' and 'rejected' return the old data that was used
      for the test_vector comparison. This can be used by the client
      to detect write collisions, including collisions for which the
      desired behavior was to overwrite the old version.
In addition, the storage server provides several methods to access these
share objects:

* allocate_mutable_slot(storage_index, sharenums=SetOf(int))

  * returns DictOf(int, MutableSlot)

* get_mutable_slot(storage_index)

  * returns DictOf(int, MutableSlot)
  * or raises KeyError

We intend to add an interface which allows small slots to allocate-and-write
in a single call, as well as do update or read in a single call. The goal is
to allow a reasonably-sized dirnode to be created (or updated, or read) in
just one round trip (to all N shareholders in parallel).
migrating shares
````````````````

If a share must be migrated from one server to another, two values become
invalid: the write enabler (since it was computed for the old server), and
@ -357,7 +399,8 @@ operations on either client or server.

Migrating the leases will require a similar protocol. This protocol will be
defined concretely at a later date.

Code Details
------------

The MutableFileNode class is used to manipulate mutable files (as opposed to
ImmutableFileNodes). These are initially generated with
@ -370,13 +413,15 @@ NOTE: this section is out of date. Please see src/allmydata/interfaces.py

The methods of MutableFileNode are:

* download_to_data() -> [deferred] newdata, NotEnoughSharesError

  * if there are multiple retrievable versions in the grid, get() returns
    the first version it can reconstruct, and silently ignores the others.
    In the future, a more advanced API will signal and provide access to
    the multiple heads.

* update(newdata) -> OK, UncoordinatedWriteError, NotEnoughSharesError
* overwrite(newdata) -> OK, UncoordinatedWriteError, NotEnoughSharesError
download_to_data() causes a new retrieval to occur, pulling the current
contents from the grid and returning them to the caller. At the same time,

@ -386,7 +431,7 @@ change has occured between the two, this information will be out of date,

triggering the UncoordinatedWriteError.

update() is therefore intended to be used just after a download_to_data(), in
the following pattern::

 d = mfn.download_to_data()
 d.addCallback(apply_delta)
@ -399,7 +444,7 @@ its own. To accomplish this, the app needs to pause, download the new

(post-collision and post-recovery) form of the file, reapply their delta,
then submit the update request again. A randomized pause is necessary to
reduce the chances of colliding a second time with another client that is
doing exactly the same thing::

 d = mfn.download_to_data()
 d.addCallback(apply_delta)
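The download/apply-delta/update cycle with a randomized retry pause can be
sketched synchronously. The document's own snippets use Twisted deferreds;
this plain-Python version, with caller-supplied ``download``, ``apply_delta``,
and ``update`` callables, only illustrates the retry-and-backoff shape:

```python
import random
import time

class UncoordinatedWriteError(Exception):
    pass

def update_with_retry(download, apply_delta, update, max_tries=5):
    # download current contents, apply our delta, push the result; on a
    # collision, pause a random interval and redo the whole cycle
    for attempt in range(max_tries):
        try:
            return update(apply_delta(download()))
        except UncoordinatedWriteError:
            time.sleep(random.uniform(0, 2 ** attempt))  # randomized backoff
    raise UncoordinatedWriteError("giving up after %d attempts" % max_tries)
```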
@ -419,7 +464,7 @@ retry forever, but such apps are encouraged to provide a means to the user of

giving up after a while.

UCW does not mean that the update was not applied, so it is also a good idea
to skip the retry-update step if the delta was already applied::

 d = mfn.download_to_data()
 d.addCallback(apply_delta)
@ -456,12 +501,11 @@ you want to replace the file's contents with completely unrelated ones. When

raw files are uploaded into a mutable slot through the tahoe webapi (using
POST and the ?mutable=true argument), they are put in place with overwrite().

The peer-selection and data-structure manipulation (and signing/verification)
steps will be implemented in a separate class in allmydata/mutable.py.

SDMF Slot Format
----------------

This SDMF data lives inside a server-side MutableSlot container. The server
is oblivious to this format.
|
@ -470,44 +514,47 @@ This data is tightly packed. In particular, the share data is defined to run
|
||||||
all the way to the beginning of the encrypted private key (the encprivkey
|
all the way to the beginning of the encrypted private key (the encprivkey
|
||||||
offset is used both to terminate the share data and to begin the encprivkey).
|
offset is used both to terminate the share data and to begin the encprivkey).
|
||||||
|
|
||||||
# offset size name
|
::
|
||||||
1 0 1 version byte, \x00 for this format
|
|
||||||
2 1 8 sequence number. 2^64-1 must be handled specially, TBD
|
|
||||||
3 9 32 "R" (root of share hash Merkle tree)
|
|
||||||
4 41 16 IV (share data is AES(H(readkey+IV)) )
|
|
||||||
5 57 18 encoding parameters:
|
|
||||||
57 1 k
|
|
||||||
58 1 N
|
|
||||||
59 8 segment size
|
|
||||||
67 8 data length (of original plaintext)
|
|
||||||
6 75 32 offset table:
|
|
||||||
75 4 (8) signature
|
|
||||||
79 4 (9) share hash chain
|
|
||||||
83 4 (10) block hash tree
|
|
||||||
87 4 (11) share data
|
|
||||||
91 8 (12) encrypted private key
|
|
||||||
99 8 (13) EOF
|
|
||||||
7 107 436ish verification key (2048 RSA key)
|
|
||||||
8 543ish 256ish signature=RSAenc(sigkey, H(version+seqnum+r+IV+encparm))
|
|
||||||
9 799ish (a) share hash chain, encoded as:
|
|
||||||
"".join([pack(">H32s", shnum, hash)
|
|
||||||
for (shnum,hash) in needed_hashes])
|
|
||||||
10 (927ish) (b) block hash tree, encoded as:
|
|
||||||
"".join([pack(">32s",hash) for hash in block_hash_tree])
|
|
||||||
11 (935ish) LEN share data (no gap between this and encprivkey)
|
|
||||||
12 ?? 1216ish encrypted private key= AESenc(write-key, RSA-key)
|
|
||||||
13 ?? -- EOF
|
|
||||||
|
|
||||||
(a) The share hash chain contains ceil(log(N)) hashes, each 32 bytes long.
|
# offset size name
|
||||||
|
1 0 1 version byte, \x00 for this format
|
||||||
|
2 1 8 sequence number. 2^64-1 must be handled specially, TBD
|
||||||
|
3 9 32 "R" (root of share hash Merkle tree)
|
||||||
|
4 41 16 IV (share data is AES(H(readkey+IV)) )
|
||||||
|
5 57 18 encoding parameters:
|
||||||
|
57 1 k
|
||||||
|
58 1 N
|
||||||
|
59 8 segment size
|
||||||
|
67 8 data length (of original plaintext)
|
||||||
|
6 75 32 offset table:
|
||||||
|
75 4 (8) signature
|
||||||
|
79 4 (9) share hash chain
|
||||||
|
83 4 (10) block hash tree
|
||||||
|
87 4 (11) share data
|
||||||
|
91 8 (12) encrypted private key
|
||||||
|
99 8 (13) EOF
|
||||||
|
7 107 436ish verification key (2048 RSA key)
|
||||||
|
8 543ish 256ish signature=RSAenc(sigkey, H(version+seqnum+r+IV+encparm))
|
||||||
|
9 799ish (a) share hash chain, encoded as:
|
||||||
|
"".join([pack(">H32s", shnum, hash)
|
||||||
|
for (shnum,hash) in needed_hashes])
|
||||||
|
10 (927ish) (b) block hash tree, encoded as:
|
||||||
|
"".join([pack(">32s",hash) for hash in block_hash_tree])
|
||||||
|
11 (935ish) LEN share data (no gap between this and encprivkey)
|
||||||
|
12 ?? 1216ish encrypted private key= AESenc(write-key, RSA-key)
|
||||||
|
13 ?? -- EOF
|
||||||
|
|
||||||
|
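Fields (9) and (10) of the table above use the ``pack`` expressions shown
there. A small round-trip sketch of that encoding, using the standard-library
``struct`` module (the hash values here are placeholders):

```python
from struct import pack, unpack, calcsize

def encode_share_hash_chain(needed_hashes):
    # needed_hashes: list of (shnum, 32-byte hash) pairs, as in field (9)
    return b"".join(pack(">H32s", shnum, h) for (shnum, h) in needed_hashes)

def decode_share_hash_chain(blob):
    entry = calcsize(">H32s")  # 34 bytes per (shnum, hash) entry
    return [unpack(">H32s", blob[i:i + entry])
            for i in range(0, len(blob), entry)]

def encode_block_hash_tree(block_hash_tree):
    # field (10): a flat concatenation of 32-byte hashes
    return b"".join(pack(">32s", h) for h in block_hash_tree)
```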
(a) The share hash chain contains ceil(log(N)) hashes, each 32 bytes long.
    This is the set of hashes necessary to validate this share's leaf in the
    share Merkle tree. For N=10, this is 4 hashes, i.e. 128 bytes.
(b) The block hash tree contains ceil(length/segsize) hashes, each 32 bytes
    long. This is the set of hashes necessary to validate any given block of
    share data up to the per-share root "r". Each "r" is a leaf of the share
    hash tree (with root "R"), from which a minimal subset of hashes is put in
    the share hash chain in (8).

Recovery
--------

The first line of defense against damage caused by colliding writes is the
Prime Coordination Directive: "Don't Do That".
@ -540,63 +587,70 @@ somebody else's.

The write-shares-to-peers algorithm is as follows:

* permute peers according to storage index
* walk through peers, trying to assign one share per peer
* for each peer:

  * send testv_and_writev, using "old(seqnum+R) <= our(seqnum+R)" as the test

    * this means that we will overwrite any old versions, and we will
      overwrite simultaneous writers of the same version if our R is higher.
      We will not overwrite writers using a higher seqnum.

  * record the version that each share winds up with. If the write was
    accepted, this is our own version. If it was rejected, read the
    old_test_data to find out what version was retained.
  * if old_test_data indicates the seqnum was equal or greater than our
    own, mark the "Simultaneous Writes Detected" flag, which will eventually
    result in an error being reported to the writer (in their close() call).

* build a histogram of "R" values
* repeat until the histogram indicates that some version (possibly ours)
  has N shares. Use new servers if necessary.
* If we run out of servers:

  * if there are at least shares-of-happiness of any one version, we're
    happy, so return. (the close() might still get an error)
  * not happy, need to reinforce something, goto RECOVERY

Recovery:

* read all shares, count the versions, identify the recoverable ones,
  discard the unrecoverable ones.
* sort versions: locate max(seqnums), put all versions with that seqnum
  in the list, sort by number of outstanding shares. Then put our own
  version. (TODO: put versions with seqnum <max but >us ahead of us?).
* for each version:

  * attempt to recover that version
  * if not possible, remove it from the list, go to next one
  * if recovered, start at beginning of peer list, push that version,
    continue until N shares are placed
  * if pushing our own version, bump up the seqnum to one higher than
    the max seqnum we saw

* if we run out of servers:

  * schedule retry and exponential backoff to repeat RECOVERY
  * admit defeat after some period? presumably the client will be shut down
    eventually, maybe keep trying (once per hour?) until then.

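The version-sorting step of the recovery procedure above can be sketched as a
small hypothetical helper. It assumes the unrecoverable versions have already
been filtered out, represents each version as a ``(seqnum, R)`` pair, and
leaves the TODO question (versions with seqnum < max but > ours) open:

```python
from collections import Counter

def plan_recovery(observed_versions, our_version):
    # observed_versions: one (seqnum, R) pair per recoverable share found
    counts = Counter(observed_versions)
    max_seqnum = max(seq for (seq, _) in counts)
    # newest seqnum first, then most outstanding shares first
    candidates = sorted((v for v in counts if v[0] == max_seqnum),
                        key=lambda v: counts[v], reverse=True)
    if our_version not in candidates:
        candidates.append(our_version)  # our own version goes last
    return candidates
```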
Medium Distributed Mutable Files
================================

These are just like the SDMF case, but:

* we actually take advantage of the Merkle hash tree over the blocks, by
  reading a single segment of data at a time (and its necessary hashes), to
  reduce the read-time alacrity
* we allow arbitrary writes to the file (i.e. seek() is provided, and
  O_TRUNC is no longer required)
* we write more code on the client side (in the MutableFileNode class), to
  first read each segment that a write must modify. This looks exactly like
  the way a normal filesystem uses a block device, or how a CPU must perform
  a cache-line fill before modifying a single word.
* we might implement some sort of copy-based atomic update server call,
  to allow multiple writev() calls to appear atomic to any readers.

MDMF slots provide fairly efficient in-place edits of very large files (a few
GB). Appending data is also fairly efficient, although each time a power of 2
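The "cache-line fill" behavior described in the list above (read the touched
segments, splice in the new bytes, write the segments back) can be sketched
with hypothetical ``read_segment``/``write_segment`` callables; this is only
an illustration of the read-modify-write shape, not Tahoe code:

```python
def segmented_write(read_segment, write_segment, segsize, offset, newdata):
    # a write that is not segment-aligned must first read the segments it
    # touches, splice in the new bytes, then write those segments back
    first = offset // segsize
    last = (offset + len(newdata) - 1) // segsize
    buf = bytearray(b"".join(read_segment(i) for i in range(first, last + 1)))
    start = offset - first * segsize
    buf[start:start + len(newdata)] = newdata
    for i in range(first, last + 1):
        lo = (i - first) * segsize
        write_segment(i, bytes(buf[lo:lo + segsize]))
```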
@ -608,7 +662,8 @@ the block hash tree and the actual data).

MDMF1 uses the Merkle tree to enable low-alacrity random-access reads. MDMF2
adds cache-line reads to allow random-access writes.

Large Distributed Mutable Files
===============================

LDMF slots use a fundamentally different way to store the file, inspired by
Mercurial's "revlog" format. They enable very efficient insert/remove/replace
@ -624,7 +679,8 @@ back an entire tree to a specific point in history.

LDMF1 provides deltas but tries to avoid dealing with multiple heads. LDMF2
provides explicit support for revision identifiers and branching.

TODO
====

improve allocate-and-write or get-writer-buckets API to allow one-call (or
maybe two-call) updates. The challenge is in figuring out which shares are on
@ -639,9 +695,9 @@ do for updating the write enabler. However we need to know which lease to

update.. maybe send back a list of all old nodeids that we find, then try all
of them when we accept the update?

We now do this in a specially-formatted IndexError exception:
 "UNABLE to renew non-existent lease. I have leases accepted by " +
 "nodeids: '12345','abcde','44221' ."

confirm that a repairer can regenerate shares without the private key. Hmm,
without the write-enabler they won't be able to write those shares to the
@@ -1,4 +1,6 @@
-= Specification Document Outline =
+==============================
+Specification Document Outline
+==============================
 
 While we do not yet have a clear set of specification documents for Tahoe
 (explaining the file formats, so that others can write interoperable
@@ -8,7 +10,13 @@ Tahoe.
 
 We currently imagine 4 documents.
 
-== #1: Share Format, Encoding Algorithm ==
+1. `#1: Share Format, Encoding Algorithm`_
+2. `#2: Share Exchange Protocol`_
+3. `#3: Server Selection Algorithm, filecap format`_
+4. `#4: Directory Format`_
+
+#1: Share Format, Encoding Algorithm
+====================================
 
 This document will describe the way that files are encrypted and encoded into
 shares. It will include a specification of the share format, and explain both
@@ -43,7 +51,8 @@ from destroying shares). We don't yet have a document dedicated to explaining
 these, but let's call it "Access Control" for now.
 
 
-== #2: Share Exchange Protocol ==
+#2: Share Exchange Protocol
+===========================
 
 This document explains the wire-protocol used to upload, download, and modify
 shares on the various storage servers.
@@ -75,7 +84,8 @@ each protocol. The first one to be written will describe the Foolscap-based
 protocol that tahoe currently uses, but we anticipate a subsequent one to
 describe a more HTTP-based protocol.
 
-== #3: Server Selection Algorithm, filecap format ==
+#3: Server Selection Algorithm, filecap format
+==============================================
 
 This document has two interrelated purposes. With a deeper understanding of
 the issues, we may be able to separate these more cleanly in the future.
@@ -90,27 +100,27 @@ of work?
 This question implies many things, all of which should be explained in this
 document:
 
 * the notion of a "grid", nominally a set of servers who could potentially
   hold shares, which might change over time
 * a way to configure which grid should be used
 * a way to discover which servers are a part of that grid
 * a way to decide which servers are reliable enough to be worth sending
   shares
 * an algorithm to handle servers which refuse shares
 * a way for a downloader to locate which servers have shares
 * a way to choose which shares should be used for download
 
 The server-selection algorithm has several obviously competing goals:
 
 * minimize the amount of work that must be done during upload
 * minimize the total storage resources used
 * avoid "hot spots", balance load among multiple servers
 * maximize the chance that enough shares will be downloadable later, by
   uploading lots of shares, and by placing them on reliable servers
 * minimize the work that the future downloader must do
 * tolerate temporary server failures, permanent server departure, and new
   server insertions
 * minimize the amount of information that must be added to the filecap
 
 The server-selection algorithm is defined in some context: some set of
 expectations about the servers or grid with which it is expected to operate.
@@ -185,7 +195,8 @@ Tahoe-1.3.0 filecaps do not contain hostnames, because the failure of DNS or
 an individual host might then impact file availability (however the
 Introducer contains DNS names or IP addresses).
 
-== #4: Directory Format ==
+#4: Directory Format
+====================
 
 Tahoe directories are a special way of interpreting and managing the contents
 of a file (either mutable or immutable). These "dirnode" files are basically
@@ -1,4 +1,6 @@
-= Servers of Happiness =
+====================
+Servers of Happiness
+====================
 
 When you upload a file to a Tahoe-LAFS grid, you expect that it will
 stay there for a while, and that it will do so even if a few of the
@@ -34,7 +36,8 @@ health provides a stronger assurance of file availability over time;
 with 3-of-10 encoding, and happy=7, a healthy file is still guaranteed
 to be available even if 4 peers fail.
 
-== Measuring Servers of Happiness ==
+Measuring Servers of Happiness
+==============================
 
 We calculate servers-of-happiness by computing a matching on a
 bipartite graph that is related to the layout of shares on the grid.
@@ -71,7 +74,8 @@ of shares later without having to re-encode the file. Also, it is
 computationally reasonable to compute a maximum matching in a bipartite
 graph, and there are well-studied algorithms to do that.
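The maximum-matching computation described above can be sketched in a few lines of Python. This is a hypothetical illustration (Kuhn's augmenting-path algorithm; the function name and data layout are ours, not the code Tahoe-LAFS actually ships): servers-of-happiness is the size of a maximum matching on the bipartite server-vs-share graph, so a server holding every share still contributes only one edge.

```python
# Illustrative sketch, not Tahoe's implementation: servers-of-happiness as
# the size of a maximum matching on the bipartite server/share graph.

def servers_of_happiness(placements):
    """placements maps each server id to the set of share numbers it holds."""
    match = {}  # share number -> server currently matched to it

    def augment(server, shares, seen):
        # Try to assign `server` one of its shares, re-placing a competing
        # server along an augmenting path if necessary.
        for share in shares:
            if share in seen:
                continue
            seen.add(share)
            other = match.get(share)
            if other is None or augment(other, placements[other] - {share}, seen):
                match[share] = server
                return True
        return False

    for server, shares in placements.items():
        augment(server, shares, set())
    return len(match)

# s1 holds three shares but can be matched to only one of them:
layout = {"s1": {0, 1, 2}, "s2": {0}, "s3": {1}, "s4": {9}}
print(servers_of_happiness(layout))  # 4
```

With 3-of-10 encoding, a layout whose happiness value meets the configured "happy" threshold is accepted; a layout where many shares pile onto one server yields a small matching and is reported as unhealthy.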
 
-== Issues ==
+Issues
+======
 
 The uploader is good at detecting unhealthy upload layouts, but it
 doesn't always know how to make an unhealthy upload into a healthy
@@ -1,5 +1,15 @@
-= Tahoe URIs =
+==========
+Tahoe URIs
+==========
+
+1. `File URIs`_
+
+   1. `CHK URIs`_
+   2. `LIT URIs`_
+   3. `Mutable File URIs`_
+
+2. `Directory URIs`_
+3. `Internal Usage of URIs`_
 
 Each file and directory in a Tahoe filesystem is described by a "URI". There
 are different kinds of URIs for different kinds of objects, and there are
@@ -7,11 +17,11 @@ different kinds of URIs to provide different kinds of access to those
 objects. Each URI is a string representation of a "capability" or "cap", and
 there are read-caps, write-caps, verify-caps, and others.
 
-Each URI provides both '''location''' and '''identification''' properties.
-'''location''' means that holding the URI is sufficient to locate the data it
+Each URI provides both ``location`` and ``identification`` properties.
+``location`` means that holding the URI is sufficient to locate the data it
 represents (this means it contains a storage index or a lookup key, whatever
 is necessary to find the place or places where the data is being kept).
-'''identification''' means that the URI also serves to validate the data: an
+``identification`` means that the URI also serves to validate the data: an
 attacker who wants to trick you into using the wrong data will be
 limited in their abilities by the identification properties of the URI.
@@ -22,11 +32,12 @@ modify it. Directories, for example, have a read-cap which is derived from
 the write-cap: anyone with read/write access to the directory can produce a
 limited URI that grants read-only access, but not the other way around.
 
-source:src/allmydata/uri.py is the main place where URIs are processed. It is
+src/allmydata/uri.py is the main place where URIs are processed. It is
 the authoritative definition point for all the URI types described
 herein.
 
-== File URIs ==
+File URIs
+=========
 
 The lowest layer of the Tahoe architecture (the "grid") is responsible for
 mapping URIs to data. This is basically a distributed hash table, in which
@@ -41,11 +52,12 @@ For mutable entries, the URI identifies a "slot" or "container", which can be
 filled with different pieces of data at different times.
 
 It is important to note that the "files" described by these URIs are just a
-bunch of bytes, and that __no__ filenames or other metadata is retained at
+bunch of bytes, and that **no** filenames or other metadata is retained at
 this layer. The vdrive layer (which sits above the grid layer) is entirely
 responsible for directories and filenames and the like.
 
-=== CHI URIs ===
+CHK URIs
+--------
 
 CHK (Content Hash Keyed) files are immutable sequences of bytes. They are
 uploaded in a distributed fashion using a "storage index" (for the "location"
@@ -58,7 +70,7 @@ tagged SHA-256d hash, then truncated to 128 bits), so it does not need to be
 physically present in the URI.
 
 The current format for CHK URIs is the concatenation of the following
-strings:
+strings::
 
     URI:CHK:(key):(hash):(needed-shares):(total-shares):(size)
@@ -71,9 +83,9 @@ representation of the size of the data represented by this URI. All base32
 encodings are expressed in lower-case, with the trailing '=' signs removed.
 
 For example, the following is a CHK URI, generated from the contents of the
-architecture.txt document that lives next to this one in the source tree:
+architecture.txt document that lives next to this one in the source tree::
 
     URI:CHK:ihrbeov7lbvoduupd4qblysj7a:bg5agsdt62jb34hxvxmdsbza6do64f4fg5anxxod2buttbo6udzq:3:10:28733
 
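Splitting a CHK cap into its fields is straightforward, since it is just a colon-separated concatenation. The following is a minimal sketch of a parser for the layout described above (the function name and returned dict are illustrative, not Tahoe's actual API, which lives in src/allmydata/uri.py):

```python
# Minimal illustrative parser for the colon-separated CHK cap layout:
# URI:CHK:(key):(hash):(needed-shares):(total-shares):(size)

def parse_chk_uri(cap):
    parts = cap.split(":")
    if parts[:2] != ["URI", "CHK"] or len(parts) != 7:
        raise ValueError("not a CHK URI: %r" % cap)
    key, hash_, needed, total, size = parts[2:]
    return {"key": key, "hash": hash_,
            "needed_shares": int(needed), "total_shares": int(total),
            "size": int(size)}

cap = ("URI:CHK:ihrbeov7lbvoduupd4qblysj7a:"
       "bg5agsdt62jb34hxvxmdsbza6do64f4fg5anxxod2buttbo6udzq:3:10:28733")
fields = parse_chk_uri(cap)
print(fields["needed_shares"], fields["total_shares"], fields["size"])  # 3 10 28733
```

Note that the downloader can learn the encoding parameters (3-of-10 here) and the exact file size from the cap alone, before contacting any server.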
 Historical note: The name "CHK" is somewhat inaccurate and continues to be
 used for historical reasons. "Content Hash Key" means that the encryption key
@@ -86,7 +98,8 @@ about the file's contents (except filesize), which improves privacy. The
 URI:CHK: prefix really indicates that an immutable file is in use, without
 saying anything about how the key was derived.
 
-=== LIT URIs ===
+LIT URIs
+--------
 
 LITeral files are also an immutable sequence of bytes, but they are so short
 that the data is stored inside the URI itself. These are used for files of 55
@@ -97,14 +110,15 @@ LIT URIs do not require an upload or download phase, as their data is stored
 directly in the URI.
 
 The format of a LIT URI is simply a fixed prefix concatenated with the base32
-encoding of the file's data:
+encoding of the file's data::
 
     URI:LIT:bjuw4y3movsgkidbnrwg26lemf2gcl3xmvrc6kropbuhi3lmbi
 
 The LIT URI for an empty file is "URI:LIT:", and the LIT URI for a 5-byte
 file that contains the string "hello" is "URI:LIT:nbswy3dp".
 
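The LIT construction above can be reproduced with the standard library's base32 codec: encode the bytes, lower-case the result, strip the trailing '=' padding, and prepend the fixed prefix (the helper names here are illustrative, not Tahoe's API):

```python
# Sketch of the LIT cap construction: base32-encode the file's bytes
# (lower-case, '=' padding removed) and glue them onto "URI:LIT:".
import base64

def make_lit_uri(data):
    b32 = base64.b32encode(data).decode("ascii").lower().rstrip("=")
    return "URI:LIT:" + b32

def read_lit_uri(cap):
    b32 = cap[len("URI:LIT:"):].upper()
    b32 += "=" * (-len(b32) % 8)  # restore padding before decoding
    return base64.b32decode(b32)

print(make_lit_uri(b"hello"))            # URI:LIT:nbswy3dp
print(read_lit_uri("URI:LIT:nbswy3dp"))  # b'hello'
print(make_lit_uri(b""))                 # URI:LIT:
```

Because the padding characters are stripped on encode, a decoder must re-pad to a multiple of 8 characters before handing the string back to the base32 codec.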
-=== Mutable File URIs ===
+Mutable File URIs
+-----------------
 
 The other kind of DHT entry is the "mutable slot", in which the URI names a
 container to which data can be placed and retrieved without changing the
@@ -117,10 +131,10 @@ contents).
 
 Mutable slots use public key technology to provide data integrity, and put a
 hash of the public key in the URI. As a result, the data validation is
-limited to confirming that the data retrieved matches _some_ data that was
-uploaded in the past, but not _which_ version of that data.
+limited to confirming that the data retrieved matches *some* data that was
+uploaded in the past, but not *which* version of that data.
 
-The format of the write-cap for mutable files is:
+The format of the write-cap for mutable files is::
 
     URI:SSK:(writekey):(fingerprint)
 
@@ -129,7 +143,7 @@ that is used to encrypt the RSA private key, and (fingerprint) is the base32
 encoded 32-byte SHA-256 hash of the RSA public key. For more details about
 the way these keys are used, please see docs/mutable.txt .
 
-The format for mutable read-caps is:
+The format for mutable read-caps is::
 
     URI:SSK-RO:(readkey):(fingerprint)
 
@@ -143,45 +157,45 @@ Historical note: the "SSK" prefix is a perhaps-inaccurate reference to
 "Sub-Space Keys" from the Freenet project, which uses a vaguely similar
 structure to provide mutable file access.
 
-== Directory URIs ==
+Directory URIs
+==============
 
 The grid layer provides a mapping from URI to data. To turn this into a graph
 of directories and files, the "vdrive" layer (which sits on top of the grid
 layer) needs to keep track of "directory nodes", or "dirnodes" for short.
-source:docs/dirnodes.txt describes how these work.
+docs/dirnodes.txt describes how these work.
 
 Dirnodes are contained inside mutable files, and are thus simply a particular
 way to interpret the contents of these files. As a result, a directory
-write-cap looks a lot like a mutable-file write-cap:
+write-cap looks a lot like a mutable-file write-cap::
 
     URI:DIR2:(writekey):(fingerprint)
 
 Likewise directory read-caps (which provide read-only access to the
-directory) look much like mutable-file read-caps:
+directory) look much like mutable-file read-caps::
 
     URI:DIR2-RO:(readkey):(fingerprint)
 
 Historical note: the "DIR2" prefix is used because the non-distributed
 dirnodes in earlier Tahoe releases had already claimed the "DIR" prefix.
 
-== Internal Usage of URIs ==
+Internal Usage of URIs
+======================
 
 The classes in source:src/allmydata/uri.py are used to pack and unpack these
 various kinds of URIs. Three Interfaces are defined (IURI, IFileURI, and
 IDirnodeURI) which are implemented by these classes, and string-to-URI-class
 conversion routines have been registered as adapters, so that code which
-wants to extract e.g. the size of a CHK or LIT uri can do:
+wants to extract e.g. the size of a CHK or LIT uri can do::
 
-{{{
- print IFileURI(uri).get_size()
-}}}
+    print IFileURI(uri).get_size()
 
 If the URI does not represent a CHK or LIT uri (for example, if it was for a
 directory instead), the adaptation will fail, raising a TypeError inside the
 IFileURI() call.
 
 Several utility methods are provided on these objects. The most important is
-{{{ to_string() }}}, which returns the string form of the URI. Therefore
-IURI(uri).to_string == uri }}} is true for any valid URI. See the IURI class
+``to_string()``, which returns the string form of the URI. Therefore
+``IURI(uri).to_string() == uri`` is true for any valid URI. See the IURI class
 in source:src/allmydata/interfaces.py for more details.
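The adapter behaviour described above can be illustrated with a self-contained stand-in (these simplified classes are ours, not the real allmydata implementations): the ``IFileURI()`` call dispatches on the cap's prefix, returns an object answering ``get_size()`` and ``to_string()``, and raises TypeError for directory caps.

```python
# Simplified illustration of the adapter pattern described above; the real
# classes live in src/allmydata/uri.py and src/allmydata/interfaces.py.

class CHKFileURI:
    def __init__(self, uri):
        self.uri = uri
        self.size = int(uri.split(":")[-1])  # (size) is the last CHK field

    def get_size(self):
        return self.size

    def to_string(self):
        return self.uri

def IFileURI(uri):
    # Stand-in for the registered string-to-URI-class adapter.
    if uri.startswith("URI:CHK:"):
        return CHKFileURI(uri)
    raise TypeError("not a file URI: %r" % uri)

cap = ("URI:CHK:ihrbeov7lbvoduupd4qblysj7a:"
       "bg5agsdt62jb34hxvxmdsbza6do64f4fg5anxxod2buttbo6udzq:3:10:28733")
print(IFileURI(cap).get_size())          # 28733
print(IFileURI(cap).to_string() == cap)  # True
try:
    IFileURI("URI:DIR2:(writekey):(fingerprint)")
except TypeError:
    print("directory caps are rejected")
```

The to_string() round-trip property holds by construction here, mirroring the invariant stated for the real IURI classes.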