Merge remote-tracking branch 'origin/master' into 3324-humanreadable-python-3

commit e06d41442a

Makefile | 2
@@ -6,7 +6,7 @@ default:
 PYTHON=python
 export PYTHON
-PYFLAKES=pyflakes
+PYFLAKES=flake8
 export PYFLAKES

 SOURCES=src/allmydata static misc setup.py
@@ -10,7 +10,8 @@ function correctly, preserving your privacy and security.
 For full documentation, please see
 http://tahoe-lafs.readthedocs.io/en/latest/ .

-|readthedocs|  |travis|  |circleci|  |codecov|
+|Contributor Covenant|  |readthedocs|  |travis|  |circleci|  |codecov|
+

 INSTALLING
 ==========
@@ -105,3 +106,7 @@ slides.
 .. |codecov| image:: https://codecov.io/github/tahoe-lafs/tahoe-lafs/coverage.svg?branch=master
     :alt: test coverage percentage
     :target: https://codecov.io/github/tahoe-lafs/tahoe-lafs?branch=master
+
+.. |Contributor Covenant| image:: https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg
+    :alt: code of conduct
+    :target: docs/CODE_OF_CONDUCT.md
@@ -0,0 +1,54 @@
+# Contributor Code of Conduct
+
+As contributors and maintainers of this project, and in the interest of
+fostering an open and welcoming community, we pledge to respect all people who
+contribute through reporting issues, posting feature requests, updating
+documentation, submitting pull requests or patches, and other activities.
+
+We are committed to making participation in this project a harassment-free
+experience for everyone, regardless of level of experience, gender, gender
+identity and expression, sexual orientation, disability, personal appearance,
+body size, race, ethnicity, age, religion, or nationality.
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery
+* Personal attacks
+* Trolling or insulting/derogatory comments
+* Public or private harassment
+* Publishing other's private information, such as physical or electronic
+  addresses, without explicit permission
+* Other unethical or unprofessional conduct
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+By adopting this Code of Conduct, project maintainers commit themselves to
+fairly and consistently applying these principles to every aspect of managing
+this project. Project maintainers who do not follow or enforce the Code of
+Conduct may be permanently removed from the project team.
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community.
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting a project maintainer (see below). All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. Maintainers are
+obligated to maintain confidentiality with regard to the reporter of an
+incident.
+
+The following community members have made themselves available for conduct issues:
+
+- Jean-Paul Calderone (jean-paul at leastauthority dot com)
+- meejah (meejah at meejah dot ca)
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 1.3.0, available at
+[http://contributor-covenant.org/version/1/3/0/][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/3/0/
@@ -24,6 +24,7 @@ Contents:

    known_issues
    ../.github/CONTRIBUTING
+   CODE_OF_CONDUCT

    servers
    helper
@@ -0,0 +1 @@
+The Tahoe-LAFS project has adopted a formal code of conduct.
@@ -5,3 +5,8 @@ install = update_version install
 develop = update_version develop
 bdist_egg = update_version bdist_egg
 bdist_wheel = update_version bdist_wheel
+
+[flake8]
+# For now, only use pyflakes errors; flake8 is still helpful because it allows
+# ignoring specific errors/warnings when needed.
+select = F
setup.py | 3

@@ -358,11 +358,12 @@ setup(name="tahoe-lafs", # also set in __init__.py
         # discussion.
         ':sys_platform=="win32"': ["pywin32 != 226"],
         "test": [
+            "flake8",
             # Pin a specific pyflakes so we don't have different folks
             # disagreeing on what is or is not a lint issue. We can bump
             # this version from time to time, but we will do it
             # intentionally.
-            "pyflakes == 2.1.0",
+            "pyflakes == 2.2.0",
             # coverage 5.0 breaks the integration tests in some opaque way.
             # This probably needs to be addressed in a more permanent way
             # eventually...
@@ -12,6 +12,17 @@ from twisted.trial import unittest

 from twisted.internet import defer
 from twisted.application import service
+from twisted.web.template import flattenString
+
+# We need to use `nevow.inevow.IRequest` for now for compatibility
+# with the code in web/common.py. Once nevow bits are gone from
+# web/common.py, we can use `twisted.web.iweb.IRequest` here.
+from nevow.inevow import IRequest
+
+from twisted.web.server import Request
+from twisted.web.test.requesthelper import DummyChannel
+from zope.interface import implementer
+
 from foolscap.api import fireEventually
 import itertools
 from allmydata import interfaces
@@ -36,9 +47,12 @@ from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \
     SHARE_HASH_CHAIN_SIZE
 from allmydata.interfaces import BadWriteEnablerError
 from allmydata.test.common import LoggingServiceParent, ShouldFailMixin
-from allmydata.test.common_web import WebRenderingMixin
 from allmydata.test.no_network import NoNetworkServer
-from allmydata.web.storage import StorageStatus, remove_prefix
+from allmydata.web.storage import (
+    StorageStatus,
+    StorageStatusElement,
+    remove_prefix
+)
 from allmydata.storage_client import (
     _StorageServer,
 )
@@ -2972,6 +2986,39 @@ def remove_tags(s):
     s = re.sub(r'\s+', ' ', s)
     return s

+def renderSynchronously(ss):
+    """
+    Return fully rendered HTML document.
+
+    :param _StorageStatus ss: a StorageStatus instance.
+    """
+    return unittest.TestCase().successResultOf(renderDeferred(ss))
+
+def renderDeferred(ss):
+    """
+    Return a `Deferred` HTML renderer.
+
+    :param _StorageStatus ss: a StorageStatus instance.
+    """
+    elem = StorageStatusElement(ss._storage, ss._nickname)
+    return flattenString(None, elem)
+
+def renderJSON(resource):
+    """Render a JSON from the given resource."""
+
+    @implementer(IRequest)
+    class JSONRequest(Request):
+        """
+        A Request with t=json argument added to it. This is useful to
+        invoke a Resouce.render_JSON() method.
+        """
+        def __init__(self):
+            Request.__init__(self, DummyChannel())
+            self.args = {"t": ["json"]}
+            self.fields = {}
+
+    return resource.render(JSONRequest())
+
 class MyBucketCountingCrawler(BucketCountingCrawler):
     def finished_prefix(self, cycle, prefix):
         BucketCountingCrawler.finished_prefix(self, cycle, prefix)
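The renderJSON helper in the hunk above works by baking a t=json query argument into a request object before handing it to the resource, so the resource's render() method dispatches to its JSON view instead of HTML. A minimal stand-in sketch of that dispatch pattern, with no Twisted dependency (FakeRequest and FakeStatusResource are hypothetical illustrations, not Tahoe-LAFS code):

```python
class FakeRequest:
    """Minimal stand-in for a web request carrying query arguments."""
    def __init__(self, args):
        self.args = args

class FakeStatusResource:
    """Dispatches on the t=... argument, as the storage status resource does."""
    def render(self, req):
        if req.args.get("t") == ["json"]:
            return '{"accepting": true}'
        return "<h1>Status</h1>"

def render_json(resource):
    # Same idea as the diff's renderJSON: bake t=json into the request,
    # then let the resource's own render() pick the JSON branch.
    return resource.render(FakeRequest({"t": ["json"]}))
```

The real helper additionally has to subclass twisted.web.server.Request and declare the nevow IRequest interface so the existing web/common.py code accepts it; the toy version only shows the argument-injection idea.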
@@ -3008,7 +3055,7 @@ class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
         w = StorageStatus(ss)

         # this sample is before the crawler has started doing anything
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
         self.failUnlessIn("Accepting new shares: Yes", s)
@@ -3031,7 +3078,7 @@ class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
         self.failUnlessEqual(state["last-complete-prefix"],
                              ss.bucket_counter.prefixes[0])
         ss.bucket_counter.cpu_slice = 100.0 # finish as fast as possible
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         s = remove_tags(html)
         self.failUnlessIn(" Current crawl ", s)
         self.failUnlessIn(" (next work in ", s)
|
@ -3043,7 +3090,7 @@ class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
|
||||||
d.addCallback(lambda ignored: self.poll(_watch))
|
d.addCallback(lambda ignored: self.poll(_watch))
|
||||||
def _check2(ignored):
|
def _check2(ignored):
|
||||||
ss.bucket_counter.cpu_slice = orig_cpu_slice
|
ss.bucket_counter.cpu_slice = orig_cpu_slice
|
||||||
html = w.renderSynchronously()
|
html = renderSynchronously(w)
|
||||||
s = remove_tags(html)
|
s = remove_tags(html)
|
||||||
self.failUnlessIn("Total buckets: 0 (the number of", s)
|
self.failUnlessIn("Total buckets: 0 (the number of", s)
|
||||||
self.failUnless("Next crawl in 59 minutes" in s or "Next crawl in 60 minutes" in s, s)
|
self.failUnless("Next crawl in 59 minutes" in s or "Next crawl in 60 minutes" in s, s)
|
||||||
|
@ -3105,20 +3152,20 @@ class BucketCounter(unittest.TestCase, pollmixin.PollMixin):
|
||||||
|
|
||||||
def _check_1(ignored):
|
def _check_1(ignored):
|
||||||
# no ETA is available yet
|
# no ETA is available yet
|
||||||
html = w.renderSynchronously()
|
html = renderSynchronously(w)
|
||||||
s = remove_tags(html)
|
s = remove_tags(html)
|
||||||
self.failUnlessIn("complete (next work", s)
|
self.failUnlessIn("complete (next work", s)
|
||||||
|
|
||||||
def _check_2(ignored):
|
def _check_2(ignored):
|
||||||
# one prefix has finished, so an ETA based upon that elapsed time
|
# one prefix has finished, so an ETA based upon that elapsed time
|
||||||
# should be available.
|
# should be available.
|
||||||
html = w.renderSynchronously()
|
html = renderSynchronously(w)
|
||||||
s = remove_tags(html)
|
s = remove_tags(html)
|
||||||
self.failUnlessIn("complete (ETA ", s)
|
self.failUnlessIn("complete (ETA ", s)
|
||||||
|
|
||||||
def _check_3(ignored):
|
def _check_3(ignored):
|
||||||
# two prefixes have finished
|
# two prefixes have finished
|
||||||
html = w.renderSynchronously()
|
html = renderSynchronously(w)
|
||||||
s = remove_tags(html)
|
s = remove_tags(html)
|
||||||
self.failUnlessIn("complete (ETA ", s)
|
self.failUnlessIn("complete (ETA ", s)
|
||||||
d.callback("done")
|
d.callback("done")
|
||||||
|
@ -3161,7 +3208,7 @@ class InstrumentedStorageServer(StorageServer):
|
||||||
class No_ST_BLOCKS_StorageServer(StorageServer):
|
class No_ST_BLOCKS_StorageServer(StorageServer):
|
||||||
LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
|
LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
|
||||||
|
|
||||||
class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
|
class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin):
|
||||||
|
|
||||||
def setUp(self):
|
def setUp(self):
|
||||||
self.s = service.MultiService()
|
self.s = service.MultiService()
|
||||||
|
@@ -3291,7 +3338,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         self.failIfEqual(sr2["configured-diskbytes"], None)
         self.failIfEqual(sr2["original-sharebytes"], None)
         d.addCallback(_after_first_bucket)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html_in_cycle(html):
             s = remove_tags(html)
             self.failUnlessIn("So far, this cycle has examined "
@@ -3366,7 +3413,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         self.failUnlessEqual(count_leases(mutable_si_2), 1)
         self.failUnlessEqual(count_leases(mutable_si_3), 2)
         d.addCallback(_after_first_cycle)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html(html):
             s = remove_tags(html)
             self.failUnlessIn("recovered: 0 shares, 0 buckets "
@@ -3375,7 +3422,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
                               "(2 mutable / 2 immutable),", s)
             self.failUnlessIn("but expiration was not enabled", s)
         d.addCallback(_check_html)
-        d.addCallback(lambda ign: self.render_json(webstatus))
+        d.addCallback(lambda ign: renderJSON(webstatus))
         def _check_json(raw):
             data = json.loads(raw)
             self.failUnlessIn("lease-checker", data)
@@ -3466,7 +3513,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             d2.addCallback(_after_first_bucket)
             return d2
         d.addCallback(_after_first_bucket)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html_in_cycle(html):
             s = remove_tags(html)
             # the first bucket encountered gets deleted, and its prefix
@@ -3525,7 +3572,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             self.failUnless(rec["configured-diskbytes"] >= 0,
                             rec["configured-diskbytes"])
         d.addCallback(_after_first_cycle)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html(html):
             s = remove_tags(html)
             self.failUnlessIn("Expiration Enabled: expired leases will be removed", s)
@@ -3610,7 +3657,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             d2.addCallback(_after_first_bucket)
             return d2
         d.addCallback(_after_first_bucket)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html_in_cycle(html):
             s = remove_tags(html)
             # the first bucket encountered gets deleted, and its prefix
@@ -3671,7 +3718,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             self.failUnless(rec["configured-diskbytes"] >= 0,
                             rec["configured-diskbytes"])
         d.addCallback(_after_first_cycle)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html(html):
             s = remove_tags(html)
             self.failUnlessIn("Expiration Enabled:"
@@ -3733,7 +3780,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         self.failUnlessEqual(count_shares(mutable_si_3), 1)
         self.failUnlessEqual(count_leases(mutable_si_3), 2)
         d.addCallback(_after_first_cycle)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html(html):
             s = remove_tags(html)
             self.failUnlessIn("The following sharetypes will be expired: immutable.", s)
@@ -3790,7 +3837,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         self.failUnlessEqual(count_shares(mutable_si_2), 0)
         self.failUnlessEqual(count_shares(mutable_si_3), 0)
         d.addCallback(_after_first_cycle)
-        d.addCallback(lambda ign: self.render1(webstatus))
+        d.addCallback(lambda ign: renderDeferred(webstatus))
         def _check_html(html):
             s = remove_tags(html)
             self.failUnlessIn("The following sharetypes will be expired: mutable.", s)
@@ -4012,7 +4059,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             self.failUnlessEqual(so_far["corrupt-shares"], [(first_b32, 0)])
         d.addCallback(_after_first_bucket)

-        d.addCallback(lambda ign: self.render_json(w))
+        d.addCallback(lambda ign: renderJSON(w))
         def _check_json(raw):
             data = json.loads(raw)
             # grr. json turns all dict keys into strings.
@@ -4021,7 +4068,7 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             # it also turns all tuples into lists
             self.failUnlessEqual(corrupt_shares, [[first_b32, 0]])
         d.addCallback(_check_json)
-        d.addCallback(lambda ign: self.render1(w))
+        d.addCallback(lambda ign: renderDeferred(w))
         def _check_html(html):
             s = remove_tags(html)
             self.failUnlessIn("Corrupt shares: SI %s shnum 0" % first_b32, s)
@@ -4039,14 +4086,14 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
             self.failUnlessEqual(rec["examined-shares"], 3)
             self.failUnlessEqual(last["corrupt-shares"], [(first_b32, 0)])
         d.addCallback(_after_first_cycle)
-        d.addCallback(lambda ign: self.render_json(w))
+        d.addCallback(lambda ign: renderJSON(w))
         def _check_json_history(raw):
             data = json.loads(raw)
             last = data["lease-checker"]["history"]["0"]
             corrupt_shares = last["corrupt-shares"]
             self.failUnlessEqual(corrupt_shares, [[first_b32, 0]])
         d.addCallback(_check_json_history)
-        d.addCallback(lambda ign: self.render1(w))
+        d.addCallback(lambda ign: renderDeferred(w))
         def _check_html_history(html):
             s = remove_tags(html)
             self.failUnlessIn("Corrupt shares: SI %s shnum 0" % first_b32, s)
@@ -4059,11 +4106,8 @@ class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         d.addBoth(_cleanup)
         return d

-    def render_json(self, page):
-        d = self.render1(page, args={"t": ["json"]})
-        return d

-class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
+class WebStatus(unittest.TestCase, pollmixin.PollMixin):

     def setUp(self):
         self.s = service.MultiService()
@@ -4073,7 +4117,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):

     def test_no_server(self):
         w = StorageStatus(None)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>No Storage Server Running</h1>", html)

     def test_status(self):
@@ -4083,7 +4127,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         ss = StorageServer(basedir, nodeid)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss, "nickname")
-        d = self.render1(w)
+        d = renderDeferred(w)
         def _check_html(html):
             self.failUnlessIn("<h1>Storage Server Status</h1>", html)
             s = remove_tags(html)
@@ -4092,7 +4136,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         self.failUnlessIn("Accepting new shares: Yes", s)
         self.failUnlessIn("Reserved space: - 0 B (0)", s)
         d.addCallback(_check_html)
-        d.addCallback(lambda ign: self.render_json(w))
+        d.addCallback(lambda ign: renderJSON(w))
         def _check_json(raw):
             data = json.loads(raw)
             s = data["stats"]
@@ -4103,9 +4147,6 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         d.addCallback(_check_json)
         return d

-    def render_json(self, page):
-        d = self.render1(page, args={"t": ["json"]})
-        return d

     def test_status_no_disk_stats(self):
         def call_get_disk_stats(whichdir, reserved_space=0):
@@ -4119,7 +4160,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         ss = StorageServer(basedir, "\x00" * 20)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
         self.failUnlessIn("Accepting new shares: Yes", s)
@@ -4139,7 +4180,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         ss = StorageServer(basedir, "\x00" * 20)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
         self.failUnlessIn("Accepting new shares: No", s)
@@ -4175,7 +4216,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):

         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)

         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
@@ -4193,7 +4234,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
         self.failUnlessIn("Accepting new shares: No", s)
@@ -4204,7 +4245,7 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
         self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)
@@ -4215,16 +4256,16 @@ class WebStatus(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):
         ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6)
         ss.setServiceParent(self.s)
         w = StorageStatus(ss)
-        html = w.renderSynchronously()
+        html = renderSynchronously(w)
         self.failUnlessIn("<h1>Storage Server Status</h1>", html)
         s = remove_tags(html)
         self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s)

     def test_util(self):
-        w = StorageStatus(None)
-        self.failUnlessEqual(w.render_space(None, None), "?")
-        self.failUnlessEqual(w.render_space(None, 10e6), "10000000")
-        self.failUnlessEqual(w.render_abbrev_space(None, None), "?")
-        self.failUnlessEqual(w.render_abbrev_space(None, 10e6), "10.00 MB")
+        w = StorageStatusElement(None, None)
+        self.failUnlessEqual(w.render_space(None), "?")
+        self.failUnlessEqual(w.render_space(10e6), "10000000")
+        self.failUnlessEqual(w.render_abbrev_space(None), "?")
+        self.failUnlessEqual(w.render_abbrev_space(10e6), "10.00 MB")
         self.failUnlessEqual(remove_prefix("foo.bar", "foo."), "bar")
         self.failUnlessEqual(remove_prefix("foo.bar", "baz."), None)
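The test_util assertions above pin down the contract of remove_prefix: strip a matching prefix, or return None when the string does not start with it. A plausible stdlib-only re-implementation that satisfies exactly those assertions (the canonical version lives in allmydata.web.storage; this is an illustration):

```python
def remove_prefix(s, prefix):
    """Return s without its leading prefix, or None if s lacks that prefix."""
    if not s.startswith(prefix):
        return None
    return s[len(prefix):]
```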
@@ -23,6 +23,16 @@ class Util(ShouldFailMixin, testutil.ReallyEqualMixin, unittest.TestCase):
         self.failUnlessReallyEqual(common.abbreviate_time(0.000123), "123us")
         self.failUnlessReallyEqual(common.abbreviate_time(-123000), "-123000000000us")

+        self.failUnlessReallyEqual(common.abbreviate_time(None), "")
+        self.failUnlessReallyEqual(common.abbreviate_time(2.5), "2.50s")
+        self.failUnlessReallyEqual(common.abbreviate_time(0.25), "250ms")
+        self.failUnlessReallyEqual(common.abbreviate_time(0.0021), "2.1ms")
+        self.failUnlessReallyEqual(common.abbreviate_time(0.000123), "123us")
+        self.failUnlessReallyEqual(common.abbreviate_rate(None), "")
+        self.failUnlessReallyEqual(common.abbreviate_rate(2500000), "2.50MBps")
+        self.failUnlessReallyEqual(common.abbreviate_rate(30100), "30.1kBps")
+        self.failUnlessReallyEqual(common.abbreviate_rate(123), "123Bps")
+
     def test_compute_rate(self):
         self.failUnlessReallyEqual(common.compute_rate(None, None), None)
         self.failUnlessReallyEqual(common.compute_rate(None, 1), None)
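For readers skimming the hunk above, the expected strings pin down how the abbreviation helpers behave. A minimal sketch, inferred purely from these assertions (this is not Tahoe's actual `allmydata.web.common` implementation; thresholds and format strings are guesses that happen to satisfy the tests):

```python
def abbreviate_time(s):
    # Inferred from the test expectations above, not the real implementation.
    if s is None:
        return ""
    if s >= 1.0:
        return "%.2fs" % s                 # 2.5 -> "2.50s"
    if s >= 0.01:
        return "%dms" % (1000 * s)         # 0.25 -> "250ms"
    if s >= 0.001:
        return "%.1fms" % (1000 * s)       # 0.0021 -> "2.1ms"
    return "%dus" % round(1000000 * s)     # 0.000123 -> "123us"

def abbreviate_rate(r):
    # Inferred from the test expectations above, not the real implementation.
    if r is None:
        return ""
    if r > 1000000:
        return "%1.2fMBps" % (r / 1000000.0)  # 2500000 -> "2.50MBps"
    if r > 1000:
        return "%.1fkBps" % (r / 1000.0)      # 30100 -> "30.1kBps"
    return "%.0fBps" % r                      # 123 -> "123Bps"

def compute_rate(num_bytes, seconds):
    # Returns None when either input is missing, as test_compute_rate expects.
    if num_bytes is None or seconds is None or seconds == 0:
        return None
    return 1.0 * num_bytes / seconds
```

Note that the negative-input case in the diff (`-123000` seconds) falls through to the microsecond branch, which is why the test expects the odd-looking `"-123000000000us"`.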
@@ -954,8 +954,9 @@ class Web(WebMixin, WebErrorMixin, testutil.StallMixin, testutil.ReallyEqualMixi
     def test_storage(self):
         d = self.GET("/storage")
         def _check(res):
-            self.failUnlessIn('Storage Server Status', res)
-            self.failUnlessIn(FAVICON_MARKUP, res)
+            soup = BeautifulSoup(res, 'html5lib')
+            assert_soup_has_text(self, soup, 'Storage Server Status')
+            assert_soup_has_favicon(self, soup)
             res_u = res.decode('utf-8')
             self.failUnlessIn(u'<li>Server Nickname: <span class="nickname mine">fake_nickname \u263A</span></li>', res_u)
         d.addCallback(_check)
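The hunk above swaps raw substring checks for BeautifulSoup-based helpers (`assert_soup_has_text`, `assert_soup_has_favicon`), comparing against the document's text content rather than its exact markup. Stripped of the Tahoe test helpers, the underlying idea can be sketched with only the standard library (`soup_like_text` is a hypothetical name, not part of Tahoe or bs4):

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects the text nodes of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def soup_like_text(html):
    # Roughly what asserting on soup text gives you: markup stripped,
    # whitespace normalized, so tests survive cosmetic markup changes.
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join("".join(parser.parts).split())
```

For example, `soup_like_text("<h1>Storage Server Status</h1>")` yields `"Storage Server Status"` regardless of which tag wraps the heading.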
@@ -1046,17 +1047,6 @@ class Web(WebMixin, WebErrorMixin, testutil.StallMixin, testutil.ReallyEqualMixi
         self.failUnlessReallyEqual(drrm.render_rate(None, 30100), "30.1kBps")
         self.failUnlessReallyEqual(drrm.render_rate(None, 123), "123Bps")

-        urrm = status.UploadResultsRendererMixin()
-        self.failUnlessReallyEqual(urrm.render_time(None, None), "")
-        self.failUnlessReallyEqual(urrm.render_time(None, 2.5), "2.50s")
-        self.failUnlessReallyEqual(urrm.render_time(None, 0.25), "250ms")
-        self.failUnlessReallyEqual(urrm.render_time(None, 0.0021), "2.1ms")
-        self.failUnlessReallyEqual(urrm.render_time(None, 0.000123), "123us")
-        self.failUnlessReallyEqual(urrm.render_rate(None, None), "")
-        self.failUnlessReallyEqual(urrm.render_rate(None, 2500000), "2.50MBps")
-        self.failUnlessReallyEqual(urrm.render_rate(None, 30100), "30.1kBps")
-        self.failUnlessReallyEqual(urrm.render_rate(None, 123), "123Bps")
-
     def test_GET_FILEURL(self):
         d = self.GET(self.public_url + "/foo/bar.txt")
         d.addCallback(self.failUnlessIsBarDotTxt)
@@ -35,187 +35,236 @@ class RateAndTimeMixin(object):
     def render_rate(self, ctx, data):
         return abbreviate_rate(data)

-class UploadResultsRendererMixin(RateAndTimeMixin):
+class UploadResultsRendererMixin(Element):
     # this requires a method named 'upload_results'

-    def render_pushed_shares(self, ctx, data):
+    @renderer
+    def pushed_shares(self, req, tag):
         d = self.upload_results()
-        d.addCallback(lambda res: res.get_pushed_shares())
+        d.addCallback(lambda res: str(res.get_pushed_shares()))
         return d

-    def render_preexisting_shares(self, ctx, data):
+    @renderer
+    def preexisting_shares(self, req, tag):
         d = self.upload_results()
-        d.addCallback(lambda res: res.get_preexisting_shares())
+        d.addCallback(lambda res: str(res.get_preexisting_shares()))
         return d

-    def render_sharemap(self, ctx, data):
+    @renderer
+    def sharemap(self, req, tag):
         d = self.upload_results()
         d.addCallback(lambda res: res.get_sharemap())
         def _render(sharemap):
             if sharemap is None:
                 return "None"
-            l = T.ul()
+            ul = tags.ul()
             for shnum, servers in sorted(sharemap.items()):
                 server_names = ', '.join([s.get_name() for s in servers])
-                l[T.li["%d -> placed on [%s]" % (shnum, server_names)]]
-            return l
+                ul(tags.li("%d -> placed on [%s]" % (shnum, server_names)))
+            return ul
         d.addCallback(_render)
         return d

-    def render_servermap(self, ctx, data):
+    @renderer
+    def servermap(self, req, tag):
         d = self.upload_results()
         d.addCallback(lambda res: res.get_servermap())
         def _render(servermap):
             if servermap is None:
                 return "None"
-            l = T.ul()
+            ul = tags.ul()
             for server, shnums in sorted(servermap.items()):
                 shares_s = ",".join(["#%d" % shnum for shnum in shnums])
-                l[T.li["[%s] got share%s: %s" % (server.get_name(),
-                                                 plural(shnums), shares_s)]]
-            return l
+                ul(tags.li("[%s] got share%s: %s" % (server.get_name(),
+                                                     plural(shnums), shares_s)))
+            return ul
         d.addCallback(_render)
         return d

-    def data_file_size(self, ctx, data):
+    @renderer
+    def file_size(self, req, tag):
         d = self.upload_results()
-        d.addCallback(lambda res: res.get_file_size())
+        d.addCallback(lambda res: str(res.get_file_size()))
         return d

     def _get_time(self, name):
         d = self.upload_results()
-        d.addCallback(lambda res: res.get_timings().get(name))
+        d.addCallback(lambda res: abbreviate_time(res.get_timings().get(name)))
         return d

-    def data_time_total(self, ctx, data):
-        return self._get_time("total")
+    @renderer
+    def time_total(self, req, tag):
+        return tag(self._get_time("total"))

-    def data_time_storage_index(self, ctx, data):
-        return self._get_time("storage_index")
+    @renderer
+    def time_storage_index(self, req, tag):
+        return tag(self._get_time("storage_index"))

-    def data_time_contacting_helper(self, ctx, data):
-        return self._get_time("contacting_helper")
+    @renderer
+    def time_contacting_helper(self, req, tag):
+        return tag(self._get_time("contacting_helper"))

-    def data_time_cumulative_fetch(self, ctx, data):
-        return self._get_time("cumulative_fetch")
+    @renderer
+    def time_cumulative_fetch(self, req, tag):
+        return tag(self._get_time("cumulative_fetch"))

-    def data_time_helper_total(self, ctx, data):
-        return self._get_time("helper_total")
+    @renderer
+    def time_helper_total(self, req, tag):
+        return tag(self._get_time("helper_total"))

-    def data_time_peer_selection(self, ctx, data):
-        return self._get_time("peer_selection")
+    @renderer
+    def time_peer_selection(self, req, tag):
+        return tag(self._get_time("peer_selection"))

-    def data_time_total_encode_and_push(self, ctx, data):
-        return self._get_time("total_encode_and_push")
+    @renderer
+    def time_total_encode_and_push(self, req, tag):
+        return tag(self._get_time("total_encode_and_push"))

-    def data_time_cumulative_encoding(self, ctx, data):
-        return self._get_time("cumulative_encoding")
+    @renderer
+    def time_cumulative_encoding(self, req, tag):
+        return tag(self._get_time("cumulative_encoding"))

-    def data_time_cumulative_sending(self, ctx, data):
-        return self._get_time("cumulative_sending")
+    @renderer
+    def time_cumulative_sending(self, req, tag):
+        return tag(self._get_time("cumulative_sending"))

-    def data_time_hashes_and_close(self, ctx, data):
-        return self._get_time("hashes_and_close")
+    @renderer
+    def time_hashes_and_close(self, req, tag):
+        return tag(self._get_time("hashes_and_close"))

     def _get_rate(self, name):
         d = self.upload_results()
         def _convert(r):
             file_size = r.get_file_size()
             duration = r.get_timings().get(name)
-            return compute_rate(file_size, duration)
+            return abbreviate_rate(compute_rate(file_size, duration))
         d.addCallback(_convert)
         return d

-    def data_rate_total(self, ctx, data):
-        return self._get_rate("total")
+    @renderer
+    def rate_total(self, req, tag):
+        return tag(self._get_rate("total"))

-    def data_rate_storage_index(self, ctx, data):
-        return self._get_rate("storage_index")
+    @renderer
+    def rate_storage_index(self, req, tag):
+        return tag(self._get_rate("storage_index"))

-    def data_rate_encode(self, ctx, data):
-        return self._get_rate("cumulative_encoding")
+    @renderer
+    def rate_encode(self, req, tag):
+        return tag(self._get_rate("cumulative_encoding"))

-    def data_rate_push(self, ctx, data):
+    @renderer
+    def rate_push(self, req, tag):
         return self._get_rate("cumulative_sending")

-    def data_rate_encode_and_push(self, ctx, data):
+    @renderer
+    def rate_encode_and_push(self, req, tag):
         d = self.upload_results()
         def _convert(r):
             file_size = r.get_file_size()
             time1 = r.get_timings().get("cumulative_encoding")
             time2 = r.get_timings().get("cumulative_sending")
             if (time1 is None or time2 is None):
-                return None
+                return abbreviate_rate(None)
             else:
-                return compute_rate(file_size, time1+time2)
+                return abbreviate_rate(compute_rate(file_size, time1+time2))
         d.addCallback(_convert)
         return d

-    def data_rate_ciphertext_fetch(self, ctx, data):
+    @renderer
+    def rate_ciphertext_fetch(self, req, tag):
         d = self.upload_results()
         def _convert(r):
             fetch_size = r.get_ciphertext_fetched()
             duration = r.get_timings().get("cumulative_fetch")
-            return compute_rate(fetch_size, duration)
+            return abbreviate_rate(compute_rate(fetch_size, duration))
         d.addCallback(_convert)
         return d

-class UploadStatusPage(UploadResultsRendererMixin, rend.Page):
-    docFactory = getxmlfile("upload-status.xhtml")
-
-    def __init__(self, data):
-        rend.Page.__init__(self, data)
-        self.upload_status = data
+class UploadStatusPage(Resource, object):
+    """Renders /status/up-%d."""
+
+    def __init__(self, upload_status):
+        """
+        :param IUploadStatus upload_status: stats provider.
+        """
+        super(UploadStatusPage, self).__init__()
+        self._upload_status = upload_status
+
+    def render_GET(self, req):
+        elem = UploadStatusElement(self._upload_status)
+        return renderElement(req, elem)
+
+
+class UploadStatusElement(UploadResultsRendererMixin):
+
+    loader = XMLFile(FilePath(__file__).sibling("upload-status.xhtml"))
+
+    def __init__(self, upload_status):
+        super(UploadStatusElement, self).__init__()
+        self._upload_status = upload_status

     def upload_results(self):
-        return defer.maybeDeferred(self.upload_status.get_results)
+        return defer.maybeDeferred(self._upload_status.get_results)

-    def render_results(self, ctx, data):
+    @renderer
+    def results(self, req, tag):
         d = self.upload_results()
         def _got_results(results):
             if results:
-                return ctx.tag
+                return tag
             return ""
         d.addCallback(_got_results)
         return d

-    def render_started(self, ctx, data):
-        started_s = render_time(data.get_started())
-        return started_s
+    @renderer
+    def started(self, req, tag):
+        started_s = render_time(self._upload_status.get_started())
+        return tag(started_s)

-    def render_si(self, ctx, data):
-        si_s = base32.b2a_or_none(data.get_storage_index())
+    @renderer
+    def si(self, req, tag):
+        si_s = base32.b2a_or_none(self._upload_status.get_storage_index())
         if si_s is None:
             si_s = "(None)"
-        return si_s
+        return tag(str(si_s))

-    def render_helper(self, ctx, data):
-        return {True: "Yes",
-                False: "No"}[data.using_helper()]
+    @renderer
+    def helper(self, req, tag):
+        return tag({True: "Yes",
+                    False: "No"}[self._upload_status.using_helper()])

-    def render_total_size(self, ctx, data):
-        size = data.get_size()
+    @renderer
+    def total_size(self, req, tag):
+        size = self._upload_status.get_size()
         if size is None:
             return "(unknown)"
-        return size
+        return tag(str(size))

-    def render_progress_hash(self, ctx, data):
-        progress = data.get_progress()[0]
+    @renderer
+    def progress_hash(self, req, tag):
+        progress = self._upload_status.get_progress()[0]
+        # TODO: make an ascii-art bar
+        return tag("%.1f%%" % (100.0 * progress))
+
+    @renderer
+    def progress_ciphertext(self, req, tag):
+        progress = self._upload_status.get_progress()[1]
         # TODO: make an ascii-art bar
         return "%.1f%%" % (100.0 * progress)

-    def render_progress_ciphertext(self, ctx, data):
-        progress = data.get_progress()[1]
+    @renderer
+    def progress_encode_push(self, req, tag):
+        progress = self._upload_status.get_progress()[2]
         # TODO: make an ascii-art bar
-        return "%.1f%%" % (100.0 * progress)
+        return tag("%.1f%%" % (100.0 * progress))

-    def render_progress_encode_push(self, ctx, data):
-        progress = data.get_progress()[2]
-        # TODO: make an ascii-art bar
-        return "%.1f%%" % (100.0 * progress)
-
-    def render_status(self, ctx, data):
-        return data.get_status()
+    @renderer
+    def status(self, req, tag):
+        return tag(self._upload_status.get_status())

 class DownloadResultsRendererMixin(RateAndTimeMixin):
     # this requires a method named 'download_results'
@@ -334,7 +383,7 @@ class DownloadResultsRendererMixin(RateAndTimeMixin):
             l = T.ul()
             for peerid in sorted(per_server.keys()):
                 peerid_s = idlib.shortnodeid_b2a(peerid)
-                times_s = ", ".join([self.render_time(None, t)
+                times_s = ", ".join([abbreviate_time(t)
                                      for t in per_server[peerid]])
                 l[T.li["[%s]: %s" % (peerid_s, times_s)]]
             return T.li["Per-Server Segment Fetch Response Times: ", l]
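Most of the status.py hunks above follow one mechanical pattern: nevow's `render_*`/`data_*` methods, which received a `ctx` and filled tags with bracket syntax (`l[T.li[...]]`), become `twisted.web.template` `@renderer` methods that receive `(request, tag)`, call tags like functions, and return the filled tag, pulling state from the element instead of a `data` argument. A toy illustration of the new shape (the `renderer` decorator and `Tag` class here are simplified stand-ins, not the real twisted.web.template API):

```python
def renderer(method):
    # Stand-in for twisted.web.template.renderer: just marks the method.
    method._is_renderer = True
    return method

class Tag(object):
    # Stand-in for a template tag: calling it appends children, returns self.
    def __init__(self):
        self.children = []

    def __call__(self, *contents):
        self.children.extend(contents)
        return self

class UploadStatusElementSketch(object):
    def __init__(self, upload_status):
        self._upload_status = upload_status

    # Old nevow style (what the diff removes):
    #     def render_status(self, ctx, data):
    #         return data.get_status()
    # New style: read state from the element itself and fill the tag.
    @renderer
    def status(self, req, tag):
        return tag(str(self._upload_status.get_status()))

class FakeStatus(object):
    """Hypothetical status provider, standing in for IUploadStatus."""
    def get_status(self):
        return "idle"
```

The page/element split shown in the diff (`UploadStatusPage.render_GET` building an `UploadStatusElement` and handing it to `renderElement`) then connects such renderers to slots in the XHTML template.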
@@ -1,10 +1,16 @@

 import time, json
-from nevow import rend, tags as T
+from twisted.python.filepath import FilePath
+from twisted.web.template import (
+    Element,
+    XMLFile,
+    tags as T,
+    renderer,
+    renderElement
+)
 from allmydata.web.common import (
-    getxmlfile,
     abbreviate_time,
-    MultiFormatPage,
+    MultiFormatResource
 )
 from allmydata.util.abbreviate import abbreviate_space
 from allmydata.util import time_format, idlib
@@ -16,91 +22,108 @@ def remove_prefix(s, prefix):
     return s[len(prefix):]


-class StorageStatus(MultiFormatPage):
-    docFactory = getxmlfile("storage_status.xhtml")
-    # the default 'data' argument is the StorageServer instance
+class StorageStatusElement(Element):
+    """Class to render a storage status page."""
+
+    loader = XMLFile(FilePath(__file__).sibling("storage_status.xhtml"))

     def __init__(self, storage, nickname=""):
-        rend.Page.__init__(self, storage)
-        self.storage = storage
-        self.nickname = nickname
+        """
+        :param _StorageServer storage: data about storage.
+        :param string nickname: friendly name for storage.
+        """
+        super(StorageStatusElement, self).__init__()
+        self._storage = storage
+        self._nickname = nickname

-    def render_JSON(self, req):
-        req.setHeader("content-type", "text/plain")
-        d = {"stats": self.storage.get_stats(),
-             "bucket-counter": self.storage.bucket_counter.get_state(),
-             "lease-checker": self.storage.lease_checker.get_state(),
-             "lease-checker-progress": self.storage.lease_checker.get_progress(),
+    @renderer
+    def nickname(self, req, tag):
+        return tag(self._nickname)
+
+    @renderer
+    def nodeid(self, req, tag):
+        return tag(idlib.nodeid_b2a(self._storage.my_nodeid))
+
+    def _get_storage_stat(self, key):
+        """Get storage server statistics.
+
+        Storage Server keeps a dict that contains various usage and
+        latency statistics. The dict looks like this:
+
+        {
+            'storage_server.accepting_immutable_shares': 1,
+            'storage_server.allocated': 0,
+            'storage_server.disk_avail': 106539192320,
+            'storage_server.disk_free_for_nonroot': 106539192320,
+            'storage_server.disk_free_for_root': 154415284224,
+            'storage_server.disk_total': 941088460800,
+            'storage_server.disk_used': 786673176576,
+            'storage_server.latencies.add-lease.01_0_percentile': None,
+            'storage_server.latencies.add-lease.10_0_percentile': None,
+            ...
         }
-        return json.dumps(d, indent=1) + "\n"

-    def data_nickname(self, ctx, storage):
-        return self.nickname
-    def data_nodeid(self, ctx, storage):
-        return idlib.nodeid_b2a(self.storage.my_nodeid)
+        ``StorageServer.get_stats()`` returns the above dict. Storage
+        status page uses a subset of the items in the dict, concerning
+        disk usage.

-    def render_storage_running(self, ctx, storage):
-        if storage:
-            return ctx.tag
-        else:
-            return T.h1["No Storage Server Running"]
+        :param str key: storage server statistic we want to know.
+        """
+        return self._storage.get_stats().get(key)

-    def render_bool(self, ctx, data):
-        return {True: "Yes", False: "No"}[bool(data)]
-
-    def render_abbrev_space(self, ctx, size):
+    def render_abbrev_space(self, size):
         if size is None:
-            return "?"
+            return u"?"
         return abbreviate_space(size)

-    def render_space(self, ctx, size):
+    def render_space(self, size):
         if size is None:
-            return "?"
-        return "%d" % size
+            return u"?"
+        return u"%d" % size

-    def data_stats(self, ctx, data):
-        # FYI: 'data' appears to be self, rather than the StorageServer
-        # object in self.original that gets passed to render_* methods. I
-        # still don't understand Nevow.
-
-        # Nevow has nevow.accessors.DictionaryContainer: Any data= directive
-        # that appears in a context in which the current data is a dictionary
-        # will be looked up as keys in that dictionary. So if data_stats()
-        # returns a dictionary, then we can use something like this:
-        #
-        # <ul n:data="stats">
-        #  <li>disk_total: <span n:render="abbrev" n:data="disk_total" /></li>
-        # </ul>
-
-        # to use get_stats()["storage_server.disk_total"] . However,
-        # DictionaryContainer does a raw d[] instead of d.get(), so any
-        # missing keys will cause an error, even if the renderer can tolerate
-        # None values. To overcome this, we either need a dict-like object
-        # that always returns None for unknown keys, or we must pre-populate
-        # our dict with those missing keys, or we should get rid of data_
-        # methods that return dicts (or find some way to override Nevow's
-        # handling of dictionaries).
-
-        d = dict([ (remove_prefix(k, "storage_server."), v)
-                   for k,v in self.storage.get_stats().items() ])
-        d.setdefault("disk_total", None)
-        d.setdefault("disk_used", None)
-        d.setdefault("disk_free_for_root", None)
-        d.setdefault("disk_free_for_nonroot", None)
-        d.setdefault("reserved_space", None)
-        d.setdefault("disk_avail", None)
-        return d
-
-    def data_last_complete_bucket_count(self, ctx, data):
-        s = self.storage.bucket_counter.get_state()
+    @renderer
+    def storage_stats(self, req, tag):
+        # Render storage status table that appears near the top of the page.
+        total = self._get_storage_stat("storage_server.disk_total")
+        used = self._get_storage_stat("storage_server.disk_used")
+        free_root = self._get_storage_stat("storage_server.disk_free_for_root")
+        free_nonroot = self._get_storage_stat("storage_server.disk_free_for_nonroot")
+        reserved = self._get_storage_stat("storage_server.reserved_space")
+        available = self._get_storage_stat("storage_server.disk_avail")
+
+        tag.fillSlots(
+            disk_total = self.render_space(total),
+            disk_total_abbrev = self.render_abbrev_space(total),
+            disk_used = self.render_space(used),
+            disk_used_abbrev = self.render_abbrev_space(used),
+            disk_free_for_root = self.render_space(free_root),
+            disk_free_for_root_abbrev = self.render_abbrev_space(free_root),
+            disk_free_for_nonroot = self.render_space(free_nonroot),
+            disk_free_for_nonroot_abbrev = self.render_abbrev_space(free_nonroot),
+            reserved_space = self.render_space(reserved),
+            reserved_space_abbrev = self.render_abbrev_space(reserved),
+            disk_avail = self.render_space(available),
+            disk_avail_abbrev = self.render_abbrev_space(available)
+        )
+        return tag
+
+    @renderer
+    def accepting_immutable_shares(self, req, tag):
+        accepting = self._get_storage_stat("storage_server.accepting_immutable_shares")
+        return tag({True: "Yes", False: "No"}[bool(accepting)])
+
+    @renderer
+    def last_complete_bucket_count(self, req, tag):
+        s = self._storage.bucket_counter.get_state()
         count = s.get("last-complete-bucket-count")
         if count is None:
-            return "Not computed yet"
-        return count
+            return tag("Not computed yet")
+        return tag(str(count))

-    def render_count_crawler_status(self, ctx, storage):
-        p = self.storage.bucket_counter.get_progress()
-        return ctx.tag[self.format_crawler_progress(p)]
+    @renderer
+    def count_crawler_status(self, req, tag):
+        p = self._storage.bucket_counter.get_progress()
+        return tag(self.format_crawler_progress(p))

     def format_crawler_progress(self, p):
         cycletime = p["estimated-time-per-cycle"]
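The long deleted Nevow comment and the new `_get_storage_stat` docstring in the hunk above describe the same pitfall: the stats dict may be missing keys, and Nevow's `DictionaryContainer` did a raw `d[]` lookup, forcing the old code to pre-populate defaults with `setdefault`. The new code resolves this with `dict.get()` so missing keys surface as `None`, which `render_space` then maps to `"?"`. A minimal stdlib illustration of that design choice (sample values taken from the docstring; `get_storage_stat` is a module-level sketch of the method):

```python
# A stats dict that lacks some keys, as can happen on platforms where
# not every disk statistic is available.
stats = {
    "storage_server.disk_total": 941088460800,
    "storage_server.disk_used": 786673176576,
}

def get_storage_stat(key):
    # dict.get() yields None for missing keys instead of raising KeyError,
    # so the renderer below can tolerate absent statistics.
    return stats.get(key)

def render_space(size):
    if size is None:
        return u"?"
    return u"%d" % size
```

With this, `render_space(get_storage_stat("storage_server.disk_avail"))` degrades gracefully to `"?"` instead of crashing the page.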
@@ -127,56 +150,52 @@ class StorageStatus(MultiFormatPage):
         return ["Next crawl in %s" % abbreviate_time(soon),
                 cycletime_s]

-    def render_lease_expiration_enabled(self, ctx, data):
-        lc = self.storage.lease_checker
-        if lc.expiration_enabled:
-            return ctx.tag["Enabled: expired leases will be removed"]
-        else:
-            return ctx.tag["Disabled: scan-only mode, no leases will be removed"]
+    @renderer
+    def storage_running(self, req, tag):
+        if self._storage:
+            return tag
+        return T.h1("No Storage Server Running")

-    def render_lease_expiration_mode(self, ctx, data):
-        lc = self.storage.lease_checker
+    @renderer
+    def lease_expiration_enabled(self, req, tag):
+        lc = self._storage.lease_checker
+        if lc.expiration_enabled:
+            return tag("Enabled: expired leases will be removed")
+        else:
+            return tag("Disabled: scan-only mode, no leases will be removed")
+
+    @renderer
+    def lease_expiration_mode(self, req, tag):
+        lc = self._storage.lease_checker
         if lc.mode == "age":
             if lc.override_lease_duration is None:
-                ctx.tag["Leases will expire naturally, probably 31 days after "
-                        "creation or renewal."]
+                tag("Leases will expire naturally, probably 31 days after "
+                    "creation or renewal.")
             else:
-                ctx.tag["Leases created or last renewed more than %s ago "
-                        "will be considered expired."
-                        % abbreviate_time(lc.override_lease_duration)]
+                tag("Leases created or last renewed more than %s ago "
+                    "will be considered expired."
+                    % abbreviate_time(lc.override_lease_duration))
         else:
             assert lc.mode == "cutoff-date"
             localizedutcdate = time.strftime("%d-%b-%Y", time.gmtime(lc.cutoff_date))
             isoutcdate = time_format.iso_utc_date(lc.cutoff_date)
-            ctx.tag["Leases created or last renewed before %s (%s) UTC "
-                    "will be considered expired." % (isoutcdate, localizedutcdate, )]
+            tag("Leases created or last renewed before %s (%s) UTC "
+                "will be considered expired."
+                % (isoutcdate, localizedutcdate, ))
         if len(lc.mode) > 2:
-            ctx.tag[" The following sharetypes will be expired: ",
-                    " ".join(sorted(lc.sharetypes_to_expire)), "."]
-        return ctx.tag
+            tag(" The following sharetypes will be expired: ",
+                " ".join(sorted(lc.sharetypes_to_expire)), ".")
+        return tag

-    def format_recovered(self, sr, a):
-        def maybe(d):
-            if d is None:
-                return "?"
-            return "%d" % d
-        return "%s shares, %s buckets (%s mutable / %s immutable), %s (%s / %s)" % \
-            (maybe(sr["%s-shares" % a]),
-             maybe(sr["%s-buckets" % a]),
-             maybe(sr["%s-buckets-mutable" % a]),
-             maybe(sr["%s-buckets-immutable" % a]),
-             abbreviate_space(sr["%s-diskbytes" % a]),
-             abbreviate_space(sr["%s-diskbytes-mutable" % a]),
-             abbreviate_space(sr["%s-diskbytes-immutable" % a]),
-             )
-
-    def render_lease_current_cycle_progress(self, ctx, data):
-        lc = self.storage.lease_checker
+    @renderer
+    def lease_current_cycle_progress(self, req, tag):
+        lc = self._storage.lease_checker
         p = lc.get_progress()
-        return ctx.tag[self.format_crawler_progress(p)]
+        return tag(self.format_crawler_progress(p))

-    def render_lease_current_cycle_results(self, ctx, data):
-        lc = self.storage.lease_checker
+    @renderer
+    def lease_current_cycle_results(self, req, tag):
+        lc = self._storage.lease_checker
         p = lc.get_progress()
         if not p["cycle-in-progress"]:
             return ""
@@ -190,7 +209,7 @@ class StorageStatus(MultiFormatPage):
         p = T.ul()
         def add(*pieces):
-            p[T.li[pieces]]
+            p(T.li(pieces))

         def maybe(d):
             if d is None:
@@ -226,29 +245,29 @@ class StorageStatus(MultiFormatPage):

         if so_far["corrupt-shares"]:
             add("Corrupt shares:",
-                T.ul[ [T.li[ ["SI %s shnum %d" % corrupt_share
+                T.ul( (T.li( ["SI %s shnum %d" % corrupt_share
                               for corrupt_share in so_far["corrupt-shares"] ]
-                      ]]])
-
-        return ctx.tag["Current cycle:", p]
+                      ))))
+        return tag("Current cycle:", p)

-    def render_lease_last_cycle_results(self, ctx, data):
-        lc = self.storage.lease_checker
+    @renderer
+    def lease_last_cycle_results(self, req, tag):
+        lc = self._storage.lease_checker
         h = lc.get_state()["history"]
         if not h:
             return ""
         last = h[max(h.keys())]

         start, end = last["cycle-start-finish-times"]
-        ctx.tag["Last complete cycle (which took %s and finished %s ago)"
-                " recovered: " % (abbreviate_time(end-start),
-                                  abbreviate_time(time.time() - end)),
-                self.format_recovered(last["space-recovered"], "actual")
-                ]
+        tag("Last complete cycle (which took %s and finished %s ago)"
+            " recovered: " % (abbreviate_time(end-start),
+                              abbreviate_time(time.time() - end)),
+            self.format_recovered(last["space-recovered"], "actual"))

         p = T.ul()

         def add(*pieces):
-            p[T.li[pieces]]
+            p(T.li(pieces))

         saw = self.format_recovered(last["space-recovered"], "examined")
         add("and saw a total of ", saw)
@ -260,8 +279,42 @@ class StorageStatus(MultiFormatPage):
         if last["corrupt-shares"]:
             add("Corrupt shares:",
-                T.ul[ [T.li[ ["SI %s shnum %d" % corrupt_share
+                T.ul( (T.li( ["SI %s shnum %d" % corrupt_share
                               for corrupt_share in last["corrupt-shares"] ]
-                      ]]])
+                      ))))

-        return ctx.tag[p]
+        return tag(p)

+    @staticmethod
+    def format_recovered(sr, a):
+        def maybe(d):
+            if d is None:
+                return "?"
+            return "%d" % d
+        return "%s shares, %s buckets (%s mutable / %s immutable), %s (%s / %s)" % \
+               (maybe(sr["%s-shares" % a]),
+                maybe(sr["%s-buckets" % a]),
+                maybe(sr["%s-buckets-mutable" % a]),
+                maybe(sr["%s-buckets-immutable" % a]),
+                abbreviate_space(sr["%s-diskbytes" % a]),
+                abbreviate_space(sr["%s-diskbytes-mutable" % a]),
+                abbreviate_space(sr["%s-diskbytes-immutable" % a]),
+                )
+
+
+class StorageStatus(MultiFormatResource):
+    def __init__(self, storage, nickname=""):
+        super(StorageStatus, self).__init__()
+        self._storage = storage
+        self._nickname = nickname
+
+    def render_HTML(self, req):
+        return renderElement(req, StorageStatusElement(self._storage, self._nickname))
+
+    def render_JSON(self, req):
+        req.setHeader("content-type", "text/plain")
+        d = {"stats": self._storage.get_stats(),
+             "bucket-counter": self._storage.bucket_counter.get_state(),
+             "lease-checker": self._storage.lease_checker.get_state(),
+             "lease-checker-progress": self._storage.lease_checker.get_progress(),
+             }
+        return json.dumps(d, indent=1) + "\n"
@ -1,4 +1,4 @@
-<html xmlns:n="http://nevow.com/ns/nevow/0.1">
+<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
   <head>
     <title>Tahoe-LAFS - Storage Server Status</title>
     <link href="/tahoe.css" rel="stylesheet" type="text/css"/>
@ -7,19 +7,19 @@
   </head>
   <body>

-<div n:render="storage_running">
+<div t:render="storage_running">

 <h1>Storage Server Status</h1>

-<table n:data="stats">
+<table class="storage_status" t:render="storage_stats">
   <tr><td>Total disk space:</td>
-      <td><span n:render="abbrev_space" n:data="disk_total" /></td>
-      <td>(<span n:render="space" n:data="disk_total" />)</td>
+      <td><t:slot name="disk_total_abbrev" /></td>
+      <td>(<t:slot name="disk_total" />)</td>
       <td />
   </tr>
   <tr><td>Disk space used:</td>
-      <td>- <span n:render="abbrev_space" n:data="disk_used" /></td>
-      <td>(<span n:render="space" n:data="disk_used" />)</td>
+      <td>- <t:slot name="disk_used_abbrev" /></td>
+      <td>(<t:slot name="disk_used" />)</td>
       <td />
   </tr>
   <tr><td />
@ -28,18 +28,18 @@
       <td />
   </tr>
   <tr><td>Disk space free (root):</td>
-      <td><span n:render="abbrev_space" n:data="disk_free_for_root"/></td>
-      <td>(<span n:render="space" n:data="disk_free_for_root"/>)</td>
+      <td><t:slot name="disk_free_for_root_abbrev"/></td>
+      <td>(<t:slot name="disk_free_for_root"/>)</td>
       <td>[see 1]</td>
   </tr>
   <tr><td>Disk space free (non-root):</td>
-      <td><span n:render="abbrev_space" n:data="disk_free_for_nonroot" /></td>
-      <td>(<span n:render="space" n:data="disk_free_for_nonroot" />)</td>
+      <td><t:slot name="disk_free_for_nonroot_abbrev" /></td>
+      <td>(<t:slot name="disk_free_for_nonroot" />)</td>
       <td>[see 2]</td>
   </tr>
   <tr><td>Reserved space:</td>
-      <td>- <span n:render="abbrev_space" n:data="reserved_space" /></td>
-      <td>(<span n:render="space" n:data="reserved_space" />)</td>
+      <td>- <t:slot name="reserved_space_abbrev" /></td>
+      <td>(<t:slot name="reserved_space" />)</td>
       <td />
   </tr>
   <tr><td />
@ -48,23 +48,23 @@
       <td />
   </tr>
   <tr><td>Space Available to Tahoe:</td>
-      <td><span n:render="abbrev_space" n:data="disk_avail" /></td>
-      <td>(<span n:render="space" n:data="disk_avail" />)</td>
+      <td><t:slot name="disk_avail_abbrev" /></td>
+      <td>(<t:slot name="disk_avail" />)</td>
       <td />
   </tr>
 </table>

 <ul>
-  <li>Server Nickname: <span class="nickname mine" n:render="data" n:data="nickname" /></li>
-  <li>Server Nodeid: <span class="nodeid mine data-chars" n:render="string" n:data="nodeid" /></li>
-  <li n:data="stats">Accepting new shares:
-      <span n:render="bool" n:data="accepting_immutable_shares" /></li>
+  <li>Server Nickname: <span class="nickname mine"><t:transparent t:render="nickname" /></span></li>
+  <li>Server Nodeid: <span class="nodeid mine data-chars"> <t:transparent t:render="nodeid" /></span></li>
+  <li>Accepting new shares:
+      <span t:render="accepting_immutable_shares" /></li>
   <li>Total buckets:
-      <span n:render="string" n:data="last_complete_bucket_count" />
+      <span t:render="last_complete_bucket_count" />
       (the number of files and directories for which this server is holding
       a share)
       <ul>
-        <li n:render="count_crawler_status" />
+        <li><span t:render="count_crawler_status" /></li>
       </ul>
   </li>
 </ul>
@ -72,11 +72,11 @@
 <h2>Lease Expiration Crawler</h2>

 <ul>
-  <li>Expiration <span n:render="lease_expiration_enabled" /></li>
-  <li n:render="lease_expiration_mode" />
-  <li n:render="lease_current_cycle_progress" />
-  <li n:render="lease_current_cycle_results" />
-  <li n:render="lease_last_cycle_results" />
+  <li>Expiration <span t:render="lease_expiration_enabled" /></li>
+  <li t:render="lease_expiration_mode" />
+  <li t:render="lease_current_cycle_progress" />
+  <li t:render="lease_current_cycle_results" />
+  <li t:render="lease_last_cycle_results" />
 </ul>

 <hr />
@ -2,11 +2,25 @@
 import urllib
 from twisted.web import http
 from twisted.internet import defer
-from nevow import rend, url, tags as T
+from twisted.python.filepath import FilePath
+from twisted.web.resource import Resource
+from twisted.web.template import (
+    XMLFile,
+    renderer,
+    renderElement,
+    tags,
+)
+from nevow import url
 from allmydata.immutable.upload import FileHandle
 from allmydata.mutable.publish import MutableFileHandle
-from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \
-     convert_children_json, WebError, get_format, get_mutable_type
+from allmydata.web.common import (
+    get_arg,
+    boolean_of_arg,
+    convert_children_json,
+    WebError,
+    get_format,
+    get_mutable_type,
+)
 from allmydata.web import status

 def PUTUnlinkedCHK(req, client):
@ -59,34 +73,53 @@ def POSTUnlinkedCHK(req, client):
     return d


-class UploadResultsPage(status.UploadResultsRendererMixin, rend.Page):
+class UploadResultsPage(Resource, object):
     """'POST /uri', to create an unlinked file."""
-    docFactory = getxmlfile("upload-results.xhtml")

     def __init__(self, upload_results):
-        rend.Page.__init__(self)
-        self.results = upload_results
+        """
+        :param IUploadResults upload_results: stats provider.
+        """
+        super(UploadResultsPage, self).__init__()
+        self._upload_results = upload_results
+
+    def render_POST(self, req):
+        elem = UploadResultsElement(self._upload_results)
+        return renderElement(req, elem)
+
+
+class UploadResultsElement(status.UploadResultsRendererMixin):
+
+    loader = XMLFile(FilePath(__file__).sibling("upload-results.xhtml"))
+
+    def __init__(self, upload_results):
+        super(UploadResultsElement, self).__init__()
+        self._upload_results = upload_results

     def upload_results(self):
-        return defer.succeed(self.results)
+        return defer.succeed(self._upload_results)

-    def data_done(self, ctx, data):
+    @renderer
+    def done(self, req, tag):
         d = self.upload_results()
         d.addCallback(lambda res: "done!")
         return d

-    def data_uri(self, ctx, data):
+    @renderer
+    def uri(self, req, tag):
         d = self.upload_results()
         d.addCallback(lambda res: res.get_uri())
         return d

-    def render_download_link(self, ctx, data):
+    @renderer
+    def download_link(self, req, tag):
         d = self.upload_results()
         d.addCallback(lambda res:
-                      T.a(href="/uri/" + urllib.quote(res.get_uri()))
-                          ["/uri/" + res.get_uri()])
+                      tags.a("/uri/" + res.get_uri(),
+                             href="/uri/" + urllib.quote(res.get_uri())))
         return d


 def POSTUnlinkedSSK(req, client, version):
     # "POST /uri", to create an unlinked file.
     # SDMF: files are small, and we can only upload data
@ -1,4 +1,4 @@
-<html xmlns:n="http://nevow.com/ns/nevow/0.1">
+<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
   <head>
     <title>Tahoe-LAFS - File Uploaded</title>
     <link href="/tahoe.css" rel="stylesheet" type="text/css"/>
@ -7,37 +7,37 @@
   </head>
   <body>

-<h1>Uploading File... <span n:render="string" n:data="done" /></h1>
+<h1>Uploading File... <t:transparent t:render="done" /></h1>

 <h2>Upload Results:</h2>
 <ul>
-  <li>URI: <tt><span n:render="string" n:data="uri" /></tt></li>
-  <li>Download link: <span n:render="download_link" /></li>
-  <li>Sharemap: <span n:render="sharemap" /></li>
-  <li>Servermap: <span n:render="servermap" /></li>
+  <li>URI: <tt><span><t:transparent t:render="uri" /></span></tt></li>
+  <li>Download link: <t:transparent t:render="download_link" /></li>
+  <li>Sharemap: <t:transparent t:render="sharemap" /></li>
+  <li>Servermap: <t:transparent t:render="servermap" /></li>
   <li>Timings:</li>
   <ul>
-    <li>File Size: <span n:render="string" n:data="file_size" /> bytes</li>
-    <li>Total: <span n:render="time" n:data="time_total" />
-        (<span n:render="rate" n:data="rate_total" />)</li>
+    <li>File Size: <t:transparent t:render="file_size" /> bytes</li>
+    <li>Total: <t:transparent t:render="time_total" />
+        (<t:transparent t:render="rate_total" />)</li>
     <ul>
-      <li>Storage Index: <span n:render="time" n:data="time_storage_index" />
-          (<span n:render="rate" n:data="rate_storage_index" />)</li>
-      <li>[Contacting Helper]: <span n:render="time" n:data="time_contacting_helper" /></li>
-      <li>[Upload Ciphertext To Helper]: <span n:render="time" n:data="time_cumulative_fetch" />
-          (<span n:render="rate" n:data="rate_ciphertext_fetch" />)</li>
+      <li>Storage Index: <t:transparent t:render="time_storage_index" />
+          (<t:transparent t:render="rate_storage_index" />)</li>
+      <li>[Contacting Helper]: <t:transparent t:render="time_contacting_helper" /></li>
+      <li>[Upload Ciphertext To Helper]: <t:transparent t:render="time_cumulative_fetch" />
+          (<t:transparent t:render="rate_ciphertext_fetch" />)</li>

-      <li>Peer Selection: <span n:render="time" n:data="time_peer_selection" /></li>
-      <li>Encode And Push: <span n:render="time" n:data="time_total_encode_and_push" />
-          (<span n:render="rate" n:data="rate_encode_and_push" />)</li>
+      <li>Peer Selection: <t:transparent t:render="time_peer_selection" /></li>
+      <li>Encode And Push: <t:transparent t:render="time_total_encode_and_push" />
+          (<t:transparent t:render="rate_encode_and_push" />)</li>
       <ul>
-        <li>Cumulative Encoding: <span n:render="time" n:data="time_cumulative_encoding" />
-            (<span n:render="rate" n:data="rate_encode" />)</li>
-        <li>Cumulative Pushing: <span n:render="time" n:data="time_cumulative_sending" />
-            (<span n:render="rate" n:data="rate_push" />)</li>
-        <li>Send Hashes And Close: <span n:render="time" n:data="time_hashes_and_close" /></li>
+        <li>Cumulative Encoding: <t:transparent t:render="time_cumulative_encoding" />
+            (<t:transparent t:render="rate_encode" />)</li>
+        <li>Cumulative Pushing: <t:transparent t:render="time_cumulative_sending" />
+            (<t:transparent t:render="rate_push" />)</li>
+        <li>Send Hashes And Close: <t:transparent t:render="time_hashes_and_close" /></li>
       </ul>
-      <li>[Helper Total]: <span n:render="time" n:data="time_helper_total" /></li>
+      <li>[Helper Total]: <t:transparent t:render="time_helper_total" /></li>
     </ul>
   </ul>
 </ul>
@ -1,4 +1,4 @@
-<html xmlns:n="http://nevow.com/ns/nevow/0.1">
+<html xmlns:t="http://twistedmatrix.com/ns/twisted.web.template/0.1">
   <head>
     <title>Tahoe-LAFS - File Upload Status</title>
     <link href="/tahoe.css" rel="stylesheet" type="text/css"/>
@ -10,46 +10,46 @@
 <h1>File Upload Status</h1>

 <ul>
-  <li>Started: <span n:render="started"/></li>
-  <li>Storage Index: <span n:render="si"/></li>
-  <li>Helper?: <span n:render="helper"/></li>
-  <li>Total Size: <span n:render="total_size"/></li>
-  <li>Progress (Hash): <span n:render="progress_hash"/></li>
-  <li>Progress (Ciphertext): <span n:render="progress_ciphertext"/></li>
-  <li>Progress (Encode+Push): <span n:render="progress_encode_push"/></li>
-  <li>Status: <span n:render="status"/></li>
+  <li>Started: <t:transparent t:render="started"/></li>
+  <li>Storage Index: <t:transparent t:render="si"/></li>
+  <li>Helper?: <t:transparent t:render="helper"/></li>
+  <li>Total Size: <t:transparent t:render="total_size"/></li>
+  <li>Progress (Hash): <t:transparent t:render="progress_hash"/></li>
+  <li>Progress (Ciphertext): <t:transparent t:render="progress_ciphertext"/></li>
+  <li>Progress (Encode+Push): <t:transparent t:render="progress_encode_push"/></li>
+  <li>Status: <t:transparent t:render="status"/></li>
 </ul>

-<div n:render="results">
+<div t:render="results">
 <h2>Upload Results</h2>
 <ul>
-  <li>Shares Pushed: <span n:render="pushed_shares" /></li>
-  <li>Shares Already Present: <span n:render="preexisting_shares" /></li>
-  <li>Sharemap: <span n:render="sharemap" /></li>
-  <li>Servermap: <span n:render="servermap" /></li>
+  <li>Shares Pushed: <t:transparent t:render="pushed_shares" /></li>
+  <li>Shares Already Present: <t:transparent t:render="preexisting_shares" /></li>
+  <li>Sharemap: <t:transparent t:render="sharemap" /></li>
+  <li>Servermap: <t:transparent t:render="servermap" /></li>
   <li>Timings:</li>
   <ul>
-    <li>File Size: <span n:render="string" n:data="file_size" /> bytes</li>
-    <li>Total: <span n:render="time" n:data="time_total" />
-        (<span n:render="rate" n:data="rate_total" />)</li>
+    <li>File Size: <t:transparent t:render="file_size" /> bytes</li>
+    <li>Total: <t:transparent t:render="time_total" />
+        (<t:transparent t:render="rate_total" />)</li>
     <ul>
-      <li>Storage Index: <span n:render="time" n:data="time_storage_index" />
-          (<span n:render="rate" n:data="rate_storage_index" />)</li>
-      <li>[Contacting Helper]: <span n:render="time" n:data="time_contacting_helper" /></li>
-      <li>[Upload Ciphertext To Helper]: <span n:render="time" n:data="time_cumulative_fetch" />
-          (<span n:render="rate" n:data="rate_ciphertext_fetch" />)</li>
+      <li>Storage Index: <t:transparent t:render="time_storage_index" />
+          (<t:transparent t:render="rate_storage_index" />)</li>
+      <li>[Contacting Helper]: <t:transparent t:render="time_contacting_helper" /></li>
+      <li>[Upload Ciphertext To Helper]: <t:transparent t:render="time_cumulative_fetch" />
+          (<t:transparent t:render="rate_ciphertext_fetch" />)</li>

-      <li>Peer Selection: <span n:render="time" n:data="time_peer_selection" /></li>
-      <li>Encode And Push: <span n:render="time" n:data="time_total_encode_and_push" />
-          (<span n:render="rate" n:data="rate_encode_and_push" />)</li>
+      <li>Peer Selection: <t:transparent t:render="time_peer_selection" /></li>
+      <li>Encode And Push: <t:transparent t:render="time_total_encode_and_push" />
+          (<t:transparent t:render="rate_encode_and_push" />)</li>
       <ul>
-        <li>Cumulative Encoding: <span n:render="time" n:data="time_cumulative_encoding" />
-            (<span n:render="rate" n:data="rate_encode" />)</li>
-        <li>Cumulative Pushing: <span n:render="time" n:data="time_cumulative_sending" />
-            (<span n:render="rate" n:data="rate_push" />)</li>
-        <li>Send Hashes And Close: <span n:render="time" n:data="time_hashes_and_close" /></li>
+        <li>Cumulative Encoding: <t:transparent t:render="time_cumulative_encoding" />
+            (<t:transparent t:render="rate_encode" />)</li>
+        <li>Cumulative Pushing: <t:transparent t:render="time_cumulative_sending" />
+            (<t:transparent t:render="rate_push" />)</li>
+        <li>Send Hashes And Close: <t:transparent t:render="time_hashes_and_close" /></li>
       </ul>
-      <li>[Helper Total]: <span n:render="time" n:data="time_helper_total" /></li>
+      <li>[Helper Total]: <t:transparent t:render="time_helper_total" /></li>
     </ul>
   </ul>
 </ul>
2
tox.ini
@ -75,7 +75,7 @@ commands =
 whitelist_externals =
   /bin/mv
 commands =
-    pyflakes src static misc setup.py
+    flake8 src static misc setup.py
     python misc/coding_tools/check-umids.py src
     python misc/coding_tools/check-debugging.py
     python misc/coding_tools/find-trailing-spaces.py -r src static misc setup.py