migrate to new server #1572
Reference: tahoe-lafs/trac-2024-07-25#1572
This ticket is to track the task of moving this Trac instance, and the mailing lists, etc, from the old original "org" box (currently in the Undisclosed Location) to the new linode-based "new-org" box.
Remaining Tasks:
- get basic webserver running
- move Mailman, list archives, cgi-bin/mailman control panels
- update MX records to send tahoe-lafs.org mail to new box
- copy all trac DBs/workdirs to new box
- activate all trac instances on new box
- copy all static content (tarballs, deps, debs, all sorts of random stuff)
- move source trees, canonical darcs repos
- get post-commit hooks working
- get any cronjobs migrated
- move buildmaster; either get buildslave admins to update their configs, or wait for DNS to change
  - add "buildmaster.tahoe-lafs.org" DNS name (pointing at new-org)
  - tell buildslave admins to point buildslaves at that
  - meanwhile, old slaves will just use tahoe-lafs.org and work normally
- update DNS
- power down old box, or move to Peter's basement for emergency access
- set up trac backup-to-git cronjob on new box
- get SSL cert fixed
- fix bitcoin-donation box on front page
- repopulate http://tahoe-lafs.org/source/tahoe-lafs/deps/tahoe-dep-sdists/
- check other tahoe-deps directories
- move the external (non-https) hosted images from the front page to the local server, to hush the mixed-content warning
- davidsarah noticed that the 'view tickets' link in the nav bar is broken: it points to <https://tahoe-lafs.org/trac/tahoe-lafs//tahoe-lafs.org/trac/tahoe-lafs/wiki/ViewTickets>
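The MX and buildmaster items above might look something like the following zone-file fragment. This is only a sketch: the hostnames come from the list, but the IP address (a documentation-range placeholder) and TTLs are made up.

```
; Hypothetical fragment of the tahoe-lafs.org zone -- the IP is a placeholder.
tahoe-lafs.org.              300  IN  A   203.0.113.10        ; new-org linode box
tahoe-lafs.org.              300  IN  MX  10 tahoe-lafs.org.  ; mail now lands on new box
buildmaster.tahoe-lafs.org.  300  IN  A   203.0.113.10        ; new name for buildslaves
```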
- some buildslaves are failing to do darcs fetches. A local `wget` of the inventory file (on my OS X 10.6 box) complained about not being able to verify the cert. Related?
  - hypothesis is that the old darcs on those boxes is treating HTTPS URLs as if they were HTTP, and then failing with the http->https redirect. tcpdump shows one buildslave clearly fetching from port 80 despite being given an HTTPS URL
  - move those buildslaves to Unsupported
  - zooko will mail the owners, ask them to update darcs
  - later, move them back to Supported
- darcs pushes aren't showing up on the Trac timeline, although the buildbot sees them
  - trac.db and all darcs-repo files need to be gid=source, and all committers must be in the 'source' group
- bookmarks like http://tahoe-lafs.org/trac/tahoe no longer work (they get redirected to https://tahoe-lafs.org/trac/tahoe , but that gives an "Environment not found" error). This also happens for deeper URLs like http://tahoe-lafs.org/trac/tahoe/ticket/1572 . I think these need to be rewritten to e.g. https://tahoe-lafs.org/trac/tahoe-lafs
- there's something weird going on with page caching, like the ETag or Date headers are wonky. When I follow a trac link in my browser to a page that I know has just changed (like modifying a wiki page, where the submit button does a POST which returns a redirect back to a GET of the modified page), I see the old contents. Doing a reload then shows me the new contents.
- I've started the rsync of /home/source/darcs/tahoe-lafs (1.7GB in 33 trees). The target is /home/warner/incoming
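One way to handle the broken /trac/tahoe bookmarks mentioned above would be a redirect rule in nginx. This is only a sketch under the assumption that nginx fronts trac on the new box; the actual server config isn't shown in this ticket.

```nginx
# Hypothetical sketch: permanently redirect old /trac/tahoe URLs to
# /trac/tahoe-lafs, preserving anything after the environment name
# (e.g. /ticket/1572). The regex deliberately does not match
# /trac/tahoe-lafs itself, since "-lafs" fails both alternatives.
location ~ ^/trac/tahoe($|/.*) {
    return 301 https://tahoe-lafs.org/trac/tahoe-lafs$1;
}
```

With this in place, http://tahoe-lafs.org/trac/tahoe/ticket/1572 would end up at https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1572 instead of the "Environment not found" error.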
buildbot is migrated
Add note about https://tahoe-lafs.org/trac/tahoe/ticket/1572 not working.
Note: having "Tahoe-LAFS" be the full public name of the project is great (since "Tahoe" by itself is hard to search for), and having "tahoe-lafs.org" be the hostname for the project is great (because "tahoe.org" is probably owned by some car company or ski resort). But of course, when talking about it, we use just "tahoe" as an abbreviation. And I don't think it's necessary to make every single directory and URL component spell out the full name. There are a growing number of symlinks and URL rewriting rules to replace "tahoe" with "tahoe-lafs" on the web site, and I think those are going to eventually drive us completely insane. So let's avoid changing anything if we can, identify which URLs have been around the longest, and make them work without introducing new variants.
Looking more carefully at the HTTP response headers (in Firebug), it looks like every response from the server (including the dynamically-generated ones, from trac) is getting an `Expires` header dated one month in the future. That can't be right. There's also a `Cache-Control: max-age=2592000, public`.

I'm suspecting that nginx is unconditionally telling browsers to cache the result, when for any uwsgi-based resource it should not offer caching at all.

I see the same headers on static content (pointing at `/chrome`), which gets back a correct `304 Not Modified`.

Ok, I think I fixed that problem: there was an overbroad `expires 30d;` clause in the nginx config, causing "it's ok to cache this for one month" `Expires:` headers to be added to all pages, including generated ones. I added `expires off;` clauses to all the dynamic resources, and now things seem to be getting better.

If you've visited any trac or buildbot page in the last few days, you probably have a month-to-go cache entry for it (you might notice the page being shown to you really really fast, like 5ms, when really it should take at least a couple hundred ms to reach the server and generate the page). You'll need to manually reload each of those pages once to clear the effects, or clear your whole cache. Until you do, following any link to one of the cached pages will show you the old version.
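The fix described above presumably ended up looking something like this in the nginx config. This is a reconstruction: only the `expires 30d;` / `expires off;` directives and the `/chrome` path come from the comment; the location layout and the uwsgi socket path are guesses.

```nginx
# Static assets (trac's /chrome tree) are fine to cache for a month.
location /chrome/ {
    expires 30d;    # the bug was this directive applying server-wide
}

# Dynamically generated trac pages must not be cached by browsers.
location /trac/ {
    include uwsgi_params;
    uwsgi_pass unix:/run/trac.sock;  # hypothetical socket path
    expires off;    # stop emitting month-long Expires/Cache-Control headers
}
```

Note that `max-age=2592000` is exactly 30 days (30 × 24 × 3600 seconds), which is consistent with a single overbroad `expires 30d;` being the culprit.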
Hmm, not sure why this is still open.
Replying to daira:
Oh, because of this:
I think this is not important and it's fine as it is.