Cap URLs leaked via HTTP Referer header #127
It occurred to me that, despite the show-all-files-by-URI change we made to protect the private-vdrive URI from referrer headers and active javascript attacks, there is still one leak remaining: the URI of the HTML file itself.
Let's say that you write a secret HTML document and upload it to the grid, attached to some directory in your private vdrive so nobody else can see it. In this document you include an HREF to mysuperevilsite.org, because it's really cool. When you use your tahoe node's web server to read that document, it will serve it to you as /uri/RANDOMURI, and when you follow that HREF to the super evil site, I (as the operator of mysuperevilsite.org) will see a Referer header with /uri/RANDOMURI in it. Then I fire up my own tahoe node and get to read your secret document.
Alternatively, once you're viewing a page from the evil site, I can put javascript in that page which uses your tahoe node to read /uri/RANDOMURI and do something clever to get the data back to me.
I'm less concerned about this one than the previous attack (#98), but it would still be nice to find a way to fix it. I can't think of any clever solutions, though. Note that the javascript attack means that (like with all confused deputy attacks) somehow encrypting the URL wouldn't help (since ambient authority can still be exploited by the attacker).
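To make the first attack concrete, here is a minimal sketch (not from this ticket) of what the operator of the evil site can log; the attack needs nothing beyond an ordinary web server that records the Referer request header. The class name is made up for illustration.

    # Hedged sketch: the evil site just logs the Referer header.
    from twisted.internet import reactor
    from twisted.web import resource, server

    class RefererLogger(resource.Resource):
        isLeaf = True

        def render_GET(self, request):
            # If the visitor followed an HREF from /uri/RANDOMURI, the
            # full capability URL arrives here.
            leaked = request.getHeader(b"referer")
            print("leaked Referer:", leaked)
            return b"<html><body>really cool page</body></html>"

    reactor.listenTCP(8080, server.Site(RefererLogger()))
    reactor.run()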
See also this e-mail thread about this:
http://allmydata.org/pipermail/tahoe-dev/2007-September/000134.html
I want to go over this carefully, and some related issues about navigation in the web user interface, for the v0.6.1 release in two weeks. See the aforementioned mailing list thread for starters.
See also #103.
Bumping this to v0.7.
Summary changed from "smaller XSRF attack still possible" to "smaller CSRF attack still possible".
We're focussing on an imminent v0.7.0 (see the roadmap), which hopefully has #197 (Small Distributed Mutable Files) and also a fix for #199 (bad SHA-256). So I'm bumping less urgent tickets to v0.7.1.
It would be good to tighten our security properties here, but I don't think we are going to get it done in the next six weeks, so I'm putting it in Milestone 1.0.
from IRC:
<kpreid_> http://allmydata.org/trac/tahoe/ticket/127 -- When I asked Tyler Close about this back at the Croquet workshop, he said that HTTPS drops the referrer header so that things like this don't arise.
<kpreid_> Another workaround is to rewrite links into a redirection service (your own, typically) so that the referrer is the redirector, not the private URL
<kpreid_> (but HTTPS is probably a better idea provided one can provide it)
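The redirection-service workaround kpreid mentions might look something like the following sketch, assuming a twisted.web gateway; the "Bounce" resource and the /bounce?to=... scheme are made up for illustration. An interstitial page is used rather than an HTTP 302, because some browsers preserve the original Referer across 302 redirects, whereas after a meta refresh browsers generally send the interstitial's (non-secret) URL, or nothing at all.

    # Hedged sketch of a link-laundering redirector, not Tahoe code.
    from html import escape
    from twisted.web import resource

    class Bounce(resource.Resource):
        isLeaf = True

        def render_GET(self, request):
            target = request.args[b"to"][0].decode("utf-8")
            safe = escape(target, quote=True)
            # Serve an interstitial page whose URL carries no secret;
            # the Referer seen by the target is at worst this page's URL.
            return ("<html><head>"
                    "<meta http-equiv='refresh' content='0;url=%s'>"
                    "</head><body><a href='%s'>continue</a></body></html>"
                    % (safe, safe)).encode("utf-8")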
#127: good to know, I didn't think HTTPS did that. I've kicked this around with Tyler a couple of times; I think his recommendation would be to use JS and the URL fragment, but a) that would require JS, and b) I don't see how it could work with anything but HTML-for-display (not for downloading arbitrary files)
(http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.36)
and
http://www.w3.org/Protocols/rfc2616/rfc2616-sec15.html#sec15.1.3
Here are the parts that are potentially useful to us. From section 15.1.3:
"Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol."
This attack isn't CSRF; changing the summary accordingly.
If you like this bug, you might also like #615 and #821 :-)
(#821 is about leaking the URL to scripts in the file itself, #615 is about leaking it to other pages.)
Summary changed from "smaller CSRF attack still possible" to "Cap URLs leaked via HTTP Referer header".
Replying to zooko:
I have heard someone, I think Tyler Close, say that clients interpret this in a stupidly literal way: they do include the Referer header in an HTTP-over-SSL/TLS request -- because that is not a "non-secure" request -- when the referring page was also transferred over HTTP-over-SSL/TLS, even if the keys or domains are different.
Also, http://community.livejournal.com/lj_dev/707379.html seems to suggest that non-Mozilla browsers do not follow the above restriction on sending Referer at all -- although that was in 2006.
The behaviour of Mozilla browsers for the secure -> secure case is controlled by this preference (note the "rr" spelling):
http://kb.mozillazine.org/Network.http.sendSecureXSiteReferrer
Summary: it does the wrong thing by default :-(
(This preference controls when to send Referer in other cases:
http://kb.mozillazine.org/Network.http.sendRefererHeader
I just changed my Firefox config to never send it, i.e. network.http.sendRefererHeader = 0 and network.http.sendSecureXSiteReferrer = false. I doubt anything will break.)
Microsoft cannot be trusted to document IE's behaviour; their knowledge base article at
http://support.microsoft.com/kb/178066
is contradicted by
http://marc.info/?l=bugtraq&m=107282279713152&w=2
Last year I asked Collin Jackson (who knows a good deal about web security) how to automatically prevent Referer headers from being sent. He replied:
If all of these work, option C seems to be the simplest. Option A requires an ftp server, which seems like an unwarranted excursion if we can possibly avoid it. Option B depends on more of the DOM and HTML, hence greater exposure to browser idiosyncrasies, than option C does.
(The location URL in option C needs to be properly escaped for an URL-in-JSStringLiteral-in-HTML-in-JSStringLiteral-in-JSStringLiteral-in-HTML, but that's straightforward :-)
I don't really understand those options very well or how they would be implemented in Tahoe-LAFS. I should mention another option: moving the cap from the URL itself into the URL fragment, as Tyler Close's web-key does: http://waterken.sourceforge.net/web-key .
This would certainly prevent caps from leaking into the Referer header, although they might still leak due to tools like "Yahoo Toolbar" and the like. (Tools which send all the URLs that you view to some remote server for you.)
Also, as Brian wrote in comment:61687, it isn't clear how Tahoe-LAFS could use caps-in-fragments for purposes other than displaying the result on a web page. Perhaps there could be a two-layer design where the WAPI has caps in URLs (which is consistent with the REST paradigm), but a new WUI (which would be written in JavaScript, Cajita or Jacaranda) would somehow translate between caps-in-fragments and caps-in-URLs so that the URL that actually appeared in the URL widget would always be caps-in-fragment.
Summary changed from "Cap URLs leaked via HTTP Referer header" to "vv" and back to "Cap URLs leaked via HTTP Referer header".
For anyone trying to test option C, the syntax above was wrong; it should be
However, I'm not sure that options B or C work for what we are trying to do. The problem we're trying to solve is that following a link from the contents of a Tahoe file may reveal the file's URL ('capURL'). Options B and C prevent the page at 'capURL' from seeing the referring URL (of the page containing the JavaScript), but they don't prevent leakage of 'capURL' to a site that the page at 'capURL' links to.
Only option A allows you to prevent sending a Referer header when following a link from a page with arbitrary contents (by serving that page via FTP).
Replying to davidsarah:
Actually there's a variant of B that will work: send the read cap in the form data. You would make an initial request to the gateway for a given read cap encoded in the URL, and would get back a stub page containing a form filled in with that read cap. If that form is POSTed to the gateway, it would respond with the real file. When POST is used, the URL of the latter would just be the URL of the form, which is not sensitive, so it doesn't matter whether it is leaked via Referer.
This approach needn't depend on JavaScript, but if you don't have JavaScript the user would have to click a button to submit the form. (Is there a way to do that automatically on page load even if scripting is disabled?) Alternatively the server could set a cookie and have that cookie echoed back to it with an HTTP-Refresh, but that potentially introduces other cookie-related weaknesses and complications.
In the case where the referring page is generated by the gateway (for example a directory listing), then that page can directly include a form for each file link, so there is no extra request or button click even when scripting is disabled.
If you can depend on JavaScript, you can combine this with Tyler's approach and put the sensitive part of the URL in the fragment, then have a script copy it to the form data. The difference is that because form submission is used instead of XMLHttpRequest, you can download arbitrary files rather than just displaying them.
A disadvantage of using POST is that if the user goes back to an URL via the history, they will get a spurious prompt asking whether they want to resubmit form data. (Using GET won't work, because then the read cap would still be in the URL.) I think this is acceptable, though.
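A minimal sketch of the stub-page idea from comment:21, assuming a twisted.web gateway; CapGateway and fetch_via_cap are hypothetical names, and Tahoe's real WUI code is not shown here.

    # Hedged sketch of comment:21's form-POST idea, not Tahoe's code.
    from html import escape
    from twisted.web import resource

    def fetch_via_cap(cap):
        # Hypothetical helper: retrieve the file bytes for this read
        # cap from the grid. Tahoe's real retrieval code is elided.
        raise NotImplementedError

    class CapGateway(resource.Resource):
        isLeaf = True

        def render_GET(self, request):
            # GET /uri/<readcap> returns only a stub page; the cap is
            # moved from the URL into form data.
            cap = request.path[len(b"/uri/"):].decode("ascii")
            return ("<html><body><form method='POST' action='/uri'>"
                    "<input type='hidden' name='cap' value='%s'>"
                    "<input type='submit' value='Open'>"
                    "</form></body></html>"
                    % escape(cap, quote=True)).encode("utf-8")

        def render_POST(self, request):
            # After the POST, the browser's location is just /uri, so
            # links followed from the file leak only /uri via Referer.
            cap = request.args[b"cap"][0].decode("ascii")
            return fetch_via_cap(cap)

With scripting available, the stub page could instead carry the cap in the URL fragment and have a script copy it into the form before submitting, which is the combination described in the previous paragraph.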
Replying to davidsarah (comment:21):
This approach would simultaneously fix #821. It would not fix #615.
We don't appear to have a separate ticket about cap URLs leaking to phishing filters. Let's consider this ticket to cover both issues, since it is possible that the same solution could work. If we depend on JavaScript, I think that the fragment + form technique in comment:21 solves both.
Summary changed from "Cap URLs leaked via HTTP Referer header" to "Cap URLs leaked via HTTP Referer header, and to phishing filters".
I split the phishing filter problem into #907 (having forgotten that I'd covered it here, but it is a separate issue).
Summary changed from "Cap URLs leaked via HTTP Referer header, and to phishing filters" back to "Cap URLs leaked via HTTP Referer header".
I plan to fix this along the lines described in comment:21.
Incidentally, someone told me the other day that any URL sent through various Google products (Google Talk the IM system, Gmail, anything you browse while the Google Toolbar is in your browser) gets spidered and added to the public index. The person couldn't think of any conventions (beyond robots.txt) to convince them not to follow those links, but they could think of lots of things that would encourage their spider even more.
I plan to do some tests of this (or just ask google's spider to tell me about tests which somebody else has undoubtedly performed already).
I know, I know, it's one of those boiling the ocean things, it's really unfortunate that so many tools are so hostile to the really-convenient idea of secret URLs.
I wrote up a spec for a new Content-Security-Policy directive that would allow us (or any server operator) to completely block Referer leakage (edit: in a much simpler way than comment:21). I'll attach it here.
Attachment restrict-referrer-leakage.txt (5022 bytes) added
Proposed Content-Security-Policy directive: "restrict-referrer-leakage"
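If user agents supported such a directive, the gateway could send it on every response; a hedged sketch follows. The directive name here comes only from the attachment's title, the exact value syntax is defined in the attachment and not reproduced here, and no shipping browser implements it.

    # Hedged sketch: send the proposed (non-standard) CSP directive.
    def add_restrict_referrer_header(request):
        # request is a twisted.web request for a page the gateway serves.
        request.setHeader(b"Content-Security-Policy",
                          b"restrict-referrer-leakage")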
The noreferrer attribute on links could prevent leaking dircaps when clicking a link to a potentially malicious HTML file in the WUI:
http://www.whatwg.org/specs/web-apps/current-work/multipage/links.html#link-type-noreferrer
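A minimal sketch of how a WUI directory listing could use this, assuming the listing is built from strings; make_file_link is a hypothetical helper, not Tahoe's code.

    # Hedged sketch: emit rel="noreferrer" on each file link.
    from html import escape

    def make_file_link(child_name, child_cap):
        # rel="noreferrer" asks the browser not to send a Referer
        # header when this link is followed, so the listing's dircap
        # URL stays private even if the target file is malicious.
        return ('<a rel="noreferrer" href="/uri/%s">%s</a>'
                % (escape(child_cap, quote=True), escape(child_name)))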
I'd like to try to get the restrict-referrer-leakage CSP directive standardized and supported by user agents. What's the next step? Write to some W3C Web Sec working group, post david-sarah's attachment:restrict-referrer-leakage.txt, and ask them to consider it?
Replying to ChosenOne:
Neat! Thank you! We could even consider (reluctant as I am to get into HTML rewriting) trying to inject that attribute onto arbitrary links inside HTML that the tahoe gateway serves up!
Replying to ChosenOne:
restrict-referrer-leakage.txt is a more complete and simpler solution, as the spec says:
(We don't need the third point but the other three are important, because rewriting is hard.)
Replying to zooko:
Yes. I will do that some time in the next few days.
Opened #1890 (submit proposal for restrict-referer-leakage to the CSP standardizers and implementors) for that part of this project.
I just learned that there's an HTML meta tag specifically to control Referer leakage, and that it's already implemented in a couple of browsers (chrome now, FF in progress, but alas not IE):
The FF bugzilla discussion also mentions some per-link options.
Great! We should try to turn these on in Tahoe-LAFS ASAP.
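Turning it on could be as simple as injecting the meta tag into every HTML page the gateway generates; a hedged sketch, where wrap_page is a hypothetical helper. "no-referrer" is the value in the current draft; some early implementations used "never".

    # Hedged sketch: inject the meta referrer tag into generated pages.
    META_REFERRER = '<meta name="referrer" content="no-referrer">'

    def wrap_page(title, body_html):
        return ("<html><head>%s<title>%s</title></head>"
                "<body>%s</body></html>"
                % (META_REFERRER, title, body_html))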
Hey, I remember that Brad Hill offered to support standardization of a CSP rule to restrict Referer leakage. What happened with that? Did we drop the ball on giving him some sort of spec doc?
Replying to zooko:
We did. I'll try to get round to that.
There's a new "meta referrer" tag that's been proposed, which would allow websites to tell browsers to leave things out of the Referer header, or to omit it entirely: https://blog.mozilla.org/security/2015/01/21/meta-referrer/ . Still early, but if it gains momentum, we should turn it on.
In 639cc92/trunk:
There are some useful comments on PR-151 that should probably be considered, if this ticket doesn't already cover them.