Closed
Bug 149943
Opened 22 years ago
Closed 21 years ago
Use "DNS pinning" to prevent Princeton-like exploits
Categories: Core :: Security: CAPS, defect, P1
Tracking: RESOLVED WONTFIX, Target Milestone mozilla1.0.1
People: (Reporter: dougt, Assigned: darin.moz)
Details: (Keywords: topembed, Whiteboard: [ADT1 RTM] [ETA 07/16])
Attachments
(4 files, 3 obsolete files)
(deleted), patch (dougt: review+, dveditz: superreview+, chofmann: approval+)
(deleted), text/plain
(deleted), text/html
(deleted), patch
Here is the email thread:
Doug Turner wrote:
Jim,
Part of your speech today worried me somewhat. What exact restrictions on the
dns do we need so that we don't have security problems with the javascript "call
home" functionality? Is this really a *javascript* problem, or is this just a
*java* problem?
If there is a "call home" javascript requirement, then we may have a *possible*
problem.... Recently I added a "feature" which would reset the dns resolver if
a dns lookup failed on 'nix. I currently do not blow away the IP cache which
mozilla owns. However, I was thinking about doing this at some point so that we
don't have shadow domains.
So, if there is a "call home" thing that js depends on, the bad guy can cause a
dns reset by providing an href on their site to bogus.com. If we purge our IP
cache at this point, the exploit, which you discussed, would be possible, right?
Doug
Jim responded:
I believe you are quite correct in your fears. Mitch needs to verify if we use
the name of the site (to establish the codebase principal), or we use an IP
address (post DNS translation). Most critically, you can't allow a multitude of
IP addresses that are identified by DNS to be considered "equivalent," and you
also can't allow "newer" responses from DNS to replace older responses (unless,
as I pointed out, you are assured that there is no content from an "old" IP
address to act on the "new" IP address).
It sure sounds like a nice find! Note that my presentation was given to a TON
of folks at the RSA Conference a year ago, and this class of attack was also
published (I think) by the folks from Princeton (when they discovered it).
Recent versions of BIND probably protect sites (at the firewall, by not
propagating bogus IP addresses).... but I don't believe we can count on that :-(.
Jim
Brendan Eich wrote:
Codebase principals use hostnames.
Jim Responded:
And I'm guessing/hoping that codebase equivalence is based on string
compares of names (Java used to be so generous that if *any* of the IP
addresses for a given host corresponded to IP addresses for a second host,
then the two hosts were considered "equivalent").
Then the next question is whether there is a chance for either of the
following to happen (which can then "confuse" the translation of name to
IP):
a) More than one IP address associated with the DNS record fetched is used
for connecting at different times;
or:
b) The same host name is used to query DNS more than once (rather than
remembering the solo IP identified in the first DNS lookup).
If either of these is the case, then there is a chance for the Princeton
exploit that I discussed to work. The (ever so slightly simplistic)
solution in 4.x was to avoid calling DNS to lookup a given name more than
once, and to take exactly one of the DNS supplied IP addresses as *the*
canonical IP address for all time for that name.
Without this "look it up once" approach, there is a significant dependence
on bind to maintain security. As I said in the talk, a fix to bind reduced
the major problem (where an external host name was advertised as having an
internal IP address). Perhaps you could argue that the state of bind these
days is better... but at the time (1986 or 1987) most (all?) implementations
of bind handled this problem poorly. I'd be willing to guess that this is
still the case, as a bind implementation would have to be "smart" about
which IP addresses are on which side of a firewall (which typically can mean
a lot more configuration).
Is there any current "bind" expert around that could comment on this?
Thanks,
Jim
p.s., The original (published?) attack from Princeton used the "multiple IP
addresses for a hostname" element, and I extended that to include the issue
of time-varying DNS lookup results.
p.p.s., In the demonstrated Java attack, the follow-on was to use Java to
attack the SMTP port on the targeted internal machine. You could possibly argue
that JS is less capable of exploiting connection weaknesses on internal
machines. It is hard to know exactly how vulnerable internal web sites are
to theft of information, if not direct exploits from buffer overruns in
their servers etc.
Updated•22 years ago (reporter)
Severity: normal → blocker
OS: Windows 2000 → All
Priority: -- → P1
Hardware: PC → All
Target Milestone: --- → mozilla1.0.1
Comment 1•22 years ago
The caps code could keep the IP address for the host part of the origin string
in each codebase principal, and verify that the host part maps to that address
on each subsequent use. If the freshly resolved address didn't match, the caps
code could deny access.
/be
Comment 3•22 years ago
Better, the caps code should resolve the IP address for the host part of the
origin when it creates the codebase principal, which is when the script loads
from a server, if remote (note that <script src=http://bar/baz.js> in a file
whose origin is http://foo loads as if the script's origin were the page's
origin -- i.e., http://foo, not http://bar). It should save the address in the
principal and use it in addition to the origin string when deciding equality.
This avoids bogus denials due to IP address rotation for load balancing; it also
avoids the overhead of requerying DNS on each access check.
/be
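A minimal sketch of the idea in the two comments above, using hypothetical names (CodebasePrincipal, MakePrincipal) rather than the actual caps classes: the principal records the address resolved when the script loads and requires it to match, in addition to the origin string, when deciding equality.

// Illustrative sketch only -- not the real caps code.
#include <cstdint>
#include <string>

struct CodebasePrincipal {
    std::string origin;   // e.g. "http://foo"
    uint32_t    address;  // IPv4 address resolved when the script was loaded

    bool Equals(const CodebasePrincipal& other) const {
        // Origin string must match *and* the pinned address must match,
        // so a later DNS answer pointing the same name elsewhere fails.
        return origin == other.origin && address == other.address;
    }
};

// Resolve the host exactly once, when the principal is created,
// and never re-query DNS during later access checks.
CodebasePrincipal MakePrincipal(const std::string& origin, uint32_t resolvedAddr) {
    return CodebasePrincipal{origin, resolvedAddr};
}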
Comment 4•22 years ago
I am not familiar with the exact exploit, but a very similar issue with
document.domain and dns mischief arose in July 2000.
Currently document.domain seems kind of broken so this exploit does not work,
but here is an exploit scenario with dns tricks.
------------------------OLD MAIL-(Can't find it in bugzilla)------------
Message-ID: <3985896B.6F45636B@nat.bg>
Date: Mon, 31 Jul 2000 17:12:59 +0300
From: Georgi Guninski <joro@nat.bg>
X-Mailer: Mozilla 4.74 [en] (Win98; U)
X-Accept-Language: en
MIME-Version: 1.0
To: Mitchell Stoltz <mstoltz@netscape.com>
Subject: BUG: document.domain security functionality is broken for Mozilla and NC
(when combined with a malicous name server)
Content-Type: multipart/mixed;
boundary="------------78D53A4B0EE26966CEA0D7D1"
This is a multi-part message in MIME format.
--------------78D53A4B0EE26966CEA0D7D1
Content-Type: text/plain; charset=koi8-r
Content-Transfer-Encoding: 7bit
Mitchell,
Note: I do not advise posting this to Bugzilla because it affects
Communicator 4.74
There is a major design flaw in the security functionality of
document.domain which circumvents Same Origin Security policy.
The problem is if window1.document.domain == window2.document.domain
then scripts in window1 may access the DOM of window2.
Consider the following scenario:
1) the document in window1 is loaded from local.malicousdomain.org
2) a script in window1 opens a new window (window2) with URL
http://malicousdomain.org. The DNS server of malicousdomain.org returns
an IP for the host malicousdomain.org which is the IP of a target
victim's web server - for example 216.32.74.53, which is the IP of
www.yahoo.com. In this case window2 will in fact load www.yahoo.com, but
window2.document.domain == "malicousdomain.org"
3) the script in window1 does: document.domain="malicousdomain.org",
which is permissible, because window1 is loaded from
local.malicousdomain.org
4) all consecutive scripts in window1 have access to the DOM of
window2.document.
I know this does not work if the target is www.netscape.com, guess
because it does HTTP redirects, but it works for the majority of
webservers I tested it on.
I do not see any solution to this except removing document.domain and
not trusting it at all, hope I am wrong.
(Code snipped)
Maybe I'm missing something, but doesn't the window2 URL bar show
www.maliciousdomain.org? This sounds like a really complicated way to just do a
MITM attack, where your maliciousdomain.org server relays requests and responses
between the user and yahoo.com.
If we get the Yahoo URL in the URL bar, that would be Bad(tm). Sad-making.
Comment 6•22 years ago
jar:
You mentioned two ways for this attack to work. Either more than one IP address
is associated with the DNS record fetched for a real host in the attacker's
domain, or successive fetches for the DNS record of what started out to be a
real host in the attacker's domain will return different IP addresses at
different times. In both cases, the trick is to get one of those IP addresses
to be the address of a machine inside the victim's firewall.
It sounds to me like Guninski's posting is saying there is yet a third variant
on this attack: namely, that the DNS record is for a fictitious hostname in the
attacker's domain, and it has an IP address of a machine inside the victim's
firewall. In this case there is no changing of IP addresses -- a single DNS
lookup on http://malicousdomain.org was made and it returned a single IP
address. If I'm interpreting this correctly, then all the solutions discussed
above about not clearing the cache would not work in this case.
shaver:
I believe that Guninski was just using the loading of the yahoo site in a
separate window as a demonstration that you can get to this site by using a
hostname in the attacker's domain. The key aspect of the attack is not that the
window was opened, but that the file was accessible via javascript and that the
attacker can then transmit the contents of the file back to another host in
the attacker's domain.
This is not an MITM attack. There is no middleman here -- the attacker's
site is not relaying requests and responses between the user and yahoo.
From what I read of Georgi's pasted mail, the user is going to get the content
of www.yahoo.com in his window, but the URL bar should still say
"http://maliciousdomain.org", no? I'm not able to get any other behaviour from
a window.open like that, regardless of what I do with document.domain. Is there
a test case that demonstrates otherwise?
This is just like having maliciousdomain.org serve up www.yahoo.com-esque
content, and we can't protect the user against that. I know that it's not an
MITM, but it seems to have exactly the same effect: the user loads content that
looks like site1, but is served by site2.
Please explain how anything is subverted here, other than a user's belief that
the Yahoo logo appearing on the top of the page means they're really dealing
with Yahoo.
(I don't think jar is copied on this bug, so you might want to mail him your
comments, if you want to continue the discussion.)
Comment 8•22 years ago
I believe the key point of Guninski's posting is
The problem is if window1.document.domain == window2.document.domain
then scripts in window1 may access the DOM of window2.
Instead of yahoo, consider the contents of window2 to come from a machine inside
the firewall. This means that the scripts in window 1 (which is from
the attacker's site) can read the contents of a file inside our firewall and
send it back to a host in the domain of window 1, which is the attacker's
domain.
Ah, yeah, I forgot about the case where the malicious server couldn't just read
the content, and was concentrating on the issue of capturing user input. Thanks.
Comment 10•22 years ago
Here are some general comments on this bug, most of which I've tried to
communicate in discussions, and some of which may not have come across in the
email threads quoted.
This bug is (at least currently) all about subverting a firewall. Historically
in the Princeton demonstration, the attacker ran code on a local machine that
then proceeded (via common weaknesses) to effectively attack a second machine
inside a firewall.
Guninski's sample, when applied to *internal* sites, implies that an external
attacker can attempt to read content served from an *internal* server. One
partial fix (implemented way back when as the Princeton exploit was
exposed) involves better handling of DNS IP addresses across the firewall (that
is being subverted). IF the firewall has BIND configured sufficiently well,
then it *could* prevent an external authority (in Guninski's example,
maliciousdomain.org) from supplying an IP address that refers to an internal
machine.
Just for clarification (and in keeping with Shaver's comment), it should be
noted that when an external site (example: Yahoo) is accessed via this
procedure, there is usually little value to be gained. Most critically, the
browser would *not* serve up any (auth?) cookies that are specific to the target
site, as the cookies are derived from the domain name (and other stuff), rather
than the IP address. As a result, the yahoo content would not be personalized,
and the JS couldn't access any personalized cookies either. There would be a
vague chance that hitting some ports on a target machine could induce
malicious side effects, which could only then be traced to the intermediate
(victim) machine (i.e., the victim machine would be blamed for the contact with
the target). In general, there are quite a few restrictions on ports that can
be hit on the basis of "phoning home," so it is generally a bit doubtful
(example: You can't hit port 25, which handles smtp). There is always a tiny
chance that the target machine has an ACL which would only allow the IP of the
victim machine to make contact.... but such is the weakness IMO of simple IP
based ACLs.
Anyway... those are some comments that might help fill out the field in terms of
problems, and threats (and non-threats).
Comment 11•22 years ago
...and one more point...
When I say "attack" a machine inside the firewall (historically in the Princeton
incarnation), I meant more than "steal content." They actually use common
weaknesses on machines (that assume safety via a firewall) to obtain effectively
root access, with control from an external source. Stealing content is good...
but complete control is even more significant.
However, stealing content held behind a firewall is plenty good enough to
make this attack worth working to block. We just have to be fair in the
judgement, and note what could already be done using redirects.
Comment 12•22 years ago
I am not sure I am familiar with the "Princeton exploit" but this comment:
----
They actually use common weaknesses on machines (that assume safety via a
firewall) to obtain effectively root access
----
really surprises me.
Am I missing something, or is someone implying that it is Mozilla's
responsibility to keep buggy unpatched web servers from being compromised by <img
src="http://10.10.10.10/${LONGSTRING}.ext"> ?
Comment 13•22 years ago
At the time of the Java attack, I believe they used weaknesses in SMTP servers
that were commonly exposed (and active) behind firewalls. I believe that Java
was (then) allowed to contact any port on the "home" machine.
I agree with Guninski that we can't block attacks that are possible by mere
redirects (and hence, attacks on weak web servers are beyond our control).
Tangenting off here... it *might* be interesting to think about enforcing some
limits on URL sizes, *if* we can justify the restrictions with standards, and not
break much (any?) existing web practice (other than viruses ;-) ). ...but I'd
leave that to a different bug or feature request.
Our security story *does* currently include blocking access to arbitrary ports
(I'm not sure if we're using a black list, or a white list, but I believe that
JS can't access the standard SMTP port for example). It would (as a
hypothetical, but presumably blocked attack) be wrong for the browser to allow a
"phone home" to *any* port, as this would then be the necessary stepping stone
to a set of more general attacks (from this firewall subversion bug).
Comment 14•22 years ago
The Princeton exploit involved Java, where once it was determined a given host
was OK the Java code was allowed full communication to any port using any data
or protocol it wanted to emulate -- the beast was loose inside your firewall and
our browser provided the tunnel.
In the Mozilla case we've got a lot more walls to limit the damage, such as
supporting a limited number of protocols and Necko not allowing access to a long
list of ports. Which is not to say this isn't bad -- we just had to release
6.2.3 to address a SameOrigin check failure. This problem allows folks to get
around that check and servers inside a firewall can contain a lot of data worth
stealing. Don't get hung up on the particulars of the classic Princeton Exploit;
it's the inspiration.
Comment 16•22 years ago (assignee)
so i have an 80% solution in mind for this bug... i got together with dveditz and
mstoltz to discuss it yesterday, and we agreed that it probably is a decent
solution for the short term at least. here goes...
consider navigating the web via a proxy server. at best, the proxy server would
be unable to reach intranet addresses and would only provide a way to connect to
internet sites (provided the proxy server is placed outside the intranet's
firewall). my proposed solution would give us essentially this level of
security when not navigating via a proxy server from behind a firewall.
1- set up a socket connection observer that would record all (hostname,
ip-address) pairs corresponding to actual connections.
2- the socket connection observer would observe changes in the hostname ->
ip-address mapping, and would flag any hostnames that previously mapped to a
valid internet address and now map to an invalid internet address (e.g.,
192.168.x.y) such as those commonly used behind firewalls. flagged hostnames
would be added to a shit-list.
3- caps would consult the shit-list whenever performing a cross-origin check to
see if the hostname corresponding to the origin exists on the shit-list. if so,
it would fail the cross-origin check.
like i said before this is not a complete solution because it does not address
intranets that use valid internet addresses internally.
thoughts?
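A rough sketch of steps 1-3 above, assuming the RFC 1918 private ranges (10/8, 172.16/12, 192.168/16); the names here (ConnectionObserver, IsPrivateAddress) are illustrative, not the actual necko/caps interfaces.

// Flag hostnames that drift from a public address to a private one.
#include <cstdint>
#include <map>
#include <set>
#include <string>

// RFC 1918 private ranges, with addresses in host byte order.
static bool IsPrivateAddress(uint32_t ip) {
    return (ip & 0xFF000000) == 0x0A000000 ||   // 10.0.0.0/8
           (ip & 0xFFF00000) == 0xAC100000 ||   // 172.16.0.0/12
           (ip & 0xFFFF0000) == 0xC0A80000;     // 192.168.0.0/16
}

struct ConnectionObserver {
    std::map<std::string, uint32_t> lastSeen;  // hostname -> ip of last real connection
    std::set<std::string> blocked;             // the "shit-list"

    void OnConnection(const std::string& host, uint32_t ip) {
        auto it = lastSeen.find(host);
        if (it != lastSeen.end() &&
            !IsPrivateAddress(it->second) && IsPrivateAddress(ip)) {
            blocked.insert(host);  // public -> private flip: flag the hostname
        }
        lastSeen[host] = ip;
    }

    // caps would consult this during cross-origin checks and deny if blocked.
    bool IsBlocked(const std::string& host) const {
        return blocked.count(host) != 0;
    }
};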
Status: NEW → ASSIGNED
Comment 17•22 years ago
The above solution is problematic exactly in the same way as the fix to "bind"
is problematic. It is hard to define "intranet" addresses. Your definition
works if you are living in a NAT'ed world, behind a firewall, using the
reserved "intranet address ranges," but does not work well when you have global
IP addresses inside your intranet.
Bottom line: A lot of configuration is needed to distinguish between internet
and intranet IP addresses. It is hard to tell when a set of IP addresses are
equivalent.
Comment 18•22 years ago (assignee)
jar:
my thought was that invalid internet addresses like those commonly used behind
NATs might be the most frequent targets for this sort of attack. i mean, i'd
imagine 192.168.1.1 is a commonly used address for servers behind a NAT. if i
were trying to make use of the princeton exploit, that IP address would be one
of the first i targeted.
but of course, like you said... blocking just the invalid set of internet
addresses doesn't really solve the problem. i figured it would eliminate this
exploit in most cases. maybe we can do better... hmm.
Comment 19•22 years ago
FWIW: The cool address to add to the mix is 127.0.0.1, which is a loopback
address to attack the local client machine. I think that was actually used in
the first demo of this exploit against Java ;-).
If you wanted to special case a blockade, I'd *expect* that blocking such
equivalence would be good (i.e., put a host name on the %&$^ list if its IP
numbers included 127.0.0.1).
Sadly, if and when the attack is used in a significant way, the bad guys would
probably know exactly what they were after in terms of a vulnerable IP.
Comment 20•22 years ago (assignee)
It turns out that HTTP/1.1 (if implemented correctly) actually prevents this
exploit on any HTTP connection:
see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.23
The Host request-header field specifies the Internet host and port number of
the resource being requested, as obtained from the original URI given by the
user or referring resource.... The Host field value MUST represent the naming
authority of the origin server or gateway given by the original URL....
A client MUST include a Host header field in all HTTP/1.1 request messages....
All Internet-based HTTP/1.1 servers MUST respond with a 400 (Bad Request)
status code to any HTTP/1.1 request message which lacks a [valid] Host header
field.
In other words, an HTTP/1.1 server is empowered to reject requests that contain a
Host header for a domain that does not correspond to the server (the "[valid]"
is my interpretation based on the "MUST" in the first paragraph). Apache, for
one, enforces this restriction.
Of course, this says nothing about preventing the Princeton exploit with
non-HTTP/1.1 servers.
I also wanted to add that our port blocking system should also significantly
limit the range of attacks allowed by this bug. For example, none of our
protocols can connect to port 25. Of course, an attacker may want to target
some other server that is known to exist behind a corporate firewall on some
port that we do not block.
At any rate, I'm just posting this comment to show that there are some
preventative measures already built into our product to limit Princeton-like
exploits. Hopefully something interesting at least ;-)
Comment 21•22 years ago
darin,
this may be in the rfc, but apache 1.3 is quite happy to serve other hosts:
nc -vvv localhost 80
localhost.localdomain [127.0.0.1] 80 (http) open
GET / HTTP/1.1
Host: myhost
HTTP/1.1 200 OK
Date: Sat, 15 Jun 2002 16:13:13 GMT
Server: Apache/1.3.20 (Unix)
Accept-Ranges: bytes
Content-Length: 34
Connection: close
Content-Type: text/html
<html>
<body>
...
(myhost is not the name of the server)
Comment 22•22 years ago (assignee)
yikes! i just tested out apache 1.3.23 and noticed the same thing. however, i
recall having to explicitly allow variations of my machine's hostname (unagi,
unagi.mcom.com, unagi.nscp.aoltw.net, etc.) in apache's configuration files with
older versions of apache. this "new" default behavior is somewhat unfortunate.
i suppose they found it more convenient, compatible, or something like that to
drop the restriction :(
Updated•22 years ago
Keywords: mozilla1.0.1, nsbeta1
Whiteboard: [ADT1 RTM]
Updated•22 years ago
Comment 23•22 years ago (assignee)
this is part of the patch to implement the design in comment #16. it lacks
code for the shit list, but otherwise it is complete. i consulted RFC 1918 to
determine the set of "private" ip addresses. the detection code is implemented
entirely within caps. the only necko changes include some extra code to enable
an external xpcom component to observe established connections.
i'm not sure how valuable this patch really is. i think it helps, but i'm not
certain that it really gets us all that close to solving this bug.
Updated•22 years ago
Whiteboard: [ADT1 RTM] [ETA Needed] → [ADT1 RTM] [ETA 6/27]
Updated•22 years ago
Whiteboard: [ADT1 RTM] [ETA 6/27] → [ADT1 RTM] [ETA 6/26]
Comment 24•22 years ago (assignee)
this patch is a complete implementation of the solution in comment #16. mitch
needs to tell me if i've hooked into the script security manager properly. i'm
also slightly concerned that asynchronously adding hostnames to the shit list
may be wrong. i'm doing it to avoid having to enter a lock before checking the
shit list. i figure "shitter events" would be among the first few events in
the event queue on new page load, so this slight delay before adding hosts to
the shit list is probably OK. it should happen well before javascript
executes.
Attachment #89042 -
Attachment is obsolete: true
Comment 25•22 years ago (assignee)
same patch with some additional documentation.
Attachment #89108 -
Attachment is obsolete: true
Comment 26•22 years ago (assignee)
nevermind these patches! i think i've come up with an easy way to implement the
complete solution that is ~ no more costly (in terms of memory consumption) than
the v0.3 patch.
Comment 27•22 years ago
Comment on attachment 89112 [details] [diff] [review]
v0.3 patch (revised some comments)
Darin's new idea is to have the socket transport service keep a hashtable of
IP address mappings to do name-to-IP pinning for the length of the session. In
the simplest case, no shitlist is necessary, and no synchronization. This
sounds like the way to go. The one downside is that if a server's IP address
changes, the browser must be restarted to access that server again.
Attachment #89112 -
Attachment is obsolete: true
Comment 28•22 years ago (assignee)
this patch pins hostname->ipaddress mappings that result in a socket connection
indefinitely. the socket transport service keeps a hash table that it'll
consult before querying the DNS service. if there is a match the socket
transport will use that address. we will remember and reuse this cached
address indefinitely (with one exception: the user can clear this list by
toggling the offline/online button, which is really just a side-effect of the
fact that toggling that button restarts the socket transport service).
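In outline, the pinning table works like the sketch below; the names are illustrative, and the real patch lives in the socket transport service.

#include <cstdint>
#include <map>
#include <string>

class PinnedDnsCache {
    std::map<std::string, uint32_t> mHostDB;  // hostname -> pinned address
public:
    // Return the pinned address if we have one; otherwise resolve via DNS
    // (resolver supplied by the caller) and pin the first answer.
    uint32_t Lookup(const std::string& host,
                    uint32_t (*resolve)(const std::string&)) {
        auto it = mHostDB.find(host);
        if (it != mHostDB.end())
            return it->second;            // reuse pinned mapping, skip DNS
        uint32_t addr = resolve(host);
        mHostDB[host] = addr;             // pin for the rest of the session
        return addr;
    }

    void Clear() { mHostDB.clear(); }     // e.g. when toggling offline/online
};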
Comment 29•22 years ago
By "indefinitely", I assume you don't really mean that but rather mean the
duration of the current browser session.
Doesn't that present problems for the laptop user who puts his machine into
sleep mode (with the browser running) and transports his machine between home and
work?
Comment 30•22 years ago
Looks good to me, although someone who knows this code better should review as
well. The problem Steve described shouldn't be a problem unless you have a
machine named "foo" on your office network and another "foo" on your home
network, right?
Comment 31•22 years ago (assignee)
mitch: right.
plus, perhaps we should document the offline/online behavior in the release notes.
Comment 32•22 years ago
The more realistic problem is that some large commercial site pulls a machine
out of the round-robin for maintenance that just happens to be the IP you're
pinned on. The toggling on/off line workaround seems simple enough; it's
probably way too obscure but maybe we can deal with a better UI in Buffy.
Comment 33•22 years ago
Comment on attachment 89141 [details] [diff] [review]
v1 patch (pin hostname->ipaddress mappings indefinitely)
Looks good, sr=dveditz
Attachment #89141 -
Flags: superreview+
Comment 34•22 years ago (assignee)
yeah, let's try this out on the trunk and see what kind of response we get.
Comment 35•22 years ago (reporter)
Comment on attachment 89141 [details] [diff] [review]
v1 patch (pin hostname->ipaddress mappings indefinitely)
let's see what kind of regressions this will cause. :-)
r=dougt
Attachment #89141 -
Flags: review+
Comment 36•22 years ago (assignee)
ok, fixed-on-trunk :)
marking FIXED
(who wants to own this while i'm on vacation?)
Status: ASSIGNED → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
Comment 37•22 years ago
I'll own it - looks like I can't change ownership without reopening the bug, and
I'm not sure we want to do that right now - yet another reason why resolving the
bug before the branch fix is a bad idea. Do we have a testcase for the exploit?
Can somebody write one? Otherwise we have no way to verify the fix.
Comment 38•22 years ago
Adam Megacz <adam@xwt.org> has submitted an exploit of this to Bugtraq moderator
Dave Ahmad for publication on 7/28
Summary: Princeton exploit may be possible → Princeton-like exploit may be possible
Whiteboard: [ADT1 RTM] [ETA 6/26] → [ADT1 RTM] [ETA 6/26][public on 7/28]
Comment 39•22 years ago
Updated•22 years ago
Attachment #89639 -
Attachment description: Text of announcement that will be posted to bugtraq → XWT Foundation Security Advisory
Comment 40•22 years ago
The XWT advisory describes the issues raised in comment #4 which is not
addressed by the current patches. Rather than reopen this bug and confuse two
issues I've spun off bug 154930.
Whiteboard: [ADT1 RTM] [ETA 6/26][public on 7/28] → [ADT1 RTM] [ETA 6/26]
Comment 41•22 years ago
Hey, I think restricting security by IP (rather than hostname) is definitely the
way to go since the name-to-IP mapping is under the attacker's control, while
the IP-to-physical-server mapping is not.
Unfortunately there are a lot of people out there behind proxies who lack DNS
access. Worse, most of those proxies are perfectly happy to serve pages off the
intranet (simplifies proxy configuration; you can just send all HTTP traffic
through the proxy instead of bothering with PAC scripts).
- a
Comment 42•22 years ago
This needs modification in a DNS server or /etc/hosts. Check comments in the
attachment.
Comment 43•22 years ago
In the testcase above and for the suggested patch:
local.mall.xx == 1.1.1.1 (server somewhere in the internet)
mall.xx == 10.10.10.10 (web server in the intranet)
Adam suggests in comment #41 that users who don't have DNS and whose proxies
serve the intranet are vulnerable no matter what IP/name checks the browser
makes. I believe in the above case almost nothing can be done unless a change is
made in the web server/proxy (though I am *not* a dns expert).
But at least we should protect for other attacks.
What the browser should do is protect systems which have dns or at least use
/etc/hosts (I strongly doubt that a lot of users access web applications with URLs
like http://10.10.10.10/cgi-bin/report.cgi - at least they have an alias in
/etc/hosts)
My idea is the following:
A check should be made when assigning to document.domain.
An exploit condition arises when someone tries to assign document.domain
a value which has an IP, i.e. there is a host at that value.
The main problem is that the DNS mall.xx claims that mall.xx == 10.10.10.10
because the DNS has control over mall.xx
But the DNS doesn't have control over the 10.10.10.10 == ?? (the reverse lookup)
So the browser asks a trusted DNS for the name of 10.10.10.10.
The local DNS or /etc/hosts returns "intranetserver1".
Obviously intranetserver1 != mall.xx, so something suspicious is going on.
Possible problems:
1. 10.10.10.10 may not have a name. I strongly doubt a user will use a local web
server by its IP and in this case use of document.domain is illegal but who knows?
2. This may get tricky if a web server is served by different machines with
different IPs (is this "load balancing"?). Probably in this case the IPs will
be in some range, and this case is probably not so common behind firewalls.
Another possibility is adding a preference for this check.
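A sketch of the forward/reverse check proposed above, with the resolver calls passed in so the policy stays explicit; all names here are hypothetical, not an existing API.

#include <functional>
#include <string>

// forwardLookup: hostname -> dotted-quad IP (empty if none)
// reverseLookup: IP -> canonical hostname from the *trusted* local resolver
bool DomainAssignmentLooksSafe(
        const std::string& newDomain,
        const std::function<std::string(const std::string&)>& forwardLookup,
        const std::function<std::string(const std::string&)>& reverseLookup) {
    std::string ip = forwardLookup(newDomain);
    if (ip.empty())
        return false;                 // nothing resolvable: deny the assignment
    std::string name = reverseLookup(ip);
    if (name.empty())
        return false;                 // address has no trusted name: be conservative
    // The reverse name must be the domain itself or end in ".<domain>";
    // "intranetserver1" vs "mall.xx" fails this, so the assignment is refused.
    if (name == newDomain)
        return true;
    std::string suffix = "." + newDomain;
    return name.size() > suffix.size() &&
           name.compare(name.size() - suffix.size(), suffix.size(), suffix) == 0;
}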
Comment 44•22 years ago
I've got a DNS server that can be configured for testing this fix, but I don't
understand what the test case would be, based on the variants discussed here...
Comment 45•22 years ago
Ben, ignore for now the continuing discussion on the document.domain issue I've
tried to split off into bug 154930. The test case we need at the moment is for
the part that Darin thinks he has fixed on the trunk: a machine with multiple or
changing-on-expire IP addresses, where some of those addresses point back
inside the firewall.
It should be a good enough test to have a machine that advertises itself with
two IP addresses, a real one and 127.0.0.1, we'll call it my.evil.com. On a
vulnerable browser load a page http://my.evil.com/attack.html which has a button
that opens a window http://my.evil.com/victim.html. After loading attack.html
twiddle your evil.com server so that my.evil.com stops responding on its real IP
address, then press the button and your localhost/victim.html should get loaded.
Once you've figured the twiddling required to perform the attack try again with
a trunk build to see if Darin's patch prevents this.
Comment 46•22 years ago
benc, how is the testing going on this fix?
Comment 47•22 years ago
cc'ing gagan too.
Comment 48•22 years ago
Paul: I'm not the QA owner of this bug.
dveditz: I've got an "evil.domain" entry in our internal DNS, but it currently
points to one of our main document servers.
I take it, based on your last comment, you need a web server that can be turned
on and off as needed? So we need to configure an "evil" DNS entry that uses a
locally administered test-http server.
I don't have control of a server I can easily bring up and down at this point,
but I'll get to work on that. This was in the existing lab plans for this test
cycle, but has not been completed yet.
Comment 49•22 years ago
dougt, dveditz, and bsharma have verified this fix using the DNS that Ben set
up. We're going to make a testcase for general use inside Netscape, but in the
meantime I'm marking this Verified so we can move on.
Status: RESOLVED → VERIFIED
Comment 50•22 years ago
adding adt1.0.1+. Please get drivers approval before checking into the branch.
Keywords: adt1.0.1+
Comment 51•22 years ago
Before checking this in to the branch, can you look at
http://bugzilla.mozilla.org/show_bug.cgi?id=156581? David has tracked down a
crash when exiting in offline mode to the fix for this bug on the trunk.
Comment 52•22 years ago
David has been looking at the regression this caused, bug 156581.
Updated•22 years ago
Whiteboard: [ADT1 RTM] [ETA 6/26] → [ADT1 RTM] [ETA 7/12]
Comment 53•22 years ago
Mitch, can you make sure that you check in the fix for bug 156581 when you check
this in? It basically entails moving these three lines of the patch:
+ // clear the hostname database (NOTE: this runs when the browser
+ // enters the offline state).
+ PL_DHashTableFinish(&mHostDB);
+
up into the if (mThread) {} clause instead of after the whole if then else.
Please let me know if this doesn't make sense. The alternative is for you to
check in the broken patch and have me fix it on the branch, and that seems silly.
Comment 54•22 years ago
Comment on attachment 89141 [details] [diff] [review]
v1 patch (pin hostname->ipaddress mappings indefinitely)
a=chofmann for 1.0.1 add the fixed1.0.1 keyword after checking into the branch
Attachment #89141 -
Flags: approval+
Comment 55•22 years ago
adding mozilla1.0.1+ based on chofmann's comments in #54.
Keywords: mozilla1.0.1 → mozilla1.0.1+
Updated•22 years ago
Whiteboard: [ADT1 RTM] [ETA 7/12] → [ADT1 RTM] [ETA 07/16]
Comment 57•22 years ago
juuust kidding...looks like I forgot to check this one in on the branch. I will
do so today as soon as the branch opens.
Keywords: fixed1.0.1
Comment 59•22 years ago
Verified on 2002-07-25-branch on Win 2000.
The site from the test case was not accessible. And this is the correct behavior.
Keywords: fixed1.0.1 → verified1.0.1
Comment 60•22 years ago (assignee)
this is a back-port of the patch for bug 89141. i had to work around the fact
that the socket transport in mozilla 0.9.4 only uses the first ip-address
returned from the DNS service. i've also included the offline crash fix.
Updated•22 years ago
Group: security?
Comment 61•22 years ago (assignee)
removing mozilla1.0.1+ since this has been fixed on the 1.0.1 branch already.
Keywords: mozilla1.0.1+
Comment 62•22 years ago
Darin, your interpretation of the HTTP/1.1 specification in comment #20 is
incorrect. You MUST NOT implicitly include the word [valid] anywhere; inserting
that word is not implied by the preceding sentence. I believe the intention was
really just to keep clients from omitting the Host header entirely.
And by the way, in the default configuration neither old nor new versions of
Apache refuse to serve a document when the Host header is incorrect. If you are
using a not-too-old version of Apache, however, it can be configured to do so
somehow (I believe by modifying the "default configuration" to not service
requests, and only having working configurations for name-based virtual hosts,
since "invalid" Host headers are usually serviced by the default
configuration).
Comment 63•22 years ago (assignee)
Marc:
my point was that since "The Host field value MUST represent the naming
authority of the origin server or gateway given by the original URL", it should
be valid for webservers to reject invalid Host field values, and hence respond
with a 400 bad request.
Comment 64•21 years ago (assignee)
ok, this is going to get backed out in the near future for mozilla 1.5. please
see bug 205726 and bug 162871 for details.
Comment 65•21 years ago (assignee)
reopening now that the patch for bug 205726 effectively backs this fix out.
Status: VERIFIED → REOPENED
Resolution: FIXED → ---
Comment 66•21 years ago (assignee)
marking WONTFIX unless someone can come up with a solution that does not break
the web. IMO this is better solved by either the local DNS (don't allow
non-intranet hostnames to point at intranet addresses), the origin server (be
strict about what Host headers you accept), or belt-and-suspenders (only allow
access to sensitive documents via SSL-enabled connections, even if access is
only available behind a firewall).
Status: REOPENED → RESOLVED
Closed: 22 years ago → 21 years ago
Resolution: --- → WONTFIX
Comment 67•21 years ago
This is the Right Way to fix it without DNS pinning. It should be up on ietf.org as an i-d soon.
http://www.xwt.org/x-requestorigin.html
The implementation within Mozilla should be trivial for somebody who knows their way around the
code. Please consider reopening this bug.
Comment 68•21 years ago
This is the Right Way to fix it without DNS pinning. It should be up on ietf.org as an i-d soon.
http://www.xwt.org/x-requestorigin.html
The implementation within Mozilla should be trivial for somebody who knows their way around the
code. Please consider reopening this bug.
Comment 69•21 years ago
I have a proposal which I'll probably clean up at some point for unsecured
connections.
http://viper.haque.net/~timeless/blog/11
http://viper.haque.net/~timeless/blog/12
basically the idea is that instead of pinning DNS entries to IP addresses, we pin
cached http data to dns/ip pairs. when the dns entry needs to be changed there
are choices: allow the new ip access to the old data (ie), refuse to connect
(n6), disallow access to old data and connect (proposed default), prompt (would
probably require setting a hidden pref to get). For https there isn't an issue
since the dns service no longer pins and the https protocol is responsible for
making sure the host is consistent.
that said, i tried to read XWT. but it's 5am and i couldn't understand how it'd
be useful.
Comment 70•21 years ago (assignee)
adam: it should be easy to add the request header you mentioned, but can you
explain how exactly that will help? (how does it differ from the host header?)
timeless: your solution sounds like it will break the web in some cases or
require user interaction that will be well beyond the comprehension of most users.
Comment 71•21 years ago
> adam: it should be easy to add the request header you mentioned, but can you
> explain how exactly that will help? (how does it differ from the host header?)
The initial XWT Foundation Advisory (www.xwt.org/sop.txt) explains easy to implement solutions
for defending against all attacks except the case where the user is behind an HTTP proxy and does
not have access to a DNS server to resolve names. The remaining vulnerability only works against
HTTP servers with a default NameVirtualHost (to use the Apache jargon) behind the same proxy.
The crux of the issue here is that in order to decide if an HTTP transaction should be permitted or
denied, the decision-maker must know two things:
1) "who" is making the request: is it the user him/herself, or is it untrusted mobile content
[javascript, flash, or java-applet] that is simply being executed on the user's machine?
2) what is the destination IP of the request?
Unfortunately, in proxied, no-dns networks, *no single element in the network* knows both these
things. The browser knows (1), and the proxy knows (2).
The draft RFC proposes inserting a header into the HTTP request which the browser uses to tell the
proxy the value of (1). Organizations with the aforementioned network configuration (proxy + no
DNS + default NameVirtualHost) can instruct their proxy to deny requests based on the
RequestOrigin and destination (for example, if RequestOrigin isn't in our organization and the Host
is, deny the request). The patch for SQUID lets you add this with a one-line acl.
In summary, X-RequestOrigin is nothing more than a trivial protocol for the browser to tell the
proxy what (1) is so that it can make the appropriate permit/deny decision.
- a
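As an illustration of the permit/deny decision described in (1) and (2), a proxy could apply roughly the rule sketched below; the internal-suffix policy (".corp.example") is a made-up example, not part of the draft.

#include <string>

// Hypothetical policy: the organization's internal hosts end in ".corp.example".
static bool EndsWith(const std::string& s, const std::string& suffix) {
    return s.size() >= suffix.size() &&
           s.compare(s.size() - suffix.size(), suffix.size(), suffix) == 0;
}

bool ProxyShouldAllow(const std::string& requestOrigin,  // X-RequestOrigin value, "" if absent
                      const std::string& host) {         // Host header value
    const std::string internalSuffix = ".corp.example";
    if (requestOrigin.empty())
        return true;   // no header: the request came from the user, not mobile content
    // Deny mobile content from an external origin that targets an internal host.
    if (!EndsWith(requestOrigin, internalSuffix) && EndsWith(host, internalSuffix))
        return false;
    return true;
}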
Comment 72•21 years ago
X-RequestOrigin is now an IETF Internet Draft
http://www.ietf.org/internet-drafts/draft-megacz-x-requestorigin-00.txt
Comment 73•18 years ago
(In reply to comment #72)
> X-RequestOrigin is now an IETF Internet Draft
>
> http://www.ietf.org/internet-drafts/draft-megacz-x-requestorigin-00.txt
A new bug should be filed if this is still worth implementing.
/be
Updated•17 years ago
Summary: Princeton-like exploit may be possible → Use "DNS pinning" to prevent Princeton-like exploits
Comment 74•17 years ago
oh the irony in it:
http://crypto.stanford.edu/dns/
Collin Jackson, Adam Barth, Andrew Bortz, Weidong Shao, and Dan Boneh
Protecting Browsers from DNS Rebinding Attacks (pre-proceedings draft)
To appear at ACM CCS, October 2007
even more ironic in the pdf:
Copyright 200X ACM X-XXXXX-XX-X/XX/XX ...$5.00.
and the most ironic part - i am ready to bet the Stanford guys are gonna get ZERO from the $5 per 11-page pdf while ACM holds the copyright