Bug 309128: More IDN DNS lookup and URL display problems: potential for spoofing
Status: RESOLVED FIXED (Closed)
Opened: 19 years ago • Closed: 19 years ago
Product: Core • Component: Networking (defect)
Target Milestone: mozilla1.8beta5
Reporter: usenet • Assignee: darin.moz
Keywords: fixed1.8 • Whiteboard: [sg:spoof]
References: Blocks 1 open bug
Attachments: 2 files
Attachment 1 (deleted): text/html
Attachment 2 (deleted): patch • Flags: review+ (Biesinger), superreview+ (darin.moz), approval1.8rc1+ (asa)
Nasty broken DNS lookups are being performed when certain Unicode characters are present in URLs. Combined with the display of these URLs in their normalized forms, this potentially creates a spoofing problem, although I have not yet found an exploit.
How to reproduce the problem:
1. Start up a packet sniffer.
2. Open Firefox and go to the web page provided in the attachment below.
3. Click on some links.
4. Wince at the garbage being sent in DNS lookups.
5. Note that some of the displayed domain names in URLs contain spaces, potentially allowing domain spoofing if a large number of space-generating characters were appended to an apparent domain name, e.g.:
   www.google.comXXXXXXXXXXXXXXXXXXXXXX....XXXXXXXX
   where X represents a space-generating character.
What happens:
* DNS lookups are generated containing characters other than ASCII letters, digits, or hyphens, violating RFC 1035.
What should happen instead:
Either:
* the bogus characters should never get as far as the DNS logic
Or:
* The invalid DNS lookups should fail
Preferably both fixes should be applied, if possible.
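[Editorial note: the "LDH" rule the report invokes (letters, digits, hyphen) can be sketched as a small standalone checker. This is an illustration only; `is_ldh_hostname` is a hypothetical helper, not a Mozilla function.]

```python
import re

# RFC 1035 "LDH" rule: a DNS label may contain only ASCII letters,
# digits, and hyphens, must not start or end with a hyphen, and must
# be 1-63 octets long.
LABEL_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_ldh_hostname(hostname: str) -> bool:
    """Return True if every label of `hostname` satisfies the LDH rule."""
    labels = hostname.rstrip(".").split(".")
    return bool(labels) and all(LABEL_RE.match(label) for label in labels)

print(is_ldh_hostname("www.google.com"))   # a well-formed name passes
print(is_ldh_hostname("bad name.com"))     # a space fails the check
```

A check like this at the resolver boundary would reject every lookup the testcase generates, regardless of how the characters got past the URL parser.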
Comment 1 • Reporter • 19 years ago
This is the HTML page referred to above, with the links to click on to generate
the DNS lookup and/or name display problems mentioned.
Comment 2 • Reporter • 19 years ago
IMPORTANT: please note that not all of the links given cause problems: however,
most of them cause either a bad display or an invalid DNS lookup to occur.
Comment 3 • Reporter • 19 years ago
By the way, the Unicode characters used to make this list are all of the non-iso-8859-1 characters whose nameprep()'d form contains an iso-8859-1 character, as of the version of nameprep() in Python 2.3.3.
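[Editorial note: Python still exposes nameprep via the `encodings.idna` module, so the reporter's enumeration can be reproduced today. This is a rough sketch of such a scan over the BMP, not the reporter's original script.]

```python
from encodings.idna import nameprep

def mapped_to_latin1():
    """Yield (char, nameprepped) pairs for BMP code points above U+00FF
    whose nameprep()'d form contains at least one iso-8859-1 character."""
    for cp in range(0x100, 0x10000):
        if 0xD800 <= cp <= 0xDFFF:
            continue  # skip surrogate code points
        ch = chr(cp)
        try:
            mapped = nameprep(ch)
        except UnicodeError:
            continue  # prohibited code point; nameprep rejects it
        if mapped and any(ord(c) < 0x100 for c in mapped):
            yield ch, mapped

pairs = dict(mapped_to_latin1())
# For example, U+212A KELVIN SIGN case-folds to "k", and U+2026
# HORIZONTAL ELLIPSIS normalizes to "...": exactly the kind of
# post-nameprep ASCII leakage this bug is about.
```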
Updated • 19 years ago
Assignee: nobody → darin
Component: Security → Networking
Flags: blocking1.8b5?
OS: Linux → All
Product: Firefox → Core
Hardware: PC → All
Whiteboard: [sg:fix]
Version: unspecified → Trunk
Updated • 19 years ago
Flags: blocking1.8b5? → blocking1.8b5+
Updated • 19 years ago
Whiteboard: [sg:fix] → [sg:spoof]
Updated • Assignee • 19 years ago
Target Milestone: --- → mozilla1.8beta5
Comment 4 • Assignee • 19 years ago
Can this bug also be solved by displaying these domain names using punycode, or
do we need to extend the net_IsValidHostName check? (see bug 304904)
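[Editorial note: Python's built-in `idna` codec illustrates what a forced Punycode display would look like; it nameprep-normalizes each label and then applies the Punycode transformation (RFC 3490/3492). Illustrative only; Mozilla's converter is separate code.]

```python
def to_display_punycode(host: str) -> str:
    """Encode an IDN hostname into its ASCII Punycode ("xn--") form
    using Python's built-in "idna" codec."""
    return host.encode("idna").decode("ascii")

print(to_display_punycode("bücher.example"))  # xn--bcher-kva.example
print(to_display_punycode("example.com"))     # plain ASCII passes through
```

Displaying suspicious names in this form makes the spoof visible: an "xn--" label looks nothing like the domain it imitates.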
Comment 5 • Reporter • 19 years ago
Well, probably both, in a belt-and-braces way.
That is to say, the presence of any blacklisted character in the source Unicode FQDN string should force the string to be displayed as Punycode, because of the display issues.
AND
since the Unicode -> domain name code is clearly not watertight yet, any
character other than 0-9a-zA-Z and '-' in a value that might be passed to the
name resolver library should result both in lookups failing at the low level,
and the domain name being reported as broken at the high level.
THEN
We should then file a bug to fix the upstream issue: namely, that normalization of some FQDN labels can give rise to all-ASCII strings which contain illegal characters for domain names. Presumably the bug arises from the illegal-character check currently being carried out before NAMEPREP normalization of the FQDN labels: it should clearly be carried out _after_ NAMEPREP normalization. Or perhaps the code is even cruder, and just blindly trusts the URL parser to catch all illegal characters at parse time, which would be a bad idea, since such a parser is inevitably complex and likely to be a place where obscure bugs may remain hidden for long periods.
FINALLY
Even when the Unicode -> domain name code is fixed, the low-level LDH check just
before sending to the name resolver should still remain: it's cheap, enforces
compliance with RFC 1035, on the "be careful what you send" principle of
engineering, and may catch any errors introduced in later amendments to the
higher-level code. Perhaps, when the upstream code is (thought to be) finally
fixed, it should be left in as an assert?
Comment 6 • Reporter • 19 years ago
More specific bug for the ASCII leakage issue now filed at Bug 310361, as per
comment above.
Comment 7 • Assignee • 19 years ago
Usenet: That sounds like a good plan to me. I particularly like the idea of
applying a whitelist before sending a string off to the host resolver.
Comment 8 • 19 years ago
We now have an ASCII character blacklist which is applied before DNS resolving.
This was added in bug 304904. Here is the branch version of the patch:
+ const char* bad = net_FindCharInSet(hostname, end,
+ "\x01\x02\x03\x04\x05\x06\x07\x08"
+ "\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10"
+ "\x11\x12\x13\x14\x15\x16\x17\x18"
+ "\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20"
+ "\x21\x22\x23\x25\x26\x27\x28\x29"
+ "\x2a\x2b\x2c\x2f\x3b\x3c\x3d\x3e"
+ "\x3f\x40\x5c\x5e\x7b\x7c\x7e\x7f");
Even if we have bugs which allow bogus ASCII characters to make it through to
the DNS resolver, this should stop the lookup from happening. Does that deal
with the security issue? Is that character list extensive enough?
Do any of the characters in the HTML testcase attached to this bug need to be
added to the IDN blacklist in bug 309133?
Gerv
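[Editorial note: the hex escapes in the quoted blacklist are hard to read. Decoding them mechanically, as in this sketch (an editorial illustration, not part of the patch), shows what the list covers and what it deliberately leaves out.]

```python
# The same character set as the quoted C++ patch, decoded in Python.
BLACKLIST = (
    "\x01\x02\x03\x04\x05\x06\x07\x08"
    "\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10"
    "\x11\x12\x13\x14\x15\x16\x17\x18"
    "\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20"
    "\x21\x22\x23\x25\x26\x27\x28\x29"
    "\x2a\x2b\x2c\x2f\x3b\x3c\x3d\x3e"
    "\x3f\x40\x5c\x5e\x7b\x7c\x7e\x7f"
)

# All C0 control characters (0x01-0x1F), space (0x20), DEL (0x7F),
# plus the visible punctuation below:
visible = "".join(c for c in BLACKLIST if c.isprintable() and c != " ")
print(visible)

# Notably absent: letters, digits, "-" and "." (legal in hostnames),
# ":" (the port separator), and also "$", "_", "`", "[" and "]".
```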
Comment 9 • 19 years ago
Darin, so are we covered here as far as the security concern goes, as Gerv
suggests in comment #8?
Comment 10 • 19 years ago
After talking with Darin, we're going to push this out to after beta2 (the first RC).
Flags: blocking1.8b5+ → blocking1.8b5-
Updated • Assignee • 19 years ago
Flags: blocking1.8rc1?
Updated • 19 years ago
Flags: blocking1.8rc1? → blocking1.8rc1+
Comment 11 • 19 years ago
Masayuki and Dveditz, we could use some help here. Darin's overloaded and this risks falling off the list.
Comment 12 • 19 years ago
(In reply to comment #8)
> We now have an ASCII character blacklist which is applied before DNS resolving.
> This was added in bug 304904. Here is the branch version of the patch:
This is not what was checked in, as checked in we only catch "%/\\", which in
particular leaves spaces wide open, among other bogus chars.
> Even if we have bugs which allow bogus ASCII characters to make it through to
> the DNS resolver,
They aren't bugs, it's NAMEPREP doing what NAMEPREP does.
> Is that character list extensive enough?
The one checked in isn't. The one you quoted in your comment would be.
Comment 13 • 19 years ago
Dan, so what needs to happen with this bug?
Comment 14 • 19 years ago
Updated • 19 years ago
Attachment #199633 - Flags: superreview?(darin)
Attachment #199633 - Flags: review?(cbiesinger)
Comment 15 • Assignee • 19 years ago
Comment on attachment 199633 [details] [diff] [review]
patch to update the blacklist
An implementation similar to nsHttpChannel's IsValidToken() would be faster,
but I don't think we need to worry about performance here.
sr=darin
Attachment #199633 - Flags: superreview?(darin) → superreview+
Comment 16 • 19 years ago
Comment on attachment 199633 [details] [diff] [review]
patch to update the blacklist
can you add a comment mentioning which characters this includes? The list isn't
particularly readable...
shouldn't characters > 127 also be excluded?
maybe a whitelist would be easier?
anyway, r=biesi with at least the comment added
Attachment #199633 - Flags: review?(cbiesinger) → review+
Comment 17 • 19 years ago
I will put the comment back in; it was in the original patch for bug 304904 but got lost on the way. Later today I will check the patch into the trunk, and into the 1.8 branch after I get approval.
Updated • 19 years ago
Attachment #199633 - Flags: approval1.8rc1?
Comment 18 • Assignee • 19 years ago
Yeah, what about characters > 127? If those are to be excluded, then a whitelist or some range checks (i.e., a hand-rolled loop) would be better.
Updated • 19 years ago
Attachment #199633 - Flags: approval1.8rc1? → approval1.8rc1+
Comment 20 • Assignee • 19 years ago
We can resolve this bug as fixed then, right?
Comment 21 • Reporter • 19 years ago
This is _probably_ fixed, given
a) the new blacklist at the ASCII side of the DNS resolver, which should fix the invalid DNS lookup problem, and
b) the new blacklist at the Unicode side of the URL display code, which should fix the potential DNS spoofing problem.
However, it should still be _tested_ with an up-to-date build before it can be resolved as fixed.
Comment 22 • Assignee • 19 years ago
OK, in that case, we can mark this as FIXED, and then either REOPEN it or mark as VERIFIED once testing is complete.
Status: NEW → RESOLVED
Closed: 19 years ago
Resolution: --- → FIXED
Comment 23 • 19 years ago
I kept it open because of the question about characters > 127. Bug 309671 has a new patch that converts the blacklist to a whitelist and covers those.
Updated • 19 years ago
Flags: blocking-aviary1.0.8?
Comment 24 • 19 years ago
If we want this in 1.0.8 we also need some code from bug 304904.
Updated • 19 years ago
Flags: blocking-aviary1.0.8? → blocking-aviary1.0.8-
Updated • 18 years ago
Group: security