The URL parser truncates a URL hash if it starts with a null character, but otherwise percent-encodes it
Categories
(Core :: DOM: Networking, defect, P2)
Tracking
| | Tracking | Status |
|---|---|---|
| firefox97 | --- | fixed |
People
(Reporter: pere.jobs, Assigned: valentin)
References
(Blocks 1 open bug)
Details
(Whiteboard: [reporter-external] [client-bounty-form] [verif?][necko-triaged])
Attachments
(1 file)
(deleted), text/x-phabricator-request
The following snippet:
u=new URL("http://a.b#\x00abc")
console.log(u.hash)
u.hash = "\x00abc"
console.log(u.hash)
prints the following result:
#%00abc
<empty string>
instead of the expected:
#%00abc
#%00abc
This means that when an attacker controls the start of the hash of a URL, they can remove the hash completely. I have not studied the security implications this may have, and do not have an example of a web application that is vulnerable because of this.
Chromium's handling of the same input is also buggy, but in a different manner.
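For comparison, here is a sketch of the expected (non-truncating) behavior, run against Node.js's built-in WHATWG URL class; "http://a.b" is the reporter's example URL. A spec-following parser must not empty the hash in the setter case, whether it percent-encodes the leading null or trims it as a C0 control:

```javascript
// Sketch using Node.js's spec-following WHATWG URL class.
const u = new URL("http://a.b#\x00abc");
const fromConstructor = u.hash; // null byte is percent-encoded, not truncated

u.hash = "\x00abc";
const fromSetter = u.hash; // must not come back empty in a conforming parser

console.log(fromConstructor, fromSetter);
```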
Updated•3 years ago
Comment 1•3 years ago
We're definitely parsing this wrong.
- We do what the reporter expects if you start with the '#' fragment character (u.hash = "#\x00abc";), but according to the spec that optional character should be stripped first and the remaining parsing done the same way in either case.
- In this specific example, since the input string starts with a C0 control or space, there should have been a validation error. What does that even mean if it doesn't stop the parsing algorithm? I don't see an error or warning anywhere on the Web or Browser consoles.
- We then should have trimmed leading and trailing C0 control or space characters, resulting in "#abc" according to the spec, not truncation. Why doesn't the algorithm stop at the validation error? Dropping control characters seems unexpected (trimming space seems fine, though).
- If you start with a different C0 control character (u.hash = "\x04abc"; // #%04abc) we percent-encode it as the reporter expects, but the spec says it should be the same validation error and then trimming, as if it were a space.
- If the null is anywhere else, we percent-encode it as the reporter expects.
- Null and other control characters like \x04 are not URL code points in the URL parser's fragment state. My read of the spec is that this should have resulted in another validation error, followed by percent-encoding.
- We mostly honor the fragment percent-encode set, except we seem to take the name "C0 control percent-encode set" literally and miss that, for some reason, it's defined as also including everything above \x7F.
While reality and the spec don't agree, this is more of a bug in Firefox than a vulnerability that needs to be hidden. It's also hard to imagine a scenario where this turns into a vulnerability for a web application.
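The last point above can be checked directly: per the URL Standard, the fragment percent-encode set covers the C0 controls, space, '"', '<', '>', '`', and (because the C0 control percent-encode set is defined that way) every code point above U+007E. A small sketch against Node's spec-following URL class; the encodedHash helper and the example.com host are just illustrations, not anything from Firefox's code:

```javascript
// Hypothetical helper: the serialized hash a spec-following parser
// produces for a given fragment string. The host is arbitrary.
function encodedHash(fragment) {
  return new URL("http://example.com/#" + fragment).hash;
}

console.log(encodedHash("\x04abc")); // C0 control -> percent-encoded
console.log(encodedHash('a"b'));     // '"' is in the fragment encode set
console.log(encodedHash("\x7Fa"));   // above \x7E -> also percent-encoded
console.log(encodedHash("é"));       // non-ASCII -> UTF-8 percent-encoded
```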
Comment 2•3 years ago
Pushed by valentin.gosu@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/73de9735fdc8
Don't remove old hash from URL when it starts with null codepoint r=necko-reviewers,dragana
Comment 4•3 years ago
bugherder