Checks for Sec-Fetch-* should not block the request for adding Opensearch search engine
Categories
(Core :: DOM: Security, defect)
Tracking
| Release | Tracking | Status |
|---|---|---|
| firefox89 | --- | fixed |
People
(Reporter: tt, Assigned: n.goeggi)
References
(Regression)
Details
(Keywords: regression, Whiteboard: [domsecurity-active])
Attachments
(1 file: deleted, text/x-phabricator-request)
STR:
1. Ensure dom.security.secFetch.enabled is true (true by default on Nightly; an illustrative gating sketch follows the expected result below).
2. Open https://translate.google.com/
3. Click "Add Search Engine" from the page actions menu in the location bar.
Actual Result:
A download error is shown; the status code in OnStopRequest is 0, but the request was blocked (HTTP 403).
Expected Result:
The search engine is successfully added.
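For context on step 1, a minimal, purely illustrative C++ sketch of how Sec-Fetch header generation can be gated on that pref. It assumes Gecko's mozilla::Preferences API and the Gecko build environment; the helper name IsSecFetchEnabled is hypothetical, not the actual implementation:

// Hypothetical sketch, not the actual patch; only builds inside the Gecko tree.
#include "mozilla/Preferences.h"

static bool IsSecFetchEnabled() {
  // dom.security.secFetch.enabled is true by default on Nightly (see step 1).
  return mozilla::Preferences::GetBool("dom.security.secFetch.enabled", false);
}

// Callers would only attach Sec-Fetch-* request headers when this returns true,
// so with the pref disabled no Sec-Fetch-* headers are sent at all.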
Reporter
Comment 1•4 years ago
Moreover, this request is made by user interaction, but I checked the Sec-Fetch-* headers for that request and they don't seem to reflect that.
Christoph, would you mind taking a look? Thank you in advance!
Updated•4 years ago
Comment 2•4 years ago
(In reply to Tom Tung [:tt, :ttung] from comment #1)
> Moreover, this request is made by user interaction, but I checked the Sec-Fetch-* headers for that request and they don't seem to reflect that.
> Christoph, would you mind taking a look? Thank you in advance!
302 to Niklas who is working on Sec-Fetch-* right now. Niklas, can you please take a look?
The headers are indeed set incorrectly:
http request [
GET /opensearch.xml?hl=en_US HTTP/1.1
Host: translate.google.com
...
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: cross-site
]
Sec-Fetch-Site should be none or same-origin, and Sec-Fetch-User is missing completely but should be set to true.
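To make the expected values concrete, here is a small, self-contained C++ sketch of the mapping being argued for here (and discussed in comment 5). The enum and function names are made up for illustration and are not the Gecko implementation:

#include <iostream>
#include <string>

// Who triggered the load, relative to the request target.
enum class Initiator { Browser /* system principal, e.g. this OpenSearch fetch */,
                       SameOrigin, SameSite, CrossSite };

// Illustrative mapping to the Sec-Fetch-Site value this comment argues for:
// a browser-internal (system principal) load should send "none", not "cross-site".
std::string SecFetchSite(Initiator aInitiator) {
  switch (aInitiator) {
    case Initiator::Browser:    return "none";
    case Initiator::SameOrigin: return "same-origin";
    case Initiator::SameSite:   return "same-site";
    default:                    return "cross-site";
  }
}

int main() {
  // The request from this bug is triggered by the browser itself:
  std::cout << "Sec-Fetch-Site: " << SecFetchSite(Initiator::Browser) << "\n";
  return 0;
}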
Reporter
Comment 4•4 years ago
Note: the search engine can only be added successfully once both this bug and bug 1703464 are fixed.
Comment 5•4 years ago
I am wondering what the value for Sec-Fetch-Site should be for requests like this, or whether the headers should be sent at all.
If the browser itself makes a request (i.e. the loading principal is the system principal), like downloading the OpenSearchDescription in this case, the requests should always work and I can't think of an attack where the Sec-Fetch-* headers would be useful in this context. On the other hand, the link to the description could be cross-site (e.g. a.com links to b.com/opensearch.xml) and maybe b.com would like to disallow that behavior, so Sec-Fetch-Site: cross-site would help.
https://translate.google.com/opensearch.xml?hl=en does not allow sec-fetch-site: cross-site requests:
curl -H "Sec-Fetch-Site: cross-site" -X GET https://translate.google.com/opensearch.xml\?hl\=en
This implies that they expect the headers.
What are your thoughts on this?
Comment 6•4 years ago
It's not a navigation, so I would not expect Sec-Fetch-User to be set. I would expect Sec-Fetch-Site to be none, as it didn't originate from another site.
(I tried finding out what Chrome does, but I cannot find this in their UI.)
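A tiny self-contained C++ sketch of this reasoning, with hypothetical names (not Gecko code): Sec-Fetch-User only applies to user-activated navigations, and the OpenSearch description download is not a navigation, so the header stays absent even though the user clicked "Add Search Engine":

#include <iostream>

// Illustrative only: per the Fetch Metadata headers, Sec-Fetch-User is sent
// for navigation requests triggered by user activation.
bool ShouldSendSecFetchUser(bool aIsNavigation, bool aHasUserActivation) {
  return aIsNavigation && aHasUserActivation;
}

int main() {
  const bool isNavigation = false;   // fetching opensearch.xml is not a navigation
  const bool userActivated = true;   // the user clicked "Add Search Engine"
  std::cout << (ShouldSendSecFetchUser(isNavigation, userActivated)
                    ? "Sec-Fetch-User: ?1"          // structured-field true
                    : "(no Sec-Fetch-User header)")
            << "\n";
  return 0;
}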
Comment 7•4 years ago
(In reply to Niklas from comment #5)
> the requests should always work and I can't think of an attack where the Sec-Fetch-* headers would be useful in this context.
Generally, I think the browser should always send Sec-Fetch-* headers for all HTTP(S) requests.
Updated•4 years ago
Comment 10•4 years ago
bugherder
Updated•3 years ago