[GeckoWebExecutor] AC integration test failing: get200WithCookiePolicy
Categories
(GeckoView :: General, defect, P1)
Tracking
(firefox79 fixed, firefox80 fixed)
People
(Reporter: sebastian, Assigned: snorp, NeedInfo)
References
Details
(Whiteboard: [geckoview:m77][geckoview:m78][geckoview:m79][geckoview:m80])
Attachments
(3 files)
- (deleted), text/x-phabricator-request | Details
- (deleted), text/x-phabricator-request | RyanVM: approval-mozilla-beta+ | Details
- (deleted), patch | RyanVM: approval-mozilla-beta+ | Details | Diff | Splinter Review
We have the following test in AC:
https://github.com/mozilla-mobile/android-components/blob/04dc7bc8d4c732118272876f278ab35910efa7d7/components/tooling/fetch-tests/src/main/java/mozilla/components/tooling/fetch/tests/FetchTestCases.kt#L403-L430
This test:
- Makes a request to a test web server, which responds with a "Set-Cookie" header
- Makes a second request and verifies that the cookie is in the request
- Makes a third request with FETCH_FLAGS_ANONYMOUS and verifies that the cookie is not in the request
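The three steps above can be sketched as a self-contained Kotlin program. This is not the AC test itself: plain JDK classes stand in for the concept-fetch client and the test web server, a manual variable stands in for the engine's cookie jar, and the cookie name/value and port handling are made up for illustration.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.HttpURLConnection
import java.net.InetSocketAddress
import java.net.URL

fun main() {
    // Tiny stand-in for the test web server:
    // /set responds with a Set-Cookie header, /get echoes back the Cookie request header.
    val server = HttpServer.create(InetSocketAddress("localhost", 0), 0)
    server.createContext("/set") { ex ->
        ex.responseHeaders.add("Set-Cookie", "uptimeMillis=12345")
        ex.sendResponseHeaders(200, -1)
        ex.close()
    }
    server.createContext("/get") { ex ->
        val bytes = (ex.requestHeaders.getFirst("Cookie") ?: "").toByteArray()
        if (bytes.isEmpty()) {
            ex.sendResponseHeaders(200, -1)
        } else {
            ex.sendResponseHeaders(200, bytes.size.toLong())
            ex.responseBody.write(bytes)
        }
        ex.close()
    }
    server.start()
    val base = "http://localhost:${server.address.port}"

    var storedCookie: String? = null  // plays the role of the engine's cookie jar

    // `anonymous` mimics FETCH_FLAGS_ANONYMOUS: no cookies sent, none stored.
    fun fetch(path: String, anonymous: Boolean): String {
        val conn = URL(base + path).openConnection() as HttpURLConnection
        if (!anonymous) storedCookie?.let { conn.setRequestProperty("Cookie", it) }
        val body = conn.inputStream.use { it.readBytes() }.decodeToString()
        if (!anonymous) {
            conn.getHeaderField("Set-Cookie")?.let { storedCookie = it.substringBefore(';') }
        }
        return body
    }

    fetch("/set", anonymous = false)               // 1) server sets the cookie
    val second = fetch("/get", anonymous = false)  // 2) cookie should be sent back
    val third = fetch("/get", anonymous = true)    // 3) anonymous request should omit it
    server.stop(0)
    check("uptimeMillis" in second)
    check("uptimeMillis" !in third)
    println("ok")
}
```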
We just wanted to get these tests running in automation on Firebase and noticed that this test has started failing at some point since we wrote it - which is surprising, since GeckoView has a very similar test:
https://searchfox.org/mozilla-central/source/mobile/android/geckoview/src/androidTest/java/org/mozilla/geckoview/test/WebExecutorTest.kt#235-265
concept-fetch to GeckoWebExecutor translation happens here:
https://github.com/mozilla-mobile/android-components/blob/master/components/browser/engine-gecko/src/main/java/mozilla/components/browser/engine/gecko/fetch/GeckoViewFetchClient.kt#L41-L64
I was unable to figure out what caused it, so I started bisecting and ended up at this commit, which only bumps GeckoView Nightly from 75.0.20200226092757 to 75.0.20200227094956. So maybe it is a GeckoView regression after all - although I'm not sure why it is not caught by GeckoView's own test.
https://github.com/mozilla-mobile/android-components/commit/15ca6252549bcfdcedb12ca487bd20aed44bbede
Comment 1•5 years ago
Comment 2•4 years ago
This is caused by SameSite=Lax being the default, though I don't understand why. We're setting a cookie from http://localhost:4242/cookies/set/uptimeMillis/<value> and then requesting http://localhost:4242/cookies. Why doesn't this satisfy same-site restrictions? Are we missing something that tells necko that this is a "top-level" request?
Also, it's annoying that our existing test correctly fails (so it would've caught this bug) when run in the environment we actually ship to users, where SameSite=Lax is the default on Nightly builds. The automation profiles intentionally disable this, so we had no shot at catching this bug without special treatment to enable it.
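For reference, the SameSite=Lax rule that bites here can be sketched as a small predicate. This is my simplification, not the real necko logic, which also consults the channel's loadInfo and triggering principal - exactly the part the GeckoWebExecutor channel was getting wrong:

```kotlin
enum class SameSite { NONE, LAX, STRICT }

// Simplified model of when a cookie is attached to a request.
// Real necko derives sameSiteRequest/topLevelNavigation from the
// channel's loadInfo, so a channel set up with the wrong principal
// ends up classified as cross-site here.
fun shouldSendCookie(
    sameSite: SameSite,
    sameSiteRequest: Boolean,     // request site matches the cookie's site
    topLevelNavigation: Boolean,  // e.g. typing a URL or clicking a link
): Boolean = when (sameSite) {
    SameSite.NONE -> true
    SameSite.LAX -> sameSiteRequest || topLevelNavigation
    SameSite.STRICT -> sameSiteRequest
}

fun main() {
    // localhost:4242 -> localhost:4242 is same-site, so Lax should allow it...
    check(shouldSendCookie(SameSite.LAX, sameSiteRequest = true, topLevelNavigation = false))
    // ...but a request classified as cross-site and non-top-level is blocked,
    // which is how the GeckoWebExecutor request was being treated.
    check(!shouldSendCookie(SameSite.LAX, sameSiteRequest = false, topLevelNavigation = false))
    println("ok")
}
```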
Comment 3•4 years ago
baku, any ideas about this? You can see how we set up the channel here: https://searchfox.org/mozilla-central/source/widget/android/WebExecutorSupport.cpp#600
Comment 4•4 years ago
Using BasePrincipal::CreateContentPrincipal(uri) instead of the system principal seems to fix things here. Is that more correct? If we switch away from the system principal we'll have to find a different workaround for bug 1432949.
Comment 5•4 years ago
NI to ckerschb here as well...see the last two comments
Comment 6•4 years ago
(In reply to James Willcox (:snorp) (jwillcox@mozilla.com) (he/him) from comment #5)
> NI to ckerschb here as well...see the last two comments
Can the URI to be loaded be influenced by the web? I guess we can't easily pass the actual triggeringPrincipal, which is the principal that caused the load to happen, right? Obviously I would prefer to somehow use the principal that triggered the load, but if that's not easily doable then I think we can use BasePrincipal::CreateContentPrincipal(uri), though it would be great to have some restrictions on the URI. What URI are we expecting? Is it always an http(s) URI? If so, could we check that and only allow schemes that make sense?
Comment 7•4 years ago
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #6)
> (In reply to James Willcox (:snorp) (jwillcox@mozilla.com) (he/him) from comment #5)
> > NI to ckerschb here as well...see the last two comments
> Can the URI to be loaded be influenced by the web?
It's possible, but we don't consider that with the current API. It's intended to be a generic HTTP request mechanism. I would generally not expect the URLs here to come from web content, with the exception of downloads. We want to transition away from GeckoWebExecutor for those in the near future and directly use the response from Gecko.
> I guess we can't easily pass the actual triggeringPrincipal, which is the principal that caused the load to happen, right? Obviously I would prefer to somehow use the principal that triggered the load, but if that's not easily doable then I think we can use BasePrincipal::CreateContentPrincipal(uri), though it would be great to have some restrictions on the URI. What uri are we expecting? Is that always an http(s) URI? If so, could we check that and only allow schemes that make sense?
As I mentioned above, it's a generic http API, so the URI could come from anywhere. We support http[s] and blob currently. The blob support is new, but required to allow downloads for the time being. We recently added a hack in Bug 1432949 that relies on us using the system principal here to access revoked blob URLs, so I think we'd need to solve that differently if we want to migrate to a content principal.
It seems wrong to me that any URI should be considered third party for the system principal. Shouldn't it be the opposite?
Comment 8•4 years ago
(In reply to James Willcox (:snorp) (jwillcox@mozilla.com) (he/him) from comment #7)
> > Can the URI to be loaded be influenced by the web?
> It's possible, but we don't consider that with the current API. It's intended to be a generic http request mechanism. I would generally not expect the URLs here to come from web content with the exception of downloads.
Ok, that sounds good. Also, using a ContentPrincipal(uri) is definitely not any worse than the SystemPrincipal, which is what we currently use, so that part would be fine. Please add a comment explaining why we are doing that, though, e.g. to send same-site cookies.
> It seems wrong to me that any URI should be considered third party for the system principal. Shouldn't it be the opposite?
Ah right, in fact that shouldn't be the case, because if a user types a URL into the URL bar then we also send same-site cookies, and typing into the URL bar uses a SystemPrincipal as the triggeringPrincipal. Baku, do you know what might be wrong here?
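The fix that eventually landed ("Don't treat any URI as third party for system principal") amounts to special-casing the system principal in the third-party decision, so system-triggered loads behave like URL-bar navigations. A rough Kotlin sketch with hypothetical types, not the actual mozilla-central C++:

```kotlin
sealed class Principal
object SystemPrincipal : Principal()
data class ContentPrincipal(val site: String) : Principal()

// Sketch of the third-party check: loads triggered by the system principal
// (URL bar, internal requests) are never considered third-party, so
// SameSite cookies get sent, matching what happens when a user types a URL.
fun isThirdPartyFor(principal: Principal, requestSite: String): Boolean = when (principal) {
    is SystemPrincipal -> false
    is ContentPrincipal -> principal.site != requestSite
}

fun main() {
    // System principal: never third-party, regardless of the target URI.
    check(!isThirdPartyFor(SystemPrincipal, "http://localhost:4242"))
    // Content principals: ordinary same-site / cross-site comparison.
    check(!isThirdPartyFor(ContentPrincipal("http://localhost:4242"), "http://localhost:4242"))
    check(isThirdPartyFor(ContentPrincipal("https://example.com"), "http://localhost:4242"))
    println("ok")
}
```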
Comment 9•4 years ago
Comment 10•4 years ago
Comment 11•4 years ago
Comment 12•4 years ago
Backed out for xpcshell failures on test_cookies_thirdparty.js
backout: https://hg.mozilla.org/integration/autoland/rev/b99e0178292cdd08ee04386526f3aa20f6c02f8d
failure log: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=305698578&repo=autoland&lineNumber=3562
[task 2020-06-09T21:47:12.653Z] 21:47:12 INFO - TEST-START | netwerk/test/unit/test_cookies_thirdparty.js
[task 2020-06-09T21:47:14.301Z] 21:47:14 WARNING - TEST-UNEXPECTED-FAIL | netwerk/test/unit/test_cookies_thirdparty.js | xpcshell return code: 0
[task 2020-06-09T21:47:14.301Z] 21:47:14 INFO - TEST-INFO took 1646ms
[task 2020-06-09T21:47:14.301Z] 21:47:14 INFO - >>>>>>>
Comment 13•4 years ago
Comment 14•4 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/108a95d2e2f0
https://hg.mozilla.org/mozilla-central/rev/43d2d0aa775e
Comment 15•4 years ago
I tested and the issue seems to be fixed in Nightly. I think we need this patch to be uplifted, as it's a release blocker for Fenix.
Comment 16•4 years ago
Comment on attachment 9153969 [details]
Bug 1626335 - Don't treat any URI as third party for system principal
Beta/Release Uplift Approval Request
- User impact if declined: Download problems in Fenix
- Is this code covered by automated tests?: Yes
- Has the fix been verified in Nightly?: Yes
- Needs manual test from QE?: No
- If yes, steps to reproduce:
- List of other uplifts needed: None
- Risk to taking this patch: Medium
- Why is the change risky/not risky? (and alternatives if risky): It changes the behavior of the system principal, which makes me nervous.
- String changes made/needed: None
Comment 18•4 years ago
Comment 19•4 years ago
Comment on attachment 9153970 [details]
Bug 1626335 - Enable SameSite=Lax for GeckoView unit tests
Approved for 79.0b8/GV79.
Comment 20•4 years ago
bugherder uplift
https://hg.mozilla.org/releases/mozilla-beta/rev/f30799135119
https://hg.mozilla.org/releases/mozilla-beta/rev/d8c998cc661c
Comment 21•4 years ago
My bad. I was able to download my payment statement from https://workforcenow.adp.com, but unfortunately we are still seeing the issue with Bugzilla attachments. Related Fenix issue
Steps to reproduce
- Log in to Bugzilla
- Open a restricted bug, e.g. bug 1607647 (anyone who needs access to this, just ask; it is a test bug with no confidential information, and its sole purpose is to have an attachment behind a login)
- Download the Success.html attachment
Expected behavior
The Success.html file is downloaded.
Actual behavior
An "Access denied" Bugzilla page is saved instead.
Device information
Comment 22•4 years ago
Sorry for all the confusion. I think I now understand what is happening here. I thought this bug was the cause of not being able to download any file from authenticated sites, but I was wrong: there were some authenticated sites we could download from before this patch, such as https://workforcenow.adp.com. It looks like when I tested I was using an old cached GeckoView dependency, and there https://workforcenow.adp.com was working. Now, with this patch, I'm getting an HTTP 500 error and am not able to download; when I omit passing the headers, I am able to download from that site. Unfortunately, it looks like we are regressing with this patch.
Comment 23•4 years ago
Marked as FIXED as described here. For the Bugzilla download issue, I opened this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1652856