Closed
Bug 1261784
Opened 9 years ago
Closed 9 years ago
FetchEvent#respondWith(new Response()) causes invocation of CacheEntry callbacks to be delayed by either 5 or 10 seconds, in non-e10s
Categories
(Core :: DOM: Service Workers, defect)
Core
DOM: Service Workers
RESOLVED
FIXED
mozilla48
People
(Reporter: till, Assigned: bkelly)
References
Details
Attachments
(1 file)
patch (deleted); jdm: review+; lizzard: approval-mozilla-aurora+; lizzard: approval-mozilla-beta+
The 'Service Worker responds to fetch event with null response body' test in fetch-event.https.html[1] completes after 5 seconds, plus 2-4ms, instead of immediately, but only in non-e10s testing.
The easiest way to test this is to disable all the other tests in fetch-event.https.html, run the test, and see that the iframe sticks around for 5s before the test completes. Alternatively, add `dump`s to [2] and [3] and compare timestamps.
With the new Promise implementation, the delay increases to 10s, again plus 2-4ms. Those extra 2-4ms mean that the test fails in opt builds, because there the test harness times out after exactly 10s:
https://treeherder.mozilla.org/logviewer.html#?job_id=18665373&repo=try
I'm not sure why the new Promise implementation causes a 10s delay, but I'm quite certain it's not because anything is really wrong with Promise; it's more likely due to subtle timing changes.
What I found out is that for `new Response()`, InterceptedChannelChrome::FinishSynthesizedResponse doesn't immediately call mChannel->continueConnect()[4], which causes the delay. That's because under the OpenCacheEntry call a few lines above, the CacheEntry callbacks aren't invoked, and so mChannel->AwaitingCacheCallbacks() returns `true`.
The reason for that is that CacheEntry::GetDataSize() bails out at [5], where it presumably should get a value of `0`.
IIUC, if a value is passed to `new Response()`, it is passed into a parser that sets up the desired state, whereas not passing a value means this setup doesn't happen, or happens at some later point, perhaps based on some timeout.
This is probably also the cause of bug 1194691, though I of course don't know for sure.
[1] http://mxr.mozilla.org/mozilla-central/source/testing/web-platform/mozilla/tests/service-workers/service-worker/fetch-event.https.html?force=1#283
[2] http://mxr.mozilla.org/mozilla-central/source/testing/web-platform/mozilla/tests/service-workers/service-worker/fetch-event.https.html?force=1#289
[3] http://mxr.mozilla.org/mozilla-central/source/testing/web-platform/mozilla/tests/service-workers/service-worker/fetch-event.https.html?force=1#290
[4] http://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/InterceptedChannel.cpp#291
[5] http://mxr.mozilla.org/mozilla-central/source/netwerk/cache2/CacheEntry.cpp#1450
Flags: needinfo?(bkelly)
Reporter
Comment 1 • 9 years ago
Ben, I needinfo'd you because this is blocking the new Promise implementation, so it would be great to have it solved.
Assignee
Updated • 9 years ago
Assignee: nobody → bkelly
Status: NEW → ASSIGNED
Flags: needinfo?(bkelly)
Assignee
Comment 2 • 9 years ago
Josh, sorry for throwing another review at you, but this is fairly small and is blocking the promises-in-spidermonkey work.
Essentially, we don't close out our nsIOutputStreams in all cases, which causes us to wait for the necko timeout. This patch adds code to close the stream in both FinishSynthesizedResponse() and ResetInterception() to make sure we don't have to wait for resources to be cleaned up.
I used local debugging to show that the delay is gone. I'm not sure I can write an automated test for this without adding a flaky timeout.
Here's a try to make sure I didn't break anything else:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=214dfb870037
Attachment #8737840 - Flags: review?(josh)
Updated • 9 years ago
Attachment #8737840 - Flags: review?(josh) → review+
Assignee
Comment 4 • 9 years ago
Comment on attachment 8737840 [details] [diff] [review]
Make sure InterceptedChannel body streams are always closed. r=jdm
Approval Request Comment
[Feature/regressing bug #]: Service workers.
[User impact if declined]: Under certain circumstances we introduce a user-visible delay. This is a compat issue because Chrome does not have this delay, AFAIK. It's possible we also hold on to memory resources a bit too long without this fix.
[Describe test coverage new/current, TreeHerder]: Existing wpt and mochitests verify we don't break any existing functionality. There is no new test for this specific error because we try to avoid exact timings in automation.
[Risks and why]: Minimal. It only affects service workers, and it's a small patch.
[String/UUID change made/needed]: None.
Attachment #8737840 - Flags: approval-mozilla-beta?
Attachment #8737840 - Flags: approval-mozilla-aurora?
Comment 5 • 9 years ago
bugherder
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla48
Comment on attachment 8737840 [details] [diff] [review]
Make sure InterceptedChannel body streams are always closed. r=jdm
Performance fix, has existing test coverage, OK to uplift to aurora and for the beta 9 build.
Attachment #8737840 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Attachment #8737840 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Assignee
Comment 7 • 9 years ago
Beta:
https://hg.mozilla.org/releases/mozilla-beta/rev/1c9d427df90e
Aurora:
https://hg.mozilla.org/releases/mozilla-aurora/rev/601a549ff293
status-firefox45: --- → wontfix
status-firefox46: --- → fixed
status-firefox47: --- → fixed
status-firefox-esr45: --- → disabled