Closed
Bug 1367375
Opened 7 years ago
Closed 6 years ago
Streams API needs upload chunked transfer encoding in HTTP channel
Categories: Core :: Networking: HTTP, enhancement, P3
Status: RESOLVED WONTFIX
People: (Reporter: baku, Unassigned)
Whiteboard: [necko-backlog]
The Streams API allows the creation of ReadableStream objects, and these objects can be used as the body of Fetch/XHR/SendBeacon/WebSocket requests. When this happens, Firefox must send the data using chunked transfer encoding.
Currently, the interaction between necko and DOM when data is sent to the network is done through nsIInputStream: nsIInputStream::Available() is called to learn the full size of the stream, and that size is used for the Content-Length HTTP header.
A ReadableStream doesn't work this way: its full size may be unknown at the time it is used. What I would like is the possibility to enable chunked transfer encoding on nsIHttpChannel (or elsewhere). When enabled, consecutive nsIInputStream::Read() and nsIAsyncInputStream::AsyncWait() calls would be made to retrieve data from the stream.
Unfortunately, none of this is specified in the Fetch API spec yet: https://fetch.spec.whatwg.org/#streams
Note that this bug doesn't block the first implementation of the Streams API: we will probably ship the Streams API without this feature initially.
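To illustrate the framing decision described above, here is a minimal Python sketch (not Necko code; the stream classes and the `available()` convention are hypothetical stand-ins for nsIInputStream::Available(), which would signal an unknown size):

```python
def build_request_headers(stream):
    """Decide between Content-Length and chunked transfer encoding.

    `stream.available()` mimics nsIInputStream::Available(): it returns the
    total body size in bytes, or None when the size is unknown up front
    (as with a ReadableStream body).
    """
    size = stream.available()
    if size is not None:
        return {"Content-Length": str(size)}
    # Unknown length: the only HTTP/1.1 option is chunked transfer encoding.
    return {"Transfer-Encoding": "chunked"}

class KnownSizeStream:
    """Hypothetical stream whose full size is known, like a Blob body."""
    def __init__(self, data):
        self._data = data
    def available(self):
        return len(self._data)

class UnknownSizeStream:
    """Hypothetical stream of unknown size, like a ReadableStream body."""
    def available(self):
        return None
```

For example, `build_request_headers(KnownSizeStream(b"abc"))` yields a Content-Length of 3, while an UnknownSizeStream forces `Transfer-Encoding: chunked`.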
Updated•7 years ago
Assignee: nobody → valentin.gosu
Whiteboard: [necko-active]
Comment 1•7 years ago
Step 4 of https://fetch.spec.whatwg.org/#concept-http-network-fetch is roughly what's supposed to define this. (See also the condition in step 5 for HTTP/1.0.)
The HTTP standard defines the details of chunked encoding itself, but it's not always clear how the Fetch spec should layer on top of it, since the HTTP standard doesn't expose a clear API.
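The wire format the HTTP standard defines is small: each chunk is preceded by its size in hexadecimal plus CRLF, followed by the data and another CRLF, and a zero-size chunk terminates the body. A minimal Python sketch of the encoder side (illustrative only, not how Necko implements it):

```python
def encode_chunked(chunks):
    """Frame an iterable of byte chunks using HTTP/1.1 chunked
    transfer coding: hex size, CRLF, data, CRLF; a final
    zero-size chunk ends the body."""
    out = bytearray()
    for chunk in chunks:
        if not chunk:
            # A zero-length chunk would prematurely terminate the body.
            continue
        out += b"%x\r\n" % len(chunk)
        out += chunk
        out += b"\r\n"
    out += b"0\r\n\r\n"
    return bytes(out)
```

For example, `encode_chunked([b"Mozilla", b"!"])` produces `b"7\r\nMozilla\r\n1\r\n!\r\n0\r\n\r\n"`. Because the size prefix is emitted per chunk, the sender never needs to know the total body length in advance, which is exactly why chunked encoding fits a ReadableStream body.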
Updated•7 years ago
Summary: Streams API needs chunked transfer encoding in HTTP channel → Streams API needs upload chunked transfer encoding in HTTP channel
Comment 2•7 years ago
When this is implemented, one has to keep our transaction restart logic in mind. The input stream, regardless of whether it's async, has to be seekable back to the start position (0). We may only need some kind of buffering of the head of the request body (and a limit on the first chunk of data we send before we know the connection was not closed before the request reached the server), nothing more. Just keep this in mind.
Comment 3•7 years ago
This isn't part of v1 of the Streams API. The part that needs chunked encoding seems to be future work for now (it's not implemented in Chrome yet, for instance), so I'm taking off the 'active' label for now.
Assignee: valentin.gosu → nobody
Whiteboard: [necko-active] → [necko-backlog]
Comment 4•7 years ago
Bulk change to priority: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: -- → P1
Comment 5•7 years ago
Bulk change to priority: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: P1 → P3
Comment 6•6 years ago
Is this work part of the Streams API at this point?
Flags: needinfo?(valentin.gosu)
Comment 7•6 years ago
When Streams are used for a request, the length of the data is not known at the time the headers are written, so we cannot add a Content-Length header. We need to be careful with this because it will most probably break on some servers and middleboxes. I would say we do not implement this for now.
Comment 8•6 years ago
We probably need to have a separate conversation about streams. Closing this as WONTFIX for now.
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → WONTFIX
Updated•6 years ago
Flags: needinfo?(valentin.gosu)