Closed
Bug 170594
Opened 22 years ago
Closed 22 years ago
pipelining batching should be more flexible
Categories
(Core :: Networking: HTTP, defect, P2)
Tracking
RESOLVED
FIXED
mozilla1.3beta
People
(Reporter: jud, Assigned: darin.moz)
References
Details
(Keywords: perf, topembed+, Whiteboard: [pipelining][snap])
I'm bugifying a conversation I just had w/ Darin... please correct the summary or description as necessary. Our current pipelining implementation buffers up multiple requests into a single stream, then hands that stream to the connection layer, which shuttles it to the server in a single write. That's all well and good; however, it's not granular enough. If the first two (of four) requests have been written, and two more (six total) have been created in the meantime, we won't get the new two into the "batch" until the full initial "batch" has been sent. It sounds like the concept of a "batch" needs to be broken apart so that, as requests get sent, new requests can be added to the "batch" in the pipeline.
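To illustrate the difference between a frozen batch and the more flexible behavior the report asks for, here is a minimal, hypothetical sketch (not Necko's actual API; all names are invented): a pipeline that keeps accepting new requests even while earlier ones are being drained to the socket.

```python
from collections import deque

class Pipeline:
    """Hypothetical sketch: a pipeline that stays open for new
    requests while earlier ones are still being written, rather
    than freezing a pre-serialized batch up front."""

    def __init__(self):
        self._pending = deque()   # requests not yet written
        self._sent = []           # requests already on the wire

    def enqueue(self, request):
        # New requests may join at any time, even mid-send.
        self._pending.append(request)

    def write_some(self, n):
        """Write up to n queued requests (socket I/O simulated)."""
        written = []
        for _ in range(min(n, len(self._pending))):
            req = self._pending.popleft()
            self._sent.append(req)
            written.append(req)
        return written

# With a frozen batch, requests 5 and 6 would have to wait for the
# next batch; here they simply join the pipeline already in flight.
p = Pipeline()
for i in range(1, 5):
    p.enqueue("GET /%d" % i)
p.write_some(2)              # first two requests go out
p.enqueue("GET /5")          # these arrive while the batch is in flight
p.enqueue("GET /6")
p.write_some(4)              # remaining four, including the new two
```

The point of the sketch is only the queue discipline: serialization happens per-request at write time, so the set of requests in the pipeline is not fixed when sending starts.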
Assignee
Comment 1•22 years ago
Difficult to say whether this is something I can do in the 1.2 beta timeframe, given the likelihood of introducing regressions/crashes. Targeting 1.3 alpha for now.
Status: NEW → ASSIGNED
Target Milestone: --- → mozilla1.3alpha
Assignee
Updated•22 years ago
Whiteboard: [pipelining]
Assignee
Updated•22 years ago
Priority: -- → P2
Assignee
Updated•22 years ago
Target Milestone: mozilla1.3alpha → mozilla1.3beta
-> http
Status: ASSIGNED → NEW
Component: Networking → Networking: HTTP
QA Contact: benc → httpqa
Assignee
Comment 5•22 years ago
My patch for bug 176919 includes the fix for this bug. Latest results indicate that with pipelining, Tp to cowtools over a high-bandwidth LAN connection jumps from a 3% improvement to a 7% improvement. What remains is to test it over DSL and modem connections. The algorithm I've implemented is as follows:

1. Build up an initial pipeline (4 requests max) and send it in one chunk. This is usually written out as two TCP segments, depending on the MSS for the connection.
2. When the first response completes, send another request.
3. Repeat (2). If N responses are read from the socket at a time (read size is 4k), then we will queue up at most N more requests before writing to the socket again.

As a result we get a continuously overlapped stream of requests/responses, which I think explains the increased performance.
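The scheduling described above can be sketched as a small simulation. This is an illustrative model of the steps in the comment, not the actual Necko code; the function name, the event log, and the `MAX_INITIAL` constant are assumptions for the sketch (the value 4 comes from the comment).

```python
# Hypothetical simulation of the scheduling described above:
# send an initial pipeline of MAX_INITIAL requests in one chunk,
# then issue one new request for each response that completes,
# keeping requests and responses continuously overlapped.

MAX_INITIAL = 4  # initial pipeline depth, per the comment

def schedule(total_requests):
    """Return a log of (action, count) events for the sketch."""
    log = []
    sent = min(MAX_INITIAL, total_requests)
    log.append(("send", sent))           # step 1: initial chunk
    completed = 0
    while completed < total_requests:
        completed += 1                   # one response finishes
        log.append(("recv", 1))
        if sent < total_requests:        # steps 2-3: refill by one
            sent += 1
            log.append(("send", 1))
    return log

events = schedule(6)
# Initial burst of 4 requests, then sends interleave with receives
# for the remaining two, so the pipe never fully drains.
```

The refill-on-completion step is what keeps the pipeline continuously overlapped instead of alternating between a full batch of writes and a full batch of reads.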
Assignee
Comment 6•22 years ago
FIXED with patch from bug 176919.
Status: NEW → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED