Closed Bug 170594 Opened 22 years ago Closed 22 years ago

pipelining batching should be more flexible

Categories: Core :: Networking: HTTP, defect, P2
Platform: x86, Windows 2000
Tracking: RESOLVED FIXED, target milestone mozilla1.3beta
People: Reporter: jud, Assigned: darin.moz
Keywords: perf, topembed+
Whiteboard: [pipelining][snap]

I'm bugifying a conversation I just had w/ Darin... please correct summary or
description as necessary.

Our current pipelining implementation buffers up multiple requests into a single
stream, then hands that stream to the connection layer, which shuttles it to the
server in a single write. That's all well and good; however, it's not granular
enough. If the first two (of four) requests have been written, and two more (for
a total of six) have been created in the meantime, the new two won't make it
into the "batch" until the full initial "batch" has been sent.

It sounds like the concept of a "batch" needs to be broken apart such that, as
requests get sent, new requests can be added to the "batch" in the pipeline (a
rough sketch follows).
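
For illustration only, a minimal sketch of that idea in C++ (hypothetical names,
not the actual Necko API): the fixed batch becomes a live queue, so requests
created mid-transfer join the pipeline as soon as the socket can take more data.

#include <deque>
#include <string>

// Hypothetical sketch: a pipeline whose "batch" is open-ended.  New requests
// can be enqueued at any time, including while earlier ones are being written.
class PipelineQueue {
public:
    void Enqueue(const std::string& request) { mPending.push_back(request); }

    // Called whenever the socket can accept more data: drain whatever is
    // pending *now*, rather than replaying a batch captured earlier.
    std::string NextChunk() {
        std::string chunk;
        while (!mPending.empty()) {
            chunk += mPending.front();
            mPending.pop_front();
        }
        return chunk;
    }

private:
    std::deque<std::string> mPending;
};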
Keywords: perf
Difficult to say if this is something I can do in the 1.2 beta timeframe, given
the likelihood of introducing regressions/crashes. Targeting 1.3 alpha for now.
Status: NEW → ASSIGNED
Target Milestone: --- → mozilla1.3alpha
Whiteboard: [pipelining]
Whiteboard: [pipelining] → [pipelining][snap]
Blocks: 176101
Blocks: grouper
Bulk adding topembed keyword.  Gecko/embedding needed.
Keywords: topembed
Marking topembed+ as per topembed triage.
Keywords: topembed → topembed+
Priority: -- → P2
Target Milestone: mozilla1.3alpha → mozilla1.3beta
Depends on: 176919
-> http
Status: ASSIGNED → NEW
Component: Networking → Networking: HTTP
QA Contact: benc → httpqa
My patch for bug 176919 includes the fix for this bug. The latest results
indicate that pipelining Tp to cowtools over a high-bandwidth LAN connection
jumps from a 3% improvement to a 7% improvement. What remains is to test it over
DSL and modem connections.

The algorithm I've implemented is as follows:

1- Build up an initial pipeline (4 requests max) and send them in one chunk.
This is usually written out as two TCP segments, depending on the MSS for the
connection.

2- When the first response completes, send another request.

3- Repeat (2).

If N responses are read from the socket at a time (the read size is 4k), then we
will queue up at most N more requests before writing to the socket again. As a
result we get a continuously overlapped stream of requests/responses, which I
think explains the increased performance.
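
A minimal sketch of that scheduling (hypothetical PipelinedConnection class and
method names; the real change is in the bug 176919 patch):

#include <cstddef>
#include <deque>
#include <string>

static const std::size_t kMaxInitialPipeline = 4;  // step 1: initial depth

class PipelinedConnection {
public:
    void Enqueue(const std::string& request) { mQueue.push_back(request); }

    // Step 1: write up to four queued requests as a single chunk.
    std::string InitialChunk() { return TakeChunk(kMaxInitialPipeline); }

    // Steps 2-3: each completed response releases one more request, so
    // requests and responses stay continuously overlapped on the wire.
    std::string OnResponseComplete() { return TakeChunk(1); }

private:
    std::string TakeChunk(std::size_t max) {
        std::string chunk;
        for (std::size_t i = 0; i < max && !mQueue.empty(); ++i) {
            chunk += mQueue.front();
            mQueue.pop_front();
        }
        return chunk;
    }

    std::deque<std::string> mQueue;
};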
FIXED with patch from bug 176919.
Status: NEW → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED