Closed Bug 83526
Opened 23 years ago · Closed 23 years ago
http should use fewer connections per server per page
Categories: Core :: Networking: HTTP, defect, P2
Status: VERIFIED FIXED
Target Milestone: mozilla0.9.5
People: Reporter: darin.moz; Assigned: darin.moz
Keywords: perf
Whiteboard: r=bbaetz, sr=dougt,blizzard, fixed-on-trunk
Attachments (2 files, 4 obsolete):
  patch (deleted) | darin.moz: review+ | blizzard: superreview+
  image/gif (deleted)
Description
Playing around with the pref: network.http.max-connections-per-server yielded
interesting performance results w/ the 5/30-04 win32 build on winnt-sp5.
Using http://jrgm.mcom.com/page-loader/loader.pl
[max-connections-per-server=2]
Avg. Median : 1040 msec Minimum : 390 msec
Average : 1041 msec Maximum : 2818 msec
[max-connections-per-server=3]
Avg. Median : 1056 msec Minimum : 384 msec
Average : 1082 msec Maximum : 2932 msec
[max-connections-per-server=8]
Avg. Median : 1106 msec Minimum : 390 msec
Average : 1128 msec Maximum : 3156 msec
Using http://dogspit.mcom.com/i-bench/ibench.htm
[max-connections-per-server=2]
All iterations 222.42
First iteration (downloaded) 47.22
Subsequent iteration (cached) 25.03
[max-connections-per-server=3]
All iterations 177.59
First iteration (downloaded) 32.09
Subsequent iteration (cached) 20.79
[max-connections-per-server=8]
All iterations 180.39
First iteration (downloaded) 33.25
Subsequent iteration (cached) 21.02
From these results it appears obvious that our default value for
max-connections-per-server should be reduced from 8 to 2 or 3. The i-bench
results suffer if the number of connections is limited to 2 because of the meta
charset reload bug (bug 81253). Once this bug is fixed, I believe the i-bench
results would improve even if the number of connections is limited to 2.
Moreover, the HTTP spec *requires* the number of connections per user-agent to a
server to be no more than 2. So, even from the point of view of spec
correctness, we should not be using more than 2 connections.
From what I understand, IE uses only 2 connections per server.
Updated•23 years ago (Assignee)
Comment 1•23 years ago
Before deciding on this, we need to fix bug 83772. With that patch, even with
the pref set to 8, I never see more than 3 connections in use on the page
loading tests on the LAN - it may be different on a slower connection.
Depends on: 83772
Comment 2•23 years ago
I should note that I tested yesterday's (5/31) builds on Mac/Linux/win98,
using the same machines used in smoketests [500MHz/128MB Linux, 500MHz/128MB
Win98, 450MHz/256MB Mac] running over the internal LAN, and my results showed
a slight degradation in times for the uncached case (and no delta for cached,
but then with everything (in theory) cached already, there isn't much call
for HTTP connections). (Man, I hate getting the "wrong" result).
Comment 3•23 years ago
I should be specific: win98 - 4%, Linux - 6%, Mac - 8% slower on uncached.
Comment 4•23 years ago
adding info from darin's post to help get folks set up to
test the suggested new defaults.
The following prefs can be set in your prefs.js file to tweak the number of HTTP
connections per server:
user_pref("network.http.max-connections-per-server", N);
user_pref("network.http.keep-alive.max-connections-per-server", N);
Comment 5•23 years ago
Some comments, if you don't mind.
If you're referring to RFC2616 section 8.1.4, then it says "SHOULD NOT" as
opposed to "MUST NOT", so the limit is not a mandate. Furthermore, that section
is in the context of HTTP/1.1 persistent connections. You can't assume that all
servers will know about that or allow that. (I know some admins who have
disabled persistent connections on their HTTP servers because buggy clients
were giving them grief.) To me that says there would be two classes of
connections and mozilla would have to treat them differently.
I am also concerned about the real world. What will happen if some of the
connections stall out? How well will this work with a server farm, given that
reusing connections seems to defeat the purpose of a farm?
The RFC also recommends the same limit for a proxy server. If the user has three
windows open, should mozilla retrieve all that data through the same two proxy
connections? I would think mozilla would also have to know what the proxy server
is and where it is. It is one thing to connect to an industrial-strength proxy
on a remote machine; it's quite another to connect to a junkbuster-class proxy
on localhost.
Comment 6•23 years ago (Assignee)
Agreed.. the RFC is only advisory. The thing about persistent connections is
that we don't know if they'll be persistent until after we have established
the connection. So, we could potentially _not_ count non-persistent connections
in the max-connections-per-server limit, but once max-connections-per-server
is reached, we wouldn't be able to open up any more connections, even if the
connection would not be persistent.
I would think that this is not an issue for a server farm. I could use only
one connection per browser and give out my browser to 1000 users, and it
would still not impact a server farm, because the farm would be handling
1000 different connections from instances of my browser.
As far as limiting the number of connections per window, I believe that each
browser window should be treated as a separate user-agent with its own limits
on the number of active server connections. In terms of the mozilla source,
this means limiting the number of connections per server per load group.
Also, bear in mind that any limits on the number of connections are configurable
from the preferences (though the related prefs may not be exposed in the
preferences dialog).
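To make the per-load-group idea concrete, here is a minimal sketch of the kind
of accounting it implies. All names are hypothetical; this is not the actual
necko code:

    #include <map>
    #include <string>

    // Hypothetical sketch: count active connections per key (e.g. "host:port"
    // or "host:port + load group") and refuse new ones past the limit.
    class ConnectionLimiter {
    public:
        explicit ConnectionLimiter(unsigned maxPerKey) : mMaxPerKey(maxPerKey) {}

        // true if a new connection may be opened for this key;
        // otherwise the caller must queue the request
        bool TryAcquire(const std::string &key) {
            unsigned &count = mActive[key];
            if (count >= mMaxPerKey)
                return false;
            ++count;
            return true;
        }

        void Release(const std::string &key) {
            unsigned &count = mActive[key];
            if (count > 0)
                --count;
        }

    private:
        unsigned mMaxPerKey;                      // e.g. 2, the suggested default
        std::map<std::string, unsigned> mActive;  // key -> active connection count
    };

Keying on the load group is precisely what becomes impossible when a channel
has no load group set, which is the imagelib problem raised below.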
Updated•23 years ago (Assignee)
Priority: -- → P2
Comment 8•23 years ago (Assignee)
Unfortunately, it looks like it won't be possible/simple to limit the number
of connections per server PER LOAD GROUP b/c imagelib doesn't set a load group
on many of the http channels it opens!!
Unless imagelib is fixed to always set a load group, we can't really fix this
properly.
pavlov: wouldn't it be possible to ensure that every http request gets a load
group? is there a bug on this perhaps? i understand that image requests for
the same object can come from different pages (hence different load groups)
but you should be able to just select one of the load groups to use for the
http request, right?
Comment 9•23 years ago
But, could one load group end up canceling a request that was meant for another
load group as well?
Comment 10•23 years ago (Assignee)
ahh... yeah... that is definitely a problem. perhaps what i really need is
another way of determining what "page" a http channel belongs to. hmm...
suggestions anyone? pavlov?
Updated•23 years ago (Assignee)
Priority: P2 → P5
Comment 12•23 years ago
I would like to see Mozilla comply unconditionally with rfc2616 8.1.4 at least
when using persistent connections (HTTP/1.1) to a proxy - as the rfc
says, "these guidelines are intended to improve HTTP response times and avoid
congestion".
I have a concrete example of Mozilla's behaviour causing congestion over a GPRS
connection (low bandwidth, very high latency). Ironically, MSIE performs
significantly better under the same circumstances by complying with the
standard - not normally one of Microsoft's strong points.
Comment 13•23 years ago (Assignee)
great feedback.. thanks!
Comment 15•23 years ago (Assignee)
Comment 16•23 years ago (Assignee)
hmmm.. perhaps this could happen for 0.9.4
Target Milestone: mozilla1.0 → mozilla0.9.4
Updated•23 years ago (Assignee)
Priority: P5 → P2
Comment 17•23 years ago (Assignee)
Comment 18•23 years ago (Assignee)
Comment 19•23 years ago (Assignee)
v1.0 is a relatively large patch (~785 lines). i've done a fair bit of testing
with it, and i've run it through the page loader tests. i'm not seeing the same
performance win as i previously saw when i knocked the number of connections
down to 2, but it definitely isn't any slower and there is a slight perf win.
i've also resolved the problem of trying to download and browse from the same
site (look for the "network.http.request.max-start-delay" pref). at any rate,
this patch is really needed in order for us to claim HTTP/1.1 compliance, and
furthermore i suspect that servers are probably really hating mozilla right
about now on account of how many connections we slam them with.
Comment 20•23 years ago (Assignee)
Comment 21•23 years ago (Assignee)
got r=bbaetz and r=dougt
Comment 22•23 years ago
Some questions if you don't mind.
1) How does this affect a user who keeps many windows open? I rarely
open more than two but some users like to have 10 or more open.
2) Are the various maxima set too low in a proxy environment? If I proxy
http, https, ftp, and gopher through the same proxy (certainly
possible), how does this extra http load affect performance? If there is
an effect, should there be some scale factor depending on the number of
proxied services?
3) Maybe I've got this wrong, but is
network.http.request.max-start-delay set too low, particularly for a
modem link or other high-latency link? It may well take longer than 10
seconds for a packet to fight its way out and a response to come back.
There are also proxy servers that throttle bandwidth so timing can be a
problem there too.
Comment 23•23 years ago (Assignee)
1) the connections are shared by multiple windows (and are only needed while the
window is loading).
2) the HTTP/1.1 spec [RFC 2616, section 8.1.4] says:
Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A single-user
client SHOULD NOT maintain more than 2 connections with any server or proxy.
A proxy SHOULD use up to 2*N connections to another server or proxy, where N
is the number of simultaneously active users. These guidelines are intended
to improve HTTP response times and avoid congestion.
3) network.http.request.max-start-delay is relative to the time at which the
first data packet arrives, so it does not include the interval between request
and start-of-response. it can be thought of as such: if the time taken to
download a response exceeds max-start-delay, then the connection will not be
reused. the assumption being that another persistent connection will likely
take its place. so i chose 10 seconds based purely on what i thought might be a
reasonable amount of time for a user to wait. consider this example: a user
goes to www.kernel.org and downloads two versions of the linux kernel. they
then wish to browse around the site some more. after 10 seconds of downloading
the kernel, those two connections would decide not to persist and would
therefore "allow" other connections to be created (which would be used to allow
the user to surf the same site). any longer than 10 seconds seems like it might
be too long for user comfort.
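A rough sketch of that reuse rule as described (assumed logic with
illustrative names, not the actual patch):

    #include <chrono>

    // Assumed logic per the description above: a connection whose response
    // took longer than max-start-delay to download (measured from the arrival
    // of the first data packet to completion) is not kept alive for reuse.
    bool ShouldKeepAlive(std::chrono::steady_clock::time_point firstData,
                         std::chrono::steady_clock::time_point complete,
                         std::chrono::seconds maxStartDelay) // pref default: 10s
    {
        return (complete - firstData) <= maxStartDelay;
    }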
Comment 24•23 years ago
1) Well, a window can refresh its content so it could open a set of
sockets for that. You can't really assume it will refresh from just one
host so that might be 6 or 8 sockets. If more than one window of the
many refresh at the same time, you could run up against one of the limit
values.
2) RFC2616, section 8.1.4 is advisory at best. There are no MUST's there
so honoring it is not necessary for compliance. In any event, it's just
theory with no data to justify it. In the absence of experimental data
it seems unwise to make mozilla the test bed for this theory.
3) Are you saying that any connection which takes more than 10 seconds
from the first data arrival to completion will be marked as
non-persistent? If so, that would apply to the majority of modem
traffic not to mention those overloaded servers out there.
Comment 25•23 years ago (Assignee)
1) not a problem IMO.. we'd prefer to serialize communication over fewer
persistent connections.
2) i've spoken a bit with jim gettys on this and he spoke of experimental data
using libwww to confirm the suggestions of section 8.1.4. moreover, IE only
opens 2 persistent connections per server, and it loads pages faster than mozilla.
3) yes... that is what the behavior would be with my patch. i have a solution
in mind that would make us only drop the connection if a new persistent
connection were created, so as to keep the number as near to 2 as possible.
Comment 26•23 years ago (Assignee)
Comment 27•23 years ago (Assignee)
in this new version of the patch, keep-alive connections are no longer eagerly
dropped. this means that it may be possible to end up with as many as 8
keep-alive connections, but only if the connections are very slow or if the
requests are for very large entities. the result is a limiter on the number of
connections that IMO scales nicely with server load, while balancing the need
to make the browser responsive to the user.
so, if the server/network is fast, then no more than 2 connections will ever be
created at a time. otherwise, more keep-alive connections may be created as
necessary.
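An illustrative sketch of that dispatch policy (hypothetical names, not the
actual patch):

    #include <cstddef>
    #include <vector>

    // Prefer an idle keep-alive connection; open a new one only when all are
    // busy. A fast server frees connections quickly, so the pool rarely
    // exceeds 2; slow or large responses let it grow up to the cap (8).
    struct Connection { bool idle; };

    enum class Dispatch { ReuseIdle, OpenNew, Queue };

    Dispatch ChooseDispatch(const std::vector<Connection> &conns,
                            std::size_t keepAliveCap) // e.g. 8
    {
        for (const Connection &c : conns)
            if (c.idle)
                return Dispatch::ReuseIdle; // reuse keeps the pool small
        if (conns.size() < keepAliveCap)
            return Dispatch::OpenNew;       // all busy: grow as needed
        return Dispatch::Queue;             // cap reached: request waits
    }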
Comment 28•23 years ago (Assignee)
got r=dougt on the new patch
Comment 29•23 years ago
r=bbaetz.
I'm not convinced on the timeout value's default, but we'll see what sort of
feedback we get.
Comment 30•23 years ago
1) Serialization will turn into throttling if mozilla reaches the
network.http.max-connections limit. I think that will happen a lot more
often than you think. I would increase the limit to 36.
2) Page loading is a complex issue. In mozilla, layout and netlib
interact in interesting ways (see bug 77718 for an extreme example) so
I'm not convinced that libwww data is particularly relevant. BTW, a
truly cynical person might think that IE limits its connections to make
MS servers look better.
Now that some hard limits have become soft, I would suggest changing the
pref names. People will complain, and file bugs, if they see "maxima"
exceeded.
Updated•23 years ago (Assignee)
Whiteboard: r=bbaetz, sr=dougt, a=?
Comment 31•23 years ago (Assignee)
1) yes, and my patch increases the max limit from 16 to 24, which i believe
should be plenty. with a limit of 2 persistent connections per server, this
implies that you can be downloading from 12 sites at one time without hitting a
limit on the number of connections. ideally we really shouldn't limit the
number of connections, but on some OSs there is already a limit (Win9x for
example). so, considering that there are other socket consumers and that
sockets may stick in the 2MSL wait state for some time, i think 24 is a
reasonable default limit.
2) this is why i have done extensive testing with various page load tests, which
all confirm the libwww findings.
the maxima apply to "active" connections, so i don't see any problem with the names.
Comment 32•23 years ago (Assignee)
pushing out to 0.9.5 since this is probably too risky for 0.9.4 now.
Target Milestone: mozilla0.9.4 → mozilla0.9.5
Comment 33•23 years ago
Comment on attachment 47292: v1.2 revised per comments from tenthumbs
Looks fine to me. r/sr=blizzard
Attachment #47292 - Flags: superreview+
Updated•23 years ago (Assignee)
Attachment #47292 - Flags: review+
Updated•23 years ago (Assignee)
Attachment #46701 - Attachment is obsolete: true
Attachment #46848 - Attachment is obsolete: true
Attachment #46981 - Attachment is obsolete: true
Attachment #47083 - Attachment is obsolete: true
Updated•23 years ago (Assignee)
Whiteboard: r=bbaetz, sr=dougt, a=? → r=bbaetz, sr=dougt,blizzard
Comment 34•23 years ago (Assignee)
fixed-on-trunk
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
Whiteboard: r=bbaetz, sr=dougt,blizzard → r=bbaetz, sr=dougt,blizzard, fixed-on-trunk
Comment 35•23 years ago
I have done some recent testing using web sites such as IBM, Formula1, BBC
captured on my own local server (no network interference). I found that IE6.0
ran twice as fast as us. However, when I changed
network.http.max-persistent-connections-per-server to 8 we ran just as fast as
they do. I used v0.99 on a 333MHz NT 4.0 machine with 256MB. I looked back at
the reasoning behind this bug and it mystifies me. I clipped what's below from
the comments in this bug:
[max-connections-per-server=2]
All iterations 222.42
First iteration (downloaded) 47.22 -> note 47.2 first time
Subsequent iteration (cached) 25.03
[max-connections-per-server=8]
All iterations 180.39
First iteration (downloaded) 33.25 -> 33.23 (going to 2 connections is
a 42% slow down)
Subsequent iteration (cached) 21.02
In addition, by setting network.http.max-persistent-connections-per-server to 2,
are you really setting max connections per server to 1? The comments say that
connections to a server will be LESS THAN
network.http.max-persistent-connections-per-server.
My times dropped in some cases from 4 seconds+ for Formula1 to 1.5 seconds.
Note, this is first time only.
Ivan
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 36•23 years ago
Just an additional note. When I test, I bring the browser down between tests
and clear the caches before testing. I tested 3, 4, 6, and 10 for
network.http.max-persistent-connections-per-server. I found performance
maximized at 8.
Ivan
Comment 37•23 years ago
It is definitely much quicker when set to 8.
Comment 38•23 years ago (Assignee)
interesting results. are you connected directly to the internet? or are you
connecting via a proxy? maybe a transparent proxy? perhaps an HTTP/1.0 proxy?
if you are seeing this kind of performance difference then i suspect something
is fouling up persistent connections. i doubt that 8 persistent connections
would ever perform better than 2 persistent connections.
also, max-connections-per-server=2 means that no more than 2 persistent
connections will be created. if the documentation in all.js is incorrect, then
could you please file a separate bug for that.
also, i would prefer to carry on this discussion in the newsgroup:
netscape.public.mozilla.netlib. feel free to reference any newsgroup threads
here, but let's please keep discussion out of this bug report since it was
fixed. reopening it is not the right way to change the number of persistent
connections used by mozilla.
Status: REOPENED → RESOLVED
Closed: 23 years ago → 23 years ago
Resolution: --- → FIXED
Comment 39•22 years ago
Netscape 3.x had a nice way of allowing the user to set this (see attachment).
Having this setting in prefs.js isn't obvious for the average user (as
prefs.js has to be looked up in the profile dir...).
Regarding the current limit of 8 (as seen in release 1.2a):
under heavy load, even if network bandwidth may allow it, setting the limit to
8 is too low. And this limit is global.
Example 1:
multiple browser windows loading pages with image galleries. Having 5-6 pages
loading galleries with thumbnails will leave 1 or 2 windows loading while the
others stall.
As a comparison, I have much better performance with Netscape 3 and connections
set to 20.
Example 2: have 8 browser windows open, each loading a large image (>200k). Any
subsequent windows will stall until at least one of the others finishes. This
is valid for any server.
I know the above is irrelevant, as this is now considered fixed, but here is an
enhancement request: add the max number of network connections to the Advanced
tab in prefs.
Comment 40•22 years ago (Assignee)
Netscape 3 uses HTTP/1.0 without persistent connections. As a result, you will
see a huge improvement under Netscape 3 by increasing the number of parallel
connections. Mozilla on the other hand uses HTTP/1.1 and is designed to reuse
connections. Doing so has been shown to improve performance. See for example:
http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html
Comment 41•22 years ago
Not having this configurable in the UI is still something I miss.
I know about pipelining and persistent connections and their benefits, but
there are cases where the concept simply fails to apply.
The example I gave is perfect. Take a 56k modem connection (even 128k ISDN) and
load a page with an image gallery with many links to images, each 250-300k.
Open 8 links (images) in new windows and let them load.
Until at least one finishes, nothing else can be retrieved.
Since the connections limit has been reached, persistent connections are
irrelevant, and pipelining is also irrelevant. Their benefits apply only below
the limit.
Comment 42•22 years ago
verified fixed. andrixnet@yahoo.com, please open a new bug if you feel that more
than 8 connections is justified.
Status: RESOLVED → VERIFIED
QA Contact: benc → junruh
Comment 43•22 years ago
Wow, did this one sorta come back from the dead? Hope it's not just a useless
meta-comment, but to me it seems clear that if large web content providers
have concerns about maximum connections per client, then it is their
responsibility to write page content in a manner not 'wanting' so many
concurrent streams. An artificial, arbitrary throttle in everybody's browser is
not the right way to correct the few hosts that have content and/or capacity
delivery problems.
Comment 44•22 years ago (Assignee)
we've had a maximum limit of 2 persistent connections per server (4 per proxy)
since late 2001. this is the recommendation of RFC 2616. it is the standard
for the WWW (it is equivalent to IE's implementation). mozilla would be
black-flagged for sure if this number ever increased for normal web browsing.
please: let's not rekindle this debate.
Comment 45•22 years ago
On comment #42:
As long as it is user configurable somewhere, prefs gui or file, I have nothing
more to add.
I am currently using 32 as the limit and this is much better, at least for the
way I use the browser (>10 or so windows open...).
Comment 46•17 years ago
I just ran into a bug related to the maximum number of connections allowed to a single server. Having a look at that other bug might provide some input for a good connection limit design. It's posted as bug 419526 ( https://bugzilla.mozilla.org/show_bug.cgi?id=419526 ).