Closed Bug 190585 Opened 22 years ago Closed 9 years ago

# of connections limit should apply to the end host, not an intervening proxy

Categories

(Core :: Networking, enhancement)

x86
Linux
enhancement
Not set
normal

Tracking

()

RESOLVED WONTFIX

People

(Reporter: rhamph, Unassigned)

References

Details

User-Agent:       Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.3b) Gecko/20030121
Build Identifier: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.3b) Gecko/20030121

If you have a proxy enabled and you try to load a page with lots of images on a
slow-responding host, then open a new tab and try loading another page on a
different server, it won't start the second page until individual images on the
first page have finished (or sometimes until the entire page is done).  If you
disable the proxy things will load properly.

Reproducible: Sometimes

Steps to Reproduce:
1. Enable a proxy
2. Open a page on a slow server that contains lots of images
3. Wait for the HTML to finish loading and for mozilla to have started loading
several of the images
4. Open a new page on a different server in another tab
Actual Results:  
The second page won't load until the connections for the first page finish.

Expected Results:  
The second page should have loaded regardless of the first page.
And slashdot (/.) the proxy at the same time? What if I open 20 tabs (to different hosts): should I be allowed to open 20 connections, all passing through the same proxy server? No, that one should be protected too.

These are the current defaults (in all.js):

// limit the absolute number of http connections.
pref("network.http.max-connections", 24);

// limit the absolute number of http connections that can be established per
// host.  if a http proxy server is enabled, then the "server" is the proxy
// server.  Otherwise, "server" is the http origin server.
pref("network.http.max-connections-per-server", 8);

// if network.http.keep-alive is true, and if NOT connecting via a proxy, then
// a new connection will only be attempted if the number of active persistent
// connections to the server is less than max-persistent-connections-per-server.
pref("network.http.max-persistent-connections-per-server", 2);

// if network.http.keep-alive is true, and if connecting via a proxy, then a
// new connection will only be attempted if the number of active persistent
// connections to the proxy is less than max-persistent-connections-per-proxy.
pref("network.http.max-persistent-connections-per-proxy", 4);
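For a proxy running on the local machine, where the resource-sharing concern mostly disappears, these defaults can be overridden in user.js. The values below are purely illustrative, not recommendations:

```javascript
// user.js -- illustrative overrides for a proxy on the local machine.
// Raising these for a shared remote proxy risks overloading it for
// other users; the numbers here are examples only.
user_pref("network.http.max-persistent-connections-per-proxy", 8);
user_pref("network.http.max-connections", 32);
```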
How is opening 20 tabs with a proxy any different than opening 20 tabs without a
proxy?  In both cases your ISP has to bear the networking load you put on
it.  The only difference is that there may be some intervening software that
parses the HTTP, rather than just relaying it at the TCP/IP level.
In your proposal, opening 20 tabs on 20 different hosts consumes a maximum of 8
connections per host (if they use a lot of images, of course), but a maximum of
160 (= 20 * 8) on the proxy. 160 connections is what I call a DOS attack. That's
also why we have a global limit of 24 connections.

At the moment, Mozilla uses the same limit for a host as for a proxy. Ok, we can
change that if you want. But there still has to be a limit.

The problem with the connections isn't the network load. It's the resources used
at the receiving side of the TCP connection: the web server or proxy server. If you
used the maximum of 24 connections on a proxy server, your proxy server would spend
a substantial amount of its resources on a single user. You'd be DOS'ing your
fellow users at your ISP! Most proxy servers (or web servers, for that matter) can
only sustain a few hundred or thousand connections at the same time.

BTW: I hope you use keep-alive and proxy pipelining with your proxy, because
that will do a lot to improve performance. If you see a slowdown when you use a
proxy, it's not only because Mozilla limits the number of connections, but also
because the proxy server is overloaded!
I think this is very related to bug 190508...
I wasn't aware there was a global limit when I submitted the bug.  Using the
global limit for proxies rather than the per-host limit would be the ideal solution.

Incidently, I'm using a proxy running on my own box.  The CPU requirements are
far less than that of mozilla itself, and the only person I can DOS is myself.

BTW, performance isn't the issue, at least on my end.  The sites that cause the
problem seem to hang frequently, and all I want to do is look at another page
while I wait for them to load, which I can't.

If performance is a concern then the global limit should be lowered.  That's
beyond the scope of this bug though.
Yes, increasing the number of connections per proxy (up to the max connections)
would help.

See also the comments in bug 172957 (which points to bug 142255 and bug 170668).
There's a plan to use some kind of priority distribution in the transaction
queue. This would help when loading multiple tabs in parallel, which is becoming
more of a problem because some people are loading whole tab groups at the same
time (even as their homepage).
Status: UNCONFIRMED → NEW
Ever confirmed: true
-> http
Assignee: dougt → darin
Component: Networking → Networking: HTTP
QA Contact: benc → httpqa
The original proxy architecture assumed that a central server would aggregate
traffic from many clients. I think allowing many connections to your local proxy
is probably reasonable, but I think Darin should decide how a design change
might be done.

I guess you could just increase your limit in your profile.
we have a bug about giving the more recently requested URLs precedence over
older URLs.  that would help a bit here.  see bug 142255.  marking this bug as
depending on a solution for that bug, although we could just get around this
immediate bug by increasing our connection limits for localhost proxies.
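The idea in bug 142255 (giving more recently requested URLs precedence) can be sketched as a toy transaction queue. This is an editor's illustration only; the class and method names are invented and this is not actual Necko code:

```python
import heapq
import itertools

class TransactionQueue:
    """Toy scheduler: more recently submitted requests are served first.

    Illustration only -- real Necko transaction scheduling is more involved.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, url):
        # Negate the sequence number so newer submissions sort first
        # in the min-heap.
        seq = next(self._seq)
        heapq.heappush(self._heap, (-seq, url))

    def next_transaction(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[1]

q = TransactionQueue()
q.submit("http://slow.example/a")
q.submit("http://slow.example/b")
q.submit("http://new-tab.example/")
# The page requested last (the new tab) is dispatched first.
assert q.next_transaction() == "http://new-tab.example/"
```

With such a queue, a freshly opened tab would jump ahead of image requests already waiting for the slow server, which is exactly the responsiveness problem reported here.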

-> future, enhancement
Severity: normal → enhancement
Status: NEW → ASSIGNED
Depends on: 142255
Target Milestone: --- → Future
Which variable should I change that would allow me to connect to more than one
host at once?  From what I can tell, none of the variables would allow that.
user_pref("network.http.max-persistent-connections-per-proxy", N);

where N says how many persistent connections you wish to allow to your proxy server.
Ahh, let me rephrase my question.  Which two variables do I need to change to
control the max global connections and the max per-host connections?  There need
to be two variables, with the per-host limit lower than the global one, so that I
can ensure I'll always be able to connect to two servers at once.
adam: the second variable does not exist.  so, no, there is not a way to
configure mozilla as you desire.
As far as I can tell, there are the following prefs:

user_pref("network.http.max-persistent-connections-per-proxy", N);
user_pref("network.http.max-persistent-connections-per-server", N);

However, connections-per-server is ignored when using a proxy. As a result,
mozilla will (in direct contravention of the HTTP 1.1 RFC) open more than two
persistent connections to the same server, leaving other tabs starved of
resources.

EXAMPLE:
network.http.max-persistent-connections-per-proxy is 4 (default)
network.http.max-persistent-connections-per-server is 2 (default)

I open a slow website (www.everything2.com, for example). I open ten or twenty
new tabs on the same website.
I open a new window, and go to another website (www.google.com, for example).

Expected behaviour:
Mozilla will open 2 connections to www.everything2.com (as specified in the
RFC), to retrieve HTML, leaving 2 of the 4 proxy slots open. At the same time,
it will open 1 connection to www.google.com, to retrieve HTML. It will then open
a second connection to www.google.com, to retrieve pictures, while also reusing
the 3 existing connections to retrieve HTML/pictures.

Observed behaviour:
Mozilla opens 4 connections to www.everything2.com (too many by the RFC),
leaving no proxy slots open. The second window starves until (at least) 4 HTML
pages have been retrieved from www.everything2.com. If the server is
pathologically slow, this can stall the browser.

This can really be a problem when using google's webcache on a down site, as the
4 proxy slots are occupied trying to retrieve images from a non-responding server.

This behaviour still occurs in mozilla 1.3 rc2.
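The expected behaviour described above amounts to enforcing the per-server cap even when connecting through a proxy, with the proxy total capped separately. A minimal sketch of that accounting, using the default limits of 2 per server and 4 per proxy (the ConnectionLimiter class is hypothetical, not Mozilla code):

```python
class ConnectionLimiter:
    """Sketch of the requested behaviour: count connections per origin
    host even when they all pass through one proxy, and cap the proxy
    total separately. Hypothetical illustration, not Mozilla code."""

    def __init__(self, per_origin=2, per_proxy=4):
        self.per_origin = per_origin
        self.per_proxy = per_proxy
        self.origin_counts = {}
        self.proxy_count = 0

    def try_open(self, origin, via_proxy=True):
        # Refuse if the shared proxy is already at its cap.
        if via_proxy and self.proxy_count >= self.per_proxy:
            return False
        # Refuse if this origin host is already at its cap,
        # regardless of whether a proxy is in use.
        if self.origin_counts.get(origin, 0) >= self.per_origin:
            return False
        self.origin_counts[origin] = self.origin_counts.get(origin, 0) + 1
        if via_proxy:
            self.proxy_count += 1
        return True

limiter = ConnectionLimiter()
# Two connections to the slow host succeed, a third is refused ...
assert limiter.try_open("www.everything2.com")
assert limiter.try_open("www.everything2.com")
assert not limiter.try_open("www.everything2.com")
# ... which leaves proxy slots free for a different origin.
assert limiter.try_open("www.google.com")
```

Under this scheme the slow host can never monopolize all four proxy slots, so the second window would load as expected.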
-> default owner
Assignee: darin → nobody
Status: ASSIGNED → NEW
Component: Networking: HTTP → Networking
QA Contact: networking.http → networking
Target Milestone: Future → ---
I'd like to see if we could get a clear change request.

Summary: as time has passed, there are a lot of situations where someone is going to access many servers simultaneously through a proxy. This is because of both: increased use of tab-groups, and growth in the number of pages that have lots of links to other hostname/IP addresses.

In the past, proxy was assumed to be a remote host, provided as part of the networking topology. It was a shared resource. Early implementations were process-mob based, current implementations are probably thread-based.

The design assumed that users needed to connect to a proxy server "politely".

Adam points out that some people are now going to have local proxies, where these concerns are less important.

A couple points for consideration:

1- Don't forget that this problem isn't necessarily unique to using a proxy. If you open enough tabs at once, even a DIRECT connection will cause the network pipe to fill up. I see this with several tabs when I am nomadic on public WiFi, and also with about a dozen tabs when I am using DSL at home.

So some of the problems might need to be addressed at the tab-management and HTTP level.

2- By default, we should retain low limits on proxy connections. The simple fact is: there are still process-mob implementations out there, and even with threaded versions, you are still talking about an "HTTP/URL" proxy, which is the front end of a little web server + value-added features + a little web client.

Absent additions to the HTTP proxy protocol (like a capacity header), we shouldn't raise the defaults capriciously.

For a client-server proxy architecture with lower overhead, like SOCKS (connection-level, not application-level), we might allow a higher limit (and user populations that feel congested should upgrade to this proxy model).

3- The nature of the congestion: apparently most proxies cannot pipeline requests, and even if they did, it would not solve the responsiveness problem if you encountered a slow server or a big file. The only solution for that would be what Ari proposed in his proxy book as "HTTP 2.0", which would be multiplexing the requests over a pipelined HTTP connection. I don't think that is ever going to happen (suddenly doesn't it sound like SOCKS anyhow?).

---
That being said, I'm not sure what effective changes could be made. 
Summary: connection limits should operate on the end host, not an intervening proxy → # of connections limit should apply to the end host, not an intervening proxy
h2
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX