Closed
Bug 69931
Opened 24 years ago
Closed 9 years ago
Necko should be smarter about what buffer sizes to use
Categories
(Core :: Networking, defect)
Core
Networking
Tracking
RESOLVED
WONTFIX
People
(Reporter: sfraser_bugs, Unassigned)
References
Details
(Keywords: perf)
Attachments
(1 file)
patch (deleted) | Details | Diff | Splinter Review
It's evident that the buffer sizes that necko uses can have a large impact both on
the performance of sucking data from the net, and on perceived performance in the
form of incremental page layout. (See, for example, bug 69836.)
We need something better than the hard-coded buffer sizes we have now. We
could obviously have different default buffer sizes for HTTP text/html and
image/ data. FTP should always use large buffers (IMAP??).
We should also tune buffer sizes in relation to the network speed.
Two possibilities:
1. User sets some kind of "networking speed" pref (56kmodem/ISDN/DSL/LAN) that
we use to pick good buffer sizes.
2. We sense network speed somehow, or have necko call OnDataAvailable on a timer,
so that page layout can be automagically optimized for different data input
speeds.
I think there is some low-hanging performance fruit here.
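A minimal sketch of possibility 1, assuming a hypothetical pref value and made-up size tiers (none of these names or numbers come from the actual Necko code):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical sketch: map a user-selected connection-speed pref
// to a socket buffer size. The pref strings and byte values are
// illustrative assumptions, not from the Mozilla source tree.
static uint32_t BufferSizeForSpeed(const std::string& pref) {
    if (pref == "56k-modem") return 2 * 1024;   // small buffers, low latency
    if (pref == "isdn")      return 4 * 1024;
    if (pref == "dsl")       return 16 * 1024;
    if (pref == "lan")       return 64 * 1024;  // large buffers, high throughput
    return 8 * 1024;                            // conservative default
}
```

The lookup would run once at channel creation, so an unrecognized pref value falls back to the existing 8k default rather than failing.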
Comment 2•24 years ago
cc'ing darin. Another alternative might be to add a method that allows a
client (e.g., of an HTTP or FTP connection) to set a hint on the nsIChannel
and/or nsIRequest. Then each client could tune the buffer sizes; e.g., imagelib
might want a larger buffer than the parser, etc.
Comment 3•24 years ago
dougt's patch removed the SetBufferSegmentSize and SetBufferMaxSize methods
from nsIChannel b/c they were unused. It was felt that clients of nsIChannel
could not set these parameters correctly without knowing something about the
implementation of the nsIChannel.
I totally agree that we should have an attribute on nsIChannel (or possibly
nsIRequest) that puts constraints on the size of the data for each
OnDataAvailable call. But, segmentSize and maxSize are not correct for this,
as it presumes the use of a segmented buffer, etc. We really need a chunkSize
or minChunkSize, maxChunkSize. minChunkSize would not always be honored, of
course, but if buffering were possible the channel impl could try to honor
these settings. It could always honor maxChunkSize.
I'm not sold on these names -- suggestions welcome -- but I think they represent
the correct concept.
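The minChunkSize/maxChunkSize idea could look something like this sketch; the attribute names are the ones proposed above, but the selection logic is an assumption:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative sketch of the proposed chunk-size hints. A channel
// implementation that buffers could delay OnDataAvailable until at
// least minChunkSize bytes are ready (best effort only), and always
// cap a single delivery at maxChunkSize.
struct ChunkHints {
    uint32_t minChunkSize;  // advisory: may be ignored (e.g. at EOF)
    uint32_t maxChunkSize;  // hard cap: always honored
};

// Decide how many bytes to hand to the listener, given how many are
// currently buffered. Returns 0 to mean "wait for more data".
static uint32_t NextDeliverySize(const ChunkHints& h, uint32_t buffered,
                                 bool atEOF) {
    if (buffered == 0) return 0;
    if (buffered < h.minChunkSize && !atEOF) return 0;  // best-effort minimum
    return std::min(buffered, h.maxChunkSize);          // cap always honored
}
```

This keeps the hints independent of any segmented-buffer implementation detail, which was the objection to the old SetBufferSegmentSize/SetBufferMaxSize methods.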
nice to investigate but not a must have for beta. also moving to dougt
Assignee: gagan → dougt
Target Milestone: --- → mozilla1.0
Comment 6•24 years ago
Don't dismiss this just yet (maybe it isn't needed for beta, but it might be
needed shortly thereafter). I'm not yet convinced there isn't a page load speed
win to be had by reducing the number of round trips and resultant paints that
layout does during image loading by tuning buffer and segment sizes for faster
connections. I haven't had time to do the buffer size and segment size
tweaking and quantify the runs yet.
I do know there is even more juice to be squeezed out of image loading in
general than what we're getting with libimg2 today (based on some tests I had
jgrm run), I'm just not sure if it is in the buffer/segment size interaction or
strictly within decoding and painting in libimg2 (which are not yet optimal).
We're still visibly popping images in after the page is fully laid out and that
shouldn't be happening. Watch IE and N6 side by side and you'll see the
difference. Or turn off image loading in IE5 and on a tip build, run jgrm's
performance tests, and see the performance gap close (closes to 0 on Mac).
Just be prepared for me to be a monkey on your back if I find the
buffer/segment sizes need to be run time adjustable :-)
Comment 7•24 years ago
chris: do you really think you might want to tweak segment sizes as well?
Reporter
Comment 8•24 years ago
Please explain segment vs. buffer sizes. These seem like an implementation detail
in nsPipe2, but they do affect how much data we suck from the socket on each
read.
Comment 9•24 years ago
Independent of network connection type, the nsIFileTransport buffer constant
should be raised.
This made caching much faster on OS/2.
Index: nsFileTransport.h
===================================================================
RCS file: /cvsroot/mozilla/netwerk/base/src/nsFileTransport.h,v
retrieving revision 1.44
diff -u -r1.44 nsFileTransport.h
--- nsFileTransport.h 2000/08/22 07:03:04 1.44
+++ nsFileTransport.h 2001/03/12 20:54:37
@@ -134,7 +134,7 @@
nsFileTransportService* mService;
};
-#define NS_FILE_TRANSPORT_DEFAULT_SEGMENT_SIZE (2*1024)
-#define NS_FILE_TRANSPORT_DEFAULT_BUFFER_SIZE (8*1024)
+#define NS_FILE_TRANSPORT_DEFAULT_SEGMENT_SIZE (8*1024)  // Perf - was (2*1024)
+#define NS_FILE_TRANSPORT_DEFAULT_BUFFER_SIZE (64*1024)  // Perf - was (8*1024)
 #endif // nsFileTransport_h__
Comment 10•24 years ago
Nice find... have you tried other buffer sizes? Does 64k seem to be the best
value? Considering that there cannot be too many active file transports, this
is probably not likely to lead to too much transient bloat. r=darin
Comment 11•24 years ago
Here are the other changes our performance team recommended - these are the ones
that caused this bug to be opened:
netwerk\base\src\nsSocketTransport.h
#define NS_SOCKET_TRANSPORT_SEGMENT_SIZE (4*1024)   // Perf - was (2*1024)
#define NS_SOCKET_TRANSPORT_BUFFER_SIZE (32*1024)   // Perf - was (8*1024)
#define MAX_IO_TRANSFER_SIZE (32*1024)              // Perf - was (8*1024)
Reporter
Comment 12•24 years ago
Note that simply bumping up buffer sizes might make things faster, but it could
also hurt user-perceived performance by making them wait longer before they see
any content being rendered. So I think we need to be careful here; rather than
just bumping up the buffer sizes, we need to ensure that we modulate consumer
notifications based on how fast data is coming in. That's what this bug is about.
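One hedged sketch of that modulation: size the notification threshold to roughly what arrives in one paint interval, so slow links still render incrementally while fast links avoid excess round trips through layout. All constants here are illustrative guesses, not Necko values:

```cpp
#include <cstdint>

// Hypothetical: pick how many bytes to accumulate before firing
// OnDataAvailable, scaled to the observed incoming data rate. The
// interval and clamp bounds are made-up tuning numbers.
static uint32_t NotifyThreshold(uint32_t bytesPerSecond) {
    const uint32_t paintIntervalMs = 100;    // target gap between notifications
    const uint32_t minThreshold = 2 * 1024;  // never notify on tiny fragments
    const uint32_t maxThreshold = 64 * 1024; // cap memory and layout latency

    uint64_t bytesPerInterval =
        static_cast<uint64_t>(bytesPerSecond) * paintIntervalMs / 1000;
    if (bytesPerInterval < minThreshold) return minThreshold;
    if (bytesPerInterval > maxThreshold) return maxThreshold;
    return static_cast<uint32_t>(bytesPerInterval);
}
```

On a ~56k modem (~7 KB/s) this clamps to the small floor, so the user still sees content early; on a LAN it rises toward the cap, cutting down on redundant paints.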
Comment 13•24 years ago
Good point. That's where choosing the max size comes into play.
Comment 14•24 years ago
Mike, do you have any idea how much bigger the memory footprint is with the
larger buffer? Probably not an issue, but we need to keep in mind the embedding
needs.
Comment 15•24 years ago
Not really, unfortunately. The perf team was focused entirely on speed, not
footprint.
Comment 16•24 years ago
I seem to remember vidur having had a lot of experience with this issue in 4.x;
adding him to the CC.
Comment 17•24 years ago
I'm uneasy about allowing a global 'connection speed'
pref, OR upping hard-coded buffer sizes. Remember that
point-to-point transfer rates are asymmetric -- the
USER might be sitting on a 10Mbps link while the remote
server might be on either a T3 or an ISDN line... or just damn
loaded to hell, whereupon the user would still benefit from
the decreased latency of lower buffer sizes by
pretending that they're on a 56.6Kbps link...
So rack up a vote for autotuning based on throughput.
OS: Mac System 8.5 → All
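A throughput autotuner as voted for above could start from a simple smoothed estimate of the end-to-end rate; this is a sketch, with the class name and the 7/8 smoothing weight as assumptions (the weight is borrowed from TCP's classic RTT estimator):

```cpp
#include <cstdint>

// Hypothetical sketch: smooth per-read throughput samples with an
// exponentially weighted moving average, so buffer sizing can track
// the actual link rate instead of a global pref.
class ThroughputEstimator {
public:
    ThroughputEstimator() : mEstimate(0), mHasSample(false) {}

    void AddSample(uint32_t bytesPerSecond) {
        if (!mHasSample) {
            mEstimate = bytesPerSecond;  // seed with the first sample
            mHasSample = true;
        } else {
            // estimate = 7/8 * old + 1/8 * new (TCP RTT-style smoothing)
            mEstimate = (mEstimate * 7 + bytesPerSecond) / 8;
        }
    }

    uint64_t Estimate() const { return mEstimate; }

private:
    uint64_t mEstimate;  // smoothed bytes/second
    bool mHasSample;
};
```

The smoothing matters because per-read samples are noisy; a single slow read on a loaded server shouldn't collapse the buffer size for the whole transfer.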
Comment 19•23 years ago
what is milestone "mozilla1.0" anyway? Moving to future.
Target Milestone: mozilla1.0 → Future
Comment 20•23 years ago
See http://www.mozilla.org/roadmap/mozilla-1.0.html and ask yourself whether
this bug should be fixed -- in particular, if redhat (e.g.) ships mozilla DSOs
as system libraries, could netscape.com (e.g. again) reduce its download size by
using those libraries if they were mozilla1.0-level or better (while still being
backward compatible)? Would this bug, or its fix, require netscape.com to ship
its own DSO overriding the system one? Or is it a matter of indifference to all
concerned?
/be
Comment 22•23 years ago
Possible API changes would be required?
Target Milestone: Future → mozilla1.0
Comment 23•23 years ago
this will depend a lot on bug 93055 "support partial reads from OnDataAvailable
events"
since with the fix to that bug, it'll be possible to make necko's internal
buffers very large without forcing the parser to consume all of the data in one go.
Comment 25•23 years ago
Bugs targeted at mozilla1.0 without the mozilla1.0 keyword moved to mozilla1.0.1
(you can query for this string to delete spam or retrieve the list of bugs I've
moved)
Target Milestone: mozilla1.0 → mozilla1.0.1
Comment 26•23 years ago
don't move bugs that are in the 1.0 dependency tree. sorry.
Target Milestone: mozilla1.0.1 → mozilla1.0
Comment 28•23 years ago
smart enough for mozilla 1.0 perhaps? -> future
Target Milestone: mozilla1.0 → Future
Updated•18 years ago
Assignee: darin → nobody
QA Contact: benc → networking
Target Milestone: Future → ---
Comment 29•17 years ago
4K is nowadays very small.
How about upping to 16K or 32K?
Note, memory fragmentation is now more an issue than actual usage.
Comment 30•16 years ago
(In reply to comment #0)
>...
> We need something better than the hard-coded buffer sizes we have now. We
> obviously could have different default buffer sizes for HTTP of text/html, and
> image/ data. FTP should always use large buffers (IMAP??).
with respect to imap, bienvenu comments:
From my reading of the bug, it could [benefit]. But we use the default segment size and number of segments when we open our channels, and I suspect the defaults are reasonable for IMAP. I suspect we're more like HTTP than FTP, and I have a feeling that the defaults are probably OK for HTTP :-)
I suppose it's something we could play with, but we do enough different sized requests on the same cached channel (fetching flags, fetching headers, fetching whole message bodies) that I think we'd end up with a compromise anyway. We're not like FTP, where you're optimizing for downloading large amounts of data really quickly.
Comment 31•16 years ago
This patch allows you to set the default buffer sizes via preferences. Also included is a firefox.js diff which shows you how to use it.
Unless I did something wrong, I do not see any difference in tp.
Comment 32•16 years ago
Comment on attachment 319110 [details] [diff] [review]
pref based buffer sizes
oh, when I said that about no difference, I meant I see no difference between running with:
4k buffer sizes, 8 segments
16k buffer sizes, 32 segments
There might be a better geometry for these buffers.
You can find builds here:
https://build.mozilla.org/tryserver-builds/2008-05-02_15:52-dougt@mozilla.com-16kx32_necko/
Updated•9 years ago
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX
Comment 33•9 years ago
What was the reason for the "WONTFIX" on this bug? Thanks.
Comment 34•9 years ago
(In reply to Worcester12345 from comment #33)
> What was the reason for the "WONTFIX" on this bug? Thanks.
There is no actionable data or plan in this bug. Such a project would be welcomed, and it's certainly one potential aspect of presto, but this bug presumes a solution first.
If there is a well-supported reason to make a specific change, I'm happy to review it.
Comment 35•9 years ago
OK, I thought there was a patch, but outdated. Perhaps the patch submitter can give some more info, or flesh this out?
Flags: needinfo?(dougt)
Comment 36•8 years ago
2008 was a long time ago.
Updated•8 years ago
Flags: needinfo?(dougt)