Closed Bug 65315
Opened 24 years ago
Closed 24 years ago
nsImageGTK should update pixmap incrementally
Categories
(Core :: XUL, defect)
Tracking
Status: VERIFIED FIXED
Target Milestone: mozilla0.8
People
(Reporter: tor, Assigned: tor)
Details
(Keywords: perf)
Attachments
(2 files, both patches, deleted)
Currently nsImageGTK puts off updating the server pixmap until Draw() is called,
at which point it pushes a block of pixels roughly the size of the valid decoded
portion of the image. As you can imagine, this method leads to rather poor
responsiveness when a large image is loading incrementally.
The following patch makes the pixmap update more eager by moving it into the
ImageUpdated() method and taking advantage of the update rectangle information.
The patch also means we touch the image memory less and, in the case of a
single-pass load, read it only once.
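To illustrate the idea, here is a minimal sketch of the shape of the change,
not the attached patch itself: the member names (mImagePixmap, mImageBits,
mRowBytes) are assumptions made for illustration, and the real ImageUpdated()
also handles alpha, flags, clipping and error cases that are omitted here.

  // Hedged sketch only -- member names and details are assumed.
  NS_IMETHODIMP
  nsImageGTK::ImageUpdated(nsIDeviceContext *aContext,
                           PRUint8 aFlags,
                           nsRect *aUpdateRect)
  {
    if (!mImagePixmap || !aUpdateRect)
      return NS_OK;

    GdkGC *gc = gdk_gc_new(mImagePixmap);

    // Push only the freshly decoded scanlines (the update rectangle) to the
    // server-side pixmap now, instead of waiting for Draw() to copy the
    // whole decoded region every time.
    guchar *row = mImageBits +
                  aUpdateRect->y * mRowBytes +
                  aUpdateRect->x * 3;              // assuming 24-bit RGB data

    gdk_draw_rgb_image(mImagePixmap, gc,
                       aUpdateRect->x, aUpdateRect->y,
                       aUpdateRect->width, aUpdateRect->height,
                       GDK_RGB_DITHER_MAX,
                       row, mRowBytes);

    gdk_gc_unref(gc);
    return NS_OK;
  }

The point is that the client-to-server transfer now happens a few rows at a
time as data arrives, so Draw() only has to blit an already up-to-date pixmap.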
Some informal testing shows this speeds up loading a 1280x1024 image over a
fast connection by about 2x. Mozilla is much more responsive while loading
large images.
It would be nice to get some timing results for more typical sites, but I
don't have the test harnesses and data. Could we get the Netscape performance
team to try this?
Comment 2•24 years ago
I would like to test this, but I'm in a bit of a catch-22: the server that is
used for testing is currently my linux box where I have my build. So, I
wouldn't really know what to make of the numbers when everything runs over a
loopback network interface and the HTTP server, the X server and mozilla are
all competing for the same memory/CPU/etc. resources.
Anyone inside the Netscape firewall can run this test, though. Just point the
browser at http://jrgm.mcom.com/page-loader/loader.pl, take the defaults on
that form and hit submit. The test will cycle 5 times through 40 mock pages,
and kick out a report at the end. (If a page fails to fire the onload, which
happens rarely on linux+mozilla, wait >30s and another function will kick in
to continue the test. If the test ever appears truly stuck (which I've never
seen on linux+mozilla), just hit reload and the server will do (approximately)
the right thing and continue the test).
The report at the end is a little lame, and I promise (really) that I will
polish it up Soon. But, the numbers presented provide a good indication of
page loading performance changes (in my opinion -- errors and bone-headed
mistakes are mine). Spark up your favorite spreadsheet if you want to drill
down into the numbers in detail (or give me the test ID (in the URL generated)
and I will have a look).
Comment 3•24 years ago
Oh, and if you run this test, make sure that your cache has been properly
created (e.g. if you are running a commercial build (with activation), then
run the browser once, create a new profile, quit, start again, then run the
test) -- http://bugzilla.mozilla.org/show_bug.cgi?id=65166
And ... make sure you don't have the menu set to "Auto-Detect (All)", which
causes a duplicate HTTP GET (or POST) to be sent --
http://bugzilla.mozilla.org/show_bug.cgi?id=64612 -- although that also only
affects a commercial build.
Comment 5•24 years ago
OK, I tried this and didn't see any ill effects, and I think the UI seemed more
responsive when I was loading large images. The code looks fine to me, too.
sr=blizzard
Comment 6•24 years ago
r=jag. Works fine here, feels more responsive, and the changes look good.
Comment 7•24 years ago
Code looks all right, though I have no clue about that part of mozilla.
I tested this on Solaris, with a remote X connection, on the
http://www.libpng.org/pub/mng/mngpics.html
page. Way snappy, performance is a lot better. (OK, testing with a remote X
connection is a rather biased test.)
Axel
Checked in.
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Comment 9•24 years ago
Well, I didn't do a comprehensive before and after test, but I have to say
it feels slower to visit a page with a lot of images than it did before. Like
Salon.com, say. The UI may be more responsive, but it feels like I have
to wait longer for all of the images all over the page to scan in.
Just an FYI; I'd expect others to chime in on this as well,
and possibly do some actual performance testing with some sort of timing tools
to verify this.
I'm on Linux 2.2.17, 256megs RAM, XFree86 4.0.1 displaying locally, on a 384kbps
rated DSL line.
Comment 10•24 years ago
I ran three tests for page load times with a 20010117 build, two with
20010118, and two with 20010118 and looked through the results. These tests
simply cause the browser to repeatedly call back to a server program that
dishes out mock pages which were derived from typical web sites (e.g.,
home.netscape.com) and a few that test special conditions (e.g., form controls
on bugzilla.mozilla.org query page). The tests are a "global" measure of
performance (e.g., these page loads exercise necko, cache, parser, content
sink, style system, layout, imglib, dom/js, etc.).
At any rate, the net outcome is that for these "typical" pages, comparing
builds from before the checkin, to two builds after the checkin, I am unable
to measure any significant difference in the page loading times (e.g., any
differences are less than the inherent variance in the samples).
I then added a page with a single 1280x1024 24bit PNG to the tests, and for
that page, I found that the page load time was indeed improved by an average
of 16% for the tests (reduction of ~400ms for a ~2500ms page load time).
A very nice gain. Thank you, tor!
So, this is a definite improvement for the loading of large images on X,
although the data for "typical" pages is not noticeably changed (but it may
be that other factors (e.g., the cache) are masking any measurable gains).
Status: RESOLVED → VERIFIED
Comment 11•24 years ago
> two with 20010118, and two with 20010118
two with 20010118, and two with 20010119 <-- oops