Closed Bug 153280 Opened 23 years ago Closed 23 years ago

[pjpeg] Tp performance regression on Tinderbox 06/20/2002 after 16:20

Categories

(Core :: Graphics: ImageLib, defect)

x86
All
defect
Not set
normal

Tracking


RESOLVED FIXED

People

(Reporter: spambot, Assigned: tor)

References


Details

(Keywords: perf, regression)

Attachments

(1 file)

This would appear to be fallout from the progressive jpeg changes. tor and I were discussing this earlier on IRC. Here's some data that I dug up. It's probably worthwhile drilling down on the worst performers to see if there is something we are doing suboptimally (although progressive rendering, to my understanding, is going to be more work than the current implementation).

I put in a hack to show me when the new pjpeg code was hit:

Index: nsJPEGDecoder.cpp
===================================================================
RCS file: /cvsroot/mozilla/modules/libpr0n/decoders/jpeg/nsJPEGDecoder.cpp,v
retrieving revision 1.48
diff -u -r1.48 nsJPEGDecoder.cpp
--- nsJPEGDecoder.cpp   20 Jun 2002 23:44:24 -0000  1.48
+++ nsJPEGDecoder.cpp   21 Jun 2002 08:28:06 -0000
@@ -406,7 +407,7 @@
   if (mState == JPEG_DECOMPRESS_PROGRESSIVE) {
     LOG_SCOPE(gJPEGlog, "nsJPEGDecoder::WriteFrom -- JPEG_DECOMPRESS..<snip
-
+    printf("WriteFrom JDSEQ\n");
     while (!jpeg_input_complete(&mInfo)) {
       if (mInfo.output_scanline == mInfo.output_height)

Then I ran a pageload cycle to identify which pages had pjpeg code called, and correlated those pages against the changes in time shown on the btek tinderbox. 13 of the 40 pages in the test sequence had pjpeg code called, and on average those pages were 3% slower with the new code. The other 27 pages, on average, showed 0% change (all within 3% of previous measures).

Now, for a given page to move by 3% or less is generally within the noise, so it seems that some of these pages aren't notably affected by the change. But some, especially the top two, are significantly slower.

                          Delta  Calls  Any Jpegs  Largest Jpeg (bytes)
  ---------------------------------------------------------------------
  home.netscape.com         11%    3        3          4059
  www.time.com              10%    6        8         10274
  www.moviefone.com          7%    4        4          5461
  www.nytimes.com_Table      6%    1        1         10019
  www.msnbc.com              5%    2        1         17661
  www.ebay.com               2%    1        2          3045
  www.travelocity.com        1%    2        3          4081
  espn.go.com                1%    1        5         12946
  www.aol.com                1%    1        1          2709
  www.voodooextreme.com      1%    1        5         10810
  www.tomshardware.com       0%    1        1          2138
  www.digitalcity.com        0%    1        1          1781
  my.netscape.com           -3%    1        1          4295

In the above table:
  Delta        -- percent increase in pageload time (after/before)
  Calls        -- number of printf's observed for that page
  Any Jpegs    -- count of the unique JPEG files in content [1]
  Largest Jpeg -- largest jpeg file size (but not necessarily the largest
                  progressive jpeg file -- need to do more searching to sort
                  this out)

[1] I haven't done the legwork to break down which were progressive and which were not. I also haven't sorted out whether multiple copies were on the page, although jpeg is typically used for unique images (e.g., photos) and not for repeated graphical elements (where gif is more common).
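(For context on where the extra work comes from, here is a minimal sketch of libjpeg's buffered-image API, which the progressive path builds on. This is not the actual nsJPEGDecoder code; the helper name, the row-buffer handling, and the setup assumptions are invented for illustration. The point is that each newly arrived scan can trigger another full pass over the image, so the same rows are decoded repeatedly before jpeg_input_complete() reports the stream is done.)

/* Sketch of libjpeg buffered-image (progressive) decoding.
 * Assumes cinfo was set up with buffered_image = TRUE before
 * jpeg_start_decompress(), and that a suspending source manager is
 * feeding it data incrementally.  decode_available_scans() is a
 * hypothetical helper, not a Mozilla function. */
#include <jpeglib.h>

static void decode_available_scans(struct jpeg_decompress_struct *cinfo,
                                   JSAMPARRAY row_buffer)
{
  while (!jpeg_input_complete(cinfo)) {
    /* Begin an output pass for the most recently completed scan. */
    if (!jpeg_start_output(cinfo, cinfo->input_scan_number))
      return;                        /* suspended: need more input */

    /* One full pass over the image for this scan -- this is the extra
     * work relative to a single-pass baseline JPEG. */
    while (cinfo->output_scanline < cinfo->output_height) {
      if (jpeg_read_scanlines(cinfo, row_buffer, 1) != 1)
        return;                      /* suspended mid-pass */
      /* ...hand row_buffer[0] to the image container here... */
    }

    if (!jpeg_finish_output(cinfo))
      return;                        /* suspended at end of pass */
  }
}

(The real decoder also has to resume correctly after a suspension at any of those points, which adds bookkeeping on top of the repeated passes.)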
Assignee: Matti → tor
Status: UNCONFIRMED → NEW
Component: Browser-General → ImageLib
Ever confirmed: true
Keywords: perf
QA Contact: imajes-qa → tpreston
Summary: Tp performance regression on Tinderbox 06/20/2002 after 16:20 → [pjpeg] Tp performance regression on Tinderbox 06/20/2002 after 16:20
Depends on: 76776
Actually, after thinking about it, this may be one of those areas where it's "ok" to be slower in that test (i.e., a high-bandwidth user won't really notice 10msec on average, but a low-bandwidth user will get a better experience when large jpegs are progressively shown as content arrives). I guess the open question is whether the 10% costs on a couple of those pages are expected behaviour, or whether they suggest that there are some cases that are not optimally handled.
There's a polish patch for pjpeg in bug 153433 that might change the numbers a bit more if any of the pages have pjpegs as background images.
Actually, we lazily load backgrounds (whether by intent or not, I'm not sure). In other words, we fire window.onload before the onload of background images, so changes in the decoding time of background images might not affect measured page load times.
Keywords: regression
tinderbox data:

  btek  : 1160ms -> 1170ms ( +10 ms, or 0.86% )
  luna  : 1207ms -> 1228ms ( +21 ms, or 1.74% )
  ash   : 1933ms -> 1950ms ( +17 ms, or 0.88% )
  maple : 1218ms -> 1233ms ( +15 ms, or 1.23% )
  mecca : 2270ms -> 2315ms ( +45 ms, or 1.98% )

This is a pretty big page-load regression. According to the rules, the regression should be backed out and a better fix tried. tor?
Backed out bug 76776 (pjpeg). Closing.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
I put a demo at http://jrgm.mcom.com/bugs/76776/page.html that shows how this is a big improvement in perceived performance for a low-bandwidth user. (Sorry, I don't have a place to set it up externally, but if anyone wants the script, I'll attach it here.)
Attached file: attach the darn testcase. (deleted)