Closed
Bug 51179
Opened 24 years ago
Closed 24 years ago
When overlapping PNG images with pure transparency, a faint residue is displayed
Categories
(Core Graveyard :: GFX, defect, P3)
Tracking
(Not tracked)
VERIFIED
FIXED
mozilla0.8
People
(Reporter: chris, Assigned: tor)
References
()
Details
Attachments
(6 files)
(deleted), patch | Details | Diff | Splinter Review
More toying with PNG files has turned up this bug. I've created a checker-patterned
PNG file in Photoshop 5.0.2/Win - a black image w/ an alpha channel
that is half completely solid & half completely knocked out.
I then layer that image over itself multiple times in my HTML document. When I
do this, a "residue" appears in the knocked-out areas of the image - it's as
if the knocked-out parts are not 100% knocked out, but 99% knocked out.
See a simplified example at:
http://placenamehere.com/Mozilla/checker50_composite.html
Reporter
Comment 1•24 years ago
Forgot to mention... I'm using Win98 & the nightly w/ Build ID 2000081508
Comment 2•24 years ago
Win95 2000090208
The images appear to be mirrored in today's build - white where the gif has
black, black where the gif has white. Please download and re-examine with a
more current daily. If the image from Photoshop shows how it's supposed to
appear, it definitely doesn't render that way, so I'm confirming.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Reporter
Comment 3•24 years ago
My apologies if I wasn't clear...
The PNG I am using is a solid black (RGB 0,0,0) image. If there were no
alpha transparency present, it would appear as a 50px square.
The sample gif is to show the alpha channel I used to create the transparency in
Photoshop. The black areas of the alpha channel are 100% transparent and
the white areas 100% solid - any greys (not present here) are partially
transparent.
So yes, looking at the two shapes next to each other they are reversed, but
you're comparing apples & oranges.
The important part is the bit of residue that appears when I overlap the files
(in this case, 10 times)... I've pointed it out in this screen shot:
http://placenamehere.com/Mozilla/c50_screenshot.png
Because you tossed this back to me, I took a closer look at the image without
the composite and noticed that when I sampled the white portion of the image it
gave me a value of 254,254,254 (something that no one is going to catch with
their eye). The slight residue is made much clearer if I layer the image over
itself by means of CSS positioning.
I'm not an expert by any means in the creation of PNGs and really haven't played
much with them before this week, but I have done everything by the book as far
as creating the image in Photoshop goes. Having said that, I don't have any other
programs that view PNG files properly, so I don't know if this is an issue with
Photoshop not making the transparent areas 100% transparent (even though the
black part of the alpha channel is pure black), or with Mozilla not rendering
the 100% transparent areas as totally transparent.
Comment 4•24 years ago
With the latest daily I didn't see any 'residue'. Please retry with a more
recent daily. (I do see the residue in your screenshot).
Comment 5•24 years ago
I don't see the residue with a current build on WINNT.
Don, could you take a look on WIN98?
Reassigning.
Marking future.
Assignee: kmcclusk → dcone
Target Milestone: --- → Future
Updated•24 years ago
Status: NEW → ASSIGNED
On WinNT, SP6, Mozilla Nightly Build 2000101104, I see the 'residue';
the 'white' in the checkerboard is definitely off-white compared to the
background. More visible at some monitor color settings than others.
Comment 7•24 years ago
Confirmed (with Gimp) that these images are 100% vs 0% alpha, so the displayed
image SHOULD be a simple black/white checker.
Mozilla (M18/Linux here, on 24bpp display) definitely miscalculates or wrongly
displays the composited result.
Suggestion: check the division in the compositor; using 256 instead of 255 in a
critical place would cause these symptoms (and almost no other visible problems).
Comment 8•24 years ago
Okay, for an example of what I think is wrong, check:
gfx/src/gtk/nsImageGTK.cpp
The DrawComposited*() functions all use ">> 8" where they should use "/ 255";
the shift may be faster, but it's wrong.
There may be equivalent problems on some/all other Mozilla platforms.
From here I leave it to Mozilla source hackers to write a patch, test it and
take all the glory.
Many (competent) eyes make bugs shallow.
Comment 10•24 years ago
r=bryner
Comment 11•24 years ago
sr=blizzard
Comment 12•24 years ago
This particular bug is a Windows 98 bug, because the URL seems to work on NT, so
this bug should be to find why NT works and 98 is broken. I opened a new bug,
59386, for the GTK problem, and noted there that the patch in here should be used
for that bug. This bug should be for the alpha problem on Windows 98.
Assignee
Comment 13•24 years ago
Isn't the cause of this only working on NT the nsImageWin::CanAlphaBlend()
method, which looks for an "AlphaBlend" function and checks that it isn't running
on Win98?
Comment 14•24 years ago
Bug 59386 covers GTK+ but the same erroneous compositor formula seems to be used
in at least the following places:
gfx/src/nsBlender.cpp
gfx/src/windows/nsImageWin.cpp
gfx/src/xlib/nsImageXlib.cpp
If it's actually being used (perhaps on Win98?) nsImageWin::DrawComposited24
could be made more correct, faster, shorter and easier to understand. The macro
from the GTK+ fix might come in handy too.
Assignee
Comment 15•24 years ago
Assignee
Comment 16•24 years ago
The attached patch fixes the off-by-one error in the win32 and xlib compositor.
It also fixes most of nsBlender, with the exception of nsBlender::Do8Blend()
which is trying to be too clever for a trivial fix.
Adding roc, whom cvs blame appears to finger (uid408) as the person responsible
for the compositing code in win32.
I didn't write the nsImageWin code, I just checked it in.
I actually have a new version of nsBlender which fixes this bug (in nsBlender
only) and a number of others. It uses this:
> // This is a fast approximate division by 255. It has the property that
> // for all 0 <= n <= 255*255, FAST_DIVIDE_BY_255(n) == n/255.
> // But it only uses two adds and two shifts instead of an integer division
> // (which is expensive on many processors).
> #define FAST_DIVIDE_BY_255(v) ((((v) << 8) + (v) + 255) >> 16)
However, I'm all in favour of tor's current patch being checked in. I'll try to
get my blender work checked in later.
Assignee
Comment 18•24 years ago
Assignee
Comment 19•24 years ago
On a pentium-3 roc's FAST_DIVIDE_BY_255 is about twice as fast as integer
divide; on a usparc2i it runs about 6-12 times faster, depending upon the
compiler used.
The attached patch switches all the blending to using FAST_DIVIDE_BY_255 (in
a manner which assumes the optimizer does a decent job of pulling out common
subexpressions). I also found and fixed a few more blending calculations in
nsImageWin.cpp.
I think you should bite the bullet and assign to a temporary before applying
FAST_DIVIDE_BY_255. Otherwise it's just not clear to the reader that this is
correct (or fast, for that matter).
Comment 21•24 years ago
FAST_DIVIDE_BY_255 is not necessary for gcc. It already knows how to optimize
division by 255. I would make it a compile-time option.
Assignee
Comment 22•24 years ago
While gcc does do a code drop-in for divide by 255, its version uses a multiply,
a subtraction, and two shifts. FAST_DIVIDE_BY_255 has two shifts and two
additions, and runs twice as fast as gcc's version (pentium-3, egcs-1.1.2).
Comment 23•24 years ago
I did some tests and the macro seems somewhat faster, although not as fast as
some people see. Speed seems to depend on the surrounding context. Still, it's a
clever macro.
However, correct macro behavior depends on correct compiler behavior. To be
safe, one would have to verify the macro for each compiler at the appropriate
optimization settings. If the patch said something like:
#ifdef USE_FAST_DIV_255
#define FAST_DIVIDE_BY_255(v) ((((v) << 8) + (v) + 255) >> 16)
#else
#define FAST_DIVIDE_BY_255(v) ((v)/255)
#endif
and USE_FAST_DIV_255 were defined in a header file or on the command line, I
would withdraw my objection.
> However, correct macro behavior depends on correct compiler behavior.
Er, no. It is always correct over the allowed input range (0 <= v <= 255*255).
If your compiler doesn't produce the right answer, then your compiler is
hideously broken and Mozilla will never run on it.
Also, no compiler is going to produce code as efficient as that macro, because
the macro exploits the fact that the input is in the given range --- something
the compiler does not know.
Comment 25•24 years ago
When I see a compiler that is guaranteed bug-free, then I'll accept that a macro
will always work as advertised. Until then I will be cautious.
If your compiler can't compile arithmetic, then how do you know "v/255" is going
to be correct? And what are you going to do about the other 2 million lines of
code in Mozilla?
Don't be a moron. If your compiler can't compile simple C code, throw it away.
Comment 27•24 years ago
Can we put FAST_DIVIDE_BY_255 in a common header file, consolidating redundant
source definitions?
How about those RED16, etc. common subexpressions -- do gcc -O (especially since
we are not able to use -O3 yet, IIRC) and other compilers do the right thing?
Looks good to me otherwise. Slick hack!
/be
Comment 28•24 years ago
Has a correct fix gone in for the things covered by the 11/09 patch?
Assignee
Comment 29•24 years ago
Targeting mozilla0.8
Assignee: dcone → tor
Status: ASSIGNED → NEW
Target Milestone: Future → mozilla0.8
Assignee
Comment 30•24 years ago
Assignee
Comment 31•24 years ago
Comment 32•24 years ago
I would rather see
#define FAST_DIVIDE_BY_255(target, v) \
    do { int tmp = (v); (target) = ((tmp << 8) + tmp + 255) >> 16; } while (0)
and make the semicolons mandatory.
Also, has someone verified that all the compilers behave reasonably with this?
In particular, do all the compilers optimize the local variable "tmp" into a
register? If not, there will be a lot of stack diddling.
Comment 33•24 years ago
tenthumbs is right, and we have standard (in prtypes.h) macros for the do{ and
}while(0) ugliness needed to make a macro safe to use as an expression
statement. Also, use trailing _ when naming locals in macro block statements.
#define FAST_DIVIDE_BY_255(target,v) \
PR_BEGIN_MACRO \
int tmp_ = v; target = ((tmp_ << 8) + tmp_ + 255) >> 16; \
PR_END_MACRO
I don't think we should fret about a cheezy compiler spilling to memory merely
because of the added block-local. Such a compiler is going to suck on Mozilla
code all over the place. OTOH, explicitly copying v to tmp_ not only commons
the subexpression for lame and smart compilers, it makes the macro safe to use
with any argument, including one containing other side effects or costs that
should not be evaluated twice.
/be
Assignee
Comment 34•24 years ago
Assignee
Comment 35•24 years ago
For what it's worth, gcc tip generates about 5% faster code with the "{ }"
macro versus "do { } while (0)" at the -O level with my image compositing
test application. This difference disappears at higher optimization levels.
Comment 36•24 years ago
Ok, I'll bite: how about if you use if(1){...}else instead of do{...}while(0) --
what does gcc -O do then?
I was wondering about tmp_'s type, but figured int would do given Mozilla's _de
facto_ support of 32-bit and up architectures only (Win16 aka Windows 3.1 died
with MozillaClassic, if not earlier), and given the domain restriction on v (0
<= v <= 255*255). Is v's type always unsigned?
/be
Assignee
Comment 37•24 years ago
v is generated from arithmetic operations on unsigned 8-bit and unsigned 32-bit
values. As for timing results on the various macros:
-O:
compiler        processor       straight    block      while(0)   if(1)
gcc 2.96-54     866 Pent-III    4.45sec     4.45sec    4.15sec    4.25sec
gcc 2.97 1231   866 Pent-III    4.47sec     4.23sec    4.48sec    4.23sec

-O2:
compiler        processor       straight    block      while(0)   if(1)
gcc 2.96-54     866 Pent-III    4.47sec     4.45sec    4.47sec    4.47sec
gcc 2.97 1231   866 Pent-III    4.19sec     4.18sec    4.19sec    4.19sec
Status: NEW → ASSIGNED
Looks fine to me. I don't think we should worry about minor timing details.
sr=roc+moz
PS, I don't think we're targeting platforms with 16-bit ints, are we?
If we are, then tor should change the type of tmp_ to PRInt32.
Comment 40•24 years ago
As I wrote, Mozilla has dropped Win16 support, and I know of no other target
that has 16-bit ints of interest to anyone active in the community. What's
more, we have 64-bit targets, which may some day make int be 64 bits for
performance reasons. In that world, PRUint32 for tmp_ could introduce narrowing
instructions (to chop tmp_ << 8, e.g.). So the natural types int and unsigned
still have uses, provided you can count on them having > 16 bits!
r=brendan@mozilla.org already, alright!
/be
Assignee
Comment 41•24 years ago
Checked in.
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Updated•16 years ago
Product: Core → Core Graveyard