Closed Bug 57499 Opened 24 years ago Closed 23 years ago

"old" plugin api streaming corrupted [buffering issue]

Categories

(Core Graveyard :: Plug-ins, defect, P3)

x86
Linux
defect

Tracking

(Not tracked)

VERIFIED FIXED
mozilla1.0

People

(Reporter: jshore, Assigned: serhunt)

References

Details

Attachments

(1 file)

I think this may be similar to the bug reported back in August (bug 55959). I'm running M18+ (pulled from the CVS trunk on 10/17/2000).

The problem is the following: when receiving a stream with the "older" plugin API, if the consumer is slow (in other words, does not always return a value > 0 when NPP_WriteReady is called), then once the plugin is ready to receive again, the next write call will contain a buffer which is slightly corrupted. If the consumer is always ready to consume, the stream is delivered correctly. What I've observed is that Mozilla inserts an extra character into the stream (I believe at the beginning of the corrupted buffer). I think this is a boundary-condition problem in the browser-side buffering code: running the same program with a "fast" consumer, the problem disappears.

To reproduce, I've stripped down a plugin I'm developing and used a random number generator to make the dummy plugin behave as a slow consumer. The plugin dumps the buffers it receives to a file, and comparing this file with the original shows problems occurring, usually after the first NPP_WriteReady returns 0. To test this effectively, make sure your stream is local (a local file). The dummy plugin is registered for MPEG files (so you'll need one to test with): compile the code provided with this message, move it into the plugin directory, and load a page with an embedded MPEG. Then cmp -l the generated file against the original; you should see differences in the region after the first WriteReady() returned 0.

I see this as a critical bug and would appreciate any prompt responses. Thanks,

Jonathan Shore
E-Publishing Group
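For reference, a minimal sketch of the kind of slow-consumer dummy plugin described above; the file name, the coin-flip throttle, and the 8K window are my own illustration, not the attached code, and the signatures follow the 4.x-era npapi.h:

    #include <stdio.h>
    #include <stdlib.h>
    #include "npapi.h"

    static FILE *dump = NULL;

    int32 NPP_WriteReady(NPP instance, NPStream *stream)
    {
        /* Pretend the internal buffer is full roughly half the time,
           forcing the browser down the "consumer not ready" path. */
        return (rand() & 1) ? 0 : 8192;
    }

    int32 NPP_Write(NPP instance, NPStream *stream,
                    int32 offset, int32 len, void *buffer)
    {
        if (!dump)
            dump = fopen("/tmp/stream.dump", "wb");
        fwrite(buffer, 1, (size_t)len, dump);  /* record what the browser sent */
        return len;                            /* claim everything was consumed */
    }

Running cmp -l /tmp/stream.dump against the original MPEG should then show the stray byte in the region after the first zero return.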
There are situations when the plugin does not want the stream and returns zero from WriteReady. Please see bug 53451, and specifically my comments on 2000-09-27 16:16 and 2000-09-28 17:09; your arguments against this are welcome. Chris, I am adding you since we decided that the situation where a plugin says "wait, I am not ready for the stream yet, bother me later" did not seem to be a 'normal' one.
I think this bug and the one you are referring to in bug 53451 are quite different. In that case you had a plugin that always returned 0 from the WriteReady call because it was not expecting a stream at all. Here we are talking about a plugin that does expect a stream but has limited buffering: it tells the browser how much buffer space is left at any given point. If the plugin's buffer is full because it consumes more slowly than the provider, it will return 0 on some WriteReady() calls, but will soon thereafter return non-zero once it has space in its buffer again. The issue is *not* that returning 0 causes the browser to delay, but rather that the browser inserts a garbage character into the buffer on the next non-zero WriteReady/Write sequence. This *is* a critical bug, IMO.
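A hedged sketch of the contract being described, where WriteReady() advertises free buffer space and Write() copies at most that much; the names BUF_SIZE, g_buf, and g_used are illustrative only:

    #include <string.h>
    #include "npapi.h"

    #define BUF_SIZE (64 * 1024)
    static char  g_buf[BUF_SIZE];
    static int32 g_used = 0;        /* bytes queued; drained elsewhere */

    int32 NPP_WriteReady(NPP instance, NPStream *stream)
    {
        return BUF_SIZE - g_used;   /* 0 means "full, bother me later" */
    }

    int32 NPP_Write(NPP instance, NPStream *stream,
                    int32 offset, int32 len, void *buffer)
    {
        int32 room = BUF_SIZE - g_used;
        int32 take = (len < room) ? len : room;
        memcpy(g_buf + g_used, buffer, (size_t)take);
        g_used += take;
        return take;                /* browser should re-offer the rest */
    }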
A further note on your comment: I think you are implying that the plugin always has to be willing to accommodate more data. That is not a viable approach for a *very* large stream (think video or audio); there might not even be enough VM on the box to hold the buffer. WriteReady(), IMO, should be an indicator to the browser of how to throttle the stream to the plugin. It is much easier (possible, even) to slow down browser-side reads on a stream and hand the data off to the plugin as it becomes ready. The bug here resides in the browser's buffering code, I believe. Providing a corrupted stream to a plugin is a serious problem.
I did some investigation into the Mozilla source. I noticed that an error is returned from the OnDataAvailable() method in ns4xPluginInstance.cpp when WriteReady() returns 0 (as per your fix to the previously mentioned problem). From my understanding, this then causes the channel to be cancelled (noting the line 'if (NS_FAILED(rv)) channel->Cancel(rv)' in nsPluginHostImpl.cpp, which I think is the calling code). This behavior would indeed be undesirable if we want WriteReady() to work with zero values. It seems you are saying WriteReady() should always return a non-zero value? To make my slow consumer work properly (sorry, I can't make it consume faster), I could put a select() call in the WriteReady() function to make it wait until my plugin can consume again. The question is: is the code that "feeds" plugins threaded? Can I safely make WriteReady() block for a second or two without affecting other browser events? Otherwise we're back to the same issue: I won't have enough VM to buffer the whole stream if it is fed faster than I can consume it.
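To make the idea concrete, here is a sketch of that select() workaround, assuming pipe_fd is the write end of the pipe to the consumer process (and with the caveat, per the reply below, that blocking here stalls the browser):

    #include <sys/select.h>
    #include <sys/time.h>
    #include "npapi.h"

    extern int pipe_fd;             /* write end of the pipe to the player */

    int32 NPP_WriteReady(NPP instance, NPStream *stream)
    {
        fd_set wfds;
        struct timeval tv = { 2, 0 };       /* wait up to ~2 seconds */

        FD_ZERO(&wfds);
        FD_SET(pipe_fd, &wfds);
        if (select(pipe_fd + 1, NULL, &wfds, NULL, &tv) > 0)
            return 8192;            /* pipe drained, accept more data */
        return 0;                   /* still backed up */
    }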
No, the code that feeds you is not threaded. If you block it, you will starve the UI. av, warren, and I talked about potentially having a "zero" return from NPP_WriteReady() trigger the plugin glue to:

1. buffer the data that it has just read, and
2. queue a callback through the main event loop.

The callback would try to deliver the buffered data again via NPP_WriteReady(), and re-enqueue itself if it got another zero answer (possibly with some backoff scheme). This would probably require putting the plugin glue into some kind of state so that it would know *not* to continue reading when Necko informed it that new data had arrived from the server. We'd also want it not to "forget" Necko notifications that came in while blocked on a slow plugin. Anyway, at the time we "fixed" bug 53451, this seemed like overkill. What is the plugin that is depending on this behavior?
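A very rough sketch of that deferred-delivery idea; queue_timer_callback() and the PendingWrite bookkeeping are hypothetical stand-ins for whatever event mechanism the plugin glue would actually use:

    #include <stdlib.h>
    #include "npapi.h"

    extern NPP       g_instance;    /* hypothetical: the stalled instance */
    extern NPStream *g_stream;
    extern int32     g_offset;

    /* hypothetical one-shot timer on the main event loop */
    extern void queue_timer_callback(int delay_ms,
                                     void (*fn)(void *), void *arg);

    typedef struct {
        char *data;                 /* copy of the bytes the plugin refused */
        int32 len;
        int   delay_ms;             /* doubles on each refusal (backoff)    */
    } PendingWrite;

    static void retry_write(void *arg)
    {
        PendingWrite *pw = arg;
        if (NPP_WriteReady(g_instance, g_stream) > 0) {
            NPP_Write(g_instance, g_stream, g_offset, pw->len, pw->data);
            free(pw->data);
            free(pw);
        } else {
            pw->delay_ms *= 2;      /* back off and try again later */
            queue_timer_callback(pw->delay_ms, retry_write, pw);
        }
    }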
I understand your dilemma in this regard; I think you should look at "scheduling" the writes in this sort of case. The best solution (though more difficult to implement) would be to create a thread to feed each plugin instance.

As for my plugin: it's similar to plugger. The plugin feeds the stream over a pipe to another process, and as is typical of processes of this sort, that process consumes from the pipe as it needs the stream. If the stream feeds faster than the playback, you have a problem. I have internal buffering in the plugin which I could make expand dynamically; my concern is that some of the streams I'll be sending through are very large, and if I am forced to buffer the whole stream I will be displacing a lot of VM. I've gotten around the issue for the moment by reading the stream from a file.

I do think the current behavior does not conform to the expectations set by the API and should be changed. As implemented, it doesn't matter what WriteReady() returns: the browser will continuously try to stuff data down to the plugin without backing off. Supposing the chunk it received was 8K, even if I return 1 from the WriteReady() call it will still try to stuff all 8K down by calling the functions in quick succession.
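For illustration, a sketch of the plugger-style drain side described above, pushing the plugin's internal buffer into a non-blocking pipe so that the next WriteReady() can report free space again (all names illustrative):

    #include <unistd.h>
    #include <errno.h>
    #include <string.h>
    #include "npapi.h"

    static void drain_to_helper(int pipe_fd, char *buf, int32 *used)
    {
        ssize_t n = write(pipe_fd, buf, (size_t)*used);  /* fd opened O_NONBLOCK */
        if (n > 0) {
            memmove(buf, buf + n, (size_t)(*used - n));  /* shift leftovers down */
            *used -= (int32)n;
        }
        /* n < 0 with errno == EAGAIN just means the player hasn't caught up. */
    }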
setting bug status to New
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: 4xp
Blocks: 55959
Moving to m1.0
Target Milestone: --- → mozilla1.0
Reporter: Is this still happening in recent builds, now that many new changes have gone into networking and the cache? Also, is this Linux-only? It sounds like the problem is XP (cross-platform).
Definitely still happens with Mozilla 0.9 on Linux using the plugger plugin to play an mp3. I'm using the source RPM provided at www.mozilla.org on Mandrake 7.2.
Okay, see bug 55959 for patches. This will hopefully be fixed real soon.
Status: NEW → ASSIGNED
FIXED with builds from 0518. The buffer should now be at the same address for the duration of the stream.
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
No longer seeing buffer corruption problems while playing with plugins. Marking VERIFIED on 0530 trunk builds.
Status: RESOLVED → VERIFIED
Product: Core → Core Graveyard