Closed Bug 81008 Opened 24 years ago Closed 24 years ago

Following links with empty url parameter causes helper to appear, v2

Categories

(Core :: Networking: HTTP, defect)

x86
Windows 2000
defect
Not set
normal

Tracking


VERIFIED FIXED
mozilla0.9.1

People

(Reporter: erich, Assigned: darin.moz)

References


Details

(Whiteboard: r=gagan, sr=sfraser, a=asa)

Attachments

(9 files)

For certain pages, Navigator has started displaying the download dialog for HTML pages rather than just rendering the HTML. For example, with this URL it says:

You have chosen to download a file of type: "Hyper Text Markup Language" [text/html] from http://ask.yahoo.com/. What should Mozilla do with this file?

Build 2001051506, win32-installer on Win2k.
The server returns no HTTP headers. None at all. So something breaks. This seems to have broken with the HTTP branch landing. And there's already a similar bug filed.
Assignee: asa → neeti
Component: Browser-General → Networking: HTTP
QA Contact: doronr → tever
Whiteboard: DUPEME
Yes, they still use HTTP/0.9. The similar bug is bug 80849, which was duped to bug 80544 (fix already checked in).
*** This bug has been marked as a duplicate of 80544 ***
Status: UNCONFIRMED → RESOLVED
Closed: 24 years ago
Resolution: --- → DUPLICATE
Whiteboard: DUPEME
Reopening. Does not work with the fix for bug 80544 (2001-05-16-04, Win NT). Now I get no download dialog, but part of the source is shown as text/plain.
Status: RESOLVED → UNCONFIRMED
Resolution: DUPLICATE → ---
*** Bug 80849 has been marked as a duplicate of this bug. ***
Confirming. New summary.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Summary: browser opens download dialog instead of displaying HTML → HTTP/0.9 responses not handled properly
Hmm. Works with http://ask.yahoo.com/20010515.html , but does not work with http://ask.yahoo.com/ (both are HTTP/0.9). Changing URL.
-->regression
Assignee: neeti → darin
investigating...
Status: NEW → ASSIGNED
Keywords: nsbeta1
Target Milestone: --- → mozilla0.9.1
yeah... probably. might have been a regression from when i made the parsing routines return void.
Attached patch fixes the problem (deleted) — Splinter Review
Keywords: patch
great find. r=gagan
rs=mscott
fix checked in
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
*** Bug 81598 has been marked as a duplicate of this bug. ***
qa to me. change summary so it shows up in mostfreq queries and is readable. set depends and blocks so the progression of problems is clear. If you were unfixed from bug 80544, technically you should look at this fix, but realistically you want to get a build from 5/18+ because of bug 81273.
Blocks: 80544
Depends on: 81273
Keywords: mostfreq
QA Contact: tever → benc
Summary: HTTP/0.9 responses not handled properly → Following links with empty url parameter causes helper to appear, v2
Reopening. The patch still eats the first line of an HTTP/0.9 response. I'll attach an ugly patch which fixes that in 99.9% of all cases.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Attached patch patch (deleted) — Splinter Review
I have a better patch now. It fixes more than just this bug. Other improvements:
- Implements handling of folded headers
- Fixes bug 80313
- Fixes (probably theoretical) problems with very long headers
- Better parsing of the Content-Type header, especially of the charset parameter
- Accepts tabs as whitespace in some places
- Removes a redundant loop which could easily lead to an infinite loop

Changes to nsHttpTransaction::ParseHead():
- Includes most of the removed function ParseLine()
- No longer modifies the buffer, but copies into mLineBuf. That fixes problems with HTTP/0.9 and is needed for handling of folded headers.
- If the first call to ParseHead() contains no '\n', treat the response as HTTP/0.9. That may be wrong if we receive a very long status line, but otherwise we would have to check for remaining content when the connection closes and we still have no status line. My previous patch did not consider that, and my 99.9% estimate was wrong because most small images contain no '\n'.

New function nsHttpTransaction::CheckForCompletedHeader():
- Used for handling of folded headers

Changes to nsHttpTransaction::Read():
- Removed a redundant loop. If |count| were ever != 0 before the end of the headers, that would be an infinite loop.

Changes to nsHttpResponseHead::ParseHeaderLine():
- Allow whitespace between header name and colon (bug 80313)
- Allow tabs after the colon

Changes to nsHttpResponseHead::ParseContentType():
- Allow whitespace between MIME type and semicolon
- Allow trailing whitespace other than a comment (BTW, comments are invalid in most headers, including Content-Type)
- Allow parameters other than "charset" (but ignore them)

I haven't tested this patch enough to check it in yet. Will do so tomorrow.
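The folded-header handling described above can be sketched roughly as follows. This is a standalone illustration, not the actual nsHttpTransaction code; the function name and data shapes are invented for the example:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Unfold HTTP/1.x header lines: a line beginning with SP or HT is a
// continuation of the previous header line (RFC 2616 LWS folding), so it
// gets appended to that header instead of starting a new one.
std::vector<std::string> UnfoldHeaders(const std::vector<std::string>& lines) {
    std::vector<std::string> out;
    for (const std::string& line : lines) {
        if (!line.empty() && (line[0] == ' ' || line[0] == '\t') && !out.empty()) {
            // Continuation line: collapse the leading whitespace to a
            // single space and append to the previous header.
            size_t i = line.find_first_not_of(" \t");
            out.back() += ' ';
            if (i != std::string::npos)
                out.back() += line.substr(i);
        } else {
            out.push_back(line);
        }
    }
    return out;
}
```

This is why the parser must buffer lines (mLineBuf in the patch): a header is only known to be complete once the *next* line is seen not to start with whitespace.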
Attached patch improved patch (deleted) — Splinter Review
Andreas: nice work! but, i have a few comments/nits about your patch:

1) usually best to keep the patch focused on the current bug. for example, the header parsing bugs have nothing to do with HTTP/0.9. i'm willing to let it fly this time, but in the future please try not to lump so many different changes into one bug. thx!

2) i think your test for HTTP/0.9 is somewhat flawed. not only would we break if the server sent a long status line, but we'd also break if the status line were sent in small chunks. i think we should simply check for "HTTP" as the first 4 chars before assuming the response is 0.9.

3) CheckForCompletedHeader() is a bad name. the function does not appear to return any kind of result, as its name would suggest. moreover, it has the side effect of parsing headers. so, what about a name like ParseHeaderIfComplete? that would seem to better describe what it does.
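The "HTTP" prefix check proposed in point 2 amounts to something like the sketch below (illustrative names, not the patch itself). Note it needs at least 4 bytes buffered, which is exactly the remaining edge case raised in the follow-up comment:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Heuristic: treat a response as HTTP/1.x only if it begins with the
// bytes "HTTP"; anything else is assumed to be a raw HTTP/0.9 body.
// A first chunk shorter than 4 bytes is still ambiguous under this test.
bool LooksLikeHttp1x(const char* buf, std::size_t len) {
    return len >= 4 && std::memcmp(buf, "HTTP", 4) == 0;
}
```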
1) I have split off the part modifying nsHttpResponseHead. Will attach it to bug 80313. The part modifying nsHttpTransaction::Read() is not directly related to this bug, but affects the same file. If you want, I can file a new bug for it, but for now I have left it in the patch I'll attach.

2) We fail for long headers or small chunks with the existing code as well (if a header extends across more than two chunks). But you are right, that's bad. I have fixed it. To do so for the status line, we need a new flag that states whether we are in the first chunk of data. Another possibility would be to check for remaining content in the buffer when the connection closes, but I do not know how to achieve this. My new patch still fails if the first chunk is less than 4 bytes long. Do we need to care about that?

3) Agreed.

I have tested the patch only on Linux (have no access to a Mac and no VC6 for Windows). A generator for HTTP headers to test with is at http://clarence.de/tests/http-raw.html .
looks much better. i'll give it a try on win32.
the patch builds just fine on win2k-vc6
*** Bug 81682 has been marked as a duplicate of this bug. ***
Whiteboard: waiting for reviews
Whiteboard: waiting for reviews → waiting for reviews, want for mozilla 0.9.1
Attached patch new patch (deleted) — Splinter Review
I've submitted a patch that is functionally equivalent, but IMO cleaner, than the existing patch. I think the resulting code is much easier to understand. Andreas: thank you for taking the time to write up your patch... it revealed many of the failings of the existing code :-)
Status: REOPENED → ASSIGNED
Darin, yes, your code is easier to understand. But I think it loses the last header if a chunk happens to end right after it.
... it also seems to fail if a chunk ends right before a (CR)LF.
Another problem with all attached patches: A CR remains in the header if a chunk ends after it.
... which may cause random failures. We have some hard-to-reproduce bugs filed that may be caused by that.
I have made a patch based on Darin's patch but without the problems I mentioned. Not very well tested yet.
Attached patch next patch (deleted) — Splinter Review
andreas: you are very astute ;-) while i was out today, i realized the same exact thing. i'll take a look at your new patch.
My last patch could still lose the last header. Have a new one.
Attached patch patch (deleted) — Splinter Review
Attached patch revised patch (deleted) — Splinter Review
my revised patch includes the following changes:

1) fixed code to ensure that the last line is not dropped
2) fixed code to ensure that a trailing \r doesn't end up in a line
3) eliminated nextch from ParseLineSegment since it is not needed

differences between what i've submitted and what clarence last submitted:

1) i am keeping ParseLine because i think it makes the code clearer
2) fewer calls to ParseLineSegment
3) cleaned up some of the code in ParseHead
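The two chunk-boundary bugs being fixed here (a dropped final line, and a stray \r when a chunk ends between the CR and the LF) can be demonstrated with a minimal incremental line splitter. This is a hypothetical standalone sketch, not the ParseLine/ParseLineSegment code from the patch:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Incremental line splitter: data arrives in arbitrary chunks, lines end
// in LF optionally preceded by CR, and the CR may sit at the end of one
// chunk with the LF at the start of the next.
struct LineSplitter {
    std::string pending;             // partial line carried across chunks
    std::vector<std::string> lines;  // completed lines, CR/LF stripped

    void Feed(const char* data, std::size_t len) {
        for (std::size_t i = 0; i < len; ++i) {
            char c = data[i];
            if (c == '\n') {
                // Strip a trailing CR even if it arrived in an earlier chunk.
                if (!pending.empty() && pending.back() == '\r')
                    pending.pop_back();
                lines.push_back(pending);
                pending.clear();
            } else {
                pending += c;
            }
        }
    }

    // Flush at end-of-input so a final unterminated line is not dropped.
    void Finish() {
        if (!pending.empty()) {
            lines.push_back(pending);
            pending.clear();
        }
    }
};
```

Splitting "Host: a\r" / "\nX: b" across two Feed() calls exercises both boundary cases at once.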
r=gagan... now let's get it in soon!
Looks like a nice set of fixes. sr=sfraser
Whiteboard: waiting for reviews, want for mozilla 0.9.1 → r=gagan, sr=sfraser, a=?
Darin's last patch looks good and it has passed all my tests. Now a similar patch is probably needed for nsHttpChunkedDecoder. Darin, are you already working on that or should I look into it?
clarence: are you aware of specific problems with the chunked decoder?
I saw bugs suggesting similar problems. I do not know of any specific problem yet.
a= asa@mozilla.org for checkin to 0.9.1
Whiteboard: r=gagan, sr=sfraser, a=? → r=gagan, sr=sfraser, a=asa
fix checked in!
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Darin: I looked at nsHttpChunkedDecoder and found nothing similar to this bug (RFC 2616 allows LWS between chunk-size and chunk-extension, but that's a flaw in the spec; honoring it would break the web). Your code looks good, with one exception (trailers). I filed bug 82873 for that and assigned it to you.
why would LWS between the chunk-size and chunk-extension be a problem?
Darin: We couldn't reliably distinguish between a continuation line and actual chunk-data starting with a space or tab (I meant only LWS containing CRLF).
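The ambiguity is why the chunk-size line must be parsed from a single line, with no folding. A minimal sketch (illustrative function name, not the nsHttpChunkedDecoder code):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Parse the chunk-size from one chunked-encoding header line, e.g.
// "1a3; ext=1". The size must come from a single line: if a CRLF
// followed by SP/HT were allowed inside the chunk header (LWS folding,
// as RFC 2616's grammar technically permits), a folded continuation
// line would be indistinguishable from chunk data that happens to
// start with a space or tab.
long ParseChunkSize(const std::string& line) {
    char* end = nullptr;
    long size = std::strtol(line.c_str(), &end, 16);
    if (end == line.c_str())
        return -1;  // no hex digits: malformed chunk header
    return size;
}
```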
ah.. right
Verified.
Status: RESOLVED → VERIFIED
QA Contact: benc → junruh