Bug 585939 (Closed): Consider responses which have a larger content-length than available data size as failed
Opened 14 years ago • Closed 9 years ago
Categories: Core :: Networking: HTTP, defect
Status: RESOLVED WONTFIX
People: Reporter: ghazel, Unassigned
Attachments: 1 file (deleted), application/x-zip-compressed
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.125 Safari/533.4
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8
When a connection is lost before an entire JavaScript file is downloaded, Firefox will execute whatever fragment of the file it received and call the onload handler, instead of onerror as it would if the request had failed to connect.
This can result in some very broken behavior, with no ability to detect it and degrade gracefully.
Valid ways to know the JavaScript file is incomplete include comparing the received byte count against Content-Length, or not receiving the proper last chunk and terminator of a chunked transfer.
I tested this with a number of browsers. Chrome handles it properly, and fires onerror.
Reproducible: Always
Steps to Reproduce:
1. Dynamically create a script tag and attach onload and onerror handlers.
2. Set the src to a webserver you control.
3. Add the script tag to the DOM.
4. From the webserver, return a Content-Length of 1000, write "alert('1');", and then close the connection.
Actual Results:
An alert box saying '1' pops up, and the onload handler fires.
Expected Results:
No alert box, and the onerror handler fires.
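(For illustration, a minimal client-side sketch of steps 1-3; the URL and port are placeholders for a test server you control, such as the one sketched after comment 3:)

    // Sketch of the reproduction steps: dynamically add a script tag with
    // onload/onerror handlers. The URL below is a hypothetical test server.
    var script = document.createElement('script');
    script.onload = function () {
      // Firefox fires this even when the response was truncated.
      console.log('onload fired');
    };
    script.onerror = function () {
      // Expected behavior (and what Chrome does) for a truncated response.
      console.log('onerror fired');
    };
    script.src = 'http://localhost:8080/truncated.js'; // placeholder URL
    document.head.appendChild(script);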
Comment 1•14 years ago
> include comparing against Content-Length,
This is often wrong, actually. So no, that's not a valid way of checking.
> or not receiving the proper last chunk and ending of a chunked transfer.
Was your transfer chunked?
> and then close the connection.
Close how? There are different ways of closing connections that have different effects... Can you point me to a server with the behavior you describe?
Reporter | Comment 2•14 years ago
Reporter | Comment 3•14 years ago
Content-Length is one way in which Chrome checks.
(In reply to comment #1)
> > include comparing against Content-Length,
>
> This is often wrong, actually. So no, that's not a valid way of checking.
>
> > or not receiving the proper last chunk and ending of a chunked transfer.
>
> Was your transfer chunked?
I see the same behavior with chunked encoding, but I was using Content-Length for my test. So Chrome uses Content-Length to check, too; obviously they thought it was worth it. I don't think trying to support servers which send an invalid content length is a good idea. It breaks the ability to handle conforming servers correctly. What would it even mean to have an HTTP/1.1 pipelined server which sent the wrong Content-Length?
> > and then close the connection.
>
> Close how? There are different ways of closing connections that have different
> effects... Can you point me to a server with the behavior you describe?
Any sort of close before Content-Length bytes have been received indicates that you did not get all the bytes. I attached the server which I hacked together to reproduce the problem.
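(The attachment has since been deleted. A minimal Node.js sketch of a server with the behavior described above, advertising Content-Length: 1000, sending a short script body, and then closing the connection, might look like this; the port is arbitrary:)

    // Hypothetical stand-in for the deleted attachment: a raw TCP server
    // that advertises Content-Length: 1000 but sends only 11 body bytes
    // before closing the connection with a normal FIN.
    var net = require('net');

    var server = net.createServer(function (socket) {
      var body = "alert('1');";
      socket.write(
        'HTTP/1.1 200 OK\r\n' +
        'Content-Type: application/javascript\r\n' +
        'Content-Length: 1000\r\n' +  // deliberately larger than the body
        '\r\n' +
        body
      );
      socket.end();  // close well before the promised 1000 bytes are sent
    });

    server.listen(8080);  // arbitrary port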
Comment 4•14 years ago
> I don't think trying to support servers which send an invalid content length is
> a good idea
It's necessary for web compat; not doing it leads to very broken behavior. See the relevant comments in our network code...
> What would it even mean to have a HTTP/1.1 pipelined server which sent the
> wrong Content-Length?
In practice, it would be broken. A common problem with pipelining, in fact.
> Any sort of close before Content-Length bytes have been received indicates
> that you did not get all the bytes.
Hmm. So the issues we've run into have been with too-small content-length, I think. So maybe we could add a check for a too-large one. Jason, what do you think?
This would need to happen on the HTTP level; this has nothing to do with the DOM.
Status: UNCONFIRMED → NEW
Component: DOM: Core & HTML → Networking: HTTP
Ever confirmed: true
QA Contact: general → networking.http
Summary: Partially downloaded Javascript still executes and fires onload → Consider responses which have a larger content-length than available data size as failed
Reporter | Comment 5•14 years ago
(In reply to comment #4)
> > I don't think trying to support servers which send an invalid content length is
> > a good idea
>
> It's necesarry for web compat; not doing it leads to very broken behavior. See
> the relevant comments in our network code...
Well, do you have real servers you can demonstrate this on? From what I know, Chrome is tested on lots and lots of pages. Maybe those comments you're referring to deal with servers which are no longer buggy / in operation?
> > Any sort of close before Content-Length bytes have been received indicates
> > that you did not get all the bytes.
>
> Hmm. So the issues we've run into have been with too-small content-length, I
> think. So maybe we could add a check for a too-large one. Jason, what do you
> think?
>
> This would need to happen on the HTTP level; this has nothing to do with the
> DOM.
Sorry, it wasn't clear which section to use. Also, I'm not sure about promoting this all the way up to *any* content which is partially loaded... It sounds good, but partially loading images or streaming things or other stuff might be ok. I don't know.
Comment 6•14 years ago
> Maybe those comments you're referring too deal with servers which are no
> longer buggy / in operation?
It's possible, but unlikely. If someone produces data that indicates this is no longer a problem, great. Otherwise, we're not going to potentially break things when there's no reason to.
> But partially loading images or streaming things or other stuff might be ok
"Request did not complete successfully" doesn't mean we don't show what content we got. Right now necko reports this load as completely successful, period.
Comment 7•14 years ago
Would it be acceptable to report occurrences to the Error Console as a warning?
Reporter | Comment 8•14 years ago
Well, that would be about like not fixing it. So, no.
Comment 9•14 years ago
I doubt this problem will be fixed unless you can prove that the current strategy contains unpatchable security vulnerabilities, or that the perceived need to maintain the relaxed approach is unnecessary because there aren't many broken servers/scripts in the wild. A quick Google search brings up http://bugs.php.net/50940, which appears to have been a bug from PHP 5.2.0-12; there have only been two patches to 5.2 since then and 5.3 is only a year old, so I would expect that bug to still be out in the wild right now.
Reporter | Comment 10•14 years ago
Sure, the unpatched security vulnerability is that incomplete javascript code will be executed if the connection is lost in the middle of a transfer of code. Executing partial code is unsafe. If a setTimeout call is made but then the code after that is not executed, the timer might go off with the wrong state.
Supporting broken servers is not an important feature when it introduces other bugs. Chrome has dropped support for invalid content lengths in at least the "content is shorter than content-length" direction. Unbreak valid servers by fixing this bug, and find a way to support broken servers if you can detect it.
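(A small hypothetical illustration of the setTimeout hazard described above, assuming the connection drops at a statement boundary so the received fragment still parses and runs:)

    // Suppose the full script sets up a timer and then finishes its setup.
    var config = { ready: false };
    setTimeout(function () {
      // The callback assumes the rest of the file ran before it fires.
      if (!config.ready) {
        console.log('timer fired with incomplete state');
      }
    }, 1000);
    // ---- connection lost here: the remaining statements never arrive ----
    // config.ready = true;
    // initializeApp(config);   // hypothetical setup call that never runs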
Comment 11•14 years ago
I could go either way on using the downloaded object. Generally speaking, we try to stream the results into whatever their consumer is, so in some ways the cat is out of the bag by the time the early close comes along.
jayvdb attached the pipelining bug to this one - and that's a good thought. This behavior is certainly weird enough that we should consider it a pipelining failure and add the host to the dynamic pipeline blacklist - if it can't get CL right, we don't want to be trying to pipeline!
Probably worth checking on the cache behavior too. The object should not be put in the cache under this circumstance - too fishy.
Comment 12•13 years ago
Isn't that partly fixed now with the recent commits concerning pipelining?
Comment 13•12 years ago
The bug still reproduces in Fx 19 (Win7, OS X 10.8).
I've throttled my network connection and loaded a large file with XHR. I get xhr.readyState === 4, xhr.status === 200, and xhr.responseText.length === 208720, but the actual Content-Length header is 528307.
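(A rough sketch of that kind of manual check; the URL is a placeholder, and note that responseText.length counts characters, which only matches the byte count for ASCII content:)

    // Compare what actually arrived against the Content-Length header.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/large-file.js');  // placeholder URL
    xhr.onreadystatechange = function () {
      if (xhr.readyState !== 4) return;
      var expected = parseInt(xhr.getResponseHeader('Content-Length'), 10);
      if (xhr.status === 200 && xhr.responseText.length < expected) {
        // Firefox still reports status 200 even though the body is short.
        console.log('truncated: got ' + xhr.responseText.length +
                    ' of ' + expected);
      }
    };
    xhr.send();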
Comment 14•11 years ago
This is the reason Firefox happily considers truncated downloads as successful, right? It's a bit of a hassle having to manually check the file size after the fact.
From the comments I gather the only problems in the wild are content-lengths shorter than appropriate, so it'd be OK to consider responses shorter than the content-length erroneous.
Comment 15•9 years ago
We learned from the download manager that we cannot rely on content length to declare channels as failed, due to web compat.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX