Closed Bug 82418 Opened 24 years ago Closed 24 years ago

URL makes mozilla totally unresponsive

Categories

(Core :: Networking: Cookies, defect, P3)

Platform: x86
OS: All
Type: defect

Tracking


Status: VERIFIED DUPLICATE of bug 90288
Target Milestone: mozilla0.9.3

People

(Reporter: subsolar, Assigned: gagan)


Details

(Keywords: crash, hang, regression, Whiteboard: PDT+, have patch in bug 90288)

Attachments

(3 files)

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux 2.4.4-ac11 i686; en-US; rv:0.9+) Gecko/20010523
BuildID: 2001052308

Mozilla tries to go to the URL but just hangs; you can't even exit out of it or use any other buttons. This URL worked fine in Mozilla 0.9, but not in the nightly builds since then.

Reproducible: Always

Steps to Reproduce:
1. Just go to http://ilearn.ucr.edu

Actual Results: Mozilla seems to freeze when attempting to connect to the site.

Expected Results: The browser should have loaded the site. Works fine in Mozilla 0.9 but not in builds after that.
Confirming on 2001051720 Win2k, setting OS=All. If I visit that URL, Mozilla uses 100% CPU and displays "Document: Done (x.xx secs)" with ever-changing values (they change about 3 times a second for me). I can still use the UI and keyboard shortcuts, though. Possibly a networking issue. Over to HTTP Networking for a guess.
Assignee: asa → neeti
Status: UNCONFIRMED → NEW
Component: Browser-General → Networking: HTTP
Ever confirmed: true
OS: Linux → All
QA Contact: doronr → benc
Same thing happened in IE, NS4, and Mozilla: all browsers reloaded in a frenzy upon loading the page. Here's the script:

    if (!document.cookie) {
        <blah...>
    } else {
        document.location.reload(1);
    }

This is what I *think* the JS code is doing: the page tries to set a cookie in the browser, but somehow it fails. So the "else" statement is executed and the page is reloaded. The cookie still can't be set after the reload, so the loop repeats itself, leading to the "hang" symptom.
Hmm, well, it works fine for me in mozilla 0.9 and Internet Explorer, just not the latest mozilla nightly builds.
Could be a regression from the HTTP landing, because it works fine on a build pulled on 5/10.
Target Milestone: --- → mozilla0.9.2
-> cookies.
Component: Networking: HTTP → Cookies
I repeatedly get the following error:

Opening file cookperm.txt failed
Error loading URL http://ilearn.ucr.edu/ : 2152398850
Assignee: neeti → morse
QA Contact: benc → tever
neeti: that is the nsresult NS_BINDING_ABORTED
Cookies appear to be working just fine. Each time the page is loaded, it sets the cookie called "session_id", and each time the page is requested, the value of the "session_id" cookie is sent along with the request.
doctor__j's analysis was not quite accurate. Yes, that is the JavaScript being executed; I'll post an attachment showing the entire script, of which what doctor__j posted was only a small part. What the website is doing is setting a cookie (in its HTTP response headers) and then testing in JavaScript to see whether that cookie got set. If it didn't get set, the JavaScript displays a page saying that you have cookies disabled (the "blah" portion of doctor__j's posting). However, in this case the cookie is indeed getting set correctly, so we get to the "else" part, which does document.location.reload(1); and so after one second the page gets reloaded. The reloaded page makes the cookie test, the test passes, and the reloaded page executes the reload, and so on ad infinitum. The only thing I don't understand is why this page repeatedly reloads only in Mozilla and not in IE or 4.x. In fact, the following page will infinitely reload in all browsers:

    <html>
    <head>
    <script>
    document.location.reload(1);
    </script>
    </head>
    </html>
Status: NEW → ASSIGNED
From looking at the sniffer traffic, here's what I believe is happening. The traffic from Netscape 4.x shows the following:

1. Page is requested.
2. Server sets the cookie and returns the page with the JavaScript that I posted.
3. The same page is requested again, and the cookie is sent along with the request.
4. Server returns a different page this time.

The reason the server returns a different page on the second request is probably that it detects the cookie and therefore knows it has already delivered the page once and should now deliver a second page.

The traffic for Mozilla/N6 stops after step 2. The only thing that could explain this is that the browser has decided to fetch the page from the cache rather than requesting it from the server. That would cause it to keep getting the same page over and over, which is what has been observed.

To validate this assumption, I set my pref to disable the cache. However, that made no difference -- I still got the infinite reloading. (This may be a second bug, namely that the pref for disabling the cache is not working. I'll leave it to the netwerking people to determine if there is a second bug.)

So, returning to the original bug, I believe that the browser is going to the cache instead of going to the network even though things are different on the second page load. Namely, the cookie header is not the same in steps 1 and 3 above, so the browser should not be using the cached copy. This means the cache would need to save the cookie headers (does it?) and compare them to the cookie headers of future requests.

Neeti, if you agree with my analysis, you should probably be the owner of this bug.
cc'ing gordon
Assignee: morse → neeti
Status: ASSIGNED → NEW
Keywords: nsbeta1
gordon, darin: The first time we load the URL, we open the cache entry with READ_WRITE access. Then, while this cache entry is still open with READ_WRITE access, HTTP makes another request for an entry because document.location.reload(1); is called in the script. The cache service will downgrade the access to READ for the second entry, and we read from the cache after the first one has been marked valid. And we keep reloading from the cache for all subsequent reloads. How should this be handled?
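To make that failure mode concrete, here is a minimal toy model of the access downgrade described above (all names are illustrative; this is not the actual nsICacheService API):

    #include <cstdio>

    // Toy model of the downgrade: the first requestor gets READ_WRITE;
    // while that descriptor is still open, later requestors for the same
    // key are downgraded to READ and are served the cached copy once the
    // entry has been marked valid.
    enum AccessMode { READ_WRITE, READ };

    struct ToyCacheEntry {
        bool openForWriting = false;
        bool valid = false;
    };

    AccessMode OpenEntry(ToyCacheEntry& e) {
        if (!e.openForWriting) {
            e.openForWriting = true;
            return READ_WRITE;      // may go to the network
        }
        return READ;                // must read from the cache
    }

    int main() {
        ToyCacheEntry entry;
        OpenEntry(entry);           // initial load: READ_WRITE
        entry.valid = true;         // response arrives, entry marked valid
        // document.location.reload(1) fires before the first descriptor
        // closes, so every subsequent reload is served from the cache:
        for (int i = 0; i < 3; ++i)
            if (OpenEntry(entry) == READ && entry.valid)
                std::printf("reload %d served from cache\n", i + 1);
    }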
Priority: -- → P3
Gordon and I worked out a plan for this that should work. Clients, such as extensions/cookies, that set request headers that can vary will need to provide HTTP with a key (an integer) that can be used to distinguish the request from other similar requests. HTTP will then use this key in constructing a unique key for the cache. This is similar to the solution currently used to distinguish different POST responses to the same URL.

Our proposal is to add a method to nsICachingChannel resembling something like this:

    void addToCacheKey(in unsigned long keyPart);

Internally, we'll simply stringize keyPart and add it to the cache key. There is already a place for such integers in the cache key, used for POST identifiers, and we can just concatenate keyPart to the set of additional cache key identifiers. This set will be sorted to ensure that the results remain constant from run to run.

Docshell already knows how to use cache keys for session history, so there should be nothing needed there. The cookie manager, however, will need to keep track of the appropriate keyPart, and should set it on the nsIHttpChannel (QI'd to nsICachingChannel) from its OnModifyRequest notification handler. We figured that the cookie manager could simply increment the keyPart that it sends each time the cookies change for a given host/path.

morse: does this sound like a feasible change to the cookie manager?
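For illustration, a rough sketch of what the cookie-manager side of this proposal might look like (the helper GetKeyPartForHostPath and the exact hook signature are assumptions, not actual code):

    #include "nsCOMPtr.h"
    #include "nsIHttpChannel.h"
    #include "nsICachingChannel.h"

    // Hypothetical sketch of the proposal above -- not checked-in code.
    // Assumes nsICachingChannel has gained the proposed
    // addToCacheKey(keyPart) method.
    nsresult CookieManager_OnModifyRequest(nsIHttpChannel* httpChannel)
    {
        nsCOMPtr<nsICachingChannel> cachingChannel =
            do_QueryInterface(httpChannel);
        if (!cachingChannel)
            return NS_OK;  // channel does not support cache keys

        // keyPart would be incremented whenever the cookies for this
        // host/path change, so a reload after a Set-Cookie produces a
        // different cache key and forces a fresh network request.
        PRUint32 keyPart = GetKeyPartForHostPath(httpChannel);  // hypothetical
        return cachingChannel->AddToCacheKey(keyPart);
    }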
Priority: P3 → --
Priority: -- → P3
Unfortunately I don't have a data structure that shows me all the cookies for a particular host/domain, so there's nowhere that I could store the last key value that I used. But I could generate a hash of the cookie string each time I create one. Or, equivalently, you could provide an alternate routine to addToCacheKey (perhaps AddToCacheString) in which I can pass in the whole cookie string and you can generate a hash.
But how accurate is a hash? How would we handle collisions? How do you determine which cookies must be sent for a particular URL? There must be some mapping from URL to the set of cookies which must be sent; couldn't we find a place to store an integer somewhere in the cookie database?
> but how accurate is a hash? how would we handle collisions?

I think the probability of a collision would be extremely small. We wouldn't "handle" them, because we would have no way of even knowing when they occur. Instead, the cache would simply fail in the same manner as described in this bug report. So we haven't solved the problem, but we have considerably reduced its likelihood of occurring.

> how do you determine which cookies must be sent for a particular URL?

By making a linear pass through a list of cookies and looking for a hostname match.

> there must be some mapping from URL to the set of cookies which must be sent

No, there isn't.

> couldn't we find a place to store an integer someplace in the cookie database?

Not the way the database is currently implemented.
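For illustration, here is the kind of cookie-string hash being discussed; the particular hash function is an assumption (any 32-bit string hash would do), not the actual patch:

    #include <cstdint>
    #include <string>

    // Illustrative only: a 32-bit FNV-1a hash of the full cookie string,
    // the sort of value morse suggests feeding to the proposed
    // addToCacheKey()/AddToCacheString(). A collision would simply
    // reproduce this bug for that one pair of cookie strings, as noted.
    uint32_t HashCookieString(const std::string& cookies)
    {
        uint32_t h = 2166136261u;        // FNV offset basis
        for (unsigned char c : cookies) {
            h ^= c;
            h *= 16777619u;              // FNV prime
        }
        return h;
    }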
I am getting it also, at:

Error loading URL http://www.vlak-bus.cz/Conn.asp?tt=2 : 2152398850
Error loading URL http://www.vlak-bus.cz/ : 2152398850
Document http://www.vlak-bus.cz/Conn.asp?tt=2 loaded successfully
Error loading URL http://www.vlak-bus.cz/Conn.asp?tt=2 : 2152398850
Error loading URL http://www.vlak-bus.cz/Conn.asp?tt=2 : 2152398850

I can get the page to display successfully if I switch to offline mode.
*** Bug 83162 has been marked as a duplicate of this bug. ***
darin, per conversation with him
Assignee: neeti → darin
adding keywords from bug 83162
Target Milestone: mozilla0.9.2 → mozilla0.9.3
Attached patch v1.0 required http patch (deleted) — Splinter Review
I tested gordon's patch along with my patch, and it seems to solve the problem. From a voice mail left by gordon, I think his concerns about his patch (i.e., the reasons he claims it's only a draft) are pretty much resolved. He was worried that for pending READ requests we would be treating a Close without a preceding MarkValid as if the cache entry were really valid. This is correct IMO, since READ requestors "don't care at all" about the validity of a cache entry. Anyway, as gordon said: we need it to be this way to support offline access to the cache.
sr=darin (on gordon's patch)
Status: NEW → ASSIGNED
sr
fix checked in (on trunk)
Whiteboard: fixed on trunk
The fix works for me on the testcase I submitted, using 2001062704 on Windows ME.
Works perfectly for me. Thanks to everyone who helped; now I don't need to reboot into Windows and use IE when I want to grab lecture notes for class. =D
gagan: how about landing this on the ns branch?
-> gagan (for checkin on the nsbranch)
Assignee: darin → gagan
Status: ASSIGNED → NEW
I am gagan for the time being...
Assignee: gagan → dougt
The third attachment is PDT+ for the branch.
Whiteboard: fixed on trunk → PDT+, fixed on trunk
Landed this on the branch.

Checking in nsHttpChannel.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpChannel.cpp,v <-- nsHttpChannel.cpp
new revision: 1.31.2.3; previous revision: 1.31.2.2
done
Status: NEW → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Reopening. I think the second attachment needed to go in on the branch along with the third attachment. This may also be behind bug 89472; see my comments there.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Is anyone around right now to try to correct this? On 2001-06-26 19:27, darin refers to a patch from Gordon. Is it possible *that* is not on the branch? If we don't have a quick answer, we should back this out and see if things get better.
The check-in into the trunk busted non-ASCII pages; see bug 89643. Please back it out.
> The check-in into the trunk busted non-ASCII pages; see bug 89643. Please back it out.

Sorry, it should be: the check-in into the *m92 BRANCH* busted non-ASCII pages; see bug 89643. Please back it out.
I backed out my change, which caused the non-ASCII regression.
> I backed out my change, which caused the non-ASCII regression.

John, I think that you are right. Bug 89643 is a dup, and the cache service patch should have been checked in as well. The tree can reopen once leaf picks up my backout change and verifies that non-ASCII pages load.

I will update my tree, apply
http://bugzilla.mozilla.org/showattachment.cgi?attach_id=40227
http://bugzilla.mozilla.org/showattachment.cgi?attach_id=40111
to the branch, and verify that:
a. the build allows pages to complete
b. it does not hang on http://ilearn.ucr.edu
c. it does not break non-ASCII pages
I applied both patches onto the branch. However, I still cannot load http://ilearn.ucr.edu; the same continuous-reload problem exists. If I press Stop and then shift-reload, the page loads fine. Once it loads fine, I cannot reproduce the problem. Seems like the branch is still missing something.
without this patch, the browser hangs.
I applied the two patches (06/26/01 11:33 and 06/26/01 19:23) to my m9.2 branch, and it has no international problem. dougt, please also take a look at bug 82720. Is that a dup of this one? I cannot reproduce that bug after applying your patch here.
gagan and I believe that the *real* bug here is that "pragma: no-cache" does not work. We are unsure of what exactly the proposed patches here do, but we are sure that they do not fix the problem. (Maybe they should be backed out of the trunk?) We do not think that we can have a fix ready in the PDT+ timeframe, so we are thinking of dropping this feature from the next release off the 0.9.2 branch. Thoughts?
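For context, honoring "Pragma: no-cache" means a stored response carrying that header must be revalidated with the server rather than served straight from the cache. A generic sketch of that check (not Mozilla's actual cache code; names are illustrative):

    #include <map>
    #include <string>

    // Generic illustration of the behavior described as broken here:
    // a response stored with "Pragma: no-cache" (or Cache-Control:
    // no-cache) must not be reused without going back to the server.
    bool MayServeFromCacheWithoutRevalidation(
            const std::map<std::string, std::string>& responseHeaders)
    {
        auto it = responseHeaders.find("Pragma");
        if (it != responseHeaders.end() && it->second == "no-cache")
            return false;  // must revalidate with the network
        it = responseHeaders.find("Cache-Control");
        if (it != responseHeaders.end() &&
            it->second.find("no-cache") != std::string::npos)
            return false;
        return true;
    }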
gagan wants this one.
Assignee: dougt → gagan
Status: REOPENED → NEW
removing 'fixed on trunk' cuz it's not...
Whiteboard: PDT+, fixed on trunk → PDT+
marking dep on bug 90288
Depends on: 90288
No longer depends on: 90288
The patch in bug 90288 will fix this bug.
Whiteboard: PDT+ → PDT+, have patch in bug 90288
Are we done with this now?
done when bug 90288 gets checked in.
At this stage it's only a dup of bug 90288; marking as such and making things look happier... :) *** This bug has been marked as a duplicate of 90288 ***
Status: NEW → RESOLVED
Closed: 24 years ago
Resolution: --- → DUPLICATE
verified dup
Status: RESOLVED → VERIFIED