Closed Bug 115252 Opened 23 years ago Closed 23 years ago

going to http://orange.dk produces redirection error

Categories

(Core :: Networking: HTTP, defect, P1)

x86
Windows 2000
defect

Tracking

()

VERIFIED FIXED
mozilla0.9.8

People

(Reporter: bugzilla, Assigned: badami)

References

()

Details

(Whiteboard: [www.nytimes.com])

Attachments

(3 files, 5 obsolete files)

entering besked.dk into the location bar and hitting enter produces: "Redirection limit for this URL exceeded. Loading aborted." Hitting enter again in the location field loads the page. build 20011213
http://besked.dk/ redirected me to http://myorange.dk/besked/ which redirected me to http://myorange.dk/besked/?sid=$$rgnEh42YVhkNhOkYzr88UFGF$$&map_cookie_check=1, which then redirected me back to http://myorange.dk/besked/ after setting some cookies. It wouldn't surprise me if some combination of mozilla's cookie or other settings made this site loop.
using the 2001121408 linux build, i had no problem visiting this site. we might want to increase the redirection limit from 10 to 20 thereby greatly increasing the likelihood that the redirection error results from a true infinite redirection loop.
I get this error on win2k with a 4-hour-old CVS build. Is there something wrong on win32?
possibly... this may depend on your cookie settings, etc. matti: can you try a new profile?
It's the same with a new profile. Only one email per bug change needed: I'm watching you :-)
ok thx for the quick feedback :)
-> vinay
Assignee: darin → badami
It works now for me with a 10-minute-old CVS build (updated)
I don't see why increasing the limit does any good. Either a) the site is looping on its own, or b) some cookie/js/whatever setting is making it loop, or c) mozilla is strangely broken. In the first two cases, looping will still occur no matter what the limit. In the last, you don't want to cover up a bug, do you? :-)
Works for me. Reporter, can you please check with the latest build?
This one works for me. Please reopen if it does not work with the latest moz builds.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → WORKSFORME
just got the error using 20011231 on Win2k
Status: RESOLVED → REOPENED
Resolution: WORKSFORME → ---
If anyone's keeping score, the bug is still present when clicking on www.nytimes.com links in version 2002010203.
I've been getting this error on nytimes.com as well (build 2002010308 at the moment). Very annoying. Clicking on the link the second time usually loads it.
Darin, does setting NSPR_LOG_MODULES=nsHttp:2 allow logging of the http request/response part on optimized bits also? If so, we can ask these people to try it, since I'm not able to reproduce it for whatever reason.
couldn't reproduce it on besked.dk today, but could on orange.dk. to reproduce: 1) exit mozilla, 2) launch mozilla, 3) enter orange.dk into the Location field and hit enter. you'll get the error. I'll attach my http log done with set NSPR_LOG_MODULES=nsHttp:2
Summary: going to http://besked.dk produces redirection error → going to http://orange.dk produces redirection error
Attached file http log (deleted) —
Attachment #63474 - Attachment mime type: application/octet-stream → text/plain
Actually, I half take that back. The bell.ca site has some really bizarre behavior that could legitimately be considered redirect looping. HOWEVER, mozilla only allows one redirect if it appears to be pointing back to itself (i.e. the URI in the Location header is the same as the current location). I made this sample script to demonstrate: http://ophelia.dogcow.org/cgi-bin/doredir.cgi

This script checks for the presence and value of a cookie, and if it isn't set (or is set to the wrong thing), it sets the cookie and redirects back to itself. If the cookie is set and has the right value, it shows the page.

This works fine in IE -- the cookie gets set, the browser loads the page again and gets the data. In Navigator 4, it still works but not as cleanly -- I get a page that says "Found The document has moved here" with a link back to the page. But mozilla doesn't let me get to the page at all. It just pops the "redirection limit exceeded" error.

It seems that at the very least, it should use the Nav 4.x behavior of giving a link to the page. But it doesn't really seem any more dangerous to just allow 10 redirects in this case also. If it really is an infinite loop, it'll be stopped as effectively as it would if the URLs weren't the same. And if it's not, the user experience is a lot nicer.
This bug happens because when we process a 302 we are caching the response. Hence we keep redirecting to the same site over and over again. The HTTP/1.1 spec says the following in section 10.3.3:

    The requested resource resides temporarily under a different URI.
    Since the redirection might be altered on occasion, the client SHOULD
    continue to use the Request-URI for future requests. This response is
    only cacheable if indicated by a Cache-Control or Expires header field.

All testing was against the test cgi at http://ophelia.dogcow.org/cgi-bin/doredir.cgi. I do agree that http://www.bell.ca/shop/application/commercewf?origin=*.jsp&event=link(productDetail)&wlcs_catalog_item_sku=58303 is flawed and seems to be redirecting to itself. This is an invalid url and we are correctly detecting the looping in that case.
Attached patch Do not cache results for a 302 unless the response tells u to (obsolete) (deleted) — — Splinter Review
1. nsHttpChannel.cpp: Separate case for 302 which first checks if the response can be cached as per the spec.
2. nsHttpResponseHead.[cpp,h]: Added new method PRBool nsHttpResponseHead::IsCacheable(PRInt32 httpStatus). Broke up PRBool nsHttpResponseHead::MustRevalidate() into two parts to enable code reuse from IsCacheable.

An alternative is to not cache any 302 responses at all. This may also just work out fine.
Target Milestone: --- → mozilla0.9.8
i talked to vinay via AIM about this bug... 302's are supposed to be cached for the purposes of offline browsing. however, unless the server specifies cache control headers that permit reuse, the cached 302 redirects should never be used to satisfy normal online http requests. the real bug here seems to be that the cached 302's are being used when the site says that they should not.
Here is what is going on. The request url was for http://ophelia.dogcow.org/cgi-bin/doredir.cgi. The server sends back a 302 with a cookie, the Location header being the same as the original uri. The response gets into the cache with expiration time being now + timeRemaining. Note that now is timeInSecs and timeRemaining is 0.

The 302 triggers a redirect which finds the response in the cache. The access for the cache entry is 3 (READ_WRITE). Since we do time in seconds, the check for expired time in nsHttpChannel::CheckCache()

    if (NowInSeconds() <= time)

succeeds. This results in triggering another redirect which again finds the cache entry, but with access being READ_ONLY. Hence this now exits CheckCache as a result of the following:

    // If we were only granted read access, then assume the entry is valid.
    if (mCacheAccess == nsICache::ACCESS_READ) {
        LOG(("nsHTTPChannel::CheckCache returning because we have only read access mCachedContentIsValid is 1"));
        mCachedContentIsValid = PR_TRUE;
        return NS_OK;
    }

This continues until the redirection limit is reached.

1. Is it that offline browsing mode required this check for (mCacheAccess == nsICache::ACCESS_READ)?
2. If so, then should we AND that condition with this check? Is that the only case in which this would be true?
3. Would it suffice to correct the IsCacheable logic in my previous patch, adding the caching logic to it?
Attached file trace of the cache behaviour (deleted) —
Attaching a trace of debug messages from running mozilla with the logging at level 5. Search in this attachment for "got cache entry". Note that access = 3 (READ_WRITE) only the first time and is 1 (READ_ONLY) subsequently. Also search for the string "nsHTTPChannel::CheckCache returning because we have only read access". Note that this happens whenever we get access READ_ONLY.
vinay: good work collecting information on this. one simple solution to this bug might be to modify the logic in UpdateExpirationTime. in the case of a zero freshnessLifetime, we could set an expiration time equal to zero instead of now + timeRemaining. i think this is better than trying to rework the CheckCache logic... since the cache entry's expiration time is essentially a parameter to CheckCache.
Attached patch based on darins comments (obsolete) (deleted) — — Splinter Review
If timeremaining is 0, set expiration time to 0.
i think it might be better to ignore currentAge if freshnessLifetime is zero. i say this because you could have a 302 response with a non-zero currentAge... that might result from a slow connection, and you would still want to force a zero expiration time.
Darin, are the early returns from UpdateExpirationTime on failure by intent? I mean to point out that in these cases we exit without updating the expiration time on the cache. Should we be setting the expiration time on the cache entry to 0 in these cases? I would like to modify the code to read as follows. However, with this change the behavior in the failure cases changes. I'm not sure if the early returns were intentional or otherwise.

    nsresult
    nsHttpChannel::UpdateExpirationTime()
    {
        PRUint32 now = NowInSeconds(), timeRemaining = 0, expirationTime = 0;

        NS_ENSURE_TRUE(mResponseHead, NS_ERROR_FAILURE);

        if (!mResponseHead->MustRevalidate()) {
            nsresult rv;
            PRUint32 freshnessLifetime = 0, currentAge = 0;

            rv = mResponseHead->ComputeFreshnessLifetime(&freshnessLifetime);
            if (NS_SUCCEEDED(rv) && (freshnessLifetime > 0)) {
                rv = mResponseHead->ComputeCurrentAge(now, mRequestTime, &currentAge);
                if (NS_SUCCEEDED(rv)) {
                    LOG(("freshnessLifetime = %u, currentAge = %u\n",
                         freshnessLifetime, currentAge));
                    if (freshnessLifetime > currentAge) {
                        timeRemaining = freshnessLifetime - currentAge;
                        if (timeRemaining > 0)
                            expirationTime = now + timeRemaining;
                    }
                }
            }
        }
        return mCacheEntry->SetExpirationTime(expirationTime);
    }
the early returns were okay because they would have resulted in canceling of the channel and therefore dooming of the cache entry. in other words, those errors can be thought of as exceptions. also, there is no reason to check if timeRemaining > 0... it always will be. if you remove this check, then timeRemaining can also go away... instead you'd have expirationTime = now + freshnessLifetime - currentAge. the declaration/assignment of now should probably also be moved inside the MustRevalidate if block. BTW: MustRevalidate is now MustValidate... that is the change i mentioned to you yesterday that would break this patch.
I'm not up enough on the HTTP specs to know whose fault this bug actually is. But I certainly hope that some kind of fix, override option or workaround is included in the 0.9.8 version. When a site like www.nytimes.com has links that cannot be accessed without bumping into this error, there is a real problem for end users.
ric: no worries... the fix is definitely in hand.
Severity: normal → major
Priority: -- → P1
Whiteboard: [www.nytimes.com]
Attached patch one more (obsolete) (deleted) — — Splinter Review
1. Return failure status if there was a failure; the earlier patch would have returned the wrong status from the function.
2. Got rid of timeRemaining.
3. Moved the declaration of now into the inner block.
4. The only difference between the previous code and the current one is that we set the expiration time of the cache entry to 0 on failure. This should be ok in my opinion. Darin, can you confirm please?

http://ophelia.dogcow.org/cgi-bin/doredir.cgi works with this patch.
Comment on attachment 64473 [details] [diff] [review] one more returning immediately on NS_FAILED(rv) would simplify your patch quite a bit... for example, it would eliminate the need for temprv and eliminate at least one level of nesting. i suggest that you rewrite the patch to return on failure immediately instead.
Attached patch with the early returns (obsolete) (deleted) — — Splinter Review
there is a subtle error in this patch that i just caught. it turns out that a 0 expiration time actually translates to an infinite expiration time as far as the cache is concerned. even though HTTP sets the "doom-if-expired" cache flag to false, the memory cache will still use the expiration time in its eviction ranking, so we need to NOT set an expiration time of zero. for the purposes of this patch, a value of 1 would suffice. (i've filed bug 120833 to get the sense of an expiration time of zero reversed.)
Keywords: mozilla0.9.8
Attachment #63930 - Attachment is obsolete: true
Attachment #64086 - Attachment is obsolete: true
Attachment #64473 - Attachment is obsolete: true
Attachment #64715 - Attachment is obsolete: true
Attachment #65668 - Attachment is obsolete: true
Comment on attachment 65673 [details] [diff] [review] revised per comments from gordon (fixed a potential overflow bug) sr=mscott
Attachment #65673 - Flags: superreview+
Comment on attachment 65673 [details] [diff] [review] revised per comments from gordon (fixed a potential overflow bug) r=blizzard
Attachment #65673 - Flags: review+
got an r=gordon as well... so a=blizzard?
Comment on attachment 65673 [details] [diff] [review] revised per comments from gordon (fixed a potential overflow bug) a=dbaron for 0.9.8
Attachment #65673 - Flags: approval+
Keywords: mozilla0.9.8+
fixed-on-trunk
Status: REOPENED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
*** Bug 119716 has been marked as a duplicate of this bug. ***
*** Bug 121128 has been marked as a duplicate of this bug. ***
verified: 01/25/02 trunk builds - win NT4, linux rh6, mac osX
Status: RESOLVED → VERIFIED
This is still a bug. I have downloaded build 2002011103, and visited the www.nytimes.com web site. [You need to register to be able to get past the main page.] Click on any link on the main page, and there's that redirection limit error. In other words, I see no change from the original bug, at least at the NY Times site.
Status: VERIFIED → REOPENED
Resolution: FIXED → ---
> ... downloaded build 2002011103 ...

...and since the fix landed on the trunk on 20020119, one shouldn't expect to see it fixed in builds prior to that date. For what it's worth, I am not seeing the redirection limit error on the nytimes.com site with current builds.
Status: REOPENED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
yeah, this was easily reproducible at various sites. It's working now.
Status: RESOLVED → VERIFIED
I just downloaded (25 jan 2002, 17:00 PST) version 2002012503, on Windows 2000, SP2. I am definitely still seeing the Redirection error at the www.nytimes.com site. I'll shut up if you guys think this is fixed, but I'm still seeing a problem on my system. If anyone's interested, I could try some experiments.
Status: VERIFIED → REOPENED
Resolution: FIXED → ---
ricst@usa.net: Have you downloaded a trunk or branch build?
ric: check "help->about mozilla" for the useragent string... it'll tell you if you are using a trunk or branch build. thx!
*** Bug 121818 has been marked as a duplicate of this bug. ***
I've downloaded a recent nightly (ftp://ftp.mozilla.org/pub/mozilla/nightly/2002-01-27-11-trunk/mozilla-win32-installer-sea.exe). It identifies itself as: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:0.9.8+) Gecko/20020127

Given the path, I assume this is a trunk build, and so the bug should be resolved for my installation. However, the bug still manifests itself at various sites I visit, particularly:
www.nytimes.com
www.techdata.ca (and .com too)

Not sure why this is happening... I once commented on this bug in the n.p.m.general newsgroup (Oct 2001 or earlier), but after downloading newer builds, the problem disappeared. It reappeared early Jan 2002, but I haven't a clue why. Hope this helps.

BTW: if you need to contact me, do so at rcefis@rogers.com; the email address listed for my bugzilla account no longer exists (how can I change it? should I email someone?)
*ahem* Let me try again... I downloaded the installer (ftp://ftp.mozilla.org/pub/mozilla/nightly/2002-01-27-08-0.9.8/mozilla-win32-installer-sea.exe) which identifies itself as: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:0.9.8) Gecko/20020127

With my old profile, I still see the redirection limit problem. When I created a new profile, the problem was solved for every site I visited for which this problem existed. Why should profiles make such a difference? (The old profile was created with one of the 0.6 or 0.8.0 releases; the new profile was created with the build linked in this comment.)

Glad to see this works though!
Can you mail me an http log? Please refer to comment #15 for the specifics.
i think i know what's going on. before the patch for this bug landed, we were computing expiration times relative to the response time + some non-negative delta. these expiration times were being written to the disk cache. patched builds OTOH set a zero expiration time for entries that should not be cached for normal page loads.

my guess is that your old profile contains some of the older entries w/ expiration times sufficiently in the future relative to your system clock. this would cause mozilla to reuse the cached redirects over and over and over again until reaching the limit on the number of redirects.

if you can run about:cache using your old profile and look at all the entries pertaining to nytimes.com, or supply the http log as vinay requested, that might enable us to determine what went wrong with your old profile. thx!
marking FIXED
Status: REOPENED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
Darin, I tried about:cache, and there were *no* entries for nytimes.com (I purge the cache, but infrequently). Even under this scenario, the old profile gives the redirection error. I'm not that concerned about my old profile - I've created a new one and migrated my settings from the old one, and I plan on doing so again when 1.0 is released. I just thought I'd report the behaviour for the sake of completeness. I don't have any problems associated with this bug with the new profile. As for the http log, I assume the instructions (given in comment 15) are for individuals who create their own builds (?) - I don't currently have the tools to do so. I installed a web proxy to test; I'll put the results in another comment.
Here's the http log from the web proxy. I did the following:
1. Visit the www.nytimes.com homepage.
2. Select a link (http://www.nytimes.com/reuters/international/international-mideast.html).

The last six lines of the log represent the link I followed that caused the redirection limit error. All other lines were from step 1.

format=%Ses->client.ip% - %Req->vars.pauth-user% [%SYSDATE%] "%Req->reqpb.proxy-request%" %Req->srvhdrs.clf-status% %Req->vars.p2c-cl% %Req->vars.remote-status% %Req->vars.r2p-cl% %Req->headers.content-length% %Req->vars.p2r-cl% %Req->vars.c2p-hl% %Req->vars.p2c-hl% %Req->vars.p2r-hl% %Req->vars.r2p-hl% %Req->vars.xfer-time% %Req->vars.actual-route% %Req->vars.cli-status% %Req->vars.svr-status% %Req->vars.cch-status%

[myIP] - - [31/Jan/2002:17:02:19 -0500] "GET http://www.nytimes.com/ " 200 0 200 59216 0 0 0 176 661 176 9 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:19 -0500] "GET http://www.nytimes.com/js/csssniff.js " 304 0 304 0 0 0 0 120 787 120 2 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:20 -0500] "GET http://www.nytimes.com/css/sft1.css " 304 0 304 0 0 0 0 120 786 120 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:21 -0500] "GET http://graphics4.nytimes.com/images/misc/spacer.gif " 304 0 304 0 0 0 0 239 717 239 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:21 -0500] "GET http://graphics4.nytimes.com/RealMedia/ads/Creatives/nytnytHP/96X60_EE-orange.gif " 304 0 304 0 0 0 0 254 717 254 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:21 -0500] "GET http://graphics4.nytimes.com/images/section/homepage/NYT_home_banner.gif " 304 0 304 0 0 0 0 239 717 239 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:22 -0500] "GET http://graphics4.nytimes.com/RealMedia/ads/Creatives/tiffan12/nyt_val.jpg " 304 0 304 0 0 0 0 264 760 264 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:22 -0500] "GET http://graphics4.nytimes.com/images/global/global_nav/classifieds/jobs.gif " 304 0 304 0 0 0 0 239 717 239 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:22 -0500] "GET http://amch.questionmarket.com/adsc/d96342/1/96438/randm.js " 200 505 200 505 505 0 0 331 653 331 1 DIRECT FIN FIN CREATED
[myIP] - - [31/Jan/2002:17:02:22 -0500] "GET http://graphics4.nytimes.com/images/global/global_nav/classifieds/realestate.gif " 304 0 304 0 0 0 0 247 717 247 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:23 -0500] "GET http://graphics4.nytimes.com/images/global/global_nav/classifieds/automobiles.gif " 304 0 304 0 0 0 0 247 717 247 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:23 -0500] "GET http://graphics4.nytimes.com/images/global/global_nav/gn_news.gif " 304 0 304 0 0 0 0 239 717 239 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:23 -0500] "GET http://graphics4.nytimes.com/images/global/global_nav/gn_features.gif " 304 0 304 0 0 0 0 247 717 247 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:23 -0500] "GET http://graphics4.nytimes.com/images/global/global_nav/gn_newspaper.gif " 304 0 304 0 0 0 0 263 717 263 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:24 -0500] "GET http://graphics4.nytimes.com/ads/vail/vailbigdeal86x40.gif " 304 0 304 0 0 0 0 272 717 272 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:24 -0500] "GET http://graphics4.nytimes.com/ads/gap/gapmaternity2.gif " 304 0 304 0 0 0 0 247 717 247 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:24 -0500] "GET http://graphics4.nytimes.com/ads/juno_mktplce.gif " 304 0 304 0 0 0 0 263 717 263 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:24 -0500] "GET http://graphics4.nytimes.com/ads/british/86x40_2k.gif " 304 0 304 0 0 0 0 263 717 263 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:25 -0500] "GET http://graphics4.nytimes.com/ads/half/86x40box.gif " 304 0 304 0 0 0 0 239 717 239 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:25 -0500] "GET http://graphics4.nytimes.com/images/global/global_search/gs_search.gif " 304 0 304 0 0 0 0 295 717 295 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:25 -0500] "GET http://graphics4.nytimes.com/ads/starbucks.gif " 304 0 304 0 0 0 0 239 717 239 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:25 -0500] "GET http://graphics4.nytimes.com/images/2002/01/31/international/31cnd-fight.1.jpg " 200 6129 200 6129 6129 0 0 261 675 261 0 DIRECT FIN FIN CREATED
[myIP] - - [31/Jan/2002:17:02:25 -0500] "GET http://graphics.nytimes.com/images/promos/homepage/29duck-overlay.gif " 304 0 304 0 0 0 0 110 750 110 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:26 -0500] "GET http://graphics.nytimes.com/images/promos/homepage/31tali-overlay.gif " 304 0 304 0 0 0 0 110 750 110 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:26 -0500] "GET http://graphics.nytimes.com/images/misc/spacer.gif " 304 0 304 0 0 0 0 101 716 101 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:26 -0500] "GET http://graphics.nytimes.com/images/misc/formArrow2.gif " 304 0 304 0 0 0 0 101 716 101 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:26 -0500] "GET http://graphics4.nytimes.com/images/section/c_markets.gif " 304 0 304 0 0 0 0 272 717 272 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:26 -0500] "GET http://graphics4.nytimes.com/images/2001/11/05/wall_st_60x60.gif " 304 0 304 0 0 0 0 296 717 296 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:27 -0500] "GET http://graphics4.nytimes.com/images/promos/homepage/19pog-banner.gif " 304 0 304 0 0 0 0 263 717 263 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:27 -0500] "GET http://graphics4.nytimes.com/images/2002/01/27/sports/27POGblackwell.1.gif " 304 0 304 0 0 0 0 239 717 239 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:27 -0500] "GET http://graphics4.nytimes.com/images/section/homepage/cat_wire.gif " 304 0 304 0 0 0 0 247 717 247 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:27 -0500] "GET http://graphics4.nytimes.com/images/section/homepage/cat_abuzz.gif " 304 0 304 0 0 0 0 247 717 247 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:27 -0500] "GET http://graphics4.nytimes.com/RealMedia/ads/Creatives/strade11-nyt1/ODDSIZE487.gif " 304 0 304 0 0 0 0 263 717 263 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:52 -0500] "GET http://www.nytimes.com/reuters/international/international-mideast.html " 302 0 302 0 0 0 0 288 669 288 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:54 -0500] "GET http://www.nytimes.com/reuters/international/international-mideast.html " 302 0 302 0 0 0 0 288 695 288 2 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:55 -0500] "GET http://www.nytimes.com/reuters/international/international-mideast.html " 302 0 302 0 0 0 0 288 695 288 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:56 -0500] "GET http://www.nytimes.com/reuters/international/international-mideast.html " 302 0 302 0 0 0 0 288 695 288 0 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:57 -0500] "GET http://www.nytimes.com/reuters/international/international-mideast.html " 302 0 302 0 0 0 0 288 695 288 1 DIRECT FIN FIN -
[myIP] - - [31/Jan/2002:17:02:57 -0500] "GET http://www.nytimes.com/reuters/international/international-mideast.html " 302 0 302 0 0 0 0 288 695 288 0 DIRECT FIN FIN -

I assume the 302s refer to the redirects... is this info specific enough? I'll mention again that my new profile does not exhibit this problem; I only submit this information in the event that it may be useful in the future.
*** Bug 122827 has been marked as a duplicate of this bug. ***
ric: actually the instructions in comment #15 apply to _all_ mozilla builds. vendor-specific builds may have logging disabled, but by default mozilla builds have this enabled for the purpose of helping bug reporters submit more useful bug reports ;-)
ric: thanks for the proxy server output, but it really doesn't tell me much. it only confirms that redirects are happening repeatedly.
(side comment: ... but the proxy logs do tell us that the bug is ~different from the situation previously noted with nytimes.com and cookie-validation, in that the previous situation (detailed in bug 119716) would have generated a request for http://www.nytimes.com/whatever... which would redirect to http://www.nytimes.com/auth/login?URI=http://www.nytimes.com/whatever... and then loop between the above two URLs. The above proxy log shows that it is only looping on a single 302 URL.)
Actually, sorry ... this may not be different, since it's not clear which requests were serviced internally by the disk cache (unheard by the proxy server) and which requests were actually serviced by the proxy server.
jrgm: exactly. but, also there is little distinction in the code between redirecting to the same URL and redirecting between two different URLs. they follow identical code paths at the networking/cache level.
wah! Just downloaded 0.9.8 and visited http://www.nytimes.com. This bug is still present on my Windows 2000 system. I can send further information, but I would need explicit directions on what is needed and how to get it. [I'm a network security techie... so I'm not familiar with the ins and outs of Mozilla logging/debugging.]
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
it seems that this is only fixed in the trunk builds: fixed-on-trunk (comment #42), and that means that 0.9.8 doesn't have this bug fix.
matti: actually, this bug should have been fixed in mozilla 0.9.8... the branch wasn't cut until after the patch landed. ric: try this: try a new profile... that might fix the problem. maybe.
Good grief! Creating a new profile seems to have removed the problem, at least as I was seeing it on the www.nytimes.com site. Can someone explain that?
The corruptability of profiles may be a bug. See bug 123027 for some current work.
ric: see comment #56
Status: REOPENED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
*** Bug 125657 has been marked as a duplicate of this bug. ***
v 20020221
Status: RESOLVED → VERIFIED
*** Bug 114703 has been marked as a duplicate of this bug. ***
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.1a+) Gecko/20020714 I'm having problems with nytimes.com. Following the chain of bugs, it's pointed me here. I recommend re-opening. http://www.nytimes.com/2002/07/14/national/14PILO.html?todaysheadlines
I regularly go to nytimes.com, and I haven't been having a problem with current trunk builds. However, if you are experiencing a problem, can you open a command shell, enter these commands (for win32 or *nix tcsh) and then start mozilla from that command line? The output in 'http.log' would be useful.

    set NSPR_LOG_MODULES=nsHttp:5
    set NSPR_LOG_FILE=http.log