Closed Bug 141702 Opened 23 years ago Closed 22 years ago

Redirection limit exceeded [not sending Authorization header on 302]

Categories

(Core :: Networking: HTTP, defect, P2)

x86
All
defect

Tracking


RESOLVED FIXED
mozilla1.4alpha

People

(Reporter: hc, Assigned: skasinathan)

References


Details

(Whiteboard: [http/1.1])

Attachments

(4 files, 3 obsolete files)

1. Go to the URL included above.
2. Click on "reply to message".

Get this alert: "Redirection limit for this URL exceeded. Unable to load the requested page."

What is the current limit (maybe add it to the message)? How can the limit be increased?
Please always include the build ID in bug reports.
The URL loads fine on XP, build ID 2002050108. WFM. Possible dup of bug 133973 or bug 127348.
Have you disabled cookies?
wfm using build 2002050108 on Win2k (trunk).
Build = Talkback 2002042908
OS = NT 4
Cookies = Enable all cookies, Disable cookies in Mail & Newsgroups, Ask me before storing cookie
If you need to, use the following ZDNet account for testing this bug:
User Name: mozilla141702
Password: mozilla141702
Similar error messages have been reported in the newsgroups by several other users; confirming, though I am unable to reproduce it myself. An RC 2 user on WinXP Home and an RC 2 user on WinME see this problem at <http://www.klingonarmadainternational.org/>. Another RC 2 user on WinXP sees this problem when trying to load her Excite home page.
Status: UNCONFIRMED → NEW
Ever confirmed: true
*** Bug 144771 has been marked as a duplicate of this bug. ***
The www.klingonarmadainternational.org issue is a site problem. Sorry.
I see this error on http://www.sina.com.cn/
> I see this error on http://www.sina.com.cn/

I forgot to mention I'm running the 7/24 commercial branch: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.1) Gecko/20020724 Netscape/7.0
http://www.sina.com.cn/ does not display this error with NS 4.79 and IE6.
Keywords: 4xp
This looks like a duplicate of bug 128443.
Depends on: 128443
Darin came by my cube. We enabled cookies everywhere, and I still get the error message. We created an http.log that Darin will look at.
yeah, interestingly bob had cookies blocked on the site. he re-enabled cookies for the site (in fact, he re-enabled cookies for all sites). he then did a shift-reload, and still got the redirection limit exceeded dialog. so, something really weird is going on, and i'm hoping that the http log file will reveal the problem.
Severity: normal → major
Status: NEW → ASSIGNED
Priority: -- → P2
Target Milestone: --- → mozilla1.1alpha
Another URL that causes "Redirection limit exceeded" bug: http://www.happynese.de/shop/popup/f_index.asp?lvl1=1
Looks like there are several threads/bug #'s for this problem... As far as I know, this is NOT a cookie problem. This alert pops up at sites such as www.nationalgeographic.com, www.starbucks.com, and www.oreillynet.com.

There is a work-around for you Privoxy/Junkbuster people. You'll need to add another filter (to your default.filter file), and then edit your privoxy.action file to stop this annoying alert (or do something equivalent).

As far as I can determine, it's being caused by Mozilla trying to load an IFRAME from ad.doubleclick.net (change your Privoxy debug level to include URLs (1), regex (64), and kill popups (1024) -- it should be "debug 1089" or equivalent; debug levels may be different in JunkBuster). You should observe the alert being popped up when a connection to ad.doubleclick.net is being crunched. Perhaps more generally, this alert is caused when Mozilla is trying to load an IFRAME that is blocked.

Add this to your default.filter file:

#################################################################################
#
# doubleclick: Kill DoubleClick.net iframes
#
#################################################################################
FILTER: doubleclick Kill DoubleClick.net iframes
s|<iframe [^>]*doubleclick.net.*</iframe>|<!-- Squished doubleclick.net Embed -->|sigU

In your # Defaults section of privoxy.action, you should have a line to enable that filter, like:

+filter{doubleclick} \

If there are more ad-serving sites causing this behaviour, just add them to the filter. Now try www.nationalgeographic.com, www.starbucks.com, and www.oreillynet.com.
http://boards.gamers.com/messages/overview.asp?name=bitchboard works in IE5.5 but not in Mozilla trunk 2002081808 on WinME.
Several links on www.nytimes.com are also broken.
*** Bug 166213 has been marked as a duplicate of this bug. ***
When attempting to go to boards.gamers.com:

+++GET 281+++
GET / HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 281+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:49 GMT
Location: /messages/
Content-Length: 131
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=OKGDDIHAMJGCNKCBNEKPOIOO; path=/
Cache-control: private

+++CLOSE 281+++

+++GET 282+++
GET /messages/ HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 282+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:50 GMT
Location: /user/profiling/login/cookieread.asp?action=read&dest_url=%2Fmessages%2FDefault%2Easp%3F
Content-Length: 209
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=BLGDDIHACLOBEPACCFKBHCOC; path=/
Cache-control: private

+++CLOSE 282+++

+++GET 283+++
GET /user/profiling/login/cookieread.asp?action=read&dest_url=%2Fmessages%2FDefault%2Easp%3F HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 283+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:50 GMT
Location: /messages/Default.asp?
Content-Length: 143
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=CLGDDIHAOKMDLNFGBJIKJACG; path=/
Cache-control: private

+++CLOSE 283+++

+++GET 284+++
GET /messages/Default.asp? HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 284+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:51 GMT
Location: /user/profiling/login/cookieread.asp?action=read&dest_url=%2Fmessages%2FDefault%2Easp%3F
Content-Length: 209
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=DLGDDIHAKDLMOFKMACOAFLNP; path=/
Cache-control: private

+++CLOSE 284+++

+++GET 285+++
GET /user/profiling/login/cookieread.asp?action=read&dest_url=%2Fmessages%2FDefault%2Easp%3F HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 285+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:51 GMT
Location: /messages/Default.asp?
Content-Length: 143
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=ELGDDIHAGNLNIGGGOCIPLEEO; path=/
Cache-control: private

+++CLOSE 285+++

+++GET 286+++
GET /messages/Default.asp? HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 286+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:52 GMT
Location: /user/profiling/login/cookieread.asp?action=read&dest_url=%2Fmessages%2FDefault%2Easp%3F
Content-Length: 209
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=FLGDDIHAOKPAICJHAPMMFMGC; path=/
Cache-control: private

+++CLOSE 286+++

+++GET 287+++
GET /user/profiling/login/cookieread.asp?action=read&dest_url=%2Fmessages%2FDefault%2Easp%3F HTTP/1.1
Host: boards.gamers.com
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.2b) Gecko/20021013
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
Accept-Language: en-us, en;q=0.50
Accept-Encoding: gzip, deflate, compress;q=0.9
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: SITESERVER=ID=1f7b37c48cfa85ebbc147ee4ee03de05
Connection: keep-alive

+++RESP 287+++
HTTP/1.1 302 Object moved
Server: Microsoft-IIS/5.0
Date: Wed, 16 Oct 2002 12:27:52 GMT
Location: /messages/Default.asp?
Content-Length: 143
Content-Type: text/html
Set-Cookie: ASPSESSIONIDGGGGQPQQ=GLGDDIHAKDLPDLHLKPGEAKGL; path=/
Cache-control: private

+++CLOSE 287+++

etc.
OK, it needed the ASPSESSIONIDGGGGQPQQ cookie.. sorry about the long log.
This bug prevents Mozilla from viewing my company's web site, and it has been in every Mozilla build for a long while. I wish I could give you an account to see it on our site, but I think they'd fire me. To help isolate the cause, I could turn on any amount of debug tracing, if someone could give me some instructions or suggestions. Out of the box, Mozilla has the problem. It also has nothing to do with ad sites, because our site is self-contained. We do use a cookie to prevent login abuse (one account being shared by several users), so that could be part of the cause. When a user first comes into the tm3 website, a 401 error (unauthorized user) is generated to force the basic authentication login dialog. A cookie is sent along with the error to carry a unique session ID.
wrong milestone.

david: there are some environment variables you can set to enable mozilla HTTP logging. if you are on a windows machine, just open up a DOS prompt and type:

c:\> set NSPR_LOG_MODULES=nsHttp:5
c:\> set NSPR_LOG_FILE=c:\http.log

then launch mozilla from the DOS prompt. repro the problem, and then upload the resulting HTTP log (c:\http.log) to this bug report. thx!!
Target Milestone: mozilla1.1alpha → ---
The log file is empty, after following those instructions. I do have a way to reproduce the problem, though. Is there a different level or type of logging you'd like me to try? Maybe something from our web server log files? We're using iPlanet 4.1 SP5.
hmm... that's odd. what mozilla build are you using? a network packet trace would also be helpful (there's a good tool available from www.ethereal.com).
Attached file Ethereal network packet trace (deleted) —
According to the "Help - About Mozilla" page:
Mozilla 1.2a
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.2a) Gecko/20020910

I've captured the network packet trace from Ethereal (I think). I started a "Capture", clicked the link that brings up the Mozilla error dialog, waited until that dialog appeared, stopped the capture, and saved it as a file type of "libpcap (tcpdump, Ethereal, etc.)" -- this is attached as libpcap.dump
david: the packet trace only contains Xwindows and SSL traffic. is it a https:// site that is having the problem? if so, then packet tracing isn't an option. the mozilla logging as i described it should work with the mozilla build you mentioned. are you on a UNIX box?
nevermind the comment about a UNIX box... was just confused about seeing Xwindows traffic in your packet trace. the UA string you quoted indicates that you're using Win2k or WinXP.
Attached file mozilla nsHttp:5 log file (deleted) —
Yes the site's https. I use emacs :-) through my Windows desktop :-( I realized that with the quick launch thing, Mozilla wasn't picking up the environment variables. So I exited from the tiny tray icon, and did get a log file. See attached (but it doesn't say much).
wow.. that's very strange. the log file indicates that you haven't loaded any https:// URLs. what exactly did you do after starting up mozilla?
Attached file pre-exit mozilla nsHttp:5 log file (deleted) —
Yes, it is strange. When I exit mozilla, the log file seems to truncate to the tiny log file from before, at least sometimes. I just did it again, and attached the big log file before exiting. Here's what I did. After mozilla comes up with a blank page, I loaded https://www.tm3.com/news/main.htm?page=PullHeadlines&frame=content, which prompts for the user name and password, then displays the page. Then, when I truncate the URL to https://www.tm3.com/, I get the error.
that's because you are using quick launch (i would guess). quick launch annoyingly restarts the mozilla application after you close the last window. this is done to clean up memory leaks :(
this looks like a problem with basic auth actually. briefly, the transactions look like this:

1) C: GET /
   S: 401
2) C: GET / Authorization: ****
   S: 302 Location: /foopy/
3) C: GET /foopy/
   S: 401
4) C: GET /foopy/ Authorization: ****
   S: 302 Location: /
5) C: GET /
   S: 401
...

there is at least the problem that we are not automatically offering an Authorization header for "/foopy/" after sending one for "/" ...RFC 2617 says we should. that might be the cause of the problem.

-> moz 1.2
Keywords: mozilla1.2
Target Milestone: --- → mozilla1.2final
This bug shows up when trying to view any stories from the front page of www.nytimes.com. Would be nice for it to be fixed soon so we can read the news. Thanks, -- Chris
Chris: what version of mozilla are you using? the redirection limit was recently increased from 10 to 20 in order to hopefully eliminate the redirection limit reached errors on nytimes.com.
-> moz 1.3
Target Milestone: mozilla1.2final → mozilla1.3alpha
Sorry for the delay in response... My browser version is a nightly, from 2002102804. I decided to try a nightly after 1.2b was having the same problem.
chris: have you by chance modified your cookie privacy settings to restrict cookies in any way? doing so can sometimes lead to infinite redirection loops.
Summary: Redirection limit exceeded. → Redirection limit exceeded [not sending Authorization header on 302]
Whiteboard: [http/1.1]
Target Milestone: mozilla1.3alpha → mozilla1.3beta
Flags: blocking1.3b?
check this out: http://dealerpages.volvocars.se/de/dealers/190 (cookies turned off, Mozilla/5.0 (Windows; U; Windows NT 5.0; de-AT; rv:1.2.1) Gecko/20021130). It works when cookies are on!
Ole: disabled cookies are the cause, so this is not a bug.
Not much activity. Doesn't look like it should block 1.3beta.
Flags: blocking1.3b? → blocking1.3b-
Thanks. This bug seems fixed in Mozilla 1.3a -- Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.3a) Gecko/20021212. Good news, since my company's website is now accessible through Mozilla.
Target Milestone: mozilla1.3beta → mozilla1.4alpha
I spoke too soon. Most of our pages are still inaccessible. This is a bug, and I am sure that Darin Fisher (comment #35) had identified the cause -- Mozilla is not sending the authorization headers in all requests, as required by RFC 2617. There might be more than one cause of exceeding the redirection limit, but cookies have nothing to do with it in the case of our website. You can always contact me to capture an Ethereal network packet trace for testing.
I am still experiencing this problem using build 2003021222 on Linux at site www.2test.com. Cookies are not blocked for this site, so cookies do not appear to be the problem.

To duplicate:
1) Navigate to www.2test.com
2) Verify that cookies are enabled
3) Choose 'Information Technology' in the 'area of study' drop-down.
4) Choose 'United States' and 'Texas', and press the 'Next' button.
5) Error message displayed: "Redirection limit for this URL exceeded. Unable to load the requested page."

This site works properly on Linux using Mozilla 1.0.2.
OS: Windows NT → All
Mozilla 1.3b, using Junkbuster proxy. With "HTTP Networking --> Proxy Connection Options" set to "Use HTTP 1.1", an access to "http://www.davesp.net/" (which is a Network Solutions auto-forward to "http://www.speakeasy.org/~davesp/") results in the redirection-limit-exceeded error message. With "HTTP Networking --> Proxy Connection Options" set to "Use HTTP 1.0", an access to "http://www.davesp.net" is successful.
Mozilla 1.3b, using Junkbuster proxy. A follow-up: With "HTTP Networking --> Proxy Connection Options" set to "Use HTTP 1.1" and "Enable Keep-Alive" disabled under "Proxy Connection Options", an access to "http://www.davesp.net/" is successful. I hope this additional information is helpful.
Dave SP: your problem is completely different. You have problems with a broken proxy that doesn't understand HTTP/1.1 (bug 38488). That bug is also in the most frequently reported bugs list and very easy to find (QA/Frequently Reported Bugs).
-> suresh
Assignee: darin → suresh
Status: ASSIGNED → NEW
Anyone able to duplicate this bug lately? I tried most of the urls listed in this bug to duplicate, but couldn't :( Thanks!
Status: NEW → ASSIGNED
I can duplicate this problem at site http://www.2test.com (as described in comment #46), using build 2003030922 on Linux.
hmm...I couldn't duplicate this bug (using steps in comment #46) in linux build either :( :(
suresh: have you confirmed w/ a packet trace that we are not having the problem i described in comment #35? could it be that the website has simply changed? we should set up a testcase emulating the steps in comment #35 and see if that is fixed.
darin, unfortunately I don't have an account/access to tm3.com web site to duplicate this bug :(
It would be nice if bugzilla-daemon@mozilla.org accepted e-mails. I can't give out a TM3 account, unfortunately. Just tell me which version of mozilla for windows 2000 you want tested, and I'll do it. If you want me to capture the IP traffic, you'll need to tell me what free software to install, since I've got a new desktop.
suresh, here's an internal testcase: http://foo:foo@unagi.mcom.com/bugs/bug_141702/test.cgi however, we don't seem to have any trouble with this testcase. so, either this bug is fixed now, or (more likely) my analysis in comment #35 was simply wrong =/
yeah, the internal testcase works fine for me too. hmm...i wonder whether this bug has anything to do with bug 194708 (which was fixed recently).
suresh: yeah, maybe.
I'm gonna mark this bug as WORKSFORME. Please reopen if you still see this problem in latest nightly builds. Thanks!
Status: ASSIGNED → RESOLVED
Closed: 22 years ago
Resolution: --- → WORKSFORME
This is still a bug in Mozilla 1.4a. Unfortunately, the Alert dialog now reports the same misunderstanding that keeps reappearing in this thread: "Redirection limit for this URL exceeded. Unable to load the requested page. This may be caused by cookies that are blocked."

The problem at my site is definitely with Basic Authentication -- Mozilla is not implementing the RFC 2617 spec properly. The problem is not caused by cookies in any way. The transactions leading up to a failure look like this:

1) C: GET /monitorpage/monitorPage
   S: 401
      Set Cookie: ****=111=****
2) C: GET /monitorpage/monitorPage
      Use Cookie: ****=111=****
      Authorization: ****
   S: 302
      Location: /tm3Login/main.htm?addr=/monitorpage/monitorPage
3) C: GET /tm3Login/main.htm?addr=/monitorpage/monitorPage
      Use Cookie: ****=111=****
      HERE IS THE BUG: THE AUTHORIZATION IS NOT SENT BY MOZILLA!
   S: 401
      Set Cookie: ****=222=****
4) C: GET /tm3Login/main.htm?addr=/monitorpage/monitorPage
      Use Cookie: ****=222=****
      Authorization: ****
   S: 302
      Location: /monitorpage/monitorPage
5) C: GET /monitorpage/monitorPage
      Use Cookie: ****=222=****
      HERE IS THE BUG: THE AUTHORIZATION IS NOT SENT BY MOZILLA!
   S: 401
      Set Cookie: ****=333=****
...

Notice that Mozilla requests 3) and 5) do not carry the Authorization header, which RFC 2617 requires. Request 5) starts the infinite loop of requests, since it is really the same as request 1). The pattern of 2), 3), 4), 5) will repeat until after 42 requests, when Mozilla gives up and displays the error Alert dialog. The trivial difference is that the cookie keeps changing, since we start a new "session" on every 401 error. Let me emphasize that the cookies are not part of the problem. The missing Authorization headers are the sole cause of this problem (at my site).

I cannot explain why, but this problem does not always occur. It seems that if I type the URL directly as soon as the browser comes up, I get into the site. But if I click on any links that go to a different URL than the one I am viewing, I get the error. Our site uses frames and dynamic HTML in places, so it could be difficult to distill a test case that distinguishes the two.

Unfortunately, this bug crept into the Netscape builds a while ago, and our customer support is forced to tell customers to "upgrade" to Internet Explorer after they upgrade to a Netscape newer than about 4.7. This is a dreadful situation, really. Mozilla is losing what little share of the corporate desktop is still out there.
David Crane, your last comment is very similar to darin's comment #35. Is it possible to set up an external testcase for this problem? thanks!
Suresh, please call me at work, and we can work something out. My number is 646-822-3577.
hmm... mozilla is correct in not sending basic auth credentials in request #3 since /tm3Login/ is not in the same protection space as /monitorpage/. see RFC 2617 section 2, where it says:

   A client SHOULD assume that all paths at or deeper than the depth of the
   last symbolic element in the path field of the Request-URI also are within
   the protection space specified by the Basic realm value of the current
   challenge. A client MAY preemptively send the corresponding Authorization
   header with requests for resources in that space without receipt of another
   challenge from the server.

however, mozilla should be able to issue request #5 with the basic auth credentials included.
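A minimal sketch of the prefix rule quoted above (illustrative only, not Mozilla code; the function name and layout are invented): a request at or below the directory of an already-authenticated URI is treated as part of the same protection space, so credentials may be sent preemptively.

#include <cstring>

// Sketch of the RFC 2617 section 2 heuristic: authDir is the directory of a
// URI we have already sent credentials for (e.g. "/monitorpage/");
// requestPath is the path of a new request.
static bool InSameProtectionSpace(const char *authDir, const char *requestPath)
{
    return std::strncmp(requestPath, authDir, std::strlen(authDir)) == 0;
}

// InSameProtectionSpace("/monitorpage/", "/monitorpage/monitorPage") -> true  (request #5)
// InSameProtectionSpace("/monitorpage/", "/tm3Login/main.htm")       -> false (request #3)

This is why request #3 should not inherit the /monitorpage/ credentials, while request #5 should.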
the HTTP log from comment #33 shows something interesting and buggy on the part of mozilla:

0[234338]: GET /news/main.htm?page=PullHeadlines&frame=content HTTP/1.1
0[234338]: GET /news/main.htm?page=PullHeadlines&frame=content HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/news/main.htm&uquery=page=PullHeadlines%26frame=content HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/news/main.htm&uquery=page=PullHeadlines%26frame=content HTTP/1.1
0[234338]: GET /news/main.htm?page=PullHeadlines&frame=content HTTP/1.1
0[234338]: GET /gifs/TFMG.gif HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET /tm3Login/main.htm?addr=/ HTTP/1.1
0[234338]: GET / HTTP/1.1
0[234338]: GET / HTTP/1.1

notice the loop at the end... for some reason the requests for "/tm3Login/main.htm" are not sent with basic auth credentials even though they are technically in the same protection space as the requests for "/"

i'll try to cook up a simpler testcase. reopening...
Status: RESOLVED → REOPENED
Resolution: WORKSFORME → ---
ok, mozilla trunk build 2003041622 passes my simple testcase. my testcase does the following:

1) C: GET /
   S: 401
2) C: GET / Auth: xxx
   S: 302 Location: /foo/bar.html
3) C: GET /foo/bar.html Auth: xxx
   S: 200 OK

in other words, this recent version of mozilla seems to be sending the Auth header in request #3, unlike the version of mozilla used to capture the log file from comment #33. the code related to this has seen some big changes recently (to support NTLM), and i wonder if those changes didn't somehow fix this bug. however, the fact that this bug is still being reported with 1.4 alpha makes me think otherwise. i must somehow not have the right testcase.

David: can you please capture a new log using a very recent mozilla trunk build (more recent than 1.4 alpha if possible)? thanks!
Darin,

Sorry for the delay. For some reason, I didn't get e-mail of your comments, and just found them on the website. I was able to get a test login for you. This is probably better than recording an HTTP log, since you can do testing at will. This account will expire at the end of August, unless you ask me to extend it. It is entitled to the news portions of our web site, which is sufficient to recreate the problem.

Login: netscape
Password: bugfix

If you point Mozilla at http://www.tm3.com, you get our home page. When you click on any of the news links (try the "Monitor Page") you will be prompted for the basic authentication (above), and you will get to the page through an https URL -- there is actually a sequence of 401 and 302 responses and cookie settings under the covers. This initial sequence works. But from there, with Mozilla 1.4a, any link that should go to a different page will get the "Redirection limit for this URL exceeded" error. After restarting Mozilla, the first link from the home page always works. Any link from that page fails unless it takes you back to the same page.

Let me know if you have any difficulties, or need an extension. Thanks.
hi, I get this error in Mozilla >1.2:
- I have made a page in PHP which redirects to itself using the header() function
- the redirect goes to the same page, but with a few GET vars, which change
- after about 25 redirects (every 24 seconds) Mozilla shows this error

I have no problems with IE or Opera.

Mirza
-------------
Mozilla/5.0 (Windows; U; Windows NT 5.0; de-AT; rv:1.3) Gecko/20030312
this is a test document in PHP. there are at most 22 header redirects:

<?php
##
## mozilla "redirect limit exceeded" test
##
## it redirects the browser after 25 seconds
##
sleep(25);
header('Location: '.$PHP_SELF);
exit;
?>

Mirza
mirza: that behavior is by design. you have crafted an infinite redirect, and mozilla correctly detects it. david: thank you for the testcase. we will be trying it out shortly ;-)
QA Contact: tever → httpqa
Attached patch patch attached (obsolete) (deleted) — Splinter Review
Attachment #122448 - Flags: superreview?(darin)
Comment on attachment 122448 [details] [diff] [review] patch attached

>Index: nsHttpAuthCache.cpp

>+    while (mRoot) {
>+        if (mRoot->mPath)
>+            nsCRT::free(mRoot->mPath);
>+
>+        mRoot = mRoot->mNext;
>+    }

don't you need to free each AuthPath object? see comments below about separate allocation for AuthPath objects...

>     if (NS_FAILED(rv)) {
>+        free(newRealm);
>         return rv;
>     }
>
>+    AuthPath *newAuthPath;
>+    if (path) {
>+        newAuthPath = (AuthPath *) malloc(sizeof(AuthPath));
>+        if (!newAuthPath)
>+            return NS_ERROR_OUT_OF_MEMORY;

looks like this early return will leak |newRealm|.

>+
>+        newAuthPath->mPath = nsCRT::strdup(path);
>+        newAuthPath->mNext = nsnull;

also, can you try allocating the AuthPath structure with the path as we discussed?

  struct AuthPath {
    struct AuthPath *mNext;
    char mPath[1];
  };

  // ...
  PRUint32 pathLen = nsCRT::strlen(path);
  AuthPath *ap = malloc(sizeof(AuthPath) + pathLen);
  memcpy(ap->mPath, path, pathLen+1);

while this is more complicated, it results in slightly more compact memory usage since there is only one allocation instead of two.

>+    //Append newAuthPath
>+    AuthPath *tempPtr = mRoot;
>+    while (tempPtr->mNext)
>+        tempPtr = tempPtr->mNext;

generally a good idea to keep a pointer to the tail of the list. however, we expect this list to never grow very large. please add a comment explaining why we think it is okay to walk the list each time. (might be worthwhile to just store a pointer to the tail of the list.)

>@@ -455,13 +486,13 @@ nsHttpAuthNode::SetAuthEntry(const char

>+    /*if (path) {
>+    }*/

kill the commented out code

>Index: nsHttpAuthCache.h

>+typedef struct _AuthPath {
>+    struct _AuthPath *mNext;
>+    char *mPath;
>+} AuthPath;

use C++ style structure declaration. (as i've written above.) might also be good to declare this structure as a member of nsHttpAuthEntry. otherwise, you really need to prefix it with nsHttp to avoid namespace conflicts with other structures/classes in mozilla. hmm... i prefer nsHttpAuthPath i think since this code isn't using inner classes/structures anywhere else.

>+    AuthPath *Root() { return mRoot; }

RootPath() ??

>Index: nsHttpChannel.cpp

>+    return authCache->SetAuthEntry(host, port, path.get(), realm.get(),
>+                                   saveCreds ? creds.get() : nsnull,
>+                                   saveChallenge ? challenge.get() : nsnull,
>+                                   *ident, sessionState);

hmm... all you really want to do here is update the path. maybe it would be better to call a method directly on |entry| instead of going through SetAuthEntry.
Attachment #122448 - Flags: superreview?(darin) → superreview-
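For context, a rough sketch (not the patch itself; names and layout are illustrative) of the single-allocation node suggested in the review above, together with a list teardown that frees every node, which is the leak concern raised there:

#include <cstdlib>
#include <cstring>

struct AuthPath {
    AuthPath *mNext;
    char      mPath[1];   // path bytes are allocated inline, right after mNext
};

// One malloc per node: the structure and its path string share an allocation.
static AuthPath *NewAuthPath(const char *path)
{
    size_t len = std::strlen(path);
    AuthPath *ap = (AuthPath *) std::malloc(sizeof(AuthPath) + len);
    if (!ap)
        return nullptr;
    std::memcpy(ap->mPath, path, len + 1);   // copy the terminating NUL too
    ap->mNext = nullptr;
    return ap;
}

// Freeing the list is then a single free() per node; there is no separately
// allocated path string to release.
static void FreeAuthPathList(AuthPath *node)
{
    while (node) {
        AuthPath *next = node->mNext;
        std::free(node);
        node = next;
    }
}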
Attached patch updated patch (obsolete) (deleted) — Splinter Review
>> maybe it would be better to call a method directly on |entry|
>> instead of going through SetAuthEntry.

darin: is it ok to make the Set(..) fn in nsHttpAuthEntry public?
Attachment #122448 - Attachment is obsolete: true
Attachment #122632 - Flags: superreview?(darin)
Status: REOPENED → ASSIGNED
I started seeing the "Redirection limit for this URL exceeded. Unable to load the requested page. This may be caused by cookies that are blocked" error when I upgraded to Mozilla 1.4a (build ID 2003040105) from 1.2.1. I saw it consistently when I went to <http://www.mycomicspage.com/sign-in>. I have cookies enabled, and I do have a cookie for this site, although I retried after removing those cookies and got the same error. You do not need to sign in to see the error. MSIE works fine. I see it whether I type in the URL or click on a link for the URL.
eric: sounds like you are experiencing a different bug (this one is about HTTP authentication and redirects). please download mozilla 1.4 beta and verify that the bug hasn't been fixed. if you can reproduce the problem with 1.4 beta, then please file a new bug. thanks!
Yeah, Darin, the 5/9 build worked ... I did not realize two days ago I was not downloading the latest code. Thanks.
Comment on attachment 122632 [details] [diff] [review] updated patch

>+
>+
>+    if (!mRoot) {
>+        //first entry
>+        mRoot = newAuthPath;
>+    } else {

style nit: bag extra newline and format like this:

  if (!mRoot)
      mRoot = newAuthPath;  // first entry
  else {
      ...
  }

for consistency with the rest of the HTTP code ;-)

>-    return NS_OK;
>+    return authCache->SetAuthEntry(host, port, path.get(), realm.get(),

ok, so everything looks really great, except here i think you just want to call a public AddPath method on nsHttpAuthEntry. that function will need to walk mRoot and only add the given path if it is not found.
Attachment #122632 - Flags: superreview?(darin) → superreview-
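A hedged sketch of the AddPath behavior darin asks for above (not the landed patch; names are illustrative): walk the list and only append when no stored path already covers the new one, appending at the tail otherwise.

#include <cstdlib>
#include <cstring>

struct AuthPath {
    AuthPath *mNext;
    char      mPath[1];   // path stored inline, single allocation per node
};

// Returns false only on out-of-memory. root/tail model the entry's list head
// and tail pointers.
static bool AddPath(AuthPath *&root, AuthPath *&tail, const char *path)
{
    if (!path)
        path = "";                            // null path matches empty path

    for (AuthPath *p = root; p; p = p->mNext) {
        // a stored path that is a prefix of |path| already covers it
        if (std::strncmp(path, p->mPath, std::strlen(p->mPath)) == 0)
            return true;
    }

    size_t len = std::strlen(path);
    AuthPath *node = (AuthPath *) std::malloc(sizeof(AuthPath) + len);
    if (!node)
        return false;
    std::memcpy(node->mPath, path, len + 1);
    node->mNext = nullptr;

    if (!root)
        root = node;                          // first entry
    else
        tail->mNext = node;                   // append after the current tail
    tail = node;
    return true;
}

Keeping a tail pointer makes the append O(1), which is the change mentioned with the next patch below.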
Attached patch updated patch (obsolete) (deleted) — Splinter Review
updated patch addressing darin's previous comments. Also, I have added a tail pointer to the linked list (makes it easy to add a new item to the list!)
Attachment #122632 - Attachment is obsolete: true
Attachment #123077 - Flags: superreview?(darin)
Comment on attachment 123077 [details] [diff] [review] updated patch

>Index: nsHttpAuthCache.cpp

>+nsresult
>+nsHttpAuthEntry::AddPath(const char *aPath)
...
>+    mTail->mNext = newAuthPath;
>+    mTail = newAuthPath;

this worries me... seems like mTail can be null in the case where nsHttpAuthEntry::Set is called with a null path:

>+    nsHttpAuthPath *newAuthPath;
>+    if (path) {
>+        newAuthPath = (nsHttpAuthPath *) malloc(sizeof(nsHttpAuthPath) + pathLen);
>+        if (!newAuthPath) {
>+            free(newRealm);
>+            return NS_ERROR_OUT_OF_MEMORY;
>+        }
>+
>+        memcpy(newAuthPath->mPath, path, pathLen+1);
>+        newAuthPath->mNext = nsnull;
>+    }

also, newAuthPath is referenced down below even if path == null!

>+    if (!mRoot)
>+        mRoot = newAuthPath; //first entry
>+    else
>+        mTail->mNext = newAuthPath; // Append newAuthPath

please be sure to test with proxy auth and regular server auth.
Attachment #123077 - Flags: superreview?(darin) → superreview-
Attached patch updated patch. (deleted) — Splinter Review
Attachment #123077 - Attachment is obsolete: true
Attachment #123348 - Flags: superreview?(darin)
Comment on attachment 123348 [details] [diff] [review] updated patch.

>+nsHttpAuthEntry::AddPath(const char *aPath)
>+{
>+    // null path matches empty path
>+    if (!aPath)
>+        aPath = "";
>+
>+    nsHttpAuthPath *tempPtr = mRoot;
>+    while (tempPtr) {
>+        const char *curpath = tempPtr->mPath;
>+        if (strncmp(aPath, curpath, strlen(curpath)) == 0)
>+            return NS_OK; // path already in the list

nit: this comment should probably say that the "subpath already exists in the list" instead.

...

>+    //Append the aPath
>+    if (aPath) {

nit: no need to check aPath again.

>+        nsHttpAuthPath *newAuthPath;
>+        int newpathLen = nsCRT::strlen(aPath);

above you call strlen by itself. do the same here for consistency (or use nsCRT::strlen everywhere if that is what the rest of the file does).

>+    nsHttpAuthPath *authPtr = entry->RootPath();

nit: call this local variable |authPath| instead?

sr=darin
Attachment #123348 - Flags: superreview?(darin) → superreview+
Attachment #123348 - Flags: review?(dougt)
Comment on attachment 123348 [details] [diff] [review] updated patch. what darin said.
Attachment #123348 - Flags: review?(dougt) → review+
Comment on attachment 123348 [details] [diff] [review] updated patch. seeking drivers approval for 1.4 final. this patch fixes a HTTP/1.1 RFC2617 compliance bug. thanks!
Attachment #123348 - Flags: approval1.4?
Blocks: 204085
*** Bug 204085 has been marked as a duplicate of this bug. ***
Hi, I am the original reporter of bug 204085. Do people have any idea when the above patch will be incorporated into the released binaries, say, the daily snapshot? I would like to see if the patched Mozilla really solves the problem mentioned in bug 204085. (I am not entirely sure if the timing-related different behavior can be explained by this bug fix.) Thank you for the great detective work!
ishikawa, as soon as the mozilla drivers give approval, I'll check in this patch to the trunk.
Comment on attachment 123348 [details] [diff] [review] updated patch. a=sspitzer, assuming it's been well tested.
Attachment #123348 - Flags: approval1.4? → approval1.4+
fixed in trunk!
Status: ASSIGNED → RESOLVED
Closed: 22 years ago22 years ago
Resolution: --- → FIXED
Hi everybody. I checked the operation of the nightly trunk build:
Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4b) Gecko/20030520

I waited a full day to make sure I got the patched binary before downloading and testing. The problem I reported in bug 204085 was solved completely! I traced the HTTP log file on my own and saw that the POST is correctly issued with the Basic authentication header from the beginning. (Other GETs were also issued with it from the beginning; before, they were issued without Basic auth info and then reissued only after an unauthorized error was returned.)

By looking at the URL file paths carefully, I now think I understand the meaning of this particular piece of RFC compliance. The strange "we are leaving the encrypted page..." message (or something like that) I saw before should have given me a little clue about the underlying authentication-domain mix-up, but the simple symptom of POST failure was not very revealing after all.

Anyway, thank you again. I can now use this experimental Mozilla for daily testing (using it for work, actually -- otherwise I can't really stress test it)! Great work!
I just verified that our www.tm3.com website works with 1.4 RC1. Great. Now we'll wait for Netscape to make a new release based on this version of the Mozilla source code. Thanks a lot, David
Blocks: 207953