Bug 50471 (Closed): HTTP 1.1 support does not work for www.amexmail.com
Opened 24 years ago
Closed 24 years ago
Categories
(Core :: Networking, defect, P3)
VERIFIED
FIXED
M18
People
(Reporter: flop.m, Assigned: ruslan)
Details
(Whiteboard: [nsbeta3+])
Attachments
(2 files)
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux 2.3.99-pre3 i586; en-US; m18) Gecko/20000825
BuildID: 2000082708
While trying to enter my account (with the right password), I get an error stating that there was an intrusion; one can see the HTML header of the page.
Reproducible: Always
Steps to Reproduce:
Open an account and try to log in.
Actual Results: Can't log in to a web-based email box.
Expected Results: I should be able to enter my email account, as it works with Navigator 4.x.
Comment 1•24 years ago
Possibly a lack of PSM security/SSL? I couldn't check; amexmail never returned from my attempt to register.
Comment 2 (Reporter) • 24 years ago
I've tested logging in to amexmail with Navigator 4.08 with 'warn me when cookies...' enabled. The thing is, Amexmail sets an ID for your computer before opening the session and checks it just after the first redirection. When I refused to accept the cookie, Amexmail treated it as an attempted intrusion and redirected me to another page stating that cookies should be enabled, blah blah...
So the problem lies with the cookies. My prefs are OK, so there is a bug here. There's another bug: after the redirection, Mozilla shows the HTML header of the page, but not the page's body!
The same problems occur under Win2K.
Component: Browser-General → Cookies
Comment 4•24 years ago
I suspect that this might be yet another incarnation of bug 41950. dp just checked in a fix for that bug this evening, so could you please test this again and see if it is still occurring.
Comment 5•24 years ago
Also I don't see an answer to Jesup's question about PSM. Did you install PSM?
Comment 6 (Reporter) • 24 years ago
PSM is installed and working - and that fix is not working (if it has been
implemented in the build 2000082815-win32).
Comment 7•24 years ago
No, the fix couldn't possibly have been in an 8-28 build because dp checked in
the fix late in the evening of 8-28. So try a build from 8-29 or later.
Comment 8 (Reporter) • 24 years ago
Build 2000082915: the fix is not working. I've tried using 'warn me when...'. For amexmail, it does not show any warning, but for www.netscape.com, for example, it shows warnings. So does this mean that the cookie from amexmail is not even taken into account by Mozilla?
Comment 9•24 years ago
OK, I am able to reproduce the problem. Will look into it. Should have a fix
(or at least know what's causing it) before the end of the week.
Comment 10•24 years ago
Need more info for triage. Is it a common problem? Is the change safe or risky?
Whiteboard: [need info]
Comment 11•24 years ago
FWIW, here's the intrusion-alert that appears
--------------------------------
Thank You for Using AmExMail!
Intrusion detected.
In order to protect the privacy of our users, we allow access to user sessions
only from the browser address authenticated upon login or a user session cookie
you can elect to be set on your computer at login. You have attempted to access
this session from a different browser address or cookie than the one used at the
session login. This may have happened in one of the following ways:
You have logged in from more than one machine. You cannot return to an earlier
session if you have subsequently logged in from a different IP address.
Your connection might be from a proxy server which changes your IP (Internet
Protocol) address within a single session. If this is the case, you must enable
cookies from your browser.
Your connection to the Internet (your modem line, for example) may have been
dropped. Once your connection is re-established, you need to log back into
AmExMail even though your browser window appears to still look connected.
You did not properly log out of your previous AmExMail session, in which you
used cookies for session security. You must wait for the cookie to expire (4
hours from the start of your previous session), or select the cookies option
again for your current session.
Comment 12•24 years ago
Comment 13•24 years ago
Comment 14•24 years ago
Attached traffic for both Nav 4.x and seamonkey. Studying the traffic reveals the following:
1. The form is submitted with a POST. In the 4.x case the POST is sent twice (WHY??), the first time with the POST data missing.
2. In both cases the site responds with a meta-refresh for a "Welcome" page.
3. The browser requests the "Welcome" page. It includes the cookie in its request in the Nav 4 case but not in the seamonkey case. That's obviously the problem. Furthermore, the cookie manager shows that no cookie has been set.
Comment 15•24 years ago
Most bizarre. The set-cookie header is definitely appearing in the traffic received from the site. But if I do a printf of the headers as they are received by the networking layer, the set-cookie header does not appear. The printf that I inserted is in the routine nsHTTPServerListener in the file netwerk/protocol/http/src/nsHTTPResponseListener.cpp
Target Milestone: --- → M18
Comment 16•24 years ago
Here is what I think is happening. From the sniffer traffic that I posted I can
see that the sequence of events that happens when the login is submitted is as
follows:
I. Browser submits form to /tpl/Door/LoginPost using a post
II. Response sets a cookie, has a meta-refresh for a "Welcome" page
III. Browser requests the "Welcome" page but does not send the cookie
The print statements that I included in nsHTTPServerListener do not show any of the response headers received in step II. So it seems the netwerking layer does not process any headers from an HTTP response that has a meta-refresh to some other page. That means that any set-cookies that appear in that response are not acted upon.
If I'm correct in what I said in the previous paragraph, this explains why the
cookies from amexmail.com are not getting set and why a user cannot log in.
Comment 17•24 years ago
Copying some networking people on this. To summarize, it appears that if the
http response contains a set-cookie header and a META HTTP-EQUIV="Refresh", then
the set-cookie header is never parsed and therefore the cookie never gets set.
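For reference, HTTP/1.1 treats "100 Continue" as an interim response: a client must read and discard it, then parse the headers of the final (non-1xx) response that follows. A minimal sketch of that skip-the-interim rule; parseFinalStatus() is a hypothetical helper for illustration, not Mozilla code, and it assumes a well-formed raw response stream:

```cpp
#include <cassert>
#include <string>

// Return the status code of the final response in a raw stream,
// skipping any interim 1xx responses (such as "100 Continue").
static int parseFinalStatus(const std::string& raw) {
    std::size_t pos = 0;
    for (;;) {
        // Each response starts with a status line "HTTP/1.1 NNN Reason\r\n".
        std::size_t sp = raw.find(' ', pos);
        int code = std::stoi(raw.substr(sp + 1, 3));
        if (code / 100 != 1)
            return code;                      // reached the final response
        // Skip the interim response, which ends at the first blank line.
        pos = raw.find("\r\n\r\n", pos) + 4;  // resume at the next status line
    }
}
```

Headers such as Set-Cookie belong to the final response, so parsing must resume after the interim response rather than treating the rest of the stream as body content.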
Comment 18 (Assignee) • 24 years ago
Meta tags are processed by webshell, not necko. What is the timeout on refresh
tags? Is it possible that it starts the new request before the original one is
processed?
Comment 19•24 years ago
The timeout in this particular case is 0.
Who in the webshell group should we copy on this bug report?
Comment 20•24 years ago
Looks like this bug is timing related. When I originally tested it I did not
get the intrusion message that the reporter cited but rather got a page with
html written on it. Later when I tried it I got the intrusion message. In both
cases the URL bar showed the same page ("terminated"). I've run this many times
now in my testing and just now I had a run that was successful. I tried that
same run again without changing any code and it failed.
Comment 21•24 years ago
I'm not so sure anymore that this is related to the meta-refresh (but then I'm
baffled as to what is causing it). The reason I say that is because I commented
out the code that handles meta-refresh in nsHTMLContentSink.cpp by changing the
line that reads:
if (!header.CompareWithConversion("refresh", PR_TRUE)) {
to
if (0) {
Now the refresh does not occur (as expected) but still the cookie is not being
set.
Below is the traffic that the sniffer shows, followed by the headers & status
that the browser actually sees (as obtained from printf's in the netwerking
code). Observe that the traffic shows that the server is sending back two
back-to-back status lines -- one for "100 Continue" and one for "200 OK" whereas
the browser is seeing only the first status line and not the second (nor is it
seeing the "Set-Cookie:" that follows the "200 OK").
*************** SNIFFER TRAFFIC ************************************
08:01:22.865 S 8
08:01:22.905 R 8
08:01:23.015 S 8
08:01:23.055 R 8
08:01:23.326 S POST /tpl/Door/LoginPost HTTP/1.1
S Referer: http://www.amexmail.com/
S Host: www.amexmail.com
S User-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; m18)
Gecko/20000831
S Accept: */*
S Accept-Language: en
S Accept-Encoding: gzip,deflate,compress,identity
S Keep-Alive: 300
S Connection: keep-alive
08:01:23.356 S 8
08:01:23.396 R 8
08:01:23.426 S Content-type: application/x-www-form-urlencoded
S Content-Length: 184
S
S
DomainID=5&LoginState=2&SuccessfulLogin=%2Ftpl&Project=amexmail&NewServerName=ww
S
w.amexmail.com&JavaScript=JavaScript1.2&UserID=spmorse&passwd=mhm1abm&Login.x=16
S &Login.y=14&Use_Cookie=1
08:01:23.446 S 8
08:01:23.496 R 8
08:01:23.636 R HTTP/1.1 100 Continue
08:01:23.816 R HTTP/1.1 200 OK
R Date: Sun, 03 Sep 2000 14:59:52 GMT
R Server: Apache/1.2.5
R Set-Cookie: REMOTE_ID=GFETAP; expires=Mon, 04-Sep-00 02:56:07
GMT; path=/tpl; do
R main=.amexmail.com
R Keep-Alive: timeout=1, max=100
R Connection: Keep-Alive
R Transfer-Encoding: chunked
R Content-Type: text/html
R
R 76
R <html>
R <head>
R <META HTTP-EQUIV="Refresh"
CONTENT="0;URL=http://www.amexmail.com/tpl/Door/110IX
R ITXM/Welcome">
R
R </html>
R
R 0
******* printf statements from netwerking code ********************
~~~POST /tpl/Door/LoginPost HTTP/1.1
Referer: http://www.amexmail.com/
Host: www.amexmail.com
User-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; m18) Gecko/20000831
Accept: */*
Accept-Language: en
Accept-Encoding: gzip,deflate,compress,identity
Keep-Alive: 300
Connection: keep-alive
*** received status = HTTP/1.1 100 Continue
*** received header =
*** received status =
Enabling Quirk StyleSheet
Enabling Quirk StyleSheet
Document http://www.amexmail.com/tpl/Door/LoginPost loaded successfully
Comment 22•24 years ago
I think I have another clue. With the refresh code temporarily blocked (as
mentioned above) so that no refresh occurs, below is what I see on the screen
when I do the login. Note that this is the status and all the headers that the
netwerking code never saw. Instead it was sucked in by the layout code and used
as the content for the page. The layout code also received the meta-refresh as
part of the content and it acted upon that properly (but it got blocked because
of my temporary coding change).
So the problem is that after the "100 Continue", all further http traffic is
considered part of the content rather than more headers.
Now to figure out why that's happening and how to fix it. Copying some layout
people on this.
-------------------
HTTP/1.1 200 OK Date: Sun, 03 Sep 2000 17:25:14 GMT Server: Apache/1.2.5
Set-Cookie: REMOTE_ID=TXOGBM; expires=Mon, 04-Sep-00 05:21:29 GMT; path=/tpl;
domain=.amexmail.com Keep-Alive: timeout=1, max=100 Connection: Keep-Alive
Transfer-Encoding: chunked Content-Type: text/html 76 0
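The misbehavior described above (everything after the "100 Continue" being treated as content) can be sketched as a tiny line-oriented state machine. This is an illustration of the idea only, not the actual nsHTTPResponseListener logic:

```cpp
#include <cassert>
#include <string>
#include <vector>

// After the blank line that ends an interim (1xx) response, reset the
// state so the next line is parsed as the real status line rather than
// handed off to layout as page content.
struct ResponseState {
    int status = 0;                        // 0 means "expecting a status line"
    bool headersDone = false;
    std::vector<std::string> headers;

    void feedLine(const std::string& line) {
        if (status == 0) {                 // parse "HTTP/1.1 NNN Reason"
            status = std::stoi(line.substr(line.find(' ') + 1, 3));
            return;
        }
        if (line.empty()) {                // a blank line ends a header block
            if (status / 100 == 1) {       // interim response: start over
                status = 0;
                headersDone = false;
            } else {
                headersDone = true;        // final response headers complete
            }
            return;
        }
        if (!headersDone && status / 100 != 1)
            headers.push_back(line);       // keep only final-response headers
    }
};
```

Feeding it the lines "HTTP/1.1 100 Continue", a blank line, "HTTP/1.1 200 OK", a Set-Cookie header, and a blank line leaves it with status 200 and the cookie header captured, instead of the bug's behavior of dumping everything after the 100 into the page body.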
Comment 23 (Reporter) • 24 years ago
The same problem (or a similar one) occurs on the site http://radio.sonicnet.com/ when one tries to log in. Under IE5 it works well. Under M18, one gets the header of an error page ("Object moved") sent from an IIS4 server! We can, however, get in through the link provided on that page.
Comment 24 (Assignee) • 24 years ago
Hmm. That's probably a bug. To be sure - turn 1.1 support off in all.js and see
if it's still happening.
Comment 25•24 years ago
Bug 51244 may be related
Comment 26•24 years ago
Thanks, ruslan, you hit the nail on the head. With the pref set to 1.0, this
works just fine. The headers come in as headers and not as content, the cookie
gets set, and the logon succeeds. Interestingly, the "100 Continue" status no
longer gets received.
So now to figure out what is wrong with the 1.1 support that is causing this
problem.
Here is the output from my diagnostic print statements when using 1.0
~~~POST /tpl/Door/LoginPost HTTP/1.0
Referer: http://www.amexmail.com/
Host: www.amexmail.com
User-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; m18) Gecko/20000831
Accept: */*
Accept-Language: en
Accept-Encoding: gzip,deflate,compress,identity
Keep-Alive: 300
Connection: keep-alive
*** status = HTTP/1.1 200 OK
*** header = Date: Mon, 04 Sep 2000 15:18:48 GMT
*** header = Server: Apache/1.2.5
*** header = {Set-Cookie: REMOTE_ID=QIAFGQ; expires=Tue, 05-Sep-00 03:15:04 GMT
; path=/tpl; domain=.amexmail.com
*** header = Connection: close
*** header = {Content-Type: text/html
*** header =
Comment 27•24 years ago
The only significant effect of turning off the 1.1 support in all.js is to
determine what identifying string gets sent to the server. In fact, if I keep
all.js intact but simply change nsHTTPRequest.cpp so that the identifying string
sent out is always " HTTP/1.0", this bug does not occur.
So the next step is to look in detail at the traffic that the server sends back
in the two cases (when we identify ourselves as 1.0 and when we identify
ourselves as 1.1).
Comment 28•24 years ago
Here is the traffic for the http1.0 case (which succeeds) and for the http1.1
case (which fails).
The significant difference (as far as I can tell) is that the server sends back
the Continue status and the OK status in the 1.1 case whereas it sends back only
the OK status in the 1.0 case.
************************** HTTP 1.0 TRAFFIC *******************************
10:42:47.971 S 8
10:42:48.011 R 8
10:42:48.051 S 8
10:42:48.091 R 8
10:42:48.392 S POST /tpl/Door/LoginPost HTTP/1.0
S Referer: http://www.amexmail.com/
S Host: www.amexmail.com
S User-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; m18)
Gecko/20000831
S Accept: */*
S Accept-Language: en
S Accept-Encoding: gzip,deflate,compress,identity
S Keep-Alive: 300
S Connection: keep-alive
10:42:48.422 S 8
10:42:48.462 R 8
10:42:48.482 S Content-type: application/x-www-form-urlencoded
S Content-Length: 184
S
S
DomainID=5&LoginState=2&SuccessfulLogin=%2Ftpl&Project=amexmail&NewServerName=ww
S
w.amexmail.com&JavaScript=JavaScript1.2&UserID=xxxxx&passwd=xxx&Login.x=16
S &Login.y=14&Use_Cookie=1
10:42:48.502 S 8
10:42:48.562 R 8
10:42:48.973 R HTTP/1.1 200 OK
R Date: Mon, 04 Sep 2000 17:41:24 GMT
R Server: Apache/1.2.5
R Set-Cookie: REMOTE_ID=YEGYLX; expires=Tue, 05-Sep-00 05:37:40
GMT; path=/tpl; do
R main=.amexmail.com
R Connection: close
R Content-Type: text/html
R
R <html>
R <head>
R <META HTTP-EQUIV="Refresh"
CONTENT="0;URL=http://www.amexmail.com/tpl/Door/210QO
R JKRL/Welcome">
R
R </html>
************************** HTTP 1.1 TRAFFIC *******************************
10:48:46.023 S 8
10:48:46.063 R 8
10:48:46.174 S 8
10:48:46.214 R 8
10:48:46.484 S POST /tpl/Door/LoginPost HTTP/1.1
S Referer: http://www.amexmail.com/
S Host: www.amexmail.com
S User-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; m18)
Gecko/20000831
S Accept: */*
S Accept-Language: en
S Accept-Encoding: gzip,deflate,compress,identity
S Keep-Alive: 300
S Connection: keep-alive
10:48:46.514 S 8
10:48:46.554 R 8
10:48:46.594 S Content-type: application/x-www-form-urlencoded
S Content-Length: 184
S
S
DomainID=5&LoginState=2&SuccessfulLogin=%2Ftpl&Project=amexmail&NewServerName=ww
S
w.amexmail.com&JavaScript=JavaScript1.2&UserID=xxxxx&passwd=xxx&Login.x=28
S &Login.y=12&Use_Cookie=1
10:48:46.624 S 8
10:48:46.664 R 8
10:48:46.794 R HTTP/1.1 100 Continue
10:48:46.975 R HTTP/1.1 200 OK
R Date: Mon, 04 Sep 2000 17:47:23 GMT
R Server: Apache/1.2.5
R Set-Cookie: REMOTE_ID=JVBGDL; expires=Tue, 05-Sep-00 05:43:39
GMT; path=/tpl; do
R main=.amexmail.com
R Keep-Alive: timeout=1, max=100
R Connection: Keep-Alive
R Transfer-Encoding: chunked
R Content-Type: text/html
R
R 76
R <html>
R <head>
R <META HTTP-EQUIV="Refresh"
CONTENT="0;URL=http://www.amexmail.com/tpl/Door/210KE
R SGZC/Welcome">
R
R </html>
R
R 0
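As an aside, the 1.1 response above uses Transfer-Encoding: chunked: the "76" line before the HTML is a hexadecimal chunk size (0x76 = 118 bytes) and the trailing "0" is the terminating zero-length chunk. A minimal decoder sketch, assuming well-formed input with no chunk extensions or trailers:

```cpp
#include <cassert>
#include <string>

// Decode a chunked HTTP body: each chunk is "<hex length>\r\n<data>\r\n",
// terminated by a zero-length chunk.
static std::string dechunk(const std::string& body) {
    std::string out;
    std::size_t pos = 0;
    for (;;) {
        std::size_t eol = body.find("\r\n", pos);
        std::size_t len =
            std::stoul(body.substr(pos, eol - pos), nullptr, 16);
        if (len == 0)
            break;                            // "0" chunk terminates the body
        out += body.substr(eol + 2, len);     // chunk data follows the size line
        pos = eol + 2 + len + 2;              // skip the data and its trailing CRLF
    }
    return out;
}
```

This is only context for reading the trace; the bug itself is in the handling of the "100 Continue" status, not in chunk decoding.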
Updated•24 years ago
Summary: Can't log in → HTTP 1.1 support does not work for www.amexmail.com
Updated•24 years ago
Component: Cookies → Networking
Comment 29•24 years ago
OK, here's a patch that fixes the problem. I don't understand the netwerking
layer well enough to know if this is the right patch (or even why the patch
works), but it does fix the problem.
Part of the problem is that the OnDataAvailable that delivered the "100
Continue" also delivered an extra CR-LF (I don't know where that came from) that
needs to get read in with the Continue (else it will look like a blank line
later on). Furthermore, the parsing of the Continue status will cause
nHeadersDone to be set to true and we need to have that be false, else the logic
downstream from there will fail.
Go ahead, tell me I'm crazy and the patch is totally the wrong thing to do. But
in that case tell me what the correct patch should be. In any event, will
someone from the networking group please review this patch.
Index: nsHTTPResponseListener.cpp
===================================================================
RCS file:
/cvsroot/mozilla/netwerk/protocol/http/src/nsHTTPResponseListener.cpp,v
retrieving revision 1.128
diff -c -r1.128 nsHTTPResponseListener.cpp
*** nsHTTPResponseListener.cpp 2000/08/22 07:03:11 1.128
--- nsHTTPResponseListener.cpp 2000/09/04 23:23:04
***************
*** 336,341 ****
--- 336,352 ----
// Parse the response headers as long as there is more data and
// the headers are not done...
//
+ if (mResponse)
+ {
+ PRUint32 statusCode = 0;
+ mResponse->GetStatus(&statusCode) ;
+ if (statusCode == 100) { // Continue
+ rv = ParseStatusLine(bufferInStream, i_Length, &actualBytesRead) ;
+ i_Length -= actualBytesRead;
+ mHeadersDone = PR_FALSE;
+ return NS_OK;
+ }
+ }
while (NS_SUCCEEDED(rv) && i_Length && !mHeadersDone)
{
rv = ParseHTTPHeader(bufferInStream, i_Length, &actualBytesRead) ;
***************
Comment 30•24 years ago
Is this bug why I can't log on to yahoo.com with Mozilla, or is it separate?
I've tried changing HTTP from 1.1 to 1.0 in the debug preferences and it still
failed so it sounds slightly different, but I have no way of checking that the
preferences had any effect on the network traffic.
Whiteboard: [need info] → [nsbeta3+]
Comment 31 (Assignee) • 24 years ago
A 100 response is a 1.1-only response. There must be a bug somewhere in nsHttpResponseListener.cpp; -> myself (unless someone else has time on his hands, cuz I'm swamped)
Assignee: morse → ruslan
Status: ASSIGNED → NEW
Comment 32•24 years ago
*** Bug 51075 has been marked as a duplicate of this bug. ***
Comment 33•24 years ago
*** Bug 51244 has been marked as a duplicate of this bug. ***
Comment 34•24 years ago
FreeBSD 4.1 pull/build 20000905 noon edt
I think the problem with http 1.1 support is also causing the problem with 302
object moved (bug 50415). You can see this also at http://www.telocity.com -
select a state, and wait for it to time out. You'll see this:
HTTP/1.1 302 Object moved Server: Microsoft-IIS/4.0 Date: Tue, 05 Sep 2000
17:38:18 GMT cache-control: private pragma: no-cache Location: thome/index.asp
Content-Length: 136 Content-Type: text/html Expires: Tue, 05 Sep 2000 17:38:18
GMT Set-Cookie: phoneinfo=QUALIFYORNOT=no&=&=&=&=&=; expires=Wed, 06-Sep-2000
07:00:00 GMT; domain=.telocity.com; path=/ Set-Cookie:
carrierinfo=CARENUMBER=1%2D888%2D809%2D6628&=640+Kbps&=90+Kbps&=ADSL&=22&=BELLATLANTIC&=PA;
expires=Wed, 20-Sep-2000 07:00:00 GMT; domain=.telocity.com; path=/
Cache-control: private
Object Moved
This object may be found here.
Comment 35•24 years ago
Yep, another dup as I just verified by changing the pref. With pref set to 1.1
I get just what rjesup described and with it set to 1.0 everything works fine.
Note that it is not the "302 Object moved" that is causing the problem but
rather the status that preceded it. It's after the problem occurs that all
future traffic is misinterpreted as content instead of headers.
I didn't bother running under the sniffer to see what the preceding status was, but I'm willing to bet it was a "100 Continue".
Comment 36•24 years ago
*** Bug 50415 has been marked as a duplicate of this bug. ***
Comment 37 (Assignee) • 24 years ago
I'm not sure this patch is correct. There's already logic for 100 processing about 20 lines down. It seems to do the same thing; the question is why it doesn't work anymore....
Status: NEW → ASSIGNED
Comment 38•24 years ago
There's a big difference between the processing I'm adding and the processing 20 lines down. Namely, in the interim an attempt is made to read in the headers, and that attempt obtains a blank line. My patch determines early on that it is a Continue, absorbs any pending input to that point, and exits the on-data-available routine. That causes us to wait for the next data to come in, triggering another on-data-available.
Now my patch might not be the correct thing to do, but something like it needs to be done early on. The place where you process the Continue is too late.
Comment 39 (Assignee) • 24 years ago
Ok. The problem is actually in the generic logic of OnDataAvailable's header processing... when i_Length == 0 it still tries to parse the status line and fails. It could have caused various other side effects; we just got lucky that only Continue failed. I'm testing a two-line fix right now and will check it in shortly.
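The i_Length == 0 case described here can be illustrated with a hypothetical guard; this is a sketch of the idea only, not the actual fix that was checked in:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// If the parse routine runs when zero bytes are buffered, it "parses"
// an empty status line and fails. Checking the remaining length before
// attempting any parse avoids that. Assumes buf is NUL-terminated
// (true for the test strings below).
static bool parseStatusLine(const char* buf, unsigned length, int* code) {
    if (length == 0)
        return false;                      // nothing buffered: wait for more data
    const char* sp =
        static_cast<const char*>(std::memchr(buf, ' ', length));
    if (!sp)
        return false;                      // incomplete status line
    *code = std::atoi(sp + 1);             // e.g. 200 from "HTTP/1.1 200 OK"
    return true;
}
```

With the guard, an empty OnDataAvailable delivery simply returns and waits for the next one instead of corrupting the parser state.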
Comment 40 (Assignee) • 24 years ago
Fix on hand - awaiting review
Comment 41 (Assignee) • 24 years ago
Done
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Comment 42 (Assignee) • 24 years ago
*** Bug 35787 has been marked as a duplicate of this bug. ***
Comment 43•24 years ago
*** Bug 51347 has been marked as a duplicate of this bug. ***
Comment 44•24 years ago
*** Bug 51363 has been marked as a duplicate of this bug. ***
Comment 45 (Assignee) • 24 years ago
*** Bug 51307 has been marked as a duplicate of this bug. ***
Comment 46•24 years ago
Verified:
WinNT 2000091108
Linux rh6 2000091108
Mac os8.6 2000090820
Status: RESOLVED → VERIFIED
Comment 47•23 years ago
Mass removing self from CC list.
Comment 48•23 years ago
Now I feel dumb because I have to add myself back. Sorry for the spam.