Bug 181910 (Closed)
Opened 22 years ago · Closed 22 years ago
Click on link in zombie doc during "Transferring", onload won't fire for loaded document
Categories: SeaMonkey :: General, defect, P3
Tracking: Not tracked
Status: RESOLVED FIXED
Target Milestone: mozilla1.3alpha
People: Reporter: aaronlev; Assigned: aaronlev
Attachments (2 files, 1 obsolete file)
- (deleted), text/html
- (deleted), patch; rpotts: review+, jst: superreview+
If you click on a link to load a document, and then click a different link before the first document has finished loading (during the "Transferring" stage), the new document will load but its onload handler won't fire.
Updated by assignee • 22 years ago
Status: NEW → ASSIGNED
Priority: -- → P3
Target Milestone: --- → mozilla1.3alpha
Comment 1 by assignee • 22 years ago
This fixes the problem, but is it the best way?
Who knows enough about these issues to review?
Comment 2 by assignee • 22 years ago
To reproduce, click on "mba" and, during "Transferring", click on "Google". The caret won't automatically be in Google's edit box, as it would be if onload had fired.
Comment 3 by assignee • 22 years ago
An alternate fix might be to test whether the current document is a zombie (by checking whether a previous viewer is still active). If there is one, do the full Stop(STOP_ALL); otherwise just do the STOP_NETWORK as it currently does.
Updated by assignee • 22 years ago
Attachment #107389 - Flags: review?(rpotts)
Comment 4 • 22 years ago
Here's what's going on here...
Using the testcase, you click on the mba link and we start loading mba.com,
which for whatever reason seems like a really slow site. Before mba.com is done
loading you click on the link to google.com; now we'll go ahead and cancel all
loads that are currently part of the document's loadgroup, including
external scripts and so on from mba.com. When canceling the load of an external
script (which is the case I saw in the debugger using the testcase) we
asynchronously fire the script loader's OnStreamComplete() code with an error
code indicating that the load was canceled, this ends up telling the sink that
the script was loaded (or failed to load, rather) and the sink tells the parser
to continue parsing. The parser then continues feeding the already received data
(the data that came in while we were waiting for the script to load) and that
data ends up containing yet one more script tag, so we start loading it, and we
now of course add that load to the loadgroup that should now only contain the
requests for google.com. All the data from google.com ends up coming in, but the
script from mba.com still hasn't arrived, so we don't fire the onload handler
for google.com, and we won't do that until the script from mba.com actually
arrives and is removed from the loadgroup.
IOW, we *do* fire the onload handler, it's just significantly delayed by a
request that shouldn't have been added to the loadgroup in the first place.
I think this fix is "correct", since it ends up telling the parser of the
document whose load is canceled by clicking on the second link to
terminate, so the second script is never loaded from the data that came in over
the wire before the second link was clicked. I do, however, think that it would
be worth making this change take effect only when the current document is a
zombie document, as Aaron suggested.
Rick, it turns out that the presshell can't leave dummy requests in the
loadgroup as I thought, they're all guaranteed to be removed already, so no need
for that change.
I had a look at what's really happening here: the onload handler *does* fire,
even if you follow the steps lined out for the testcase in this bug. However,
the onload event does *not* fire as soon as all the loads that google.com
initiates are done; there's more stuff in the loadgroup that's left over from
the target of the initial link that was clicked on. That's the problem. The one
time I caught this in the debugger I saw a JS file from mba.com be the last
thing to be removed from the loadgroup.
Comment 5 • 22 years ago
Um, ignore the last paragraph in my last comment...
Comment 6 by assignee • 22 years ago
Same stop-content fix in nsDocShell::InternalLoad, but now it only applies when
we're in a zombie document (otherwise it does what it used to do).
Attachment #107389 - Attachment is obsolete: true
Updated by assignee • 22 years ago
Attachment #108499 - Flags: superreview?(jst)
Attachment #108499 - Flags: review?(rpotts)
Comment 7 • 22 years ago
Comment on attachment 108499 [details] [diff] [review]
Stops content only when in zombie
+ // Stop any current network activity.
+ // Also stop content if this is a zombie doc. otherwise
+ // the onload won't fire for the new document.
This comment is incorrect. The onload handler *will* fire, but it will be
delayed by other loads initiated in the background by the first document that
didn't fully load before the next load was initiated.
>+ rv = Stop(zombieViewer? (nsIWebNavigation::STOP_ALL) : (nsIWebNavigation::STOP_NETWORK));
For readability's sake I'd rather see this go in as:
+ if (zombieViewer) {
+ rv = Stop(nsIWebNavigation::STOP_ALL);
+ } else {
+ rv = Stop(nsIWebNavigation::STOP_NETWORK);
+ }
Other than that, sr=jst
Attachment #108499 - Flags: superreview?(jst) → superreview+
Updated by assignee • 22 years ago
Attachment #107389 - Flags: review?(rpotts)
Updated • 22 years ago
Attachment #108499 - Flags: review?(rpotts) → review+
Comment 8 by assignee • 22 years ago
fixed
Status: ASSIGNED → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
Updated • 20 years ago
Product: Browser → Seamonkey