Open Bug 459384 Opened 16 years ago Updated 2 years ago

History traversal back to POST result page locks file that was submitted

Categories

(Core :: Networking, defect, P3)

Platform: x86 macOS

People

(Reporter: bzbarsky, Unassigned, NeedInfo)

Details

(Whiteboard: [necko-backlog])

Attachments

(1 file, 1 obsolete file)

See bug 183689 comment 7. The basic problem is that we seek to the beginning of the POST data stream, as described in bug 183689 comment 37. Then HTTP doesn't actually read the data, and the file stays open. I guess we could just read all the data in this case, but maybe there's a better approach. I'll poke around a bit.
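For concreteness, here is a minimal C++ sketch of the pattern at issue, assuming Gecko's stream APIs; the helper name and its arguments are placeholders, not the actual docshell code. The rewind itself is cheap, but for a file-backed stream the Seek() reopens the underlying file, and if the load is then satisfied without reading the body, nothing ever closes that file again.

#include "nsCOMPtr.h"
#include "nsIInputStream.h"
#include "nsISeekableStream.h"
#include "nsIUploadChannel.h"

// Illustrative sketch: rewind the saved POST body so it can be replayed on
// history traversal, then hand it to the channel. If HTTP serves the result
// from cache and never reads the stream, the file reopened by Seek() stays
// open until the stream object itself is destroyed.
static void RewindAndAttachPostData(nsIInputStream* aPostData,
                                    nsIUploadChannel* aUploadChannel,
                                    const nsACString& aContentType,
                                    int64_t aContentLength) {
  nsCOMPtr<nsISeekableStream> seekable = do_QueryInterface(aPostData);
  if (seekable) {
    seekable->Seek(nsISeekableStream::NS_SEEK_SET, 0);
  }
  aUploadChannel->SetUploadStream(aPostData, aContentType, aContentLength);
}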
So I don't see anything in the stream APIs that would let us do this... we want to close the file stream but NOT the multiplex and buffered streams wrapping it. Any ideas? I suppose we could add an interface that allows closing underlying file streams and have multiplex and buffered streams implement it. I really don't want to be reading all the POST data just to get the file we don't actually need properly closed. :( Another option, I guess, is to move the seek from docshell into necko so we don't reopen the file until we really need it.
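To make that interface idea concrete, a hypothetical C++ sketch follows; the name nsICloseableFileBackedStream and its method are invented for illustration (Gecko's real nsIFileInputStream exposes CLOSE_ON_EOF and REOPEN_ON_REWIND behavior flags in a related spirit). The point is that wrapper streams forward the call inward while staying usable themselves.

#include "nsISupports.h"

// Hypothetical interface, invented for illustration. A plain file stream
// would drop its OS file handle and reopen the file on the next Seek() or
// Read(); multiplex and buffered streams would forward the call to their
// inner stream(s); non-file streams would treat it as a no-op.
class nsICloseableFileBackedStream : public nsISupports {
 public:
  // Release only the underlying file handle; the stream object itself
  // remains valid.
  NS_IMETHOD CloseUnderlyingFile() = 0;
};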
Isn't this backwards? Shouldn't this bug be closed as a duplicate of the earlier bug 183689?
The age of the bugs isn't the only consideration when choosing which bug to mark as the duplicate. The value of the comments, the existence of a patch or patches, and the presence of the right people on the CC list are much more important.
And clarity of the bug, which was the main reason for the dup direction here.
Assignee: nobody → bzbarsky
Priority: -- → P2
Priority: P2 → P3
Attached patch Patch (obsolete) (deleted) — Splinter Review
This is the solution we discussed on IRC. I can't seem to reproduce this, so I can't really tell whether or not it works :-|
Attachment #485489 - Flags: review?(bzbarsky)
Attached patch Patch (deleted) — Splinter Review
hg qref
Attachment #485489 - Attachment is obsolete: true
Attachment #485490 - Flags: review?(bzbarsky)
Attachment #485489 - Flags: review?(bzbarsky)
Comment on attachment 485490 [details] [diff] [review]
Patch

Sorry for the lag here... This is wrong on at least three counts:

1) In the mUploadStreamHasHeaders == true case, this is setting the wrong Content-Length.

2) Doing the rewind/etc. in AsyncOpen could still leave us in this bug's situation if we get a cache hit and just read from the cache. We need to push that off to the point where we'll actually use mUploadStream.

3) For the file input stream, using the cached last return of Available() doesn't make that much sense to me... Wouldn't we want to return a cached file size instead?
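A rough C++ sketch of what point 2 asks for, against Gecko's stream APIs; the helper name is invented, and the real change would live in nsHttpChannel's transaction setup rather than in a free function. The idea is simply that the rewind happens only on the code path that will actually transmit the body.

#include "nsCOMPtr.h"
#include "nsDebug.h"
#include "nsIInputStream.h"
#include "nsISeekableStream.h"

// Illustrative helper: called when building the real network transaction,
// NOT from AsyncOpen(), so a load satisfied from the cache never seeks the
// upload stream and thus never reopens the file backing the POST body.
static nsresult EnsureUploadStreamRewound(nsIInputStream* aUploadStream) {
  if (!aUploadStream) {
    return NS_OK;  // no request body to send
  }
  nsCOMPtr<nsISeekableStream> seekable = do_QueryInterface(aUploadStream);
  if (seekable) {
    nsresult rv = seekable->Seek(nsISeekableStream::NS_SEEK_SET, 0);
    NS_ENSURE_SUCCESS(rv, rv);
  }
  return NS_OK;
}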
Attachment #485490 - Flags: review?(bzbarsky) → review-
Whiteboard: [necko-backlog]
Comment on Bug 459384 and Bug 183689

The issue still exists in Firefox 51.0.1. The file handle stays open until the browser is closed. It only happens when Firefox runs more than one process, e.g. when add-ons like Ghostery are active. Process Monitor shows that one Firefox process releases the file handle, but the other process still holds it.

To reproduce:
- Install the Ghostery add-on.
- Check that more than one Firefox process is running.
- Upload a file at http://validator.w3.org/#validate_by_upload
- Try to remove the file with the DEL command from cmd.
Please file a separate bug on the Ghostery issue. It's not related to this bug.
Flags: needinfo?(jan.schummers)
Ghostery was just an example of how to force Firefox to open a second process and use e10s. I filed a separate bug report here: https://bugzilla.mozilla.org/show_bug.cgi?id=1343509
Flags: needinfo?(jan.schummers)
I'm clearly not working on this... Necko's API just needs to get fixed. :( This needs a new Necko owner.
Assignee: bzbarsky → nobody
Flags: needinfo?(mcmanus)
Priority: P3 → P1
Priority: P1 → P3
Severity: normal → S3