Closed Bug 1622451 Opened 5 years ago Closed 4 years ago

New Firefox Release 74.0 made a function not work with Cisco WebVPN

Categories

(Core :: DOM: Service Workers, defect, P2)

74 Branch
Desktop
All
defect

Tracking


VERIFIED FIXED
mozilla78
Tracking Status
firefox-esr68 --- unaffected
firefox74 --- wontfix
firefox75 + wontfix
firefox76 + wontfix
firefox77 + wontfix
firefox78 + verified

People

(Reporter: hli0102, Assigned: perry)

References

(Regression)

Details

(Keywords: regression, Whiteboard: [wfh])

Attachments

(7 files, 2 obsolete files)

User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0

Steps to reproduce:

  • Use Cisco WebVPN to access our official web application; login to WebVPN, then login to web app;
  • Use a link or button to bring the pop-up window for adding a PDF file attachment;
  • On the pop-up window, browse/select the file from the local drive, enter the required data fields on the window, click "Add Attachment" button;

Actual results:

  • The button didn't respond at all; nothing happened. Usually the window would close after the file was attached.

Expected results:

  • The button should respond to the click, and the pop-up window would be closed after the PDF file was attached;
  • We tried in Firefox 72.0.2, using WebVPN, the same function worked fine;
  • Application is a Java based web app, using Struts framework, using JavaScript, etc.
  • For the same app, when not using WebVPN and just going to the website directly, the same function worked fine too.

So we believe the issue was introduced in Firefox Release 74.0: something in the combination of Firefox 74.0 and Cisco WebVPN is not working well.

If your development or support team could help resolve this issue, that would be great!

Thanks,
Jessica

Hi hli0102!
Thanks for taking the time to add this issue.
In order to perform a test, could you give us access to your official web application, or a similar test page, so we can reproduce the issue on our end?
Also, there are some add-ons for Cisco; are you using any of them, or do you just use your web app?

Flags: needinfo?(hli0102)

Hi Marcela,
Thank you so much for looking into this!
Our web application is related to restraining orders and is owned by the California government, so access is highly restricted. I will ask our business analyst whether it is possible to give you some access so you can at least reproduce the issue, and I will keep you updated.
Regarding add-ons for Cisco, the one I know of is the "Cisco SSL VPN Relay program": to use Cisco WebVPN (i.e. clientless SSL VPN), the laptop/desktop needs to have it installed. From time to time, when using IE (Internet Explorer), we had issues with the relay program; reinstalling it usually resolved them.
We never had issues with the "Cisco SSL VPN Relay program" when using Firefox. That's why Firefox has been our favorite: we knew that if IE had problems, we always had Firefox to fall back on.
Other than that, I don't think we use any other add-ons for Cisco.
Also, FYI: some of our users need Cisco WebVPN to access our web app, while others can log in to the web app directly without it.
In short, we never had any issues with Firefox until this release, 74.0.

Thanks!!

Hi Marcela,

We set up a test account for you, here is the detailed information:

Test website URL: (WebVPN access):

https://vpn.dev.courts.ca.gov/

Test user account:

Username: ccportest01
Password: JCCpass0420

You would need to enter the above credentials twice to log in to the web app.

After logging in, you will see the Actions menu on the left, two actions you could use to troubleshoot the issue:

"Add Quick Attach"
Click the menu item; on the page, click "Browse", select a PDF file, and click "Upload". On the next page, enter dummy data for the required fields (highlighted in YELLOW) and click the "Submit" button. You will encounter the issue: either a complaint that no file was uploaded, or no response at all.

If you try a second or third time, you might not even get through the first step: after you click the "Upload" button, the page just hangs/spins ...

"Add Order"
Click the menu item; on the page, enter dummy data for the required fields (highlighted in YELLOW). At the bottom of the page, click the "Add Attachment" button, click "Browse", select a PDF file, enter dummy data for the required fields, and click the "Add Attachment" button. You will encounter the issue: either a complaint that no file was uploaded, or no response at all.

Again, if you try a second or third time, the failure scenario might be different, but afterwards it is consistent: no response, or hanging ...

"Search Order" is not available to you; if you see an error, just use the "Back" button to return to the previous page.

Please feel free to let me know if you have any questions.

Thanks for your help! Really appreciate it.

Jessica

Attached video Bug 1622451 - Add Quick Attach.mp4 (deleted) β€”
Attached video Bug 1622451 - Add Order.mp4 (deleted) β€”
Attached image Bug 1622451 - Brower Console Error.png (deleted) β€”

(In reply to hli0102 from comment #3)

Hi Marcela,

"Add Quick Attach"
Click the menu item; on the page, click "Browse", select a PDF file, and click "Upload". On the next page, enter dummy data for the required fields (highlighted in YELLOW) and click the "Submit" button. You will encounter the issue: either a complaint that no file was uploaded, or no response at all.

If you try a second or third time, you might not even get through the first step: after you click the "Upload" button, the page just hangs/spins ...

I was not able to reach the "next page" after clicking the Upload button; the upload didn't respond.
Just to confirm: I opened the Browser Console and saw an error. Could you open the Browser Console and tell us whether it makes any sense to you?
You can get to it via the Hamburger menu > Web Developer > Browser Console.

"Add Order"
Click the menu item; on the page, enter dummy data for the required fields (highlighted in YELLOW). At the bottom of the page, click the "Add Attachment" button, click "Browse", select a PDF file, enter dummy data for the required fields, and click the "Add Attachment" button. You will encounter the issue: either a complaint that no file was uploaded, or no response at all.

Again, if you try a second or third time, the failure scenario might be different, but afterwards it is consistent: no response, or hanging ...

I was not able to attach a file; apparently, the button does not work.

Just to double-check, could you let us know whether these behaviors are the same as what you experienced? I've attached 3 screen recordings for validation.

Hi Marcela,

Thanks a lot for your efforts and time! Here are the responses to your questions and some additional info:

That's what I expected ('not able to reach the "next page"'); as I described earlier, the behavior differed a little from time to time. On your side, I believe your browser or network settings are different from mine, which is why you could not reach the next page even on the first try.

I opened the Browser Console, and the error(s) on my side were different from yours; again, I think this is because of browser or network settings/configurations. The errors didn't make any sense to me (please see the attached screenshot).

I usually use F12 > Debugger, when clicking "Add Attachment" button on the pop-up window for "Add Order", I got a TypeError:

"TypeError: window.opener.document.orderForm.currentUploadedImage is undefined"

That error is fine: we have other pop-up windows for adding different data (but not for uploading a file), and those "ADD" buttons work fine even with the above TypeError from JavaScript (please see the attached screenshot).

Yes, those behaviors are the same as I had experienced.

Also, I forgot to share this with you: we found that when using Firefox's "private mode", the functionality worked fine; we were able to upload the PDF file without any problem (Firefox 74.0 + WebVPN).

Just FYI, when trying to watch the second screen recording, I got "Video playback aborted due to a network error.", so I could not see the video.

Thanks.
Jessica

Update: After I submitted the new comments, the screen refreshed and the second screen recording could be played. I watched it, and yes, the behavior is the same as what I observed. Thanks.

Attached image Bug_1622451_Browser_Console_from_user.png (deleted) β€”
Flags: needinfo?(hli0102)
Attached image Bug_1622451_F12_Debugger_from_user.png (deleted) β€”

Since the result is the same as you mentioned, I will change the state to New, add Product and Component, and change the corresponding flags.

Status: UNCONFIRMED → NEW
Component: Untriaged → Networking
Ever confirmed: true
Product: Firefox → Core
Hardware: Unspecified → Desktop

Thanks, Marcela!

Thanks for the report.
Would you be able to help us track down which build introduced the bug using mozregression? https://mozilla.github.io/mozregression/

Flags: needinfo?(hli0102)

Hi Valentin,

Thanks for helping look into this!
I would be more than happy to help track down which build introduced the bug using mozregression. I took a look at the website and watched part of the video of how mozregression works. My question is:

In comment 3, I provided access to our application's test website (the info is below); with that information, would you be able to track it down on your side? Also, in my initial report, I mentioned that everything worked fine in Firefox 72.0.2, but in Firefox 74.0 that specific feature stopped working.

Test website URL: (WebVPN access):

https://vpn.dev.courts.ca.gov/

Test user account:

Username: ccportest01
Password: JCCpass0420

If it is still necessary for me to track it down on my side, I have no problem doing so; I just need to spend a little more time learning how.

Please feel free to let me know.

Thanks,
Jessica

I was able to reproduce the issue on Fx 74.0, the latest Nightly 76.0a1, and Fx 75.0b10, but not on Fx 68.6.0esr (Windows 10 64-bit), and I found this regression range using the mozregression tool:

Last good: 2019-10-07
First bad: 2019-10-08
Pushlog: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=3fa65bda1e506a314ea90d936f763c7e840ab98a&tochange=e1a65223d498aa0b8e3e4802d8267db2768073d9

Narrowed down to the following pushlog:
https://hg.mozilla.org/integration/autoland/pushloghtml?fromchange=edf2447d5c06005d0f7ab889eceb3b7e50bd90d9&tochange=e1a65223d498aa0b8e3e4802d8267db2768073d9

Not sure which is the culprit here though.

Has Regression Range: --- → yes
Has STR: --- → yes
OS: Unspecified → All

Hi Bogdan,
Thank you very much for reproducing the issue and helping provide regression using mozregression tool! Really appreciate it.
Thanks,
Jessica

Flags: needinfo?(hli0102)

(In reply to hli0102 from comment #17)

Hi Bogdan,
Thank you very much for reproducing the issue and helping provide regression using mozregression tool! Really appreciate it.
Thanks,
Jessica

No problem Jessica, happy to help. Valentin can you please take a look at the regression range I posted in comment 16?

Flags: needinfo?(valentin.gosu)

I really do not know what could have caused this; there are no Necko changes in the pushlog from comment 16.
I am also not sure what from that pushlog could cause this, so I will try DOM.
tt, is this maybe bug 1577311?

Flags: needinfo?(valentin.gosu) → needinfo?(ttung)
Component: Networking → DOM: Core & HTML

(In reply to Dragana Damjanovic [:dragana] from comment #19)

I really do not know what could have caused this; there are no Necko changes in the pushlog from comment 16.
I am also not sure what from that pushlog could cause this, so I will try DOM.
tt, is this maybe bug 1577311?

It doesn't look like this is related to bug 1577311 at this moment.

The commit related to bug 1577311 asks the blob holder to clean up the stream when it's about to be destructed or unlinked; otherwise, we would have a dangling pointer. If it were related, that would mean the old code used a dangling pointer...

(I don't know whom I should needinfo so that this bug can be tracked, so I'm just redirecting the request back)

Flags: needinfo?(ttung) → needinfo?(valentin.gosu)
Flags: needinfo?(valentin.gosu) → needinfo?(perry)
Regressed by: 1456995
Component: DOM: Core & HTML → DOM: Service Workers
Whiteboard: [wfh]

I've reproduced locally on Linux with a locally built opt release build (clang, -Og), and after a pernosco db-building hiccup we now have:
https://pernos.co/debug/svC9e_vsQ2JfaEVKJ99MSg/index.html

Investigating now, clearing perry's needinfo for now, but that doesn't mean he won't (get to) investigate too! :)

Assignee: nobody → bugmail
Status: NEW → ASSIGNED
Flags: needinfo?(perry)
Assignee: bugmail → perry
Priority: -- → P2

Hi Perry, any updates here?

Flags: needinfo?(perry)

I think I've identified the bug that's happening, so I'll be starting on writing a patch. I don't think the patch will be too big.

Flags: needinfo?(perry)

Baku, it looks like this bug is happening because we try to clone an nsFileInputStream via Request.clone(), but the class says that cloning isn't possible. Any ideas for a workaround?

Flags: needinfo?(amarchesini)

nsIFileInputStream is not cloneable in the content process because we block the opening of files there. Before suggesting workarounds, I have a few questions:

  1. I would like to know how a Response object obtains an nsIFileInputStream; asking because maybe the bug is elsewhere.
  2. Do you have a stack trace, or can you reduce the bug to a test?
  3. Do we have blobs involved? (I hope not :)

About possible workarounds, you can use NS_CloneInputStream(), but this uses a lot of memory, and I hope we can find a better solution.
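The same buffering trade-off exists in the web-streams world: a one-shot stream can be made readable by two consumers with ReadableStream.tee(), at the cost of buffering for whichever branch reads slower. A minimal, self-contained sketch, unrelated to Gecko internals, shown only to illustrate the memory cost of this class of workaround:

```javascript
// tee() splits a one-shot stream into two branches; the runtime must
// buffer whatever the slower branch has not yet consumed -- conceptually
// the same memory concern raised about NS_CloneInputStream() above.
async function teeDemo() {
  const source = new Response("request body").body; // a one-shot ReadableStream
  const [a, b] = source.tee();
  const textA = await new Response(a).text();
  const textB = await new Response(b).text();
  return [textA, textB]; // both consumers see the full body
}
```

Requires a runtime with the WHATWG Fetch/Streams globals (e.g. Node 18+ or a browser).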

Flags: needinfo?(amarchesini) → needinfo?(perry)
Attached file Bug 1622451 - modified test case that crashes (obsolete) (deleted) β€”

Yeah, it appears there are blobs involved. The conceptual STR is:

  1. Have a SW-controlled page with an HTML form with a file input that does a POST when submitted. Select a file and submit the form. This is where I see blobs.
  2. The SW clones the request and tries to do something with the clone's body. With diagnostic asserts disabled, the promise returned by whatever body-reading method is used, e.g. text(), never settles; with diagnostic asserts enabled, it crashes for me. The WIP patch here has a modified test that can reproduce the crash. (The crash here is a side effect of the underlying bug, not the bug itself.)

There's another stack trace in Andrew's comment 22, but it's fairly noisy because it's a recording of the STR from the bug description.

As for how the Response gets the nsIFileInputStream, the chain is:
Response -> (mRequest) InternalRequest -> (mBodyStream) nsMIMEInputStream -> (mStream) nsMultiplexInputStream -> (mStreams[1]->mStream) nsBufferedInputStream -> (mStream) nsFileInputStream. The final nsFileInputStream isn't cloneable, which makes the owning nsBufferedInputStream not cloneable, which in turn prevents the owning nsMultiplexInputStream from QI-ing to nsICloneableInputStream.
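The clone-then-read pattern from step 2 can be sketched in isolation. This is hypothetical illustrative code (made-up URL and string body), not the site's actual form submission; in the buggy builds, when the body was backed by the non-cloneable nsFileInputStream chain above, the text() promise on the clone never settled:

```javascript
// Hypothetical sketch of what a service worker does with an intercepted
// POST: clone the request, then read the clone's body. With a working
// (cloneable) body stream, both reads complete and agree.
async function cloneDemo() {
  const request = new Request("https://example.com/upload", {
    method: "POST",
    body: "field=value", // stands in for the multipart form body
    duplex: "half",      // required by Node's fetch when a body is provided
  });
  const clone = request.clone();             // what the SW does before re-fetching
  const cloneText = await clone.text();      // body-reading on the clone
  const originalText = await request.text(); // original stays readable too
  return cloneText === originalText && cloneText === "field=value";
}
```

Run in a runtime with the WHATWG Fetch globals (e.g. Node 18+).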

Flags: needinfo?(perry) → needinfo?(amarchesini)

Ok, I know what is happening here and how I would fix the problem, but it will require a significant change in the service-worker code.
Let me start with some good news: blobs are not involved \o/ The only blob we see is a File object in the HTMLInputElement, but when we create the FormData we take only its input stream.

I'll start far far away: ServiceWorkers are scripts that have to deal with network operations. Several network operations (POST requests) require data to be sent, and because of this, we have nsIInputStream objects involved. But not all the serviceWorker scripts need to read the data. Often, the "real" reading of the nsIInputStream data happens in the parent process.

For instance, in this bug, we have a page that does a fetch(). A serviceWorker intercepts the fetch(), obtains the nsIInputStream, and does a second fetch(). The final network operation happens in the parent process. Only the parent process (or the socket necko process, but let's ignore it) needs to read the data from the nsIInputStream.

Let's ignore the crash for a second and look at what we do with the nsIInputStream: we copy the data around way too many times:

  1. In the content process we have the first fetch, with the initial blob. The real data lives only in the parent process because we have IPCBlobInputStream \o/ - no data copied!
  2. The parent process sends the nsIInputStream to the serviceWorker -> first copy!
  3. The serviceWorker script does a clone(). Here we crash, but let's say we fix the crash -> second copy!
  4. By the way, each clone() will duplicate the data, so we can easily run out of memory.
  5. The serviceWorker calls fetch() and all the data is sent back to the parent process again -> third copy!

Clearly this approach doesn't scale!

And here the solution I want to suggest: Let's reuse IPCBlobInputStream for serviceWorkers too.
Step by step I would do this:

  1. take IPCBlobInputStream out of the IPCBlob serialization code
  2. rename IPCBlobInputStream to RemoteInputStream
  3. in MaybeDeserializeAndReserialize(), instead of cloning the stream, we can call: RemoteInputStreamUtils::Serialize(). https://searchfox.org/mozilla-central/rev/97cb0a90bd053de87cd1ab7646d5565809166bb1/dom/serviceworkers/FetchEventOpProxyParent.cpp#30

RemoteInputStreamUtils::Serialize is what currently is located here: https://searchfox.org/mozilla-central/rev/97cb0a90bd053de87cd1ab7646d5565809166bb1/dom/file/ipc/IPCBlobUtils.cpp#130-154

What's the benefit of doing this:

  1. reuse of existing code that fixes a similar issue.
  2. less memory allocated, faster browser
  3. no additional crashes in serviceWorker code.

Note that response.clone() is only 1 of the possible places where we clone the stream. With a bit of effort, I can trigger other crashes from other stream cloning.

If you like this plan, I would like to review the code. Perry, Andrew, thoughts?

Flags: needinfo?(perry)
Flags: needinfo?(bugmail)
Flags: needinfo?(amarchesini)

While I am not really in a position to judge it, this sounds like a very good cleanup plan. But does this also mean there is no way to have a dirty fix for step 3 ("the serviceWorker script does a clone()") first?

Implementing this should not require a lot of time; I suspect a couple of days of work should be sufficient. We can definitely do it for the next release.

Yes, that makes a lot of sense to me. On slack I was half-seriously proposing abusing the (IPC)Blob mechanism for the request body stream when serializing from parent-to-child since it already provides a mechanism for referencing existing things, but concerned about any defensive pipe tee-ings such a naive approach might entail. I knew you'd have the wiser, less terrifying proposal though! ;)

Unfortunately it doesn't seem like there's really a good mitigation in this exact situation other than the correct fix. The smallest mitigation would be to just bypass the serviceworker if we know we can't clone a navigation request, but that's a pretty extreme mitigation.

Flags: needinfo?(bugmail)

OK, then let's go for it! Thanks everybody

Doesn't sound like the kind of thing we can realistically fix for Fx76 at this point, but if there's any kind of quick and dirty wallpaper fix we could hack together to get this working, I'd be open to considering it still.

I'll see if there's a hotfix possible, it's currently unclear but I will post an update later today.

Flags: needinfo?(perry)

(In reply to Perry Jiang [:perry] from comment #35)

I'll see if there's a hotfix possible, it's currently unclear but I will post an update later today.

Any update? :)

Flags: needinfo?(perry)

There wasn't an obvious hotfix and the patch for the current bug has exposed another issue :\ baku says he will take a look at the current fix.

Flags: needinfo?(perry)

So, as I understand it, the summary of the "new" issue when trying to use PIPCBlobInputStream is:

  • the nsFileInputStream needs to be serialized across the PBackground boundary in the parent, which results in a child -> parent send of the data
  • sending data using PIPCBlobInputStream currently only works in the parent -> child direction
  • as a result, the nsFileInputStream has to be sent as an IPCStream in the parent (and not converted to an IPCBlobInputStream, for the above reason)
  • this IPCStream is then deserialized into a new nsFileInputStream on the background thread, but this new instance only gets an FD, not an mFile member variable, and therefore isn't cloneable
  • this new nsFileInputStream instance is part of an nsMultiplexInputStream, which is the underlying stream for the PIPCBlobInputStream used to finally send the data to the content process; the nsMultiplexInputStream also isn't cloneable
  • when this underlying stream is retrieved from IPCBlobInputStreamStorage, an nsPipeInputStream is returned because the underlying stream isn't cloneable (see the link to nsStreamUtils.cpp below)
  • nsPipeInputStream does not implement nsIAsyncInputStreamLength, so calls to InputStreamLengthHelper::GetAsyncLength with it fail:
    https://searchfox.org/mozilla-central/source/xpcom/io/nsStreamUtils.cpp#837
Flags: needinfo?(amarchesini)
  • the nsFileInputStream needs to be serialized across the PBackground boundary in the parent, which results in a child -> parent sending of data

It's interesting to know why this is needed. Currently it's needed because we use AutoIPCStream in InternalRequest::ToIPC(), but there is no reason to do so, because:

  1. InternalRequest::ToIPC() is called only in the parent process.
  2. It's a template, but we use it only with the PBackgroundChild manager; we can remove the template.
  3. The serialized IPCInternalRequest is sent to the parent process, and there we deserialize the nsFileInputStream.

We can clean up the code a lot here. Here is how I would proceed:

  1. The IPCInternalRequest struct should have an extra body ID: nsID? bodyStreamId

  2. InternalRequest::ToIPC() uses IPCBlobInputStreamStorage to keep the input stream alive. You can do something like:

  if (mBodyStream) {
    nsID id;
    nsContentUtils::GenerateUUIDInPlace(id);
    IPCBlobInputStreamStorage::Get()->AddStream(mBodyStream, id, mBodyLength, 0);
    aIPCRequest->bodyStreamId().emplace(id);
  }

  3. The InternalRequest CTOR can retrieve the stream from IPCBlobInputStreamStorage:

  if (aIPCRequest.bodyStreamId().isSome()) {
    MOZ_ASSERT(We are in the parent process);
    IPCBlobInputStreamStorage::Get()->GetStream(aIPCRequest.bodyStreamId().value(), 0, mBodyLength, getter_AddRefs(mBodyStream));
  }

  4. In FetchEventOpProxyParent::Create() you can retrieve the stream using the bodyStreamId, create an IPCBlobInputStreamParent actor, and add it to the copyRequest struct.
Flags: needinfo?(amarchesini)
Attached patch experiments.patch (deleted) β€” β€” Splinter Review

Attached is a patch. It works, but it's a mess... It's your patch plus bug 1633731, without comments, and fully untested. But it passes your mochitest.

Flags: needinfo?(perry)

Initially, IPCInternal{Request,Response} contained IPCStreams, which resulted
in unnecessary copying when sending the objects over IPC. The patch
makes these streams one of:

  1. ParentToParentStream (just a UUID)
  2. ParentToChildStream (a PIPCBlobInputStream actor, acting as a handle)
  3. ChildToParentStream (a real IPCStream)

These three types are union-ed together by the BodyStreamVariant IPDL structure.
This structure replaces the IPCStream members in IPCInternal{Request,Response}
so that, depending on the particular IPDL protocol, we can avoid cloning streams
and just pass handles/IDs instead.

As a side effect, this makes file-backed Response objects cloneable. Initially,
these Responses would be backed by an nsFileInputStream, which is not cloneable
outside the parent process. They are now backed by IPCBlobInputStreams, which
are cloneable.

One thing that's not really satisfactory (IMO), is the manual management of
IPCBlobInputStreamStorage so that no streams are leaked, e.g. if we store a
stream in the IPCBlobInputStreamStorage but fail to send an IPC message and
therefore fail to remove the stream from storage on the other side of the IPC
boundary (only parent-to-parent in this case).
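The three-way union described above can be sketched in IPDL-style pseudocode. This is illustrative only; the exact definitions in mozilla-central may differ, so treat the member names as assumptions taken from the commit message:

```
// Sketch of the BodyStreamVariant idea (not verbatim IPDL).
union BodyStreamVariant
{
  ParentToParentStream;  // carries just a UUID keying into stream storage
  ParentToChildStream;   // carries a PIPCBlobInputStream actor as a handle
  ChildToParentStream;   // carries a real IPCStream (an actual data copy)
};
```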

Pushed by pjiang@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/e1be97ce43d1
minimize stream copying across IPC boundaries r=asuth,baku

Backed out for failures on browser_download_canceled.js

backout: https://hg.mozilla.org/integration/autoland/rev/75e6fd2c528715783aba08945363526abf82daa9

push: https://treeherder.mozilla.org/#/jobs?repo=autoland&searchStr=browser-chrome&revision=e1be97ce43d157d60778b4d267175b64547df89d&selectedTaskRun=LfylR9GNSNe5LlV0AbP5Bw-0

failure log: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=300320186&repo=autoland&lineNumber=12417

[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - TEST-PASS | dom/serviceworkers/test/browser_download_canceled.js | canceled download -
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - wait for the sw-passthrough-download stream to close.
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - Buffered messages finished
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - TEST-UNEXPECTED-FAIL | dom/serviceworkers/test/browser_download_canceled.js | Ensure the stream canceled instead of timing out. - Got timeout, expected canceled
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - Stack trace:
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:test_is:1303
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochitests/content/browser/dom/serviceworkers/test/browser_download_canceled.js:performCanceledDownload:100
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochitests/content/browser/dom/serviceworkers/test/browser_download_canceled.js:interruptedDownloads:144
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:Tester_execTest/<:1045
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:Tester_execTest:1080
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:nextTest/<:910
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/tests/SimpleTest/SimpleTest.js:SimpleTest.waitForFocus/waitForFocusInner/focusedOrLoaded/<:918
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - Cancellation reason: timeout after undefined ticks
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - watching for download popup

I'm still working on a fix for the backout - seems like there's something fundamentally wrong with the initial patch that's causing the failure.

Flags: needinfo?(perry)
Pushed by pjiang@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/fb8e753340e4
minimize stream copying across IPC boundaries r=asuth,baku
Pushed by pjiang@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/6ad5f406fb73
minimize stream copying across IPC boundaries r=asuth,baku
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla78
Flags: needinfo?(perry)
Flags: qe-verify+

It seems that the credentials provided in comment 15 no longer work. Can you please verify whether the issue is fixed using the latest beta build from here? Thank you!

Flags: needinfo?(hli0102)

Hi,
For the test website URL: (WebVPN access)

https://vpn.dev.courts.ca.gov/

We reset the password for the following test user account:

Username: ccportest01
Password: JCCpass0615

Please try to log in to the website again; if it still does not work, please let me know.

We will verify when we get the official Release 78.0 on our side. Thanks.

Flags: needinfo?(hli0102)

(In reply to hli0102 from comment #52)

Hi,
For the test website URL: (WebVPN access)

https://vpn.dev.courts.ca.gov/

We reset the password for the following test user account:

Username: ccportest01
Password: JCCpass0615

Please try to log in to the website again; if it still does not work, please let me know.

We will verify when we get the official Release 78.0 on our side. Thanks.

I'm sorry but the credentials still don't work. Thank you for providing a quick answer!

We fixed the access issue, the same credentials should work now, please try again. Thanks.

(In reply to hli0102 from comment #54)

We fixed the access issue, the same credentials should work now, please try again. Thanks.

Thank you again for your response.

I was able to reproduce the issue using Firefox 76.0a1 (20200313214616) on Windows 10x64 and steps from comment 3 using Add Quick Attach and Add Order.
Using the same steps, I verified the fix with Firefox 78.0b9 (20200619002543) on Windows 10x64, macOS 10.12, and Ubuntu 18.04. The .pdf file is successfully uploaded and no errors related to this issue appear in the browser console.

Feel free to notify us if there are any problems related to this when using the official Release 78.0.

Status: RESOLVED → VERIFIED
Flags: qe-verify+

Thanks for verifying that the issue was fixed! We will try it when the official Release 78.0 is available, and will share the update with you. Thanks.

Attachment #9140870 - Attachment is obsolete: true
Attachment #9143709 - Attachment is obsolete: true

Hi,
We downloaded the official Release 78.0.1 and tested the fix in our application, the fix is working fine so far.
Thank you very much for all your efforts on fixing this issue! It is greatly appreciated!
Thanks.

Blocks: 1604719