New Firefox Release 74.0 made a function not work with Cisco WebVPN
Categories: Core :: DOM: Service Workers, defect, P2
People: Reporter: hli0102; Assigned: perry
References: Regression
Keywords: regression; Whiteboard: [wfh]
Attachments: 7 files (2 obsolete)
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0
Steps to reproduce:
- Use Cisco WebVPN to access our official web application; login to WebVPN, then login to web app;
- Use a link or button to bring the pop-up window for adding a PDF file attachment;
- On the pop-up window, browse/select the file from the local drive, enter the required data fields on the window, click "Add Attachment" button;
Actual results:
- The button didn't respond at all - nothing happened, whereas usually the window would close after the file was attached.
Expected results:
- The button should respond to the click, and the pop-up window would be closed after the PDF file was attached;
- We tried Firefox 72.0.2 with WebVPN, and the same function worked fine;
- Application is a Java based web app, using Struts framework, using JavaScript, etc.
- For the same app, when not using WebVPN and just going to the website directly, the same function worked fine too.
So we believe the issue was introduced by Firefox Release 74.0: something is not working well in the combination of Firefox Release 74.0 and Cisco WebVPN.
If your development or support team could help resolve this issue, that would be great!
Thanks,
Jessica
Comment 1•5 years ago
Hi hli0102!
Thanks for taking the time to add this issue.
In order to perform a test, would it be possible to give us some access to your official web application, or a similar test page, where we can reproduce the issue on our end?
Also, there are some add-ons for Cisco; are you using any of them, or just your web app?
Hi Marcela,
Thank you so much for looking into this!
Our web application is restraining-order related and owned by the California government, so access is highly restricted. I will ask our business analyst whether it is possible to give you some access so you can at least reproduce the issue, and I will keep you updated.
Regarding add-ons for Cisco, the one I know of is the "Cisco SSL VPN Relay program": in order to use Cisco WebVPN (i.e. clientless SSL VPN), the laptop/desktop needs to have it installed. From time to time, when using IE (Internet Explorer), we had issues with the "Cisco SSL VPN Relay program"; re-installing it usually resolved them.
We never had issues with the "Cisco SSL VPN Relay program" when using Firefox; that's why Firefox has been our favorite - we knew that if IE had problems, we could always use Firefox.
Other than that, I think we do not use other Add-ons for Cisco.
Also FYI, we have users who need to use Cisco WebVPN to access our web app, we also have users who can login to our web app directly without using Cisco WebVPN.
In short, we never had any issues with Firefox until this time - Release 74.0.
Thanks!!
Hi Marcela,
We set up a test account for you, here is the detailed information:
Test website URL: (WebVPN access):
https://vpn.dev.courts.ca.gov/
Test user account:
Username: ccportest01
Password: JCCpass0420
You would need to enter the above credentials twice to log in to the web app.
After logging in, you will see the Actions menu on the left, two actions you could use to troubleshoot the issue:
"Add Quick Attach"
Click the menu item, on the page, click "Browse", select a PDF file, click "Upload"; on the next page, enter dummy data for required fields (highlighted in YELLOW), click "Submit" button, you will encounter the issue - complaining no file was uploaded or no response at all.
If you try the second time, third time, you might not even be able to get through the first step - after you click "Upload" button, the page is hanging/spinning ...
"Add Order"
Click the menu item, on the page, enter dummy data for required fields (highlighted in YELLOW), at the bottom of the page, click "Add Attachment" button, click "Browse", select a PDF file, enter dummy data for required fields, click "Add Attachment" button, you will encounter the issue - complaining no file was uploaded or no response at all.
Again, if you try the second time, third time, the issue scenario might be different, but afterwards, it is consistent - no response or hanging ...
"Search Order" is not available to you, if you see an error, just use "Back" button to go back to the previous page.
Please feel free to let me know if you have any questions.
Thanks for your help! Really appreciate it.
Jessica
Comment 4•5 years ago
Comment 5•5 years ago
Comment 6•5 years ago
Comment 7•5 years ago
(In reply to hli0102 from comment #3)
Hi Marcela,
"Add Quick Attach"
Click the menu item, on the page, click "Browse", select a PDF file, click "Upload"; on the next page, enter dummy data for required fields (highlighted in YELLOW), click "Submit" button, you will encounter the issue - complaining no file was uploaded or no response at all.
If you try the second time, third time, you might not even be able to get through the first step - after you click "Upload" button, the page is hanging/spinning ...
I was not able to reach the "next page" after clicking the Upload button. Upload no response.
Just to confirm: I opened the Browser Console and saw an error. Could you open the Browser Console and tell us whether it makes any sense to you?
You can open it via Hamburger menu > Web Developer > Browser Console.
"Add Order"
Click the menu item, on the page, enter dummy data for required fields (highlighted in YELLOW), at the bottom of the page, click "Add Attachment" button, click "Browse", select a PDF file, enter dummy data for required fields, click "Add Attachment" button, you will encounter the issue - complaining no file was uploaded or no response at all.
Again, if you try the second time, third time, the issue scenario might be different, but afterwards, it is consistent - no response or hanging ...
I was not able to attach a file; apparently, the button does not work.
Just to double-check, could you let us know whether these behaviors are the same as you experienced? I've attached 3 screen recordings for validation.
Hi Marcela,
Thanks a lot for your efforts and time! Here are the responses to your questions and some additional info:
That's something I expected - 'not able to reach the "next page"' - as I described earlier, the behavior was a little different from time to time. On your side, I believe your browser settings or something else are different from mine; that's why you could not get to the next page even on the first try;
I opened the Browser Console, and the error(s) on my side were different from yours; again, I think this is because of browser settings or network settings/configurations. The errors didn't make any sense to me; (please see the attached screenshot.)
I usually use F12 > Debugger, when clicking "Add Attachment" button on the pop-up window for "Add Order", I got a TypeError:
"TypeError: window.opener.document.orderForm.currentUploadedImage is undefined"
That error is fine - we have other pop-up windows for adding different data (but not uploading a file), those "ADD" buttons work fine even with the above TypeError from JavaScript; (please see the attached screenshot.)
Yes, those behaviors are the same as I had experienced.
Also, I forgot to share this with you - we found out that when using "private mode" in Firefox, the functionality worked fine - we were able to upload a PDF file without any problem (Firefox 74.0 + WebVPN).
Just FYI, when trying to view the second screen recording, I got "Video playback aborted due to a network error.", so I could not watch the video.
Thanks.
Jessica
Update: After I submitted the new comments, the screen refreshed, and the second screen recording could be played. I watched it, and yes, the behavior is the same as what I observed. Thanks.
Comment 11•5 years ago (Reporter)
Comment 12•5 years ago
Since the result is the same as you mentioned, I will change the state to New, add Product and Component, and change the corresponding flags.
Comment 14•5 years ago
Thanks for the report.
Would you be able to help us track down which build introduced the bug using mozregression? https://mozilla.github.io/mozregression/
Comment 15•5 years ago (Reporter)
Hi Valentin,
Thanks for helping look into this!
I would be more than happy to help you "track down which build introduced the bug using mozregression". I took a look at the website and watched part of the video of how mozregression worked. My question is:
In Comment 3, I provided access to our application's test website (the info is below); with that information, would you be able to track it down on your side? Also, in my initial report, I mentioned that everything worked fine in Firefox 72.0.2, but in Firefox 74.0 this specific feature stopped working.
Test website URL: (WebVPN access):
https://vpn.dev.courts.ca.gov/
Test user account:
Username: ccportest01
Password: JCCpass0420
If it is still necessary for me to track it down on my side, I have no problem doing so; I just need to spend a little more time learning how to do it.
Please feel free to let me know.
Thanks,
Jessica
Comment 16•5 years ago
I was able to reproduce the issue on Fx 74.0, the latest Nightly 76.0a1, and Fx 75.0b10, but not on Fx 68.6.0esr (Windows 10 64-bit), and I found this regression using the mozregression tool:
Last good: 2019-10-07
First bad: 2019-10-08
Pushlog: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=3fa65bda1e506a314ea90d936f763c7e840ab98a&tochange=e1a65223d498aa0b8e3e4802d8267db2768073d9
Narrowed down to the following pushlog:
https://hg.mozilla.org/integration/autoland/pushloghtml?fromchange=edf2447d5c06005d0f7ab889eceb3b7e50bd90d9&tochange=e1a65223d498aa0b8e3e4802d8267db2768073d9
Not sure which is the culprit here though.
Comment 17•5 years ago (Reporter)
Hi Bogdan,
Thank you very much for reproducing the issue and finding the regression range with the mozregression tool! Really appreciate it.
Thanks,
Jessica
Comment 18•5 years ago
(In reply to hli0102 from comment #17)
Hi Bogdan,
Thank you very much for reproducing the issue and finding the regression range with the mozregression tool! Really appreciate it.
Thanks,
Jessica
No problem, Jessica, happy to help. Valentin, can you please take a look at the regression range I posted in comment 16?
Comment 19•5 years ago
I really do not know what could have caused this. There are no necko changes in the pushlog from comment 16.
I am also not sure what could cause this from that pushlog, so I will try with DOM.
tt, is this maybe bug 1577311?
Comment 20•5 years ago
(In reply to Dragana Damjanovic [:dragana] from comment #19)
I really do not know what could have caused this. There are no necko changes in the pushlog from comment 16.
I am also not sure what could cause this from that pushlog, so I will try with DOM.
tt, is this maybe bug 1577311?
It doesn't look like this is related to bug 1577311 at this moment.
The commit related to bug 1577311 asks the blob holder to clean up the stream when it's about to be destructed or unlinked; otherwise, we would have a dangling pointer. If it's related, that would mean the old code uses a dangling pointer...
(I don't know who I should needinfo so that this bug can be tracked, so I just redirected the request back.)
Comment 21•5 years ago
I redid the mozregression, and it pointed me to https://hg.mozilla.org/integration/autoland/pushloghtml?fromchange=720c1e5a8dd3f5bda4ae32137d1c624a1ad55301&tochange=be9a6289486a6f366e431782b84a0c0633f8fec2
Comment 22•5 years ago
I've reproduced this locally on Linux, using a locally built opt release build with clang and -Og, and after a Pernosco db-building hiccup, we now have:
https://pernos.co/debug/svC9e_vsQ2JfaEVKJ99MSg/index.html
Investigating now, clearing perry's needinfo for now, but that doesn't mean he won't (get to) investigate too! :)
Comment 24•5 years ago (Assignee)
I think I've identified the bug, so I'll start writing a patch. I don't think the patch will be too big.
Comment 25•5 years ago (Assignee)
Baku, it looks like this bug is happening because we try to clone an nsFileInputStream via Request.clone(), but the class says that cloning isn't possible. Any ideas for a workaround?
Comment 26•5 years ago
nsIFileInputStream is not cloneable in the content process because we block the opening of files there. Before suggesting workarounds, I have a few questions:
- I would like to know how a Response object obtains an nsIFileInputStream. Asking because maybe the bug is elsewhere.
- Do you have a stack trace, or can you reduce the bug to a test?
- Are there blobs involved? (I hope not :)
About possible workarounds, you can use NS_CloneInputStream(), but this uses a lot of memory, and I hope we can find a better solution.
Comment 27•5 years ago (Assignee)
Comment 28•5 years ago (Assignee)
Yeah, it appears there are blobs involved. The conceptual STR is:
- Have a SW-controlled page with an HTML form with a file input that does a POST when submitted. Select a file and submit the form. This is where I see blobs.
- The SW clones the request and tries to do something with the clone's body. When diagnostic asserts are disabled, the promise returned by whatever body-reading method is used, e.g. text(), never settles. With diagnostic asserts enabled, it crashes for me. The WIP patch here has a modified test that can reproduce the crash. (The crash is a side effect of the underlying bug, not the bug itself.)
There's another stack trace in Andrew's comment 22, but it's decently noisy because it's a recording of the STR in the bug description.
As for how the Response gets the nsIFileInputStream, it's
Response -> (mRequest) InternalRequest -> (mBodyStream) nsMIMEInputStream -> (mStream) nsMultiplexInputStream -> (mStreams[1]->mStream) nsBufferedInputStream -> (mStream) nsFileInputStream. The final nsFileInputStream isn't cloneable, which makes the owning nsBufferedInputStream not cloneable, which prevents the owning nsMultiplexInputStream from QI-ing to nsICloneableInputStream.
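The service-worker side of the STR above can be sketched with web-platform objects alone (the handler name is mine; Node 18+ ships the same WHATWG Request/Response globals, which makes the sketch runnable outside a browser):

```javascript
// Sketch of what the intercepting service worker does with a POST body:
// clone the request, then read the clone's body. In the affected builds,
// the promise returned by clone.text() never settled when the body
// wrapped a non-cloneable nsFileInputStream.
async function handleLikeTheServiceWorker(request) {
  const clone = request.clone(); // clones the body stream too
  const body = await clone.text(); // this is the step that hung
  return new Response(`echo: ${body}`);
}

// Usage: simulate the form POST from the STR with a simple string body.
const req = new Request("https://example.test/upload", {
  method: "POST",
  body: "fake file bytes",
});
handleLikeTheServiceWorker(req)
  .then((res) => res.text())
  .then((text) => console.log(text)); // "echo: fake file bytes"
```

With a string body the clone is cheap; the bug only bit when the body chain bottomed out in a file-backed stream, as the chain above shows.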
Comment 29•5 years ago
Ok, I know what is happening here and how I would fix the problem, but it will require some important changes in the service-worker code.
Let me start with some good news: blobs are not involved \o/ The only blob we see is a file object in the HTMLInputElement. But when we create the FormData, we take its inputStream only.
I'll start far far away: ServiceWorkers are scripts that have to deal with network operations. Several network operations (POST requests) require data to be sent, and because of this, we have nsIInputStream objects involved. But not all the serviceWorker scripts need to read the data. Often, the "real" reading of the nsIInputStream data happens in the parent process.
For instance, in this bug, we have a page that does a fetch(). A serviceWorker intercepts the fetch(), obtains the nsIInputStream, and does a second fetch(). The final network operation happens in the parent process. Only the parent process (or the socket necko process, but let's ignore it) needs to read the data from the nsIInputStream.
Let's ignore the crash for 1 second and let's see what we do with nsIInputStream: we copy data around way too many times:
- in the content process we have the first fetch, with the initial blob. The real data is only in the parent process because we have IPCBlobInputStream \o/ - no data copied!
- The parent process sends the nsIInputStream to the serviceWorker -> first copy!
- the serviceWorker script does a clone(). Here we crash, but let's say we fix the crash. -> second copy!
- btw, each clone() will duplicate data. And we can easily run out of memory.
- the serviceWorker calls fetch() and all the data is sent back to the parent process again -> third copy!
Clearly this approach doesn't scale!
And here the solution I want to suggest: Let's reuse IPCBlobInputStream for serviceWorkers too.
Step by step I would do this:
- take IPCBlobInputStream out of the IPCBlob serialization code
- rename IPCBlobInputStream to RemoteInputStream
- in MaybeDeserializeAndReserialize(), instead of cloning the stream, we can call: RemoteInputStreamUtils::Serialize(). https://searchfox.org/mozilla-central/rev/97cb0a90bd053de87cd1ab7646d5565809166bb1/dom/serviceworkers/FetchEventOpProxyParent.cpp#30
RemoteInputStreamUtils::Serialize is what currently is located here: https://searchfox.org/mozilla-central/rev/97cb0a90bd053de87cd1ab7646d5565809166bb1/dom/file/ipc/IPCBlobUtils.cpp#130-154
What's the benefit of doing this:
- reuse of existing code that fixes a similar issue.
- less memory allocated, faster browser
- no additional crashes in serviceWorker code.
Note that response.clone() is only 1 of the possible places where we clone the stream. With a bit of effort, I can trigger other crashes from other stream cloning.
If you like this plan, I would like to review the code. Perry, Andrew, thoughts?
Comment 30•5 years ago
While I am not really able to judge it, this sounds like a very good cleanup plan. But does this also mean there is no way to have a dirty fix for step 3, "the serviceWorker script does a clone()", first?
Comment 31•5 years ago
Implementing this should not require a lot of time; I suspect a couple of days of work should be sufficient. We can definitely do it for the next release.
Comment 32•5 years ago
Yes, that makes a lot of sense to me. On Slack I was half-seriously proposing abusing the (IPC)Blob mechanism for the request body stream when serializing from parent to child, since it already provides a mechanism for referencing existing things, but I was concerned about any defensive pipe tee-ings such a naive approach might entail. I knew you'd have the wiser, less terrifying proposal though! ;)
Unfortunately it doesn't seem like there's really a good mitigation in this exact situation other than the correct fix. The smallest mitigation would be to just bypass the serviceworker if we know we can't clone a navigation request, but that's a pretty extreme mitigation.
Comment 34•5 years ago
Doesn't sound like the kind of thing we can realistically fix for Fx76 at this point, but if there's any kind of quick and dirty wallpaper fix we could hack together to get this working, I'd be open to considering it still.
Comment 35•4 years ago (Assignee)
I'll see if there's a hotfix possible, it's currently unclear but I will post an update later today.
Comment 36•4 years ago
(In reply to Perry Jiang [:perry] from comment #35)
I'll see if there's a hotfix possible, it's currently unclear but I will post an update later today.
Any update? :)
Comment 37•4 years ago (Assignee)
Comment 38•4 years ago (Assignee)
There wasn't an obvious hotfix and the patch for the current bug has exposed another issue :\ baku says he will take a look at the current fix.
Comment 39•4 years ago (Assignee)
So as I understand it, the summary of the "new" issue when trying to use PIPCBlobInputStream is:
- the nsFileInputStream needs to be serialized across the PBackground boundary in the parent, which results in a child -> parent sending of data
- sending data using PIPCBlobInputStream currently only works for parent -> child sending of data
- as a result, the nsFileInputStream has to be sent as an IPCStream (and not converted to an IPCBlobInputStream, for the above reason) in the parent
- this IPCStream is then de-serialized into a new nsFileInputStream on the background thread, but this new instance does not get an mFile member variable (only an FD) and therefore isn't cloneable
- this new nsFileInputStream instance is part of an nsMultiplexInputStream, which is the underlying stream for the PIPCBlobInputStream used to finally send the data to the content process, and the nsMultiplexInputStream also isn't cloneable
- when this underlying stream is retrieved from IPCBlobInputStreamStorage, an nsPipeInputStream is returned because the underlying stream isn't cloneable (see the link to nsStreamUtils.cpp)
- nsPipeInputStream does not implement nsIAsyncInputStreamLength, making calls to InputStreamLengthHelper::GetAsyncLength with it fail

https://searchfox.org/mozilla-central/source/xpcom/io/nsStreamUtils.cpp#837
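Stripped of Gecko specifics, the failure chain in the last two bullets can be sketched generically (all names below are hypothetical; the real logic lives in nsStreamUtils.cpp and InputStreamLengthHelper):

```javascript
// Sketch of the fallback described above: a cloneable stream hands out a
// real clone (its length survives); a non-cloneable one is replaced by a
// pipe-like stand-in that no longer knows its length, so a later length
// query fails -- mirroring nsPipeInputStream not implementing
// nsIAsyncInputStreamLength.
function getStreamFromStorage(stream) {
  if (typeof stream.clone === "function") {
    return stream.clone(); // cloneable: length is preserved
  }
  return { read: stream.read, length: null }; // "pipe": length is lost
}

function getAsyncLength(stream) {
  if (stream.length == null) {
    throw new Error("stream does not expose a length"); // the failing call
  }
  return stream.length;
}

const cloneable = {
  length: 42,
  read: () => "data",
  clone() { return { ...this }; },
};
const fileLike = { read: () => "data" }; // models the deserialized nsFileInputStream

console.log(getAsyncLength(getStreamFromStorage(cloneable))); // 42
try {
  getAsyncLength(getStreamFromStorage(fileLike));
} catch (e) {
  console.log(e.message); // "stream does not expose a length"
}
```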
Comment 40•4 years ago
- the nsFileInputStream needs to be serialized across the PBackground boundary in the parent, which results in a child -> parent sending of data

It's interesting to know why this is needed. Currently it's needed because we use AutoIPCStream in InternalRequest::ToIPC. But there are no reasons to do it, because:
- InternalRequest::ToIPC() is called only in the parent process
- BTW it's a template, but we use it only with the PBackgroundChild manager. We can remove the template.
- the serialized IPCInternalRequest is sent to the parent process, and here we deserialize the nsFileInputStream.
We can clean up the code a lot here. Here is how I would proceed:
- The IPCInternalRequest struct should have an extra body ID: nsID? bodyStreamId
- InternalRequest::ToIPC() uses IPCBlobInputStreamStorage to keep the inputStream alive. You can do something like:

if (mBodyStream) {
  nsID id;
  nsContentUtils::GenerateUUIDInPlace(id);
  IPCBlobInputStreamStorage::Get()->AddStream(mBodyStream, id, mBodyLength, 0);
  aIPCRequest->bodyStreamId().emplace(id);
}

- The InternalRequest constructor can retrieve the stream from IPCBlobInputStreamStorage:

if (aIPCRequest.bodyStreamId().isSome()) {
  MOZ_ASSERT(We are in the parent process);
  IPCBlobInputStreamStorage::Get()->GetStream(aIPCRequest.bodyStreamId().value(), 0, mBodyLength, getter_AddRefs(mBodyStream));
}

- In FetchEventOpProxyParent::Create() you can retrieve the stream using the bodyStreamId, create an IPCBlobInputStreamParent actor, and add it into the copyRequest struct.
Comment 41•4 years ago
Attached is a patch. It works, but it's a mess... It's your patch + bug 1633731, without comments and fully untested. But it passes your mochitest.
Comment 42•4 years ago (Assignee)
Initially, IPCInternal{Request,Response} contained IPCStreams, which would result in unnecessary copying when sending the objects over IPC. The patch makes these streams one of:
- ParentToParentStream (just a UUID)
- ParentToChildStream (a PIPCBlobInputStream actor, acting as a handle)
- ChildToParentStream (a real IPCStream)
These three types are union-ed together by the BodyStreamVariant IPDL structure.
This structure replaces the IPCStream members in IPCInternal{Request,Response}
so that, depending on the particular IPDL protocol, we can avoid cloning streams
and just pass handles/IDs instead.
As a side effect, this makes file-backed Response objects cloneable. Initially,
these Responses would be backed by an nsFileInputStream, which is not cloneable
outside the parent process. They are now backed by IPCBlobInputStreams, which
are cloneable.
One thing that's not really satisfactory (IMO), is the manual management of
IPCBlobInputStreamStorage so that no streams are leaked, e.g. if we store a
stream in the IPCBlobInputStreamStorage but fail to send an IPC message and
therefore fail to remove the stream from storage on the other side of the IPC
boundary (only parent-to-parent in this case).
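As a rough model (the real BodyStreamVariant is an IPDL union; the JS shapes below are illustrative only), the three cases can be pictured as a tagged variant keyed by the direction the body crosses IPC:

```javascript
// Hypothetical tagged-union model of the BodyStreamVariant described above.
// Only the child -> parent case carries real stream data; the other two
// directions carry a UUID or an actor handle instead.
function makeBodyStreamVariant(direction, payload) {
  switch (direction) {
    case "parent-to-parent":
      return { tag: "ParentToParentStream", uuid: payload }; // just an ID
    case "parent-to-child":
      return { tag: "ParentToChildStream", actor: payload }; // actor handle
    case "child-to-parent":
      return { tag: "ChildToParentStream", stream: payload }; // real IPCStream
    default:
      throw new Error(`unknown direction: ${direction}`);
  }
}

// Usage: a parent-to-parent send ships only an ID, never the bytes.
const v = makeBodyStreamVariant("parent-to-parent", "some-uuid");
console.log(v.tag); // "ParentToParentStream"
```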
Comment 43•4 years ago
Pushed by pjiang@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/e1be97ce43d1 minimize stream copying across IPC boundaries r=asuth,baku
Comment 44•4 years ago
Backed out for failures on browser_download_canceled.js
backout: https://hg.mozilla.org/integration/autoland/rev/75e6fd2c528715783aba08945363526abf82daa9
failure log: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=300320186&repo=autoland&lineNumber=12417
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - TEST-PASS | dom/serviceworkers/test/browser_download_canceled.js | canceled download -
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - wait for the sw-passthrough-download stream to close.
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - Buffered messages finished
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - TEST-UNEXPECTED-FAIL | dom/serviceworkers/test/browser_download_canceled.js | Ensure the stream canceled instead of timing out. - Got timeout, expected canceled
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - Stack trace:
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:test_is:1303
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochitests/content/browser/dom/serviceworkers/test/browser_download_canceled.js:performCanceledDownload:100
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochitests/content/browser/dom/serviceworkers/test/browser_download_canceled.js:interruptedDownloads:144
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:Tester_execTest/<:1045
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:Tester_execTest:1080
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/browser-test.js:nextTest/<:910
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - chrome://mochikit/content/tests/SimpleTest/SimpleTest.js:SimpleTest.waitForFocus/waitForFocusInner/focusedOrLoaded/<:918
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - Cancellation reason: timeout after undefined ticks
[task 2020-05-01T00:37:41.884Z] 00:37:41 INFO - watching for download popup
Comment 45•4 years ago (Assignee)
I'm still working on a fix for the backout - seems like there's something fundamentally wrong with the initial patch that's causing the failure.
Comment 46•4 years ago
Pushed by pjiang@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/fb8e753340e4 minimize stream copying across IPC boundaries r=asuth,baku
Comment 47•4 years ago
Backed out 1 changesets (Bug 1622451) for assertion failures on UniquePtr.h.
https://hg.mozilla.org/integration/autoland/rev/81dc4a5fbae5955572286debafe7b2cb5ae82ffe
Failure log:
https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=304154294&repo=autoland&lineNumber=1770
Comment 49•4 years ago
Pushed by pjiang@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/6ad5f406fb73 minimize stream copying across IPC boundaries r=asuth,baku
Comment 50•4 years ago
bugherder
Comment 51•4 years ago
It seems that credentials provided in comment 15 are no longer working. Can you please verify if the issue is fixed using the latest beta build from here? Thank you!
Comment 52•4 years ago (Reporter)
Hi,
For the test website URL: (WebVPN access)
https://vpn.dev.courts.ca.gov/
We reset the password for the following test user account:
Username: ccportest01
Password: JCCpass0615
Please try to log in to the website again; if it still does not work, please let me know.
We will verify when we get the official Release 78.0 on our side. Thanks.
Comment 53•4 years ago
(In reply to hli0102 from comment #52)
Hi,
For the test website URL: (WebVPN access)
https://vpn.dev.courts.ca.gov/
We reset the password for the following test user account:
Username: ccportest01
Password: JCCpass0615
Please try to log in to the website again; if it still does not work, please let me know.
We will verify when we get the official Release 78.0 on our side. Thanks.
I'm sorry but the credentials still don't work. Thank you for providing a quick answer!
Comment 54•4 years ago (Reporter)
We fixed the access issue, the same credentials should work now, please try again. Thanks.
Comment 55•4 years ago
(In reply to hli0102 from comment #54)
We fixed the access issue, the same credentials should work now, please try again. Thanks.
Thank you again for your response.
I was able to reproduce the issue using Firefox 76.0a1 (20200313214616) on Windows 10 x64, following the steps from comment 3 with "Add Quick Attach" and "Add Order".
Using the same steps, I verified the issue with Firefox 78.0b9 (20200619002543) on Windows 10 x64, macOS 10.12, and Ubuntu 18.04. The .pdf file is successfully uploaded, and no errors are thrown in the browser console.
Feel free to notify us if there are any problems related to this when using the official Release 78.0.
Comment 56•4 years ago (Reporter)
Thanks for verifying that the issue was fixed! We will try it when the official Release 78.0 is available, and will share the update with you. Thanks.
Comment 57•4 years ago (Reporter)
Hi,
We downloaded the official Release 78.0.1 and tested the fix in our application; it is working fine so far.
Thank you very much for all your efforts on fixing this issue! It is greatly appreciated!
Thanks.