Crash in [@ mozilla::ipc::FatalError | mozilla::ipc::IProtocol::HandleFatalError | IPC::ParamTraits<mozilla::ipc::DataPipeReceiverStreamParams>::Read]
Categories
(Core :: Networking: Cache, defect)
Tracking

Branch | Tracking | Status
---|---|---
firefox-esr91 | --- | unaffected
firefox100 | --- | unaffected
firefox101 | --- | unaffected
firefox102 | + | fixed
People
(Reporter: gsvelto, Assigned: nika)
References
(Regression)
Details
(Keywords: crash, regression)
Crash Data
Crash report: https://crash-stats.mozilla.org/report/index/39f3574b-a82d-4666-88e2-06a8d0220516
MOZ_CRASH Reason: MOZ_CRASH(IPC FatalError in the parent process!)
Top 10 frames of crashing thread:
0 libxul.so mozilla::ipc::FatalError ipc/glue/ProtocolUtils.cpp:170
1 libxul.so mozilla::ipc::IProtocol::HandleFatalError const ipc/glue/ProtocolUtils.cpp:403
2 libxul.so IPC::ParamTraits<mozilla::ipc::DataPipeReceiverStreamParams>::Read ipc/ipdl/InputStreamParams.cpp:356
3 libxul.so IPC::ParamTraits<mozilla::ipc::InputStreamParams>::Read ipc/ipdl/InputStreamParams.cpp:1353
4 libxul.so IPC::ParamTraits<mozilla::ipc::IPCStream>::Read ipc/ipdl/IPCStream.cpp:41
5 libxul.so IPC::ParamTraits<mozilla::Maybe<mozilla::ipc::IPCStream> >::Read ipc/glue/IPCMessageUtilsSpecializations.h:739
6 libxul.so IPC::ParamTraits<mozilla::dom::cache::CacheReadStream>::Read ipc/ipdl/CacheTypes.cpp:181
7 libxul.so IPC::ParamTraits<mozilla::Maybe<mozilla::dom::cache::CacheReadStream> >::Read ipc/glue/IPCMessageUtilsSpecializations.h:739
8 libxul.so IPC::ParamTraits<mozilla::dom::cache::CacheResponse>::Read ipc/ipdl/CacheTypes.cpp:578
9 libxul.so IPC::ParamTraits<mozilla::dom::cache::CacheRequestResponse>::Read ipc/ipdl/CacheTypes.cpp:664
This appears to be a regression and seems related to caching, but I'm not sure what could have triggered it. Nika, could this have been affected by your changes?
Comment 1•3 years ago (Assignee)
Yup, bug 1754004 introduced the first real uses of DataPipe
into the tree, so crashes related to them are probably regressions from that bug.
I believe this failure is caused by us sending too many DataPipeReceiver instances within a single message, meaning we exceed MAX_DESCRIPTORS_PER_MESSAGE handles attached to the message. Unfortunately errors in this case are currently handled poorly (we log an error to the console, and then send the known-invalid message, failing when deserializing), so I've filed bug 1769593 to improve the error reporting in that area.
In terms of how we can handle that situation, I have some WIP patches to relax the MAX_DESCRIPTORS_PER_MESSAGE limit substantially in bug 1767514, however they were backed out due to some very confusing macOS IPC errors which I haven't figured out the cause of yet.
If that approach turns out to be unworkable, it may also be possible to use some trickery to send the shared memory regions for DataPipe objects separately, but I worry a bit about the extra complexity and performance hit which that could incur.
Comment 2•3 years ago
Set release status flags based on info from the regressing bug 1754004
Comment 3•3 years ago
Tracking for 102, as this crash has significant volume on the nightly channel.
Comment 4•3 years ago (Assignee)
Given that I have a fix for bug 1767514, I think I'll fix this issue with that bug, rather than building some other workaround.
Comment 5•3 years ago
FWIW, I can get this crash predictably just by going to vscode.dev. Firefox crashes while loading the page.
Comment 6•3 years ago
Nika, will your patch in bug 1767514 land in time for 102? Thanks
Comment 7•3 years ago (Assignee)
(In reply to Pascal Chevrel:pascalc from comment #6)
> Nika, will your patch in bug 1767514 land in time for 102? Thanks

I've queued them up for landing now.
Comment 8•3 years ago
Bug 1767514 landed, should we close this?