Crash in [@ selectors::parser::{{impl}}::to_shmem<T>]
Categories: Core :: CSS Parsing and Computation, defect, P2
People: Reporter: philipp; Assigned: bgrins
References: Regression
Keywords: crash, regression
Attachments: 1 file (deleted), text/x-phabricator-request; RyanVM: approval-mozilla-release+
This bug is for crash report bp-faaa6062-4dd6-49f7-ab2c-571820200108.
Top 10 frames of crashing thread:
0 xul.dll RustMozCrash mozglue/static/rust/wrappers.cpp:16
1 xul.dll static void mozglue_static::panic_hook mozglue/static/rust/lib.rs:89
2 xul.dll static void core::ops::function::Fn::call<fn src/libcore/ops/function.rs:69
3 xul.dll static void std::panicking::rust_panic_with_hook src/libstd/panicking.rs:481
4 xul.dll static void std::panicking::continue_panic_fmt src/libstd/panicking.rs:384
5 xul.dll void std::panicking::begin_panic_fmt src/libstd/panicking.rs:339
6 xul.dll static struct core::mem::manually_drop::ManuallyDrop<selectors::parser::Component<style::gecko::selector_parser::SelectorImpl>> selectors::parser::{{impl}}::to_shmem<style::gecko::selector_parser::SelectorImpl> servo/components/selectors/parser.rs
7 xul.dll static struct core::mem::manually_drop::ManuallyDrop<servo_arc::ThinArc<selectors::builder::SpecificityAndFlags, selectors::parser::Component<style::gecko::selector_parser::SelectorImpl>>> to_shmem::{{impl}}::to_shmem<selectors::builder::SpecificityAndFlags, selectors::parser::Component<style::gecko::selector_parser::SelectorImpl>> servo/components/to_shmem/lib.rs:488
8 xul.dll static struct mut slice<selectors::parser::Selector<style::gecko::selector_parser::SelectorImpl>>* to_shmem::to_shmem_slice<selectors::parser::Selector<style::gecko::selector_parser::SelectorImpl>, core::slice::Iter<selectors::parser::Selector<style::gecko::selector_parser::SelectorImpl>>> servo/components/to_shmem/lib.rs:320
9 xul.dll static struct core::mem::manually_drop::ManuallyDrop<style::stylesheets::CssRule> style::stylesheets::{{impl}}::to_shmem servo/components/style/stylesheets/mod.rs:235
These crash reports with MOZ_CRASH reason "ToShmem failed for Atom: must be a static atom: moz-collapsed" are starting to show up in Firefox 72. Could this be related to the changes in bug 1434087?
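For context, here is a minimal, self-contained sketch of the check that is panicking; the type names and shape are assumptions based on the stack above, not the exact Gecko/Servo code:

use std::mem::ManuallyDrop;

// Minimal stand-ins for the real types, for illustration only.
struct SharedMemoryBuilder;
struct Atom {
    string: &'static str,
    is_static: bool,
}

trait ToShmem: Sized {
    fn to_shmem(&self, builder: &mut SharedMemoryBuilder) -> ManuallyDrop<Self>;
}

impl ToShmem for Atom {
    fn to_shmem(&self, _builder: &mut SharedMemoryBuilder) -> ManuallyDrop<Self> {
        // Shared-memory UA sheets can only reference static atoms; dynamic atoms
        // live in per-process refcounted storage, so hitting one here aborts.
        assert!(
            self.is_static,
            "ToShmem failed for Atom: must be a static atom: {}",
            self.string
        );
        ManuallyDrop::new(Atom { string: self.string, is_static: self.is_static })
    }
}

fn main() {
    let mut builder = SharedMemoryBuilder;
    // A string that is no longer a static atom in this build, e.g. "moz-collapsed" in 72:
    let dynamic = Atom { string: "moz-collapsed", is_static: false };
    let _ = dynamic.to_shmem(&mut builder); // panics with the MOZ_CRASH reason above
}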
Assignee
Comment 1•5 years ago
(In reply to [:philipp] from comment #0)
This bug is for crash report bp-faaa6062-4dd6-49f7-ab2c-571820200108.
These crash reports with MOZ_CRASH reason "ToShmem failed for Atom: must be a static atom: moz-collapsed" are starting to show up in Firefox 72. Could this be related to the changes in bug 1434087?
We don't use the moz-collapsed string in CSS anywhere in m-c after bug 1434087 (https://searchfox.org/mozilla-central/search?q=moz-collapsed&case=false&regexp=false&path=). Here's the commit that removed it: https://hg.mozilla.org/mozilla-central/rev/53ebedebdb63. Cam or Emilio, I don't know the details of the stylesheet shared memory feature, but is it possible that I should have left the atom around even if the string is no longer present in xul.css? Or that the feature operates on user sheets outside of m-c (like user.css)?
It's not clear to me why this only started now, why it wouldn't have shown up on beta, and why it would only happen for some users and not all. :philipp, did this just start happening with the merge to release?
Assignee
Comment 2•5 years ago
We could certainly make a patch to re-add that StaticAtom, though I'd like to understand why this is crashing or see if it can be reproduced.
Assignee
Comment 3•5 years ago
Double checked to make sure the selector wasn't re-added to minimal-xul.css at some point and it's not there: https://searchfox.org/mozilla-release/rev/8e7ff7ad9099b8eca9a42487cfaf4bc093e54297/toolkit/content/minimal-xul.css#35-38 / https://searchfox.org/mozilla-release/search?q=moz-collapsed&path=.
Reporter
Comment 4•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #1)
:philipp, did this just start happening with the merge to release?
This already happened to some extent during the nightly cycle (less in beta, though):
https://crash-stats.mozilla.com/search/?moz_crash_reason=~ToShmem%20failed%20for%20Atom%3A%20must%20be%20a%20static%20atom%3A%20moz-collapsed&product=Firefox&date=%3E%3D2019-06-01&_facets=signature&_facets=version#facet-version
Comment 5•5 years ago
Seems like we have an updated binary with an outdated stylesheet... I'm not familiar with our update process... Maybe something failed and left the build in an inconsistent state?
These are all startup crashes, would be interesting to know how this can happen and whether it's persistent or not...
Assignee
Comment 6•5 years ago
(In reply to Emilio Cobos Álvarez (:emilio) from comment #5)
Seems like we have an updated binary with an outdated stylesheet... I'm not familiar with our update process... Maybe something failed and left the build in an inconsistent state?
These are all startup crashes, would be interesting to know how this can happen and whether it's persistent or not...
I don't know how to test this or why it would happen, but it does seem like a reasonable guess of the root cause given Comment 1.
Robert or Mossop, are you aware of a situation where the binary would have updated but would be loading an outdated version of chrome://global/content/minimal-xul.css at startup? Is this something that, for instance, antivirus software could cause? Or would the update guarantee that we also have an updated chrome registry (or at least, that we're not loading version N-1 of that file)?
There's also the startup cache which stores chrome stylesheets, but I believe that's invalidated during an update. Maybe it's possible that invalidation is happening in some cases after startup and minimal-xul.css is fetched before that? I know it can be force-invalidated with the "startupcache-invalidate" notification, but I think there's another process for doing so during an update.
Assignee
Comment 7•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #6)
There's also the startup cache which stores chrome stylesheets, but I believe that's invalidated during an update. Maybe it's possible that invalidation is happening in some cases after startup and minimal-xul.css is fetched before that? I know it can be force-invalidated with the "startupcache-invalidate" notification, but I think there's another process for doing so during an update.
Brendan pointed out that this happens at https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/toolkit/xre/nsAppRunner.cpp#4164, which calls StartupCache::IgnoreDiskCache, which invalidates. So it seems it could plausibly be a problem if:
- versionOK was wrong (true when it should be false), or
- there's a race between XREMain::XRE_mainStartup and the GlobalStyleSheetCache which loads minimal-xul.css.
(In reply to Emilio Cobos Álvarez (:emilio) from comment #5)
These are all startup crashes, would be interesting to know how this can happen and whether it's persistent or not...
If it is related to an outdated startup cache, then I would guess it's not persistent, based on lastStartupWasCrash: https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/toolkit/xre/nsAppRunner.cpp#4153-4154.
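Purely as an illustration of the decision just described (the real logic is C++ in XREMain::XRE_mainStartup; these names are hypothetical):

fn should_ignore_startup_disk_cache(version_ok: bool, last_startup_was_crash: bool) -> bool {
    // The disk cache is trusted only when the version/build ID check passed and
    // the previous startup did not crash; otherwise IgnoreDiskCache is called.
    !version_ok || last_startup_was_crash
}

fn main() {
    assert!(should_ignore_startup_disk_cache(false, false)); // stale build info after an update
    assert!(should_ignore_startup_disk_cache(true, true)); // previous startup crashed
    assert!(!should_ignore_startup_disk_cache(true, false)); // normal startup keeps the cache
}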
Comment 8•5 years ago
Hmmm... At that point we have a profile already? We most likely need the stylesheet to spawn the profile manager...
Assignee
Comment 9•5 years ago
(In reply to Emilio Cobos Álvarez (:emilio) from comment #8)
Hmmm... At that point we have a profile already? We most likely need the stylesheet to spawn the profile manager...
Just speculating (I think Mossop would know the ordering here), but yeah if that UI happens before any of this code then I think there could be a persistent crash like:
- Open profile manager window
- Fetch outdated minimal-xul.css from cache and crash
If/when the user skipped the profile manager UI, the cache would be invalidated (presumably both !versionOK and lastStartupWasCrash would trigger the invalidation) and everything would work correctly.
If this is the case and we are loading outdated chrome resources, then we should see if there's a way to invalidate earlier, but a short-term fix would be to re-add the atom.
Assignee
Comment 10•5 years ago
I tried downloading
- https://ftp.mozilla.org/pub/firefox/releases/71.0/mac/en-US/
- https://ftp.mozilla.org/pub/firefox/releases/72.0/mac/en-US/
and doing various upgrade scenarios (creating a profile in 71 and then opening it in 72, creating a profile in 71 and opening profile manager in 72, creating a profile in 71 and opening the safe mode dialog in 72) and wasn't able to trigger a crash.
Assignee
Comment 11•5 years ago
Would we want to take a fix for this in 72? I can make a simple patch to re-add the static atom which would fix the crash, but I'm not sure if the magnitude warrants an uplift to release.
Assignee
Comment 12•5 years ago
Assignee
Comment 13•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #10)
I tried downloading
- https://ftp.mozilla.org/pub/firefox/releases/71.0/mac/en-US/
- https://ftp.mozilla.org/pub/firefox/releases/72.0/mac/en-US/
and doing various upgrade scenarios (creating a profile in 71 and then opening it in 72, creating a profile in 71 and opening profile manager in 72, creating a profile in 71 and opening the safe mode dialog in 72) and wasn't able to trigger a crash.
Though I guess it should really be tested on Windows based on the crashes.
Assignee
Comment 15•5 years ago
(In reply to Cameron McCormack (:heycam) from comment #14)
Where do we store style sheets in the startup cache?
I was thinking of https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/layout/style/Loader.cpp#1844-1856 (specifically nsXULPrototypeCache::PutStyleSheet), although now that I look closer it appears that's only stored in memory and that StartupCache is only used for nsXULPrototypeCache::GetPrototype (to store the content of chrome xhtml files).
So, unless I've missed something, I guess the startup cache isn't it, and I'm back to:
- Comment 1: "Does the shared memory feature operate on user sheets outside of m-c (like user.css)?"
- Comment 6: "Is there a case where the binary would have updated but would be loading an outdated version of chrome://global/content/minimal-xul.css at startup?"
Comment 16•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #15)
- Comment 1: "Does the shared memory feature operate on user sheets outside of m-c (like user.css)?"
No, it only operates on the sheets listed in UserAgentStyleSheetList.h.
- Comment 6: "Is there a case where the binary would have updated but would be loading an outdated version of chrome://global/content/minimal-xul.css at startup?"
What about the situation where an update is applied underneath us? I'm not sure how the updater works, but if files are updated individually, then it might be possible for libxul to be overwritten with the new version, and then a new content process started before the omni.ja is updated?
Comment 17•5 years ago
(In reply to Cameron McCormack (:heycam) from comment #16)
(In reply to Brian Grinstead [:bgrins] from comment #15)
- Comment 1: "Does the shared memory feature operate on user sheets outside of m-c (like user.css)?"
No, it only operates on the sheets listed in UserAgentStyleSheetList.h.
- Comment 6: "Is there a case where the binary would have updated but would be loading an outdated version of chrome://global/content/minimal-xul.css at startup?"
What about the situation where an update is applied underneath us? I'm not sure how the updater works, but if files are updated individually, then it might be possible for libxul to be overwritten with the new version, and then a new content process started before the omni.ja is updated?
I'm not 100% sure how this works, but I don't think that we start new content processes after an update (about:restartrequired is shown instead); IIRC it's not safe to do this with the way our IPC works.
cc'ing Rob Strong who would have a better idea than I about how the updater process works and why we might be seeing these crashes.
Comment 18•5 years ago
As rhelmer stated, about:restartrequired should be loaded when another instance has applied an update to the same installation.
Assignee
Comment 19•5 years ago
(In reply to Robert Strong [slack rstrong] (Robert they/them - use needinfo to contact me) from comment #18)
As rhelmer stated, about:restartrequired should be loaded when another instance has applied an update to the same installation.
Thanks Robert. And what about the more typical case where an update has been applied and the browser is restarted - is there some scenario where an individual chrome URI (chrome://global/content/minimal-xul.css in this case) would be out of date & using the content from the previous version?
There definitely shouldn't be. For the case where a file update fails (patch or complete file) the file is rolled back to the original version. There might be cases where the cache is invalid and not updated. I thought of one case where this could happen ( bug 1535181 ) though I doubt it would happen very often.
There is also bug 1535064 which affects Android which doesn't use the Mozilla update system.
Updated•5 years ago
Comment 20•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #19)
(In reply to Robert Strong [slack rstrong] (Robert they/them - use needinfo to contact me) from comment #18)
As rhelmer stated, about:restartrequired should be loaded when another instance has applied an update to the same installation.
Thanks Robert. And what about the more typical case where an update has been applied and the browser is restarted - is there some scenario where an individual chrome URI (chrome://global/content/minimal-xul.css in this case) would be out of date & using the content from the previous version?
The only thing that jumps out at me is startup cache, since (unless something is very broken) we shouldn't be getting mismatched binaries/omnijar after an app update + restart...
Doug do you have any thoughts?
There definitely shouldn't be. For the case where a file update fails (patch or complete file) the file is rolled back to the original version. There might be cases where the cache is invalid and not updated. I thought of one case where this could happen ( bug 1535181 ) though I doubt it would happen very often.
There is also bug 1535064 which affects Android which doesn't use the Mozilla update system.
Comment 21•5 years ago
(In reply to Cameron McCormack (:heycam) from comment #16)
What about the situation where an update is applied underneath us? I'm not sure how the updater works, but if files are updated individually, then it might be possible for libxul to be overwritten with the new version, and then a new content process started before the omni.ja is updated?
It can't be a content process; the panic we run into here can only ever happen in the parent process.
Comment 22•5 years ago
(In reply to Robert Helmer [:rhelmer] from comment #20)
The only thing that jumps out at me is startup cache, since (unless something is very broken) we shouldn't be getting mismatched binaries/omnijar after an app update + restart...
Doug do you have any thoughts?
To be clear, this has to be coming from a CSS file trying to use this? If so, I can't think of a way for the startup cache to be involved. In any case, while I can't rule out something having changed in the startup cache that could allow stale caches to be used, the code which invalidates the cache isn't remarkably complicated and didn't change all that much, so unfortunately I think it's unlikely on this count as well (also, it seems like something which would have shown up for other files on Nightly, probably?).
Comment 23•5 years ago
(In reply to Cameron McCormack (:heycam) from comment #21)
(In reply to Cameron McCormack (:heycam) from comment #16)
What about the situation where an update is applied underneath us? I'm not sure how the updater works, but if files are updated individually, then it might be possible for libxul to be overwritten with the new version, and then a new content process started before the omni.ja is updated?
It can't be a content process; the panic we run into here can only ever happen in the parent process.
One other detail that I missed before is that this crash only happens after startup. So it shouldn't be an issue with a running build and mismatched on-disk files; it's happening after an update has been applied and the browser has restarted.
Comment 24•5 years ago
Would we want to take a fix for this in 72? I can make a simple patch to re-add the static atom which would fix the crash, but I'm not sure if the magnitude warrants an uplift to release.
It's a small enough fix that we may as well. But I'm wondering whether, when the affected users update to the next beta, the issue would resolve itself (assuming it's something update-related).
Comment 25•5 years ago
The other thing we could do is add some telemetry that reports whether the build ID of the omni.ja and libxul match. Or whether the checksum of minimal-xul.css matches libxul's expectations.
Comment 26•5 years ago
(In reply to Cameron McCormack (:heycam) from comment #25)
The other thing we could do is add some telemetry that reports whether the build ID of the omni.ja and libxul match. Or whether the checksum of minimal-xul.css matches libxul's expectations.
The current telemetry doesn't help with the above, but I landed a module a while ago that we could add this to easily:
https://searchfox.org/mozilla-central/source/toolkit/components/corroborator/Corroborate.jsm
Currently it checks that the omni jar(s) are signed correctly and reports that to Telemetry, but it doesn't check that the build IDs match.
I considered doing a checksum for this, but due to the way we sign the builds it's not possible to know what the checksum will be at build time. We considered having some kind of post-signing process do this, but just checking signatures is much simpler. I suspect comparing build IDs would be simpler in this case as well.
Assignee
Comment 27•5 years ago
(In reply to Robert Helmer [:rhelmer] from comment #26)
(In reply to Cameron McCormack (:heycam) from comment #25)
The other thing we could do is add some telemetry that reports whether the build ID of the omni.ja and libxul match. Or whether the checksum of minimal-xul.css matches libxul's expectations.
The current telemetry doesn't help with the above, but I landed a module a while ago that we could add this to easily:
https://searchfox.org/mozilla-central/source/toolkit/components/corroborator/Corroborate.jsm
Currently it checks that the omni jar(s) are signed correctly and reports that to Telemetry, but it doesn't check that the build IDs match.
I considered doing a checksum for this, but due to the way we sign the builds it's not possible to know what the checksum will be at build time. We considered having some kind of post-signing process do this, but just checking signatures is much simpler. I suspect comparing build IDs would be simpler in this case as well.
That sounds great. Any chance you could spin off a bug with details for adding this while we are thinking about it?
I'll get the atom patch reviewed and request uplift to at least fix the symptom.
Assignee
Updated•5 years ago
Comment 29•5 years ago
For the checksum I was thinking of doing it just for the contents of minimal-xul.css. That should avoid signing issues.
Filed bug 1608308 for the build ID check.
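A rough sketch of what that check could look like (assumptions: a hash of minimal-xul.css is baked into libxul at build time and compared against the file actually loaded at runtime; the sha2/hex crate usage and all names here are illustrative, not the real implementation):

use sha2::{Digest, Sha256};

// Hypothetical value generated at build time from the minimal-xul.css that
// shipped alongside this libxul.
const EXPECTED_MINIMAL_XUL_SHA256: &str =
    "0000000000000000000000000000000000000000000000000000000000000000";

// Returns true if the stylesheet bytes we actually loaded match what this
// binary was built against; the result could be reported to telemetry.
fn minimal_xul_matches(loaded_css: &[u8]) -> bool {
    hex::encode(Sha256::digest(loaded_css)) == EXPECTED_MINIMAL_XUL_SHA256
}

fn main() {
    // Example call with some stand-in stylesheet bytes.
    println!("{}", minimal_xul_matches(b"* { -moz-user-select: none; }"));
}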
Assignee
Comment 30•5 years ago
Comment on attachment 9119515 [details]
Bug 1607716 - Re-add the mozCollapsed static atom;r=heycam
Beta/Release Uplift Approval Request
- User impact if declined: Startup crash for some Windows users: https://crash-stats.mozilla.org/signature/?product=Firefox&signature=selectors%3A%3Aparser%3A%3A%7B%7Bimpl%7D%7D%3A%3Ato_shmem%3CT%3E
- Is this code covered by automated tests?: No
- Has the fix been verified in Nightly?: No
- Needs manual test from QE?: No
- If yes, steps to reproduce: I wish there were; there have been some investigations in this bug, but this appears to be a case where an updated build is loading an old version of a UA CSS file.
- List of other uplifts needed: None
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky): This is only adding a Static Atom string to the build. The existence of an unused Static Atom shouldn't change Firefox's behavior or cause a problem.
- String changes made/needed:
Assignee
Updated•5 years ago
Comment 31•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #6)
(In reply to Emilio Cobos Álvarez (:emilio) from comment #5)
Seems like we have an updated binary with an outdated stylesheet... I'm not familiar with our update process... Maybe something failed and left the build in an inconsistent state?
These are all startup crashes, would be interesting to know how this can happen and whether it's persistent or not...
I don't know how to test this or why it would happen, but it does seem like a reasonable guess of the root cause given Comment 1.
Robert or Mossop, are you aware of a situation where the binary would have updated but would be loading an outdated version of chrome://global/content/minimal-xul.css at startup? Is this something that, for instance, antivirus software could cause? Or would the update guarantee that we also have an updated chrome registry (or at least, that we're not loading version N-1 of that file)?
When the binary updates, we should spot the change via the comparison with compatibility.ini and flush the caches. The only problem would be if the binary updated but for some reason the version and build ID didn't change.
Assignee
Updated•5 years ago
Comment 32•5 years ago
Comment on attachment 9119515 [details]
Bug 1607716 - Re-add the mozCollapsed static atom;r=heycam
try and fix some startup crashes, approved for 72.0.2
Comment 33•5 years ago
bugherder uplift
Comment 34•5 years ago
Looks like the fix is working on 72.0.2. Should we take this on Beta for Fx73 also given where we are in the cycle (All Hands next week, RC the week after that)?
Assignee
Comment 35•5 years ago
(In reply to Ryan VanderMeulen [:RyanVM] from comment #34)
Looks like the fix is working on 72.0.2. Should we take this on Beta for Fx73 also given where we are in the cycle (All Hands next week, RC the week after that)?
I don't see any crashes on 73. We still don't really know the root cause but it seems somehow related to the update from 71 (old minimal-xul.css) to 72 (new minimal-xul.css), where 72 somehow ends up running with the old minimal-xul.css file that includes the atom.
I guess it's possible someone would upgrade straight from 71->73, right? So maybe it could happen on release, but it's just speculation at this point. It'd be harmless to uplift, but there's a chance we will get some more data about the root cause if we don't, especially combined with the new telemetry in Bug 1608308, assuming that lands in 73. I'll ni? Cam to double check on that, and also to get a second opinion on your question.
Comment 36•5 years ago
My worry is that we won't see any crashes on 73 until after release and we're then facing a dot release situation to address them. Given the relative speeding up of our release cycles, I wouldn't rule out the possibility of users upgrading directly from 71 to 73 still. Would it be possible to take a hybrid approach and re-add the atom but still collect Telemetry when the code path is hit?
Comment 37•5 years ago
I guess uplifting the patch won't allow us to correlate crashes to telemetry values, but it wouldn't give us a definite connection anyway. So I think it's OK to uplift. If the telemetry shows we have mismatches between Omnijar and libxul, then there's definitely an updater problem we need to look into anyway.
Comment 38•5 years ago
Thanks, let's land this for 73.0RC1 then too. Hopefully the new telemetry lands in time for 74.
Comment 39•5 years ago
bugherder uplift
Comment 40•5 years ago
I still see crashes for style::gecko_string_cache::{{impl}}::to_shmem on 73 but I do not see crashes for any of the signatures for 74 or beyond. Are we sure about the flags?
Comment 41•5 years ago
We only ever saw these crashes on release AFAIK.
Updated•5 years ago
Comment 42•5 years ago
Can we still expect any progress here for 74? Thanks!
Assignee
Comment 43•5 years ago
(In reply to Jens Stutte [:jstutte] from comment #42)
Can we still expect any progress here for 74? Thanks!
I don't see any crashes on 74, even though the latest version we pushed the new static atom to was 73. So I think this can be closed out.
I'm guessing the volume of upgrades from 71->74 is low enough that we just aren't hitting it. The root cause for this is being investigated in Bug 1614541.
Comment 44•5 years ago
74 is in RC and won't ship for real until next week, FWIW.
Assignee
Comment 45•5 years ago
(In reply to Ryan VanderMeulen [:RyanVM] from comment #44)
74 is in RC and won't ship for real until next week, FWIW.
Ah, that makes sense. It should be harmless to uplift this to 74 if we want to be safe. What do you think?
Updated•5 years ago
Comment 46•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #45)
(In reply to Ryan VanderMeulen [:RyanVM] from comment #44)
74 is in RC and won't ship for real until next week, FWIW.
Ah, that makes sense. It should be harmless to uplift this to 74 if we want to be safe. What do you think?
+1, could you make an uplift request? Thanks
Assignee
Comment 47•5 years ago
Comment on attachment 9119515 [details]
Bug 1607716 - Re-add the mozCollapsed static atom;r=heycam
Beta/Release Uplift Approval Request
- User impact if declined: See Comment 30
- Is this code covered by automated tests?: No
- Has the fix been verified in Nightly?: No
- Needs manual test from QE?: No
- If yes, steps to reproduce:
- List of other uplifts needed: None
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky): See Comment 30
- String changes made/needed:
Updated•5 years ago
Comment 48•5 years ago
Comment on attachment 9119515 [details]
Bug 1607716 - Re-add the mozCollapsed static atom;r=heycam
Resetting the approval flags to make sure this shows up on the sheriffs' uplift radar.
Comment 49•5 years ago
Reopening this for now, though I'm not necessarily arguing that we need to do something for 75 either.
Comment 50•5 years ago
bugherder uplift
Comment 51•5 years ago
(In reply to Brian Grinstead [:bgrins] from comment #43)
I'm guessing the volume of upgrades from 71->74 are low enough that we just aren't hitting it. The root cause for this is being investigated in Bug 1614541.
It also occurs to me that we may be working around this bug now by way of the watershed added in bug 1607798 too? Users are going to have to go through 72.0.2 now when updating from older releases.
Assignee
Comment 52•5 years ago
I think this bug has been tagged up with crashes that have a different root cause from the "outdated libxul and minimal-xul.css" problem we worked around here (as we are still seeing reports on 73/74 where the mozCollapsed atom was re-added).
For instance:
https://crash-stats.mozilla.org/report/index/243592b9-ddd5-467f-a8eb-919a60200311 - ToShmem failed for Atom: must be a static atom: reSet
https://crash-stats.mozilla.org/report/index/40908cd8-1931-4292-9a5f-ad8640200201 - ToShmem failed for Atom: must be a static atom: bmrder
What are bmrder and reSet? These must be coming from users somehow, which is unexpected per https://bugzilla.mozilla.org/show_bug.cgi?id=1607716#c16. Cam or Emilio, do you have any ideas what is going on from those crashes?
Comment 53•5 years ago
Corrupted file on disk? https://searchfox.org/mozilla-central/rev/2fd8ffcf087bc59a8e5c962965bbb7bf230bcd28/layout/style/res/forms.css#595 looks close enough to reSet, for example. Similarly, bmrder is pretty similar to the "border" here: https://searchfox.org/mozilla-central/rev/2fd8ffcf087bc59a8e5c962965bbb7bf230bcd28/layout/style/res/html.css#319
Maybe we could just return an empty atom instead of crashing here, for release builds, or something...
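Sketching that idea out (similar stand-in types to the sketch under comment 0; the empty-atom fallback and the debug/release gate are hypothetical, not a reviewed design):

use std::mem::ManuallyDrop;

// Stand-in types, for illustration only.
struct SharedMemoryBuilder;
#[derive(Clone)]
struct Atom {
    string: String,
    is_static: bool,
}

impl Atom {
    // Hypothetical constructor for an empty static atom to fall back to.
    fn empty_static() -> Atom {
        Atom { string: String::new(), is_static: true }
    }
}

trait ToShmem: Sized {
    fn to_shmem(&self, builder: &mut SharedMemoryBuilder) -> ManuallyDrop<Self>;
}

impl ToShmem for Atom {
    fn to_shmem(&self, _builder: &mut SharedMemoryBuilder) -> ManuallyDrop<Self> {
        if self.is_static {
            return ManuallyDrop::new(self.clone());
        }
        // Debug builds keep the loud assertion so regressions are still caught.
        if cfg!(debug_assertions) {
            panic!("ToShmem failed for Atom: must be a static atom: {}", self.string);
        }
        // Release builds degrade to an empty atom instead of aborting startup
        // when the on-disk sheet is corrupted.
        ManuallyDrop::new(Atom::empty_static())
    }
}

fn main() {
    let mut builder = SharedMemoryBuilder;
    let corrupted = Atom { string: "bmrder".to_string(), is_static: false };
    // Panics in debug builds; with debug assertions off it degrades to the empty atom.
    let shared = corrupted.to_shmem(&mut builder);
    assert!(shared.is_static && shared.string.is_empty());
}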
Assignee
Comment 54•5 years ago
(In reply to Emilio Cobos Álvarez (:emilio) from comment #53)
Corrupted file on disk? https://searchfox.org/mozilla-central/rev/2fd8ffcf087bc59a8e5c962965bbb7bf230bcd28/layout/style/res/forms.css#595 looks close enough to reSet, for example. Similarly, bmrder is pretty similar to the "border" here: https://searchfox.org/mozilla-central/rev/2fd8ffcf087bc59a8e5c962965bbb7bf230bcd28/layout/style/res/html.css#319
Maybe we could just return an empty atom instead of crashing here, for release builds, or something...
Thanks Emilio. Falling back to an empty atom or similar seems reasonable; I don't think there's much else we could do if the content of the file is corrupted. We may want to dig into some more crashes to see if the theory holds. I filed Bug 1621773 so we can track it separately.
Comment 55•5 years ago
Calling the original bug here fixed. Will track the follow-up work in the new bug.