Closed Bug 1402014 Opened 7 years ago Closed 3 years ago

Crash in mozilla::net::Http2Session::FlushOutputQueue

Categories

(Core :: Networking: HTTP, defect, P1)

56 Branch
defect

Tracking


RESOLVED FIXED
94 Branch
Tracking Status
firefox-esr52 --- unaffected
firefox-esr60 --- wontfix
firefox-esr78 --- wontfix
firefox55 --- unaffected
firefox56 --- wontfix
firefox57 - wontfix
firefox58 + wontfix
firefox59 --- wontfix
firefox60 --- wontfix
firefox81 --- wontfix
firefox82 --- wontfix
firefox83 --- wontfix
firefox84 --- wontfix
firefox85 --- wontfix
firefox92 --- wontfix
firefox93 --- fixed
firefox94 --- fixed

People

(Reporter: philipp, Assigned: dragana)

References

Details

(4 keywords, Whiteboard: [necko-triaged][sec-survey])

Crash Data

Attachments

(1 file, 6 obsolete files)

This bug was filed from the Socorro interface and is report bp-ec325f4f-7753-41d8-a9b3-43afb0170912.
=============================================================
Crashing Thread (6), Name: Socket Thread

Frame  Module        Signature                                                                                  Source
0      xul.dll       mozilla::net::Http2Session::FlushOutputQueue()                                             netwerk/protocol/http/Http2Session.cpp:551
1      xul.dll       mozilla::net::Http2Session::GenerateRstStream(unsigned int, unsigned int)                  netwerk/protocol/http/Http2Session.cpp:835
2      xul.dll       mozilla::net::Http2Session::CleanupStream(mozilla::net::Http2Stream*, nsresult, mozilla::net::Http2Session::errorType)  netwerk/protocol/http/Http2Session.cpp:1092
3      xul.dll       mozilla::net::Http2Session::CloseTransaction(mozilla::net::nsAHttpTransaction*, nsresult)  netwerk/protocol/http/Http2Session.cpp:3628
4      xul.dll       mozilla::net::nsHttpConnectionMgr::OnMsgCancelTransaction(int, mozilla::net::ARefBase*)    netwerk/protocol/http/nsHttpConnectionMgr.cpp:2350
5      xul.dll       mozilla::net::ConnEvent::Run()                                                             netwerk/protocol/http/nsHttpConnectionMgr.cpp:269
6      xul.dll       nsThread::ProcessNextEvent(bool, bool*)                                                    xpcom/threads/nsThread.cpp:1446
7      xul.dll       NS_ProcessNextEvent(nsIThread*, bool)                                                      xpcom/threads/nsThreadUtils.cpp:480
8      xul.dll       mozilla::net::nsSocketTransportService::Run()                                              netwerk/base/nsSocketTransportService2.cpp:976
9      xul.dll       nsThread::ProcessNextEvent(bool, bool*)                                                    xpcom/threads/nsThread.cpp:1446
10     xul.dll       NS_ProcessNextEvent(nsIThread*, bool)                                                      xpcom/threads/nsThreadUtils.cpp:480
11     xul.dll       mozilla::ipc::MessagePumpForNonMainThreads::Run(base::MessagePump::Delegate*)              ipc/glue/MessagePump.cpp:369
12     xul.dll       MessageLoop::RunHandler()                                                                  ipc/chromium/src/base/message_loop.cc:319
13     xul.dll       MessageLoop::Run()                                                                         ipc/chromium/src/base/message_loop.cc:299
14     xul.dll       nsThread::ThreadFunc(void*)                                                                xpcom/threads/nsThread.cpp:506
15     nss3.dll      PR_NativeRunThread                                                                         nsprpub/pr/src/threads/combined/pruthr.c:397
16     nss3.dll      pr_root                                                                                    nsprpub/pr/src/md/windows/w95thred.c:95
17     ucrtbase.dll  o__realloc_base
18     kernel32.dll  BaseThreadInitThunk
19     ntdll.dll     RtlUserThreadStart

This crash signature is increasing in volume during the 56.0b cycle. Its first occurrence on Nightly is on 57.0a1 build 20170803134456, which suggests that some change landing late in the 56 Nightly cycle regressed this. A number of the crashes appear to happen in a UAF situation, so marking this bug as security sensitive.
Group: core-security → network-core-security
Component: Networking → Networking: HTTP
Looking over the last month, these crashes increased significantly in beta 11 and 12. We could look at changes landing between beta 10 and 11 to see if something landed that may be related.
Patrick, can you find someone to investigate? Thanks.
Flags: needinfo?(mcmanus)
Flags: needinfo?(mcmanus)
this seems kinda hairy..
Flags: needinfo?(hurley)
Still no progress here (this one is nasty). I have some thoughts, but they're at best half-formed right now. It looks like (I think) the session is what is being UAF'd. From staring at the code, it looks like we are potentially closing/deleting the session without telling all its transactions. Then, someone else cancels one of those transactions, which tries to do stuff on its connection, which then blows up. I *think* the thing to do here is, in Http2Session::Shutdown(), make sure we SetConnection(nullptr) for all the transactions (which we don't appear to do anywhere). Like I said, though, this thought is half-formed at best. I'll work up a patch to do that, but in the meantime, :mcmanus, you're way smarter than me. I'm willing to bet you can come up with a reason I'm wrong (if I am, in fact, wrong).
Flags: needinfo?(hurley) → needinfo?(mcmanus)
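A minimal sketch of the half-formed proposal above (not the patch that eventually landed): during Http2Session::Shutdown(), drop each transaction's back-pointer to the connection so a later cancel can't reach a dead object. The hash iteration and the exact member/method names (mStreamTransactionHash, Transaction(), SetConnection()) are assumptions based on this comment, not a copy of the real code.

// Hedged sketch only - clears the connection pointer of every transaction the
// session still knows about, alongside the existing per-stream cleanup.
void Http2Session::Shutdown()
{
  for (auto iter = mStreamTransactionHash.Iter(); !iter.Done(); iter.Next()) {
    Http2Stream *stream = iter.UserData();
    if (stream && stream->Transaction()) {
      stream->Transaction()->SetConnection(nullptr);
    }
    // ... existing shutdown/cleanup of the stream continues here ...
  }
}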
Assignee: nobody → hurley
Priority: -- → P1
Whiteboard: [necko-triaged]
hm. I'm not really seeing where nsHttpTransaction comes into play here. Can you tell me more about what you are thinking?

Specifically, the transaction hash seems to be coherent, otherwise the CloseTransaction() stack wouldn't look so clear (and that hash is what keeps the transaction alive). The mSegmentReader->OnReadSegment() call that's the failure point on all the stacks should be calling into the nsHttpConnection.. and that should be fine, because the session itself has a reference to it unless the session has been closed - but that hasn't happened, because CloseTransaction() found the transaction in the transaction hash, and the hash is cleared during close (with a flag set to prohibit more entries being added). But obviously something is wrong.

There seem to be two kinds of stacks - CloseTransaction() and also NotifyConnectionOfWindowIdChange https://crash-stats.mozilla.com/report/index/747ac855-625e-4723-9520-a65db0171009 .. I think it's interesting that neither one of them is a normal I/O stack (i.e. a read/write event), and that's where the segment reader normally gets set. So I'm wondering if mSegmentReader either a] is stale somehow, or b] somehow destroys mSession while OnReadSegment() is on the stack (and we're seeing the after-effects of that). I'm going to say [b] probably isn't the case, because connmgr properly holds a reference to the session before calling into it: https://hg.mozilla.org/releases/mozilla-release/annotate/8fbf05f4b921/netwerk/protocol/http/nsHttpConnectionMgr.cpp#l2350

Some things I might try:
a] when assigning mSegmentReader, assert !mSegmentReader || (reader == mSegmentReader)
b] when assigning mSegmentReader, make it mSegmentReader = mClosed ? nullptr : reader
c] for every stack other than ReadSegmentsAgain and (implicitly) AddStream, don't call FlushOutputQueue() - rather call SetWriteCallbacks() and let the connection manager call ReadSegments() again for you asynchronously, which will call FlushOutputQueue() with a valid segment reader. To do this I guess I would clear mSegmentReader before ReadSegmentsAgain() returned, keeping a copy of it around that AddStream() could explicitly put on the stack too.

ni me again with updates, thanks. I'll add Daniel to the cc for more eyes. Tell me more about what you're thinking with nsHttpTransaction - I can kind of see it big picture, but I'm having trouble relating it to this bug.
Flags: needinfo?(mcmanus)
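To make options a] and b] above concrete, here is a minimal sketch of what the assignment guard could look like. This is illustrative only: SetSegmentReader() is a hypothetical helper (the real assignments happen inline in the ReadSegments()/AddStream() paths), while mSegmentReader and mClosed are the members named in the comment above.

// Hedged sketch of options a] and b] - not an actual patch from this bug.
void Http2Session::SetSegmentReader(nsAHttpSegmentReader *reader)
{
  // a] the reader handed to us should never silently change identity
  MOZ_DIAGNOSTIC_ASSERT(!mSegmentReader || reader == mSegmentReader);
  // b] never install (or keep) a reader once the session has been closed
  mSegmentReader = mClosed ? nullptr : reader;
}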
(In reply to Patrick McManus [:mcmanus] from comment #6)
> Tell me more about what you're thinking with nsHttpTransaction - I can kind
> of see it big picture, but I'm having trouble relating it to this bug.

That was coming from stack frame 4 in the bug description (4 xul.dll mozilla::net::nsHttpConnectionMgr::OnMsgCancelTransaction(int, mozilla::net::ARefBase*) netwerk/protocol/http/nsHttpConnectionMgr.cpp:2350) - specifically, the ARefBase param there gets cast into an nsHttpTransaction, and then trans->Connection() is the pointer used to call CloseTransaction from (which is where we get down into the place where we crash). Since the address for mSegmentReader is poisoned (in a lot of the stacks I've looked at), it seemed reasonable that the connection was being UAF'd, and so the transaction had a stale pointer. However, I'm just noticing that the transaction's mConnection is refcounted... so I'm almost certainly wrong. How I missed that last week, I'm not sure. Ugh.

Something that's especially worrisome (again that I'm just noticing now... last week was bad for me, obviously) is that some of the crashes have 0x0 as the address, and we have a check/early return for !mSegmentReader up at the top of FlushOutputQueue. So potentially we have something touching/nulling mSegmentReader on another thread? This is a cross-platform crash (though only 1 each on linux and os x), so not some AV or similar getting in the way.

I'll see about fixing how we set mSegmentReader (which is a reasonable thing to do anyway), but I'm feeling no closer to understanding what's actually going on here...
Flags: needinfo?(mcmanus)
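For reference, the early return mentioned above sits at the top of Http2Session::FlushOutputQueue(). Paraphrased from the crash line and the `if (!mSegmentReader || !mOutputQueueUsed)` condition visible in the disassembly quoted later in this bug (the flushing body itself is omitted):

void Http2Session::FlushOutputQueue()
{
  if (!mSegmentReader || !mOutputQueueUsed) {
    return;  // nothing buffered, or no reader to hand the bytes to
  }
  // ... write the queued frames out via mSegmentReader->OnReadSegment(...) ...
}

A null mSegmentReader should be caught by that check, which is why crashes at address 0x0 inside this function are so puzzling.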
> Something that's especially worrisome (again that I'm just noticing now...
> last week was bad for me, obviously) is that some of the crashes have 0x0 as
> the address, and we have a check/early return for !mSegmentReader up at the
> top of FlushOutputQueue. So potentially we have something touching/nulling
> mSegmentReader on another thread? This is a cross-platform crash (though
> only 1 each on linux and os x), so not some AV or similar getting in the way.

That was part of my thinking.. cross-thread seems pretty unlikely (but not impossible). I was thinking more that mSegmentReader->OnReadSegment() had been called and we were seeing the stack as it was being popped back to FlushOutputQueue().. and either mSegmentReader wasn't valid, or something in OnReadSegment() managed to delete |this| (which would be a ref-counting bug, because the connmgr is holding a ref to this), or overran the stack, or..?
Flags: needinfo?(mcmanus)
I stared at the addresses for some time to get some additional clues, but they seem to be more or less random, with a fair amount of them saying 0xffffffffe5e5e5e5, 0x0 and 0xffffffffffffffff, but also many other values. That just hints more at a UAF.
So the patch with the changes :mcmanus recommended is currently causing test timeouts that I'm working my way through.
Turns out I'm just stupid, and that's why the timeouts were happening. So now the patch as suggested works. (Will post soon.) I'm still at a loss as to how we're getting into this state.
Attached patch patch (obsolete) (deleted) — Splinter Review
Attachment #8921606 - Flags: review?(mcmanus)
Comment on attachment 8921606 [details] [diff] [review]
patch

Review of attachment 8921606 [details] [diff] [review]:
-----------------------------------------------------------------

Change these to MOZ_DIAGNOSTIC_ASSERT and lgtm.
Attachment #8921606 - Flags: review?(mcmanus) → review+
Attached patch patch (v2) (obsolete) (deleted) — Splinter Review
Updated with diagnostic asserts. Carry forward r+
Attachment #8922482 - Flags: review+
Attachment #8921606 - Attachment is obsolete: true
Comment on attachment 8922482 [details] [diff] [review]
patch (v2)

[Security approval request comment]

How easily could an exploit be constructed based on the patch?
Not sure - tbh, we're not even sure this patch will fix the issue. It's our best guess at this point, though.

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
I don't think so.

Which older supported branches are affected by this flaw?
Probably all of them.

If not all supported branches, which bug introduced the flaw?
N/A

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
No, but they should be pretty much the same, easy to create, and no riskier than the original.

How likely is this patch to cause regressions; how much testing does it need?
Would be good to get some manual testing on this - if there's a corner case we're missing that causes one of the asserts to hit, then people would see some (safe) crashes. Hopefully only the people who are currently seeing these (potentially unsafe) crashes would see them, though. I haven't seen anything running builds with this applied yet, though.
Attachment #8922482 - Flags: sec-approval?
Comment on attachment 8922482 [details] [diff] [review]
patch (v2)

sec-approval+ for trunk. I don't think we should necessarily take this on 57 if we think it needs manual testing and aren't even sure if it fixes the issue.
Attachment #8922482 - Flags: sec-approval? → sec-approval+
Doesn't look like a huge volume on release, and very few crashes in nightly so it'll take a while to be sure it's fixed. Probably not worth the risk for 57.
Oh, aren't those crashes fun. I could've sworn I did a try run on this. *sigh* Simple enough to work around, though.
Flags: needinfo?(hurley)
Attached patch patch (v3) (obsolete) (deleted) — Splinter Review
Replaces the bool with a count, for re-entrancy.
Attachment #8922972 - Flags: review?(mcmanus)
Attachment #8922482 - Attachment is obsolete: true
I understand the assertion is re-entrant-safe with the combination of AddStream() and ReadSegments(), but I'm not sure that's generically true.. wdyt? We might need to approach it differently than just a generic balance..
Comment on attachment 8922972 [details] [diff] [review]
patch (v3)

Review of attachment 8922972 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/protocol/http/Http2Session.cpp
@@ +556,5 @@
> + // is properly set through the right channels. Otherwise, just set our write
> + // callbacks so the connection can call in with a proper segment reader that
> + // we'll be sure we can write to.
> + // See bug 1402014 comment 6
> + LOG3(("Http2Session::MaybeFlushOutputQueue mFlushOK = %d", mFlushOK));

assert socket thread
Attachment #8922972 - Flags: review?(mcmanus)
Attached patch patch (v3, with assertion) (obsolete) (deleted) — Splinter Review
Attachment #8924162 - Flags: review?(mcmanus)
Attachment #8922972 - Attachment is obsolete: true
Comment on attachment 8924162 [details] [diff] [review]
patch (v3, with assertion)

Review of attachment 8924162 [details] [diff] [review]:
-----------------------------------------------------------------

Can you address comment 22? I don't feel like we want this to be arbitrarily re-entrant.
Attachment #8924162 - Flags: review?(mcmanus)
(In reply to Patrick McManus [:mcmanus] from comment #25)
> can you address comment 22?

Wow, totally missed that comment. Whoops. So you're thinking either some limit on re-entrancy (count no greater than 2, to handle the AddStream => ReadSegments path), or multiple bools (same effect), yes? I think that makes sense.
I think probably multiple bools, because you don't want to allow the {ReadSegments, ReadSegments} combo in particular. Thanks!
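A rough sketch of how that "multiple bools" guard could look, under the assumptions discussed above. The flag names are illustrative (the review snippet above only shows a single mFlushOK), and this is not the patch that landed: only flush when a known-good segment reader is on the stack (the ReadSegments or AddStream path); otherwise just request write callbacks so the connection manager calls back in with a fresh reader.

void Http2Session::MaybeFlushOutputQueue()
{
  MOZ_ASSERT(OnSocketThread(), "not on socket thread");
  if (mFlushOKFromReadSegments || mFlushOKFromAddStream) {
    FlushOutputQueue();     // mSegmentReader is known to be valid right now
  } else {
    SetWriteCallbacks();    // defer; flush later with a valid segment reader
  }
}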
Attached patch patch (v4) (obsolete) (deleted) — Splinter Review
Attachment #8924698 - Flags: review?(mcmanus)
Attachment #8924162 - Attachment is obsolete: true
Comment on attachment 8924698 [details] [diff] [review]
patch (v4)

Review of attachment 8924698 [details] [diff] [review]:
-----------------------------------------------------------------

lgtm; make sure it doesn't assert on try :)
Attachment #8924698 - Flags: review?(mcmanus) → review+
Comment on attachment 8924698 [details] [diff] [review]
patch (v4)

[Security approval request comment]
See comment 15. The patch hasn't changed enough to alter any of the answers there (and this was given s-a+ for trunk), but I'm not sure it's alright for me to just carry forward the approval. So, I ask again :)
Attachment #8924698 - Flags: sec-approval?
(In reply to Patrick McManus [:mcmanus] from comment #29)
> Comment on attachment 8924698 [details] [diff] [review]
> patch (v4)
>
> Review of attachment 8924698 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> lgtm; make sure it doesn't assert on try :)

I made double-sure of that this time :)
Attachment #8924698 - Flags: sec-approval? → sec-approval+
remote: https://hg.mozilla.org/integration/mozilla-inbound/rev/e15196e25f9e783b2c04689936074990908170b0

Marking this leave-open so we only close when we see evidence that this fixes the issue.
Keywords: leave-open
Depends on: 1415387
What's the plan to reland this?
(In reply to Randell Jesup [:jesup] from comment #37)
> What's the plan to reland this?

There is no plan, as the patch that was landed does not fix the problem (see bug 1415876 comment 1). I'm continuing to investigate, but have not made significant progress.
The crash-stats graph shows a remarkable dip in crash rate every weekend -- does that indicate this tends to occur only in business-type situations and not on random users' home machines?
This relatively recent crash report mentions using gmail (which uses h2, yeah): https://crash-stats.mozilla.com/report/index/ba85fa13-a7bb-47e1-8f60-5518b0180102#tab-details
Looking over all of them for the year, I see few correlations with what URL is listed. Perhaps this implies it was running background-tab code/fetches? Not sure if they're reflected in the reported URL... for example, I doubt about:addons (or about:newtab) is triggering h2. Ted, comments on where the URL comes from?
Flags: needinfo?(ted)
I'm not fully up-to-date on how the URL annotations work nowadays, but this should be an exhaustive list of places that set the URL annotation: https://dxr.mozilla.org/mozilla-central/search?q=regexp%3A(a%7CA)nnotateCrashReport.*%22URL&redirect=false

It basically breaks down to `onLocationChange` handlers and `loadURI` methods. Whether that exhaustively covers everything that does HTTP requests is not something I can answer, unfortunately!

For this specific sort of thing, I wonder if we could add a low-overhead way for `Http2Session` or `nsHttpConnectionMgr` (or wherever makes the most logical sense) to indicate "I am handling a HTTP session with this URL" to the crash reporter code, and then the crashreporter could check if the crashing thread was in that set and use that URL as the URL annotation.
Flags: needinfo?(ted)
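A purely hypothetical sketch of the idea above - every name here is invented for illustration and none of this exists in the crash reporter or necko today:

// Hypothetical only: networking code registers the URL it is currently
// servicing on the socket thread, and the crash reporter prefers that URL
// when the crashing thread matches. No such API exists.
class SocketThreadUrlAnnotation {
public:
  static void Set(const nsACString &aUrl);   // called by Http2Session / connmgr
  static void Clear();                       // called when the request finishes
  static bool Get(nsACString &aOut);         // queried by the crash reporter
};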
Flags: needinfo?(mcmanus)
Only 3 crashes with this signature in 58.x so far, all null dereferences, 2 with EXCEPTION_ACCESS_VIOLATION_WRITE. 57 had way more crashes, all EXCEPTION_ACCESS_VIOLATION_READ, quite a few at non-null addresses. So maybe this was fixed and the remaining issue is different?
No signs of this from 59/60 yet. Guess we can see what happens after 59 goes to release.
61 is going into release this week and I still don't see any crashes for 60. I propose we mark this WORKSFORME. The only remaining question is whether someone will figure out what fixed it :)
(In reply to Ryan VanderMeulen [:RyanVM] from comment #44)
> No signs of this from 59/60 yet. Guess we can see what happens after 59 goes
> to release.

About 10-15 crashes, including UAFs, in 59, so when things go to release we tend to get crashes. I do not think this is gone yet; I think we'll get crashes in 60 on release as well.

https://crash-stats.mozilla.com/report/index/c078bcba-ecf9-4728-b878-763930180322
https://crash-stats.mozilla.com/report/index/463c35a9-c2fd-4aa7-b453-8c5000180404
https://crash-stats.mozilla.com/report/index/497079ca-6f0c-4667-95af-3f1cb0180502

Dragana, Nick - any thoughts?
Flags: needinfo?(hurley)
Flags: needinfo?(dd.mozilla)
Ugh, those crashes make things even more messed up - one of them (https://crash-stats.mozilla.com/report/index/497079ca-6f0c-4667-95af-3f1cb0180502) is in the regular h2 i/o path, which none of the previous crashes we'd seen had been. Another (https://crash-stats.mozilla.com/report/index/463c35a9-c2fd-4aa7-b453-8c5000180404) happens during the constructor of the session - so my original (though already likely wrong) theory that the session was being UAF'd is pretty obviously wrong.

The ctor crash is especially fun, because it's on a line of just a couple boolean checks of member variables. So the only pointer being dereferenced at that point would be |this|... which should be 100% valid, since it's in code called from the ctor.

That, paired with the fact that this seems to be highly correlated with the work week, says to me that there may be some software we can't see in the crashes interfering with us. Which, of course, makes this way harder to track down.

Pat - do you see anything I'm missing here?
Flags: needinfo?(hurley)
Flags: needinfo?(mcmanus)
Flags: needinfo?(mcmanus)
The last profile you mention (https://crash-stats.mozilla.com/report/index/463c35a9-c2fd-4aa7-b453-8c5000180404) appears to be trashed code:

578: if (!mSegmentReader || !mOutputQueueUsed)
6220AAE7 8B 4E 20              mov         ecx,dword ptr [esi+20h]
578: if (!mSegmentReader || !mOutputQueueUsed)
6220AAEA 85 C9                 test        ecx,ecx
6220AAEC 0F 84 BC 00 00 00     je          mozilla::net::Http2Session::FlushOutputQueue+0CEh (6220ABAEh)

-->

578: if (!mSegmentReader || !mOutputQueueUsed)
664EA942 8B 4E 20              mov         ecx,dword ptr [esi+20h]
578: if (!mSegmentReader || !mOutputQueueUsed)
664EA945 85 C9                 test        ecx,ecx
664EA947 08 84 BC 00 00 00 57  or          byte ptr [esp+edi*4+57000000h],al

which is clearly fubar, so don't trash your theory just on that. There are e5e5 UAF reports in this code, like https://crash-stats.mozilla.com/report/index/c078bcba-ecf9-4728-b878-763930180322
This seems to be happening more with older versions of Firefox. When I look at the crash stats for the last 7 days, the sigs are all FF 57 or older. When I look at the last month, there are some 61 signatures, but only 5 out of 76. In the last 6 months, only 31 out of 900 crashes are from versions newer than 57.
This is no more than speculation, but it may be that the code path that led to this crash was only exercised via an old-style addon. That would explain why it mostly went away after we launched 57... a few users might have still changed the pref and used the addon for a bit. However, most of the reports I've looked at only list the default Mozilla addons - and even if my theory were true, it doesn't get us any closer to figuring out what causes the crash or how to fix it.
Keywords: stalled
(In reply to Randell Jesup [:jesup] from comment #48)
> The last profile you mention
> (https://crash-stats.mozilla.com/report/index/463c35a9-c2fd-4aa7-b453-8c5000180404) appears to be trashed code:

In fact, there are about 10 one- and two-bit errors in that dump.
Assignee: u408661 → valentin.gosu

I don't see crashes for current releases, is this still a pertinent bug? Would you update the priority if not?

Flags: needinfo?(valentin.gosu)

(In reply to Emma Humphries, Bugmaster ☕️🎸🧞‍♀️✨ (she/her) [:emceeaich] (UTC-8) needinfo? me from comment #52)
> I don't see crashes for current releases, is this still a pertinent bug? Would you update the priority if not?

Given the low crash rate and lack of crashes in recent releases, I don't think this is a high priority anymore.

Flags: needinfo?(valentin.gosu)
Priority: P1 → P3

The leave-open keyword is set and there has been no activity for 6 months.
:valentin, maybe it's time to close this bug?

Flags: needinfo?(valentin.gosu)

No crashes in recent builds. I think it's safe to close.

Status: NEW → RESOLVED
Closed: 5 years ago
Flags: needinfo?(valentin.gosu)
Flags: needinfo?(mcmanus)
Flags: needinfo?(dd.mozilla)
Resolution: --- → INCOMPLETE

Since the bug is closed, the stalled keyword is now meaningless.
For more information, please visit auto_nag documentation.

Keywords: stalled
Resolution: INCOMPLETE → WORKSFORME

Although happening at a low rate, this signature is still showing up, and half of the reports are scary EXCEPTION_ACCESS_VIOLATION_EXEC crashes.

Status: RESOLVED → REOPENED
Resolution: WORKSFORME → ---
Attachment #9179864 - Attachment is obsolete: true
Attachment #8924698 - Attachment is obsolete: true

Comment on attachment 9181215 [details]
Bug 1402014 - Make nsAHttpSegmentReader refcounted r=dragana

Security Approval Request

  • How easily could an exploit be constructed based on the patch?: With difficulty. The scenario of how mSegmentReader becomes a dangling pointer is hard to pin down. Our fix was to turn it into a RefPtr to avoid such issues.
    Even if a scenario to reproduce the use-after-free were discovered by attackers, all they could do is call ReadSegments on that pointer - most likely crashing.
  • Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?: Unknown
  • Which older supported branches are affected by this flaw?: all
  • If not all supported branches, which bug introduced the flaw?: None
  • Do you have backports for the affected branches?: No
  • If not, how different, hard to create, and risky will they be?: Grafts successfully onto esr78.
  • How likely is this patch to cause regressions; how much testing does it need?: There is a chance for regressions due to altering the lifetime of Http2Stream.
    I'd like this to have some time Nightly/Beta before it hits release.
Attachment #9181215 - Flags: sec-approval?
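For context, a minimal sketch of the direction the attached patch describes (simplified, not the actual diff): give nsAHttpSegmentReader refcounting support and hold the session's reader through a strong reference so it can no longer dangle. The exact mechanics of the real patch - including the Http2Stream lifetime changes mentioned in the regression-risk answer - are not reproduced here.

// Hedged sketch only - the abstract reader gains refcounting...
class nsAHttpSegmentReader
{
public:
  NS_INLINE_DECL_PURE_VIRTUAL_REFCOUNTING
  virtual nsresult OnReadSegment(const char *segment, uint32_t count,
                                 uint32_t *countRead) = 0;
};

// ...and in Http2Session the raw pointer becomes a strong reference:
//   nsAHttpSegmentReader *mSegmentReader;          // before: can dangle
//   RefPtr<nsAHttpSegmentReader> mSegmentReader;   // after: keeps the reader alive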

Comment on attachment 9181215 [details]
Bug 1402014 - Make nsAHttpSegmentReader refcounted r=dragana

sec-approval+, a=dveditz for beta uplift

Attachment #9181215 - Flags: sec-approval?
Attachment #9181215 - Flags: sec-approval+
Attachment #9181215 - Flags: approval-mozilla-beta+
Group: network-core-security → core-security-release
Status: REOPENED → RESOLVED
Closed: 5 years ago → 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → 84 Branch

Changing the priority to P1 as the bug is tracked by a release manager for the current beta.
See What Do You Triage for more information

Priority: P3 → P1

As part of a security bug pattern analysis, we are requesting your help with a high level analysis of this bug. It is our hope to develop static analysis (or potentially runtime/dynamic analysis) in the future to identify classes of bugs.

Please visit this google form to reply.

Flags: needinfo?(valentin.gosu)
Whiteboard: [necko-triaged] → [necko-triaged][sec-survey]
Status: RESOLVED → REOPENED
Flags: needinfo?(valentin.gosu)
Resolution: FIXED → ---
Target Milestone: 84 Branch → ---

Comment on attachment 9181215 [details]
Bug 1402014 - Make nsAHttpSegmentReader refcounted r=dragana

Clearing the beta approval to get this off the needs-uplift radar.

Attachment #9181215 - Flags: approval-mozilla-beta+

Dragana said she'd take a look when possible.

Assignee: valentin.gosu → dd.mozilla
Keywords: stalled

This is probably fixed by one of the patches in bug 1667102.

Status: REOPENED → RESOLVED
Closed: 4 years ago → 3 years ago
Resolution: --- → FIXED
No longer blocks: CVE-2021-43535
Depends on: CVE-2021-43535
QA Whiteboard: [post-critsmash-triage]
Flags: qe-verify-
Depends on: 1740274
Group: core-security-release