Closed Bug 1397793 Opened 7 years ago Closed 7 years ago

Remove webrtc.org "External" audio interface and switch gUM audio input to APM

Categories

(Core :: WebRTC: Audio/Video, enhancement, P2)


Tracking


RESOLVED FIXED
mozilla59
Tracking Status
thunderbird_esr52 --- unaffected
firefox-esr52 --- unaffected
firefox56 --- wontfix
firefox57 --- wontfix
firefox58 --- wontfix
firefox59 --- fixed

People

(Reporter: jesup, Assigned: padenot)

References

(Blocks 1 open bug)

Details

Attachments

(17 files, 12 obsolete files)

(deleted), text/x-review-board-request
dminor
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
dminor
: review+
Details
(deleted), text/x-review-board-request
jesup
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
(deleted), text/x-review-board-request
pehrsons
: review+
Details
We should move away from the (removed) External audio interfaces in webrtc.org and use the AudioProcessing module directly for AEC/NS/AGC/etc. This will reduce friction during updates and reduce overhead, especially if we can also remove some of the critical-section locks it uses, and it will make it easier to use experimental APIs (AEC3, etc.).
Rank: 15
Attached patch WIP patch to remove External audio api use from gUM (obsolete) (deleted) — — Splinter Review
Attached patch rough wip for transmit_mixer (obsolete) (deleted) — — Splinter Review
Attached patch rough wip for audio processing module (obsolete) (deleted) — — Splinter Review
I had some rough work in progress for this over on Bug 1391257, attaching in case any pieces are useful to you.
Attached patch WIP patch to remove External audio api use from gUM (obsolete) (deleted) — — Splinter Review
now with NS and AGC support
Attachment #8905557 - Attachment is obsolete: true
Note: the loop that selects a resample rate based on kNativeSampleRatesHz means that instead of downsampling 44100 Hz to 32000 Hz, we'll probably be upsampling it to 48000 Hz (see audio_processing.h). This may have some impact on performance (slightly more overhead) and on quality (probably less high-frequency loss when the AEC is on, though I haven't verified that). You can compare with OutputMixer::APMAnalyzeReverseStream(), which isn't used once this patch is applied.
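For context, the selection behavior described above can be sketched standalone. The rate list mirrors webrtc.org's kNativeSampleRatesHz (8/16/32/48 kHz); `SelectProcessingRate` is a hypothetical name for illustration, not the patch's code:

```cpp
#include <array>

// Native APM processing rates, mirroring kNativeSampleRatesHz in
// webrtc.org's audio_processing.h.
constexpr std::array<int, 4> kNativeRatesHz = {8000, 16000, 32000, 48000};

// Hypothetical standalone version of the selection loop: pick the lowest
// native rate that loses no information (i.e. >= the requested rate),
// falling back to the highest native rate for anything above it.
inline int SelectProcessingRate(int minRateHz) {
  for (int native : kNativeRatesHz) {
    if (native >= minRateHz) {
      return native;  // e.g. 44100 -> 48000: upsample rather than downsample
    }
  }
  return kNativeRatesHz.back();
}
```

This is why a 44100 Hz capture ends up upsampled to 48000 Hz rather than downsampled to 32000 Hz: 44100 is not in the native list, and 48000 is the lowest native rate at or above it.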
Attachment #8905793 - Flags: review?(padenot)
Attachment #8905572 - Attachment is obsolete: true
Comment on attachment 8905793 [details] [diff] [review] Remove webrtc.org External audio api use from gUM in favor of APM Review of attachment 8905793 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/media/webrtc/MediaEngineWebRTC.h @@ +610,5 @@ > + webrtc::AudioFrame mAudioFrame; > + webrtc::AudioFrame mAudioFrameReverse; // output data for AEC > + UniquePtr<webrtc::AudioProcessing> mAudioProcessing; > + webrtc::PushResampler<int16_t> mResampler; // XXX replace with speex resampler... > + webrtc::PushResampler<int16_t> mResamplerReverse; // XXX replace with speex resampler... We should not have to do the resampling ourselves. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp @@ +59,2 @@ > ScopedCustomReleasePtr<webrtc::VoENetwork> MediaEngineWebRTCMicrophoneSource::mVoENetwork; > ScopedCustomReleasePtr<webrtc::VoEAudioProcessing> MediaEngineWebRTCMicrophoneSource::mVoEProcessing; Can we remove this? @@ +684,5 @@ > + > + // This is guaranteed to be invoked on any rate or channel change, and on the first call. > + > + // We want to process at the lowest rate possible without losing information. > + // Choose the lowest native rate at least equal to the input and codec rates. Can we have a fastpath? Most of the time, we're capturing at either 44.1 or 48. It seems quite wasteful to resample here. From the APM header file: > // The float interfaces accept arbitrary rates and support differing input and > // output layouts, but the output must have either one channel or the same > // number of channels as the input. Our input data are in float. The packetizer just above does the conversion, but we can change that. @@ +686,5 @@ > + > + // We want to process at the lowest rate possible without losing information. > + // Choose the lowest native rate at least equal to the input and codec rates. > + const int min_processing_rate = std::min(aRate, mSampleFrequency); // XXX allow 48KHz? 
> + for (size_t i = 0; i < webrtc::AudioProcessing::kNumNativeSampleRates; ++i) { wtf, 44.1 is not in the "native sample rates" list? @@ +705,5 @@ > + while (mAudioOutputObserver->Size() > 1) { > + free(mAudioOutputObserver->Pop()); // only call if size() > 0 > + } > + } > + while (mAudioOutputObserver->Size() > 0) { We should consider ditching this queue for the duplex (common) case. @@ +710,5 @@ > + FarEndAudioChunk *buffer = mAudioOutputObserver->Pop(); // only call if size() > 0 > + if (buffer) { > + // target rate set already > + mAudioFrameReverse.num_channels_ = mAudioOutputObserver->PlayoutChannels(); > + webrtc::voe::RemixAndResample(buffer->mData, buffer->mSamples, This is still using voe. @@ +716,5 @@ > + mAudioOutputObserver->PlayoutFrequency(), > + &mResamplerReverse, > + &mAudioFrameReverse); > + int err = mAudioProcessing->ProcessReverseStream(&mAudioFrameReverse); > + free(buffer); not great to have a free() here. @@ +737,5 @@ > int16_t* packet = mInputBuffer.Elements(); > mPacketizer->Output(packet); > > + // rate already set > + // XXX Note: this means that we don't currently support stereo (or above) gUM I don't understand this. Is this a regression? @@ +743,5 @@ > + webrtc::voe::RemixAndResample(packet, samplesPerPacket, aChannels, aRate, > + &mResampler, &mAudioFrame); > + > + mAudioProcessing->set_stream_delay_ms(0); // not worth doing conditionally (aec on) > + int err = mAudioProcessing->ProcessStream(&mAudioFrame); We should use the interface that does not use the audio frame. There are deprecation warnings in the APM header file. @@ +753,5 @@ > + MonitorAutoLock lock(mMonitor); > + if (mState != kStarted) > + return; > + > + InsertInGraph<int16_t>(mAudioFrame.data_, mAudioFrame.samples_per_channel_, We should be able to insert in float directly here. 
@@ +893,5 @@ > mVoEBase->Init(); > > + mVoENetwork = webrtc::VoENetwork::GetInterface(mVoiceEngine); > + if (mVoENetwork) { > + mVoEProcessing = webrtc::VoEAudioProcessing::GetInterface(mVoiceEngine); What is this used for?
Attachment #8905793 - Flags: review?(padenot)
Attachment #8905793 - Attachment is obsolete: true
Also removed the deprecated interfaces
this patch causes audio drift (huge drift); somewhere there's still a 32K vs GraphRate issue
un-assigning so someone else can finish this off. It all works except for the last patch; there's a green try with the first patch.
Assignee: rjesup → nobody
fix mis-merge - multichannel landing required some rework
Attachment #8907116 - Attachment is obsolete: true
Assignee: nobody → rjesup
Status: NEW → ASSIGNED
Attached patch WIP patch for more graphrate changes and remove more voe code (obsolete) (deleted) — — Splinter Review
still has audio rate issues. Lots more voe code removed
Attached patch WIP patch for more graphrate changes and remove more voe code (obsolete) (deleted) — — Splinter Review
now totally removes VoEBase/channels/etc
Attached patch WIP patch for more graphrate changes and remove more voe code (obsolete) (deleted) — — Splinter Review
now totally removes VoEBase/channels/etc
Attachment #8907579 - Attachment is obsolete: true
Attachment #8907587 - Attachment is obsolete: true
Mass change P1->P2 to align with new Mozilla triage process
Priority: P1 → P2
Blocks: 1402356
Blocks: 1394163
Jesup unassigned in comment 12. Please re-assign if it should be dminor instead.
Assignee: rjesup → padenot
Hey Paul, do you have a release target?
Flags: needinfo?(padenot)
(In reply to Jim Mathies [:jimm] from comment #19) > Hey Paul, do you have a release target? No, but I'm working on this more or less full time. This can probably be in 58.
Flags: needinfo?(padenot)
Attachment #8905564 - Attachment is obsolete: true
Attachment #8905565 - Attachment is obsolete: true
Attachment #8907119 - Attachment is obsolete: true
Attachment #8907120 - Attachment is obsolete: true
Attachment #8907150 - Attachment is obsolete: true
Attachment #8907597 - Attachment is obsolete: true
So, normally I would have asked jesup to have a look at this, but this patch set has been (more than) half-written by him, so it does not make sense. dminor is assigned to look at webrtc.org bits (since he did the last import), pehrsons is assigned to AudioConduit/MediaEngineWebRTC stuff, since it's closer to the MSG. However, an overall look from jesup (if time permits) would be cool. jib, this patch set is modifying UpdateSingleSource in [0], if you want to have a look as well. Those patches are rebased against central. The code at the beginning of the series is mostly unchanged (again, modulo rebasing and bug fixes). The end of the series is a rewrite of MediaEngineWebRTCAudio.cpp to use the stable API (quite different from the original code). I tried to write long commit messages; let me know if something does not make much sense. [0]: https://bugzilla.mozilla.org/attachment.cgi?id=8923906
Flags: needinfo?(rjesup)
Flags: needinfo?(jib)
Attachment #8923900 - Flags: review?(dminor) → review+
Comment on attachment 8923902 [details] Bug 1397793 - Move away from VoEExternalMedia "external" API in AudioConduit.cpp. https://reviewboard.mozilla.org/r/195056/#review200126 ::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp:313 (Diff revision 1) > CSFLogError(LOGTAG, "%s Unable to initialize VoEBase", __FUNCTION__); > return kMediaConduitSessionNotInited; > } > > // init the engine with our audio device layer > - if(mPtrVoEBase->Init() == -1) > + if(mPtrVoEBase->Init(mFakeAudioDevice.get()) == -1) Maybe add a comment saying that we use a fake audio device to avoid initializing an actual device. I'm assuming that the BSDs are now fine with a fake audio device, as we seemed to hit problems the last time we tried something like this. ::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp:660 (Diff revision 1) > > MediaConduitErrorCode > WebrtcAudioConduit::GetAudioFrame(int16_t speechData[], > int32_t samplingFreqHz, > int32_t capture_delay, > int& lengthSamples) nit: I realize this is pre-existing code but you might as well fix the indent while you are here. ::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp:712 (Diff revision 1) > { > return kMediaConduitPlayoutError; > } > return kMediaConduitUnknownError; > } > + // XXX Annoying, have to copy to our buffers -- refactor? Please file a bug for this, we only have one caller for WebrtcAudioConduit::GetAudioFrame() and I think the refactor to avoid the copy will be straightforward. I understand not wanting to do it as part of this patch set. ::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp:715 (Diff revision 1) > return kMediaConduitUnknownError; > } > + // XXX Annoying, have to copy to our buffers -- refactor? > + PodCopy(speechData, mAudioFrame.data_, > + mAudioFrame.samples_per_channel_ * mAudioFrame.num_channels_); > + lengthSamples = mAudioFrame.samples_per_channel_ * mAudioFrame.num_channels_; please calculate lengthSamples first and use it in PodCopy.
Attachment #8923902 - Flags: review?(dminor) → review+
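The review above asks for lengthSamples to be computed before the copy. A minimal standalone sketch of that shape (hypothetical helper name, std::memcpy standing in for mozilla::PodCopy):

```cpp
#include <cstdint>
#include <cstring>

// Sketch of the suggested refactor: compute the total sample count once,
// then use it for both the copy and the out-parameter, instead of
// repeating samples_per_channel * num_channels twice.
inline size_t CopyAudioFrame(int16_t* dst, const int16_t* src,
                             size_t samplesPerChannel, size_t numChannels) {
  const size_t lengthSamples = samplesPerChannel * numChannels;
  std::memcpy(dst, src, lengthSamples * sizeof(int16_t));
  return lengthSamples;
}
```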
Comment on attachment 8923903 [details] Bug 1397793 - Revert Mozilla changes to OutputMixer https://reviewboard.mozilla.org/r/195058/#review200136 As far as I can tell we won't hit this function with feed_data_to_apm true, it looks like the only code paths which do that are in audio devices which we no longer use since we've switched to the fake audio device. Maybe we should add an assert that feed_data_to_apm is always false? ProcessReverseStream still has constraints on incoming sample rate [1] but this drops the resampling we used to do. Can we guarantee we'll always see a supported sample rate? I'm sorry, but I don't feel comfortable reviewing this change with my current level of knowledge of the audio processing code. [1] http://searchfox.org/mozilla-central/rev/1c4901d060e3953e41088c038b62a0c026c1e1fb/media/webrtc/trunk/webrtc/modules/audio_processing/audio_processing_impl.cc#1379 ::: commit-message-1975c:1 (Diff revision 1) > +Bug 1397793 - Update OutputMixer to reflec API changes. r?dminor typo: s/reflec/reflect/
Attachment #8923903 - Flags: review?(dminor)
Comment on attachment 8923901 [details] Bug 1397793 - Use the MSG rate in MediaPipeline/PeerConnectionImpl. https://reviewboard.mozilla.org/r/195054/#review200374 Excellent! Does this mean we can kill `DEFAULT_SAMPLE_RATE` in MediaEngine.h too? http://searchfox.org/mozilla-central/source/dom/media/webrtc/MediaEngine.h#57 ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h:28 (Diff revision 1) > // Should come from MediaEngine.h, but that's a pain to include here > // because of the MOZILLA_EXTERNAL_LINKAGE stuff. > #define WEBRTC_DEFAULT_SAMPLE_RATE 32000 Remove? ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:2062 (Diff revision 1) > // if this is not enough we'll loop and provide more > - samples_length = WEBRTC_DEFAULT_SAMPLE_RATE/100; > + samples_length = samples_per_10ms; > PodArrayZero(scratch_buffer); > } > > - MOZ_ASSERT(samples_length * sizeof(uint16_t) < AUDIO_SAMPLE_BUFFER_MAX_BYTES); > + MOZ_ASSERT(samples_length * sizeof(uint16_t) <= AUDIO_SAMPLE_BUFFER_MAX_BYTES); This change seems orthogonal, but makes sense. ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:2100 (Diff revision 1) > > // Handle track not actually added yet or removed/finished > if (source_->AppendToTrack(track_id_, &segment)) { > played_ticks_ += frames; > if (MOZ_LOG_TEST(AudioLogModule(), LogLevel::Debug)) { > - if (played_ticks_ > last_log_ + WEBRTC_DEFAULT_SAMPLE_RATE) { // ~ 1 second > + if (played_ticks_ > last_log_ + graph->GraphRate()) { // ~ 1 second s/graph->GraphRate()/rate/ ::: media/webrtc/signaling/src/peerconnection/PeerConnectionImpl.cpp:1792 (Diff revision 1) > // to the "start" time for the track > segment_->AppendNullData(current_ticks); > if (segment_->GetType() == MediaSegment::AUDIO) { > mStream->AsSourceStream()->AddAudioTrack( > track_id_, > - WEBRTC_DEFAULT_SAMPLE_RATE, > + mStream->GraphRate(), s/mStream->GraphRate()/track_rate/
Attachment #8923901 - Flags: review?(apehrson) → review+
Comment on attachment 8923904 [details] Bug 1397793 - Move MediaEngineDefault to use the MSG rate instead of something hard-coded. https://reviewboard.mozilla.org/r/195060/#review200380 ::: dom/media/webrtc/MediaEngineDefault.cpp:476 (Diff revision 1) > const PrincipalHandle& aPrincipalHandle) > { > MOZ_ASSERT(aID == mTrackID); > AudioSegment segment; > // avoid accumulating rounding errors > - TrackTicks desired = aSource->TimeToTicksRoundUp(AUDIO_RATE, aDesiredTime); > + TrackTicks desired = aSource->TimeToTicksRoundUp(mRate, aDesiredTime); We could even skip `mRate` and use aGraph->GraphRate() here.
Attachment #8923904 - Flags: review?(apehrson) → review+
Comment on attachment 8923905 [details] Bug 1397793 - Remove VoEExternalMedia usage in MediaEngineWebRTCAudio and MediaEngineWebRTC. https://reviewboard.mozilla.org/r/195062/#review200382 Amazing cleanup. Thanks! ::: dom/media/webrtc/MediaEngineWebRTC.h:144 (Diff revision 1) > > // Small subset of VoEHardware > class AudioInput > { > public: > - explicit AudioInput(webrtc::VoiceEngine* aVoiceEngine) : mVoiceEngine(aVoiceEngine) {}; > + explicit AudioInput() {} Remove this or make it `=default`. ::: dom/media/webrtc/MediaEngineWebRTC.cpp:283 (Diff revision 1) > if (!mAudioInput) { > - if (SupportsDuplex()) { > - // The platform_supports_full_duplex. > + MOZ_ASSERT(SupportsDuplex()); > + mAudioInput = new mozilla::AudioInputCubeb(); > - mAudioInput = new mozilla::AudioInputCubeb(mVoiceEngine); > - } else { > - mAudioInput = new mozilla::AudioInputWebRTC(mVoiceEngine); > - } > } Will this be graceful in release if `SupportsDuplex()` is false? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:760 (Diff revision 1) > } > > bool > MediaEngineWebRTCMicrophoneSource::AllocChannel() > { > - MOZ_ASSERT(mVoEBase); > + mSampleFrequency = MediaEngine::USE_GRAPH_RATE; Is `mSampleFrequency` needed at all now? How far is it propagated -- can we hit problems if an input device doesn't support the graph rate?
Attachment #8923905 - Flags: review?(apehrson) → review+
Comment on attachment 8923906 [details] Bug 1397793 - Move to APM - Part 1 - UpdateSingleSource. https://reviewboard.mozilla.org/r/195064/#review200384 Looks mostly good, but I want more details on the AGC before approving. ::: dom/media/webrtc/MediaEngineWebRTC.h:515 (Diff revision 1) > RefPtr<AudioOutputObserver> mAudioOutputObserver; > > // Note: shared across all microphone sources > static int sChannelsOpen; > > + UniquePtr<webrtc::AudioProcessing> mAudioProcessing; Add a comment on where this is written and read. Edit: looks like it's created by the constructor and destroyed by the destructor. Make it const? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:19 (Diff revision 1) > +#include "webrtc/voice_engine/utility.h" > +#include "webrtc/voice_engine/voice_engine_defines.h" Are these out of the question to get rid of? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:394 (Diff revision 1) > + mAudioProcessing->echo_cancellation()->Enable(false); > + mAudioProcessing->echo_control_mobile()->Enable(prefs.mAecOn); We should handle errors from these methods. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:401 (Diff revision 1) > - // Overhead of capturing all the time is very low (<0.1% of an audio only call) > - if (prefs.mAecOn) { > - error = mVoEProcessing->SetEcMetricsStatus(true); > - if (error) { > - LOG(("%s Error setting Echo Metrics: %d ",__FUNCTION__, error)); > + mAudioProcessing->echo_control_mobile()->Enable(false); > + mAudioProcessing->echo_cancellation()->Enable(prefs.mAecOn); > + mAudioProcessing->echo_cancellation()->set_suppression_level( > + ((webrtc::EcModes)prefs.mAec) == webrtc::EcModes::kEcConference ? > + webrtc::EchoCancellation::kHighSuppression : > + webrtc::EchoCancellation::kModerateSuppression); We should handle errors from these methods. 
::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:439 (Diff revision 1) > + case webrtc::NsModes::kNsVeryHighSuppression: > + nsLevel = webrtc::NoiseSuppression::Level::kVeryHigh; > + break; > - } > + } > + if (mAudioProcessing->noise_suppression()->set_level(nsLevel) != 0) { > + LOG(("Failed to set NS level %d", nsLevel)); These logs really need a level (error in this case). ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:469 (Diff revision 1) > - // we don't allow switching from non-fast-path to fast-path on the fly yet > - if (mState != kStarted) { > - mSkipProcessing = !(prefs.mAecOn || prefs.mAgcOn || prefs.mNoiseOn); > - if (mSkipProcessing) { > - mSampleFrequency = MediaEngine::USE_GRAPH_RATE; > - } else { > + if (mAudioProcessing->gain_control()->set_mode(agcMode) != 0) { > + LOG(("%s Error setting AGC Mode %d ",__FUNCTION__, agcMode)); > + } > + if (mAudioProcessing->gain_control()->Enable(prefs.mAgcOn) != 0) { > + LOG(("%s Error setting AGC enable",__FUNCTION__)); > + } voice engine handles this differently in that if `set_mode()` fails it doesn't attempt to `Enable()`. Any reason we shouldn't do the same, i.e., break AEC, NS, AGC out to separate methods with early returns? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:481 (Diff revision 1) > +#if 0 > + if (agcMode != webrtc::GainControl::kFixedDigital) { > + // XXX webrtc.org sets AGC state in the audio device code here, but > + // we don't use their input code.... > + } > +#endif So, `if (agcMode != kFixedDigital)` they tell their device code to calculate and report the volume level upwards. It looks to me like we set it to `kAgcDefault` by default, when agc is enabled, per [1]. Did this regress already when we turned on full duplex then? If so I think it's enough to file a followup to fix it soonish (and log and/or assert it, instead of the `#if 0`). Otherwise we need to avoid regressing it now. 
[1] http://searchfox.org/mozilla-central/rev/aa1343961fca52e8c7dab16788530da31aab7c8d/modules/libpref/init/all.js#533
Attachment #8923906 - Flags: review?(apehrson)
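The AEC configuration discussed above maps the pref's EcModes value onto an echo-cancellation suppression level via a ternary. A standalone sketch of that mapping, using minimal local mirrors of the webrtc enums (illustrative only; the real definitions live in webrtc.org headers):

```cpp
// Local stand-ins for the relevant webrtc enum values (illustrative).
enum class EcMode { kEcDefault, kEcConference, kEcAec, kEcAecm };
enum class SuppressionLevel { kLow, kModerate, kHigh };

// The patch's mapping: "conference" echo cancellation requests high
// suppression; every other mode gets moderate suppression.
inline SuppressionLevel SuppressionForEcMode(EcMode mode) {
  return mode == EcMode::kEcConference ? SuppressionLevel::kHigh
                                       : SuppressionLevel::kModerate;
}
```

Per the review, each of the corresponding APM setter calls (Enable(), set_suppression_level(), set_mode()) returns an error code that should be checked rather than ignored.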
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review200388 Looks good, mostly minor issues. A bit many to r+ now. The important ones are the XXX comments and the `mSources` nullptr. ::: dom/media/webrtc/AudioOutputObserver.h:22 (Diff revision 1) > namespace mozilla { > > typedef struct FarEndAudioChunk_ { > - uint16_t mSamples; > + size_t mSamples; > bool mOverrun; > - int16_t mData[1]; // variable-length > + float mData[1]; // variable-length Can we make this `AudioDataValue`? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:148 (Diff revision 1) > // The AnalyzeReverseStream() and WebRtcAec_BufferFarend() functions insist on 10ms > // samples per call. Annoying... > while (aFrames) { > if (!mSaved) { > mSaved = (FarEndAudioChunk *) moz_xmalloc(sizeof(FarEndAudioChunk) + > - (mChunkSize * channels - 1)*sizeof(int16_t)); > + (mChunkSize * channels - 1)*sizeof(float)); Can we make all these floats `AudioDataValue`? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:686 (Diff revision 1) > } > } > > + // Feed the far-end audio data (speakers) to the feedback input of the AEC. > + while (mAudioOutputObserver->Size() > 0) { > + FarEndAudioChunk *buffer = mAudioOutputObserver->Pop(); // only call if size() > 0 Put a comment here to free the buffer before returning, to avoid leaks. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:687 (Diff revision 1) > } > > + // Feed the far-end audio data (speakers) to the feedback input of the AEC. > + while (mAudioOutputObserver->Size() > 0) { > + FarEndAudioChunk *buffer = mAudioOutputObserver->Pop(); // only call if size() > 0 > + if (buffer) { I'd prefer a guard: ``` if (!buffer) { continue; } ``` ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:688 (Diff revision 1) > > + // Feed the far-end audio data (speakers) to the feedback input of the AEC. 
> + while (mAudioOutputObserver->Size() > 0) { > + FarEndAudioChunk *buffer = mAudioOutputObserver->Pop(); // only call if size() > 0 > + if (buffer) { > + float* packetDataPointer = buffer->mData; Can all the floats here be `AudioDataValue`? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:695 (Diff revision 1) > + float* interleavedFarend = nullptr; > + uint32_t channelCountFarend = 0; > + uint32_t framesPerPacketFarend = 0; > + > + // Downmix from aChannels to MAX_CHANNELS if needed > + if (mAudioOutputObserver->PlayoutChannels() > 2) { s/2/MAX_CHANNELS/ ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:696 (Diff revision 1) > + AudioConverter converter(AudioConfig(aChannels, > + 0, > + AudioConfig::FORMAT_FLT), > + AudioConfig(MAX_CHANNELS, > + 0, > + AudioConfig::FORMAT_FLT)); Indentation is a bit funky. Put the two AudioConfig constructors on a single line each? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:720 (Diff revision 1) > + (channelCountFarend == 1 || channelCountFarend == 2) && > + framesPerPacketFarend); indentation ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:724 (Diff revision 1) > + MOZ_ASSERT(interleavedFarend && > + (channelCountFarend == 1 || channelCountFarend == 2) && > + framesPerPacketFarend); > + > + offset = 0; > + for (uint32_t i = 0; i < deinterleavedPacketDataChannelPointers.Length(); i++) { s/uint32_t/size_t/ `++i` is a few cycles quicker. Well, at least for non-opt. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:750 (Diff revision 1) > + offset = 0; > + for (uint32_t i = 0; i < channelCountFarend; i++) { > + channelsPointers[i] = buffer->mData + offset; > + offset += framesPerPacketFarend; > + } Really a nit but we could s/offset/i * framesPerPacketFarend/ to save two lines per each of the three loops. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:751 (Diff revision 1) > + // frames. We don't use the output data here, but it's required to pass > + // valid buffers to the API. 
> + AutoTArray<float*, MAX_CHANNELS> channelsPointers; > + channelsPointers.SetLength(channelCountFarend); > + offset = 0; > + for (uint32_t i = 0; i < channelCountFarend; i++) { s/uint32_t/size_t/ s/channelCountFarend/channelsPointers.Length()/ ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:765 (Diff revision 1) > + int err = > + mAudioProcessing->ProcessReverseStream(channelsPointers.Elements(), > + inputConfig, > + outputConfig, > + processReverseStreamChannelPointers.Elements()); Put the first arg on a new line and put the method call on the same line as the `err` declaration. Or align all the args. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:771 (Diff revision 1) > + // XXX this is bad. Use a ring buffer with a linear buffer. > + free(buffer); File a bug to followup and note the bug number here. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:775 (Diff revision 1) > + MOZ_LOG(AudioLogModule(), LogLevel::Error, > + ("error in audio ProcessReverseStream(): %d", err)); Hmm, this log module is "AudioLatency". Any reasoning behind that name? Is it still relevant? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:800 (Diff revision 1) > - if (aChannels > MAX_CHANNELS) { > - AudioConverter converter(AudioConfig(aChannels, 0, AudioConfig::FORMAT_S16), > - AudioConfig(MAX_CHANNELS, 0, AudioConfig::FORMAT_S16)); > - converter.Process(mInputDownmixBuffer, packet, mPacketizer->PacketSize()); > - mVoERender->ExternalRecordingInsertData(mInputDownmixBuffer.Data(), > - mPacketizer->PacketSize() * MAX_CHANNELS, > + // Deinterleave the input data > + // Prepare an array pointing to deinterleaved channels. > + AutoTArray<float*, 8> deinterleavedPacketizedInputDataChannelPointers; > + deinterleavedPacketizedInputDataChannelPointers.SetLength(aChannels); > + offset = 0; > + for (uint32_t i = 0; i < deinterleavedPacketizedInputDataChannelPointers.Length(); i++) { s/uint32_t/size_t/ And `offset` could be optimized away per the above. 
::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:808 (Diff revision 1) > + StreamConfig inputConfig(aRate, > + aChannels, > + false /* we don't use typning detection*/); indentation ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:810 (Diff revision 1) > + Deinterleave(packet, mPacketizer->PacketSize(), aChannels, > + deinterleavedPacketizedInputDataChannelPointers.Elements()); > + > + StreamConfig inputConfig(aRate, > + aChannels, > + false /* we don't use typning detection*/); s/typning/typing/ ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:813 (Diff revision 1) > + // XXX TODO Get the right delay here, it saves some work down the line. > + mAudioProcessing->set_stream_delay_ms(0); Fix or file a bug to followup and note the bug number here. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:816 (Diff revision 1) > + // XXX find a way to not allocate here. > + RefPtr<SharedBuffer> buffer = > + SharedBuffer::Create(mPacketizer->PacketSize() * aChannels * sizeof(float)); > + nsAutoPtr<AudioSegment> segment(new AudioSegment()); Fix or file a bug to followup and note the bug number here. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:828 (Diff revision 1) > + for (uint32_t i = 0; i < processedOutputChannelPointers.Length(); i++) { > + processedOutputChannelPointers[i] = static_cast<float*>(buffer->Data()) + offset; > + processedOutputChannelPointersConst[i] = static_cast<float*>(buffer->Data()) + offset; > + offset += aFrames; > + } s/uint32_t/size_t/ and the same thing as before about optimizing away `offset`. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:842 (Diff revision 1) > + processedOutputChannelPointers.Elements()); > + MonitorAutoLock lock(mMonitor); > + if (mState != kStarted) > + return; > + > + size_t len = mSources.Length(); Inline `len`. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:844 (Diff revision 1) > + if (!mSources[i]) { // why ?! > + continue; > + } This doesn't make sense. Same as [1]. 
We need to figure out why (code looks sane, so some kind of race?) and fix. [1] http://searchfox.org/mozilla-central/rev/aa1343961fca52e8c7dab16788530da31aab7c8d/dom/media/webrtc/MediaEngineWebRTCAudio.cpp#678 ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:850 (Diff revision 1) > + continue; > + } > + > + // We already have planar audio data of the right format. Insert into the > + // MSG. > + MOZ_ASSERT(processedOutputChannelPointers.Length() == aChannels); Unnecessary check
Attachment #8923907 - Flags: review?(apehrson)
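Several review comments above concern the deinterleave step and the per-channel pointer setup (including the suggestion to compute `i * frames` instead of carrying an `offset` accumulator). A standalone sketch of that pattern, with one flat backing buffer playing the role of the SharedBuffer and plain vectors in place of AutoTArray (hypothetical names):

```cpp
#include <cstddef>
#include <vector>

// Sketch: interleaved input [L0 R0 L1 R1 ...] is copied into planar form,
// with per-channel pointers computed directly into one flat buffer as
// flat + c * frames (no running offset variable needed).
inline std::vector<float*> DeinterleaveIntoFlatBuffer(
    const float* interleaved, size_t frames, size_t channels,
    std::vector<float>* flat) {
  flat->resize(frames * channels);
  std::vector<float*> channelPtrs(channels);
  for (size_t c = 0; c < channels; ++c) {
    channelPtrs[c] = flat->data() + c * frames;
    for (size_t f = 0; f < frames; ++f) {
      channelPtrs[c][f] = interleaved[f * channels + c];
    }
  }
  return channelPtrs;
}
```

APM's float ProcessStream/ProcessReverseStream take exactly this kind of array of per-channel pointers, which is why the patch builds them for both the near-end and far-end paths.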
See bug 1406135 comment 36. Let's continue reviews and get this in early 59.
Comment on attachment 8923901 [details] Bug 1397793 - Use the MSG rate in MediaPipeline/PeerConnectionImpl. https://reviewboard.mozilla.org/r/195054/#review200478 ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:2034 (Diff revision 1) > + TrackRate rate = graph->GraphRate(); > + uint32_t samples_per_10ms = rate/100; Are there any assertions/requirements in the graphDrivers/MSG that the GraphRate%100 == 0? If not, we should add them (not here; we don't need to check it on every pull, just when a new driver is created probably)
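The invariant raised above (GraphRate % 100 == 0, so 10 ms packets come out to a whole number of samples) can be expressed as a small checked helper; a sketch with an illustrative name:

```cpp
#include <cassert>
#include <cstdint>

// APM consumes audio in 10 ms packets, so the graph rate must be divisible
// by 100 for rate / 100 to be an exact sample count. The assert mirrors the
// check the review suggests adding when a graph driver is created.
inline uint32_t SamplesPer10ms(uint32_t rateHz) {
  assert(rateHz % 100 == 0 && "graph rate must yield integral 10ms packets");
  return rateHz / 100;
}
```

Both common capture rates satisfy this: 48000 Hz gives 480 samples per packet and 44100 Hz gives 441.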
Comment on attachment 8923903 [details] Bug 1397793 - Revert Mozilla changes to OutputMixer https://reviewboard.mozilla.org/r/195058/#review200482 These are all just removing external API code we pulled forward into the 57 update. This just synchronizes these bits with upstream 57, removing our changes
Comment on attachment 8923904 [details] Bug 1397793 - Move MediaEngineDefault to use the MSG rate instead of something hard-coded. https://reviewboard.mozilla.org/r/195060/#review200498 ::: dom/media/webrtc/MediaEngineDefault.cpp:386 (Diff revision 1) > if (aConstraints.mDeviceId.IsString() && > aConstraints.mDeviceId.GetAsString().EqualsASCII("bad device")) { > return NS_ERROR_FAILURE; > } > > + mFreq = aPrefs.mFreq ? aPrefs.mFreq : 1000; since this is inited to 1000, would this be better as "if (aPrefs.mFreq) { mFreq = aPrefs.mFreq; }"? I don't have a problem retaining the ternary here, though. ::: dom/media/webrtc/MediaEngineDefault.cpp:420 (Diff revision 1) > + mSineGenerator = new SineWaveGenerator(mRate, mFreq); > + } > + > // AddTrack will take ownership of segment > AudioSegment* segment = new AudioSegment(); > - aStream->AddAudioTrack(aID, AUDIO_RATE, 0, segment, SourceMediaStream::ADDTRACK_QUEUED); > + aStream->AddAudioTrack(aID, mRate, 0, segment, SourceMediaStream::ADDTRACK_QUEUED); 48000 won't be GraphRate on a lot of systems I believe - should this be aGraph->GraphRate() (somehow)? (Perhaps would require restructuring the code)
Comment on attachment 8923904 [details] Bug 1397793 - Move MediaEngineDefault to use the MSG rate instead of something hard-coded. https://reviewboard.mozilla.org/r/195060/#review200498 > 48000 won't be GraphRate on a lot of systems I believe - should this be aGraph->GraphRate() (somehow)? (Perhaps would require restructuring the code) It gets set to GraphRate() on line 413.
Comment on attachment 8923905 [details] Bug 1397793 - Remove VoEExternalMedia usage in MediaEngineWebRTCAudio and MediaEngineWebRTC. https://reviewboard.mozilla.org/r/195062/#review200500 ::: dom/media/webrtc/MediaEngineWebRTC.h:144 (Diff revision 1) > > // Small subset of VoEHardware > class AudioInput > { > public: > - explicit AudioInput(webrtc::VoiceEngine* aVoiceEngine) : mVoiceEngine(aVoiceEngine) {}; > + explicit AudioInput() {} also (though not importantly) explicit is only needed when the function takes 1 arg. ::: dom/media/webrtc/MediaEngineWebRTC.cpp:283 (Diff revision 1) > if (!mAudioInput) { > - if (SupportsDuplex()) { > - // The platform_supports_full_duplex. > + MOZ_ASSERT(SupportsDuplex()); > + mAudioInput = new mozilla::AudioInputCubeb(); > - mAudioInput = new mozilla::AudioInputCubeb(mVoiceEngine); > - } else { > - mAudioInput = new mozilla::AudioInputWebRTC(mVoiceEngine); > - } > } It should fail creation in opt builds, probably with a warning/LOG(). right now it will succeed but fail to work. Something like "if (!SupportsDuplex) { return; }" ::: dom/media/webrtc/MediaEngineWebRTC.cpp:323 (Diff revision 1) > if (SupportsDuplex()) { > // The platform_supports_full_duplex. > Haven't we required this above? (At least if we add the early return).
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review200530 ::: commit-message-86e22:9 (Diff revision 1) > + > +First, we switch to floats everywhere. This allows to work with any rate, is > +more flexible with channel layout, and is a stable API (see audio_processing.h > +in webrtc.org). > + > +Then, 10ms worth of audio (already at the graph rate) are poped from the nit: "popped" ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:686 (Diff revision 1) > } > } > > + // Feed the far-end audio data (speakers) to the feedback input of the AEC. > + while (mAudioOutputObserver->Size() > 0) { > + FarEndAudioChunk *buffer = mAudioOutputObserver->Pop(); // only call if size() > 0 How about: "UniquePtr<FarEndAudioChunk> buffer(mAudioOutputObserver->Pop());"? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:775 (Diff revision 1) > + MOZ_LOG(AudioLogModule(), LogLevel::Error, > + ("error in audio ProcessReverseStream(): %d", err)); AudioLatency is specifically just for log messages that could be processed to track the timing of data through the various paths. Not for general/error messages.
Comment on attachment 8923906 [details] Bug 1397793 - Move to APM - Part 1 - UpdateSingleSource. https://reviewboard.mozilla.org/r/195064/#review200820 ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:361 (Diff revision 1) > auto& source = mSources.LastElement(); > mAudioInput->SetUserChannelCount(prefs.mChannels); > // Get validated number of channel > uint32_t channelCount = 0; > mAudioInput->GetChannelCount(channelCount); > - MOZ_ASSERT(channelCount > 0 && mLastPrefs.mChannels > 0); > + if (mLastPrefs.mChannels != prefs.mChannels && Why remove the MOZ_ASSERT? Is non-zero channelCount no longer an invariant in kStarted? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:363 (Diff revision 1) > mAudioInput->GetChannelCount(channelCount); > - MOZ_ASSERT(channelCount > 0 && mLastPrefs.mChannels > 0); > + if (mLastPrefs.mChannels != prefs.mChannels && > - // Check if new validated channels is the same as previous > - if (static_cast<uint32_t>(mLastPrefs.mChannels) != channelCount && > !source->OpenNewAudioCallbackDriver(mListener)) { > return NS_ERROR_FAILURE; We should log a message here at least. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:415 (Diff revision 1) > + webrtc::NoiseSuppression::Level nsLevel = webrtc::kDefaultNsMode; > + switch ((webrtc::NsModes) prefs.mNoise) { > + case webrtc::NsModes::kNsDefault: > + break; // already set nit: might be a style thing, but isn't using default: in the switch statement more self-documenting? Two places. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:469 (Diff revision 1) > + case webrtc::AgcModes::kAgcAdaptiveDigital: > + agcMode = webrtc::GainControl::Mode::kAdaptiveDigital; > + break; > - } > + } > > - // we don't allow switching from non-fast-path to fast-path on the fly yet > + if (mAudioProcessing->gain_control()->set_mode(agcMode) != 0) { Is this safe now? I still see PassThrough() in the code which makes me nervous. We need to test this. 
::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:797 (Diff revision 1) > #define ResetProcessingIfNeeded(_processing) \ > do { \ > - webrtc::_processing##Modes mode; \ > + bool enabled = mAudioProcessing->_processing()->is_enabled(); \ Nice! This macro can finally be a function.
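Since every APM stage now exposes the same is_enabled()/Enable() pair, the macro could indeed become a small template. The sketch below is illustrative only: the webrtc.org stage types don't share a base class (which is why the macro, or a template, is needed), and StubStage is a made-up stand-in, not a real Gecko or webrtc.org type.

```cpp
#include <cassert>

// Hypothetical function replacement for the ResetProcessingIfNeeded macro.
// Works for any stage type exposing is_enabled()/Enable(), e.g. the objects
// returned by mAudioProcessing->noise_suppression() or gain_control().
// Toggling the stage off and on makes it drop its internal state, which is
// what the macro achieved.
template <typename Stage>
void ResetProcessingIfNeeded(Stage* aStage)
{
  if (aStage->is_enabled()) {
    aStage->Enable(false);
    aStage->Enable(true);
  }
}

// Minimal stub stage, for illustration only.
struct StubStage {
  bool mEnabled = true;
  int mEnableCalls = 0;
  bool is_enabled() const { return mEnabled; }
  int Enable(bool aEnable) { mEnabled = aEnable; ++mEnableCalls; return 0; }
};
```

A call like `ResetProcessingIfNeeded(mAudioProcessing->gain_control())` would then replace each macro invocation, assuming the real types compile against this shape.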
Flags: needinfo?(jib)
(In reply to Dan Minor [:dminor] from comment #33) > I'm assuming that the BSDs are now fine with a fake audio device, as we > seemed to hit problems the last time we tried something like this. I'm in the process of landing the new duplex backend they wrote for cubeb. > ::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp:712 > (Diff revision 1) > > { > > return kMediaConduitPlayoutError; > > } > > return kMediaConduitUnknownError; > > } > > + // XXX Annoying, have to copy to our buffers -- refactor? > > Please file a bug for this, we only have one caller for > WebrtcAudioConduit::GetAudioFrame() and I think the refactor to avoid the > copy will be straightforward. I understand not wanting to do it as part of > this patch set. bug 1414770
Blocks: 1414770
(In reply to Andreas Pehrson [:pehrsons] from comment #37) > ::: dom/media/webrtc/MediaEngineWebRTC.cpp:283 > (Diff revision 1) > > if (!mAudioInput) { > > - if (SupportsDuplex()) { > > - // The platform_supports_full_duplex. > > + MOZ_ASSERT(SupportsDuplex()); > > + mAudioInput = new mozilla::AudioInputCubeb(); > > - mAudioInput = new mozilla::AudioInputCubeb(mVoiceEngine); > > - } else { > > - mAudioInput = new mozilla::AudioInputWebRTC(mVoiceEngine); > > - } > > } > > Will this be graceful in release if `SupportsDuplex()` is false? No it was going to explode horribly. I simply made it return. > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:760 > (Diff revision 1) > > } > > > > bool > > MediaEngineWebRTCMicrophoneSource::AllocChannel() > > { > > - MOZ_ASSERT(mVoEBase); > > + mSampleFrequency = MediaEngine::USE_GRAPH_RATE; > > Is `mSampleFrequency` needed at all now? > How far is it propagated -- can we hit problems if an input device doesn't > support the graph rate? This cannot happen. cubeb resamples for you and provides you with a stream at the correct rate.
(In reply to Andreas Pehrson [:pehrsons] from comment #38) > > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:19 > (Diff revision 1) > > +#include "webrtc/voice_engine/utility.h" > > +#include "webrtc/voice_engine/voice_engine_defines.h" > > Are these out of the question to get rid of? I removed one of them, the other one is needed. > voice engine handles this differently in that if `set_mode()` fails it > doesn't attempt to `Enable()`. > > Any reason we shouldn't do the same, i.e., break AEC, NS, AGC out to > separate methods with early returns? I don't think it's harmful not to do it, but we could? > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:481 > (Diff revision 1) > > +#if 0 > > + if (agcMode != webrtc::GainControl::kFixedDigital) { > > + // XXX webrtc.org sets AGC state in the audio device code here, but > > + // we don't use their input code.... > > + } > > +#endif > > So, `if (agcMode != kFixedDigital)` they tell their device code to calculate > and report the volume level upwards. > > It looks to me like we set it to `kAgcDefault` by default, when agc is > enabled, per [1]. > > Did this regress already when we turned on full duplex then? If so I think > it's enough to file a followup to fix it soonish (and log and/or assert it, > instead of the `#if 0`). Otherwise we need to avoid regressing it now. `kAgcDefault` is `kAdaptiveDigital` on mobile, `kAdaptiveAnalog` on desktop. This has probably regressed when doing this in the first place. I'm switching it to be `kAdaptiveDigital`, unconditionally, we'll see. This means that we'll run a digital compression stage on the mic. This is trivial to change if we want.
(In reply to Paul Adenot (:padenot) from comment #50) > (In reply to Andreas Pehrson [:pehrsons] from comment #38) > > > voice engine handles this differently in that if `set_mode()` fails it > > doesn't attempt to `Enable()`. > > > > Any reason we shouldn't do the same, i.e., break AEC, NS, AGC out to > > separate methods with early returns? > > I don't think it's harmful to not do it, but we could ? Yes please. > > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:481 > > (Diff revision 1) > > > +#if 0 > > > + if (agcMode != webrtc::GainControl::kFixedDigital) { > > > + // XXX webrtc.org sets AGC state in the audio device code here, but > > > + // we don't use their input code.... > > > + } > > > +#endif > > > > So, `if (agcMode != kFixedDigital)` they tell their device code to calculate > > and report the volume level upwards. > > > > It looks to me like we set it to `kAgcDefault` by default, when agc is > > enabled, per [1]. > > > > Did this regress already when we turned on full duplex then? If so I think > > it's enough to file a followup to fix it soonish (and log and/or assert it, > > instead of the `#if 0`). Otherwise we need to avoid regressing it now. > > `kAgcDefault` is `kAdaptiveDigital` on mobile, `kAdaptiveAnalog` on desktop. > This has probably regressed when doing this in the first place. I'm > switching it to be `kAdaptiveDigital`, unconditionnaly, we'll see. This > means that we'll run a digital compression stage on the mic. This is trivial > to change if we want. How should we handle this case then since we'll always be != kFixedDigital? Do we have to make cubeb calculate the volume level and report it upwards like they did? That would be a bit of work.
(In reply to Andreas Pehrson [:pehrsons] from comment #39) > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:148 > (Diff revision 1) > > // The AnalyzeReverseStream() and WebRtcAec_BufferFarend() functions insist on 10ms > > // samples per call. Annoying... > > while (aFrames) { > > if (!mSaved) { > > mSaved = (FarEndAudioChunk *) moz_xmalloc(sizeof(FarEndAudioChunk) + > > - (mChunkSize * channels - 1)*sizeof(int16_t)); > > + (mChunkSize * channels - 1)*sizeof(float)); > > Can we make all these floats `AudioDataValue`? Yeah, some of those we can, some we can't. Basically, we receive audio in `AudioDataValue`, and convert it to float. The AEC works on floating-point values. > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:686 > (Diff revision 1) > > } > > } > > > > + // Feed the far-end audio data (speakers) to the feedback input of the AEC. > > + while (mAudioOutputObserver->Size() > 0) { > > + FarEndAudioChunk *buffer = mAudioOutputObserver->Pop(); // only call if size() > 0 > > Put a comment here to free the buffer before returning, to avoid leaks. nsAutoPtr is better than comments.
(In reply to Paul Adenot (:padenot) from comment #52) > nsAutoPtr is better than comments. Agree. Even better if you could make Pop() return something other than a raw pointer. And give it a comment! (Currently Pop() doesn't say anything, but it would have been good to know that the caller takes ownership of that object)
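A standalone sketch of what a Pop() that encodes the ownership transfer in its signature could look like, assuming a simple FIFO. `FarEndQueue` and the chunk layout here are illustrative stand-ins, not the actual AudioOutputObserver API.

```cpp
#include <cassert>
#include <memory>
#include <queue>

// Stand-in for the real far-end chunk; payload omitted for brevity.
struct FarEndAudioChunk {
  size_t mSamples = 0;
};

class FarEndQueue {
public:
  void Push(std::unique_ptr<FarEndAudioChunk> aChunk) {
    mChunks.push(std::move(aChunk));
  }
  // Returns nullptr when empty; otherwise the caller owns the chunk, and it
  // is freed automatically when the unique_ptr goes out of scope.
  std::unique_ptr<FarEndAudioChunk> Pop() {
    if (mChunks.empty()) {
      return nullptr;
    }
    std::unique_ptr<FarEndAudioChunk> front = std::move(mChunks.front());
    mChunks.pop();
    return front;
  }
  size_t Size() const { return mChunks.size(); }

private:
  std::queue<std::unique_ptr<FarEndAudioChunk>> mChunks;
};
```

With this shape, `while (auto buffer = queue.Pop()) { ... }` drains the queue with no manual free and no "only call if Size() > 0" precondition to document.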
(In reply to Jan-Ivar Bruaroey [:jib] (needinfo? me) from comment #47) > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:469 > (Diff revision 1) > > + case webrtc::AgcModes::kAgcAdaptiveDigital: > > + agcMode = webrtc::GainControl::Mode::kAdaptiveDigital; > > + break; > > - } > > + } > > > > - // we don't allow switching from non-fast-path to fast-path on the fly yet > > + if (mAudioProcessing->gain_control()->set_mode(agcMode) != 0) { > Is this safe now? I still see PassThrough() in the code which makes me > nervous. > > We need to test this. It's safe. All WebRTC.org functions take one or two locks (depending on the function) internally. I made this an atomic_bool, and lengthened the lifetime of the `mAudioOutputObserver`.
Attachment #8923903 - Flags: review?(dminor)
Attachment #8923906 - Flags: review?(apehrson)
Attachment #8923907 - Flags: review?(apehrson)
Blocks: 1372271
Comment on attachment 8925906 [details] Bug 1397793 - Refactor the code that sets the processing modes. https://reviewboard.mozilla.org/r/197118/#review202252 C/C++ static analysis found 2 defects in this patch. You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp` If you see a problem in this automated review, please report it here: http://bit.ly/2y9N9Vx ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:291 (Diff revision 1) > + switch(aMode) { > + case EcModes::kEcUnchanged: > + level = mAudioProcessing->echo_cancellation()->suppression_level(); > + break; > + case EcModes::kEcConference: > + level = EchoCancellation::kHighSuppression; Warning: Value stored to 'level' is never read [clang-tidy: clang-analyzer-deadcode.DeadStores] ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:293 (Diff revision 1) > + level = mAudioProcessing->echo_cancellation()->suppression_level(); > + break; > + case EcModes::kEcConference: > + level = EchoCancellation::kHighSuppression; > + case EcModes::kEcDefault: > + level = EchoCancellation::kModerateSuppression; Warning: Value stored to 'level' is never read [clang-tidy: clang-analyzer-deadcode.DeadStores]
Comment on attachment 8925905 [details] Bug 1397793 - Allow switching processing on/off dynamically. https://reviewboard.mozilla.org/r/197116/#review202234 Cleared r? pending an answer to that race condition issue. ::: dom/media/webrtc/MediaEngineWebRTC.h:546 (Diff revision 1) > int32_t mPlayoutDelay; > > // mSkipProcessing is true if none of the processing passes are enabled, > // because of prefs or constraints. This allows simply copying the audio into > // the MSG, skipping resampling and the whole webrtc.org code. > - bool mSkipProcessing; > + std::atomic_bool mSkipProcessing; `Atomic<bool>` ? Or if we can do this with message passing instead, just `bool`. See below. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:373 (Diff revision 1) > if (rv != AudioProcessing::kNoError) { \ > MOZ_LOG(GetMediaManagerLog(), LogLevel::Error, (#fn)); \ > } \ > } while(0); > > mSkipProcessing = !(prefs.mAecOn || prefs.mAgcOn || prefs.mNoiseOn); If this changes from `false` to `true` on the media thread right in between `NotifyOutputData` and `NotifyInputData` of the same data callback; won't we leave some data in `mAudioOutputObserver`? What are the implications? This handover seems a bit crude, even if we're planning to get rid of it with `mAudioOutputObserver`. Can we do the handover by a message to the audio thread instead? SetLastPrefs updates the prefs on the main thread, which seems like a prime time to send a ControlMessage to MSG. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:533 (Diff revision 1) > MOZ_ASSERT(mSources.Length() == mPrincipalHandles.Length()); > } > > AudioSegment* segment = new AudioSegment(); > if (mSampleFrequency == MediaEngine::USE_GRAPH_RATE) { > mSampleFrequency = aStream->GraphRate(); We can almost get rid of mSampleFrequency now it seems. At least USE_GRAPH_RATE seems superfluous.
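The message-based handover suggested here could be sketched as follows. This is a standalone illustration under stated assumptions: `FakeGraph`, `MicrophoneSource`, and `SetPassThrough` are made-up names, and the function queue stands in for MSG's ControlMessage machinery.

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Instead of an atomic flag flipped from the media thread, post a message
// that the graph thread runs at a safe point, never mid-callback. That way
// mSkipProcessing is only ever touched on one thread, and cannot change
// between NotifyOutputData and NotifyInputData of the same data callback.
class FakeGraph {
public:
  // Called from the main thread (e.g. from SetLastPrefs).
  void AppendMessage(std::function<void()> aMessage) {
    mMessages.push(std::move(aMessage));
  }
  // Run by the graph thread between iterations.
  void ProcessMessages() {
    while (!mMessages.empty()) {
      mMessages.front()();
      mMessages.pop();
    }
  }

private:
  std::queue<std::function<void()>> mMessages;
};

struct MicrophoneSource {
  bool mSkipProcessing = false; // graph thread only; a plain bool suffices
};

void SetPassThrough(FakeGraph& aGraph, MicrophoneSource& aSource,
                    bool aPassThrough)
{
  aGraph.AppendMessage(
    [&aSource, aPassThrough] { aSource.mSkipProcessing = aPassThrough; });
}
```

The change only takes effect once the graph thread processes the message, so no callback ever observes a half-updated state.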
Attachment #8925905 - Flags: review?(apehrson)
Comment on attachment 8923903 [details] Bug 1397793 - Revert Mozilla changes to OutputMixer https://reviewboard.mozilla.org/r/195058/#review202898 Please change the checkin comments for this patch before landing... Something like "remove Mozilla output_mixer changes"
Attachment #8923903 - Flags: review?(rjesup)
Comment on attachment 8923906 [details] Bug 1397793 - Move to APM - Part 1 - UpdateSingleSource. https://reviewboard.mozilla.org/r/195064/#review203240 Good! Still some issues unresolved though. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:199 (Diff revision 3) > const char* uuid, > bool aDelayAgnostic, > bool aExtendedFilter) > : MediaEngineAudioSource(kReleased) > , mAudioInput(aAudioInput) > + , mAudioProcessing(webrtc::AudioProcessing::Create()) nit: `webrtc::` shouldn't be necessary now, with the `using namespace webrtc;` above. (and a bunch more places) ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:384 (Diff revision 3) > + if (rv != AudioProcessing::kNoError) { \ > + MOZ_LOG(AudioLogModule(), LogLevel::Error, (#fn)); \ > + } \ Could there be cases when a `MOZ_ASSERT_UNREACHABLE()` is out of the question here? It seems useful to catch these on try. Error logs may help track them down but are not going to find them for us. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:475 (Diff revision 3) > + HANDLE_APM_ERROR(mAudioProcessing->gain_control()->set_mode(agcMode)); > + HANDLE_APM_ERROR(mAudioProcessing->gain_control()->Enable(prefs.mAgcOn)); I would still like an error in `set_mode()` to not result in `Enable()`. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:796 (Diff revision 3) > } > } > > #define ResetProcessingIfNeeded(_processing) \ > do { \ > - webrtc::_processing##Modes mode; \ > + bool enabled = mAudioProcessing->_processing()->is_enabled(); \ Looks like this can just be inlined now. ::: modules/libpref/init/all.js:534 (Diff revision 3) > #endif > pref("media.getusermedia.aec_extended_filter", true); > pref("media.getusermedia.aec_delay_agnostic", true); > pref("media.getusermedia.noise", 1); > pref("media.getusermedia.agc_enabled", false); > -pref("media.getusermedia.agc", 1); > +pref("media.getusermedia.agc", 3); // digital gain control nit: could we also say kAgcAdaptiveDigital to be clearer about which internal mode we intended?
Attachment #8923906 - Flags: review?(apehrson)
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review200388 > File a bug to followup and note the bug number here. We still want a ring buffer right? Since this was now removed, please file and put a comment where appropriate.
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review203276 Better, but still some issues unaddressed.
Attachment #8923907 - Flags: review?(apehrson)
Comment on attachment 8926367 [details] Bug 1397793 - Use a ControlMessage to switch between passthrough and processing mode for microphone input. https://reviewboard.mozilla.org/r/197654/#review203278 C/C++ static analysis found 1 defect in this patch. You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp` ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:521 (Diff revision 1) > + : ControlMessage(nullptr) > + , mMicrophoneSource(aSource) > + , mPassThrough(aPassThrough) > + {} > + > + virtual void Run() Warning: Prefer using 'override' or (rarely) 'final' instead of 'virtual' [clang-tidy: modernize-use-override] virtual void Run() ^ override
Comment on attachment 8925906 [details] Bug 1397793 - Refactor the code that sets the processing modes. https://reviewboard.mozilla.org/r/197118/#review203280 LGTM with nits fixed. But I have to say it's easier to review when review fixes are amended to the original patch. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:282 (Diff revision 2) > + } \ > +} while(0); > + > +void MediaEngineWebRTCMicrophoneSource::UpdateAECSettingsIfNeeded(bool aEnable, EcModes aMode) > +{ > + using webrtc::EcModes; is the `using namespace webrtc;` above enough? ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:303 (Diff revision 2) > + break; > + case EcModes::kEcAecm: > + // No suppression level to set for the mobile echo canceller > + break; > + default: > + MOZ_LOG(GetMediaManagerLog(), LogLevel::Error, ("Bad EcMode value")); Add a `MOZ_ASSERT_UNREACHABLE()` here ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:324 (Diff revision 2) > + if (aMode == kAgcAdaptiveAnalog) { > + MOZ_LOG(GetMediaManagerLog(), > + LogLevel::Error, > + ("Invalid AGC mode kAgcAdaptiveAnalog on mobile")); > + aMode = kAgcDefault; > + } Add an assert here ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:333 (Diff revision 2) > + switch (aMode) { > + case AgcModes::kAgcDefault: > + mode = kDefaultAgcMode; > + break; > + case AgcModes::kAgcUnchanged: > + mode = mAudioProcessing->gain_control()->mode(); > + break; > + case AgcModes::kAgcFixedDigital: > + mode = GainControl::Mode::kFixedDigital; > + break; > + case AgcModes::kAgcAdaptiveAnalog: > + mode = GainControl::Mode::kAdaptiveAnalog; > + break; > + case AgcModes::kAgcAdaptiveDigital: > + mode = GainControl::Mode::kAdaptiveDigital; > + break; > + } Add a default case as well. 
::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:360 (Diff revision 2) > + switch (aMode) { > + case NsModes::kNsDefault: > + nsLevel = kDefaultNsMode; > + break; > + case NsModes::kNsUnchanged: > + nsLevel = mAudioProcessing->noise_suppression()->level(); > + break; > + case NsModes::kNsConference: > + nsLevel = NoiseSuppression::kHigh; > + break; > + case NsModes::kNsLowSuppression: > + nsLevel = NoiseSuppression::kLow; > + break; > + case NsModes::kNsModerateSuppression: > + nsLevel = NoiseSuppression::kModerate; > + break; > + case NsModes::kNsHighSuppression: > + nsLevel = NoiseSuppression::kHigh; > + break; > + case NsModes::kNsVeryHighSuppression: > + nsLevel = NoiseSuppression::kVeryHigh; > + break; > + } Add a default case.
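A standalone sketch of the pattern these review comments ask for: an exhaustive switch over a prefs enum, with a default case that loudly flags unexpected values instead of leaving the level at whatever it happened to be. The enum names mirror the webrtc.org ones but are local to this example.

```cpp
#include <cassert>

enum class NsMode {
  kDefault, kUnchanged, kConference, kLow, kModerate, kHigh, kVeryHigh
};
enum class NsLevel { kLow, kModerate, kHigh, kVeryHigh };

constexpr NsLevel kDefaultNsLevel = NsLevel::kModerate;

// Map a prefs-level mode to a suppression level. aCurrent is the level the
// APM is already configured with, used for kUnchanged.
NsLevel MapNsMode(NsMode aMode, NsLevel aCurrent)
{
  switch (aMode) {
    case NsMode::kDefault:
      return kDefaultNsLevel;
    case NsMode::kUnchanged:
      return aCurrent;
    case NsMode::kConference:
    case NsMode::kHigh:
      return NsLevel::kHigh;
    case NsMode::kLow:
      return NsLevel::kLow;
    case NsMode::kModerate:
      return NsLevel::kModerate;
    case NsMode::kVeryHigh:
      return NsLevel::kVeryHigh;
    default:
      // In Gecko this would be MOZ_ASSERT_UNREACHABLE("Bad NsMode value"),
      // with a sane fallback in release builds.
      assert(false && "Bad NsMode value");
      return kDefaultNsLevel;
  }
}
```

The default case both catches bad values on debug builds and guarantees the variable is always initialized, which is exactly what the dead-store warnings above were hinting at.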
Attachment #8925906 - Flags: review?(apehrson) → review+
Comment on attachment 8926367 [details] Bug 1397793 - Use a ControlMessage to switch between passthrough and processing mode for microphone input. https://reviewboard.mozilla.org/r/197654/#review203290 LGTM ::: commit-message-60f4b:1 (Diff revision 1) > +Bug 1397793 - Use a ControlMessage to switch between passthrough and processing mode for microphone input. r?pehsrons s/pehsrons/pehrsons/ ! ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:521 (Diff revision 1) > + : ControlMessage(nullptr) > + , mMicrophoneSource(aSource) > + , mPassThrough(aPassThrough) > + {} > + > + virtual void Run() I agree with clangbot. `void Run() override`. ::: dom/media/webrtc/moz.build:42 (Diff revision 1) > # MediaEngineWebRTC.cpp needs to be built separately. > SOURCES += [ > 'MediaEngineWebRTC.cpp', > ] > LOCAL_INCLUDES += [ > + '..', I think this should be explicit `'/dom/media',`
Attachment #8926367 - Flags: review+
Comment on attachment 8923906 [details] Bug 1397793 - Move to APM - Part 1 - UpdateSingleSource. https://reviewboard.mozilla.org/r/195064/#review203292 With the followup patches this is r+ with nits fixed.
Attachment #8923906 - Flags: review+
Comment on attachment 8926368 [details] Bug 1397793 - Remove mRate from MediaEngineDefault and use GraphRate(). https://reviewboard.mozilla.org/r/197656/#review203294 I'm ok with the assertion changes but I don't think they belong here. ::: dom/media/StreamTracks.h:19 (Diff revision 1) > - NS_ASSERTION(0 < aOutRate && aOutRate <= TRACK_RATE_MAX, "Bad out rate"); > - NS_ASSERTION(0 < aInRate && aInRate <= TRACK_RATE_MAX, "Bad in rate"); > - NS_ASSERTION(0 <= aTicks && aTicks <= TRACK_TICKS_MAX, "Bad ticks"); > + MOZ_ASSERT(0 < aOutRate && aOutRate <= TRACK_RATE_MAX, "Bad out rate"); > + MOZ_ASSERT(0 < aInRate && aInRate <= TRACK_RATE_MAX, "Bad in rate"); > + MOZ_ASSERT(0 <= aTicks && aTicks <= TRACK_TICKS_MAX, "Bad ticks"); > return (aTicks * aOutRate) / aInRate; > } > inline TrackTicks RateConvertTicksRoundUp(TrackRate aOutRate, > TrackRate aInRate, TrackTicks aTicks) > { > - NS_ASSERTION(0 < aOutRate && aOutRate <= TRACK_RATE_MAX, "Bad out rate"); > - NS_ASSERTION(0 < aInRate && aInRate <= TRACK_RATE_MAX, "Bad in rate"); > - NS_ASSERTION(0 <= aTicks && aTicks <= TRACK_TICKS_MAX, "Bad ticks"); > + MOZ_ASSERT(0 < aOutRate && aOutRate <= TRACK_RATE_MAX, "Bad out rate"); > + MOZ_ASSERT(0 < aInRate && aInRate <= TRACK_RATE_MAX, "Bad in rate"); > + MOZ_ASSERT(0 <= aTicks && aTicks <= TRACK_TICKS_MAX, "Bad ticks"); This deserves its own patch IMO.
Attachment #8926368 - Flags: review?(apehrson)
Comment on attachment 8925905 [details] Bug 1397793 - Allow switching processing on/off dynamically. https://reviewboard.mozilla.org/r/197116/#review203298 This looks good now that the atomic and the racing is gone in the followup.
Attachment #8925905 - Flags: review?(apehrson) → review+
(In reply to Andreas Pehrson [:pehrsons] from comment #79) > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:199 > (Diff revision 3) > > const char* uuid, > > bool aDelayAgnostic, > > bool aExtendedFilter) > > : MediaEngineAudioSource(kReleased) > > , mAudioInput(aAudioInput) > > + , mAudioProcessing(webrtc::AudioProcessing::Create()) > > nit: `webrtc::` shouldn't be necessary now, with the `using namespace > webrtc;` above. (and a bunch more places) I've removed some, but some need to be kept, because C++ is what it is. > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:384 > (Diff revision 3) > > + if (rv != AudioProcessing::kNoError) { \ > > + MOZ_LOG(AudioLogModule(), LogLevel::Error, (#fn)); \ > > + } \ > > Could there be cases when a `MOZ_ASSERT_UNREACHABLE()` is out of the > question here? > > It seems useful to catch these on try. Error logs may help track them down > but are not going to find them for us. Agreed, I've done that. > ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:796 > (Diff revision 3) > > } > > } > > > > #define ResetProcessingIfNeeded(_processing) \ > > do { \ > > - webrtc::_processing##Modes mode; \ > > + bool enabled = mAudioProcessing->_processing()->is_enabled(); \ > > Looks like this can just be inlined now. I prefer the macro: this code all looks the same, and I don't want to make copy-paste errors or whatnot. > ::: modules/libpref/init/all.js:534 > (Diff revision 3) > > #endif > > pref("media.getusermedia.aec_extended_filter", true); > > pref("media.getusermedia.aec_delay_agnostic", true); > > pref("media.getusermedia.noise", 1); > > pref("media.getusermedia.agc_enabled", false); > > -pref("media.getusermedia.agc", 1); > > +pref("media.getusermedia.agc", 3); // digital gain control > > nit: could we also say kAgcAdaptiveDigital to be clearer about which > internal mode we intended? Good idea.
Blocks: 1418367
Attachment #8923903 - Flags: review?(rjesup) → review+
Comment on attachment 8926367 [details] Bug 1397793 - Use a ControlMessage to switch between passthrough and processing mode for microphone input. https://reviewboard.mozilla.org/r/197654/#review207054 C/C++ static analysis found 1 defect in this patch. You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp` ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:535 (Diff revision 2) > + : ControlMessage(nullptr) > + , mMicrophoneSource(aSource) > + , mPassThrough(aPassThrough) > + {} > + > + virtual void Run() Warning: Prefer using 'override' or (rarely) 'final' instead of 'virtual' [clang-tidy: modernize-use-override] virtual void Run() ^ override
Attachment #8923903 - Flags: review+
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review207300 r- for that leak. And please fix existing comments or comment on why they shouldn't be fixed. ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:779 (Diff revisions 3 - 4) > StreamConfig outputConfig = inputConfig; > > - // Prepare two sets of channel pointers array, with enough storage for the > - // frames. We don't use the output data here, but it's required to pass > - // valid buffers to the API. > + // Prepare a channel pointers array, with enough storage for the > + // frames. > + // > + // If this is a platform that uses s16 for audio input and output, use, s/ use,// ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:791 (Diff revisions 3 - 4) > +#else // MOZ_SAMPLE_TYPE_F32 > + inputData = buffer->mData; nit: Could this be: ``` #elif defined MOZ_SAMPLE_TYPE_F32 inputData = buffer->mData; #else #error ``` ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:854 (Diff revisions 3 - 4) > mAudioProcessing->set_stream_delay_ms(0); > > // Bug 1414837: find a way to not allocate here. > RefPtr<SharedBuffer> buffer = > SharedBuffer::Create(mPacketizer->PacketSize() * aChannels * sizeof(float)); > - nsAutoPtr<AudioSegment> segment(new AudioSegment()); > + UniquePtr<AudioSegment> segment(new AudioSegment()); nit: `MakeUnique<AudioSegment>()` ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:889 (Diff revisions 3 - 4) > MOZ_ASSERT(processedOutputChannelPointers.Length() == aChannels); > segment->AppendFrames(buffer.forget(), > processedOutputChannelPointersConst, > mPacketizer->PacketSize(), > mPrincipalHandles[i]); > - mSources[i]->AppendToTrack(mTrackID, segment); > + mSources[i]->AppendToTrack(mTrackID, segment.release()); This will leak! Ownership of the segment remains with the caller. Any reason not to have it on the stack?
Attachment #8923907 - Flags: review?(apehrson) → review-
Comment on attachment 8930629 [details] Bug 1397793 - Make the assertions in the rate conversion functions in StreamTracks.h be fatal. https://reviewboard.mozilla.org/r/201760/#review207304
Attachment #8930629 - Flags: review?(apehrson) → review+
Comment on attachment 8926368 [details] Bug 1397793 - Remove mRate from MediaEngineDefault and use GraphRate(). https://reviewboard.mozilla.org/r/197656/#review207306
Attachment #8926368 - Flags: review?(apehrson) → review+
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review207300 > nit: Could this be: > ``` > #elif defined MOZ_SAMPLE_TYPE_F32 > inputData = buffer->mData; > #else > #error > ``` No, this is a build system concern: https://searchfox.org/mozilla-central/source/old-configure.in#2600 > nit: `MakeUnique<AudioSegment>()` This is now on the stack, as per your other comment. > This will leak! > > Ownership of the segment remains with the caller. > Any reason not to have it on the stack? This is now on the stack, you're right. I was confused by the fact that AddTrack takes ownership, but AppendToTrack does not.
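To make the ownership confusion above concrete, here is an illustrative sketch of the two conventions: an AddTrack-style sink that takes ownership of a heap-allocated segment, versus an AppendToTrack-style sink that only consumes the contents, so the segment itself can live on the caller's stack. The types are stand-ins, not the MSG API.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Segment {
  std::vector<float> mFrames;
};

struct Track {
  std::unique_ptr<Segment> mOwned; // filled by AddTrack
  std::vector<float> mAppended;    // filled by AppendToTrack

  // AddTrack-style: the sink takes ownership of the segment.
  void AddTrack(std::unique_ptr<Segment> aSegment) {
    mOwned = std::move(aSegment);
  }

  // AppendToTrack-style: the caller keeps ownership; only the contents are
  // moved out, so passing a stack-allocated segment cannot leak.
  void AppendToTrack(Segment* aSegment) {
    mAppended.insert(mAppended.end(),
                     aSegment->mFrames.begin(), aSegment->mFrames.end());
    aSegment->mFrames.clear(); // consumed, but not freed
  }
};
```

With the second convention, `Segment segment; ...; track.AppendToTrack(&segment);` is the natural shape, which is why the review pushed the segment onto the stack instead of releasing a UniquePtr into a callee that never frees it.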
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review200388 > Put a comment here to free the buffer before returning, to avoid leaks. This is now a UniquePtr, and there is a comment about fixing this to not allocate/free. > Really a nit but we could s/offset/i * framesPerPacketFarend/ to save two lines per each of the three loops. This is inefficient and not how planar buffers are usually prepared. > We still want a ring buffer right? Since this was now removed, please file and put a comment where appropriate. This is 1414837, there is a comment. > Hmm, this log module is "AudioLatency". Any reasoning behind that name? Is it still relevant? No, it's an error; this is now using the MediaManagerLog. > This doesn't make sense. Same as [1]. > > We need to figure out why (code looks sane, so some kind of race?) and fix. > > > [1] http://searchfox.org/mozilla-central/rev/aa1343961fca52e8c7dab16788530da31aab7c8d/dom/media/webrtc/MediaEngineWebRTCAudio.cpp#678 This has been removed. I've looked at the code; I think it was paranoia.
Comment on attachment 8923907 [details] Bug 1397793 - Move to APM - Part 2 - Actual processing. https://reviewboard.mozilla.org/r/195066/#review200388 > This is 1414837, there is a comment. Ah, good. I was searching code for "ring buffer".
Attachment #8923907 - Flags: review?(apehrson) → review+
Comment on attachment 8926367 [details] Bug 1397793 - Use a ControlMessage to switch between passthrough and processing mode for microphone input. https://reviewboard.mozilla.org/r/197654/#review207436 C/C++ static analysis found 1 defect in this patch. You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp` ::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:535 (Diff revision 3) > + : ControlMessage(nullptr) > + , mMicrophoneSource(aSource) > + , mPassThrough(aPassThrough) > + {} > + > + virtual void Run() Warning: Prefer using 'override' or (rarely) 'final' instead of 'virtual' [clang-tidy: modernize-use-override] virtual void Run() ^ override
Comment on attachment 8923901 [details] Bug 1397793 - Use the MSG rate in MediaPipeline/PeerConnectionImpl. https://reviewboard.mozilla.org/r/195054/#review200478 > Are there any assertions/requirements in the graphDrivers/MSG that the GraphRate%100 == 0? If not, we should add them (not here; we don't need to check it on every pull, just when a new driver is created probably) I don't know of a tier-1 platform where this is not true. In fact, I don't know of a device where this is not true.
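To spell out the assumption being discussed: the 10ms packetization only yields an integral packet size if the graph rate is divisible by 100. A sketch of the one-time check the reviewer suggests (names hypothetical; in Gecko this would sit where a new graph driver is created, not on every pull):

```cpp
#include <cassert>
#include <cstdint>

// All common rates (16000, 44100, 48000, 96000, ...) are divisible by 100,
// so 10ms of audio is a whole number of frames. Assert it once at driver
// creation rather than checking on every pull.
uint32_t FramesPer10msPacket(uint32_t aGraphRate)
{
  assert(aGraphRate % 100 == 0 &&
         "graph rate must yield integral 10ms packets");
  return aGraphRate / 100;
}
```

For example, a 48000 Hz graph produces 480-frame packets, and even 44100 Hz works out to 441 frames.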
Comment on attachment 8923901 [details] Bug 1397793 - Use the MSG rate in MediaPipeline/PeerConnectionImpl. https://reviewboard.mozilla.org/r/195054/#review200374 Yes, done. > This change seems orthogonal, but makes sense. Not really orthogonal, it crashes without it, when you have a stereo mic. This was just badly tested.
Blocks: 1420162
Comment on attachment 8925906 [details] Bug 1397793 - Refactor the code that sets the processing modes. https://reviewboard.mozilla.org/r/197118/#review203280 > is the `using namespace webrtc;` above enough? No, because of the scoping rules of C++. I removed as many as I could.
Comment on attachment 8925905 [details] Bug 1397793 - Allow switching processing on/off dynamically. https://reviewboard.mozilla.org/r/197116/#review202234 > We can almost get rid of mSampleFrequency now it seems. At least USE_GRAPH_RATE seems superfluous. Yes, will do in a followup. As said on IRC, this was too big already. This is tracked in 1420162.
Blocks: 1420815
Comment on attachment 8932556 [details]
Bug 1397793 - Share SharedBuffer accross SourceMediaStream.

https://reviewboard.mozilla.org/r/203608/#review209336

::: dom/media/webrtc/MediaEngineWebRTCAudio.cpp:892
(Diff revision 1)
> -      segment.AppendFrames(buffer.forget(),
> +      RefPtr<SharedBuffer> other = buffer;
> +      segment.AppendFrames(other.forget(),

This works. Or use do_AddRef if you want to cut it down a line.
Attachment #8932556 - Flags: review?(apehrson) → review+
Comment on attachment 8932555 [details]
Bug 1397793 - Add asserts for AudioChunk invariants.

https://reviewboard.mozilla.org/r/203606/#review209368

r+ with nits fixed

::: dom/media/AudioSegment.h:190
(Diff revision 1)
>        }
>      }
>    }
>    return true;
>  }
> -  bool IsNull() const { return mBuffer == nullptr; }
> +  bool IsNull() const {

Put the opening '{' on a new line.

::: dom/media/AudioSegment.h:346
(Diff revision 1)
>    const nsTArray<const float*>& aChannelData,
>    int32_t aDuration, const PrincipalHandle& aPrincipalHandle)
>  {
>    AudioChunk* chunk = AppendChunk(aDuration);
>    chunk->mBuffer = aBuffer;
> +  MOZ_ASSERT(chunk->mBuffer.get());

No need to .get() for MOZ_ASSERT. This and the other two occurrences.
Attachment #8932555 - Flags: review?(apehrson) → review+
Comment on attachment 8932555 [details]
Bug 1397793 - Add asserts for AudioChunk invariants.

https://reviewboard.mozilla.org/r/203606/#review209384

::: dom/media/AudioSegment.h:389
(Diff revisions 1 - 2)
> +
>    AudioChunk* chunk = AppendChunk(aChunk->mDuration);
>    chunk->mBuffer = aChunk->mBuffer.forget();
> -  MOZ_ASSERT(chunk->mBuffer.get());
>    chunk->mChannelData.SwapElements(aChunk->mChannelData);
> +  MOZ_ASSERT(chunk->mBuffer.get() || aChunk->mChannelData.IsEmpty());

Unnecessary `.get()`. Also, what's this checking? Not obvious to me; perhaps pass an explanation to `MOZ_ASSERT()`?
Paul, just as a heads up: your patch set here has merge conflicts with mozilla-central since Transceivers landed.
Comment on attachment 8934538 [details]
Bug 1397793 - Don't use the AEC on a 440Hz tone when testing that audio is flowing.

https://reviewboard.mozilla.org/r/205446/#review211034

Not a fan of exceptions like this (it could still cause other tests to fail), but I'm ok with this since the virtual loopback device patches will have to touch the processing constraints for our existing tests anyway (probably disabling them by default, since the output to the loopback device goes through the AEC as far-end).
Attachment #8934538 - Flags: review?(apehrson) → review+
Depends on: 1423228
Pushed by paul@paul.cx:
https://hg.mozilla.org/integration/mozilla-inbound/rev/b44e089bcfe7
Delete old-deprecated VoEExternalMedia. r=dminor
https://hg.mozilla.org/integration/mozilla-inbound/rev/a3d9aa2649b9
Use the MSG rate in MediaPipeline/PeerConnectionImpl. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/9e92591ba6dc
Move away from VoEExternalMedia "external" API in AudioConduit.cpp. r=dminor
https://hg.mozilla.org/integration/mozilla-inbound/rev/e92ff1339db1
Revert Mozilla changes to OutputMixer r=jesup
https://hg.mozilla.org/integration/mozilla-inbound/rev/bf8977e0f440
Move MediaEngineDefault to use the MSG rate instead of something hard-coded. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/a781b123b252
Remove VoEExternalMedia usage in MediaEngineWebRTCAudio and MediaEngineWebRTC. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/2c7a56648de9
Move to APM - Part 1 - UpdateSingleSource. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/0107b3feb84b
Move to APM - Part 2 - Actual processing. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/d12894d77770
Allow switching processing on/off dynamically. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/1cf542ff8687
Refactor the code that sets the processing modes. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/771c95f2963b
Use a ControlMessage to switch between passthrough and processing mode for microphone input. r=pehsrons
https://hg.mozilla.org/integration/mozilla-inbound/rev/16f44ac4978a
Make the assertions in the rate conversion functions in StreamTracks.h be fatal. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/ea8d6efcf3ef
Remove mRate from MediaEngineDefault and use GraphRate(). r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/a0d519ba5332
Add asserts for AudioChunk invariants. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/81889a72ac45
Share SharedBuffer accross SourceMediaStream. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/c02fd9acc634
Don't use the AEC on a 440Hz tone when testing that audio is flowing. r=pehrsons
Depends on: 1423893
Depends on: 1423916
Depends on: 1423920
Depends on: 1423929
Depends on: 1423930
Depends on: 1423953
Depends on: 1423954
Depends on: 1424318
Blocks: 1424342
No longer blocks: 1424342
Depends on: 1424342
Depends on: 1424660
Backed out bug 1423923 because it blocked the backout of bug 1397793. Unfortunately, most of the changes from bug 1397793 are part of the backout whose commit message only mentions bug 1423923. Sorry about that:
https://hg.mozilla.org/mozilla-central/rev/7c2dd8503c547d25f273f1ec7bb0d62d8d3d7051
https://hg.mozilla.org/mozilla-central/rev/57950b4441666646efcb569807e0efb2cf3eda21
Status: RESOLVED → REOPENED
Flags: needinfo?(padenot)
Resolution: FIXED → ---
Target Milestone: mozilla59 → ---
Pushed by paul@paul.cx:
https://hg.mozilla.org/integration/mozilla-inbound/rev/dc2faa169c99
Delete old-deprecated VoEExternalMedia. r=dminor
https://hg.mozilla.org/integration/mozilla-inbound/rev/aa3ff283d9e7
Use the MSG rate in MediaPipeline/PeerConnectionImpl. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/3d315303d376
Move away from VoEExternalMedia "external" API in AudioConduit.cpp. r=dminor
https://hg.mozilla.org/integration/mozilla-inbound/rev/80f8abb62e07
Revert Mozilla changes to OutputMixer r=jesup
https://hg.mozilla.org/integration/mozilla-inbound/rev/f269fb34cee9
Move MediaEngineDefault to use the MSG rate instead of something hard-coded. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/d30ce07c5567
Remove VoEExternalMedia usage in MediaEngineWebRTCAudio and MediaEngineWebRTC. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/0ec5d3fdaf1b
Move to APM - Part 1 - UpdateSingleSource. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/2b871bc4a4ba
Move to APM - Part 2 - Actual processing. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/2ef19550733d
Allow switching processing on/off dynamically. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/54abe5f7a604
Refactor the code that sets the processing modes. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/a11032d18d50
Use a ControlMessage to switch between passthrough and processing mode for microphone input. r=pehsrons
https://hg.mozilla.org/integration/mozilla-inbound/rev/ae73bf206871
Make the assertions in the rate conversion functions in StreamTracks.h be fatal. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/ea71d48c6b64
Remove mRate from MediaEngineDefault and use GraphRate(). r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/43b35e28df74
Add asserts for AudioChunk invariants. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/cece0d078ad2
Share SharedBuffer accross SourceMediaStream. r=pehrsons
https://hg.mozilla.org/integration/mozilla-inbound/rev/4c09f494655f
Don't use the AEC on a 440Hz tone when testing that audio is flowing. r=pehrsons
clearing NI.
Flags: needinfo?(padenot)
Paul: this patch makes Fx consume 200% CPU for a single audio-track sendonly WebRTC call (only tested on Mac)!

STR: https://jsfiddle.net/v8decgw0/6/

Mozregression result:
https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=61837d2b24d454983ef10ebbbf3bb03e8b6ed12e&tochange=104d8a327353bef69e2dba4cd97f453a3ff5adbb
Flags: needinfo?(padenot)
Fixed by bug 1425596.
Flags: needinfo?(padenot)
Depends on: 1426171
Depends on: 1424944
Depends on: 1444074
Blocks: 1445762
Flags: needinfo?(rjesup)