Feed output audio from non-default sinks (setSinkId) to AEC
Categories: Core :: Audio/Video: MediaStreamGraph (task, P3)
People: Reporter: pehrsons; Assignee: karlt
References: Depends on 1 open bug; Blocks 2 open bugs
Details
The webrtc::AudioProcessing instance that we use for echo cancellation needs to be fed one input stream and one output stream to do its processing. There is one webrtc::AudioProcessing per input stream, be it for a default or non-default device. However, the output stream feeding webrtc::AudioProcessing is always the audio played out in the default MediaTrackGraph, i.e. for the default output device, where the AudioProcessingTrack lives.

setSinkId with a MediaStreamTrack will create a new MediaTrackGraph running on the callbacks of the device with the given sink id, and connect the track corresponding to the played MediaStreamTrack in the default graph to the new graph. There is no path to feed the output of this new graph to the AudioProcessingTrack in the default graph.

We'll need to fix this, possibly through bug 1830247.
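For context, a minimal sketch of how a page reaches this state from JavaScript, using the standard getUserMedia and HTMLMediaElement.setSinkId APIs. The stream and sink id arguments are hypothetical, and error handling is omitted:

```javascript
// Sketch of the scenario described above. Assumes standard Web APIs
// (getUserMedia, HTMLMediaElement.setSinkId); `remoteStream` and
// `sinkId` (a non-default output device id) are hypothetical inputs.
async function playOnNonDefaultSink(remoteStream, sinkId) {
  // Capture the mic with echo cancellation enabled: this creates the
  // AudioProcessingTrack in the default MediaTrackGraph.
  const mic = await navigator.mediaDevices.getUserMedia({
    audio: { echoCancellation: true },
  });

  // Play the far-end audio on a non-default device. Per this bug,
  // this spawns a second MediaTrackGraph clocked by that device,
  // and its output is never fed to the AEC as a render reference.
  const audio = new Audio();
  audio.srcObject = remoteStream;
  await audio.setSinkId(sinkId);
  await audio.play();

  return mic;
}
```

This is illustrative only; in this configuration the echo canceller receives no render signal for the non-default device, so echo picked up by the microphone from that device goes uncancelled.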
Reporter
Comment 1 • 1 year ago
Karl, would you have cycles to look into this?
Assignee
Comment 2 • 1 year ago
Thank you for identifying this issue and linking to the parts involved. I'll look into it.
I assume multiple audio output devices should be treated similarly to having multiple channels in a single output device, except that clocks and rates can differ.
Perhaps adding more channels to a single AudioProcessing would be more efficient than having a series of AudioProcessing instances.
Reporter
Comment 3 • 1 year ago
Output->echo latencies can also differ. I'm not sure how well the AudioProcessing AEC would handle different scenarios there.
Assignee
Comment 4 • 1 year ago
Ah, yes, thanks. I wonder whether the AEC handles widely separated speakers.