Bug 1849108 (Open): opened 1 year ago, updated 1 year ago

Feed output audio from non-default sinks (setSinkId) to AEC

Categories

(Core :: Audio/Video: MediaStreamGraph, task, P3)


People

(Reporter: pehrsons, Assigned: karlt)

References

(Depends on 1 open bug, Blocks 2 open bugs)

Details

The webrtc::AudioProcessing instance that we use for echo cancellation needs to be fed one input (capture) stream and one output (render) stream to do its processing.

There is one webrtc::AudioProcessing per input stream, whether for a default or a non-default device.

However, the output stream feeding webrtc::AudioProcessing is always the audio played out by the default MediaTrackGraph, i.e. the default output device, where the AudioProcessingTrack lives.
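For reference, the APM receives the render (far-end) audio via ProcessReverseStream() and the capture (near-end) audio via ProcessStream(). Roughly like this (a minimal sketch against the upstream webrtc::AudioProcessing interface; exact signatures vary across libwebrtc versions):

    #include "modules/audio_processing/include/audio_processing.h"

    // One AudioProcessing instance per capture stream, as described above.
    rtc::scoped_refptr<webrtc::AudioProcessing> apm =
        webrtc::AudioProcessingBuilder().Create();

    void OnAudioCallback(const int16_t* render, int16_t* capture,
                         int rate, size_t channels, int16_t* scratch) {
      webrtc::StreamConfig config(rate, channels);
      // Far-end: the audio about to be played out. Today only the default
      // graph's output reaches this call; audio routed elsewhere via
      // setSinkId never does.
      apm->ProcessReverseStream(render, config, config, scratch);
      // Near-end: the microphone input, echo-cancelled in place.
      apm->ProcessStream(capture, config, config, capture);
    }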

setSinkId with a MediaStreamTrack creates a new MediaTrackGraph running on the callbacks of the device with the given sink ID, and connects the track corresponding to the played MediaStreamTrack in the default graph to the new graph. There is no path for feeding the output of this new graph to the AudioProcessingTrack in the default graph.
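To make the missing hand-off concrete, a fix would need something along these lines at the end of each iteration of the non-default graph (all names here are hypothetical; this sketches the missing path, not existing Gecko code):

    // Hypothetical: when the non-default graph has rendered a block of
    // output audio, hand a copy to each AudioProcessingTrack in the default
    // graph that wants it as far-end input for ProcessReverseStream().
    void NonDefaultGraphIterationEnd(const AudioChunk& aRendered) {
      for (const auto& track : mInterestedAudioProcessingTracks) {
        track->NotifyOutputData(aRendered);  // hypothetical hook
      }
    }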

We'll need to fix this, possibly through bug 1830247.

Karl, would you have cycles to look into this?

Flags: needinfo?(karlt)

Thank you for identifying this issue and linking to the parts involved. I'll look into it.

I assume multiple audio output devices should be treated similarly to multiple channels in a single output device, except that clocks and rates can differ.
Perhaps adding more channels to a single AudioProcessing would be more efficient than running a separate AudioProcessing instance per output device.
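For instance (purely a sketch of the multichannel idea, not existing code; it assumes each device's output has already been resampled to a common rate and mixed down to mono):

    #include <vector>
    #include "modules/audio_processing/include/audio_processing.h"

    // Treat each output device's rendered audio as one channel of a single
    // reverse stream, so one AudioProcessing sees all far-end signals.
    void FeedCombinedReverseStream(webrtc::AudioProcessing* apm,
                                   std::vector<float*>& per_device_render,
                                   int common_rate_hz) {
      webrtc::StreamConfig config(common_rate_hz, per_device_render.size());
      // Deinterleaved float variant: one pointer per channel, one 10 ms
      // block per call; processed in place.
      apm->ProcessReverseStream(per_device_render.data(), config, config,
                                per_device_render.data());
    }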

Assignee: nobody → karlt
Flags: needinfo?(karlt)

Output->echo latencies can also differ. I'm not sure how well the AudioProcessing AEC would handle different scenarios there.
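For context, the APM accepts only a single reported playout-to-capture delay per capture block, so two devices with different output->echo latencies cannot both be described exactly and the AEC's internal delay estimator has to absorb the spread. A sketch using the upstream API:

    // Reported once per 10 ms capture block, before ProcessStream().
    apm->set_stream_delay_ms(playout_to_capture_delay_ms);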

Ah, yes, thanks. I wonder whether the AEC handles widely separated speakers.
