The WebRTC source version analyzed here is org.webrtc:google-webrtc:1.0.32006.
This article covers only the Java-layer source. Before the analysis, let's go over the basic concepts of a few important classes.
- MediaSource: the base of WebRTC media data sources. It has two subclasses: AudioSource (audio source) and VideoSource (video source);
- MediaStreamTrack: a media track. Each MediaStreamTrack is backed by one MediaSource, so creating a track requires a MediaSource. It likewise has two subclasses: AudioTrack (audio track), backed by an AudioSource, and VideoTrack (video track), backed by a VideoSource;
- MediaStream: a media stream. A stream can hold multiple AudioTracks and VideoTracks, though typically we add just one of each; a minimal sketch of these relationships follows this list.
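To make the source → track → stream chain concrete, here is a minimal sketch. It assumes an already-built PeerConnectionFactory named factory; the track and stream IDs are illustrative:
//Audio: source -> track
val audioSource = factory.createAudioSource(MediaConstraints())
val audioTrack = factory.createAudioTrack("ARDAMSa0", audioSource)
//Video: source -> track (false = not a screencast)
val videoSource = factory.createVideoSource(false)
val videoTrack = factory.createVideoTrack("ARDAMSv0", videoSource)
//One stream carrying one audio track and one video track
val mediaStream = factory.createLocalMediaStream("ARDAMS")
mediaStream.addTrack(audioTrack)
mediaStream.addTrack(videoTrack)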
When making audio/video calls with WebRTC, you first need to build a PeerConnectionFactory. This is the connection factory: creating the local LocalMediaStream, the PeerConnection for each client, and so on all go through it. The construction looks roughly like this:
//Must be called at least once before creating a PeerConnectionFactory,
//and must not be called while a PeerConnectionFactory is alive.
PeerConnectionFactory.initialize(
    PeerConnectionFactory.InitializationOptions
        .builder(applicationContext).createInitializationOptions()
)
val eglBaseContext = EglBase.create().eglBaseContext
//Video encoder factory
val encoderFactory = DefaultVideoEncoderFactory(eglBaseContext, true, true)
//Video decoder factory
val decoderFactory = DefaultVideoDecoderFactory(eglBaseContext)
val audioDeviceModule = JavaAudioDeviceModule.builder(this)
    .setSamplesReadyCallback { audioSamples ->
        //Microphone input, i.e. the audio of the local LocalMediaStream during
        //a call, in PCM format; commonly used for call recording.
    }
    .createAudioDeviceModule()
val peerConnectionFactory = PeerConnectionFactory.builder()
    .setVideoEncoderFactory(encoderFactory)
    .setVideoDecoderFactory(decoderFactory)
    .setAudioDeviceModule(audioDeviceModule)
    .createPeerConnectionFactory()
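With the factory built, creating the PeerConnection itself is then a single call. A minimal sketch, assuming you supply your own iceServers list and PeerConnection.Observer:
//rtcConfig carries the ICE servers and other connection settings
val rtcConfig = PeerConnection.RTCConfiguration(iceServers)
//observer receives connection state, streams, and ICE candidate callbacks
val peerConnection = peerConnectionFactory.createPeerConnection(rtcConfig, observer)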
If you need WebRTC to use H.264 for video encoding, see "Enabling H.264 Encoding in WebRTC on Android".
MediaSource
The base class of WebRTC media data sources.
AudioSource (audio source)
Created via peerConnectionFactory.createAudioSource(MediaConstraints); the parameter is a set of media constraints, roughly as follows:
//Audio source
val audioSource = peerConnectionFactory.createAudioSource(createAudioConstraints())

private fun createAudioConstraints(): MediaConstraints {
    val audioConstraints = MediaConstraints()
    //Echo cancellation
    audioConstraints.mandatory.add(
        MediaConstraints.KeyValuePair("googEchoCancellation", "true")
    )
    //Automatic gain control
    audioConstraints.mandatory.add(
        MediaConstraints.KeyValuePair("googAutoGainControl", "true")
    )
    //High-pass filter
    audioConstraints.mandatory.add(
        MediaConstraints.KeyValuePair("googHighpassFilter", "true")
    )
    //Noise suppression
    audioConstraints.mandatory.add(
        MediaConstraints.KeyValuePair("googNoiseSuppression", "true")
    )
    return audioConstraints
}
In practice, the concrete handling of audio input and output lives in JavaAudioDeviceModule:
package org.webrtc.audio;
/**
 * AudioDeviceModule implemented using android.media.AudioRecord as input and
 * android.media.AudioTrack as output.
 */
public class JavaAudioDeviceModule implements AudioDeviceModule {
  ...
  /**
   * Input captured from the local microphone, backed by android.media.AudioRecord.
   */
  private final WebRtcAudioRecord audioInput;
  /**
   * Plays the remote party's audio during a call, backed by android.media.AudioTrack.
   */
  private final WebRtcAudioTrack audioOutput;
  ...
}
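As a usage note, JavaAudioDeviceModule also exposes runtime switches that act directly on these two members; a small sketch using the audioDeviceModule built earlier:
//Mute the microphone: WebRtcAudioRecord then overwrites each captured
//buffer with empty bytes (see the microphoneMute branch below).
audioDeviceModule.setMicrophoneMute(true)
//Silence remote playback, which is handled by WebRtcAudioTrack.
audioDeviceModule.setSpeakerMute(true)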
WebRtcAudioRecord, which captures the local microphone's audio data:
package org.webrtc.audio;
import android.media.AudioRecord;
class WebRtcAudioRecord {
  ...
  private AudioRecord audioRecord;
  /**
   * The reading thread.
   */
  private AudioRecordThread audioThread;
  /**
   * Audio thread which keeps calling ByteBuffer.read() waiting for audio
   * to be recorded. Feeds recorded data to the native counterpart as a
   * periodic sequence of callbacks using DataIsRecorded().
   * This thread uses a Process.THREAD_PRIORITY_URGENT_AUDIO priority.
   */
  private class AudioRecordThread extends Thread {
    private volatile boolean keepAlive = true;
    public AudioRecordThread(String name) {
      super(name);
    }
    public void run() {
      Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
      //Make sure we are actually in the recording state
      assertTrue(audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING);
      //Report the started state to the outside
      doAudioRecordStateCallback(AUDIO_RECORD_START);
      while (keepAlive) {
        int bytesRead = audioRecord.read(byteBuffer, byteBuffer.capacity());
        if (bytesRead == byteBuffer.capacity()) {
          if (microphoneMute) {
            //If the microphone is muted, overwrite the buffer with empty bytes
            byteBuffer.clear();
            byteBuffer.put(WebRtcAudioRecord.this.emptyBytes);
          }
          if (keepAlive) {
            //Hand the recorded data to the native layer
            nativeDataIsRecorded(nativeAudioRecord, bytesRead);
          }
          //Deliver the audio data (PCM format) to the external callback
          if (audioSamplesReadyCallback != null) {
            byte[] data = Arrays.copyOfRange(byteBuffer.array(), byteBuffer.arrayOffset(),
                byteBuffer.capacity() + byteBuffer.arrayOffset());
            audioSamplesReadyCallback.onWebRtcAudioRecordSamplesReady(
                new AudioSamples(audioRecord.getAudioFormat(),
                    audioRecord.getChannelCount(), audioRecord.getSampleRate(), data));
          }
        } else {
          String errorMessage = "AudioRecord.read failed: " + bytesRead;
          Logging.e("WebRtcAudioRecordExternal", errorMessage);
          if (bytesRead == AudioRecord.ERROR_INVALID_OPERATION) {
            keepAlive = false;
            reportWebRtcAudioRecordError(errorMessage);
          }
        }
      }
      ...
    }
  }
  ...
}
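Since audioSamplesReadyCallback above is the hook most apps use for call recording, here is a minimal sketch of persisting those PCM frames to a file. File handling is simplified for brevity, and call.pcm is an illustrative name:
//Destination for the raw PCM stream (hypothetical path)
val output = FileOutputStream(File(context.filesDir, "call.pcm"))
val audioDeviceModule = JavaAudioDeviceModule.builder(context)
    .setSamplesReadyCallback { samples ->
        //samples.data is raw PCM matching samples.sampleRate and samples.channelCount
        output.write(samples.data)
    }
    .createAudioDeviceModule()
To play the file back or wrap it as WAV, you need the same sample rate, channel count, and audio format that each AudioSamples reports.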