1 Introduction
As with IJKPLAYER's AudioTrack playback, the OpenSL ES integration has to conform to the SDL_Aout interface specification. The difference is that OpenSL ES playback runs entirely in native code, driven through the NDK's OpenSL ES API. For a detailed introduction to OpenSL ES, see the official OpenSL ES documentation.
The Pipeline and SDL_Aout structures, and how they are created, are the same as for AudioTrack; see the earlier article IJKPLAYER Source Code Analysis - AudioTrack Playback (CSDN blog).
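For orientation, the SDL_Aout interface that both backends implement is essentially a small function table. The sketch below is simplified from ijksdl_aout.h and is not authoritative: it lists only the members that appear in this article (the real struct carries a few more callbacks), and it relies on the types declared in the surrounding ijksdl headers:

// Simplified sketch of the SDL_Aout "vtable"; see ijksdl_aout.h for the real definition.
// SDL_mutex, SDL_Class, SDL_Aout_Opaque and SDL_AudioSpec come from the other ijksdl headers.
struct SDL_Aout {
    SDL_mutex       *mutex;
    SDL_Class       *opaque_class;
    SDL_Aout_Opaque *opaque;

    void   (*free_l)(SDL_Aout *vout);
    int    (*open_audio)(SDL_Aout *aout, const SDL_AudioSpec *desired, SDL_AudioSpec *obtained);
    void   (*pause_audio)(SDL_Aout *aout, int pause_on);
    void   (*flush_audio)(SDL_Aout *aout);
    void   (*set_volume)(SDL_Aout *aout, float left, float right);
    void   (*close_audio)(SDL_Aout *aout);
    double (*func_get_latency_seconds)(SDL_Aout *aout);
};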
2 Interfaces
2.1 Creating the SDL_Aout
The call chain for creating the OpenSL ES SDL_Aout object is as follows:
ijkmp_android_create() => ffpipeline_create_from_android() => func_open_audio_output() => SDL_AoutAndroid_CreateForOpenSLES()
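func_open_audio_output() is not called by ff_ffplay directly: ffpipeline_create_from_android() registers it as a pipeline callback, and ff_ffplay later reaches it through ffpipeline_open_audio_output(). A rough sketch of that wiring, simplified from ffpipeline_android.c and ff_ffpipeline.c (surrounding setup and error handling omitted):

// ffpipeline_android.c, inside ffpipeline_create_from_android():
// register the Android-specific audio-output factory on the pipeline
pipeline->func_open_audio_output = func_open_audio_output;

// ff_ffpipeline.c: when the audio stream is opened, ff_ffplay asks the
// pipeline for an audio output, which dispatches to the callback above
SDL_Aout *ffpipeline_open_audio_output(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    return pipeline->func_open_audio_output(pipeline, ffp);
}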
Which backend is used is controlled by the opensles option; it defaults to 0, i.e. AudioTrack playback:
{ "opensles", "OpenSL ES: enable",
OPTION_OFFSET(opensles), OPTION_INT(0, 0, 1) },
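An application normally flips this flag through the player option API before prepare. A hedged example at the C layer (the Java-side IjkMediaPlayer.setOption() call ends up on the same path; ijkmp_set_option_int() and IJKMP_OPT_CATEGORY_PLAYER are assumed from ijkplayer.c/ijkplayer.h and should be checked against the source tree in use):

// Enable the OpenSL ES audio output before ijkmp_prepare_async();
// "opensles" is the player-category option declared above.
ijkmp_set_option_int(mp, IJKMP_OPT_CATEGORY_PLAYER, "opensles", 1);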
If the opensles option is enabled, playback goes through OpenSL ES; otherwise it falls back to AudioTrack:
static SDL_Aout *func_open_audio_output(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    SDL_Aout *aout = NULL;
    if (ffp->opensles) {
        aout = SDL_AoutAndroid_CreateForOpenSLES();
    } else {
        aout = SDL_AoutAndroid_CreateForAudioTrack();
    }
    if (aout)
        SDL_AoutSetStereoVolume(aout, pipeline->opaque->left_volume, pipeline->opaque->right_volume);
    return aout;
}
The OpenSL ES SDL_Aout object is created in SDL_AoutAndroid_CreateForOpenSLES(), which follows the SDL_Aout interface specification:
SDL_Aout *SDL_AoutAndroid_CreateForOpenSLES()
{
    SDLTRACE("%s\n", __func__);
    SDL_Aout *aout = SDL_Aout_CreateInternal(sizeof(SDL_Aout_Opaque));
    if (!aout)
        return NULL;

    SDL_Aout_Opaque *opaque = aout->opaque;
    opaque->wakeup_cond  = SDL_CreateCond();
    opaque->wakeup_mutex = SDL_CreateMutex();

    int ret = 0;

    // create and realize the OpenSL ES engine object
    SLObjectItf slObject = NULL;
    ret = slCreateEngine(&slObject, 0, NULL, 0, NULL, NULL);
    CHECK_OPENSL_ERROR(ret, "%s: slCreateEngine() failed", __func__);
    opaque->slObject = slObject;

    ret = (*slObject)->Realize(slObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->Realize() failed", __func__);

    // obtain the engine interface, which is used to create all other objects
    SLEngineItf slEngine = NULL;
    ret = (*slObject)->GetInterface(slObject, SL_IID_ENGINE, &slEngine);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->GetInterface() failed", __func__);
    opaque->slEngine = slEngine;

    // create and realize the output mix, which will act as the audio sink
    SLObjectItf slOutputMixObject = NULL;
    const SLInterfaceID ids1[] = {SL_IID_VOLUME};
    const SLboolean req1[] = {SL_BOOLEAN_FALSE};
    ret = (*slEngine)->CreateOutputMix(slEngine, &slOutputMixObject, 1, ids1, req1);
    CHECK_OPENSL_ERROR(ret, "%s: slEngine->CreateOutputMix() failed", __func__);
    opaque->slOutputMixObject = slOutputMixObject;

    ret = (*slOutputMixObject)->Realize(slOutputMixObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slOutputMixObject->Realize() failed", __func__);

    // fill in the SDL_Aout function table
    aout->free_l       = aout_free_l;
    aout->opaque_class = &g_opensles_class;
    aout->open_audio   = aout_open_audio;
    aout->pause_audio  = aout_pause_audio;
    aout->flush_audio  = aout_flush_audio;
    aout->close_audio  = aout_close_audio;
    aout->set_volume   = aout_set_volume;
    aout->func_get_latency_seconds = aout_get_latency_seconds;

    return aout;

fail:
    aout_free_l(aout);
    return NULL;
}
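Every OpenSL ES call above is wrapped in CHECK_OPENSL_ERROR: on any result other than SL_RESULT_SUCCESS it logs the message and jumps to the fail label, where the half-built aout is released by aout_free_l(). The macro is defined roughly along these lines (paraphrased from ijksdl_aout_android_opensles.c):

// Log and bail out to the function-local "fail:" label on any OpenSL ES error.
#define CHECK_OPENSL_ERROR(ret__, ...) \
    do { \
        if ((ret__) != SL_RESULT_SUCCESS) { \
            ALOGE(__VA_ARGS__); \
            goto fail; \
        } \
    } while (0)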
2.2 The func_get_latency_seconds interface
- This interface is an addition of the OpenSL ES backend; the AudioTrack backend does not provide it;
- Its job is to work out how many milliseconds of audio data are still buffered inside OpenSL ES; during audio/video synchronization this value is used to correct the audio clock (see the sketch right after this list);
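For context, this is roughly where the value ends up: in ff_ffplay.c the audio clock is set from the PTS of the last sample handed to the audio device, pulled back by everything that has been written but is not yet audible, and the backend latency reported through func_get_latency_seconds() is part of that correction. A condensed sketch of the call site in sdl_audio_callback() (the buffered local is introduced here only for readability; the other names follow ff_ffplay.c):

// Condensed from sdl_audio_callback(): correct the audio clock by the data
// still sitting in the SDL/hardware buffers plus the latency reported by the
// backend via SDL_AoutGetLatencySeconds() -> func_get_latency_seconds().
if (!isnan(is->audio_clock)) {
    double buffered = (double)(2 * is->audio_hw_buf_size + is->audio_write_buf_size)
                    / is->audio_tgt.bytes_per_sec;
    set_clock_at(&is->audclk,
                 is->audio_clock - buffered - SDL_AoutGetLatencySeconds(ffp->aout),
                 is->audio_clock_serial,
                 ffp->audio_callback_time / 1000000.0);
    sync_clock_to_slave(&is->extclk, &is->audclk);
}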
Compared with AudioTrack, OpenSL ES adds the func_get_latency_seconds interface:
aout->func_get_latency_seconds = aout_get_latency_seconds;
Its concrete implementation is as follows:
static double aout_get_latency_seconds(SDL_Aout *aout)
{
    SDL_Aout_Opaque *opaque = aout->opaque;
SLAndroidSim