Recently I have been looking into playback latency on Android. Reading the code, I noticed the following comment on the latency function in the AudioTrack class:
/* Returns this track's latency in milliseconds.
 * This includes the latency due to AudioTrack buffer size, AudioMixer (if any)
 * and audio hardware driver.
 */
Impressive. Just a few days ago I was still computing the latency from the buffer size by hand, and it turns out a single function call does the job.
Let's look at the implementation of AudioTrack::latency():
uint32_t AudioTrack::latency() const
{
    return mLatency;
}
Nothing deep here; it simply returns a member variable.
So where does mLatency get its value?
It is assigned in AudioTrack::createTrack:
mLatency = afLatency + (1000*mCblk->frameCount) / sampleRate;
Here afLatency is the latency from the hardware side.
The second term, (1000*mCblk->frameCount) / sampleRate, computes the latency caused by the AudioTrack buffer, based on the buffer described by audio_track_cblk_t.
afLatency is also obtained in AudioTrack::createTrack:
uint32_t afLatency;
if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
    return NO_INIT;
}
AudioSystem::getOutputLatency first looks up the output for the given stream type, then tries to fetch that output's descriptor.
If the descriptor is found, it uses the latency stored there; otherwise it falls back to AudioFlinger and uses AudioFlinger's latency.
The code:
status_t AudioSystem::getOutputLatency(uint32_t* latency, int streamType)
{
    OutputDescriptor *outputDesc;
    audio_io_handle_t output;

    if (streamType == DEFAULT) {
        streamType = MUSIC;
    }
    output = getOutput((stream_type)streamType);
    if (output == 0) {
        return PERMISSION_DENIED;
    }

    gLock.lock();
    outputDesc = AudioSystem::gOutputs.valueFor(output);
    if (outputDesc == 0) {
        gLock.unlock();
        const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
        if (af == 0) return PERMISSION_DENIED;
        *latency = af->latency(output);
    } else {
        *latency = outputDesc->latency;
        gLock.unlock();
    }

    LOGV("getOutputLatency() streamType %d, output %d, latency %d",
         streamType, output, *latency);

    return NO_ERROR;
}
Let's start with AudioFlinger's latency.
AudioFlinger::latency first finds the PlaybackThread for the given output, then returns that thread's latency:
return thread->latency();
Now the function AudioFlinger::PlaybackThread::latency():
uint32_t AudioFlinger::PlaybackThread::latency() const
{
    if (mOutput) {
        return mOutput->latency();
    } else {
        return 0;
    }
}
In the project I am working on, mOutput is in fact an AudioStreamOutALSA.
The AudioStreamOutALSA::latency() function:
#define USEC_TO_MSEC(x) ((x + 999) / 1000)

uint32_t AudioStreamOutALSA::latency() const
{
    // Convert microseconds to milliseconds:
    // Android wants latency in milliseconds.
    return USEC_TO_MSEC (mHandle->latency);
}
mHandle is assigned in the constructor of the parent class, ALSAStreamOps,
using the handle parameter passed to the AudioStreamOutALSA constructor.
The AudioStreamOutALSA object is created in AudioHardwareALSA::openOutputStream:
out = new AudioStreamOutALSA(this, &(*it));
where the iterator it comes from:
ALSAHandleList::iterator it = mDeviceList.begin();
mDeviceList is populated in the AudioHardwareALSA constructor:
mALSADevice->init(mALSADevice, mDeviceList);
The init function is actually s_init:
static status_t s_init(alsa_device_t *module, ALSAHandleList &list)
{
    LOGD("Initializing devices for IMX51 ALSA module");

    list.clear();

    for (size_t i = 0; i < ARRAY_SIZE(_defaults); i++) {
        _defaults[i].module = module;
        list.push_back(_defaults[i]);
    }

    return NO_ERROR;
}
The definition of _defaults:
static alsa_handle_t _defaults[] = {
    {
        module      : 0,
        devices     : IMX51_OUT_DEFAULT,
        curDev      : 0,
        curMode     : 0,
        handle      : 0,
        format      : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels    : 2,
        sampleRate  : DEFAULT_SAMPLE_RATE,
        latency     : 200000,                // Desired Delay in usec
        bufferSize  : 6144,                  // Desired Number of samples
        modPrivate  : (void *)&setDefaultControls,
    },
    {
        module      : 0,
        devices     : IMX51_IN_DEFAULT,
        curDev      : 0,
        curMode     : 0,
        handle      : 0,
        format      : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels    : 2,
        sampleRate  : DEFAULT_SAMPLE_RATE,
        latency     : 250000,                // Desired Delay in usec
        bufferSize  : 6144,                  // Desired Number of samples
        modPrivate  : (void *)&setDefaultControls,
    },
};
So this is where the latency is specified:
latency : 200000, // Desired Delay in usec
Now let's go back: what happens if AudioSystem::getOutputLatency does find the output descriptor?
The output descriptor is created in the constructor of AudioPolicyManagerBase.
Its latency is obtained by calling mpClientInterface->openOutput:
mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
                                                &outputDesc->mSamplingRate,
                                                &outputDesc->mFormat,
                                                &outputDesc->mChannels,
                                                &outputDesc->mLatency,
                                                outputDesc->mFlags);
which in effect calls AudioFlinger::openOutput.
There, the latency is assigned:
if (pLatencyMs) *pLatencyMs = thread->latency();
And here this path converges with the river we traced earlier: both routes end up at the same PlaybackThread latency.