Android Audio Code Analysis 10 - The audio_track_cblk_t::framesReady Function

This article examines the data flow of the AudioTrack component in Android in detail. Starting from AudioTrack's write function, it traces how audio data passes through the various components of AudioFlinger and finally reaches the hardware for playback, focusing on the interaction mechanism between AudioTrack and AudioFlinger.


While reading AudioTrack's write function, we learned that audio data is ultimately written into the audio_track_cblk_t structure.
That structure is created inside AudioFlinger.
So how does AudioFlinger make use of this data?
That is what we will study today.


When writing data, we called audio_track_cblk_t::framesAvailable_l to check whether there was free space to write into.
The audio_track_cblk_t class has another function, framesReady; judging by the name, it should tell us how much data is ready.
So presumably, when AudioFlinger consumes audio data, it first calls framesReady to see how much audio data we have written, and then uses that data.


*****************************************Source*************************************************
uint32_t audio_track_cblk_t::framesReady()
{
    uint64_t u = this->user;
    uint64_t s = this->server;

    if (flags & CBLK_DIRECTION_MSK) {
        if (u < loopEnd) {
            return u - s;
        } else {
            Mutex::Autolock _l(lock);
            if (loopCount >= 0) {
                return (loopEnd - loopStart)*loopCount + u - s;
            } else {
                return UINT_MAX;
            }
        }
    } else {
        return s - u;
    }
}
**********************************************************************************************
Source path:
frameworks\base\media\libmedia\AudioTrack.cpp
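
Before chasing the callers, it helps to pin down the arithmetic. user and server are monotonically increasing frame counters shared between the client and AudioFlinger: for ordinary streaming playback, framesReady is simply user - server (frames written minus frames consumed), the mirror image of framesAvailable. The loop branch handles static-buffer tracks with loop points. Below is a minimal model of that accounting; the names and types are stand-ins assuming the Froyo-era semantics shown above, not the real AOSP declarations:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdint>
#include <climits>

// A minimal model (hypothetical names) of the accounting framesReady()
// performs for a playback track. user/server are the monotonically
// increasing write/consume frame counters from the cblk.
struct LoopModel {
    uint64_t user;       // frames written by the client
    uint64_t server;     // frames consumed by the mixer
    uint64_t loopStart;  // loop points, used by static-buffer tracks
    uint64_t loopEnd;
    int      loopCount;  // < 0 means loop forever

    uint32_t framesReady() const {
        if (user < loopEnd)              // not looping (or not there yet):
            return uint32_t(user - server);
        if (loopCount >= 0)              // finite loops still pending:
            return uint32_t((loopEnd - loopStart) * loopCount + user - server);
        return UINT_MAX;                 // infinite loop: never runs dry
    }
};

int main() {
    // A 1000-frame static buffer that will loop in full 3 more times:
    LoopModel m = { 1000, 200, 0, 1000, 3 };
    // 800 unread frames + 3 * 1000 looped frames = 3800 frames ready.
    return m.framesReady() == 3800 ? 0 : 1;
}
----------------------------------------------------------------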


#######################Notes################################
Until now we have been following the vine to find the melon; today we go from the melon back to the vine:
let us see what calls the framesReady function.
A search turns up quite a few call sites.
Most of them, however, only use the return value in a condition;
only in AudioFlinger::PlaybackThread::Track::getNextBuffer is the return value saved.
From reading the write path earlier, we know that the return value of framesAvailable is saved and then used when writing data.
By analogy, the return value of framesReady should be saved and used when reading data.
So we arrive at AudioFlinger::PlaybackThread::Track::getNextBuffer:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
    audio_track_cblk_t* cblk = this->cblk();
    uint32_t framesReady;
    uint32_t framesReq = buffer->frameCount;

    // Check if last stepServer failed, try to step now
    if (mFlags & TrackBase::STEPSERVER_FAILED) {
        if (!step()) goto getNextBuffer_exit;
        LOGV("stepServer recovered");
        mFlags &= ~TrackBase::STEPSERVER_FAILED;
    }

    framesReady = cblk->framesReady();

    if (LIKELY(framesReady)) {
        uint64_t s = cblk->server;
        uint64_t bufferEnd = cblk->serverBase + cblk->frameCount;

        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }

        buffer->raw = getBuffer(s, framesReq);
        if (buffer->raw == 0) goto getNextBuffer_exit;
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void* AudioFlinger::ThreadBase::TrackBase::getBuffer(uint32_t offset, uint32_t frames) const {
    audio_track_cblk_t* cblk = this->cblk();
    int8_t *bufferStart = (int8_t *)mBuffer + (offset-cblk->serverBase)*cblk->frameSize;
    int8_t *bufferEnd = bufferStart + frames * cblk->frameSize;

    // Check validity of returned pointer in case the track control block would have been corrupted.
    if (bufferStart < mBuffer || bufferStart > bufferEnd || bufferEnd > mBufferEnd ||
        ((unsigned long)bufferStart & (unsigned long)(cblk->frameSize - 1))) {
        LOGE("TrackBase::getBuffer buffer out of range:\n start: %p, end %p , mBuffer %p mBufferEnd %p\n \
                server %lld, serverBase %lld, user %lld, userBase %lld, channelCount %d",
                bufferStart, bufferEnd, mBuffer, mBufferEnd,
                cblk->server, cblk->serverBase, cblk->user, cblk->userBase, cblk->channelCount);
        return 0;
    }

    return bufferStart;
}
----------------------------------------------------------------


        buffer->frameCount = framesReq;
        return NO_ERROR;
    }

getNextBuffer_exit:
    buffer->raw = 0;
    buffer->frameCount = 0;
    LOGV("getNextBuffer() no more data for track %d on thread %p", mName, mThread.unsafe_get());
    return NOT_ENOUGH_DATA;
}
----------------------------------------------------------------
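
A side note on getBuffer above: server is a monotonically increasing frame counter, and serverBase records the counter value corresponding to the start of the current pass over the shared buffer, so offset - serverBase is the frame index inside the ring and multiplying by frameSize yields the byte offset from mBuffer. A tiny model of that mapping (all numbers hypothetical):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical numbers: a ring of 4-byte frames (16-bit stereo)
    // living in shared memory starting at mBuffer.
    uint8_t  shm[4096];
    uint8_t* mBuffer    = shm;
    uint32_t frameSize  = 4;     // bytes per frame
    uint64_t serverBase = 2048;  // counter value at the start of this cycle
    uint64_t server     = 2300;  // frames consumed so far (monotonic)

    // The same arithmetic as TrackBase::getBuffer:
    uint8_t* bufferStart = mBuffer + (server - serverBase) * frameSize;
    printf("frame index %llu -> byte offset %ld\n",
           (unsigned long long)(server - serverBase),
           (long)(bufferStart - mBuffer)); // 252 frames -> 1008 bytes
    return 0;
}
----------------------------------------------------------------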


Next, let us see what calls AudioFlinger::PlaybackThread::Track::getNextBuffer.
A search shows that inside AudioFlinger only AudioFlinger::DirectOutputThread::threadLoop calls it,
and since the AudioTrack we are discussing is used for music playback, that is certainly not our path.


Several places in AudioMixer also call getNextBuffer, but the calling code looks like this:
t.bufferProvider->getNextBuffer(&t.buffer);
So what is the relationship between bufferProvider and AudioFlinger::PlaybackThread::Track?


bufferProvider is declared as: AudioBufferProvider* bufferProvider;
The following inheritance relationships exist:
class Track : public TrackBase
class TrackBase : public AudioBufferProvider, public RefBase


So bufferProvider ultimately points to an AudioFlinger::PlaybackThread::Track object,
which means t.bufferProvider->getNextBuffer(&t.buffer) actually calls AudioFlinger::PlaybackThread::Track::getNextBuffer.
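
To make that dispatch concrete, here is a minimal sketch (with hypothetical, simplified types, not the real AudioBufferProvider.h declarations) of why a call through bufferProvider lands in Track::getNextBuffer:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdio>
#include <cstddef>

// Minimal stand-ins -- just enough to show the virtual dispatch.
struct Buffer { void* raw; size_t frameCount; };

struct AudioBufferProvider {
    virtual ~AudioBufferProvider() {}
    virtual int getNextBuffer(Buffer* buffer) = 0;
};

// Plays the role of AudioFlinger::PlaybackThread::Track.
struct Track : public AudioBufferProvider {
    virtual int getNextBuffer(Buffer* buffer) {
        printf("Track::getNextBuffer(%zu frames requested)\n", buffer->frameCount);
        return 0; // NO_ERROR
    }
};

int main() {
    Track track;
    AudioBufferProvider* bufferProvider = &track; // what setBufferProvider() stores
    Buffer b = { 0, 512 };
    bufferProvider->getNextBuffer(&b); // virtual dispatch lands in Track
    return 0;
}
----------------------------------------------------------------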


AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setBufferProvider,
and AudioMixer::setBufferProvider is where bufferProvider gets assigned.


prepareTracks_l in turn is called from AudioFlinger::MixerThread::threadLoop
and AudioFlinger::DuplicatingThread::threadLoop.


Let us first look at the places in AudioMixer that call AudioFlinger::PlaybackThread::Track::getNextBuffer.
The following AudioMixer functions all call getNextBuffer:
process__nop
process__genericNoResampling
process__genericResampling
process__OneTrack16BitsStereoNoResampling
process__TwoTracks16BitsStereoNoResampling


We will take process__OneTrack16BitsStereoNoResampling as our example.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// one track, 16 bits stereo without resampling is the most common case
void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];

    AudioBufferProvider::Buffer& b(t.buffer);

    int32_t* out = t.mainBuffer;
    size_t numFrames = state->frameCount;

    const int16_t vl = t.volume[0];
    const int16_t vr = t.volume[1];
    const uint32_t vrl = t.volumeRL;
    while (numFrames) {
        b.frameCount = numFrames;
        t.bufferProvider->getNextBuffer(&b);
        int16_t const *in = b.i16;

        // in == NULL can happen if the track was flushed just after having
        // been enabled for mixing.
        if (in == NULL || ((unsigned long)in & 3)) {
            memset(out, 0, numFrames*MAX_NUM_CHANNELS*sizeof(int16_t));
            LOGE_IF(((unsigned long)in & 3), "process stereo track: input buffer alignment pb: buffer %p track %d, channels %d, needs %08x",
                    in, i, t.channelCount, t.needs);
            return;
        }
        size_t outFrames = b.frameCount;

        if (UNLIKELY(uint32_t(vl) > UNITY_GAIN || uint32_t(vr) > UNITY_GAIN)) {
            // volume is boosted, so we might need to clamp even though
            // we process only one track.
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                // clamping...
                l = clamp16(l);
                r = clamp16(r);
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        } else {
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        }
        numFrames -= b.frameCount;
        t.bufferProvider->releaseBuffer(&b);
    }
}
----------------------------------------------------------------
As we can see, the function's main job is to copy the data from audio_track_cblk_t into track_t's mainBuffer, applying the track volume on the way.
So what is track_t? It is a member of state_t.
And what is state_t? It is the parameter passed into the function.
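
One detail worth unpacking before moving on: the inner loops above apply volume in 4.12 fixed point. t.volumeRL packs the left and right 4.12 gains into one word, mulRL multiplies a packed 16-bit stereo sample pair by them, the >> 12 shift drops the fraction, and clamp16 saturates when the gain exceeds unity. A simplified, portable model of that math (applyGain4_12 and this clamp16 are stand-ins, not the optimized AOSP versions):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdint>

// Simplified stand-ins for the helpers used above; the real mulRL/clamp16
// are hand-optimized in AOSP. 4096 (1 << 12) is unity gain in 4.12.
static inline int32_t clamp16(int32_t sample) {
    if (sample >  32767) return  32767;
    if (sample < -32768) return -32768;
    return sample;
}

// Scale one 16-bit sample by a 4.12 fixed-point gain.
static inline int32_t applyGain4_12(int16_t sample, int16_t gain) {
    return (int32_t(sample) * int32_t(gain)) >> 12;
}

int main() {
    int16_t in    = 10000;
    int16_t unity = 1 << 12;            // gain 1.0
    int16_t half  = unity / 2;          // gain 0.5
    int16_t boost = unity + unity / 2;  // gain 1.5 -> may need clamping

    int32_t a = applyGain4_12(in, unity);          // 10000
    int32_t b = applyGain4_12(in, half);           //  5000
    int32_t c = clamp16(applyGain4_12(in, boost)); // 15000 (no clip here)
    return (a == 10000 && b == 5000 && c == 15000) ? 0 : 1;
}
----------------------------------------------------------------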


Two questions now need exploring:
1. How does process__OneTrack16BitsStereoNoResampling get called, and where does its parameter come from?
2. How is the data placed into track_t's mainBuffer used afterwards?


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
First, how process__OneTrack16BitsStereoNoResampling gets called.
AudioMixer::process__validate makes use of process__OneTrack16BitsStereoNoResampling.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioMixer::process__validate(state_t* state)
{
    LOGW_IF(!state->needsChanged,
        "in process__validate() but nothing's invalid");

    uint32_t changed = state->needsChanged;
    state->needsChanged = 0; // clear the validation flag

    // recompute which tracks are enabled / disabled
    uint32_t enabled = 0;
    uint32_t disabled = 0;
    while (changed) {
        const int i = 31 - __builtin_clz(changed);
        const uint32_t mask = 1<<i;
        changed &= ~mask;
        track_t& t = state->tracks[i];
        (t.enabled ? enabled : disabled) |= mask;
    }
    state->enabledTracks &= ~disabled;
    state->enabledTracks |= enabled;

    // compute everything we need...
    int countActiveTracks = 0;
    int all16BitsStereoNoResample = 1;
    int resampling = 0;
    int volumeRamp = 0;
    uint32_t en = state->enabledTracks;
    while (en) {
        const int i = 31 - __builtin_clz(en);
        en &= ~(1<<i);

        countActiveTracks++;
        track_t& t = state->tracks[i];
        uint32_t n = 0;
        n |= NEEDS_CHANNEL_1 + t.channelCount - 1;
        n |= NEEDS_FORMAT_16;
        n |= t.doesResample() ? NEEDS_RESAMPLE_ENABLED : NEEDS_RESAMPLE_DISABLED;
        if (t.auxLevel != 0 && t.auxBuffer != NULL) {
            n |= NEEDS_AUX_ENABLED;
        }

        if (t.volumeInc[0]|t.volumeInc[1]) {
            volumeRamp = 1;
        } else if (!t.doesResample() && t.volumeRL == 0) {
            n |= NEEDS_MUTE_ENABLED;
        }
        t.needs = n;

        if ((n & NEEDS_MUTE__MASK) == NEEDS_MUTE_ENABLED) {
            t.hook = track__nop;
        } else {
            if ((n & NEEDS_AUX__MASK) == NEEDS_AUX_ENABLED) {
                all16BitsStereoNoResample = 0;
            }
            if ((n & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
                all16BitsStereoNoResample = 0;
                resampling = 1;
                t.hook = track__genericResample;
            } else {
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_1){
                    t.hook = track__16BitsMono;
                    all16BitsStereoNoResample = 0;
                }
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_2){
                    t.hook = track__16BitsStereo;
                }
            }
        }
    }

    // select the processing hooks
    state->hook = process__nop;
    if (countActiveTracks) {
        if (resampling) {
            if (!state->outputTemp) {
                state->outputTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            if (!state->resampleTemp) {
                state->resampleTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            state->hook = process__genericResampling;
        } else {
            if (state->outputTemp) {
                delete [] state->outputTemp;
                state->outputTemp = 0;
            }
            if (state->resampleTemp) {
                delete [] state->resampleTemp;
                state->resampleTemp = 0;
            }
            state->hook = process__genericNoResampling;
            if (all16BitsStereoNoResample && !volumeRamp) {
                if (countActiveTracks == 1) {
                    state->hook = process__OneTrack16BitsStereoNoResampling;
                }
            }
        }
    }

    LOGV("mixer configuration change: %d activeTracks (%08x) "
        "all16BitsStereoNoResample=%d, resampling=%d, volumeRamp=%d",
        countActiveTracks, state->enabledTracks,
        all16BitsStereoNoResample, resampling, volumeRamp);

    state->hook(state);

    // Now that the volume ramp has been done, set optimal state and
    // track hooks for subsequent mixer process
    if (countActiveTracks) {
        int allMuted = 1;
        uint32_t en = state->enabledTracks;
        while (en) {
            const int i = 31 - __builtin_clz(en);
            en &= ~(1<<i);
            track_t& t = state->tracks[i];
            if (!t.doesResample() && t.volumeRL == 0)
            {
                t.needs |= NEEDS_MUTE_ENABLED;
                t.hook = track__nop;
            } else {
                allMuted = 0;
            }
        }
        if (allMuted) {
            state->hook = process__nop;
        } else if (all16BitsStereoNoResample) {
            if (countActiveTracks == 1) {
                state->hook = process__OneTrack16BitsStereoNoResampling;
            }
        }
    }
}
----------------------------------------------------------------
It is assigned to state's hook.
state is the parameter passed in; hook is a function pointer.
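
This is a classic function-pointer dispatch: the mixer keeps one hook per state, re-points it whenever the configuration changes, and the steady-state mix call is just a single indirect call. A minimal sketch of the pattern (illustrative names only, not the AOSP code):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdio>

struct state_t; // forward declaration
typedef void (*process_hook_t)(state_t*);

struct state_t {
    int            enabledTracks;
    process_hook_t hook;     // the currently selected processing routine
};

static void process__nop(state_t*)          { printf("nop\n"); }
static void process__oneTrackFast(state_t*) { printf("fast one-track path\n"); }

// Plays the role of process__validate: inspect the state, pick a hook,
// then run the freshly selected hook once.
static void process__validate(state_t* state) {
    state->hook = (state->enabledTracks == 1) ? process__oneTrackFast
                                              : process__nop;
    state->hook(state);
}

int main() {
    state_t state;
    state.enabledTracks = 1;
    state.hook = process__validate; // what invalidateState() does
    state.hook(&state);             // what AudioMixer::process() does
    return 0;
}
----------------------------------------------------------------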


This raises two more questions:
1. Since hook is a function pointer, when does it get invoked?
2. And when is process__validate itself called?


The first question is simple: hook is invoked inside process__validate itself.


Now let us see where process__validate is used.
AudioMixer::invalidateState assigns process__validate to mState's hook.
mState is a member variable of AudioMixer.
hook is a function pointer.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioMixer::invalidateState(uint32_t mask)
{
    if (mask) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
}
----------------------------------------------------------------


As before, two questions follow:
1. Where mState's hook gets invoked.
2. Where invalidateState gets called.


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The first question first.
AudioMixer::process invokes mState's hook:
void AudioMixer::process()
{
    mState.hook(&mState);
}
As we can see, the state parameter of process__OneTrack16BitsStereoNoResampling is exactly this mState. Putting the chain in order:
mState.hook = process__validate;
mState.hook(&mState);
state->hook = process__OneTrack16BitsStereoNoResampling;
state->hook(state);
----------------------------------------------------------------
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Now the second question: where invalidateState gets called.
It is called from quite a few places, but we only care about two of them:
AudioMixer::enable
AudioMixer::setParameter
Both of these are called from AudioFlinger::MixerThread::prepareTracks_l,
and prepareTracks_l is called from AudioFlinger::MixerThread::threadLoop.


Let us take them one at a time.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::enable(int name)
{
    switch (name) {
    case MIXING: {
        if (mState.tracks[ mActiveTrack ].enabled != 1) {
            mState.tracks[ mActiveTrack ].enabled = 1;
            LOGV("enable(%d)", mActiveTrack);
            invalidateState(1<<mActiveTrack);
        }
    } break;
    default:
        return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}
----------------------------------------------------------------
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setParameter(int target, int name, void *value)
{
    int valueInt = (int)value;
    int32_t *valueBuf = (int32_t *)value;

    switch (target) {
    case TRACK:
        if (name == CHANNEL_COUNT) {
            if ((uint32_t(valueInt) <= MAX_NUM_CHANNELS) && (valueInt)) {
                if (mState.tracks[ mActiveTrack ].channelCount != valueInt) {
                    mState.tracks[ mActiveTrack ].channelCount = valueInt;
                    LOGV("setParameter(TRACK, CHANNEL_COUNT, %d)", valueInt);
                    invalidateState(1<<mActiveTrack);
                }
                return NO_ERROR;
            }
        }
        if (name == MAIN_BUFFER) {
            if (mState.tracks[ mActiveTrack ].mainBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].mainBuffer = valueBuf;
                LOGV("setParameter(TRACK, MAIN_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
        if (name == AUX_BUFFER) {
            if (mState.tracks[ mActiveTrack ].auxBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].auxBuffer = valueBuf;
                LOGV("setParameter(TRACK, AUX_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }

        break;
    case RESAMPLE:
        if (name == SAMPLE_RATE) {
            if (valueInt > 0) {
                track_t& track = mState.tracks[ mActiveTrack ];
                if (track.setResampler(uint32_t(valueInt), mSampleRate)) {
                    LOGV("setParameter(RESAMPLE, SAMPLE_RATE, %u)",
                        uint32_t(valueInt));
                    invalidateState(1<<mActiveTrack);
                }
                return NO_ERROR;
            }
        }
        break;
    case RAMP_VOLUME:
    case VOLUME:
        if ((uint32_t(name-VOLUME0) < MAX_NUM_CHANNELS)) {
            track_t& track = mState.tracks[ mActiveTrack ];
            if (track.volume[name-VOLUME0] != valueInt) {
                LOGV("setParameter(VOLUME, VOLUME0/1: %04x)", valueInt);
                track.prevVolume[name-VOLUME0] = track.volume[name-VOLUME0] << 16;
                track.volume[name-VOLUME0] = valueInt;
                if (target == VOLUME) {
                    track.prevVolume[name-VOLUME0] = valueInt << 16;
                    track.volumeInc[name-VOLUME0] = 0;
                } else {
                    int32_t d = (valueInt<<16) - track.prevVolume[name-VOLUME0];
                    int32_t volInc = d / int32_t(mState.frameCount);
                    track.volumeInc[name-VOLUME0] = volInc;
                    if (volInc == 0) {
                        track.prevVolume[name-VOLUME0] = valueInt << 16;
                    }
                }
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        } else if (name == AUXLEVEL) {
            track_t& track = mState.tracks[ mActiveTrack ];
            if (track.auxLevel != valueInt) {
                LOGV("setParameter(VOLUME, AUXLEVEL: %04x)", valueInt);
                track.prevAuxLevel = track.auxLevel << 16;
                track.auxLevel = valueInt;
                if (target == VOLUME) {
                    track.prevAuxLevel = valueInt << 16;
                    track.auxInc = 0;
                } else {
                    int32_t d = (valueInt<<16) - track.prevAuxLevel;
                    int32_t volInc = d / int32_t(mState.frameCount);
                    track.auxInc = volInc;
                    if (volInc == 0) {
                        track.prevAuxLevel = valueInt << 16;
                    }
                }
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
        break;
    }
    return BAD_VALUE;
}
----------------------------------------------------------------
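
A quick aside on the RAMP_VOLUME branch above: volumes are tracked in 16.16 fixed point in prevVolume, and the ramp precomputes a per-frame increment volInc that spreads the change across one mix buffer of frameCount frames. A small self-contained model of that setup (the concrete numbers here are assumptions for illustration):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdint>

// Model of the ramp setup in AudioMixer::setParameter (RAMP_VOLUME).
// prevVolume is 16.16 fixed point; valueInt is the new target gain;
// the change is spread linearly across one buffer of frameCount frames.
int main() {
    int32_t prevVolume = 0x0800 << 16; // previous gain of 0.5 (4.12), in 16.16
    int32_t valueInt   = 0x1000;       // new target gain of 1.0 (4.12)
    int32_t frameCount = 512;          // frames per mix buffer

    int32_t d      = (valueInt << 16) - prevVolume; // distance to cover
    int32_t volInc = d / frameCount;                // per-frame increment
    // After frameCount frames the ramped volume reaches the target
    // (the volInc == 0 special case guards against truncation stalls).
    int32_t end = prevVolume + volInc * frameCount;
    return (end == (valueInt << 16)) ? 0 : 1;
}
----------------------------------------------------------------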
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// prepareTracks_l() must be called with ThreadBase::mLock held
uint32_t AudioFlinger::MixerThread::prepareTracks_l(const SortedVector< wp<Track> >& activeTracks, Vector< sp<Track> > *tracksToRemove)
{
    uint32_t mixerStatus = MIXER_IDLE;
    // find out which tracks need to be processed
    size_t count = activeTracks.size();
    size_t mixedTracks = 0;
    size_t tracksWithEffect = 0;

    float masterVolume = mMasterVolume;
    bool masterMute = mMasterMute;

    if (masterMute) {
        masterVolume = 0;
    }
#ifdef LVMX
    bool tracksConnectedChanged = false;
    bool stateChanged = false;

    int audioOutputType = LifeVibes::getMixerType(mId, mType);
    if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType))
    {
        int activeTypes = 0;
        for (size_t i=0 ; i<count ; i++) {
            sp<Track> t = activeTracks[i].promote();
            if (t == 0) continue;
            Track* const track = t.get();
            int iTracktype=track->type();
            activeTypes |= 1<<track->type();
        }
        LifeVibes::computeVolumes(audioOutputType, activeTypes, tracksConnectedChanged, stateChanged, masterVolume, masterMute);
    }
#endif
    // Delegate master volume control to effect in output mix effect chain if needed
    sp<EffectChain> chain = getEffectChain_l(AudioSystem::SESSION_OUTPUT_MIX);
    if (chain != 0) {
        uint32_t v = (uint32_t)(masterVolume * (1 << 24));
        chain->setVolume_l(&v, &v);
        masterVolume = (float)((v + (1 << 23)) >> 24);
        chain.clear();
    }

    for (size_t i=0 ; i<count ; i++) {
        sp<Track> t = activeTracks[i].promote();
        if (t == 0) continue;

        Track* const track = t.get();
        audio_track_cblk_t* cblk = track->cblk();

        // The first time a track is added we wait
        // for all its buffers to be filled before processing it
        mAudioMixer->setActiveTrack(track->name());
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setActiveTrack(int track)
{
    if (uint32_t(track-TRACK0) >= MAX_NUM_TRACKS) {
        return BAD_VALUE;
    }
    mActiveTrack = track - TRACK0;
    return NO_ERROR;
}
----------------------------------------------------------------
        if (cblk->framesReady() && track->isReady() &&
            !track->isPaused() && !track->isTerminated())
        {
            //LOGV("track %d u=%08x, s=%08x [OK] on thread %p", track->name(), cblk->user, cblk->server, this);

            mixedTracks++;

            // track->mainBuffer() != mMixBuffer means there is an effect chain
            // connected to the track
            chain.clear();
            if (track->mainBuffer() != mMixBuffer) {
                chain = getEffectChain_l(track->sessionId());
                // Delegate volume control to effect in track effect chain if needed
                if (chain != 0) {
                    tracksWithEffect++;
                } else {
                    LOGW("prepareTracks_l(): track %08x attached to effect but no chain found on session %d",
                            track->name(), track->sessionId());
                }
            }

            int param = AudioMixer::VOLUME;
            if (track->mFillingUpStatus == Track::FS_FILLED) {
                // no ramp for the first volume setting
                track->mFillingUpStatus = Track::FS_ACTIVE;
                if (track->mState == TrackBase::RESUMING) {
                    track->mState = TrackBase::ACTIVE;
                    param = AudioMixer::RAMP_VOLUME;
                }
            } else if (cblk->server != 0) {
                // If the track is stopped before the first frame was mixed,
                // do not apply ramp
                param = AudioMixer::RAMP_VOLUME;
            }

            // compute volume for this track
            uint32_t vl, vr, va;
            if (track->isMuted() || track->isPausing() ||
                mStreamTypes[track->type()].mute) {
                vl = vr = va = 0;
                if (track->isPausing()) {
                    track->setPaused();
                }
            } else {

                // read original volumes with volume control
                float typeVolume = mStreamTypes[track->type()].volume;
#ifdef LVMX
                bool streamMute=false;
                // read the volume from the LivesVibes audio engine.
                if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType))
                {
                    LifeVibes::getStreamVolumes(audioOutputType, track->type(), &typeVolume, &streamMute);
                    if (streamMute) {
                        typeVolume = 0;
                    }
                }
#endif
                float v = masterVolume * typeVolume;
                vl = (uint32_t)(v * cblk->volume[0]) << 12;
                vr = (uint32_t)(v * cblk->volume[1]) << 12;

                va = (uint32_t)(v * cblk->sendLevel);
            }
            // Delegate volume control to effect in track effect chain if needed
            if (chain != 0 && chain->setVolume_l(&vl, &vr)) {
                // Do not ramp volume if volume is controlled by effect
                param = AudioMixer::VOLUME;
                track->mHasVolumeController = true;
            } else {
                // force no volume ramp when volume controller was just disabled or removed
                // from effect chain to avoid volume spike
                if (track->mHasVolumeController) {
                    param = AudioMixer::VOLUME;
                }
                track->mHasVolumeController = false;
            }

            // Convert volumes from 8.24 to 4.12 format
            int16_t left, right, aux;
            uint32_t v_clamped = (vl + (1 << 11)) >> 12;
            if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
            left = int16_t(v_clamped);
            v_clamped = (vr + (1 << 11)) >> 12;
            if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
            right = int16_t(v_clamped);

            if (va > MAX_GAIN_INT) va = MAX_GAIN_INT;
            aux = int16_t(va);

#ifdef LVMX
            if ( tracksConnectedChanged || stateChanged )
            {
                // only do the ramp when the volume is changed by the user / application
                param = AudioMixer::VOLUME;
            }
#endif

            // XXX: these things DON'T need to be done each time
            // As noted earlier, getNextBuffer is invoked as t.bufferProvider->getNextBuffer(&b);
            // if our guess is right, this is where bufferProvider gets assigned.
            mAudioMixer->setBufferProvider(track);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setBufferProvider(AudioBufferProvider* buffer)
{
    mState.tracks[ mActiveTrack ].bufferProvider = buffer;
    return NO_ERROR;
}
----------------------------------------------------------------
            // This is the AudioMixer::enable function shown above.
            mAudioMixer->enable(AudioMixer::MIXING);

            // And this is the AudioMixer::setParameter function shown above.
            mAudioMixer->setParameter(param, AudioMixer::VOLUME0, (void *)left);
            mAudioMixer->setParameter(param, AudioMixer::VOLUME1, (void *)right);
            mAudioMixer->setParameter(param, AudioMixer::AUXLEVEL, (void *)aux);
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::FORMAT, (void *)track->format());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::CHANNEL_COUNT, (void *)track->channelCount());
            mAudioMixer->setParameter(
                AudioMixer::RESAMPLE,
                AudioMixer::SAMPLE_RATE,
                (void *)(cblk->sampleRate));
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::AUX_BUFFER, (void *)track->auxBuffer());

            // reset retry count
            track->mRetryCount = kMaxTrackRetries;
            mixerStatus = MIXER_TRACKS_READY;
        } else {
            //LOGV("track %d u=%08x, s=%08x [NOT READY] on thread %p", track->name(), cblk->user, cblk->server, this);
            if (track->isStopped()) {
                track->reset();
            }
            if (track->isTerminated() || track->isStopped() || track->isPaused()) {
                // We have consumed all the buffers of this track.
                // Remove it from the list of active tracks.
                tracksToRemove->add(track);
            } else {
                // No buffers for this track. Give it a few chances to
                // fill a buffer, then remove it from active list.
                // In other words: if the track has no data yet, give it a chance to fill up;
                // if there is still nothing after mRetryCount tries, chop it from the active list.
                if (--(track->mRetryCount) <= 0) {
                    LOGV("BUFFER TIMEOUT: remove(%d) from active list on thread %p", track->name(), this);
                    tracksToRemove->add(track);
                    // indicate to client process that the track was disabled because of underrun
                    cblk->flags |= CBLK_DISABLED_ON;
                } else if (mixerStatus != MIXER_TRACKS_READY) {
                    mixerStatus = MIXER_TRACKS_ENABLED;
                }
            }
            mAudioMixer->disable(AudioMixer::MIXING);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::disable(int name)
{
    switch (name) {
    case MIXING: {
        if (mState.tracks[ mActiveTrack ].enabled != 0) {
            mState.tracks[ mActiveTrack ].enabled = 0;
            LOGV("disable(%d)", mActiveTrack);
            invalidateState(1<<mActiveTrack);
        }
    } break;
    default:
        return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}
Note that invalidateState is called from here as well.
----------------------------------------------------------------
        }
    }

    // Get rid of the tracks that need to go.
    // remove all the tracks that need to be...
    count = tracksToRemove->size();
    if (UNLIKELY(count)) {
        for (size_t i=0 ; i<count ; i++) {
            const sp<Track>& track = tracksToRemove->itemAt(i);
            mActiveTracks.remove(track);
            if (track->mainBuffer() != mMixBuffer) {
                chain = getEffectChain_l(track->sessionId());
                if (chain != 0) {
                    LOGV("stopping track on chain %p for session Id: %d", chain.get(), track->sessionId());
                    chain->stopTrack();
                }
            }
            if (track->isTerminated()) {
                mTracks.remove(track);
                deleteTrackName_l(track->mName);
            }
        }
    }

    // mix buffer must be cleared if all tracks are connected to an
    // effect chain as in this case the mixer will not write to
    // mix buffer and track effects will accumulate into it
    if (mixedTracks != 0 && mixedTracks == tracksWithEffect) {
        memset(mMixBuffer, 0, mFrameCount * mChannelCount * sizeof(int16_t));
    }

    return mixerStatus;
}
----------------------------------------------------------------
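
Also worth isolating from prepareTracks_l is the volume-format conversion: vl/vr are built up in 8.24 fixed point (note the << 12 applied to the 4.12 cblk volume), then rounded at bit 11 and clamped to MAX_GAIN_INT to produce the 4.12 values AudioMixer consumes. A compact model of that conversion (MAX_GAIN_INT assumed to be unity gain in 4.12, as in AOSP):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <cstdint>

// Model of the 8.24 -> 4.12 volume conversion in prepareTracks_l above.
static const uint32_t MAX_GAIN_INT = 0x1000; // assumed: unity gain in 4.12

static int16_t convert8_24to4_12(uint32_t v8_24) {
    uint32_t v_clamped = (v8_24 + (1 << 11)) >> 12; // round to nearest at bit 11
    if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
    return int16_t(v_clamped);
}

int main() {
    uint32_t unity8_24 = 1u << 24;                // gain 1.0 in 8.24
    uint32_t half8_24  = 1u << 23;                // gain 0.5 in 8.24
    int16_t left  = convert8_24to4_12(unity8_24); // 0x1000
    int16_t right = convert8_24to4_12(half8_24);  // 0x0800
    return (left == 0x1000 && right == 0x0800) ? 0 : 1;
}
----------------------------------------------------------------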
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
bool AudioFlinger::MixerThread::threadLoop()
{
    Vector< sp<Track> > tracksToRemove;
    uint32_t mixerStatus = MIXER_IDLE;
    nsecs_t standbyTime = systemTime();
    size_t mixBufferSize = mFrameCount * mFrameSize;
    // FIXME: Relaxed timing because of a certain device that can't meet latency
    // Should be reduced to 2x after the vendor fixes the driver issue
    nsecs_t maxPeriod = seconds(mFrameCount) / mSampleRate * 3;
    nsecs_t lastWarning = 0;
    bool longStandbyExit = false;
    uint32_t activeSleepTime = activeSleepTimeUs();
    uint32_t idleSleepTime = idleSleepTimeUs();
    uint32_t sleepTime = idleSleepTime;
    Vector< sp<EffectChain> > effectChains;

    while (!exitPending())
    {
        processConfigEvents();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioFlinger::ThreadBase::processConfigEvents()
{
    mLock.lock();
    while(!mConfigEvents.isEmpty()) {
        LOGV("processConfigEvents() remaining events %d", mConfigEvents.size());
        ConfigEvent *configEvent = mConfigEvents[0];
        mConfigEvents.removeAt(0);
        // release mLock before locking AudioFlinger mLock: lock order is always
        // AudioFlinger then ThreadBase to avoid cross deadlock
        mLock.unlock();
        mAudioFlinger->mLock.lock();
        audioConfigChanged_l(configEvent->mEvent, configEvent->mParam);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// audioConfigChanged_l() must be called with AudioFlinger::mLock held
void AudioFlinger::audioConfigChanged_l(int event, int ioHandle, void *param2)
{
    size_t size = mNotificationClients.size();
    for (size_t i = 0; i < size; i++) {
        mNotificationClients.valueAt(i)->client()->ioConfigChanged(event, ioHandle, param2);
    }
}
----------------------------------------------------------------
        mAudioFlinger->mLock.unlock();
        delete configEvent;
        mLock.lock();
    }
    mLock.unlock();
}
----------------------------------------------------------------


        mixerStatus = MIXER_IDLE;
        { // scope for mLock

            Mutex::Autolock _l(mLock);

            if (checkForNewParameters_l()) {
                mixBufferSize = mFrameCount * mFrameSize;
                // FIXME: Relaxed timing because of a certain device that can't meet latency
                // Should be reduced to 2x after the vendor fixes the driver issue
                maxPeriod = seconds(mFrameCount) / mSampleRate * 3;
                activeSleepTime = activeSleepTimeUs();
                idleSleepTime = idleSleepTimeUs();
            }

            const SortedVector< wp<Track> >& activeTracks = mActiveTracks;

            // put audio hardware into standby after short delay
            if UNLIKELY((!activeTracks.size() && systemTime() > standbyTime) ||
                        mSuspended) {
                if (!mStandby) {
                    LOGV("Audio hardware entering standby, mixer %p, mSuspended %d\n", this, mSuspended);
                    mOutput->standby();
                    mStandby = true;
                    mBytesWritten = 0;
                }

                if (!activeTracks.size() && mConfigEvents.isEmpty()) {
                    // we're about to wait, flush the binder command buffer
                    IPCThreadState::self()->flushCommands();

                    if (exitPending()) break;

                    // wait until we have something to do...
                    LOGV("MixerThread %p TID %d going to sleep\n", this, gettid());
                    mWaitWorkCV.wait(mLock);
                    LOGV("MixerThread %p TID %d waking up\n", this, gettid());

                    if (mMasterMute == false) {
                        char value[PROPERTY_VALUE_MAX];
                        property_get("ro.audio.silent", value, "0");
                        if (atoi(value)) {
                            LOGD("Silence is golden");
                            setMasterMute(true);
                        }
                    }

                    standbyTime = systemTime() + kStandbyTimeInNsecs;
                    sleepTime = idleSleepTime;
                    continue;
                }
            }
            // Here prepareTracks_l is called.
            mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);


            // prevent any changes in effect chain list and in each effect chain
            // during mixing and effect process as the audio buffers could be deleted
            // or modified if an effect is created or deleted
            lockEffectChains_l(effectChains);
        }


if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
// 这儿调用了前面介绍的AudioMixer::process函数
// mix buffers...
mAudioMixer->process();
sleepTime = 0;
standbyTime = systemTime() + kStandbyTimeInNsecs;
//TODO: delay standby when effects have a tail
} else {
// If no tracks are ready, sleep once for the duration of an output
// buffer size, then write 0s to the output
if (sleepTime == 0) {
if (mixerStatus == MIXER_TRACKS_ENABLED) {
sleepTime = activeSleepTime;
} else {
sleepTime = idleSleepTime;
}
} else if (mBytesWritten != 0 ||
(mixerStatus == MIXER_TRACKS_ENABLED && longStandbyExit)) {
memset (mMixBuffer, 0, mixBufferSize);
sleepTime = 0;
LOGV_IF((mBytesWritten == 0 && (mixerStatus == MIXER_TRACKS_ENABLED && longStandbyExit)), "anticipated start");
}
// TODO add standby time extension fct of effect tail
}


        if (mSuspended) {
            sleepTime = suspendSleepTimeUs();
        }
        // Write the data to the hardware.
        // sleepTime == 0 means we must write to audio hardware
        if (sleepTime == 0) {
            for (size_t i = 0; i < effectChains.size(); i ++) {
                effectChains[i]->process_l();
            }
            // enable changes in effect chain
            unlockEffectChains(effectChains);
#ifdef LVMX
            int audioOutputType = LifeVibes::getMixerType(mId, mType);
            if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType)) {
                LifeVibes::process(audioOutputType, mMixBuffer, mixBufferSize);
            }
#endif
            mLastWriteTime = systemTime();
            mInWrite = true;
            mBytesWritten += mixBufferSize;

            // This is the actual write.
            int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
            if (bytesWritten < 0) mBytesWritten -= mixBufferSize;
            mNumWrites++;
            mInWrite = false;
            nsecs_t now = systemTime();
            nsecs_t delta = now - mLastWriteTime;
            if (delta > maxPeriod) {
                mNumDelayedWrites++;
                if ((now - lastWarning) > kWarningThrottle) {
                    LOGW("write blocked for %llu msecs, %d delayed writes, thread %p",
                            ns2ms(delta), mNumDelayedWrites, this);
                    lastWarning = now;
                }
                if (mStandby) {
                    longStandbyExit = true;
                }
            }
            mStandby = false;
        } else {
            // enable changes in effect chain
            unlockEffectChains(effectChains);
            usleep(sleepTime);
        }


        // finally let go of all our tracks, without the lock held
        // since we can't guarantee the destructors won't acquire that
        // same lock.
        tracksToRemove.clear();

        // Effect chains will be actually deleted here if they were removed from
        // mEffectChains list during mixing or effects processing
        effectChains.clear();
    }

    if (!mStandby) {
        mOutput->standby();
    }

    LOGV("MixerThread %p exiting", this);
    return false;
}
----------------------------------------------------------------


Earlier, while reading AudioMixer::process__OneTrack16BitsStereoNoResampling, we left one question open:
how is the data placed into track_t's mainBuffer consumed?
That function's main job is to copy data from audio_track_cblk_t into track_t's mainBuffer.
Let us first trace where mainBuffer comes from.


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mainBuffer is a member of track_t.
AudioMixer::setParameter assigns it:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
if (name == MAIN_BUFFER) {
    if (mState.tracks[ mActiveTrack ].mainBuffer != valueBuf) {
        mState.tracks[ mActiveTrack ].mainBuffer = valueBuf;
        LOGV("setParameter(TRACK, MAIN_BUFFER, %p)", valueBuf);
        invalidateState(1<<mActiveTrack);
    }
    return NO_ERROR;
}
----------------------------------------------------------------
AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setParameter:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mAudioMixer->setParameter(
    AudioMixer::TRACK,
    AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());

This track was also used earlier:
mAudioMixer->setBufferProvider(track);


Where track comes from:
Track* const track = t.get();


Where t comes from:
sp<Track> t = activeTracks[i].promote();


activeTracks is a parameter of AudioFlinger::MixerThread::prepareTracks_l.


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
prepareTracks_l is called in AudioFlinger::MixerThread::threadLoop:
mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);


Where activeTracks comes from:
const SortedVector< wp<Track> >& activeTracks = mActiveTracks;


mActiveTracks is a member variable of PlaybackThread.


AudioFlinger::PlaybackThread::addTrack_l is what adds entries to mActiveTracks.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// addTrack_l() must be called with ThreadBase::mLock held
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    status_t status = ALREADY_EXISTS;

    // set retry count for buffer fill
    track->mRetryCount = kMaxTrackStartupRetries;
    if (mActiveTracks.indexOf(track) < 0) {
        // the track is newly added, make sure it fills up all its
        // buffers before playing. This is to ensure the client will
        // effectively get the latency it requested.
        track->mFillingUpStatus = Track::FS_FILLING;
        track->mResetDone = false;
        mActiveTracks.add(track);
        if (track->mainBuffer() != mMixBuffer) {
            sp<EffectChain> chain = getEffectChain_l(track->sessionId());
            if (chain != 0) {
                LOGV("addTrack_l() starting track on chain %p for session %d", chain.get(), track->sessionId());
                chain->startTrack();
            }
        }

        status = NO_ERROR;
    }

    LOGV("mWaitWorkCV.broadcast");
    mWaitWorkCV.broadcast();

    return status;
}
----------------------------------------------------------------


AudioFlinger::PlaybackThread::Track::start calls AudioFlinger::PlaybackThread::addTrack_l.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::start()
{
    status_t status = NO_ERROR;
    LOGV("start(%d), calling thread %d session %d",
            mName, IPCThreadState::self()->getCallingPid(), mSessionId);
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        int state = mState;
        // here the track could be either new, or restarted
        // in both cases "unstop" the track
        if (mState == PAUSED) {
            mState = TrackBase::RESUMING;
            LOGV("PAUSED => RESUMING (%d) on thread %p", mName, this);
        } else {
            mState = TrackBase::ACTIVE;
            LOGV("? => ACTIVE (%d) on thread %p", mName, this);
        }

        if (!isOutputTrack() && state != ACTIVE && state != RESUMING) {
            thread->mLock.unlock();
            status = AudioSystem::startOutput(thread->id(),
                    (AudioSystem::stream_type)mStreamType,
                    mSessionId);
            thread->mLock.lock();
        }
        if (status == NO_ERROR) {
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            playbackThread->addTrack_l(this);
        } else {
            mState = state;
        }
    } else {
        status = BAD_VALUE;
    }
    return status;
}
----------------------------------------------------------------


AudioFlinger::TrackHandle::start calls AudioFlinger::PlaybackThread::Track::start.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();
}
----------------------------------------------------------------


Digging further upward from here gets awkward, so let us instead jump to the top and drill down.


From the Java test code we know that after write has finished writing the data, play is normally called, as in testSetStereoVolumeMid:
track.write(data, 0, data.length);
track.play();

The implementation of play:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
/**
 * Starts playing an AudioTrack.
 * @throws IllegalStateException
 */
public void play()
throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw(new IllegalStateException("play() called on uninitialized AudioTrack."));
    }

    synchronized(mPlayStateLock) {
        // This calls down into the native start function.
        native_start();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
static void
android_media_AudioTrack_start(JNIEnv *env, jobject thiz)
{
    // This AudioTrack pointer was stashed in the Java object when the AudioTrack was created.
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
        thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    if (lpTrack == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for start()");
        return;
    }

    lpTrack->start();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioTrack::start()
{
    sp<AudioTrackThread> t = mAudioTrackThread;
    status_t status;

    LOGV("start %p", this);
    if (t != 0) {
        if (t->exitPending()) {
            if (t->requestExitAndWait() == WOULD_BLOCK) {
                LOGE("AudioTrack::start called from thread");
                return;
            }
        }
        t->mLock.lock();
    }

    if (android_atomic_or(1, &mActive) == 0) {
        mNewPosition = mCblk->server + mUpdatePeriod;
        mCblk->bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS;
        mCblk->waitTimeMs = 0;
        mCblk->flags &= ~CBLK_DISABLED_ON;
        if (t != 0) {
            t->run("AudioTrackThread", THREAD_PRIORITY_AUDIO_CLIENT);
        } else {
            setpriority(PRIO_PROCESS, 0, THREAD_PRIORITY_AUDIO_CLIENT);
        }

        if (mCblk->flags & CBLK_INVALID_MSK) {
            LOGW("start() track %p invalidated, creating a new one", this);
            // no need to clear the invalid flag as this cblk will not be used anymore
            // force new track creation
            status = DEAD_OBJECT;
        } else {
            // mAudioTrack here was created and assigned in AudioTrack::createTrack:
            //     sp<IAudioTrack> track = audioFlinger->createTrack()
            //     mAudioTrack = track;
            // AudioFlinger::createTrack returns a TrackHandle object:
            //     trackHandle = new TrackHandle(track);
            //     return trackHandle;
            // In other words, what is called here is AudioFlinger::TrackHandle::start.
            status = mAudioTrack->start();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::TrackHandle::start() {
    // The TrackHandle constructor was handed a track
    // that was created via track = thread->createTrack_l().
    // AudioFlinger::PlaybackThread::createTrack_l returns an
    // AudioFlinger::PlaybackThread::Track object, so what is called here
    // is AudioFlinger::PlaybackThread::Track::start.

    // !!!! The two ends meet!

    // In other words, the track object that AudioFlinger::MixerThread::prepareTracks_l
    // passes to AudioMixer::setParameter is exactly the Track created when we created
    // our AudioTrack, which means AudioMixer::process__OneTrack16BitsStereoNoResampling
    // really copies the data into the main buffer of that Track.
    return mTrack->start();
}
----------------------------------------------------------------
        }
        if (status == DEAD_OBJECT) {
            LOGV("start() dead IAudioTrack: creating a new one");
            status = createTrack(mStreamType, mCblk->sampleRate, mFormat, mChannelCount,
                mFrameCount, mFlags, mSharedBuffer, getOutput(), false);
            if (status == NO_ERROR) {
                status = mAudioTrack->start();
                if (status == NO_ERROR) {
                    mNewPosition = mCblk->server + mUpdatePeriod;
                }
            }
        }
        if (status != NO_ERROR) {
            LOGV("start() failed");
            android_atomic_and(~1, &mActive);
            if (t != 0) {
                t->requestExit();
            } else {
                setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_NORMAL);
            }
        }
    }

    if (t != 0) {
        t->mLock.unlock();
    }
}
----------------------------------------------------------------
}
----------------------------------------------------------------
        mPlayState = PLAYSTATE_PLAYING;
    }
}
----------------------------------------------------------------


One loose end remains.
AudioFlinger::MixerThread::threadLoop writes to the hardware with:
int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
where mMixBuffer is a member variable of MixerThread.


Yet AudioMixer::process__OneTrack16BitsStereoNoResampling copies the data into the track's mainBuffer.
How do these two get connected?


The following chain appears to tie them together:
1. After AudioFlinger::createTrack successfully calls thread->createTrack_l, it may (depending on conditions) call AudioFlinger::moveEffectChain_l:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
track = thread->createTrack_l(client, streamType, sampleRate, format,
        channelCount, frameCount, sharedBuffer, lSessionId, &lStatus);

// move effect chain to this output thread if an effect on same session was waiting
// for a track to be created
if (lStatus == NO_ERROR && effectThread != NULL) {
    Mutex::Autolock _dl(thread->mLock);
    Mutex::Autolock _sl(effectThread->mLock);
    moveEffectChain_l(lSessionId, effectThread, thread, true);
}
----------------------------------------------------------------


2. AudioFlinger::moveEffectChain_l calls AudioFlinger::PlaybackThread::removeEffectChain_l:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// remove chain first. This is useful only if reconfiguring effect chain on same output thread,
// so that a new chain is created with correct parameters when first effect is added. This is
// otherwise unecessary as removeEffect_l() will remove the chain when last effect is
// removed.
srcThread->removeEffectChain_l(chain);
----------------------------------------------------------------


3. AudioFlinger::PlaybackThread::removeEffectChain_l sets the PlaybackThread's mMixBuffer as the Track's main buffer:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
size_t AudioFlinger::PlaybackThread::removeEffectChain_l(const sp<EffectChain>& chain)
{
    int session = chain->sessionId();

    LOGV("removeEffectChain_l() %p from thread %p for session %d", chain.get(), this, session);

    for (size_t i = 0; i < mEffectChains.size(); i++) {
        if (chain == mEffectChains[i]) {
            mEffectChains.removeAt(i);
            // detach all tracks with same session ID from this chain
            for (size_t i = 0; i < mTracks.size(); ++i) {
                sp<Track> track = mTracks[i];
                if (session == track->sessionId()) {
                    track->setMainBuffer(mMixBuffer);
                }
            }
            break;
        }
    }
    return mEffectChains.size();
}
----------------------------------------------------------------


In AudioFlinger::openOutput, once the HAL's openOutputStream has successfully opened an output, a MixerThread object is created
and added to the member variable mPlaybackThreads:
thread = new MixerThread(this, output, id, *pDevices);
mPlaybackThreads.add(id, thread);
###########################################################


&&&&&&&&&&&&&&&&&&&&&&&Summary&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
MixerThread's threadLoop first checks whether data has already been written into audio_track_cblk_t through the AudioTrack object.
If it has, the data is copied into the Track's main buffer.
When the AudioTrack was created, the Track's main buffer was already associated with the PlaybackThread's mix buffer.
threadLoop then calls the HAL's write function to push the data out to the hardware.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
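
To tie the whole chain together, here is a toy, single-threaded re-enactment of the data path (all types are simplified placeholders for the AOSP classes, not real framework code): the client advances user by writing into the shared ring, and the mixer consumes what framesReady() reports, advancing server and pushing the frames to the "hardware":
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stands in for audio_track_cblk_t plus its shared ring buffer.
struct Cblk {
    uint64_t user;   // frames written by the client
    uint64_t server; // frames consumed by the mixer
    std::vector<int16_t> ring;
    explicit Cblk(size_t frames) : user(0), server(0), ring(frames) {}
    uint64_t framesReady() const { return user - server; }
};

// Stands in for AudioTrack::write() advancing the user counter.
static void clientWrite(Cblk& cblk, const int16_t* data, size_t frames) {
    for (size_t i = 0; i < frames; i++)
        cblk.ring[(cblk.user + i) % cblk.ring.size()] = data[i];
    cblk.user += frames;
}

// Stands in for one threadLoop() pass: framesReady() -> mix -> write().
static void mixerThreadLoopOnce(Cblk& cblk, std::vector<int16_t>& mixBuffer) {
    uint64_t ready = cblk.framesReady();       // prepareTracks_l's readiness check
    size_t n = (size_t)std::min<uint64_t>(ready, mixBuffer.size());
    for (size_t i = 0; i < n; i++)             // AudioMixer::process()
        mixBuffer[i] = cblk.ring[(cblk.server + i) % cblk.ring.size()];
    cblk.server += n;                          // releaseBuffer()/stepServer()
    printf("wrote %zu frames to 'hardware'\n", n); // mOutput->write(...)
}

int main() {
    Cblk cblk(1024);
    int16_t tone[256] = {0};
    clientWrite(cblk, tone, 256);         // AudioTrack.write(...)
    std::vector<int16_t> mixBuffer(128);
    mixerThreadLoopOnce(cblk, mixBuffer); // consumes 128 of the 256 ready
    mixerThreadLoopOnce(cblk, mixBuffer); // consumes the remaining 128
    return 0;
}
----------------------------------------------------------------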