Android Audio Code Analysis 9 - The AudioTrack::write Function

This article takes a close look at how the write function of Android's AudioTrack class is implemented, from the initial state checks and parameter validation down to the point where the data is actually handed to the native AudioTrack object. It focuses on the call flow of the native-side function android_media_AudioTrack_native_write and on how it delegates to the native AudioTrack's write method. Along the way it traces the role AudioFlinger plays during AudioTrack construction, including where the audio_track_cblk_t structure is created and how it holds the data needed for playback. Finally, it explains how the data is handled differently in stream mode (MODE_STREAM) and static mode (MODE_STATIC), and how it ultimately gets written and played.
In the earlier sample code testWriteByte, after creating the AudioTrack object we called its write function to play audio.
Today, let's look at how write is implemented.


***************************************** Source *************************************************
public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    if ((mDataLoadMode == MODE_STATIC)
            && (mState == STATE_NO_STATIC_DATA)
            && (sizeInBytes > 0)) {
        mState = STATE_INITIALIZED;
    }

    if (mState != STATE_INITIALIZED) {
        return ERROR_INVALID_OPERATION;
    }

    if ((audioData == null) || (offsetInBytes < 0) || (sizeInBytes < 0)
            || (offsetInBytes + sizeInBytes > audioData.length)) {
        return ERROR_BAD_VALUE;
    }

    return native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat);
}
**********************************************************************************************
Source path:
frameworks\base\media\java\android\media\AudioTrack.java


################# Commentary ################################################
/**
* Writes the audio data to the audio hardware for playback.
* @param audioData the array that holds the data to play.
* @param offsetInBytes the offset expressed in bytes in audioData where the data to play
* starts.
* @param sizeInBytes the number of bytes to read in audioData after the offset.
* @return the number of bytes that were written or {@link #ERROR_INVALID_OPERATION}
* if the object wasn't properly initialized, or {@link #ERROR_BAD_VALUE} if
* the parameters don't resolve to valid data and indexes.
*/
// One point worth noting in the javadoc: offsetInBytes is the position inside audioData at which the data to be played starts.
public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    if ((mDataLoadMode == MODE_STATIC)
            && (mState == STATE_NO_STATIC_DATA)
            && (sizeInBytes > 0)) {
        mState = STATE_INITIALIZED;
    }

    if (mState != STATE_INITIALIZED) {
        return ERROR_INVALID_OPERATION;
    }

    if ((audioData == null) || (offsetInBytes < 0) || (sizeInBytes < 0)
            || (offsetInBytes + sizeInBytes > audioData.length)) {
        return ERROR_BAD_VALUE;
    }

    // Everything above is just state and parameter checking; the real work happens in native code.
    return native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Path: frameworks\base\core\jni\android_media_AudioTrack.cpp
The corresponding native-side function is android_media_AudioTrack_native_write, implemented as follows:
static jint android_media_AudioTrack_native_write(JNIEnv *env, jobject thiz,
                                                  jbyteArray javaAudioData,
                                                  jint offsetInBytes, jint sizeInBytes,
                                                  jint javaAudioFormat) {
    jbyte* cAudioData = NULL;
    AudioTrack *lpTrack = NULL;
    //LOGV("android_media_AudioTrack_native_write(offset=%d, sizeInBytes=%d) called",
    //    offsetInBytes, sizeInBytes);

    // get the audio track to load with samples
    // When the AudioTrack object was created, its pointer was stored in a field
    // of the Java object; now that we need it, we fetch it back out.
    lpTrack = (AudioTrack *)env->GetIntField(thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    if (lpTrack == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for write()");
        return 0;
    }

    // get the pointer for the audio data from the java array
    if (javaAudioData) {
        cAudioData = (jbyte *)env->GetPrimitiveArrayCritical(javaAudioData, NULL);
        if (cAudioData == NULL) {
            LOGE("Error retrieving source of audio data to play, can't play");
            return 0; // out of memory or no data to load
        }
    } else {
        LOGE("NULL java array of audio data to play, can't play");
        return 0;
    }
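++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A quick aside on GetPrimitiveArrayCritical: it pins (or copies) the Java array and hands back a direct pointer, and until the matching ReleasePrimitiveArrayCritical the code may only do raw memory work - no other JNI calls, no long blocking - because the VM may suspend garbage collection while the array is pinned. Here is a minimal hedged sketch of that pairing (muteSamples is a hypothetical helper, not the article's code):

#include <jni.h>

// Hypothetical sketch of the Get/Release pairing that native_write relies on.
static void muteSamples(JNIEnv* env, jbyteArray array) {
    jsize len = env->GetArrayLength(array);      // do JNI calls before pinning
    jbyte* data = (jbyte*)env->GetPrimitiveArrayCritical(array, NULL);
    if (data == NULL) return;                    // out of memory
    for (jsize i = 0; i < len; i++) data[i] = 0; // raw memory work only
    env->ReleasePrimitiveArrayCritical(array, data, 0); // 0 = copy back and unpin
}

Note that native_write only releases the array at the very end of the function, after writeToTrack returns, and writeToTrack can block inside obtainBuffer; holding a critical section across a potentially blocking call is one of the more delicate aspects of this code.
----------------------------------------------------------------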


    jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes);

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
jint writeToTrack(AudioTrack* pTrack, jint audioFormat, jbyte* data,
                  jint offsetInBytes, jint sizeInBytes) {
    // give the data to the native AudioTrack object (the data starts at the offset)
    ssize_t written = 0;
    // regular write() or copy the data to the AudioTrack's shared memory?
    // Check whether sharedBuffer is 0.
    // In stream mode (MODE_STREAM) sharedBuffer is 0: no shared memory is needed,
    // because the data is written piece by piece while playback runs.
    // In static mode (MODE_STATIC) there is shared memory: the data is written once
    // up front, and later playback simply reads it back out of the shared memory.
    if (pTrack->sharedBuffer() == 0) {
        // In stream mode, call the AudioTrack object's write function directly.
        written = pTrack->write(data + offsetInBytes, sizeInBytes);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ssize_t AudioTrack::write(const void* buffer, size_t userSize)
{
    // If a shared buffer exists, we should never have ended up here.
    if (mSharedBuffer != 0) return INVALID_OPERATION;

    // Never trust the caller.
    if (ssize_t(userSize) < 0) {
        // sanity-check. user is most-likely passing an error code.
        LOGE("AudioTrack::write(buffer=%p, size=%u (%d)",
                buffer, userSize, userSize);
        return BAD_VALUE;
    }

    LOGV("write %p: %d bytes, mActive=%d", this, userSize, mActive);

    ssize_t written = 0;
    const int8_t *src = (const int8_t *)buffer;
    Buffer audioBuffer;
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Let's look at the definition of Buffer:
/* Create Buffer on the stack and pass it to obtainBuffer()
 * and releaseBuffer().
 */

class Buffer
{
public:
    enum {
        MUTE = 0x00000001
    };
    uint32_t    flags;
    int         channelCount;
    int         format;
    size_t      frameCount;
    size_t      size;
    union {
        void*   raw;
        short*  i16;
        int8_t* i8;
    };
};
----------------------------------------------------------------


    do {
        audioBuffer.frameCount = userSize / frameSize();

        // Calling obtainBuffer() with a negative wait count causes
        // an (almost) infinite wait time.
        // Obtain a buffer to write the data into.
        status_t err = obtainBuffer(&audioBuffer, -1);
        if (err < 0) {
            // out of buffers, return #bytes written
            if (err == status_t(NO_MORE_BUFFERS))
                break;
            return ssize_t(err);
        }
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, int32_t waitCount)
{
    int active;
    status_t result;
    audio_track_cblk_t* cblk = mCblk;
    uint32_t framesReq = audioBuffer->frameCount;
    uint32_t waitTimeMs = (waitCount < 0) ? cblk->bufferTimeoutMs : WAIT_PERIOD_MS;

    audioBuffer->frameCount = 0;
    audioBuffer->size = 0;

    // So what exactly is audio_track_cblk_t? It turns out to be rather important,
    // and we have run into it before. Let's track down exactly where it is born.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The audio_track_cblk_t object is created when the AudioTrack itself is created.
We won't re-trace how the AudioTrack constructor gets called; the call chain starting from the constructor is as follows:
1. The AudioTrack constructor calls AudioTrack::set.
    mStatus = set(streamType, sampleRate, format, channels,
            0, flags, cbf, user, notificationFrames,
            sharedBuffer, false, sessionId);
2. AudioTrack::set calls AudioTrack::createTrack.
    // create the IAudioTrack
    status_t status = createTrack(streamType, sampleRate, format, channelCount,
            frameCount, flags, sharedBuffer, output, true);
3. AudioTrack::createTrack calls AudioFlinger::createTrack,
    sp<IAudioTrack> track = audioFlinger->createTrack(getpid(),
            streamType,
            sampleRate,
            format,
            channelCount,
            frameCount,
            ((uint16_t)flags) << 16,
            sharedBuffer,
            output,
            &mSessionId,
            &status);
and assigns the member variable mCblk:
    mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
    mCblk->flags |= CBLK_DIRECTION_OUT;
4. AudioFlinger::createTrack calls AudioFlinger::PlaybackThread::createTrack_l.
    track = thread->createTrack_l(client, streamType, sampleRate, format,
            channelCount, frameCount, sharedBuffer, lSessionId, &lStatus);
5. AudioFlinger::PlaybackThread::createTrack_l creates an AudioFlinger::PlaybackThread::Track object.
    track = new Track(this, client, streamType, sampleRate, format,
            channelCount, frameCount, sharedBuffer, sessionId);
6. The class AudioFlinger::PlaybackThread::Track is a subclass of AudioFlinger::ThreadBase::TrackBase.
7. The final birthplace is in the constructor of AudioFlinger::ThreadBase::TrackBase:
    mCblkMemory = client->heap()->allocate(size);
    mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
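To visualize what this allocation produces, here is a standalone sketch. It is a simplified model, not the AOSP code: CblkHeader is an invented stand-in for audio_track_cblk_t, and malloc stands in for client->heap()->allocate:

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <new>

// Simplified stand-in for audio_track_cblk_t: only the fields that matter for
// the layout (the real struct also carries locks, counters, flags, and so on).
struct CblkHeader {
    uint64_t user = 0;
    uint64_t server = 0;
    void*    buffers = nullptr; // the pointer to the sample data
};

int main() {
    const size_t frameCount = 1024, frameSize = 4; // e.g. 16-bit stereo
    const size_t total = sizeof(CblkHeader) + frameCount * frameSize;

    // One allocation holds both the control block and the audio data behind it.
    void* region = std::malloc(total);
    CblkHeader* cblk = new (region) CblkHeader();      // construct header in place
    cblk->buffers = (char*)cblk + sizeof(CblkHeader);  // data starts right after the header

    printf("header at %p, data at %p\n", (void*)cblk, cblk->buffers);
    std::free(region);
    return 0;
}

The real code does the same thing with shared memory, so that both the application process and AudioFlinger can see the control block and the samples.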
----------------------------------------------------------------
    uint32_t framesAvail = cblk->framesAvailable();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
uint32_t audio_track_cblk_t::framesAvailable()
{
    Mutex::Autolock _l(lock);
    return framesAvailable_l();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
uint32_t audio_track_cblk_t::framesAvailable_l()
{
    uint64_t u = this->user;
    uint64_t s = this->server;

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#define CBLK_DIRECTION_MSK 0x0002
#define CBLK_DIRECTION_OUT 0x0002 // this cblk is for an AudioTrack
#define CBLK_DIRECTION_IN 0x0000 // this cblk is for an AudioRecord
----------------------------------------------------------------
    // As you can see, CBLK_DIRECTION_MSK and CBLK_DIRECTION_OUT have the same value,
    // so testing CBLK_DIRECTION_MSK is effectively testing CBLK_DIRECTION_OUT.
    // Since we are using this cblk for playback, the direction here is of course OUT.
    if (flags & CBLK_DIRECTION_MSK) {
        uint64_t limit = (s < loopStart) ? s : loopStart;
        return limit + frameCount - u;
    } else {
        return frameCount + u - s;
    }
}
----------------------------------------------------------------
}
----------------------------------------------------------------
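++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Before following the wait loop below, it helps to run the user/server bookkeeping with concrete numbers. The following is a standalone model (CblkModel is an invented name; it ignores locking and assumes no loop points are set, so limit == server) of the playback-direction formulas used by framesAvailable_l, by the bufferEnd clamp in obtainBuffer, and by the userBase wrap in stepUser, both of which appear further down:

#include <cstdint>
#include <cstdio>

// user   = total frames the app has written so far,
// server = total frames the mixer has consumed so far.
struct CblkModel {
    uint64_t user = 0, server = 0;
    uint64_t userBase = 0;      // start of the current "lap" around the ring
    uint32_t frameCount = 1024; // ring capacity in frames

    // framesAvailable_l for the OUT direction, with no loop set:
    uint64_t framesAvailable() const { return server + frameCount - user; }

    // obtainBuffer clamps a request so it never crosses the end of the lap:
    uint32_t clamp(uint32_t req) const {
        uint64_t bufferEnd = userBase + frameCount;
        if (user + req > bufferEnd) req = (uint32_t)(bufferEnd - user);
        return req;
    }

    // stepUser advances user and moves userBase forward one lap on wrap:
    void stepUser(uint32_t frames) {
        user += frames;
        if (user >= userBase + frameCount) userBase += frameCount;
    }
};

int main() {
    CblkModel c;
    c.stepUser(1000); // the app has written 1000 frames
    c.server = 900;   // the mixer has consumed 900 of them
    printf("available = %llu\n", (unsigned long long)c.framesAvailable()); // 924
    printf("clamped   = %u\n", c.clamp(200)); // only 24 frames left in this lap
    c.stepUser(24);   // user reaches 1024, so userBase advances to 1024
    printf("userBase  = %llu\n", (unsigned long long)c.userBase);
    return 0;
}

The buffer(u) call seen later turns the wrapped position into an address: the sample pointer is (u - userBase) * frameSize bytes past the start of the data area.
----------------------------------------------------------------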


    // The code below loops, waiting until framesAvail becomes non-zero
    // (or bailing out through one of the error returns).
    if (framesAvail == 0) {
        cblk->lock.lock();
        goto start_loop_here;
        while (framesAvail == 0) {
            active = mActive;
            if (UNLIKELY(!active)) {
                LOGV("Not active and NO_MORE_BUFFERS");
                cblk->lock.unlock();
                return NO_MORE_BUFFERS;
            }
            if (UNLIKELY(!waitCount)) {
                cblk->lock.unlock();
                return WOULD_BLOCK;
            }
            if (!(cblk->flags & CBLK_INVALID_MSK)) {
                result = cblk->cv.waitRelative(cblk->lock, milliseconds(waitTimeMs));
            }
            if (cblk->flags & CBLK_INVALID_MSK) {
                LOGW("obtainBuffer() track %p invalidated, creating a new one", this);
                // no need to clear the invalid flag as this cblk will not be used anymore
                cblk->lock.unlock();
                goto create_new_track;
            }
            if (__builtin_expect(result!=NO_ERROR, false)) {
                cblk->waitTimeMs += waitTimeMs;
                if (cblk->waitTimeMs >= cblk->bufferTimeoutMs) {
                    // timing out when a loop has been set and we have already written upto loop end
                    // is a normal condition: no need to wake AudioFlinger up.
                    if (cblk->user < cblk->loopEnd) {
                        LOGW( "obtainBuffer timed out (is the CPU pegged?) %p "
                              "user=%08llx, server=%08llx", this, cblk->user, cblk->server);
                        //unlock cblk mutex before calling mAudioTrack->start() (see issue #1617140)
                        cblk->lock.unlock();
                        result = mAudioTrack->start();
                        if (result == DEAD_OBJECT) {
                            LOGW("obtainBuffer() dead IAudioTrack: creating a new one");
create_new_track:
                            result = createTrack(mStreamType, cblk->sampleRate, mFormat, mChannelCount,
                                                 mFrameCount, mFlags, mSharedBuffer, getOutput(), false);
                            if (result == NO_ERROR) {
                                cblk = mCblk;
                                cblk->bufferTimeoutMs = MAX_RUN_TIMEOUT_MS;
                                mAudioTrack->start();
                            }
                        }
                        cblk->lock.lock();
                    }
                    cblk->waitTimeMs = 0;
                }

                if (--waitCount == 0) {
                    cblk->lock.unlock();
                    return TIMED_OUT;
                }
            }
            // read the server count again
start_loop_here:
            framesAvail = cblk->framesAvailable_l();
        }
        cblk->lock.unlock();
    }


    // restart track if it was disabled by audioflinger due to previous underrun
    if (cblk->flags & CBLK_DISABLED_MSK) {
        cblk->flags &= ~CBLK_DISABLED_ON;
        LOGW("obtainBuffer() track %p disabled, restarting", this);
        mAudioTrack->start();
    }

    cblk->waitTimeMs = 0;

    if (framesReq > framesAvail) {
        framesReq = framesAvail;
    }

    uint64_t u = cblk->user;
    uint64_t bufferEnd = cblk->userBase + cblk->frameCount;

    if (u + framesReq > bufferEnd) {
        framesReq = (uint32_t)(bufferEnd - u);
    }

    audioBuffer->flags = mMuted ? Buffer::MUTE : 0;
    audioBuffer->channelCount = mChannelCount;
    audioBuffer->frameCount = framesReq;
    audioBuffer->size = framesReq * cblk->frameSize;
    if (AudioSystem::isLinearPCM(mFormat)) {
        audioBuffer->format = AudioSystem::PCM_16_BIT;
    } else {
        audioBuffer->format = mFormat;
    }
    audioBuffer->raw = (int8_t *)cblk->buffer(u);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void* audio_track_cblk_t::buffer(uint64_t offset) const
{
    return (int8_t *)this->buffers + (offset - userBase) * this->frameSize;
}
----------------------------------------------------------------
    active = mActive;
    return active ? status_t(NO_ERROR) : status_t(STOPPED);
}
----------------------------------------------------------------


        size_t toWrite;

        if (mFormat == AudioSystem::PCM_8_BIT && !(mFlags & AudioSystem::OUTPUT_FLAG_DIRECT)) {
            // Divide capacity by 2 to take expansion into account
            toWrite = audioBuffer.size >> 1;
            // 8 to 16 bit conversion
            int count = toWrite;
            int16_t *dst = (int16_t *)(audioBuffer.i8);
            while (count--) {
                *dst++ = (int16_t)(*src++ ^ 0x80) << 8;
            }
        } else {
            toWrite = audioBuffer.size;
            memcpy(audioBuffer.i8, src, toWrite);
            src += toWrite;
        }
        userSize -= toWrite;
        written += toWrite;

        releaseBuffer(&audioBuffer);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioTrack::releaseBuffer(Buffer* audioBuffer)
{
    audio_track_cblk_t* cblk = mCblk;
    cblk->stepUser(audioBuffer->frameCount);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
uint64_t audio_track_cblk_t::stepUser(uint32_t frameCount)
{
    uint64_t u = this->user;

    u += frameCount;
    // Ensure that user is never ahead of server for AudioRecord
    if (flags & CBLK_DIRECTION_MSK) {
        // If stepServer() has been called once, switch to normal obtainBuffer() timeout period
        if (bufferTimeoutMs == MAX_STARTUP_TIMEOUT_MS-1) {
            bufferTimeoutMs = MAX_RUN_TIMEOUT_MS;
        }
    } else if (u > this->server) {
        LOGW("stepServer occured after track reset");
        u = this->server;
    }

    if (u >= userBase + this->frameCount) {
        userBase += this->frameCount;
    }

    this->user = u;

    // Clear flow control error condition as new data has been written/read to/from buffer.
    flags &= ~CBLK_UNDERRUN_MSK;

    return u;
}
----------------------------------------------------------------
}
----------------------------------------------------------------
    } while (userSize);

    return written;
}
----------------------------------------------------------------
    } else {
        // In static mode (MODE_STATIC), copy the data into the shared memory.
        // Note that 8-bit PCM needs extra handling on the way in.
        if (audioFormat == javaAudioTrackFields.PCM16) {
            // writing to shared memory, check for capacity
            if ((size_t)sizeInBytes > pTrack->sharedBuffer()->size()) {
                sizeInBytes = pTrack->sharedBuffer()->size();
            }
            memcpy(pTrack->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
        } else if (audioFormat == javaAudioTrackFields.PCM8) {
            // data contains 8bit data we need to expand to 16bit before copying
            // to the shared memory
            // writing to shared memory, check for capacity,
            // note that input data will occupy 2X the input space due to 8 to 16bit conversion
            if (((size_t)sizeInBytes)*2 > pTrack->sharedBuffer()->size()) {
                sizeInBytes = pTrack->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)pTrack->sharedBuffer()->pointer();
            const int8_t *src = (const int8_t *)(data + offsetInBytes);
            while (count--) {
                *dst++ = (int16_t)(*src++ ^ 0x80) << 8;
            }
            // even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
            // the 8bit mixer restriction from the user of this function
            written = sizeInBytes;
        }
    }
    return written;
}
----------------------------------------------------------------


    env->ReleasePrimitiveArrayCritical(javaAudioData, cAudioData, 0);

    //LOGV("write wrote %d (tried %d) bytes in the native AudioTrack with offset %d",
    //    (int)written, (int)(sizeInBytes), (int)offsetInBytes);
    return written;
}
----------------------------------------------------------------
}
###################################################################
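One detail worth a standalone check is the 8-bit to 16-bit PCM expansion that appears in both the stream path and the static path above. The snippet below is model code (not AOSP) showing what (*src ^ 0x80) << 8 actually computes:

#include <cstdint>
#include <cstdio>

// Unsigned 8-bit PCM centers silence at 0x80; signed 16-bit centers it at 0.
// XOR with 0x80 flips the sign convention, and the shift scales to 16-bit range.
int16_t expand8to16(uint8_t s) {
    return (int16_t)((s ^ 0x80) << 8);
}

int main() {
    printf("0x00 -> %6d\n", expand8to16(0x00)); // -32768 (most negative)
    printf("0x80 -> %6d\n", expand8to16(0x80)); //      0 (silence)
    printf("0xFF -> %6d\n", expand8to16(0xFF)); //  32512 (near the positive peak)
    return 0;
}

This also explains the capacity halving in both branches: every input byte becomes two output bytes.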


&&&&&&&&&&& Summary &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
1. The playback data ultimately ends up in an audio_track_cblk_t structure.
2. The audio_track_cblk_t structure is created in the constructor of AudioFlinger's TrackBase class:
   a block of memory is first allocated from the Client's heap, and its address is then cast to an audio_track_cblk_t pointer.
   The last member of audio_track_cblk_t is the pointer to the actual sample data.
3. So far we have only written the data over to AudioFlinger; how AudioFlinger consumes it and actually plays it still needs further study.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
