[audio] AudioTrack (Part 5): Analyzing Shared Memory Creation

When creating an AudioTrack, one parameter has to be set: the transfer mode.

        MODE_STATIC: a complete PCM clip is handed to AudioFlinger in one shot; afterwards it is only read, never rewritten

        MODE_STREAM: used to play a continuous audio stream, written chunk by chunk

.setTransferMode(AudioTrack.MODE_STREAM)

public static final int MODE_STATIC = 0;

public static final int MODE_STREAM = 1;

base/core/jni/android_media_AudioTrack.cpp  android_media_AudioTrack_setup

        audio_has_proportional_frames checks whether the format is linear PCM (i.e. frame size is proportional to sample size)

        the total frameCount is computed from buffSizeInBytes

        the key parameter to watch is memoryMode

        for MODE_STREAM, the shared mem argument passed down is 0

        for MODE_STATIC, memory is allocated directly at the app layer

                allocSharedMem allocates buffSizeInBytes, the frame-aligned size passed down from the app layer

        finally lpTrack->set pushes everything down

static jint android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
                                           jobject jaa, jintArray jSampleRate,
                                           jint channelPositionMask, jint channelIndexMask,
                                           jint audioFormat, jint buffSizeInBytes, jint memoryMode,
                                           jintArray jSession, jobject jAttributionSource,
                                           jlong nativeAudioTrack, jboolean offload,
                                           jint encapsulationMode, jobject tunerConfiguration,
                                           jstring opPackageName) {


        // compute the frame count
        size_t frameCount;
        if (audio_has_proportional_frames(format)) {
            const size_t bytesPerSample = audio_bytes_per_sample(format);
            frameCount = buffSizeInBytes / (channelCount * bytesPerSample);
        } else {
            frameCount = buffSizeInBytes;
        }
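The arithmetic above can be sketched standalone. Assuming 16-bit PCM (2 bytes per sample) and stereo, a 4096-byte app buffer maps to 1024 frames; the function name here is illustrative, not the framework's:

```cpp
#include <cstddef>

// Minimal sketch of the frame-count computation above. For proportional
// (linear PCM) formats a frame is one sample per channel; for compressed
// formats buffSizeInBytes is treated directly as the frame count.
size_t computeFrameCount(size_t buffSizeInBytes, size_t channelCount,
                         size_t bytesPerSample, bool proportionalFrames) {
    if (proportionalFrames) {
        return buffSizeInBytes / (channelCount * bytesPerSample);
    }
    return buffSizeInBytes;
}
```

For example, `computeFrameCount(4096, 2, 2, true)` yields 1024 frames.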


switch (memoryMode) {
    case MODE_STREAM:
        status = lpTrack->set(AUDIO_STREAM_DEFAULT, // stream type, but more info conveyed
                                                    // in paa (last argument)
                                sampleRateInHertz,
                                format, // word length, PCM
                                nativeChannelMask, offload ? 0 : frameCount,
                                offload ? AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD
                                        : AUDIO_OUTPUT_FLAG_NONE,
                                lpJniStorage,
                                0,    // notificationFrames == 0 since not using EVENT_MORE_DATA
                                    // to feed the AudioTrack
                                0,    // shared mem
                                true, // thread can call Java
                                sessionId, // audio session ID
                                offload ? AudioTrack::TRANSFER_SYNC_NOTIF_CALLBACK
                                        : AudioTrack::TRANSFER_SYNC,
                                (offload || encapsulationMode) ? &offloadInfo : NULL,
                                attributionSource, // Passed from Java
                                paa.get());
        break;

    case MODE_STATIC:
    {
        // AudioTrack is using shared memory
        const auto iMem = allocSharedMem(buffSizeInBytes);

        status = lpTrack->set(AUDIO_STREAM_DEFAULT, // stream type, but more info conveyed
                                                    // in paa (last argument)
                                sampleRateInHertz,
                                format, // word length, PCM
                                nativeChannelMask, frameCount, AUDIO_OUTPUT_FLAG_NONE,
                                lpJniStorage,
                                0,    // notificationFrames == 0 since not using EVENT_MORE_DATA
                                    // to feed the AudioTrack
                                iMem, // shared mem
                                true, // thread can call Java
                                sessionId, // audio session ID
                                AudioTrack::TRANSFER_SHARED,
                                nullptr,           // default offloadInfo
                                attributionSource, // Passed from Java
                                paa.get());
        break;
    }
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }

    // ... success path elided ...

native_init_failure:
    return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}


sp<IMemory> allocSharedMem(int sizeInBytes) {
    const auto heap = sp<MemoryHeapBase>::make(sizeInBytes, 0, "AudioTrack Heap Base");
    if (heap->getBase() == MAP_FAILED || heap->getBase() == nullptr) {
        return nullptr;
    }
    return sp<MemoryBase>::make(heap, 0, sizeInBytes);
}
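Under the hood, MemoryHeapBase wraps an anonymous shared-memory fd plus an mmap; the fd is what later crosses Binder to AudioFlinger. A minimal Linux-only analogue, sketched with memfd_create (an assumption for illustration; the real MemoryHeapBase goes through ashmem/memfd internally):

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Sketch of what MemoryHeapBase does: create an anonymous shared-memory
// file, size it, and map it. Returns nullptr on failure, mirroring the
// nullptr checks in allocSharedMem above.
void* allocSharedMemSketch(size_t sizeInBytes, int* fdOut) {
    int fd = memfd_create("AudioTrack Heap Base", 0); // name is illustrative
    if (fd < 0) return nullptr;
    if (ftruncate(fd, static_cast<off_t>(sizeInBytes)) != 0) {
        close(fd);
        return nullptr;
    }
    void* base = mmap(nullptr, sizeInBytes, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        close(fd);
        return nullptr;
    }
    *fdOut = fd; // this fd is what would be shared with the server process
    return base;
}
```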

av/media/libaudioclient/AudioTrack.cpp  lpTrack->set

        sharedBuffer

        the key thing here is this function: the Cblk is parsed out of the returned response

        in STREAM mode, the audio-data pointer buffers is simply cblk + 1

        in STATIC mode, buffers is the shared buffer passed in from the app

status_t AudioTrack::set(
...
        const sp<IMemory>& sharedBuffer,
...)
{

    mSharedBuffer = sharedBuffer;

    {
        AutoMutex lock(mLock);
        status = createTrack_l();
    }

    return logIfErrorAndReturnStatus(status, "");
}


status_t AudioTrack::createTrack_l()
{
    status_t status;

    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();

    IAudioFlinger::CreateTrackInput input;

    input.sharedBuffer = mSharedBuffer;

    auto aidlInput = input.toAidl();
    
    // the Cblk is parsed out of the returned response
    media::CreateTrackResponse response;
    status = audioFlinger->createTrack(aidlInput.value(), response);

    IAudioFlinger::CreateTrackOutput output{};
    
    auto trackOutput = IAudioFlinger::CreateTrackOutput::fromAidl(response);

    output = trackOutput.value();
    
    output.audioTrack->getCblk(&sfr);

    auto iMemory = aidl2legacy_NullableSharedFileRegion_IMemory(sfr);

    sp<IMemory> iMem = iMemory.value();

    void *iMemPointer = iMem->unsecurePointer();

    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);

    void* buffers;
    if (mSharedBuffer == 0) {
        // in STREAM mode, the data area starts right after the cblk header
        buffers = cblk + 1;
    } else {
        // in STATIC mode, buffers is the shared buffer passed in from the app
        buffers = mSharedBuffer->unsecurePointer();
    }
    

    return logIfErrorAndReturnStatus(status, "");
}
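The `cblk + 1` idiom is plain pointer arithmetic: adding 1 to an `audio_track_cblk_t*` advances by `sizeof(audio_track_cblk_t)` bytes, landing exactly at the start of the PCM ring buffer. A self-contained sketch, using a stand-in struct since the real audio_track_cblk_t lives in framework headers:

```cpp
#include <cstdint>

// Stand-in for audio_track_cblk_t; the real struct carries read/write
// positions, flags, etc. Only its size matters for this demonstration.
struct FakeCblk {
    uint32_t front;
    uint32_t rear;
    uint32_t flags;
};

// STREAM mode: one allocation holds | cblk | PCM ring buffer |, so the
// data area begins immediately after the control block.
char* streamDataArea(FakeCblk* cblk) {
    return reinterpret_cast<char*>(cblk + 1);
}
```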


AudioFlinger

This is the AudioFlinger side: how the buffer gets created.

av/services/audioflinger/AudioFlinger.cpp  createTrack

        input.sharedBuffer

status_t AudioFlinger::createTrack(const media::CreateTrackRequest& _input,
                                   media::CreateTrackResponse& _output)
{
    CreateTrackInput input = VALUE_OR_RETURN_STATUS(CreateTrackInput::fromAidl(_input));

    track = thread->createTrack_l(..., input.sharedBuffer, ...);

    return lStatus;
}

av/services/audioflinger/Threads.cpp  createTrack_l

        const sp<IMemory>& sharedBuffer

        note these three arguments here:

        nullptr /* buffer */,

        (size_t)0 /* bufferSize */

        sharedBuffer,

sp<IAfTrack> PlaybackThread::createTrack_l(...,
                    const sp<IMemory>& sharedBuffer, ...)
{
    
    // validate the shared memory
    if (sharedBuffer != 0 && checkIMemory(sharedBuffer) != NO_ERROR) {
        lStatus = BAD_VALUE;
        goto Exit;
    }

    track = IAfTrack::create(this, client, streamType, attr, sampleRate, format,
                        channelMask, frameCount,
                        nullptr /* buffer */, (size_t)0 /* bufferSize */, sharedBuffer,
                        sessionId, creatorPid, attributionSource, trackFlags,
                        IAfTrackBase::TYPE_DEFAULT, portId, SIZE_MAX /*frameCountToBeReady*/,
                        speed, isSpatialized, isBitPerfect, volume, muted);

    return track;
}

av/services/audioflinger/Tracks.cpp

sp<IAfTrack> IAfTrack::create(IAfPlaybackThread* thread, ... ,
        void *buffer,
        size_t bufferSize, 
        const sp<IMemory>& sharedBuffer,
        ...) 
{
    return sp<Track>::make(thread, ...,
            buffer,
            bufferSize,
            sharedBuffer,
            ...);
}

av/services/audioflinger/Tracks.cpp

        when initializing the TrackBase parent-class constructor, alloc is ALLOC_CBLK

        the three arguments passed down from the layer above (buffer, bufferSize, sharedBuffer) are collapsed into two

Track::Track(... , const sp<IMemory>& sharedBuffer, ...)
    :   TrackBase(thread, client, attr, sampleRate, format, channelMask, frameCount,
(sharedBuffer != 0) ? sharedBuffer->unsecurePointer() : buffer,
(sharedBuffer != 0) ? sharedBuffer->size() : bufferSize,
... ,
ALLOC_CBLK, /*alloc*/
...),


    mFillingStatus(FS_INVALID),
    // mRetryCount initialized later when needed
    mSharedBuffer(sharedBuffer)
{


}
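The collapsing of (buffer, bufferSize, sharedBuffer) into two constructor arguments can be sketched as a small selection function (the name is illustrative; the real logic is the pair of ternaries in the initializer list above):

```cpp
#include <cstddef>
#include <utility>

// Mirrors the Track constructor's initializer-list ternaries: if a shared
// buffer exists (MODE_STATIC) its pointer and size win; otherwise use the
// explicit buffer/bufferSize pair (nullptr/0 for MODE_STREAM).
std::pair<void*, size_t> unifyBufferArgs(void* sharedPtr, size_t sharedSize,
                                         void* buffer, size_t bufferSize) {
    if (sharedPtr != nullptr) {
        return {sharedPtr, sharedSize};
    }
    return {buffer, bufferSize};
}
```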

// native/libs/binder/IMemory.cpp

void* IMemory::unsecurePointer() const {
    ssize_t offset;
    sp<IMemoryHeap> heap = getMemory(&offset);
    void* const base = heap!=nullptr ? heap->base() : MAP_FAILED;
    if (base == MAP_FAILED)
        return nullptr;
    return static_cast<char*>(base) + offset;
}

size_t IMemory::size() const {
    size_t size;
    getMemory(nullptr, &size);
    return size;
}

MODE_STATIC:  buffer != NULL

        size = sizeof(audio_track_cblk_t);

        mCblk = allocator.allocate(size);

        mBuffer = buffer;

MODE_STREAM:  buffer == NULL

        size = sizeof(audio_track_cblk_t) + bufferSize;

        mCblk = allocator.allocate(size);

        the memory ends up laid out like this

        | audio_track_cblk_t | PCM ring buffer |

        mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
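The two modes differ only in whether the PCM area is folded into the same allocation as the control block. A sketch of that size decision, with a stand-in constant since `sizeof(audio_track_cblk_t)` depends on framework headers:

```cpp
#include <cstddef>

// Stand-in for sizeof(audio_track_cblk_t); the real value comes from the
// framework's private header.
constexpr size_t kCblkSize = 64;

// MODE_STATIC: the app already supplied the PCM memory, so the server only
// allocates the control block. MODE_STREAM: control block and ring buffer
// share a single allocation.
size_t cblkAllocationSize(bool hasAppBuffer, size_t bufferSize) {
    return hasAppBuffer ? kCblkSize : kCblkSize + bufferSize;
}
```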

TrackBase::TrackBase(... ,
            void *buffer,
            size_t bufferSize, ...)
{

    size_t minBufferSize = buffer == NULL ? roundup(frameCount) : frameCount;

    if (minBufferSize < frameCount  // roundup rounds down for values above UINT_MAX / 2
            || mFrameSize == 0   // format needs to be correct
            || minBufferSize > SIZE_MAX / mFrameSize) {
        android_errorWriteLog(0x534e4554, "34749571");
        return;
    }
    minBufferSize *= mFrameSize;

    if (buffer == nullptr) {
        bufferSize = minBufferSize; // allocated here.
    } else if (minBufferSize > bufferSize) {
        android_errorWriteLog(0x534e4554, "38340117");
        return;
    }

    size_t size = sizeof(audio_track_cblk_t);
    if (buffer == NULL && alloc == ALLOC_CBLK) {
        // check overflow when computing allocation size for streaming tracks.
        if (size > SIZE_MAX - bufferSize) {
            android_errorWriteLog(0x534e4554, "34749571");
            return;
        }
        size += bufferSize;
    }

    if (client != 0) {
        mCblkMemory = client->allocator().allocate(mediautils::NamedAllocRequest{{size},
                std::string("Track ID: ").append(std::to_string(mId))});
        if (mCblkMemory == 0 ||
                (mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->unsecurePointer())) == NULL) {
            ALOGE("%s(%d): not enough memory for AudioTrack size=%zu", __func__, mId, size);
            ALOGE("%s", client->allocator().dump().c_str());
            mCblkMemory.clear();
            return;
        }
    } else {
//...
    }

    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {

//...
        case ALLOC_CBLK:
            // clear all buffers
            if (buffer == NULL) {
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;
            }
            break;
//...
        case ALLOC_NONE:
            mBuffer = buffer;
            break;
//...
        }
    }
}
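The overflow guards above hinge on `roundup()` (round the streaming frame count up to a power of two) wrapping for huge inputs, and on the `SIZE_MAX / mFrameSize` division guard. A sketch of both, assuming roundup is a plain power-of-two round-up as in the AOSP comment:

```cpp
#include <cstddef>
#include <cstdint>

// Round v up to the next power of two (for v >= 1). For values above
// SIZE_MAX/2 + 1 the result wraps, which is exactly why TrackBase
// re-checks minBufferSize < frameCount afterwards.
size_t roundupPow2(size_t v) {
    v--;
    for (size_t shift = 1; shift < sizeof(size_t) * 8; shift <<= 1) {
        v |= v >> shift;
    }
    return v + 1;
}

// Mirrors the division guard: would frameCount * frameSize overflow size_t?
bool multiplyWouldOverflow(size_t count, size_t frameSize) {
    return frameSize != 0 && count > SIZE_MAX / frameSize;
}
```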