Android>CTS>Media>Video>AdaptivePlaybackTest Source Code Walkthrough

This article walks through AdaptivePlaybackTest in the Android CTS: how the test checks codec support for adaptive playback, builds the media formats under test, handles CSD data, and drives the MediaCodec decoding flow. It is meant to help developers understand how seamless playback across video resolution changes is tested.


Series Index

Android>CTS>Media>Video>AdaptivePlaybackTest Code Walkthrough



Preface

Some Android multimedia development experience (MediaExtractor + MediaCodec) will make this article easier to follow.

AdaptivePlaybackTest (the adaptive-resolution playback test) verifies that playback stays seamless when the video resolution changes. Reading it helps us write better video decoding code.


testAdaptivePlayback()

testAdaptivePlayback() is the main test method.

The code is as follows:

public void testAdaptivePlayback() throws IOException, InterruptedException {
        boolean hasSupport = isFeatureSupported(mCodecName, mMediaType,
                MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback);
        if (MUST_SUPPORT_APB.contains(mMediaType)) {
            Assert.assertTrue("codec: " + mCodecName + " is required to support "
                    + "FEATURE_AdaptivePlayback" + " for mediaType: " + mMediaType, hasSupport);
        } else {
            Assume.assumeTrue("codec: " + mCodecName + " does not support FEATURE_AdaptivePlayback",
                    hasSupport);
        }
        ArrayList<MediaFormat> formats = new ArrayList<>();
        for (String file : mSrcFiles) {
            formats.add(setUpSource(MEDIA_DIR + file));
            mExtractor.release();
        }
        ArrayList<String> resFiles;
        if (mSupportRequirements.equals(CODEC_ALL)) {
            checkFormatSupport(mCodecName, mMediaType, false, formats, null, mSupportRequirements);
            resFiles = new ArrayList<>(Arrays.asList(mSrcFiles));
        } else {
            resFiles = getSupportedFiles(formats);
        }
        Assume.assumeTrue("none of the given test clips are supported by the codec: "
                + mCodecName, !resFiles.isEmpty());
        formats.clear();
        int totalSize = 0;
        for (String resFile : resFiles) {
            File file = new File(MEDIA_DIR + resFile);
            totalSize += (int) file.length();
        }
        long ptsOffset = 0;
        int buffOffset = 0;
        ArrayList<MediaCodec.BufferInfo> list = new ArrayList<>();
        ByteBuffer buffer = ByteBuffer.allocate(totalSize);
        for (String file : resFiles) {
            formats.add(createInputList(setUpSource(MEDIA_DIR + file), buffer, list, buffOffset,
                    ptsOffset));
            mExtractor.release();
            ptsOffset = mMaxPts + 1000000L;
            buffOffset = (list.get(list.size() - 1).offset) + (list.get(list.size() - 1).size);
        }
        mOutputBuff = new OutputManager();
        {
            mCodec = MediaCodec.createByCodecName(mCodecName);
            MediaFormat format = formats.get(0);
            mActivity.setScreenParams(getWidth(format), getHeight(format), true);
            mOutputBuff.reset();
            configureCodec(format, true, false, false);
            mCodec.start();
            doWork(buffer, list);
            queueEOS();
            waitForAllOutputs();
            mCodec.reset();
            mCodec.release();
        }
    }

1. Check whether the codec supports FEATURE_AdaptivePlayback

boolean hasSupport = isFeatureSupported(mCodecName, mMediaType,
                MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback);

The test first checks whether the codec supports FEATURE_AdaptivePlayback, i.e. adaptive playback. If the media type is one that is required to support the feature (listed in MUST_SUPPORT_APB), a missing feature fails the test with an assertion; otherwise the test is simply skipped via Assume.assumeTrue().
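For reference, a similar check can be made with the public MediaCodecList / CodecCapabilities API. The following is a minimal sketch; the helper name and its parameters are illustrative and not part of the CTS sources:

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

// Minimal sketch: ask the platform whether a named decoder advertises
// FEATURE_AdaptivePlayback for a given MIME type. The method name and
// parameters are illustrative, not taken from the CTS sources.
static boolean supportsAdaptivePlayback(String codecName, String mediaType) {
    MediaCodecList codecList = new MediaCodecList(MediaCodecList.ALL_CODECS);
    for (MediaCodecInfo info : codecList.getCodecInfos()) {
        if (!info.getName().equals(codecName)) continue;
        try {
            MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(mediaType);
            return caps.isFeatureSupported(
                    MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback);
        } catch (IllegalArgumentException e) {
            return false; // this codec does not handle the requested MIME type
        }
    }
    return false; // no codec with that name was found
}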

2. Build the formats of all media files under test

ArrayList<String> resFiles;
if (mSupportRequirements.equals(CODEC_ALL)) {
    checkFormatSupport(mCodecName, mMediaType, false, formats, null, mSupportRequirements);
    resFiles = new ArrayList<>(Arrays.asList(mSrcFiles));
} else {
    resFiles = getSupportedFiles(formats);
}
Assume.assumeTrue("none of the given test clips are supported by the codec: "
        + mCodecName, !resFiles.isEmpty());
formats.clear();
int totalSize = 0;
for (String resFile : resFiles) {
    File file = new File(MEDIA_DIR + resFile);
    totalSize += (int) file.length();
}
long ptsOffset = 0;
int buffOffset = 0;
ArrayList<MediaCodec.BufferInfo> list = new ArrayList<>();
ByteBuffer buffer = ByteBuffer.allocate(totalSize);
for (String file : resFiles) {
    formats.add(createInputList(setUpSource(MEDIA_DIR + file), buffer, list, buffOffset,
            ptsOffset));
    mExtractor.release();
    ptsOffset = mMaxPts + 1000000L;
    buffOffset = (list.get(list.size() - 1).offset) + (list.get(list.size() - 1).size);
}

This code first narrows the source clips down to those the codec supports (resFiles), then uses setUpSource() to obtain each resFile's format and passes it to createInputList(), which appends the format to the formats ArrayList while packing the clip's samples into the shared buffer.

Expanding setUpSource()

protected MediaFormat setUpSource(String srcFile) throws IOException {
        Preconditions.assertTestFileExists(srcFile);
        mExtractor = new MediaExtractor();
        mExtractor.setDataSource(srcFile);
        for (int trackID = 0; trackID < mExtractor.getTrackCount(); trackID++) {
            MediaFormat format = mExtractor.getTrackFormat(trackID);
            if (mMediaType.equalsIgnoreCase(format.getString(MediaFormat.KEY_MIME))) {
                // This is required for some mlaw and alaw test vectors where access unit size is
                // exceeding default max input size
                if (mMediaType.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_G711_ALAW)
                        || mMediaType.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_G711_MLAW)) {
                    format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE,
                            getMaxSampleSizeForMediaType(srcFile, mMediaType));
                }
                mExtractor.selectTrack(trackID);
                if (mIsVideo) {
                    ArrayList<MediaFormat> formatList = new ArrayList<>();
                    formatList.add(format);
                    boolean selectHBD = doesAnyFormatHaveHDRProfile(mMediaType, formatList);
                    if (!selectHBD && srcFile.contains("10bit")) {
                        selectHBD = true;
                        if (mMediaType.equals(MediaFormat.MIMETYPE_VIDEO_VP9)) {
                            // In some cases, webm extractor may not signal profile for 10-bit VP9
                            // clips. In such cases, set profile to a 10-bit compatible profile.
                            // TODO (b/295804596) Remove the following once webm extractor signals
                            // profile correctly for all 10-bit clips
                            int[] profileArray = CodecTestBase.PROFILE_HDR_MAP.get(mMediaType);
                            format.setInteger(MediaFormat.KEY_PROFILE, profileArray[0]);
                        }
                    }
                    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                            getColorFormat(mCodecName, mMediaType, mSurface != null, selectHBD));
                    if (selectHBD && (format.getInteger(MediaFormat.KEY_COLOR_FORMAT)
                            != COLOR_FormatYUVP010)) {
                        mSkipChecksumVerification = true;
                    }

                    if ((format.getInteger(MediaFormat.KEY_COLOR_FORMAT) != COLOR_FormatYUVP010)
                            && selectHBD && mSurface == null) {
                        // Codecs that do not advertise P010 on devices with VNDK version < T, do
                        // not support decoding high bit depth clips when color format is set to
                        // COLOR_FormatYUV420Flexible in byte buffer mode. Since byte buffer mode
                        // for high bit depth decoding wasn't tested prior to Android T, skip this
                        // when device is older
                        assumeTrue("Skipping High Bit Depth tests on VNDK < T", VNDK_IS_AT_LEAST_T);
                    }
                }
                // TODO: determine this from the extractor format when it becomes exposed.
                mIsInterlaced = srcFile.contains("_interlaced_");
                return format;
            }
        }
        fail("No track with mediaType: " + mMediaType + " found in file: " + srcFile + "\n"
                + mTestConfig + mTestEnv);
        return null;
    }

setUpSource() is just the standard routine of reading track information out of a media file with MediaExtractor, so it is not covered in detail here.
setUpSource() already returns each file's format, so what does createInputList() do?

Expanding createInputList()

private MediaFormat createInputList(MediaFormat format, ByteBuffer buffer,
            ArrayList<MediaCodec.BufferInfo> list, int offset, long ptsOffset) {
        if (hasCSD(format)) {
            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
            bufferInfo.offset = offset;
            bufferInfo.size = 0;
            // For some devices with VNDK versions till Android U, sending a zero
            // timestamp for CSD results in out of order timestamps at the output.
            // For devices with VNDK versions > Android U, codecs are expected to
            // handle CSD buffers with timestamp set to zero.
            bufferInfo.presentationTimeUs = VNDK_IS_AT_MOST_U ? ptsOffset : 0;
            bufferInfo.flags = MediaCodec.BUFFER_FLAG_CODEC_CONFIG;
            for (int i = 0; ; i++) {
                String csdKey = "csd-" + i;
                if (format.containsKey(csdKey)) {
                    ByteBuffer csdBuffer = format.getByteBuffer(csdKey);
                    bufferInfo.size += csdBuffer.limit();
                    buffer.put(csdBuffer);
                    format.removeKey(csdKey);
                } else break;
            }
            list.add(bufferInfo);
            offset += bufferInfo.size;
        }
        while (true) {
            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
            bufferInfo.size = mExtractor.readSampleData(buffer, offset);
            if (bufferInfo.size < 0) break;
            bufferInfo.offset = offset;
            bufferInfo.presentationTimeUs = ptsOffset + mExtractor.getSampleTime();
            mMaxPts = Math.max(mMaxPts, bufferInfo.presentationTimeUs);
            int flags = mExtractor.getSampleFlags();
            bufferInfo.flags = 0;
            if ((flags & MediaExtractor.SAMPLE_FLAG_SYNC) != 0) {
                bufferInfo.flags |= MediaCodec.BUFFER_FLAG_KEY_FRAME;
            }
            list.add(bufferInfo);
            mExtractor.advance();
            offset += bufferInfo.size;
        }
        buffer.clear();
        buffer.position(offset);
        return format;
    }

The method first checks whether the current format contains CSD data¹, which the decoder requires before it can decode the video. If it does, the CSD buffers are handled as follows:

  1. Build a bufferInfo, including the size, PTS, and flag (MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
  2. Copy each csdBuffer out of the format into the shared buffer, and remove the csdKey from the format.

After the CSD data is handled, the rest is regular sample processing: the while loop walks through the entire media file with the MediaExtractor API and collects its samples into the buffer together with their bufferInfo entries.
Finally, the for loop in the caller completes the formats list. Put simply, all of the media files are concatenated (both their sample buffers and their bufferInfo lists), and every csd key is removed from the formats.
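For context, the snippet below shows how CSD could be submitted to a decoder by hand as a codec-config buffer, which is essentially what the packed bufferInfo entries achieve later on. It is only a minimal sketch, assuming a codec configured with a format whose csd-* keys have been removed; the helper name is illustrative:

import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaFormat;

// Minimal sketch: queue the csd-* buffers of a format manually with
// BUFFER_FLAG_CODEC_CONFIG. Normally MediaCodec.configure() submits CSD for
// you; the test strips the csd-* keys precisely so it can queue them itself.
static void queueCsd(MediaCodec codec, MediaFormat format) {
    for (int i = 0; format.containsKey("csd-" + i); i++) {
        // read-only view with its own position, so the format's buffer is untouched
        ByteBuffer csd = format.getByteBuffer("csd-" + i).asReadOnlyBuffer();
        int size = csd.remaining();
        int inputId = codec.dequeueInputBuffer(-1); // block until a buffer is free
        ByteBuffer input = codec.getInputBuffer(inputId);
        input.put(csd);
        codec.queueInputBuffer(inputId, 0 /* offset */, size,
                0 /* presentationTimeUs */, MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
    }
}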

3. Decoding

Decoding walks the packed buffer under the guidance of the bufferInfo list, using the regular MediaCodec decoding flow.

{
    mCodec = MediaCodec.createByCodecName(mCodecName);
    MediaFormat format = formats.get(0);
    mActivity.setScreenParams(getWidth(format), getHeight(format), true);
    mOutputBuff.reset();
    configureCodec(format, true, false, false);
    mCodec.start();
    doWork(buffer, list);
    queueEOS();
    waitForAllOutputs();
    mCodec.reset();
    mCodec.release();
}

Note that configureCodec() is passed only the first format in the formats list; the decoder is then expected to adapt on its own as the later clips change resolution.
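For comparison, in a regular application adaptive playback is normally used by configuring a decoder that advertises FEATURE_AdaptivePlayback onto a Surface, optionally hinting the largest resolution it may encounter via KEY_MAX_WIDTH / KEY_MAX_HEIGHT. The sketch below is not part of the CTS test; the resolutions and parameter names are placeholders:

import java.io.IOException;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

// Minimal sketch (not from the CTS sources): configure a decoder for adaptive
// playback by decoding onto a Surface and hinting the maximum resolution the
// stream may switch to. 1280x720 / 1920x1080 are placeholder values.
static MediaCodec configureForAdaptivePlayback(String codecName, String mime,
        Surface surface) throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(mime, 1280, 720);
    format.setInteger(MediaFormat.KEY_MAX_WIDTH, 1920);   // optional hint
    format.setInteger(MediaFormat.KEY_MAX_HEIGHT, 1080);  // optional hint
    MediaCodec codec = MediaCodec.createByCodecName(codecName);
    codec.configure(format, surface, null /* crypto */, 0 /* flags */);
    return codec;
}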

doWork() performs the decoding:

protected void doWork(ByteBuffer buffer, ArrayList<MediaCodec.BufferInfo> list)
            throws InterruptedException {
        int frameCount = 0;
        if (mIsCodecInAsyncMode) {
            // output processing after queuing EOS is done in waitForAllOutputs()
            while (!mAsyncHandle.hasSeenError() && !mSawInputEOS && frameCount < list.size()) {
                Pair<Integer, MediaCodec.BufferInfo> element = mAsyncHandle.getWork();
                if (element != null) {
                    int bufferID = element.first;
                    MediaCodec.BufferInfo info = element.second;
                    if (info != null) {
                        dequeueOutput(bufferID, info);
                    } else {
                        enqueueInput(bufferID, buffer, list.get(frameCount));
                        frameCount++;
                    }
                }
            }
        } else {
            MediaCodec.BufferInfo outInfo = new MediaCodec.BufferInfo();
            // output processing after queuing EOS is done in waitForAllOutputs()
            while (!mSawInputEOS && frameCount < list.size()) {
                int outputBufferId = mCodec.dequeueOutputBuffer(outInfo, Q_DEQ_TIMEOUT_US);
                if (outputBufferId >= 0) {
                    dequeueOutput(outputBufferId, outInfo);
                } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    mOutFormat = mCodec.getOutputFormat();
                    mSignalledOutFormatChanged = true;
                }
                int inputBufferId = mCodec.dequeueInputBuffer(Q_DEQ_TIMEOUT_US);
                if (inputBufferId != -1) {
                    enqueueInput(inputBufferId, buffer, list.get(frameCount));
                    frameCount++;
                }
            }
        }
    }

As the code shows, both decoding modes are handled, synchronous and asynchronous. Anyone who has used MediaCodec will recognize them, so they are not discussed further here.
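The asynchronous branch above relies on a helper (mAsyncHandle) whose callback wiring is not shown. In plain MediaCodec terms, asynchronous operation is enabled by installing a callback before configure(); the following is a minimal sketch with the buffer-filling details left as comments:

import android.media.MediaCodec;
import android.media.MediaFormat;

// Minimal sketch of asynchronous MediaCodec operation: the callback replaces
// the dequeueInputBuffer()/dequeueOutputBuffer() polling of the synchronous
// path. setCallback() must be called before configure().
static void startAsync(MediaCodec codec, MediaFormat format) {
    codec.setCallback(new MediaCodec.Callback() {
        @Override
        public void onInputBufferAvailable(MediaCodec mc, int index) {
            // copy the next access unit into mc.getInputBuffer(index), then
            // call mc.queueInputBuffer(index, offset, size, pts, flags)
        }

        @Override
        public void onOutputBufferAvailable(MediaCodec mc, int index,
                MediaCodec.BufferInfo info) {
            mc.releaseOutputBuffer(index, false /* render */);
        }

        @Override
        public void onOutputFormatChanged(MediaCodec mc, MediaFormat newFormat) {
            // resolution changes show up here during adaptive playback
        }

        @Override
        public void onError(MediaCodec mc, MediaCodec.CodecException e) {
            // record the error; the test's loop exits when this happens
        }
    });
    codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
    codec.start();
}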


Summary

That is all. This article is only a brief walkthrough of the AdaptivePlaybackTest code flow, offered as a reference for developers on how to keep decoding seamless when the video resolution changes.


  1. CSD (codec-specific data) is carried in the csd-0 and csd-1 buffers of a MediaFormat. For H.264 (AVC), csd-0 holds the SPS (Sequence Parameter Set) and csd-1 the PPS (Picture Parameter Set); for H.265 (HEVC), csd-0 holds the VPS, SPS, and PPS together. ↩︎
