An Analysis of the Doubao Phone Assistant's READ_FRAME_BUFFER Permission

Background:

The day after 马哥 published the article on the Doubao phone assistant's security-sensitive permissions,
"A Look at the Doubao AI Phone Assistant's Highly Sensitive CAPTURE_SECURE_VIDEO_OUTPUT Permission",
the official Doubao phone assistant team released a statement of its own. This article will not try to judge how dangerous these sensitive permissions are; instead, from the perspective of a framework system developer, it digs into the system source to see where these permissions are used and what they actually do.

Let's look at the READ_FRAME_BUFFER permission.

Where READ_FRAME_BUFFER Is Used in the System

Searching the system for places that validate this permission shows that the checks guard screenshot and screen-capture methods; familiar APIs such as screenshotWallpaper and captureDisplay are both screenshot-related interfaces.

Taking captureDisplay as a case study:

frameworks/base/services/core/java/com/android/server/wm/WindowManagerService.java

    @Override
    public void captureDisplay(int displayId, @Nullable ScreenCapture.CaptureArgs captureArgs,
            ScreenCapture.ScreenCaptureListener listener) {
        Slog.d(TAG, "captureDisplay");
        // Verify the caller holds READ_FRAME_BUFFER
        if (!checkCallingPermission(READ_FRAME_BUFFER, "captureDisplay()")) {
            throw new SecurityException("Requires READ_FRAME_BUFFER permission");
        }
        // Permission check passed; proceed with the capture
        ScreenCapture.LayerCaptureArgs layerCaptureArgs = getCaptureArgs(displayId, captureArgs);
        ScreenCapture.captureLayers(layerCaptureArgs, listener);

        if (Binder.getCallingUid() != SYSTEM_UID) {
            // Release the SurfaceControl objects only if the caller is not in system server as no
            // parcelling occurs in this case.
            layerCaptureArgs.release();
        }
    }

Next, let's look at the source of captureLayers.

frameworks/base/core/java/android/window/ScreenCapture.java

    /**
     * @param captureArgs     Arguments about how to take the screenshot
     * @param captureListener A listener to receive the screenshot callback
     * @hide
     */
    public static int captureLayers(@NonNull LayerCaptureArgs captureArgs,
            @NonNull ScreenCaptureListener captureListener) {
        return nativeCaptureLayers(captureArgs, captureListener.mNativeObject, false /* sync */);
    }

captureArgs: encapsulates the capture parameters, chiefly displayId (the target display ID), the list of layers to capture, and the capture resolution/format; this argument is the heart of the call.

captureListener: an asynchronous callback that receives the capture result (on success a ScreenCaptureResult containing the layer frame data, on failure an error code).

frameworks/base/core/jni/android_window_ScreenCapture.cpp

static jint nativeCaptureLayers(JNIEnv* env, jclass clazz, jobject layerCaptureArgsObject,
                                jlong screenCaptureListenerObject, jboolean sync) {
    LayerCaptureArgs layerCaptureArgs;
    getCaptureArgs(env, layerCaptureArgsObject, layerCaptureArgs.captureArgs);

    SurfaceControl* layer = reinterpret_cast<SurfaceControl*>(
            env->GetLongField(layerCaptureArgsObject, gLayerCaptureArgsClassInfo.layer));
    if (layer == nullptr) {
        return BAD_VALUE;
    }
    // Convert the Java-side arguments to their native counterparts
    layerCaptureArgs.layerHandle = layer->getHandle();
    layerCaptureArgs.childrenOnly =
            env->GetBooleanField(layerCaptureArgsObject, gLayerCaptureArgsClassInfo.childrenOnly);

    sp<gui::IScreenCaptureListener> captureListener =
            reinterpret_cast<gui::IScreenCaptureListener*>(screenCaptureListenerObject);
    // Forward to ScreenshotClient::captureLayers
    return ScreenshotClient::captureLayers(layerCaptureArgs, captureListener, sync);
}

Now let's look at ScreenshotClient::captureLayers:
frameworks/native/libs/gui/SurfaceComposerClient.cpp

status_t ScreenshotClient::captureLayers(const LayerCaptureArgs& captureArgs,
                                         const sp<IScreenCaptureListener>& captureListener,
                                         bool sync) {
    sp<gui::ISurfaceComposer> s(ComposerServiceAIDL::getComposerService());
    if (s == nullptr) return NO_INIT;

    binder::Status status;
    if (sync) {
        gui::ScreenCaptureResults captureResults;
        // This calls into the SurfaceFlinger (sf) side
        status = s->captureLayersSync(captureArgs, &captureResults);
        captureListener->onScreenCaptureCompleted(captureResults);
    } else {
        status = s->captureLayers(captureArgs, captureListener);
    }
    return statusTFromBinderStatus(status);
}
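The sync/async split above can be sketched in plain Java. All names below are stand-ins for illustration, not the real framework types: in sync mode the client blocks on the binder call and invokes the listener itself, while in async mode the listener is handed to the service, which calls back later.

```java
import java.util.function.Consumer;

public class CaptureDispatchSketch {
    // Stand-in for gui::ISurfaceComposer (hypothetical, simplified).
    interface ComposerService {
        String captureLayersSync();                    // returns results inline
        void captureLayers(Consumer<String> listener); // service calls back later
    }

    static void capture(ComposerService s, boolean sync, Consumer<String> listener) {
        if (sync) {
            // Block on the binder call, then invoke the listener ourselves.
            listener.accept(s.captureLayersSync());
        } else {
            // Hand the listener across; the service invokes the callback.
            s.captureLayers(listener);
        }
    }

    public static void main(String[] args) {
        ComposerService fake = new ComposerService() {
            public String captureLayersSync() { return "results"; }
            public void captureLayers(Consumer<String> l) { l.accept("results"); }
        };
        capture(fake, true, r -> System.out.println("sync: " + r));
        capture(fake, false, r -> System.out.println("async: " + r));
    }
}
```

Either way the caller sees the same listener interface; only who blocks differs.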

The captureLayers method on the SurfaceFlinger side:
frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp

void SurfaceFlinger::captureLayers(const LayerCaptureArgs& args,
                                   const sp<IScreenCaptureListener>& captureListener) {
    SFTRACE_CALL();

    const auto& captureArgs = args.captureArgs;
    // READ_FRAME_BUFFER is validated again here
    status_t validate = validateScreenshotPermissions(captureArgs);
    if (validate != OK) {
        ALOGD("Permission denied to captureLayers");
        invokeScreenCaptureError(validate, captureListener);
        return;
    }
    // ... (omitted)

    ScreenshotArgs screenshotArgs;
    screenshotArgs.captureTypeVariant = parent->getSequence();
    screenshotArgs.childrenOnly = args.childrenOnly;
    screenshotArgs.sourceCrop = crop;
    screenshotArgs.reqSize = reqSize;
    screenshotArgs.dataspace = static_cast<ui::Dataspace>(captureArgs.dataspace);
    screenshotArgs.isSecure = captureArgs.captureSecureLayers;
    screenshotArgs.seamlessTransition = captureArgs.hintForSeamlessTransition;
    // With the arguments ready, call captureScreenCommon
    captureScreenCommon(screenshotArgs, getLayerSnapshotsFn, reqSize,
                        static_cast<ui::PixelFormat>(captureArgs.pixelFormat),
                        captureArgs.allowProtected, captureArgs.grayscale, captureListener);
}

In short: validate the permission first, then call captureScreenCommon.

First, the validateScreenshotPermissions check:

static status_t validateScreenshotPermissions(const CaptureArgs& captureArgs) {
    IPCThreadState* ipc = IPCThreadState::self();
    const int pid = ipc->getCallingPid();
    const int uid = ipc->getCallingUid();
    // graphics/system UIDs pass; everyone else needs READ_FRAME_BUFFER
    if (uid == AID_GRAPHICS || uid == AID_SYSTEM ||
        PermissionCache::checkPermission(sReadFramebuffer, pid, uid)) {
        return OK;
    }
    ALOGE("Permission Denial: can't take screenshot pid=%d, uid=%d", pid, uid);
    return PERMISSION_DENIED;
}
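The gate above is simple enough to model in plain Java. The sketch below uses stand-in names (only the two AID constants mirror real AOSP values): graphics and system UIDs pass unconditionally, and any other caller must hold READ_FRAME_BUFFER.

```java
public class ScreenshotPermissionGate {
    static final int AID_SYSTEM = 1000;   // matches AOSP AID_SYSTEM
    static final int AID_GRAPHICS = 1003; // matches AOSP AID_GRAPHICS

    // Stand-in for PermissionCache::checkPermission.
    interface PermissionChecker {
        boolean check(String permission, int pid, int uid);
    }

    static boolean canTakeScreenshot(int pid, int uid, PermissionChecker cache) {
        return uid == AID_GRAPHICS || uid == AID_SYSTEM
                || cache.check("android.permission.READ_FRAME_BUFFER", pid, uid);
    }

    public static void main(String[] args) {
        PermissionChecker denyAll = (perm, pid, uid) -> false;
        // system_server always passes; an app UID without the permission is denied
        System.out.println(canTakeScreenshot(1, AID_SYSTEM, denyAll)); // true
        System.out.println(canTakeScreenshot(1, 10123, denyAll));      // false
    }
}
```

Note that this check runs inside surfaceflinger itself, so even a caller that somehow bypassed the WindowManagerService check would still be rejected here.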

Next, captureScreenCommon:

void SurfaceFlinger::captureScreenCommon(ScreenshotArgs& args,
                                         GetLayerSnapshotsFunction getLayerSnapshotsFn,
                                         ui::Size bufferSize, ui::PixelFormat reqPixelFormat,
                                         bool allowProtected, bool grayscale,
                                         const sp<IScreenCaptureListener>& captureListener) {
    // ... (omitted)

    // Call captureScreenshot and wait on the returned future/fence
    auto futureFence =
            captureScreenshot(args, texture, false /* regionSampling */, grayscale, isProtected,
                              captureListener, layers, hdrTexture, gainmapTexture);
    futureFence.get();
}

Now the captureScreenshot method:

ftl::SharedFuture<FenceResult> SurfaceFlinger::captureScreenshot(
        ScreenshotArgs& args, const std::shared_ptr<renderengine::ExternalTexture>& buffer,
        bool regionSampling, bool grayscale, bool isProtected,
        const sp<IScreenCaptureListener>& captureListener,
        const std::vector<std::pair<Layer*, sp<LayerFE>>>& layers,
        const std::shared_ptr<renderengine::ExternalTexture>& hdrBuffer,
        const std::shared_ptr<renderengine::ExternalTexture>& gainmapBuffer) {
    SFTRACE_CALL();

    ScreenCaptureResults captureResults;
    ftl::SharedFuture<FenceResult> renderFuture;

    float hdrSdrRatio = args.displayBrightnessNits / args.sdrWhitePointNits;

    if (hdrBuffer && gainmapBuffer) {
        // ... (omitted)
    } else {
        // This mainly delegates to renderScreenImpl
        renderFuture = renderScreenImpl(args, buffer, regionSampling, grayscale, isProtected,
                                        captureResults, layers);
    }
    if (captureListener) {
        // Defer blocking on renderFuture back to the Binder thread,
        // then invoke captureListener with the results.
        return ftl::Future(std::move(renderFuture))
                .then([captureListener, captureResults = std::move(captureResults),
                       hdrSdrRatio](FenceResult fenceResult) mutable -> FenceResult {
                    captureResults.fenceResult = std::move(fenceResult);
                    captureResults.hdrSdrRatio = hdrSdrRatio;
                    captureListener->onScreenCaptureCompleted(captureResults);
                    return base::unexpected(NO_ERROR);
                })
                .share();
    }
    return renderFuture;
}
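The "render returns a future, callback is chained onto it" pattern in captureScreenshot can be sketched with CompletableFuture. Everything here is a simplified stand-in (the real code uses ftl::Future and a fence result, not strings):

```java
import java.util.concurrent.CompletableFuture;

public class CaptureFutureSketch {
    record CaptureResults(String fenceResult) {}

    // Stand-in for IScreenCaptureListener.
    interface ScreenCaptureListener {
        void onScreenCaptureCompleted(CaptureResults results);
    }

    static CompletableFuture<String> renderScreen() {
        // Stands in for renderScreenImpl(): completes once rendering is done.
        return CompletableFuture.supplyAsync(() -> "fence-signaled");
    }

    static CompletableFuture<Void> capture(ScreenCaptureListener listener) {
        // Chain the listener onto the render future so whichever thread waits
        // on the future (the binder thread) delivers the callback.
        return renderScreen().thenAccept(
                fence -> listener.onScreenCaptureCompleted(new CaptureResults(fence)));
    }

    public static void main(String[] args) {
        capture(r -> System.out.println("completed: " + r.fenceResult())).join();
    }
}
```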

And finally, renderScreenImpl:

ftl::SharedFuture<FenceResult> SurfaceFlinger::renderScreenImpl(
        ScreenshotArgs& args, const std::shared_ptr<renderengine::ExternalTexture>& buffer,
        bool regionSampling, bool grayscale, bool isProtected, ScreenCaptureResults& captureResults,
        const std::vector<std::pair<Layer*, sp<LayerFE>>>& layers) {
    // ... (omitted)
    // Build the `present` lambda that renders the layers into the screenshot buffer
    auto present = [this, buffer = capturedBuffer, dataspace = captureResults.capturedDataspace,
                    grayscale, isProtected, layers, layerStack, regionSampling, args, renderIntent,
                    enableLocalTonemapping]() -> FenceResult {
        std::unique_ptr<compositionengine::CompositionEngine> compositionEngine =
                mFactory.createCompositionEngine();
        compositionEngine->setRenderEngine(mRenderEngine.get());
        compositionEngine->setHwComposer(mHWComposer.get());

        std::vector<sp<compositionengine::LayerFE>> layerFEs;
        layerFEs.reserve(layers.size());
        for (auto& [layer, layerFE] : layers) {
            // Release fences were not yet added for non-threaded render engine. To avoid
            // deadlocks between main thread and binder threads waiting for the future fence
            // result, fences should be added to layers in the same hop onto the main thread.
            if (!mRenderEngine->isThreaded()) {
                attachReleaseFenceFutureToLayer(layer, layerFE.get(), ui::INVALID_LAYER_STACK);
            }
            layerFEs.push_back(layerFE);
        }

        compositionengine::Output::ColorProfile colorProfile{.dataspace = dataspace,
                                                             .renderIntent = renderIntent};

        float targetBrightness = 1.0f;
        if (enableLocalTonemapping) {
            // Boost the whole scene so that SDR white is at 1.0 while still communicating the hdr
            // sdr ratio via display brightness / sdrWhite nits.
            targetBrightness = args.sdrWhitePointNits / args.displayBrightnessNits;
        } else if (dataspace == ui::Dataspace::BT2020_HLG) {
            const float maxBrightnessNits =
                    args.displayBrightnessNits / args.sdrWhitePointNits * 203;
            // With a low dimming ratio, don't fit the entire curve. Otherwise mixed content
            // will appear way too bright.
            if (maxBrightnessNits < 1000.f) {
                targetBrightness = 1000.f / maxBrightnessNits;
            }
        }

        // Capturing screenshots using layers have a clear capture fill (0 alpha).
        // Capturing via display or displayId, which do not use args.layerSequence,
        // has an opaque capture fill (1 alpha).
        const float layerAlpha =
                std::holds_alternative<int32_t>(args.captureTypeVariant) ? 0.0f : 1.0f;

        // Screenshots leaving the device must not dim in gamma space.
        const bool dimInGammaSpaceForEnhancedScreenshots =
                mDimInGammaSpaceForEnhancedScreenshots && args.seamlessTransition;

        std::shared_ptr<ScreenCaptureOutput> output = createScreenCaptureOutput(
                ScreenCaptureOutputArgs{.compositionEngine = *compositionEngine,
                                        .colorProfile = colorProfile,
                                        .layerStack = layerStack,
                                        .sourceCrop = args.sourceCrop,
                                        .buffer = std::move(buffer),
                                        .displayIdVariant = args.displayIdVariant,
                                        .reqBufferSize = args.reqSize,
                                        .sdrWhitePointNits = args.sdrWhitePointNits,
                                        .displayBrightnessNits = args.displayBrightnessNits,
                                        .targetBrightness = targetBrightness,
                                        .layerAlpha = layerAlpha,
                                        .regionSampling = regionSampling,
                                        .treat170mAsSrgb = mTreat170mAsSrgb,
                                        .dimInGammaSpaceForEnhancedScreenshots =
                                                dimInGammaSpaceForEnhancedScreenshots,
                                        .isSecure = args.isSecure,
                                        .isProtected = isProtected,
                                        .enableLocalTonemapping = enableLocalTonemapping});

        const float colorSaturation = grayscale ? 0 : 1;
        // Prepare the CompositionRefreshArgs
        compositionengine::CompositionRefreshArgs refreshArgs{
                .outputs = {output},
                .layers = std::move(layerFEs),
                .updatingOutputGeometryThisFrame = true,
                .updatingGeometryThisFrame = true,
                .colorTransformMatrix = calculateColorMatrix(colorSaturation),
        };
        // Kick off composition
        compositionEngine->present(refreshArgs);

        return output->getRenderSurface()->getClientTargetAcquireFence();
    };

    // If RenderEngine is threaded, we can safely call CompositionEngine::present off the main
    // thread as the RenderEngine::drawLayers call will run on RenderEngine's thread. Otherwise,
    // we need RenderEngine to run on the main thread so we call CompositionEngine::present
    // immediately.
    //
    // TODO(b/196334700) Once we use RenderEngineThreaded everywhere we can always defer the call
    // to CompositionEngine::present.
    ftl::SharedFuture<FenceResult> presentFuture = mRenderEngine->isThreaded()
            ? ftl::yield(present()).share()
            : mScheduler->schedule(std::move(present)).share();

    return presentFuture;
}

This method is long (it is also analyzed in 马哥's SurfaceFlinger course); roughly, it does the following:

collects metadata for the layers to be rendered (Secure/HDR flags);
adapts color and brightness parameters (for display consistency);
composites the layers into the target buffer via the CompositionEngine;
schedules the render work according to the threading model and returns a fence.

The goal of this article was to walk through where the READ_FRAME_BUFFER permission is actually used, and overall the picture matches what the official Doubao assistant statement describes. READ_FRAME_BUFFER is a system-level Android permission that requires a platform signature, so ordinary third-party apps cannot obtain it. Its core function is to call into SurfaceFlinger, which uses the GPU to render the relevant layers into a screenshot buffer; it is not, as some online claims suggest, an unrestricted direct read of GPU contents.
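As background on why third-party apps cannot hold it: the permission is declared in the framework's core manifest with signature-level protection. The snippet below is an approximation for illustration (the exact protectionLevel flags have varied across AOSP versions):

```xml
<!-- frameworks/base/core/res/AndroidManifest.xml (approximate) -->
<permission android:name="android.permission.READ_FRAME_BUFFER"
    android:protectionLevel="signature|recents" />
```

Because the protection level includes "signature", only apps signed with the platform key (or explicitly allowlisted system components) can be granted it.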

One final question to ponder:

Can READ_FRAME_BUFFER really only capture ordinary layers, or could it also capture secure window layers?

Original article:
https://mp.weixin.qq.com/s/QZuZrQkLgbIouz9BrVZ66Q

For more practical framework development content, follow "千里马学框架" below.
