Android Camera initialization, from the App layer through the framework layer to the HAL layer

This article analyzes the initialization flow of Android Camera2 in detail, starting from CameraActivity in the Application layer, opening the camera through the framework layer's CameraManager in Java, and finally entering the HAL layer. In the framework layer, CameraManager.openCamera() connects to CameraService and obtains the concrete hardware device through the Binder mechanism. In the HAL layer, mm_camera_intf's add_channel method is called to start the data-processing thread. Along the way the process involves CameraModule configuration, parameter initialization, thread startup, and several other key steps.


The Android camera stack, from top to bottom, can be divided into four layers:

the Application layer,

the framework layer,

the HAL (hardware abstraction layer),

the Kernel layer.

Development work usually focuses on the Application and framework layers, but vendors that need to integrate device-specific features will modify the HAL layer, and ordinary application developers rarely touch the Kernel layer.

Below is a simplified camera framework diagram:


The APP layer calls framework-layer Java classes. The framework layer contains not only Java classes but also JNI calls, and its camera client obtains the lower-layer service over IPC (Binder) before entering the HAL layer.

Within the framework layer, the Java framework calls the native framework through JNI. The native side acts only as the Client, exposing call interfaces to the upper application, while the actual work is done by the Server side (CameraService); Client and Server communicate over Binder. Looking at Camera's C/S architecture on its own, the diagram is as follows:


For the Android camera, there are three key phases: initialize, preview, and takePicture.

The Android source analyzed here is version 7.1.1, which is relatively recent, so the details may differ from older write-ups.

Let's start the analysis from the Application layer.

1. Camera initialization

Camera2's initialization flow differs from Camera 1.0; this article uses the built-in Camera2 app to analyze the Camera2 initialization process. The first component started is CameraActivity, which extends QuickActivity. You will notice that it does not override lifecycle methods such as onCreate; the template-method design pattern is used here, and QuickActivity's onCreate calls methods such as onCreateTasks, so to understand onCreate you only need to look at onCreateTasks.
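The template-method relationship between QuickActivity and CameraActivity described above can be sketched roughly as follows (hypothetical class names; the real classes live in the Camera2 app sources):

```java
// Minimal sketch of the template-method pattern: onCreate() is the fixed
// skeleton that subclasses never override; onCreateTasks() is the hook.
abstract class QuickActivitySketch {
    private final StringBuilder log = new StringBuilder();

    // Template method: final, so subclasses cannot change the skeleton.
    public final void onCreate() {
        log.append("profileStart;");
        onCreateTasks(); // subclass hook, like CameraActivity.onCreateTasks
        log.append("profileStop;");
    }

    protected abstract void onCreateTasks();

    protected void record(String step) {
        log.append(step).append(";");
    }

    public String trace() {
        return log.toString();
    }
}

class CameraActivitySketch extends QuickActivitySketch {
    @Override
    protected void onCreateTasks() {
        record("initCameraObjects");
    }
}
```

Calling `new CameraActivitySketch().onCreate()` runs the fixed skeleton with the subclass hook spliced in, which is exactly why reading onCreateTasks is enough to understand onCreate.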

CameraActivity.java lives under package/apps/Camera2/src/com/camera/.

//CameraActivity.java
@Override
public void onCreateTasks(Bundle state) {
    Profile profile = mProfiler.create("CameraActivity.onCreateTasks").start();
    try {
        // (1) Initialize the mOneCameraOpener and mOneCameraManager objects
        mOneCameraOpener = OneCameraModule.provideOneCameraOpener(
                mFeatureConfig, mAppContext, mActiveCameraDeviceTracker,
                ResolutionUtil.getDisplayMetrics(this));
        mOneCameraManager = OneCameraModule.provideOneCameraManager();
    } catch (OneCameraException e) {...}
    // (2) Set up module information
    ModulesInfo.setupModules(mAppContext, mModuleManager, mFeatureConfig);
    // (3) Perform initialization
    mCurrentModule.init(this, isSecureCamera(), isCaptureIntent());
}

From the code above, there are three main steps:

(1) initialize the mOneCameraOpener and mOneCameraManager objects;

(2) set up module information;

(3) perform initialization.

(1) The first step initializes the OneCameraOpener and mOneCameraManager objects. To initialize the camera we must open it and then manage the opened device, hence these two objects; both are created in OneCameraModule.java.

OneCameraModule.java lives under package/apps/Camera2/src/com/camera/.

//OneCameraModule.java
public static OneCameraOpener provideOneCameraOpener(OneCameraFeatureConfig featureConfig,
        Context context, ActiveCameraDeviceTracker activeCameraDeviceTracker,
        DisplayMetrics displayMetrics) throws OneCameraException {
    Optional<OneCameraOpener> manager = Camera2OneCameraOpenerImpl.create(
            featureConfig, context, activeCameraDeviceTracker, displayMetrics);
    ...
    return manager.get();
}

public static OneCameraManager provideOneCameraManager() throws OneCameraException {
    Optional<Camera2OneCameraManagerImpl> camera2HwManager = Camera2OneCameraManagerImpl.create();
    if (camera2HwManager.isPresent()) {
        return camera2HwManager.get();
    }
    ...
}

It calls Camera2OneCameraOpenerImpl's create method to obtain a OneCameraOpener object, and Camera2OneCameraManagerImpl's create method to obtain a OneCameraManager object.

//Camera2OneCameraOpenerImpl.java
public static Optional<OneCameraOpener> create(OneCameraFeatureConfig featureConfig,
        Context context, ActiveCameraDeviceTracker activeCameraDeviceTracker,
        DisplayMetrics displayMetrics) {
    ...
    OneCameraOpener oneCameraOpener = new Camera2OneCameraOpenerImpl(featureConfig,
            context, cameraManager, activeCameraDeviceTracker, displayMetrics);
    return Optional.of(oneCameraOpener);
}

//Camera2OneCameraManagerImpl.java
public static Optional<Camera2OneCameraManagerImpl> create() {
    ...
    Camera2OneCameraManagerImpl hardwareManager =
            new Camera2OneCameraManagerImpl(cameraManager);
    return Optional.of(hardwareManager);
}
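These create() factories return an Optional rather than throwing when the underlying manager cannot be built. A minimal sketch of the pattern, with an illustrative serviceAvailable flag standing in for the real availability checks:

```java
import java.util.Optional;

// Sketch of the Optional-returning create() factories used by
// Camera2OneCameraOpenerImpl / Camera2OneCameraManagerImpl (illustrative name).
final class OneCameraManagerSketch {
    private OneCameraManagerSketch() {}

    static Optional<OneCameraManagerSketch> create(boolean serviceAvailable) {
        if (!serviceAvailable) {
            // Mirrors the real factories: no exception here, just an absent
            // value, so the provide*() caller decides how to react.
            return Optional.empty();
        }
        return Optional.of(new OneCameraManagerSketch());
    }
}
```

The caller then checks isPresent() before get(), which is exactly what provideOneCameraManager does above.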

(2) Set up module information, i.e. ModulesInfo.setupModules(mAppContext, mModuleManager, mFeatureConfig). This part matters: it registers a series of modules according to the configuration:

//ModulesInfo.java
public static void setupModules(Context context, ModuleManager moduleManager,
        OneCameraFeatureConfig config) {
    ...
    // Register the Photo module
    registerPhotoModule(moduleManager, photoModuleId, SettingsScopeNamespaces.PHOTO,
            config.isUsingCaptureModule());
    // Make it the default module
    moduleManager.setDefaultModuleIndex(photoModuleId);
    // Register the Video module
    registerVideoModule(moduleManager, res.getInteger(R.integer.camera_mode_video),
            SettingsScopeNamespaces.VIDEO);
    if (PhotoSphereHelper.hasLightCycleCapture(context)) { // light-cycle capture available
        // Register the wide-angle (panorama) module
        registerWideAngleModule(moduleManager,
                res.getInteger(R.integer.camera_mode_panorama),
                SettingsScopeNamespaces.PANORAMA);
        // Register the photo-sphere module
        registerPhotoSphereModule(moduleManager,
                res.getInteger(R.integer.camera_mode_photosphere),
                SettingsScopeNamespaces.PANORAMA);
    }
    // If refocus capture is available
    if (RefocusHelper.hasRefocusCapture(context)) {
        // Register the refocus module
        registerRefocusModule(moduleManager,
                res.getInteger(R.integer.camera_mode_refocus),
                SettingsScopeNamespaces.REFOCUS);
    }
    // If Gcam (HDR+) is available as a separate module
    if (GcamHelper.hasGcamAsSeparateModule(config)) {
        // Register the Gcam module
        registerGcamModule(moduleManager,
                res.getInteger(R.integer.camera_mode_gcam),
                SettingsScopeNamespaces.PHOTO,
                config.getHdrPlusSupportLevel(OneCamera.Facing.BACK));
    }
    int imageCaptureIntentModuleId = res.getInteger(
            R.integer.camera_mode_capture_intent);
    registerCaptureIntentModule(moduleManager,
            imageCaptureIntentModuleId, SettingsScopeNamespaces.PHOTO,
            config.isUsingCaptureModule());
}

Since the Camera app can take both photos and videos, we only analyze the registration of PhotoModule here:

private static void registerPhotoModule(ModuleManager moduleManager, final int moduleId,
        final String namespace, final boolean enableCaptureModule) {
    moduleManager.registerModule(new ModuleManager.ModuleAgent() {
        ...
        public ModuleController createModule(AppController app, Intent intent) {
            Log.v(TAG, "EnableCaptureModule = " + enableCaptureModule);
            return enableCaptureModule ? new CaptureModule(app) : new PhotoModule(app);
        }
    });
}

As the code shows, the key to registering the photo module is the createModule method of the ModuleManager.ModuleAgent passed to moduleManager.registerModule: the expression enableCaptureModule ? new CaptureModule(app) : new PhotoModule(app) decides whether a CaptureModule or a PhotoModule is registered.
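The agent-based registration above can be sketched with simplified stand-in types: each agent is a factory, and the module is only constructed when createModule() is finally called.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for ModuleManager.ModuleAgent: a deferred module factory.
interface ModuleAgentSketch {
    String createModule();
}

// Simplified stand-in for ModuleManager: stores agents, builds modules lazily.
class ModuleManagerSketch {
    private final List<ModuleAgentSketch> agents = new ArrayList<>();
    private int defaultIndex = -1;

    void registerModule(ModuleAgentSketch agent) {
        agents.add(agent);
    }

    void setDefaultModuleIndex(int index) {
        defaultIndex = index;
    }

    String createDefaultModule() {
        // Construction happens here, not at registration time.
        return agents.get(defaultIndex).createModule();
    }
}
```

Registering a photo agent whose factory mirrors the enableCaptureModule ternary, then asking for the default module, yields a CaptureModule when capture mode is enabled.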

(3) Perform initialization: mCurrentModule.init();

//CaptureModule.java
public void init(CameraActivity activity, boolean isSecureCamera, boolean isCaptureIntent) {
    ...
    HandlerThread thread = new HandlerThread("CaptureModule.mCameraHandler");
    thread.start();
    mCameraHandler = new Handler(thread.getLooper());
    // Get the OneCameraOpener object created in step (1)
    mOneCameraOpener = mAppController.getCameraOpener();
    ...
    // Get the OneCameraManager object created earlier
    mOneCameraManager = OneCameraModule.provideOneCameraManager();
    ...
    // Create the CaptureModule UI
    mUI = new CaptureModuleUI(activity, mAppController.getModuleLayoutRoot(), mUIListener);
    // Register the preview-status listener
    mAppController.setPreviewStatusListener(mPreviewStatusListener);
    synchronized (mSurfaceTextureLock) {
        // Get the SurfaceTexture
        mPreviewSurfaceTexture = mAppController.getCameraAppUI().getSurfaceTexture();
    }
}

It first fetches the OneCameraOpener and OneCameraManager objects created earlier, then registers the preview-status listener. Here we focus on that listener:

//CaptureModule.java
private final PreviewStatusListener mPreviewStatusListener = new PreviewStatusListener() {
    ...
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        updatePreviewTransform(width, height, true);
        synchronized (mSurfaceTextureLock) {
            mPreviewSurfaceTexture = surface;
        }
        // Open the camera
        reopenCamera();
    }

    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        Log.d(TAG, "onSurfaceTextureDestroyed");
        synchronized (mSurfaceTextureLock) {
            mPreviewSurfaceTexture = null;
        }
        // Close the camera
        closeCamera();
        return true;
    }

    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
        // Update the preview buffer size
        updatePreviewBufferSize();
    }
    ...
};
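The listener ties the camera's lifetime to the SurfaceTexture's. A minimal synchronous sketch of that contract (stand-in methods and plain Object in place of the real Android types):

```java
// Sketch: the preview surface's availability drives camera open/close,
// with simple fields standing in for reopenCamera()/closeCamera().
class PreviewLifecycleSketch {
    private Object previewSurfaceTexture;
    private boolean cameraOpen;

    void onSurfaceTextureAvailable(Object surface) {
        previewSurfaceTexture = surface;
        cameraOpen = true; // stands in for reopenCamera()
    }

    boolean onSurfaceTextureDestroyed() {
        previewSurfaceTexture = null;
        cameraOpen = false; // stands in for closeCamera()
        return true;        // true: the caller may release the SurfaceTexture
    }

    boolean isCameraOpen() {
        return cameraOpen;
    }
}
```

The invariant to notice: the camera is open exactly while a preview surface exists, which is why the open logic lives in these callbacks rather than in onCreate.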

When the SurfaceTexture becomes available, reopenCamera() is called to open the camera. Let's look at reopenCamera():

private void reopenCamera() {
    if (mPaused) {
        return;
    }
    AsyncTask.THREAD_POOL_EXECUTOR.execute(new Runnable() {
        @Override
        public void run() {
            closeCamera();
            if (!mAppController.isPaused()) {
                // Open the camera and start the preview
                openCameraAndStartPreview();
            }
        }
    });
}

Continuing with openCameraAndStartPreview():

//CaptureModule.java
private void openCameraAndStartPreview() {
    ...
    if (mOneCameraOpener == null) {
        ...
        // Release the camera open/close lock
        mCameraOpenCloseLock.release();
        mAppController.getFatalErrorHandler().onGenericCameraAccessFailure();
        guard.stop("No OneCameraManager");
        return;
    }
    // Derive objects necessary for camera creation.
    MainThread mainThread = MainThread.create();
    // Find the CameraId to open
    CameraId cameraId = mOneCameraManager.findFirstCameraFacing(mCameraFacing);
    ...
    // Open the camera
    mOneCameraOpener.open(cameraId, captureSetting, mCameraHandler,
            mainThread, imageRotationCalculator, mBurstController,
            mSoundPlayer, new OpenCallback() {
        public void onFailure() {
            // Handle the failure
            ...
        }

        @Override
        public void onCameraClosed() {
            ...
        }

        @Override
        public void onCameraOpened(@Nonnull final OneCamera camera) {
            Log.d(TAG, "onCameraOpened: " + camera);
            mCamera = camera;
            if (mAppController.isPaused()) {
                onFailure();
                return;
            }
            ...
            mMainThread.execute(new Runnable() {
                @Override
                public void run() {
                    // Notify the UI that the camera changed
                    mAppController.getCameraAppUI().onChangeCamera();
                    // Enable the shutter button
                    mAppController.getButtonManager().enableCameraButton();
                }
            });
            // The camera is now open; start the preview
            camera.startPreview(new Surface(getPreviewSurfaceTexture()),
                    new CaptureReadyCallback() {
                @Override
                public void onSetupFailed() {
                    ...
                }

                @Override
                public void onReadyForCapture() {
                    // Release the lock
                    mCameraOpenCloseLock.release();
                    mMainThread.execute(new Runnable() {
                        @Override
                        public void run() {
                            ...
                            onPreviewStarted();
                            ...
                            onReadyStateChanged(true);
                            // Make CaptureModule listen for capture-ready state changes
                            mCamera.setReadyStateChangedListener(CaptureModule.this);
                            mUI.initializeZoom(mCamera.getMaxZoom());
                            mCamera.setFocusStateListener(CaptureModule.this);
                        }
                    });
                }
            });
        }
    }, mAppController.getFatalErrorHandler());
    guard.stop("mOneCameraOpener.open()");
}

This calls Camera2OneCameraOpenerImpl's open method to open the camera and passes a callback that handles the result: on failure it releases mCameraOpenCloseLock and reports the error; on success it notifies the UI and starts the preview (camera.startPreview() is invoked inside the OpenCallback), with the preview's own callbacks defined as well (in CaptureReadyCallback: onSetupFailed for setup failure, onReadyForCapture once capture is ready). Let's continue with the open path:

//Camera2OneCameraOpenerImpl.java
public void open(
    ...
    mActiveCameraDeviceTracker.onCameraOpening(cameraKey);
    // Open the camera: this calls the framework-layer CameraManager.openCamera,
    // entering the frameworks layer
    mCameraManager.openCamera(cameraKey.getValue(),
            new CameraDevice.StateCallback() {
        private boolean isFirstCallback = true;
        ...
        public void onOpened(CameraDevice device) {
            // First invocation of this callback
            if (isFirstCallback) {
                isFirstCallback = false;
                try {
                    CameraCharacteristics characteristics = mCameraManager
                            .getCameraCharacteristics(device.getId());
                    ...
                    // Create the OneCamera object
                    OneCamera oneCamera = OneCameraCreator.create(device,
                            characteristics, mFeatureConfig, captureSetting,
                            mDisplayMetrics, mContext, mainThread,
                            imageRotationCalculator, burstController, soundPlayer,
                            fatalErrorHandler);
                    if (oneCamera != null) {
                        // If oneCamera is non-null, invoke onCameraOpened (analyzed earlier)
                        openCallback.onCameraOpened(oneCamera);
                    } else {
                        ...
                        openCallback.onFailure();
                    }
                } catch (CameraAccessException e) {
                    openCallback.onFailure();
                } catch (OneCameraAccessException e) {
                    Log.d(TAG, "Could not create OneCamera", e);
                    openCallback.onFailure();
                }
            }
        }
    }, handler);
    ...
}
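The open path is callback-driven end to end: for each open attempt, exactly one of onCameraOpened/onFailure fires on the supplied callback. A minimal sketch of that contract (illustrative names; deviceAvailable stands in for OneCameraCreator.create returning null or throwing):

```java
// Sketch of the OpenCallback contract used by the opener above.
interface OpenCallbackSketch {
    void onCameraOpened(String camera);
    void onFailure();
}

class OneCameraOpenerSketch {
    // Invokes exactly one callback method per call, like the real open path.
    void open(String cameraId, boolean deviceAvailable, OpenCallbackSketch cb) {
        if (!deviceAvailable) {
            cb.onFailure();
            return;
        }
        cb.onCameraOpened("OneCamera[" + cameraId + "]");
    }
}
```

In the real code the callback runs asynchronously on a Handler; the sketch keeps it synchronous so the one-of-two-outcomes contract is easy to see.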

It then calls CameraManager's openCamera method to open the camera and moves into framework-layer initialization.

(CameraManager is the bridge between the App layer and the framework layer.)

Below is the sequence diagram for App-layer initialization:


2. Framework-layer initialization

CameraManager's openCamera takes us into the framework layer; here is the method:

//CameraManager.java
public void openCamera(@NonNull String cameraId,
        @NonNull final CameraDevice.StateCallback callback, @Nullable Handler handler)
        throws CameraAccessException {
    openCameraForUid(cameraId, callback, handler, USE_CALLING_UID);
}

openCamera calls the openCameraForUid method:

//CameraManager.java
public void openCameraForUid(@NonNull String cameraId,
        @NonNull final CameraDevice.StateCallback callback, @Nullable Handler handler,
        int clientUid)
        throws CameraAccessException {
    ...
    openCameraDeviceUserAsync(cameraId, callback, handler, clientUid);
}

which in turn calls openCameraDeviceUserAsync:

private CameraDevice openCameraDeviceUserAsync(String cameraId,
        CameraDevice.StateCallback callback, Handler handler, final int uid)
        throws CameraAccessException {
    CameraCharacteristics characteristics = getCameraCharacteristics(cameraId);
    CameraDevice device = null;
    synchronized (mLock) {
        ICameraDeviceUser cameraUser = null;
        android.hardware.camera2.impl.CameraDeviceImpl deviceImpl =
                new android.hardware.camera2.impl.CameraDeviceImpl(
                        cameraId,
                        callback,
                        handler,
                        characteristics);
        ICameraDeviceCallbacks callbacks = deviceImpl.getCallbacks();
        ...
        try {
            if (supportsCamera2ApiLocked(cameraId)) {
                ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
                ...
                cameraUser = cameraService.connectDevice(callbacks, id,
                        mContext.getOpPackageName(), uid);
            } else {
                // Use legacy camera implementation for HAL1 devices
                Log.i(TAG, "Using legacy camera HAL.");
                cameraUser = CameraDeviceUserShim.connectBinderShim(callbacks, id);
            }
        } catch (ServiceSpecificException e) {
            ...
        } catch (RemoteException e) {
            ...
        }
        ...
        deviceImpl.setRemoteDevice(cameraUser);
        device = deviceImpl;
    }
    return device;
}

The most important work in this method is obtaining CameraService through Binder, then connecting to the concrete device via cameraUser = cameraService.connectDevice(callbacks, id, mContext.getOpPackageName(), uid);

//CameraService.cpp
Status CameraService::connectDevice(
        const sp<hardware::camera2::ICameraDeviceCallbacks>& cameraCb,
        int cameraId,
        const String16& clientPackageName,
        int clientUid,
        /*out*/
        sp<hardware::camera2::ICameraDeviceUser>* device) {
    ATRACE_CALL();
    Status ret = Status::ok();
    String8 id = String8::format("%d", cameraId);
    sp<CameraDeviceClient> client = nullptr;
    ret = connectHelper<hardware::camera2::ICameraDeviceCallbacks, CameraDeviceClient>(
            cameraCb, id,
            CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName,
            clientUid, USE_CALLING_PID, API_2,
            /*legacyMode*/ false, /*shimUpdateOnly*/ false,
            /*out*/client);
    if (!ret.isOk()) {
        logRejected(id, getCallingPid(), String8(clientPackageName),
                ret.toString8());
        return ret;
    }
    *device = client;
    return ret;
}

which calls connectHelper:

//CameraService.h
template<class CALLBACK, class CLIENT>
binder::Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
        int halVersion, const String16& clientPackageName, int clientUid, int clientPid,
        apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
        /*out*/sp<CLIENT>& device) {
    binder::Status ret = binder::Status::ok();
    String8 clientName8(clientPackageName);
    int originalClientPid = 0;
    ALOGI("CameraService::connect call (PID %d \"%s\", camera ID %s) for HAL version %s and "
            "Camera API version %d", clientPid, clientName8.string(), cameraId.string(),
            (halVersion == -1) ? "default" : std::to_string(halVersion).c_str(),
            static_cast<int>(effectiveApiLevel));
    sp<CLIENT> client = nullptr;
    {
        // Acquire mServiceLock and prevent other clients from connecting
        std::unique_ptr<AutoConditionLock> lock =
                AutoConditionLock::waitAndAcquire(mServiceLockWrapper, DEFAULT_CONNECT_TIMEOUT_NS);
        if (lock == nullptr) {
            ALOGE("CameraService::connect (PID %d) rejected (too many other clients connecting)."
                    , clientPid);
            return STATUS_ERROR_FMT(ERROR_MAX_CAMERAS_IN_USE,
                    "Cannot open camera %s for \"%s\" (PID %d): Too many other clients connecting",
                    cameraId.string(), clientName8.string(), clientPid);
        }
        // Enforce client permissions and do basic sanity checks
        if (!(ret = validateConnectLocked(cameraId, clientName8,
                /*inout*/clientUid, /*inout*/clientPid, /*out*/originalClientPid)).isOk()) {
            return ret;
        }
        // Check the shim parameters after acquiring lock, if they have already been updated and
        // we were doing a shim update, return immediately
        if (shimUpdateOnly) {
            auto cameraState = getCameraState(cameraId);
            if (cameraState != nullptr) {
                if (!cameraState->getShimParams().isEmpty()) return ret;
            }
        }
        status_t err;
        sp<BasicClient> clientTmp = nullptr;
        std::shared_ptr<resource_policy::ClientDescriptor<String8, sp<BasicClient>>> partial;
        if ((err = handleEvictionsLocked(cameraId, originalClientPid, effectiveApiLevel,
                IInterface::asBinder(cameraCb), clientName8, /*out*/&clientTmp,
                /*out*/&partial)) != NO_ERROR) {
            switch (err) {
                case -ENODEV:
                    return STATUS_ERROR_FMT(ERROR_DISCONNECTED,
                            "No camera device with ID \"%s\" currently available",
                            cameraId.string());
                case -EBUSY:
                    return STATUS_ERROR_FMT(ERROR_CAMERA_IN_USE,
                            "Higher-priority client using camera, ID \"%s\" currently unavailable",
                            cameraId.string());
                default:
                    return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                            "Unexpected error %s (%d) opening camera \"%s\"",
                            strerror(-err), err, cameraId.string());
            }
        }
        if (clientTmp.get() != nullptr) {
            // Handle special case for API1 MediaRecorder where the existing client is returned
            device = static_cast<CLIENT*>(clientTmp.get());
            return ret;
        }
        // give flashlight a chance to close devices if necessary.
        mFlashlight->prepareDeviceOpen(cameraId);
        // TODO: Update getDeviceVersion + HAL interface to use strings for Camera IDs
        int id = cameraIdToInt(cameraId);
        if (id == -1) {
            ALOGE("%s: Invalid camera ID %s, cannot get device version from HAL.", __FUNCTION__,
                    cameraId.string());
            return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
                    "Bad camera ID \"%s\" passed to camera open", cameraId.string());
        }
        int facing = -1;
        int deviceVersion = getDeviceVersion(id, /*out*/&facing);
        sp<BasicClient> tmp = nullptr;
        if (!(ret = makeClient(this, cameraCb, clientPackageName, id, facing, clientPid,
                clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
                /*out*/&tmp)).isOk()) {
            return ret;
        }
        client = static_cast<CLIENT*>(tmp.get());
        LOG_ALWAYS_FATAL_IF(client.get() == nullptr, "%s: CameraService in invalid state",
                __FUNCTION__);
        if ((err = client->initialize(mModule)) != OK) {
            ALOGE("%s: Could not initialize client from HAL module.", __FUNCTION__);
            // Errors could be from the HAL module open call or from AppOpsManager
            switch (err) {
                case BAD_VALUE:
                    return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
                            "Illegal argument to HAL module for camera \"%s\"", cameraId.string());
                case -EBUSY:
                    return STATUS_ERROR_FMT(ERROR_CAMERA_IN_USE,
                            "Camera \"%s\" is already open", cameraId.string());
                case -EUSERS:
                    return STATUS_ERROR_FMT(ERROR_MAX_CAMERAS_IN_USE,
                            "Too many cameras already open, cannot open camera \"%s\"",
                            cameraId.string());
                case PERMISSION_DENIED:
                    return STATUS_ERROR_FMT(ERROR_PERMISSION_DENIED,
                            "No permission to open camera \"%s\"", cameraId.string());
                case -EACCES:
                    return STATUS_ERROR_FMT(ERROR_DISABLED,
                            "Camera \"%s\" disabled by policy", cameraId.string());
                case -ENODEV:
                default:
                    return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                            "Failed to initialize camera \"%s\": %s (%d)", cameraId.string(),
                            strerror(-err), err);
            }
        }
        // Update shim paremeters for legacy clients
        if (effectiveApiLevel == API_1) {
            // Assume we have always received a Client subclass for API1
            sp<Client> shimClient = reinterpret_cast<Client*>(client.get());
            String8 rawParams = shimClient->getParameters();
            CameraParameters params(rawParams);
            auto cameraState = getCameraState(cameraId);
            if (cameraState != nullptr) {
                cameraState->setShimParams(params);
            } else {
                ALOGE("%s: Cannot update shim parameters for camera %s, no such device exists.",
                        __FUNCTION__, cameraId.string());
            }
        }
        if (shimUpdateOnly) {
            // If only updating legacy shim parameters, immediately disconnect client
            mServiceLock.unlock();
            client->disconnect();
            mServiceLock.lock();
        } else {
            // Otherwise, add client to active clients list
            finishConnectLocked(client, partial);
        }
    } // lock is destroyed, allow further connect calls
    // Important: release the mutex here so the client can call back into the service from its
    // destructor (can be at the end of the call)
    device = client;
    return ret;
}

It then calls makeClient:

//CameraService.cpp
Status CameraService::makeClient(const sp<CameraService>& cameraService,
        const sp<IInterface>& cameraCb, const String16& packageName, int cameraId,
        int facing, int clientPid, uid_t clientUid, int servicePid, bool legacyMode,
        int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
        /*out*/sp<BasicClient>* client) {
    if (halVersion < 0 || halVersion == deviceVersion) {
        // Default path: HAL version is unspecified by caller, create CameraClient
        // based on device version reported by the HAL.
        switch (deviceVersion) {
            case CAMERA_DEVICE_API_VERSION_1_0:
                if (effectiveApiLevel == API_1) {  // Camera1 API route
                    sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                    *client = new CameraClient(cameraService, tmp, packageName, cameraId, facing,
                            clientPid, clientUid, getpid(), legacyMode);
                } else { // Camera2 API route
                    ALOGW("Camera using old HAL version: %d", deviceVersion);
                    return STATUS_ERROR_FMT(ERROR_DEPRECATED_HAL,
                            "Camera device \"%d\" HAL version %d does not support camera2 API",
                            cameraId, deviceVersion);
                }
                break;
            case CAMERA_DEVICE_API_VERSION_3_0:
            case CAMERA_DEVICE_API_VERSION_3_1:
            case CAMERA_DEVICE_API_VERSION_3_2:
            case CAMERA_DEVICE_API_VERSION_3_3:
            case CAMERA_DEVICE_API_VERSION_3_4:
                if (effectiveApiLevel == API_1) { // Camera1 API route
                    sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                    *client = new Camera2Client(cameraService, tmp, packageName, cameraId, facing,
                            clientPid, clientUid, servicePid, legacyMode);
                } else { // Camera2 API route
                    sp<hardware::camera2::ICameraDeviceCallbacks> tmp =
                            static_cast<hardware::camera2::ICameraDeviceCallbacks*>(cameraCb.get());
                    *client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
                            facing, clientPid, clientUid, servicePid);
                }
                break;
            default:
                // Should not be reachable
                ALOGE("Unknown camera device HAL version: %d", deviceVersion);
                return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                        "Camera device \"%d\" has unknown HAL version %d",
                        cameraId, deviceVersion);
        }
    } else {
        // A particular HAL version is requested by caller. Create CameraClient
        // based on the requested HAL version.
        if (deviceVersion > CAMERA_DEVICE_API_VERSION_1_0 &&
                halVersion == CAMERA_DEVICE_API_VERSION_1_0) {
            // Only support higher HAL version device opened as HAL1.0 device.
            sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
            *client = new CameraClient(cameraService, tmp, packageName, cameraId, facing,
                    clientPid, clientUid, servicePid, legacyMode);
        } else {
            // Other combinations (e.g. HAL3.x open as HAL2.x) are not supported yet.
            ALOGE("Invalid camera HAL version %x: HAL %x device can only be"
                    " opened as HAL %x device", halVersion, deviceVersion,
                    CAMERA_DEVICE_API_VERSION_1_0);
            return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
                    "Camera device \"%d\" (HAL version %d) cannot be opened as HAL version %d",
                    cameraId, deviceVersion, halVersion);
        }
    }
    return Status::ok();
}
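The dispatch in makeClient can be condensed as: the HAL device version and the caller's API level jointly pick the client class. A rough sketch (illustrative constants, with string results standing in for the real client classes):

```java
// Sketch of makeClient's default-path dispatch table.
final class ClientFactorySketch {
    static final int DEVICE_API_1_0 = 0x100; // illustrative values
    static final int DEVICE_API_3_2 = 0x302;

    static String makeClient(int deviceVersion, int apiLevel) {
        if (deviceVersion == DEVICE_API_1_0) {
            if (apiLevel == 1) {
                return "CameraClient"; // HAL1 device serving the camera1 API
            }
            // Mirrors the ERROR_DEPRECATED_HAL branch above
            throw new IllegalStateException("HAL1 device does not support camera2 API");
        }
        // HAL3.x device: client class depends on which API the app used
        return apiLevel == 1 ? "Camera2Client" : "CameraDeviceClient";
    }
}
```

Since the Camera2 app uses API_2 against a HAL3 device, this table lands on CameraDeviceClient, matching the text that follows.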

Because the Camera API 2.0 path is in use, client = new CameraDeviceClient(...) is created.

After creation, CameraDeviceClient's initialize method must be called (covered below).

//CameraDeviceClient.cpp
CameraDeviceClient::CameraDeviceClient(const sp<CameraService>& cameraService,
        const sp<hardware::camera2::ICameraDeviceCallbacks>& remoteCallback,
        const String16& clientPackageName,
        int cameraId,
        int cameraFacing,
        int clientPid,
        uid_t clientUid,
        int servicePid) :
    Camera2ClientBase(cameraService, remoteCallback, clientPackageName,
            cameraId, cameraFacing, clientPid, clientUid, servicePid),
    mInputStream(),
    mStreamingRequestId(REQUEST_ID_NONE),
    mRequestIdCounter(0) {
    ATRACE_CALL();
    ALOGI("CameraDeviceClient %d: Opened", cameraId);
}

This chains into the Camera2ClientBase constructor:

template <typename TClientBase>
Camera2ClientBase<TClientBase>::Camera2ClientBase(
        const sp<CameraService>& cameraService,
        const sp<TCamCallbacks>& remoteCallback,
        const String16& clientPackageName,
        int cameraId,
        int cameraFacing,
        int clientPid,
        uid_t clientUid,
        int servicePid):
    TClientBase(cameraService, remoteCallback, clientPackageName,
            cameraId, cameraFacing, clientPid, clientUid, servicePid),
    mSharedCameraCallbacks(remoteCallback),
    mDeviceVersion(cameraService->getDeviceVersion(cameraId)),
    mDeviceActive(false)
{
    ALOGI("Camera %d: Opened. Client: %s (PID %d, UID %d)", cameraId,
            String8(clientPackageName).string(), clientPid, clientUid);
    mInitialClientPid = clientPid;
    mDevice = new Camera3Device(cameraId);
    LOG_ALWAYS_FATAL_IF(mDevice == 0, "Device should never be NULL here.");
}

which creates mDevice = new Camera3Device(cameraId):

Camera3Device::Camera3Device(int id):
        mId(id),
        mIsConstrainedHighSpeedConfiguration(false),
        mHal3Device(NULL),
        mStatus(STATUS_UNINITIALIZED),
        mStatusWaiters(0),
        mUsePartialResult(false),
        mNumPartialResults(1),
        mTimestampOffset(0),
        mNextResultFrameNumber(0),
        mNextReprocessResultFrameNumber(0),
        mNextShutterFrameNumber(0),
        mNextReprocessShutterFrameNumber(0),
        mListener(NULL)
{
    ATRACE_CALL();
    camera3_callback_ops::notify = &sNotify;
    camera3_callback_ops::process_capture_result = &sProcessCaptureResult;
    ALOGV("%s: Created device for camera %d", __FUNCTION__, id);
}

After the CameraDeviceClient object is created, initialize is called on it:

status_t CameraDeviceClient::initialize(CameraModule *module)
{
    ATRACE_CALL();
    status_t res;
    res = Camera2ClientBase::initialize(module);
    if (res != OK) {
        return res;
    }
    String8 threadName;
    mFrameProcessor = new FrameProcessorBase(mDevice);
    threadName = String8::format("CDU-%d-FrameProc", mCameraId);
    mFrameProcessor->run(threadName.string());
    mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
            FRAME_PROCESSOR_LISTENER_MAX_ID,
            /*listener*/this,
            /*sendPartials*/true);
    return OK;
}

This initialization in turn calls Camera2ClientBase's initialize(module):

template <typename TClientBase>
status_t Camera2ClientBase<TClientBase>::initialize(CameraModule *module) {
    ATRACE_CALL();
    ALOGV("%s: Initializing client for camera %d", __FUNCTION__,
            TClientBase::mCameraId);
    status_t res;
    // Verify ops permissions
    res = TClientBase::startCameraOps();
    if (res != OK) {
        return res;
    }
    if (mDevice == NULL) {
        ALOGE("%s: Camera %d: No device connected",
                __FUNCTION__, TClientBase::mCameraId);
        return NO_INIT;
    }
    res = mDevice->initialize(module);
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
                __FUNCTION__, TClientBase::mCameraId, strerror(-res), res);
        return res;
    }
    wp<CameraDeviceBase::NotificationListener> weakThis(this);
    res = mDevice->setNotifyCallback(weakThis);
    return OK;
}

During Camera2ClientBase's initialization, Camera3Device's initialize is called:

status_t Camera3Device::initialize(CameraModule *module)
{
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);
    ALOGV("%s: Initializing device for camera %d", __FUNCTION__, mId);
    if (mStatus != STATUS_UNINITIALIZED) {
        CLOGE("Already initialized!");
        return INVALID_OPERATION;
    }
    /** Open HAL device */
    status_t res;
    String8 deviceName = String8::format("%d", mId);
    camera3_device_t *device;
    ATRACE_BEGIN("camera3->open");
    res = module->open(deviceName.string(),
            reinterpret_cast<hw_device_t**>(&device));
    ATRACE_END();
    if (res != OK) {
        SET_ERR_L("Could not open camera: %s (%d)", strerror(-res), res);
        return res;
    }
    /** Cross-check device version */
    if (device->common.version < CAMERA_DEVICE_API_VERSION_3_0) {
        SET_ERR_L("Could not open camera: "
                "Camera device should be at least %x, reports %x instead",
                CAMERA_DEVICE_API_VERSION_3_0,
                device->common.version);
        device->common.close(&device->common);
        return BAD_VALUE;
    }
    camera_info info;
    res = module->getCameraInfo(mId, &info);
    if (res != OK) return res;
    if (info.device_version != device->common.version) {
        SET_ERR_L("HAL reporting mismatched camera_info version (%x)"
                " and device version (%x).",
                info.device_version, device->common.version);
        device->common.close(&device->common);
        return BAD_VALUE;
    }
    /** Initialize device with callback functions */
    ATRACE_BEGIN("camera3->initialize");
    res = device->ops->initialize(device, this);
    ATRACE_END();
    if (res != OK) {
        SET_ERR_L("Unable to initialize HAL device: %s (%d)",
                strerror(-res), res);
        device->common.close(&device->common);
        return BAD_VALUE;
    }
    /** Start up status tracker thread */
    mStatusTracker = new StatusTracker(this);
    res = mStatusTracker->run(String8::format("C3Dev-%d-Status", mId).string());
    if (res != OK) {
        SET_ERR_L("Unable to start status tracking thread: %s (%d)",
                strerror(-res), res);
        device->common.close(&device->common);
        mStatusTracker.clear();
        return res;
    }
    /** Register in-flight map to the status tracker */
    mInFlightStatusId = mStatusTracker->addComponent();
    /** Create buffer manager */
    mBufferManager = new Camera3BufferManager();
    bool aeLockAvailable = false;
    camera_metadata_ro_entry aeLockAvailableEntry;
    res = find_camera_metadata_ro_entry(info.static_camera_characteristics,
            ANDROID_CONTROL_AE_LOCK_AVAILABLE, &aeLockAvailableEntry);
    if (res == OK && aeLockAvailableEntry.count > 0) {
        aeLockAvailable = (aeLockAvailableEntry.data.u8[0] ==
                ANDROID_CONTROL_AE_LOCK_AVAILABLE_TRUE);
    }
    /** Start up request queue thread */
    mRequestThread = new RequestThread(this, mStatusTracker, device, aeLockAvailable);
    res = mRequestThread->run(String8::format("C3Dev-%d-ReqQueue", mId).string());
    if (res != OK) {
        SET_ERR_L("Unable to start request queue thread: %s (%d)",
                strerror(-res), res);
        device->common.close(&device->common);
        mRequestThread.clear();
        return res;
    }
    mPreparerThread = new PreparerThread();
    /** Everything is good to go */
    mDeviceVersion = device->common.version;
    mDeviceInfo = info.static_camera_characteristics;
    mHal3Device = device;
    // Determine whether we need to derive sensitivity boost values for older devices.
    // If post-RAW sensitivity boost range is listed, so should post-raw sensitivity control
    // be listed (as the default value 100)
    if (mDeviceVersion < CAMERA_DEVICE_API_VERSION_3_4 &&
            mDeviceInfo.exists(ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST_RANGE)) {
        mDerivePostRawSensKey = true;
    }
    internalUpdateStatusLocked(STATUS_UNCONFIGURED);
    mNextStreamId = 0;
    mDummyStreamId = NO_STREAM;
    mNeedConfig = true;
    mPauseStateNotify = false;
    // Measure the clock domain offset between camera and video/hw_composer
    camera_metadata_entry timestampSource =
            mDeviceInfo.find(ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE);
    if (timestampSource.count > 0 && timestampSource.data.u8[0] ==
            ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) {
        mTimestampOffset = getMonoToBoottimeOffset();
    }
    // Will the HAL be sending in early partial result metadata?
    if (mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2) {
        camera_metadata_entry partialResultsCount =
                mDeviceInfo.find(ANDROID_REQUEST_PARTIAL_RESULT_COUNT);
        if (partialResultsCount.count > 0) {
            mNumPartialResults = partialResultsCount.data.i32[0];
            mUsePartialResult = (mNumPartialResults > 1);
        }
    } else {
        camera_metadata_entry partialResultsQuirk =
                mDeviceInfo.find(ANDROID_QUIRKS_USE_PARTIAL_RESULT);
        if (partialResultsQuirk.count > 0 && partialResultsQuirk.data.u8[0] == 1) {
            mUsePartialResult = true;
        }
    }
    camera_metadata_entry configs =
            mDeviceInfo.find(ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS);
    for (uint32_t i = 0; i < configs.count; i += 4) {
        if (configs.data.i32[i] == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED &&
                configs.data.i32[i + 3] ==
                ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_INPUT) {
            mSupportedOpaqueInputSizes.add(Size(configs.data.i32[i + 1],
                    configs.data.i32[i + 2]));
        }
    }
    return OK;
}

Camera3Device's initialization contains three important steps:

First, module->open() opens the HAL module;

Second, device->ops->initialize() performs HAL-layer initialization;

Third, mRequestThread->run() starts the request thread.

With that, framework-layer initialization is complete; next comes HAL-layer initialization.
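The three-step ordering in Camera3Device::initialize, including its close-on-failure error path, can be sketched as follows (Java stand-ins for the C++ flow; the trace strings are illustrative):

```java
// Sketch: open the HAL module, run the HAL's initialize with our callbacks,
// then start the request thread; a failure after open closes the device again.
class Camera3DeviceInitSketch {
    private final StringBuilder trace = new StringBuilder();
    private boolean deviceOpen;

    boolean initialize(boolean halInitSucceeds) {
        trace.append("module.open;");
        deviceOpen = true;
        trace.append("device.ops.initialize;");
        if (!halInitSucceeds) {
            deviceOpen = false; // device->common.close(...) on the error path
            trace.append("close;");
            return false;
        }
        trace.append("requestThread.run;");
        return true;
    }

    String trace() { return trace.toString(); }
    boolean isDeviceOpen() { return deviceOpen; }
}
```

The point of the ordering: the request thread is only started once the HAL reports a successfully initialized device, and every failure after open undoes the open.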

The framework-layer initialization sequence diagram:


3. HAL-layer initialization

The device->ops->initialize call inside Camera3Device.cpp's initialize takes us into HAL-layer initialization:

//QCamera3HardwareInterface.cpp
int QCamera3HardwareInterface::initialize(const struct camera3_device *device,
        const camera3_callback_ops_t *callback_ops)
{
    LOGD("E");
    QCamera3HardwareInterface *hw =
            reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
    if (!hw) {
        LOGE("NULL camera device");
        return -ENODEV;
    }
    int rc = hw->initialize(callback_ops);
    LOGD("X");
    return rc;
}

which calls QCamera3HardwareInterface's initialize(callback_ops):

//QCamera3HardwareInterface.cpp
int QCamera3HardwareInterface::initialize(
        const struct camera3_callback_ops *callback_ops)
{
    ATRACE_CALL();
    int rc;
    LOGI("E :mCameraId = %d mState = %d", mCameraId, mState);
    pthread_mutex_lock(&mMutex);
    // Validate current state
    switch (mState) {
        case OPENED:
            /* valid state */
            break;
        default:
            LOGE("Invalid state %d", mState);
            rc = -ENODEV;
            goto err1;
    }
    rc = initParameters();
    if (rc < 0) {
        LOGE("initParamters failed %d", rc);
        goto err1;
    }
    mCallbackOps = callback_ops;
    mChannelHandle = mCameraHandle->ops->add_channel(
            mCameraHandle->camera_handle, NULL, NULL, this);
    if (mChannelHandle == 0) {
        LOGE("add_channel failed");
        rc = -ENOMEM;
        pthread_mutex_unlock(&mMutex);
        return rc;
    }
    pthread_mutex_unlock(&mMutex);
    mCameraInitialized = true;
    mState = INITIALIZED;
    LOGI("X");
    return 0;

err1:
    pthread_mutex_unlock(&mMutex);
    return rc;
}

Two steps in this function matter most:

First, initParameters(), which initializes the parameter buffer;

Second, the call to mm_camera_intf's add_channel method.

Looking at the first one:

int QCamera3HardwareInterface::initParameters()
{
    int rc = 0;
    //Allocate Set Param Buffer
    mParamHeap = new QCamera3HeapMemory(1);
    rc = mParamHeap->allocate(sizeof(metadata_buffer_t));
    if(rc != OK) {
        rc = NO_MEMORY;
        LOGE("Failed to allocate SETPARM Heap memory");
        delete mParamHeap;
        mParamHeap = NULL;
        return rc;
    }
    //Map memory for parameters buffer
    rc = mCameraHandle->ops->map_buf(mCameraHandle->camera_handle,
            CAM_MAPPING_BUF_TYPE_PARM_BUF,
            mParamHeap->getFd(0),
            sizeof(metadata_buffer_t),
            (metadata_buffer_t *) DATA_PTR(mParamHeap,0));
    if(rc < 0) {
        LOGE("failed to map SETPARM buffer");
        rc = FAILED_TRANSACTION;
        mParamHeap->deallocate();
        delete mParamHeap;
        mParamHeap = NULL;
        return rc;
    }
    mParameters = (metadata_buffer_t *) DATA_PTR(mParamHeap,0);
    mPrevParameters = (metadata_buffer_t *)malloc(sizeof(metadata_buffer_t));
    return rc;
}

mParamHeap is created through QCamera3HeapMemory, a subclass defined in QCamera3Mem.cpp; its constructor argument is the frame count, which bounds how many frames the heap can hold.

//QCamera3Mem.cpp
QCamera3HeapMemory::QCamera3HeapMemory(uint32_t maxCnt)
    : QCamera3Memory()
{
    mMaxCnt = MIN(maxCnt, MM_CAMERA_MAX_NUM_FRAMES);
    for (uint32_t i = 0; i < mMaxCnt; i ++)
        mPtr[i] = NULL;
}

Next the parameter buffer is mapped into memory:

rc = mCameraHandle->ops->map_buf(mCameraHandle->camera_handle,
        CAM_MAPPING_BUF_TYPE_PARM_BUF,
        mParamHeap->getFd(0),
        sizeof(metadata_buffer_t),
        (metadata_buffer_t *) DATA_PTR(mParamHeap,0));

Then the head pointer of mParamHeap is stored in mParameters, so the parameters can be manipulated through mParameters.

Now the second step: the add_channel method in mm_camera_intf.c.

In mm_camera_interface.c, the mm_camera_ops table maps add_channel to mm_camera_intf_add_channel:

//mm_camera_interface.c
static uint32_t mm_camera_intf_add_channel(uint32_t camera_handle,
        mm_camera_channel_attr_t *attr,
        mm_camera_buf_notify_t channel_cb,
        void *userdata)
{
    uint32_t ch_id = 0;
    mm_camera_obj_t * my_obj = NULL;
    CDBG("%s :E camera_handler = %d", __func__, camera_handle);
    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);
    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        ch_id = mm_camera_add_channel(my_obj, attr, channel_cb, userdata);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    CDBG("%s :X ch_id = %d", __func__, ch_id);
    return ch_id;
}

This method then calls mm_camera_add_channel in mm_camera.c:

//mm_camera.c
uint32_t mm_camera_add_channel(mm_camera_obj_t *my_obj,
        mm_camera_channel_attr_t *attr,
        mm_camera_buf_notify_t channel_cb,
        void *userdata)
{
    mm_channel_t *ch_obj = NULL;
    uint8_t ch_idx = 0;
    uint32_t ch_hdl = 0;
    for(ch_idx = 0; ch_idx < MM_CAMERA_CHANNEL_MAX; ch_idx++) {
        if (MM_CHANNEL_STATE_NOTUSED == my_obj->ch[ch_idx].state) {
            ch_obj = &my_obj->ch[ch_idx];
            break;
        }
    }
    if (NULL != ch_obj) {
        /* initialize channel obj */
        memset(ch_obj, 0, sizeof(mm_channel_t));
        ch_hdl = mm_camera_util_generate_handler(ch_idx);
        ch_obj->my_hdl = ch_hdl;
        ch_obj->state = MM_CHANNEL_STATE_STOPPED;
        ch_obj->cam_obj = my_obj;
        pthread_mutex_init(&ch_obj->ch_lock, NULL);
        mm_channel_init(ch_obj, attr, channel_cb, userdata);
    }
    pthread_mutex_unlock(&my_obj->cam_lock);
    return ch_hdl;
}

That method in turn calls mm_channel_init in mm_camera_channel.c:

//mm_camera_channel.c
int32_t mm_channel_init(mm_channel_t *my_obj,
        mm_camera_channel_attr_t *attr,
        mm_camera_buf_notify_t channel_cb,
        void *userdata)
{
    int32_t rc = 0;
    my_obj->bundle.super_buf_notify_cb = channel_cb;
    my_obj->bundle.user_data = userdata;
    if (NULL != attr) {
        my_obj->bundle.superbuf_queue.attr = *attr;
    }
    CDBG("%s : Launch data poll thread in channel open", __func__);
    snprintf(my_obj->threadName, THREAD_NAME_SIZE, "DataPoll");
    mm_camera_poll_thread_launch(&my_obj->poll_thread[0],
            MM_CAMERA_POLL_TYPE_DATA);
    /* change state to stopped state */
    my_obj->state = MM_CHANNEL_STATE_STOPPED;
    return rc;
}

It then calls mm_camera_poll_thread_launch() in mm_camera_thread.c:

//mm_camera_thread.c
int32_t mm_camera_poll_thread_launch(mm_camera_poll_thread_t * poll_cb,
        mm_camera_poll_thread_type_t poll_type)
{
    int32_t rc = 0;
    size_t i = 0, cnt = 0;
    poll_cb->poll_type = poll_type;
    //Initialize poll_fds
    cnt = sizeof(poll_cb->poll_fds) / sizeof(poll_cb->poll_fds[0]);
    for (i = 0; i < cnt; i++) {
        poll_cb->poll_fds[i].fd = -1;
    }
    //Initialize poll_entries
    cnt = sizeof(poll_cb->poll_entries) / sizeof(poll_cb->poll_entries[0]);
    for (i = 0; i < cnt; i++) {
        poll_cb->poll_entries[i].fd = -1;
    }
    //Initialize pipe fds
    poll_cb->pfds[0] = -1;
    poll_cb->pfds[1] = -1;
    rc = pipe(poll_cb->pfds);
    if(rc < 0) {
        CDBG_ERROR("%s: pipe open rc=%d\n", __func__, rc);
        return -1;
    }
    poll_cb->timeoutms = -1;  /* Infinite seconds */
    CDBG("%s: poll_type = %d, read fd = %d, write fd = %d timeout = %d",
            __func__, poll_cb->poll_type,
            poll_cb->pfds[0], poll_cb->pfds[1],poll_cb->timeoutms);
    pthread_mutex_init(&poll_cb->mutex, NULL);
    pthread_cond_init(&poll_cb->cond_v, NULL);
    /* launch the thread */
    pthread_mutex_lock(&poll_cb->mutex);
    poll_cb->status = 0;
    pthread_create(&poll_cb->pid, NULL, mm_camera_poll_thread, (void *)poll_cb);
    if(!poll_cb->status) {
        pthread_cond_wait(&poll_cb->cond_v, &poll_cb->mutex);
    }
    if (!poll_cb->threadName) {
        pthread_setname_np(poll_cb->pid, "CAM_poll");
    } else {
        pthread_setname_np(poll_cb->pid, poll_cb->threadName);
    }
    pthread_mutex_unlock(&poll_cb->mutex);
    CDBG("%s: End",__func__);
    return rc;
}

This function initializes poll_fds, poll_entries, and the pipe fds, then calls pthread_create(.., mm_camera_poll_thread, ..) to start the poll thread.

//mm_camera_thread.c
static void *mm_camera_poll_thread(void *data)
{
    prctl(PR_SET_NAME, (unsigned long)"mm_cam_poll_th", 0, 0, 0);
    mm_camera_poll_thread_t *poll_cb = (mm_camera_poll_thread_t *)data;
    /* add pipe read fd into poll first */
    poll_cb->poll_fds[poll_cb->num_fds++].fd = poll_cb->pfds[0];
    mm_camera_poll_sig_done(poll_cb);
    mm_camera_poll_set_state(poll_cb, MM_CAMERA_POLL_TASK_STATE_POLL);
    return mm_camera_poll_fn(poll_cb);
}

Finally it returns mm_camera_poll_fn(poll_cb):

//mm_camera_thread.c
static void *mm_camera_poll_fn(mm_camera_poll_thread_t *poll_cb)
{
    int rc = 0, i;
    if (NULL == poll_cb) {
        CDBG_ERROR("%s: poll_cb is NULL!\n", __func__);
        return NULL;
    }
    CDBG("%s: poll type = %d, num_fd = %d poll_cb = %p\n",
            __func__, poll_cb->poll_type, poll_cb->num_fds,poll_cb);
    do {
        for(i = 0; i < poll_cb->num_fds; i++) {
            poll_cb->poll_fds[i].events = POLLIN|POLLRDNORM|POLLPRI;
        }
        rc = poll(poll_cb->poll_fds, poll_cb->num_fds, poll_cb->timeoutms);
        if(rc > 0) {
            if ((poll_cb->poll_fds[0].revents & POLLIN) &&
                    (poll_cb->poll_fds[0].revents & POLLRDNORM)) {
                /* if we have data on pipe, we only process pipe in this iteration */
                CDBG("%s: cmd received on pipe\n", __func__);
                mm_camera_poll_proc_pipe(poll_cb);
            } else {
                for(i = 1; i < poll_cb->num_fds; i++) {
                    /* Checking for ctrl events */
                    if ((poll_cb->poll_type == MM_CAMERA_POLL_TYPE_EVT) &&
                            (poll_cb->poll_fds[i].revents & POLLPRI)) {
                        CDBG("%s: mm_camera_evt_notify\n", __func__);
                        if (NULL != poll_cb->poll_entries[i-1].notify_cb) {
                            poll_cb->poll_entries[i-1].notify_cb(
                                    poll_cb->poll_entries[i-1].user_data);
                        }
                    }
                    if ((MM_CAMERA_POLL_TYPE_DATA == poll_cb->poll_type) &&
                            (poll_cb->poll_fds[i].revents & POLLIN) &&
                            (poll_cb->poll_fds[i].revents & POLLRDNORM)) {
                        CDBG("%s: mm_stream_data_notify\n", __func__);
                        if (NULL != poll_cb->poll_entries[i-1].notify_cb) {
                            poll_cb->poll_entries[i-1].notify_cb(
                                    poll_cb->poll_entries[i-1].user_data);
                        }
                    }
                }
            }
        } else {
            /* in error case sleep 10 us and then continue. hard coded here */
            usleep(10);
            continue;
        }
    } while ((poll_cb != NULL) && (poll_cb->state == MM_CAMERA_POLL_TASK_STATE_POLL));
    return NULL;
}

At this point the HAL-layer initialization is complete.

Sequence diagram of the HAL-layer initialization:

669359bae33b0a3ab0b58fb0eed1f656.png
