The Preview Display Flow
This time we trace from the very beginning, at startPreview: inside startPreviewMode(), setPreviewWindow() is called right before startPreview().
// CameraClient.cpp
status_t CameraClient::startPreviewMode() {
    mHardware->previewEnabled();
    mHardware->setPreviewWindow(mPreviewWindow);
    result = mHardware->startPreview();
}
This eventually reaches Cam1DeviceBase::setPreviewWindow(); the call path down to it is the same as for startPreview(), so it is not repeated here.
Cam1DeviceBase::setPreviewWindow(preview_stream_ops* window)
{
    // (1) Initialize the display.
    status_t status = initDisplayClient(window);
    // (2) Start displaying.
    status = enableDisplayClient();
}
initDisplayClient(): note that step (2) of Cam1DeviceBase::startPreview(), discussed earlier, also calls initDisplayClient().
Cam1DeviceBase::initDisplayClient(preview_stream_ops* window)
{
    Size previewSize;
    // [1] Check to see whether the passed window is NULL or not.
    //...
    // [2] Get preview size. (query the preview parameters)
    queryPreviewSize(previewSize.width, previewSize.height);
    // [3] Initialize Display Client.
    // [3.1] create a Display Client. (only initializes member variables)
    mpDisplayClient = IDisplayClient::createInstance();
    // [3.2] initialize the newly-created Display Client.
    mpDisplayClient->init();
    // [3.3] set preview_stream_ops & related window info.
    mpDisplayClient->setWindow(window, previewSize.width, previewSize.height, queryDisplayBufCount());
    // [3.4] set Image Buffer Provider Client if it exists.
    mpCamAdapter != 0 && ! mpDisplayClient->setImgBufProviderClient(mpCamAdapter);
}
Below is a closer look at the calls made inside initDisplayClient().
DisplayClient::init()
{
    /**
     * Construct the DisplayThread and start it running. Note that the handler
     * passed to its constructor is the DisplayClient itself.
     * Also construct the ImgBufQueue, whose id is eID_DISPLAY.
     * (PreviewClient::init() also creates an ImgBufQueue, but its id is eID_PRV_CB.)
     **/
    ret = createDisplayThread() && createImgBufQueue();
}

DisplayClient::setWindow(preview_stream_ops* const window, int32_t const wndWidth,
                         int32_t const wndHeight, int32_t const i4MaxImgBufCount)
{
    return set_preview_stream_ops(window, wndWidth, wndHeight, i4MaxImgBufCount);
}

DisplayClient::set_preview_stream_ops(preview_stream_ops* const window, int32_t const wndWidth,
                                      int32_t const wndHeight, int32_t const i4MaxImgBufCount)
{
    int32_t min_undequeued_buf_count = 0;
    //
    // (2) Check
    //....
    // (3) Save info.
    mpStreamImgInfo.clear();
    mpStreamImgInfo = new ImgInfo(wndWidth, wndHeight, CAMERA_DISPLAY_FORMAT, CAMERA_DISPLAY_FORMAT_HAL, "Camera@Display");
    mpStreamOps = window;
    mi4MaxImgBufCount = i4MaxImgBufCount;
    // (4.1) Set gralloc usage bits for window.
    err = mpStreamOps->set_usage(mpStreamOps, CAMERA_GRALLOC_USAGE);
    // (4.2) Get minimum undequeued buffer count.
    err = mpStreamOps->get_min_undequeued_buffer_count(mpStreamOps, &min_undequeued_buf_count);
    // (4.3) Set the number of buffers needed for display.
    err = mpStreamOps->set_buffer_count(mpStreamOps, mi4MaxImgBufCount + min_undequeued_buf_count);
    // (4.4) Set window geometry.
    err = mpStreamOps->set_buffers_geometry(mpStreamOps, mpStreamImgInfo->mu4ImgWidth,
                                            mpStreamImgInfo->mu4ImgHeight, mpStreamImgInfo->mi4ImgFormat);
}

DisplayClient::setImgBufProviderClient(sp<IImgBufProviderClient> const& rpClient)
{
    rpClient->onImgBufProviderCreated(mpImgBufQueue);
    mpImgBufPvdrClient = rpClient;
}

// BaseCamAdapter.cpp
BaseCamAdapter::onImgBufProviderCreated(sp<IImgBufProvider> const& rpProvider)
{
    int32_t const i4ProviderId = rpProvider->getProviderId();
    mpImgBufProvidersMgr->setProvider(i4ProviderId, rpProvider);
}
That completes the initialization of the DisplayClient. But we still do not know what the parameters initialized above are for, or how the display actually starts refreshing. Let's return to enableDisplayClient() and keep tracing.
Cam1DeviceBase::enableDisplayClient()
{
    Size previewSize;
    // [1] Get preview size. (query the preview parameters)
    queryPreviewSize(previewSize.width, previewSize.height);
    // [2] Enable
    mpDisplayClient->enableDisplay(previewSize.width, previewSize.height, queryDisplayBufCount(), mpCamAdapter);
}

DisplayClient::enableDisplay(int32_t const i4Width, int32_t const i4Height,
                             int32_t const i4BufCount, sp<IImgBufProviderClient> const& rpClient)
{
    // Enable.
    enableDisplay();
}

DisplayClient::enableDisplay()
{
    // Post a command to wake up the thread. (send a message to the DisplayThread)
    mpDisplayThread->postCommand(Command(Command::eID_WAKEUP));
}

DisplayThread::threadLoop()
{
    Command cmd;
    if ( getCommand(cmd) )
    {
        switch (cmd.eId)
        {
        case Command::eID_EXIT:
            //....
        case Command::eID_WAKEUP:
        default:
            // This calls the handler's onThreadLoop().
            // As noted earlier, the DisplayThread's handler was set to the DisplayClient.
            mpThreadHandler->onThreadLoop(cmd);
            break;
        }
    }
}
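Before moving on, it may help to see the postCommand()/threadLoop() pairing in isolation. The sketch below is purely illustrative, using standard C++ and hypothetical names (WorkerThread, CmdId); it is not the MTK DisplayThread code. A caller posts a command, the worker thread wakes up and forwards it to a handler, just as DisplayThread forwards eID_WAKEUP to its handler, the DisplayClient.

```cpp
// Illustrative sketch of the postCommand()/threadLoop() pattern (not MTK code).
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

enum class CmdId { Wakeup, Exit };

class WorkerThread {
public:
    // The "handler" plays the role DisplayClient plays for DisplayThread.
    explicit WorkerThread(std::function<void(CmdId)> handler)
        : mHandler(std::move(handler)), mThread([this] { threadLoop(); }) {}

    ~WorkerThread() { postCommand(CmdId::Exit); mThread.join(); }

    // Equivalent of DisplayThread::postCommand(Command(Command::eID_WAKEUP)).
    void postCommand(CmdId cmd) {
        { std::lock_guard<std::mutex> lk(mMutex); mCmds.push_back(cmd); }
        mCond.notify_one();
    }

private:
    void threadLoop() {
        for (;;) {
            std::unique_lock<std::mutex> lk(mMutex);
            mCond.wait(lk, [this] { return !mCmds.empty(); });  // block until a command arrives
            CmdId cmd = mCmds.front();
            mCmds.pop_front();
            lk.unlock();
            if (cmd == CmdId::Exit) return;
            mHandler(cmd);  // like mpThreadHandler->onThreadLoop(cmd)
        }
    }

    std::function<void(CmdId)> mHandler;
    std::mutex mMutex;
    std::condition_variable mCond;
    std::deque<CmdId> mCmds;
    std::thread mThread;
};
```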
DisplayClient's onThreadLoop() lives in DisplayClient.BufOps.cpp. Look closely at the function below and it should feel familiar: its structure is exactly the same as PreviewClient's onClientThreadLoop(), and even the function names used inside are the same, although they are not the same functions.
DisplayClient::onThreadLoop(Command const& rCmd)
{
    // (0) lock Processor.
    sp<IImgBufQueue> pImgBufQueue;
    // (1) Prepare all TODO buffers. (prepare buffers and put them into the queue)
    if ( ! prepareAllTodoBuffers(pImgBufQueue) )
    {
        return true;
    }
    // (2) Start. (notify the processor to start working)
    pImgBufQueue->startProcessor();
    // (3) Do until disabled.
    while ( 1 )
    {
        // (.1) Block waiting for notification, then handle the returned buffers.
        waitAndHandleReturnBuffers(pImgBufQueue);
        // (.2) break if disabled.
        if ( ! isDisplayEnabled() )
        {
            MY_LOGI("Display disabled");
            break;
        }
        // (.3) re-prepare all TODO buffers, if possible,
        //      since some DONE/CANCEL buffers return. (refill the queue with buffers)
        prepareAllTodoBuffers(pImgBufQueue);
    }
    //
    // (4) Stop
    pImgBufQueue->pauseProcessor();
    pImgBufQueue->flushProcessor();
    pImgBufQueue->stopProcessor();
    //
    // (5) Cancel all un-returned buffers.
    cancelAllUnreturnBuffers();
    //
    {
        Mutex::Autolock _l(mStateMutex);
        mState = eState_Suspend;
        mStateCond.broadcast();
    }
}
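The loop above is built around the ImgBufQueue handshake: DisplayClient enqueues empty TODO buffers, the camera adapter (the "processor") fills them and returns them as DONE, and DisplayClient then dequeues the results and re-arms the queue. Here is a minimal, purely illustrative sketch of that handshake with hypothetical types (Frame, BufQueue); it is not the MTK ImgBufQueue API.

```cpp
// Illustrative TODO/DONE buffer handshake (hypothetical types, not the MTK ImgBufQueue).
#include <condition_variable>
#include <mutex>
#include <vector>

struct Frame { int id; bool done = false; };

class BufQueue {
public:
    // Producer side (DisplayClient): hand an empty buffer to the processor.
    void enqueTodo(Frame f) {
        std::lock_guard<std::mutex> lk(mMutex);
        mTodo.push_back(f);
        mCond.notify_all();
    }
    // Consumer side (camera adapter): take a TODO buffer, fill it, return it as DONE.
    bool processOne() {
        std::lock_guard<std::mutex> lk(mMutex);
        if (mTodo.empty()) return false;
        Frame f = mTodo.front();
        mTodo.erase(mTodo.begin());
        f.done = true;               // "fill" the buffer with a captured frame
        mDone.push_back(f);
        mCond.notify_all();
        return true;
    }
    // Producer side again: block until DONE buffers come back.
    std::vector<Frame> dequeDone() {
        std::unique_lock<std::mutex> lk(mMutex);
        mCond.wait(lk, [this] { return !mDone.empty(); });
        std::vector<Frame> out;
        out.swap(mDone);
        return out;
    }
private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::vector<Frame> mTodo, mDone;
};
```

In the real code, prepareAllTodoBuffers() plays the enqueTodo() role and waitAndHandleReturnBuffers() plays the dequeDone() role.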
Based on earlier experience, waitAndHandleReturnBuffers() is the natural place to look, and indeed that is where the returned data is handled. But how does the data actually get displayed? Let's keep reading.
DisplayClient::waitAndHandleReturnBuffers(sp<IImgBufQueue> const& rpBufQueue)
{
    Vector<ImgBufQueNode> vQueNode;
    // (1) deque buffers from processor. (block waiting for notification, then read the buffers)
    rpBufQueue->dequeProcessor(vQueNode);
    // (2) handle buffers dequed from processor.
    ret = handleReturnBuffers(vQueNode);
}

DisplayClient::handleReturnBuffers(Vector<ImgBufQueNode> const& rvQueNode)
{
    /*
     * Notes:
     * For 30 fps, we just enque (display) the latest frame,
     * and cancel the others.
     * For frame rate > 30 fps, we should judge the timestamp here or source.
     */
    //
    // (3) Remove from List and enquePrvOps/cancelPrvOps, one by one.
    //     (idxToDisp, computed earlier in the full source, is the index of the
    //      latest frame to display, as the note above describes.)
    int32_t const queSize = rvQueNode.size();
    for (int32_t i = 0; i < queSize; i++)
    {
        sp<IImgBuf> const& rpQueImgBuf = rvQueNode[i].getImgBuf();      // ImgBuf in Queue.
        sp<StreamImgBuf> const pStreamImgBuf = *mStreamBufList.begin(); // ImgBuf in List.
        // (.1) Check valid pointers to image buffers in Queue & List.
        if ( rpQueImgBuf == 0 || pStreamImgBuf == 0 )
        {
            MY_LOGW("Bad ImgBuf:(Que[%d], List.begin)=(%p, %p)", i, rpQueImgBuf.get(), pStreamImgBuf.get());
            continue;
        }
        // (.2) Check the equality of image buffers between Queue & List.
        if ( rpQueImgBuf->getVirAddr() != pStreamImgBuf->getVirAddr() )
        {
            MY_LOGW("Bad address in ImgBuf:(Que[%d], List.begin)=(%p, %p)", i, rpQueImgBuf->getVirAddr(), pStreamImgBuf->getVirAddr());
            continue;
        }
        // (.3) Every check is ok. Now remove the node from the list.
        mStreamBufList.erase(mStreamBufList.begin());
        //
        // (.4) enquePrvOps/cancelPrvOps
        if ( i == idxToDisp )
        {
            if(mpExtImgProc != NULL)
            {
                if(mpExtImgProc->getImgMask() & ExtImgProc::BufType_Display)
                {
                    IExtImgProc::ImgInfo img;
                    //
                    img.bufType   = ExtImgProc::BufType_Display;
                    img.format    = pStreamImgBuf->getImgFormat();
                    img.width     = pStreamImgBuf->getImgWidth();
                    img.height    = pStreamImgBuf->getImgHeight();
                    img.stride[0] = pStreamImgBuf->getImgWidthStride(0);
                    img.stride[1] = pStreamImgBuf->getImgWidthStride(1);
                    img.stride[2] = pStreamImgBuf->getImgWidthStride(2);
                    img.virtAddr  = (MUINT32)(pStreamImgBuf->getVirAddr());
                    img.bufSize   = pStreamImgBuf->getBufSize();
                    // Reserved processing hook; users can plug in their own handling here.
                    mpExtImgProc->doImgProc(img);
                }
            }
            // Handle the buffer that is about to be displayed.
            enquePrvOps(pStreamImgBuf);
        }
        else
        {
            // Cancel the buffers that are skipped.
            cancelPrvOps(pStreamImgBuf);
        }
    }
}

DisplayClient::enquePrvOps(sp<StreamImgBuf> const& rpImgBuf)
{
    // [1] unlock buffer before sending to display.
    GraphicBufferMapper::get().unlock(rpImgBuf->getBufHndl());
    // [2] Dump image if wanted.
    dumpImgBuf_If(rpImgBuf);
    // [3] set timestamp.
    err = mpStreamOps->set_timestamp(mpStreamOps, rpImgBuf->getTimestamp());
    // [4] set gralloc buffer type & dirty. (set the buffer's extra parameters)
    ::gralloc_extra_setBufParameter(rpImgBuf->getBufHndl(),
                                    GRALLOC_EXTRA_MASK_TYPE | GRALLOC_EXTRA_MASK_DIRTY,
                                    GRALLOC_EXTRA_BIT_TYPE_CAMERA | GRALLOC_EXTRA_BIT_DIRTY);
    // [5] unlocks and post the buffer to display.
    // mpStreamOps was initialized to `window` in set_preview_stream_ops() earlier,
    // i.e. the window passed down through setWindow(). So we have to trace upward
    // to see what this window actually is.
    err = mpStreamOps->enqueue_buffer(mpStreamOps, rpImgBuf->getBufHndlPtr());
}
Tracing back up, setPreviewWindow was given a window, and that window is a Surface, reconstructed in native code from the IGraphicBufferProducer handed down from the app-layer Surface.
// set the buffer consumer that the preview will use
status_t CameraClient::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) {
    sp<IBinder> binder;
    sp<ANativeWindow> window;
    if (bufferProducer != 0) {
        binder = bufferProducer->asBinder();
        // Using controlledByApp flag to ensure that the buffer queue remains in
        // async mode for the old camera API, where many applications depend
        // on that behavior.
        window = new Surface(bufferProducer, /*controlledByApp*/ true);
    }
    return setPreviewWindow(binder, window);
}
setPreviewTarget() is called from android_hardware_Camera.cpp, right at the top of the stack; the bufferProducer passed in here is the IGraphicBufferProducer obtained from the app-layer Surface.
static void android_hardware_Camera_setPreviewSurface(JNIEnv *env, jobject thiz, jobject jSurface)
{
    sp<IGraphicBufferProducer> gbp;
    sp<Surface> surface;
    surface = android_view_Surface_getSurface(env, jSurface);
    gbp = surface->getIGraphicBufferProducer();
    if (camera->setPreviewTarget(gbp) != NO_ERROR) {
        jniThrowException(env, "java/io/IOException", "setPreviewTexture failed");
    }
}
You might assume that the mpStreamOps->enqueue_buffer above operates directly on this Surface, but it does not; there is one more level of indirection here.
status_t setPreviewWindow(const sp<ANativeWindow>& buf)
{
    mPreviewWindow = buf;
    mHalPreviewWindow.user = this;
    return mDevice->ops->set_preview_window(mDevice,
            buf.get() ? &mHalPreviewWindow.nw : 0);
}
Here it checks whether the window handed down from the Surface is null; if it is not, it passes mHalPreviewWindow.nw down instead. This nw is a table of operation function pointers, and its enqueue_buffer member is wired to the following function:
static int __enqueue_buffer(struct preview_stream_ops* w,
                            buffer_handle_t* buffer)
{
    ANativeWindow *a = anw(w);
    return a->queueBuffer(a,
              container_of(buffer, ANativeWindowBuffer, handle), -1);
}
So mpStreamOps->enqueue_buffer simply ends up executing the function above: anw(w) recovers the Surface (an ANativeWindow), and queueBuffer() pushes the camera frame into the Surface's buffer queue.
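For reference, enqueue_buffer is only one entry in that table; the other nw operations are wired up the same way inside CameraHardwareInterface.h, each being a thin adapter from preview_stream_ops back to the underlying ANativeWindow (the Surface). The following is paraphrased from memory of the AOSP source (KitKat/Lollipop era) and may differ slightly between Android versions:

```cpp
// Paraphrased from AOSP CameraHardwareInterface.h; details may vary by Android version.
struct camera_preview_window {
    struct preview_stream_ops nw;   // the function-pointer table handed to the HAL
    void *user;                     // points back to the CameraHardwareInterface
};

// Recover the real ANativeWindow (the Surface) from a preview_stream_ops pointer.
static ANativeWindow *__to_anw(void *user)
{
    CameraHardwareInterface *__this = reinterpret_cast<CameraHardwareInterface *>(user);
    return __this->mPreviewWindow.get();
}
#define anw(n) __to_anw(((struct camera_preview_window *)(n))->user)

void initHalPreviewWindow()
{
    mHalPreviewWindow.nw.cancel_buffer        = __cancel_buffer;
    mHalPreviewWindow.nw.lock_buffer          = __lock_buffer;
    mHalPreviewWindow.nw.dequeue_buffer       = __dequeue_buffer;
    mHalPreviewWindow.nw.enqueue_buffer       = __enqueue_buffer;
    mHalPreviewWindow.nw.set_buffer_count     = __set_buffer_count;
    mHalPreviewWindow.nw.set_buffers_geometry = __set_buffers_geometry;
    mHalPreviewWindow.nw.set_crop             = __set_crop;
    mHalPreviewWindow.nw.set_timestamp        = __set_timestamp;
    mHalPreviewWindow.nw.set_usage            = __set_usage;
    mHalPreviewWindow.nw.set_swap_interval    = __set_swap_interval;
    mHalPreviewWindow.nw.get_min_undequeued_buffer_count =
                                                __get_min_undequeued_buffer_count;
}

// The dequeue side of the round trip: the HAL gets empty buffers from the Surface
// the same way it returns filled ones.
static int __dequeue_buffer(struct preview_stream_ops* w,
                            buffer_handle_t** buffer, int *stride)
{
    ANativeWindow *a = anw(w);
    ANativeWindowBuffer* anb;
    int rc = native_window_dequeue_buffer_and_wait(a, &anb);
    if (rc == 0) {
        *buffer = &anb->handle;
        *stride = anb->stride;
    }
    return rc;
}
```

So set_usage, set_buffer_count, set_buffers_geometry and the other calls that DisplayClient::set_preview_stream_ops() made earlier also land on this same Surface.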
Anyone who has written an app knows that Surface is the helper class the Android upper layers use for display, so camera preview effectively boils down to calling Surface's operations. That raises the next question: once these Surface calls are made, how does the content actually reach the screen? I will stop here, since this already serves my goal of building background knowledge; if work ever requires going deeper I will dig in then, and interested readers can explore further on their own. Instead, let's keep looking at where this preview data comes from in the first place.
Before that, let's step back and summarize the major classes involved so far.
Cam1DeviceBase (together with its subclasses and parent classes) contains all of the camera operations, such as open, preview display, capture, recording, and autofocus, as well as the parameters those operations and the camera hardware involve. To the frameworks and app layers, Cam1Device effectively is the camera.
Cam1DeviceBase contains a ParamsManager which, as the name implies, manages the camera parameters; we have not touched it so far, so let it float for now. Cam1DeviceBase also holds several Clients that carry out the actual functions. DisplayClient is responsible for display, while CamClient is the umbrella that gathers the remaining camera operations and splits them across several sub-Clients: PreviewClient handles the preview (callback) data, RecordClient handles recording, and in the still-capture path FDClient, OTClient, and PreviewClient all play a part; exactly what each does is left for another time. There is also a CamAdapter (CameraAdapter), which is just as important.
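As a rough mental model only (these are simplified, illustrative declarations, not the actual MTK class definitions, which hold these members through sp<> smart pointers), the ownership described above looks roughly like this:

```cpp
// Illustrative outline of the ownership described above (not the real MTK declarations).
class ParamsManager {};   // camera parameter management
class DisplayClient {};   // pushes frames to the preview window via preview_stream_ops
class PreviewClient {};   // preview data path (its ImgBufQueue id is eID_PRV_CB)
class RecordClient  {};   // recording data path
class FDClient      {};   // involved in the capture path (per the text above)
class OTClient      {};   // involved in the capture path (per the text above)

class CamClient {         // umbrella that splits the camera operations across sub-clients
    PreviewClient mPreviewClient;
    RecordClient  mRecordClient;
    FDClient      mFDClient;
    OTClient      mOTClient;
};

class CamAdapter {};      // image buffer provider client: fills the clients' ImgBufQueues

class Cam1DeviceBase {    // what the frameworks/app layer sees as "the camera"
    ParamsManager mParamsMgr;
    DisplayClient mDisplayClient;
    CamClient     mCamClient;
    CamAdapter    mCamAdapter;
};
```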