Let's look at the result first:
I'm putting the preview screenshot here again, so anyone who thinks it fits their needs can decide whether to read on.
Camera Preview
I'm using the Camera2 API here. For an introduction to Camera2, see this post: https://blog.youkuaiyun.com/HardWorkingAnt/article/details/72786782
The full Helper class is available here: https://github.com/wangshengyang1996/GLCameraDemo/tree/master/app/src/main/java/com/wsy/glcamerademo/camera2
My CameraHelper class was put together with reference to the blog post and the GitHub code linked above.
Once we have confirmed that the camera permission is granted, we can initialize the Helper:
fun initCamera() {
    mTextureView ?: return
    Log.d(TAG, "initCamera")
    mCameraHelper = CameraHelper.Companion.Builder()
        .cameraListener(this)
        .specificCameraId(CAMERA_ID)
        .mContext(mFragment?.context!!)
        .previewOn(mTextureView)
        .previewViewSize(
            Point(
                mTextureView.layoutParams.width,
                mTextureView.layoutParams.height
            )
        )
        .rotation(mFragment?.activity?.windowManager?.defaultDisplay?.rotation ?: 0)
        .build()
    Log.d(TAG, "mCameraHelper = $mCameraHelper is null ? -> ${mCameraHelper == null}")
    mCameraHelper?.start()
    // UI hints: "Place your face in the viewfinder" / "Tap the button to take a photo"
    switchText("请将人脸放入取景框中", "请点击按钮拍照")
}
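For completeness, here is a minimal sketch of the permission gate that runs before initCamera(). The names mFragment and initCamera mirror the code above, but the check itself (and REQUEST_CODE_CAMERA) is my own illustration, not part of the original helper:

// Hypothetical permission gate, assuming an AndroidX Fragment.
private val REQUEST_CODE_CAMERA = 0x01

fun checkPermissionAndInit() {
    val context = mFragment?.context ?: return
    if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED) {
        // Permission already granted: initialize the helper right away.
        initCamera()
    } else {
        // Otherwise request it; once onRequestPermissionsResult reports it was granted,
        // initCamera() can be called from there.
        mFragment?.requestPermissions(arrayOf(Manifest.permission.CAMERA), REQUEST_CODE_CAMERA)
    }
}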
The start() method then essentially opens the camera through the camera system service:
@Synchronized
fun start() {
    Log.i(TAG, "start")
    if (mCameraDevice != null) return
    startBackgroundThread()
    // When the screen is turned off and then turned back on, the SurfaceTexture is already
    // available and "onSurfaceTextureAvailable" will not be called. In that case we can open
    // the camera and start the preview from here (otherwise, we wait until the surface is
    // ready in the SurfaceTextureListener).
    if (mTextureView?.isAvailable == true) {
        openCamera()
    } else {
        mTextureView?.surfaceTextureListener = mSurfaceTextureListener
    }
}
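The mSurfaceTextureListener referenced above isn't shown in the snippet; the following is a sketch of what it typically looks like in Camera2 samples (the configureTransform call is an assumption based on the openCamera code below):

private val mSurfaceTextureListener = object : TextureView.SurfaceTextureListener {
    override fun onSurfaceTextureAvailable(texture: SurfaceTexture, width: Int, height: Int) {
        // The surface is ready now, so we can open the camera.
        openCamera()
    }

    override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture, width: Int, height: Int) {
        // Keep the preview transform in sync with the new view size.
        configureTransform(width, height)
    }

    override fun onSurfaceTextureDestroyed(texture: SurfaceTexture): Boolean = true

    override fun onSurfaceTextureUpdated(texture: SurfaceTexture) {}
}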
/**
 * Opens the camera specified by [mCameraId].
 */
private fun openCamera() {
    val cameraManager = mContext?.getSystemService(Context.CAMERA_SERVICE) as CameraManager?
    cameraManager ?: return
    Log.e(TAG, "openCamera")
    setUpCameraOutputs(cameraManager)
    mTextureView?.apply {
        configureTransform(width, height)
    }
    try {
        if (!mCameraOpenLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw RuntimeException("Time out waiting to lock camera opening.")
        }
        mContext?.apply {
            if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
                cameraManager.openCamera(mCameraId, mDeviceStateCallback, mBackgroundHandler)
            }
        }
    } catch (e: CameraAccessException) {
        cameraListener?.onCameraError(e)
    } catch (e: InterruptedException) {
        cameraListener?.onCameraError(e)
    }
}
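setUpCameraOutputs is where the helper reads the camera characteristics and decides on a preview size. The original implementation lives in the linked repository; the following is only a simplified sketch of the idea, in which the field names follow the builder parameters above and the "closest area to the view size" strategy is just one possible choice:

private fun setUpCameraOutputs(cameraManager: CameraManager) {
    try {
        val characteristics = cameraManager.getCameraCharacteristics(mCameraId)
        // The sensor orientation is needed later for JPEG_ORIENTATION.
        mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) ?: 0
        // All sizes the camera can output to a SurfaceTexture.
        val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP) ?: return
        val choices = map.getOutputSizes(SurfaceTexture::class.java)
        // Pick the output size whose area is closest to the preview view's area.
        val targetArea = (previewViewSize?.x ?: 0).toLong() * (previewViewSize?.y ?: 0)
        mPreviewSize = choices.minByOrNull { kotlin.math.abs(it.width.toLong() * it.height - targetArea) }
    } catch (e: CameraAccessException) {
        cameraListener?.onCameraError(e)
    }
}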
Back in openCamera: once all the preconditions are met, the camera is opened through the CameraManager. cameraId specifies which camera to open (front, back, external, and so on), and mDeviceStateCallback is the device-state callback; in its onOpened callback we can create the preview session.
private val mDeviceStateCallback = object : CameraDevice.StateCallback() {
    override fun onOpened(camera: CameraDevice) {
        Log.i(TAG, "onOpened: ")
        // This method is called when the camera is opened. We start the camera preview here.
        mCameraOpenLock.release()
        mCameraDevice = camera
        createCameraPreviewSession()
        mPreviewSize?.let {
            cameraListener?.onCameraOpened(camera, mCameraId, it, getCameraOri(rotation, mCameraId), isMirror)
        }
    }
    // The remaining callbacks are omitted here...
}
/**
 * Creates a new [CameraCaptureSession] for camera preview.
 */
private fun createCameraPreviewSession() {
    try {
        val texture = mTextureView?.surfaceTexture
        assert(texture != null)
        // We configure the size of the default buffer to be the size of the camera preview we want.
        mPreviewSize?.let {
            texture?.setDefaultBufferSize(it.width, it.height)
        }
        // This is the output Surface we need to start the preview.
        val surface = Surface(texture)
        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder =
            mCameraDevice?.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        // Continuous auto-focus
        mPreviewRequestBuilder?.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
        // Set the AE target FPS range (used here to improve exposure)
        mPreviewRequestBuilder?.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange())
        mPreviewRequestBuilder?.addTarget(surface)
        // Here, we create a CameraCaptureSession for the camera preview.
        mCameraDevice?.createCaptureSession(listOf(surface, mImageReader?.surface),
            mCaptureStateCallback, mBackgroundHandler)
    } catch (e: CameraAccessException) {
        e.printStackTrace()
    }
}
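Note that mImageReader's surface is already listed as a session output here, even though it is not yet a target of the preview request. The original creation code isn't shown above; a sketch of how such an ImageReader is typically set up (JPEG format, listening on the background handler) might be:

// Hypothetical setup; for simplicity the reader is sized to mPreviewSize here,
// while the real helper may pick a separate (larger) capture size.
private fun setUpImageReader() {
    mImageReader = ImageReader.newInstance(
        mPreviewSize?.width ?: 640, mPreviewSize?.height ?: 480,
        ImageFormat.JPEG, /* maxImages = */ 2
    )
    // mOnImageAvailableListener (shown further down) fires once a captured JPEG is ready.
    mImageReader?.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler)
}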
Back to the preview request itself: previewRequestBuilder puts the camera into preview mode with TEMPLATE_PREVIEW, and the TextureView's surface is passed through as the preview target; on this request we can also tune things like focus mode and exposure. Then cameraDevice.createCaptureSession creates the preview session. Its first parameter is the set of output Surfaces, i.e. the Surfaces that captured image data will be delivered to, which is why both the preview surface and the ImageReader's surface are listed there. The second parameter is the capture session's state callback, which reports whether the session was configured successfully. In onConfigured we turn the session into a repeating one (setRepeatingRequest), so it keeps capturing frames and feeding them to the preview. With that, the camera preview is complete.
private val mCaptureStateCallback = object : CameraCaptureSession.StateCallback() {
    override fun onConfigureFailed(session: CameraCaptureSession) {
        Log.i(TAG, "onConfigureFailed: ")
        cameraListener?.onCameraError(Exception("configuredFailed"))
    }

    override fun onConfigured(session: CameraCaptureSession) {
        Log.i(TAG, "onConfigured: ")
        // The camera is already closed
        mCameraDevice ?: return
        // When the session is ready, we start displaying the preview
        mCaptureSession = session
        try {
            mPreviewRequestBuilder?.let {
                mCaptureSession?.setRepeatingRequest(it.build(),
                    object : CameraCaptureSession.CaptureCallback() {}, mBackgroundHandler)
            }
        } catch (e: CameraAccessException) {
            e.printStackTrace()
        }
    }
}
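The getRange() call used for CONTROL_AE_TARGET_FPS_RANGE above isn't shown either. A common approach, and roughly what I assume here, is to query the camera's supported AE FPS ranges and pick the one with the lowest lower bound, so the sensor can use longer exposure times in dim light:

// Sketch: pick the supported AE FPS range with the smallest lower bound.
private fun getRange(): Range<Int>? {
    return try {
        val cameraManager = mContext?.getSystemService(Context.CAMERA_SERVICE) as CameraManager?
        val characteristics = cameraManager?.getCameraCharacteristics(mCameraId)
        val ranges = characteristics?.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)
        ranges?.minByOrNull { it.lower }
    } catch (e: CameraAccessException) {
        null
    }
}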
Taking a Photo
Everything above is really still about the preview. While working on my graduation project I realized that true real-time face detection would mean continuously submitting frames, running detection, and handling the results, all while keeping things thread-safe. The alternative is rate limiting: pause the preview for a while after each capture before resuming, which is pretty much the same as taking a photo. So I went with a photo-capture flow instead.
fun takePhoto() {
    // The ImageReader surface must be added here, otherwise the callback never fires.
    // Adding it earlier (during preview setup) would keep capturing images continuously.
    mImageReader?.let {
        mPreviewRequestBuilder?.addTarget(it.surface)
    }
    // Use the sensor orientation (the camera's orientation) when saving the JPEG
    mPreviewRequestBuilder?.set(CaptureRequest.JPEG_ORIENTATION, mSensorOrientation)
    // Set the auto-focus trigger to idle
    mPreviewRequestBuilder?.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_IDLE)
    mCaptureSession?.stopRepeating()
    // Start the capture
    mPreviewRequestBuilder?.let {
        mCaptureSession?.capture(it.build(), null, mBackgroundHandler)
    }
}
What the "must be added here" comment means is that the ImageReader's surface only becomes a target of the preview request at this point. If it were added as a target while setting up the preview, every preview frame would also be pushed through the ImageReader, which makes everything very laggy.
Adding it only when taking a photo lets us pause the repeating capture and go process the captured image data. Since the ImageReader's surface is now passed through as a target, OnImageAvailableListener.onImageAvailable is invoked once the captured image data is ready, and that is where we can save the picture.
private val mOnImageAvailableListener = object : ImageReader.OnImageAvailableListener {
    private val lock = ReentrantLock()

    override fun onImageAvailable(reader: ImageReader?) {
        // Save/process the captured image here
        Log.e(TAG, "onImageAvailable")
        val image = reader?.acquireNextImage()
        if (cameraListener != null && image?.format == ImageFormat.JPEG) {
            val planes = image.planes
            // Lock to make sure the bytes all come from the same Image
            lock.lock()
            val byteBuffer = planes[0].buffer
            val byteArray = ByteArray(byteBuffer.remaining())
            byteBuffer.get(byteArray)
            cameraListener?.onPreview(byteArray)
            lock.unlock()
        }
        image?.close()
    }
}
The cameraListener then hands the data back to the calling business component, which can save the image or run recognition on it.
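As an example of what that calling component might do with the bytes, here is a minimal sketch that writes the JPEG to the app's external files directory (the file name and location are my own choice, not from the original project; java.io.File and FileOutputStream are assumed to be imported):

// Hypothetical onPreview implementation on the business side.
override fun onPreview(data: ByteArray) {
    val dir = mFragment?.context?.getExternalFilesDir(null) ?: return
    val file = File(dir, "face_${System.currentTimeMillis()}.jpg")
    // Write the JPEG bytes out and close the stream automatically.
    FileOutputStream(file).use { it.write(data) }
    Log.d(TAG, "JPEG saved to ${file.absolutePath}")
}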
OK, that wraps up the whole preview and photo-capture flow. The next post will cover how to do face recognition by calling an SDK.