A Development Log of an Immature Feature
Main APIs used:
- camera2
- FaceDetector
I. Implementation approach
1. Open the front camera with camera2 and grab its preview frames.
2. Use FaceDetector to measure the distance between the user's eyes in each frame.
3. Camera imaging follows "closer looks bigger, farther looks smaller": the farther away the face is, the smaller the measured eye distance will be, and vice versa.
4. Take a reference object of known size (length or width), record how large it appears on the phone at a known distance, and combine that with the eye distance from the previous step to derive the eye-to-screen distance.
(Screenshot of the implemented page)
II. Implementation details
1. camera2
The camera2 preview needs a TextureView or a SurfaceView:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextureView
        android:id="@+id/texture_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

    <ImageView
        android:onClick="switchCamera"
        android:layout_gravity="end|bottom"
        android:layout_marginBottom="90dp"
        android:layout_marginRight="40dp"
        android:src="@drawable/ic_switch_camera"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>

    <TextView
        android:id="@+id/tv_start_calculate"
        android:text="Start measuring"
        android:textSize="26sp"
        android:textStyle="bold"
        android:textColor="@color/c1"
        android:layout_margin="10dp"
        android:layout_gravity="bottom|left"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>

    <TextView
        android:id="@+id/tv_stop_calculate"
        android:text="Stop measuring"
        android:textSize="26sp"
        android:textStyle="bold"
        android:textColor="@color/c1"
        android:layout_margin="10dp"
        android:layout_gravity="bottom|right"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>

    <TextView
        android:id="@+id/currentDistance"
        android:text="-1cm"
        android:textSize="26sp"
        android:textStyle="bold"
        android:textColor="@color/c1"
        android:layout_gravity="bottom|center"
        android:layout_margin="10dp"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>
</FrameLayout>
2. Opening the camera (either the front or back camera can be selected):
// Note: these ids are conventional but not guaranteed; on production devices the
// front camera should be looked up via CameraCharacteristics.LENS_FACING
public static final String CAMERA_ID_FRONT = "1";
public static final String CAMERA_ID_BACK = "0";

private void openCamera() {
    CameraManager cameraManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    setUpCameraOutputs(cameraManager);
    configureTransform(mTextureView.getWidth(), mTextureView.getHeight());
    try {
        if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw new RuntimeException("Time out waiting to lock camera opening.");
        }
        cameraManager.openCamera(mCameraId, mDeviceStateCallback, mBackgroundHandler);
    } catch (CameraAccessException | InterruptedException e) {
        if (camera2Listener != null) {
            camera2Listener.onCameraError(e);
        }
    }
}
3. CameraDevice.StateCallback covers the callbacks for camera open, disconnect, error, and close:
public static abstract class StateCallback {
    @Retention(RetentionPolicy.SOURCE)
    @IntDef(prefix = {"ERROR_"}, value = {
            ERROR_CAMERA_IN_USE,
            ERROR_MAX_CAMERAS_IN_USE,
            ERROR_CAMERA_DISABLED,
            ERROR_CAMERA_DEVICE,
            ERROR_CAMERA_SERVICE})
    public @interface ErrorCode {};

    public abstract void onOpened(@NonNull CameraDevice camera); // Must implement

    public void onClosed(@NonNull CameraDevice camera) {
        // Default empty implementation
    }

    public abstract void onDisconnected(@NonNull CameraDevice camera); // Must implement

    public abstract void onError(@NonNull CameraDevice camera,
            @ErrorCode int error); // Must implement
}
public CameraDevice() {}
……
}
4. Getting the camera frames. We need to turn the frame data into a bitmap; to receive the frames we use the ImageReader class:
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
        mCaptureStateCallback, mBackgroundHandler);
………………
mImageReader = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(
        new OnImageAvailableListenerImpl(), mBackgroundHandler);

private class OnImageAvailableListenerImpl implements ImageReader.OnImageAvailableListener {
    private byte[] y;
    private byte[] u;
    private byte[] v;
    private final ReentrantLock lock = new ReentrantLock();

    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        if (image == null) {
            return;
        }
        // YUV_420_888: chroma is subsampled 2x in both dimensions (4:2:0)
        if (camera2Listener != null && image.getFormat() == ImageFormat.YUV_420_888) {
            Image.Plane[] planes = image.getPlanes();
            // Lock to make sure y, u and v come from the same Image
            lock.lock();
            // Reuse the same byte arrays to reduce GC pressure
            if (y == null) {
                y = new byte[planes[0].getBuffer().limit() - planes[0].getBuffer().position()];
                u = new byte[planes[1].getBuffer().limit() - planes[1].getBuffer().position()];
                v = new byte[planes[2].getBuffer().limit() - planes[2].getBuffer().position()];
            }
            if (planes[0].getBuffer().remaining() == y.length) {
                planes[0].getBuffer().get(y);
                planes[1].getBuffer().get(u);
                planes[2].getBuffer().get(v);
                camera2Listener.onPreview(y, u, v, mPreviewSize, planes[0].getRowStride());
            }
            lock.unlock();
        }
        image.close();
    }
}
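The onPreview callback above hands over separate Y, U and V planes, while the YuvImage constructor used in the next section expects NV21 (the full Y plane followed by interleaved V/U bytes). A minimal packing helper might look like the following sketch; the class name Nv21Packer is my own, and it assumes pixelStride == 1 and tightly packed rows, which is not guaranteed on every device:

```java
// Hypothetical helper: pack separate Y/U/V planes into an NV21 buffer,
// i.e. all Y bytes first, then alternating V and U bytes.
public class Nv21Packer {
    public static byte[] toNv21(byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        // Luma plane goes first, unchanged
        System.arraycopy(y, 0, nv21, 0, y.length);
        // Chroma: NV21 interleaves V before U
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];
            nv21[y.length + 2 * i + 1] = u[i];
        }
        return nv21;
    }
}
```

On real devices the U and V planes often share a buffer with pixelStride == 2, in which case the planes must be de-interleaved (or copied directly) instead; check Image.Plane.getPixelStride() before using a helper like this.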
III. FaceDetector
Once we have the frame data we need a Bitmap for FaceDetector to analyze: first wrap the bytes in a YuvImage, then decode it into a Bitmap via BitmapFactory.Options. One caveat for FaceDetector d = new FaceDetector(_currentFrame.getWidth(), _currentFrame.getHeight(), 1): it requires an upright face (head at the top, as in the screenshot above), so in landscape orientation the bitmap must be rotated with a matrix transform before detection.
YuvImage yuvimage = new YuvImage(_data, ImageFormat.NV21,
        _previewSize.getWidth(), _previewSize.getHeight(), null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
if (!yuvimage.compressToJpeg(new Rect(0, 0, _previewSize.getWidth(),
        _previewSize.getHeight()), 100, baos)) {
    Log.e("Camera", "compressToJpeg failed");
}
Log.i("Timing", "Compression finished: " + (System.currentTimeMillis() - t));
t = System.currentTimeMillis();

BitmapFactory.Options bfo = new BitmapFactory.Options();
// FaceDetector only accepts RGB_565 bitmaps
bfo.inPreferredConfig = Bitmap.Config.RGB_565;
_currentFrame = BitmapFactory.decodeStream(new ByteArrayInputStream(
        baos.toByteArray()), null, bfo);
Log.i("Timing", "Decode finished: " + (System.currentTimeMillis() - t));
t = System.currentTimeMillis();

if (_currentFrame == null) {
    Log.e(FACEDETECTIONTHREAD_TAG, "Could not decode image");
    return;
}

// Rotate the bitmap so it suits portrait mode: the built-in face detector
// requires the head to be at the top of the image
Matrix matrix = new Matrix();
if (mIsVertical) {
    matrix.postRotate(90);
    matrix.preScale(-1, 1); // mirror the front-camera image
}
_currentFrame = Bitmap.createBitmap(_currentFrame, 0, 0,
        _previewSize.getWidth(), _previewSize.getHeight(), matrix, false);
Log.i("Timing", "Rotate, create finished: " + (System.currentTimeMillis() - t));
t = System.currentTimeMillis();

FaceDetector d = new FaceDetector(_currentFrame.getWidth(),
        _currentFrame.getHeight(), 1);
Face[] faces = new Face[1];
int found = d.findFaces(_currentFrame, faces);
if (found > 0) {
    float eyeDistancePx = faces[0].eyesDistance(); // input for the distance conversion
}
IV. Distance conversion
The distance conversion mainly follows the GitHub project linked below, which is the approach described at the start: convert via a reference measurement.
Pref and Dref are the reference value's imaged size on the phone and its distance from the screen, and Psf is the measured distance between the two eyes.
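Under a simple pinhole model the imaged size of an object is inversely proportional to its distance, so Pref · Dref = Psf · D, and the current distance is D = Pref · Dref / Psf. A small sketch of that conversion (the class and field names are my own, not from the referenced project):

```java
// Hypothetical sketch of the reference-based distance conversion.
public class DistanceEstimator {
    private final float pRef; // eye distance in pixels at the calibration distance
    private final float dRef; // calibration distance in cm

    public DistanceEstimator(float pRef, float dRef) {
        this.pRef = pRef;
        this.dRef = dRef;
    }

    /** Estimated eye-to-screen distance in cm for the current eye pixel distance. */
    public float estimate(float pSf) {
        if (pSf <= 0) {
            return -1f; // matches the "-1cm" placeholder in the layout
        }
        // Imaged size is inversely proportional to distance: pRef * dRef == pSf * d
        return pRef * dRef / pSf;
    }
}
```

For example, if the eyes measure 100 px apart at a calibrated 30 cm, then a 50 px measurement implies roughly 60 cm. The accuracy depends entirely on the calibration step and on the assumption that the user's real eye separation matches the reference.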
That covers the overall approach and the code walkthrough.
I recommend reading the demo directly to get familiar with the implementation.
Reference: https://github.com/philiiiiiipp/Android-Screen-to-Face-Distance-Measurement