Recording audio/video with Camera2 + OpenGL ES + MediaCodec + AudioRecord, and writing H.264 SEI data

A record of my learning process. The task I was given: record audio and video based on Camera2 + OpenGL ES + MediaCodec + AudioRecord.

Requirements:

  1. Write SEI side data into every video frame, so that custom per-frame data can be recovered when the stream is decoded later.
  2. When the record button is tapped, capture the span from N seconds before the tap to N seconds after it, and save the files in 60-second segments.
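Requirement 1 boils down to prepending a small extra NAL unit to each encoded frame. Below is a minimal sketch (my own illustration, not the project's actual code) of assembling an H.264 SEI NAL unit of type user_data_unregistered (payload type 5); the 16-byte UUID and the payload bytes are whatever you choose. A real pipeline would additionally insert emulation-prevention bytes (0x03) wherever the payload could produce a 0x000000/0x000001 byte run.

```java
import java.io.ByteArrayOutputStream;

public class SeiNal {
    // Build an Annex-B H.264 SEI NAL unit (user_data_unregistered, payload type 5).
    // uuid must be exactly 16 bytes; payload carries the custom per-frame data.
    public static byte[] build(byte[] uuid, byte[] payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Annex-B start code
        out.write(0); out.write(0); out.write(0); out.write(1);
        // NAL header: forbidden_zero_bit=0, nal_ref_idc=0, nal_unit_type=6 (SEI)
        out.write(0x06);
        // payload type: 5 = user_data_unregistered
        out.write(0x05);
        // payload size = 16-byte UUID + payload length, encoded with 0xFF chaining
        int size = 16 + payload.length;
        while (size >= 255) { out.write(0xFF); size -= 255; }
        out.write(size);
        out.write(uuid, 0, 16);
        out.write(payload, 0, payload.length);
        // rbsp_trailing_bits
        out.write(0x80);
        return out.toByteArray();
    }
}
```

The resulting byte array can be spliced in front of each encoded frame before it is written out; a decoder that does not understand the UUID simply skips the SEI message.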

Up front, here is everything this write-up touches on, so you can quickly check whether it covers what you are looking for:

  • Using MediaCodec, with createInputSurface() providing a Surface that receives the Camera2 frames via EGL.
  • Using AudioRecord
  • Using Camera2
  • Basic OpenGL usage
  • A simple example of writing H.264 SEI data

The overall design is straightforward: open the camera, set up the OpenGL environment, then start a video thread to capture video data and an audio thread to capture audio data. Both streams are cached in custom lists, and finally an encoding thread muxes the cached video and audio into an MP4. I targeted Android SDK 28, because saving files is more awkward on API 29 and above. The full project is not uploaded yet; message me if you need it.
Each feature is split into its own class. Let's start with the standalone modules.
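The mux step of the pipeline above has one subtlety worth sketching: the encoder thread must drain the video cache and the audio cache in presentation order. The stub below is my own illustration, not the project's code; it keeps only the ordering decision, always consuming whichever cache holds the older timestamp. In the real project each entry would be a ByteBuffer plus MediaCodec.BufferInfo, and each "write" would be a call to MediaMuxer.writeSampleData().

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class MuxOrder {
    // Merge two timestamp queues (microseconds) the way the mux thread would:
    // whichever stream's head sample is older gets written first, so the
    // MP4 stays properly interleaved. Ties go to video.
    public static List<String> drain(ArrayDeque<Long> videoPts, ArrayDeque<Long> audioPts) {
        List<String> written = new ArrayList<>();
        while (!videoPts.isEmpty() || !audioPts.isEmpty()) {
            boolean takeVideo;
            if (videoPts.isEmpty()) takeVideo = false;
            else if (audioPts.isEmpty()) takeVideo = true;
            else takeVideo = videoPts.peek() <= audioPts.peek();
            if (takeVideo) written.add("V@" + videoPts.poll());
            else written.add("A@" + audioPts.poll());
        }
        return written;
    }
}
```

With video frames at 0/33/66 ms and audio frames at 0/21/42/64 ms, the merged order comes out V, A, A, V, A, A, V.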

UI layout

The UI is trivial: one GLSurfaceView and two Button controls.


<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">
    <android.opengl.GLSurfaceView
        android:id="@+id/glView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />
    <Button
        android:id="@+id/recordBtn"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginBottom="80dp"
        android:text="Record"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent" />
    <Button
        android:id="@+id/exit"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="20dp"
        android:layout_marginRight="20dp"
        android:text="Exit"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintRight_toRightOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>

Camera2

Using the camera2 framework is fairly simple. One thing worth noting: the Surface passed into startPreview() is the same one that is later handed to mCaptureRequestBuilder.addTarget(surface). That Surface is produced in a few basic steps, outlined here; the code is pasted below.

  1. Generate an OpenGL texture: GLES30.glGenTextures(1, mTexture, 0);
  2. Wrap the texture in a SurfaceTexture: mSurfaceTexture = new SurfaceTexture(mTexture[0]);
  3. Create a Surface from it: mSurface = new Surface(mSurfaceTexture);
  4. Hand it to the camera: mCamera.startPreview(mSurface);

public class Camera2 {
    private final String TAG = "Abbott Camera2";
    private Context mContext;
    private CameraManager mCameraManager;
    private CameraDevice mCameraDevice;
    private String[] mCamList;
    private String mCameraId;
    private Size mPreviewSize;
    private HandlerThread mBackgroundThread;
    private Handler mBackgroundHandler;
    private CaptureRequest.Builder mCaptureRequestBuilder;
    private CaptureRequest mCaptureRequest;
    private CameraCaptureSession mCameraCaptureSession;

    public Camera2(Context context) {
        mContext = context;
        mCameraManager = (CameraManager) mContext.getSystemService(android.content.Context.CAMERA_SERVICE);
        try {
            mCamList = mCameraManager.getCameraIdList();
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }

        mBackgroundThread = new HandlerThread("CameraThread");
        mBackgroundThread.start();
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
    }

    public void openCamera(int width, int height, String id) {
        try {
            Log.d(TAG, "openCamera: id:" + id);
            CameraCharacteristics characteristics = mCameraManager.getCameraCharacteristics(id);
            if (characteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT) {
                // front-facing camera; no special handling needed here yet
            }
            StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            mPreviewSize = getOptimalSize(map.getOutputSizes(SurfaceTexture.class), width, height);
            mCameraId = id;
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }

        try {
            if (ActivityCompat.checkSelfPermission(mContext, android.Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
                return;
            }
            Log.d(TAG, "openCamera: opening " + mCameraId);
            mCameraManager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private Size getOptimalSize(Size[] sizeMap, int width, int height) {
        // collect every supported size larger than the requested one,
        // then pick the smallest of those candidates
        List<Size> sizeList = new ArrayList<>();
        for (Size option : sizeMap) {
            if (width > height) {
                if (option.getWidth() > width && option.getHeight() > height) {
                    sizeList.add(option);
                }
            } else {
                if (option.getWidth() > height && option.getHeight() > width) {
                    sizeList.add(option);
                }
            }
        }
        if (sizeList.size() > 0) {
            return Collections.min(sizeList, new Comparator<Size>() {
                @Override
                public int compare(Size lhs, Size rhs) {
                    return Long.signum((long) lhs.getWidth() * lhs.getHeight() - (long) rhs.getWidth() * rhs.getHeight());
                }
            });
        }
        return sizeMap[0];
    }

    private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
        @Override
        public void onOpened(@NonNull CameraDevice camera) {
            mCameraDevice = camera;
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice camera) {
            camera.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice camera, int error) {
            camera.close();
            mCameraDevice = null;
        }
    };

    public void startPreview(Surface surface) {
        try {
            mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mCaptureRequestBuilder.addTarget(surface);
            mCameraDevice.createCaptureSession(Collections.singletonList(surface), new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession session) {
                    try {
                        mCaptureRequest = mCaptureRequestBuilder.build();
                        mCameraCaptureSession = session;
                        mCameraCaptureSession.setRepeatingRequest(mCaptureRequest, null, mBackgroundHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                }
            }, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}

ImageList

This class is the cache for video and audio samples. There is nothing special to explain; it can be used as-is.

public class ImageList {
    private static final String TAG = "Abbott ImageList";
    private final Object mImageListLock = new Object();
    int kCapacity;
    private List<ImageItem> mImageList = new CopyOnWriteArrayList<>();

    public ImageList(int capacity) {
        kCapacity = capacity;
    }

    public synchronized void addItem(long timestamp, ByteBuffer byteBuffer, MediaCodec.BufferInfo bufferInfo) {
        synchronized (mImageListLock) {
            ImageItem item = new ImageItem(timestamp, byteBuffer, bufferInfo);
            mImageList.add(item);
            if (mImageList.size() > kCapacity) {
                // drop the oldest entries once capacity is exceeded
                int excessItems = mImageList.size() - kCapacity;
                mImageList.subList(0, excessItems).clear();
            }
        }
    }

    public synchronized List<ImageItem> getItemsInTimeRange(long startTimestamp, long endTimestamp) {
        List<ImageItem> itemsInTimeRange = new ArrayList<>();
        synchronized (mImageListLock) {
            for (ImageItem item : mImageList) {
                long itemTimestamp = item.getTimestamp();
                // keep only items whose timestamp falls inside the requested range
                if (itemTimestamp >= startTimestamp && itemTimestamp <= endTimestamp) {
                    itemsInTimeRange.add(item);
                }
            }
        }
        return itemsInTimeRange;
    }

    public synchronized ImageItem getItem() {
        return mImageList.get(0);
    }

    public synchronized void removeItem() {
        mImageList.remove(0);
    }

    public synchronized int getSize() {
        return mImageList.size();
    }

    public static class ImageItem {
        private long mTimestamp;
        private ByteBuffer mVideoBuffer;
        private MediaCodec.BufferInfo mVideoBufferInfo;

        public ImageItem(long timestamp, ByteBuffer buffer, MediaCodec.BufferInfo bufferInfo) {
            this.mTimestamp = timestamp;
            this.mVideoBuffer = buffer;
            this.mVideoBufferInfo = bufferInfo;
        }

        public synchronized long getTimestamp() {
            return mTimestamp;
        }

        public synchronized ByteBuffer getVideoByteBuffer() {
            return mVideoBuffer;
        }

        public synchronized MediaCodec.BufferInfo getVideoBufferInfo() {
            return mVideoBufferInfo;
        }
    }
}
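To connect this class back to requirement 2: samples are cached continuously, and when Record is tapped at time T, the window [T - N, T + N] is pulled out with getItemsInTimeRange(). The sketch below mirrors that logic with plain long timestamps so it can be checked on a desktop JVM (ImageItem itself carries Android-only types like MediaCodec.BufferInfo); the names here are my own. For video at 30 fps, the capacity would need to be at least 2 * N * 30 frames to hold the whole window.

```java
import java.util.ArrayList;
import java.util.List;

public class RingCache {
    private final int capacity;
    private final List<Long> timestamps = new ArrayList<>();

    public RingCache(int capacity) { this.capacity = capacity; }

    // Mirrors ImageList.addItem(): append, then drop the oldest entries
    // once the cache exceeds its capacity.
    public void add(long ts) {
        timestamps.add(ts);
        if (timestamps.size() > capacity) {
            timestamps.subList(0, timestamps.size() - capacity).clear();
        }
    }

    // Mirrors ImageList.getItemsInTimeRange(): keep [startUs, endUs] inclusive.
    public List<Long> snapshot(long startUs, long endUs) {
        List<Long> kept = new ArrayList<>();
        for (long ts : timestamps) {
            if (ts >= startUs && ts <= endUs) kept.add(ts);
        }
        return kept;
    }
}
```

With capacity 5 and timestamps 0..9 added, only 5..9 survive; a snapshot of [6, 8] then yields exactly 6, 7, 8.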

GlProgram

A helper class that creates the OpenGL program. OpenGL ES 3.0 is used throughout.

public class GlProgram {

    public static final String mVertexShader =
            "#version 300 es \n" +
            "in vec4 vPosition;" +
            "in vec2 vCoordinate;" +
            "out vec2 vTextureCoordinate;" +
            "void main() {" +
            "   gl_Position = vPosition;" +
            "   vTextureCoordinate = vCoordinate;" +
            "}";
    // ESSL 3.00 needs the _essl3 variant of the external-image extension,
    // and gl_FragColor may not be redeclared as an output there, so a
    // custom out variable is used instead.
    public static final String mFragmentShader =
            "#version 300 es \n" +
            "#extension GL_OES_EGL_image_external_essl3 : require \n" +
            "precision mediump float;" +
            "in vec2 vTextureCoordinate;" +
            "uniform samplerExternalOES oesTextureSampler;" +
            "out vec4 fragColor;" +
            "void main() {" +
            "    fragColor = texture(oesTextureSampler, vTextureCoordinate);" +
            "}";

    public static int createProgram(String vertexShaderSource, String fragShaderSource) {
        int program = GLES30.glCreateProgram();
        if (0 == program) {
            Log.e("Arc_ShaderManager", "create program error, error=" + GLES30.glGetError());
            return 0;
        }
        int vertexShader = loadShader(GLES30.GL_VERTEX_SHADER, vertexShaderSource);
        if (0 == vertexShader) {
            GLES30.glDeleteProgram(program);
            return 0;
        }
        // the rest is the standard compile-and-link boilerplate
        int fragShader = loadShader(GLES30.GL_FRAGMENT_SHADER, fragShaderSource);
        if (0 == fragShader) {
            GLES30.glDeleteProgram(program);
            return 0;
        }
        GLES30.glAttachShader(program, vertexShader);
        GLES30.glAttachShader(program, fragShader);
        GLES30.glLinkProgram(program);
        int[] linkStatus = new int[1];
        GLES30.glGetProgramiv(program, GLES30.GL_LINK_STATUS, linkStatus, 0);
        if (linkStatus[0] != GLES30.GL_TRUE) {
            Log.e("Arc_ShaderManager", "link program error: " + GLES30.glGetProgramInfoLog(program));
            GLES30.glDeleteProgram(program);
            program = 0;
        }
        GLES30.glDeleteShader(vertexShader);
        GLES30.glDeleteShader(fragShader);
        return program;
    }

    private static int loadShader(int type, String source) {
        int shader = GLES30.glCreateShader(type);
        if (0 == shader) {
            return 0;
        }
        GLES30.glShaderSource(shader, source);
        GLES30.glCompileShader(shader);
        int[] compiled = new int[1];
        GLES30.glGetShaderiv(shader, GLES30.GL_COMPILE_STATUS, compiled, 0);
        if (0 == compiled[0]) {
            Log.e("Arc_ShaderManager", "compile shader error: " + GLES30.glGetShaderInfoLog(shader));
            GLES30.glDeleteShader(shader);
            return 0;
        }
        return shader;
    }
}