Android Face Detection with the Open-Source Framework Fotoapparat — A Summary of FaceDetector

This article walks through the Android face detection framework Fotoapparat in detail: how to create and configure a FaceDetector, how FaceDetectorProcessor works internally, and how to customize the detection logic with your own FaceDetectorProcessor so that business code can call its own algorithm library.

For a quick integration, see https://github.com/RedApparat/FaceDetector; the basic usage and introduction there are already easy to follow. What follows is a more comprehensive study summary.
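For your own project, the integration itself is a Gradle dependency. The coordinates below follow the JitPack convention for GitHub repositories; treat them as an assumption and check the repository README for the exact, current coordinates and version:

// app/build.gradle -- assumed JitPack coordinates; verify against the repo README
repositories {
    maven { url 'https://jitpack.io' }
}

dependencies {
    implementation 'com.github.RedApparat:FaceDetector:v1.0.0' // version is a placeholder
}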

After downloading the source from GitHub, open and run the demo project inside the Example folder, not the project at the -master root level.

Opening the project shows that the demo is very simple: just two classes, MainActivity and PermissionsDelegate, a runtime-permission delegate (sketched after the listing below). As the MainActivity source below shows, enabling face detection comes down to a single custom method, createFotoapparat():

package io.fotoapparat.facedetector.example;

import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;

import java.util.List;

import io.fotoapparat.Fotoapparat;
import io.fotoapparat.FotoapparatSwitcher;
import io.fotoapparat.facedetector.Rectangle;
import io.fotoapparat.facedetector.processor.FaceDetectorProcessor;
import io.fotoapparat.facedetector.view.RectanglesView;
import io.fotoapparat.parameter.LensPosition;
import io.fotoapparat.view.CameraView;

import static io.fotoapparat.log.Loggers.fileLogger;
import static io.fotoapparat.log.Loggers.logcat;
import static io.fotoapparat.log.Loggers.loggers;
import static io.fotoapparat.parameter.selector.LensPositionSelectors.lensPosition;

public class MainActivity extends AppCompatActivity {

    private final PermissionsDelegate permissionsDelegate = new PermissionsDelegate(this);
    private boolean hasCameraPermission;
    private CameraView cameraView;
    private RectanglesView rectanglesView;

    private FotoapparatSwitcher fotoapparatSwitcher;
    private Fotoapparat frontFotoapparat;
    private Fotoapparat backFotoapparat;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        cameraView = (CameraView) findViewById(R.id.camera_view);
        rectanglesView = (RectanglesView) findViewById(R.id.rectanglesView);
        hasCameraPermission = permissionsDelegate.hasCameraPermission();

        if (hasCameraPermission) {
            cameraView.setVisibility(View.VISIBLE);
        } else {
            permissionsDelegate.requestCameraPermission();
        }

        frontFotoapparat = createFotoapparat(LensPosition.FRONT);
        backFotoapparat = createFotoapparat(LensPosition.BACK);
        fotoapparatSwitcher = FotoapparatSwitcher.withDefault(backFotoapparat);

        View switchCameraButton = findViewById(R.id.switchCamera);
        switchCameraButton.setVisibility(
                canSwitchCameras()
                        ? View.VISIBLE
                        : View.GONE
        );
        switchCameraButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                switchCamera();
            }
        });
    }

    private boolean canSwitchCameras() {
        return frontFotoapparat.isAvailable() == backFotoapparat.isAvailable();
    }

    private Fotoapparat createFotoapparat(LensPosition position) {
        return Fotoapparat
                .with(this)
                .into(cameraView)
                .lensPosition(lensPosition(position))
                .frameProcessor(
                        FaceDetectorProcessor.with(this)
                                .listener(new FaceDetectorProcessor.OnFacesDetectedListener() {
                                    @Override
                                    public void onFacesDetected(List<Rectangle> faces) {
                                        Log.d("&&&", "Detected faces: " + faces.size());

                                        rectanglesView.setRectangles(faces);
                                    }
                                })
                                .build()
                )
                .logger(loggers(
                        logcat(),
                        fileLogger(this)
                ))
                .build();
    }

    private void switchCamera() {
        if (fotoapparatSwitcher.getCurrentFotoapparat() == frontFotoapparat) {
            fotoapparatSwitcher.switchTo(backFotoapparat);
        } else {
            fotoapparatSwitcher.switchTo(frontFotoapparat);
        }
    }

    @Override
    protected void onStart() {
        super.onStart();
        if (hasCameraPermission) {
            fotoapparatSwitcher.start();
        }
    }

    @Override
    protected void onStop() {
        super.onStop();
        if (hasCameraPermission) {
            fotoapparatSwitcher.stop();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode,
                                           @NonNull String[] permissions,
                                           @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (permissionsDelegate.resultGranted(requestCode, permissions, grantResults)) {
            fotoapparatSwitcher.start();
            cameraView.setVisibility(View.VISIBLE);
        }
    }

}
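The second class, PermissionsDelegate, is not reproduced in this post. Here is a minimal sketch of what such a runtime-permission delegate looks like, reconstructed purely from how MainActivity calls it above; the real class in the Example project may differ in details:

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

/** A sketch of a runtime-permission delegate matching the calls in MainActivity. */
class PermissionsDelegate {

    private static final int REQUEST_CODE = 10;
    private final Activity activity;

    PermissionsDelegate(Activity activity) {
        this.activity = activity;
    }

    boolean hasCameraPermission() {
        return ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED;
    }

    void requestCameraPermission() {
        ActivityCompat.requestPermissions(
                activity, new String[]{Manifest.permission.CAMERA}, REQUEST_CODE);
    }

    /** True if this result is our camera request and the permission was granted. */
    boolean resultGranted(int requestCode, String[] permissions, int[] grantResults) {
        return requestCode == REQUEST_CODE
                && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED
                && Manifest.permission.CAMERA.equals(permissions[0]);
    }
}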

Looking closer at how that custom method uses the framework: it builds a Fotoapparat object, binds the view the framework renders into (CameraView), and wires in the most important, central piece, the face detection processor (FaceDetectorProcessor). Next, the FaceDetectorProcessor source:


package io.fotoapparat.facedetector.processor;

import android.content.Context;
import android.os.Handler;
import android.os.Looper;

import java.util.List;

import io.fotoapparat.facedetector.FaceDetector;
import io.fotoapparat.facedetector.Rectangle;
import io.fotoapparat.preview.Frame;
import io.fotoapparat.preview.FrameProcessor;

/**
 * {@link FrameProcessor} which detects faces on camera frames.
 * <p>
 * Use {@link #with(Context)} to create a new instance.
 */
public class FaceDetectorProcessor implements FrameProcessor {

    private static Handler MAIN_THREAD_HANDLER = new Handler(Looper.getMainLooper());

    private final FaceDetector faceDetector;
    private final OnFacesDetectedListener listener;

    private FaceDetectorProcessor(Builder builder) {
        faceDetector = FaceDetector.create(builder.context);
        listener = builder.listener;
    }

    public static Builder with(Context context) {
        return new Builder(context);
    }

    @Override
    public void processFrame(Frame frame) {
        final List<Rectangle> faces = faceDetector.detectFaces(
                frame.image,
                frame.size.width,
                frame.size.height,
                frame.rotation
        );

        MAIN_THREAD_HANDLER.post(new Runnable() {
            @Override
            public void run() {
                listener.onFacesDetected(faces);
            }
        });
    }

    /**
     * Notified when faces are detected.
     */
    public interface OnFacesDetectedListener {

        /**
         * Null-object for {@link OnFacesDetectedListener}.
         */
        OnFacesDetectedListener NULL = new OnFacesDetectedListener() {
            @Override
            public void onFacesDetected(List<Rectangle> faces) {
                // Do nothing
            }
        };

        /**
         * Called when faces are detected. Always called on the main thread.
         *
         * @param faces detected faces. If no faces were detected - an empty list.
         */
        void onFacesDetected(List<Rectangle> faces);

    }

    /**
     * Builder for {@link FaceDetectorProcessor}.
     */
    public static class Builder {

        private final Context context;
        private OnFacesDetectedListener listener = OnFacesDetectedListener.NULL;

        private Builder(Context context) {
            this.context = context;
        }

        /**
         * @param listener which will be notified when faces are detected.
         */
        public Builder listener(OnFacesDetectedListener listener) {
            this.listener = listener != null
                    ? listener
                    : OnFacesDetectedListener.NULL;

            return this;
        }

        public FaceDetectorProcessor build() {
            return new FaceDetectorProcessor(this);
        }

    }

}

As the class comment at the top indicates, the job of FaceDetectorProcessor is to detect faces on camera preview frames. Whenever the framework engine delivers a camera Frame, the processFrame callback runs and calls faceDetector.detectFaces (step into its source for the details). That call takes the frame image as an NV21 byte array (NV21 is the format Android cameras deliver preview data in), together with the frame's width, height, and rotation, and works out the rectangles of the detected faces; the result is then posted to the main thread, where the demo's listener (shown right after the sketch below) draws red boxes around the faces.
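Because the frame arrives as raw NV21 bytes, it helps to keep that format's layout in mind; it explains the width * height * 3 / 2 buffer sizes that show up in the custom processor later. A minimal sketch:

/** NV21 helpers: a minimal sketch for illustration, not part of the library. */
final class Nv21 {

    // NV21 = a full-resolution Y (luma) plane, followed by a half-resolution
    // plane of interleaved V/U (chroma) byte pairs -- 1.5 bytes per pixel.
    static int bufferSize(int width, int height) {
        int ySize  = width * height;      // 1 luma byte per pixel
        int vuSize = width * height / 2;  // each 2x2 block shares one V/U pair
        return ySize + vuSize;            // == width * height * 3 / 2
    }

    // Luma of pixel (x, y) -- handy for quick sanity checks on a frame buffer.
    static int lumaAt(byte[] nv21, int width, int x, int y) {
        return nv21[y * width + x] & 0xFF;
    }
}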

       @Override
       public void onFacesDetected(List<Rectangle> faces) {
            Log.d("&&&", "Detected faces: " + faces.size());

            rectanglesView.setRectangles(faces);
       }

 

rectanglesView.setRectangles(faces) is the call that draws the boxes.
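The library's RectanglesView handles this drawing. Here is a minimal sketch of what such an overlay view does, assuming (as the Rectangle construction in the project code further below suggests) that Rectangle carries x, y, width, and height normalized to the 0..1 range:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

import java.util.Collections;
import java.util.List;

import io.fotoapparat.facedetector.Rectangle;

/** A minimal face-box overlay; a sketch, not the library's RectanglesView. */
public class FaceOverlayView extends View {

    private final Paint paint = new Paint();
    private List<Rectangle> rectangles = Collections.emptyList();

    public FaceOverlayView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(Color.RED);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(4f);
    }

    /** Expected to be called on the main thread, as the listener contract promises. */
    public void setRectangles(List<Rectangle> rectangles) {
        this.rectangles = rectangles != null ? rectangles : Collections.<Rectangle>emptyList();
        invalidate(); // schedule a redraw with the new boxes
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        for (Rectangle r : rectangles) {
            // Scale the normalized 0..1 coordinates up to this view's pixel size.
            canvas.drawRect(
                    r.x * getWidth(),
                    r.y * getHeight(),
                    (r.x + r.width) * getWidth(),
                    (r.y + r.height) * getHeight(),
                    paint);
        }
    }
}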

A real face recognition project, once this framework has detected a face in a frame, will call its own packaged face recognition .so algorithm library instead of the framework's faceDetector.detectFaces, since the landmark information the built-in detector provides is quite limited. The usual approach is to rewrite a new FaceDetectorProcessor and attach whatever logic is needed after detection. Below is part of the actual overriding code from a project. You can see that this rewritten FaceDetectorProcessor is the GitHub source adapted to the business logic: the format conversion and the face-landmark extraction both go through the company's in-house .so library (the native wrappers it calls are sketched after the class):

package com.xiaoluobei.facedetection.view.utils;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.os.Environment;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

import com.xiaoluobei.facedetection.model.FaceRequesBean;
import com.xiaoluobei.facedetection.view.activity.MainActivity;

import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import io.fotoapparat.facedetector.FaceDetector;
import io.fotoapparat.facedetector.Rectangle;
import io.fotoapparat.preview.Frame;
import io.fotoapparat.preview.FrameProcessor;

/**
 * {@link FrameProcessor} which detects faces on camera frames.
 * <p>
 * Use {@link #with(Context)} to create a new instance.
 */
public class MTCNNFaceDetectorProcessor implements FrameProcessor {

    private static Handler MAIN_THREAD_HANDLER = new Handler(Looper.getMainLooper());

    private final FaceDetector faceDetector;
    private final OnFacesDetectedListener listener;
    // Conversion buffers, allocated once on the first frame and reused after that.
    private byte[] yuv = null;   // I420 copy of the NV21 frame
    private byte[] yuvr = null;  // rotated I420
    private byte[] rgb = null;   // packed RGB24, fed to the detector
    private byte[] rgba = null;  // packed RGBA, for building a Bitmap

    // Output folders for the full-frame snapshot and the cropped faces.
    private String basePath = Environment.getExternalStorageDirectory().getPath() + "/faceDetect/";
    private static String bigBasePath = Environment.getExternalStorageDirectory().getPath() + "/faceDetect/bigFace/";
    private static String samllBasePath = Environment.getExternalStorageDirectory().getPath() + "/faceDetect/samllFace/";

    private long oldTime = 0; // time of the last processed frame, for throttling

    private float[] faceInfoLast = null; // previous detection result, for the filter below

    private MTCNNFaceDetectorProcessor(Builder builder) {
        faceDetector = FaceDetector.create(builder.context);
        listener = builder.listener;
    }

    public static Builder with(Context context) {
        // Make sure the output folders exist before any frame is processed.
        File file = new File(bigBasePath);
        File file1 = new File(samllBasePath);
        if (!file.exists()) {
            file.mkdirs();
        }
        if (!file1.exists()) {
            file1.mkdirs();
        }
        return new Builder(context);
    }

    @Override
    public void processFrame(Frame frame) {

        // Throttle: process at most one frame every 200 ms.
        long newTime = System.currentTimeMillis();
        if (newTime - oldTime < 200) {
            return;
        }
        oldTime = System.currentTimeMillis();

        // After a 90- or 270-degree rotation, width and height swap.
        int width_r, height_r, rotate_r;
        if (frame.rotation == 90 || frame.rotation == 270) {
            width_r = frame.size.height;
            height_r = frame.size.width;
            rotate_r = 360 - frame.rotation;
        } else {
            width_r = frame.size.width;
            height_r = frame.size.height;
            rotate_r = frame.rotation;
        }

        long t1 = System.currentTimeMillis();

        // The framework's built-in detector is no longer used here:
        // final List<Rectangle> faces = faceDetector.detectFaces(
        //         frame.image, frame.size.width, frame.size.height, frame.rotation);

        // Lazily allocate the conversion buffers on the first frame
        // (NV21/I420 take 1.5 bytes per pixel, RGB24 takes 3, RGBA takes 4).
        if (rgb == null) {
            yuv = new byte[frame.size.width * frame.size.height * 3 / 2];
            yuvr = new byte[frame.size.width * frame.size.height * 3 / 2];
            rgb = new byte[frame.size.width * frame.size.height * 3];
            rgba = new byte[frame.size.width * frame.size.height * 4];
        }

        // Format conversion via the JNI library: NV21 -> I420 -> rotate -> RGB24.
        MainActivity.vpxCoder.NV21ToI420(frame.image, yuv, frame.size.width, frame.size.height);
        MainActivity.vpxCoder.YUVRotate(yuv, yuvr, frame.size.width, frame.size.height, rotate_r);
        MainActivity.vpxCoder.YUV420ToRGB24(yuvr, rgb, 24, width_r, height_r);

        long t2 = System.currentTimeMillis();
        System.out.println("===============NV21ToRGB24:" + (t2 - t1));

        // Call the custom .so to get the face boxes and landmark data.
        // (This replaced an earlier int[] MainActivity.mtcnn.FaceDetect(...) call.)
        float[] faceInfo = MainActivity.faceUtil.FaceDetectNew(rgb, width_r, height_r, 3, 48);

        long t3 = System.currentTimeMillis();
        System.out.println("===============FaceDetect:" + (t3 - t2));

        if (faceInfo.length > 1) {
            // faceInfo layout: [0] = number of faces, then 15 floats per face,
            // of which [1..4] are the box's left/top/right/bottom in pixels.
            float faceNum = faceInfo[0];
            final List<Rectangle> faces = new ArrayList<Rectangle>();
            if (faceInfoLast == null) {
                faceInfoLast = faceInfo;
            } else {
                // Build normalized (0..1) rectangles and push them to the overlay.
                for (int i = 0; i < faceNum; i++) {
                    float left = faceInfo[1 + 15 * i];
                    float top = faceInfo[2 + 15 * i];
                    float right = faceInfo[3 + 15 * i];
                    float bottom = faceInfo[4 + 15 * i];
                    float width = right - left;
                    float height = bottom - top;
                    faces.add(new Rectangle(left / width_r, top / height_r, width / width_r, height / height_r));
                }

                MAIN_THREAD_HANDLER.post(new Runnable() {
                    @Override
                    public void run() {
                        listener.onFacesRectangle(faces);
                    }
                });

                // Filter: if the face count is unchanged and the first box has
                // barely moved, skip the snapshot work below for this frame.
                if (faceNum == faceInfoLast[0]) {
                    // index 1 + 15 * 0, i.e. the first face's left edge
                    if (Math.abs(faceInfo[1] - faceInfoLast[1]) <= 20) {
                        faceInfoLast = faceInfo;
                        return;
                    }
                }
            }
            faceInfoLast = faceInfo;

            final FaceRequesBean faceRequesBean = new FaceRequesBean();

            // Convert the rotated I420 frame to RGBA and snapshot the full frame.
            MainActivity.vpxCoder.YUV4202RGB(yuvr, rgba, 32, width_r, height_r);
            Bitmap bitmap = getPicFromBytes(rgba, width_r, height_r, faceRequesBean);

            // Crop each detected face out of the snapshot and save it as a JPEG.
            List<String> samllFaces = new ArrayList<>();
            for (int i = 0; i < faceNum; i++) {
                float left = faceInfo[1 + 15 * i];
                float top = faceInfo[2 + 15 * i];
                float right = faceInfo[3 + 15 * i];
                float bottom = faceInfo[4 + 15 * i];
                float width = right - left;
                float height = bottom - top;

                Bitmap nBitmap = Bitmap.createBitmap(bitmap, (int) left, (int) top, (int) width, (int) height);

                try {
                    String samllFace = samllBasePath + System.currentTimeMillis() + ".jpg";
                    FileOutputStream fout = new FileOutputStream(samllFace);
                    nBitmap.compress(Bitmap.CompressFormat.JPEG, 100, fout);
                    samllFaces.add(samllFace);
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
                faces.add(new Rectangle(left / width_r, top / height_r, width / width_r, height / height_r));
                // The remaining floats of each 15-float block hold landmark
                // coordinates; they could be drawn here as feature points.
            }

            faceRequesBean.setSmallFace(samllFaces);

            MAIN_THREAD_HANDLER.post(new Runnable() {
                @Override
                public void run() {
                    listener.onFacesDetected(faceRequesBean);
                }
            });

        } else {
            MAIN_THREAD_HANDLER.post(new Runnable() {
                @Override
                public void run() {
                    listener.onFacesRectangle(null);
                }
            });
            System.out.println("No face detected");
        }

        long t4 = System.currentTimeMillis();
        System.out.println("===============PaintTime:" + (t4 - t3));
    }


    public Bitmap getPicFromBytes(YuvImage yuvimage, int w, int h) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        // 100 is the JPEG quality (range 0-100, 100 = best).
        yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos);
        byte[] jdata = baos.toByteArray();
        Bitmap bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
        try {
            FileOutputStream fout = new FileOutputStream(
                    Environment.getExternalStorageDirectory().getPath() + "/hh" + System.currentTimeMillis() + ".jpg");
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, fout);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return bmp;
    }


    public Bitmap getPicFromBytes(byte[] bytes, int w, int h, FaceRequesBean faceRequesBean) {
        ByteBuffer bufferRGB = ByteBuffer.wrap(bytes); // wrap the raw RGBA bytes in a buffer
        Bitmap videoBit = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888); // bitmap of the requested size
        videoBit.copyPixelsFromBuffer(bufferRGB);
        try {
            String bigFace = bigBasePath + System.currentTimeMillis() + ".jpg";
            FileOutputStream fout = new FileOutputStream(bigFace);
            videoBit.compress(Bitmap.CompressFormat.JPEG, 100, fout);
            faceRequesBean.setBigFace(bigFace);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return videoBit;
    }


    public static File getFileFromBytes(byte[] b, String outputFile) {
        BufferedOutputStream stream = null;
        File file = null;
        try {
            file = new File(outputFile);
            FileOutputStream fstream = new FileOutputStream(file);
            stream = new BufferedOutputStream(fstream);
            stream.write(b);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException e1) {
                    e1.printStackTrace();
                }
            }
        }
        return file;
    }


    /**
     * Notified when faces are detected.
     */
    public interface OnFacesDetectedListener {

        /**
         * Null-object for {@link OnFacesDetectedListener}.
         */
        OnFacesDetectedListener NULL = new OnFacesDetectedListener() {
            @Override
            public void onFacesDetected(FaceRequesBean faces) {
                // Do nothing
            }

            @Override
            public void onFacesRectangle(List<Rectangle> faces) {

            }
        };

        /**
         * Called when faces are detected. Always called on the main thread.
         *
         * @param faces detected faces. If no faces were detected - an empty list.
         */
        void onFacesDetected(FaceRequesBean faces);

        void onFacesRectangle(List<Rectangle> faces);

    }

    /**
     * Builder for {@link MTCNNFaceDetectorProcessor}.
     */
    public static class Builder {

        private final Context context;
        private OnFacesDetectedListener listener = OnFacesDetectedListener.NULL;

        private Builder(Context context) {
            this.context = context;
        }

        /**
         * @param listener which will be notified when faces are detected.
         */
        public Builder listener(OnFacesDetectedListener listener) {
            this.listener = listener != null
                    ? listener
                    : OnFacesDetectedListener.NULL;

            return this;
        }

        public MTCNNFaceDetectorProcessor build() {
            return new MTCNNFaceDetectorProcessor(this);
        }

    }

}
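The processor above leans on MainActivity.vpxCoder and MainActivity.faceUtil, two wrappers around the project's proprietary .so libraries, which the post does not show. As a rough orientation, here is a hypothetical sketch of the native declarations such wrappers would need; every class name, library name, signature, and parameter meaning below is inferred from the call sites above, not from any published library:

// Hypothetical JNI wrappers, reconstructed from the call sites above.
// (In a real project each class would live in its own file.)
class VpxCoder {
    static {
        System.loadLibrary("vpxcoder"); // assumed native library name
    }

    // NV21 -> planar I420; both buffers are width * height * 3 / 2 bytes.
    public native void NV21ToI420(byte[] nv21, byte[] i420, int width, int height);

    // Rotate an I420 buffer by 0/90/180/270 degrees into dst.
    public native void YUVRotate(byte[] src, byte[] dst, int width, int height, int rotation);

    // I420 -> packed RGB24 (3 bytes per pixel); called with bitDepth = 24 above.
    public native void YUV420ToRGB24(byte[] yuv, byte[] rgb, int bitDepth, int width, int height);

    // I420 -> packed RGBA (4 bytes per pixel); called with bitDepth = 32 above.
    public native void YUV4202RGB(byte[] yuv, byte[] rgba, int bitDepth, int width, int height);
}

class FaceUtil {
    static {
        System.loadLibrary("faceutil"); // assumed native library name
    }

    // Returns [0] = face count, then 15 floats per face; from the indexing in
    // processFrame, [1..4] of each block are the box's left/top/right/bottom
    // and the remaining floats appear to carry landmark data. The trailing
    // arguments (3 and 48 at the call site) look like a channel count and a
    // minimum face size, but that is a guess.
    public native float[] FaceDetectNew(byte[] rgb24, int width, int height,
                                        int channels, int minFaceSize);
}

With wrappers like these in place, the processor is wired into Fotoapparat exactly as in the demo: just swap FaceDetectorProcessor.with(this) for MTCNNFaceDetectorProcessor.with(this) inside the frameProcessor(...) call of createFotoapparat().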
