This is an image-classification example based on the Java OpenCV DNN module, using GoogleNet model data. When doing deep learning or image classification, blobFromImage is mainly used to preprocess the input image. The preprocessing consists of two main steps (listed after the signature below):
blobFromImage(InputArray image,
              double scalefactor = 1.0,
              const Size& size = Size(),
              const Scalar& mean = Scalar(),
              bool swapRB = false,
              bool crop = false,
              int ddepth = CV_32F)
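In the JavaCV (org.bytedeco) binding used by the sample in this article, the same function is exposed through the opencv_dnn class. The overload with an explicit output Mat, which the sample calls later, takes the arguments in the same order; the parameter names shown here are only illustrative:

opencv_dnn.blobFromImage(Mat image, Mat blob, double scalefactor, Size size,
                         Scalar mean, boolean swapRB, boolean crop, int ddepth)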
- Subtract the mean value (mean) from all pixel values
- Scale the pixel values by the scale factor (scalefactor)

The parameters are:

- image: the image that we are going to feed into the neural network for processing or classification.
- mean: the mean value(s) to subtract from the whole image. To subtract a different value from each of the three channels of an RGB image, pass three mean values; if only one value is given, the same value is subtracted from all three channels. The reason for subtracting the mean: to remove the influence that different illumination of the same scene has on the final classification or on the network, we usually compute a mean over the R, G and B channels and subtract it from every pixel. What remains are the relative values between pixels, which largely cancels out the effect of illumination.
- scalefactor: after the mean has been subtracted, the remaining pixel values can additionally be scaled by this factor. Its default value is 1; if, for example, we want to halve all of the mean-subtracted values, we can set scalefactor to 1/2.
- size: the input image size that the neural network expects, i.e. the size the network was trained with.
- swapRB: OpenCV assumes the channel order of an image is BGR, while the mean values here are assumed to be in RGB order, so if the R and B channels need to be swapped, set swapRB = true. A minimal sketch of what blobFromImage effectively does is given below.
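To make the two steps concrete, here is a minimal sketch of roughly what blobFromImage does to a single BGR image, written against the same JavaCV (org.bytedeco.opencv) API that the complete sample below uses, and relying on the same imports. The helper name manualPreprocess is made up for illustration; the real blobFromImage additionally packs the result into a 4D NCHW blob.

static Mat manualPreprocess(Mat bgrImage, int width, int height,
                            Scalar mean, double scalefactor, boolean swapRB) {
    Mat resized = new Mat();
    // resize to the input size the network was trained with
    opencv_imgproc.resize(bgrImage, resized, new Size(width, height));
    if (swapRB) {
        // OpenCV stores images as BGR; swap R and B when the mean is given in RGB order
        opencv_imgproc.cvtColor(resized, resized, opencv_imgproc.COLOR_BGR2RGB);
    }
    Mat blob = new Mat();
    resized.convertTo(blob, opencv_core.CV_32F);
    // step 1: subtract the (per-channel) mean from every pixel
    opencv_core.subtract(blob, new Mat(blob.rows(), blob.cols(), blob.type(), mean), blob);
    // step 2: scale the mean-subtracted values by scalefactor
    blob.convertTo(blob, -1, scalefactor, 0);
    return blob;
}

The complete sample program, which lets blobFromImage do all of this in a single call, follows: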
// Imports below assume the JavaCV 1.5.x presets (org.bytedeco.opencv) together with
// Apache Commons CLI and Apache Commons IO:
import java.io.FileInputStream;
import java.nio.charset.Charset;
import java.util.List;

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.io.IOUtils;
import org.bytedeco.javacpp.DoublePointer;
import org.bytedeco.opencv.global.opencv_core;
import org.bytedeco.opencv.global.opencv_dnn;
import org.bytedeco.opencv.global.opencv_highgui;
import org.bytedeco.opencv.global.opencv_imgproc;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Point;
import org.bytedeco.opencv.opencv_core.Scalar;
import org.bytedeco.opencv.opencv_core.Size;
import org.bytedeco.opencv.opencv_dnn.Net;
import org.bytedeco.opencv.opencv_videoio.VideoCapture;

public static void main(String[] args) throws Exception {
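// For this demo the arguments are hard-coded here, overriding whatever was passed on the command line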
args = new String[] { "-classes", "E:\\java\\opencv-4.x\\samples\\data\\dnn\\classification_classes_ILSVRC2012.txt" };
Options options = new Options();
options.addOption("classes", true, "path to a text file with the class labels");
options.addOption("input", true, "path to an input image or video; the default camera is used if omitted");
// register the remaining options as well, so the defaults used below can be overridden from the command line
options.addOption("scale", true, "scale factor applied after mean subtraction");
options.addOption("rgb", true, "whether to swap the R and B channels");
options.addOption("width", true, "network input width");
options.addOption("height", true, "network input height");
options.addOption("model", true, "path to the model weights (.caffemodel)");
options.addOption("config", true, "path to the network description (.prototxt)");
options.addOption("framework", true, "framework the model comes from");
CommandLineParser commandLineParser = new DefaultParser();
CommandLine parser = commandLineParser.parse(options, args);
float scale = Float.valueOf(parser.getOptionValue("scale", "1.0"));
//The mean argument is pretty important. These are the mean values that are subtracted from the image's RGB color channels. This normalizes the input and makes the final input invariant to different illumination levels.
Scalar mean = new Scalar(104, 117, 123, 0);
boolean swapRB = Boolean.valueOf(parser.getOptionValue("rgb", "false"));
int inpWidth = Integer.valueOf(parser.getOptionValue("width", "224"));
int inpHeight = Integer.valueOf(parser.getOptionValue("height", "224"));
String model = parser.getOptionValue("model", "E:\\java\\opencv-4.x\\opencv_extra-master\\testdata\\dnn\\caffe\\bvlc_googlenet.caffemodel");
String config = parser.getOptionValue("config", "E:\\java\\opencv-4.x\\opencv_extra-master\\testdata\\dnn\\bvlc_googlenet.prototxt");
String framework = parser.getOptionValue("framework", "Caffe");
List<String> classes = null;
if (parser.hasOption("classes")) {
String file = parser.getOptionValue("classes");
classes = IOUtils.readLines(new FileInputStream(file), Charset.defaultCharset());
}
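// Load the network from the model weights and the config file with the requested framework (Caffe here)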
Net net = opencv_dnn.readNet(model, config, framework);
String kWinName = "Deep learning image classification in OpenCV";
opencv_highgui.namedWindow(kWinName, opencv_highgui.WINDOW_NORMAL);
VideoCapture cap = new VideoCapture();
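// Open the image/video passed with -input, or fall back to the default camera (device 0)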
if (parser.hasOption("input")) {
cap.open(parser.getOptionValue("input"));
} else {
cap.open(0);
}
// Process frames
Mat frame = new Mat(), blob = new Mat();
while (opencv_highgui.waitKey(2) < 0) {
if (!cap.read(frame) || frame.empty()) {
opencv_highgui.waitKey();
cap.close();
break;
}
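// Preprocess the frame into a 4D blob: resize to inpWidth x inpHeight, subtract the mean, apply the scale factor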
opencv_dnn.blobFromImage(frame, blob, scale, new Size(inpWidth, inpHeight), mean, swapRB, false, opencv_core.CV_32F);
net.setInput(blob);
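// Run a forward pass; prob holds one score per class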
Mat prob = net.forward();
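// The position of the maximum score in the (reshaped) score vector is the predicted class id,
// and the maximum value itself is the confidence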
Point classIdPoint = new Point(1);
DoublePointer confidence = new DoublePointer(1);
opencv_core.minMaxLoc(prob.reshape(1, 1), new DoublePointer(1), confidence, new Point(1), classIdPoint, null);
int classId = classIdPoint.x();
//Put efficiency information
DoublePointer layersTimes = new DoublePointer();
double freq = opencv_core.getTickFrequency() / 1000;
double t = net.getPerfProfile(layersTimes) / freq;
String label = String.format("Inference time: %.2f ms", t);
opencv_imgproc.putText(frame, label, new Point(0, 15), opencv_imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 255, 0, 0));
// Print predicted class.
label = String.format("%s: %.4f", (classes == null || classes.isEmpty() ? String.format("Class #%d", classId) : classes.get(classId)), confidence.get());
opencv_imgproc.putText(frame, label, new Point(0, 40), opencv_imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 255, 0, 0));
opencv_highgui.imshow(kWinName, frame);
}
}
The result is shown in the figure below; the test image is classified as "mug".