Object Detection with the OpenCV-Python Bindings and a Trained Caffe Model

This post walks through a deep-learning approach to object detection, using Python code to show how a pre-trained MobileNet SSD Caffe model can recognize objects in an image. It covers the whole pipeline, from loading the model and preparing the input image to parsing the detection results and visualizing them.


Without further ado, here is the code; further explanation will be added later.

# USAGE
# python deep_learning_object_detection.py --image images/example_01.jpg \
#    --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())


# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 0.007843, (300, 300), 127.5) # --> NCHW
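# the scale factor 0.007843 is roughly 1/127.5, so after subtracting the mean
# value 127.5 each pixel ends up in approximately [-1, 1], which matches the
# preprocessing described above; the resulting blob has shape (1, 3, 300, 300)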

# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()
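# detections has shape (1, 1, N, 7): each of the N rows holds
# [batch_id, class_id, confidence, left, top, right, bottom], with the box
# coordinates normalized to the [0, 1] range relative to the input image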

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY), COLORS[idx], 2)
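        # draw the label just above the box, or inside it when the box starts
        # too close to the top edge of the image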
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)
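
If the script runs in an environment without GUI support (where cv2.imshow is unavailable, e.g. a headless server), the annotated image can be written to disk instead. This is a minimal sketch; the output filename is only an illustrative example:

# save the annotated image to disk instead of (or in addition to) displaying it
cv2.imwrite("output.jpg", image)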
 
