Common bugs when loading a YOLOv5 ONNX model with OpenCV

This article describes a problem encountered when loading an ONNX model with OpenCV 4.5.5 in a VS/QT environment: an upgraded torch version causes the getLayerShapesRecursively function to report an error. The fix is to download the latest export script from the YOLOv5 project and make sure the exported ONNX opset version matches the one OpenCV supports (e.g. OPSET 12).

Project scenario: loading an ONNX model with OpenCV in VS/QT


Problem description

After converting the .pt file to an .onnx file, loading it with OpenCV's dnn module produces the following errors:

[ERROR:0@12.939] global /home/book/lwj/opencv/opencv_ubuntu-4.5.5/modules/dnn/src/dnn.cpp (3875) getLayerShapesRecursively OPENCV/DNN: []:(_input): getMemoryShapes() throws exception. inputs=1 outputs=0/0 blobs=0
[ERROR:0@12.941] global /home/book/lwj/opencv/opencv_ubuntu-4.5.5/modules/dnn/src/dnn.cpp (3878) getLayerShapesRecursively     input[0] = [ 1 3 640 640 ]
[ERROR:0@12.941] global /home/book/lwj/opencv/opencv_ubuntu-4.5.5/modules/dnn/src/dnn.cpp (3888) getLayerShapesRecursively Exception message: OpenCV(4.5.5) /home/book/lwj/opencv/opencv_ubuntu-4.5.5/modules/dnn/src/dnn.cpp:810: error: (-215:Assertion failed) inputs.size() == requiredOutputs in function 'getMemoryShapes'
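The failure surfaces when the network shapes are inferred at run time. Below is a minimal sketch of the kind of call involved (Python binding shown for brevity; the VS/QT project uses the equivalent C++ dnn API, and the assertion comes from the same dnn.cpp code), assuming a hypothetical model file `yolov5s.onnx` and the 640x640 input shown in the log:

```python
import cv2
import numpy as np

# Hypothetical model path; use the .onnx file exported from the .pt weights
net = cv2.dnn.readNetFromONNX("yolov5s.onnx")

# Dummy 640x640 input, matching the [1 3 640 640] blob in the error log
image = np.zeros((640, 640, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)

# With an incompatible export, shape inference fails here with the
# getMemoryShapes / getLayerShapesRecursively assertion shown above
outputs = net.forward(net.getUnconnectedOutLayersNames())
```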


Cause analysis:

The model was exported with an upgraded torch version, so the resulting ONNX graph is not one that OpenCV 4.5.5's dnn module can parse, and getLayerShapesRecursively fails with the assertion shown above. The fix is to download the latest export script from the YOLOv5 project and re-export the model, making sure the ONNX opset version matches the one OpenCV supports (e.g. OPSET 12).
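As a minimal sketch of that fix (assuming a current clone of the ultralytics/yolov5 repository, whose export.py accepts `--include` and `--opset` options), re-export the weights with an opset that OpenCV 4.5.5 can parse:

```bash
# Run inside an up-to-date yolov5 checkout; file names are placeholders
python export.py --weights yolov5s.pt --include onnx --opset 12
```

The re-exported yolov5s.onnx can then be loaded again with cv2.dnn.readNetFromONNX (or the C++ equivalent in the VS/QT project).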

### Loading and running a YOLOv5 ONNX model

To load and run a YOLOv5 ONNX model for object detection with OpenCV in Python, the following approach can be used.

#### Prerequisites

Make sure the required libraries, such as `opencv-python` and `numpy`, are installed. Both can be installed with pip:

```bash
pip install opencv-python numpy
```

#### Calling the YOLOv5 ONNX model

The file exported from YOLOv5 in ONNX format can be read directly with `cv2.dnn.readNetFromONNX()`[^2]. This involves creating a DNN network instance and setting preprocessing parameters such as the input image size, scaling and channel order so that the input matches what the network expects.

#### Complete code example

Below is a complete Python script that loads a YOLOv5 ONNX model with OpenCV and runs object detection on a video stream:

```python
import cv2
import numpy as np


def load_yolov5_model(onnx_path):
    net = cv2.dnn.readNetFromONNX(onnx_path)
    return net


def preprocess_image(image, input_size=(640, 640)):
    blob = cv2.dnn.blobFromImage(
        image,
        scalefactor=1 / 255.0,
        size=input_size,
        swapRB=True,
        crop=False
    )
    return blob


def detect_objects(net, frame, conf_threshold=0.5, nms_threshold=0.4, input_size=(640, 640)):
    height, width = frame.shape[:2]
    # Scale factors from the network input size back to the original frame size
    x_factor = width / input_size[0]
    y_factor = height / input_size[1]

    # Set the network input
    blob = preprocess_image(frame, input_size)
    net.setInput(blob)

    # Names of the unconnected output layers
    layer_names = net.getUnconnectedOutLayersNames()

    # Forward pass to get the predictions
    outputs = net.forward(layer_names)

    boxes = []
    confidences = []
    class_ids = []

    for output in outputs:
        # A standard YOLOv5 export produces an output of shape (1, 25200, 85):
        # cx, cy, w, h, objectness, then one score per class, in input-image pixels
        for detection in output.reshape(-1, output.shape[-1]):
            scores = detection[5:] * detection[4]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > conf_threshold:
                # Rescale the box from network-input pixels to the original frame
                center_x = detection[0] * x_factor
                center_y = detection[1] * y_factor
                w = detection[2] * x_factor
                h = detection[3] * y_factor
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, int(w), int(h)])
                confidences.append(float(confidence))
                class_ids.append(int(class_id))

    indices = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)

    results = []
    if len(indices) > 0:
        for i in np.array(indices).flatten():
            box = boxes[i]
            label = f'Class {class_ids[i]}'
            score = confidences[i]
            results.append((label, score, tuple(box)))
    return results


if __name__ == "__main__":
    onnx_file = 'yolov5s.onnx'  # Replace with the actual path
    model = load_yolov5_model(onnx_file)

    cap = cv2.VideoCapture('input.mp4')  # Or pass a camera index, e.g. 0 for the default camera

    while True:
        ret, frame = cap.read()
        if not ret:
            break

        detections = detect_objects(model, frame)

        for (label, score, bbox) in detections:
            color = (0, 255, 0)
            text = f'{label}: {score:.2f}'
            top_left = (bbox[0], bbox[1])
            bottom_right = (bbox[0] + bbox[2], bbox[1] + bbox[3])
            cv2.rectangle(frame, top_left, bottom_right, color, thickness=2)
            cv2.putText(frame, text, top_left, cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)

        cv2.imshow('Object Detection', frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()
```

This program opens the given video source (or camera), applies the YOLOv5 model to each frame, and displays the detections until the 'q' key is pressed.