YOLOv10: Environment Setup, Model Prediction, and ONNX Inference

Contents

1. Source code and model downloads

2. Environment setup

3. Model prediction

4. onnxruntime test


1. Source code and model downloads

Code download link: https://github.com/THU-MIG/yolov10

Model downloads:

YOLOv10-N: https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10n.pt

YOLOv10-S: https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10s.pt

YOLOv10-M: https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10m.pt

YOLOv10-B: https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10b.pt

YOLOv10-L: https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10l.pt

YOLOv10-X: https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10x.pt

2. Environment setup

Open the Anaconda3 terminal, start from the base environment, and create a new environment:

conda create -n yolov10 python=3.9
conda activate yolov10
# cd into the yolov10 directory
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install -e .
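
After installation, a quick sanity check confirms that the package imports and the weights load (a minimal sketch; it assumes yolov10s.pt has already been downloaded into the current directory):

import torch
from ultralytics import YOLOv10

# Confirm the PyTorch build and whether a GPU is visible
print(torch.__version__, torch.cuda.is_available())

# Load the downloaded weights; this raises an error if the install is broken
model = YOLOv10('yolov10s.pt')
print(type(model))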

3. Model prediction

Once installation is complete, run a simple inference command to check that everything works. By default it reads all images in the yolov10-main/ultralytics/assets folder:

yolo predict model=yolov10s.pt

Or use the following script:

from ultralytics import YOLOv10
import glob
import os
import numpy as np
import cv2

classes = {
    0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus',
    6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant',
    11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat',
    16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear',
    22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag',
    27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard',
    32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove',
    36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle',
    40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl',
    46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli',
    51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake',
    56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table',
    61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard',
    67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink',
    72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors',
    77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'
}
class Colors:
    """Ultralytics color palette https://ultralytics.com/."""

    def __init__(self):
        """Initialize colors as hex = matplotlib.colors.TABLEAU_COLORS.values()."""
        hexs = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',
                '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')
        self.palette = [self.hex2rgb(f'#{c}') for c in hexs]
        # print(self.palette)
        self.n = len(self.palette)

    def __call__(self, i, bgr=False):
        """Converts hex color codes to rgb values."""
        c = self.palette[int(i) % self.n]
        return (c[2], c[1], c[0]) if bgr else c

    @staticmethod
    def hex2rgb(h):  # rgb order (PIL)
        return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))


colors = Colors()  # create instance for 'from utils.plots import colors'

imgpath = r'D:\Yolov10\yolov10-main\yolov10-detect\test2'
modelpath = r'D:\Yolov10\yolov10-main\yolov10-detect\yolov10s.pt'
save_dir = imgpath + '_Rst'
os.makedirs(save_dir,exist_ok=True)
model = YOLOv10(modelpath)

imgs = glob.glob(os.path.join(imgpath,'*.jpg'))
for img in imgs:
    imgname = os.path.basename(img)
    frame = cv2.imread(img)
    results = model.predict(img)[0]
    # results = model(img)

    for box in results.boxes:
        # print(box)
        xyxy = box.xyxy.squeeze().tolist()
        x1, y1, x2, y2 = int(xyxy[0]), int(xyxy[1]), int(xyxy[2]), int(xyxy[3])
        c, conf = int(box.cls), float(box.conf)
        name = classes[c]
        color = colors(c, True)
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, thickness=2, lineType=cv2.LINE_AA)
        cv2.putText(frame, f"{name}: {conf:.2f}", (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
    # cv2.imshow('image', frame)
    # cv2.waitKey(0)
    cv2.imwrite(os.path.join(save_dir, imgname), frame)

The annotated results are saved to the *_Rst output folder.

4. onnxruntime test

(1) ONNX model export

yolo export model=yolov10s.pt format=onnx opset=13 simplify

After this command runs, a yolov10s.onnx ONNX model file is generated in the same directory as yolov10s.pt.

You can inspect the exported graph and node information with the Netron website (https://netron.app):
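
Alternatively, the input and output names and shapes can be printed directly with onnxruntime (a minimal sketch; the model path is an example, and the shapes in the comments are what a default 640x640 export typically produces):

import onnxruntime as ort

session = ort.InferenceSession('yolov10s.onnx')

# Print input/output node names, shapes, and dtypes
for inp in session.get_inputs():
    print('input :', inp.name, inp.shape, inp.type)   # e.g. images [1, 3, 640, 640] tensor(float)
for out in session.get_outputs():
    print('output:', out.name, out.shape, out.type)   # e.g. output0 [1, 300, 6] tensor(float)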

(2) Model inference

First, check that the exported model runs correctly through the Ultralytics framework:

yolo predict model=yolov10s.onnx

Or use the following onnxruntime inference script:

import glob
import os
import cv2
import numpy as np
import onnxruntime as ort


classes = {
    0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus',
    6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant',
    11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat',
    16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear',
    22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag',
    27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard',
    32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove',
    36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle',
    40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl',
    46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli',
    51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake',
    56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table',
    61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard',
    67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink',
    72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors',
    77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'
}
class Colors:
    """Ultralytics color palette https://ultralytics.com/."""

    def __init__(self):
        """Initialize colors as hex = matplotlib.colors.TABLEAU_COLORS.values()."""
        hexs = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',
                '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')
        self.palette = [self.hex2rgb(f'#{c}') for c in hexs]
        # print(self.palette)
        self.n = len(self.palette)

    def __call__(self, i, bgr=False):
        """Converts hex color codes to rgb values."""
        c = self.palette[int(i) % self.n]
        return (c[2], c[1], c[0]) if bgr else c

    @staticmethod
    def hex2rgb(h):  # rgb order (PIL)
        return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))


colors = Colors()  # create instance for 'from utils.plots import colors'


def letterbox(
        im,
        new_shape,
        color=(114, 114, 114),
        auto=False,
        scaleFill=False,
        scaleup=True,
        stride=32,
):
    """
    Resize and pad image while meeting stride-multiple constraints
    Returns:
        im (array): (height, width, 3)
        ratio (array): [w_ratio, h_ratio]
        (dw, dh) (array): [w_padding h_padding]
    """
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):  # [h_rect, w_rect]
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better val mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # wh ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))  # w h
    dw, dh = (
        new_shape[1] - new_unpad[0],
        new_shape[0] - new_unpad[1],
    )  # wh padding

    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, stride), np.mod(dh, stride)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])  # [w h]
        ratio = (
            new_shape[1] / shape[1],
            new_shape[0] / shape[0],
        )  # [w_ratio, h_ratio]

    dw /= 2  # divide padding into 2 sides
    dh /= 2
    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(
        im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
    )
    return im, ratio, (dw, dh)


def rescale_coords(boxes, image_shape, input_shape):
    image_height, image_width = image_shape
    input_height, input_width = input_shape

    scale = min(input_width / image_width, input_height / image_height)

    pad_w = (input_width - image_width * scale) / 2
    pad_h = (input_height - image_height * scale) / 2

    boxes[:, [0, 2]] = (boxes[:, [0, 2]] - pad_w) / scale
    boxes[:, [1, 3]] = (boxes[:, [1, 3]] - pad_h) / scale

    boxes[:, [0, 2]] = np.clip(boxes[:, [0, 2]], 0, image_width)
    boxes[:, [1, 3]] = np.clip(boxes[:, [1, 3]], 0, image_height)

    return boxes.astype(int)


def preprocess(image, input_shape):
    # Resize
    input_img = letterbox(image, input_shape)[0]
    # Transpose
    input_img = input_img[..., ::-1].transpose(2, 0, 1)
    # Expand
    input_img = input_img[np.newaxis, :, :, :].astype(np.float32)
    # Contiguous
    input_img = np.ascontiguousarray(input_img)
    # Norm
    blob = input_img / 255.0
    return blob


def postprocess(outs, conf_thres, image_shape, input_shape):
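    # Note (assumption): YOLOv10's exported ONNX model is end-to-end (NMS-free), and `outs`
    # is expected to be a (num_detections, 6) array of [x1, y1, x2, y2, score, class_id] rows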
    # Filtered by conf
    outs = outs[outs[:, 4] >= conf_thres]

    # Extract
    boxes = outs[:, :4]
    scores = outs[:, -2]
    labels = outs[:, -1].astype(int)

    # Rescale
    boxes = rescale_coords(boxes, image_shape, input_shape)

    return boxes, scores, labels


def main():
    conf_thres = 0.25
    input_shape = (640, 640)
    image_path = r'D:\Yolov10\yolov10-main\ultralytics\assets'
    save_path = image_path + '_Rst'
    os.makedirs(save_path,exist_ok=True)
    model_path = r'D:\Yolov10\yolov10-main\yolov10-detect\yolov10s.onnx'

    ort_model = ort.InferenceSession(model_path)
    imgs = glob.glob(os.path.join(image_path,'*.jpg'))
    imgs.sort()
    for img in imgs:
        imgname = os.path.basename(img)
        # Preprocess
        im0 = cv2.imread(img)
        image_shape = im0.shape[:2]
        blob = preprocess(im0, input_shape)

        # Inference
        outs = ort_model.run(None, {'images': blob})[0][0]

        # Postprocess
        boxes, scores, labels = postprocess(outs, conf_thres, image_shape, input_shape)

        # Draw the detections on the image
        for label, score, box in zip(labels, scores, boxes):
            label_text = f'{classes[label]}: {score:.2f}'
            color = colors(label,True)
            cv2.rectangle(im0, (box[0], box[1]), (box[2], box[3]), color, thickness=2, lineType=cv2.LINE_AA)
            cv2.putText(im0, label_text, (box[0], box[1] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

        # cv2.imshow('image', im0)
        # cv2.waitKey(0)
        cv2.imwrite(os.path.join(save_path, imgname), im0)


if __name__ == '__main__':
    main()

Reference: YOLOv10 正式发布!原理、部署、应用一站式齐全 (CSDN blog)
