To load and run a YOLO .pt model in a C++ environment, the .pt model must first be converted to a TensorRT engine model. The following walks through the conversion workflow for a YOLOv8 project on Ubuntu 20.04.
Prerequisites:
(1) CUDA, cuDNN, PyTorch, and TensorRT installed
(2) Development tool: CLion
(3) Languages: Python, C++
1. Convert the .pt model to .wts with a Python script
(1) Install the YOLO framework:
pip install ultralytics -i https://pypi.tuna.tsinghua.edu.cn/simple
(2) Save the code below as gen_wts_yolov8.py, create two directories named model and output, then run the following command to obtain the .wts model:
python gen_wts_yolov8.py -w ./model/yolov8s.pt -o ./output/yolov8s.wts -t detect
import argparse
import os
import struct

import torch


def parse_args():
    parser = argparse.ArgumentParser(description='Convert .pt file to .wts')
    parser.add_argument('-w', '--weights', required=True,
                        help='Input weights (.pt) file path (required)')
    parser.add_argument('-o', '--output',
                        help='Output (.wts) file path (optional)')
    parser.add_argument('-t', '--type', type=str, default='detect',
                        choices=['detect', 'cls', 'seg', 'pose'],
                        help='Model type: detection/classification/segmentation/pose')
    args = parser.parse_args()
    if not os.path.isfile(args.weights):
        raise SystemExit('Invalid input file')
    if not args.output:
        args.output = os.path.splitext(args.weights)[0] + '.wts'
    elif os.path.isdir(args.output):
        args.output = os.path.join(
            args.output,
            os.path.splitext(os.path.basename(args.weights))[0] + '.wts')
    return args.weights, args.output, args.type


pt_file, wts_file, m_type = parse_args()
print(f'Generating .wts for {m_type} model')

# Load the checkpoint on CPU and convert weights to FP32
print(f'Loading {pt_file}')
device = 'cpu'
model = torch.load(pt_file, map_location=device)['model'].float()

if m_type in ['detect', 'seg', 'pose']:
    anchor_grid = model.model[-1].anchors * model.model[-1].stride[..., None, None]
    delattr(model.model[-1], 'anchors')

model.to(device).eval()

# Write the .wts text format: first line is the tensor count, then one line
# per tensor: "<name> <element count>" followed by each FP32 value as hex
with open(wts_file, 'w') as f:
    f.write('{}\n'.format(len(model.state_dict().keys())))
    for k, v in model.state_dict().items():
        vr = v.reshape(-1).cpu().numpy()
        f.write('{} {} '.format(k, len(vr)))
        for vv in vr:
            f.write(' ')
            f.write(struct.pack('>f', float(vv)).hex())
        f.write('\n')
2. Install Eigen 3
Official site: Eigen
Download the source from the official site or from Git, then unpack and build it; the default install path is /usr/local/include/eigen3/:
unzip eigen-3.3.9.zip
cd eigen-3.3.9
mkdir build
cd build
cmake ..
sudo make install
3. Build the model-conversion project in C++
Git:https://github.com/emptysoal/TensorRT-YOLOv8-ByteTrack
(1) Download the source from the Git repository above and open the project in CLion.
(2) Important: edit the configuration file. Open tensorrtx-yolov8/include/config.h and set kNumClass to the number of classes in your trained model; if the model's input resolution was changed, also update kInputH and kInputW.
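For orientation, the relevant part of config.h looks roughly like the excerpt below (the values shown are illustrative defaults; check your copy of the file for the exact names and contents):

```cpp
// include/config.h (excerpt) -- illustrative values only
constexpr static int kNumClass = 80;  // set to your model's class count
constexpr static int kInputH = 640;   // update if you trained at a different resolution
constexpr static int kInputW = 640;
```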
(3) Open CLion's terminal and run the following commands:
cd tensorrtx-yolov8
mkdir build
cd build
cmake ..
make
(4) Copy the .wts model generated in step 1 into the build directory, then run the following command to obtain the engine model (the trailing s selects the yolov8s network scale and should match the .pt model you converted):
./yolov8 -s yolov8s.wts yolov8s.engine s