Inference with C++ and TensorRT runs in a C/C++ environment and can be deployed on any edge device where that environment is set up. The device needs an NVIDIA programmable graphics card (GPU), with CUDA and cuDNN installed for GPU support, plus TensorRT itself. The TensorRT installation steps are covered in my other article, 推理引擎TensorRT安装与多线程推理(Python) (TensorRT installation and multithreaded inference in Python, on CSDN).
For C++ multithreaded model inference, see another author's article, TensorRT部署yolov8目标检测任务 (deploying a YOLOv8 object-detection task with TensorRT): https://zhuanlan.zhihu.com/p/681591561?utm_id=0, with code at https://github.com/cyberyang123/Learning-TensorRT.
I installed CLion on Ubuntu and configure and run the project inside CLion.
In CLion, create a C++ Executable project; this is a CMake-based project type.
Configure the CMake options:
In the CMake options field shown in the figure below, enter the following line so that CMake can find the nvcc compiler:
-DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
If OpenCV is installed on the system and the project needs it, also add the following option:
-DOpenCV_DIR=/opt/software/build
Separate the two options with a space, as shown in the figure below:
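For reference, with both options set (using the example paths above, which should be adjusted to your own install locations), the CMake options field reads:

```
-DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc -DOpenCV_DIR=/opt/software/build
```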
Configure CMakeLists.txt:
My CMakeLists.txt contains the following:
cmake_minimum_required(VERSION 3.28)
project(Demo)
set(CMAKE_CXX_STANDARD 17)
# set(CUDA_NVCC_FLAGS -g;-G) # -G is an nvcc device-debug flag; if it reaches the host compiler it fails with: c++: error: unrecognized command-line option '-G'
set(CUDA_TOOLKIT_ROOT_DIR /usr/local/cuda)
set(OPENCV_INCLUDE /opt/software/opencv-4.x/include)
set(OPENCV_LIB /opt/software/opencv-4.x/lib)
set(TENSORRT_INCLUDE /opt/software/TensorRT-10.2.0.19/include)
set(TENSORRT_LIB /opt/software/TensorRT-10.2.0.19/lib)
include_directories(${TENSORRT_INCLUDE} ${OPENCV_INCLUDE} ${CUDA_TOOLKIT_ROOT_DIR}/include )
# Collect all OpenCV library files
file(GLOB OPENCV_LIBS ${OPENCV_LIB}/*.so)
# Collect the CUDA and TensorRT library files
file(GLOB CUDA_LIBS ${CUDA_TOOLKIT_ROOT_DIR}/lib64/*.so)
file(GLOB TENSORRT_LIBS ${TENSORRT_LIB}/*.so)
add_executable(Demo main.cpp
detect.cpp
preprocessing.hpp
yolov8_utils.cpp
)
target_link_libraries(Demo ${OPENCV_LIBS} ${TENSORRT_LIBS} ${CUDA_LIBS})
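Globbing every .so in those directories works, but it links far more than the project needs and will silently pick up any unrelated library that happens to sit there. A narrower sketch, assuming the same install layout and that the OpenCV build in /opt/software/build provides CMake package files, would be:

```cmake
# Sketch of an alternative: link named libraries instead of globbing directories.
# The paths and library names below are assumptions based on the layout above.
find_package(OpenCV REQUIRED PATHS /opt/software/build)
target_include_directories(Demo PRIVATE ${TENSORRT_INCLUDE} ${CUDA_TOOLKIT_ROOT_DIR}/include ${OpenCV_INCLUDE_DIRS})
target_link_directories(Demo PRIVATE ${TENSORRT_LIB} ${CUDA_TOOLKIT_ROOT_DIR}/lib64)
target_link_libraries(Demo PRIVATE ${OpenCV_LIBS} nvinfer nvonnxparser cudart)
```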
The remaining code is similar to the GitHub repository linked above. Then, in the run configurations, add a configuration of type CMake Application and click the save button.
If the project builds without errors, selecting this configuration should run it successfully.