Installing and Testing TensorRT 7.1, Libtorch, and Caffe under ROS

This article walks through installing and configuring TensorRT and Caffe, covers common problems such as missing shared-object files and broken symbolic links, and gives configuration steps and sample code for running TensorRT and Caffe under ROS.

Configuration

Linker error:
cannot find -lcudart
Fix:
sudo ln -s /usr/local/cuda/lib64/libcudart.so /usr/lib/libcudart.so

1: Installing TensorRT 7.1.3.4

Download link: https://developer.nvidia.com/tensorrt

Install guide: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing

tar -xvzf TensorRT-7.1.3.4.Ubuntu-16.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz

cd data/mnist
python3 ./download_pgms.py
cd ../../samples
pip install pillow
../bin/sample_mnist

export LD_LIBRARY_PATH=/home/cs/TensorRT-7.1.3.4/lib:$LD_LIBRARY_PATH


cd TensorRT-7.1.3.4/python
python3.6 -m pip install tensorrt-7.1.3.4-cp36-none-linux_x86_64.whl

Copy TensorRT's shared-object (.so) files into /usr/lib/ when errors such as the following appear:

1) ImportError: libnvinfer.so.6: cannot open shared object file: No such file or directory

2) ImportError: libnvonnxparser.so.6: cannot open shared object file: No such file or directory

Fix:

1) sudo cp TensorRT-6.01/targets/x86_64-linux-gnu/lib/libnvinfer.so.6 /usr/lib/

2) sudo cp TensorRT-6.01/targets/x86_64-linux-gnu/lib/libnvonnxparser.so.6 /usr/lib/

If pycuda is missing:
ModuleNotFoundError: No module named 'pycuda'

sudo pip3.6 install pycuda==2019.1.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
// or, without sudo:
pip3.6 install pycuda -i https://pypi.tuna.tsinghua.edu.cn/simple
If ldconfig reports "is not a symbolic link":
sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link
Recreate the symlink in that library directory, e.g.:
sudo ln -sf libcudnn.so.8.0.2 libcudnn.so.8

If the following link error appears:

CMakeFiles/traffic_det_reg_caffe_trt.dir/src/TrafficDetection.cpp.o:
In function `onnxToTRTModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, nvinfer1::ICudaEngine*&, int const&)':
TrafficDetection.cpp:(.text+0x1357): undefined reference to `createNvOnnxParser_INTERNAL'

Add the following to CMakeLists.txt:

/home/name/TensorRT-7.1.3.4/lib/libnvinfer.so
/home/name/TensorRT-7.1.3.4/lib/libnvinfer_plugin.so
/home/name/TensorRT-7.1.3.4/lib/libnvparsers.so
/home/name/TensorRT-7.1.3.4/lib/libnvonnxparser.so
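
In a CMake project those four libraries are typically passed to target_link_libraries. A minimal sketch, assuming the tarball was unpacked to /home/name/TensorRT-7.1.3.4 and the target is named traffic_det_reg_caffe_trt as in the error above (adjust both to your setup):

```cmake
# Assumed install prefix; change to wherever the TensorRT tar was unpacked.
set(TENSORRT_ROOT /home/name/TensorRT-7.1.3.4)

include_directories(${TENSORRT_ROOT}/include)

# Resolves "undefined reference to createNvOnnxParser_INTERNAL":
# that symbol lives in libnvonnxparser.so.
target_link_libraries(traffic_det_reg_caffe_trt
    ${TENSORRT_ROOT}/lib/libnvinfer.so
    ${TENSORRT_ROOT}/lib/libnvinfer_plugin.so
    ${TENSORRT_ROOT}/lib/libnvparsers.so
    ${TENSORRT_ROOT}/lib/libnvonnxparser.so)
```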

2: Testing TensorRT 7.1.3.4 from C++

The ICudaEngine class is the Engine; a pointer to one is returned by IBuilder's buildCudaEngine()/buildEngineWithConfig() methods.
Note that an Engine can be obtained in two ways: by building it from an imported model, or by deserializing a previously serialized Engine.

// Method 1: raw pointer
ICudaEngine *engine = builder->buildCudaEngine(*network);
// Method 2: shared_ptr with a custom deleter
mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
        builder->buildCudaEngine(*network), InferDeleter());
// Method 3: unique_ptr, built with an IBuilderConfig
mEngine = SampleUniquePtr<nvinfer1::ICudaEngine>(
        builder->buildEngineWithConfig(*network, *config));

For example, converting the input/output channel layout. Converting the input channels for segmentation:

            cv::Mat img = cv::imread(img_path);  // img_path: path to the input image
            if (img.empty()) continue;
            // resize/pad; the BGR -> RGB swap happens in the loop below
            cv::Mat pr_img = preprocess_img(img);
            int i = 0;
            for (int row = 0; row < INPUT_H; ++row) {
                uchar* uc_pixel = pr_img.data + row * pr_img.step;
                for (int col = 0; col < INPUT_W; ++col) {
                    // HWC (BGR, uchar) -> CHW (RGB, float in [0,1]) at batch index b
                    data[b * 3 * INPUT_H * INPUT_W + i] = (float)uc_pixel[2] / 255.0;
                    data[b * 3 * INPUT_H * INPUT_W + i + INPUT_H * INPUT_W] = (float)uc_pixel[1] / 255.0;
                    data[b * 3 * INPUT_H * INPUT_W + i + 2 * INPUT_H * INPUT_W] = (float)uc_pixel[0] / 255.0;
                    uc_pixel += 3;
                    ++i;
                }
            }