Configuration
Linker error:
cannot find -lcudart
Fix: symlink the CUDA runtime library into the default linker search path:
sudo ln -s /usr/local/cuda/lib64/libcudart.so /usr/lib/libcudart.so
1: TensorRT 7.1.3.4 Installation
Download link: https://developer.nvidia.com/tensorrt
Install guide: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing
tar -xvzf TensorRT-7.1.3.4.Ubuntu-16.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz
# Fetch the MNIST test data and run the bundled sample:
cd data
python3 ./download_pgms.py
cd sample
pip install pillow
../bin/sample_mnist
# Make the TensorRT libraries visible to the dynamic loader:
export LD_LIBRARY_PATH=/home/cs/TensorRT-7.1.3.4/lib:$LD_LIBRARY_PATH
# Install the Python bindings:
cd /TensorRT-7.1.3.4/python
python3.6 -m pip install tensorrt-7.1.3.4-cp36-none-linux_x86_64.whl
Copy the TensorRT shared-library (.so) files into /usr/lib/. For example, given:
1) ImportError: libnvinfer.so.6: cannot open shared object file: No such file or directory
2) ImportError: libnvonnxparser.so.6: cannot open shared object file: No such file or directory
Fix:
1) sudo cp TensorRT-6.01/targets/x86_64-linux-gnu/lib/libnvinfer.so.6 /usr/lib/
2) sudo cp TensorRT-6.01/targets/x86_64-linux-gnu/lib/libnvonnxparser.so.6 /usr/lib/
pycuda problem:
ModuleNotFoundError: No module named 'pycuda'
sudo pip3.6 install pycuda==2019.1.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
// or, without sudo:
pip3.6 install pycuda -i https://pypi.tuna.tsinghua.edu.cn/simple
"is not a symbolic link" problem:
sbin/ldconfig.real: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link
Fix: in the cuDNN library directory, recreate the symlink (repeat for every library ldconfig complains about), e.g.:
sudo ln -sf libcudnn.so.8.0.2 libcudnn.so.8
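Since ldconfig typically complains about several cuDNN libraries in turn, a loop can recreate all the symlinks at once. A minimal sketch, demonstrated in a scratch directory with fabricated file names; the 8.0.2 version suffix is an assumption, and on a real system you would run the ln commands with sudo inside /usr/local/cuda-11.1/targets/x86_64-linux/lib:

```shell
# Demo in a scratch directory; point LIBDIR at the real cuDNN lib dir
# (and use sudo) on an actual system. Version suffix 8.0.2 is an assumption.
LIBDIR=$(mktemp -d)
touch "$LIBDIR/libcudnn_ops_infer.so.8.0.2" "$LIBDIR/libcudnn_cnn_infer.so.8.0.2"
for f in "$LIBDIR"/libcudnn*.so.8.0.2; do
  # libcudnn_xxx.so.8.0.2 -> libcudnn_xxx.so.8
  ln -sf "$(basename "$f")" "${f%.0.2}"
done
ls -l "$LIBDIR"
```

After relinking, run sudo ldconfig again; the "is not a symbolic link" warnings should disappear.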
Linker error:
CMakeFiles/traffic_det_reg_caffe_trt.dir/src/TrafficDetection.cpp.o:
In function `onnxToTRTModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, nvinfer1::ICudaEngine*&, int const&)':
TrafficDetection.cpp:(.text+0x1357): undefined reference to `createNvOnnxParser_INTERNAL'
Fix: add the TensorRT libraries to CMakeLists.txt:
/home/name/TensorRT-7.1.3.4/lib/libnvinfer.so
/home/name/TensorRT-7.1.3.4/lib/libnvinfer_plugin.so
/home/name/TensorRT-7.1.3.4/lib/libnvparsers.so
/home/name/TensorRT-7.1.3.4/lib/libnvonnxparser.so
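In CMakeLists.txt these libraries go into target_link_libraries. A sketch only: the target name is taken from the error message above, and the TENSORRT_ROOT path must match your own install:

```cmake
# Sketch: adjust TENSORRT_ROOT and the target name to your project.
set(TENSORRT_ROOT /home/name/TensorRT-7.1.3.4)
target_link_libraries(traffic_det_reg_caffe_trt
    ${TENSORRT_ROOT}/lib/libnvinfer.so
    ${TENSORRT_ROOT}/lib/libnvinfer_plugin.so
    ${TENSORRT_ROOT}/lib/libnvparsers.so
    ${TENSORRT_ROOT}/lib/libnvonnxparser.so)
```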
2: TensorRT 7.1.3.4 C++ Test
The ICudaEngine class is the engine itself; a pointer to one is returned by the IBuilder methods buildCudaEngine()/buildEngineWithConfig().
Note that there are two ways to obtain an engine: build it from an imported model, or load it by deserializing a previously serialized engine.
// method 1: raw pointer
ICudaEngine* engine = builder->buildCudaEngine(*network);
// method 2: shared_ptr with a custom deleter
nvinfer1::IPluginFactory* mPlugin;
mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
    builder->buildCudaEngine(*network), InferDeleter());
// method 3: unique_ptr, building with an IBuilderConfig
mEngine = SampleUniquePtr<nvinfer1::ICudaEngine>(
    builder->buildEngineWithConfig(*network, *config));
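The second route, deserializing a previously saved engine, is not shown above. A minimal sketch under TensorRT 7, where the file path and the logger instance are supplied by the caller (this fragment needs the TensorRT headers and library to compile; error handling omitted):

```cpp
// Sketch: load a serialized engine from disk (TensorRT 7 API).
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include "NvInfer.h"

nvinfer1::ICudaEngine* loadEngine(const std::string& path, nvinfer1::ILogger& logger) {
    // Read the serialized engine into a byte buffer.
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    // The third argument (IPluginFactory*) is deprecated in TRT 7; pass nullptr.
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```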
For example, input/output channel conversion.
Segmentation input channel conversion (the truncated loop below is completed with the increments and closing braces the pattern implies):
cv::Mat img = cv::imread(img_path);  // img_path: path to the input image
if (img.empty()) continue;
// BGR to RGB
cv::Mat pr_img = preprocess_img(img);
int i = 0;
for (int row = 0; row < INPUT_H; ++row) {
    uchar* uc_pixel = pr_img.data + row * pr_img.step;
    for (int col = 0; col < INPUT_W; ++col) {
        // split the interleaved pixel into three normalized planes
        data[b * 3 * INPUT_H * INPUT_W + i] = (float)uc_pixel[2] / 255.0;
        data[b * 3 * INPUT_H * INPUT_W + i + INPUT_H * INPUT_W] = (float)uc_pixel[1] / 255.0;
        data[b * 3 * INPUT_H * INPUT_W + i + 2 * INPUT_H * INPUT_W] = (float)uc_pixel[0] / 255.0;
        uc_pixel += 3;
        ++i;
    }
}

This article has described in detail how to install and configure the TensorRT and Caffe environments, including fixes for common problems such as missing shared-object files and symbolic-link errors, along with configuration steps and example code for TensorRT and Caffe on the ROS platform.