Clone the code
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git submodule update --init
If git clone times out, just retry it a few times.
Avoiding submodule download timeouts
In the .gitmodules file, replace
https://github.com with https://github.com.cnpmjs.org
git submodule sync
git submodule update --init --recursive
If a download still fails, run git status: the directories showing differences are the ones that failed. Delete them manually, then rerun the update command.
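The hand edit of .gitmodules above can also be done with one sed command. A minimal sketch, demonstrated on a sample file in /tmp (the submodule entry shown is illustrative); in the repo the target is the real .gitmodules, followed by git submodule sync and update:

```shell
# Rewrite every GitHub URL in a sample .gitmodules to the mirror host
cat > /tmp/gitmodules.sample <<'EOF'
[submodule "utils"]
    path = utils
    url = https://github.com/dusty-nv/jetson-utils
EOF
sed -i 's@https://github.com@https://github.com.cnpmjs.org@g' /tmp/gitmodules.sample
grep url /tmp/gitmodules.sample
```

On the real repo this is `sed -i 's@https://github.com@https://github.com.cnpmjs.org@g' .gitmodules`, run from the jetson-inference root.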
cmake
cd jetson-inference
mkdir build
cd build
cmake ../
cmake will fail at this point; CUDA, TensorRT, and cuDNN must be installed first.
Installing CUDA
https://developer.nvidia.com/cuda-toolkit-archive
Select Linux > x86_64 > Ubuntu > 20.04 > runfile (local)
Append these environment variables to the end of /etc/profile:
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib:/usr/local/cuda/lib64
source /etc/profile
ldconfig
vim ~/.bashrc
Append to the end of .bashrc:
source /etc/profile
If you use conda, also append to the end of .bashrc:
conda activate base
Check the installed version with:
nvcc -V
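The two export lines can be appended to the profile in one heredoc instead of editing by hand. A sketch, shown on a temp copy (/tmp/profile.demo); on a real system point PROFILE at /etc/profile and run as root:

```shell
# Append the CUDA environment lines to a profile file in one step
PROFILE=/tmp/profile.demo
: > "$PROFILE"
cat >> "$PROFILE" <<'EOF'
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib:/usr/local/cuda/lib64
EOF
grep -c cuda "$PROFILE"   # both appended lines reference the cuda install
```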
Installing TensorRT
Download the version matching your CUDA install:
https://developer.nvidia.com/nvidia-tensorrt-8x-download
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12/lib:/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12/targets/x86_64-linux-gnu/lib
export C_INCLUDE_PATH=$C_INCLUDE_PATH:/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12/include
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12/include
export CUDA_MODULE_LOADING=LAZY
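The four exports above can be parameterized on the install root so only one path needs editing when the TensorRT version changes. TRT_ROOT below is the unpack location from these notes; adjust it to wherever your tarball landed:

```shell
# Same TensorRT environment setup, with the install root factored out
TRT_ROOT=/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TRT_ROOT/lib:$TRT_ROOT/targets/x86_64-linux-gnu/lib
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$TRT_ROOT/include
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$TRT_ROOT/include
export CUDA_MODULE_LOADING=LAZY
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep TensorRT   # confirm both lib dirs are on the path
```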
Edit CMakeLists.txt
After the # build C/C++ library line, add the TensorRT include and link paths:
include_directories(/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12/include)
link_directories(/workspace/doc/nvidia/TensorRT/TensorRT-8.6.0.12/lib/)
Installing cuDNN
Download the version matching your CUDA install:
https://developer.nvidia.com/rdp/cudnn-archive
After unpacking, copy the files into the CUDA install tree:
cd include
mv *.* /usr/local/cuda/include
cd lib
mv *.* /usr/local/cuda/lib64
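An alternative to the mv above is cp -P, which preserves the versioned library symlinks and leaves the unpacked tarball intact for reuse. A sketch on /tmp stand-in paths (the file names are illustrative); on a real system SRC is the unpacked cuDNN directory and DST is /usr/local/cuda:

```shell
# Demonstrate that cp -P copies a library symlink as a symlink
SRC=/tmp/cudnn_src; DST=/tmp/cudnn_dst
rm -rf "$SRC" "$DST"
mkdir -p "$SRC/lib" "$DST/lib64"
touch "$SRC/lib/libcudnn.so.8.9.0"                 # stand-in for the real library
ln -s libcudnn.so.8.9.0 "$SRC/lib/libcudnn.so.8"   # versioned symlink, as in the tarball
cp -P "$SRC"/lib/libcudnn* "$DST/lib64/"
ls -l "$DST/lib64"   # the .so.8 entry remains a symlink after the copy
```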
make
cd jetson-inference
cd build
make
make install
Running an example
cd jetson-inference/build/x86_64/bin
./imagenet.py --network=vgg-16 images/jellyfish.jpg images/test/output_jellyfish.jpg
The first run will fail with a model download error. Run the following commands to switch the download mirror and fetch the models:
cd jetson-inference/tools/
sed -i -e 's@https://nvidia.box.com/shared/static@https://bbs.gpuworld.cn/mirror@g' download-models.sh
./download-models.sh
Or see https://code84.com/763863.html