Jetson Xavier: configuring TensorFlow 1.12.0 (CUDA 10.0 + cuDNN 7.3.0)

Contents

1. Scope of this document

2. Current software versions

2.1 Software installed during the Xavier flashing process

2.2 CUDA version: nvcc -V

2.3 Reading hardware info with the installed CUDA samples

2.4 cuDNN setup

2.5 Confirming the cuDNN version

2.6 OpenCV version: pkg-config --modversion opencv

2.7 gcc: gcc --version

2.8 g++ --version

2.9 cmake --version

3. Installing dependencies

3.1 Common dependencies

3.2 Installing Java 8

3.3 Installing curl

3.4 Installing Protobuf 3.5.0

3.5 Installing bazel 0.15.0

3.6 Installing Eigen 3

3.7 Installing NCCL

4. Installing the TensorFlow 1.12.0 C++ interface

4.1 Creating swap space

4.2 TensorFlow installation options

5. Verifying the TensorFlow installation

References


 

1. Scope of this document

1. How to configure the TensorFlow 1.12.0 C++ interface on the Xavier (CUDA 10.0 + cuDNN 7.3.0 + protobuf 3.5.0 + bazel 0.15.0 + Eigen 3 + NCCL).

2. How to install Ubuntu 16.04 on the Xavier.

2. Current software versions

As of 2018-11-09:

tensorflow: https://github.com/tensorflow/tensorflow/releases

Latest release: 1.12.0.

protobuf:

https://github.com/protocolbuffers/protobuf

https://github.com/protocolbuffers/protobuf/releases

Latest release: 3.6.1. We use 3.5.0 here (note that TensorFlow 1.12.0 nominally wants protobuf 3.6.1).

bazel: https://github.com/bazelbuild/bazel/releases

Latest release: 0.18.1. We use 0.15.0.

2.1 Software and versions already installed by the Xavier flashing process:

 

2.2 CUDA version: nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2018 NVIDIA Corporation

Built on Sun_Aug_12_21:08:25_CDT_2018

Cuda compilation tools, release 10.0, V10.0.117

 

Another way to check the CUDA version: cat /usr/local/cuda/version.txt

 

2.3 Reading hardware info with the installed CUDA samples

    cd /usr/local/cuda-10.0/samples/1_Utilities/deviceQuery    # replace cuda-10.0 with your CUDA version

    sudo make

    sudo ./deviceQuery

    

 

2.4 cuDNN setup

(The Xavier ships with cuDNN 7.3.0, but the corresponding headers and libraries are not placed under the CUDA directory.)

On the Xavier, the cuDNN 7.3.0 header lives in /usr/include.

  1. cd /usr/include && sudo cp cudnn.h /usr/local/cuda/include

  2. cd /usr/lib/aarch64-linux-gnu && sudo cp libcudnn* /usr/local/cuda/lib64/

  3. cd /usr/local/cuda/lib64

  4. sudo chmod 777 libcudnn*

Note: after copying, change the permissions on libcudnn* (here to 777), otherwise later builds fail with errors like "cannot find libcudnn.so" or "cannot find libcudart.so.*.*".

  5. sudo ldconfig

/sbin/ldconfig.real: /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7 is not a symbolic link

This shows cuDNN's default install path is /usr/local/cuda-9.0/targets/x86_64-linux/lib/. Steps 1-4 may therefore be unnecessary (unlike on non-ARM systems, the files may not need to be copied into place at all), but they were run anyway to be safe.

Fix: recreate the symlink:

  sudo ln -sf /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7.3.0 /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7

  6. sudo ldconfig
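The fix above generalizes: the soname link (libcudnn.so.7) must point at the versioned library file that actually exists. A minimal sketch, demonstrated in a throwaway directory so it is safe to run anywhere; on the device the same ln -sf commands would target the real library directory named in the ldconfig warning:

```shell
#!/bin/sh
# Demonstrate the soname relink in a scratch directory (safe to run anywhere).
# On the Xavier, the paths would be the real ones from the ldconfig warning.
scratch=$(mktemp -d)
touch "$scratch/libcudnn.so.7.3.0"                            # the versioned library
ln -sf "$scratch/libcudnn.so.7.3.0" "$scratch/libcudnn.so.7"  # soname link
ln -sf "$scratch/libcudnn.so.7"     "$scratch/libcudnn.so"    # linker-name link
readlink "$scratch/libcudnn.so.7"                             # prints the .7.3.0 path
```

After recreating the links, run sudo ldconfig again so the loader cache picks them up.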

 

2.5 Confirming the cuDNN version:

cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2

    #define CUDNN_MAJOR 7

    #define CUDNN_MINOR 3

    #define CUDNN_PATCHLEVEL 0

 

    i.e. the version is 7.3.0
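The three #define lines can be collapsed into a single version string. A small sketch using awk; here the header excerpt is inlined via a heredoc so it runs anywhere, while on the device you would feed awk the real /usr/local/cuda/include/cudnn.h instead:

```shell
#!/bin/sh
# Combine CUDNN_MAJOR/MINOR/PATCHLEVEL into one "7.3.0"-style string.
# On the Xavier: awk '<same program>' /usr/local/cuda/include/cudnn.h
version=$(awk '/#define CUDNN_MAJOR/{ma=$3}
               /#define CUDNN_MINOR/{mi=$3}
               /#define CUDNN_PATCHLEVEL/{pa=$3}
               END{print ma "." mi "." pa}' <<'EOF'
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 3
#define CUDNN_PATCHLEVEL 0
EOF
)
echo "$version"   # prints 7.3.0
```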

    

2.6 OpenCV version: pkg-config --modversion opencv

3.2.0

2.7 gcc: gcc --version

7.3.0

2.8 g++ --version

7.3.0

2.9 cmake --version

3.10.2

3. Installing dependencies

3.1 Common dependencies

sudo apt-get update

sudo apt-get install python-enum34

sudo apt-get install libsdl2-dev

sudo apt-get install libsdl2-image-dev

sudo apt-get install -y --no-install-recommends libgflags-dev libgoogle-glog-dev

wget https://bootstrap.pypa.io/get-pip.py -O get-pip.py

sudo python get-pip.py

(Note: the original command used lowercase -o, which is wget's log-file option, so the download landed in get-pip.py.1 and had to be run from there; uppercase -O writes the download to the named file.)

sudo apt-get install openjdk-8-jdk (takes a while)

sudo apt-get install zip unzip autoconf automake libtool curl zlib1g-dev maven -y

sudo apt-get install python-numpy swig python-dev python-pip python-wheel -y

 

3.2 Installing Java 8

sudo add-apt-repository ppa:webupd8team/java

sudo apt-get update

sudo apt-get install oracle-java8-installer

This failed. Cause: a version conflict (see https://www.cnblogs.com/VeryGoodVeryGood/p/8318105.html). Installing the JDK from the PPA on Ubuntu produced:

download failed

Oracle JDK 8 is NOT installed.

dpkg: error processing package oracle-java8-installer (--configure):

subprocess installed post-installation script returned error exit status 1

Errors were encountered while processing:

oracle-java8-installer

E: Sub-process /usr/bin/dpkg returned an error code (1)

Fix:

sudo rm /var/lib/dpkg/info/oracle-java8-installer.*

sudo apt install oracle-java8-installer

 

3.3 Installing curl

sudo apt-get install -y curl

If that fails, download and install it manually (see https://blog.youkuaiyun.com/j960828/article/details/81626826):

1. Download the curl tarball; the latest versions are listed at http://curl.haxx.se/download

cd ~/Downloads

wget https://curl.haxx.se/download/curl-7.55.1.tar.gz

2. Extract it:

tar -xzvf curl-7.55.1.tar.gz

3. Build and install over the existing copy:

cd curl-7.55.1

./configure

make

make install

If make install errors out, go back to the home directory and run: sudo apt install curl

3.4 Installing Protobuf 3.5.0

During installation this error appeared:

Errors were encountered while processing:

oracle-java8-installer

E: Sub-process /usr/bin/dpkg returned an error code (1)

Fix: as in section 3.2, remove the installed oracle-java8-installer and reinstall it.

cd ~/Downloads

git clone https://github.com/google/protobuf.git

cd protobuf

git checkout tags/v3.5.0

git submodule update --init --recursive

./autogen.sh

./configure

make (takes quite a while)

sudo make install

sudo ldconfig

3.5 Installing bazel 0.15.0

(The GitHub release files are hosted on Amazon servers and could not be downloaded from this network.)

sudo apt-get install -y --no-install-recommends openjdk-8-jdk

cd ~/Downloads

mkdir tools

cd tools && mkdir bazel && cd bazel

Download bazel-0.*.0-dist.zip.

On the Xavier, wget https://github.com/bazelbuild/bazel/releases/download/0.13.0/bazel-0.13.0-dist.zip failed to download the file (the connection was a 4G router on China Unicom's network).

Workaround: open https://github.com/bazelbuild/bazel/releases/download/0.13.0/bazel-0.13.0-dist.zip directly in a browser on a Windows machine, which downloaded it fine. (That machine was on the company network, which may already have had proxying in place; google.com and YouTube were reachable from it.)

Change 0.13.0 in the URL to whichever version you need to download any release.

The latest release is 0.18.1. TensorFlow 1.12.0 requires at least bazel 0.15.0, so rather than push my luck I went with exactly 0.15.0.

bazel 0.15.0 corresponds to protobuf 3.4.0.
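The release URL follows a fixed pattern, so the version string only needs to be set once. A small sketch that builds the URL for an arbitrary release; the wget itself is left commented out since it needs network access:

```shell
#!/bin/sh
# Build the bazel dist-archive URL for any release version.
BAZEL_VERSION=0.15.0
URL="https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-dist.zip"
echo "$URL"
# wget "$URL"   # run on a machine with working access to github.com
```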

unzip bazel-0.15.0-dist.zip

bash compile.sh (the build is fairly slow)

The resulting bazel binary ends up under output/bazel in the current directory.

sudo cp output/bazel /usr/local/bin   # copy the bazel binary to a directory in $PATH

# echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bashrc   # only needed if /usr/local/bin is not already on PATH

# source ~/.bashrc

3.6 Installing Eigen 3

cd ~/Downloads

wget http://mirror.bazel.build/bitbucket.org/eigen/eigen/get/f3a22f35b044.tar.gz

tar -xzvf f3a22f35b044.tar.gz

cd eigen-eigen-f3a22f35b044

mkdir build

cd build

cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr ..

make

sudo make install

3.7 Installing NCCL

TensorFlow's ./configure step needs it.

NCCL can be installed with:

cd ~/Downloads

git clone https://github.com/NVIDIA/nccl.git

cd nccl

make -j src.build

sudo make install

 

4. Installing the TensorFlow 1.12.0 C++ interface

4.1 Creating swap space

This avoids out-of-memory errors while building TensorFlow and its various dependencies.

# Create a swapfile for Ubuntu at the current directory location

fallocate -l 8G swapfile

# List out the file

ls -lh swapfile

# Change permissions so that only root can use it

chmod 600 swapfile

# List out the file

ls -lh swapfile

# Set up the Linux swap area

mkswap swapfile

# Now start using the swapfile

sudo swapon swapfile

# Show that it's now being used

swapon -s

 

4.2 TensorFlow installation options

Python is still TensorFlow's best-supported language, but most services here are written in C++ and typically load Python-trained models through a shared library (.so). It is therefore worth getting familiar with TensorFlow's C/C++ API; for the same algorithm, the C interface is faster than Python.

1. TensorFlow 1.12.0 (C++ interface) is TensorFlow's C++ API; through it, models trained in Python on a server can be used on the Xavier.

2. Installing TensorFlow for Jetson AGX Xavier (Py2/3) on the Xavier does allow training models on-device, but it is still far slower than a dedicated server GPU.

 

4.2.1 Option 1: install a prebuilt TensorFlow wheel for Python

(This route suits training and running models in Python on the Xavier, but it does not install the C++ interface.)

    Download a prebuilt wheel from https://developer.nvidia.com/embedded/downloads:

     python2    TensorFlow for Jetson AGX Xavier Py2   version 1.12.0rc2

     python3    TensorFlow for Jetson AGX Xavier Py3   version 1.12.0rc2

cd into the download directory.

For python2: pip install tensorflow_gpu-1.12.0rc2+nv18.11-cp27-cp27mu-linux_aarch64.whl

For python3: pip3 install tensorflow_gpu-1.12.0rc2+nv18.11-cp36-cp36m-linux_aarch64.whl
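The two commands above differ only in the interpreter and the wheel's tag. A small convenience sketch (my own wrapper, not from the original) that picks the matching wheel automatically, assuming the wheel files sit in the current directory:

```shell
#!/bin/sh
# Pick the NVIDIA wheel that matches the interpreter's major version.
PY_MAJOR=$(python -c 'import sys; print(sys.version_info[0])' 2>/dev/null \
        || python3 -c 'import sys; print(sys.version_info[0])')
if [ "$PY_MAJOR" = "2" ]; then
    PIP=pip;  WHEEL=tensorflow_gpu-1.12.0rc2+nv18.11-cp27-cp27mu-linux_aarch64.whl
else
    PIP=pip3; WHEEL=tensorflow_gpu-1.12.0rc2+nv18.11-cp36-cp36m-linux_aarch64.whl
fi
echo "$PIP install $WHEEL"
# "$PIP" install "$WHEEL"   # uncomment to run the actual install
```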

Problems hit during installation:

ERROR 1: hdf5.h: No such file or directory

1. Download an hdf5 release; I used hdf5-1.8.16.tar.gz (12 MB) from

https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.8/hdf5-1.8.16/src/

2. Extract and install:

$ tar -zxf hdf5-1.8.16.tar.gz

$ cd hdf5-1.8.16

$ ./configure --prefix=/usr/local/hdf5  --build=arm    # --build = platform the software is built on

$ sudo make

$ sudo make check

$ sudo make install

sudo apt install hdf5-helpers

sudo apt-get install libhdf5-serial-dev

4.2.2 Option 2: build the TensorFlow 1.12.0 C++ interface from source via a wrapper

(The C++ interface can only be installed by building from source.)

One thing to keep in mind with TensorFlow source builds: the official requirement is that the JDK must be Java 8.

Build TensorFlow 1.12.0 using the packaged wrapper from GitHub.

[1] IMPORTANT NOTE: in the build_tensorflow.sh file, please change the environment variables CUDA_TOOLKIT_PATH, CUDNN_INSTALL_PATH, and TENSORRT_INSTALL_PATH if these paths differ from the ones hard-coded in build_tensorflow.sh. CUDNN_INSTALL_PATH should be set to the path where libcudnn.so is located; TENSORRT_INSTALL_PATH to the path where libnvinfer.so is located.

  1. cd ~/Downloads
  2. git clone https://github.com/cding/tensorflow_wrapper.git
  3. cd tensorflow_wrapper
  4. Set the TensorFlow version (TENSORFLOW_TAG) in CMakeLists.txt:

if(NOT TENSORFLOW_TAG)

set(TENSORFLOW_TAG "v1.12.0")

endif()

  5. mkdir build
  6. cd build
  7. cmake ..
  8. make -j 8

During the build, cmake automatically runs git clone https://github.com/tensorflow/tensorflow.git to fetch TensorFlow.

With an unstable network (and git's buffer too small), the TensorFlow clone failed. Workaround: download it manually from the site (I ran git clone successfully on Windows) and copy it to /home/nvidia/Downloads/.

Then change the git repository address in CMakeLists.txt to the already-downloaded TensorFlow directory, so the build fetches TensorFlow from the local repository:

  GIT_REPOSITORY /home/nvidia/Downloads/tensorflow

#http://github.com/tensorflow/tensorflow.git

  9. sudo make install

 

4.2.3 Option 3: build TensorFlow from source directly

1. Download the TensorFlow source and configure it:

git clone https://github.com/tensorflow/tensorflow.git

cd tensorflow

git checkout r1.12

./configure

 

nvidia@jetson-0423218009492:~/Downloads/tensorflow$ ./configure

WARNING: ignoring LD_PRELOAD in environment.

WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".

You have bazel 0.15.0- (@non-git) installed.

Please specify the location of python. [Default is /usr/bin/python]:

Found possible Python library paths:

  /usr/local/lib/python2.7/dist-packages

  /usr/lib/python2.7/dist-packages

Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: y

Apache Ignite support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [Y/n]: y

XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n

No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n

No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y

CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 10.0

Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-10.0

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.3.0

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: y

TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/aarch64-linux-gnu]:/usr/src/tensorrt

Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 2.3.7

Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]:/usr/local/lib

NCCL found at /usr/local/lib/libnccl.so.2

Assuming NCCL header path is /usr/local/lib/../include/nccl.h

Please specify a list of comma-separated Cuda compute capabilities you want to build with.

You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.

Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 7.0

Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]: n

No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n

Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.

    --config=mkl             # Build with MKL support.
    --config=monolithic      # Config for mostly static monolithic build.
    --config=gdr             # Build with GDR support.
    --config=verbs           # Build with libverbs support.
    --config=ngraph          # Build with Intel nGraph support.

Configuration finished

nvidia@jetson-0423218009492:~/Downloads/tensorflow$
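If ./configure needs to be rerun, the TF 1.x configure script can also take its answers from environment variables instead of prompts. The names below reflect the settings chosen in the transcript above, but they are my reading of configure.py; verify them against the configure.py in your checkout before relying on this:

```shell
#!/bin/sh
# Pre-answer ./configure's prompts for the settings used above
# (variable names as read by the TF 1.x configure.py).
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.0
export TF_CUDNN_VERSION=7.3.0
export CUDNN_INSTALL_PATH=/usr/local/cuda-10.0
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/src/tensorrt
export TF_NCCL_VERSION=2.3.7
export NCCL_INSTALL_PATH=/usr/local/lib
export TF_CUDA_COMPUTE_CAPABILITIES=7.0
echo "configure pre-answered: CUDA $TF_CUDA_VERSION, cuDNN $TF_CUDNN_VERSION, CC $TF_CUDA_COMPUTE_CAPABILITIES"
# cd ~/Downloads/tensorflow && ./configure   # now skips the prompts answered above
```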

 

2. Build the pip package and the .so library.

Build the pip package:

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Build libtensorflow_cc.so:

# bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so

bazel build --config=opt --config=cuda --copt=-DPNG_ARM_NEON_OPT=0 //tensorflow:libtensorflow_cc.so

Errors hit during the build:

ERROR 1:

An icu sha256 mismatch. Fix: in tensorflow/third_party/icu, change the hash in workspace.bzl to the "expected" hash printed in the build error, then run bazel clean and rebuild.
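That fix is a single substitution in workspace.bzl. A sketch of the edit, demonstrated on a scratch copy of the file; the 64-character hashes below are placeholders, not real icu hashes, so substitute the "expected" hash from your own error message:

```shell
#!/bin/sh
# Demonstrate the sha256 swap on a scratch copy of workspace.bzl.
# NEW_SHA is a placeholder; use the hash your build error reports as "expected".
scratch=$(mktemp -d)
printf 'sha256 = "%s",\n' \
  "0000000000000000000000000000000000000000000000000000000000000000" \
  > "$scratch/workspace.bzl"
NEW_SHA=1111111111111111111111111111111111111111111111111111111111111111
sed -i "s/sha256 = \"[0-9a-f]\{64\}\"/sha256 = \"$NEW_SHA\"/" "$scratch/workspace.bzl"
grep -c "$NEW_SHA" "$scratch/workspace.bzl"   # prints 1
```

Remember to run bazel clean afterwards so the changed hash is picked up.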

 

Then comes a long wait.

ERROR 2: out of disk space. The Xavier's built-in 32 GB is not enough. Fixed by installing an SSD, mounting it on the home directory, and rebuilding. Mounting steps: https://blog.youkuaiyun.com/xingdou520/article/details/84309155

 

ERROR 3: unrecognized command line option '-mfpu=neon'

Fixes (either works):

Option 1: add --copt=-DPNG_ARM_NEON_OPT=0 to the build command:

bazel build --config=opt --config=cuda --copt=-DPNG_ARM_NEON_OPT=0 //tensorflow/***

Option 2: in tensorflow/contrib/lite/kernels/internal/BUILD, delete "-mfpu=neon", i.e. change

NEON_FLAGS_IF_APPLICABLE = select({ ":arm": [ "-O3", "-mfpu=neon", ],

to

NEON_FLAGS_IF_APPLICABLE = select({ ":arm": [ "-O3", ],

See https://stackoverflow.com/questions/29851128/gcc-arm64-aarch64-unrecognized-command-line-option-mfpu-neon

 

ERROR 4: linking failed when building with sudo bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so:

ERROR: /home/nvidia/Downloads/tensorflow/tensorflow/BUILD:499:1: Linking of rule '//tensorflow:libtensorflow_cc.so' failed (Exit 1)

Once the build finishes, package it into a pip-installable wheel:

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

This writes the pip install file into /tmp/tensorflow_pkg.

Install the freshly built package with pip:

pip install /tmp/tensorflow*.whl

 

5. Verifying the TensorFlow installation

Run in a terminal:

python

 

import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')

sess = tf.Session()

print(sess.run(hello))

 

 

 

 

 

 

References:

https://github.com/dusty-nv/jetson-inference

 

https://www.cnblogs.com/shihuc/p/6593041.html

https://www.python36.com/how-to-install-tensorflow-gpu-with-cuda-10-0-for-python-on-ubuntu/2/

https://blog.youkuaiyun.com/haoqimao_hard/article/details/80519293

 

 

 

 

 

 

 

 

 

 

 
