TensorFlow: Checking Whether CUDA and the GPU Are Available

This post shows how to check the installed TensorFlow version from Python and determine whether the system supports CUDA and a GPU. `print(tf.__version__)` reports the TensorFlow version, `tf.test.is_built_with_cuda()` confirms whether the library was built with CUDA support, and `tf.test.is_gpu_available()` tests whether a GPU can actually be used, returning `True` or `False` accordingly.


Check the TensorFlow version:

```python
import tensorflow as tf

print(tf.__version__)
```

Check whether TensorFlow was built with CUDA:

```python
tf.test.is_built_with_cuda()
```

Check whether a GPU is available:

```python
tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)
```

When a GPU is available, the call returns `True` (and TensorFlow's log output lists the devices it found); when no usable GPU is present, it returns `False`.
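Putting the three checks together, a minimal sketch (the inline comments assume a CUDA build of TensorFlow; note that `tf.test.is_gpu_available()` is deprecated in TF 2.x, as discussed further below):

```python
import tensorflow as tf

print(tf.__version__)                # installed TensorFlow version
print(tf.test.is_built_with_cuda())  # True if this build was compiled against CUDA
# Deprecated in TF 2.x; prefer tf.config.list_physical_devices('GPU') there
print(tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))
```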

Building TensorFlow from source:

1. Python 3.5, TensorFlow 1.12
2. With CUDA 10.0, cuDNN 7.3.1, and TensorRT-5.0.2.6-cuda10.0-cudnn7.3
3. Without MKL support

Hardware/software environment: Ubuntu 16.04, GeForce GTX 1080 Ti

Configuration session:

```bash
hp@dla:~/work/ts_compile/tensorflow$ ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.19.1 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3

Found possible Python library paths:
  /usr/local/lib/python3.5/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.5/dist-packages]

Do you wish to build TensorFlow with XLA JIT support? [Y/n]:
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:

Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-10.0

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.3.1

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/x86_64-linux-gnu]: /home/hp/bin/TensorRT-5.0.2.6-cuda10.0-cudnn7.3/targets/x86_64-linux-gnu

Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]:

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1,6.1,6.1]:

Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl             # Build with MKL support.
    --config=monolithic      # Config for mostly static monolithic build.
    --config=gdr             # Build with GDR support.
    --config=verbs           # Build with libverbs support.
    --config=ngraph          # Build with Intel nGraph support.
    --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
    --config=noaws           # Disable AWS S3 filesystem support.
    --config=nogcp           # Disable GCP support.
    --config=nohdfs          # Disable HDFS support.
    --config=noignite        # Disable Apache Ignite support.
    --config=nokafka         # Disable Apache Kafka support.
    --config=nonccl          # Disable NVIDIA NCCL support.
Configuration finished
```

Build (the wheel itself is then produced with the `build_pip_package` script; see the sketch after the install step):

```bash
bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package
```

Uninstall any existing TensorFlow:

```bash
hp@dla:~/temp$ sudo pip3 uninstall tensorflow
```

Install the freshly built wheel:

```bash
hp@dla:~/temp$ sudo pip3 install tensorflow-1.12.0-cp35-cp35m-linux_x86_64.whl
```
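The transcript jumps straight from `bazel build` to installing a wheel from `~/temp`. The intermediate step that generates that wheel is the `build_pip_package` script produced by the build; the output directory here is an assumption chosen to match the install step:

```bash
# Generate the pip wheel from the bazel build artifacts.
# ~/temp is an assumed output directory, matching the install step above.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/temp
```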
### Why the GPU May Be Unavailable in TensorFlow 2.19.0

When TensorFlow 2.19.0 cannot use a GPU, the cause can sit at several layers, including (but not limited to) the driver version, CUDA/cuDNN compatibility, environment-variable configuration, and hardware support. A detailed breakdown:

#### 1. Driver version mismatch

The NVIDIA display driver must match the CUDA toolkit version. A driver that is too old or too new can prevent TensorFlow from detecting the GPU[^4]. For example, TensorFlow 2.19 requires CUDA 11.8, so the installed NVIDIA driver must support CUDA 11.8.

#### 2. Incompatible CUDA/cuDNN versions

TensorFlow has explicit requirements on the CUDA and cuDNN versions. If the installed CUDA or cuDNN version is incompatible with TensorFlow, GPU detection can fail. For TensorFlow 2.19, CUDA 11.8 and cuDNN 8.x are recommended[^4]. Check the installed versions with:

```bash
nvcc --version
cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
```

#### 3. Environment variables not configured correctly

The CUDA and cuDNN paths must be added to the system's environment variables. If the paths are wrong, TensorFlow may be unable to find the libraries and therefore unable to detect the GPU. Check the paths with:

```bash
echo $LD_LIBRARY_PATH
```

Make sure the output contains the CUDA and cuDNN installation paths[^3].

#### 4. TensorFlow version issues

In TensorFlow 2.x, `tf.test.is_gpu_available()` is deprecated; the recommended way to detect GPUs is `tf.config.list_physical_devices('GPU')`[^2]. Relying on the old method can give misleading results.

#### 5. Conflicts in multi-GPU environments

In a multi-GPU setup there may be device conflicts or resource-allocation problems. List all visible GPU devices with:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
print("Num GPUs Available: ", len(gpus))
```

If the output is `0`, TensorFlow failed to detect any GPU.

#### 6. Virtualization constraints

In virtualized environments such as WSL or Docker, extra configuration may be needed before the GPU becomes usable. For example, using a GPU under WSL2 requires NVIDIA's WSL driver with CUDA support enabled[^1].

#### 7. Hardware compatibility

Some GPUs, especially very old or very new models, may not be supported by TensorFlow. Consult NVIDIA's official documentation to confirm that the card supports the required CUDA version.

---

### Example: verifying GPU availability

The following code verifies that TensorFlow can correctly detect a GPU:

```python
import tensorflow as tf

# Print the TensorFlow version
print("TensorFlow version:", tf.__version__)

# Check whether any GPU is visible
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    print("GPU available")
    for gpu in gpus:
        print("Device Name:", gpu.name)
else:
    print("GPU not available")
```
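When the checks above fail, it can help to rule out the driver layer and container passthrough first. A hedged sketch: `nvidia-smi` is the standard driver-level check, and the Docker line assumes the NVIDIA Container Toolkit is installed (the image tag is illustrative):

```bash
# Driver-level check: lists detected GPUs, the driver version, and the highest
# CUDA version that driver supports.
nvidia-smi

# Docker sanity check: GPUs must be passed through explicitly with --gpus.
# Assumes the NVIDIA Container Toolkit; the image tag is illustrative.
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```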