TensorFlow: training multiple models simultaneously on multiple GPUs

This post explains how to run separate TensorFlow sessions on different GPUs. By starting each Python process with a different value of the CUDA_VISIBLE_DEVICES environment variable, you can pin each process to specific GPUs, making effective use of multi-GPU hardware and avoiding memory conflicts between sessions.


Reposted from: http://stackoverflow.com/questions/34775522/tensorflow-mutiple-sessions-with-mutiple-gpus

TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it. If you want to run different sessions on different GPUs, you should do the following.

  1. Run each session in a different Python process.
  2. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable. For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:

    $ CUDA_VISIBLE_DEVICES=0 python my_script.py  # Uses GPU 0.
    $ CUDA_VISIBLE_DEVICES=1 python my_script.py  # Uses GPU 1.
    $ CUDA_VISIBLE_DEVICES=2,3 python my_script.py  # Uses GPUs 2 and 3.
    

    Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. "/gpu:0", etc.), but they will correspond to the devices that you made visible with CUDA_VISIBLE_DEVICES.
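Instead of typing each command by hand, the two steps above can also be scripted: a launcher process spawns one child per GPU assignment, each with its own CUDA_VISIBLE_DEVICES. A minimal sketch (the child command here just echoes the variable it received; in practice you would substitute your own training script, e.g. my_script.py from the example above):

```python
import os
import subprocess
import sys

# Stand-in child command: prints the CUDA_VISIBLE_DEVICES it was given.
# Replace this with ["python", "my_script.py"] for real training runs.
CMD = [sys.executable, "-c",
       "import os; print(os.environ.get('CUDA_VISIBLE_DEVICES'))"]

# One process per GPU assignment; each child sees only its own device(s).
assignments = ["0", "1", "2,3"]
procs = []
for devices in assignments:
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = devices
    procs.append(subprocess.Popen(CMD, env=env, stdout=subprocess.PIPE))

# Wait for each child and collect the devices it reported.
seen = [p.communicate()[0].decode().strip() for p in procs]
print(seen)
```

Because the environment is copied per child, the assignments do not interfere with each other or with the parent shell.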
