A quick note:
If you want to use all the available GPUs:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CreateModel()
model = nn.DataParallel(model)  # replicate the model across all visible GPUs
model.to(device)
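For a concrete picture, here is a minimal runnable sketch of the all-GPU case. ToyModel, its layer sizes, and the batch shape are hypothetical stand-ins for CreateModel() and your real data; the device_count check simply skips the wrapper on single-GPU or CPU-only machines.

import torch
import torch.nn as nn

# hypothetical placeholder model, only for illustration
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ToyModel()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits each batch across all visible GPUs
model.to(device)

x = torch.randn(32, 10).to(device)  # inputs only need to be on the primary device
out = model(x)                      # DataParallel scatters the batch and gathers the outputs
print(out.shape)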
If you want to use specific GPUs (for example, 2 out of 4 GPUs):
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")  # primary device; must match device_ids[0], GPU ids start from 0
model = CreateModel()
model = nn.DataParallel(model, device_ids=[1, 3])  # replicate onto GPUs 1 and 3 only
model.to(device)
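A sketch of the specific-GPU case with a forward pass, assuming the machine actually has at least 4 GPUs; the placeholder nn.Linear model and batch shape are illustrative only. The module and its inputs only need to sit on the primary device cuda:1 (which must equal device_ids[0]); nn.DataParallel scatters the batch to GPU 3 and gathers the outputs back onto GPU 1.

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                   # placeholder model for illustration
device = torch.device("cuda:1")            # primary device, must be device_ids[0]
model = nn.DataParallel(model, device_ids=[1, 3])
model.to(device)

x = torch.randn(8, 10).to(device)          # inputs go to the primary device only
out = model(x)                             # replicas on cuda:1 and cuda:3 each process part of the batch
print(out.device)                          # outputs are gathered back onto cuda:1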
This post walks through how to use all available GPUs in PyTorch for parallel training of a deep learning model, as well as how to restrict training to specific GPUs, which is useful when multiple GPUs are available on one machine.