I have recently been learning to use ray.tune for hyperparameter search, training the network on a GPU. When running the search with ray.tune, however, I hit the error "RuntimeError: No CUDA GPUs are available", so I wrote a small test script to reproduce the problem:
import torch
from ray import tune
print(torch.cuda.is_available())
def objective(config):
    # Check GPU visibility inside the trial process
    print(torch.cuda.is_available())
    device = 'cuda'
    x = torch.Tensor([1, 2])
    y = torch.Tensor([2, 3])
    # .to() returns a new tensor; reassign so the data is actually moved
    x = x.to(device)
    y = y.to(device)
    score = config["a"] * x.sum() + config["b"] * y.sum()
    return {"score": float(score)}
search_space = {
"a": tune.grid_search([0.1, 1.0]),
"b": tune.choice([1]),
}
tuner = tune.Tuner(objective, param_space=search_space)
res = tuner.fit()
print(res.get_best_result(metric="score", mode="min"))
In the main process, torch.cuda.is_available() prints True, so the machine does have a usable GPU. Inside the trainable function, however, torch.cuda.is_available() prints False, the GPU cannot be used, and the trial fails with "RuntimeError: No CUDA GPUs are available".
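A likely cause (my assumption, based on how Ray Tune schedules resources): by default each trial is allocated CPUs only, and Ray sets CUDA_VISIBLE_DEVICES for the trial process according to the GPUs it reserved, so CUDA is hidden inside objective even though the driver process can see it. Below is a minimal sketch of requesting one GPU per trial with tune.with_resources, assuming Ray 2.x; it is not a verified fix for this exact setup.

import torch
from ray import tune

def objective(config):
    # With 1 GPU reserved for the trial, this should now print True
    print(torch.cuda.is_available())
    device = 'cuda'
    x = torch.Tensor([1, 2]).to(device)
    y = torch.Tensor([2, 3]).to(device)
    score = config["a"] * x.sum() + config["b"] * y.sum()
    return {"score": float(score)}

search_space = {
    "a": tune.grid_search([0.1, 1.0]),
    "b": tune.choice([1]),
}

# Reserve 1 GPU per trial so Ray exposes it via CUDA_VISIBLE_DEVICES
trainable = tune.with_resources(objective, {"cpu": 1, "gpu": 1})
tuner = tune.Tuner(trainable, param_space=search_space)
res = tuner.fit()
print(res.get_best_result(metric="score", mode="min"))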