Problem
Traceback (most recent call last):
  File "test.py", line 22, in <module>
    model = loadmodel()
  File "/home/joshuayun/Desktop/IBD/loader/model_loader.py", line 48, in loadmodel
    checkpoint = torch.load(settings.MODEL_FILE)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 574, in _load
    result = unpickler.load()
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 537, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 119, in default_restore_location
    result = fn(storage, location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 95, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
Cause
When PyTorch loads a pretrained model, the saved checkpoint is deserialized onto a device that does not match the hardware actually present on the current machine.
This post covers the case where the machine is CPU-only while the checkpoint was saved from CUDA tensors, so it cannot be loaded directly. Other device mismatches can occur as well (see the device-selection sketch after this list):
- Checkpoint saved on CPU, loaded onto a GPU
- Checkpoint saved on a GPU, loaded onto the CPU
- Checkpoint saved on GPU 1, loaded onto GPU 2
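A minimal sketch of a device-agnostic load: pick map_location from the hardware that is actually available, so the same code runs on both CPU-only and GPU machines. The checkpoint path 'model/pytorch_resnet50.pth' is just the example file used later in this post.

import torch

# Map the checkpoint onto GPU 0 if CUDA is available, otherwise onto the CPU.
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
checkpoint = torch.load('model/pytorch_resnet50.pth', map_location=device)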
Solution
Specify the correct target device via map_location when loading the model:
model = torch.load('model/pytorch_resnet50.pth', map_location='cpu')      # load onto the CPU
model = torch.load('model/pytorch_resnet50.pth', map_location='cuda:0')   # load onto GPU 0
model = torch.load('model/pytorch_resnet50.pth', map_location={'cuda:0': 'cuda:1'})  # remap tensors saved on cuda:0 to cuda:1
Note that the device mapping is set with the map_location argument of torch.load, which performs the deserialization, not with model.load_state_dict.
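To make that distinction concrete, here is a minimal sketch assuming the checkpoint stores a state_dict for a torchvision ResNet-50 (the architecture is only an assumption for illustration): torch.load deserializes the stored tensors onto the chosen device, and model.load_state_dict merely copies those already-loaded tensors into the model.

import torch
import torchvision

model = torchvision.models.resnet50()
# map_location belongs to torch.load: it decides where the stored tensors are placed.
state_dict = torch.load('model/pytorch_resnet50.pth', map_location='cpu')
# load_state_dict only copies the loaded tensors into the model's parameters.
model.load_state_dict(state_dict)
model.eval()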