The program ran fine on a single GPU, but as soon as I ran the code on two GPUs it threw this error. The top search result on CSDN claimed it was "retribution for hacking up the code", which did make me laugh. I clicked through hoping for some insight, only to find it was a paywalled article? No way, I have to figure this out myself!
The training code raised the following error:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 0: 700 701
In addition, you can set the environment variable TORCH_DIS
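The error message itself names the two usual ways out: either make sure every output of `forward` actually participates in the loss, or pass `find_unused_parameters=True` to `DistributedDataParallel`. Below is a minimal sketch of the second option, assuming a standard torchrun-style launch; `MyModel` and the process-group details here are placeholders, not the original training code.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes a torchrun-style launch (`torchrun --nproc_per_node=2 train.py`),
# which sets LOCAL_RANK for each worker process.
local_rank = int(os.environ["LOCAL_RANK"])
dist.init_process_group(backend="nccl")
torch.cuda.set_device(local_rank)

model = MyModel().cuda(local_rank)  # MyModel stands in for the actual network

# find_unused_parameters=True tells DDP to tolerate parameters (such as the
# reported indices 700 and 701) whose outputs never reach the loss, at the
# cost of an extra traversal of the autograd graph every iteration.
model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```

If you would rather fix the root cause, one common diagnostic is to run a single-GPU iteration and list the parameters whose `.grad` is still `None` after `loss.backward()`, then either remove those layers or route their outputs into the loss. `find_unused_parameters=True` is a workaround that adds per-iteration overhead, not a free setting.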