Common PyTorch Problems and Solutions
- RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time
Solution: To reduce memory usage, all intermediate results are freed during the .backward() call once they are no longer needed. If you then call .backward() again, those intermediate results no longer exist, so the backward pass cannot be performed and you get this error. Call .backward(retain_graph=True) to run a backward pass that keeps the intermediate results, so you can call .backward() again. Every call to backward except the last should pass retain_graph=True.
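A minimal sketch of the pattern described above (the tensor and values are illustrative, not from the original report):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x * x  # y = x^2, so dy/dx = 2x

# First backward keeps the graph alive so it can be reused.
y.backward(retain_graph=True)

# Last backward may let the graph be freed; gradients accumulate.
y.backward()

# Gradients accumulate across calls: 2*x + 2*x = 8.0 for x = 2.0
print(x.grad.item())  # -> 8.0
```

Note that gradients accumulate across the two calls; zero them (e.g. x.grad = None) between passes if accumulation is not what you want.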
- RuntimeError: copy_if failed to synchronize: device-side assert triggered
Solution: Run the code on the CPU (without CUDA), and you will see the real error message; device-side asserts on the GPU hide which operation actually failed.
- Problems when installing PyTorch
ModuleNotFoundError: No module named 'past'
Solution: pip install future
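The fix can be applied and verified in two lines, assuming pip targets the same interpreter that raised the error:

```shell
# The 'past' module is shipped by the 'future' compatibility package.
pip install future
python -c "import past; print('past is importable')"
```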