RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
torch.autograd.backward
torch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None)
Parameters:
- retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
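The error above is raised when backward() is called a second time on a graph whose buffers were freed by the first call. Below is a minimal sketch that reproduces the problem and the retain_graph fix; the tensors x and y are illustrative assumptions, not from the original post:

```python
import torch

# Illustrative tensors (not from the original post)
x = torch.ones(2, 2, requires_grad=True)
y = (x * 2).sum()

y.backward(retain_graph=True)  # keep the graph's buffers for a second pass
y.backward()                   # OK; without retain_graph=True above, this line
                               # raises "Trying to backward through the graph
                               # a second time, but the buffers have already
                               # been freed."
print(x.grad)                  # gradients accumulate: every entry is 4.0
```

Note that the two backward() calls accumulate into x.grad; if you only need a second backward pass occasionally, recomputing the forward pass is usually cheaper than retaining the whole graph.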