Error:

    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
Cause: across multiple gradient updates, the same tensor was used more than once. This usually happens when the tensor is really a fixed input rather than a parameter being optimized, so it can be detached from the computation graph with .data (or, in current PyTorch, the preferred .detach()). Here, advantages enters the loss in every PPO epoch but must not receive gradient updates, so .data is used to pull it out of the graph; otherwise the second epoch's backward() would try to traverse the graph built in the first epoch, whose saved intermediate values have already been freed.
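A minimal standalone reproduction of the mechanism (the names x and w are illustrative placeholders, not from the original code; x * x is used because MulBackward saves its tensor inputs, which are exactly what gets freed after the first backward):

    import torch

    x = torch.ones(3, requires_grad=True)   # stands in for the value-net side
    advantages = x * x                       # non-leaf tensor, attached to x's graph

    # Broken pattern: advantages is computed once but reused every epoch,
    # so the second backward() walks a graph the first one already freed.
    for epoch in range(2):
        w = torch.ones(3, requires_grad=True)   # fresh per-epoch graph (policy side)
        loss = (w * advantages).sum()
        try:
            loss.backward()
        except RuntimeError as err:
            print(f"epoch {epoch}: {err}")      # fires on the second epoch

    # Fix: detach the constant once, before the loop.
    advantages = advantages.detach()            # modern equivalent of .data
    for epoch in range(2):
        w = torch.ones(3, requires_grad=True)
        loss = (w * advantages).sum()
        loss.backward()                         # succeeds every epoch

Applying the same idea to the original update() method: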
def update(self, graph, rollouts):
    # Advantages are computed once from the rollout buffers and then reused
    # in every PPO epoch, so they must not stay attached to an old graph.
    advantages = rollouts.returns[:-1] - rollouts.value_preds[:-1]
    advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-5)
    advantages = advantages.to(self.device)

    value_loss_epoch = 0
    action_loss_epoch = 0
    dist_entropy_epoch = 0
    rollouts.to(self.device)

    for e in range(self.ppo_epoch):
        self.optimizer.zero_grad()
        torch.autograd.set_detect_anomaly(True)  # debugging aid; remove once the error is resolved
        values, action_log_probs, dist_entropy = self.actor_critic.evaluate_actions(
            graph, rollouts.obs.data, rollouts.actions.data)
        # The stored rollout quantities are constants for this update:
        # .data (or the preferred .detach()) cuts them out of the graph, so
        # each epoch's backward() only traverses the graph built in that epoch.
        ratio = torch.exp(action_log_probs - rollouts.action_log_probs.data)
        surr1 = ratio * advantages.data
        surr2 = torch.clamp(ratio, 1.0 - self.clip_param,
                            1.0 + self.clip_param) * advantages.data
        action_loss = -torch.min(surr1, surr2).mean()
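For completeness: the retain_graph=True escape hatch the error message mentions is only the right tool when you deliberately backpropagate through the same graph more than once, not when a stale constant has leaked into the loss as above. A minimal sketch:

    import torch

    x = torch.ones(3, requires_grad=True)
    y = (x * x).sum()
    y.backward(retain_graph=True)   # keep the graph's saved tensors alive
    y.backward()                    # legal second pass; gradients accumulate
    print(x.grad)                   # tensor([4., 4., 4.]) — 2*x accumulated twice

In the PPO loop above, retain_graph=True would merely hide the bug and keep stale graphs alive across epochs; detaching the constant is the correct fix.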