Controlling the Inverted Pendulum (CartPole) with PPO
The complete code is at the end of the article.
The PPO Algorithm
PPO is an improved version of TRPO and belongs to the policy-gradient family. Its value estimate is obtained with temporal-difference (TD) learning, so it can be viewed as an Actor-Critic method within the policy-gradient framework.
Off-Policy
PPO and TRPO are treated here as off-policy because they introduce importance sampling: experience collected by another policy is used to update the target policy.
Note, however, that the "other policy" in both algorithms is simply the old policy from before the update, which is why many references still describe them as (nearly) on-policy.
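As a minimal numerical sketch of importance sampling (the probabilities below are made up for a two-action, CartPole-like setting): samples drawn under the old policy are reweighted by the ratio of new to old action probabilities.
import torch

old_probs = torch.tensor([0.5, 0.5])   # pi(a|s, theta_old): the behavior policy that collected the data
new_probs = torch.tensor([0.7, 0.3])   # pi(a|s, theta): the policy currently being optimized

sampled_action = 0                     # action actually taken under the old policy
ratio = new_probs[sampled_action] / old_probs[sampled_action]
print(ratio)  # tensor(1.4000): this sample is up-weighted in the new policy's objective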
PPO-Clip
PPO comes in two variants: PPO-penalty and PPO-clip.
- PPO-penalty uses the method of Lagrange multipliers to move TRPO's KL-divergence constraint directly into the objective, turning the problem into an unconstrained optimization.
- PPO-clip instead restricts the objective directly, so that the new policy cannot move too far from the old one.
Extensive experiments suggest that PPO-clip consistently performs better than PPO-penalty, so PPO-clip is used below to control the inverted pendulum; a toy illustration of the clipping step follows.
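A rough, self-contained illustration of what the clipping does (the numbers are made up, not from the actual training loop): ratios outside [1 - eps, 1 + eps] are cut off, so no single sample can push the new policy too far from the old one.
import torch

eps = 0.2
ratio = torch.tensor([0.5, 0.9, 1.1, 1.8])      # made-up new/old probability ratios
advantage = torch.ones(4)                        # pretend every sample has advantage 1

s1 = ratio * advantage
s2 = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
print(torch.min(s1, s2))  # tensor([0.5000, 0.9000, 1.1000, 1.2000]): the 1.8 ratio is capped at 1 + eps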
Policy Gradient
For policy-gradient algorithms, the policy gradient can be written in the unified form:
\nabla_\theta J(\theta) = E\big[\nabla_\theta \ln \pi(A|S,\theta)\, q_\pi(S,A)\big]
A baseline function b(S) is introduced to reduce the variance of the stochastic approximation of the true gradient.
A common choice is b(S) = v_\pi(S), the state-value function. This is not the variance-optimal baseline but a convenient suboptimal one obtained by dropping the weighting term.
The policy gradient with the baseline becomes:
\nabla_\theta J(\theta) = E\big[\nabla_\theta \ln \pi(A|S,\theta)\,\big(q_\pi(S,A) - v_\pi(S)\big)\big]
We can then define the advantage function A(S,A) = q_\pi(S,A) - v_\pi(S). A minimal sketch of this score-function term is given below.
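A minimal, self-contained sketch of the weighted log-probability gradient above, using a toy two-action policy (all numbers are illustrative):
import torch

logits = torch.tensor([0.2, -0.1], requires_grad=True)   # toy policy parameters for 2 actions
probs = torch.softmax(logits, dim=-1)                     # pi(a|s, theta)

action = 1                                                # sampled action
advantage = 0.8                                           # q(s,a) - v(s), treated as a constant

loss = -torch.log(probs[action]) * advantage              # negate for gradient ascent via a descent optimizer
loss.backward()
print(logits.grad)                                        # stochastic policy-gradient estimate for this sample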
Iterative Approximation of the Advantage Function
From the Bellman equation:
q_\pi(s_t,a_t) - v_\pi(s_t) = E\big[R_{t+1} + \gamma v_\pi(S_{t+1}) - v_\pi(S_t) \,\big|\, S_t = s_t, A_t = a_t\big]
A stochastic-approximation scheme (the Robbins-Monro, RM, algorithm) can be used to estimate this expectation:
q_{t+1}(s_t,a_t) - v_{t+1}(s_t) = \big(q_t(s_t,a_t) - v_t(s_t)\big) - \alpha\Big[\big(q_t(s_t,a_t) - v_t(s_t)\big) - \big(r_{t+1} + \gamma v_t(s_{t+1}) - v_t(s_t)\big)\Big]
Through this incremental iteration, the advantage estimate converges to q_\pi(S,A) - v_\pi(S). A toy numerical version of this update is sketched below.
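A toy, self-contained version of this incremental update on a single state-action pair (the reward and state values are invented and held fixed for clarity; in the real algorithm they change as the Critic learns):
gamma, alpha = 0.9, 0.1
adv = 0.0                                # current estimate of q(s_t, a_t) - v(s_t)
r_next, v_s, v_s_next = 1.0, 5.0, 5.2    # made-up reward and state values

for _ in range(100):
    td_error = r_next + gamma * v_s_next - v_s   # sample of the expectation on the right-hand side
    adv = adv - alpha * (adv - td_error)         # incremental RM-style correction
print(adv)  # approaches r_next + gamma * v_s_next - v_s = 0.68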
Policy Gradient of PPO-Clip
With importance sampling introduced, the policy gradient becomes:
\nabla_\theta J(\theta) = E\Big[\frac{\pi(A|S,\theta)}{\beta(A|S)}\, \nabla_\theta \ln \pi(A|S,\theta)\,\big(q_\pi(S,A) - v_\pi(S)\big)\Big]
where \beta(A|S) is the old policy that generated the samples, which here can be written as \pi(A|S,\theta').
Approximating the true gradient with a stochastic sample gives:
\nabla_\theta J(\theta) \approx \frac{\pi(a_t|s_t,\theta_t)}{\beta(a_t|s_t)}\, \nabla_\theta \ln \pi(a_t|s_t,\theta_t)\,\big(q_\pi(s_t,a_t) - v_\pi(s_t)\big)
However, the advantage, i.e. the action-value and state-value functions, is still unknown, so it must be estimated.
The TD error can be used to estimate the advantage:
q_t(s_t,a_t) - v_t(s_t) \approx r_{t+1} + \gamma v_t(s_{t+1}) - v_t(s_t)
With the TD-error approximation, only one neural network is needed to represent the state-value function, i.e. only a single Critic has to be maintained.
Early in training this approximation is inaccurate, but as the Critic keeps being updated (see the iterative approximation of the advantage above), the estimated advantage converges to the true advantage.
- The Actor's policy gradient in PPO-clip can therefore be written as:
\nabla_\theta J(\theta) \approx \frac{\pi(a_t|s_t,\theta_t)}{\beta(a_t|s_t)}\, \nabla_\theta \ln \pi(a_t|s_t,\theta_t)\,\big(q_t(s_t,a_t) - v_t(s_t)\big)
In practice this gradient is usually simplified (using \nabla_\theta \ln \pi = \nabla_\theta \pi / \pi):
\nabla_\theta J(\theta) \approx \frac{1}{\beta(a_t|s_t)}\, \nabla_\theta \pi(a_t|s_t,\theta_t)\,\big(q_t(s_t,a_t) - v_t(s_t)\big)
PPO-clip adds the clipping step on top of this. Writing the probability ratio as r_t(\theta) = \pi(a_t|s_t,\theta)/\beta(a_t|s_t) and the estimated advantage as A_t = q_t(s_t,a_t) - v_t(s_t), the clipped surrogate objective is
J^{clip}(\theta) = E\big[\min\big(r_t(\theta)\,A_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,A_t\big)\big]
and the Actor is updated with the gradient of this clipped objective.
Note that the policy update aims to maximize the objective J(\theta), i.e. maximize value, so gradient ascent is needed. Since PyTorch optimizers perform gradient descent, the surrogate objective is negated to form the actor loss before backpropagation; a self-contained sketch of this sign handling follows.
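A self-contained sketch of the actor loss and its sign (toy tensors; in the real agent the probabilities come from the actor network):
import torch

eps = 0.2
advantage = torch.tensor([0.5, -0.3, 1.2])                     # toy advantages, constants w.r.t. backprop
old_prob = torch.tensor([0.4, 0.6, 0.5])                       # beta(a_t|s_t), detached old-policy probabilities
new_prob = torch.tensor([0.5, 0.5, 0.7], requires_grad=True)   # pi(a_t|s_t, theta)

ratio = new_prob / old_prob
surrogate = torch.min(ratio * advantage,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * advantage)
actor_loss = -surrogate.mean()   # negative sign turns gradient descent into ascent on J(theta)
actor_loss.backward()
print(new_prob.grad)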
- The Critic's gradient is:
\nabla_w J(w) = \big(v_t(s_t,w_t) - (r_{t+1} + \gamma v_t(s_{t+1},w_t))\big)\, \nabla_w v_t(s_t,w_t)
The Critic uses gradient descent to minimize the error between its value estimate and the TD target, so when backpropagating in PyTorch no sign flip is needed.
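A self-contained sketch of the corresponding critic update (a toy value network and random TD targets, just to show the loss and the absence of a sign flip):
import torch
import torch.nn.functional as F

value_net = torch.nn.Linear(4, 1)                     # toy v(s, w) for a 4-dimensional state
optimizer = torch.optim.AdamW(value_net.parameters())

state = torch.randn(8, 4)                             # a batch of 8 toy states
td_target = torch.randn(8)                            # stand-in for r + gamma * v(s'), treated as constant

value = value_net(state).squeeze(1)                   # v(s, w), shape (8,)
critic_loss = F.mse_loss(value, td_target)            # minimize the TD error directly
optimizer.zero_grad()
critic_loss.backward()
optimizer.step()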
PPO in Code
Actor
The actor is a neural network with two hidden layers:
class Actor(nn.Module):
    """Actor network: maps a state to a probability distribution over actions."""
    def __init__(self, state_dim, hidden_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, action_dim),
            nn.Softmax(dim=-1)  # normalize the outputs into action probabilities
        )

    def forward(self, x):
        return self.net(x)
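A quick sanity check of the actor's output, using the Actor class above and assuming CartPole's 4-dimensional state with 2 actions:
import torch

actor = Actor(state_dim=4, hidden_dim=256, action_dim=2)
states = torch.randn(3, 4)        # a batch of 3 toy states
probs = actor(states)
print(probs.shape)                # torch.Size([3, 2])
print(probs.sum(dim=-1))          # each row sums to 1 thanks to the Softmax layer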
Critic
The critic also uses a neural network with two hidden layers:
class Critic(nn.Module):
    """Critic network: maps a state to a scalar state-value estimate."""
    def __init__(self, state_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(1)
The final squeeze(1) turns the (batch, 1) output into a 1-D tensor of shape (batch,), matching the shapes of the rewards and TD targets.
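The same kind of shape check for the critic, using the Critic class above:
import torch

critic = Critic(state_dim=4, hidden_dim=256)
states = torch.randn(3, 4)
print(critic.net(states).shape)   # torch.Size([3, 1]) before the squeeze
print(critic(states).shape)       # torch.Size([3]) after squeeze(1)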
PPO Agent
Since PPO is treated as off-policy above, an experience replay buffer could in principle be used. It is not used here; instead, after each episode the collected samples are reused for several PPO optimization epochs.
class PPO:
    """PPO-clip agent."""
    def __init__(self, state_dim, action_dim, gamma, epochs, eps, device, hidden_dim=256):
        self.actor = Actor(state_dim, hidden_dim, action_dim).to(device)
        self.critic = Critic(state_dim, hidden_dim).to(device)
        self.optimizer_actor = torch.optim.AdamW(self.actor.parameters())
        self.optimizer_critic = torch.optim.AdamW(self.critic.parameters())
        self.device = device
        self.gamma = gamma      # discount factor
        self.epochs = epochs    # optimization epochs per batch of samples
        self.eps = eps          # allowed ratio range [1 - eps, 1 + eps]
        self.initialize()       # initialize the actor and critic parameters

    def take_action(self, state):
        """Take a state and return the index of the sampled action."""
        state = torch.tensor(state, dtype=torch.float32).to(self.device)
        action_prob = self.actor(state)                              # action probabilities, e.g. [0.4, 0.6]
        categorical = torch.distributions.Categorical(action_prob)   # categorical distribution over actions
        action = categorical.sample()                                # sample an action index
        return action.cpu().item()

    def update(self, state, action, reward, next_state, done):
        """Run one PPO update on a batch of transitions."""
        # Convert inputs to tensors. Backpropagation needs float32 (not float64),
        # gather() needs int64 actions, and done must be float32 for the (1 - done) term.
        state = torch.tensor(state, dtype=torch.float32).to(self.device)
        action = torch.tensor(action, dtype=torch.int64).to(self.device)
        reward = torch.tensor(reward, dtype=torch.float32).to(self.device)
        next_state = torch.tensor(next_state, dtype=torch.float32).to(self.device)
        done = torch.tensor(done, dtype=torch.float32).to(self.device)
        state_value = self.critic(state)
        next_state_value = self.critic(next_state)
        TD_target = reward + self.gamma * next_state_value * (1 - done)  # TD target for the critic loss
        advantage = (TD_target - state_value).detach()  # the advantage is a constant w.r.t. backprop
        # Old-policy probabilities are also constants; squeeze(1) keeps every tensor at shape (batch,)
        old_prob = self.actor(state).gather(1, action.unsqueeze(1)).squeeze(1).detach()
        for _ in range(self.epochs):
            # Run several optimization epochs on the same batch of samples
            new_prob = self.actor(state).gather(1, action.unsqueeze(1)).squeeze(1)
            ratio = new_prob / old_prob
            s1 = ratio * advantage
            s2 = torch.clamp(ratio, 1 - self.eps, 1 + self.eps) * advantage
            actor_loss = -torch.mean(torch.min(s1, s2))
            critic_loss = torch.mean(F.mse_loss(self.critic(state), TD_target.detach()))  # TD target is a constant here
            self.optimizer_actor.zero_grad()
            self.optimizer_critic.zero_grad()
            actor_loss.backward()
            critic_loss.backward()
            self.optimizer_actor.step()
            self.optimizer_critic.step()

    def initialize(self):
        def init_weights(m):
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)
        self.actor.apply(init_weights)
        self.critic.apply(init_weights)
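A minimal usage sketch of the agent on CartPole-shaped dummy data (the dimensions match the environment used in the complete code below; the transitions are random and only exercise the API):
import numpy as np
import torch

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
agent = PPO(state_dim=4, action_dim=2, gamma=0.9, epochs=20, eps=0.2, device=device)

state = np.zeros(4, dtype=np.float32)
print(agent.take_action(state))                          # 0 or 1

# one fake "episode" of 5 transitions, just to exercise update()
states = np.random.randn(5, 4).astype(np.float32)
actions = np.random.randint(0, 2, size=5)
rewards = np.ones(5, dtype=np.float32)
next_states = np.random.randn(5, 4).astype(np.float32)
dones = np.array([0, 0, 0, 0, 1], dtype=np.float32)
agent.update(states, actions, rewards, next_states, dones)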
Complete Code
import torch
import torch.nn.functional as F
import os
import numpy as np
import torch.nn as nn
import gym
import matplotlib.pyplot as plt
class Actor(nn.Module):
    """Actor network: maps a state to action probabilities."""
    def __init__(self, state_dim, hidden_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, action_dim),
            nn.Softmax(dim=-1)
        )

    def forward(self, x):
        return self.net(x)
class Critic(nn.Module):
    """Critic network: maps a state to a scalar state-value estimate."""
    def __init__(self, state_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(1)
class PPO:
    """PPO-clip agent."""
    def __init__(self, state_dim, action_dim, gamma, epochs, eps, device, hidden_dim=256):
        self.actor = Actor(state_dim, hidden_dim, action_dim).to(device)
        self.critic = Critic(state_dim, hidden_dim).to(device)
        self.optimizer_actor = torch.optim.AdamW(self.actor.parameters())
        self.optimizer_critic = torch.optim.AdamW(self.critic.parameters())
        self.device = device
        self.gamma = gamma
        self.epochs = epochs
        self.eps = eps
        self.initialize()

    def take_action(self, state):
        state = torch.tensor(state, dtype=torch.float32).to(self.device)
        action_prob = self.actor(state)
        categorical = torch.distributions.Categorical(action_prob)
        action = categorical.sample()
        return action.cpu().item()

    def update(self, state, action, reward, next_state, done):
        state = torch.tensor(state, dtype=torch.float32).to(self.device)
        action = torch.tensor(action, dtype=torch.int64).to(self.device)
        reward = torch.tensor(reward, dtype=torch.float32).to(self.device)
        next_state = torch.tensor(next_state, dtype=torch.float32).to(self.device)
        done = torch.tensor(done, dtype=torch.float32).to(self.device)
        state_value = self.critic(state)
        next_state_value = self.critic(next_state)
        TD_target = reward + self.gamma * next_state_value * (1 - done)
        advantage = (TD_target - state_value).detach()
        old_prob = self.actor(state).gather(1, action.unsqueeze(1)).squeeze(1).detach()
        for _ in range(self.epochs):
            new_prob = self.actor(state).gather(1, action.unsqueeze(1)).squeeze(1)
            ratio = new_prob / old_prob
            s1 = ratio * advantage
            s2 = torch.clamp(ratio, 1 - self.eps, 1 + self.eps) * advantage
            actor_loss = -torch.mean(torch.min(s1, s2))
            critic_loss = torch.mean(F.mse_loss(self.critic(state), TD_target.detach()))
            self.optimizer_actor.zero_grad()
            self.optimizer_critic.zero_grad()
            actor_loss.backward()
            critic_loss.backward()
            self.optimizer_actor.step()
            self.optimizer_critic.step()

    def initialize(self):
        # Xavier initialization of the linear layers (same as the sectioned code above)
        def init_weights(m):
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)
        self.actor.apply(init_weights)
        self.critic.apply(init_weights)
if __name__ == '__main__':
    os.system('cls')  # clear the console (Windows)
    # env = gym.envs.registry
    gamma = 0.9
    epochs = 20  # optimization epochs per batch, typically 10~30
    eps = 0.2
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    episode = 500
    # env = gym.make('CartPole-v1', render_mode='human')
    env = gym.make('CartPole-v1')
    state_dim = env.observation_space.shape[0]
    action_dim = env.action_space.n
    agent = PPO(state_dim, action_dim, gamma, epochs, eps, device)
    episode_return_list = []
    for _ in range(episode):
        trajectory = []
        episode_return = []
        state, info = env.reset()
        done = False
        while not done:
            action = agent.take_action(state)
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated  # stop on termination or the 500-step time limit
            trajectory.append((state, action, reward, next_state, done))
            episode_return.append(reward)
            state = next_state
        state, action, reward, next_state, done = zip(*trajectory)
        agent.update(np.array(state), np.array(action), np.array(reward), np.array(next_state), np.array(done))
        res = 0
        for i in range(len(episode_return) - 1, -1, -1):
            res = gamma * res + episode_return[i]
        print(f'Return: {res}')
        episode_return_list.append(res)
    epi = [x for x in range(episode)]
    plt.plot(epi, episode_return_list)
    plt.show()
    env.close()
Return convergence curve: [figure: discounted episode return versus training episode]