PTAN is a high-level reinforcement-learning library; its whole point is to cut down the total amount of code we have to write. The hand-rolled deep-learning version from earlier in the book took roughly 240 lines, and with ptan the same logic drops to about 60 lines.
Pitfalls
Installing the library is a bit of a pain, though. The default setup requires torch 2.5.1, but I couldn't find a CUDA build of 2.5 on the domestic mirrors, so I installed 2.3.1 instead. During installation the library kept uninstalling my CUDA build and replacing it with a CPU-only one, so I went back and forth several times; downloading the CUDA wheels from abroad is painfully slow and wasted a lot of my time. In the end I only got it installed by using --no-deps.
The second amusing thing: I assumed this was a general-purpose library (and of course it is), but when I checked the author field I noticed the library's author is very likely the author of this book, which I found a little funny.
Rolling up my sleeves
Before diving into the library, let's look at what it contains, so it's clear why we need it:
- Agent: a class that turns observations (obs) into actions, optionally holding some internal state
- ActionSelector: action-selection classes that work together with an Agent; currently we have epsilon-greedy (exploration), argmax (take the index of the largest value, e.g. of Q), and softmax (sampling)
- ExperienceSource and its variants: collect the agent's interaction data with the environment (i.e. experience fragments)
- ExperienceReplayBuffer: the experience replay buffer (stores all the collected experience)
- utility classes (TargetNet)
- Ignite helpers for integrating PTAN with the PyTorch Ignite framework
- Gym environment wrappers, which need no introduction
Action selectors
As just mentioned: ArgmaxActionSelector picks the index of the largest Q-value directly, ProbabilityActionSelector samples from a discrete probability distribution over actions, and EpsilonGreedyActionSelector is there for exploration, so you know the drill.
import ptan
import numpy as np

if __name__ == "__main__":
    q_vals = np.array([[1, 2, 3], [1, -1, 0]])
    print("q_vals")
    print(q_vals)

    selector = ptan.actions.ArgmaxActionSelector()
    print("argmax:", selector(q_vals))

    selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=0.0)
    print("epsilon=0.0:", selector(q_vals))
    selector.epsilon = 1.0
    print("epsilon=1.0:", selector(q_vals))
    selector.epsilon = 0.5
    print("epsilon=0.5:", selector(q_vals))
    selector.epsilon = 0.1
    print("epsilon=0.1:", selector(q_vals))

    selector = ptan.actions.ProbabilityActionSelector()
    print("Actions sampled from three prob distributions:")
    for _ in range(10):
        acts = selector(np.array([
            [0.1, 0.8, 0.1],
            [0.0, 0.0, 1.0],
            [0.5, 0.5, 0.0]
        ]))
        print(acts)
In the output, indices 2 and 0 are the actions with the largest Q-value in the first and second row respectively; with different epsilon values the selector replaces them with random actions at the corresponding probability. Keep in mind that ProbabilityActionSelector expects properly normalized distributions (each row has to sum to 1); a small sketch of doing that with a softmax follows the output below.
argmax: [2 0]
epsilon=0.0: [2 0]
epsilon=1.0: [2 1]
epsilon=0.5: [2 1]
epsilon=0.1: [2 0]
Actions sampled from three prob distributions:
[1 2 0]
[0 2 0]
[1 2 0]
[1 2 1]
[1 2 0]
[1 2 1]
[0 2 1]
[1 2 1]
[1 2 0]
[0 2 0]
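If what comes out of the network are raw scores rather than probabilities, they have to be normalized before sampling. A minimal sketch of doing that with a softmax (the logits below are made-up values, not from the book):

import numpy as np
import ptan

logits = np.array([[1.0, 2.0, 0.5],
                   [0.1, 0.1, 0.1]])
# softmax per row, so every row sums to 1 and forms a valid distribution
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
selector = ptan.actions.ProbabilityActionSelector()
print(selector(probs))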
Agents
Besides predicting Q-values for actions, an agent can also predict a probability distribution over actions, which makes it a policy agent. An agent receives a batch of numpy arrays (a batch of states) and returns a batch of actions to execute. Two implementations come built in: DQNAgent and PolicyAgent.
In practice you will often need a custom agent (a minimal sketch follows the list below), for example when you have:
- a custom NN architecture
- a non-standard exploration strategy (e.g. the Ornstein-Uhlenbeck process)
- a POMDP environment, where the action is based not only on the observation but also on some internal state
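As a concrete illustration, here is a minimal sketch of a hand-written agent, subclassing ptan.agent.BaseAgent just like the DullAgent used later in this post. The network call and the epsilon handling are purely illustrative assumptions, not library code:

import numpy as np
import torch
import ptan

class MyAgent(ptan.agent.BaseAgent):
    def __init__(self, net, epsilon=0.1):
        self.net = net          # any torch model mapping a batch of states to Q-values
        self.epsilon = epsilon  # hand-rolled exploration, just for illustration

    def __call__(self, states, agent_states=None):
        with torch.no_grad():
            q_v = self.net(torch.as_tensor(np.asarray(states), dtype=torch.float32))
        actions = q_v.argmax(dim=1).numpy()
        # replace a random fraction of the greedy actions with random ones
        mask = np.random.random(len(actions)) < self.epsilon
        actions[mask] = np.random.randint(0, q_v.shape[1], size=mask.sum())
        return actions, agent_states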
The test code for the built-in agents is as follows.
DQNAgent suits cases with a small discrete action space: configure the number of actions, pick a suitable action selector, and you can start acting.
PolicyAgent has to produce a distribution over the discrete action set; we can convert the network output into normalized probabilities and then use our sampler.
import ptan
import torch
import torch.nn as nn


class DQNNet(nn.Module):
    def __init__(self, actions: int):
        super(DQNNet, self).__init__()
        self.actions = actions

    def forward(self, x):
        # we always produce diagonal tensor of shape (batch_size, actions)
        return torch.eye(x.size()[0], self.actions)


class PolicyNet(nn.Module):
    def __init__(self, actions: int):
        super(PolicyNet, self).__init__()
        self.actions = actions

    def forward(self, x):
        # Now we produce the tensor with first two actions
        # having the same logit scores
        shape = (x.size()[0], self.actions)
        res = torch.zeros(shape, dtype=torch.float32)
        res[:, 0] = 1
        res[:, 1] = 1
        return res


if __name__ == "__main__":
    net = DQNNet(actions=3)
    net_out = net(torch.zeros(2, 10))
    print("dqn_net:")
    print(net_out)

    selector = ptan.actions.ArgmaxActionSelector()
    agent = ptan.agent.DQNAgent(model=net, action_selector=selector)
    ag_out = agent(torch.zeros(2, 5))
    print("Argmax:", ag_out)

    selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)
    agent = ptan.agent.DQNAgent(model=net, action_selector=selector)
    ag_out = agent(torch.zeros(10, 5))[0]
    print("eps=1.0:", ag_out)
    selector.epsilon = 0.5
    ag_out = agent(torch.zeros(10, 5))[0]
    print("eps=0.5:", ag_out)
    selector.epsilon = 0.1
    ag_out = agent(torch.zeros(10, 5))[0]
    print("eps=0.1:", ag_out)

    net = PolicyNet(actions=5)
    net_out = net(torch.zeros(6, 10))
    print("policy_net:")
    print(net_out)

    selector = ptan.actions.ProbabilityActionSelector()
    agent = ptan.agent.PolicyAgent(model=net, action_selector=selector, apply_softmax=True)
    ag_out = agent(torch.zeros(6, 5))[0]
    print(ag_out)
The first part of the demo is the familiar epsilon-greedy story; the PolicyAgent part passes apply_softmax=True so that the raw outputs are normalized into probabilities, after which the ProbabilityActionSelector interface can be used directly.
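Roughly speaking, apply_softmax=True amounts to doing the softmax yourself before sampling. This is a sketch of the idea rather than the library's actual implementation, and it reuses net (the PolicyNet) and selector from the example above:

import torch.nn.functional as F

logits = net(torch.zeros(6, 10))     # raw PolicyNet output
probs = F.softmax(logits, dim=1)     # normalize each row into a distribution
print(selector(probs.data.numpy()))  # same ProbabilityActionSelector as before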
The Ornstein-Uhlenbeck process (mentioned several times in the book, so here is a quick look at it)
In reinforcement learning the agent has to explore to discover better policies. For continuous action spaces, exploration means generating continuous action noise, and unlike the discrete case it calls for a method that produces smooth and temporally correlated noise; the OU process is exactly such a tool.
- Correlation: the noise generated by the OU process is correlated in time, so the action changes are smooth rather than completely random jumps.
- Mean reversion: the process tends to revert to a mean value, which helps the agent balance exploration and exploitation.
- Controllability: by tuning the OU parameters (such as the mean-reversion speed θ and the volatility σ) you can flexibly control the strength and range of the exploration; a small sketch of the process follows this list.
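A minimal sketch of OU noise for continuous-action exploration; the parameter values are illustrative assumptions, not taken from the book:

import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck noise: temporally correlated and mean-reverting."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu = mu * np.ones(size)
        self.theta = theta  # mean-reversion speed
        self.sigma = sigma  # volatility of the random kicks
        self.dt = dt
        self.x = self.mu.copy()

    def sample(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        dx = self.theta * (self.mu - self.x) * self.dt \
             + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape)
        self.x += dx
        return self.x

noise = OUNoise(size=2)
for _ in range(3):
    print(noise.sample())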
Experience sources
- we need experience sources that support an agent interacting with multiple environments at the same time
- trajectory preprocessing, for example a rollout method that returns sub-trajectories with accumulated reward (what is a rollout? a rollout means running the current policy in the environment to generate a complete, or partial, interaction trajectory)
- environments from OpenAI Universe
There are three classes. The first is ExperienceSource, which yields n-step sub-trajectories; the second is ExperienceSourceFirstLast, which keeps only the first and the last step of each sub-trajectory; the third, ExperienceSourceRollouts, isn't covered much here, but from what I could find it is an experience source for complete, Monte-Carlo-style episodes.
The latest Gym API requires step() to return 5 values and reset() to return 2, so the code from the book has to be adjusted; my modified version is below.
One thing to note in the code: the FirstLast source yields ExperienceFirstLast entries carrying four fields:
ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)
If several envs are passed in, you get experience from all of them interleaved, again as tuples whose elements carry four fields:
(Experience(state=2, action=1, reward=1.0, done_trunc=False), Experience(state=3, action=1, reward=1.0, done_trunc=False))
steps_count sets the length of the produced sub-trajectories. Note that if vectorized=True is passed, the source expects OpenAI Universe-style vectorized environments. All we have to hand over is a plain env and an agent and we get the whole experience source, which looks extremely simple and convenient.
Under the hood, though, quite a lot is going on:
- reset() is called automatically
- the agent picks an action from the observation returned by reset()
- step() is executed automatically, giving us r and s'
- the agent produces the next action
- the collected transition is yielded back to us
- repeat from step 3
import gym
import ptan
from typing import List, Optional, Tuple, Any


class ToyEnv(gym.Env):
    """
    Environment with observation 0..4 and actions 0..2
    Observations are rotated sequentially mod 5, reward is equal to given action.
    Episodes are having fixed length of 10
    """
    def __init__(self):
        super(ToyEnv, self).__init__()
        self.observation_space = gym.spaces.Discrete(n=5)
        self.action_space = gym.spaces.Discrete(n=3)
        self.step_index = 0

    def reset(self):
        self.step_index = 0
        return self.step_index, {}

    def step(self, action):
        is_done = self.step_index == 10
        if is_done:
            return self.step_index % self.observation_space.n, \
                   0.0, is_done, False, {}
        self.step_index += 1
        return self.step_index % self.observation_space.n, \
               float(action), self.step_index == 10, False, {}


class DullAgent(ptan.agent.BaseAgent):
    """
    Agent always returns the fixed action
    """
    def __init__(self, action: int):
        self.action = action

    def __call__(self, observations: List[Any],
                 state: Optional[List] = None) \
            -> Tuple[List[int], Optional[List]]:
        return [self.action for _ in observations], state


if __name__ == "__main__":
    env = ToyEnv()
    s, _ = env.reset()
    print("env.reset() -> %s" % s)
    s, _, _, _, _ = env.step(1)
    print("env.step(1) -> %s" % str(s))
    s, _, _, _, _ = env.step(2)
    print("env.step(2) -> %s" % str(s))
    for _ in range(10):
        s, r, _, _, _ = env.step(0)
        print(r)

    agent = DullAgent(action=1)
    print("agent:", agent([1, 2])[0])

    env = ToyEnv()
    agent = DullAgent(action=1)
    exp_source = ptan.experience.ExperienceSource(env=env, agent=agent, steps_count=2)
    for idx, exp in enumerate(exp_source):
        if idx > 15:
            break
        print(exp)

    exp_source = ptan.experience.ExperienceSource(env=env, agent=agent, steps_count=4)
    print(next(iter(exp_source)))

    exp_source = ptan.experience.ExperienceSource(env=[ToyEnv(), ToyEnv()], agent=agent, steps_count=2)
    for idx, exp in enumerate(exp_source):
        if idx > 4:
            break
        print(exp)

    print("ExperienceSourceFirstLast")
    exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=1)
    for idx, exp in enumerate(exp_source):
        print(exp)
        if idx > 10:
            break
The complete output is below; if any part of the code is unclear, compare it directly against this output.
env.reset() -> 0
env.step(1) -> 1
env.step(2) -> 2
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
agent: [1, 1]
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False))
(Experience(state=1, action=1, reward=1.0, done_trunc=False), Experience(state=2, action=1, reward=1.0, done_trunc=False))
(Experience(state=2, action=1, reward=1.0, done_trunc=False), Experience(state=3, action=1, reward=1.0, done_trunc=False))
(Experience(state=3, action=1, reward=1.0, done_trunc=False), Experience(state=4, action=1, reward=1.0, done_trunc=False))
(Experience(state=4, action=1, reward=1.0, done_trunc=False), Experience(state=0, action=1, reward=1.0, done_trunc=False))
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False))
(Experience(state=1, action=1, reward=1.0, done_trunc=False), Experience(state=2, action=1, reward=1.0, done_trunc=False))
(Experience(state=2, action=1, reward=1.0, done_trunc=False), Experience(state=3, action=1, reward=1.0, done_trunc=False))
(Experience(state=3, action=1, reward=1.0, done_trunc=False), Experience(state=4, action=1, reward=1.0, done_trunc=True))
(Experience(state=4, action=1, reward=1.0, done_trunc=True),)
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False))
(Experience(state=1, action=1, reward=1.0, done_trunc=False), Experience(state=2, action=1, reward=1.0, done_trunc=False))
(Experience(state=2, action=1, reward=1.0, done_trunc=False), Experience(state=3, action=1, reward=1.0, done_trunc=False))
(Experience(state=3, action=1, reward=1.0, done_trunc=False), Experience(state=4, action=1, reward=1.0, done_trunc=False))
(Experience(state=4, action=1, reward=1.0, done_trunc=False), Experience(state=0, action=1, reward=1.0, done_trunc=False))
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False))
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False), Experience(state=2, action=1, reward=1.0, done_trunc=False), Experience(state=3, action=1, reward=1.0, done_trunc=False))
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False))
(Experience(state=0, action=1, reward=1.0, done_trunc=False), Experience(state=1, action=1, reward=1.0, done_trunc=False))
(Experience(state=1, action=1, reward=1.0, done_trunc=False), Experience(state=2, action=1, reward=1.0, done_trunc=False))
(Experience(state=1, action=1, reward=1.0, done_trunc=False), Experience(state=2, action=1, reward=1.0, done_trunc=False))
(Experience(state=2, action=1, reward=1.0, done_trunc=False), Experience(state=3, action=1, reward=1.0, done_trunc=False))
ExperienceSourceFirstLast
ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)
ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)
ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)
ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)
ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)
ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)
ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)
ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)
ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)
ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)
ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)
ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)
Note the gamma argument at the end: if you are familiar with the earlier RL material, this is simply our discount factor.
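To see gamma actually doing something, a small sketch (reusing ToyEnv and DullAgent from above) with a two-step window and gamma=0.9: the reward field then holds the discounted sum r_t + 0.9 * r_{t+1}, which for this toy env comes out as 1.9 on the in-episode transitions:

exp_source = ptan.experience.ExperienceSourceFirstLast(
    ToyEnv(), DullAgent(action=1), gamma=0.9, steps_count=2)
for idx, exp in enumerate(exp_source):
    print(exp)
    if idx > 5:
        break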
Experience replay buffer
Once we have experience, an experience replay buffer follows naturally.
Incidentally, this may explain an earlier puzzle: why the Pong agent sometimes seemed to get worse the longer it trained. One possible reason is that too large a batch meant sampling too many correlated transitions, which distorted the updates.
Once the buffer holds on the order of 100 thousand to 10 million transitions, three problems appear:
- how to sample efficiently from the buffer
- how to evict old samples efficiently
- how to handle and maintain priorities
PTAN works with the experience classes above and automatically stores and samples experience in the replay buffer. Several implementations are provided: ExperienceReplayBuffer (uniform sampling), PrioReplayBufferNaive (a simple prioritized version), and PrioritizedReplayBuffer (segment-tree sampling, which brings the cost down to O(log N)).
Note that this code uses the replay buffer instead of enumerating the experience source as the earlier code did, and calls populate() to pull transitions into it. During steps 0-3 nothing is trained (the buffer is still too small); at steps 4 and 5 a batch of 4 samples is drawn each time.
if __name__ == "__main__":
    env = ToyEnv()
    agent = DullAgent(action=1)
    exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=1)
    buffer = ptan.experience.ExperienceReplayBuffer(exp_source, buffer_size=100)
    for step in range(6):
        buffer.populate(1)
        # if buffer is small enough, do nothing
        if len(buffer) < 5:
            continue
        batch = buffer.sample(4)
        print("Train time, %d batch samples:" % len(batch))
        for s in batch:
            print(s)

# I pulled up the source of populate(); it does roughly the following:
def populate(self, samples: int):
    """
    Populates samples into the buffer
    :param samples: how many samples to populate
    """
    for _ in range(samples):
        entry = next(self.experience_source_iter)
        self._add(entry)
The TargetNet class
We have already used this earlier: keeping two copies of the network is what makes training stable, so I won't repeat that. It supports two ways of synchronizing: sync() copies the weights into the target network, while alpha_sync() blends them and is meant for the continuous-control material; have a look at the code yourselves. Note that after copying you get a separate network, and its parameters only change when you synchronize again.
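A minimal usage sketch, reusing the DQNNet from the agent example; the alpha value is just an illustrative choice:

net = DQNNet(actions=3)
tgt_net = ptan.agent.TargetNet(net)
# the copy lives in tgt_net.target_model and does not follow net automatically
tgt_net.sync()                  # hard sync: target <- net
tgt_net.alpha_sync(alpha=0.99)  # soft blend: target <- target*alpha + net*(1-alpha)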
The Ignite helpers
These are mainly used to hook into the Ignite event handling we used earlier; a rough sketch follows.
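As a rough sketch of how that integration looks in the book's later examples (the class and event names are from ptan.ignite as I remember them; treat the details as assumptions and check the package):

from ignite.engine import Engine
from ptan.ignite import EndOfEpisodeHandler, EpisodeEvents

def process_batch(engine, batch):
    # the real training step would unpack the batch and run an optimizer step here
    return {"loss": 0.0}

engine = Engine(process_batch)
EndOfEpisodeHandler(exp_source).attach(engine)  # exp_source from the examples above

@engine.on(EpisodeEvents.EPISODE_COMPLETED)
def episode_completed(trainer):
    print("episode %d done, reward=%.2f" % (
        trainer.state.episode, trainer.state.episode_reward))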
Finally, the CartPole solution
If you still remember the earlier code for this, back in the cross-entropy chapter, it was quite cumbersome. Now the solution only needs a few pieces:
create the Net network; create an action selector that picks the actions; build an agent the DQN way so actions are chosen from states; create the experience source and the replay buffer; add an optimizer; and then training can start.
Do pay attention to the unpack_batch function below, though: it is where the sampled transitions are turned into tensors and the Bellman target is computed.
if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    obs_size = env.observation_space.shape[0]
    n_actions = env.action_space.n

    net = Net(obs_size, HIDDEN_SIZE, n_actions)
    tgt_net = ptan.agent.TargetNet(net)
    selector = ptan.actions.ArgmaxActionSelector()
    selector = ptan.actions.EpsilonGreedyActionSelector(
        epsilon=1, selector=selector)
    agent = ptan.agent.DQNAgent(net, selector)
    exp_source = ptan.experience.ExperienceSourceFirstLast(
        env, agent, gamma=GAMMA)
    buffer = ptan.experience.ExperienceReplayBuffer(
        exp_source, buffer_size=REPLAY_SIZE)
    optimizer = optim.Adam(net.parameters(), LR)

    step = 0
    episode = 0
    solved = False

    while True:
        step += 1
        buffer.populate(1)

        for reward, steps in exp_source.pop_rewards_steps():
            episode += 1
            print("%d: episode %d done, reward=%.3f, epsilon=%.2f" % (
                step, episode, reward, selector.epsilon))
            solved = reward > 180
        if solved:
            print("Congrats!")
            break

        if len(buffer) < 2*BATCH_SIZE:
            continue

        batch = buffer.sample(BATCH_SIZE)
        states_v, actions_v, tgt_q_v = unpack_batch(
            batch, tgt_net.target_model, GAMMA)
        optimizer.zero_grad()
        q_v = net(states_v)
        q_v = q_v.gather(1, actions_v.unsqueeze(-1)).squeeze(-1)
        loss_v = F.mse_loss(q_v, tgt_q_v)
        loss_v.backward()
        optimizer.step()
        selector.epsilon *= EPS_DECAY
        if step % TGT_NET_SYNC == 0:
            tgt_net.sync()
@torch.no_grad()
def unpack_batch(batch, net, gamma):
    states = []
    actions = []
    rewards = []
    done_masks = []
    last_states = []
    for exp in batch:
        states.append(exp.state)
        actions.append(exp.action)
        rewards.append(exp.reward)
        done_masks.append(exp.last_state is None)
        if exp.last_state is None:
            last_states.append(exp.state)
        else:
            last_states.append(exp.last_state)

    states_v = torch.tensor(states)
    actions_v = torch.tensor(actions)
    rewards_v = torch.tensor(rewards)
    last_states_v = torch.tensor(last_states)
    last_state_q_v = net(last_states_v)
    best_last_q_v = torch.max(last_state_q_v, dim=1)[0]
    best_last_q_v[done_masks] = 0.0
    return states_v, actions_v, best_last_q_v * gamma + rewards_v
The line that is hardest to understand is this one:
q_v = q_v.gather(1, actions_v.unsqueeze(-1)).squeeze(-1). Here q_v holds, for each state in the batch, the Q-values of all actions; unsqueeze(-1) turns the action vector into a (batch, 1) tensor, gather reads out the Q-value of the chosen action in each row, and squeeze(-1) flattens the result back into a batch of Q-values, which is then regressed against the target produced by unpack_batch (from the code: best_last_q_v * gamma + rewards_v). A tiny gather sketch follows this paragraph. Training is very fast, a few seconds and the environment is solved. Yep, a high-level library really does make life easier, as the training log at the end shows.
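Just to make gather concrete, a tiny sketch with made-up numbers:

import torch

q_v = torch.tensor([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])  # Q-values: 2 states x 3 actions
actions_v = torch.tensor([2, 0])       # chosen action for each state
picked = q_v.gather(1, actions_v.unsqueeze(-1)).squeeze(-1)
print(picked)                          # tensor([3., 4.])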
1932: episode 85 done, reward=59.000, epsilon=0.00
2052: episode 86 done, reward=120.000, epsilon=0.00
2154: episode 87 done, reward=102.000, epsilon=0.00
2218: episode 88 done, reward=64.000, epsilon=0.00
2371: episode 89 done, reward=153.000, epsilon=0.00
2441: episode 90 done, reward=70.000, epsilon=0.00
2550: episode 91 done, reward=109.000, epsilon=0.00
2669: episode 92 done, reward=119.000, epsilon=0.00
2752: episode 93 done, reward=83.000, epsilon=0.00
2847: episode 94 done, reward=95.000, epsilon=0.00
2990: episode 95 done, reward=143.000, epsilon=0.00
3190: episode 96 done, reward=200.000, epsilon=0.00