Reinforcement Learning Knowledge Points Based on the "Mushroom Book" (17): Chapter 7 code — DuelingDQN.ipynb and the other code it involves, updated and annotated (gym version >= 0.26) (Part 2)


Abstract

This series explains knowledge points from the Mushroom Book (EasyRL) and analyses the tricky parts in detail. For the full content, please read the Mushroom Book (EasyRL).


Companion code from the Mushroom Book — DuelingDQN.ipynb


# 1. Define the algorithm
# 1.1 Define the model
import torch.nn as nn
import torch.nn.functional as F
class DuelingNet(nn.Module):
    def __init__(self, n_states, n_actions, hidden_dim=128):
        super(DuelingNet, self).__init__()
        
        # hidden layer
        self.hidden_layer = nn.Sequential(
            nn.Linear(n_states, hidden_dim),
            nn.ReLU()
        )
        
        #  advantage
        self.advantage_layer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_actions)
        )
        
        # value
        self.value_layer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1)
        )
        
    def forward(self, state):
        x = self.hidden_layer(state)
        advantage = self.advantage_layer(x)
        value     = self.value_layer(x)
        '''Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a)): subtracting the mean advantage makes V and A identifiable
        (a small shape check follows right after this class)'''
        # take the mean over the action dimension (dim=1) so that each sample in the batch
        # is centred by its own advantage mean
        return value + advantage - advantage.mean(dim=1, keepdim=True)
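
# To make the aggregation above concrete, here is a minimal shape check of the dueling head.
# This is a standalone sketch (the dimensions are illustrative, e.g. a 4-dimensional
# CartPole-like state with 2 actions), not part of the original notebook.
import torch
_net = DuelingNet(n_states=4, n_actions=2)
_dummy_states = torch.randn(3, 4)        # (batch_size, n_states)
_q = _net(_dummy_states)                 # (batch_size, n_actions)
print(_q.shape)                          # torch.Size([3, 2])
# each row is V(s) + (A(s,a) - mean_a A(s,a)) for that state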
      
# 1.2 Define the replay buffer
"""
A replay buffer first of all has a fixed capacity: the network is only updated once enough
transitions have been stored, otherwise we would be back to the step-by-step (online) updates
used before.
A replay buffer generally needs to provide two methods.
One is push, which appends a transition to the buffer in order and, when the buffer is full,
evicts the oldest sample; if you have studied data structures, a queue is the natural choice
(here a deque with a maxlen plays that role).
The other is sample, which simply draws one or several samples at random (how many is the
batch_size) for the DQN update.
With these two functions clear, you can implement it in your own way; a reference
implementation follows, and a small usage sketch appears right after the class.
"""
from collections import deque
import random
class ReplayBuffer(object):
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.buffer = deque(maxlen=self.capacity)
    def push(self, transitions):
        ''' Store a transition in the replay buffer
        '''
        self.buffer.append(transitions)
    def sample(self, batch_size: int, sequential: bool = False):
        if batch_size > len(self.buffer): # if the requested batch is larger than what is stored, sample everything
            batch_size = len(self.buffer)
        if sequential: # sequential sampling
            rand = random.randint(0, len(self.buffer) - batch_size)
            batch = [self.buffer[i] for i in range(rand, rand + batch_size)]
            return zip(*batch)
        else: # random sampling
            batch = random.sample(self.buffer, batch_size)
            return zip(*batch)
    def clear(self):
        ''' Empty the replay buffer
        '''
        self.buffer.clear()
    def __len__(self):
        ''' Return the number of transitions currently stored
        '''
        return len(self.buffer)
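
# A minimal usage sketch of the buffer. The transition values below are made-up placeholders,
# only meant to show the push/sample interface; they are not part of the original notebook.
_buffer = ReplayBuffer(capacity=100)
for _i in range(5):
    # a transition is (state, action, reward, next_state, done)
    _buffer.push(([0.0, 0.0, 0.0, float(_i)], 0, 1.0, [0.0, 0.0, 0.0, float(_i + 1)], False))
_states, _actions, _rewards, _next_states, _dones = _buffer.sample(batch_size=3)
print(len(_buffer))   # 5
print(_actions)       # a tuple of 3 sampled actions, e.g. (0, 0, 0)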


# 1.3 Define the algorithm itself
import torch
import torch.optim as optim
import math
import copy
import numpy as np
class DuelingDQN:
    def __init__(self,model,memory,cfg):
        self.n_actions = cfg.n_actions  
        self.device = torch.device(cfg.device) 
        self.gamma = cfg.gamma  # discount factor
        # parameters of the epsilon-greedy policy
        self.sample_count = 0  # counter used for the epsilon decay
        self.epsilon = cfg.epsilon_start
        self.epsilon_start = cfg.epsilon_start
        self.epsilon_end = cfg.epsilon_end
        self.epsilon_decay = cfg.epsilon_decay
        self.batch_size = cfg.batch_size
        self.target_update = cfg.target_update
        self.policy_net = model.to(self.device)
        # the target network must be a separate copy of the model; assigning the same object
        # twice would make policy_net and target_net share parameters and defeat the purpose
        self.target_net = copy.deepcopy(model).to(self.device)
        # copy the parameters to the target network
        for target_param, param in zip(self.target_net.parameters(),self.policy_net.parameters()): 
            target_param.data.copy_(param.data)
        # self.target_net.load_state_dict(self.policy_net.state_dict()) # or use this to copy parameters
        self.optimizer = optim.Adam(self.policy_net.parameters(), lr=cfg.lr)  # optimizer
        self.memory = memory # replay buffer
        self.update_flag = False 

    def sample_action(self, state):
        ''' Sample an action with the epsilon-greedy policy
        '''
        self.sample_count += 1
        # exponential decay of epsilon:
        # epsilon = epsilon_end + (epsilon_start - epsilon_end) * exp(-sample_count / epsilon_decay)
        self.epsilon = self.epsilon_end + (self.epsilon_start - self.epsilon_end) * \
            math.exp(-1. * self.sample_count / self.epsilon_decay) 
        if random.random() > self.epsilon:
            with torch.no_grad():
                if isinstance(state, tuple):
                    # gym >= 0.26: env.reset() returns (obs, info); keep only the observation
                    state = state[0]
                state = torch.tensor(state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
                q_values = self.policy_net(state)
                action = q_values.max(1)[1].item() # choose action corresponding to the maximum q value
        else:
            action = random.randrange(self.n_actions)
        return action
    @torch.no_grad() # no gradients are computed; the decorator has the same effect as `with torch.no_grad():`
    def predict_action(self, state):
        ''' Predict the greedy action (used at test time)
        '''
        if isinstance(state, tuple):
            # gym >= 0.26: env.reset() returns (obs, info); keep only the observation
            state = state[0]
        state = torch.tensor(state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
        q_values = self.policy_net(state)
        action = q_values.max(1)[1].item() # choose action corresponding to the maximum q value
        return action
    def update(self):
        if len(self.memory) < self.batch_size: # do not update the policy until the buffer holds at least one batch
            return
        else:
            if not self.update_flag:
                print("Start updating the policy!")
                self.update_flag = True
        # randomly sample a batch of transitions from the replay buffer
        state_batch, action_batch, reward_batch, next_state_batch, done_batch = self.memory.sample(
            self.batch_size)
        # convert the batch to tensors
        processed_state_batch = []
        for s in state_batch:
            if isinstance(s, tuple):
                # if an element is a tuple (obs, info), keep only the observation
                processed_state_batch.append(s[0])
            else:
                processed_state_batch.append(s)
        state_batch = torch.tensor(np.array(processed_state_batch), device=self.device, dtype=torch.float)
        action_batch = torch.tensor(action_batch, device=self.device).unsqueeze(1)  
        reward_batch = torch.tensor(reward_batch, device=self.device, dtype=torch.float).unsqueeze(1)  
        next_state_batch = torch.tensor(np.array(next_state_batch), device=self.device, dtype=torch.float)
        done_batch = torch.tensor(np.float32(done_batch), device=self.device).unsqueeze(1)
        '''
        # Actual Q values
        What this does:
        - feed the state batch into the policy network to obtain the Q values of all actions
          for every state, with shape (batch_size, n_actions);
        - then gather(dim=1, index=action_batch) picks, from each row, the Q value of the
          action the agent actually took.
        '''
        q_value_batch = self.policy_net(state_batch).gather(dim=1, index=action_batch) 
        '''
        # Target Q values
        What this does:
        - feed the next-state batch into the target network to obtain the Q values of all
          actions for every sample;
        - .max(1)[0] takes the largest Q value of each row, i.e. the best Q value in that state;
        - .detach() stops gradients from flowing through this value;
        - .unsqueeze(1) reshapes the result to (batch_size, 1).
        Example:
        - suppose the target network outputs [0.3, 0.4, 0.2] for one sample;
          then max(1)[0] gives the value 0.4, and over the whole batch we obtain a tensor of shape (64, 1);
          max(1)[1] would instead give the index 1 (the argmax), again of shape (64, 1).
        '''
        next_max_q_value_batch = self.target_net(next_state_batch).max(1)[0].detach().unsqueeze(1) # maximum Q value of the next state
        '''
        Bellman target (a small numeric illustration follows after the class):
            y_i = r_i + gamma * max_{a'} Q_target(s'_i, a') * (1 - d_i)
        '''
        expected_q_value_batch = reward_batch + self.gamma * next_max_q_value_batch * (1 - done_batch) # target (expected) Q value
        # compute the loss
        loss = nn.MSELoss()(q_value_batch, expected_q_value_batch)
        # optimize the model
        self.optimizer.zero_grad()  
        loss.backward()
        # clip the gradients to prevent them from exploding
        for param in self.policy_net.parameters():  
            param.grad.data.clamp_(-1, 1)
        self.optimizer.step() 
        if self.sample_count % self.target_update == 0: # every target_update steps, copy the policy network parameters to the target network
            self.target_net.load_state_dict(self.policy_net.state_dict())   
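
# The gather / max indexing and the Bellman target used in update() can be checked on tiny
# made-up tensors. The sketch below is a standalone illustration (the numbers and the batch
# size of 2 are hypothetical), not part of the training loop:
_q_all = torch.tensor([[0.3, 0.4, 0.2],
                       [1.0, 0.5, 0.8]])            # policy net output: 2 samples x 3 actions
_actions = torch.tensor([[1], [2]])                 # actions actually taken
_q_taken = _q_all.gather(dim=1, index=_actions)     # -> [[0.4], [0.8]]
_next_q_all = torch.tensor([[0.3, 0.4, 0.2],
                            [0.1, 0.9, 0.0]])       # target net output for the next states
_next_max_q = _next_q_all.max(1)[0].unsqueeze(1)    # -> [[0.4], [0.9]]
_rewards = torch.tensor([[1.0], [1.0]])
_dones = torch.tensor([[0.0], [1.0]])
# y_i = r_i + gamma * max_a' Q_target(s'_i, a') * (1 - d_i)
_targets = _rewards + 0.95 * _next_max_q * (1 - _dones)   # -> [[1.38], [1.00]]
print(_q_taken.squeeze(1), _targets.squeeze(1))
# The epsilon schedule used in sample_action, evaluated at a few sample counts
# (epsilon_start=0.95, epsilon_end=0.01, epsilon_decay=500, as in the Config below):
for _t in [1, 500, 2000]:
    _eps = 0.01 + (0.95 - 0.01) * math.exp(-_t / 500)
    print(f"sample_count={_t}: epsilon={_eps:.3f}")   # roughly 0.948, 0.356, 0.027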


# 2. Define training
def train(cfg, env, agent):
    ''' Train the agent
    '''
    print("Start training!")
    rewards = []  # record the rewards of every episode
    steps = []
    for i_ep in range(cfg.train_eps):
        ep_reward = 0  # cumulative reward within one episode
        ep_step = 0
        state = env.reset()  # reset the environment; in gym >= 0.26 this returns (obs, info)
        for _ in range(cfg.max_steps):
            ep_step += 1
            action = agent.sample_action(state)  # select an action
            next_state, reward, done, truncated, info = env.step(action)
            # next_state, reward, done, _ = env.step(action)  # old gym API (< 0.26)
            agent.memory.push((state, action, reward,next_state, done))  # store the transition
            state = next_state  # move to the next state
            agent.update()  # update the agent
            ep_reward += reward  # accumulate the reward
            if done:
                break
        steps.append(ep_step)
        rewards.append(ep_reward)
        if (i_ep + 1) % 10 == 0:
            print(f"回合:{i_ep+1}/{cfg.train_eps},奖励:{ep_reward:.2f},Epislon:{agent.epsilon:.3f}")
    print("完成训练!")
    env.close()
    return {'rewards':rewards}

def test(cfg, env, agent):
    print("开始测试!")
    rewards = []  # 记录所有回合的奖励
    steps = []
    for i_ep in range(cfg.test_eps):
        ep_reward = 0  # 记录一回合内的奖励
        state = env.reset()  # 重置环境,返回初始状态
        for _ in range(cfg.max_steps):
            action = agent.predict_action(state)  # select the greedy action
            next_state, reward, done, truncated, info = env.step(action)
            # next_state, reward, done, _ = env.step(action)  # old gym API (< 0.26)
            state = next_state  # move to the next state
            ep_reward += reward  # accumulate the reward
            if done:
                break
        rewards.append(ep_reward)
        print(f"回合:{i_ep+1}/{cfg.test_eps},奖励:{ep_reward:.2f}")
    print("完成测试")
    env.close()
    return {'rewards':rewards}


# 3. Define the environment
import gym
import os
def all_seed(env, seed=1):
    ''' Seed everything for reproducibility
    '''
    # gym >= 0.26 removed env.seed(); attach a small shim that seeds the env via env.reset(seed=...)
    if not hasattr(env, 'seed'):
        def seed_fn(self, seed=None):
            env.reset(seed=seed)
            return [seed]
        env.seed = seed_fn.__get__(env, type(env))
    # env.seed(seed) # env config
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed) # seed for CPU
    torch.cuda.manual_seed(seed) # seed for GPU
    os.environ['PYTHONHASHSEED'] = str(seed) # seed for Python hashing
    # cuDNN configuration
    torch.backends.cudnn.deterministic = True # use deterministic algorithms
    torch.backends.cudnn.benchmark = False    # disable cuDNN auto-tuning
    torch.backends.cudnn.enabled = False      # disable cuDNN entirely so results are reproducible
def env_agent_config(cfg):
    env = gym.make(cfg.env_name) # create the environment
    all_seed(env, seed=cfg.seed)
    n_states = env.observation_space.shape[0]
    n_actions = env.action_space.n
    print(f"State space dimension: {n_states}, action space dimension: {n_actions}")
    # store n_states and n_actions in the cfg object
    setattr(cfg, 'n_states', n_states)
    setattr(cfg, 'n_actions', n_actions) 
    model = DuelingNet(n_states, n_actions, hidden_dim = cfg.hidden_dim) # create the model
    memory = ReplayBuffer(cfg.memory_capacity) # create the replay buffer
    agent = DuelingDQN(model, memory, cfg)
    return env,agent


# 4. Set the hyperparameters
import argparse
import matplotlib.pyplot as plt
import seaborn as sns
class Config:
    def __init__(self):
        self.algo_name = 'DuelingDQN' # algorithm name
        self.env_name = 'CartPole-v1' # environment name
        self.seed = 1 # random seed
        self.train_eps = 100 # number of training episodes
        self.test_eps = 10  # number of test episodes
        self.max_steps = 200 # maximum number of steps per episode
        self.gamma = 0.95 # discount factor
        self.lr = 0.0001 # learning rate
        self.epsilon_start = 0.95 # initial epsilon
        self.epsilon_end = 0.01 # final epsilon
        self.epsilon_decay = 500 # epsilon decay rate
        self.memory_capacity = 10000 # replay buffer capacity
        self.batch_size = 64 # batch size sampled from the replay buffer
        self.target_update = 800 # target network update frequency
        self.hidden_dim = 256 # hidden layer dimension of the network
        if torch.cuda.is_available(): # use the GPU if available
            self.device = torch.device('cuda')
        else:
            self.device = torch.device('cpu')
def smooth(data, weight=0.9):  
    '''Smooth a curve with an exponential moving average, similar to TensorBoard's smoothing
    (a tiny numeric example follows below)
    '''
    last = data[0] 
    smoothed = []
    for point in data:
        smoothed_val = last * weight + (1 - weight) * point  # exponential moving average
        smoothed.append(smoothed_val)                    
        last = smoothed_val                                
    return smoothed
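
# A quick sanity check on the smoothing: with a made-up reward sequence (values chosen only
# for illustration) each point moves just 10% of the way towards the raw value:
print(smooth([0.0, 10.0, 10.0, 10.0], weight=0.9))   # [0.0, 1.0, 1.9, 2.71]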

def plot_rewards(rewards,title="learning curve"):
    sns.set()
    plt.figure()  # create a new figure so that several plots can be drawn side by side
    plt.title(f"{title}")
    plt.xlim(0, len(rewards))  # set the x-axis range
    plt.xticks(np.arange(0, len(rewards), 10))
    plt.xlabel('episodes')
    plt.plot(rewards, label='rewards')
    plt.plot(smooth(rewards), label='smoothed')
    plt.legend()
    plt.show()

# 5. Run training
# get the hyperparameters
cfg = Config() 
# training
env, agent = env_agent_config(cfg)
res_dic = train(cfg, env, agent)
 
plot_rewards(res_dic['rewards'], title=f"training curve on {cfg.device} of {cfg.algo_name} for {cfg.env_name}")  
# testing
res_dic = test(cfg, env, agent)
plot_rewards(res_dic['rewards'], title=f"testing curve on {cfg.device} of {cfg.algo_name} for {cfg.env_name}")  # 画出结果
