Deep Learning: Save Best and Early Stop

1. Save Best

Training today's large models can be interrupted partway through, but training can always be resumed: if the GPU crashes, you can continue from the existing weights. Training is essentially just repeated updates of the weights w, so if w has been checkpointed along the way, you can resume from a good checkpoint instead of starting over.
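As a minimal sketch of resuming from such a checkpoint (the path and model below are placeholders; the checkpoint itself is produced by the callback further down, which saves only the model's state_dict):

```python
import torch
import torch.nn as nn

# Placeholder model; in practice this is the same architecture that produced the checkpoint.
model = nn.Linear(10, 2)

# Load previously saved weights (hypothetical path matching the callback below) and resume training from them.
state_dict = torch.load("checkpoints/best.ckpt", map_location="cpu")
model.load_state_dict(state_dict)
# Only the model weights are restored here; the optimizer state is not part of this checkpoint.
```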

The code below saves the weights every 500 steps by default (save_step=500); the save_best_only argument controls whether only the best checkpoint is kept:

```python
import os
import torch


class SaveCheckpointsCallback:
    def __init__(self, save_dir, save_step=500, save_best_only=True):
        """
        Save a checkpoint every `save_step` steps.

        Args:
            save_dir (str): directory to save checkpoints to.
            save_step (int, optional): how often (in steps) to save a checkpoint. Defaults to 500.
            save_best_only (bool, optional): if True, only keep the best model (by metric);
                otherwise save a separate checkpoint at every save step.
        """
        self.save_dir = save_dir                # directory to save to
        self.save_step = save_step              # save every `save_step` steps
        self.save_best_only = save_best_only    # whether to keep only the best model
        self.best_metrics = -1                  # best metric so far; the metric is non-negative, so initialize to -1

        # create the save directory if it does not exist
        if not os.path.exists(self.save_dir):
            os.makedirs(self.save_dir)

    def __call__(self, step, state_dict, metric=None):
        # only save every `save_step` steps
        if step % self.save_step > 0:
            return

        if self.save_best_only:
            assert metric is not None  # a metric must be provided when save_best_only is True
            if metric >= self.best_metrics:
                # save the best model, overwriting the previous one;
                # only the model's state_dict is saved (no step, no optimizer state)
                torch.save(state_dict, os.path.join(self.save_dir, "best.ckpt"))
                # update the best metric
                self.best_metrics = metric
        else:
            # save one checkpoint per save step without overwriting previous ones;
            # again only the model's state_dict is saved, not the optimizer state
            torch.save(state_dict, os.path.join(self.save_dir, f"{step}.ckpt"))
```
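A toy usage sketch of the callback above, with a made-up model, random data, and a placeholder validation accuracy (none of these names come from the original; substitute your real training loop and evaluation):

```python
import torch
import torch.nn as nn

# Toy setup so the sketch runs end to end; replace with your real model, data, and metric.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
save_callback = SaveCheckpointsCallback("checkpoints", save_step=500, save_best_only=True)

for step in range(1, 2001):
    x = torch.randn(32, 4)
    y = torch.randint(0, 2, (32,))
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Placeholder "validation accuracy" computed on the training batch;
    # in practice this would come from a proper evaluation loop.
    val_acc = (model(x).argmax(dim=1) == y).float().mean().item()
    save_callback(step, model.state_dict(), metric=val_acc)
```

With save_best_only=True this keeps a single checkpoints/best.ckpt, overwritten whenever the metric improves.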

2. Early Stop

If, as training continues, validation accuracy starts to drop (or validation loss starts to rise), early stopping is needed:

```python
class EarlyStopCallback:
    def __init__(self, patience=5, min_delta=0.01):
        """
        Stop training when the monitored metric has stopped improving.

        Args:
            patience (int, optional): number of evaluations with no improvement after which
                training will be stopped. Defaults to 5.
            min_delta (float, optional): minimum change in the monitored quantity to qualify as
                an improvement, i.e. an absolute change of less than min_delta counts as no
                improvement. Defaults to 0.01.
        """
        self.patience = patience    # stop training after this many evaluations without improvement
        self.min_delta = min_delta  # minimum improvement needed to reset the counter
        self.best_metric = -1       # best metric so far
        self.counter = 0            # number of consecutive evaluations without improvement

    def __call__(self, metric):
        # the metric here is accuracy, so higher is better
        if metric >= self.best_metric + self.min_delta:
            # update best metric
            self.best_metric = metric
            # reset counter
            self.counter = 0
        else:
            # one more evaluation without improvement; used by the patience check below
            self.counter += 1

    @property  # @property lets callers write `callback.early_stop` instead of `callback.early_stop()`
    def early_stop(self):
        return self.counter >= self.patience
```
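A minimal sketch of the callback in use. Instead of a real training loop, it feeds in a synthetic accuracy curve that plateaus, so you can see when early stopping fires (the numbers are purely illustrative):

```python
early_stop_callback = EarlyStopCallback(patience=5, min_delta=0.01)

# Synthetic validation accuracies; in practice each value would come from evaluating the model.
synthetic_accuracies = [0.60, 0.70, 0.78, 0.80, 0.805, 0.802, 0.804, 0.801, 0.803, 0.800]
for epoch, val_acc in enumerate(synthetic_accuracies):
    early_stop_callback(val_acc)
    if early_stop_callback.early_stop:
        print(f"Stopping at epoch {epoch}: no improvement of at least "
              f"{early_stop_callback.min_delta} for {early_stop_callback.patience} evaluations.")
        break
```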

3. TensorBoard

For TensorBoard visualization, install the package first: `pip install tensorboard`

During training, you can start the TensorBoard service with the command below. Use an absolute path for --logdir, otherwise it may fail to find the logs:

```shell
tensorboard --logdir="D:\PycharmProjects\pythondl\chapter_2_torch\runs" --host 0.0.0.0 --port 8848
```
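The command above only serves logs that already exist under the runs directory. To write logs during training, use torch.utils.tensorboard.SummaryWriter; a minimal sketch with made-up values (the log_dir must sit under the directory passed to --logdir above):

```python
from torch.utils.tensorboard import SummaryWriter

# log_dir is a subdirectory of the --logdir used above (path here is illustrative).
writer = SummaryWriter(log_dir="runs/demo")

for step in range(100):
    fake_loss = 1.0 / (step + 1)  # made-up value; log your real training loss/accuracy here
    writer.add_scalar("train/loss", fake_loss, global_step=step)

writer.close()
```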
