Reinforcement Learning: a Sarsa-lambda Maze-Walking Example

Sarsa-lambda (Sarsa(λ)) is a way to speed up the Sarsa algorithm.

Where Sarsa and Q-learning both update only the single step taken right before a reward is received, Sarsa-lambda updates the steps leading up to that reward, with lambda controlling how far back the credit reaches. lambda takes a value in [0, 1]:

If lambda = 0, Sarsa-lambda reduces to plain Sarsa: only the last step experienced before the reward is updated.

If lambda = 1, Sarsa-lambda updates every step experienced before the reward was obtained. (From https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/3-3-tabular-sarsa-lambda/)
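For reference, the one-step rule this paragraph describes informally, and the rule the learn() method below applies whenever only the most recent step carries a trace, is the standard Sarsa update (with learning rate α, i.e. lr, and discount factor γ, i.e. gamma):

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ r_{t+1} + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \big]$$

Only the single state-action pair visited right before the reward is changed.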

What lambda means:

lambda is a decay value. It reflects the idea that steps far from the reward were probably not the ones that got you to it the fastest. Imagine standing at the treasure and looking back at the path you walked to find it: the footprints close to the treasure are clear, while the distant ones are so small they are hard to make out. So we simply note that footprints closer to the treasure matter more and deserve stronger updates. Like the reward-decay value gamma mentioned earlier, lambda is a step-decay value, also a number between 0 and 1.
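Concretely, the backward view implemented in the learn() method below keeps an eligibility trace E(s, a) for every state-action pair. On each step the TD error is computed as usual, every pair is nudged in proportion to its trace, and then all traces fade by γλ:

$$\delta_t = r_{t+1} + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)$$

$$Q(s, a) \leftarrow Q(s, a) + \alpha\, \delta_t\, E(s, a) \quad \text{for all } s, a$$

$$E(s, a) \leftarrow \gamma \lambda\, E(s, a), \qquad E(s_t, a_t) \text{ set to } 1 \text{ when } (s_t, a_t) \text{ is visited}$$

With λ = 0 the traces vanish immediately and only the last step is updated (plain Sarsa); with λ = 1 they fade only through γ, so every step along the path keeps receiving updates.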

The maze-walking example in code (from Morvan Zhou: https://github.com/MorvanZhou/Reinforcement-learning-with-tensorflow/tree/master/contents/4_Sarsa_lambda_maze):

import numpy as np
import pandas as pd


class RL(object):
    def __init__(self, action_space, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
        self.actions = action_space  # a list
        self.lr = learning_rate
        self.gamma = reward_decay
        self.epsilon = e_greedy

        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)

    def check_state_exist(self, state):
        if state not in self.q_table.index:
            # append the new state to the q table with all-zero action values
            # (pandas removed DataFrame.append in 2.0, so use pd.concat instead)
            new_row = pd.Series(
                [0] * len(self.actions),
                index=self.q_table.columns,
                name=state,
                dtype=np.float64,
            )
            self.q_table = pd.concat([self.q_table, new_row.to_frame().T])

    def choose_action(self, observation):
        self.check_state_exist(observation)
        # action selection
        if np.random.rand() < self.epsilon:
            # choose best action
            state_action = self.q_table.loc[observation, :]
            # some actions may have the same value, randomly choose one among them
            action = np.random.choice(state_action[state_action == np.max(state_action)].index)
        else:
            # choose random action
            action = np.random.choice(self.actions)
        return action

    def learn(self, *args):
        pass


# backward eligibility traces
class SarsaLambdaTable(RL):
    def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9, trace_decay=0.9):
        super(SarsaLambdaTable, self).__init__(actions, learning_rate, reward_decay, e_greedy)

        # backward view, eligibility trace.
        self.lambda_ = trace_decay
        self.eligibility_trace = self.q_table.copy()

    def check_state_exist(self, state):
        if state not in self.q_table.index:
            # append the new state to the q table with all-zero action values
            # (pandas removed DataFrame.append in 2.0, so use pd.concat instead)
            to_be_append = pd.Series(
                    [0] * len(self.actions),
                    index=self.q_table.columns,
                    name=state,
                    dtype=np.float64,
                )
            self.q_table = pd.concat([self.q_table, to_be_append.to_frame().T])

            # also add the new state to the eligibility trace
            self.eligibility_trace = pd.concat([self.eligibility_trace, to_be_append.to_frame().T])

    def learn(self, s, a, r, s_, a_):
        self.check_state_exist(s_)
        q_predict = self.q_table.loc[s, a]
        if s_ != 'terminal':
            q_target = r + self.gamma * self.q_table.loc[s_, a_]  # next state is not terminal
        else:
            q_target = r  # next state is terminal
        error = q_target - q_predict

        # increase the trace for the visited state-action pair

        # Method 1: accumulating trace (traces can grow above 1)
        # self.eligibility_trace.loc[s, a] += 1

        # Method 2: replacing trace (reset the row, then cap the visited pair at 1)
        self.eligibility_trace.loc[s, :] *= 0
        self.eligibility_trace.loc[s, a] = 1

        # Q update: every state-action pair moves in proportion to its eligibility
        self.q_table += self.lr * error * self.eligibility_trace

        # decay the eligibility trace after the update
        self.eligibility_trace *= self.gamma * self.lambda_
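
The class above only defines the learner. To actually walk the maze you also need the maze environment and a training loop; the repository linked above provides these in maze_env.py and run_this.py. Below is a minimal sketch of such a loop, assuming a Maze environment that exposes n_actions, reset(), render(), step(action) and destroy() the way that repository's tkinter maze does; treat these names as assumptions rather than part of this post's code.

from maze_env import Maze   # the tkinter maze from the same repository (assumed)


def update():
    for episode in range(100):
        # start a new episode: reset the maze and choose the first action
        observation = env.reset()
        action = RL.choose_action(str(observation))

        # clear the eligibility trace at the start of every episode
        RL.eligibility_trace *= 0

        while True:
            env.render()

            # take the action, observe reward and next state, then pick the next action
            observation_, reward, done = env.step(action)
            action_ = RL.choose_action(str(observation_))

            # the on-policy Sarsa(lambda) update uses (s, a, r, s', a')
            RL.learn(str(observation), action, reward, str(observation_), action_)

            observation, action = observation_, action_
            if done:
                break

    print('game over')
    env.destroy()


if __name__ == "__main__":
    env = Maze()
    RL = SarsaLambdaTable(actions=list(range(env.n_actions)))
    env.after(100, update)   # tkinter: start training once the window is up
    env.mainloop()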

Reinforcement learning's SARSA (State-Action-Reward-State-Action) is an on-policy method for solving Markov decision processes (MDPs). The λ version, SARSA(λ), introduces eligibility traces, which combine the influence of recent and longer-past steps. To implement SARSA(λ) in MATLAB you can follow these steps (the names below, such as `MDPEnvironment`, `chooseAction` and `environmentStep`, are placeholders you define yourself):

1. **Environment setup**: first create an environment model, e.g. an `MDPEnvironment` class, defining the states, actions, reward function and state-transition probabilities.

```matlab
classdef MDPEnvironment < handle
    % ... define states, actions, rewards and transition probabilities ...
end
```

2. **Initialization**: define the hyperparameters, such as the learning rate alpha, the discount factor gamma, and the λ value.

```matlab
% initialize parameters
alpha  = 0.5;
gamma  = 0.9;
lambda = 0.8;
state  = initialState;                              % initial state
actionValueTable = zeros(numStates, numActions);    % Q table
eligibilityTrace = zeros(numStates, numActions);    % eligibility trace matrix
```

3. **Main loop**: simulate the environment, choose actions, observe the feedback and update the Q values.

```matlab
action = chooseAction(state, actionValueTable, eligibilityTrace);   % epsilon-greedy
while ~terminationCondition
    % take the action and observe the feedback
    [nextState, reward] = environmentStep(state, action);
    nextAction = chooseAction(nextState, actionValueTable, eligibilityTrace);

    % bump the trace for the visited state-action pair
    eligibilityTrace(state, action) = eligibilityTrace(state, action) + 1;

    % SARSA(lambda) update: the TD target uses the action actually chosen next
    delta = reward + gamma * actionValueTable(nextState, nextAction) ...
            - actionValueTable(state, action);
    actionValueTable = actionValueTable + alpha * delta * eligibilityTrace;

    % decay all traces
    eligibilityTrace = gamma * lambda * eligibilityTrace;

    % move to the next step
    state  = nextState;
    action = nextAction;
end
```

4. **Function definitions**:
   - `chooseAction(state, Q, E)`: choose an action with an ε-greedy policy.
   - `environmentStep(state, action)`: simulate the environment and return the next state and reward.

Note: in a real application you would wrap the steps above up properly so they can handle more complex environments and extensions such as Experience Replay or Prioritized Experience Replay.