Reinforcement Learning Exercise 5.9


Exercise 5.9 Modify the algorithm for first-visit MC policy evaluation (Section 5.1) to use the incremental implementation for sample averages described in Section 2.4.
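
For reference, the incremental implementation of sample averages in Section 2.4 avoids storing every sample by maintaining a running estimate: with $Q_n$ denoting the average of the first $n-1$ samples,

$$
Q_{n+1} = Q_n + \frac{1}{n} \bigl[ R_n - Q_n \bigr]
$$

which follows the general pattern $\text{NewEstimate} \leftarrow \text{OldEstimate} + \text{StepSize}\,[\text{Target} - \text{OldEstimate}]$.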

The modified algorithm, stated here for the general off-policy case with weighted importance sampling, is as follows:

$$
\begin{aligned}
&\text{Input: an arbitrary target policy } \pi \\
&\text{Initialize, for all } s \in \mathcal S,\ a \in \mathcal A(s): \\
&\qquad Q(s,a) \in \mathbb R \ (\text{arbitrarily}) \\
&\qquad C(s,a) \leftarrow 0 \\
&\text{Loop forever (for each episode):} \\
&\qquad b \leftarrow \text{any policy with coverage of } \pi \\
&\qquad \text{Generate an episode following } b\text{: } S_0, A_0, R_1, \dots, S_{T-1}, A_{T-1}, R_T \\
&\qquad G \leftarrow 0 \\
&\qquad W \leftarrow 1 \\
&\qquad \text{Loop for each step of episode, } t = T-1, T-2, \dots, 0, \text{ while } W \neq 0: \\
&\qquad\qquad G \leftarrow \gamma G + R_{t+1} \\
&\qquad\qquad \text{Unless the pair } S_t, A_t \text{ appears in } S_0, A_0, S_1, A_1, \dots, S_{t-1}, A_{t-1}: \\
&\qquad\qquad\qquad C(S_t, A_t) \leftarrow C(S_t, A_t) + W \\
&\qquad\qquad\qquad Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \frac{W}{C(S_t, A_t)} \bigl[ G - Q(S_t, A_t) \bigr] \\
&\qquad\qquad W \leftarrow W \, \frac{\pi(A_t \mid S_t)}{b(A_t \mid S_t)}
\end{aligned}
$$

Because the backward loop reaches the first occurrence of each pair $S_t, A_t$ last, the "unless" test fires exactly once per pair per episode, so $C(S_t, A_t)$ accumulates only first-visit weights; the importance-sampling ratio $W$, by contrast, must be updated at every step. On-policy first-visit evaluation is the special case $b = \pi$: then $W = 1$ throughout, $C(s,a)$ simply counts first visits, and the update reduces to the incremental sample average $Q \leftarrow Q + \frac{1}{C}\,[G - Q]$.
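
As a minimal sketch of how this pseudocode might look in Python: the helper `generate_episode(b)`, the probability functions `pi(a, s)` and `b(a, s)`, and the episode format below are assumptions made for illustration, not part of the exercise.

```python
from collections import defaultdict


def off_policy_first_visit_mc(generate_episode, pi, b, gamma=1.0, num_episodes=10_000):
    """First-visit MC prediction of Q with weighted importance sampling.

    Assumes generate_episode(b) returns a list
    [(S_0, A_0, R_1), (S_1, A_1, R_2), ..., (S_{T-1}, A_{T-1}, R_T)],
    and that pi(a, s) and b(a, s) return action probabilities under the
    target and behavior policies (hypothetical interfaces).
    """
    Q = defaultdict(float)   # Q(s, a), arbitrarily initialized to 0
    C = defaultdict(float)   # cumulative weights of first visits

    for _ in range(num_episodes):          # "loop forever" in practice
        episode = generate_episode(b)

        # Index of the first occurrence of each (state, action) pair.
        first_visit = {}
        for t, (s, a, _) in enumerate(episode):
            first_visit.setdefault((s, a), t)

        G, W = 0.0, 1.0
        for t in range(len(episode) - 1, -1, -1):   # t = T-1, ..., 0
            s, a, r = episode[t]
            G = gamma * G + r                       # G <- gamma*G + R_{t+1}
            if first_visit[(s, a)] == t:            # first-visit check
                C[(s, a)] += W
                Q[(s, a)] += (W / C[(s, a)]) * (G - Q[(s, a)])
            W *= pi(a, s) / b(a, s)                 # ratio accumulates every step
            if W == 0.0:                            # all remaining updates would be 0
                break
    return dict(Q)
```

Precomputing `first_visit` keeps the first-visit check O(1) per step instead of rescanning the episode prefix each time, and passing `b = pi` recovers the on-policy case, where `W` stays fixed at 1.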
