Reinforcement Learning Exercise 5.4

This post describes a way to make Monte Carlo ES (Exploring Starts) more efficient: for each state–action pair, maintain only a running mean of the returns and a visit count instead of the full list of returns. The incremental update reduces memory and computation without changing the values that are computed.


Exercise 5.4 The pseudocode for Monte Carlo ES is inefficient because, for each state–action pair, it maintains a list of all returns and repeatedly calculates their mean. It would be more efficient to use techniques similar to those explained in Section 2.4 to maintain just the mean and a count (for each state–action pair) and update them incrementally. Describe how the pseudocode would be altered to achieve this.
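The key idea from Section 2.4 is that a sample average can be maintained incrementally: after the $n$-th observed return, the new mean follows from the old mean and the count alone, so no list of returns is needed. Written with returns $G_i$ rather than the rewards used in Section 2.4:

$$
Q_{n+1} = \frac{1}{n}\sum_{i=1}^{n} G_i = Q_n + \frac{1}{n}\bigl(G_n - Q_n\bigr)
$$

Applying this per state–action pair replaces the $Returns(S_t, A_t)$ lists and the repeated averaging with a single count and a constant-time update.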

The altered pseudocode is shown below:
$$
\begin{aligned}
&\text{Initialize:} \\
&\qquad \pi(s) \in \mathcal A(s) \text{ (arbitrarily), for all } s \in \mathcal S \\
&\qquad Q(s, a) \in \mathbb R \text{ (arbitrarily), for all } s \in \mathcal S, a \in \mathcal A(s) \\
&\qquad counts(s, a) \leftarrow 0 \text{, for all } s \in \mathcal S, a \in \mathcal A(s) \\
&\text{Loop forever (for each episode):} \\
&\qquad \text{Choose } S_0 \in \mathcal S, A_0 \in \mathcal A(S_0) \text{ randomly such that all pairs have probability} > 0 \\
&\qquad \text{Generate an episode from } S_0, A_0 \text{, following } \pi: S_0, A_0, R_1, \dots, S_{T-1}, A_{T-1}, R_T \\
&\qquad G \leftarrow 0 \\
&\qquad \text{Loop for each step of episode, } t = T-1, T-2, \dots, 0: \\
&\qquad\qquad G \leftarrow \gamma G + R_{t+1} \\
&\qquad\qquad \text{Unless the pair } S_t, A_t \text{ appears in } S_0, A_0, S_1, A_1, \dots, S_{t-1}, A_{t-1}: \\
&\qquad\qquad\qquad counts(S_t, A_t) \leftarrow counts(S_t, A_t) + 1 \\
&\qquad\qquad\qquad Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \frac{G - Q(S_t, A_t)}{counts(S_t, A_t)} \\
&\qquad\qquad\qquad \pi(S_t) \leftarrow \operatorname{argmax}_a Q(S_t, a)
\end{aligned}
$$
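For concreteness, here is a minimal Python sketch of the incremental version. The environment interface (`env.episode(s0, a0, policy)` returning a list of `(S_t, A_t, R_{t+1})` tuples) and the `states`/`actions` tables are illustrative assumptions, not part of the book's pseudocode; adapt them to your own setup.

```python
import random
from collections import defaultdict

def monte_carlo_es_incremental(env, states, actions, num_episodes=10_000, gamma=1.0):
    """Monte Carlo ES with incremental Q updates (no per-pair return lists)."""
    Q = defaultdict(float)        # Q(s, a), arbitrarily initialized to 0
    counts = defaultdict(int)     # counts(s, a) <- 0
    policy = {s: random.choice(actions[s]) for s in states}  # arbitrary pi(s)

    for _ in range(num_episodes):
        # Exploring starts: every (state, action) pair has probability > 0.
        s0 = random.choice(states)
        a0 = random.choice(actions[s0])
        # Assumed helper: generate [(S_t, A_t, R_{t+1}), ...] from (s0, a0) following pi.
        episode = env.episode(s0, a0, policy)

        G = 0.0
        # Walk the episode backwards: t = T-1, T-2, ..., 0.
        for t in reversed(range(len(episode))):
            s, a, r = episode[t]
            G = gamma * G + r
            # First-visit check: skip if (s, a) occurs earlier in the episode.
            if any((s, a) == (episode[k][0], episode[k][1]) for k in range(t)):
                continue
            counts[(s, a)] += 1
            # Incremental mean: Q <- Q + (G - Q)/counts, replacing the
            # "append to Returns(S_t, A_t) and average" step of the original.
            Q[(s, a)] += (G - Q[(s, a)]) / counts[(s, a)]
            policy[s] = max(actions[s], key=lambda act: Q[(s, act)])

    return Q, policy
```

The only change from the book's version is the bookkeeping: the `counts` table plus the constant-time mean update stand in for the return lists, so memory per state–action pair is O(1) instead of growing with the number of visits.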
