Exercise 5.6 What is the equation analogous to (5.6) for action values $Q(s,a)$ instead of state values $V(s)$, again given returns generated using $b$?
Given a starting state $S_t$ and starting action $A_t$, the probability of the subsequent state-action trajectory $S_{t+1}, A_{t+1}, \ldots, S_T$ occurring under any policy $\pi$ is

$$
\begin{aligned}
&\Pr(S_{t+1}, A_{t+1}, \ldots, S_{T-1}, A_{T-1}, S_T \mid S_t, A_{t:T-1} \sim \pi)\\
&\qquad = p(S_{t+1} \mid S_t, A_t)\,\pi(A_{t+1} \mid S_{t+1}) \cdots p(S_{T-1} \mid S_{T-2}, A_{T-2})\,\pi(A_{T-1} \mid S_{T-1})\,p(S_T \mid S_{T-1}, A_{T-1})\\
&\qquad = \frac{\prod_{k=t}^{T-1} \pi(A_k \mid S_k)\, p(S_{k+1} \mid S_k, A_k)}{\pi(A_t \mid S_t)}
\end{aligned}
$$
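As a sketch of the next step (following the same argument used for state values in the book), taking the ratio of this probability under the target policy $\pi$ to that under the behavior policy $b$ cancels the transition probabilities $p$ and, because $A_t$ is given, the first action's ratio as well; the importance sampling ratio for returns starting at $(S_t, A_t)$ therefore begins at time $t+1$:

$$
\begin{aligned}
\rho_{t+1:T-1} &\doteq \frac{\prod_{k=t}^{T-1} \pi(A_k \mid S_k)\, p(S_{k+1} \mid S_k, A_k)\,/\,\pi(A_t \mid S_t)}{\prod_{k=t}^{T-1} b(A_k \mid S_k)\, p(S_{k+1} \mid S_k, A_k)\,/\,b(A_t \mid S_t)}
= \prod_{k=t+1}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)},\\[4pt]
\mathbb{E}\bigl[\rho_{t+1:T-1}\, G_t \mid S_t = s, A_t = a\bigr] &= q_\pi(s, a).
\end{aligned}
$$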

In summary: given returns generated using the behavior policy $b$, the analogue of (5.6) for action values $Q(s,a)$ follows from the trajectory probability above and the relative probability of that trajectory under the target and behavior policies. The action value $q_\pi(s,a)$ is the expectation of the return $G_t$ weighted by the importance sampling ratio $\rho_{t+1:T-1}$, and $Q(s,a)$ can be estimated with either ordinary or weighted importance sampling, as written out below.
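A sketch of the two estimators in the notation of (5.5) and (5.6), where $\mathcal{T}(s,a)$ (an assumed notation analogous to the book's $\mathcal{T}(s)$) denotes the set of time steps at which the pair $(s,a)$ is visited and $T(t)$ the termination time of the episode containing $t$:

$$
Q(s,a) \doteq \frac{\sum_{t \in \mathcal{T}(s,a)} \rho_{t+1:T(t)-1}\, G_t}{|\mathcal{T}(s,a)|}
\qquad \text{(ordinary importance sampling)},
$$

$$
Q(s,a) \doteq \frac{\sum_{t \in \mathcal{T}(s,a)} \rho_{t+1:T(t)-1}\, G_t}{\sum_{t \in \mathcal{T}(s,a)} \rho_{t+1:T(t)-1}}
\qquad \text{(weighted importance sampling)}.
$$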