Reinforcement Learning: TD Algorithms (Sarsa and Q-learning)

1. The Sarsa Algorithm
1.1 TD Target
- The discounted return is defined as:

  $$
  \begin{aligned}
  U_t &= R_t + \gamma R_{t+1} + \gamma^2 R_{t+2} + \cdots \\
      &= R_t + \gamma \left(R_{t+1} + \gamma R_{t+2} + \cdots\right) \\
      &= R_t + \gamma\, U_{t+1}
  \end{aligned}
  $$

  A quick numeric check of this recursion is sketched after this list.
- Assume that the reward at time t depends on the state and action at time t as well as the state at time t+1: $R_t \gets (S_t, A_t, S_{t+1})$.
- The action-value function can then be rewritten step by step:

  $$
  \begin{aligned}
  Q_\pi(s_t, a_t) &= E\left[U_t \mid a_t, s_t\right] \\
  &= E\left[R_t + \gamma U_{t+1} \mid a_t, s_t\right] \\
  &= E\left[R_t \mid a_t, s_t\right] + \gamma\, E\left[U_{t+1} \mid a_t, s_t\right] \\
  &= E\left[R_t \mid a_t, s_t\right] + \gamma\, E\left[Q_\pi(S_{t+1}, A_{t+1}) \mid a_t, s_t\right]
  \end{aligned}
  $$

  so that

  $$
  Q_\pi(s_t, a_t) = E\left[R_t + \gamma\, Q_\pi(S_{t+1}, A_{t+1})\right].
  $$

  Approximating this expectation with a single observed transition gives the TD target $y_t = r_t + \gamma\, Q_\pi(s_{t+1}, a_{t+1})$ used by Sarsa; a tabular update sketch follows below.
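
As a quick check of the recursion $U_t = R_t + \gamma U_{t+1}$, the sketch below computes the returns of a finite reward sequence backwards and compares $U_0$ with the direct summation. The reward values and $\gamma = 0.9$ are made-up illustration inputs, not from the article.

```python
# Minimal sketch: verify U_t = R_t + gamma * U_{t+1} on a finite reward sequence.
# The rewards and gamma below are made-up illustration values.
gamma = 0.9
rewards = [1.0, 0.0, 2.0, 3.0]  # R_0, R_1, R_2, R_3 (episode ends after R_3)

# Compute returns backwards: U_T = R_T, then U_t = R_t + gamma * U_{t+1}.
returns = [0.0] * len(rewards)
returns[-1] = rewards[-1]
for t in reversed(range(len(rewards) - 1)):
    returns[t] = rewards[t] + gamma * returns[t + 1]

# Direct definition U_0 = R_0 + gamma*R_1 + gamma^2*R_2 + ... should match returns[0].
u0_direct = sum((gamma ** k) * r for k, r in enumerate(rewards))
print(returns[0], u0_direct)  # both print 4.807
```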

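Building on the last equation, Sarsa replaces the expectation with a single observed transition $(s_t, a_t, r_t, s_{t+1}, a_{t+1})$ and moves $Q(s_t, a_t)$ toward the TD target $r_t + \gamma\, Q(s_{t+1}, a_{t+1})$. Below is a minimal tabular sketch; the `epsilon_greedy` helper, the array layout of `Q`, and all hyperparameter values are assumptions for illustration, not part of the original article.

```python
import numpy as np

def epsilon_greedy(Q, s, epsilon=0.1):
    """Pick a random action with probability epsilon, otherwise the greedy action for state s."""
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One tabular Sarsa step on a Q-table indexed as Q[state, action]."""
    td_target = r + gamma * Q[s_next, a_next]     # y_t = r_t + gamma * Q(s_{t+1}, a_{t+1})
    Q[s, a] += alpha * (td_target - Q[s, a])      # move Q(s_t, a_t) toward the TD target
    return Q
```

In a training loop, the next action $a_{t+1}$ is sampled from the same epsilon-greedy policy before the update, which is what makes Sarsa on-policy. For comparison, Q-learning (the other algorithm named in the title) replaces $Q(s_{t+1}, a_{t+1})$ in the TD target with $\max_a Q(s_{t+1}, a)$.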