Bellman Equation
$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}\left[G_t \mid S_t=s\right] \\
&= \mathbb{E}\left[R_{t+1}+\gamma G_{t+1} \mid S_t=s\right] \\
&= \mathbb{E}\left[R_{t+1} \mid S_t=s\right]+\gamma\,\mathbb{E}\left[G_{t+1} \mid S_t=s\right] \\
&= \sum_a \pi(a \mid s)\,\mathbb{E}\left[R_{t+1} \mid S_t=s, A_t=a\right]
   + \gamma \sum_{s^{\prime}} \mathbb{E}\left[G_{t+1} \mid S_t=s, S_{t+1}=s^{\prime}\right] p\left(s^{\prime} \mid s\right) \\
&= \sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r
   + \gamma \sum_{s^{\prime}} v_\pi\left(s^{\prime}\right) p\left(s^{\prime} \mid s\right) \\
&= \sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r
   + \gamma \sum_{s^{\prime}} v_\pi\left(s^{\prime}\right) \sum_a p\left(s^{\prime} \mid s, a\right) \pi(a \mid s) \\
&= \underbrace{\sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r}_{\text{mean of immediate rewards}}
   + \underbrace{\gamma \sum_a \pi(a \mid s) \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) v_\pi\left(s^{\prime}\right)}_{\text{mean of future rewards}} \\
&= \sum_a \pi(a \mid s) \underbrace{\left[\sum_r p(r \mid s, a)\, r + \gamma \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) v_\pi\left(s^{\prime}\right)\right]}_{q_\pi(s, a)} \\
&= \sum_a \pi(a \mid s) \underbrace{\mathbb{E}\left[G_t \mid S_t=s, A_t=a\right]}_{q_\pi(s, a)} \\
&= \sum_a \pi(a \mid s)\, q_\pi(s, a), \quad \forall s \in \mathcal{S}.
\end{aligned}
$$

The step $\mathbb{E}\left[G_{t+1} \mid S_t=s, S_{t+1}=s^{\prime}\right] = \mathbb{E}\left[G_{t+1} \mid S_{t+1}=s^{\prime}\right] = v_\pi(s^{\prime})$ relies on the Markov property: once $S_{t+1}=s^{\prime}$ is given, the return $G_{t+1}$ no longer depends on $S_t$.
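To make the final form concrete, here is a minimal sketch of iterative policy evaluation on a hypothetical two-state, two-action MDP. The arrays `P`, `R`, `pi` and the value `gamma = 0.9` below are made-up illustrative numbers, not taken from the article; the loop simply applies $v(s) \leftarrow \sum_a \pi(a \mid s)\left[\sum_r p(r \mid s,a)\,r + \gamma \sum_{s'} p(s' \mid s,a)\,v(s')\right]$ until the values stop changing.

```python
import numpy as np

# Hypothetical tiny MDP: 2 states, 2 actions (illustrative numbers only).
gamma = 0.9

# P[s, a, s'] = p(s' | s, a)
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],
    [[0.5, 0.5], [0.0, 1.0]],
])
# R[s, a] = sum_r p(r | s, a) * r  (expected immediate reward)
R = np.array([
    [1.0, 0.0],
    [0.5, 2.0],
])
# pi[s, a] = π(a | s), a fixed stochastic policy
pi = np.array([
    [0.6, 0.4],
    [0.3, 0.7],
])

v = np.zeros(P.shape[0])
for _ in range(1000):
    q = R + gamma * (P @ v)        # q[s, a] = Σ_r p(r|s,a) r + γ Σ_{s'} p(s'|s,a) v(s')
    v_new = (pi * q).sum(axis=1)   # v(s) = Σ_a π(a|s) q(s, a)
    done = np.max(np.abs(v_new - v)) < 1e-10
    v = v_new
    if done:
        break

print("v_pi =", v)
```

Because the update is a γ-contraction, the iterates converge to the unique solution $v_\pi$ regardless of the initial guess.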

This article introduces a key concept in reinforcement learning, the Bellman equation, covering both its basic (expectation) form and its optimality form. The Bellman equation expresses the state-value function under a given policy π in terms of expected return, while the Bellman optimality equation characterizes the condition under which a unique optimal solution exists. The content covers the mean of immediate rewards and the mean of (discounted) future rewards, and mentions how the contraction mapping theorem is used to determine the optimal policy.
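As a companion sketch of that last point, and reusing the hypothetical `P`, `R`, `gamma` arrays from the previous example, the function below repeatedly applies the Bellman optimality operator $(Tv)(s) = \max_a\left[\sum_r p(r \mid s,a)\,r + \gamma \sum_{s'} p(s' \mid s,a)\,v(s')\right]$. Since $T$ is a γ-contraction in the sup-norm, the contraction mapping theorem guarantees convergence to the unique fixed point $v^*$, from which a greedy policy can be read off. The function name and stopping tolerance are assumptions for illustration, not part of the original article.

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration of the Bellman optimality operator."""
    v = np.zeros(P.shape[0])
    for _ in range(max_iter):
        q = R + gamma * (P @ v)              # q(s, a)
        v_new = q.max(axis=1)                # greedy maximization over actions
        if np.max(np.abs(v_new - v)) < tol:  # sup-norm stopping rule
            return v_new, q.argmax(axis=1)   # v*, greedy (optimal) policy
        v = v_new
    return v, q.argmax(axis=1)

# Example usage with the hypothetical MDP above:
# v_star, pi_star = value_iteration(P, R, gamma)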





