Exercise 5.5 Consider an MDP with a single nonterminal state and a single action that transitions back to the nonterminal state with probability $p$ and transitions to the terminal state with probability $1-p$. Let the reward be $+1$ on all transitions, and let $\gamma = 1$. Suppose you observe one episode that lasts 10 steps, with a return of 10. What are the first-visit and every-visit estimators of the value of the nonterminal state?
For the first-visit estimator, only the return following the first visit to the state is used. The first visit occurs at $t = 0$, and the observed return from that point is the episode's total return:
$$V(S_{\text{nonterminal}}) = G_0 = 10$$
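To make the bookkeeping concrete, here is a minimal Python sketch (not part of the original solution; the reward list and variable names are illustrative assumptions) that recovers the first-visit estimate from the single observed episode:

```python
# First-visit Monte Carlo estimate from the one observed episode.
# The nonterminal state is visited at t = 0..9 and each of the 10
# transitions yields reward +1; with gamma = 1 the return following
# time t is G_t = 10 - t.

rewards = [1] * 10  # one +1 reward per transition (illustrative)

# Compute returns backwards: G_t = R_{t+1} + G_{t+1} (gamma = 1).
returns = []
G = 0
for r in reversed(rewards):
    G += r
    returns.append(G)
returns.reverse()  # [10, 9, 8, ..., 1]

# First-visit MC uses only the return following the first visit (t = 0).
first_visit_estimate = returns[0]
print(first_visit_estimate)  # -> 10
```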

For the every-visit estimator, the return following every visit to the state is averaged. The nonterminal state is visited at each of the time steps $t = 0, 1, \dots, 9$, and since $\gamma = 1$ and every reward is $+1$, the return from time $t$ is just the number of remaining steps:

$$G_t = \sum_{k=t}^{9} R_{k+1} = 10 - t, \qquad t = 0, 1, \dots, 9$$

Averaging these ten returns gives

$$V(S_{\text{nonterminal}}) = \frac{1}{10}\sum_{t=0}^{9} G_t = \frac{10 + 9 + \dots + 1}{10} = \frac{55}{10} = 5.5$$

So the first-visit estimate of the value of the nonterminal state is 10, and the every-visit estimate is 5.5.
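For comparison, a self-contained sketch of the every-visit estimate from the same episode, averaging the return after each of the ten visits (again, variable names are illustrative assumptions):

```python
# Every-visit Monte Carlo estimate from the same 10-step episode.
rewards = [1] * 10  # one +1 reward per transition (illustrative)

# Returns G_t for t = 0..9 (gamma = 1): [10, 9, ..., 1].
returns = []
G = 0
for r in reversed(rewards):
    G += r
    returns.append(G)
returns.reverse()

# Every-visit MC averages the return following *every* visit.
every_visit_estimate = sum(returns) / len(returns)
print(every_visit_estimate)  # -> 5.5
```

The every-visit average includes the later visits, whose remaining returns are necessarily smaller, which is why it comes out below the first-visit estimate for this episode.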