Example 4.1 Consider the $4 \times 4$ gridworld shown below.

The nonterminal states are $\mathcal S = \{1, 2, \ldots, 14\}$. There are four actions possible in each state, $\mathcal A = \{\text{up}, \text{down}, \text{right}, \text{left}\}$, which deterministically cause the corresponding state transitions, except that actions that would take the agent off the grid in fact leave the state unchanged. Thus, for instance, $p(6, -1 \mid 5, \text{right}) = 1$, $p(7, -1 \mid 7, \text{right}) = 1$, and $p(10, r \mid 5, \text{right}) = 0$ for all $r \in \mathcal R$. This is an undiscounted, episodic task. The reward is $-1$ on all transitions until the terminal state is reached. The terminal state is shaded in the figure (although it is shown in two places, it is formally one state). The expected reward function is thus $r(s, a, s') = -1$ for all states $s$, $s'$ and actions $a$. Suppose the agent follows the equiprobable random policy (all actions equally likely). The left side of Figure 4.1 shows the sequence of value functions $v_k$ computed by iterative policy evaluation. The final estimate is in fact $v_\pi$, which in this case gives, for each state, the negation of the expected number of steps from that state until termination.
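To make this concrete, here is a minimal sketch of iterative policy evaluation on this gridworld. It assumes the layout described above, with the terminal cells in the top-left and bottom-right corners; the function names and the convergence threshold `theta` are my own choices, not from the book. Running it reproduces the converged values shown in Figure 4.1, e.g. $v_\pi(11) = -14$.

```python
import numpy as np

GRID = 4
ACTIONS = [(-1, 0), (1, 0), (0, 1), (0, -1)]  # up, down, right, left

def is_terminal(r, c):
    # Top-left and bottom-right cells are (formally the same) terminal state.
    return (r, c) == (0, 0) or (r, c) == (GRID - 1, GRID - 1)

def step(r, c, dr, dc):
    # Deterministic transition; moves off the grid leave the state unchanged.
    nr, nc = r + dr, c + dc
    if 0 <= nr < GRID and 0 <= nc < GRID:
        return nr, nc
    return r, c

def policy_evaluation(theta=1e-6):
    v = np.zeros((GRID, GRID))
    while True:
        delta = 0.0
        new_v = np.zeros_like(v)  # terminal cells stay at 0
        for r in range(GRID):
            for c in range(GRID):
                if is_terminal(r, c):
                    continue
                # Equiprobable random policy: average over the four actions.
                total = 0.0
                for dr, dc in ACTIONS:
                    nr, nc = step(r, c, dr, dc)
                    total += -1.0 + v[nr, nc]  # reward -1, gamma = 1
                new_v[r, c] = total / len(ACTIONS)
                delta = max(delta, abs(new_v[r, c] - v[r, c]))
        v = new_v
        if delta < theta:
            return v

print(policy_evaluation().round(0))
```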

Exercise 4.1 In Example 4.1, if $\pi$ is the equiprobable random policy, what is $q_\pi(11, \text{down})$? What is $q_\pi(7, \text{down})$?
Here we can use the result of Exercise 3.13, which expresses $q_\pi$ in terms of $v_\pi$:
$$q_\pi(s,a) = \sum_{s'} \biggl\{ P_{s,s'}^a \cdot \Bigl[ R_{s,s'}^a + \gamma \cdot v_\pi(s') \Bigr] \biggr\}$$
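Since the transitions here are deterministic and the task is undiscounted ($\gamma = 1$), the sum collapses to a single term. Reading the converged values from Figure 4.1 ($v_\pi(11) = -14$, and $v_\pi = 0$ at the terminal state): moving down from state 11 enters the terminal state, and moving down from state 7 lands in state 11, so

$$q_\pi(11, \text{down}) = 1 \cdot \bigl[ -1 + 1 \cdot v_\pi(\text{terminal}) \bigr] = -1 + 0 = -1$$

$$q_\pi(7, \text{down}) = 1 \cdot \bigl[ -1 + 1 \cdot v_\pi(11) \bigr] = -1 - 14 = -15$$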






