Reinforcement Learning - An Introduction memo

This memo covers the basic concepts of Markov decision processes (MDPs), including finite state and action spaces and transition probabilities; it then describes value functions (the state-value function and the action-value function) and how they are computed, followed by the core reinforcement learning algorithms of policy evaluation, policy improvement, policy iteration, and value iteration.


1. MDP (Markov Decision Processes)

finite MDP: finite state space & finite action space

transition probabilities: $p(s' \mid s, a) = \Pr\{S_{t+1} = s' \mid S_t = s, A_t = a\}$

expected reward: $r(s, a, s') = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a, S_{t+1} = s']$
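The post does not fix a data structure for this, but as a concrete (entirely made-up) illustration, a finite MDP can be stored as a table mapping each (s, a) pair to its outcome distribution, from which $p$ and $r$ read off directly:

```python
# A minimal finite MDP, stored as P[s][a] = [(prob, next_state, reward), ...].
# The two states, two actions, and all numbers are hypothetical.
P = {
    0: {"stay": [(1.0, 0, 0.0)],
        "move": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)],
        "move": [(1.0, 0, 0.0)]},
}

def p(s_next, s, a):
    # p(s'|s, a): total probability of landing in s_next
    return sum(prob for prob, s2, _ in P[s][a] if s2 == s_next)

def r(s, a, s_next):
    # r(s, a, s'): expected reward given that the transition went to s_next
    outcomes = [(prob, rew) for prob, s2, rew in P[s][a] if s2 == s_next]
    total = sum(prob for prob, _ in outcomes)
    return sum(prob * rew for prob, rew in outcomes) / total if total else 0.0

print(p(1, 0, "move"))   # 0.8
print(r(0, "move", 1))   # 1.0
```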

2. Value Functions

state-value function: $v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s] = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\middle|\, S_t = s\right]$

action-value function: $q_\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\middle|\, S_t = s, A_t = a\right]$
$G_t$: the return (cumulative discounted reward) following time $t$: $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$
$R_t$: the reward at time $t$, dependent, like $S_t$, on $A_{t-1}$ and $S_{t-1}$
$\gamma$: the discount factor, $0 \le \gamma < 1$

relation between $v_\pi$ and $q_\pi$:
$v_\pi(s) = \sum_a \pi(a \mid s)\, q_\pi(s, a)$
$q_\pi(s, a) = \sum_{s'} p(s' \mid s, a)\,[r(s, a, s') + \gamma v_\pi(s')]$
$\pi(a \mid s)$: the probability of taking action $a$ when in state $s$

Bellman equation for $v_\pi$: $v_\pi(s) = \sum_a \pi(a \mid s) \sum_{s'} p(s' \mid s, a)\,[r(s, a, s') + \gamma v_\pi(s')]$
The Bellman equation expresses $v_\pi(s)$ in terms of the values of its successor states, which is what makes $v_\pi$ learnable by iterative methods.

Bellman equation for $q_\pi$: $q_\pi(s, a) = \sum_{s'} p(s' \mid s, a)\,\big[r(s, a, s') + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s', a')\big]$
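Because the Bellman equation for $v_\pi$ is linear in the unknowns $v_\pi(s)$, small MDPs can be solved for $v_\pi$ exactly. A minimal sketch, using the always-"move" policy on the made-up two-state MDP from section 1:

```python
import numpy as np

# Bellman equation in matrix form: v = r_pi + gamma * P_pi v,
# i.e. (I - gamma * P_pi) v = r_pi.
gamma = 0.9
P_pi = np.array([[0.2, 0.8],    # P_pi[s, s'] = sum_a pi(a|s) p(s'|s, a)
                 [1.0, 0.0]])
r_pi = np.array([0.8, 0.0])     # r_pi[s] = expected one-step reward under pi

v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
print(v)  # exact v_pi for this toy problem
```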

3. Policy Evaluation

policy evaluation: compute $v_\pi$ for a given policy $\pi$
Iterative policy evaluation:
1. For each state $s$, the initial value $v_0(s)$ is chosen arbitrarily (the terminal state, if any, must be given value 0).
2. Successive approximations are obtained by using the Bellman equation as an update rule:
$v_{k+1}(s) = \mathbb{E}_\pi[R_{t+1} + \gamma v_k(S_{t+1}) \mid S_t = s] = \sum_a \pi(a \mid s) \sum_{s'} p(s' \mid s, a)\,[r(s, a, s') + \gamma v_k(s')]$
code:
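The code from the original post is not preserved here; below is a minimal sketch of iterative policy evaluation, assuming the hypothetical P[s][a] = [(prob, next_state, reward), ...] table format from section 1 and a policy given as pi[s][a] probabilities:

```python
def policy_evaluation(P, pi, gamma=0.9, theta=1e-8):
    """Sweep the Bellman update over all states until values change by < theta."""
    V = {s: 0.0 for s in P}                     # v_0 = 0 everywhere (arbitrary)
    while True:
        delta = 0.0
        for s in P:
            v_new = sum(
                pi[s][a] * sum(prob * (rew + gamma * V[s2])
                               for prob, s2, rew in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

# Hypothetical two-state MDP and a uniform-random policy:
P = {0: {"stay": [(1.0, 0, 0.0)], "move": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
     1: {"stay": [(1.0, 1, 2.0)], "move": [(1.0, 0, 0.0)]}}
pi = {s: {a: 1.0 / len(P[s]) for a in P[s]} for s in P}
print(policy_evaluation(P, pi))
```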

4. Policy Improvement

policy improvement: use the value function of the current policy to construct a better policy

greedy policy $\pi'$: $\pi'(s) = \arg\max_a q_\pi(s, a)$
The greedy policy takes the action that looks best in the short term—after one step of lookahead—according to vπ.
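As a sketch, the improvement step under the same assumed P[s][a] table format, given a value dict V such as the one returned by policy_evaluation above:

```python
def greedy_policy(P, V, gamma=0.9):
    """One step of lookahead: in every state, pick the action maximizing q_pi(s, a)."""
    pi_new = {}
    for s in P:
        q = {a: sum(prob * (rew + gamma * V[s2]) for prob, s2, rew in P[s][a])
             for a in P[s]}
        best = max(q, key=q.get)
        pi_new[s] = {a: 1.0 if a == best else 0.0 for a in P[s]}
    return pi_new
```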

5. Policy Iteration & Value Iteration

policy iteration: alternate policy evaluation and policy improvement until the policy is stable; each cycle yields a strictly better policy unless the current one is already optimal

code:
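The original code block is missing; a minimal sketch that simply chains the policy_evaluation and greedy_policy routines from the sections above:

```python
def policy_iteration(P, gamma=0.9):
    """Alternate evaluation and greedy improvement until the policy stops changing."""
    pi = {s: {a: 1.0 / len(P[s]) for a in P[s]} for s in P}  # start uniform
    while True:
        V = policy_evaluation(P, pi, gamma)    # evaluation: compute v_pi
        pi_new = greedy_policy(P, V, gamma)    # improvement: greedy w.r.t. v_pi
        if pi_new == pi:                       # stable policy => optimal
            return pi, V
        pi = pi_new
```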

an example:
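The original example is likewise not preserved; as a stand-in, here is the sketch above run on the made-up two-state MDP from section 1:

```python
P = {0: {"stay": [(1.0, 0, 0.0)], "move": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
     1: {"stay": [(1.0, 1, 2.0)], "move": [(1.0, 0, 0.0)]}}
pi_star, V_star = policy_iteration(P)
print(pi_star)  # in this toy MDP: "move" in state 0, "stay" in state 1
print(V_star)
```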

value iteration

Value iteration can be written as a particularly simple backup operation that combines one sweep of truncated policy evaluation with the policy improvement step, and is often more efficient than full policy iteration.

$v_{k+1}(s) = \max_a \mathbb{E}[R_{t+1} + \gamma v_k(S_{t+1}) \mid S_t = s, A_t = a] = \max_a \sum_{s'} p(s' \mid s, a)\,[r(s, a, s') + \gamma v_k(s')]$

code:
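Again the original code is missing; a minimal value-iteration sketch under the same assumed MDP table format:

```python
def value_iteration(P, gamma=0.9, theta=1e-8):
    """Sweep the max-backup until convergence, then read off the greedy policy."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            v_new = max(sum(prob * (rew + gamma * V[s2])
                            for prob, s2, rew in P[s][a])
                        for a in P[s])
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            break
    # One final greedy step extracts the (near-)optimal policy.
    pi = {s: max(P[s], key=lambda a: sum(prob * (rew + gamma * V[s2])
                                         for prob, s2, rew in P[s][a]))
          for s in P}
    return pi, V
```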

6. Q-Learning

Initialize Q(s, a) arbitrarily
Repeat (for each episode):
    Initialize s
    Repeat (for each step of the episode):
        Choose a from s using a policy derived from Q (e.g., ε-greedy)
        Take action a, observe r, s'
        Q(s, a) ← Q(s, a) + α[r + γ max_a' Q(s', a') − Q(s, a)]
        s ← s'
    until s is terminal
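A runnable sketch of this loop; the five-state chain environment and all parameters below are made up purely to exercise the update rule, and ε follows this post's convention (the probability of acting greedily):

```python
import random

# Hypothetical 5-state chain: actions 0 (left) / 1 (right); reward 1 at the right end.
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def q_learning(episodes=200, alpha=0.1, gamma=0.9, eps=0.9):
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:   # greedy with probability eps, random tie-break
                a = max(ACTIONS, key=lambda a2: (Q[(s, a2)], random.random()))
            else:                       # otherwise explore
                a = random.choice(ACTIONS)
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best action in s'
            target = r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```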
Evaluation:

Update the Q-table: at every step within an episode, Q(s, a) is the stored table entry, and $\max_{a'} Q(s', a')$ is the current estimate of the best value attainable from s'.

decision policy:

ε-greedy: with ε = 0.9, choose the best-known action 90% of the time and a random action the remaining 10% (note that some texts instead use ε for the exploration probability).
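In code, under the same convention (the Q-row values here are made up):

```python
import random

def epsilon_greedy(q_row, eps=0.9):
    """Pick the argmax action with probability eps, otherwise a random action."""
    if random.random() < eps:
        return max(q_row, key=q_row.get)
    return random.choice(list(q_row))

print(epsilon_greedy({"left": 0.1, "right": 0.7}))  # usually "right"
```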
