CS231n Learning Notes -- 14. Reinforcement Learning

1. What is Reinforcement Learning

Overview:

An example:
Another example:

2. Markov Decision Process

  • Mathematical formulation of the RL problem
  • Markov property: Current state completely characterises the state of the world
**Process flow:**
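For concreteness, the lecture defines the MDP by the tuple

$$
(\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathbb{P}, \gamma)
$$

where $\mathcal{S}$ is the set of possible states, $\mathcal{A}$ the set of possible actions, $\mathcal{R}$ the distribution of reward given a (state, action) pair, $\mathbb{P}$ the transition distribution over the next state given (state, action), and $\gamma$ the discount factor. The interaction loop: at $t = 0$ the environment samples an initial state $s_0 \sim p(s_0)$; then, until the episode ends, the agent selects an action $a_t$, the environment samples a reward $r_t \sim R(\cdot \mid s_t, a_t)$ and a next state $s_{t+1} \sim P(\cdot \mid s_t, a_t)$, and the agent receives $r_t$ and $s_{t+1}$. A policy $\pi$ maps each state to an action.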
The optimal policy π*
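In symbols, the optimal policy maximizes the expected cumulative discounted reward:

$$
\pi^* = \arg\max_{\pi} \; \mathbb{E}\left[\sum_{t \geq 0} \gamma^t r_t \,\middle|\, \pi\right],
$$

where the randomness comes from the initial state, the transitions, and the policy itself.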

3. Q-learning

Definitions: Value function and Q-value function:
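Written out, the two definitions are as follows. The value function at state $s$ is the expected cumulative reward from following the policy starting in $s$:

$$
V^{\pi}(s) = \mathbb{E}\left[\sum_{t \geq 0} \gamma^t r_t \,\middle|\, s_0 = s, \pi\right]
$$

The Q-value function at state $s$ and action $a$ is the expected cumulative reward from taking action $a$ in state $s$ and then following the policy:

$$
Q^{\pi}(s, a) = \mathbb{E}\left[\sum_{t \geq 0} \gamma^t r_t \,\middle|\, s_0 = s, a_0 = a, \pi\right]
$$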

Bellman equation:
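The optimal Q-value function $Q^*$ satisfies the Bellman equation:

$$
Q^*(s, a) = \mathbb{E}_{s' \sim \mathcal{E}}\left[r + \gamma \max_{a'} Q^*(s', a') \,\middle|\, s, a\right]
$$

Intuition: if the optimal values $Q^*(s', a')$ of the next state are known, then the best strategy is to take the action that maximizes the expected value of $r + \gamma Q^*(s', a')$.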
Optimization strategy:
**Solving for the optimal policy: Q-learning**
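Q-learning uses a function approximator to estimate the action-value function, $Q(s, a; \theta) \approx Q^*(s, a)$; when the approximator is a deep network this becomes deep Q-learning. The network is trained on the squared Bellman error:

$$
L_i(\theta_i) = \mathbb{E}_{s,a}\left[\left(y_i - Q(s, a; \theta_i)\right)^2\right],
\qquad
y_i = \mathbb{E}_{s'}\left[r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) \,\middle|\, s, a\right]
$$

The backward pass is a gradient update with respect to the Q-function parameters $\theta$; driving the loss to zero means the learned Q-function satisfies the Bellman equation.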
Example: Playing Atari Games
**Q-network Architecture**
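The Q-network from the lecture takes the last 4 grayscale 84x84 frames as input, passes them through two conv layers and two fully connected layers, and outputs one Q-value per action, so a single forward pass scores all actions. Below is a minimal PyTorch sketch of that architecture; the filter counts follow the slide, but treat the exact hyperparameters as an assumption if you reimplement it.

```python
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a stack of 4 grayscale 84x84 frames to one Q-value per action."""
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4),   # 4x84x84 -> 16x20x20
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),  # 16x20x20 -> 32x9x9
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),                 # Q(s, a) for every action a
        )

    def forward(self, x):
        return self.head(self.features(x))
```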
**Training the Q-network: Experience Replay**
Deep Q-Learning with Experience Replay
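Below is a minimal sketch of experience replay and one DQN update step, assuming PyTorch and the `QNetwork` sketch above. Training on random minibatches of stored transitions breaks the correlations between consecutive samples that make naive online Q-learning unstable. The separate `target_net` used for the Bellman target, and all hyperparameters, are assumptions borrowed from common DQN practice rather than details spelled out in these notes.

```python
import random
from collections import deque
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Stores transitions (s, a, r, s', done) and samples random minibatches."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        # s and s_next are assumed to be float tensors of shape (4, 84, 84)
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s_next, done = zip(*batch)
        return (torch.stack(s), torch.tensor(a),
                torch.tensor(r, dtype=torch.float32),
                torch.stack(s_next), torch.tensor(done, dtype=torch.float32))

def dqn_update(q_net, target_net, buffer, optimizer, batch_size=32, gamma=0.99):
    """One gradient step on the squared Bellman error."""
    s, a, r, s_next, done = buffer.sample(batch_size)
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():                                  # Bellman target y
        y = r + gamma * (1.0 - done) * target_net(s_next).max(1).values
    loss = F.mse_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```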

4. Policy Gradients

Intuition:
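Concretely, REINFORCE defines the objective as the expected reward of trajectories sampled from the policy, and the log-derivative trick gives a gradient estimator that needs only samples from the current policy:

$$
J(\theta) = \mathbb{E}_{\tau \sim p(\tau;\theta)}\left[r(\tau)\right],
\qquad
\nabla_\theta J(\theta) = \mathbb{E}_{\tau}\left[r(\tau)\, \nabla_\theta \log p(\tau;\theta)\right]
\approx \sum_{t \geq 0} r(\tau)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)
$$

The transition probabilities cancel out of the gradient, so no model of the environment is needed. Intuition: if a trajectory's reward $r(\tau)$ is high, push up the probabilities of all actions seen along it; if it is low, push them down.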
Variance reduction:
Variance reduction: Baseline
How to choose the baseline?
A better baseline: Want to push up the probability of an action from a state, if this action was better than the **expected value of what we should get from that state**
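In symbols, weighting each action by its advantage (how much better it was than what we expected from that state):

$$
\nabla_\theta J(\theta) \approx \sum_{t \geq 0} \left(Q^{\pi_\theta}(s_t, a_t) - V^{\pi_\theta}(s_t)\right) \nabla_\theta \log \pi_\theta(a_t \mid s_t)
$$

This advantage is exactly the quantity that the actor-critic algorithm below learns to estimate.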
**Actor-Critic Algorithm**
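Below is a minimal sketch of one actor-critic update in PyTorch, assuming an `actor` network that outputs action logits, a `critic` network that estimates V(s), and an optimizer covering both networks' parameters; the one-step TD error stands in for the advantage. The names and hyperparameters are illustrative assumptions, not the exact algorithm from the slides.

```python
import torch
import torch.nn.functional as F

def actor_critic_step(actor, critic, optimizer, s, a, r, s_next, done, gamma=0.99):
    """One update on a single transition (s, a, r, s_next).

    The critic is regressed toward the one-step TD target; the actor follows the
    policy gradient, weighting log pi(a|s) by the TD error, which serves as an
    estimate of the advantage Q(s, a) - V(s).
    """
    v_s = critic(s)                                    # V(s)
    with torch.no_grad():
        v_next = critic(s_next) * (1.0 - done)         # V(s') = 0 at terminal states
        td_target = r + gamma * v_next
    advantage = (td_target - v_s).detach()             # TD error ~ advantage

    log_prob = F.log_softmax(actor(s), dim=-1)[..., a]  # log pi_theta(a | s)
    actor_loss = -(advantage * log_prob).mean()         # gradient ascent on J(theta)
    critic_loss = F.mse_loss(v_s, td_target)            # value regression

    loss = actor_loss + critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```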

5. Applications of REINFORCE

5.1 Recurrent Attention Model (RAM)

Illustration of the results:
**5.2 AlphaGo**

6. Summary

  • Policy gradients: very general, but they suffer from high variance and therefore require a lot of samples.
    Challenge: sample-efficiency
  • Q-learning: does not always work, but when it does it is usually more sample-efficient. Challenge: exploration
  • Guarantees:
    Policy gradients: converge to a local optimum of J(θ), which is often good enough!
    Q-learning: no guarantees, since you are approximating the Bellman equation with a complicated function approximator