Introduction to Reinforcement Learning

Learning from interaction is a foundational idea underlying nearly all theories of learning and intelligence. The approach we explore, called reinforcement learning, is much more focused on goal-directed learning from interaction than are other approaches to machine learning.
The most distinctive feature separating reinforcement learning from other kinds of learning is that training information is used to evaluate how good the states and actions taken are, rather than to instruct the learner on what the correct policy should be.
Reinforcement learning is learning what to do—how to map situations to actions—so as to maximize a numerical reward signal.
These two characteristics—trial-and-error search and delayed reward—are the two most important distinguishing features of reinforcement learning.

One of the challenges that arise in reinforcement learning, and not in other kinds of learning, is the trade-off between exploration and exploitation.
Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment.
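
To make the exploration-exploitation trade-off concrete, here is a minimal sketch (mine, not from the text) of an epsilon-greedy learner on a toy 3-armed bandit; the arm payout probabilities, the EPSILON constant, and the function names are all illustrative assumptions.

```python
# Sketch of epsilon-greedy action selection on a hypothetical 3-armed bandit.
# Exploitation: pick the arm with the best value estimate so far.
# Exploration: with probability EPSILON, pick a random arm instead.
import random

TRUE_REWARD_PROB = [0.2, 0.5, 0.8]   # hidden payout probability of each arm (assumed)
EPSILON = 0.1                        # fraction of steps spent exploring

def pull(arm):
    """Return reward 1 with the arm's hidden probability, else 0."""
    return 1.0 if random.random() < TRUE_REWARD_PROB[arm] else 0.0

def run(steps=10_000):
    estimates = [0.0] * 3            # running value estimate per arm
    counts = [0] * 3
    total = 0.0
    for _ in range(steps):
        if random.random() < EPSILON:
            arm = random.randrange(3)              # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = pull(arm)           # evaluative feedback: only the chosen arm's reward is observed
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental sample mean
        total += reward
    return estimates, total / steps

estimates, average_reward = run()
print("value estimates:", [round(v, 2) for v in estimates])
print("average reward:", round(average_reward, 2))
```

With epsilon = 0.1 the learner exploits its current best estimate on most steps, yet still samples the other arms often enough to discover that the third arm pays best.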

One must look beyond the most obvious examples of agents and their environments to appreciate the generality of the reinforcement learning framework.

Features shared by cases that can use reinforcement learning:
All involve interaction between an active decision-making agent and its environment, within which the agent seeks to achieve a goal despite uncertainty about its environment.
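
Below is a minimal sketch (mine, not from the text) of that agent-environment interaction loop. The environment is a hypothetical 1-D corridor with the goal at position 3, and the Environment and Agent classes and their methods are illustrative assumptions.

```python
# Sketch of the agent-environment interaction loop: the agent acts, the
# environment responds with a new state and a reward, and the reward signal
# defines the goal (1 only on reaching position 3).
import random

class Environment:
    def reset(self):
        self.pos = 0
        return self.pos                          # initial state

    def step(self, action):                      # action is -1 (left) or +1 (right)
        self.pos = max(0, self.pos + action)     # can't walk off the left end
        done = (self.pos == 3)                   # goal reached?
        reward = 1.0 if done else 0.0            # reward signal defines the goal
        return self.pos, reward, done

class Agent:
    def act(self, state):
        return random.choice([-1, +1])           # placeholder decision rule (no learning yet)

env, agent = Environment(), Agent()
state, done, total_reward = env.reset(), False, 0.0
while not done:
    action = agent.act(state)                    # agent chooses an action given the state
    state, reward, done = env.step(action)       # environment returns next state and reward
    total_reward += reward
print("episode return:", total_reward)
```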

Elements of reinforcement learning:

  • agent

  • policy
    A policy is a mapping from perceived states of the environment to actions to be taken when in those states.

  • reward signal
    A reward signal defines the goal in a reinforcement learning problem.

  • value function
    Whereas rewards determine the immediate, intrinsic desirability of environmental states, values indicate the long-term desirability of states after taking into account the states that are likely to follow, and the rewards available in those states.

  • model of the environment (optional)
    A model predicts what the environment will do next. There are two kinds of model: a transition model and a reward model.
    The transition model predicts the next state (i.e. the environment dynamics): P^a_{ss'} = P[S' = s' | S = s, A = a]
    The reward model predicts the next (immediate) reward: R^a_s = E[R | S = s, A = a]
    (A minimal sketch that ties these elements together is given after this list.)
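
Here is a minimal sketch tying the elements above together on a hypothetical 3-state chain MDP: the policy is a state-to-action mapping, the model is the transition table P[s' | s, a] and reward table R(s, a) in the form given above, and the value function is computed from them by repeated backups. The states, actions, table entries, and the discount factor GAMMA are all illustrative assumptions.

```python
# Sketch of the elements on a hypothetical 3-state chain MDP
# (states 0, 1, 2; state 2 is terminal).

GAMMA = 0.9                                      # discount factor for future rewards

# Model of the environment:
# transition model P[s' | s, a] and reward model R(s, a), as in the formulas above.
P = {
    (0, "right"): {1: 1.0}, (0, "stay"): {0: 1.0},
    (1, "right"): {2: 1.0}, (1, "stay"): {1: 1.0},
}
R = {
    (0, "right"): 0.0, (0, "stay"): 0.0,
    (1, "right"): 1.0, (1, "stay"): 0.0,         # reward only for entering the terminal state
}

# Policy: a mapping from states to actions.
policy = {0: "right", 1: "right"}

# Value function: long-term desirability of each state under the policy,
# computed by repeatedly backing up expected reward plus discounted next-state value.
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(100):
    for s, a in policy.items():
        V[s] = R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)].items())

print(V)                                         # state 0: value 0.9 despite immediate reward 0
```

State 0 has an immediate reward of 0 but a value of about 0.9, which is exactly the distinction drawn above between the reward signal and the value function.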

Reference: Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, the authoritative textbook for reinforcement learning.