
New Book Recommendation | Reinforcement Learning for Sequential Decision and Optimal Control


Purchase Link

Readers can purchase the book directly through the following link:
Link

Introduction

The rapid development of artificial intelligence is reshaping many areas of human society, and reinforcement learning (RL), one of its core technologies, has drawn wide attention from both academia and industry. Successful applications such as AlphaGo and ChatGPT have demonstrated RL's great potential for solving highly complex decision and control problems. Oriented toward the industrial control domain, this book dissects the principles, algorithms, and applications of reinforcement learning, systematically introducing the field's fundamentals together with typical examples, and aims to give readers a comprehensive and accessible reference.

A Message from the Author

Dear readers, as the need to handle complex decision and control problems in industry keeps growing, reinforcement learning, a technique inspired by the learning mechanism of the human brain, is showing very broad application prospects. However, the mathematics underlying reinforcement learning is deep and its framework is rather intricate, and real-world engineering applications bring challenges of all kinds. To address these challenges, I wrote this teaching reference book based on the graduate course "Reinforcement Learning and Control" that I teach at Tsinghua University. While writing it, I solicited extensive feedback from industry experts and academic researchers, striving to produce a practical textbook that is both comprehensive and concise. I hope that through this book you will master the core concepts of reinforcement learning, become proficient in designing its various algorithms, and apply them to real industrial control problems. I wish you an enjoyable and rewarding learning journey!

About the Book

Following a structure of principle analysis, mainstream algorithms, and typical examples, this book systematically introduces reinforcement learning methods for decision and control of dynamic systems. The book consists of 11 chapters covering basic RL concepts, Monte Carlo methods, temporal-difference methods, dynamic programming, function approximation, policy gradient methods, approximate dynamic programming, the handling of state constraints, and deep reinforcement learning. It is mainly intended for researchers and engineers working on engineering applications, offering them a fairly complete reference suitable both for getting started and for further advancement.

About the Author

Li Shengbo (李升波) is a professor and doctoral supervisor at Tsinghua University and Vice Dean of the School of Vehicle and Mobility. He has studied at Stanford University, the University of Michigan, and the University of California, Berkeley. His research focuses on autonomous vehicles, reinforcement learning, and optimal control. He has published more than 130 papers, with over 15,000 citations and an h-index of 62, and has received more than ten outstanding paper awards at domestic and international academic conferences. His awards include the First Prize in Natural Science of the Chinese Association of Automation, the Special Prize for Science and Technology Progress of the Chinese automobile industry, the Second Prize of the National Science and Technology Progress Award, and the Second Prize of the National Technological Invention Award. He has been selected as a national-level leading talent in science and technology innovation, a young and middle-aged leading talent in science and technology innovation of the transportation industry, a recipient of the Outstanding Young Scientist Award of the Chinese automotive industry, and a member of the first cohort of the Beijing Natural Science Foundation Distinguished Young Scholars program. He has served as a member of the Board of Governors of the IEEE ITS Society, founding chair of the Youth Working Committee of the China Society of Automotive Engineers, Senior Associate Editor of IEEE OJ ITS, and Associate Editor of IEEE Transactions on ITS and IEEE ITS Magazine.

Table of Contents

Chapter 1. Introduction to Reinforcement Learning

1.1 History of RL
1.1.1 Dynamic Programming
1.1.2 Trial-and-Error Learning
1.2 Examples of RL Applications
1.2.1 Tic-Tac-Toe
1.2.2 Chinese Go
1.2.3 Autonomous Vehicles
1.3 Key Challenges in Today’s RL
1.3.1 Exploration-Exploitation Dilemma
1.3.2 Uncertainty and Partial Observability
1.3.3 Temporally Delayed Reward
1.3.4 Infeasibility from Safety Constraint
1.3.5 Non-stationary Environment
1.3.6 Lack of Generalizability
1.4 References

Chapter 2. Principles of RL Problems

2.1 Four Elements of RL Problems
2.1.1 Environment Model
2.1.2 State-Action Sample
2.1.3 Policy
2.1.4 Reward Signal
2.2 Classification of RL Methods
2.2.1 Definition of RL Problems
2.2.2 Bellman’s Principle of Optimality
2.2.3 Indirect RL Methods
2.2.4 Direct RL Methods
2.3 A Broad View of RL
2.3.1 Influence of Initial State Distribution
2.3.2 Differences between RL and MPC
2.3.3 Various Combination of Four Elements
2.4 Measures of Learning Performance
2.4.1 Policy Performance
2.4.2 Learning Accuracy
2.4.3 Learning Speed
2.4.4 Sample Efficiency
2.4.5 Approximation Accuracy
2.5 Two Examples of Markov Decision Processes
2.5.1 Example: Indoor Cleaning Robot
2.5.2 Example: Autonomous Driving System
2.6 References
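
As a quick orientation for readers new to the topic: the Bellman principle of optimality in Section 2.2.2 is the foundation for everything that follows. In standard discounted-MDP notation (which may differ in detail from the book's own), the optimal state-value function satisfies

$$
V^*(s) = \max_{a} \sum_{s'} p(s' \mid s, a)\,\bigl[\, r(s, a, s') + \gamma V^*(s') \,\bigr],
$$

and, roughly speaking, the indirect RL methods of Section 2.2.3 estimate a solution of this equation from data, while the direct methods of Section 2.2.4 optimize the policy's expected return without solving it explicitly.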

Chapter 3. Model-Free Indirect RL: Monte Carlo

3.1 MC Policy Evaluation
3.2 MC Policy Improvement
3.2.1 Greedy Policy
3.2.2 Policy Improvement Theorem
3.2.3 MC Policy Selection
3.3 On-Policy Strategy vs. Off-Policy Strategy
3.3.1 On-Policy Strategy
3.3.2 Off-Policy Strategy
3.4 Understanding Monte Carlo RL from a Broad Viewpoint
3.4.1 On-Policy MC Learning Algorithm
3.4.2 Off-Policy MC Learning Algorithm
3.4.3 Incremental Estimation of Value Function
3.5 Example of Monte Carlo RL
3.5.1 Cleaning Robot in a Grid World
3.5.2 MC with Action-Value Function
3.5.3 Influences of Key Parameters
3.6 References
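
Chapter 3's central idea is to estimate value functions from sampled returns rather than from a model. As a rough, self-contained sketch of that idea (illustration only, not code from the book; the toy random-walk environment and all parameters below are invented), first-visit Monte Carlo policy evaluation with the incremental value update of Section 3.4.3 can look like this:

```python
import random
from collections import defaultdict

# Toy 1-D random walk, invented for illustration: states 0..4,
# episodes start in state 2, states 0 and 4 are terminal,
# and reaching state 4 yields reward 1.
def sample_episode(policy):
    state, episode = 2, []
    while state not in (0, 4):
        action = policy(state)                  # -1 or +1
        next_state = state + action
        reward = 1.0 if next_state == 4 else 0.0
        episode.append((state, reward))
        state = next_state
    return episode

def first_visit_mc(policy, num_episodes=5000, gamma=0.95):
    V = defaultdict(float)   # state-value estimates
    N = defaultdict(int)     # visit counters
    for _ in range(num_episodes):
        episode = sample_episode(policy)
        G, first_returns = 0.0, {}
        # Sweep backwards so G accumulates the discounted return;
        # the last overwrite per state is its first visit in the episode.
        for state, reward in reversed(episode):
            G = reward + gamma * G
            first_returns[state] = G
        for state, G in first_returns.items():
            N[state] += 1
            V[state] += (G - V[state]) / N[state]   # incremental mean
    return dict(V)

random_policy = lambda s: random.choice((-1, +1))
print(first_visit_mc(random_policy))
```

The same estimator becomes off-policy (Section 3.3.2) once the sampled returns are reweighted by importance-sampling ratios between the target and behavior policies.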

Chapter 4. Model-Free Indirect RL: Temporal Difference

4.1 TD Policy Evaluation
4.2 TD Policy Improvement
4.2.1 On-Policy Strategy
4.2.2 Off-Policy Strategy
4.3 Typical TD Learning Algorithms
4.3.1 On-Policy TD: SARSA
4.3.2 Off-Policy TD: Q-Learning
4.3.3 Off-Policy TD: Expected SARSA
4.3.4 Recursive Value Initialization
4.4 Unified View of TD and MC
4.4.1 n-Step TD Policy Evaluation
4.4.2 TD-Lambda Policy Evaluation
4.5 Examples of Temporal Difference
4.5.1 Results of SARSA
4.5.2 Results of Q-Learning
4.5.3 Comparison of MC, SARSA, and Q-Learning
4.6 References
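
Of the algorithms listed above, Q-learning (Section 4.3.2) is the most widely used in practice. The sketch below is a minimal tabular version run against a hypothetical environment object; the `reset()`/`step()`/`actions` interface is assumed here for illustration and is not an API defined by the book:

```python
import random
from collections import defaultdict

def q_learning(env, num_episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning (off-policy temporal-difference control).

    Assumes env.reset() -> state, env.step(action) -> (next_state, reward, done),
    and a finite list env.actions; this interface is hypothetical.
    """
    Q = defaultdict(float)   # Q[(state, action)] action-value table

    def greedy(state):
        return max(env.actions, key=lambda a: Q[(state, a)])

    for _ in range(num_episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy behavior policy supplies the exploration.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = greedy(state)
            next_state, reward, done = env.step(action)
            # Off-policy TD target: bootstrap from the greedy (max) action value.
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Replacing the max over next actions with the value of the action the behavior policy actually takes next turns the same loop into on-policy SARSA (Section 4.3.1).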

Chapter 5. Model-Based Indirect RL: Dynamic Programming

5.1 Stochastic Sequential Decision
5.1.1 Model for Stochastic Environment
5.1.2 Average Cost vs. Discounted Cost
5.1.3 Policy Iteration vs. Value Iteration
5.2 Policy Iteration Algorithm
5.2.1 Policy Evaluation (PEV)
5.2.2 Policy Improvement (PIM)
5.2.3 Proof of Convergence
5.2.4 Explanation with Newton-Raphson Mechanism
5.3 Value Iteration Algorithm
5.3.1 Explanation with Fixed-point Iteration Mechanism
5.3.2 Convergence of DP Value Iteration
5.3.3 Value Iteration for Problems with Average Costs
5.4 Stochastic Linear Quadratic Control
5.4.1 Average Cost LQ Control
5.4.2 Discounted Cost LQ Control
5.4.3 Performance Comparison with Simulations
5.5 Additional Viewpoints about DP
5.5.1 Unification of Policy Iteration and Value Iteration
5.5.2 Unific
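
Because Chapter 5 assumes a known model, the Bellman equation can be solved by fixed-point iteration (Section 5.3.1) instead of being estimated from samples. A minimal value-iteration sketch for a finite MDP with discounted costs follows; the dictionary-based model format (P for transition probabilities, c for stage costs) is an assumption made here for illustration, not the book's notation:

```python
def value_iteration(states, actions, P, c, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP with discounted stage costs.

    P[(s, a)] is a dict {next_state: probability}; c[(s, a)] is the stage cost.
    Both are hypothetical input formats chosen for this sketch.
    """
    V = {s: 0.0 for s in states}
    while True:
        # Bellman backup: minimize expected cost-to-go over all actions.
        V_new = {
            s: min(
                c[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())
                for a in actions
            )
            for s in states
        }
        delta = max(abs(V_new[s] - V[s]) for s in states)
        V = V_new
        if delta < tol:
            break
    # Recover a greedy (cost-minimizing) policy from the converged values.
    policy = {
        s: min(actions, key=lambda a: c[(s, a)]
               + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items()))
        for s in states
    }
    return V, policy
```

Policy iteration (Section 5.2) reaches the same fixed point by alternating a full policy evaluation (PEV) step with a policy improvement (PIM) step instead of sweeping the value table directly.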
