(Submitted on 5 Nov 2016 (v1), last revised 7 Apr 2017 (this version, v3))
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
Submission history
From: Brendan O'Donoghue
[v1] Sat, 5 Nov 2016 10:49:37 GMT (1094kb,D)
[v2] Mon, 6 Mar 2017 12:38:42 GMT (892kb,D)
[v3] Fri, 7 Apr 2017 15:20:05 GMT (893kb,D)

This paper proposes PGQL, a new method that combines policy gradient with off-policy Q-learning. By drawing update data from an experience replay buffer, it lets policy gradient methods make effective use of off-policy data. The paper also establishes an equivalence between action-value fitting techniques and actor-critic algorithms, and shows that regularized policy gradient methods can be interpreted as advantage function learning algorithms.
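For concreteness, here is a minimal numerical sketch in Python of the identity behind PGQL's Q-value estimate, assuming an entropy-regularized softmax (Boltzmann) policy as described in the abstract; this is not the authors' code, and the variable names (alpha for the entropy weight, q_tilde for the recovered action values) are illustrative.

import numpy as np

# If pi(a|s) is the Boltzmann policy pi(a|s) ∝ exp(Q(s,a)/alpha), then
# alpha * (log pi(a|s) + H(pi(.|s))) equals the advantage Q(s,a) - E_pi[Q],
# so adding a value estimate V(s) recovers the Q-values that the
# off-policy Q-learning step can be applied to.
rng = np.random.default_rng(0)
alpha = 0.1                              # entropy-regularization weight
q = rng.normal(size=5)                   # Q(s, .) for one state, 5 actions

logits = q / alpha                       # Boltzmann policy over the Q-values
log_pi = logits - (logits.max() + np.log(np.sum(np.exp(logits - logits.max()))))
pi = np.exp(log_pi)
entropy = -np.sum(pi * log_pi)

adv_from_policy = alpha * (log_pi + entropy)   # advantage read off the policy
adv_direct = q - np.sum(pi * q)                # Q(s,a) - E_pi[Q(s, .)]
assert np.allclose(adv_from_policy, adv_direct)

v = np.sum(pi * q)                       # value estimate (here: exact E_pi[Q])
q_tilde = adv_from_policy + v            # estimated Q-values for Q-learning updates
assert np.allclose(q_tilde, q)

In PGQL, these Q-values recovered from the policy's action preferences are what the Q-learning updates on replay-buffer experience are applied to, mixed with the usual regularized policy gradient update.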