### Prerequisites
This article is based on ml-agents v0.7. Since these are all preview releases, other versions will certainly differ in significant ways.
Before reading this article, I hope you already have a basic understanding of ml-agents and have tried running the project yourself.
1. You can refer to the articles by 浪尖儿.
2. Besides the videos 浪尖儿 recommends, I also recommend 李宏毅's course at https://www.bilibili.com/video/av24724071. It already covers most of the algorithms ml-agents uses, runs only about six hours in total, and, most importantly, the videos are in Chinese.
3. You also need to know TensorFlow, at least well enough to read the code.
When I opened v0.7 I noticed that Barracuda has replaced the original TensorFlowSharp. It saves models in its own .nn file format, which lets ml-agents run on other platforms. However, Barracuda has to be decompiled to see its source code, so I will share that another time.
### PPO
The reference paper is OpenAI's Proximal Policy Optimization Algorithms.
Here we go to `ml-agents\mlagents\trainers\ppo\models.py` in the ml-agents project directory, which contains the main code for the PPO and Curiosity models. The `create_ppo_optimizer` function holds the PPO algorithm and the final total loss.
There is also a `models.py` under the `bc` directory, which is the model used for imitation learning; I will cover it later.
The code is as follows:
```python
def create_ppo_optimizer(self, probs, old_probs, value, entropy, beta, epsilon, lr, max_step):
    """
    Creates training-specific Tensorflow ops for PPO models.
    :param probs: Current policy probabilities
    :param old_probs: Past policy probabilities
    :param value: Current value estimate
    :param beta: Entropy regularization strength
    :param entropy: Current policy entropy
    :param epsilon: Value for policy-divergence threshold
    :param lr: Learning rate
    :param max_step: Total number of training steps.
    """
    self.returns_holder = tf.placeholder(shape=[None], dtype=tf.float32, name='discounted_rewards')
    self.advantage = tf.placeholder(shape=[None, 1], dtype=tf.float32, name='advantages')
    self.learning_rate = tf.train.polynomial_decay(lr, self.global_step, max_step, 1e-10, power=1.0)
    self.old_value = tf.placeholder(shape=[None], dtype=tf.float32, name='old_value_estimates')

    decay_epsilon = tf.train.polynomial_decay(epsilon, self.global_step, max_step, 0.1, power=1.0)
    decay_beta = tf.train.polynomial_decay(beta, self.global_step, max_step, 1e-5, power=1.0)
    optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)

    clipped_value_estimate = self.old_value + tf.clip_by_value(
        tf.reduce_sum(value, axis=1) - self.old_value,
        -decay_epsilon, decay_epsilon)

    v_opt_a = tf.squared_difference(self.returns_holder, tf.reduce_sum(value, axis=1))
    v_opt_b = tf.squared_difference(self.returns_holder, clipped_value_estimate)
    self.value_loss = tf.reduce_mean(tf.dynamic_partition(tf.maximum(v_opt_a, v_opt_b), self.mask, 2)[1])

    # Here we calculate PPO policy loss. In continuous control this is done independently for each action gaussian
    # and then averaged together. This provides significantly better performance than treating the probability
    # as an average of probabilities, or as a joint probability.
    #region ppo2
    r_theta = tf.exp(probs - old_probs)
    p_opt_a = r_theta * self.advantage
    p_opt_b = tf.clip_by_value(r_theta, 1.0 - decay_epsilon, 1.0 + decay_epsilon) * self.advantage
    self.policy_loss = -tf.reduce_mean(tf.dynamic_partition(tf.minimum(p_opt_a, p_opt_b), self.mask, 2)[1])
    #endregion ppo2

    self.loss = self.policy_loss + 0.5 * self.value_loss - decay_beta * tf.reduce_mean(
        tf.dynamic_partition(entropy, self.mask, 2)[1])

    #curiosity
    if self.use_curiosity:
        self.loss += 10 * (0.2 * self.forward_loss + 0.8 * self.inverse_loss)
    self.update_batch = optimizer.minimize(self.loss)
```
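One detail worth noting is `self.mask` together with `tf.dynamic_partition`. As far as I can tell, the mask flags which timesteps in a (possibly padded) batch are valid, and `tf.dynamic_partition(x, self.mask, 2)[1]` keeps only the valid entries before the mean is taken. Below is a minimal NumPy sketch of that selection; it is not ml-agents code, and the sample values are made up:

```python
import numpy as np

# Hypothetical per-timestep losses and a 0/1 validity mask
# (in ml-agents the mask comes from the model's mask input; these values are made up).
per_step_loss = np.array([0.3, 1.2, 0.7, 0.0, 0.0])
mask = np.array([1, 1, 1, 0, 0])  # 0 = padded / invalid timestep

# Equivalent of tf.dynamic_partition(per_step_loss, mask, 2)[1]:
# partition 1 contains the entries where mask == 1.
valid_loss = per_step_loss[mask == 1]

# The value/policy/entropy losses above average only over the valid timesteps.
mean_loss = valid_loss.mean()
print(valid_loss, mean_loss)  # [0.3 1.2 0.7] 0.733...
```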
Of these, the lines
```python
r_theta = tf.exp(probs - old_probs)
p_opt_a = r_theta * self.advantage
p_opt_b = tf.clip_by_value(r_theta, 1.0 - decay_epsilon, 1.0 + decay_epsilon) * self.advantage
self.policy_loss = -tf.reduce_mean(tf.dynamic_partition(tf.minimum(p_opt_a, p_opt_b), self.mask, 2)[1])
```
are our PPO algorithm. They correspond to the Clipped Surrogate Objective in Section 3 of the PPO paper, with the following formula:
$$
L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\right)\right]
$$
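To make the clipping concrete, here is a small NumPy sketch that evaluates r_t(θ) and the clipped surrogate term the same way the lines above do. It is not ml-agents code; the log-probabilities and advantages are made-up sample values:

```python
import numpy as np

# Made-up per-timestep values, just for illustration.
log_probs_new = np.array([-1.0, -0.5, -2.0])  # log pi_theta(a_t | s_t)
log_probs_old = np.array([-1.2, -0.9, -1.0])  # log pi_theta_old(a_t | s_t)
advantages = np.array([1.0, -0.5, 2.0])       # A_hat_t
epsilon = 0.2

# r_t(theta) = pi_theta / pi_theta_old, computed from log-probabilities,
# exactly like tf.exp(probs - old_probs) above.
r_theta = np.exp(log_probs_new - log_probs_old)

# Unclipped and clipped surrogate terms.
p_opt_a = r_theta * advantages
p_opt_b = np.clip(r_theta, 1.0 - epsilon, 1.0 + epsilon) * advantages

# L^CLIP is the mean of the element-wise minimum; the trainer minimizes its negative.
policy_loss = -np.mean(np.minimum(p_opt_a, p_opt_b))
print(r_theta, policy_loss)
```

When the advantage is positive, the clip caps the ratio at 1 + ε, so a single update cannot push the policy too far in the profitable direction; when the advantage is negative, the minimum keeps the more pessimistic of the two terms. Either way, the objective removes the incentive to move r_t(θ) outside [1 − ε, 1 + ε].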