Ray Example: Learning to Play Pong

This article walks through a simple example of training a neural network to play Pong with OpenAI Gym, showing how Ray can be used for distributed training. It compares how batch size affects training speed on different hardware configurations, explains Ray's distributed framework and how game rollouts and gradient updates are executed in parallel, and closes with a troubleshooting section covering performance problems caused by multithreading and how to address them.

In this example (the full code is at the end), we will use OpenAI Gym to train a very simple neural network to play Pong. The application is adapted from Andrej Karpathy's code with only minor modifications (see the accompanying blog post).

First, install the gym dependency:

pip install gym[atari]

Run the code:

python ray/examples/rl_pong/driver.py --batch-size=10

If you are running on a cluster, append the --redis-address=<redis-address> flag, where <redis-address> is the address of the cluster's Redis server, as follows:

python ray/examples/rl_pong/driver.py --batch-size=10  --redis-address=<redis-address>

Currently, on a large machine with 64 physical cores, computing an update with a batch size of 1 takes roughly 1 second, and a batch size of 10 takes roughly 2.5 seconds. With a batch size of 60 it takes roughly 3 seconds. On a cluster with 11 nodes, each with 18 physical cores, a batch size of 300 takes roughly 10 seconds. If the numbers you see differ substantially from these, take a look at the troubleshooting section at the bottom of this page and consider filing an issue.
Note that these times depend on how long the rollouts take, which in turn depends on how well the policy is doing. For example, a really bad policy will lose very quickly. As the policy learns, we should expect these numbers to increase.

The Distributed Framework

At the core of Andrej's code, a neural network is used to define a "policy" for playing Pong (that is, a function that chooses an action given a state). In a loop, the network repeatedly plays games of Pong and records a gradient from each game. Every ten games, the gradients are combined together and used to update the network.
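For reference, the serial loop described above can be sketched roughly as follows; play_one_game and apply_rmsprop_update are hypothetical placeholders standing in for the rollout and parameter update implemented in the full code at the end.

# A rough sketch of the serial loop described above. The names play_one_game
# and apply_rmsprop_update are hypothetical stand-ins for the rollout and
# RMSProp update shown in the full code at the end of this article.
gradients = []
while True:
    # Play one game and record the gradient it produces.
    grad, reward_sum = play_one_game(model)
    gradients.append(grad)
    # Every ten games, combine the gradients and update the network.
    if len(gradients) == 10:
        apply_rmsprop_update(model, gradients)
        gradients = []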

This example is easy to parallelize because the network can play ten games in parallel, and no information needs to be shared between the games.

We define an actor for the Pong environment, which includes a method for performing a rollout and computing a gradient update. Below is pseudocode for the actor.

@ray.remote
class PongEnv(object):
    def __init__(self):
        # Tell numpy to only use one core. If we don't do this, each actor may
        # try to use all of the cores and the resulting contention may result
        # in no speedup over the serial version. Note that if numpy is using
        # OpenBLAS, then you need to set OPENBLAS_NUM_THREADS=1, and you
        # probably need to do it from the command line (so it happens before
        # numpy is imported).
        os.environ["MKL_NUM_THREADS"] = "1"
        self.env = gym.make("Pong-v0")

    def compute_gradient(self, model):
        # Reset the game.
        observation = self.env.reset()
        while not done:
            # Choose an action using policy_forward.
            # Take the action and observe the new state of the world.
        # Compute a gradient using policy_backward. Return the gradient and reward.
        return [gradient, reward_sum]

We then create a number of actors so that we can perform rollouts in parallel.

actors = [PongEnv.remote() for _ in range(batch_size)]

Inside a for loop, we call this remote method to launch multiple tasks that perform rollouts and compute gradients in parallel:

model_id = ray.put(model)
actions = []
# Launch tasks to compute gradients from multiple rollouts in parallel.
for i in range(batch_size):
    action_id = actors[i].compute_gradient.remote(model_id)
    actions.append(action_id)
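
The gradients can then be retrieved and accumulated as the rollouts finish, using ray.wait and ray.get, just as in the full code at the end (grad_buffer is the per-batch gradient accumulator defined there):

for i in range(batch_size):
    # Wait for one rollout to finish and fetch its result.
    action_id, actions = ray.wait(actions)
    grad, reward_sum = ray.get(action_id[0])
    # Accumulate the gradient over the batch.
    for k in model:
        grad_buffer[k] += grad[k]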

Troubleshooting

If you are not seeing any speedup from Ray (and assuming you are using a multicore machine), the problem may be that numpy is trying to use multiple threads. When many processes each try to use multiple threads, the result is often no speedup at all. When running this example, try opening top and checking whether some python processes are using more than 100% CPU. If so, that is probably the problem.

The example tries to set MKL_NUM_THREADS=1 inside the actors. However, that only helps if numpy on your machine is actually built against MKL. If it is using OpenBLAS, you need to set OPENBLAS_NUM_THREADS to 1 instead. In practice, you may have to do this before running the script (and possibly before numpy is imported).
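One way to do this (a minimal sketch, assuming your numpy build uses OpenBLAS) is to set the environment variable at the very top of the driver script, before numpy is imported; alternatively, export the variable in your shell before launching the script.

import os

# Must be set before numpy is imported so that OpenBLAS picks it up.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np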
The full code for this example:

# This code is copied and adapted from Andrej Karpathy's code for learning to
# play Pong https://gist.github.com/karpathy/a4166c7fe253700972fcbc77e4ea32c5.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import numpy as np
import os
import ray
import time

import gym

# Define some hyperparameters.

# The number of hidden layer neurons.
H = 200
learning_rate = 1e-4
# Discount factor for reward.
gamma = 0.99
# The decay factor for RMSProp leaky sum of grad^2.
decay_rate = 0.99

# The input dimensionality: 80x80 grid.
D = 80 * 80


def sigmoid(x):
    # Sigmoid "squashing" function to interval [0, 1].
    return 1.0 / (1.0 + np.exp(-x))


def preprocess(img):
    """Preprocess 210x160x3 uint8 frame into 6400 (80x80) 1D float vector."""
    # Crop the image.
    img = img[35:195]
    # Downsample by factor of 2.
    img = img[::2, ::2, 0]
    # Erase background (background type 1).
    img[img == 144] = 0
    # Erase background (background type 2).
    img[img == 109] = 0
    # Set everything else (paddles, ball) to 1.
    img[img != 0] = 1
    return img.astype(np.float).ravel()


def discount_rewards(r):
    """take 1D float array of rewards and compute discounted reward"""
    discounted_r = np.zeros_like(r)
    running_add = 0
    for t in reversed(range(0, r.size)):
        # Reset the sum, since this was a game boundary (pong specific!).
        if r[t] != 0:
            running_add = 0
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r
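
# Worked example (with gamma = 0.99): for rewards [0.0, 0.0, 1.0],
# discount_rewards returns [0.9801, 0.99, 1.0]. The running sum is reset at
# each nonzero reward (a Pong game boundary), so discounted rewards from one
# game do not leak into the previous one.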


def policy_forward(x, model):
    h = np.dot(model["W1"], x)
    h[h < 0] = 0  # ReLU nonlinearity.
    logp = np.dot(model["W2"], h)
    p = sigmoid(logp)
    # Return probability of taking action 2, and hidden state.
    return p, h


def policy_backward(eph, epx, epdlogp, model):
    """backward pass. (eph is array of intermediate hidden states)"""
    dW2 = np.dot(eph.T, epdlogp).ravel()
    dh = np.outer(epdlogp, model["W2"])
    # Backprop relu.
    dh[eph <= 0] = 0
    dW1 = np.dot(dh.T, epx)
    return {"W1": dW1, "W2": dW2}


@ray.remote
class PongEnv(object):
    def __init__(self):
        # Tell numpy to only use one core. If we don't do this, each actor may
        # try to use all of the cores and the resulting contention may result
        # in no speedup over the serial version. Note that if numpy is using
        # OpenBLAS, then you need to set OPENBLAS_NUM_THREADS=1, and you
        # probably need to do it from the command line (so it happens before
        # numpy is imported).
        os.environ["MKL_NUM_THREADS"] = "1"
        self.env = gym.make("Pong-v0")

    def compute_gradient(self, model):
        # Reset the game.
        observation = self.env.reset()
        # Note that prev_x is used in computing the difference frame.
        prev_x = None
        xs, hs, dlogps, drs = [], [], [], []
        reward_sum = 0
        done = False
        while not done:
            cur_x = preprocess(observation)
            x = cur_x - prev_x if prev_x is not None else np.zeros(D)
            prev_x = cur_x

            aprob, h = policy_forward(x, model)
            # Sample an action.
            action = 2 if np.random.uniform() < aprob else 3

            # The observation.
            xs.append(x)
            # The hidden state.
            hs.append(h)
            y = 1 if action == 2 else 0  # A "fake label".
            # The gradient that encourages the action that was taken to be
            # taken (see http://cs231n.github.io/neural-networks-2/#losses if
            # confused).
            dlogps.append(y - aprob)

            observation, reward, done, info = self.env.step(action)
            reward_sum += reward

            # Record reward (has to be done after we call step() to get reward
            # for previous action).
            drs.append(reward)

        epx = np.vstack(xs)
        eph = np.vstack(hs)
        epdlogp = np.vstack(dlogps)
        epr = np.vstack(drs)
        # Reset the array memory.
        xs, hs, dlogps, drs = [], [], [], []

        # Compute the discounted reward backward through time.
        discounted_epr = discount_rewards(epr)
        # Standardize the rewards to be unit normal (helps control the gradient
        # estimator variance).
        discounted_epr -= np.mean(discounted_epr)
        discounted_epr /= np.std(discounted_epr)
        # Modulate the gradient with advantage (the policy gradient magic
        # happens right here).
        epdlogp *= discounted_epr
        return policy_backward(eph, epx, epdlogp, model), reward_sum


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Train an RL agent on Pong.")
    parser.add_argument(
        "--batch-size",
        default=10,
        type=int,
        help="The number of rollouts to do per batch.")
    parser.add_argument(
        "--redis-address",
        default=None,
        type=str,
        help="The Redis address of the cluster.")
    parser.add_argument(
        "--iterations",
        default=-1,
        type=int,
        help="The number of model updates to perform. By "
        "default, training will not terminate.")
    args = parser.parse_args()
    batch_size = args.batch_size

    ray.init(redis_address=args.redis_address)

    # Run the reinforcement learning.

    running_reward = None
    batch_num = 1
    model = {}
    # "Xavier" initialization.
    model["W1"] = np.random.randn(H, D) / np.sqrt(D)
    model["W2"] = np.random.randn(H) / np.sqrt(H)
    # Update buffers that add up gradients over a batch.
    grad_buffer = {k: np.zeros_like(v) for k, v in model.items()}
    # Update the rmsprop memory.
    rmsprop_cache = {k: np.zeros_like(v) for k, v in model.items()}
    actors = [PongEnv.remote() for _ in range(batch_size)]
    iteration = 0
    while iteration != args.iterations:
        iteration += 1
        model_id = ray.put(model)
        actions = []
        # Launch tasks to compute gradients from multiple rollouts in parallel.
        start_time = time.time()
        for i in range(batch_size):
            action_id = actors[i].compute_gradient.remote(model_id)
            actions.append(action_id)
        for i in range(batch_size):
            action_id, actions = ray.wait(actions)
            grad, reward_sum = ray.get(action_id[0])
            # Accumulate the gradient over batch.
            for k in model:
                grad_buffer[k] += grad[k]
            running_reward = (reward_sum if running_reward is None else
                              running_reward * 0.99 + reward_sum * 0.01)
        end_time = time.time()
        print("Batch {} computed {} rollouts in {} seconds, "
              "running mean is {}".format(batch_num, batch_size,
                                          end_time - start_time,
                                          running_reward))
        for k, v in model.items():
            g = grad_buffer[k]
            rmsprop_cache[k] = (
                decay_rate * rmsprop_cache[k] + (1 - decay_rate) * g**2)
            model[k] += learning_rate * g / (np.sqrt(rmsprop_cache[k]) + 1e-5)
            # Reset the batch gradient buffer.
            grad_buffer[k] = np.zeros_like(v)
        batch_num += 1