RVT: Robotic View Transformer for 3D Object Manipulation

Even though PerAct achieves impressive performance, it uses a voxel-based representation of the scene, which limits its scalability. RVT addresses this limitation by proposing a novel multi-view representation for encoding the scene.

Published in: PMLR 2023

Paper: https://proceedings.mlr.press/v229/goyal23a/goyal23a.pdf

Affiliation: NVIDIA

Motivation: For 3D object manipulation, methods that build an explicit 3D representation of the scene perform better than those that rely only on camera images. However, explicit 3D representations such as voxels come at a large computational cost: creating and reasoning over voxels is more expensive than reasoning over images, since the number of voxels scales cubically with resolution rather than quadratically as image pixels do. This hurts scalability.

Approach: In this work, we propose RVT, a multi-view transformer for 3D object manipulation that is both scalable and accurate. Key features of RVT are an attention mechanism for aggregating information across views, and the re-rendering of the camera input from virtual views around the robot workspace.

Method: At its core, RVT is a view-based method that leverages the transformer architecture. It jointly attends to multiple views of the scene and aggregates information across the views. It then produces view-wise heatmaps and features that are used to predict the robot end-effector pose.

Model structure: Given RGB-D input from the sensors, we first construct a point cloud of the scene. The point cloud is then used to render virtual images around the robot workspace. The virtual images are fed into a multi-view transformer to predict view-specific features, which are combined to predict the action in 3D.
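A minimal sketch of this pipeline is shown below; all four stage functions (`build_point_cloud`, `render_virtual_views`, `multi_view_transformer`, `decode_action`) are hypothetical placeholders, not the authors' code.

```python
def rvt_forward(rgbd_obs, lang_goal, gripper_state,
                build_point_cloud, render_virtual_views,
                multi_view_transformer, decode_action):
    """High-level RVT pipeline sketch; the four callables are hypothetical stand-ins."""
    # 1. Fuse the RGB-D observations into a single scene point cloud.
    points, colors = build_point_cloud(rgbd_obs)
    # 2. Re-render the point cloud from fixed virtual views around the workspace.
    virtual_images = render_virtual_views(points, colors)   # (num_views, 7, H, W)
    # 3. Jointly attend over views, language goal, and gripper state.
    view_features = multi_view_transformer(virtual_images, lang_goal, gripper_state)
    # 4. Decode per-view heatmaps / global features into an 8-D action.
    return decode_action(view_features)
```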

Unlike most prior work, we do not train on a large dataset. RVT learns efficiently from a small set of demonstrations, processes multiple views as visual input, and fuses information from the language goal to solve multiple manipulation tasks.

Implementation details: The input consists of (1) a language description of the task, (2) the current visual state (from RGB-D cameras), and (3) the current gripper state (open or closed). The model predicts an action specified by the target end-effector pose and gripper state at the next keyframe (rather than the output of an inverse-kinematics solver). Keyframes capture important or bottleneck steps of the gripper during task execution [55], such as a pre-pick, grasp, or place pose. Given the target gripper pose from RVT, we use FrankaPy [63] for trajectory generation and feedback control to move the robot to the target.

Rendering: The first step is the re-rendering of the camera input. Given RGB-D images captured by one or more sensor cameras, we first reconstruct a point cloud of the scene. The point cloud is then re-rendered from a set of virtual viewpoints anchored in space around the robot base. Specifically, for each view we render three image maps with a total of 7 channels: (1) RGB (3 channels), (2) depth (1 channel), and (3) the (x, y, z) coordinates of the points in the world frame (3 channels). This re-rendering step decouples the input camera images from the images fed to the transformer.
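As a rough illustration of the 7-channel layout, the sketch below assembles one virtual image, assuming the renderer has already produced per-pixel RGB, depth, and world-frame coordinate buffers; the channel ordering and channels-first output are assumptions.

```python
import numpy as np

def assemble_virtual_image(rgb, depth, world_xyz):
    """Stack the three rendered maps into one 7-channel virtual image.

    rgb:       (H, W, 3) colors re-rendered from the point cloud
    depth:     (H, W)    depth from the virtual viewpoint
    world_xyz: (H, W, 3) world-frame (x, y, z) of each rendered point
    """
    image = np.concatenate([rgb, depth[..., None], world_xyz], axis=-1)  # (H, W, 7)
    return image.transpose(2, 0, 1)  # channels-first for the transformer
```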

Joint Transformer

  • For language, we use pretrained CLIP [57] embeddings (ResNet-50 variant).

  • For the virtual images, we break each of them into 20 × 20 patches and pass them through a multi-layer perceptron (MLP) to produce image tokens, similar to ViT.

  • For the gripper state, similar to PerAct [6], we pass it through an MLP and concatenate it to the image tokens.
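A minimal PyTorch sketch of the token preparation described in the bullets above; the module name, hidden dimensions, and exact patching scheme are illustrative assumptions, not the released implementation.

```python
import torch.nn as nn

class TokenPrep(nn.Module):
    """Sketch of RVT-style input tokenization (dimensions are illustrative)."""

    def __init__(self, patch=20, img_ch=7, dim=256, lang_dim=1024, grip_dim=4):
        super().__init__()
        # Split each virtual image into patches and embed them with an MLP.
        self.patchify = nn.Unfold(kernel_size=patch, stride=patch)
        self.patch_mlp = nn.Sequential(
            nn.Linear(img_ch * patch * patch, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.lang_proj = nn.Linear(lang_dim, dim)   # project pretrained CLIP embeddings
        self.grip_mlp = nn.Linear(grip_dim, dim)    # embed the gripper state

    def forward(self, virtual_imgs, clip_lang_emb, gripper_state):
        # virtual_imgs: (B, num_views, 7, H, W)
        B, V, C, H, W = virtual_imgs.shape
        patches = self.patchify(virtual_imgs.flatten(0, 1))    # (B*V, C*p*p, N)
        img_tokens = self.patch_mlp(patches.transpose(1, 2))   # (B*V, N, dim)
        img_tokens = img_tokens.reshape(B, V, -1, img_tokens.shape[-1])
        lang_tokens = self.lang_proj(clip_lang_emb)             # (B, L, dim)
        grip_token = self.grip_mlp(gripper_state)               # (B, dim); concatenated
        return img_tokens, lang_tokens, grip_token              # with image tokens downstream
```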

RVT has eight self-attention layers. The first four layers process each image on its own: image tokens are only allowed to attend to other tokens from the same image, so the network first processes individual images before sharing information across them. We then concatenate all image tokens with the language tokens. In the last four layers, the attention layers are allowed to propagate and accumulate information across the different images and the text. Finally, the image tokens are rearranged back into their original spatial configuration, yielding per-image feature channels.
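The two-stage attention schedule could be sketched as follows, using stock `nn.TransformerEncoderLayer` blocks as stand-ins for the paper's attention layers; the layer counts match the description above, everything else is an assumption.

```python
import torch
import torch.nn as nn

class TwoStageAttention(nn.Module):
    """First attend within each view, then jointly over all views plus language."""

    def __init__(self, dim=256, heads=8, intra_layers=4, joint_layers=4):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.intra = nn.ModuleList([make_layer() for _ in range(intra_layers)])
        self.joint = nn.ModuleList([make_layer() for _ in range(joint_layers)])

    def forward(self, img_tokens, lang_tokens):
        # img_tokens: (B, V, N, dim); lang_tokens: (B, L, dim)
        B, V, N, D = img_tokens.shape
        x = img_tokens.flatten(0, 1)              # treat each view independently
        for blk in self.intra:                    # tokens only see their own image
            x = blk(x)
        x = x.reshape(B, V * N, D)
        x = torch.cat([x, lang_tokens], dim=1)    # append language tokens
        for blk in self.joint:                    # information flows across views + text
            x = blk(x)
        img_out = x[:, :V * N].reshape(B, V, N, D)  # back to per-view layout
        return img_out
```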

Action prediction: RVT outputs an 8-dimensional action: the 6-DoF target end-effector pose (3-DoF for translation and 3-DoF for rotation), the 1-DoF gripper state (open or close), and a binary indicator of whether the low-level motion planner is allowed to collide. For the end-effector translation, we first predict a heatmap for each view from that view's features produced by the joint transformer (as shown in Fig. 5 in the appendix). The heatmaps from the different views are then back-projected to score a dense set of discrete 3D points covering the robot workspace. Finally, the end-effector translation is determined by the 3D point with the highest score. The gripper state and the motion-planner collision indicator are represented as binary variables.
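A minimal sketch of this back-projection step, assuming a hypothetical `project(points, view_idx)` helper that maps world-frame points to pixel coordinates of a given virtual view; bilinear sampling and normalization details are simplified.

```python
import torch

def score_3d_points(heatmaps, candidate_points, project):
    """Sum each candidate 3D point's heatmap values across all virtual views.

    heatmaps:         (V, H, W) per-view translation heatmaps
    candidate_points: (P, 3) discrete 3D points covering the workspace
    project(points, view_idx) -> (P, 2) pixel coords (hypothetical helper)
    """
    V, H, W = heatmaps.shape
    scores = torch.zeros(candidate_points.shape[0])
    for v in range(V):
        uv = project(candidate_points, v).long()   # (P, 2) as (col, row)
        col = uv[:, 0].clamp(0, W - 1)
        row = uv[:, 1].clamp(0, H - 1)
        scores += heatmaps[v, row, col]            # accumulate per-view score
    best = candidate_points[scores.argmax()]       # predicted end-effector translation
    return best, scores
```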

The rotation, gripper state, and collision indicator are predicted from global features. (The global features are the concatenation of (1) the image features summed along the spatial dimensions, weighted by the predicted translation heatmaps, and (2) the image features max-pooled along the spatial dimensions.)
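A sketch of how such global features could be assembled from the per-view feature maps; the shapes and the assumption that the heatmaps are already normalized are illustrative.

```python
import torch

def global_features(feat_maps, heatmaps):
    """Combine heatmap-weighted sums and max-pooled features across views.

    feat_maps: (V, C, H, W) per-view image features from the joint transformer
    heatmaps:  (V, 1, H, W) predicted translation heatmaps (assumed normalized)
    """
    weighted = (feat_maps * heatmaps).sum(dim=(2, 3))  # (V, C) heatmap-weighted sum
    pooled = feat_maps.amax(dim=(2, 3))                # (V, C) spatial max pooling
    return torch.cat([weighted.flatten(), pooled.flatten()])  # single global vector
```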

Loss Function

  • For the heatmaps, we use a cross-entropy loss for each image. The ground-truth heatmap is a truncated Gaussian distribution centered at the 2D projection of the ground-truth 3D location.

  • For rotation, we use cross-entropy loss for each of the Euler angles. We use binary classification loss for the gripper state and collision indicator.
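Putting the terms above together, a combined objective could look like the sketch below; equal loss weights, the rotation-bin discretization, and the tensor shapes are assumptions.

```python
import torch.nn.functional as F

def rvt_loss(heatmap_logits, heatmap_targets,
             euler_logits, euler_targets,
             grip_logit, grip_target, collide_logit, collide_target):
    """Sketch of the combined objective (loss weights omitted / assumed equal).

    heatmap_logits:  (V, H*W) per-view logits; heatmap_targets: (V, H*W) truncated-
                     Gaussian distributions around the projected ground-truth point
    euler_logits:    (3, num_bins) one classifier per Euler angle
    euler_targets:   (3,) ground-truth rotation bin indices
    """
    # Cross-entropy against the soft target distribution of each view's heatmap.
    hm_loss = -(heatmap_targets * F.log_softmax(heatmap_logits, dim=-1)).sum(-1).mean()
    # Cross-entropy over discretized rotation bins for each Euler angle.
    rot_loss = F.cross_entropy(euler_logits, euler_targets)
    # Binary classification for gripper open/close and the collision indicator.
    bin_loss = (F.binary_cross_entropy_with_logits(grip_logit, grip_target) +
                F.binary_cross_entropy_with_logits(collide_logit, collide_target))
    return hm_loss + rot_loss + bin_loss
```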

Experiments

Simulation: We follow the simulation setup of PerAct [6], where CoppeliaSim [59] is used to simulate various RLBench [7] tasks. The visual observations are captured from four noiseless RGB-D cameras.

Real-world: We experiment on a tabletop setup with a statically mounted Franka Panda arm and an Azure Kinect (RGB-D) camera. Given the target gripper pose from RVT, we use FrankaPy [63] for trajectory generation and feedback control to move the robot to the target. We first collect a dataset for training RVT through human demonstrations.

Conclusions

Ablation findings:

  1. As expected, rendering the virtual images at higher resolution helps: RVT with a virtual-image resolution of 220 outperforms the variant with resolution 100.

  2. Adding the corresponding point-cloud coordinate channels for the different views helps.

  3. Adding the depth channel on top of the RGB channels helps.

  4. Independently processing the tokens from each individual image before merging all image tokens helps.

  5. For both the cube arrangement and the real camera locations, rendering images with orthographic projection performs better than perspective projection.

  6. Applying 3D rotation augmentation to the point cloud before rendering helps.

  7. The model with 5 views around the cube (Fig. 3(a)) performs better than the one with 3 views around the cube (front, top, left).

  8. RVT performs better on the re-rendered images than on the sensor camera images (Tab. 2 (left), second-to-last row).

Summary: RVT requires only a few demonstrations: in total, we collected 51 demonstration sequences over all 5 tasks (roughly 10 per task).

We extensively explore the design of the multi-view architecture and report several useful findings. For example, we observe better performance when the transformer is forced to first attend over patches within the same image before the patches are concatenated for joint attention. Another key innovation is that, unlike prior view-based methods, we decouple the camera images from the images fed to the transformer by re-rendering from virtual views. This gives us control over the rendering process and leads to several benefits.
