ar_track_alvar

ar_track_alvar is a ROS package for advanced AR marker tracking. It can generate AR markers of various sizes and resolutions, identify and track the pose of individual or bundled markers, and automatically compute the spatial relationships between the markers in a bundle.

http://wiki.ros.org/ar_track_alvar

https://github.com/ros-perception/ar_track_alvar

Overview

This package is a ROS wrapper for Alvar, an open source AR tag tracking library.

ar_track_alvar has 4 main functionalities:

  1. Generating AR tags of varying size, resolution, and data/ID encoding
  2. Identifying and tracking the pose of individual AR tags, optionally integrating kinect depth data (when a kinect is available) for better pose estimates.
  3. Identifying and tracking the pose of "bundles" consisting of multiple tags. This allows for more stable pose estimates, robustness to occlusions, and tracking of multi-sided objects.
  4. Using camera images to automatically calculate spatial relationships between tags in a bundle, so that the user does not have to manually measure and enter tag locations in an XML file to use the bundle functionality (NOT CURRENTLY WORKING -- SEE BELOW).

Alvar is significantly newer and more advanced than the ARToolkit, which has been the basis for several other ROS AR tag packages. Alvar features adaptive thresholding to handle a variety of lighting conditions, optical flow based tracking for more stable pose estimation, and an improved tag identification method that does not significantly slow down as the number of tags increases.

 

Installation

 

$ sudo apt-get install ros-indigo-ar-track-alvar
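If a prebuilt package is not available for your ROS distribution, the package can also be built from source using the GitHub repository linked above. A minimal sketch, assuming a catkin workspace at ~/catkin_ws:

  $ cd ~/catkin_ws/src
  $ git clone https://github.com/ros-perception/ar_track_alvar.git
  $ cd ~/catkin_ws
  $ rosdep install --from-paths src --ignore-src -r -y   # pull in system dependencies
  $ catkin_make
  $ source devel/setup.bash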

 

Generating AR tags

Two pdf files are in the markers directory, containing tags 0-8 and 9-17, respectively. The markers are 4.5 cm wide (although when printed and measured, they came out to 4.4 cm for me). If you want to generate your own markers with different ID numbers, border widths, or sizes, run:

 


rosrun ar_track_alvar createMarker

and instructions will appear describing the various options.
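For example, the following would generate marker 0 with the default settings, and marker 1 with a 5.0 cm edge length (the -s option here mirrors the size flag of the underlying Alvar marker creator; confirm the exact flag names against the printed instructions):

  rosrun ar_track_alvar createMarker 0
  rosrun ar_track_alvar createMarker -s 5.0 1

Each invocation saves an image (e.g. MarkerData_0.png) in the current directory, which can then be printed at the requested physical size.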

You can also download the aforementioned marker files here: ar_track_alvar/markers0to8.png and ar_track_alvar/markers9to17.png

 

Use cases by number of tags

 

Detecting individual tags

[Images: ar_track_alvar/indiv1.png, ar_track_alvar/indiv2.png]

The first use case for this package is to identify and track the poses of (possibly) multiple AR tags that are each considered individually. The node individualMarkers takes the following command line arguments:

  1. marker_size (double) -- The width in centimeters of one side of the black square marker border

  2. max_new_marker_error (double) -- A threshold determining when new markers can be detected under uncertainty

  3. max_track_error (double) -- A threshold determining how much tracking error can be observed before a tag is considered to have disappeared

  4. camera_image (string) -- The name of the topic that provides camera frames for detecting the AR tags. This can be mono or color, but should be an UNrectified image, since rectification takes place in this package

  5. camera_info (string) -- The name of the topic that provides the camera calibration parameters so that the image can be rectified

  6. output_frame (string) -- The name of the frame that the published Cartesian locations of the AR tags will be relative to

individualMarkers assumes that a kinect is being used as the camera, so that depth data can be integrated for better pose estimates. If you are not using a kinect or do not wish to use the depth data improvements, use individualMarkersNoKinect instead.

When using this package with the PR2, you can simply type:

 


roslaunch ar_track_alvar pr2_indiv.launch

A concrete example can be found in the development repo (Indigo onward / Kinetic onward). Otherwise, this launch file can be used as an example of how to set these parameters in a reasonable way and start the node; a minimal sketch of such a file is shown below.
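The sketch below uses individualMarkersNoKinect, follows the argument order listed above, and assumes hypothetical camera topics /camera/image_raw and /camera/camera_info; compare it with the pr2_indiv launch file shipped with the package before relying on it.

  <launch>
    <arg name="marker_size"          default="4.4" />
    <arg name="max_new_marker_error" default="0.08" />
    <arg name="max_track_error"      default="0.2" />
    <arg name="cam_image_topic"      default="/camera/image_raw" />
    <arg name="cam_info_topic"       default="/camera/camera_info" />
    <arg name="output_frame"         default="/camera_link" />

    <!-- Pass the six parameters, in the order documented above, as command line arguments -->
    <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkersNoKinect"
          respawn="false" output="screen"
          args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error)
                $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" />
  </launch>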

 

Detecting multi-tag bundles

[Images: ar_track_alvar/multi1.png, ar_track_alvar/multi2.png]

Sometimes it is advantageous to treat "bundles" of multiple tags as a single unit. For example, this can allow for the estimation of the pose of a many-sided object, even when some of the tags cannot be seen. It can also lead to more stable pose estimates and provide robustness to occlusion. A tag bundle is defined by an XML file that lists a set of tag IDs and their positions relative to a "master" tag. The master tag always comes first in the XML file and defines a coordinate system for the rest of the tags. IMPORTANT: this coordinate system is different from the standard system used in ROS! In this system, when facing the tag, positive-z comes out of the front of the tag toward the viewer, positive-x is to the right, and positive-y is up.

To create a bundle, first choose which tag you want to be the master tag. Treat the center of the master tag as (0,0,0). Then, after placing the rest of the tags, measure the x, y, and z coordinates of each of the 4 corners of all of the tags, relative to the master tag origin. Enter these measurements for each tag into the XML file, starting with the lower-left corner and progressing counter-clockwise around the tag. An example XML file showing this for a tagged table leg can be found in the bundles directory (Indigo onward / Kinetic onward); a schematic example is also shown below.
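As an illustration only, a two-tag bundle file might look like the sketch below, where marker 8 is the master tag and all coordinates are made-up measurements in the same units as marker_size (centimeters); each tag's corners are listed lower-left first and proceed counter-clockwise. Consult the files in the bundles directory for authoritative examples.

  <multimarker markers="2">
    <marker index="8" status="1">
      <!-- master tag: corners relative to its own center, z = 0 in its plane -->
      <corner x="-2.2" y="-2.2" z="0" />
      <corner x="2.2"  y="-2.2" z="0" />
      <corner x="2.2"  y="2.2"  z="0" />
      <corner x="-2.2" y="2.2"  z="0" />
    </marker>
    <marker index="9" status="1">
      <!-- second tag: corners measured in the master tag's coordinate system -->
      <corner x="-2.2" y="-9.7" z="-4.1" />
      <corner x="2.2"  y="-9.7" z="-4.1" />
      <corner x="2.2"  y="-5.3" z="-4.1" />
      <corner x="-2.2" y="-5.3" z="-4.1" />
    </marker>
  </multimarker>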

Multiple bundles can be detected at once. There should be one file for each bundle that is to be detected. The node findMarkerBundles takes the following command line parameters:

  1. marker_size (double) -- The width in centimeters of one side of the black square that makes up a marker

  2. max_new_marker_error (double) -- A threshold determining when new markers can be detected under uncertainty

  3. max_track_error (double) -- A threshold determining how much tracking error can be observed before a tag is considered to have disappeared

  4. camera_image (string) -- The name of the topic that provides camera frames for detecting the AR tags. This can be mono or color, but should be an UNrectified image, since rectification takes place in this package

  5. camera_info (string) -- The name of the topic that provides the camera calibration parameters so that the image can be rectified

  6. output_frame (string) -- The name of the frame that the published Cartesian locations of the AR tags will be relative to

  7. bundle_files (multiple strings) -- A list of XML file names, one for each bundle you wish to detect.

findMarkerBundles assumes that a kinect is being used as the camera, so that depth data can be integrated for better pose estimates. If you are not using a kinect or do not wish to use the depth data improvements, use findMarkerBundlesNoKinect instead.

When using this package with the PR2, you can simply type:

 


roslaunch ar_track_alvar pr2_bundle.launch

Otherwise, this file (Indigo onward / Kinetic onward) can be used as an example of how to set these parameters in a reasonable way and start the node; a minimal sketch of such a file is shown below.
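The sketch below uses findMarkerBundlesNoKinect; the camera topics and the bundle file name (my_bundle.xml) are hypothetical, and the bundle XML paths are simply appended after the other six arguments. Compare it with the pr2_bundle launch file shipped with the package before relying on it.

  <launch>
    <arg name="marker_size"          default="4.4" />
    <arg name="max_new_marker_error" default="0.08" />
    <arg name="max_track_error"      default="0.2" />
    <arg name="cam_image_topic"      default="/camera/image_raw" />
    <arg name="cam_info_topic"       default="/camera/camera_info" />
    <arg name="output_frame"         default="/camera_link" />
    <arg name="bundle_files"         default="$(find ar_track_alvar)/bundles/my_bundle.xml" />

    <!-- bundle_files may list several XML files, one per bundle to detect -->
    <node name="ar_track_alvar" pkg="ar_track_alvar" type="findMarkerBundlesNoKinect"
          respawn="false" output="screen"
          args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error)
                $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame) $(arg bundle_files)" />
  </launch>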

 

ROS API

 

Subscribed topic (common among usecases)

(e.g.) /kinect_head/rgb/camera_info (sensor_msgs/CameraInfo)

  • The camera needs to be calibrated before this topic becomes available.

(any topic in an image format) (sensor_msgs/Image)

  • Image to be analyzed for markers.

 

API for individual markers

Published Topics

visualization_marker (visualization_msgs/Marker)

  • This is an rviz message that when subscribed to (as a Marker in rviz), will display a colored square block at the location of each identified AR tag, and will also overlay these blocks in a camera image. Currently, it is set to display a unique color for markers 0-5 and a uniform color for all others.

ar_pose_marker (ar_track_alvar/AlvarMarkers)

  • This is a list of the poses of all the observed AR tags, with respect to the output frame

Provided tf Transforms

Camera frame (from Camera info topic param) → AR tag frame

  • Provides a transform from the camera frame to each AR tag frame, named ar_marker_x, where x is the ID number of the tag.
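Both outputs can be inspected from the command line while the node is running; a quick sketch, assuming a visible tag with ID 0 and a camera frame named camera_link (both hypothetical here):

  # Poses of all currently visible tags, relative to the output frame
  rostopic echo /ar_pose_marker

  # Transform from the camera frame to the frame published for tag 0
  rosrun tf tf_echo camera_link ar_marker_0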

 

API for detecting multiple markers

Published Topics

visualization_marker (visualization_msgs/Marker)

  • This is an rviz message that when subscribed to (as a Marker in rviz), will display a red square block at the location of each identified "master" AR tag (one per bundle), a green square block at the location of each other tag, and will also overlay these blocks in a camera image.

ar_pose_marker (ar_track_alvar/AlvarMarkers)

  • This is a list of the poses of all the observed "master" AR tags (one per bundle), with respect to the output frame

Provided tf Transforms

Camera frame (from Camera info topic param) → AR tag frame

  • Provides a transform from the camera frame to each "master" AR tag frame, named ar_marker_x, where x is the ID number of the tag.

 

Automatic XML bundle file generation

Rather than having to measure tag placements and construct an XML file by hand for each bundle, Alvar has a feature that allows this to be done automatically from multiple images in which 2 or more tags are visible. Running ./bin/trainMarkerBundle (see pr2_train.launch for an example) provides a keyboard interface for capturing images and optimizing the bundle structure. However, I was not able to get it working reliably. It would be great if someone took this up as a small project. Some tips from another group that uses Alvar can be found here.
