Preparing Your Data for Use with robot_localization


Before getting started with the state estimation nodes in robot_localization, it is important that users ensure that their sensor data is well-formed. There are various considerations for each class of sensor data, and users are encouraged to read this tutorial in its entirety before attempting to use robot_localization.


For additional information, users are encouraged to watch this presentation from ROSCon 2015.


Adherence to ROS Standards

The two most important ROS REPs to consider are:

  • REP-103 (Standard Units of Measure and Coordinate Conventions)
  • REP-105 (Coordinate Frame Conventions)

Users who are new to ROS or state estimation are encouraged to read over both REPs, as doing so will almost certainly aid them in preparing their sensor data. robot_localization attempts to adhere to these standards as much as possible.


Also, it will likely benefit users to look over the specifications for each of the supported ROS message types:

  • nav_msgs/Odometry
  • geometry_msgs/PoseWithCovarianceStamped
  • geometry_msgs/TwistWithCovarianceStamped
  • sensor_msgs/Imu

Coordinate Frames and Transforming Sensor Data

REP-105 specifies four principal coordinate frames: base_link, odom, map, and earth. The base_link frame is rigidly affixed to the robot. The map and odom frames are world-fixed frames whose origins are typically aligned with the robot’s start position. The earth frame is used to provide a common reference frame for multiple map frames (e.g., for robots distributed over a large area). The earth frame is not relevant to this tutorial.


The state estimation nodes of robot_localization produce a state estimate whose pose is given in the map or odom frame and whose velocity is given in the base_link frame. All incoming data is transformed into one of these coordinate frames before being fused with the state. The data in each message type is transformed as follows:


  • nav_msgs/Odometry - All pose data (position and orientation) is transformed from the message header’s frame_id into the coordinate frame specified by the world_frame parameter (typically map or odom). In the message itself, this specifically refers to everything contained within the pose property. All twist data (linear and angular velocity) is transformed from the child_frame_id of the message into the coordinate frame specified by the base_link_frame parameter (typically base_link).
  • geometry_msgs/PoseWithCovarianceStamped - Handled in the same fashion as the pose data in the Odometry message.
  • geometry_msgs/TwistWithCovarianceStamped - Handled in the same fashion as the twist data in the Odometry message.
  • sensor_msgs/Imu - The IMU message is currently subject to some ambiguity, though this is being addressed by the ROS community. Most IMUs natively report orientation data in a world-fixed frame whose X and Z axes are defined by the vectors pointing to magnetic north and the center of the earth, respectively, with the Y axis facing east (90 degrees offset from the magnetic north vector). This frame is often referred to as NED (North, East, Down). However, REP-103 specifies an ENU (East, North, Up) coordinate frame for outdoor navigation. As of this writing, robot_localization assumes an ENU frame for all IMU data, and does not work with NED frame data. This may change in the future, but for now, users should ensure that data is transformed to the ENU frame before using it with any node in robot_localization.
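A full NED-to-ENU conversion operates on the whole orientation quaternion, but the heading component illustrates the idea. The following is a minimal sketch (not robot_localization's own code) converting a compass-style NED yaw (0 = north, clockwise-positive) to the ENU yaw REP-103 expects (0 = east, counter-clockwise-positive):

```python
import math

def ned_yaw_to_enu_yaw(yaw_ned):
    """Convert an NED heading (0 = north, clockwise-positive) to an
    ENU yaw (0 = east, counter-clockwise-positive), per REP-103.

    The relationship is yaw_enu = pi/2 - yaw_ned.
    """
    yaw_enu = math.pi / 2.0 - yaw_ned
    # Normalize the result to [-pi, pi]
    return math.atan2(math.sin(yaw_enu), math.cos(yaw_enu))
```

For example, a vehicle heading due north (NED yaw 0) has an ENU yaw of +pi/2, and one heading due east (NED yaw +pi/2) has an ENU yaw of 0.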

The IMU may also be oriented on the robot in a position other than its “neutral” position. For example, the user may mount the IMU on its side, or rotate it so that it faces a direction other than the front of the robot. This offset is typically specified by a static transform from the base_link_frame parameter to the IMU message’s frame_id. The state estimation nodes in robot_localization will automatically correct for the orientation of the sensor so that its data aligns with the frame specified by the base_link_frame parameter.

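As a sketch of how such a mounting offset might be published in a ROS 1 launch file (the frame name imu_link and the 0.1 m Z offset are hypothetical; adjust to match your IMU message's frame_id and mounting), a static transform for an IMU rotated +90 degrees in roll could look like:

```xml
<!-- Hypothetical mounting: IMU 0.1 m above base_link, rolled +90 degrees.
     Argument order for tf2_ros static_transform_publisher:
     x y z yaw pitch roll frame_id child_frame_id -->
<node pkg="tf2_ros" type="static_transform_publisher" name="base_to_imu"
      args="0 0 0.1 0 0 1.5708 base_link imu_link" />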

Handling tf_prefix

With the migration to tf2 as of ROS Indigo, robot_localization still allows for the use of the tf_prefix parameter, but, in accordance with tf2, all frame_id values will have any leading ‘/’ stripped.

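In code, the stripping behavior amounts to removing a single leading slash from tf1-style frame names. A minimal sketch (the function name is illustrative, not part of robot_localization's API):

```python
def normalize_frame_id(frame_id):
    """Strip one leading '/' from a tf1-style frame_id, as tf2 does."""
    return frame_id[1:] if frame_id.startswith('/') else frame_id
```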

Considerations for Each Sensor Message Type


Odometry

Many robot platforms come equipped with wheel encoders that provide instantaneous translational and rotational velocity. Many also internally integrate these velocities to generate a position estimate. If you are responsible for this data, or can edit it, keep the following in mind:


  1. Velocities/Poses: robot_localization can integrate velocities or absolute pose information. In general, the best practice is:

  • If the odometry provides both position and linear velocity, fuse the linear velocity.
  • If the odometry provides both orientation and angular velocity, fuse the orientation.
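This practice can be expressed directly in the state estimation node's configuration. A minimal sketch of an ekf_localization_node odometry input that fuses only the wheel odometry's velocities (the topic name /wheel_odometry is a placeholder):

```yaml
# Hypothetical fragment: fuse vx, vy, and vyaw from wheel odometry,
# ignoring its absolute pose. Boolean order:
#   x, y, z, roll, pitch, yaw,
#   vx, vy, vz, vroll, vpitch, vyaw,
#   ax, ay, az
odom0: /wheel_odometry
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]
```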

Note

If you have two sources of orientation data, then you’ll want to be careful. If both produce orientations with accurate covariance matrices, it’s safe to fuse the orientations. If, however, one or both under-reports its covariance, then you should only fuse the orientation data from the more accurate sensor. For the other sensor, use the angular velocity (if it’s provided), or continue to fuse the absolute orientation data, but turn differential mode on for that sensor.


  2. frame_id: See the section on coordinate frames and transforms above.
  3. Covariance: Covariance values matter to robot_localization. robot_pose_ekf attempts to fuse all pose variables in an odometry message. Some robots’ drivers have been written to accommodate its requirements. This means that if a given sensor does not produce a certain variable (e.g., a robot that doesn’t report Z position), then the only way to get robot_pose_ekf to ignore it is to inflate its variance to a very large value (on the order of 1e3) so that the variable in question is effectively ignored. This practice is unnecessary and even detrimental to the performance of robot_localization. The exception is the case where you have a second input source measuring the variable in question, in which case inflated covariances will work.

Note

See Configuring robot_localization and Migrating from robot_pose_ekf for more information.


  4. Signs: Adherence to REP-103 means that you need to ensure that the signs of your data are correct. For example, if you have a ground robot and turn it counter-clockwise, then its yaw angle should increase, and its yaw velocity should be positive. If you drive it forward, its X-position should increase and its X-velocity should be positive.
  5. Transforms: Broadcast of the odom->base_link transform. When the world_frame parameter is set to the value of the odom_frame parameter in the configuration file, robot_localization’s state estimation nodes output both a position estimate in a nav_msgs/Odometry message and a transform from the frame specified by its odom_frame parameter to its base_link_frame parameter. However, some robot drivers also broadcast this transform along with their odometry message. If users want robot_localization to be responsible for this transform, then they need to disable the broadcast of that transform by their robot’s driver. This is often exposed as a parameter.

IMU

In addition to the following, be sure to read the above section regarding coordinate frames and transforms for IMU data.

  1. Adherence to specifications: As with odometry, be sure your data adheres to REP-103 and the sensor_msgs/Imu specification. Double-check the signs of your data, and make sure the frame_id values are correct.
  2. Covariance: Echoing the advice for odometry, make sure your covariances make sense. Do not use large values to get the filter to ignore a given variable. Set the configuration for the variable you’d like to ignore to false.
  3. Acceleration: Be careful with acceleration data. The state estimation nodes in robot_localization assume that an IMU that is placed in its neutral, right-side-up position on a flat surface will:

  • Measure +9.81 meters per second squared for the Z axis.
  • If the sensor is rolled +90 degrees (left side up), the acceleration should be +9.81 meters per second squared for the Y axis.
  • If the sensor is pitched +90 degrees (front side down), it should read -9.81 meters per second squared for the X axis.
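These three cases follow from rotating the gravity reaction vector into the sensor frame. As a minimal sketch (pure Python, not robot_localization code; body rotation assumed to be R = Ry(pitch) · Rx(roll), so the static reading is Rᵀ · (0, 0, g)):

```python
import math

def _rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def _rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def _transpose_times(m, v):
    # Multiply the transpose of 3x3 matrix m by vector v.
    return [sum(m[r][c] * v[r] for r in range(3)) for c in range(3)]

def expected_static_accel(roll, pitch, g=9.81):
    """Acceleration an ideal, stationary ENU-convention IMU should report
    at the given roll/pitch (yaw does not affect a static reading)."""
    # Rotate the world-frame gravity reaction (0, 0, g) into the sensor frame:
    # reading = Rx(roll)^T * Ry(pitch)^T * (0, 0, g)
    return _transpose_times(_rot_x(roll),
                            _transpose_times(_rot_y(pitch), [0.0, 0.0, g]))
```

Evaluating the three cases above: neutral gives (0, 0, +9.81), roll +90 degrees gives (0, +9.81, 0), and pitch +90 degrees gives (-9.81, 0, 0), matching the list.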

PoseWithCovarianceStamped

See the section on odometry.

TwistWithCovarianceStamped

See the section on odometry.

Common errors

  • Input data doesn’t adhere to REP-103. Make sure that all values (especially orientation angles) increase and decrease in the correct directions.
  • Incorrect frame_id values. Velocity data should be reported in the frame given by the base_link_frame parameter, or a transform should exist between the frame_id of the velocity data and the base_link_frame.
  • Inflated covariances. The preferred method for ignoring variables in measurements is through the odomN_config parameter.
  • Missing covariances. If you have configured a given sensor to fuse a given variable into the state estimation node, then the variance for that value (i.e., the covariance matrix value at position (i, i), where i is the index of that variable) should not be 0. If a 0 variance value is encountered for a variable that is being fused, the state estimation nodes will add a small epsilon value (1e-6) to that value. A better solution is for users to set covariances appropriately.
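The "(i, i)" indexing refers to the diagonal of the row-major 6x6 covariance array (36 floats) carried by pose and twist messages: element (i, i) sits at flat index i*6 + i. A minimal sketch of filling only those diagonal variances (the helper name is illustrative):

```python
def diagonal_pose_covariance(variances):
    """Build the row-major 6x6 covariance array (36 floats) used by
    geometry_msgs/PoseWithCovariance, placing the given per-variable
    variances on the diagonal (order: x, y, z, roll, pitch, yaw)."""
    if len(variances) != 6:
        raise ValueError("expected 6 variances")
    cov = [0.0] * 36
    for i, var in enumerate(variances):
        cov[i * 6 + i] = var  # position (i, i) in row-major order
    return cov
```

A driver publishing planar odometry might, for instance, set small variances for x, y, and yaw and leave the rest nonzero but honest, rather than zero or artificially inflated.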