Exploiting the map generated by Cartographer ROS


As sensor data comes in, the state of a SLAM algorithm such as Cartographer evolves to remain the current best estimate of a robot’s trajectory and surroundings. The most accurate localization and mapping Cartographer can offer is therefore the one obtained when the algorithm finishes. Cartographer can serialize its internal state in a .pbstream file format, which is essentially a compressed protobuf file containing a snapshot of the data structures used by Cartographer internally.

To run efficiently in real time, Cartographer throws most of its sensor data away immediately and only works with a small subset of its input; the map used internally (and saved in .pbstream files) is therefore quite rough. However, when the algorithm finishes and a best trajectory is established, it can be recombined a posteriori with the full sensor data to create a high-resolution map.

Cartographer makes this kind of recombination possible using cartographer_assets_writer. The assets writer takes as input:

  1. the original sensor data that was used to perform SLAM (in a ROS .bag file),
  2. a Cartographer state captured while performing SLAM on this sensor data (saved in a .pbstream file),
  3. the sensor extrinsics (i.e. TF data from the bag or a URDF description),
  4. and a pipeline configuration, which is defined in a .lua file.

The assets writer runs through the .bag data in batches, using the trajectory found in the .pbstream. The pipeline can be used to color, filter and export SLAM point cloud data into a variety of formats. Multiple such point-processing steps can be interleaved in a pipeline; several are already available from cartographer/io.
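
For orientation, a minimal pipeline configuration could look like the following. This is an illustrative sketch, not one of the shipped sample files; the options table layout mirrors the sample .lua files in cartographer_ros, and the action names come from the processor list in the Configuration section below.

-- Minimal, hypothetical assets writer pipeline: each entry names a
-- PointsProcessor via 'action' plus its options; batches flow through in order.
options = {
  tracking_frame = "base_link",  -- frame in which the sensor origins are tracked
  pipeline = {
    {
      action = "min_max_range_filter",  -- drop points too close or too far
      min_range = 1.,
      max_range = 60.,
    },
    {
      action = "write_ply",     -- stream the surviving points to a PLY file
      filename = "points.ply",
    },
  },
}
return options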

Sample Usage

When running Cartographer with an offline node, a .pbstream file is automatically saved. For instance, with the 3D backpack example:

wget -P ~/Downloads https://storage.googleapis.com/cartographer-public-data/bags/backpack_3d/b3-2016-04-05-14-14-00.bag
roslaunch cartographer_ros offline_backpack_3d.launch bag_filenames:=${HOME}/Downloads/b3-2016-04-05-14-14-00.bag

Watch the output on the command line until the node terminates. It will have written b3-2016-04-05-14-14-00.bag.pbstream, which represents the Cartographer state after it processed all data and finished all optimizations.

When running Cartographer as an online node, it doesn’t know when your bag (or sensor input) ends, so you need to use the exposed services to explicitly finish the current trajectory and make Cartographer serialize its current state:

# Finish the first trajectory. No further data will be accepted on it.
rosservice call /finish_trajectory 0

# Ask Cartographer to serialize its current state.
# (press tab to quickly expand the parameter syntax)
rosservice call /write_state "{filename: '${HOME}/Downloads/b3-2016-04-05-14-14-00.bag.pbstream', include_unfinished_submaps: 'true'}"

Once you’ve retrieved your .pbstream file, you can run the assets writer with the sample pipeline for the 3D backpack:

roslaunch cartographer_ros assets_writer_backpack_3d.launch \
   bag_filenames:=${HOME}/Downloads/b3-2016-04-05-14-14-00.bag \
   pose_graph_filename:=${HOME}/Downloads/b3-2016-04-05-14-14-00.bag.pbstream

All output files are prefixed with --output_file_prefix which defaults to the filename of the first bag. For the last example, if you specify points.ply in the pipeline configuration file, this will translate to ${HOME}/Downloads/b3-2016-04-05-14-14-00.bag_points.ply.

Configuration

The assets writer is modeled as a pipeline of PointsProcessors. PointsBatches flow through the pipeline, and each processor has the chance to modify a PointsBatch before passing it on.

For example, the assets_writer_backpack_3d.lua pipeline uses min_max_range_filter to remove points that are either too close to or too far from the sensor. After this, it saves “X-Rays” (translucent side views of the map), then recolors the PointsBatches depending on the sensor frame IDs and writes another set of X-Rays using these new colors.
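
That sequence could be sketched in pipeline form roughly as follows. This is a hedged excerpt with placeholder filenames, voxel sizes and frame ids; the shipped assets_writer_backpack_3d.lua remains the authoritative version.

-- Illustrative excerpt mirroring the pipeline described above.
pipeline = {
  { action = "min_max_range_filter", min_range = 1., max_range = 60. },
  {
    action = "write_xray_image",   -- translucent side view of the map
    voxel_size = 0.05,             -- placeholder pixel size in meters
    filename = "xray_side_all",
  },
  {
    action = "color_points",            -- fixed color per sensor frame_id
    frame_id = "horizontal_laser_link", -- placeholder frame id
    color = { 255., 0., 0. },
  },
  {
    action = "write_xray_image",   -- second X-ray using the new colors
    voxel_size = 0.05,
    filename = "xray_side_color",
  },
}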

The available PointsProcessors are all defined in the cartographer/io sub-directory and documented in their individual header files.

  • color_points: Colors points with a fixed color by frame_id.
  • dump_num_points: Passes points through unmodified, but keeps track of how many points it saw and outputs that count on Flush.
  • fixed_ratio_sampler: Only lets a fixed ‘sampling_ratio’ of points through. A ‘sampling_ratio’ of 1. makes this filter a no-op.
  • frame_id_filter: Filters out all points with a blacklisted frame_id or a non-whitelisted frame_id. Note that you can specify either the whitelist or the blacklist, but not both at the same time.
  • write_hybrid_grid: Creates a hybrid grid of the points with voxels being ‘voxel_size’ big. ‘range_data_inserter’ options are used to configure the range data ray tracing through the hybrid grid.
  • intensity_to_color: Applies (‘intensity’ - min) / (max - min) * 255 and colors the point grey with this value for each point that comes from the sensor with ‘frame_id’. If ‘frame_id’ is empty, this applies to all points.
  • min_max_range_filtering: Filters out all points that are farther away from their ‘origin’ than ‘max_range’ or closer than ‘min_range’.
  • voxel_filter_and_remove_moving_objects: Voxel filters the data and only passes on points that we believe are on non-moving objects.
  • write_pcd: Streams a PCD file to disk. The header is written in ‘Flush’.
  • write_ply: Streams a PLY file to disk. The header is written in ‘Flush’.
  • write_probability_grid: Creates a probability grid with the specified ‘resolution’. As all points are projected into the x-y plane, the z component of the data is ignored. ‘range_data_inserter’ options are used to configure the range data ray tracing through the probability grid.
  • write_xray_image: Creates X-ray cuts through the points with pixels being ‘voxel_size’ big.
  • write_xyz: Writes ASCII xyz points.
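
To illustrate combining processors from this list, here is a small hedged sketch that thins the cloud, reports the surviving point count, and writes the result to disk; all parameter values are arbitrary.

pipeline = {
  { action = "fixed_ratio_sampler", sampling_ratio = 0.1 },  -- keep ~10% of points
  { action = "dump_num_points" },                            -- log the count on Flush
  { action = "write_pcd", filename = "points.pcd" },         -- header written on Flush
}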

First-person visualization of point clouds

Two PointsProcessors are of particular interest: write_pcd and write_ply can save a point cloud in the .pcd or .ply file format. These files can then be used by specialized software such as point_cloud_viewer or meshlab to navigate through the high resolution map.

The typical assets writer pipeline for this outcome is composed of an IntensityToColorPointsProcessor giving points a non-white color, then a PlyWritingPointsProcessor exporting the results to a .ply point cloud. An example of such a pipeline is in assets_writer_backpack_2d.lua.
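
In pipeline terms, that combination could look roughly like this; the intensity bounds are sensor-specific placeholders, and the shipped assets_writer_backpack_2d.lua has the real values.

pipeline = {
  {
    action = "intensity_to_color",  -- grey value from (‘intensity’ - min) / (max - min) * 255
    min_intensity = 0.,
    max_intensity = 4095.,          -- placeholder; depends on the lidar
  },
  {
    action = "write_ply",
    filename = "points.ply",
  },
}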

Once you have the .ply, follow the README of point_cloud_viewer to generate an on-disk octree data structure which can be viewed by one of the viewers (SDL or web based) in the same repo.
