Gazebo and ROS Usage Notes

This article describes how to add and configure an RGBD camera in Gazebo, following the settings used for ORB_SLAM2. It also provides a Python example that reads depth images from a ROS bag and converts Gazebo's 32-bit depth output to 16-bit, and it lists some commonly used ROS commands, such as listing topics and inspecting topic types and message formats.


1. Adding a depth camera in Gazebo

Reference: "ORB_SLAM2 with a laptop camera, and monocular/stereo/RGBD camera simulation in Gazebo" (一叶执念's CSDN blog)

  <!-- camera -->
    <gazebo reference="camera_link">  
      <sensor type="depth" name="camera">
        <always_on>true</always_on>
        <update_rate>20.0</update_rate>
        <camera>
          <horizontal_fov>${60.0*3.14/180.0}</horizontal_fov>
          <image>
              <format>R8G8B8</format>
              <width>640</width>
              <height>480</height>
          </image>
          <clip>
              <near>0.05</near>
              <far>8.0</far>
          </clip>
        </camera>
        <plugin name="kinect_camera_controller" filename="libgazebo_ros_openni_kinect.so">
          <cameraName>camera</cameraName>
          <alwaysOn>true</alwaysOn>
          <updateRate>10</updateRate>
          <imageTopicName>rgb/image_raw</imageTopicName>
          <depthImageTopicName>depth/image_raw</depthImageTopicName>
          <pointCloudTopicName>depth/points</pointCloudTopicName>
          <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
          <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
          <frameName>camera_frame_optical</frameName>
          <baseline>0.1</baseline>
          <distortion_k1>0.0</distortion_k1>
          <distortion_k2>0.0</distortion_k2>
          <distortion_k3>0.0</distortion_k3>
          <distortion_t1>0.0</distortion_t1>
          <distortion_t2>0.0</distortion_t2>
          <pointCloudCutoff>0.4</pointCloudCutoff>
        </plugin>
      </sensor>
    </gazebo>

The version I added:

    <gazebo reference="camera_link">
      <sensor type="depth" name="camera">
        <always_on>true</always_on>
        <update_rate>20.0</update_rate>
        <camera>
          <horizontal_fov>1.0466</horizontal_fov>
          <image>
              <format>R8G8B8</format>
              <width>640</width>
              <height>480</height>
          </image>
          <clip>
              <near>0.05</near>
              <far>5.0</far>
          </clip>
        </camera>
        <plugin name="kinect_camera_controller" filename="libgazebo_ros_openni_kinect.so">
          <cameraName>camera</cameraName>
          <alwaysOn>true</alwaysOn>
          <updateRate>20.0</updateRate>
          <imageTopicName>rgb/image_raw</imageTopicName>
          <depthImageTopicName>depth/image_raw</depthImageTopicName>
          <pointCloudTopicName>depth/points</pointCloudTopicName>
          <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
          <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
          <frameName>camera_link</frameName>
          <baseline>0.1</baseline>
          <distortion_k1>0.0</distortion_k1>
          <distortion_k2>0.0</distortion_k2>
          <distortion_k3>0.0</distortion_k3>
          <distortion_t1>0.0</distortion_t1>
          <distortion_t2>0.0</distortion_t2>
          <pointCloudCutoff>0.4</pointCloudCutoff>
        </plugin>
      </sensor>
    </gazebo>
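
After launching the simulation, it is worth confirming that the plugin actually publishes with the expected encodings before recording a bag. Below is a minimal sketch, assuming the topic names /camera/rgb/image_raw and /camera/depth/image_raw that result from the settings above; the node name depth_check is arbitrary:

import rospy
from sensor_msgs.msg import Image

# Quick sanity check: grab one RGB and one depth frame and print their
# resolution and encoding. Topic names follow the plugin settings above.
rospy.init_node('depth_check')

rgb_msg = rospy.wait_for_message('/camera/rgb/image_raw', Image, timeout=10.0)
depth_msg = rospy.wait_for_message('/camera/depth/image_raw', Image, timeout=10.0)

# The RGB stream is normally 'rgb8'; the depth stream published by
# libgazebo_ros_openni_kinect is '32FC1' (one float per pixel, in meters),
# which is why the conversion in section 2 below is needed.
print('rgb:   %dx%d  encoding=%s' % (rgb_msg.width, rgb_msg.height, rgb_msg.encoding))
print('depth: %dx%d  encoding=%s' % (depth_msg.width, depth_msg.height, depth_msg.encoding))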

2. Converting Gazebo's 32-bit depth images to 16-bit

The depth images published by the Gazebo plugin are 32FC1 (one float per pixel, in meters). The script below extracts the RGB and depth frames from a recorded bag and converts the depth values to 16-bit unsigned millimeters (multiply by 1000 and round).

Reference: "The difference between 16-bit and 32-bit ROS depth images and how to convert between them (Python)" (进阶的睡神's CSDN blog)

Dataset extraction code:

import rosbag
import cv2
import numpy as np
from cv_bridge import CvBridge

# Output directories for the extracted frames
rgb = '/home/jyyang/gcc/rgbd_fz/rgb/'      # rgb path
depth = '/home/jyyang/gcc/rgbd_fz/depth/'  # depth path
bridge = CvBridge()

# "timestamp filename" index files for the depth and rgb frames
file_handle1 = open('/home/jyyang/gcc/rgbd_fz/depth.txt', 'w')  # depth-stamp path
file_handle2 = open('/home/jyyang/gcc/rgbd_fz/rgb.txt', 'w')    # rgb-stamp path

with rosbag.Bag('/home/jyyang/neor_mini-Melodic/2023-04-09-22-35-08.bag', 'r') as bag:  # bag path
    for topic, msg, t in bag.read_messages():
        if topic == "/camera/depth/image_raw":  # depth topic
            # Gazebo publishes depth as 32FC1: one float per pixel, in meters
            cv_image = bridge.imgmsg_to_cv2(msg, desired_encoding="32FC1")
            # Invalid pixels (e.g. beyond the clip range) can be NaN/inf;
            # zero them so they do not become garbage after the uint16 cast
            cv_image = np.where(np.isfinite(cv_image), cv_image, 0.0)
            # Convert meters to millimeters and store as 16-bit unsigned
            cv_image = cv_image * 1000
            NewImg = np.round(cv_image).astype(np.uint16)

            timestr = "%.6f" % msg.header.stamp.to_sec()  # depth time stamp
            image_name = timestr + ".png"
            path = "depth/" + image_name
            file_handle1.write(timestr + " " + path + '\n')
            cv2.imwrite(depth + image_name, NewImg)
        if topic == "/camera/rgb/image_raw":  # rgb topic
            cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
            timestr = "%.6f" % msg.header.stamp.to_sec()  # rgb time stamp
            image_name = timestr + ".png"
            path = "rgb/" + image_name
            file_handle2.write(timestr + " " + path + '\n')
            cv2.imwrite(rgb + image_name, cv_image)

file_handle1.close()
file_handle2.close()
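
To double-check the result, one of the saved 16-bit PNGs can be read back and converted to meters again. A minimal sketch follows; the file name below is only a placeholder for one of the timestamped PNGs written by the script above:

import cv2
import numpy as np

# Placeholder path: substitute any of the timestamped PNGs written above.
png_path = '/home/jyyang/gcc/rgbd_fz/depth/1681050908.123456.png'

# IMREAD_UNCHANGED keeps the uint16 values; the default imread flag would
# convert the image to 8-bit and destroy the depth information.
depth_mm = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)
print(depth_mm.dtype)  # expected: uint16

# The script stored millimeters, so dividing by 1000 recovers meters.
depth_m = depth_mm.astype(np.float32) / 1000.0
print('depth range: %.3f m to %.3f m' % (depth_m.min(), depth_m.max()))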

3. Commonly used ROS commands

1. List topics

rostopic list

2. Show the fields of a message type

rosmsg show sensor_msgs/Image

3. Show the type of a specific topic

rostopic type /camera/depth/image_raw

4. Display messages in real time (the --noarr flag suppresses array data)

rostopic echo /camera/depth/image_raw --noarr

5. Show a topic's publishers and subscribers

rostopic info /camera/depth/image_raw
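
The same queries can also be made from Python scripts via the rostopic and rosgraph modules. A small sketch, assuming a running roscore and the depth topic from section 2; the node name /topic_inspect is arbitrary:

import rosgraph
import rostopic

topic = '/camera/depth/image_raw'

# Equivalent of `rostopic type /camera/depth/image_raw`
topic_type, _, _ = rostopic.get_topic_type(topic)
print('type: %s' % topic_type)  # expected: sensor_msgs/Image

# Rough equivalent of the publisher/subscriber list from `rostopic info`
master = rosgraph.Master('/topic_inspect')
pubs, subs, _ = master.getSystemState()
print('publishers:  %s' % [nodes for name, nodes in pubs if name == topic])
print('subscribers: %s' % [nodes for name, nodes in subs if name == topic])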
