Apollo: How the Canbus Module Handles the Cmd from Control

Canbus in brief

In the Apollo project, the canbus module's main job is to receive the commands published by the Control module, parse them into CAN protocol frames to interact with the vehicle's ECUs, collect the feedback those commands produce, and publish the results as chassis information (Chassis_detail).

First, what is this Chassis_detail? The file apollo_cidi/modules/canbus/proto/chassis_detail.proto holds signals for braking, steering, acceleration, and so on. Chassis information matters in both directions: commands issued by the control module are first parsed inside canbus and then delivered over the CAN bus to the vehicle's individual control units, while canbus also reads data off the CAN bus, parses it into chassis information, and publishes it. That second half is called chassis feedback, and it lets the control side see exactly how its commands were executed. This round trip is the topic of this post.

message ChassisDetail {
  enum Type {
    QIRUI_EQ_15 = 0;
    CHANGAN_RUICHENG = 1;
  }
  optional Type car_type = 1;               // car type
  optional BasicInfo basic = 2;             // basic info
  optional Safety safety = 3;               // safety
  optional Gear gear = 4;                   // gear
  optional Ems ems = 5;                     // engine manager system
  optional Esp esp = 6;                     // Electronic Stability Program
  optional Gas gas = 7;                     // gas pedal
  optional Epb epb = 8;                     // Electronic parking brake
  optional Brake brake = 9;                 // brake pedal
  optional Deceleration deceleration = 10;  // deceleration
  optional VehicleSpd vehicle_spd = 11;     // vehicle speed
  optional Eps eps = 12;                    // Electronic Power Steering
  optional Light light = 13;                // Light
  optional Battery battery = 14;            // Battery info
  optional CheckResponseSignal check_response = 15;
  optional License license = 16;            // License info
  optional Surround surround = 17;          // Surround information
  optional Gem gem = 18;
}

Here is Apollo's overall architecture diagram:

[figure: Apollo overall framework diagram]

The Control module: issuing the Command

I'll walk through this part with a flowchart. Apollo's modules pass messages to each other via ROS; here, PublishControlCommand publishes the command computed by the controller, and below we'll see that canbus.cc registers a callback dedicated to receiving it.

Every Apollo module unfolds in much the same way; if this is new to you, read the blog post on source-code analysis of Apollo's Planning module — understand one module thoroughly and the rest follow easily. What I want to highlight here is the timer (OnTimer): the module creates a timer that publishes control commands at a fixed frequency. What exactly is that frequency?

In apollo_cidi/modules/control/conf/lincoln.pb.txt, control_period is set to 0.01, which means the control module publishes control_command at 100 Hz. The chassis and trajectory periods below work the same way.

control_period: 0.01
trajectory_period: 0.1
chassis_period: 0.01
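
To make the timer concrete, here is a minimal sketch of how the control module wires this up, modeled on the ROS-based AdapterManager API of Apollo 3.x; names like ProduceControlCommand follow the real source, but the signatures and error handling are simplified and may differ across versions.

// Minimal sketch (Apollo 3.x style): Start() creates a timer that fires
// every control_period seconds; each tick computes one command and
// publishes it over ROS.
Status Control::Start() {
  // control_period: 0.01 in lincoln.pb.txt  ->  OnTimer runs at 100 Hz
  timer_ = AdapterManager::CreateTimer(
      ros::Duration(control_conf_.control_period()), &Control::OnTimer, this);
  return Status::OK();
}

void Control::OnTimer(const ros::TimerEvent &) {
  ControlCommand control_command;
  ProduceControlCommand(&control_command);  // run lateral + longitudinal controllers
  // Published over ROS; Canbus::OnControlCommand (below) is the receiver.
  AdapterManager::PublishControlCommand(control_command);
}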

The Canbus module: parsing the Command

The Apollo project contains canbus source code in two places: modules/canbus implements control of the Vehicle, while modules/drivers/canbus implements the underlying data communication. This post is about how canbus handles a Command, so I won't analyze both trees in detail; if you're unsure, refer to https://blog.youkuaiyun.com/davidhopper/article/details/79176505 — initialization looks much the same across Apollo's modules, and that post is also how I learned the canbus module. Now to the main topic, handling the Command; first, look at my flowchart.

One more recommended post: https://blog.youkuaiyun.com/liu3612162/article/details/81981388

A quick summary of what each component of the canbus module does:

CanClient talks to the CAN card, receiving and sending messages; MessageManager parses those messages; and CanReceiver schedules the two. The code that coordinates CanClient and MessageManager lives in CanReceiver's RecvThreadFunc method.
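
As a sketch of that scheduling, here is a simplified version of the receive loop, modeled on CanReceiver::RecvThreadFunc in Apollo 3.x's modules/drivers/canbus; error handling, sleeps, and statistics are trimmed, so treat the exact signatures as approximate.

// Simplified receive loop: CanClient pulls raw frames off the CAN card,
// MessageManager (pt_manager_) parses each frame into chassis_detail fields.
template <typename SensorType>
void CanReceiver<SensorType>::RecvThreadFunc() {
  while (IsRunning()) {
    std::vector<CanFrame> buf;
    int32_t frame_num = MAX_CAN_RECV_FRAME_LEN;
    if (can_client_->Receive(&buf, &frame_num) != ErrorCode::OK) {
      continue;  // receive error or nothing available; try again
    }
    for (const auto &frame : buf) {
      // Dispatch by CAN message ID to the protocol parser registered
      // in MessageManager for that ID.
      pt_manager_->Parse(frame.id, frame.data, frame.len);
    }
  }
}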

1. Registering the callback in canbus.cc

In the Init() function of modules/canbus/canbus.cc:

  if (!FLAGS_receive_guardian) {
    AdapterManager::AddControlCommandCallback(&Canbus::OnControlCommand, this);
  } else {
    // With the guardian enabled, commands arrive relayed through the
    // guardian module instead of directly from control.
    AdapterManager::AddGuardianCallback(&Canbus::OnGuardianCommand, this);
  }
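
With the guardian disabled, every ControlCommand message lands in Canbus::OnControlCommand. Below is a simplified sketch of that callback based on the Apollo 3.x source; logging and the timestamp helper are simplified (GetCurrentTimeMs stands in for Apollo's Clock utilities), so treat it as an outline rather than the verbatim implementation.

// Simplified Canbus::OnControlCommand: throttle over-frequent commands, let
// the vehicle controller translate the command into protocol data, then
// flush the refreshed frames onto the CAN bus through CanSender.
void Canbus::OnControlCommand(const ControlCommand &control_command) {
  int64_t current_timestamp = GetCurrentTimeMs();  // hypothetical helper
  // Commands arriving faster than FLAGS_min_cmd_interval are ignored.
  if (current_timestamp - last_timestamp_ < FLAGS_min_cmd_interval) {
    return;
  }
  last_timestamp_ = current_timestamp;

  if (vehicle_controller_->Update(control_command) != ErrorCode::OK) {
    AERROR << "Failed to process ControlCommand in vehicle controller.";
    return;
  }
  can_sender_.Update();  // transmit the updated protocol frames
}

vehicle_controller_->Update() maps the generic command fields (throttle, brake, steering, gear) onto the vehicle-specific protocol objects, and can_sender_.Update() triggers the actual CAN transmission; the execution results then flow back through CanReceiver and MessageManager and are published as chassis_detail, closing the feedback loop described at the top.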