Running the M2DGR dataset on VINS-Mono
1. Code modifications
1.1 Modifying the yaml file
Create m2dgr.yaml:
%YAML:1.0
#common parameters
imu_topic: "/handsfree/imu"
image_topic: "/camera/color/image_raw"
output_path: "lvi_sam/lidar/deskew/cloud_deskewed"
#camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
distortion_parameters:
   k1: 0.148000794688248
   k2: -0.217835187249065
   p1: 0
   p2: 0
projection_parameters:
   fx: 617.971050917033
   fy: 616.445131524790
   cx: 327.710279392468
   cy: 253.976983707814
# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0 # 0: accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; don't change them.
# 1: an initial guess of the extrinsic parameters. We will optimize around your initial guess.
# 2: nothing is known about the extrinsic parameters. You don't need to give R, T. We will try to calibrate them. Do some rotation movement at the beginning.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 0.0, 0.0, 1.0,
          -1.0, 0.0, 0.0,
           0.0, -1.0, 0.0]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.57711, -0.00012, 0.83333]
#feature tracker parameters
max_cnt: 150 # max feature number in feature tracking
min_dist: 20 # min distance between two features
freq: 20 # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0 # ransac threshold (pixel)
show_track: 1 # publish tracking image as topic
equalize: 1 # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 0 # if using a fisheye camera, turn this on. A circle mask will be loaded to remove noisy edge points
#optimization parameters
max_solver_time: 0.04 # max solver iteration time (ms), to guarantee real time
max_num_iterations: 10 # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)
#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 1.2820343288774358e-01 # accelerometer measurement noise standard deviation. #0.2
gyr_n: 2.1309311394972831e-02 # gyroscope measurement noise standard deviation. #0.05
acc_w: 1.3677912958097768e-02 # accelerometer bias random walk noise standard deviation. #0.02
gyr_w: 3.6603917782528627e-04 # gyroscope bias random walk noise standard deviation. #4.0e-5
g_norm: 9.805 # gravity magnitude
#loop closure parameters
loop_closure: 1 # start loop closure
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
fast_relocalization: 0 # useful in real-time and large-scale projects
pose_graph_save_path: "/home/shaozu/output/pose_graph/" # save and load path
#unsynchronization parameters
estimate_td: 0 # online estimate time offset between camera and imu
td: 0.0 # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
#rolling shutter parameters
rolling_shutter: 0 # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0 # unit: s. rolling shutter read out time per frame (from data sheet).
#visualization parameters
save_image: 1 # save images in the pose graph for visualization purposes; you can disable this by setting 0
visualize_imu_forward: 0 # output imu forward propagation to achieve low-latency and high-frequency results
visualize_camera_size: 0.4 # size of camera marker in RVIZ
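For reference, VINS-Mono reads this file with OpenCV's cv::FileStorage (see vins_estimator/src/parameters.cpp). The standalone sketch below shows roughly how the topics and the imu-camera extrinsics above are parsed; it takes the yaml path as a plain command-line argument instead of the ROS parameter the real node uses, so treat it as an illustration rather than the actual VINS-Mono code.

// Minimal sketch: read the m2dgr.yaml fields the way VINS-Mono-style code does,
// i.e. through cv::FileStorage. Assumes the config path is passed as argv[1].
#include <iostream>
#include <string>
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: read_config <m2dgr.yaml>\n"; return 1; }

    cv::FileStorage fs(argv[1], cv::FileStorage::READ);
    if (!fs.isOpened()) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }

    std::string imu_topic, image_topic;
    fs["imu_topic"] >> imu_topic;        // "/handsfree/imu"
    fs["image_topic"] >> image_topic;    // "/camera/color/image_raw"

    // The extrinsics are !!opencv-matrix nodes, so they come back as cv::Mat.
    cv::Mat cv_R, cv_T;
    fs["extrinsicRotation"] >> cv_R;
    fs["extrinsicTranslation"] >> cv_T;

    Eigen::Matrix3d R_ic;   // imu^R_cam
    Eigen::Vector3d t_ic;   // imu^T_cam
    cv::cv2eigen(cv_R, R_ic);
    cv::cv2eigen(cv_T, t_ic);

    std::cout << "imu_topic: "   << imu_topic   << "\n"
              << "image_topic: " << image_topic << "\n"
              << "R_ic:\n" << R_ic << "\n"
              << "t_ic: "  << t_ic.transpose() << std::endl;
    fs.release();
    return 0;
}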
1.2 Modifying the VINS-Mono trajectory-saving code
Change the trajectory result save path at line 158 of utility/visualization.cpp:
ofstream foutC("your_result_path", ios::app);
That is, in utility/visualization.cpp, find the following code
// write result to file
ofstream foutC(VINS_RESULT_PATH, ios::app);
foutC.setf(ios::fixed, ios::floatfield);
foutC.precision(0);
foutC << header.stamp.toSec() * 1e9 << ",";
foutC.precision(5);
foutC << estimator.Ps[WINDOW_SIZE].x() << ","
<< estimator.Ps[WINDOW_SIZE].y() << ","
<< estimator.Ps[WINDOW_SIZE].z() << ","
<< tmp_Q.w() << ","
<< tmp_Q.x() << ","
<< tmp_Q.y() << ","
<< tmp_Q.z() << ","
<< estimator.Vs[WINDOW_SIZE].x() << ","
<< estimator.Vs[WINDOW_SIZE].y() << ","
<< estimator.Vs[WINDOW_SIZE].z() << "," << endl;
and replace it with the following code:
// ofstream foutC(VINS_RESULT_PATH, ios::app);
ofstream foutC("/home/li/m2dgr/vins.txt", ios::app);
foutC.setf(ios::fixed, ios::floatfield);
foutC << header.stamp.toSec() << " ";
foutC.precision(5);
// TUM format: timestamp tx ty tz qx qy qz qw
foutC << estimator.Ps[WINDOW_SIZE].x() << " "
      << estimator.Ps[WINDOW_SIZE].y() << " "
      << estimator.Ps[WINDOW_SIZE].z() << " "
      << tmp_Q.x() << " "
      << tmp_Q.y() << " "
      << tmp_Q.z() << " "
      << tmp_Q.w() << endl;
// << estimator.Vs[WINDOW_SIZE].x() << ","
// << estimator.Vs[WINDOW_SIZE].y() << ","
// << estimator.Vs[WINDOW_SIZE].z() << ","
foutC.close();
}
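The file written above follows the TUM trajectory format (timestamp tx ty tz qx qy qz qw, one pose per line), which is what the evo comparison in section 2 expects, and is why the quaternion is written in x, y, z, w order. If evo later complains about the file, a quick standalone check such as the sketch below (using the same hard-coded /home/li/m2dgr/vins.txt path as above) can confirm that every line has exactly 8 numeric fields:

// Sanity check for the saved trajectory: every line should contain
// 8 space-separated numbers if the TUM-style output above worked.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::ifstream fin("/home/li/m2dgr/vins.txt");
    if (!fin) { std::cerr << "cannot open trajectory file\n"; return 1; }

    std::string line;
    int n = 0;
    while (std::getline(fin, line))
    {
        std::istringstream iss(line);
        double v;
        int fields = 0;
        while (iss >> v) ++fields;   // count numeric fields on this line
        if (fields != 8)
        {
            std::cerr << "line " << n + 1 << " has " << fields
                      << " fields, expected 8\n";
            return 1;
        }
        ++n;
    }
    std::cout << n << " poses, format looks like TUM\n";
    return 0;
}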
1.3 Modifying the launch file
Add the following two lines (the first republishes the compressed color image topic as raw images; the second starts RViz with the VINS configuration):
<node pkg="image_transport" type="republish" name="image_republish" args="compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw" output="screen" respawn="true"/>
<node name="rvizvisualisation" pkg="rviz" type="rviz" output="log" args="-d $(find vins_estimator)/../config/vins_rviz_config.rviz" />
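The republish node above subscribes to the compressed color stream from the bag (/camera/color/image_raw/compressed) and publishes it again as raw sensor_msgs/Image on /camera/color/image_raw, which is the topic the yaml file points VINS-Mono at. Purely as an illustration of what that node does (the launch-file line above is all you actually need), a minimal hand-written equivalent could look like this:

// Illustration only: a small node that mimics image_transport's "republish"
// for this dataset, turning the compressed color topic back into raw images.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/Image.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "image_republish_sketch");
    ros::NodeHandle nh;

    // Raw output topic that VINS-Mono subscribes to.
    ros::Publisher pub = nh.advertise<sensor_msgs::Image>("/camera/color/image_raw", 1);

    // Subscribe with the "compressed" transport, i.e. /camera/color/image_raw/compressed,
    // and forward every decoded frame to the raw publisher.
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub = it.subscribe(
        "/camera/color/image_raw", 1,
        [&pub](const sensor_msgs::ImageConstPtr& msg) { pub.publish(msg); },
        ros::VoidPtr(), image_transport::TransportHints("compressed"));

    ros::spin();
    return 0;
}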
2. Displaying the trajectory with EVO
Run the following in two terminals:
roslaunch vins_estimator run.launch
rosbag play -r 2 street_02.bag
2.1 Comparing the results
evo_ape tum street02/street_02_groundtruth.txt street02/vins.txt -vap --plot_mode=xy
Result:
References
https://rupingcen.blog.youkuaiyun.com/article/details/110485772