Notes on "Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras using Adaptive Voxelization"
- Chapter 1: Adaptive voxelization (1): translation, theory
- Chapter 1: Adaptive voxelization (2): code walkthrough
- Chapter 2: Multi-LiDAR extrinsic calibration (1): translation
- Chapter 2: Multi-LiDAR extrinsic calibration (2): deriving the dimension reduction of the cost function
- Chapter 2: Multi-LiDAR extrinsic calibration (3): two lemmas for the second-order closed-form derivatives
- Chapter 2: Multi-LiDAR extrinsic calibration (4): deriving the second-order closed-form derivatives
- Chapter 2: Multi-LiDAR extrinsic calibration (5): code walkthrough, supplementary formulas
- Chapter 3: LiDAR-camera extrinsic calibration: translation, code walkthrough
Preface
This post documents the MLCC paper.
Regarding its LiDAR-camera extrinsic calibration part: in my own use, it seems quite sensitive to the environment.
I. Paper Content
With the LiDAR extrinsic parameter $\mathcal{E}_L$ and pose trajectory $\mathcal{S}$ computed above, we obtain a dense global point cloud by transforming all LiDAR points to the base LiDAR frame. Then, the extrinsic $\mathcal{E}_C$ is optimized by minimizing the summed distance between the back-projected LiDAR edge feature points and the image edge feature points. Two types of LiDAR edge points could be extracted from the point cloud. One is the depth-discontinuous edge between the foreground and background objects, and the other is the depth-continuous edge between two neighboring non-parallel planes. As explained in our previous work [28], depth-discontinuous edges suffer from foreground inflation and the bleeding points phenomenon; we hence use depth-continuous edges to match the point cloud and images.
In [28], the LiDAR point cloud is segmented into voxels with uniform sizes, and the planes inside each voxel are estimated by the RANSAC algorithm. In contrast, our method uses the same adaptive voxel map obtained in Sec. III-B. For every two adjacent voxels, we calculate the angle between the normals of the planes they contain. If this angle exceeds a threshold, the intersection line of these two planes is extracted as the depth-continuous edge, as shown in Fig. 5. We implement the Canny algorithm to detect and extract image edge features.
Suppose ${}^{G}\mathbf{p}_{i}$ represents the $i$-th point of a LiDAR edge feature extracted above, in the global frame. With the pin-hole camera and its distortion model, ${}^{G}\mathbf{p}_{i}$ is projected onto the image taken by camera $C_l$ at $t_j$, i.e., $\mathbf{I}_{l,j}$, by

$${}^{\mathbf{I}_{l,j}}\mathbf{p}_{i}=\mathbf{f}\left(\boldsymbol{\pi}\left({}^{C_l}_{L_{0}}\mathbf{T}\left({}^{G}_{L_0}\mathbf{T}_{t_{j}}\right)^{-1}{}^{G}\mathbf{p}_{i}\right)\right)\tag{7}$$

where $\mathbf{f}(\cdot)$ is the camera distortion model and $\boldsymbol{\pi}(\cdot)$ is the projection model. Let $\mathcal{I}_i$ represent the set of images that capture the point ${}^{G}\mathbf{p}_{i}$, i.e., $\mathcal{I}_i = \{\mathbf{I}_{l,j}\}$. For each ${}^{\mathbf{I}_{l,j}}\mathbf{p}_i$, the $\kappa$ nearest image edge feature points $\mathbf{q}_k$ on $\mathbf{I}_{l,j}$ are searched. The normal vector $\mathbf{n}_{i,l,j}$ of the edge formed by these $\kappa$ points is thus the eigenvector corresponding to the minimum eigenvalue of $\mathbf{A}_{i,l,j}$, where

$$\mathbf{A}_{i,l,j}=\sum_{k=1}^\kappa(\mathbf{q}_k-\mathbf{q}_{i,l,j})(\mathbf{q}_k-\mathbf{q}_{i,l,j})^T,\quad\mathbf{q}_{i,l,j}=\frac1\kappa\sum_{k=1}^\kappa\mathbf{q}_k.\tag{8}$$

The residual originating from this LiDAR-camera correspondence is defined as

$$\mathbf{r}_{i,l,j}=\mathbf{n}_{i,l,j}^{T}\left({}^{\mathbf{I}_{l,j}}\mathbf{p}_{i}-\mathbf{q}_{i,l,j}\right).\tag{9}$$

Collecting all such correspondences, the extrinsic $\mathcal{E}_C$ calibration problem could be formulated as

$$\mathcal{E}_{C}^{*}=\arg\min_{\mathcal{E}_{C}}\sum_{i}\sum_{\mathbf{I}_{l,j}\in\mathcal{I}_{i}}\left(\mathbf{n}_{i,l,j}^{T}\left({}^{\mathbf{I}_{l,j}}\mathbf{p}_{i}-\mathbf{q}_{i,l,j}\right)\right).\tag{10}$$
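To make (8) and (9) concrete, here is a minimal Eigen sketch (my own illustration, not the paper's code) computing the edge normal and the residual from the $\kappa$ nearest image edge points:

#include <Eigen/Dense>
#include <vector>
// Sketch of Eqs. (8)-(9): the edge normal is the eigenvector of the smallest
// eigenvalue of the scatter matrix A; the residual is the offset of the
// projected LiDAR point from the edge, measured along that normal.
double edgeResidual(const Eigen::Vector2d &p_proj,           // projected point {}^{I_{l,j}}p_i
                    const std::vector<Eigen::Vector2d> &q) { // the kappa neighbors q_k
  Eigen::Vector2d q_mean = Eigen::Vector2d::Zero();
  for (const auto &qk : q) q_mean += qk;
  q_mean /= static_cast<double>(q.size());                   // q_{i,l,j} in Eq. (8)
  Eigen::Matrix2d A = Eigen::Matrix2d::Zero();
  for (const auto &qk : q) A += (qk - q_mean) * (qk - q_mean).transpose();
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> saes(A);    // eigenvalues sorted ascending
  Eigen::Vector2d n = saes.eigenvectors().col(0);            // smallest-eigenvalue eigenvector
  return n.dot(p_proj - q_mean);                             // r_{i,l,j} in Eq. (9)
}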
Inspecting the residual in (9), we find that ${}^{\mathbf{I}_{l,j}}\mathbf{p}_i$ depends on the LiDAR poses ${}^{G}_{L_0}\mathbf{T}_{t_j}$. This is because LiDARs may have FoV overlap with cameras at different times (as in Fig. 2). Since ${}^{G}_{L_0}\mathbf{T}_{t_j}\in\mathcal{S}$ has been well estimated in Sec. III-C, we keep them fixed in this step. Moreover, $\mathbf{n}_{i,l,j}$ and $\mathbf{q}_{i,l,j}$ are also implicitly dependent on $\mathcal{E}_C$, since both are obtained from the nearest-neighbor search.
The complete derivative of (10) with respect to the variable $\mathcal{E}_C$ would be too complicated. In this paper, to simplify the optimization problem, we ignore the influence of the camera extrinsic on $\mathbf{n}_{i,l,j}$ and $\mathbf{q}_{i,l,j}$. This strategy works well in practice, as detailed in Sec. IV-B.
The non-linear optimization (10) is solved with the LM method by approximating the residuals with their first-order derivatives, as in (11). The optimal $\mathcal{E}^{\ast}_{C}$ is then obtained by iteratively solving (11) and updating $\delta\mathbf{x}$ to $\mathbf{x}$ using the $\boxplus$ operation [31]:

$$\delta\mathbf{x}=-\left(\mathbf{J}^{T}\mathbf{J}+\mu\mathbf{I}\right)^{-1}\mathbf{J}^{T}\mathbf{r},\tag{11}$$

where

$$\begin{aligned} \delta\mathbf{x}&=\begin{bmatrix}\cdots&{}^{C_1}_{L_0}\boldsymbol{\phi}^{T}&\delta{}^{C_1}_{L_0}\mathbf{t}^{T}&\cdots\end{bmatrix}\in\mathbb{R}^{6h}\\ \mathbf{x}&=\begin{bmatrix}\cdots&{}^{C_1}_{L_0}\mathbf{R}&{}^{C_1}_{L_0}\mathbf{t}&\cdots\end{bmatrix}\\ \mathbf{J}&=\begin{bmatrix}\cdots&\mathbf{J}^{T}_{p}&\cdots\end{bmatrix}^{T},\quad\mathbf{r}=\begin{bmatrix}\cdots&\mathbf{r}_p&\cdots\end{bmatrix}^{T}, \end{aligned}$$

with $\mathbf{J}_p$ and $\mathbf{r}_p$ being the sums of $\mathbf{J}_{i,l,j}$ and $\mathbf{r}_{i,l,j}$ over the terms with $l = p$:

$$\begin{aligned} \mathbf{J}_{i,l,j}&=\mathbf{n}_{i,l,j}^T\frac{\partial\mathbf{f}(\mathbf{p})}{\partial\mathbf{p}}\frac{\partial\boldsymbol{\pi}(\mathbf{P})}{\partial\mathbf{P}}\begin{bmatrix}-{}_{L_0}^{C_l}\mathbf{R}\left({}^{L_0}\mathbf{p}_i\right)^{\wedge}&\mathbf{I}\end{bmatrix}\in\mathbb{R}^{1\times6}\\ {}^{L_0}\mathbf{p}_i&=\left({}_{L_0}^G\mathbf{T}_{t_j}\right)^{-1}{}^G\mathbf{p}_i. \end{aligned}\tag{12}$$
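For reference, the $\boxplus$ operation of [31] is, as I read it (up to the choice of left/right perturbation convention), the usual manifold retraction applied per camera:

$$\mathbf{x}\boxplus\delta\mathbf{x}:\quad{}^{C_l}_{L_0}\mathbf{R}\leftarrow{}^{C_l}_{L_0}\mathbf{R}\,\operatorname{Exp}\left({}^{C_l}_{L_0}\boldsymbol{\phi}\right),\quad{}^{C_l}_{L_0}\mathbf{t}\leftarrow{}^{C_l}_{L_0}\mathbf{t}+\delta{}^{C_l}_{L_0}\mathbf{t},$$

where $\operatorname{Exp}(\cdot)$ maps the rotation perturbation vector onto $SO(3)$.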
II. Theoretical Derivation
The camera calibration part of the paper is fairly easy to follow: find matching point-cloud edge points and image edge points, project the point-cloud edge points onto the image, and optimize the camera extrinsic by minimizing the reprojection error. The most important part is the author's scheme for building matches between edge-line points in the point cloud and in the image. The first-order derivative of the final error formula is not derived here, since the code implementation simply uses Ceres's automatic differentiation.
It may be filled in later when time permits!
The matching procedure is as follows: transform all point-cloud edge-line points from the global frame into the base LiDAR frame (note: for every pose, not just one particular frame, because different images may observe the same edge line), then project them into the pixel frame directly with cv::projectPoints(). Traverse every projected edge point and examine its 5 nearest image edge points; if all 5 are closer than a given threshold, take the 5 nearest point-cloud points around that point and fit a direction vector, and handle the image side the same way, fitting a direction vector to those 5 image points. Note that image resolution is finite; especially with a lower-resolution camera, several 3D points may land on the same pixel after projection onto the 2D plane, so the 3D points that fall on the same pixel are averaged. After this, every 3D point has its corresponding 2D point.
III. Code Walkthrough
1. Plane Extraction
1.1 Extracting planes in space via voxelization
The code below is from ba.hpp. A plane is detected by checking the ratio of the covariance-matrix eigenvalues, the same as the plane extraction in the LiDAR calibration part, but with some extra refinements: the plane is additionally split into 8 sub-regions, and each sub-region's normal is checked for consistency against the overall normal. Details:
bool judge_eigen(int layer) {
  VOX_FACTOR covMat = sig_orig;
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> saes(covMat.cov());
  value_vector = saes.eigenvalues();
  center = covMat.v / covMat.N;        // plane centroid
  direct = saes.eigenvectors().col(0); // plane normal
  eigen_ratio = saes.eigenvalues()[0] / saes.eigenvalues()[2]; // [0] is the smallest
  // NOTE: the opposite of the plane test in the LiDAR calibration part:
  // here the smallest eigenvalue is divided by the largest
  if (eigen_ratio > eigen_thr) return 0;
  // NOTE: if the two smaller eigenvalues are close to each other, the points spread
  // evenly along the two remaining directions, i.e. a line-like cloud
  if (saes.eigenvalues()[0] / saes.eigenvalues()[1] > 0.1) return 0; // reject line-like clouds
  double eva0 = saes.eigenvalues()[0];
  double sqr_eva0 = sqrt(eva0);
  // NOTE: offset the plane centroid along the plane normal by a small distance
  // and use it as the split point
  Eigen::Vector3d center_turb = center + 5 * sqr_eva0 * direct;
  // NOTE: split the plane into 8 sub-planes (octants)
  vector<VOX_FACTOR> covMats(8);
  for (Eigen::Vector3d ap : vec_orig) {
    int xyz[3] = {0, 0, 0};
    for (int k = 0; k < 3; k++)
      if (ap(k) > center_turb[k])
        xyz[k] = 1;
    Eigen::Vector3d pvec(ap(0), ap(1), ap(2));
    int leafnum = 4 * xyz[0] + 2 * xyz[1] + xyz[2];
    covMats[leafnum].push(pvec);
  }
  // NOTE: recompute each sub-plane's normal and check that its angle to the overall
  // normal is small enough (via cos(theta)); only if all sub-planes pass is this
  // voxel accepted as a plane
  int num_all = 0, num_qua = 0;
  for (int i = 0; i < 8; i++)
    if (covMats[i].N > MIN_PT) {
      Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> saes(covMats[i].cov());
      Eigen::Vector3d child_direct = saes.eigenvectors().col(0);
      if (fabs(child_direct.dot(direct)) > 0.98)
        num_qua++;
      num_all++;
    }
  if (num_qua != num_all) return 0;
  return 1;
}
1.2 Plane merging
From calib_camera.hpp: merge the point clouds of voxel planes that belong to the same physical plane. The criterion is that the two plane normals are nearly parallel and each plane's centroid lies very close to the other plane.
/**
 * @brief Merge planes
 * @param origin_list original plane list
 * @param merge_list merged plane list
 */
void mergePlane(std::vector<Plane *> &origin_list, std::vector<Plane *> &merge_list) {
  for (size_t i = 0; i < origin_list.size(); i++)
    origin_list[i]->id = 0;
  int current_id = 1;
  for (auto iter = origin_list.end() - 1; iter != origin_list.begin(); iter--) {
    for (auto iter2 = origin_list.begin(); iter2 != iter; iter2++) {
      // NOTE: compute the difference and the sum of the two normals (to handle opposite
      // orientations), plus the point-to-plane distance of each centroid to the other
      // plane; if the normals align and both distances are small, the two planes
      // are very likely the same plane
      Eigen::Vector3d normal_diff = (*iter)->normal - (*iter2)->normal;
      Eigen::Vector3d normal_add = (*iter)->normal + (*iter2)->normal;
      double dis1 = fabs((*iter)->normal(0) * (*iter2)->center(0) +
                         (*iter)->normal(1) * (*iter2)->center(1) +
                         (*iter)->normal(2) * (*iter2)->center(2) + (*iter)->d);
      double dis2 = fabs((*iter2)->normal(0) * (*iter)->center(0) +
                         (*iter2)->normal(1) * (*iter)->center(1) +
                         (*iter2)->normal(2) * (*iter)->center(2) + (*iter2)->d);
      if (normal_diff.norm() < 0.2 || normal_add.norm() < 0.2)
        if (dis1 < 0.05 && dis2 < 0.05) {
          if ((*iter)->id == 0 && (*iter2)->id == 0) {
            (*iter)->id = current_id;
            (*iter2)->id = current_id;
            current_id++;
          } else if ((*iter)->id == 0 && (*iter2)->id != 0)
            (*iter)->id = (*iter2)->id;
          else if ((*iter)->id != 0 && (*iter2)->id == 0)
            (*iter2)->id = (*iter)->id;
        }
    }
  }
  .....
}
2. Edge Line Extraction
2.1 Extracting point-cloud edge lines
First, a point on the edge line is computed from the normals and centroids of the two planes. Let $c_1$ and $c_2$ be the centroids of planes 1 and 2, and $n_1$ and $n_2$ their normals. The direction of the intersection line of the two planes follows directly from the cross product, $d = n_1 \times n_2$. A point $p$ on the intersection line can then be obtained from the following three equations:

$$\begin{aligned} n_1\cdot(p-c_1)&=0\quad&(1)\\ n_2\cdot(p-c_2)&=0\quad&(2)\\ d\cdot(p-c_1)&=0\quad&(3) \end{aligned}$$

Equation (1) states that the vector from $c_1$ to the edge-line point is perpendicular to $n_1$; Equation (2) is analogous; Equation (3) states that the vector from $c_1$ to the edge-line point is perpendicular to the line direction. Rearranging gives:

$$A=\begin{bmatrix} n_1^{T}\\ d^{T}\\ n_2^{T} \end{bmatrix}\quad(4)\qquad b=\begin{bmatrix} n_1\cdot c_1\\ d\cdot c_1\\ n_2\cdot c_2 \end{bmatrix}\quad(5)\qquad A\,x=b\quad(6)$$

Solving (6) by QR decomposition yields a point $O$ on the edge line such that the vector from $c_1$ to $O$ is perpendicular to the line direction, i.e., $O$ is the foot of the perpendicular from $c_1$ to the line. Additionally:

$$\begin{aligned} &((c_2-O)\cdot d)\,d\quad&(7)\\ &(c_2-O)-((c_2-O)\cdot d)\,d\quad&(8) \end{aligned}$$

Expression (7) is the vector from $O$ to the projection of $c_2$ onto the intersection line; expression (8) is the component of $c_2-O$ perpendicular to the intersection line, whose norm is the distance from $c_2$ to the line.
The code is shown below:
void projectLine(const Plane *plane1, const Plane *plane2,
                 std::vector<Eigen::Vector3d> &line_point) {
  float theta = plane1->normal.dot(plane2->normal);
  // theta is the cosine of the angle between the two normals; it must lie within the
  // configured band, i.e. the angle between the planes must be large enough
  if (!(theta > theta_max_ && theta < theta_min_)) return;
  // std::cout << "theta:" << theta << std::endl;
  // std::cout << "theta_max_" << theta_max_ << " theta_min_" << theta_min_ << std::endl;
  Eigen::Vector3d c1 = plane1->center;
  Eigen::Vector3d c2 = plane2->center;
  Eigen::Vector3d n1 = plane1->normal;
  Eigen::Vector3d n2 = plane2->normal;
  Eigen::Matrix3d A;
  Eigen::Vector3d d = n1.cross(n2).normalized();
  A.row(0) = n1.transpose();
  A.row(1) = d.transpose();
  A.row(2) = n2.transpose();
  // NOTE: encodes the three constraints n1.(p - c1) = 0, n2.(p - c2) = 0, d.(p - c1) = 0:
  // the vector from the solution point to c1 is perpendicular to n1, the vector to c2
  // is perpendicular to n2, and the vector to c1 is perpendicular to the line
  // direction, so the solution is a point on the plane intersection line
  Eigen::Vector3d b(n1.dot(c1), d.dot(c1), n2.dot(c2));
  Eigen::Vector3d O = A.colPivHouseholderQr().solve(b);
  double c1_to_line = (c1 - O).norm();
  // NOTE: the solve constrained the vector from O to c1 to be perpendicular to d,
  // but not the one to c2, hence the explicit perpendicular component here
  double c2_to_line = ((c2 - O) - (c2 - O).dot(d) * d).norm();
  if (c1_to_line / c2_to_line > 8 || c2_to_line / c1_to_line > 8) return;
  if (plane1->points_size < plane2->points_size)
    for (auto pt : plane1->plane_points) {
      Eigen::Vector3d p = (pt - O).dot(d) * d + O;
      line_point.push_back(p);
    }
  else
    for (auto pt : plane2->plane_points) {
      Eigen::Vector3d p = (pt - O).dot(d) * d + O;
      line_point.push_back(p);
    }
  return;
}
Further processing: for each point on the projected intersection line, check the 5 points in the voxel that are nearest to it, and store the qualifying line points in the corresponding containers.
/**
 * @brief Extract edge points of the planes in the voxel map
 * @param surf_map
 */
void estimate_edge(std::unordered_map<VOXEL_LOC, OCTO_TREE_ROOT *> &surf_map) {
  // ros::Rate loop(500);
  lidar_edge_clouds = pcl::PointCloud<pcl::PointXYZI>::Ptr(new pcl::PointCloud<pcl::PointXYZI>);
  for (auto iter = surf_map.begin(); iter != surf_map.end(); iter++) {
    std::vector<Plane *> plane_list;
    std::vector<Plane *> merge_plane_list;
    iter->second->get_plane_list(plane_list);
    if (plane_list.size() > 1) {
      pcl::KdTreeFLANN<pcl::PointXYZI> kd_tree;
      pcl::PointCloud<pcl::PointXYZI> input_cloud;
      // put all points of the current voxel into a kd-tree for fast nearest-neighbor search
      for (auto pv : iter->second->all_points) {
        pcl::PointXYZI p;
        p.x = pv(0);
        p.y = pv(1);
        p.z = pv(2);
        input_cloud.push_back(p);
      }
      kd_tree.setInputCloud(input_cloud.makeShared());
      mergePlane(plane_list, merge_plane_list);
      if (merge_plane_list.size() <= 1) continue;
#ifdef DEBUG
      for (auto plane : merge_plane_list) {
        static int i = 0;
        pcl::PointCloud<pcl::PointXYZRGB> color_cloud;
        std::vector<unsigned int> colors;
        colors.push_back(static_cast<unsigned int>(rand() % 255));
        colors.push_back(static_cast<unsigned int>(rand() % 255));
        colors.push_back(static_cast<unsigned int>(rand() % 255));
        for (auto pv : plane->plane_points) {
          pcl::PointXYZRGB pi;
          pi.x = pv[0];
          pi.y = pv[1];
          pi.z = pv[2];
          pi.r = colors[0];
          pi.g = colors[1];
          pi.b = colors[2];
          color_cloud.points.push_back(pi);
        }
        pcl::io::savePCDFile("merge_plane_" + std::to_string(i++) + ".pcd", color_cloud);
      }
#endif
      for (size_t p1_index = 0; p1_index < merge_plane_list.size() - 1; p1_index++)
        for (size_t p2_index = p1_index + 1; p2_index < merge_plane_list.size(); p2_index++) {
          std::vector<Eigen::Vector3d> line_point;
          // compute the intersection line between the two planes
          projectLine(merge_plane_list[p1_index], merge_plane_list[p2_index], line_point);
          if (line_point.size() == 0) break;
          pcl::PointCloud<pcl::PointXYZI> line_cloud;
          for (size_t j = 0; j < line_point.size(); j++) {
            pcl::PointXYZI p;
            p.x = line_point[j][0];
            p.y = line_point[j][1];
            p.z = line_point[j][2];
            int K = 5;
            // two vectors holding the neighbor indices and squared distances
            std::vector<int> pointIdxNKNSearch(K);
            std::vector<float> pointNKNSquaredDistance(K);
            if (kd_tree.nearestKSearch(p, K, pointIdxNKNSearch, pointNKNSquaredDistance) == K) {
              Eigen::Vector3d tmp(input_cloud.points[pointIdxNKNSearch[K - 1]].x,
                                  input_cloud.points[pointIdxNKNSearch[K - 1]].y,
                                  input_cloud.points[pointIdxNKNSearch[K - 1]].z);
              // if(pointNKNSquaredDistance[K-1] < 0.01)
              if ((tmp - line_point[j]).norm() < 0.05) {
                line_cloud.points.push_back(p);
                lidar_edge_clouds->points.push_back(p);
              }
            }
          }
        }
    }
  }
}
2.2 Extracting image edge lines
This is implemented mainly with OpenCV's cv::Canny(), with little extra processing. A few points worth noting:
...
// Gaussian blur: reduce noise and fine detail in the image while preserving edges
cv::GaussianBlur(src_img[a], src_img[a], cv::Size(gaussian_size, gaussian_size), 0, 0);
cv::Mat canny_result = cv::Mat::zeros(src_img[a].rows, src_img[a].cols, CV_8UC1);
// pixels below the low threshold are rejected as non-edges;
// pixels above the high threshold are accepted as edges;
// pixels in between are accepted only if they are adjacent to an accepted edge pixel.
cv::Canny(src_img[a], canny_result, canny_threshold, canny_threshold * 3, 3, true);
...
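The post does not show how the Canny mask becomes the 2-D edge cloud (cam_edge_clouds_2d) consumed by buildVPnp below; a plausible minimal sketch (my own, using the same y-negation convention seen there) would be:

// Hypothetical sketch: collect non-zero Canny pixels into a 2-D edge cloud.
// The y coordinate is negated to match the convention used in buildVPnp.
pcl::PointCloud<pcl::PointXYZ>::Ptr edge_cloud(new pcl::PointCloud<pcl::PointXYZ>);
for (int y = 0; y < canny_result.rows; y++)
  for (int x = 0; x < canny_result.cols; x++)
    if (canny_result.at<uchar>(y, x) > 0) {
      pcl::PointXYZ p;
      p.x = x;
      p.y = -y;
      p.z = 0;
      edge_cloud->points.push_back(p);
    }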
3. Building Match Pairs
The process of building match pairs was already described in Section II above, so it is not repeated here.
void buildVPnp(const Camera &cam,
               const Vector6d &extrinsic_params, const int dis_threshold,
               const bool show_residual,
               const std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> &cam_edge_clouds_2d,
               const pcl::PointCloud<pcl::PointXYZI>::Ptr &lidar_edge_clouds_3d,
               std::vector<VPnPData> &pnp_list) {
  pnp_list.clear();
  cv::Mat camera_matrix = (cv::Mat_<double>(3, 3)
      << cam.fx_, cam.s_, cam.cx_, 0.0, cam.fy_, cam.cy_, 0.0, 0.0, 1.0);
  cv::Mat distortion_coeff =
      (cv::Mat_<double>(1, 5) << cam.k1_, cam.k2_, cam.p1_, cam.p2_, cam.k3_);
  Eigen::AngleAxisd rotation_vector3;
  rotation_vector3 =
      Eigen::AngleAxisd(extrinsic_params[0], Eigen::Vector3d::UnitZ()) *
      Eigen::AngleAxisd(extrinsic_params[1], Eigen::Vector3d::UnitY()) *
      Eigen::AngleAxisd(extrinsic_params[2], Eigen::Vector3d::UnitX());
  Eigen::Quaterniond q_(rotation_vector3);
  for (size_t a = 0; a < base_poses.size(); a += 1) // for each camera pose
  {
    std::vector<std::vector<std::vector<pcl::PointXYZI>>> img_pts_container;
    for (int y = 0; y < cam.height_; y++) {
      std::vector<std::vector<pcl::PointXYZI>> row_pts_container;
      for (int x = 0; x < cam.width_; x++) {
        std::vector<pcl::PointXYZI> col_pts_container;
        row_pts_container.push_back(col_pts_container);
      }
      img_pts_container.push_back(row_pts_container);
    }
    std::vector<cv::Point3f> pts_3d;
    std::vector<cv::Point2f> pts_2d;
    cv::Mat r_vec = (cv::Mat_<double>(3, 1)
        << rotation_vector3.angle() * rotation_vector3.axis().transpose()[0],
        rotation_vector3.angle() * rotation_vector3.axis().transpose()[1],
        rotation_vector3.angle() * rotation_vector3.axis().transpose()[2]);
    Eigen::Vector3d t_(extrinsic_params[3], extrinsic_params[4], extrinsic_params[5]);
    cv::Mat t_vec = (cv::Mat_<double>(3, 1) << t_(0), t_(1), t_(2));
    for (size_t i = 0; i < lidar_edge_clouds_3d->size(); i++) {
      pcl::PointXYZI point_3d = lidar_edge_clouds_3d->points[i];
      Eigen::Vector3d pt1(point_3d.x, point_3d.y, point_3d.z);
      Eigen::Vector3d pt2(0, 0, 1);
      Eigen::Vector3d pt;
      pt = base_poses[a].q.inverse() * (pt1 - base_poses[a].t); // back into the base LiDAR frame
      // NOTE: transform the point into the camera frame; if the angle between its viewing
      // ray and the camera boresight (0, 0, 1) is small, the point is inside the camera FoV
      if (cos_angle(q_ * pt + t_, pt2) > 0.8) // FoV check
        pts_3d.emplace_back(cv::Point3f(pt(0), pt(1), pt(2)));
    }
    // project the LiDAR points onto the image
    cv::projectPoints(pts_3d, r_vec, t_vec, camera_matrix, distortion_coeff, pts_2d);
    pcl::PointCloud<pcl::PointXYZ>::Ptr line_edge_cloud_2d(new pcl::PointCloud<pcl::PointXYZ>);
    std::vector<int> line_edge_cloud_2d_number;
    for (size_t i = 0; i < pts_2d.size(); i++) {
      pcl::PointXYZ p;
      p.x = pts_2d[i].x;
      p.y = -pts_2d[i].y;
      p.z = 0;
      pcl::PointXYZI pi_3d;
      pi_3d.x = pts_3d[i].x;
      pi_3d.y = pts_3d[i].y;
      pi_3d.z = pts_3d[i].z;
      pi_3d.intensity = 1;
      // check whether the projection falls inside the image
      if (p.x > 0 && p.x < cam.width_ && pts_2d[i].y > 0 && pts_2d[i].y < cam.height_) {
        if (img_pts_container[pts_2d[i].y][pts_2d[i].x].size() == 0) {
          line_edge_cloud_2d->points.push_back(p);
          img_pts_container[pts_2d[i].y][pts_2d[i].x].push_back(pi_3d);
        } else
          img_pts_container[pts_2d[i].y][pts_2d[i].x].push_back(pi_3d);
      }
    }
    if (show_residual)
      if (a == 16) {
        cv::Mat residual_img = getConnectImg(
            cam, dis_threshold, cam_edge_clouds_2d[a], line_edge_cloud_2d);
        std::string img_name = std::to_string(a);
        cv::imshow(img_name, residual_img);
        cv::waitKey(10);
      }
    pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree_cam(new pcl::search::KdTree<pcl::PointXYZ>());
    pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree_lidar(new pcl::search::KdTree<pcl::PointXYZ>());
    pcl::PointCloud<pcl::PointXYZ>::Ptr search_cloud =
        pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr tree_cloud_cam =
        pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr tree_cloud_lidar =
        pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>);
    kdtree_cam->setInputCloud(cam_edge_clouds_2d[a]);
    kdtree_lidar->setInputCloud(line_edge_cloud_2d);
    tree_cloud_cam = cam_edge_clouds_2d[a];
    tree_cloud_lidar = line_edge_cloud_2d;
    search_cloud = line_edge_cloud_2d;
    int K = 5; // number of nearest neighbors
    // two vectors holding the neighbor indices and squared distances
    std::vector<int> pointIdxNKNSearch(K);
    std::vector<float> pointNKNSquaredDistance(K);
    std::vector<int> pointIdxNKNSearchLidar(K);
    std::vector<float> pointNKNSquaredDistanceLidar(K);
    std::vector<cv::Point2d> lidar_2d_list;
    std::vector<cv::Point2d> img_2d_list;
    std::vector<Eigen::Vector2d> camera_direction_list;
    std::vector<Eigen::Vector2d> lidar_direction_list;
    std::vector<int> lidar_2d_number;
    for (size_t i = 0; i < search_cloud->points.size(); i++) {
      pcl::PointXYZ searchPoint = search_cloud->points[i];
      // search neighbors among the projected LiDAR points themselves, mainly to
      // estimate the direction of the line this point lies on
      kdtree_lidar->nearestKSearch(searchPoint, K, pointIdxNKNSearchLidar,
                                   pointNKNSquaredDistanceLidar);
      if (kdtree_cam->nearestKSearch(searchPoint, K, pointIdxNKNSearch, pointNKNSquaredDistance) > 0) {
        bool dis_check = true;
        // if any of the 5 nearest image edge points exceeds the distance threshold,
        // the LiDAR point is considered too far from the image edge and is dropped
        for (int j = 0; j < K; j++) {
          float distance =
              sqrt(pow(searchPoint.x - tree_cloud_cam->points[pointIdxNKNSearch[j]].x, 2) +
                   pow(searchPoint.y - tree_cloud_cam->points[pointIdxNKNSearch[j]].y, 2));
          if (distance > dis_threshold) dis_check = false;
        }
        if (dis_check) {
          cv::Point p_l_2d(search_cloud->points[i].x, -search_cloud->points[i].y);
          cv::Point p_c_2d(tree_cloud_cam->points[pointIdxNKNSearch[0]].x,
                           -tree_cloud_cam->points[pointIdxNKNSearch[0]].y);
          Eigen::Vector2d direction_cam(0, 0);
          std::vector<Eigen::Vector2d> points_cam;
          for (size_t i = 0; i < pointIdxNKNSearch.size(); i++) {
            Eigen::Vector2d p(tree_cloud_cam->points[pointIdxNKNSearch[i]].x,
                              -tree_cloud_cam->points[pointIdxNKNSearch[i]].y);
            points_cam.push_back(p);
          }
          // compute the distribution direction of the points, i.e. the line direction;
          // analogous to estimating a plane normal from covariance eigenvalues
          calcDirection(points_cam, direction_cam);
          Eigen::Vector2d direction_lidar(0, 0);
          std::vector<Eigen::Vector2d> points_lidar;
          for (size_t i = 0; i < pointIdxNKNSearch.size(); i++) {
            Eigen::Vector2d p(tree_cloud_lidar->points[pointIdxNKNSearchLidar[i]].x,
                              -tree_cloud_lidar->points[pointIdxNKNSearchLidar[i]].y);
            points_lidar.push_back(p);
          }
          calcDirection(points_lidar, direction_lidar);
          if (p_l_2d.x > 0 && p_l_2d.x < cam.width_ && p_l_2d.y > 0 &&
              p_l_2d.y < cam.height_) {
            lidar_2d_list.push_back(p_l_2d);
            img_2d_list.push_back(p_c_2d);
            camera_direction_list.push_back(direction_cam);
            lidar_direction_list.push_back(direction_lidar);
          }
        }
      }
    }
    for (size_t i = 0; i < lidar_2d_list.size(); i++) {
      int y = lidar_2d_list[i].y;
      int x = lidar_2d_list[i].x;
      int pixel_points_size = img_pts_container[y][x].size();
      if (pixel_points_size > 0) {
        VPnPData pnp;
        pnp.x = 0;
        pnp.y = 0;
        pnp.z = 0;
        pnp.u = img_2d_list[i].x;
        pnp.v = img_2d_list[i].y;
        // NOTE: image resolution is finite; especially with a low-resolution camera,
        // several 3-D points may fall on the same pixel after projection, so average them
        for (int j = 0; j < pixel_points_size; j++) {
          pnp.x += img_pts_container[y][x][j].x;
          pnp.y += img_pts_container[y][x][j].y;
          pnp.z += img_pts_container[y][x][j].z;
        }
        pnp.x = pnp.x / pixel_points_size;
        pnp.y = pnp.y / pixel_points_size;
        pnp.z = pnp.z / pixel_points_size;
        pnp.direction = camera_direction_list[i];
        pnp.direction_lidar = lidar_direction_list[i];
        pnp.number = 0;
        float theta = pnp.direction.dot(pnp.direction_lidar);
        // check that the angle between the two directions satisfies the condition
        if (theta > direction_theta_min_ || theta < direction_theta_max_)
          pnp_list.push_back(pnp);
      }
    }
  }
}
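calcDirection() is called above but not shown in this post; a minimal sketch of what it presumably does (2-D PCA over the neighboring edge points, taking the largest-eigenvalue eigenvector as the line direction; the actual repo code may normalize or orient differently) is:

#include <Eigen/Dense>
#include <vector>
// Hypothetical reconstruction of calcDirection: fit the dominant direction of a
// set of 2-D edge points by PCA. Note this keeps the largest-eigenvalue
// eigenvector (the line direction), whereas Eq. (8) keeps the smallest one
// (the edge normal).
void calcDirection(const std::vector<Eigen::Vector2d> &points, Eigen::Vector2d &direction) {
  Eigen::Vector2d mean = Eigen::Vector2d::Zero();
  for (const auto &p : points) mean += p;
  mean /= static_cast<double>(points.size());
  Eigen::Matrix2d S = Eigen::Matrix2d::Zero();
  for (const auto &p : points) S += (p - mean) * (p - mean).transpose();
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> saes(S);
  direction = saes.eigenvectors().col(1); // eigenvalues ascending; col(1) = largest
}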
4. Optimization
4.1 Minimizing the reprojection error
Section I mentioned that the camera calibration problem can be expressed as:

$$\mathcal{E}_{C}^{*}=\arg\min_{\mathcal{E}_{C}}\sum_{i}\sum_{\mathbf{I}_{l,j}\in\mathcal{I}_{i}}\left(\mathbf{n}_{i,l,j}^{T}\left({}^{\mathbf{I}_{l,j}}\mathbf{p}_{i}-\mathbf{q}_{i,l,j}\right)\right)\tag{1}$$

I have not fully understood this expression yet and will revisit it later. Fortunately, the code implementation differs from this formulation: it optimizes the point-to-line distance, the same way the edge-feature problem is formulated in BALM, as detailed below.
Note that the edge-point residual (shown in a figure from BALM, not reproduced here) can be transformed as follows:

$$(I-nn^{T})(p_{f_i}-q)=(p_{f_i}-q)-nn^{T}(p_{f_i}-q)$$

where $nn^{T}(p_{f_i}-q)$ is the projection of the vector $p_{f_i}-q$ onto the (unit) vector $n$; subtracting it keeps only the component of $p_{f_i}-q$ perpendicular to $n$. With $n$ the line direction, that remaining component is the point-to-line offset. Substituting the variables of this paper into the expression yields:

$$\mathcal{E}_{C}^{*}=\arg\min_{\mathcal{E}_{C}}\sum_{i}\sum_{\mathbf{I}_{l,j}\in\mathcal{I}_{i}}\left(\left(I-\mathbf{n}_{i,l,j}\mathbf{n}_{i,l,j}^{T}\right)\left({}^{\mathbf{I}_{l,j}}\mathbf{p}_{i}-\mathbf{q}_{i,l,j}\right)\right)\tag{2}$$

where, as in the code (pd.direction), $\mathbf{n}_{i,l,j}$ now denotes the fitted direction of the image edge line rather than its normal.
The following code, from calib_camera.cpp, builds the Ceres cost function:
class vpnp_calib {
public:
  vpnp_calib(VPnPData p) { pd = p; }
  template <typename T>
  bool operator()(const T *_q, const T *_t, T *residuals) const {
    Eigen::Matrix<T, 3, 3> innerT = inner.cast<T>();
    Eigen::Matrix<T, 4, 1> distorT = distor.cast<T>();
    Eigen::Quaternion<T> q_incre{_q[3], _q[0], _q[1], _q[2]};
    Eigen::Matrix<T, 3, 1> t_incre{_t[0], _t[1], _t[2]};
    Eigen::Matrix<T, 3, 1> p_l(T(pd.x), T(pd.y), T(pd.z));
    Eigen::Matrix<T, 3, 1> p_c = q_incre.toRotationMatrix() * p_l + t_incre; // transform the LiDAR point into the camera frame
    Eigen::Matrix<T, 3, 1> p_2 = innerT * p_c;
    T uo = p_2[0] / p_2[2];
    T vo = p_2[1] / p_2[2]; // normalized pixel coordinates
    const T &fx = innerT.coeffRef(0, 0);
    const T &cx = innerT.coeffRef(0, 2);
    const T &fy = innerT.coeffRef(1, 1);
    const T &cy = innerT.coeffRef(1, 2);
    T xo = (uo - cx) / fx;
    T yo = (vo - cy) / fy;
    T r2 = xo * xo + yo * yo;
    T r4 = r2 * r2;
    T distortion = 1.0 + distorT[0] * r2 + distorT[1] * r4;
    T xd = xo * distortion + (distorT[2] * xo * yo + distorT[2] * xo * yo) +
           distorT[3] * (r2 + xo * xo + xo * xo);
    T yd = yo * distortion + distorT[3] * xo * yo + distorT[3] * xo * yo +
           distorT[2] * (r2 + yo * yo + yo * yo);
    T ud = fx * xd + cx;
    T vd = fy * yd + cy; // apply the distortion on top of the pinhole projection
    if (T(pd.direction(0)) == T(0.0) && T(pd.direction(1)) == T(0.0)) {
      residuals[0] = ud - T(pd.u);
      residuals[1] = vd - T(pd.v);
    } else {
      residuals[0] = ud - T(pd.u);
      residuals[1] = vd - T(pd.v);
      // NOTE: this uses the error formulation from BALM rather than the one from MLCC:
      // build (I - nn^T)({}^{I_{l,j}}p_i - q_{i,l,j}), the point-to-line offset,
      // and minimize that distance
      Eigen::Matrix<T, 2, 2> I = Eigen::Matrix<float, 2, 2>::Identity().cast<T>();
      Eigen::Matrix<T, 2, 1> n = pd.direction.cast<T>();
      Eigen::Matrix<T, 1, 2> nt = pd.direction.transpose().cast<T>();
      Eigen::Matrix<T, 2, 2> V = n * nt;
      V = I - V;
      Eigen::Matrix<T, 2, 1> R = Eigen::Matrix<float, 2, 1>::Zero().cast<T>();
      R.coeffRef(0, 0) = residuals[0];
      R.coeffRef(1, 0) = residuals[1];
      R = V * R;
      residuals[0] = R.coeffRef(0, 0);
      residuals[1] = R.coeffRef(1, 0);
    }
    return true;
  }
  static ceres::CostFunction *Create(VPnPData p) {
    return (new ceres::AutoDiffCostFunction<vpnp_calib, 2, 4, 3>(new vpnp_calib(p)));
  }

private:
  VPnPData pd;
};
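A minimal sketch (mine, not the repo's exact code) of wiring this cost into a Ceres problem, assuming pnp_list from buildVPnp, an initial guess (R_init, t_init), and a Ceres version that still provides LocalParameterization (pre-2.2):

// Parameter blocks: the quaternion is stored as [x, y, z, w] (Eigen memory order),
// matching the q_incre construction in the functor above.
Eigen::Quaterniond q(R_init);
double ext_q[4] = {q.x(), q.y(), q.z(), q.w()};
double ext_t[3] = {t_init(0), t_init(1), t_init(2)};
ceres::Problem problem;
problem.AddParameterBlock(ext_q, 4, new ceres::EigenQuaternionParameterization());
problem.AddParameterBlock(ext_t, 3);
for (const auto &pd : pnp_list)
  problem.AddResidualBlock(vpnp_calib::Create(pd), nullptr, ext_q, ext_t);
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);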
4.2 Rough calibration
The code also implements a rough calibration. It works by perturbing the initial parameters and checking whether the number of match pairs grows; if it does, the image-to-point-cloud registration is considered more accurate and the parameters are updated, otherwise the next candidate value is tried.
void roughCalib(Calibration &calibra, double search_resolution, int max_iter) {
  float match_dis = 20;
  Eigen::Vector3d fix_adjust_euler(0, 0, 0);
  std::cout << "roughCalib ...." << std::endl;
  for (int n = 0; n < 2; n++) { // two rounds
    for (int round = 0; round < 3; round++) { // adjust the three rotation angles in turn
      for (size_t a = 0; a < calibra.cams.size(); a++) { // optimize every camera
        Eigen::Matrix3d rot = calibra.cams[a].ext_R;
        Vector3d transation = calibra.cams[a].ext_t;
        float min_cost = 1000;
        for (int iter = 0; iter < max_iter; iter++) {
          Eigen::Vector3d adjust_euler = fix_adjust_euler;
          // alternate plus/minus steps around the current value
          adjust_euler[round] = fix_adjust_euler[round] + pow(-1, iter) * int(iter / 2) * search_resolution;
          Eigen::Matrix3d adjust_rotation_matrix;
          adjust_rotation_matrix =
              Eigen::AngleAxisd(adjust_euler[0], Eigen::Vector3d::UnitZ()) *
              Eigen::AngleAxisd(adjust_euler[1], Eigen::Vector3d::UnitY()) *
              Eigen::AngleAxisd(adjust_euler[2], Eigen::Vector3d::UnitX());
          Eigen::Matrix3d test_rot = rot * adjust_rotation_matrix;
          Eigen::Vector3d test_euler = test_rot.eulerAngles(2, 1, 0);
          Vector6d test_params;
          test_params
              << test_euler[0], test_euler[1], test_euler[2], transation[0], transation[1], transation[2];
          std::vector<VPnPData> pnp_list;
          // build the match pairs
          calibra.buildVPnp(calibra.cams[a], test_params, match_dis,
                            false, calibra.cams[a].rgb_edge_clouds,
                            calibra.lidar_edge_clouds, pnp_list);
          int edge_size = calibra.lidar_edge_clouds->size();
          int pnp_size = pnp_list.size();
          float cost = ((float)(edge_size - pnp_size) / (float)edge_size);
#ifdef DEBUG
          std::cout << "n " << n << " round " << round << " a " << a << " iter "
                    << iter << " cost:" << cost << std::endl;
#endif
          // NOTE: edge_size is fixed, so a smaller cost means a larger pnp_list,
          // i.e. a better registration
          if (cost < min_cost) {
            std::cout << "cost :" << cost << "; edge size : " << calibra.lidar_edge_clouds->size()
                      << "; pnp_list: " << pnp_list.size() << std::endl;
            min_cost = cost;
            Eigen::Matrix3d rot;
            rot = Eigen::AngleAxisd(test_params[0], Eigen::Vector3d::UnitZ()) *
                  Eigen::AngleAxisd(test_params[1], Eigen::Vector3d::UnitY()) *
                  Eigen::AngleAxisd(test_params[2], Eigen::Vector3d::UnitX());
            // update the parameters
            calibra.cams[a].update_Rt(rot, transation);
            calibra.buildVPnp(calibra.cams[a], test_params, match_dis,
                              true, calibra.cams[a].rgb_edge_clouds,
                              calibra.lidar_edge_clouds, pnp_list);
          }
        }
      }
    }
  }
}