by luoshi006
References:
https://blog.youkuaiyun.com/heyijia0327/article/details/50774104
https://blog.youkuaiyun.com/heyijia0327/article/details/51083398
https://github.com/uzh-rpg/rpg_svo/issues/62
https://blog.youkuaiyun.com/kokerf/article/details/72844455
https://github.com/gaoxiang12/slambook/blob/master/ch13/dense_monocular/dense_mapping.cpp#L240
Depth estimation in the depth filter proceeds as follows:
- In the ref frame, select a feature point;
- In the cur frame, locate the corresponding point by epipolar search and patch matching (ZSSD);
- Triangulate to compute the depth and its variance;
- Fuse the depth measurement into the filter, repeating until the estimate converges.
For solving the depth, the author Forster gives three approaches:
- SVO: least-squares depth solution in the depth_filter update;
- rpg_vikit: least-squares solution for the 3D point coordinates;
- REMODE: 3D point coordinates solved via Cramer's rule.
Solution 1: depth solution in SVO's depth_filter.
Known: $R = R_{cr}$, $t = t_{cr}$; the normalized coordinates $f_r$ of the feature in the ref frame and $f_c$ of the feature in the cur frame;
Find: the depths $d_r = depth_{ref}$ and $d_c = depth_{cur}$, given that the two features correspond to the same 3D point.
Since $d_c f_c = d_r R f_r + t$, rearranging gives the overdetermined linear system $A \begin{bmatrix} d_r \\ -d_c \end{bmatrix} = -t$ with $A = [R f_r,\ f_c]$, whose least-squares solution is $x = -(A^{T}A)^{-1}A^{T}t$:
bool depthFromTriangulation(
    const SE3& T_search_ref,
    const Vector3d& f_ref,
    const Vector3d& f_cur,
    double& depth)
{
  // A = [R*f_ref, f_cur]; solve A*[d_r, -d_c]^T = -t in the least-squares sense
  Matrix<double,3,2> A;
  A << T_search_ref.rotation_matrix() * f_ref, f_cur;
  const Matrix2d AtA = A.transpose()*A;
  if(AtA.determinant() < 0.000001)  // rays are (nearly) parallel
    return false;
  const Vector2d depth2 = - AtA.inverse()*A.transpose()*T_search_ref.translation();
  depth = fabs(depth2[0]);  // depth of the feature in the reference frame
  return true;
}
Solution 2: 3D point solution in rpg_vikit (SVO's helper library).
Known: $R = R_{12}$, $t = t_{12}$; the normalized feature coordinates $feature1$ and $feature2$;
Find: the 3D point coordinates.
https://github.com/uzh-rpg/rpg_vikit/blob/master/vikit_common/src/math_utils.cpp#L15
Vector3d
triangulateFeatureNonLin(const Matrix3d& R, const Vector3d& t,
                         const Vector3d& feature1, const Vector3d& feature2)
{
  Vector3d f2 = R * feature2;         // second bearing vector, rotated into frame 1
  Vector2d b;
  b[0] = t.dot(feature1);
  b[1] = t.dot(f2);
  Matrix2d A;
  A(0,0) = feature1.dot(feature1);
  A(1,0) = feature1.dot(f2);
  A(0,1) = -A(1,0);
  A(1,1) = -f2.dot(f2);
  Vector2d lambda = A.inverse() * b;  // depths along the two rays
  Vector3d xm = lambda[0] * feature1; // closest point on ray 1
  Vector3d xn = t + lambda[1] * f2;   // closest point on ray 2
  return ( xm + xn )/2;               // midpoint between the two rays
}
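The $2\times 2$ system in triangulateFeatureNonLin is the classical midpoint method. Writing $\hat f_2 = R\,feature2$, one seeks the ray parameters $\lambda_1, \lambda_2$ minimizing the gap between the two rays:

$$\min_{\lambda_1,\lambda_2} \left\| \lambda_1 f_1 - \left( t + \lambda_2 \hat f_2 \right) \right\|^2$$

Setting the derivatives with respect to $\lambda_1$ and $\lambda_2$ to zero yields exactly the system built in the code,

$$\begin{bmatrix} f_1\cdot f_1 & -f_1\cdot \hat f_2 \\ f_1\cdot \hat f_2 & -\hat f_2\cdot \hat f_2 \end{bmatrix} \begin{bmatrix} \lambda_1 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} t\cdot f_1 \\ t\cdot \hat f_2 \end{bmatrix},$$

and the returned point is the midpoint $\frac{1}{2}\left( \lambda_1 f_1 + t + \lambda_2 \hat f_2 \right)$ of the closest points on the two rays.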
Solution 3: 3D point solution in SVO-REMODE, using Cramer's rule:
https://github.com/uzh-rpg/rpg_open_remode/blob/master/src/triangulation.cu#L27
// Returns 3D point in reference frame
// Non-linear formulation (ref. to the book 'Autonomous Mobile Robots')
__device__ __forceinline__
float3 triangulatenNonLin(
    const float3 &bearing_vector_ref,
    const float3 &bearing_vector_curr,
    const SE3<float> &T_ref_curr)
{
  const float3 t = T_ref_curr.getTranslation();
  float3 f2 = T_ref_curr.rotate(bearing_vector_curr);
  const float2 b = make_float2(dot(t, bearing_vector_ref),
                               dot(t, f2));
  // Row-major 2x2 matrix; same system as in rpg_vikit
  float A[2*2];
  A[0] = dot(bearing_vector_ref, bearing_vector_ref);
  A[2] = dot(bearing_vector_ref, f2);
  A[1] = -A[2];
  A[3] = dot(-f2, f2);
  // Solve A * lambda = b by Cramer's rule (explicit 2x2 inverse)
  const float2 lambdavec = make_float2(A[3]*b.x - A[1]*b.y,
                                       -A[2]*b.x + A[0]*b.y) / (A[0]*A[3] - A[1]*A[2]);
  const float3 xm = lambdavec.x * bearing_vector_ref;
  const float3 xn = t + lambdavec.y * f2;
  return (xm + xn)/2.0f;
}