Introduction
This post introduces epipolar geometry and the fundamental matrix: how to estimate the matrix and what it is used for. Recovering a 3D model from a single 2D image is often difficult; a two-camera (or multi-camera) setup helps solve the 3D reconstruction problem. Epipolar geometry and the fundamental matrix answer the question: given corresponding points in two images, how do we recover the relative position and orientation of the two cameras?
Epipolar Geometry
1. Basic Concepts
The basic geometry of stereo imaging is epipolar geometry. The figure below is the classic illustration.
O1 and O2 are the optical centers of the two cameras (possibly one camera at two different times), and P is a point in space. The two white planes are the image planes (strictly following the optical path they should lie behind O1 and O2, on the opposite side from P; in computer vision the convention is to draw them in front to save space). p1 and p2 are the projections of P onto the two image planes, and e1, e2 are the intersections of the image planes with the line O1O2. O1O2 is the baseline, also called the direction of camera translation. In epipolar geometry, e1 and e2 are called the epipoles, the plane PO1O2 is the epipolar plane, and p1e1 and p2e2 are the epipolar lines. These are the three key concepts of epipolar geometry. Properties of the epipoles:
(1) They are the intersections of the image planes with the baseline O1O2. (2) e1 is the image of O2 as seen by camera 1 (and symmetrically for e2). (3) They are the vanishing points, in each image, of lines parallel to the baseline.
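Property (1) can be checked numerically: the epipole e1 is the right null vector of the fundamental matrix F (so F e1 = 0), and e2 is the left null vector (F^T e2 = 0). A minimal sketch with NumPy, using an arbitrary rank-2 matrix as a stand-in for a real fundamental matrix:

```python
import numpy as np

# Any rank-2 3x3 matrix can stand in for a fundamental matrix F.
F = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])  # rank 2

# The epipoles are the null vectors of F: F e1 = 0 and F^T e2 = 0.
U, S, Vt = np.linalg.svd(F)
e1 = Vt[-1]        # right null vector -> epipole in image 1
e2 = U[:, -1]      # left null vector  -> epipole in image 2

print(np.allclose(F @ e1, 0))    # True
print(np.allclose(F.T @ e2, 0))  # True
```

This is also how the epipoles are extracted from an estimated F in practice: the smallest singular value of a real fundamental matrix is (numerically) zero.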
The Fundamental Matrix
The role of the fundamental matrix: given the fundamental matrix F and the pixel coordinates p of a 3D point in one image, the epipolar constraint p'^T F p = 0 tells us where the corresponding point p' can lie in the other image, namely on the epipolar line F p. F encodes the relative pose of the two cameras together with their intrinsic parameters.
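The constraint can be verified on synthetic cameras (these projection matrices are illustrative, not tied to the demo below). F is built from two known projection matrices via the standard construction F = [e']_x P2 P1^+, where e' = P2 C1 is the epipole in image 2 and P1^+ is the pseudoinverse of P1; any true correspondence then satisfies p'^T F p = 0:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Two synthetic projection matrices (hypothetical cameras).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])       # reference camera [I|0]
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])  # small rotation about y
t = np.array([1.0, 0.0, 0.0])
P2 = np.hstack([R, t[:, None]])

# Standard construction: F = [e']_x P2 P1^+, with e' = P2 C1.
C1 = np.array([0, 0, 0, 1.0])                       # center of camera 1
e2 = P2 @ C1                                        # epipole in image 2
F = skew(e2) @ P2 @ np.linalg.pinv(P1)

# Project a 3D point into both images and test the epipolar constraint.
X = np.array([0.3, -0.2, 4.0, 1.0])
p1, p2 = P1 @ X, P2 @ X
print(abs(p2 @ F @ p1) < 1e-12)   # True: p'^T F p = 0
```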
In the figure above, take O1 as the origin, the optical axis as the z-axis, and the other two directions as the x- and y-axes; this gives a coordinate frame in which P, p1 (labeled p in the figure), and p2 (labeled p' in the figure) all have 3D coordinates. A frame can likewise be attached to O2. The transformation between the two frames is [R T]: a rotation R and a translation T map a point P1(x1, y1, z1) in the O1 frame to P2(x2, y2, z2) in the O2 frame.
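This pose [R T] is exactly what the essential matrix E = [T]_x R packages up: for normalized image coordinates (pixel coordinates with the intrinsics removed) the constraint is x2^T E x1 = 0. A small numeric sketch under an assumed R and T, not tied to the images in this post:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Assumed relative pose: P2 = R @ P1 + T maps O1 coordinates to O2 coordinates.
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
T = np.array([0.5, 0.1, 0.0])

# Essential matrix built from the pose.
E = skew(T) @ R

# A 3D point in the O1 frame, and the same physical point in the O2 frame.
P1 = np.array([0.4, -0.3, 5.0])
P2 = R @ P1 + T

# Normalized image coordinates (divide by depth z).
x1 = P1 / P1[2]
x2 = P2 / P2[2]

print(abs(x2 @ E @ x1) < 1e-12)  # True: the epipolar constraint holds
```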
Code
Demo: computing the projection matrices and the fundamental matrix
from PIL import Image
from numpy import *
from pylab import *
import numpy as np
# In[2]:
from PCV.geometry import homography, camera, sfm
from PCV.localdescriptors import sift
# In[5]:
root="C:\\Users\\DELL\\Desktop\\PCV\\Ex3\\"
# Read features
im1 = array(Image.open(root+'im1.jpg'))
sift.process_image(root+'im1.jpg', root+'im1.sift')
im2 = array(Image.open(root+'im2.jpg'))
sift.process_image(root+'im2.jpg', root+'im2.sift')
# In[6]:
l1, d1 = sift.read_features_from_file(root+'im1.sift')
l2, d2 = sift.read_features_from_file(root+'im2.sift')
# In[7]:
matches = sift.match_twosided(d1, d2)
# In[8]:
ndx = matches.nonzero()[0]
x1 = homography.make_homog(l1[ndx, :2].T)
ndx2 = [int(matches[i]) for i in ndx]
x2 = homography.make_homog(l2[ndx2, :2].T)
d1n = d1[ndx]
d2n = d2[ndx2]
x1n = x1.copy()
x2n = x2.copy()
# In[9]:
figure(figsize=(16,16))
sift.plot_matches(im1, im2, l1, l2, matches, True)
show()
# In[10]:
def F_from_ransac(x1, x2, model, maxiter=5000, match_threshold=1e-6):
    """ Robust estimation of a fundamental matrix F from point
    correspondences using RANSAC (ransac.py from
    http://www.scipy.org/Cookbook/RANSAC).
    input: x1, x2 (3*n arrays) points in hom. coordinates. """
    from PCV.tools import ransac
    data = np.vstack((x1, x2))
    d = 10  # 20 is the original
    # compute F and return with inlier index
    F, ransac_data = ransac.ransac(data.T, model, 8, maxiter, match_threshold, d, return_all=True)
    return F, ransac_data['inliers']
# In[11]:
# find F through RANSAC
model = sfm.RansacModel()
F, inliers = F_from_ransac(x1n, x2n, model, maxiter=5000, match_threshold=1e-3)
print(F)
# In[12]:
P1 = array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
P2 = sfm.compute_P_from_fundamental(F)
# In[13]:
print(P2)
print(F)
# In[14]:
# P2, F (1e-4, d=20)
# [[ -1.48067422e+00 1.14802177e+01 5.62878044e+02 4.74418238e+03]
# [ 1.24802182e+01 -9.67640761e+01 -4.74418113e+03 5.62856097e+02]
# [ 2.16588305e-02 3.69220292e-03 -1.04831621e+02 1.00000000e+00]]
# [[ -1.14890281e-07 4.55171451e-06 -2.63063628e-03]
# [ -1.26569570e-06 6.28095242e-07 2.03963649e-02]
# [ 1.25746499e-03 -2.19476910e-02 1.00000000e+00]]
# In[15]:
# triangulate inliers and remove points not in front of both cameras
X = sfm.triangulate(x1n[:, inliers], x2n[:, inliers], P1, P2)
# In[16]:
# plot the projection of X
cam1 = camera.Camera(P1)
cam2 = camera.Camera(P2)
x1p = cam1.project(X)
x2p = cam2.project(X)
# In[17]:
figure(figsize=(16, 16))
imj = sift.appendimages(im1, im2)
imj = vstack((imj, imj))
imshow(imj)
cols1 = im1.shape[1]
rows1 = im1.shape[0]
for i in range(len(x1p[0])):
    if (0 <= x1p[0][i] < cols1) and (0 <= x2p[0][i] < cols1) and (0 <= x1p[1][i] < rows1) and (0 <= x2p[1][i] < rows1):
        plot([x1p[0][i], x2p[0][i]+cols1], [x1p[1][i], x2p[1][i]], 'c')
axis('off')
show()
# In[18]:
d1p = d1n[inliers]
d2p = d2n[inliers]
# In[19]:
# Read features
im3 = array(Image.open(root+'im3.jpg'))
sift.process_image(root+'im3.jpg', root+'im3.sift')
l3, d3 = sift.read_features_from_file(root+'im3.sift')
# In[20]:
matches13 = sift.match_twosided(d1p, d3)
# In[21]:
ndx_13 = matches13.nonzero()[0]
x1_13 = homography.make_homog(x1p[:, ndx_13])
ndx2_13 = [int(matches13[i]) for i in ndx_13]
x3_13 = homography.make_homog(l3[ndx2_13, :2].T)
# In[22]:
figure(figsize=(16, 16))
imj = sift.appendimages(im1, im3)
imj = vstack((imj, imj))
imshow(imj)
cols1 = im1.shape[1]
rows1 = im1.shape[0]
for i in range(len(x1_13[0])):
    if (0 <= x1_13[0][i] < cols1) and (0 <= x3_13[0][i] < cols1) and (0 <= x1_13[1][i] < rows1) and (0 <= x3_13[1][i] < rows1):
        plot([x1_13[0][i], x3_13[0][i]+cols1], [x1_13[1][i], x3_13[1][i]], 'c')
axis('off')
show()
# In[23]:
P3 = sfm.compute_P(x3_13, X[:, ndx_13])
# In[24]:
print(P3)
# In[25]:
print(P1)
print(P2)
print(P3)
Results
Initial SIFT feature matching:
The resulting fundamental matrix:
The resulting projection matrices:
Epipolar line matching:
Projection matrix for image 3:
Summary
Outdoor images have large lighting variation and are affected by trees and reflections from mirrors and glass, so matching works less well outdoors than indoors. If the two images differ too much, the code can also raise an error:
In that case, increase the threshold (match_threshold) to make matching more tolerant of error; the "not enough matching points" situation then no longer occurs, but the accuracy of the estimated matrix decreases.