【OpenCV】solvePnPRansac / solvePnP: Computing Extrinsic Parameters and Solving for Camera Pose

This article introduces the role of PnP (Perspective-n-Point) in SLAM, focusing on OpenCV's solvePnPRansac function and its parameters. Examples demonstrate P3P and EPnP for estimating camera pose, and strategies for outlier handling and robustness are discussed.


1. Background and Purpose

PnP (Perspective-n-Point): 2D–3D correspondence, solving for camera pose.

PnP (Perspective-n-Point) is the problem of computing the camera pose from the 3D positions of n points and their 2D projections. This is the most common situation in SLAM, since there is always a set of map points and matched pixels waiting to be used.
There are many ways to solve PnP: direct linear transform (DLT), P3P, EPnP, UPnP, and so on; the flags of OpenCV's solvePnP cover all the practical ones. In addition, linear solutions to PnP are generally considered to perform poorly under noise, so a solution from EPnP or a similar method is usually taken as the initial value for a Bundle Adjustment (BA) optimization.

2. The solvePnPRansac Function

A drawback of solvePnP is that it is not robust to outliers: when locating real-world points with a camera, some correspondences may be mismatched. solvePnPRansac wraps the solver in a RANSAC loop to reject such mismatches.

bool cv::solvePnPRansac( InputArray  objectPoints,
                         InputArray  imagePoints,
                         InputArray  cameraMatrix,
                         InputArray  distCoeffs,
                         OutputArray rvec,
                         OutputArray tvec,
                         bool        useExtrinsicGuess = false,
                         int         iterationsCount = 100,
                         float       reprojectionError = 8.0,
                         double      confidence = 0.99,
                         OutputArray inliers = noArray(),
                         int         flags = SOLVEPNP_ITERATIVE );
The parameters of solvePnPRansac are largely the same as those of solvePnP:

[in]  objectPoints       3D reference points in the world coordinate frame; float or double
[in]  imagePoints        corresponding 2D points on the camera image plane; float or double
[in]  cameraMatrix       camera intrinsic matrix
[in]  distCoeffs         camera distortion coefficients
[out] rvec               output rotation vector
[out] tvec               output translation vector
[in]  useExtrinsicGuess  if the iterative PnP solver is used, the supplied rvec/tvec serve as the initial guess (true); otherwise an analytic solution provides the initial value (false)
[in]  iterationsCount    number of RANSAC iterations. This is only an initial value; the function refines it internally from the estimated outlier ratio, so a large starting value is fine.
[in]  reprojectionError  RANSAC distance threshold separating inliers from outliers. It can be derived from the estimated inlier ratio and the per-point standard deviation (assuming Gaussian errors).
[in]  confidence         related to the number of samples (iterations): the probability that, drawing s points from the n samples N times, at least one draw consists entirely of inliers.
[out] inliers            output sequence of inlier indices, in matrix form
[in]  flags              solver used on the minimal subset:
                         SOLVEPNP_ITERATIVE  (EPnP on the minimal set, then an iterative refinement on the selected inliers)
                         SOLVEPNP_P3P        (P3P on the minimal set only, then EPnP on the selected inliers)
                         SOLVEPNP_AP3P       (AP3P on the minimal set only, then EPnP on the selected inliers)
                         SOLVEPNP_EPNP       (EPnP both on the minimal set and on the selected inliers)

// Test: P3P pose estimation
#include <iostream>
#include <stdio.h>
#include <fstream>
 
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
 
void generate3DPointCloud(std::vector<cv::Point3f>& points, cv::Point3f pmin = cv::Point3f(-1, -1, 5), cv::Point3f pmax = cv::Point3f(1, 1, 10))
{
	cv::RNG rng = cv::theRNG(); // fix the seed to use "fixed" input 3D points
 
	for (size_t i = 0; i < points.size(); i++)
	{
		float _x = rng.uniform(pmin.x, pmax.x);
		float _y = rng.uniform(pmin.y, pmax.y);
		float _z = rng.uniform(pmin.z, pmax.z);
		points[i] = cv::Point3f(_x, _y, _z);
	}
}
 
void generateCameraMatrix(cv::Mat& cameraMatrix, cv::RNG& rng)
{
	const double fcMinVal = 1e-3;
	const double fcMaxVal = 100;
	cameraMatrix.create(3, 3, CV_64FC1);
	cameraMatrix.setTo(cv::Scalar(0));
	cameraMatrix.at<double>(0, 0) = rng.uniform(fcMinVal, fcMaxVal);
	cameraMatrix.at<double>(1, 1) = rng.uniform(fcMinVal, fcMaxVal);
	cameraMatrix.at<double>(0, 2) = rng.uniform(fcMinVal, fcMaxVal);
	cameraMatrix.at<double>(1, 2) = rng.uniform(fcMinVal, fcMaxVal);
	cameraMatrix.at<double>(2, 2) = 1;
}
 
void generateDistCoeffs(cv::Mat& distCoeffs, cv::RNG& rng)
{
	distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1);
	for (int i = 0; i < 3; i++)
		distCoeffs.at<double>(i, 0) = rng.uniform(0.0, 1.0e-6);
}
 
void generatePose(cv::Mat& rvec, cv::Mat& tvec, cv::RNG& rng)
{
	const double minVal = 1.0e-3;
	const double maxVal = 1.0;
	rvec.create(3, 1, CV_64FC1);
	tvec.create(3, 1, CV_64FC1);
	for (int i = 0; i < 3; i++)
	{
		rvec.at<double>(i, 0) = rng.uniform(minVal, maxVal);
		tvec.at<double>(i, 0) = rng.uniform(minVal, maxVal / 10);
	}
}
 
 
int main_p3p()
{
	std::vector<cv::Point3f> points;
	points.resize(500);
	generate3DPointCloud(points);
 
 
	std::vector<cv::Mat> rvecs, tvecs;
	cv::Mat trueRvec, trueTvec;
	cv::Mat intrinsics, distCoeffs;
 
	cv::RNG rng0; // a named object: a temporary cannot bind to the RNG& parameters
	generateCameraMatrix(intrinsics, rng0);
	generateDistCoeffs(distCoeffs, rng0);

	generatePose(trueRvec, trueTvec, rng0);
 
	std::vector<cv::Point3f> opoints;
	opoints = std::vector<cv::Point3f>(points.begin(), points.begin() + 3);
 
	std::vector<cv::Point2f> projectedPoints;
	projectedPoints.resize(opoints.size());
	projectPoints(cv::Mat(opoints), trueRvec, trueTvec, intrinsics, distCoeffs, projectedPoints);
 
	std::cout << "intrinsics: " << intrinsics << std::endl;
	std::cout << "distcoeffs: " << distCoeffs << std::endl;
	std::cout << "trueRvec: " << trueRvec << std::endl;
	std::cout << "trueTvec: " << trueTvec << std::endl;
 
	std::cout << "oPoint: " << opoints << std::endl;
	std::cout << "projectedPoints: " << projectedPoints << std::endl;
	
 
 
	//std::cout << "result numbers A: :" << solveP3P(opoints, projectedPoints, intrinsics, distCoeffs, rvecs, tvecs, cv::SOLVEPNP_AP3P) << std::endl;
	std::cout << "result numbers: :" << solveP3P(opoints, projectedPoints, intrinsics, distCoeffs, rvecs, tvecs, cv::SOLVEPNP_P3P) << std::endl;
 
	bool isTestSuccess = false;
	double error = DBL_MAX;
	for (unsigned int i = 0; i < rvecs.size() /*&& !isTestSuccess*/; ++i)
	{
		double rvecDiff = norm(rvecs[i], trueRvec, cv::NORM_L2);
		double tvecDiff = norm(tvecs[i], trueTvec, cv::NORM_L2);
		isTestSuccess = rvecDiff < 1.0e-4 && tvecDiff < 1.0e-4;
		error = std::min(error, std::max(rvecDiff, tvecDiff));
		std::cout << "i: " << i << std::endl;
		std::cout << "error: " << error << std::endl;
		std::cout << "rvec: " << rvecs[i] << std::endl;
 
	}
 
	system("pause");
	return 0;
}
int main()
{
	std::vector<cv::Point3f> points;
	points.resize(500);
	generate3DPointCloud(points);
 
 
	std::vector<cv::Mat> rvecs, tvecs;
	cv::Mat trueRvec, trueTvec;
	cv::Mat intrinsics, distCoeffs;
 
	cv::RNG rng0; // a named object: a temporary cannot bind to the RNG& parameters
	generateCameraMatrix(intrinsics, rng0);
	generateDistCoeffs(distCoeffs, rng0);

	generatePose(trueRvec, trueTvec, rng0);
 
	std::vector<cv::Point3f> opoints;
	//opoints = std::vector<cv::Point3f>(points.begin(), points.begin() + 4);
	opoints = std::vector<cv::Point3f>(points.begin(), points.end());
 
	std::vector<cv::Point2f> projectedPoints;
	projectedPoints.resize(opoints.size());
	projectPoints(cv::Mat(opoints), trueRvec, trueTvec, intrinsics, distCoeffs, projectedPoints);
 
	// Inject mismatches: replace every 100th projection with a random other
	// point's projection, simulating outlier correspondences.
	cv::RNG rng = cv::RNG();
	for (size_t i = 0; i < projectedPoints.size(); i++)
	{
		if (i % 100 == 0)
		{
			projectedPoints[i] = projectedPoints[rng.uniform(0, (int)points.size() - 1)];
		}
	}
 
	std::cout << "intrinsics: " << intrinsics << std::endl;
	std::cout << "distcoeffs: " << distCoeffs << std::endl;
	std::cout << "trueRvec: " << trueRvec << std::endl;
	std::cout << "trueTvec: " << trueTvec << std::endl;
 
	//std::cout << "oPoint: " << opoints << std::endl;
	//std::cout << "projectedPoints: " << projectedPoints << std::endl;
 
 
 
	//solveP3P(opoints, projectedPoints, intrinsics, distCoeffs, rvecs, tvecs, cv::SOLVEPNP_AP3P);
	cv::Mat rvec, tvec;
	solvePnP(opoints, projectedPoints, intrinsics, distCoeffs, rvec, tvec, false, cv::SOLVEPNP_EPNP); // CV_EPNP is the legacy macro, removed in OpenCV 4
 
 
	std::cout << rvec << "---" << norm(rvec - trueRvec) << std::endl;
	std::cout << tvec << std::endl;
 
 
	std::cout << "---------------Ransac--------------------" << std::endl;
	solvePnPRansac(opoints, projectedPoints, intrinsics, distCoeffs, rvec, tvec, false, 100, 2);
	std::cout << rvec << "---" << norm(rvec - trueRvec) << std::endl;
	std::cout << tvec << std::endl;
	system("pause");
}

3. solvePnP

bool solvePnP( InputArray objectPoints, InputArray imagePoints,
                            InputArray cameraMatrix, InputArray distCoeffs,
                            OutputArray rvec, OutputArray tvec,
                            bool useExtrinsicGuess = false, int flags = SOLVEPNP_ITERATIVE );
Parameter notes:

objectPoints       std::vector<cv::Point3d>   3D coordinates of the feature points
imagePoints        std::vector<cv::Point2d>   image coordinates of the feature points; at least 4 point pairs are required
cameraMatrix       cv::Mat(3, 3, CV_32FC1)    camera intrinsics, a 3x3 float matrix
distCoeffs         cv::Mat(1, 5, CV_32FC1)    camera distortion coefficients
rvec               output rotation vector
tvec               output translation vector
useExtrinsicGuess  usually false (no initial guess supplied)
flags              solution method

The important flags options are:

SOLVEPNP_ITERATIVE = 0,
SOLVEPNP_EPNP      = 1, //!< EPnP: Efficient Perspective-n-Point Camera Pose Estimation @cite lepetit2009epnp
SOLVEPNP_P3P       = 2, //!< Complete Solution Classification for the Perspective-Three-Point Problem
SOLVEPNP_DLS       = 3, //!< A Direct Least-Squares (DLS) Method for PnP @cite hesch2011direct
SOLVEPNP_UPNP      = 4, //!< Exhaustive Linearization for Robust Camera Pose and Focal Length Estimation
SOLVEPNP_AP3P      = 5, //!< An Efficient Algebraic Solution to the Perspective-Three-Point Problem
SOLVEPNP_MAX_COUNT      //!< Used for count

For standard feature points lying on a common plane, SOLVEPNP_ITERATIVE is the main method to use.

Keep in mind: although OpenCV generally favors float, in solvePnP only the camera intrinsic matrix and the distortion coefficients may be float; all other matrices must be double, otherwise the results will be incorrect.

4. Camera Pose Estimation with OpenCvSharp

To estimate camera pose from matched image features, the idea is to infer the camera's position and orientation from the 2D projections of known 3D points: given a set of 3D points in a predefined world coordinate frame and their corresponding 2D image coordinates, solve for the extrinsic parameters.

Preparation: install the library and import the namespace:

```csharp
using Cv = OpenCvSharp;
```

Feature extraction and matching: use SIFT or ORB to obtain keypoint descriptors and establish correspondences between the two images. Each matched pair $(p_i^w, p_j^{im})$ consists of $p_i^w = (X, Y, Z)$, the position of the $i$-th point in world coordinates, and $p_j^{im} = (u, v)$, its pixel coordinates in the current frame.

Solving PnP: once enough 2D–3D pairs have been collected, call `Cv2.SolvePnPRansac()` (or `Cv2.SolvePnP()` as below) for the final pose estimate. A sketch of the flow — the intrinsic matrix and distortion coefficients are placeholders, and the exact overload signatures should be checked against your OpenCvSharp version:

```csharp
// Assume a set of 3D points and their corresponding 2D image coordinates are available
var objectPoints = new List<Cv.Point3d>();
var imagePoints = new List<Cv.Point2d>();
// ... (fill objectPoints and imagePoints)

var rvec = new Cv.Mat(); // rotation vector
var tvec = new Cv.Mat(); // translation vector
Cv.Mat cameraMatrix = /* your intrinsic matrix */;
Cv.Mat distCoeffs = /* distortion coefficients */;

Cv.Cv2.SolvePnP(Cv.InputArray.Create(objectPoints), Cv.InputArray.Create(imagePoints),
                cameraMatrix, distCoeffs, rvec, tvec);

Console.WriteLine($"Rotation Vector:\n{rvec.Dump()}");
Console.WriteLine($"Translation Vector:\n{tvec.Dump()}");

// Convert the rotation vector to a rotation matrix for easier interpretation
var rotationMatrix = new Cv.Mat();
Cv.Cv2.Rodrigues(rvec, rotationMatrix);
Console.WriteLine($"Rotation Matrix:\n{rotationMatrix.Dump()}");
```