Camera models (from the OpenCV documentation)

This article walks through the projection process and parameters of the pinhole camera model, including the intrinsic matrix, the extrinsic matrix, and the distortion coefficients. It then introduces the fisheye model, explains the radial and tangential distortion produced by real lenses, and provides MATLAB verification code that demonstrates the model directly. Finally, it compares projection results under different conditions to illustrate the model's value in practical scenarios.


1> pinhole model:

The functions in this section use a so-called pinhole camera model. In this model, a scene view is formed by projecting 3D points into the image plane using a perspective transformation.

s \, m' = A [R|t] M'

or

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where:

  • (X, Y, Z) are the coordinates of a 3D point in the world coordinate space
  • (u, v) are the coordinates of the projection point in pixels
  • A is a camera matrix, or a matrix of intrinsic parameters
  • (cx, cy) is a principal point that is usually at the image center
  • fx, fy are the focal lengths expressed in pixel units.

Thus, if an image from the camera is scaled by a factor, all of these parameters should be scaled (multiplied/divided, respectively) by the same factor. The matrix of intrinsic parameters does not depend on the scene viewed. So, once estimated, it can be re-used as long as the focal length is fixed (in case of a zoom lens). The joint rotation-translation matrix [R|t] is called a matrix of extrinsic parameters. It is used to describe the camera motion around a static scene, or vice versa, rigid motion of an object in front of a still camera. That is, [R|t] translates coordinates of a point (X, Y, Z) to a coordinate system fixed with respect to the camera. The transformation above is equivalent to the following (when z \ne 0):

\begin{array}{l}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t \\
x' = x/z \\
y' = y/z \\
u = f_x \, x' + c_x \\
v = f_y \, y' + c_y
\end{array}
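The chain above can be sketched in a few lines of NumPy. This is a minimal illustration of the formulas, not OpenCV's own projectPoints; the function name and the example numbers are made up:

```python
import numpy as np

def project_pinhole(X, R, t, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates (pinhole, no distortion)."""
    Xc = R @ X + t                              # world -> camera frame
    xp, yp = Xc[0] / Xc[2], Xc[1] / Xc[2]       # perspective divide (requires z != 0)
    return fx * xp + cx, fy * yp + cy

# Example: camera at the origin looking down +Z, point 1 m in front of it
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
u, v = project_pinhole(np.array([0.1, 0.2, 0.0]), R, t,
                       fx=100.0, fy=100.0, cx=50.0, cy=50.0)
# → (60.0, 70.0)
```

Doubling fx, fy, cx, cy doubles the resulting pixel coordinates, which is exactly the scaling behavior of the intrinsic parameters described above.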

Real lenses usually have some distortion, mostly radial distortion and slight tangential distortion. So, the above model is extended as:

\begin{array}{l}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t \\
x' = x/z \\
y' = y/z \\
x'' = x' \, \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \\
y'' = y' \, \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \\
\text{where} \quad r^2 = x'^2 + y'^2 \\
u = f_x \, x'' + c_x \\
v = f_y \, y'' + c_y
\end{array}

k_1, k_2, k_3, k_4, k_5, and k_6 are radial distortion coefficients. p_1 and p_2 are tangential distortion coefficients. Higher-order coefficients are not considered in OpenCV. In the functions below, the coefficients are passed or returned as the

(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])

vector. That is, if the vector contains four elements, it means that k_3 = 0. The distortion coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera parameters. And they remain the same regardless of the captured image resolution. If, for example, a camera has been calibrated on images of 320x240 resolution, absolutely the same distortion coefficients can be used for 640x480 images from the same camera while f_x, f_y, c_x, and c_y need to be scaled appropriately.
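The distortion step alone, applied to the normalized coordinates (x', y') before the pixel conversion, might be sketched as follows. The function name and plain-tuple interface are invented for illustration:

```python
def apply_distortion(xp, yp, k1, k2, p1, p2, k3=0.0, k4=0.0, k5=0.0, k6=0.0):
    """Apply the rational radial + tangential distortion model
    to normalized image coordinates (x', y')."""
    r2 = xp * xp + yp * yp
    r4 = r2 * r2
    r6 = r4 * r2
    radial = (1 + k1 * r2 + k2 * r4 + k3 * r6) / (1 + k4 * r2 + k5 * r4 + k6 * r6)
    xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return xpp, ypp

# With all coefficients zero, the model reduces to the plain pinhole case:
apply_distortion(0.3, -0.2, 0, 0, 0, 0)   # → (0.3, -0.2)
```

With a negative k1 (barrel distortion) and the other coefficients zero, points move toward the principal point, as expected from the radial term.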

The functions below use the above model to do the following:

  • Project 3D points to the image plane given intrinsic and extrinsic parameters.
  • Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their projections.
  • Estimate intrinsic and extrinsic camera parameters from several views of a known calibration pattern (every view is described by several 3D-2D point correspondences).
  • Estimate the relative position and orientation of the stereo camera “heads” and compute the rectification transformation that makes the camera optical axes parallel.


2> fisheye model:

Definitions: Let P be a point in 3D with coordinates X in the world reference frame (stored in the matrix X). The coordinate vector of P in the camera reference frame is:


Xc = R X + T

where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om). Call x, y and z the 3 coordinates of Xc:


x = Xc_1, \quad y = Xc_2, \quad z = Xc_3

The pinhole projection coordinates of P are [a; b], where


a = x/z \quad \text{and} \quad b = y/z \\ r^2 = a^2 + b^2 \\ \theta = \mathrm{atan}(r)

Fisheye distortion:


\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)

The distorted point coordinates are [x'; y'], where

(The OpenCV documentation is somewhat unclear here; I have verified and corrected it as follows, with MATLAB verification code attached.)

x' = (theta_d / r) * a
y' = (theta_d / r) * b

(As r → 0, theta_d / r → 1, so the mapping remains well defined at the image center in the limit.)

Finally, conversion into pixel coordinates: the final pixel coordinate vector [u; v] is given by:


u = fx * x' + Cx;

v = fy * y' + Cy;
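Putting the whole fisheye chain together (rotation into the camera frame, pinhole projection, fisheye distortion, pixel conversion), a minimal NumPy sketch could look like the following. The function name is made up; this mirrors the formulas above rather than OpenCV's own fisheye API, and the r → 0 case is handled via the small-angle limit theta_d / r → 1:

```python
import numpy as np

def project_fisheye(X, R, t, K, fx, fy, cx, cy):
    """Project a 3D world point X with the fisheye model described above.
    K = (k1, k2, k3, k4) are the fisheye distortion coefficients."""
    Xc = R @ X + t                              # world -> camera frame
    a, b = Xc[0] / Xc[2], Xc[1] / Xc[2]         # pinhole projection
    r = np.hypot(a, b)
    theta = np.arctan(r)
    k1, k2, k3, k4 = K
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0   # theta_d/r -> 1 as r -> 0
    return fx * scale * a + cx, fy * scale * b + cy
```

With all coefficients zero, a point on the optical axis lands exactly at (cx, cy), and off-axis points land slightly closer to the principal point than in the pinhole case, since theta < r.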


3> other material

http://wenku.baidu.com/link?url=waSqjF9HJ4BGMGoeL4bLIntrZ24B48jXczOoYz7PBYkoqn8jxZ8HGL8STzvFVdnl8WWEgOg8tcVFoZ4jO_Izo907_DbvLktrpbyd4SQmBMO


http://wenku.baidu.com/view/580fa337ee06eff9aef807cc.html



OpenCV fisheye model verification (MATLAB):

clear
close all

R = [0.8988261790903926, 0.4188302467301371, 0.129200325873188;
  -0.4187798435070649, 0.9076282961426588, -0.02888457570005586;
  -0.1293636056005076, -0.02814427943910706, 0.9911977386379015];

t = [-0.402431, 0.0388337, 0.671309]';

A = [594.1656343384788, 0, 643.4646451030211;
      0, 593.6065468136707, 371.2638324096167;
      0, 0, 1];

K = [-0.04192856403922697;
     -0.002158383400516276;
      0.001463386066034605;
     -0.005204957317263106];

img_data = [ 327.005707, 401.706879, 382.578613, 368.528595, 447.612915, 331.631134, 521.767090, 291.437500, ...
             603.254089, 249.857986, 688.284241, 209.167130, 772.313904, 171.579849, 851.017456, 138.804169, ...
             921.380676, 111.622528, 982.589966, 89.692650, 355.885986, 474.680847, 413.861481, 445.651489, ...
             481.566345, 412.371521, 558.414246, 374.775757, 642.492310, 334.675598, 729.559509, 293.751709, ...
             814.828247, 254.507523, 893.690674, 218.945618, 963.500610, 187.922989, 1023.213501, 161.938385, ...
             389.184540, 547.380920, 449.031677, 523.005493, 518.651978, 494.009918, 597.481384, 460.122589, ...
             682.705994, 422.229462, 770.243408, 381.848572, 855.282410, 341.607635, 933.055847, 303.314911, ...
             1001.264832, 268.784271, 1059.156372, 238.558731, 424.892181, 617.114441, 486.681976, 597.320923, ...
             557.592102, 572.413391, 636.631287, 542.460144, 721.497192, 507.358459, 807.830017, 468.430420, ...
             891.032349, 427.681854, 966.609009, 387.922577, 1032.822144, 350.344391, 1088.560547, 316.416199 ]; 

obj_data = [ 0.000000, 0.000000,0.100000, 0.000000,0.200000, 0.000000,0.300000, 0.000000, ... 
             0.400000, 0.000000,0.500000, 0.000000,0.600000, 0.000000,0.700000, 0.000000, ...
             0.800000, 0.000000,0.900000, 0.000000,0.000000, 0.100000,0.100000, 0.100000, ...
             0.200000, 0.100000,0.300000, 0.100000,0.400000, 0.100000,0.500000, 0.100000, ...
             0.600000, 0.100000,0.700000, 0.100000,0.800000, 0.100000,0.900000, 0.100000, ...
             0.000000, 0.200000,0.100000, 0.200000,0.200000, 0.200000,0.300000, 0.200000, ...
             0.400000, 0.200000,0.500000, 0.200000,0.600000, 0.200000,0.700000, 0.200000, ...
             0.800000, 0.200000,0.900000, 0.200000,0.000000, 0.300000,0.100000, 0.300000, ...
             0.200000, 0.300000,0.300000, 0.300000,0.400000, 0.300000,0.500000, 0.300000, ...
             0.600000, 0.300000,0.700000, 0.300000,0.800000, 0.300000,0.900000, 0.300000];
         
%% reshape raw data into 2xN image points and 3xN object points

img_point = zeros(2, 40);
obj_point = zeros(3, 40);

for n = 1: 40
   img_point(1, n) = img_data(2*n - 1);
   img_point(2, n) = img_data(2*n);
   obj_point(1, n) = obj_data(2*n - 1);
   obj_point(2, n) = obj_data(2*n);
   obj_point(3, n) = 0.0;
end

figure(1); hold on;
plot3(obj_point(1,:), obj_point(2,:), obj_point(3,:), 'r*');
grid on;

figure(2); hold on;
plot(img_point(1, :), img_point(2, :), 'r*');
axis equal;

% transform object points into the camera reference frame (overwrites obj_point)
for n = 1: 40
    obj_point(:, n) =  R * obj_point(:, n) + t;
end

figure(1); hold on;
plot3(obj_point(1, :), obj_point(2, :), obj_point(3, :), 'b*');
axis equal;


%% with no distortion
temp = A*obj_point;

temp(1, :) = temp(1,:)./ temp(3,:); 
temp(2, :) = temp(2,:)./ temp(3,:); 
temp(3, :) = temp(3,:)./ temp(3,:); 

figure(2)
hold on;
plot(temp(1, :), temp(2, :), 'b*');
axis equal;


%% with distortion (fisheye model; every point in this data set has r > 0)
for n = 1:40
    a = obj_point(1, n) /obj_point(3, n);
    b = obj_point(2, n) /obj_point(3, n);
    
    r = sqrt(a^2 + b^2);
    
    theta = atan(r);
    
    theta_d = theta* (1 + K(1) * theta^2 + K(2) * theta^4 + K(3)*theta^6  + K(4)*theta^8);
    
    temp(1,n) = A(1,1)*(theta_d / r) * a + A(1,3); 
    temp(2,n) = A(2,2)*(theta_d / r) * b + A(2,3);
        
end

figure(2)
hold on;
plot(temp(1, :), temp(2, :), 'g*');
axis equal;
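As a quick sanity check of the corrected formulas, the first correspondence of the data above (object point (0, 0, 0) against image point (327.005707, 401.706879)) can be reprojected in plain Python using the same R, t, A and K values as the MATLAB script; the reprojection error comes out well under a pixel. This is a verification sketch, not production code:

```python
import math

# Values copied from the MATLAB script above
R = [[ 0.8988261790903926,  0.4188302467301371,   0.129200325873188 ],
     [-0.4187798435070649,  0.9076282961426588,  -0.02888457570005586],
     [-0.1293636056005076, -0.02814427943910706,  0.9911977386379015 ]]
t = [-0.402431, 0.0388337, 0.671309]
fx, fy = 594.1656343384788, 593.6065468136707
cx, cy = 643.4646451030211, 371.2638324096167
K = [-0.04192856403922697, -0.002158383400516276,
      0.001463386066034605, -0.005204957317263106]

X = [0.0, 0.0, 0.0]   # first object point
Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
a, b = Xc[0] / Xc[2], Xc[1] / Xc[2]
r = math.hypot(a, b)
theta = math.atan(r)
theta_d = theta * (1 + K[0] * theta**2 + K[1] * theta**4
                     + K[2] * theta**6 + K[3] * theta**8)
u = fx * (theta_d / r) * a + cx
v = fy * (theta_d / r) * b + cy

# Distance to the first measured image point: sub-pixel reprojection error
err = math.hypot(u - 327.005707, v - 401.706879)
print(err)
```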

3D points, before and after rotation and translation:


Blue points: projection without fisheye distortion; green: projection with fisheye distortion; red: feature points from the original image (the more closely the red and green points coincide, the more accurate the model).

Error comparison after zooming in:

