Feature Matching

This article shows how to perform feature matching with OpenCV using two approaches, the Brute-Force Matcher and the FLANN-based Matcher, and demonstrates matching with ORB and SIFT descriptors through worked examples.

http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html

Goal

In this chapter
  • We will see how to match features in one image with others.
  • We will use the Brute-Force matcher and FLANN matcher in OpenCV.

Basics of Brute-Force Matcher

The Brute-Force matcher is simple. It takes the descriptor of one feature in the first set and matches it against all features in the second set using some distance calculation, and the closest one is returned.

For the BF matcher, first we have to create the BFMatcher object using cv2.BFMatcher(). It takes two optional params. The first one is normType. It specifies the distance measurement to be used. By default it is cv2.NORM_L2, which is good for SIFT, SURF etc. (cv2.NORM_L1 is also available). For binary-string-based descriptors like ORB, BRIEF, BRISK etc., cv2.NORM_HAMMING should be used, which uses Hamming distance as the measurement. If ORB is used with WTA_K == 3 or 4, cv2.NORM_HAMMING2 should be used.
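
To make the normType choice concrete, here is a minimal sketch (not part of the original tutorial, and assuming the box.png image from the examples below is available) that measures the distance between two individual ORB descriptors with cv2.norm:

import cv2

img = cv2.imread('box.png', 0)
orb = cv2.ORB_create()
kp, des = orb.detectAndCompute(img, None)

# ORB descriptors are binary strings (uint8 rows), so Hamming distance counts
# the number of differing bits between two descriptors:
print(cv2.norm(des[0], des[1], cv2.NORM_HAMMING))
# For float descriptors such as SIFT or SURF, cv2.NORM_L2 (Euclidean distance)
# would be the appropriate measure instead.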

The second param is a boolean variable, crossCheck, which is false by default. If it is true, the matcher returns only those matches (i,j) such that the i-th descriptor in set A has the j-th descriptor in set B as its best match and vice versa. That is, the two features in both sets should match each other. It provides consistent results, and is a good alternative to the ratio test proposed by D. Lowe in the SIFT paper.

Once it is created, two important methods are BFMatcher.match() and BFMatcher.knnMatch(). The first returns the best match for each descriptor. The second returns the k best matches, where k is specified by the user; this is useful when we need to do additional work on the matches, such as the ratio test below.
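
As a minimal sketch (assuming des1 and des2 are ORB descriptor arrays computed as in the example below), the two methods are called like this:

# A cross-checked matcher returns one best DMatch per query descriptor.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
best = bf.match(des1, des2)               # flat list of DMatch objects

# For knnMatch with k > 1, leave crossCheck at its default (False) so that
# k neighbours can be returned for each query descriptor.
bf_knn = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = bf_knn.knnMatch(des1, des2, k=2)  # list of lists, up to k DMatch each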

Just as we used cv2.drawKeypoints() to draw keypoints, cv2.drawMatches() helps us to draw the matches. It stacks the two images horizontally and draws lines from the first image to the second showing the best matches. There is also cv2.drawMatchesKnn, which draws all the k best matches. If k=2, it will draw two match-lines for each keypoint, so we have to pass a mask if we want to draw matches selectively.

Let’s see one example for each of ORB and SIFT (both use different distance measurements).

Brute-Force Matching with ORB Descriptors

Here, we will see a simple example of how to match features between two images. In this case, I have a queryImage and a trainImage. We will try to find the queryImage in the trainImage using feature matching. (The images are /samples/c/box.png and /samples/c/box_in_scene.png.)

We are using ORB descriptors to match features. So let’s start with loading the images, finding descriptors, etc.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('box.png',0)          # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage

# Initiate ORB detector
orb = cv2.ORB_create()   # cv2.ORB() on the old OpenCV 2.4 API

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

Next we create a BFMatcher object with the distance measurement cv2.NORM_HAMMING (since we are using ORB) and with crossCheck switched on for better results. Then we use the Matcher.match() method to get the best matches between the two images. We sort them in ascending order of distance so that the best matches (with low distance) come to the front. Then we draw only the first 10 matches (just for the sake of visibility; you can increase the number as you like).

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)

# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10],None,flags=2)

plt.imshow(img3),plt.show()

Below is the result I got:

ORB Feature Matching with Brute-Force

What is this Matcher Object?

The result of the matches = bf.match(des1,des2) line is a list of DMatch objects. Each DMatch object has the following attributes (a short inspection sketch follows the list):

  • DMatch.distance - Distance between descriptors. The lower, the better it is.
  • DMatch.trainIdx - Index of the descriptor in train descriptors
  • DMatch.queryIdx - Index of the descriptor in query descriptors
  • DMatch.imgIdx - Index of the train image.
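
As a minimal sketch (assuming matches comes from bf.match(des1,des2) above), these attributes can be inspected directly:

# Print the attributes of the five closest matches.
for m in sorted(matches, key=lambda x: x.distance)[:5]:
    print(m.queryIdx, m.trainIdx, m.imgIdx, m.distance)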

Brute-Force Matching with SIFT Descriptors and Ratio Test

This time, we will use BFMatcher.knnMatch() to get the k best matches. In this example, we will take k=2 so that we can apply the ratio test explained by D. Lowe in his paper.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('box.png',0)          # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage

# Initiate SIFT detector
sift = cv2.SIFT_create()   # cv2.xfeatures2d.SIFT_create() on older builds with opencv-contrib

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])

# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)

plt.imshow(img3),plt.show()

See the result below:

SIFT Descriptor with ratio test

FLANN based Matcher

FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for fast nearest-neighbor search in large datasets and for high-dimensional features. It works faster than BFMatcher for large datasets. We will see a second example with the FLANN-based matcher.

For the FLANN-based matcher, we need to pass two dictionaries which specify the algorithm to be used, its related parameters, etc. The first one is IndexParams. For various algorithms, the information to be passed is explained in the FLANN docs. As a summary, for algorithms like SIFT, SURF etc., you can pass the following:

index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)

While using ORB, you can pass the following. The commented values are recommended as per the docs, but they didn’t provide the required results in some cases; other values worked fine:

index_params= dict(algorithm = FLANN_INDEX_LSH,
                   table_number = 6, # 12
                   key_size = 12,     # 20
                   multi_probe_level = 1) #2

The second dictionary is the SearchParams. It specifies the number of times the trees in the index should be recursively traversed. Higher values give better precision, but also take more time. If you want to change the value, pass search_params = dict(checks=100).
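
As a hedged sketch (not part of the original tutorial), the two dictionaries plug into cv2.FlannBasedMatcher like this for the ORB/LSH case described above:

# FLANN matcher for binary (ORB) descriptors using the LSH index.
FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,
                    key_size = 12,
                    multi_probe_level = 1)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
# des1, des2 would then be ORB descriptors, matched with flann.knnMatch(des1, des2, k=2).

Note that with the LSH index some query descriptors may receive fewer than k matches, so a ratio-test loop should check the length of each returned match list.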

With this information, we are good to go.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('box.png',0)          # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage

# Initiate SIFT detector
sift = cv2.SIFT_create()   # cv2.xfeatures2d.SIFT_create() on older builds with opencv-contrib

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)   # or pass empty dictionary

flann = cv2.FlannBasedMatcher(index_params,search_params)

matches = flann.knnMatch(des1,des2,k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]

draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)

img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)

plt.imshow(img3),plt.show()

See the result below:

FLANN based matching

Additional Resources

Exercises
