# Learning feature2D in OpenCV: feature point extraction and matching with the SIFT and SURF operators
## Overview
The earlier article SURF和SIFT算子实现特征点检测 briefly covered detecting feature points with the SIFT and SURF operators. Building on that detection step, SIFT and SURF can also be used to extract a descriptor for each feature point, and a matcher can then match the points between images. Concretely, we first detect keypoints with SurfFeatureDetector, then compute each keypoint's descriptor vector with SurfDescriptorExtractor, and finally match the keypoints with either BruteForceMatcher (brute-force matching) or FlannBasedMatcher (selective, approximate matching) -- see the difference between the two.
The experiments use OpenCV 2.4.0 + VS2008 + Win7. Note that in OpenCV 2.4.x, SurfFeatureDetector is declared in opencv2/nonfree/features2d.hpp, BruteForceMatcher in opencv2/legacy/legacy.hpp, and FlannBasedMatcher in opencv2/features2d/features2d.hpp.
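If it helps, here is a minimal compile-check sketch of that header layout. The `#pragma comment` library names follow the usual Windows naming scheme (opencv_<module>240.lib for 2.4.0); they are an assumption, not from the original article, so adjust them to your own build.

```cpp
// Minimal compile check for the OpenCV 2.4.x header layout described above.
// Assumption: default Windows library names opencv_<module>240.lib (adjust to your build).
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/features2d/features2d.hpp"   // FlannBasedMatcher
#include "opencv2/nonfree/features2d.hpp"      // SurfFeatureDetector, SurfDescriptorExtractor
#include "opencv2/legacy/legacy.hpp"           // BruteForceMatcher

#ifdef _MSC_VER
#pragma comment(lib, "opencv_core240.lib")
#pragma comment(lib, "opencv_highgui240.lib")
#pragma comment(lib, "opencv_flann240.lib")
#pragma comment(lib, "opencv_features2d240.lib")
#pragma comment(lib, "opencv_nonfree240.lib")
#pragma comment(lib, "opencv_legacy240.lib")
#endif

int main()
{
    cv::SurfFeatureDetector     detector( 400 );          // from nonfree/features2d.hpp
    cv::SurfDescriptorExtractor extractor;                // from nonfree/features2d.hpp
    cv::FlannBasedMatcher       flannMatcher;             // from features2d/features2d.hpp
    cv::BruteForceMatcher< cv::L2<float> > bfMatcher;     // from legacy/legacy.hpp
    return 0;
}
```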
## Brute-force matching

First we use BruteForceMatcher, the brute-force matching method. The code is as follows:
```cpp
/**
 * @brief  Detect keypoints with the SURF operator, extract their descriptors,
 *         and match them with brute-force matching
 * @note   SurfFeatureDetector + SurfDescriptorExtractor + BruteForceMatcher
 * @author holybin
 */
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/nonfree/features2d.hpp"      // SurfFeatureDetector / SurfDescriptorExtractor live here
#include "opencv2/legacy/legacy.hpp"           // BruteForceMatcher lives here
//#include "opencv2/features2d/features2d.hpp" // FlannBasedMatcher lives here
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    Mat src_1 = imread( "D:\\opencv_pic\\cat3d120.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    Mat src_2 = imread( "D:\\opencv_pic\\cat0.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    if( !src_1.data || !src_2.data )
    {
        cout << "Error reading images." << endl;
        return -1;
    }

    //-- Step 1: detect keypoints with the SURF detector
    SurfFeatureDetector detector( 400 );   // Hessian threshold = 400
    vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect( src_1, keypoints_1 );
    detector.detect( src_2, keypoints_2 );
    cout << "Keypoints in image 1: " << keypoints_1.size() << endl;
    cout << "Keypoints in image 2: " << keypoints_2.size() << endl;

    //-- Step 2: compute the SURF descriptor of each keypoint
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute( src_1, keypoints_1, descriptors_1 );
    extractor.compute( src_2, keypoints_2, descriptors_2 );

    //-- Step 3: match the descriptors by brute force (L2 distance)
    BruteForceMatcher< L2<float> > matcher;
    vector<DMatch> matches;
    matcher.match( descriptors_1, descriptors_2, matches );
    cout << "Number of matches: " << matches.size() << endl;

    //-- Draw all matches
    Mat matchImg;
    drawMatches( src_1, keypoints_1, src_2, keypoints_2, matches, matchImg );
    imshow( "matching result", matchImg );

    waitKey( 0 );
    return 0;
}
```

Experimental result:

*(figure: matching result, all SURF matches drawn)*

## FLANN matching

The brute-force result is not very good. Below we use FlannBasedMatcher for the matching instead and keep only the good matches. The code is as follows:

```cpp
/**
 * @brief  Detect keypoints with the SURF operator, extract their descriptors,
 *         and match them with FLANN-based matching
 * @note   SurfFeatureDetector + SurfDescriptorExtractor + FlannBasedMatcher
 * @author holybin
 */
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/nonfree/features2d.hpp"      // SurfFeatureDetector / SurfDescriptorExtractor live here
//#include "opencv2/legacy/legacy.hpp"         // BruteForceMatcher lives here
#include "opencv2/features2d/features2d.hpp"   // FlannBasedMatcher lives here
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    Mat src_1 = imread( "D:\\opencv_pic\\cat3d120.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    Mat src_2 = imread( "D:\\opencv_pic\\cat0.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    if( !src_1.data || !src_2.data )
    {
        cout << "Error reading images." << endl;
        return -1;
    }

    //-- Step 1: detect keypoints with the SURF detector
    SurfFeatureDetector detector( 400 );   // Hessian threshold = 400
    vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect( src_1, keypoints_1 );
    detector.detect( src_2, keypoints_2 );
    cout << "Keypoints in image 1: " << keypoints_1.size() << endl;
    cout << "Keypoints in image 2: " << keypoints_2.size() << endl;

    //-- Step 2: compute the SURF descriptor of each keypoint
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute( src_1, keypoints_1, descriptors_1 );
    extractor.compute( src_2, keypoints_2, descriptors_2 );

    //-- Step 3: match the descriptors with the FLANN-based matcher
    FlannBasedMatcher matcher;
    vector<DMatch> allMatches;
    matcher.match( descriptors_1, descriptors_2, allMatches );
    cout << "Number of raw matches: " << allMatches.size() << endl;

    //-- Compute the minimum and maximum distance over all matches
    double maxDist = 0, minDist = 100;
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = allMatches[i].distance;
        if( dist < minDist ) minDist = dist;
        if( dist > maxDist ) maxDist = dist;
    }
    printf(" max dist : %f \n", maxDist );
    printf(" min dist : %f \n", minDist );

    //-- Filter the matches and keep only the good ones
    //   (criterion used here: distance < 2 * minDist)
    vector<DMatch> goodMatches;
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( allMatches[i].distance < 2 * minDist )
            goodMatches.push_back( allMatches[i] );
    }
    cout << "Number of good matches: " << goodMatches.size() << endl;

    //-- Draw only the good matches
    Mat matchImg;
    drawMatches( src_1, keypoints_1, src_2, keypoints_2, goodMatches, matchImg,
                 Scalar::all(-1), Scalar::all(-1), vector<char>(),
                 DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );   // do not draw unmatched keypoints
    imshow( "matching result", matchImg );

    //-- Print the keypoint correspondences
    for( int i = 0; i < (int)goodMatches.size(); i++ )
        printf( "good match %d: keypoint_1 [%d] -- keypoint_2 [%d]\n",
                i, goodMatches[i].queryIdx, goodMatches[i].trainIdx );

    waitKey( 0 );
    return 0;
}
```

Experimental result:

*(figure: matching result after filtering, only good matches drawn)*

As the second result shows, filtering reduces the number of matched points from 49 to 33 and improves the matching accuracy. The same two experiments can of course be run with the SIFT operator instead: simply replace SurfFeatureDetector with SiftFeatureDetector and SurfDescriptorExtractor with SiftDescriptorExtractor.

## Extension

Building on FLANN matching, you can go one step further and use a perspective transform plus spatial mapping to locate a known object in the scene (object detection): use findHomography to estimate the transform from the matched keypoints, then use perspectiveTransform to map a set of points through it, as sketched below. For details, see the article OpenCV中feature2D学习——SIFT和SURF算法实现目标检测.
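To make the extension concrete, here is a minimal sketch of that homography step. It assumes the keypoints_1, keypoints_2, goodMatches and src_1 variables from the FLANN example above; the helper name locateObject and the RANSAC reprojection threshold of 3.0 are illustrative choices, not from the original article.

```cpp
// Sketch: estimate the perspective transform from the good FLANN matches and
// map the corners of the first image into the second one.
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"   // KeyPoint, DMatch
#include "opencv2/calib3d/calib3d.hpp"         // findHomography
using namespace cv;
using namespace std;

vector<Point2f> locateObject( const vector<KeyPoint>& keypoints_1,
                              const vector<KeyPoint>& keypoints_2,
                              const vector<DMatch>& goodMatches,
                              const Mat& src_1 )
{
    //-- Collect the coordinates of the matched keypoints in both images
    vector<Point2f> objPts, scenePts;
    for( size_t i = 0; i < goodMatches.size(); i++ )
    {
        objPts.push_back( keypoints_1[ goodMatches[i].queryIdx ].pt );
        scenePts.push_back( keypoints_2[ goodMatches[i].trainIdx ].pt );
    }

    //-- Estimate the perspective transform between the matched keypoints
    Mat H = findHomography( objPts, scenePts, CV_RANSAC, 3.0 );

    //-- Map the corners of the object image into the scene image
    vector<Point2f> objCorners(4), sceneCorners(4);
    objCorners[0] = Point2f( 0, 0 );
    objCorners[1] = Point2f( (float)src_1.cols, 0 );
    objCorners[2] = Point2f( (float)src_1.cols, (float)src_1.rows );
    objCorners[3] = Point2f( 0, (float)src_1.rows );
    perspectiveTransform( objCorners, sceneCorners, H );

    return sceneCorners;   // the object's outline in the scene image
}
```

The returned corners can then be drawn onto the match image with line() to outline the detected object, which is essentially what the referenced follow-up article does.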