Feature extraction and matching in OpenCV (1)

The steps for feature extraction and matching in OpenCV are:
1. Extract feature points
2. Generate descriptors for the feature points
3. Match the feature points

The corresponding OpenCV classes:
Extraction of image feature points - FeatureDetector
Feature point descriptor generation - DescriptorExtractor
Matching of feature points - DescriptorMatcher
(Concrete feature extraction, description, and matching algorithms are implemented as classes derived from these three base classes.)

Feature extraction: FeatureDetector, the abstract base class for 2D image feature extraction

/*
 * Abstract base class for 2D image feature detectors.
 */
class CV_EXPORTS_W FeatureDetector : public virtual Algorithm // derived from the Algorithm class
{
public:
    virtual ~FeatureDetector();

    /*
     * Detect keypoints in an image.
     * image        The image.
     * keypoints    The detected keypoints.
     * mask         Mask specifying where to look for keypoints (optional). Must be a char
     *              matrix with non-zero values in the region of interest.
     */
    CV_WRAP void detect( const Mat& image, CV_OUT vector<KeyPoint>& keypoints, const Mat& mask=Mat() ) const;

    /*
     * Detect keypoints in an image set.
     * images       Image collection.
     * keypoints    Collection of keypoints detected in the input images. keypoints[i] is the set of keypoints detected in images[i].
     * masks        Masks for image set. masks[i] is a mask for images[i].
     */
    void detect( const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints, const vector<Mat>& masks=vector<Mat>() ) const;

    ...
    // Create feature detector by detector name.
    CV_WRAP static Ptr<FeatureDetector> create( const string& detectorType ); // create a concrete detector by name, e.g. "ORB" -> OrbFeatureDetector, "FAST" -> FastFeatureDetector
    ...
};

Generating feature point descriptors: DescriptorExtractor
A keypoint's descriptor is a dense, fixed-dimensional vector, and the set of descriptors is stored in a Mat: each row is the descriptor of one keypoint, so the number of rows equals the number of extracted feature points and the number of columns equals the descriptor dimension.

/*
 * Abstract base class for computing descriptors for image keypoints.
 */
class CV_EXPORTS_W DescriptorExtractor : public virtual Algorithm
{
public:
    virtual ~DescriptorExtractor();

    /*
     * Compute the descriptors for a set of keypoints in an image.
     * image        The image.
     * keypoints    The input keypoints. Keypoints for which a descriptor cannot be computed are removed.
     * descriptors  Computed descriptors. Row i is the descriptor for keypoint i.
     */
    CV_WRAP void compute( const Mat& image, CV_OUT CV_IN_OUT vector<KeyPoint>& keypoints, CV_OUT Mat& descriptors ) const;

    /*
     * Compute the descriptors for a keypoints collection detected in image collection.
     * images       Image collection.
     * keypoints    Input keypoints collection. keypoints[i] is keypoints detected in images[i].
     *              Keypoints for which a descriptor cannot be computed are removed.
     * descriptors  Descriptor collection. descriptors[i] are descriptors computed for set keypoints[i].
     */
    void compute( const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints, vector<Mat>& descriptors ) const;

    ...
    CV_WRAP static Ptr<DescriptorExtractor> create( const string& descriptorExtractorType );
    // create a concrete extractor by name, e.g. "ORB" -> OrbDescriptorExtractor, "BRIEF" -> BriefDescriptorExtractor
    ...
};

Feature point matching: DescriptorMatcher
This is the base class for matching keypoint descriptors. It matches descriptors either between two images, or between one image and an image set. The two main concrete matchers are BFMatcher and FlannBasedMatcher.

/*
 * Abstract base class for matching two sets of descriptors.
 */
class CV_EXPORTS_W DescriptorMatcher : public Algorithm
{
public:
    virtual ~DescriptorMatcher();

    ...
    /*
     * Group of methods to match descriptors from image pair.
     * Method train() is run in these methods.
     */
    // Find one best match for each query descriptor (if mask is empty).
    CV_WRAP void match( const Mat& queryDescriptors, const Mat& trainDescriptors,
                CV_OUT vector<DMatch>& matches, const Mat& mask=Mat() ) const;
    // Find k best matches for each query descriptor (in increasing order of distances).
    // compactResult is used when mask is not empty. If compactResult is false matches
    // vector will have the same size as queryDescriptors rows. If compactResult is true
    // matches vector will not contain matches for fully masked out query descriptors.
    CV_WRAP void knnMatch( const Mat& queryDescriptors, const Mat& trainDescriptors,
                   CV_OUT vector<vector<DMatch> >& matches, int k,
                   const Mat& mask=Mat(), bool compactResult=false ) const;
    // Find best matches for each query descriptor which have distance less than
    // maxDistance (in increasing order of distances).
    void radiusMatch( const Mat& queryDescriptors, const Mat& trainDescriptors,
                      vector<vector<DMatch> >& matches, float maxDistance,
                      const Mat& mask=Mat(), bool compactResult=false ) const;
    ...
    CV_WRAP static Ptr<DescriptorMatcher> create( const string& descriptorMatcherType );
    ...
};

The DMatch struct encapsulates the properties of a pair of matched descriptors:
the query and train descriptor indices,
the index of the training image,
and the distance between the two descriptors.

//Struct for matching: query descriptor index, train descriptor index, train image index and distance between descriptors.
struct CV_EXPORTS_W_SIMPLE DMatch
{
    ...
    CV_PROP_RW int queryIdx; // query descriptor index
    CV_PROP_RW int trainIdx; // train descriptor index
    CV_PROP_RW int imgIdx;   // train image index

    CV_PROP_RW float distance;
    ...
};

class CV_EXPORTS_W BFMatcher : public DescriptorMatcher
{
public:
    CV_WRAP BFMatcher( int normType=NORM_L2, bool crossCheck=false );
    // normType: NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2
    //   SIFT, SURF: NORM_L1 or NORM_L2 is commonly used
    //   ORB, BRISK, BRIEF: NORM_HAMMING is commonly used
    //   NORM_HAMMING2 is for ORB when the constructor parameter WTA_K is 3 or 4
    // crossCheck:
    //   false - find the k nearest matches for each query descriptor
    //   true  - return only consistent pairs (i matches j and j matches i)
    ...
};
template<class Distance>
class CV_EXPORTS BruteForceMatcher : public BFMatcher // legacy wrapper kept for compatibility
{
public:
    BruteForceMatcher( Distance d = Distance() ) : BFMatcher(Distance::normType, false) {(void)d;}
    virtual ~BruteForceMatcher() {}
};

// Instantiate a brute-force matcher:
BruteForceMatcher<L2<float> > matcher;

class CV_EXPORTS_W FlannBasedMatcher : public DescriptorMatcher // nearest-neighbour search for matching
{
    // FLANN-based matching finds approximate nearest neighbours: it returns a good
    // match rather than guaranteeing the best one. Tuning FlannBasedMatcher's
    // parameters trades matching accuracy against speed.
};

/******************************************************************/

// Abstract base class for simultaneous 2D feature detection and descriptor extraction.
class CV_EXPORTS_W Feature2D : public FeatureDetector, public DescriptorExtractor
{
    ...
    // Feature2D derives from both FeatureDetector and DescriptorExtractor.
};

// ORB implementation -- declared in features2d.hpp as follows:
class CV_EXPORTS_W ORB : public Feature2D
{
public:
    // the size of the signature in bytes
    enum { kBytes = 32, HARRIS_SCORE=0, FAST_SCORE=1 };

    CV_WRAP explicit ORB(int nfeatures = 500, float scaleFactor = 1.2f, int nlevels = 8, int edgeThreshold = 31,
        int firstLevel = 0, int WTA_K=2, int scoreType=ORB::HARRIS_SCORE, int patchSize=31 );

    // returns the descriptor size in bytes
    int descriptorSize() const;
    // returns the descriptor type
    int descriptorType() const;

    // Compute the ORB features and descriptors on an image
    void operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints) const;

    // Compute the ORB features and descriptors on an image
    void operator()( InputArray image, InputArray mask, vector<KeyPoint>& keypoints,
                     OutputArray descriptors, bool useProvidedKeypoints=false ) const;

    AlgorithmInfo* info() const;

protected:
    void computeImpl( const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors ) const;
    void detectImpl( const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat() ) const;
    ...
};

typedef ORB OrbFeatureDetector;
typedef ORB OrbDescriptorExtractor;

Because ORB derives from Feature2D, an ORB object can both call the FeatureDetector member functions to extract features and call the DescriptorExtractor member functions to generate feature descriptors.

SIFT and SURF (patented; part of the non-free module)

Add the header files:
<opencv2/nonfree/features2d.hpp>
<opencv2/nonfree/nonfree.hpp>
and call, at the beginning of the program:
initModule_nonfree();


Added by yellowzm on Wed, 26 Jun 2019 21:00:21 +0300