Finds edges in an image using the Canny algorithm.
The function finds edges in the input image image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking, while the largest value is used to find initial segments of strong edges; see http://en.wikipedia.org/wiki/Canny_edge_detector
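For illustration, here is a minimal usage sketch; the file name "building.jpg" and the threshold values are arbitrary placeholders, not taken from the reference:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    // load the image as grayscale; the file name is a placeholder
    Mat src = imread("building.jpg", 0), edges;
    if( !src.data )
        return -1;
    // the larger threshold (150) seeds the strong edges,
    // the smaller one (50) is used for edge linking
    Canny( src, edges, 50, 150, 3 );
    namedWindow( "edges", 1 );
    imshow( "edges", edges );
    waitKey(0);
    return 0;
}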
Calculates eigenvalues and eigenvectors of image blocks for corner detection.
For every pixel $p$, the function cornerEigenValsAndVecs considers a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood $S(p)$. It calculates the covariance matrix of derivatives over the neighborhood as:

$$M = \begin{bmatrix} \sum_{S(p)} (dI/dx)^2 & \sum_{S(p)} (dI/dx \cdot dI/dy) \\ \sum_{S(p)} (dI/dx \cdot dI/dy) & \sum_{S(p)} (dI/dy)^2 \end{bmatrix}$$

where the derivatives are computed using the Sobel() operator.
After that it finds the eigenvectors and eigenvalues of $M$ and stores them into the destination image in the form $(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)$ where
$\lambda_1, \lambda_2$ are the eigenvalues of $M$; not sorted
$x_1, y_1$ are the components of the eigenvector corresponding to $\lambda_1$
$x_2, y_2$ are the components of the eigenvector corresponding to $\lambda_2$
The output of the function can be used for robust edge or corner detection.
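For illustration, the sketch below computes the 6-channel eigenvalue/eigenvector map and separates the two eigenvalue channels; the file name and the block/aperture sizes are placeholder assumptions:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat src = imread("building.jpg", 0); // placeholder file name
    if( !src.data )
        return -1;
    Mat eigen;
    // 3x3 block neighborhood, 3x3 Sobel aperture
    cornerEigenValsAndVecs( src, eigen, 3, 3 );
    // eigen is CV_32FC(6): (lambda_1, lambda_2, x_1, y_1, x_2, y_2) per pixel
    vector<Mat> ch;
    split( eigen, ch );
    Mat lambda1 = ch[0], lambda2 = ch[1];
    return 0;
}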
See also: cornerMinEigenVal(), cornerHarris(), preCornerDetect()
Harris corner detector.
The function runs the Harris corner detector on the image. Similarly to cornerMinEigenVal() and cornerEigenValsAndVecs(), for each pixel $(x, y)$ it calculates a $2 \times 2$ gradient covariance matrix $M^{(x,y)}$ over a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood. Then, it computes the following characteristic:

$$\texttt{dst}(x, y) = \det M^{(x,y)} - k \cdot \left( \mathrm{tr}\, M^{(x,y)} \right)^2$$

Corners in the image can be found as the local maxima of this response map.
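A minimal sketch of thresholding the Harris response to obtain a corner mask; the file name, blockSize, aperture, k, and the 0.01 factor are arbitrary assumptions:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat src = imread("building.jpg", 0); // placeholder file name
    if( !src.data )
        return -1;
    Mat response;
    // blockSize=2, Sobel aperture=3, Harris parameter k=0.04
    cornerHarris( src, response, 2, 3, 0.04 );
    // keep only the pixels whose response is close to the global maximum
    double maxVal = 0;
    minMaxLoc( response, 0, &maxVal );
    Mat cornerMask = response > 0.01*maxVal;
    namedWindow( "corners", 1 );
    imshow( "corners", cornerMask );
    waitKey(0);
    return 0;
}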
Calculates the minimal eigenvalue of gradient matrices for corner detection.
The function is similar to cornerEigenValsAndVecs() but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, i.e. $\min(\lambda_1, \lambda_2)$ in terms of the formulae in the cornerEigenValsAndVecs() description.
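As a hedged sketch (placeholder file name, block size, and threshold factor), corner candidates can be selected where the minimal eigenvalue is a sizable fraction of its global maximum:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat src = imread("building.jpg", 0); // placeholder file name
    if( !src.data )
        return -1;
    Mat minEig;
    cornerMinEigenVal( src, minEig, 3, 3 ); // blockSize=3, Sobel aperture=3
    double maxVal = 0;
    minMaxLoc( minEig, 0, &maxVal );
    // pixels whose smaller eigenvalue is large are corner candidates
    Mat candidates = minEig > 0.1*maxVal;
    namedWindow( "candidates", 1 );
    imshow( "candidates", candidates );
    waitKey(0);
    return 0;
}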
Refines the corner locations.
The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown on the picture below.
Sub-pixel accurate corner locator is based on the observation that every vector from the center $q$ to a point $p$ located within a neighborhood of $q$ is orthogonal to the image gradient at $p$, subject to image and measurement noise. Consider the expression:

$$\epsilon_i = {DI_{p_i}}^T \cdot (q - p_i)$$

where ${DI_{p_i}}$ is the image gradient at one of the points $p_i$ in a neighborhood of $q$. The value of $q$ is to be found such that $\epsilon_i$ is minimized. A system of equations may be set up with $\epsilon_i$ set to zero:

$$\sum_i \left( DI_{p_i} \cdot {DI_{p_i}}^T \right) q = \sum_i \left( DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i \right)$$

where the gradients are summed within a neighborhood ("search window") of $q$. Calling the first gradient term $G$ and the second gradient term $b$ gives:

$$q = G^{-1} \cdot b$$

The algorithm sets the center of the neighborhood window at this new center $q$ and then iterates until the center stays within a set threshold.
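The sketch below refines coarse corner estimates obtained from goodFeaturesToTrack() (described further below); the file name, window size, and termination criteria are arbitrary assumptions:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat img = imread("chessboard.jpg", 0); // placeholder file name
    if( !img.data )
        return -1;
    // pixel-accurate initial estimates
    vector<Point2f> corners;
    goodFeaturesToTrack( img, corners, 100, 0.01, 10 );
    // refine to sub-pixel accuracy: 11x11 search window (half-size 5),
    // no zero zone, stop after 30 iterations or when the shift drops below 0.01
    cornerSubPix( img, corners, Size(5, 5), Size(-1, -1),
                  TermCriteria( CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.01 ) );
    return 0;
}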
Determines strong corners on an image.
The function finds the most prominent corners in the image or in the specified image region, as described in [Shi94]:
1. The function first calculates the corner quality measure at every source image pixel using cornerMinEigenVal() or cornerHarris().
2. Then it performs non-maxima suppression (only the local maxima in a 3x3 neighborhood are retained).
3. The corners with a quality measure less than $\texttt{qualityLevel} \cdot \max_{x,y} \texttt{qualityMeasureMap}(x,y)$ are rejected.
4. The remaining corners are sorted by the quality measure in descending order.
5. Finally, the function throws away each corner for which there is a stronger corner at a distance less than minDistance.
The function can be used to initialize a point-based tracker of an object.
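For illustration, a minimal sketch that detects and draws the corners; the file name and parameter values are placeholders:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat gray = imread("building.jpg", 0); // placeholder file name
    if( !gray.data )
        return -1;
    vector<Point2f> corners;
    // up to 200 corners, quality level 0.01, at least 10 pixels apart
    goodFeaturesToTrack( gray, corners, 200, 0.01, 10 );
    Mat vis;
    cvtColor( gray, vis, CV_GRAY2BGR );
    for( size_t i = 0; i < corners.size(); i++ )
        circle( vis, Point(cvRound(corners[i].x), cvRound(corners[i].y)),
                3, Scalar(0, 255, 0), -1, 8, 0 );
    namedWindow( "corners", 1 );
    imshow( "corners", vis );
    waitKey(0);
    return 0;
}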
Note that if the function is called with different values A and B of the parameter qualityLevel, and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B.
See also: cornerMinEigenVal(), cornerHarris(), calcOpticalFlowPyrLK(), estimateRigidTransform(), PlanarObjectDetector(), OneWayDescriptor()
Finds circles in a grayscale image using a Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform. Here is a short usage example:
#include <cv.h>
#include <highgui.h>
#include <math.h>
using namespace cv;
int main(int argc, char** argv)
{
    Mat img, gray;
    if( argc != 2 || !(img=imread(argv[1], 1)).data)
        return -1;
    cvtColor(img, gray, CV_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
    vector<Vec3f> circles;
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2, gray.rows/4, 200, 100 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // draw the circle center
        circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 );
        // draw the circle outline
        circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 );
    }
    namedWindow( "circles", 1 );
    imshow( "circles", img );
    waitKey(0);
    return 0;
}
Note that usually the function detects the circles’ centers well; however, it may fail to find the correct radii. You can assist the function by specifying the radius range (minRadius and maxRadius) if you know it, or you can ignore the returned radius, use only the center, and find the correct radius using an additional procedure.
See also: fitEllipse(), minEnclosingCircle()
Finds lines in a binary image using the standard Hough transform.
The function implements the standard or the multi-scale Hough transform algorithm for line detection. See HoughLinesP() for the code example.
Finds line segments in a binary image using the probabilistic Hough transform.
The function implements the probabilistic Hough transform algorithm for line detection, described in [Matas00]. Below is a line detection example:
/* This is a standalone program. Pass an image name as the first parameter
of the program. Switch between standard and probabilistic Hough transform
by changing "#if 0" to "#if 1" and back */
#include <cv.h>
#include <highgui.h>
#include <math.h>
using namespace cv;
int main(int argc, char** argv)
{
    Mat src, dst, color_dst;
    if( argc != 2 || !(src=imread(argv[1], 0)).data)
        return -1;
    Canny( src, dst, 50, 200, 3 );
    cvtColor( dst, color_dst, CV_GRAY2BGR );
#if 0
    vector<Vec2f> lines;
    HoughLines( dst, lines, 1, CV_PI/180, 100 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        float rho = lines[i][0];
        float theta = lines[i][1];
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        Point pt1(cvRound(x0 + 1000*(-b)),
                  cvRound(y0 + 1000*(a)));
        Point pt2(cvRound(x0 - 1000*(-b)),
                  cvRound(y0 - 1000*(a)));
        line( color_dst, pt1, pt2, Scalar(0,0,255), 3, 8 );
    }
#else
    vector<Vec4i> lines;
    HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        line( color_dst, Point(lines[i][0], lines[i][1]),
              Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
    }
#endif
    namedWindow( "Source", 1 );
    imshow( "Source", src );
    namedWindow( "Detected Lines", 1 );
    imshow( "Detected Lines", color_dst );
    waitKey(0);
    return 0;
}
This is the sample picture the function parameters have been tuned for:
And this is the output of the above program in the case of the probabilistic Hough transform.
Calculates the feature map for corner detection.
The function calculates the complex spatial derivative-based function of the source image

$$\texttt{dst} = (D_x \texttt{src})^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src})^2 \cdot D_{xx} \texttt{src} - 2 \, D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src}$$

where $D_x$, $D_y$ are the first image derivatives, $D_{xx}$, $D_{yy}$ are the second image derivatives, and $D_{xy}$ is the mixed derivative.
The corners can be found as the local maxima of the function, as shown below:
Mat corners, dilated_corners;
preCornerDetect(image, corners, 3);
// dilation with 3x3 rectangular structuring element
dilate(corners, dilated_corners, Mat(), Point(-1,-1), 1);
Mat corner_mask = corners == dilated_corners;
Data structure for salient point detectors
class KeyPoint
{
public:
    // default constructor
    KeyPoint();
    // two complete constructors
    KeyPoint(Point2f _pt, float _size, float _angle=-1,
             float _response=0, int _octave=0, int _class_id=-1);
    KeyPoint(float x, float y, float _size, float _angle=-1,
             float _response=0, int _octave=0, int _class_id=-1);
    // coordinate of the point
    Point2f pt;
    // feature size
    float size;
    // feature orientation in degrees
    // (has negative value if the orientation
    // is not defined/not computed)
    float angle;
    // feature strength
    // (can be used to select only
    // the most prominent key points)
    float response;
    // scale-space octave in which the feature has been found;
    // may correlate with the size
    int octave;
    // point class (can be used by feature
    // classifiers or object detectors)
    int class_id;
};
// reading/writing a vector of keypoints to a file storage
void write(FileStorage& fs, const string& name, const vector<KeyPoint>& keypoints);
void read(const FileNode& node, vector<KeyPoint>& keypoints);
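As a usage sketch, the write()/read() helpers above can store detected keypoints in a FileStorage and load them back; the file names and the SURF threshold (SURF is described below) are arbitrary assumptions:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat img = imread("building.jpg", 0); // placeholder file name
    if( !img.data )
        return -1;
    // detect some keypoints (SURF with an arbitrary hessianThreshold)
    vector<KeyPoint> keypoints;
    SURF surf(500.);
    surf( img, Mat(), keypoints );
    // store them ...
    FileStorage fs( "keypoints.yml", FileStorage::WRITE );
    write( fs, "keypoints", keypoints );
    fs.release();
    // ... and read them back
    vector<KeyPoint> keypoints2;
    FileStorage fs2( "keypoints.yml", FileStorage::READ );
    read( fs2["keypoints"], keypoints2 );
    return 0;
}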
Maximally-Stable Extremal Region Extractor
class MSER : public CvMSERParams
{
public:
    // default constructor
    MSER();
    // constructor that initializes all the algorithm parameters
    MSER( int _delta, int _min_area, int _max_area,
          float _max_variation, float _min_diversity,
          int _max_evolution, double _area_threshold,
          double _min_margin, int _edge_blur_size );
    // runs the extractor on the specified image; returns the MSERs,
    // each encoded as a contour (vector<Point>, see findContours)
    // the optional mask marks the area where MSERs are searched for
    void operator()( const Mat& image, vector<vector<Point> >& msers, const Mat& mask ) const;
};
The class encapsulates all the parameters of the MSER extraction algorithm (see http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions ).
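A short usage sketch with the default parameters; the file name and the drawing color are placeholders:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat img = imread("building.jpg", 1); // placeholder file name
    if( !img.data )
        return -1;
    Mat gray;
    cvtColor( img, gray, CV_BGR2GRAY );
    vector<vector<Point> > regions;
    MSER mser; // default parameters
    mser( gray, regions, Mat() ); // no mask
    // mark every pixel of every detected region in red
    for( size_t i = 0; i < regions.size(); i++ )
        for( size_t j = 0; j < regions[i].size(); j++ )
            img.at<Vec3b>(regions[i][j]) = Vec3b(0, 0, 255);
    namedWindow( "mser", 1 );
    imshow( "mser", img );
    waitKey(0);
    return 0;
}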
Class for extracting Speeded Up Robust Features from an image.
class SURF : public CvSURFParams
{
public:
    // default constructor
    SURF();
    // constructor that initializes all the algorithm parameters
    SURF(double _hessianThreshold, int _nOctaves=4,
         int _nOctaveLayers=2, bool _extended=false);
    // returns the number of elements in each descriptor (64 or 128)
    int descriptorSize() const;
    // detects keypoints using fast multi-scale Hessian detector
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints) const;
    // detects keypoints and computes the SURF descriptors for them
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints,
                    vector<float>& descriptors,
                    bool useProvidedKeypoints=false) const;
};
The class SURF implements the Speeded Up Robust Features descriptor [Bay06]. There is a fast multi-scale Hessian keypoint detector that can be used to find the keypoints (which is the default option), but the descriptors can also be computed for user-specified keypoints. The function can be used for object tracking and localization, image stitching, etc. See the find_obj.cpp demo in the OpenCV samples directory.
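For illustration, a sketch that detects keypoints and computes their descriptors in one call; the file name and the hessianThreshold value are arbitrary placeholders:
#include <cv.h>
#include <highgui.h>
#include <stdio.h>
using namespace cv;

int main()
{
    Mat img = imread("building.jpg", 0); // placeholder file name
    if( !img.data )
        return -1;
    SURF surf(400.); // hessianThreshold = 400, an arbitrary value
    vector<KeyPoint> keypoints;
    vector<float> descriptors;
    surf( img, Mat(), keypoints, descriptors );
    // descriptors are packed back to back, descriptorSize() floats per keypoint
    printf( "%d keypoints, %d floats per descriptor\n",
            (int)keypoints.size(), surf.descriptorSize() );
    return 0;
}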
Implements the Star keypoint detector.
class StarDetector : public CvStarDetectorParams
{
public:
    // default constructor
    StarDetector();
    // the full constructor initializes all the algorithm parameters:
    // maxSize - maximum size of the features. The following
    //      values of the parameter are supported:
    //      4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128
    // responseThreshold - threshold for the approximated Laplacian,
    //      used to eliminate weak features. The larger it is,
    //      the fewer features will be retrieved
    // lineThresholdProjected - another threshold for the Laplacian to
    //      eliminate edges
    // lineThresholdBinarized - yet another threshold for the feature
    //      size to eliminate edges.
    //      The larger the 2 thresholds, the more points you get.
    StarDetector(int maxSize, int responseThreshold,
                 int lineThresholdProjected,
                 int lineThresholdBinarized,
                 int suppressNonmaxSize);
    // finds keypoints in an image
    void operator()(const Mat& image, vector<KeyPoint>& keypoints) const;
};
The class implements a modified version of the CenSurE keypoint detector described in [Agrawal08].
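A minimal usage sketch with the default parameters; the file name and the drawing style are placeholders:
#include <cv.h>
#include <highgui.h>
using namespace cv;

int main()
{
    Mat img = imread("building.jpg", 0); // placeholder file name
    if( !img.data )
        return -1;
    StarDetector star; // default parameters
    vector<KeyPoint> keypoints;
    star( img, keypoints );
    Mat vis;
    cvtColor( img, vis, CV_GRAY2BGR );
    // draw each keypoint as a circle whose radius reflects the feature size
    for( size_t i = 0; i < keypoints.size(); i++ )
        circle( vis, Point(cvRound(keypoints[i].pt.x), cvRound(keypoints[i].pt.y)),
                cvRound(keypoints[i].size/2), Scalar(0, 255, 0), 1, 8, 0 );
    namedWindow( "star", 1 );
    imshow( "star", vis );
    waitKey(0);
    return 0;
}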