OpenCV Python Series Tutorial 4-OpenCV Image Processing (I)
Learning objectives:

There are more than 150 color-space conversion methods in OpenCV; only two of them are discussed here: BGR ↔ Gray and BGR ↔ HSV (cv2.COLOR_BGR2GRAY and cv2.COLOR_BGR2HSV).

In OpenCV, HSV has a hue range of [0, 179] and saturation and value ranges of [0, 255]. Different software uses different scales, so if you want to compare OpenCV values with values from other software, you need to normalize these ranges.

HSV and HLV explained
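A minimal sketch of such a blue-object detection program, assuming a webcam at index 0 and an illustrative HSV range for blue:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumes a webcam at index 0

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Illustrative range of blue in HSV (tune for your scene)
    lower_blue = np.array([110, 50, 50])
    upper_blue = np.array([130, 255, 255])

    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # Keep only the masked (blue) region of the original frame
    res = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    if cv2.waitKey(5) & 0xFF == 27:  # press Esc to quit
        break

cv2.destroyAllWindows()
cap.release()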

Running results: this program detects blue targets; the same method can be used to detect targets of other colors.

There is some noise in the result, which will be removed in the following chapters.

This is the simplest method in object tracking. Once you learn the contour functions, you can do many things, such as finding the centroid of the object and using it to track the object, drawing diagrams just by moving your hand in front of the camera, and many other interesting things.

Rookie tutorial online HSV-> BGR conversion

For example, to find the HSV value of green, you can use the program above; then take an upper and lower bound around the value obtained, such as a lower bound of [H-10, 100, 100] and an upper bound of [H+10, 255, 255].

Or use other tools such as GIMP.
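As a minimal sketch of this lookup, a single BGR pixel of pure green can be converted to HSV like this:

import cv2
import numpy as np

# One pixel of pure green in BGR
green = np.uint8([[[0, 255, 0]]])
hsv_green = cv2.cvtColor(green, cv2.COLOR_BGR2HSV)
print(hsv_green)  # [[[ 60 255 255]]], so H = 60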

Learning objectives:

Thresholding an image is the simplest image-segmentation method. It works at the pixel level, separating objects from the background based on their difference in gray value.

threshold(src, thresh, maxval, type[, dst]) -> retval, dst
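A minimal sketch of simple thresholding, assuming a grayscale input image (the filename is illustrative):

import cv2

img = cv2.imread('gradient.png', cv2.IMREAD_GRAYSCALE)  # illustrative filename

# Pixels above 127 become 255, all others become 0
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

cv2.imshow('binary', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()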

Adaptive thresholding calculates the threshold for small regions of the image, so we get different thresholds for different regions of the same image, which gives better results for images with uneven illumination.

It has three special input parameters and only one output argument.

adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst]) -> dst
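A minimal sketch of adaptive thresholding, assuming a grayscale input image (the filename is illustrative):

import cv2

img = cv2.imread('sudoku.png', cv2.IMREAD_GRAYSCALE)  # illustrative filename

# Threshold is the mean of an 11x11 neighborhood minus the constant 2
th_mean = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY, 11, 2)

# Threshold is a Gaussian-weighted sum of the 11x11 neighborhood minus 2
th_gauss = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)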


Learning objectives:

OpenCV provides two transformation functions, cv2.warpAffine and cv2.warpPerspective; cv2.warpAffine takes a 2x3 transformation matrix, while cv2.warpPerspective takes a 3x3 transformation matrix.

Scaling is done with cv2.resize().

Documentation: resize(src, dsize[, dst[, fx[, fy[, interpolation]]]]) -> dst
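A minimal sketch of scaling with cv2.resize (the filename is illustrative):

import cv2

img = cv2.imread('messi.jpg')  # illustrative filename

# Scale up by a factor of 2 in both directions
res = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Equivalent: give the output size explicitly as (width, height)
height, width = img.shape[:2]
res2 = cv2.resize(img, (2 * width, 2 * height), interpolation=cv2.INTER_CUBIC)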

Running result

Note: in this test cv2.INTER_LINEAR was slower than cv2.INTER_CUBIC, which seems inconsistent with the official documentation; to be verified.

Speed comparison: INTER_CUBIC > INTER_NEAREST > INTER_LINEAR > INTER_AREA > INTER_LANCZOS4

Translation shifts the position of the image. Create a transformation matrix of type np.float32 and pass it to cv2.warpAffine; to shift by (tx, ty), the matrix is M = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \end{bmatrix}.

warpAffine(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]]) -> dst
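A minimal sketch of a translation by (100, 50) pixels, assuming a grayscale input image (the filename is illustrative):

import cv2
import numpy as np

img = cv2.imread('messi.jpg', cv2.IMREAD_GRAYSCALE)  # illustrative filename
rows, cols = img.shape

# Shift 100 pixels to the right and 50 pixels down
M = np.float32([[1, 0, 100],
                [0, 1, 50]])
dst = cv2.warpAffine(img, M, (cols, rows))  # dsize is (width, height)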

Running results:

Rotation by an angle \theta uses the transformation matrix M = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.

OpenCV provides scaled rotation with an adjustable center of rotation, so you can rotate about any point you like. The modified transformation matrix is

\begin{bmatrix} \alpha & \beta & (1-\alpha)\cdot c_x - \beta\cdot c_y \\ -\beta & \alpha & \beta\cdot c_x + (1-\alpha)\cdot c_y \end{bmatrix}

where \alpha = scale\cdot\cos\theta, \beta = scale\cdot\sin\theta, and (c_x, c_y) is the center of rotation.

OpenCV provides cv2.getRotationMatrix2D to build this matrix.

cv2.getRotationMatrix2D(center, angle, scale) → retval
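A minimal sketch that rotates an image 90 degrees about its center without scaling (the filename is illustrative):

import cv2

img = cv2.imread('messi.jpg', cv2.IMREAD_GRAYSCALE)  # illustrative filename
rows, cols = img.shape

# Rotation center, angle in degrees, scale factor
M = cv2.getRotationMatrix2D((cols / 2, rows / 2), 90, 1)
dst = cv2.warpAffine(img, M, (cols, rows))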

Running result

cv2.getAffineTransform(src, dst) → retval

Functional relationship:

\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = M \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

where dst(i) = (x'_i, y'_i), src(i) = (x_i, y_i), i = 0, 1, 2.
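A minimal sketch of an affine transform from three point correspondences (the coordinates and filename are illustrative):

import cv2
import numpy as np

img = cv2.imread('drawing.png')  # illustrative filename
rows, cols = img.shape[:2]

# Three points in the input image and the points they should map to
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])

M = cv2.getAffineTransform(pts1, pts2)  # 2x3 affine matrix
dst = cv2.warpAffine(img, M, (cols, rows))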

Running results: the marked points are easy to observe, and the red dots in the two figures correspond to each other.

Perspective transformation requires a 3x3 transformation matrix. Straight lines remain straight after the transformation. To find this transformation matrix, you need four points on the input image and the corresponding points on the output image; of these four points, three must not be collinear. The transformation matrix is computed with cv2.getPerspectiveTransform, and the final result is obtained by applying it with cv2.warpPerspective.
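A minimal sketch of a perspective transform that maps four corners to a 300x300 square (the coordinates and filename are illustrative):

import cv2
import numpy as np

img = cv2.imread('sudoku.png')  # illustrative filename

# Four points in the input image and their target positions
pts1 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])

M = cv2.getPerspectiveTransform(pts1, pts2)  # 3x3 matrix
dst = cv2.warpPerspective(img, M, (300, 300))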


Smoothing, also known as blurring, is a simple and frequently used image-processing operation. Its purpose is usually to reduce noise or distortion in the image. Smoothing is also very useful when reducing image resolution.

Image filtering: The noise of the target image is suppressed under the condition of preserving the detailed features of the image as much as possible, and its processing effect will directly affect the effectiveness and reliability of subsequent image processing and analysis.

Eliminating noise components in an image is called image smoothing or filtering operation. The energy of the signal or image is mostly concentrated in the low and middle frequency bands of the amplitude spectrum. In the high frequency band, useful information will be drowned out by noise. Therefore, a filter that can reduce the amplitude of high-frequency components can weaken the influence of noise.

Filtering serves two purposes: to extract object features as patterns for image recognition, and to remove the noise introduced during image digitization so as to meet the requirements of subsequent image processing.

Requirements for filtering: important information such as outlines and edges must not be damaged, and the result should be clear with good visual quality.

Smoothing filtering is a spatial filtering technique with low frequency enhancement, and its purpose is to blur and eliminate noise.

Smoothing filtering in the spatial domain generally uses a simple averaging method, i.e., taking the average brightness of neighboring pixels. The size of the neighborhood is directly related to the smoothing effect: the larger the neighborhood, the stronger the smoothing, but an overly large neighborhood also causes a greater loss of edge information and blurs the output image, so an appropriate neighborhood size must be chosen.

Filter: a window containing weighting coefficients. When smoothing an image with a filter, put this window on the image and look at the image we get through this window.

Linear filter: used to eliminate unwanted frequencies in the input signal or select a desired frequency from many frequencies.

Low pass filter, high pass filter, band pass filter, band stop filter, all pass filter, notch filter.

boxFilter(src, ddepth, ksize[, dst[, anchor[, normalize[, borderType]]]]) -> dst

Mean filtering is a special case of box filtering with normalization. Normalization scales the processed quantity into a range such as (0, 1) so it can be handled uniformly and quantified intuitively. Unnormalized box filtering is used to compute integral characteristics over each pixel neighborhood, such as the covariance matrices of image derivatives used in dense optical-flow algorithms.
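A minimal sketch of box filtering (the filename is illustrative); with normalize=True the result matches mean filtering:

import cv2

img = cv2.imread('opencv_logo.png')  # illustrative filename

# Normalized box filter (equivalent to cv2.blur with a 5x5 kernel)
box_norm = cv2.boxFilter(img, -1, (5, 5), normalize=True)

# Unnormalized box filter: each output pixel is the neighborhood sum
box_sum = cv2.boxFilter(img, cv2.CV_32F, (5, 5), normalize=False)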

Running results:

Mean filtering is a typical linear filtering algorithm. Its main method is neighborhood averaging: the value of each pixel in the original image is replaced by the average value of the pixels in a region around it. A template (kernel) centered on the target pixel is chosen, covering its neighboring pixels (for example, the 8 = 3x3 - 1 pixels surrounding the target pixel, excluding the pixel itself), and the average of all pixels in the template replaces the original pixel value. That is, for the current pixel (x, y), select a template consisting of several pixels in its immediate vicinity, compute the average of all pixels in the template, and assign that average to (x, y) as the gray level g(x, y) of the processed image at this point: g(x, y) = \frac{1}{m} \sum_{(s, t) \in S} f(s, t), where S is the template and m is the number of pixels it contains.

Mean filtering has an inherent defect: it cannot protect image detail well. While denoising, it also destroys detail, making the image blurry, and it cannot remove noise points thoroughly.

cv2.blur(src, ksize[, dst[, anchor[, borderType]]]) → dst
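A minimal sketch of mean filtering with a 5x5 kernel (the filename is illustrative):

import cv2

img = cv2.imread('opencv_logo.png')  # illustrative filename

# Each output pixel is the mean of its 5x5 neighborhood
blur = cv2.blur(img, (5, 5))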

Results:

Gaussian filtering: a linear filter that can eliminate Gaussian noise and is widely used for noise reduction in image processing. Gaussian filtering is a weighted-average process over the whole image: the value of each pixel is obtained as a weighted average of itself and the other pixel values in its neighborhood. Concretely, every pixel in the image is scanned with a template (also called a convolution kernel or mask), and the value of the pixel at the center of the template is replaced by the weighted-average gray value of the pixels in the neighborhood covered by the template.

Gaussian filtering is useful but inefficient.

The visual effect of an image produced by Gaussian blur is like viewing the image through a translucent screen, which is clearly different from the effect of out-of-focus imaging or of ordinary lighting shadows. Gaussian smoothing is also used in the preprocessing stage of computer vision algorithms to enhance the image at different scales (see scale-space representation and scale-space implementation). From a mathematical point of view, Gaussian blurring of an image is the convolution of the image with a normal distribution. Because the normal distribution is also called the Gaussian distribution, this technique is called Gaussian blur.

The Gaussian filter is a linear smoothing filter whose weights are chosen according to the shape of the Gaussian function. Gaussian smoothing is very effective at suppressing noise that follows a normal distribution.

The one-dimensional zero-mean Gaussian function is g(x) = e^{-x^2 / (2\sigma^2)}, where the Gaussian distribution parameter \sigma determines the width of the Gaussian function.

Generation of Gaussian noise
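A minimal sketch of adding zero-mean Gaussian noise to an image with NumPy (the sigma value and filename are illustrative):

import cv2
import numpy as np

img = cv2.imread('lena.jpg')  # illustrative filename

# Zero-mean Gaussian noise with standard deviation 25
noise = np.random.normal(0, 25, img.shape)
noisy = np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)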

GaussianBlur(src, ksize, sigmaX[, dst[, sigmaY[, borderType]]]) -> dst
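A minimal sketch of Gaussian filtering with a 5x5 kernel (the filename is illustrative); sigmaX=0 lets OpenCV derive sigma from the kernel size:

import cv2

img = cv2.imread('lena.jpg')  # illustrative filename

# 5x5 Gaussian kernel; sigmaX=0 means sigma is computed from ksize
blur = cv2.GaussianBlur(img, (5, 5), 0)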

Linear filters are easy to construct and easy to analyze from the perspective of frequency response.

In many cases, nonlinear filtering over neighboring pixels gives better results. For example, when the noise is shot noise rather than Gaussian noise, i.e., the image occasionally contains very large values, blurring with a Gaussian filter does not remove these noise pixels; they are only turned into softer but still visible specks.

The median filter is a typical nonlinear filtering technique. Its basic idea is to replace the gray value of a pixel with the median of the gray values in its neighborhood. This method removes impulse noise and salt-and-pepper noise. Salt-and-pepper noise, also called impulse noise, randomly changes some pixel values; it is black-and-white bright/dark point noise produced by image sensors, transmission channels and decoding, and is often caused by image cropping. At the same time, median filtering preserves the edge details of the image.

Median filtering is a nonlinear signal-processing technique based on order statistics that can effectively suppress noise. Its basic principle is to replace the value of a point in a digital image or sequence with the median of the values in a neighborhood of that point, so that surrounding pixel values come closer to the true values and isolated noise points are eliminated. It is especially useful for speckle noise and salt-and-pepper noise because it does not depend on values in the neighborhood that differ greatly from the typical values. When processing a continuous image window function, the median filter works in a similar way to a linear filter, but the filtering process is no longer a weighted sum.

Under certain conditions, median filtering can overcome the blurring of image details caused by common linear filters, such as least mean square filtering, block filtering, mean filtering, etc., and it is very effective in filtering pulse interference and image scanning noise. It is also often used to protect edge information, and the characteristics of preserving edges make it very useful in situations where edge blurring is not expected. It is a very classic smoothing noise processing method.
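A minimal sketch of median filtering, which is particularly good at removing salt-and-pepper noise (the filename is illustrative):

import cv2

img = cv2.imread('noisy.jpg')  # illustrative filename

# Replace each pixel with the median of its 5x5 neighborhood
median = cv2.medianBlur(img, 5)  # ksize must be an odd integer greater than 1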

Compared with mean filtering:

Description: under certain conditions, median filtering can overcome the blurring of image detail caused by linear filters (such as mean filtering), and it is most effective at filtering out impulse interference and image scanning noise. It also does not need the statistical characteristics of the image during actual computation, which is convenient. However, median filtering is not suitable for images with a lot of fine detail, especially fine lines and sharp corners.

Bilateral filter is a nonlinear filtering method, which combines the spatial proximity of images with the similarity of pixel values, and considers the spatial information and gray similarity at the same time to achieve the purpose of edge-preserving denoising. It is simple, non-iterative and local.

The advantage of the bilateral filter is that it preserves edges. The Wiener or Gaussian filters generally used in the past blur edges noticeably and offer no obvious protection for high-frequency detail. As its name implies, the bilateral filter has one more Gaussian variance than the Gaussian filter, sigma-d, a Gaussian weighting function based on spatial distribution, so pixels far away from an edge do not influence the pixel values on the edge too much, and pixel values near edges are preserved. However, because it keeps too much high-frequency information, the bilateral filter cannot cleanly filter out high-frequency noise in color images; it only filters low-frequency information well.
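A minimal sketch of bilateral filtering (the diameter, the two sigmas and the filename are illustrative values):

import cv2

img = cv2.imread('texture.jpg')  # illustrative filename

# d=9 neighborhood diameter, sigmaColor=75, sigmaSpace=75
blur = cv2.bilateralFilter(img, 9, 75, 75)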

Running result

Learning objectives:

Morphological transformations are simple operations based on the image shape. They are normally performed on binary images.

Functions covered: erosion and dilation.

The basic idea of erosion is like soil erosion: it erodes away the boundaries of the foreground object (always try to keep the foreground white). How does it work? A kernel slides over the image (as in 2D convolution). A pixel of the original image (1 or 0) is kept as 1 only if all the pixels under the kernel are 1; otherwise it is eroded (set to zero).

erode(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) -> dst
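A minimal sketch of erosion with a 5x5 kernel on a binary image (the filename is illustrative):

import cv2
import numpy as np

img = cv2.imread('j.png', cv2.IMREAD_GRAYSCALE)  # illustrative filename

kernel = np.ones((5, 5), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)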

Dilation is the opposite of erosion: a pixel element becomes 1 if at least one pixel under the kernel is 1. It therefore increases the white region (the foreground object) in the image. Usually, when removing noise, erosion is followed by dilation: erosion removes white noise, but it also shrinks the object, so we dilate it afterwards, as sketched below. Since the noise is gone, it will not come back, and the object area is restored. Dilation can also be used to join broken parts of an object.
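A matching sketch of dilation with the same 5x5 kernel (the filename is illustrative):

import cv2
import numpy as np

img = cv2.imread('j.png', cv2.IMREAD_GRAYSCALE)  # illustrative filename

kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(img, kernel, iterations=1)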