
Equalizing histograms to improve image quality

With all element-by-element transformations, the probability distribution law that describes the image changes. Let us consider the mechanism of this change using the example of an arbitrary transformation g = φ(f) with a monotonic characteristic (Figure 2.8), which has a single-valued inverse function f = φ⁻¹(g). Suppose the random variable f obeys the probability density p_f(f). Let Δf be an arbitrarily small interval of values of the random variable f, and Δg the corresponding interval of the transformed random variable g.

The value of f falling into the interval Δf entails the value of g falling into the interval Δg, which means the probabilistic equivalence of these two events. Therefore, taking into account the smallness of both intervals, we can write the approximate equality

p_f(f)·|Δf| ≈ p_g(g)·|Δg|,

where the absolute values take into account that the probabilities depend on the lengths of the intervals and not on the signs of the increments Δf and Δg. Expressing from this the probability density of the transformed quantity, substituting f through the inverse function f = φ⁻¹(g) and passing to the limit Δg → 0 (and, consequently, Δf → 0), we obtain

p_g(g) = p_f(φ⁻¹(g))·|dφ⁻¹(g)/dg|. (2.4)

This expression allows one to calculate the probability density of the transformed quantity, which, as can be seen, does not coincide with the density of the original random variable. Clearly, the transformation has a significant effect on the density, since (2.4) involves the inverse function and its derivative.

The relationships become somewhat more complex if the transformation is described by a non-one-to-one function. An example of such a more complex characteristic, with an ambiguous inverse function, is the sawtooth characteristic of Fig. 2.4. However, in general, the meaning of the probabilistic transformation does not change in this case.

All element-by-element transformations of images considered in this chapter can be viewed from the point of view of the change in probability density described by expression (2.4). Obviously, for none of them will the probability density of the output image coincide with the probability density of the original image (except, of course, for the trivial identity transformation). It is easy to verify that linear contrasting preserves the form of the probability density; however, in the general case, i.e. for arbitrary values of the parameters of the linear transformation, the parameters of the probability density of the transformed image change.
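As an illustration of (2.4), the case of linear contrasting can be worked out directly; the gain a and offset b below are introduced only for this sketch and are not notation from the source.

\[
g = a f + b, \qquad f = \varphi^{-1}(g) = \frac{g - b}{a}, \qquad
p_g(g) = \frac{1}{|a|}\, p_f\!\left(\frac{g - b}{a}\right).
\]

The density therefore keeps its functional form (a Gaussian input stays Gaussian, for example), while its parameters are rescaled: a mean m becomes a·m + b and a variance σ² becomes a²·σ².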

Determining the probabilistic characteristics of images that have undergone nonlinear processing is the direct problem of analysis. When solving practical image-processing problems, the inverse problem can be posed: given the known form of the probability density p_f(f) and the desired form p_g(g), determine the required transformation g = φ(f) to which the original image should be subjected. In the practice of digital image processing, transforming the image to an equiprobable (uniform) distribution often leads to a useful result. In this case

p_g(g) = 1/(g_max − g_min) for g_min ≤ g ≤ g_max, and 0 otherwise, (2.5)

where g_min and g_max are the minimum and maximum brightness values of the converted image. Let us determine the characteristic of the converter that solves this problem. Let f and g be related by function (2.2), g = φ(f), and let P_f(f) and P_g(g) be the cumulative distribution laws of the input and output quantities. Taking into account (2.5), we find:

P_g(g) = (g − g_min)/(g_max − g_min), g_min ≤ g ≤ g_max.

Substituting this expression into the condition of probabilistic equivalence

P_f(f) = P_g(g),

after simple transformations we obtain the relation

g = (g_max − g_min)·P_f(f) + g_min, (2.6)

which gives the characteristic (2.2) in the problem being solved. According to (2.6), the original image undergoes a nonlinear transformation whose characteristic is determined by the cumulative distribution law of the original image itself. After that, the result is brought to the specified dynamic range using the linear contrasting operation.
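As a minimal sketch of how relation (2.6) is applied to an 8-bit image, the fragment below builds a brightness look-up table from an estimate of the cumulative distribution; the names buildEqualizationLUT, cdf, gMin and gMax are illustrative and not taken from the source.

#include <array>
#include <cmath>

// Relation (2.6): g = (g_max - g_min) * P_f(f) + g_min, evaluated for every
// integer brightness level f = 0..255 and rounded to the nearest integer.
std::array<unsigned char, 256> buildEqualizationLUT(
        const std::array<double, 256>& cdf,   // estimate of P_f(f), non-decreasing, cdf[255] close to 1
        unsigned char gMin, unsigned char gMax)
{
    std::array<unsigned char, 256> lut{};
    for (int f = 0; f < 256; ++f) {
        double g = (gMax - gMin) * cdf[f] + gMin;
        lut[f] = static_cast<unsigned char>(std::lround(g));
    }
    return lut;
}

Applying this table to every pixel of the original image realizes the transformation (2.6).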

Similarly, solutions can be obtained for other problems in which it is required to bring the distribution law of the image to a given form; a table of such transformations can be found in the literature. One of them, the so-called hyperbolization of the distribution, involves reducing the probability density of the transformed image to a hyperbolic form:

p_g(g) = 1/(g·ln(g_max/g_min)), g_min ≤ g ≤ g_max. (2.7)

If we take into account that, when light passes through the eye, the input brightness is logarithmically transformed by the retina, then the resulting probability density of the perceived quantity turns out to be uniform. Thus, the difference from the previous example lies in taking into account the physiological properties of vision. It can be shown that an image with probability density (2.7) is obtained at the output of a nonlinear element with the characteristic

g = g_min·(g_max/g_min)^P_f(f), (2.8)

also determined by the cumulative distribution law of the original image.
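That the characteristic (2.8) indeed yields the density (2.7) can be checked directly with (2.4): from g = g_min·(g_max/g_min)^P_f(f) it follows that

\[
P_g(g) = P_f(f) = \frac{\ln(g/g_{\min})}{\ln(g_{\max}/g_{\min})},
\qquad
p_g(g) = \frac{dP_g(g)}{dg} = \frac{1}{g\,\ln(g_{\max}/g_{\min})},
\qquad g_{\min} \le g \le g_{\max},
\]

which coincides with (2.7).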

Thus, the transformation of the probability density presupposes knowledge of the cumulative distribution of the original image. As a rule, there is no reliable information about it. Analytical approximations are also of little use for this purpose, since even their small deviations from the true distribution can lead to results that differ significantly from those required. Therefore, in the practice of image processing, the transformation of distributions is performed in two stages.

At the first stage, the histogram of the original image is measured. For a digital image whose gray scale belongs, for example, to the integer range 0 … 255, the histogram is a table of 256 numbers. Each of them shows the number of points in the frame that have the given brightness. Dividing all the numbers in this table by the total sample size, equal to the number of image points used, we obtain an estimate of the probability distribution of the image brightness, which we denote by p_f(f_q), 0 ≤ f_q ≤ 255. The estimate of the cumulative distribution is then obtained by the formula

P_f(f_q) = Σ_{i=0}^{q} p_f(f_i).

At the second stage, the nonlinear transformation (2.2) itself is performed, which provides the necessary properties of the output image. In this case, instead of the unknown true cumulative distribution, its estimate based on the histogram is used. With this in mind, all methods of element-by-element transformation of images whose purpose is to modify the distribution laws are called histogram methods. In particular, the transformation under which the output image acquires a uniform distribution is called histogram equalization (flattening).
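A minimal C++ sketch of the first stage, assuming an 8-bit grayscale image stored as a contiguous array; the function and variable names are illustrative.

#include <cstddef>
#include <vector>

// Stage 1: measure the 256-bin histogram of an 8-bit image and turn it into
// estimates of the probability distribution and of the cumulative distribution.
void estimateDistributions(const unsigned char* pixels, std::size_t count,
                           std::vector<double>& p,   // p[q] ~ probability of level q
                           std::vector<double>& P)   // P[q] ~ cumulative distribution
{
    std::vector<std::size_t> hist(256, 0);
    for (std::size_t i = 0; i < count; ++i)
        ++hist[pixels[i]];

    p.assign(256, 0.0);
    P.assign(256, 0.0);
    double acc = 0.0;
    for (int q = 0; q < 256; ++q) {
        p[q] = static_cast<double>(hist[q]) / static_cast<double>(count);
        acc += p[q];
        P[q] = acc;                      // P[q] = p[0] + ... + p[q]
    }
}

Together with the look-up table sketched after (2.6), this completes both stages of the histogram method.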

Note that the histogram transformation procedures can be applied both to the image as a whole and to its individual fragments. The latter can be useful when processing non-stationary images, whose characteristics differ significantly from one area to another. In this case, the best effect is achieved by applying histogram processing to individual areas.

The use of relations (2.4) - (2.8), which are valid for images with a continuous distribution of brightness, is not entirely correct for digital images. It should be borne in mind that, as a result of processing, it is not possible to obtain an ideal probability distribution of the output image, and it is therefore useful to monitor its histogram.

Fig. 2.9. Example of image equalization: a) original image; b) result of processing

Figure 2.9 shows an example of equalization performed in accordance with the described method. A characteristic feature of many images obtained in real imaging systems is a significant proportion of dark areas and a relatively small number of areas with high brightness. Equalization corrects the picture by equalizing the integral areas of the regions with different brightness. Comparison of the original (Fig. 2.9a) and processed (Fig. 2.9b) images shows that the redistribution of brightness that occurs during processing leads to an improvement in visual perception.

COMPARISON OF HISTOGRAM EQUALIZATION ALGORITHMS FOR GRAYSCALE IMAGES

Aleksandrovskaya A.A., Mavrin E.M.

Aleksandrovskaya Anna Andreevna - Master's student; Mavrin Evgeniy Mikhailovich - Master's student, Department of Information Systems and Telecommunications,

Faculty of Informatics and Control Systems, Bauman Moscow State Technical University, Moscow

Abstract: this article compares digital image processing algorithms, namely histogram equalization algorithms. Three algorithms are considered: global histogram equalization (HE), adaptive histogram equalization (AHE), and contrast limited adaptive histogram equalization (CLAHE). The result of the work described in the article is a visual comparison of the algorithms on the same images.

Key words: image histogram, histogram equalization, digital image processing, computer vision, AHE, CLAHE.

To improve image quality, it is necessary to increase the brightness range, contrast, sharpness, and clarity. Taken together, these parameters can be improved by equalizing the image histogram. When detecting the contours of objects, in most cases the data contained in a grayscale image is sufficient. A grayscale image is an image that contains information only about the brightness of pixels, not about their color. Accordingly, it is advisable to construct the histogram for a grayscale image.

Let the image under consideration consist of n pixels with intensity (brightness) r in the range from 0 to 2^bpp − 1, where bpp is the number of bits allocated for encoding the brightness of one pixel. In most color models, 1 byte is required to encode the brightness of one color channel of a pixel. Accordingly, the pixel intensity is defined on the set from 0 to 255. The graph of the number of pixels with intensity r versus the intensity itself is called the image histogram. Fig. 1 shows examples of test images and the histograms built from them:

Fig. 1. Test images and their histograms

Obviously, by studying the corresponding histogram, one can draw conclusions about the original image. For example, the histograms of very dark images are characterized by a concentration of non-zero values near zero brightness levels, while for light images, on the contrary, the non-zero values are gathered in the right-hand part of the histogram.

Histogram equalization algorithms are popular algorithms for enhancing a processed grayscale image. In general, HE (Histogram Equalization) algorithms have a relatively low computational cost and at the same time show high efficiency. The essence of this type of algorithm is to adjust the levels of a grayscale image in accordance with the probability distribution function of the given image (1), thereby increasing the dynamic range of the brightness distribution. This leads to an improvement of visual characteristics such as brightness contrast, sharpness, and clarity.

p(i) = n_i / n, i = 0 … 255, (1)

where p(i) is the probability of occurrence of a pixel with brightness i (the normalized histogram function of the original image), n_i is the number of pixels with brightness i, n is the total number of pixels, k denotes the coordinates of a pixel of the processed image, and g(k) is the equalized image.

Histogram equalization algorithms are divided into two types: local (adaptive) histogram equalization and global histogram equalization. In the global method, a single histogram is built and the histogram of the entire image is equalized (Fig. 4a). In the local method (Fig. 4b), a large number of histograms are constructed, each corresponding only to a part of the processed image. This method improves the local contrast of the image, which allows better overall processing results to be obtained.

Local processing algorithms can be divided into the following types: overlapping local processing blocks, non-overlapping local processing blocks, and partially overlapping local processing blocks (Fig. 2).

Fig. 2. An illustration of the operation of various types of algorithms for local image processing: a) overlapping local processing blocks, b) non-overlapping local processing blocks, c) partially overlapping local processing blocks

The overlapping-block algorithm gives the best processing result but is the slowest of those listed. The non-overlapping-block algorithm, on the contrary, requires less processing time, all other things being equal, but since the processed blocks do not overlap, sharp changes in brightness are possible in the final image. The partially overlapping-block algorithm is a compromise solution. The disadvantages of adaptive histogram equalization algorithms include over-amplification of image parameters and the resulting increase of noise in the final image.
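A simplified C++ sketch of the non-overlapping-block variant, assuming an 8-bit grayscale image and an illustrative block size; no smoothing between neighbouring blocks is performed, which is precisely why sharp brightness changes can appear at block boundaries.

#include <algorithm>
#include <cstddef>
#include <vector>

// Adaptive histogram equalization with non-overlapping blocks (no interpolation).
// img is an 8-bit grayscale image of size width x height stored row by row.
void equalizeNonOverlappingBlocks(std::vector<unsigned char>& img,
                                  int width, int height, int block = 64)
{
    for (int by = 0; by < height; by += block) {
        for (int bx = 0; bx < width; bx += block) {
            const int bw = std::min(block, width - bx);
            const int bh = std::min(block, height - by);

            // Histogram of the current block.
            std::size_t hist[256] = {0};
            for (int y = by; y < by + bh; ++y)
                for (int x = bx; x < bx + bw; ++x)
                    ++hist[img[static_cast<std::size_t>(y) * width + x]];

            // Cumulative distribution and a per-block look-up table (output range 0..255).
            const double total = static_cast<double>(bw) * bh;
            unsigned char lut[256];
            double cdf = 0.0;
            for (int q = 0; q < 256; ++q) {
                cdf += hist[q] / total;
                lut[q] = static_cast<unsigned char>(255.0 * cdf + 0.5);
            }

            // Remap the block with its own table.
            for (int y = by; y < by + bh; ++y)
                for (int x = bx; x < bx + bw; ++x) {
                    unsigned char& v = img[static_cast<std::size_t>(y) * width + x];
                    v = lut[v];
                }
        }
    }
}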

An improved version of the above algorithm is the contrast limited adaptive histogram equalization (CLAHE) algorithm (Fig. 4c). The main feature of this algorithm is the limitation of the histogram range based on an analysis of the brightness values of the pixels in the processed block (2); as a result, the final image looks more natural and less noisy.

where add is the increment coefficient of the histogram function value and ps is the number of pixels exceeding the threshold. An illustration of the change in the histogram is shown in Fig. 3.

Fig. 3. Limiting the range of the histogram in the CLAHE algorithm

It is worth noting that the classic CLAHE algorithm uses bilinear interpolation to eliminate the boundaries between the processed blocks.
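Since formula (2) itself is not reproduced above, the fragment below only sketches the commonly used clipping step for a single block histogram: bins above a chosen limit are cut off and the excess is redistributed uniformly over all bins before the cumulative distribution is built; the uniform redistribution and the names are assumptions for illustration, not taken from the article.

#include <cstddef>

// Clip a 256-bin block histogram at clipLimit and spread the excess uniformly,
// as done before building the per-block mapping in CLAHE-style algorithms.
void clipHistogram(std::size_t hist[256], std::size_t clipLimit)
{
    std::size_t excess = 0;
    for (int q = 0; q < 256; ++q) {
        if (hist[q] > clipLimit) {
            excess += hist[q] - clipLimit;
            hist[q] = clipLimit;
        }
    }
    // Redistribute the clipped pixels evenly over all bins (the "add" increment).
    const std::size_t add = excess / 256;
    const std::size_t rest = excess % 256;
    for (int q = 0; q < 256; ++q)
        hist[q] += add + (static_cast<std::size_t>(q) < rest ? 1 : 0);
}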

Fig. 4. Results of the histogram equalization algorithms: a) global histogram equalization (HE), b) adaptive histogram equalization (AHE), c) contrast limited adaptive histogram equalization (CLAHE)

When visually comparing the processing results, the best method is CLAHE (Fig. 4c). The image processed with this method contains less noise than the image processed with the AHE method, and its brightness contrast is more natural. Compared to the image processed by the global equalization method, the CLAHE method improves the clarity of small and blurred details of the processed image and also increases the contrast, but not as exaggeratedly as the AHE method does. A table estimating the execution time of the considered methods in the MATLAB 2016 environment is also given below.

Table 1. Estimation of the execution time of the considered methods

Method | Program runtime with the considered method, s | Runtime of the method, s
CLAHE | 0.609 | 0.519


Perform image processing, visualization and analysis

Image Processing Toolbox™ provides a comprehensive set of reference-standard algorithms and workflow apps for image processing, analysis, visualization, and algorithm development. You can perform image segmentation, image enhancement, noise reduction, geometric transformations, and image registration using deep learning and traditional image processing methods. The toolbox supports processing of 2D, 3D, and arbitrarily large images.

Image Processing Toolbox apps let you automate common image processing workflows. You can interactively segment image data, compare image registration techniques, and batch-process large data sets. Visualization functions and apps let you explore images, 3D volumes, and videos; adjust contrast; create histograms; and manipulate regions of interest (ROIs).

You can speed up algorithms by running them on multicore processors and GPUs. Many toolbox functions support C/C++ code generation for desktop prototyping and embedded vision system deployment.

Getting Started

Learn the basics of Image Processing Toolbox

Import, Export, and Conversion

Image data import and export, conversion of image types and classes

Display and Exploration

Interactive tools for image display and exploration

Geometric Transformation and Image Registration

Scale, rotate, perform other N-D transformations, and align images using intensity correlation, feature matching, or control point mapping

Image Filtering and Enhancement

Contrast adjustment, morphological filtering, deblurring, ROI-based processing

Image Segmentation and Analysis

Region analysis, texture analysis, pixel and image statistics

Deep Learning for Image Processing

Perform image processing tasks, such as removing image noise and creating high-resolution images from low-resolution images, using convolutional neural networks (requires Deep Learning Toolbox™).

With all element-by-element transformations, the probability distribution law that describes the image changes. With linear contrasting, the form of the probability density is preserved; however, in the general case, i.e. for arbitrary values of the parameters of the linear transformation, the parameters of the probability density of the transformed image change.

Determination of the probabilistic characteristics of images that have undergone nonlinear processing is the direct problem of analysis. When solving practical problems of image processing, the inverse problem can be posed: given the known form of the probability density p_f(f) and the desired form p_g(g), determine the required transformation g = φ(f) to which the original image should be subjected. In the practice of digital image processing, the transformation of the image to an equiprobable distribution often leads to a useful result. In this case

p_g(g) = 1/(g_max − g_min) for g_min ≤ g ≤ g_max, and 0 otherwise, (6.1)

where g_min and g_max are the minimum and maximum brightness values of the converted image. Let us determine the characteristic of the converter that solves this problem. Let f and g be related by the function g(n, m) = φ(f(n, m)), and let P_f(f) and P_g(g) be the cumulative distribution laws of the input and output brightness. Taking into account (6.1), we find:

P_g(g) = (g − g_min)/(g_max − g_min), g_min ≤ g ≤ g_max.

Substituting this expression into the condition of probabilistic equivalence

P_f(f) = P_g(g),

after simple transformations we obtain the relation

g(n, m) = (g_max − g_min)·P_f(f(n, m)) + g_min, (6.2)

representing the characteristic g(n, m) = φ(f(n, m)) in the problem to be solved. According to (6.2), the original image undergoes a nonlinear transformation whose characteristic is determined by the cumulative distribution law P_f(f) of the original image itself. After that, the result is brought to the specified dynamic range using the linear contrasting operation.

Thus, the transformation of the probability density presupposes knowledge of the cumulative distribution of the original image. As a rule, there is no reliable information about it. Approximation by analytical functions, because of approximation errors, can lead to results that differ significantly from those required. Therefore, in the practice of image processing, the transformation of distributions is performed in two stages.



At the first stage, the histogram of the original image is measured. For a digital image whose gray scale belongs, for example, to the integer range 0 … 255, the histogram is a table of 256 numbers. Each of them shows the number of points in the image (frame) that have the given brightness. Dividing all the numbers in this table by the total sample size, equal to the number of samples in the image, we obtain an estimate of the probability distribution of the image brightness. We denote this estimate by p_f(f_q), 0 ≤ f_q ≤ 255. Then the estimate of the cumulative distribution is obtained by the formula:

P_f(f_q) = Σ_{i=0}^{q} p_f(f_i).

At the second stage, the nonlinear transformation (6.2) itself is performed, which provides the necessary properties of the output image. In this case, instead of the unknown true cumulative distribution, its estimate based on the histogram is used. With this in mind, all methods of element-by-element transformation of images whose purpose is to modify the distribution laws are called histogram methods. In particular, a transformation under which the output image acquires a uniform distribution is called histogram equalization (flattening).

Note that the histogram transformation procedures can be applied both to the image as a whole and to its individual fragments. The latter can be useful when processing non-stationary images, whose characteristics differ significantly from one area to another. In this case, the best effect can be achieved by applying histogram processing to individual areas - regions of interest. However, this will also change the values of the samples in all the other areas. Figure 6.1 shows an example of equalization performed in accordance with the described method.
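A small OpenCV sketch of such fragment-wise processing: equalizeHist is applied only to a rectangular region of interest while the rest of the frame is left untouched; the rectangle coordinates and the function name are illustrative.

#include <opencv2/opencv.hpp>

// Equalize the histogram of a single rectangular fragment of a grayscale image.
void equalizeFragment(cv::Mat& gray, const cv::Rect& region)
{
    cv::Mat roi = gray(region);      // header sharing data with the fragment
    cv::Mat result;
    cv::equalizeHist(roi, result);   // equalize only this fragment
    result.copyTo(roi);              // write the result back into the parent image
}

// Usage (illustrative coordinates):
//   cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
//   equalizeFragment(gray, cv::Rect(100, 50, 200, 150));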

A characteristic feature of many images obtained in real imaging systems is a significant proportion of dark areas and a relatively small number of areas with high brightness.

Figure 6.1 - An example of image histogram equalization: a) the original image and c) its histogram; b) the transformed image and d) its histogram

Equalization of the histogram leads to the equalization of the integral areas of the uniformly distributed brightness ranges. Comparison of the original (Figure 6.1 a) and processed (Figure 6.1 b) images shows that the redistribution of brightness that occurs during processing leads to an improvement in visual perception.

There are three main methods for enhancing the contrast of an image:

  • linear stretching of the histogram (linear contrasting),
  • histogram normalization,
  • equalization (linearization) of the histogram.

Linear stretching reduces to assigning new intensity values to each pixel of the image. If the intensities of the original image varied in the range from f_min to f_max, then this range must be linearly "stretched" so that the values vary from 0 to 255. To do this, it is enough to recalculate the old intensity values of all pixels according to the formula g = a·f + b, where the coefficients a and b are found from the condition that the boundary f_min should map to 0 and f_max to 255.
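A minimal C++ sketch of the stretch for 8-bit data; fMin and fMax denote the extreme intensities of the original image mentioned above, and the function name is illustrative.

#include <vector>

// Linear contrast stretching: map the observed range [fMin, fMax] onto [0, 255].
// g = a*f + b with a = 255/(fMax - fMin) and b = -a*fMin, so fMin -> 0 and fMax -> 255.
void stretchContrast(std::vector<unsigned char>& img,
                     unsigned char fMin, unsigned char fMax)
{
    if (fMax == fMin) return;                     // flat image, nothing to stretch
    const double a = 255.0 / (fMax - fMin);
    for (unsigned char& v : img) {
        double g = a * (static_cast<int>(v) - fMin);
        if (g < 0.0) g = 0.0;                     // guard against values outside [fMin, fMax]
        if (g > 255.0) g = 255.0;
        v = static_cast<unsigned char>(g + 0.5);
    }
}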

Histogram normalization, unlike the previous method, stretches not the entire range of intensity variation but only its most informative part. The informative part is understood as the set of histogram peaks, i.e. the intensities that occur most often in the image. The bins corresponding to rare intensities are discarded during normalization, after which the usual linear stretching of the resulting histogram is performed.
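A hedged sketch of one possible way to select the informative part: bins whose counts fall below a small threshold are skipped when choosing the stretch limits, after which the linear stretching from the previous sketch is applied to the range [lo, hi]; the 1% threshold is an assumption for illustration only, not a value from the source.

#include <cstddef>
#include <vector>

// Choose stretch limits from a 256-bin histogram, ignoring rare intensities.
void robustLimits(const std::vector<std::size_t>& hist,   // 256 bins
                  std::size_t totalPixels,
                  unsigned char& lo, unsigned char& hi)
{
    const std::size_t threshold = totalPixels / 100;      // ignore bins holding under ~1% of pixels
    int low = 0, high = 255;
    while (low < 255 && hist[low] < threshold) ++low;
    while (high > low && hist[high] < threshold) --high;
    lo = static_cast<unsigned char>(low);
    hi = static_cast<unsigned char>(high);
}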

Histogram equalization is one of the most common methods. The purpose of equalization is to make all brightness levels occur with the same frequency, so that the histogram corresponds to a uniform distribution law. Suppose there is a grayscale image of N×M pixels, and the number of quantization levels of the pixel brightness (the number of bins) is K. Then, on average, N·M/K pixels fall on each brightness level. The basic mathematics consists in comparing two distributions. Let f and g be random variables describing the pixel intensities in the original and transformed images, p_f(f) the intensity distribution density in the original image, and p_g(g) the desired distribution density. It is necessary to find a transformation of the distribution densities that would allow the desired density to be obtained:

g = φ(f).

Let us denote by P_f(f) and P_g(g) the cumulative distribution laws of the random variables f and g. From the condition of probabilistic equivalence it follows that P_f(f) = P_g(g). Let us write down the cumulative distribution law of the output quantity by definition (for the desired uniform density):

P_g(g) = (g − g_min)/(g_max − g_min), g_min ≤ g ≤ g_max.

Hence we get that

g = (g_max − g_min)·P_f(f) + g_min.

It remains to figure out how to estimate the cumulative distribution law. To do this, one must first build the histogram of the original image and then normalize the resulting histogram by dividing the value of each bin by the total number of pixels. The bin values can be considered as approximate values of the distribution density function. Thus, the value of the cumulative distribution function can be represented as a sum of the following form:

P_f(f_q) ≈ Σ_{i=0}^{q} p_f(f_i).
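A tiny numeric illustration of the resulting mapping, using an invented 4-level, 16-pixel image purely as an example:

\[
n_k = (6,\,6,\,2,\,2), \qquad
\hat p_f = (0.375,\,0.375,\,0.125,\,0.125), \qquad
\hat P_f = (0.375,\,0.75,\,0.875,\,1.0),
\]
\[
g_k = \operatorname{round}\bigl(3\,\hat P_f(f_k)\bigr) = (1,\,2,\,3,\,3),
\]

so the densely populated dark levels keep distinct output values, while the two sparsely populated bright levels merge into one.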

The constructed estimate can be used to calculate new intensity values. Note that the listed histogram transformations can be applied not only to the entire image, but also to its individual parts.

The OpenCV library implements the equalizeHist function, which enhances the contrast of an image by equalizing its histogram. The function prototype is shown below.

void equalizeHist(const Mat& src, Mat& dst)

The function works in four stages:

1. The histogram H of the source image src is calculated.
2. The histogram is normalized so that the sum of its bins equals 255.
3. The integral (cumulative sum) of the histogram is computed: H'(i) = Σ_{0 ≤ j < i} H(j).
4. The image is transformed using H' as a look-up table: dst(x, y) = H'(src(x, y)).

Below is an example of a program that equalizes a histogram. The application takes the name of the original image as a command-line argument. After performing the histogram equalization operation, the original image converted to grayscale (Figure 7.11, left) and the image with the equalized histogram (Figure 7.11, right) are displayed. The image used is from the PASCAL VOC 2007 database.

#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

const char* helper = "Sample_equalizeHist.exe \n\
\t - image file name\n";

int main(int argc, char* argv[])
{
    const char* initialWinName = "Initial Image";
    const char* equalizedWinName = "Equalized Image";
    Mat img, grayImg, equalizedImg;
    if (argc < 2)
    {
        printf("%s", helper);
        return 1;
    }
    // load the image
    img = imread(argv[1], 1);
    // convert to grayscale
    cvtColor(img, grayImg, CV_RGB2GRAY);
    // equalize the histogram
    equalizeHist(grayImg, equalizedImg);
    // show the original image and the equalization result
    namedWindow(initialWinName, CV_WINDOW_AUTOSIZE);
    namedWindow(equalizedWinName, CV_WINDOW_AUTOSIZE);
    imshow(initialWinName, grayImg);
    imshow(equalizedWinName, equalizedImg);
    waitKey();
    // close the windows
    destroyAllWindows();
    // release memory
    img.release();
    grayImg.release();
    equalizedImg.release();
    return 0;
}


Fig. 7.11. The original grayscale image (left) and the image with the equalized histogram (right)