Analysis of a clustering fusion algorithm for multi-band color images

Because of technical limitations, traditional algorithms can map only a small number of colors and give poor processing results, which hinders observation by the human eye. This paper proposes a clustering fusion algorithm based on D-S evidence theory. Salt-and-pepper denoising and Gaussian denoising are first applied to the multi-band color image to improve image recognizability and better reflect the objective scene, without being constrained by the technical conditions above. Texture features and edge features are then extracted from the denoised images; the two kinds of features are fused, and a probability distribution is constructed to compute the probability that each pixel belongs to each class. Based on D-S evidence combination, the probabilities of the four channels are fused, and each pixel is assigned to the class for which its probability is largest. Experimental results show that the proposed algorithm can combine target features at different levels from color images of different bands and retain more effective information, which benefits target recognition and detection.

1. Introduction. Image fusion is an important branch of multi-sensor information fusion and a comprehensive discipline drawing on sensor technology, computer applications, image processing, artificial intelligence and other modern high technologies. Image fusion is an information fusion technology that takes images as its research object: an information processing procedure that integrates images, or image sequences, of a specific scene acquired at the same or different times by two or more sensors, in order to generate a new interpretation of the scene [2]. Since the 1980s, a large number of theoretical and applied studies have been carried out at different levels of multi-source image fusion. Multi-source images are redundant and complementary: they provide information from multiple viewpoints, expand the sensing range in space and time, and improve the accuracy and robustness of observation. Therefore, multi-source image fusion has more advantages than single-source imaging [24].
With an appropriate algorithm, multiple sets of imaging information with different wavelengths and different imaging mechanisms are merged into a new image, making the fused image more reliable and less ambiguous, easier to understand, and more suitable for visual inspection and for computer-based measurement, classification, recognition, understanding and processing. Multi-source image fusion technology is widely used in military target detection, tracking and recognition, earth observation, airport navigation, security surveillance, intelligent transportation, medical imaging and diagnosis, and other civilian areas. It is of great significance to the development of the national economy and the construction of national defense [12,23].
However, with current image clustering fusion technology, most of the results obtained by clustering fusion algorithms are grayscale images. Research on human visual characteristics has found that human vision is very sensitive to color. The human eye can distinguish only tens of gray levels, while its resolution of colors can reach thousands. Color therefore provides much more information to the human eye than gray levels, and compared with gray-level information, the human eye can identify color-coded information faster and more accurately [11]. The traditional gray-level algorithm first fuses the images into a gray image, then applies pseudo-color coding, and finally displays a pseudo-color fused image. Although the gray-level method is intuitive and simple, it is limited in principle by technical constraints: it can map only a small number of colors and its processing results are poor, which hinders observation by the human eye. In this paper, a novel clustering fusion algorithm based on D-S evidence theory is proposed.

2. Research on the clustering fusion algorithm for multi-band color images.

2.1. Multi-band color image denoising. Due to the limited performance of the imaging device and the influence of the post-processing circuitry, an image obtained directly from a sensor introduces a great deal of noise interference into the final result. In order to improve the identifiability of the image and better reflect the objective scene, the digital image must be denoised.
(1) Fast salt-and-pepper noise removal algorithm. Salt-and-pepper noise, also known as impulse noise, consists of bright and dark spots produced during the imaging, transmission and decoding of multi-band color images [25]. The density function of salt-and-pepper noise can be given by

p(x) = P_a for x = a;  p(x) = P_b for x = b;  p(x) = 0 otherwise.

If b > a, the gray value b appears in the multi-band color image as a highlight, and a appears as a dark spot. For a two-valued image, a negative pulse appears in the multi-band color image as a black spot (pepper point, x = 0) and a positive pulse appears as a white dot (salt point, x = 255), which seriously degrades image quality.
Based on the idea of weighted filtering, different weights are assigned to each pixel in the filter window, and a median filter is used to replace the center pixel to remove the salt-and-pepper noise. At the same time, noise detection is introduced so that noise can be pre-judged and the window size can be changed adaptively. On the premise of removing the noise, the signal-to-noise ratio of the processed image is improved, and the texture and contours in the image are preserved [3].
For the original image, suppose there is a pixel p(x, y). First judge whether the pixel is suspected noise (p(x, y) = 255 or p(x, y) = 0); if not, the point is skipped. Otherwise, the number S_length of non-extreme points in the window is counted; if S_length is greater than the set criterion, the coordinate of the detected point is retained and the filtering window is expanded before continuing the calculation; otherwise, filtering is carried out. That is, taking the point as the center, a 3*3 neighborhood is used to build a filter window. Following the neighborhood absolute-difference method, the difference between each element point and the center point in the window is computed and accumulated into a discriminant value ROAD, and a threshold Q_t is set (experiments give the best results when its value is 100). The discriminant is

ROAD = Σ_{(i,j) in window} |q(i, j) − p(x, y)|.

If ROAD is greater than the threshold, denoising is carried out; otherwise the original value is kept. When denoising, each neighborhood pixel other than the center is assigned a weight coefficient according to its correlation with the central pixel: the higher the correlation, the greater the assigned weight, and vice versa [14]. The weight of a pixel in the neighborhood is computed as

ω(i, j) = exp(−|q(i, j) − p(x, y)|² / (2σ_ω²)),

where σ_ω is an empirical constant for weight allocation. The final weight coefficient of each neighborhood pixel is obtained by normalizing the pixel weights:

ω̄(i, j) = ω(i, j) / Σ ω(i, j).

Each pixel q(i, j) is then multiplied by its weight coefficient and the products are summed, giving a new center value

Z_weight = Σ ω̄(i, j) q(i, j).

The original central pixel is replaced by this result, and a median filter is applied to obtain the final value Z_med as the output. The cycle then repeats from the first step until all pixels have been examined.
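As a rough illustration of the steps above, the following Python sketch applies the suspected-noise test, the accumulated-difference (ROAD) check against the threshold Q_t, the σ_ω-controlled weighting, and the final median step to a single-channel image. The exact weighting formula, the fixed 3*3 window, and the parameter defaults are illustrative assumptions, not the authors' exact implementation (window expansion is omitted).

```python
import numpy as np

def denoise_salt_pepper(img, q_t=100.0, sigma_w=25.0):
    """Sketch of the detection + weighted-median step; q_t and sigma_w
    stand for the threshold Q_t and the weight constant sigma_omega."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            p = out[x, y]
            # only extreme-valued pixels are suspected impulse noise
            if p != 0 and p != 255:
                continue
            win = out[x - 1:x + 2, y - 1:y + 2]
            road = np.sum(np.abs(win - p))  # accumulated absolute difference
            if road <= q_t:
                continue  # keep the original value
            wgt = np.exp(-np.abs(win - p) ** 2 / (2 * sigma_w ** 2))
            wgt[1, 1] = 0.0              # exclude the centre pixel itself
            wgt /= wgt.sum()             # normalised weight coefficients
            z_weight = np.sum(wgt * win) # weighted replacement value
            win2 = win.copy()
            win2[1, 1] = z_weight
            out[x, y] = np.median(win2)  # final median output Z_med
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a flat region with a single salt pixel, the filter replaces the extreme value with the neighborhood level while leaving non-extreme pixels untouched.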
(2) Gaussian noise removal algorithm. In the process of image generation and transmission, the mixed noise in the image is affected by many factors: a large number of small, independent, identically distributed disturbances are superimposed to form a composite effect. By the central limit theorem, a Gaussian distribution can therefore be used to describe the noise model. For Gaussian noise [13], the distribution function satisfies

p(z) = (1 / (√(2π) σ)) exp(−(z − µ)² / (2σ²)),

where z represents the pixel gray value, µ represents the expectation of z, σ represents the standard deviation of z, and σ² is the variance of z. Gaussian noise is another main type of noise in remote sensing images. It badly damages the texture information in the image and must be removed effectively. The K-SVD over-complete dictionary method is used for sparse decomposition of the multi-band source color images, and an over-complete dictionary is constructed to remove Gaussian noise effectively.
Assume the multi-band color image matrix Y = [y_1, . . . , y_p] has size n * p, where n is the length of a multi-band color image block and p is the number of blocks. Consider any image block y_i in matrix Y and an initial random dictionary D of size n * K, with p >> K and K > n. The coefficients of y_i are obtained by solving the optimization problem

min_α ||y_i − Dα||²  subject to  ||α||_0 ≤ T_0,

where α represents the sparse decomposition coefficients of the multi-band color image and T_0 represents the sparsity. The sparse coding stage is considered first: assuming the dictionary D is fixed, the problem above reduces to solving for the sparse decomposition coefficients α. For the M image blocks of the entire multi-band color image Y to be denoised, this can be expressed as

min_A ||Y − DA||²  subject to  ||α_i||_0 ≤ T_0 for every block i,

where A represents the matrix of sparse decomposition coefficients of the multi-band color image.
On the basis of the above calculation, the dictionary D is updated. The K-SVD update steps for the dictionary are as follows: given the sparsity T_0, with Y the multi-band color training image and D initialized as a random dictionary, SVD decomposition is used to update the atoms of D one at a time until the target dictionary is obtained.
In this process, a multi-band color image with Gaussian noise is used to train the dictionary, so the over-complete dictionary adapts well to multi-band color images. According to the principle of sparse decomposition, the sparse components are retained when the noisy image is sparsely decomposed. Through K-SVD dictionary training and updating, the resulting over-complete dictionary reflects the features of the multi-band color image, and a good denoising effect is obtained after the multi-band color image is reconstructed.
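The alternation between sparse coding and per-atom SVD updates can be sketched as follows. This is a minimal illustration of the K-SVD loop using a simple orthogonal matching pursuit coder; the function names `omp` and `ksvd` are illustrative, and patch extraction and image reconstruction, which the denoising pipeline would also need, are omitted.

```python
import numpy as np

def omp(D, y, t0):
    """Greedy orthogonal matching pursuit: select at most t0 atoms for y."""
    idx, resid = [], y.copy()
    for _ in range(t0):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[idx] = coef
    return alpha

def ksvd(Y, K, t0, n_iter=10, seed=0):
    """Alternate sparse coding (OMP) and per-atom SVD dictionary updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], K))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    for _ in range(n_iter):
        A = np.column_stack([omp(D, y, t0) for y in Y.T])  # sparse codes
        for k in range(K):
            users = np.nonzero(A[k])[0]     # signals that use atom k
            if users.size == 0:
                continue
            # residual with atom k's contribution removed
            E = Y[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]               # rank-1 update of the atom
            A[k, users] = s[0] * Vt[0]      # and of its coefficients
    return D
```

The codes A are recomputed at the start of each iteration, so stale coefficients from the atom updates are refreshed before the next sweep.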

2.2. Feature extraction from multi-band color images. In order to carry out clustering fusion of multi-band color images, the target feature information in the images must be extracted as fully as possible, including texture, edge, contour and other features. Texture features reflect the gray-scale characteristics and spatial topological relations of the image; they are a kind of local structural feature, manifested as changes of pixel gray level or color within a pixel neighborhood. Texture is an important image property, but it is difficult to describe. In this paper, the gray-level co-occurrence matrix method is used to extract the texture features of remote sensing images. The gray-level co-occurrence matrix reflects comprehensive information about image gray levels with respect to direction, adjacency interval and amplitude of change, and is the basis for analyzing the local characteristics and arrangement rules of an image.
(1) Spectral feature extraction of multi-band color images. Spectrum-based feature extraction algorithms are usually based on the mean and variance of the samples [17]. For a color remote sensing image with R, G and B channels, the mean and variance are computed for each sample on each channel. Specifically, if a sample is an image of size r * c, the mean of the R channel is the sum of all pixel values of the sample on the R channel divided by the total number of pixels:

u = (1 / (r · c)) Σ_{i=1}^{r} Σ_{j=1}^{c} f(i, j),    (9)

S² = (1 / (r · c)) Σ_{i=1}^{r} Σ_{j=1}^{c} (f(i, j) − u)²,    (10)

where u and S² represent the mean and variance of the sample pixels of the multi-band color image.
The gray-level co-occurrence matrix is defined as the joint probability distribution of two pixels separated by a displacement δ = (∆x, ∆y) in a multi-band color image. Assuming the image has N gray levels, the co-occurrence matrix is of size N × N and can be written P_δ(i, j), where the element P(i, j) at position (i, j) is the co-occurrence probability of a pixel pair separated by δ = (∆x, ∆y) in which one pixel has gray level i and the other has gray level j. In practice, to reduce the amount of computation, (∆x, ∆y) generally takes only the following four cases: (1) ∆x = 1, ∆y = 0, i.e. horizontal adjacency; (2) ∆x = 0, ∆y = 1, i.e. vertical adjacency; (3) ∆x = 1, ∆y = 1, i.e. northwest-to-southeast adjacency; (4) ∆x = −1, ∆y = 1, i.e. northeast-to-southwest adjacency. The co-occurrence matrices of different multi-band color images vary greatly because of their different texture scales. For coarse-textured images, the texture scale is large, the gray level varies smoothly, and pixel pairs tend to have the same brightness, so the values P(i, j) concentrate near the diagonal. For fine-textured images, the texture scale is small, the gray values are not concentrated, and the values P(i, j) scatter away from the diagonal. The co-occurrence matrix thus reflects spatial information about the relative positions of pixels with different gray levels.
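A direct (unoptimized) computation of the co-occurrence matrix for a single offset can be sketched in Python as follows; `levels` is the number of quantized gray levels, and the (∆x, ∆y) arguments follow the horizontal/vertical convention above (∆x a column offset, ∆y a row offset). The function name is illustrative.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalised co-occurrence matrix P_delta for offset (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for x in range(h):
        for y in range(w):
            x2, y2 = x + dy, y + dx       # row offset dy, column offset dx
            if 0 <= x2 < h and 0 <= y2 < w:
                P[img[x, y], img[x2, y2]] += 1
    return P / P.sum()                    # turn counts into probabilities
```

For a 2x3 image with gray levels {0, 1}, the horizontal offset (∆x = 1, ∆y = 0) yields four ordered pixel pairs, each contributing probability 1/4.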
(2) Texture feature extraction of multi-band color images. The gray-level co-occurrence matrix can be used to define texture features of multi-band color images [4,7,19,20], the purpose being to use texture features to assist image classification. In this paper, four commonly used statistics, which work best for multi-band color images, are selected to extract the texture features. (1) The energy is

ASM = Σ_i Σ_j P(i, j)².

(2) The contrast is

CON = Σ_i Σ_j (i − j)² P(i, j).

(3) The correlation feature is

COR = [Σ_i Σ_j (i − µ_x)(j − µ_y) P(i, j)] / (σ_x σ_y),

where σ_x and σ_y represent the standard deviations, and µ_x and µ_y the means, of the row and column marginals of the gray-level co-occurrence matrix.
(4) The entropy feature is

ENT = −Σ_i Σ_j P(i, j) log P(i, j).

To describe the texture content of the co-occurrence matrix more intuitively, the physical meaning of the four features is as follows: (1) Energy: the sum of squares of the element values in the co-occurrence matrix. It reflects the uniformity of the gray-level distribution and the coarseness of the texture. If all values of the co-occurrence matrix are equal, the ASM value is small; on the contrary, if some values are large and the others small, the ASM value is large. When the elements of the co-occurrence matrix are concentrated, the ASM value is large, indicating a more uniform and regular texture pattern. (2) Contrast: reflects image clarity and the depth of the texture grooves. The deeper the grooves, the greater the contrast and the clearer the visual effect; conversely, small contrast means shallow grooves and a fuzzy appearance. The more pixel pairs with a large gray-level difference, the greater the value; the larger the elements far from the diagonal of the co-occurrence matrix, the greater CON is. (3) Correlation: measures the similarity of the co-occurrence matrix elements along rows or columns, so the correlation value reflects the correlation of local gray values in the image. When the matrix element values are close to equal, the correlation value is large; conversely, when they differ greatly, it is small. If there is a horizontal texture in the image, COR of the horizontal-direction matrix is greater than COR of the other matrices. (4) Entropy: a measure of the amount of information the image carries, texture information being part of it. It is a measure of randomness: the entropy is largest when all elements of the co-occurrence matrix are almost equal, i.e. maximally random. It represents the non-uniformity or complexity of the texture in the image.
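Given a normalized co-occurrence matrix, the four statistics can be computed as below. The marginal means and standard deviations used for COR follow the standard Haralick definitions, which is an assumption about the exact form the paper intends; the small epsilon guards against a zero denominator.

```python
import numpy as np

def texture_features(P):
    """Energy (ASM), contrast (CON), correlation (COR) and entropy (ENT)
    of a normalised grey-level co-occurrence matrix P."""
    levels = P.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    asm = np.sum(P ** 2)                          # energy
    con = np.sum((i - j) ** 2 * P)                # contrast
    mu_x, mu_y = np.sum(i * P), np.sum(j * P)     # marginal means
    sd_x = np.sqrt(np.sum((i - mu_x) ** 2 * P))   # marginal std devs
    sd_y = np.sqrt(np.sum((j - mu_y) ** 2 * P))
    cor = np.sum((i - mu_x) * (j - mu_y) * P) / (sd_x * sd_y + 1e-12)
    nz = P[P > 0]                                 # skip log(0) terms
    ent = -np.sum(nz * np.log2(nz))               # entropy in bits
    return {"ASM": asm, "CON": con, "COR": cor, "ENT": ent}
```

A perfectly uniform 4x4 matrix gives the minimal energy 1/16, zero correlation, and the maximal entropy of 4 bits, matching the qualitative descriptions above.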
In summary, the texture feature extraction algorithm for multi-band color images based on the gray-level co-occurrence matrix is as follows: (1) The color remote sensing image is converted to a gray image. (2) The gray levels of the multi-band color image are coarsely quantized. Because computing the co-occurrence matrix is expensive, the gray levels are usually quantized to save computing time, e.g. 256 levels are quantized to 16. Although the quantized images are somewhat distorted, the influence on the texture features is small. In this paper, the Sobel and Canny operators are also used to convolve the multi-band color image and extract the target region features [9,21].
The Sobel operator is commonly used in image edge detection. It is a discrete differential operator that computes the gradient of the image. It uses two 3*3 kernel templates, convolved with the image, to compute the gradient changes in the horizontal and vertical directions. Suppose B is the multi-band color image matrix and G_x and G_y are the convolution kernels in the horizontal and vertical directions; the templates are

G_x = [−1 0 1; −2 0 2; −1 0 1],  G_y = [−1 −2 −1; 0 0 0; 1 2 1].

The steps of the Sobel operator for extracting the edge features of multi-band color images are: (1) The whole image is traversed by the templates of the two directions; after each movement, the template center is aligned with one of the image pixels. The Canny operator is used to extract the edge information of multi-band color images with a multi-scale algorithm; the extracted results include the clear texture and detail information in the target image. The specific steps for finding edges are as follows: (1) Construct a Gaussian filter H and convolve it with the multi-band color image G:

g(x, y) = h(x, y, γ) * G(x, y),

where g(x, y) is the multi-band color image after convolution and h(x, y, γ) is the constructed Gaussian filter, γ being the Gaussian coefficient. (2) The gradient and direction of the image are calculated: the partial derivatives of the Gaussian-filtered image in the x and y directions are approximated by 2*2 first-order finite differences.
The gradient magnitude and direction of the multi-band color image are obtained by transforming from Cartesian to polar coordinates:

M(x, y) = √(P_x² + P_y²),  θ(x, y) = arctan(P_y / P_x),

where P_x and P_y are the partial derivatives. (3) Non-maximum suppression of the gradient magnitude. Each central pixel x_i of a neighborhood is compared with the two pixels along its gradient line: if the gradient value of x_i is not larger than those of the two adjacent pixels, the gray value of x_i is set to 0. That is, local gradient maxima are retained and non-maxima are suppressed. (4) A double-threshold algorithm is used to detect and connect edges. The non-maximum-suppressed image is thresholded with two thresholds T_H1 and T_H2, with T_H1 < T_H2. Setting to 0 the gray value of every pixel whose gradient value is less than T_H1 yields image 1; because this threshold is small, more information is retained. Setting to 0 every pixel whose gradient value is less than T_H2 yields image 2; because this threshold is large, most noise interference is removed, but some effective edge information is lost as well. Image 2 is therefore supplemented on the basis of image 1, the edges of the image are connected, and the final feature extraction image is obtained [22].
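The Sobel gradient computation above can be sketched as follows; this covers only the template traversal and the magnitude/direction step, not the full Canny pipeline (Gaussian smoothing, non-maximum suppression and double thresholding are omitted). The sliding window is applied as a direct element-wise product, which is a common sketch-level simplification of convolution with these symmetric templates.

```python
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal template
GY = GX.T                                            # vertical template

def sobel_gradient(img):
    """Gradient magnitude and direction from the two 3x3 Sobel templates."""
    img = img.astype(np.float64)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for x in range(1, h - 1):          # traverse the image interior
        for y in range(1, w - 1):
            win = img[x - 1:x + 2, y - 1:y + 2]
            gx[x, y] = np.sum(GX * win)    # horizontal response
            gy[x, y] = np.sum(GY * win)    # vertical response
    mag = np.sqrt(gx ** 2 + gy ** 2)       # M = sqrt(Px^2 + Py^2)
    theta = np.arctan2(gy, gx)             # gradient direction
    return mag, theta
```

On a vertical step edge the horizontal response dominates and the magnitude peaks at the edge columns, while flat regions give zero response.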

2.3. Multi-band color image clustering fusion algorithm based on D-S evidence theory. (1) Frame of discernment
Assume Θ is the set of all possible values of a variable X; if the elements of Θ are mutually exclusive, then Θ is a frame of discernment. Every concept and function in evidence theory is defined with respect to a frame of discernment, so evidence theory is said to be a theory built on the frame of discernment.
(2) Basic probability assignment function. Suppose Θ is a frame of discernment, and let the set function m : 2^Θ → [0, 1] satisfy the conditions

m(∅) = 0  and  Σ_{C ⊆ Θ} m(C) = 1.

Then m is called the basic probability assignment function on the frame Θ, and m(C) is the basic probability number, which indicates the degree to which the evidence supports proposition C and reflects the credibility the evidence lends to C. If C ⊆ Θ and C ≠ Θ, m(C) represents the degree of trust in proposition C; if C = Θ, m(C) represents the probability mass that cannot be assigned to any proper subset. If C ⊆ Θ, C is non-empty and m(C) > 0, then C is called a focal element of m.
(3) Belief function and plausibility function. For any C ⊆ Θ, the belief function defined by the body of evidence is

Bel(C) = Σ_{B ⊆ C} m(B),

i.e. Bel(C) is the sum of the basic probabilities of all subsets of proposition C. From the definition, Bel(∅) = 0 and Bel(Θ) = 1.
For any C ⊆ Θ, the plausibility function defined by the body of evidence is

Pl(C) = Σ_{B ∩ C ≠ ∅} m(B),

i.e. Pl(C) is the sum of the basic probabilities of all sets compatible with proposition C.
For any C ⊆ Θ, Pl(C) ≥ Bel(C); the interval [Bel(C), Pl(C)] constitutes the uncertainty interval of proposition C and indicates the degree of uncertainty of the evidence.
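The belief and plausibility definitions can be written directly over a mass function stored as a dictionary from focal elements (frozensets) to masses; the {U, V, W} frame and the mass values below are illustrative.

```python
def bel(m, C):
    """Belief: total mass of all focal elements that are subsets of C."""
    return sum(v for B, v in m.items() if B <= C)

def pl(m, C):
    """Plausibility: total mass of all focal elements intersecting C."""
    return sum(v for B, v in m.items() if B & C)

theta = frozenset({"U", "V", "W"})
m = {frozenset({"U"}): 0.5, frozenset({"U", "V"}): 0.3, theta: 0.2}
C = frozenset({"U"})
# Bel(C) = 0.5, Pl(C) = 0.5 + 0.3 + 0.2 = 1.0, so Bel(C) <= Pl(C)
```

The uncertainty interval of C here is [0.5, 1.0], reflecting the mass committed to supersets of {U} that is compatible with, but not specific to, C.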
(4) Rules of evidence combination. Assume l basic probability assignment functions m_1, m_2, . . . , m_l. After combination, the total basic probability assignment m_Θ = m_1 ⊕ m_2 ⊕ · · · ⊕ m_l can be expressed as

m(C) = (1 / (1 − K)) Σ_{C_1 ∩ · · · ∩ C_l = C} m_1(C_1) m_2(C_2) · · · m_l(C_l),  C ≠ ∅,

where the parameter K is calculated as

K = Σ_{C_1 ∩ · · · ∩ C_l = ∅} m_1(C_1) m_2(C_2) · · · m_l(C_l).

D-S evidence theory provides a natural and powerful method for representing and synthesizing uncertain information, and as an information fusion method it has been widely applied to remote sensing image classification. In multi-band color image clustering, the main role of D-S evidence theory is to fuse multiple features. Before applying the theory, therefore, a target feature database must be set up and the features to fuse must be decided. Each feature has a standard eigenvalue, which can be learned from test data. The categories of the targets to be classified are then determined; the possible target classes are chosen so as to minimize computational complexity under the given application conditions. When the number of elements in a frame is very large, all assumptions and evidence in the field should be divided into several smaller frames of discernment to simplify the computation [18]. The concrete steps for applying evidence theory are as follows: (1) Feature extraction. Appropriate features are selected according to the characteristics of the multi-band color image to be classified (for example, texture features, edges, contour features). (2) The mass function is computed, which can be done with the methods of probability theory.
Because a D-S inference system is very similar to a traditional Bayesian classifier, the probability that a target belongs to each class can be calculated by comparing the target's feature value with the standard feature value of each class. To simplify the calculation, the probability distribution of the feature values can be assumed Gaussian and then normalized to satisfy the mass-function requirements. (3) Combination according to the evidence combination rule. Depending on the actual situation, a multi-level fusion process can be carried out so that the target information is fully utilized: first, for each feature, the information of each band is fused; then the fused mass functions are used to fuse the different features and merge their information. (4) The target category is determined by decision logic, using the confidence interval determined by belief and plausibility to classify the target.
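Dempster's combination rule above can be sketched for two bodies of evidence (and applied pairwise for l sources, since the operation is associative): focal elements are intersected, conflicting mass K is accumulated, and the result is renormalized by 1 − K.

```python
def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalise by 1 - K."""
    raw = {}
    K = 0.0
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                raw[inter] = raw.get(inter, 0.0) + v1 * v2
            else:
                K += v1 * v2                  # mass assigned to conflict
    if K >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {A: v / (1.0 - K) for A, v in raw.items()}
```

For two single-class mass functions m1 = {U: 0.6, V: 0.4} and m2 = {U: 0.7, V: 0.3}, the conflict is K = 0.46 and the combined masses renormalize to sum to 1.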
In the actual clustering of multi-band color images, the texture features and contour features of the images to be clustered are first extracted. These two kinds of features are then fused and a probability distribution over the features is constructed, giving the probability that each pixel belongs to each class; D-S evidence combination is then applied to fuse the probabilities of the four channels. The specific algorithm is implemented as follows: (1) Taking the characteristics of the image into account, the frame of discernment is fixed at three classes, denoted {U, V, W}. Three kinds of sample images are selected from several other images. Each pixel of each sample is read on the R, G and B channels, and the mean and variance of each class of samples on each channel (i.e. the spectral characteristics of the multispectral color image) are computed, thereby establishing a target feature library; the specific calculation follows formulas (9) and (10). When the multi-feature combination method is used for clustering fusion, the texture image is normalized into a gray image with the range 0-255, so that its contribution to the classification process is essentially the same as that of the band images. The mean and variance of the normalized gray image are computed to form a fourth channel [1,5,6,8,10,15,16]: for each channel k ∈ {R, G, B, fourth},

u_k = (1 / (r · c)) Σ_i Σ_j f_k(i, j),  S_k² = (1 / (r · c)) Σ_i Σ_j (f_k(i, j) − u_k)².

(2) On the four bands, i.e. the R, G, B and fourth channels, the probability distribution over the categories is calculated for each pixel.
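The per-pixel, per-channel probability computation and the channel fusion can be sketched as follows. The Gaussian form of the per-class distribution and the direct product fusion under the independence assumption follow the description in this section; the data layout (a per-class list of per-channel (mean, sigma) pairs from the feature library) and the function names are illustrative assumptions.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density used for the per-channel distribution."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def cluster_pixel(values, classes):
    """values: the pixel's value on each channel (R, G, B, fourth);
    classes: {class name: [(mean, sigma) per channel]} from the feature library.
    Per channel, a normalised Gaussian probability is computed for every class;
    channels are fused by product (independence assumption) and the pixel is
    assigned to the class with the largest fused probability."""
    fused = {c: 1.0 for c in classes}
    for ch in range(len(values)):
        probs = {c: gauss_pdf(values[ch], *classes[c][ch]) for c in classes}
        total = sum(probs.values()) or 1.0
        for c in classes:
            fused[c] *= probs[c] / total   # basic probability number per channel
    return max(fused, key=fused.get)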
After comparing the advantages and disadvantages of various distribution functions, the Gaussian distribution is used for this probability distribution:

p(x) = (1 / (√(2π) σ)) exp(−(x − µ)² / (2σ²)).

(3) The probability distributions on the four channels are fused according to the evidence combination rule. To simplify the problem, the three categories are assumed to be mutually independent; the probability distribution of each pixel on the four channels is fused directly with the combination rule, and each pixel is assigned to the class with the largest fused probability.

3. Experimental results and analysis. The high-resolution panchromatic image used in this paper is a SAR image, shown in Figure 1. The multispectral image is a TM image, shown in Figure 2. The TM image is a satellite image provided by the LANDSAT satellite and consists of 7 bands, as listed in Table 1.
Before clustering and fusing the multi-band color images, the Sobel and Canny operators are applied to the SAR image for feature extraction. The extraction results are shown in Figure 3.
For remote sensing images of different wavelengths, the Sobel and Canny operators are used for feature extraction. The Sobel operator performs well on edge extraction from the infrared remote sensing image: it can effectively extract most contour information, although it also produces some continuous segments or intermittent outliers. The Canny operator can extract the texture and contour information of the short-wave remote sensing image; because the extraction algorithm itself has noise suppression and edge connection characteristics, the extracted edges are smoother and false alarms are very few. Therefore, fusing the results of the two operators combines their advantages and yields more comprehensive target information. Figure 4 shows the clustering fusion results of the traditional algorithm and of this algorithm for Figure 1 and Figure 2. From Figure 4 it can be clearly seen that the traditional algorithm emphasizes the image components with large gray gradients during clustering fusion, so when the two images differ greatly in gray gradient the effect is significant. But it is affected by noise, especially impulse noise: sudden gray-level changes degrade the fusion result. In the proposed algorithm, salt-and-pepper denoising and Gaussian denoising are performed before the clustering fusion, which yields an image with relatively stable gray levels and lays the foundation for better clustering fusion.
The results of multi-band color image clustering fusion can be judged qualitatively and quantitatively by analyzing the visual effect to the human eye or the statistical parameters of the image. From visual inspection of the clustering fusion test results in Figure 4, it can be seen that the proposed algorithm not only effectively enhances the spatial resolution of the multispectral image but also, compared with the traditional algorithm, reduces the loss of spectral information and better preserves the spectral information of the multispectral image, obtaining fusion results with high resolution and good spectral fidelity. For objective evaluation of the clustering fusion results, qualitative and quantitative assessment is usually performed with statistical image parameters. Four indexes are used: information entropy, average gradient, standard deviation and edge preservation, yielding numerical analysis results for each fused image.
(1) Information entropy represents the proportion of effective information in a signal. In image processing, information entropy is defined as the average information content of the image: for an image whose element gray values are assumed mutually independent, let the gray distribution be h = {h_1, h_2, . . . , h_i, . . . , h_ς}, where h_i is the ratio of the number of pixels with gray value i to the total number of pixels in the clustering fusion image, and ς is the total number of gray levels. The information entropy of the clustering fusion image is

E = −Σ_{i=1}^{ς} h_i log h_i.

The greater the information entropy of the clustering fusion image, the more effective information the image contains and the better the clustering fusion.
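The entropy index can be computed directly from the gray-level histogram, as in this sketch (entropy in bits; the logarithm base only rescales the index):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    h = hist / hist.sum()          # h_i: fraction of pixels at each level
    nz = h[h > 0]                  # skip log(0) terms
    return float(-np.sum(nz * np.log2(nz)))
```

A constant image has entropy 0, while an image split evenly between two gray levels has entropy 1 bit, consistent with "more spread-out histogram, more information".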
(2) The average gradient indicates the improvement in image quality. It reflects the clarity of the image as well as the contrast of small details and the texture variation characteristics:

ḡ = (1 / ((M − 1)(N − 1))) Σ_i Σ_j √((∆_x f(i, j)² + ∆_y f(i, j)²) / 2),

where ∆_x f and ∆_y f are the first differences of the image in the horizontal and vertical directions. (3) For an image, the standard deviation reflects the dispersion of the pixel gray values about the overall gray mean; the greater the standard deviation, the more dispersed the gray levels:

σ = √((1 / (M N)) Σ_i Σ_j (f(i, j) − µ)²).

Owing to the imaging characteristics of human vision, the more widely distributed the gray scale of an image, the stronger its identifiability. For clustering fusion images, the larger the standard deviation, the wider the dynamic range of the gray scale, the clearer the detailed features, and the higher the identifiability. (4) Edge preservation is used as a criterion to evaluate the clustering results: the larger the edge preservation, the more edge information the clustering retains. Table 2 shows the evaluation indexes of the clustering fusion images of Figure 1 and Figure 2 for bands 1, 4 and 7. According to the experimental results, the clustering fusion image obtained by this algorithm combines the advantages of the two operators well and reproduces the contour information of some small targets. Figure 1 and Figure 2 contain a large amount of regular rectangular field information, with disordered buildings at the edge of the farmland. By extracting the edge features and clustering with D-S evidence theory, the algorithm avoids interference from the complex background and obtains effective contour features of the target.
As can be seen from Table 2, when the proposed algorithm is used for clustering fusion of Figure 1 and Figure 2, the three indicators of information entropy, average gradient and edge preservation are far higher than those obtained by the traditional fusion algorithm. The fused images therefore contain more of the effective information in the source images, with clear edges and contours, and are more helpful for object detection and recognition.

4. Conclusions. Multi-band image fusion is a new discipline integrating sensor technology, image processing, signal processing, parallel computing and artificial intelligence. It is widely used in military target detection, tracking and recognition, earth observation, airport navigation, security surveillance, intelligent transportation, medical imaging and diagnosis, and other civilian areas, and is of great significance to the development of the national economy and the construction of national defense. This paper proposes an efficient and fast denoising algorithm that effectively removes salt-and-pepper noise from remote sensing images while maintaining the integrity of image texture and contour information. A feature-level fusion strategy is adopted: on the Matlab simulation platform, the Canny and Sobel operators are used to extract the contour and edge information of the preprocessed images, and D-S evidence theory is applied to cluster the feature images.
Image registration has not been studied in this paper, which imposes a strict requirement on the experimental images: the original images must be fully aligned. In practice, however, images may be affected by translation, rotation or scaling, and for these cases a good registration algorithm is needed to match them. This will be the focus of future research.