PIECEWISE CONSTANT SIGNAL AND IMAGE DENOISING USING A SELECTIVE AVERAGING METHOD WITH MULTIPLE NEIGHBORS



(Communicated by Hui Ji)
Abstract. Piecewise constant signals and images are an important kind of data. Typical examples include bar code signals, logos, cartoons, QR (Quick Response) codes, and text images, which are widely used in commercial applications and in the automotive industry. A previous method, the general selective averaging method (GSAM), was introduced to remove noise from such data. It chooses homogeneous neighbors from the two closest pixels (one on each side) to update the current pixel. One limitation is that sparse noisy pixels appear in the denoised result when the noise level is high. In this paper, we address this problem by proposing a selective averaging method with multiple neighbors. To update the intensity value at each pixel, the proposed algorithm averages more homogeneous neighbors selected from a larger domain, based on the local geometry of signals and images. This greatly reduces the sparse noisy pixels left in the final result by GSAM. Like GSAM, our method adopts the Neumann boundary condition at edges and thus preserves edges well. In the 1D case, theoretical results are given to guarantee the convergence of our algorithm. In the 2D case, besides eliminating additive Gaussian noise, the algorithm can be used to restore images corrupted by speckle noise. Extensive experiments on both gray and color image denoising demonstrate that the proposed method is effective for piecewise constant image denoising and achieves superior performance both visually and quantitatively.
Each of them consists of flat regions separated by distinct edges. Those flat parts are called homogeneous regions, each of which is a connected component with the same intensity value. Due to the imperfection of imaging devices, signals and images are inevitably degraded by different kinds of noise. Gaussian and speckle noise are two important types of noise in digital and ultrasound images, respectively. In this paper, we focus on removing Gaussian noise from piecewise constant signals and images, and make a preliminary exploration of eliminating speckle noise from this kind of image.
In [44], a general selective averaging method (GSAM) was presented to remove Gaussian noise from piecewise constant signals and images. In each iteration, the GSAM chooses homogeneous pixels from the 1-neighborhood of the current pixel and updates its value by averaging the selected pixels with flexible weights. This method can be considered a discrete approximation of the linear diffusion in flat regions, and a discrete approximation of the linear diffusion with the Neumann boundary condition [41] at edges. It prevents diffusion between different flat regions, and thus image edges are preserved very well. However, it leaves sparse noisy pixels in the output when dealing with images with high-level noise, and additional steps as proposed in [45] are needed to detect and suppress them. Furthermore, since the GSAM only selects pixels from the 1-neighborhood for averaging, it usually needs too many iterations to obtain satisfactory results in practical applications.
Motivated by GSAM's mechanism for preventing diffusion between different flat regions, we propose a novel algorithm that selects more pixels from a larger homogeneous region by considering the local geometry of signals and images. The proposed algorithm not only improves the efficiency of noise removal, but also leaves only very few noisy pixels in the restored result. In the 1D case, we have theoretical results that guarantee the convergence of our method. Moreover, we observe that the proposed algorithm removes noise better than GSAM [44] for signals with the same noise level. Compared with other existing methods, experimental results show that our method achieves superior performance for piecewise constant images with Gaussian or speckle noise.
The rest of this paper is organized as follows. In section 2, we briefly review the GSAM [44]. Then, we introduce the selective averaging with multiple neighbors method (SAMNM) in 1D case and provide some theoretical results. In section 3, we present the alternating SAMNM in 2D case. Section 4 shows experiments and comparisons for both gray and color images corrupted by additive Gaussian or speckle noise. Finally, some conclusions are summarized in section 5.
2. The proposed algorithm in 1D case. In this section, we first give a brief review of the 1D general selective averaging method (GSAM) [44], and then present the novel selective algorithm in detail.
Let u ∈ R m be the ground truth, which is a discrete signal consisting of several constant segments. Each constant segment is a homogeneous region. The observed signal f is corrupted by a zero-mean Gaussian noise with the standard deviation σ. We denote the index set as A = {1, 2, · · · , m}.
2.1. The general selective averaging method for 1D signal. Since the general selective averaging method (GSAM) [44] is an iterative algorithm, a signal sequence {u^(0), u^(1), ..., u^(n), ...} is generated, where u^(0) = f. In each iteration, for a pixel i, it selects pixels from N_1(i) = {i − 1, i + 1} for averaging. The first step is to find the homogeneous set Ω_i given by

Ω_i = {j ∈ N_1(i) : |u^(n)_i − u^(n)_j| ≤ T},

where T is a predefined threshold parameter. Then, the pixels of Ω_i are used to compute u^(n+1)_i. The set Ω_i falls into one of four cases:
• the first case: Ω_i = {i − 1, i + 1};
• the second case: Ω_i = {i − 1};
• the third case: Ω_i = {i + 1};
• the fourth case: Ω_i = ∅.
Accordingly, four different weighted average schemes were proposed in [44]. For convenience of description, these weighted average schemes can be unified in the following formula:

(1) u^(n+1)_i = (1 − p|Ω_i|) u^(n)_i + p Σ_{j∈Ω_i} u^(n)_j,

where 0 < p ≤ 1/2 is the weight parameter and |Ω_i| denotes the number of elements of Ω_i. Since the weights of u_{i−1} and u_{i+1} depend on the set Ω_i, each case has different weights in (1). As pointed out in [44], the scheme (1) for the first case can be considered a discrete approximation of the linear diffusion; for the second (or third) case, (1) corresponds to a discrete approximation of the linear diffusion with the Neumann boundary condition [41] at the right (or left) edge pixel; for the fourth case, (1) is a discrete approximation of the linear diffusion with Neumann boundary conditions at both the right and left edge pixels. By (1), the GSAM prevents diffusion between different homogeneous regions, and thus keeps edges very well.
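As a minimal illustration (our sketch, not the authors' code), one GSAM sweep under the unified scheme (1) as reconstructed above can be written as follows; a simultaneous update is assumed, and signal boundaries are treated like edges, so absent neighbors are simply excluded from Ω_i:

```python
def gsam_step(u, T, p):
    """One GSAM iteration: each pixel averages its homogeneous
    1-neighbors via (1); missing mass stays on the center pixel,
    which acts as a Neumann boundary condition at edges."""
    m = len(u)
    out = u.copy()
    for i in range(m):
        # homogeneous set Omega_i: in-range neighbors within threshold T
        omega = [j for j in (i - 1, i + 1)
                 if 0 <= j < m and abs(u[i] - u[j]) <= T]
        # unified scheme (1): (1 - p|Omega_i|) u_i + p * sum over Omega_i
        out[i] = (1 - p * len(omega)) * u[i] + p * sum(u[j] for j in omega)
    return out
```

Applied to a noise-free piecewise constant signal with a suitable T, the step leaves the signal unchanged, which is exactly the edge-preserving behavior described above.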
A drawback of GSAM [44] is that it only selects pixels from the 1-neighborhood N_1(i) to update the current value. Thus, it usually needs too many iterations to obtain satisfactory results in practical applications. Moreover, sparse noisy pixels appear in the output of GSAM when the noise level is high.

2.2. A selective averaging with multiple neighbors method for 1D signal.
To overcome the problems of GSAM [44] discussed in Sect. 2.1, we propose a selective averaging with multiple neighbors method that chooses more homogeneous neighbors from a larger domain, based on the local geometry of signals. As before, a signal sequence {u^(0), u^(1), ..., u^(n), ...} is generated, where u^(0) is the observed signal f and u^(n) denotes the signal after the n-th iteration.
2.2.1. The algorithm. To update u^(n), we proceed from its first pixel to the last one; see the computation order in Fig. 1. For a pixel i, let N^l_k(i) denote the set of its left k-neighbors (k > 1), i.e., N^l_k(i) = {j : i − k ≤ j < i}, and let N^r_k(i) = {j : i < j ≤ i + k} denote the set of its right k-neighbors. Note that j_l(i) indicates which pixels of N^l_k(i) belong to the left homogeneous set Ω^l_k(i); similarly, j_r(i) indicates which pixels of N^r_k(i) belong to the right homogeneous set Ω^r_k(i). We denote by i − K^l_i the smallest element in Ω^l_k(i) and by i + K^r_i the largest element in Ω^r_k(i). If Ω^l_k(i) = N^l_k(i), then K^l_i = k; if Ω^l_k(i) = {j : i > j ≥ i − j_l(i)}, then K^l_i = j_l(i) with j_l(i) < k. Similarly, we obtain the value of K^r_i. Note that if K^l_i = 0 (or K^r_i = 0), we set Ω^l_k(i) = ∅ (or Ω^r_k(i) = ∅). Let Ω_k(i) = Ω^l_k(i) ∪ Ω^r_k(i) and K_i = K^l_i + K^r_i. Obviously, we have |Ω_k(i)| = K_i.

For convenience of description, we introduce the following notations to represent the different weighted average schemes:

v_1 = (1 − K_i p) u^(n)_i + p Σ_{j∈Ω_k(i)} u^(n)_j,
v_2 = (1 − (K_i + 1) p) u^(n)_i + p Σ_{j∈Ω_k(i)} u^(n)_j + p u^(n)_{i−K^l_i},
v_3 = (1 − (K_i + 1) p) u^(n)_i + p Σ_{j∈Ω_k(i)} u^(n)_j + p u^(n)_{i+K^r_i},
v_4 = (1 − (K_i + 2) p) u^(n)_i + p Σ_{j∈Ω_k(i)} u^(n)_j + p u^(n)_{i−K^l_i} + p u^(n)_{i+K^r_i},

where 0 < p ≤ 1/(2k) denotes the weight parameter in v_1, v_2, v_3 and v_4. The sum of the weights is 1 for each v_j (j = 1, ..., 4). In v_2, we add one more u^(n)_{i−K^l_i}, because we assume that there exists a Neumann boundary condition [41] at the edge pixel i − K^l_i, namely ∂u^(n)/∂n̂ = 0 at i − K^l_i, where n̂ is the outward normal at i − K^l_i. Similar to v_2, one more u^(n)_{i+K^r_i} is added in v_3. In v_4, we assume that there exist two Neumann boundary conditions at the edge pixels i − K^l_i and i + K^r_i, and thus one more u^(n)_{i−K^l_i} and one more u^(n)_{i+K^r_i} are added.

In the following, we present the method to select the homogeneous neighbors of the i-th pixel based on those of i − 1. Since the indexes K^l_{i−1} and K^r_{i−1} at pixel i − 1 are known, we can determine K^l_i and K^r_i from them. Note that K^l_{i−1}, K^r_{i−1}, K^l_i and K^r_i are all indexes in step n. There are five cases to analyze:

(2) Case 1: K^r_{i−1} = 0;
(3) Case 2: K^l_{i−1} ≥ k − 1 and K^r_{i−1} = k;
(4) Case 3: K^l_{i−1} < k − 1 and K^r_{i−1} = k;
(5) Case 4: K^l_{i−1} ≥ k − 1 and 0 < K^r_{i−1} < k;
(6) Case 5: K^l_{i−1} < k − 1 and 0 < K^r_{i−1} < k.

According to these cases, we obtain different K^l_i and K^r_i and the corresponding weighted average scheme.
The updated values u (n+1) i in the above five cases are given as follows, where the detailed calculation process can be found in Appendix 6.
• Case 1: u^(n+1)_i = v_4 for K^l_i = 0, K^r_i = k (7a), and u^(n+1)_i = v_3 for K^l_i = 0, K^r_i < k (7b).
• Case 2: u^(n+1)_i = v_1 for K^l_i = k, K^r_i = k (8a), and u^(n+1)_i = v_3 for K^l_i = k, K^r_i = k − 1 (8b).
• Case 3: u^(n+1)_i = v_2 for K^l_i = K^l_{i−1} + 1, K^r_i = k (9a), and u^(n+1)_i = v_4 for K^l_i = K^l_{i−1} + 1, K^r_i = k − 1 (9b).
• Case 4: u^(n+1)_i = v_3 for K^l_i = k, K^r_i = K^r_{i−1} − 1 (10).
• Case 5: u^(n+1)_i = v_4 for K^l_i = K^l_{i−1} + 1, K^r_i = K^r_{i−1} − 1 (11).

Using the above method for each pixel of u^(n), we obtain a new signal u^(n+1). Note that we use the same value of p for each pixel. The corresponding iterative procedure is summarized in Algorithm 1.

Figure 2. The selection of homogeneous neighbors based on the five cases of K^l_{i−1} and K^r_{i−1} in (2)-(6): (a1-a2), (b1-b2), (c1-c2), (d) and (e) correspond to Case 1, Case 2, Case 3, Case 4 and Case 5, respectively.
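To make the selection-and-averaging step concrete, here is a simplified Python sketch (ours, not the paper's code): it recomputes K^l_i and K^r_i by direct scanning instead of the incremental five-case analysis, updates all pixels simultaneously, and mimics the Neumann duplication in v_1-v_4 by counting the end pixel of a truncated side once more; signal boundaries are treated like edges:

```python
def samnm_step(u, T, p, k):
    """One simplified SAMNM sweep: each pixel averages up to k
    homogeneous neighbors per side; when a side stops short of k
    (edge or signal boundary), its last included pixel is counted
    once more, mimicking a Neumann boundary condition there."""
    m = len(u)
    out = u.copy()
    for i in range(m):
        # scan for consecutive homogeneous neighbors: K^l_i and K^r_i
        kl = 0
        while kl < k and i - kl - 1 >= 0 and abs(u[i] - u[i - kl - 1]) <= T:
            kl += 1
        kr = 0
        while kr < k and i + kr + 1 < m and abs(u[i] - u[i + kr + 1]) <= T:
            kr += 1
        vals = [u[j] for j in range(i - kl, i + kr + 1) if j != i]
        if kl < k:
            vals.append(u[i - kl])   # left Neumann duplicate (u[i] when kl == 0)
        if kr < k:
            vals.append(u[i + kr])   # right Neumann duplicate
        out[i] = (1 - p * len(vals)) * u[i] + p * sum(vals)
    return out
```

Since at most 2k values ever enter the average, the center weight stays non-negative for 0 < p ≤ 1/(2k), matching the constraint on p above.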

2.2.2. Theoretical analysis.
We first define a jump vector J^(n) corresponding to u^(n). For any i ∈ {2, ..., m}, if |u^(n)_i − u^(n)_{i−1}| > T, we say that there is a jump (an edge) between u^(n)_{i−1} and u^(n)_i, and set J^(n)_i = 1; otherwise, J^(n)_i = 0. Furthermore, there exists an integer N such that for all n ≥ N we have J^(n) = J^(N). That is, {J^(n)}^∞_{n=1} converges in a finite number of iterations.
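Under this definition, the jump vector is straightforward to compute (illustrative helper, with 0-based indices):

```python
def jump_vector(u, T):
    """J_i = 1 iff there is a jump (an edge) between u_{i-1} and u_i."""
    return [1 if abs(u[i] - u[i - 1]) > T else 0 for i in range(1, len(u))]
```

The convergence claim above says that, after finitely many iterations, this vector stops changing, so the segment boundaries are frozen.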
In Case 1, there is already an edge between i − 1 and i, so we do not consider this case; we just compute u^(n+1)_i for the remaining cases. To simplify the description of the updated value u^(n+1)_{i−1}, we introduce some notations, where p is the weight parameter. We adopt the Neumann boundary conditions [41] as before. To compute |u^(n+1)_i − u^(n+1)_{i−1}|, there are nine types to analyze, involving the combinations of (10) with (14a), (10) with (14b), and (11) with (15). From these nine types, it follows that there is no jump between u^(n+1)_{i−1} and u^(n+1)_i.

Inverse Problems and Imaging, Volume 13, No. 5 (2019), 903-930

According to Lemma 2.1, the locations of the 1 entries in J^(n) are fixed from the N-th iteration on. This implies that, for all n ≥ N, u^(n) is separated into several fixed segments (homogeneous regions), which are exactly the segments of u^(N). Let S^(N) denote the number of these fixed segments, and let u^(n)_s = (u^(n)_{i_s}, ..., u^(n)_{i_s+t_s−1}) denote the pixels in the s-th segment, for s ∈ {1, 2, ..., S^(N)}. As for the GSAM in [44], the whole fixed weight matrix W^(N) is a block diagonal matrix consisting of S^(N) blocks and can be written as

(16) W^(N) = diag(W_1, W_2, ..., W_{S^(N)}).

For n ≥ N, we use the same weight parameter for the pixels in each homogeneous region u^(n)_s. Theoretically, the total number of parameters is S^(N). In this paper, for convenience of description, we use the same parameter p for all homogeneous regions. Thus, each block W_s has the form (17), where 0 < p ≤ 1/(2k) and W_s is a t_s × t_s non-negative matrix; for example, if k = 2 and t_s = 5, W_s is given by (18). From (18), we see that each row of W_s sums to 1, and thus W_s is called a row-stochastic matrix. When n ≥ N, the averaging procedure in each segment simplifies to

(19) u^(n+1)_s = W_s u^(n)_s.

Remark 1.
In the interior of each homogeneous region u^(n)_s, Algorithm 1 can be considered a discrete approximation of the linear diffusion in a local domain, and a discrete approximation of the linear diffusion with the Neumann boundary condition [41] at edges.
To further analyze the properties of each W_s, we introduce two basic concepts of graph theory. A directed graph is a pair G = (V, A), where:
• V is a set whose elements are called vertices;
• A is a set of ordered pairs of vertices, called directed edges.
A directed graph G = (V, A) is called strongly connected if there is a path in each direction between each pair of vertices, i.e., for any two vertices V_i and V_j, there exists a directed path connecting them.
We consider that W_s is associated with a finite directed graph G_s, where the number of vertices is t_s. Let V_1, V_2, ..., V_{t_s} denote these t_s distinct vertices. For each positive entry (W_s)_{ij}, there is a directed edge from V_i to V_j. It is possible to get to any vertex V_i from any vertex V_j, for 1 ≤ i, j ≤ t_s. Thus, the directed graph G_s is strongly connected. Now, we have the following theorem on the convergence of {u^(n)_s}^∞_{n=N} in each segment. A sufficient condition is that every eigenvalue of W_s in (19) has absolute value at most 1, and that −1 is not an eigenvalue of W_s (since (−1)^{n−N} is divergent). This is shown in the following.
We first prove that every eigenvalue of W_s has absolute value at most 1. Suppose that ρ is an eigenvalue of W_s and y is a corresponding eigenvector, i.e., W_s y = ρy. Let |y_j| = ‖y‖_∞ for some j. Since |ρ||y_j| = |Σ_t (W_s)_{jt} y_t| ≤ Σ_t (W_s)_{jt} |y_t| ≤ ‖y‖_∞ = |y_j|, we obtain |ρ| ≤ 1. We then show that 1 is the largest simple eigenvalue of W_s and that −1 is not an eigenvalue. It is obvious that 1 is an eigenvalue of W_s, because W_s 1 = 1, where 1 denotes the all-ones vector. As mentioned above, W_s is associated with a strongly connected directed graph G_s. By Theorem 1.17 in [36], this implies that W_s is irreducible. Here, the non-negativity of W_s means that each element of W_s is non-negative. According to Chap. 8 in [25] (the Perron-Frobenius theorem) on non-negative irreducible matrices, 1 is the unique eigenvalue of W_s with maximum modulus. Therefore, −1 is not an eigenvalue and 1 is the largest simple eigenvalue.
Based on these analyses, {W_s^{n−N}}^∞_{n=N} converges. This implies that {u^(n)_s}^∞_{n=N} converges to a constant vector, where each coordinate has the same value. Consequently, {u^(n)}^∞_{n=0} converges to a signal consisting of several segments, each of which is a constant vector.
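These spectral claims are easy to check numerically. The sketch below builds one segment block W_s under our reconstruction of the scheme (up to k neighbors per side inside the segment, with the end pixel of a truncated side duplicated as the Neumann reflection) and can be used to verify that it is row-stochastic with spectral radius 1:

```python
import numpy as np

def segment_matrix(ts, k, p):
    """Row-stochastic weight matrix W_s for one fixed segment of
    length ts, with neighbor radius k and weight parameter p."""
    W = np.zeros((ts, ts))
    for i in range(ts):
        kl, kr = min(k, i), min(k, ts - 1 - i)   # neighbors available in-segment
        for j in range(i - kl, i + kr + 1):
            if j != i:
                W[i, j] += p
        if kl < k:
            W[i, i - kl] += p                    # Neumann duplicate on the left
        if kr < k:
            W[i, i + kr] += p                    # Neumann duplicate on the right
        W[i, i] += 1 - W[i].sum()                # center weight makes the row sum 1
    return W

W = segment_matrix(5, 2, 0.1)                    # the k = 2, t_s = 5 example
```

Its eigenvalues all lie in the unit disk with a simple eigenvalue 1, so the powers W^n converge, in line with the proof above.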
In Fig. 3, the first and third rows show two examples comparing the proposed Algorithm 1 to GSAM [44] when the clean signal is corrupted by Gaussian noise with σ = 0.12 and σ = 0.16. The stopping condition is the same as in Algorithm 1 for both algorithms, with a tolerance of 10^{-6}. Here, we adopt the same threshold parameter T for them. As in [44], p is fixed to 0.49 for GSAM; we tune p and k in Algorithm 1. From Fig. 3, both methods obtain visually good results when the noise level is low. However, in terms of the SNR defined by (20), the proposed Algorithm 1 gains about 1.0 dB over the GSAM [44] quantitatively. When σ = 0.16, we found that a noisy pixel is left in the final result by GSAM [44]. In contrast, Algorithm 1 achieves satisfactory performance, where the result is very close to the clean signal. Moreover, Algorithm 1 even exceeds the GSAM [44] by about 10.0 dB in terms of SNR. Table 1 further compares the number of iterations and the time cost of GSAM [44] and our Algorithm 1. We found that Algorithm 1 always needs fewer iterations and is faster than GSAM [44]. Therefore, the proposed Algorithm 1 indeed gives better results with less time consumption than the previous GSAM [44]. Since total variation [37] (TV) and L_0 gradient minimization [47] (L_0) are very popular for piecewise constant and smooth signal and image denoising, we also compare them to our proposed algorithm. In the second and fourth rows of Fig. 3, we present the results of TV [37] and L_0 [47] for the two noisy signals.

3. Extension to 2D case.

3.1. 2D gray image. Without loss of generality, a piecewise constant gray image is represented as an M_1 × M_2 matrix.
For a pixel (i, j), we consider that the index i corresponds to the x-axis and the index j corresponds to the y-axis, since an image is sampled from an underlying function of two real variables. Let V denote the Euclidean space R^{M_1×M_2}. Assume that an observed image F is obtained from the clean image U by adding zero-mean Gaussian noise with standard deviation σ.
In [44], we presented an alternating general selective averaging method (AGSAM), which alternately applies the 1D GSAM along the x-axis and the y-axis. We found that this method is quite suitable for 2D piecewise constant image denoising. Similarly, we alternately apply Algorithm 1 (denoted by SAMNM) along the x-axis and y-axis. An image sequence {U^(0), U^(1), ..., U^(n), ...} is generated, where U^(0) is the observed image F and U^(n) denotes the denoised image after the n-th iteration.
The alternating SAMNM (ASAMNM) proceeds as follows:
• The first phase: compute U^(n+1/2) from U^(n) by applying Algorithm 1 along the x-axis; that is, for each row of U^(n), call the SAMNM.
• The second phase: compute U^(n+1) from U^(n+1/2) by applying Algorithm 1 along the y-axis; that is, for each column of U^(n+1/2), call the SAMNM.
There are three parameters, T, k and p, as in Algorithm 1. As for the AGSAM in [44], we can only provide a numerical verification that {U^(n)}^∞_{n=0} converges to a piecewise constant image; see Sect. 4.
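The two phases above can be sketched as follows (illustrative; `samnm_1d` stands for any 1D SAMNM routine with Algorithm 1's interface and is passed in as a parameter):

```python
import numpy as np

def asamnm_step(U, T, p, k, samnm_1d):
    """One ASAMNM iteration: apply the 1D routine along the x-axis
    (each row of U), then along the y-axis (each column)."""
    U = np.asarray(U, dtype=float)
    # first phase: rows of U(n) -> U(n+1/2)
    half = np.apply_along_axis(lambda r: samnm_1d(r, T, p, k), 1, U)
    # second phase: columns of U(n+1/2) -> U(n+1)
    return np.apply_along_axis(lambda c: samnm_1d(c, T, p, k), 0, half)
```

Iterating `asamnm_step` until the change between successive images is small reproduces the alternating scheme described above.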

3.2. 2D color image.
An input color image F has three channels, i.e., F = (F^1, F^2, F^3), where each F^m belongs to the Euclidean space V for m = 1, 2, 3. The intensity at each pixel (i, j) thus consists of three values, namely F^1_{i,j}, F^2_{i,j} and F^3_{i,j}. For any two pixels F_{i_1,j_1} and F_{i_2,j_2} of a color image, the absolute difference between them is defined over the three channels. Once these differences are computed, we can find the homogeneous pixels for each pixel. Accordingly, we choose the correct scheme for each pixel to update its value. Then, we iterate this process as in the ASAMNM of Sect. 3.1.

4. Experimental results. In this section, we mainly perform experiments and comparisons for eliminating additive Gaussian noise from piecewise constant gray and color images. Then, we apply our algorithm to speckle noise removal as a preliminary exploration. All experiments are conducted under Windows 8 with MATLAB R2015a on a desktop with an Intel i7 processor at 3.6 GHz and 8 GB RAM.
Besides visual quality evaluation, we also use the signal-to-noise ratio (SNR), defined by

(20) SNR = 10 log_10 ( ‖G − G̃‖²₂ / ‖U − G‖²₂ ),

as the quantitative measure of restoration performance, where G̃ is the mean intensity value of the original image G and U is the recovered image. Fig. 4 shows the clean test images.
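In code, with G̃ taken as the scalar mean of G, (20) as reconstructed here is a one-liner:

```python
import numpy as np

def snr_db(G, U):
    """SNR in dB of a recovered image U against the ground truth G:
    10 * log10( ||G - mean(G)||^2 / ||U - G||^2 )."""
    G = np.asarray(G, dtype=float)
    U = np.asarray(U, dtype=float)
    return 10.0 * np.log10(np.sum((G - G.mean()) ** 2) / np.sum((U - G) ** 2))
```

Larger values mean the residual U − G is small relative to the contrast of G, so a gain of a few dB is a substantial improvement.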

4.1.
Gaussian noise removal. For removing additive Gaussian noise, we compare our ASAMNM to TV [37,1], L_0 [47], NLM-SAP [16] and AGSAM [44]. As is well known, both TV and L_0 can effectively restore the cartoon part of images. The NLM-SAP adopts adaptive patch shapes instead of the usual square patches to reduce the noisy halos created around edges, and is thus suitable for recovering piecewise constant images. The previous work of AGSAM also achieves good performance. We implemented TV [37,1] based on the ADMM in [46], and used the code of AGSAM [44]. The source codes of L_0 [47] and NLM-SAP [16] were downloaded from their authors' homepages. Except for L_0 and NLM-SAP, which use the termination conditions of their source codes, the remaining algorithms were terminated when the relative change between successive iterates fell below 10^{-5}. For each method, we tuned the parameters carefully to achieve the best visual and quantitative results. In TV [37], we fixed a positive real number r = 10 and tuned the regularization parameter α. In L_0 [47], we fixed the rate κ = 1.01 to allow more iterations for better results, and tuned the smoothing weight λ. Considering the computational performance and time cost of NLM-SAP [16], the size of the search window was set to 31 × 31 and the number of shapes to 24; moreover, we tuned the bandwidth h and the smoothing parameter T in the aggregating strategy EWA. Since p = 0.49 in AGSAM [44], we tuned only the threshold parameter T. Although there are three parameters in ASAMNM, we found that k = 4 is a good choice for all experiments, so we only tuned T and p. Table 2 lists all the parameter values of these methods for gray image denoising, while Table 5 gives those for color image denoising.

4.1.1. Piecewise constant gray image denoising. Fig. 5 shows the denoising results of the different methods for the Logo image corrupted by Gaussian noise with σ = 15. Visually, all results are good, and each algorithm is able to preserve sharp edges and eliminate most of the noise.
One can see that L_0 [47] recovers flat regions better than TV [37], which suffers from staircase effects. As pointed out in [10], the results of both TV [37] and L_0 [47] may include some noisy pixels near edges; see the zoom-in views. Although NLM-SAP is better than the classical NLM [4], the pixels near edges are not fully consistent with the clean ones. This can also be seen from the intensity values along a horizontal line in Fig. 6. In contrast, the AGSAM [44] and our ASAMNM thoroughly eliminate noise in flat regions and at edges. Furthermore, their zoom-in views show that they produce piecewise constant results experimentally. In terms of quantitative SNR, the improvement of ASAMNM is 1.39 dB over AGSAM [44] (the second best method). The reason may be that, within finitely many iterations, averaging more neighbors in ASAMNM generally yields better results. Fig. 6 shows the results for a cartoon image with thin lines, and compares the intensity values along part of a single row of each result. In this example, the main problem of TV [37] is that it fails to keep the intensity values in thin regions. One noisy pixel remains at the edge of the result recovered by L_0 [47]. As discussed for Fig. 5, NLM-SAP [16] has the same problem when handling pixels at edges. The proposed ASAMNM and AGSAM [44] overcome these problems of the first three algorithms, and their results are visually very close to the ground truth. Moreover, our method obtains the highest SNR, outperforming the second best method (AGSAM [44]) by 1.79 dB.
To further demonstrate the validity of the proposed ASAMNM, another three experiments are shown in Figs. 7, 8 and 9. Except for the example in Fig. 7, where L_0 [47] performs slightly better than our method, the ASAMNM obtains the best visual results and quantitative evaluation in the remaining two examples. The zoom-in views in the fourth column also confirm this point. Tables 3 and 4 list the SNR values, the number of iterations and the runtime of the different methods for two other noise levels in each set of experiments. In most cases, our method achieves the best SNR values. Since the number of iterations is 1 in the original code of NLM-SAP [16], we also adopt one iteration; nevertheless, NLM-SAP is the slowest, possibly because its code is not optimized. Although our method needs a few more iterations than TV [37], it has a similar time cost. The number of iterations of ASAMNM is always smaller than that of AGSAM [44]. Since κ = 1.01 allows more iterations (over 1000) for piecewise constant image denoising in L_0 [47], its runtime is somewhat longer.

4.1.2.
Piecewise constant color image denoising. In the following, we compare our method to TV [1], L_0 [47] and AGSAM [44] for color piecewise constant image denoising. Fig. 10 shows an example of the noisy QRcode2 image corrupted by Gaussian noise with σ = 20. Similar to the gray case, each method obtains a visually good result. Yet, the corresponding zoom-in views show that TV [1] still fails to deal with the noisy pixels around edges. Although L_0 [47] reduces noise better than TV, several noisy pixels at the corners are not eliminated. In this example, AGSAM [44] obtains a visual result similar to ours, effectively preserving edges and removing noise. However, the proposed ASAMNM gives a noticeable improvement in terms of SNR over AGSAM [44].
Two other examples with higher noise levels are presented in Fig. 11. Again, our method yields satisfactory results and performs best among all the compared methods; the quantitative results are listed in Tables 6 and 7. The behavior in these two tables is similar to that in Tables 3 and 4. Except for the example of QRcode3 with σ = 40, the proposed method always achieves the highest SNR with a shorter computation time.

4.2.
The preliminary exploration for speckle noise removal. In ultrasound imaging, speckle noise is an important type of noise that hinders accurate diagnosis. Recently, Wang et al. [43] proposed a novel model combining the advantages of total variation and high-order total variation. This approach overcomes the staircase effect caused by TV regularization and is effective for suppressing speckle noise in ultrasound images. In this section, we make a first attempt to remove this kind of noise from piecewise constant images using the proposed ASAMNM; improving our method for real ultrasound image denoising is left for future work. We compare our method to Wang et al. [43], L_0 [47] and AGSAM [44]. As discussed in [43], the speckle noise can be modeled by

F = U + √U G,

where F is the observed image, U is the clean image and G is zero-mean Gaussian noise with standard deviation σ. Obviously, this type of noise depends on the intensity values of the image. The parameter settings for L_0 [47], AGSAM [44] and our method are similar to those for additive Gaussian noise removal in Sect. 4.1.
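Assuming the intensity-dependent model above (as reconstructed here), speckle-corrupted test data can be generated with a small helper (illustrative, not from the paper):

```python
import numpy as np

def add_speckle(U, sigma, seed=0):
    """Simulate F = U + sqrt(U) * G with G ~ N(0, sigma^2):
    the local noise standard deviation grows with the intensity."""
    rng = np.random.default_rng(seed)
    U = np.asarray(U, dtype=float)
    return U + np.sqrt(np.maximum(U, 0.0)) * rng.normal(0.0, sigma, size=U.shape)
```

Note that, unlike additive Gaussian noise, dark regions are barely perturbed while bright regions are strongly perturbed, which is why intensity-independent parameter settings need only minor adjustment.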
According to the code of Wang et al. [43] (provided by its authors), we tuned two parameters, λ and µ. Table 8 gives all the parameter values of these methods for the following experiments. Fig. 12 presents a toy example in which the "Blobs" image (a built-in image of MATLAB) is corrupted by speckle noise with σ = 2. The intensity values along part of a single column at the edge of each result are shown for further comparison. In this example, one can see that Wang et al. [43] only weakens the noise and cannot remove it thoroughly; the reason may be that TV-based regularization is not suitable for thin regions. Although the result of L_0 [47] looks better visually, sparse noisy pixels still remain. Further comparisons are shown in Fig. 13, where the Cartoon1 image is corrupted by speckle noise with σ = 2 and 2.5, respectively. When σ = 2, our method performs best among all the compared methods and exceeds the second best method (AGSAM [44]) by 1.1 dB in SNR. However, though the proposed ASAMNM is still better than Wang et al. [43] and AGSAM [44], it is worse than L_0 [47] when the noise level is 2.5. Nevertheless, the ASAMNM is worth improving for this setting.

5. Conclusions. The proposed method adopts the Neumann boundary condition at edges and thus preserves edges very well. Compared to the GSAM in [44], the proposed algorithm not only improves the efficiency of noise removal, but also greatly reduces the sparse noisy pixels left in the final result. Furthermore, we proved that the iterative signal sequence converges to a signal consisting of several segments, where each segment is a constant vector. By alternately applying the proposed 1D algorithm, the ASAMNM is easily obtained for 2D image denoising. Experimental results on both gray and color image denoising demonstrate that our algorithm is quite effective and in most cases achieves superior performance in terms of visual image quality and quantitative SNR evaluation.
In particular, it always performs better than the AGSAM [44] in a shorter time, under the same stopping condition. Although we mainly considered eliminating additive Gaussian noise or speckle noise from piecewise constant images, we believe there is much room to improve the ASAMNM for natural image and real ultrasound image denoising. Handling very high noise levels is also left as future work.

6. Appendix. The detailed calculation process of (7a), (7b), (8a), (8b), (9a), (9b), (10) and (11) is given in the following.
• Case 1: Case 1 indicates that there is an edge between i − 1 and i, i.e., the two pixels i − 1 and i belong to different homogeneous regions. Thus, we have K^l_i = 0. To obtain K^r_i, we compare |u^(n)_i − u^(n)_j| with T for all j ∈ N^r_k(i). If |u^(n)_i − u^(n)_j| ≤ T for all such j, then K^r_i = k; that is, all pixels in N^r_k(i) belong to Ω^r_k(i), as presented in Fig. 2(a1). The corresponding weighted average scheme is

(7a) u^(n+1)_i = v_4, for K^l_i = 0, K^r_i = k.

If K^r_i < k, then |u^(n)_i − u^(n)_{i+(K^r_i+1)}| > T, which means that the pixel i + (K^r_i + 1) belongs to another homogeneous region; see Fig. 2(a2). Thus, we assume that there exists a Neumann boundary condition at i + K^r_i to prevent diffusion between the two regions, and u^(n+1)_i is computed by

(7b) u^(n+1)_i = v_3, for K^l_i = 0, K^r_i < k.

Except for Case 1, the two pixels i and i − 1 are in the same homogeneous region in the remaining four cases. Therefore, the homogeneous neighbors of i − 1 belonging to N^l_k(i) and N^r_k(i) are also homogeneous neighbors of i.
• Case 2: Since K^l_{i−1} ≥ k − 1, we have |u^(n)_{i−1} − u^(n)_j| ≤ T for all j ≥ (i − 1) − (k − 1), i.e., for all j ≥ i − k. Consequently, all pixels of N^l_k(i) are homogeneous neighbors of i, and Ω^l_k(i) = N^l_k(i). Obviously, we have K^l_i = k. Since K^r_{i−1} = k, we have Ω^r_k(i − 1) = {j : i − 1 < j ≤ (i − 1) + k}. Therefore, the pixels from i + 1 to i + (k − 1) are in Ω^r_k(i), and the only thing to judge is whether the pixel i + k belongs to Ω^r_k(i). The index K^r_i is determined as follows: K^r_i = k if |u^(n)_i − u^(n)_{i+k}| ≤ T, and K^r_i = k − 1 if |u^(n)_i − u^(n)_{i+k}| > T, which correspond to Fig. 2(b1-b2), respectively. The two weighted average schemes are

(8a) u^(n+1)_i = v_1, for K^l_i = k, K^r_i = k,
(8b) u^(n+1)_i = v_3, for K^l_i = k, K^r_i = k − 1.
Note that we compute u^(n+1)_i = v_3 in (8b) because there is an edge between i + (k − 1) and i + k when K^r_i = k − 1.
• Case 3: Since Ω^l_k(i − 1) = {j : (i − 1) − K^l_{i−1} ≤ j < i − 1} for K^l_{i−1} < k − 1, it is not difficult to obtain Ω^l_k(i) = {j : i − (K^l_{i−1} + 1) ≤ j < i}, and K^l_i is given by K^l_i = K^l_{i−1} + 1. Since K^r_{i−1} = k, the discussion is the same as in Case 2: K^r_i equals k if |u^(n)_i − u^(n)_{i+k}| ≤ T, and k − 1 if |u^(n)_i − u^(n)_{i+k}| > T. Consequently, two weighted average schemes for the two types shown in Fig. 2(c1-c2) are proposed:

(9a) u^(n+1)_i = v_2, for K^l_i = K^l_{i−1} + 1, K^r_i = k,
(9b) u^(n+1)_i = v_4, for K^l_i = K^l_{i−1} + 1, K^r_i = k − 1.

Note that we assume two Neumann boundary conditions at both i − K^l_i and i + K^r_i when K^l_i = K^l_{i−1} + 1 and K^r_i = k − 1.
• Case 4: By the same discussion of K^l_i as in Case 2, we again have K^l_i = k. From 0 < K^r_{i−1} < k, there is an edge between (i − 1) + K^r_{i−1} and (i − 1) + (K^r_{i−1} + 1). Accordingly, we have Ω^r_k(i) = {j : i < j ≤ i + (K^r_{i−1} − 1)}, and K^r_i is obtained by K^r_i = K^r_{i−1} − 1, which is shown in Fig. 2(d). We compute

(10) u^(n+1)_i = v_3, for K^l_i = k, K^r_i = K^r_{i−1} − 1.
• Case 5: In this case, it is easy to obtain K^l_i = K^l_{i−1} + 1 and K^r_i = K^r_{i−1} − 1; see Fig. 2(e). We assume two Neumann boundary conditions at both i − K^l_i and i + K^r_i, and u^(n+1)_i is computed as

(11) u^(n+1)_i = v_4, for K^l_i = K^l_{i−1} + 1, K^r_i = K^r_{i−1} − 1.