AN ADAPTIVE TOTAL VARIATIONAL DESPECKLING MODEL BASED ON GRAY LEVEL INDICATOR FRAME

Abstract. For the characteristics of degraded images with multiplicative noise, gray level indicators for constructing an adaptive total variation are proposed. Based on the new regularization term, we propose a new convex adaptive variational model, and we establish the existence, uniqueness and comparison principle for the minimizer of the functional. The finite difference method with a rescaling technique and the primal-dual method with adaptive step sizes are used to solve the minimization problem. The paper ends with a report on numerical tests for the denoising of images corrupted by multiplicative noise; comparisons with other methods are provided as well.


1. Introduction. Synthetic aperture radar (SAR) is a microwave imaging system. Compared with an optical imaging system, it works in all weather conditions, which secures its prominent position in the field of remote sensing. SAR imaging plays an important role in military and civilian life, with applications in military reconnaissance, aerospace mapping, natural disaster detection, and urban planning. However, speckle noise is a fundamental defect inherent in all imaging systems based on coherent principles, including SAR systems. For SAR systems, the resolution unit is much larger than the wavelength of the transmitted signal, so each resolution unit can be regarded as composed of many scatterers whose sizes are comparable to the wavelength. The total echo of each resolution unit is a coherent combination of the echoes of the scatterers in this unit. As a result, in a region of the SAR image that originally was uniform, with the same backscattering coefficient, the gray level is not uniform but fluctuates randomly around a certain mean value. The granular pattern of speckle noise causes serious degradation of image quality: it not only hinders visual observation, but also interferes with computer-based image interpretation, which is sensitive to noise. Therefore, it is of great practical significance to study speckle suppression algorithms.
Goodman proposed in [34] that fully developed speckle noise follows a multiplicative noise model with a negative exponential distribution of mean 1. For SAR images after multi-look processing, however, the speckle follows a Gamma distribution with mean 1. The purpose of this paper is to develop an effective method to restore images contaminated by multiplicative Gamma noise without introducing noticeable artifacts.
Suppose that images are real-valued functions defined on Ω, a bounded open subset of R². Let f be the observed noisy image, u the ideal image to be recovered, and η the corresponding unknown multiplicative noise, satisfying the relation

f = uη.    (1)

Here η is usually regarded as a random variable with a Gamma distribution, whose probability density function is

g(η) = (L^L / Γ(L)) η^{L−1} e^{−Lη} χ_{η≥0},    (2)

where L is a positive integer, indicating the number of looks superimposed in the multi-look processing, and χ_{η≥0} represents the indicator function of the set {η | η ≥ 0}. Furthermore, it is implicitly assumed in formula (2) that the mean and variance of η are 1 and 1/L respectively, namely

(1/|Ω|) ∫_Ω η dx = 1,  (1/|Ω|) ∫_Ω (η − 1)² dx = 1/L,

where |Ω| = ∫_Ω 1 dx. Abundant multiplicative noise removal methods exist, originating from filtering techniques, partial differential equations (PDEs) and variational methods. Firstly, there are many filtering-based methods [24,13,26,35]; among them SAR-BM3D [26] is well known, a non-local algorithm combining a block similarity measure with group shrinkage of the transformed data. The precondition of this method, however, is to rewrite equation (1) as a signal-dependent additive noise model, namely f = u + (η − 1)u. In addition, the rapid development of sensors in recent years has brought new opportunities, resulting in the birth of optically driven non-local SAR speckle removal techniques. Secondly, the popularity of PDE-based methods [42,32,31] relies on nonlinear diffusion techniques. Second order equations usually obey an extremum principle that ensures the stability and smoothness of the restoration results; in particular, a nonlinear diffusion equation with smooth solutions is proposed in [31]. Fourth order equations can reduce the staircase effect, but their theory is difficult to carry out.
All these achievements can be traced back to the forward-backward diffusion model proposed by Perona and Malik in [27] and the tensor diffusion method [37], which stimulated a large body of theoretical and numerical studies on anisotropic diffusion equations. In the variational framework, many researchers prefer total variation (TV) based methods, first seriously introduced in [29], because the space of functions of bounded variation (BV) allows images to have important discontinuous features such as edges and textures. For the noise model (1), using Bayes' rule and maximum a posteriori (MAP) estimation, Aubert and Aujol [1] obtained a fidelity term which is strictly convex for u ∈ (0, 2f), and introduced the following functional model (AA model):

inf_{u>0} ∫_Ω (log u + f/u) dx + λ ∫_Ω |Du|.    (3)

In [16], according to the statistical properties of the multiplicative Gamma noise, a quadratic penalty term is added so that the functional

inf_{u>0} ∫_Ω (log u + f/u + α (√(u/f) − 1)²) dx + λ ∫_Ω |Du|    (4)

is globally convex when the equilibrium parameters meet certain conditions; below we refer to (4) as the DZ model. The above two models can be generalized to simultaneously deblur and remove multiplicative noise, while in [40], the blur and multiplicative noise equation is rewritten to decouple the image variable from the noise variable, and a new convex optimization model is proposed. Besides, unidirectional first-order and high-order total variation regularization combined with low-rank constraints is used in [39] to destripe remote sensing images. And Z. Guo et al. [15] found that the globally convex fidelity term used to remove Poisson noise [25] is also applicable to multiplicative Gamma noise. In addition, another strategy for multiplicative noise is to transform the data so that the noise becomes additive: a logarithmic (homomorphic) transformation is used to approximate a Gaussian distribution, after which a Gaussian denoiser can be applied for restoration.
For example, a generic method called MuLoG [14] embeds a Gaussian denoiser in a speckle reduction filter and applies to both single-channel and multi-channel SAR images. Variational methods for image reconstruction also benefit from the development of fast numerical schemes; optimization algorithms considered to be particularly efficient include Chambolle's dual method [6], the split Bregman iteration [21], gradient-based algorithms [2], and the augmented Lagrangian method [38], among others. These methods share remarkable simplicity together with good convergence.
Although TV plays an important role in image denoising, due to the assumption of piecewise constancy its use often leads to the staircase effect and an unnatural, cartoon-like appearance [19]. To address this, the Total Generalized Variation (TGV) [5] and its nonconvex version [22] were introduced and yield results superior to conventional TV. Moreover, TV is not well suited to multiplicative noise because it applies the same degree of regularization at every pixel. As an improvement, based on the noise level estimated via wavelets, parameter adaptation of the TV regularization is applied to SAR speckle removal in [41]. And in [33], Chan and Strong introduced the idea of adaptive total variation and proposed the following model:

inf_u ∫_Ω g(x)|Du| + (λ/2) ∫_Ω (u − f)² dx,

where the weight function g(x) controls the regularization level in different regions; of course, it is vital to design proper weighting functions. Furthermore, many theoretical questions for the adaptive total variation regularization are investigated in [11,10].
In this paper, we are interested in adaptive total variation regularization. A new convex adaptive total variation model is proposed; in particular, a framework of gray level indicators is systematically designed. Its inspiration is mainly due to the signal dependency of multiplicative noise and the statistical characteristics of Gamma noise, so using the gray level indicator as a weighting function makes the regularization strength steerable across different regions. The existence and uniqueness of the solution to the functional minimization problem can be proved. In addition, the global convexity of the model provides a broad basis for numerical computation: besides the classical finite difference method, we also consider an accelerated version of the primal-dual algorithm introduced by A. Chambolle and T. Pock in [8], which is stable and easy to implement. Experiments show that the proposed algorithm for multiplicative noise removal outperforms related methods.
The rest of the paper is organized as follows. In Section 2, we introduce the new model for multiplicative denoising, and state the definition and some necessary lemmas about adaptive total variation as preliminaries. The properties of solutions to the functional minimization problem are demonstrated in Section 3. In Section 4, we consider two fast numerical algorithms. At last, we apply our algorithm to five test images and compare the numerical results of our model with the classical AA model and DZ model in Section 5.
2. The new model. Following the degradation model in [1], a Gamma distribution is assumed for the multiplicative noise. According to the statistical information of the noise, we propose a family of gray level indicators and use them as the weight of the adaptive total variation, so that the new regularization term is better suited to multiplicative noise removal. Following the ideas in [16] and [25], we employ a new globally convex fidelity term. The new model not only removes multiplicative noise effectively, but also offers a degree of protection for textured images.
2.1. The new functional. The proposed model is given as follows:

inf_{u∈BV(Ω), u>0} E(u) = J(u) + H(u, f),    (5)

with

J(u) = ∫_Ω α(x)|Du|,  H(u, f) = β ∫_Ω (√(u/f) − 1)² dx + λ ∫_Ω (u + f log(1/u)) dx.

Here λ and β are equilibrium parameters. J(u) is the adaptive total variation regularization term, and α(x) is a weight function which will be introduced below. H(u, f) is a fidelity term consisting of two parts: the quadratic penalty term enhances the convexity of the model, and the last term guarantees that the mean value of u is the same as that of f. In the model, we adopt the quadratic penalty term of the DZ model. As is well known, the AA model is a classical variational model for multiplicative noise removal; in essence, the DZ model is obtained by introducing a quadratic penalty term into the AA model, and it is strictly convex as long as the penalty factor is greater than or equal to 2√6/9. Therefore, we retain the quadratic penalty term to enhance the convexity of our model. However, both models are based on TV regularization, which is more suitable for additive Gaussian noise; later we will explain the necessity of introducing the gray level indicator as the weight function of the regularization term in multiplicative noise removal.
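To make the balance of the three terms concrete, a minimal discrete evaluation of the energy can be sketched as follows. This is our own illustration, not the authors' code: the function name `energy`, the forward-difference discretization, and the λ/β weighting all follow one plausible reading of (5).

```python
import numpy as np

def energy(u, f, alpha, lam=1.0, beta=1.0, h=1.0):
    """Discrete analogue of E(u): adaptive TV + quadratic penalty + Poisson-type term.

    alpha is the gray-level-indicator weight (same shape as u).  The exact
    balance of lam/beta is our hedged reading of the model in the text.
    """
    # forward differences with replicated (Neumann-style) boundary handling
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    wtv = np.sum(alpha * np.hypot(ux, uy)) * h          # adaptive total variation J(u)
    quad = beta * np.sum((np.sqrt(u / f) - 1.0) ** 2)   # DZ-style quadratic penalty
    pois = lam * np.sum(u - f * np.log(u))              # Poisson-type fidelity term
    return wtv + quad + pois
```

At u = f the TV and quadratic terms vanish and only the Poisson-type term remains, consistent with f being the fidelity minimizer of H.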
The third term T(u, f) = ∫_Ω (u + f log(1/u)) dx is the fidelity term proposed in [25] to remove Poisson noise, but it also applies under the noise hypothesis of this paper. Exactly in the fashion of the analysis in [15], consider the Euler-Lagrange equation of the functional: it consists of a divergence term and the source term 1 − f/u arising from T(u, f). Using the Neumann boundary condition and integration by parts, the divergence term integrates to zero over Ω, and we can derive that

∫_Ω (f/u) dx = |Ω|,

which implies that the constraint of noise mean 1 is met.

2.2. Characteristics of Gamma multiplicative noise and gray level indicators. Multiplicative noise is quite different from additive noise in its influence and distribution. First, we consider the characteristics of multiplicative noise, and then construct a set of gray level indicators to provide multiple choices for α(x) in our model.
A. Characteristics of Gamma multiplicative noise.
(1) The signal-dependent nature of the fluctuations. By simple calculation the expectation and variance of the noisy image are obtained:

E(f) = u,  Var(f) = u²/L.

Since the variance depends on the expectation, the fluctuations are signal dependent. It can be seen that even though the noise is independent and identically distributed (see Figure 1(b)), the noisy signal shows different features at pixels where the gray levels differ (see Figure 1(c)): the higher the gray level, the more remarkable the influence of the noise. The signal-dependent nature of the fluctuations can also be observed in Figure 1(d): the difference between the first and third quartiles is larger where the underlying signal values are high. At the same time, the oscillation makes the gradient modulus of the image unbounded in theory, so it is difficult to distinguish "real" boundaries from "false" ones.
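These moment identities are easy to verify numerically. The snippet below (our own illustration, not part of the original experiments) draws L-look Gamma speckle per model (1)-(2) and compares the sample variance of the noisy signal with u²/L:

```python
import numpy as np

# Empirical check of the signal dependence: for f = u * eta with
# eta ~ Gamma(shape=L, scale=1/L), one has E(f) = u and Var(f) = u**2 / L,
# so the fluctuations grow with the gray level u.
rng = np.random.default_rng(1)
L = 6
for u in (10.0, 50.0, 200.0):
    f = u * rng.gamma(L, 1.0 / L, size=200_000)
    print(f"u={u}: mean={f.mean():.2f}, var={f.var():.1f}, u^2/L={u * u / L:.1f}")
```

The printed sample variances track u²/L, growing quadratically with the underlying gray level while the noise itself is i.i.d.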
(2) The Gamma distribution has a heavier right tail. Therefore, signals contaminated by multiplicative Gamma noise have more abnormal peaks, which appear as bright outliers in the intensity image.
Such thorough corruption of the original data by multiplicative noise severely limits conventional recovery algorithms. If plain TV regularization is used, different gray levels (and hence fluctuations of different magnitude) are processed uniformly: regions of high gray level are insufficiently denoised while regions of low gray level are already oversmoothed. To this end, we propose the gray level indicator for gray-level-adaptive regularization.
B. Gray level indicators. Next, we propose the following gray level indicator:

α(x) = ( R(f)(x) / sup_{x∈Ω} R(f)(x) )^p,

where R represents the preprocessing of the observed noisy image, and p > 0 is an exponential parameter.
As far as we know, the granular pattern of the speckle degrades the image: even adjacent pixels in a homogeneous region may differ greatly in intensity after being contaminated by multiplicative noise. Therefore, in order to estimate the intensity of the original image, it is necessary to perform a preprocessing operation on the noisy image; that is, after low-pass filtering, the non-local image information is used to construct the gray level indicator. Figure 2 gives an intuitive impression of the preprocessing. The two types of preprocessing we use are as follows.

(1) Gaussian convolution: Gaussian filters are a class of linear smoothing filters whose weights are chosen according to the shape of a Gaussian function; they suppress most of the noise and reduce its interference with image features. In this case we can write R(f) = G_σ * f, where G_σ is the Gaussian kernel with standard deviation σ.

[Figure 2. Pretreatment effect map, with panels (a) Median filtering and (b) Gaussian convolution. The black curve is the original signal, the green curve is the observed signal polluted by Gamma noise with standard deviation 1/√6 (L = 6), and the red curve is the result of preprocessing.]
(2) Median filtering: The median filter is a windowed filter of the nonlinear class, which replaces the value of a pixel with the median of the intensity values in its neighborhood; the size of the neighborhood depends on actual needs. For heavy-tailed or impulse noise, it has been demonstrated in [4,3] that, among generalized filters based on linear combinations of order statistics, the median filter is nearly optimal for noise suppression.
The idea of the exponential parameter p is derived from gamma correction [30]. Multiplicative noise often causes the gray levels of the original image to overflow [0, 255]. If the noisy observation is simply normalized to [0, 1] by dividing by its maximum, the useful information carried by the mid-range gray levels of the original image is compressed or even lost, so we use the exponent p to adjust.
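Putting the preprocessing R and the exponent p together, a plausible implementation of the gray level indicator might look as follows. This is a sketch under the assumption that α is the p-th power of the max-normalized filtered image; `gray_level_indicator` and its parameter names are our own illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def gray_level_indicator(f, p=1.3, mode="gaussian", sigma=1.0, size=3):
    """Gray level indicator alpha(x) = (R f / max R f) ** p.

    R is the low-pass preprocessing: Gaussian convolution or median filtering.
    """
    if mode == "gaussian":
        r = gaussian_filter(f, sigma=sigma)  # suppress speckle before measuring gray level
    else:
        r = median_filter(f, size=size)      # robust to the heavy right tail of Gamma noise
    return (r / r.max()) ** p
```

By construction 0 ≤ α(x) ≤ 1, and brighter (hence noisier) regions receive a larger weight, i.e. stronger regularization.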
All the schemes presented so far share the fact that 0 ≤ α(x) ≤ 1, and the higher the gray value at pixel x, the larger the value of α. Let us remark, however, that the value of α at most pixels is quite small, which slows the regularization. In this regard, we can use a suitable method to equalize or stretch the values of α. Histogram equalization is a reasonable choice, but it destroys the monotonicity of α. Alternatively, we can use the following piecewise function for processing:

α̃(x) = min( α(x)/b, 1 ),

where b is an artificially set threshold. The comparison between α and α̃ is shown in Figure 3.
2.3. The adaptive total variation. Based on the new gray level indicator, we propose the following adaptive total variation:

J(u) = ∫_Ω α(x)|Du|,

where 0 ≤ α(x) ≤ 1. Setting α(x) = 1, it becomes the ordinary TV regularization term; on the contrary, if α(x) ≈ 0, the model can be considered as having no smoothing effect, with optimal solution u = f. That is, where the signal intensity approaches zero, the signal itself is little affected by the noise, and the degree of regularization controlled by α(x) in the denoising process is also almost zero. Therefore, under certain circumstances, the new model can maintain weak contrast and even protect texture. From the two groups of experiments in Figure 4 and Figure 5, we can intuitively see the significant advantage of adaptive total variation for multiplicative denoising. Compared with the AA model, the DZ model greatly improves the preservation of low-intensity information, but there is still some information loss where the intensity is zero, and the contrast in the medium-to-high-intensity regions is reduced. On the one hand, because the Gamma distribution has a heavier right tail, the signal contaminated by multiplicative Gamma noise has abnormal peaks in the high-intensity regions of the original signal; on the other hand, the signal is almost unaffected where the intensity approaches zero. Our model adds the coefficient α(x) to quickly smooth out abnormal peaks while protecting low-intensity information, removing multiplicative noise across varying intensity ranges. Hence, it can maintain low contrast and texture information to some extent.
Next we state some necessary lemmas about α total variation (α-TV) that will be used. From now on we always assume that Ω is a bounded open subset of R n with Lipschitz boundary.
2.4. α-TV. As in [10], we introduce the following definition of the α-total variation, where the weight function α(x) is a positive-valued continuous function on R^n.

Definition 2.1. Let u ∈ L¹(Ω). The α-total variation of u (α-TV of u) is defined by

∫_Ω α|Du| = sup { ∫_Ω u div φ dx : φ ∈ Φ_α },

where

Φ_α = { φ ∈ C¹_c(Ω; R^n) : |φ(x)| ≤ α(x) for all x ∈ Ω }.

If u ∈ α-BV(Ω), define the α-BV seminorm to be ∫_Ω α|Du|, and the α-BV norm to be

||u||_{α-BV} = ||u||_{L¹(Ω)} + ∫_Ω α|Du|.

Remark 1. Note that if the function α is bounded and its infimum is greater than zero, then the BV norm and the α-BV norm are equivalent. Therefore, u ∈ L¹(Ω) having bounded α-total variation implies that u ∈ BV(Ω). We now directly state some properties of the α-total variation from [11].
Taking φ as the absolute value function in Lemma 1 in Section 4.3 of [23] and extending the conclusion, we obtain the following lemma.
Lemma 2.5. Let u ∈ BV(Ω), and let ϕ_{a,b} be the cut-off function defined by

ϕ_{a,b}(t) = min( max(t, a), b ),  0 < a < b.

Then ϕ_{a,b}(u) ∈ BV(Ω), and

∫_Ω α|Dϕ_{a,b}(u)| ≤ ∫_Ω α|Du|.

3. Properties of the minimizer. We first show that the energy functional in (5) is strictly convex.

Lemma 3.1. The functional E(u) in (5) is strictly convex on {u ∈ BV(Ω) : u > 0}.

Proof. With t ∈ R⁺ and a fixed x ∈ Ω, we define a function g as

g(t) = β(√(t/f(x)) − 1)² + λ(t − f(x) log t).

By simple calculation, we obtain that the second order derivative of g satisfies g''(t) > 0 for all t ∈ R⁺, so g is strictly convex. Then, setting t = u(x) for each x ∈ Ω, we get the strict convexity of the last two terms in (5). Based on the convexity of the adaptive TV regularization, we deduce that E(u) in (5) is strictly convex.

Lemma 3.2. For fixed x ∈ Ω, the function g defined above is decreasing on [0, f(x)) and increasing on (f(x), +∞), and its first derivative is

g'(t) = β (√(t/f) − 1)/√(tf) + λ (1 − f/t).

Indeed, denoting √(t/f) = y > 0 and setting g'(t) = 0, the only positive root is y = 1; this shows that g has a single extreme point t = f(x), which is a minimum.

Now we discuss the existence and uniqueness of a solution to (5) and the minimum-maximum principle.

Theorem 3.3. Let f ∈ L^∞(Ω) with inf_{x∈Ω} f > 0. Then the model (5) admits a unique solution u* ∈ BV(Ω) such that

inf_Ω f ≤ u* ≤ sup_Ω f.

Proof. Setting h(t) = t − f log t, we have h'(t) = 1 − f/t (t > 0), so it is easy to check that u − f log u reaches its minimum value f − f log f over R⁺ at u = f. Hence E(u) has a lower bound for all u ∈ BV(Ω) with u > 0, and we can choose a minimizing sequence {u_n} ⊂ BV(Ω) for the problem (5). Based on Lemma 3.2, g(m) is decreasing for m ∈ [0, f(x)) and increasing for m ∈ (f(x), +∞). Therefore, if M ≥ sup_Ω f, then for all m ∈ R⁺ one always has

g(min(m, M)) ≤ g(m).    (14)

Moreover, utilizing Lemma 2.5, we have

∫_Ω α|D min(u, M)| ≤ ∫_Ω α|Du|.    (15)

Combining (14) and (15), we deduce that E(min(u, M)) ≤ E(u). On the other hand, in the same way, we are able to get that E(max(u, a)) ≤ E(u) for 0 < a ≤ inf_Ω f. Therefore, we can assume that 0 < a ≤ u_n ≤ b, which implies that u_n is bounded in L¹(Ω). Since a ≤ u_n ≤ b and g ∈ C[a, b], g(u_n) is bounded. Moreover, by the definition of u_n, there is a constant C such that ∫_Ω α|Du_n| + ∫_Ω g(u_n) dx ≤ C.
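For the reader's convenience, the derivative computations behind these convexity and monotonicity claims can be written out explicitly (a short check, assuming the pointwise fidelity g read off from the model):

```latex
\begin{align*}
g(t)   &= \beta\Bigl(\sqrt{t/f}-1\Bigr)^{2} + \lambda\,(t - f\log t), \qquad t>0,\\
g'(t)  &= \beta\Bigl(\tfrac{1}{f}-\tfrac{1}{\sqrt{tf}}\Bigr) + \lambda\Bigl(1-\tfrac{f}{t}\Bigr),\\
g''(t) &= \frac{\beta}{2\sqrt{f}}\,t^{-3/2} + \lambda\,\frac{f}{t^{2}} \;>\; 0.
\end{align*}
```

Both terms of g' vanish exactly at t = f and share the sign of t − f, confirming that t = f is the unique minimum and that g is strictly convex on R⁺.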
Then ∫_Ω α|Du_n| ≤ C, which implies that the sequence {u_n} is bounded in BV(Ω). Consequently, there exists a subsequence {u_{n_k}} which converges strongly in L¹(Ω) to some u* ∈ BV(Ω), and converges weakly-* to u* in BV(Ω), where a ≤ u* ≤ b. Utilizing Lemma 2.3 and Fatou's lemma, we get that u* is a solution of the problem (5).
Finally, by Lemma 3.1, the uniqueness of the minimizer follows from the strict convexity of the energy functional in (5).
We conclude with a comparison principle. By Theorem 3.3, there exist solutions u₁ and u₂ for data f₁ and f₂, respectively. Since u_i is a minimizer of E(u) with f = f_i (i = 1, 2), we may add the two minimality inequalities and extend the fact in [7,20] that

∫_Ω α|D min(u₁, u₂)| + ∫_Ω α|D max(u₁, u₂)| ≤ ∫_Ω α|Du₁| + ∫_Ω α|Du₂|.

Writing Ω = {u₁ > u₂} ∪ {u₁ ≤ u₂} and applying the Lagrange mean value theorem to the logarithmic terms, we find that the fidelity contributions on {u₁ > u₂} have a strict sign as soon as β/λ > (b₁b₂ − 2a₁a₂)/(2a₁a₂b₁). Taking account of f₁ < f₂, in this case we deduce that {u₁ > u₂} has zero Lebesgue measure, i.e., u₁ ≤ u₂ a.e. in Ω.

The evolution equation. Problem (5) admits a unique solution described by the Euler-Lagrange equation

−div( α(x) ∇u/|∇u| ) + β (√(u/f) − 1)/√(uf) + λ (1 − f/u) = 0 in Ω,

with the Neumann boundary condition. The gradient descent method is adopted; that is, we solve the following evolution problem:

∂u/∂t = div( α(x) ∇u/|∇u| ) − β (√(u/f) − 1)/√(uf) − λ (1 − f/u),  (x, t) ∈ Ω × (0, T),    (16)
∂u/∂n = 0 on ∂Ω × (0, T),    (17)
u(x, 0) = f(x) in Ω.    (18)

For the moment, we assume that problem (16)-(18) has a weak solution u(x, t) and give the maximum principle.
Choose (u − N)⁺ = max(u − N, 0) as a test function, where 0 < N = sup_{x∈Ω} f. Then, multiplying equation (16) by the test function, integrating over Ω and using the Neumann boundary condition, we obtain identity (20). Obviously the second and third terms on the left-hand side of (20) are greater than or equal to 0. Then from (20) we can deduce that (u − N)⁺ = 0 a.e. in Ω; thus u ≤ N = sup_{x∈Ω} f. The proof of inf_Ω f ≤ u is similar.
4. Numerical aspects. In this section, we use the traditional finite difference method and the primal-dual algorithm proposed in [8,18,28] to solve the energy functional minimization problem (5). Let X = R^{N₁×N₂} denote the discrete image space and Y = X × X. For a given (i, j) ∈ [1, N₁] × [1, N₂], write u_{i,j} ∈ R and p_{i,j} = (p_{1;i,j}, p_{2;i,j}) ∈ R² for u ∈ X and p ∈ Y, respectively. Then equip X and Y with the standard Euclidean inner products:

⟨u, v⟩_X = Σ_{i,j} u_{i,j} v_{i,j},  ⟨p, q⟩_Y = Σ_{i,j} (p_{1;i,j} q_{1;i,j} + p_{2;i,j} q_{2;i,j}).

Also define the discrete forward (+) and backward (−) gradient operators ∇± : X → Y by finite differences with Neumann boundary conditions. Considering the inner products on X and Y, the corresponding discrete backward (−) and forward (+) adjoint operators div∓ : Y → X of −∇± are obtained, i.e., ⟨∇± u, p⟩_Y = −⟨u, div∓ p⟩_X.
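The adjointness relation between the discrete gradient and divergence can be checked directly. Below is one standard forward-difference gradient and its backward-difference negative adjoint (our sketch; `grad` and `div` are illustrative names):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary (last row/column difference = 0)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy
```

With these boundary conventions the discrete identity ⟨∇u, p⟩ = −⟨u, div p⟩ holds exactly for arbitrary u and p, which is what the primal-dual scheme below relies on.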

4.2. The finite difference method (FDM). To ensure the stability of a difference scheme for parabolic equations, the Courant-Friedrichs-Lewy (CFL) condition needs to be satisfied. For a forward difference scheme of a linear diffusion equation with coefficient a, stability requires τ ≤ h²/(4a). Our equation (16), however, is limited by the CFL condition τ ≤ c h² |∇u|: especially in flat regions where ∇u ≈ 0, the CFL condition requires a very small time step τ, which makes the algorithm converge slowly. Thus, we normalize the initial value to change the CFL condition and improve the efficiency of the finite difference method.
A. The rescaling technique. Using f̃ = f / sup_{x∈Ω} f(x) to replace f in equations (16)-(18), we rewrite the equations on normalized data; in this case, the final restored image is I = sup_{x∈Ω} f(x) · u. This greatly reduces the number of iterations. For the sake of analysis, our model can be simplified to

∂u/∂t = div( (u / max_Ω u)^p ∇u/|∇u| ).    (24)
So we convert the calculation of equation (24) into solving its rescaled counterpart. In fact, this relaxes the CFL condition by a factor of M, greatly reducing the number of iterations.
B. The finite difference scheme. The core of a finite difference scheme is to replace derivatives by finite differences so as to numerically approximate our model. Denote the space step by h = 1 and the time step by τ; the initial and boundary conditions are discretized accordingly, where L, J are the grid numbers of the image. Consequently, as in [29], the algorithm for the proposed model takes as input f and the parameters σ, a, λ, β.
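As an illustration of such an explicit scheme, one gradient-flow step might be implemented as below. This is a simplified sketch, not the paper's exact algorithm: the ε-regularization of 1/|∇u| and the reduced (Poisson-type only) fidelity term are our own simplifications.

```python
import numpy as np

def fdm_step(u, f, alpha, tau=0.2, lam=0.05, eps=1e-8):
    """One explicit time step of the adaptive-TV gradient flow (a sketch).

    The curvature term div(alpha * grad u / |grad u|) is discretized with
    forward/backward differences; lam weighs a simple fidelity pull toward f;
    eps regularizes the degenerate 1/|grad u| in flat regions.
    """
    # forward differences (zero difference at the far boundary)
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:-1, :] = u[1:, :] - u[:-1, :]
    uy[:, :-1] = u[:, 1:] - u[:, :-1]
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = alpha * ux / mag, alpha * uy / mag
    # backward-difference divergence of the regularized flux
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:, :] = px[1:, :] - px[:-1, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:] = py[:, 1:] - py[:, :-1]
    curv = dx + dy
    # Poisson-type fidelity gradient: derivative of u - f*log(u) is 1 - f/u
    return u + tau * (curv - lam * (1.0 - f / np.maximum(u, eps)))
```

On a constant image equal to the data, the step is exactly the identity, and the fidelity term moves a flat estimate monotonically toward f, which is the qualitative behavior the scheme above describes.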

4.3. Primal-dual method (PDM). If the evolution equations are not considered, the primal-dual algorithm may be used to solve the minimization problem directly. Based on the definition of α-TV in Section 2, we give the primal-dual formulation of (5):

min_u max_{p∈Z} ⟨∇u, p⟩ + G(u),    (26)

where p is the dual variable, Z = {q ∈ Y : |q_{i,j}| ≤ α_{i,j}}, and, for the sake of simplicity, G(u) denotes the discretized fidelity term. This is a generic min-max problem, and we can apply the primal-dual algorithm with adaptive steps proposed in [8] to solve the above optimization task. The algorithm can be summarized as the following alternating iterative process:

p^{n+1} = arg max_{p∈Z} ⟨∇ū^n, p⟩ − (1/(2σ_n)) ||p − p^n||²,    (27)
u^{n+1} = arg min_u ⟨∇u, p^{n+1}⟩ + G(u) + (1/(2τ_n)) ||u − u^n||²,    (28)
θ_n = 1/√(1 + 2γτ_n),  τ_{n+1} = θ_n τ_n,  σ_{n+1} = σ_n/θ_n,  ū^{n+1} = u^{n+1} + θ_n (u^{n+1} − u^n).

In order to apply the algorithm to (26), the main questions are how to solve the dual problem (27) and the primal problem (28). For (27), the solution is easily given by p^{n+1}_{i,j} = π_Z( p^n_{i,j} + σ_n (∇ū^n)_{i,j} ), where π_Z is the pointwise projection onto the set Z under the l²-norm, i.e.
π_Z(q)_{i,j} = q_{i,j} / max(1, |q_{i,j}|/α_{i,j}),

with |q_{i,j}| = √(q²_{1;i,j} + q²_{2;i,j}). For (28), since the minimization problem is strictly convex and its objective function is twice differentiable, it can be solved efficiently by Newton's method followed by one correction step, to ensure that u^n remains nonnegative and maintains the mean of f. This projection is inspired by Proposition 2.1 of [9] and Proposition 12 of [12]. We end this section with the convergence properties of the above algorithm; for details, please refer to the proof of Theorem 2 in [8].

Theorem 4.1. Choose τ₀ > 0 and σ₀ ≤ 1/(τ₀ ||∇||²). Then the iterates (u^n, p^n) of our algorithm converge to a saddle point of (26).
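The two ingredients specific to our setting, the pointwise projection onto Z and the adaptive step-size update, can be sketched as follows (illustrative helper names; the update rule follows the accelerated scheme of [8]):

```python
import numpy as np

def project_Z(px, py, alpha):
    """Pointwise projection onto Z = {q : |q_{i,j}| <= alpha_{i,j}} under the l2 norm."""
    mag = np.sqrt(px**2 + py**2)
    scale = np.maximum(1.0, mag / np.maximum(alpha, 1e-12))
    return px / scale, py / scale

def accelerated_steps(tau, sigma, gamma):
    """One adaptive update: theta = 1/sqrt(1 + 2*gamma*tau), shrink tau, grow sigma."""
    theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
    return theta, theta * tau, sigma / theta
```

Note that the product τ_n σ_n is invariant under the update, so the step-size condition of Theorem 4.1, once satisfied at n = 0, is preserved along the iteration.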

5. Experimental results.
In this section we present some numerical examples illustrating the image restoration capability of our model. We compare the results with those obtained from state-of-the-art methods in recent studies, namely the AA algorithm and the DZ algorithm. All denoising methods were evaluated on five images: the Pie image (214×216 pixels), the Cameraman image (256×256 pixels), the Parrot image (256×256 pixels), a Synthetic image (300×300 pixels), and the Piglet image (341×199 pixels). In all numerical experiments computed by our algorithm, the images do not need to be normalized to the range [0, 255]. For each image, a noisy observation is generated by multiplying the original image by a realization of noise according to the model in (2), with L ∈ {1, 4, 10}.
To quantify the denoising effect, for a noiseless image u_O and its denoised version u produced by any algorithm, the denoising performance is measured in terms of the peak signal-to-noise ratio (PSNR) [17],

PSNR = 10 log₁₀( MN |u_O|²_max / ||u − u_O||²₂ ),

and the mean absolute-deviation error (MAE),

MAE = ||u − u_O||₁ / (MN),

where |u_O|_max gives the gray-scale range of the original image and M, N are the size of the image. We also use the structural similarity index (SSIM) [36] to measure the degree of structural similarity between u_O and u.
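The two scalar metrics can be computed in a few lines (a sketch using one common PSNR variant in which the peak is taken as the maximum of the clean image, matching the description above):

```python
import numpy as np

def psnr(u, u0):
    """Peak signal-to-noise ratio in dB, with the peak taken from the clean image u0."""
    mse = np.mean((u - u0) ** 2)
    return 10.0 * np.log10(u0.max() ** 2 / mse)

def mae(u, u0):
    """Mean absolute-deviation error."""
    return np.mean(np.abs(u - u0))
```

Higher PSNR and lower MAE indicate a restoration closer to the clean reference; SSIM would additionally require a windowed implementation such as the one in [36].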
The gray level indicators (7) and (8) were selected for the model tests, and the corresponding functionals (5) are denoted by E₁ and E₂, respectively. Five parameters are involved in our model: λ, β, p, σ and a. The exponential parameter p in the gray level indicator plays a corrective role, and the convolution scale σ is related to the noise level. We found that values 1 ≤ p ≤ 2 and σ ∈ {1, 2} gave optimal results across all our experiments, and adjusting the parameters within these intervals changes the results only slightly. We simply set σ = 1, while λ, β, p, a are tuned empirically.
For a fair comparison, the parameters of all methods were individually tuned to achieve their best performance; their values are summarized in Table 1. By observing the rows E1-FDM and E2-FDM, it can be seen that the key parameter p of the gray level indicator (7) is insensitive to different images and different noise levels; in practice, p = 1.3 can be set to obtain the best result. In contrast, the key parameters p and a of the accelerated version (8) of the gray level indicator need to be adjusted empirically for different images and different noise levels.
The results are depicted in Figures 6-8 for the Pie image, Figures 9-11 for Cameraman, Figures 12-14 for Parrot, Figures 15-17 for the Synthetic image, and Figures 18-20 for Piglet. Comparing the results of the three methods, i.e., AA, DZ and ours, in Figures 6-20, we see that our method performs best visually. First, in the restored results of the AA and DZ methods, much more noise remains than with our method, especially when recovering images corrupted by high-level noise: there are many bright outliers in the AA results and too many black spots in the DZ results. Second, due to excessive smoothing in the denoising process, the detail contrast of the AA and DZ methods is significantly reduced; see, e.g., the tripod in Cameraman and the eyes and nose in Piglet, whereas our method preserves more details. Moreover, observing the center of Pie, the textures surrounding the eye, and the feathers of Parrot, we can clearly see that our model protects certain texture information while successfully suppressing noise. In order to quantitatively compare performance and computational efficiency, the PSNR, MAE and SSIM values of the restored results, as well as the numbers of iterations, are listed in Table 2. The PSNR of our method is at least 0.5 dB higher than that of the others, and the stronger the noise, the more significant the advantage of our method; in particular, when L = 1, the image restored by our method is 3 dB higher than that restored by the DZ method. In terms of computational efficiency, our model adopts the finite difference method with large steps, and the number of iterations need not exceed 600 for any of the noise levels. The primal-dual method converges faster when the noise level is smaller. Furthermore, when formula (8) is selected as the regularization weight, the convergence is slightly faster, which verifies that stretching and shifting the basic formula (7) can improve the regularization efficiency.
The PDM algorithm makes the energy drop rapidly, but more iterations are needed to reach a higher PSNR accurately. In contrast, the FDM algorithm converges slowly, but eventually the energy converges to a lower value.
6. Conclusion. In this paper, using two kinds of preprocessing, we obtain two types of gray level indicators, and we improve the efficiency of the regularization by a stretching technique. The regularization term thus constructed differs in regularization strength at different gray levels, which exactly conforms to the characteristics of multiplicative noise. For the numerical implementation of the proposed model, we have introduced the FDM and PDM schemes. On the one hand, we use initial-value normalization to improve the efficiency of the finite difference method, increasing the time step and reducing the number of iterations. On the other hand, since the model is strictly convex, the primal-dual algorithm with adaptive steps is also an efficient method. Experimental results have demonstrated the effectiveness of our model.