NONCONVEX TGV REGULARIZATION MODEL FOR MULTIPLICATIVE NOISE REMOVAL WITH SPATIALLY VARYING PARAMETERS

Abstract. In this article, we introduce a novel variational model for the restoration of images corrupted by multiplicative Gamma noise. The model combines a convex data-fidelity term with a nonconvex version of the total generalized variation (TGV). In addition, we adopt a spatially adaptive regularization parameter (SARP) approach. The nonconvex TGV regularization enables the efficient denoising of smooth regions, without the staircasing artifacts that appear in total variation regularization-based models, while conserving edges and details. Moreover, the SARP approach further helps preserve fine structures and textures. To deal with the nonconvex regularization, we utilize an iteratively reweighted ℓ1 algorithm, and the alternating direction method of multipliers is employed to solve a convex subproblem. This leads to a fast and efficient iterative algorithm for solving the proposed model. Numerical experiments show that the proposed model produces better denoising results than state-of-the-art models.

1. Introduction. In many real applications, images unavoidably suffer from noise that occurs during the image-acquisition process. Thus, image denoising is essential in image processing, and it aims to remove noise while simultaneously preserving edges and fine details. Many previous works have focused on the removal of additive Gaussian noise. However, the Gaussian noise model is not suitable to describe the noise in real images such as synthetic aperture radar (SAR), ultrasound, and laser images. In fact, these images are distorted by multiplicative noise, and the multiplicative noise depends on the intensity values of the original image. Thus, the removal of multiplicative noise is more challenging than that of additive Gaussian noise. In this paper, we consider the problem of multiplicative noise removal.
Let Ω ⊂ R^2 be an open and bounded domain with a compact Lipschitz boundary, and let u : Ω → R be an original image. The observation f contaminated by multiplicative noise η is given by

(1) f = u · η,

where η > 0 represents the multiplicative noise, following a Gamma distribution whose probability density function (PDF) is

(2) g(η) = (M^M / Γ(M)) η^{M−1} e^{−M·η} for η ≥ 0,

where M > 0 is an integer related to the noise level, and Γ is the Gamma function. Thus, the mean of η is 1 and its standard deviation is 1/√M. This type of noise typically appears in SAR images.
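The statistical properties above can be checked numerically. The following sketch (assuming NumPy; the image values and noise level are illustrative) draws multiplicative Gamma noise with shape M and scale 1/M, which has mean 1 and standard deviation 1/√M:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5  # noise level: smaller M means heavier multiplicative noise

# Gamma noise with shape M and scale 1/M has mean 1 and std 1/sqrt(M).
eta = rng.gamma(shape=M, scale=1.0 / M, size=1_000_000)

u = 100.0 * np.ones(eta.shape)  # a constant "clean image" for illustration
f = u * eta                     # multiplicative degradation f = u * eta

print(eta.mean())  # close to 1.0
print(eta.std())   # close to 1/sqrt(5) ~ 0.447
```

Note that, unlike additive Gaussian noise, the perturbation f − u = u(η − 1) scales with the intensity u, which is why the degradation is intensity-dependent.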
In general, there are two approaches to recover an unknown clean image from a noisy image degraded by multiplicative Gamma noise. The first is the filtering-based approach [35,57,17,16,50] and the other is the variational approach [4,52,22,34,51,15]. The filtering-based methods are fast, but they produce some artifacts and cannot restore thin or small details properly. One of the well-known filtering-based methods is SAR-BM3D [50], which combines a nonlocal principle with a wavelet representation. Specifically, it collects 3D groups of similar image patches using a similarity measure and applies a wavelet decomposition to the 3D blocks to obtain the denoised image. This nonlocal-based algorithm performs well, but its computational cost is very high, and it produces some undesirable artifacts under heavy multiplicative noise. In a variational framework, Aubert and Aujol [4] proposed a new model, by deriving a data-fidelity term from the Gamma distribution and integrating it with total variation (TV) regularization [52]. However, their data-fidelity term is nonconvex. Thus, some convex data-fidelity terms were proposed in [55,19], by using a log transform or adding a quadratic term. Recently, Lu et al. [41] suggested a new convex data-fidelity term, which is more suitable for removing heavy multiplicative noise. All these models involve TV regularization, so they tend to produce stair-shaped artifacts in smooth transition regions, which are often called staircasing artifacts. To alleviate these artifacts, several hybrid TV regularization models have been suggested in [9,11,43,7], which combine first- and second-order TV functionals. In particular, Feng et al. [20] extended the model in [55] by adopting total generalized variation (TGV) regularization [7] instead of TV. Moreover, Shama et al. [54] proposed a TGV-based model associated with the convex fidelity term in [19].
In this work, we introduce a nonconvex TGV (NTGV)-based model incorporated with the convex data fidelity in [41] to deal with heavy multiplicative noise.
Nonconvex regularization [33,49] has drawn interest because nonconvex regularizers have advantages over convex regularizers in terms of keeping edges and discontinuous features. In [32], the authors replaced the TV regularization term by the ℓ_q norm of the gradient of an image with 0.5 ≤ q ≤ 0.8 and demonstrated that the nonconvex regularizer finds a better denoised image than TV. In addition, numerical results in [45] showed that nonconvex regularizers are superior to convex ones. Recently, Oh et al. [48] proposed a nonconvex hybrid TV regularizer, which is a convex combination of TV and second-order TV. The authors also showed that the nonconvex hybrid TV produces better denoising results than its convex version or TV. The TGV was also extended in [47] to a nonconvex version, and it was validated that the NTGV performs preferably compared with the TGV. Hence, here we also utilize a nonconvex version of the TGV, to take advantage of both nonconvex regularization and TGV regularization.
In the last decade, many efficient optimization algorithms for convex minimization problems have been proposed [24,10,6,63]. The alternating direction method of multipliers (ADMM) [6,63] is the most commonly used algorithm for convex problems in image processing due to its convergence and wide applicability. ADMM is equivalent to the split Bregman method [53] for optimization problems with linear equality constraints. On the other hand, nonconvex optimization algorithms have not been studied as much as convex algorithms because of the intrinsic difficulties arising from nonconvexity. Nevertheless, several numerical algorithms for nonconvex problems have been proposed in [2,3,56,26,25,59], with or without convergence analysis. Recently, Candès et al. [8] introduced a new algorithm, called the iteratively reweighted ℓ1 algorithm (IRLA), to solve compressive sensing problems involving a nonconvex regularization. Furthermore, Ochs et al. [46] extended it to solve linearly constrained optimization problems in computer vision, and it was further generalized to various kinds of iteratively reweighted algorithms [47] with convergence analysis. In this work, we adopt the IRLA from [46,47] to handle our nonconvex model, and the ADMM to solve a convex subproblem.
An energy functional in a variational model usually consists of a data-fidelity term and a regularization term. The data-fidelity term measures the difference between an unknown estimated image u and a given noisy image f, while the regularization term smooths the estimated image and removes noise. Typically, the data-fidelity term is weighted by a regularization parameter, which controls the smoothness of the output image. An appropriate spatially varying regularization parameter can give better results for the image denoising problem. Spatially adaptive regularization parameter (SARP) strategies [13,18,23,1,39,36,14] have been studied for image denoising problems. They aim to find a spatially varying parameter that takes small values in homogeneous regions and large values in textured regions. In particular, Dong et al. [18] proposed a new SARP approach for additive Gaussian noise removal with a theoretical analysis. This approach was extended to denoising problems with other types of noise [28,12,38]. We also extend this SARP approach to the removal of heavy multiplicative Gamma noise.
In this paper, we propose a novel variational model for the removal of heavy multiplicative Gamma noise. The model involves an NTGV regularization and a spatially varying regularization parameter. The NTGV regularization helps to effectively denoise smooth regions and preserve edges and fine details. In addition, the spatially varying regularization parameter further improves the denoising results by maintaining small scales and textures. We also present an efficient algorithm for solving the proposed nonconvex nonsmooth model, by exploiting the IRLA and the ADMM. Numerical results demonstrate the superiority of the proposed model over state-of-the-art models.
The rest of the paper is organized as follows. In Section 2, we recall several existing regularizers and some variational models for multiplicative Gamma noise removal. In Section 3, we introduce our proposed model and describe an algorithm for solving it. In Section 4, numerical experiments are presented for the proposed model, with comparisons with several state-of-the-art models. Finally, we summarize our work in Section 5.

2. Background.
2.1. Convex and nonconvex regularizers. In this section we recall several existing regularizers. First, the TV regularizer was introduced in a variational model for Gaussian noise removal [52]:

(3) min_u (λ/2) ∫_Ω (u − f)^2 dx + TV(u),

where λ > 0 is a tuning parameter, TV(u) = ∫_Ω |∇u| dx, and |∇u| denotes the Euclidean norm of the gradient. The TV regularizer is one of the most well-known regularizers owing to its ability to preserve edges.
To enhance the edge-preserving ability of TV, several nonconvex TV regularizers were proposed in [32,45], which have the form Φ(|∇u|) = ∫_Ω φ(|∇u|) dx, where φ is a nonconvex function such as

(4) φ(s) = s^q (0 < q < 1), φ(s) = ρs/(1 + ρs), or φ(s) = log(1 + ρs^2), with ρ > 0.

Numerical results in [45] showed that the nonconvex TV regularizers were better at preserving edges and textures than TV. However, the nonconvex TV regularizers smooth homogeneous regions in the same way as TV. This indicates that they can yield staircasing artifacts near smooth transition regions in the restored images.
To overcome these staircasing effects, higher-order regularization-based models were suggested in [9,11,43,37]. As an early work, an infimal-convolution TV (ICTV) model was proposed in [9], which takes the infimal convolution of TV and second-order TV. Moreover, Li et al. [37] proposed a denoising model involving a convex combination of TV and second-order TV as a regularizer. Oh et al. [48] further extended the model in [37] by making use of a nonconvex hybrid TV regularizer. On the other hand, as a generalization of the ICTV, the TGV regularizer was proposed in [7]. In particular, the second-order TGV is given by

(5) TGV^2_α(u) = min_p α_1 ∫_Ω |∇u − p| dx + α_0 ∫_Ω |E(p)| dx,

where E(p) = (1/2)(∇p + (∇p)^T) represents the distributional symmetrized derivative, and α_1, α_0 > 0 are weighting parameters that control the balance between the first- and second-order terms. From the formulation (5) of TGV, it can be seen that TGV^2_α(u) automatically finds an appropriate balance between the first- and second-order derivatives of u with respect to the α_i.
Recently, Ochs et al. [47] proposed a nonconvex extension of the TGV regularizer:

(6) NTGV(u) = min_p α_1 ∫_Ω φ(|∇u − p|) dx + α_0 ∫_Ω φ(|E(p)|) dx,

where φ(x) = (1/ρ) log(1 + ρx), with the parameter ρ > 0 controlling the nonconvexity of the regularization term. This regularization takes advantage of both nonconvex regularization and TGV regularization.
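The behavior of the log penalty φ(x) = (1/ρ) log(1 + ρx) can be sketched numerically. The helper names below are illustrative, and the ρ values are arbitrary; the derivative 1/(1 + ρx) is the quantity that later serves as a weight in reweighting schemes:

```python
import numpy as np

def phi(x, rho):
    """Nonconvex log penalty phi(x) = (1/rho) * log(1 + rho*x), for x >= 0."""
    return np.log1p(rho * x) / rho

def phi_prime(x, rho):
    """Derivative 1/(1 + rho*x): decays for large x, so strong edges
    are penalized less than under TV (whose 'derivative' is constant)."""
    return 1.0 / (1.0 + rho * x)

x = np.linspace(0.0, 10.0, 101)
# As rho -> 0 the penalty approaches the identity (TV-like behavior);
# larger rho flattens the penalty for large x.
print(phi(x, 0.01)[-1], phi(x, 5.0)[-1])
```

The comparison printed at the end illustrates why the nonconvexity parameter matters: with ρ = 0.01 the penalty at x = 10 is nearly 10 (TV-like), while with ρ = 5 it is much smaller.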

2.2. Variational models for multiplicative noise removal. In this section, we review existing variational models for multiplicative Gamma noise removal. First, Aubert and Aujol [4] proposed the following variational model for eliminating multiplicative Gamma noise, whose data-fidelity term is derived from the maximum a posteriori estimation:

(7) min_u λ ∫_Ω (log u + f/u) dx + TV(u),

where λ > 0 is a regularization parameter. This model is nonconvex because of the data-fidelity term. Hence, it is not guaranteed to find a global solution, even numerically.
To overcome this disadvantage, Shi and Osher [55] proposed a convex data-fidelity term by using the log-transformation (u ← log u). This convex data-fitting term was integrated with the TGV regularizer, leading to the following model [20]:

(8) min_u λ ∫_Ω (u + f e^{−u}) dx + TGV^2_α(u).

Another approach to constructing a convex data-fidelity term was proposed in [19], by adding a quadratic penalty term to the nonconvex fidelity term of the model in (7). Recently, Shama et al. [54] proposed a variational model by combining the convex data-fidelity term in [19] with the TGV:

(9) min_u λ ∫_Ω (log u + f/u + α(√(u/f) − 1)^2) dx + TGV^2_α(u),

where α > 0 is a parameter. It was shown in [19] that the objective function of (9) is strictly convex if α ≥ 2√6/9. Note that E[√(u/f)] is always larger than 1 and is close to 1 only when M is sufficiently large. This indicates that this model is more appropriate for a relatively large value of M, which corresponds to a low noise level.
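The convexity gained by the log-transformation in [55] can be verified directly: the pointwise fidelity h(u) = u + f e^{−u} satisfies h''(u) = f e^{−u} > 0 whenever f > 0. A minimal numerical check (the value of f and the grid are illustrative):

```python
import numpy as np

f = 3.0  # a positive observed intensity (illustrative)

def h(u):
    # Log-transformed data-fidelity integrand from the Shi-Osher model:
    # with u <- log(u), the term log(u) + f/u becomes u + f * exp(-u).
    return u + f * np.exp(-u)

u = np.linspace(-2.0, 4.0, 601)
second_diff = np.diff(h(u), 2)  # discrete second derivative (up to step^2)
print(second_diff.min() > 0)    # True: h is strictly convex for f > 0
# The minimizer satisfies h'(u) = 1 - f*exp(-u) = 0, i.e. u = log(f).
print(u[np.argmin(h(u))])       # close to log(3) ~ 1.099
```

The minimizer u = log f confirms that, in the noise-free limit, the log-domain fidelity is anchored at the log of the observation, as expected.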
To handle heavy multiplicative noise, Lu et al. [41] replaced the value 1 in the penalty term of the data-fidelity term by a parameter β ≥ 1. Furthermore, the authors adopted the log transformation as in [55], because exponent-type models yield restored images of better quality than logarithm-type models. This model is called the exp model, and it takes the form

(10) min_u λ ∫_Ω g(u) dx + TV(u),

where g denotes the resulting convex data-fidelity term, β ≥ 1 depends on the noise level M, and the objective function is strictly convex under the condition on α and β stated in (11). The authors in [41] demonstrated that the data fidelity of (10) is appropriate for small values of M, corresponding to heavy multiplicative noise. Moreover, Na et al. [44] extended the model (10) by incorporating an automatic selection of a SARP λ(x) ∈ L^∞(Ω). This model enhanced the denoising results of the model (10) by better preserving details and edges. However, the restored images obtained by the exp model with SARP [44] still include staircasing artifacts, owing to the TV regularizer.
3. Description of the proposed model and algorithm. In this section, we introduce a variational model using an NTGV regularizer and a SARP approach, and present an efficient optimization algorithm for solving our model. Specifically, in Section 3.1, we propose an NTGV-based model with the SARP to remove heavy multiplicative Gamma noise. In Section 3.2, we describe an updating strategy for the SARP by introducing a constrained model with local constraints. We present an iterative algorithm for solving the proposed model in Section 3.3.
3.1. Proposed model. First, we propose the following model for the removal of heavy multiplicative noise, which utilizes an NTGV and a SARP λ : Ω → R:

(12) min_u ∫_Ω λ(x) g(u) dx + NTGV(u),

with the NTGV defined as

(13) NTGV(u) = min_p α_1 ∫_Ω φ_1(|∇u − p|) dx + α_0 ∫_Ω φ_0(|E(p)|) dx,

where φ_i(x) = (1/ρ_i) log(1 + ρ_i x), and the parameters ρ_i > 0 control the nonconvexity of the regularization terms. The parameters α > 0 and β ≥ 1 are defined in (10) and satisfy the condition in (11) to enforce the convexity of the data-fidelity term.
The data-fidelity term in (10) is employed for our model because it is suitable for heavy multiplicative Gamma noise. Thus, our model is an extension of the TV-based denoising models [41,44] to higher-order regularization. Moreover, we utilize a nonconvex version of TGV, since TGV automatically balances the first- and second-order derivatives rather than using any fixed combination. The NTGV enables us to benefit from both higher-order regularization and nonconvex regularization for image denoising. That is, it helps sufficiently denoise smooth regions without staircasing effects while preserving edges and details. We further extend our model by incorporating the SARP approach. The SARP approach automatically selects a spatially varying regularization parameter, which prevents over-smoothing of small features such as textures while sufficiently denoising homogeneous regions.
Here we employ the nonconvex log function among the possible nonconvex functions in (4). In the case of the nonconvex ℓ_q-norm regularizer, it is difficult to find its limiting supergradient at zero. Moreover, as discussed in [42], one can expect to obtain better denoising results when using the fractional function to reconstruct piecewise-constant images. However, for images that are not piecewise constant, the log function produces better denoising results. In general, real natural or SAR images are not piecewise constant, so we utilize the log function as our nonconvex function φ_i. This log type of NTGV was introduced in [46,47]. The only difference from our approach is that we use different values for the parameters ρ_0 and ρ_1.
First, we present some statistical properties of a Gamma random variable. Let us assume that η follows the Gamma distribution with mean 1 and standard deviation 1/√M, and consider the function I(η) given in (14), where α, β ∈ R. As discussed in [44], the expectation of I(η) is obtained as in (15), where O(s) satisfies lim_{s→0} O(s)/s < ∞.
3.2. Updating strategy for the SARP. We now derive a constrained model with local constraints to obtain an updating strategy for the SARP λ(x) in (12). First, we define a local window centered at x with size r,

(16) Ω_x^r = { y | ||x − y||_∞ ≤ r/2 },

and an associated mean filter w_r(x, y), which equals 1/|Ω_x^r| for y ∈ Ω_x^r and 0 otherwise. From the degradation model (1), we have η = f/u, and we substitute η in (14) with f/u. Hence, the local expected value estimator of I(f/u) at x, associated with the local window (16), is obtained as in (17). Then, by taking the transformation u = log ũ in (17), we obtain the local expected value estimation function F_u^r(x) in (18). Now we consider the following NTGV-based minimization problem with local constraints:

(19) min_u NTGV(u) subject to F_u^r(x) ≤ C in Ω,

where C is an approximate expected value of I(η), which is given by (20). The constrained model (19) involves only one local window. According to [44], a small local window results in leftover noise in homogeneous regions, whereas a large local window leads to over-smoothing of fine features such as edges and details. Thus, an appropriate selection of the window size is important, so we adopt multiple local windows to overcome this limitation. In [44], it was shown that the use of multiple local windows yields better denoising results in both homogeneous and edge regions than a single local window, especially in the presence of heavy multiplicative noise.
Therefore, we ultimately consider the following constrained model with local constraints involving multiple local windows:

(21) min_u NTGV(u) subject to F_u(x) ≤ C in Ω,

where F_u(x) is the local expected value using multiple local windows, which can be computed as

(22) F_u(x) = ∫_Ω w(x, y) I(f(y) e^{−u(y)}) dy,

where w(x, y) = (1/N) Σ_{k=1}^N w_{r_k}(x, y) is the average of the mean filters defined in (16).
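The averaged mean filter w(x, y) in (22) amounts to averaging box-filtered versions of the pixel-wise values of I. A minimal sketch, assuming NumPy and symmetric padding; the helper names (`box_mean`, `local_expectation`), the synthetic data, and the window sizes are illustrative (the paper uses windows of sizes 7, 21, and 256):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(img, r):
    """Mean filter over an r-by-r window (r odd), with symmetric padding."""
    pad = r // 2
    padded = np.pad(img, pad, mode="symmetric")
    windows = sliding_window_view(padded, (r, r))
    return windows.mean(axis=(-2, -1))

def local_expectation(values, radii):
    """Average of mean filters over several window sizes, mirroring (22):
    w(x, y) = (1/N) * sum_k w_{r_k}(x, y)."""
    return sum(box_mean(values, r) for r in radii) / len(radii)

rng = np.random.default_rng(1)
# 'values' stands for I(f * exp(-u)) evaluated pixel-wise (illustrative data
# with mean 1, as for the Gamma noise itself).
values = rng.gamma(shape=5.0, scale=0.2, size=(64, 64))
F = local_expectation(values, radii=[7, 21, 33])
print(F.shape)  # (64, 64)
```

Averaging several window sizes trades off the leftover noise of small windows against the over-smoothing of large ones, which is the motivation given above for using multiple windows.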
Finally, we relate the constrained model (21) to the proposed model (12). To this end, we convert the constrained model (21) into the following unconstrained minimization problem using the penalty method:

(23) min_u NTGV(u) + (µ/2) ∫_Ω max(F_u(x) − C, 0)^2 dx,

where µ > 0 is a penalty parameter. As µ → ∞, the unconstrained problem (23) recovers the original problem (21).
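The penalty-method mechanism behind (23) can be illustrated on a one-dimensional toy problem (entirely illustrative, not the paper's model): minimize (x − 2)^2 subject to x ≤ 1, replaced by the penalized objective (x − 2)^2 + µ max(x − 1, 0)^2. The penalized minimizer is (2 + µ)/(1 + µ), which tends to the constrained solution x = 1 as µ grows:

```python
import numpy as np

def penalized_min(mu):
    """Minimize (x-2)^2 + mu * max(x-1, 0)^2 on a fine grid. The constraint
    x <= 1 plays the role of F_u(x) <= C in (23)."""
    x = np.linspace(0.0, 3.0, 300001)
    obj = (x - 2.0) ** 2 + mu * np.maximum(x - 1.0, 0.0) ** 2
    return x[np.argmin(obj)]

for mu in [1.0, 10.0, 100.0, 1000.0]:
    print(mu, penalized_min(mu))  # 1.5, ~1.091, ~1.010, ~1.001
```

The same mechanism drives the SARP: where the local constraint is violated (F_u(x) > C), the quadratic penalty is active and pushes the solution back toward feasibility.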
The Fréchet derivative of F_u(x), acting on ν, is computed as in (24). Using this formula, the Fréchet derivative of the quadratic penalty term in (23) is computed as in (25), where G_µ(u) = µ max(F_u(x) − C, 0). Thus, the first-order optimality condition of the problem (23) is given by (26); note that (26) is only a necessary optimality condition, owing to the nonconvexity of the model (23). Let u_µ be a critical point satisfying the first-order optimality condition (26). Then, the equality (27) holds for any action ν. On the other hand, the right-hand side of (27) is identical to the Fréchet derivative of the data-fitting term in the proposed model (12) at u = u_µ when λ = λ_µ. This implies that the problem (21) is identical to the problem (12) with the spatially varying function λ : Ω → R defined in (28).
Consequently, from this relation, we can obtain the updating rule (28) for the spatially varying parameter λ in (12).

3.3. Algorithm for solving the proposed model (12). In this subsection, we present an algorithm for solving the proposed model (12). The function λ : Ω → R is automatically updated via the updating rule in (28). We initially select a small positive constant value for λ to obtain an over-smoothed restored image, and then restore details by updating the function λ. The updating rule for λ is given in (29), where δ > 0 is a step size and u^n is the current estimate of the original image u. In other words, u^n is a solution of the problem (12) with respect to u for fixed λ = λ^n(x):

(30) u^n := arg min_u ∫_Ω λ^n(x) g(u) dx + NTGV(u).
To sum up, our SARP algorithm for the model (12) is summarized in Algorithm 1.
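The outer SARP iteration can be sketched as follows. This is a hedged illustration only: `toy_denoise` is a stand-in for the NTGV solver of (30), not the actual solver, and the residual-based λ update below is a hypothetical simplification in the spirit of (28)-(29); the paper's exact rule is (29).

```python
import numpy as np

def toy_denoise(f, lam):
    """Stand-in for solving (30): blend toward the global mean, with the
    pixel-wise lambda controlling how strongly fidelity to f is enforced."""
    global_mean = np.full_like(f, f.mean())
    w = lam / (1.0 + lam)
    return w * f + (1.0 - w) * global_mean

def sarp_loop(f, C, delta=0.5, max_iter=3):
    lam = 0.05 * np.ones_like(f)  # small initial lambda: over-smoothed start
    u = toy_denoise(f, lam)
    for _ in range(max_iter):
        residual = (f - u) ** 2                            # stand-in for F_u(x)
        lam = lam + delta * np.maximum(residual - C, 0.0)  # assumed update rule
        u = toy_denoise(f, lam)
    return u, lam

rng = np.random.default_rng(2)
f = 1.0 + 0.3 * rng.standard_normal((32, 32))
u, lam = sarp_loop(f, C=0.05)
print(lam.min(), lam.max())  # lambda grows only where the residual is large
```

The qualitative behavior matches the description above: λ stays small where the data are already well fitted (homogeneous regions) and increases where the local statistic exceeds its expected value (detailed regions).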

In each outer iteration of Algorithm 1, λ^{n+1} is updated from the current estimate u^n via the rule (29).

3.3.1. IRLA for the u-subproblem. Here we focus on solving the u-subproblem (31) for fixed λ^n, which is a nonconvex problem. We first adopt the IRLA, introduced in [46,47] for solving nonconvex minimization problems with linear constraints. Let us consider the following nonconvex linearly constrained minimization problem:

(32) min_v S_1(v) + S_2(|v|) subject to Av = 0,

where S_1 : R^n → R ∪ {+∞} is proper and convex, and S_2 : R^n_+ → R is concave and increasing. Here, R^n_+ denotes the non-negative orthant of R^n, and |v| is the coordinate-wise absolute value. The IRLA for solving the problem (32) iteratively solves the following convex relaxation of (32):

(33) v^{k+1} ∈ arg min_{Av=0} S_1(v) + ⟨s^k, |v|⟩, where s^k is a limiting supergradient of S_2 at |v^k|.

Indeed, the IRLA is an example of an iterative convex majorization-minimization (ICMM) method. To show the global convergence of the ICMM, the objective function of its subproblem must be strongly convex. Hence, for the global convergence of the IRLA, we use the modified version of the IRLA in [47], obtained by adding a proximal term:

(34) v^{k+1} ∈ arg min_{Av=0} S_1(v) + ⟨s^k, |v|⟩ + (ν/2) ||v − v^k||_2^2,

where ν is a very small positive constant. This modified algorithm is also a kind of ICMM method. For simplicity, we refer to the modified algorithm (34) as IRLA from now on.
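The reweighting idea can be demonstrated on a separable toy problem (illustrative only, not the paper's subproblem): minimize 0.5(x − f)^2 + (1/ρ) log(1 + ρ|x|) coordinate-wise. Each IRLA pass computes the supergradient weight w^k = 1/(1 + ρ|x^k|) and then solves the resulting convex weighted-ℓ1 subproblem exactly by soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    """Prox of t*|x|: exact solution of min_y 0.5*(y - x)^2 + t*|y|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def irl1(f, rho=2.0, n_iter=20):
    """Iteratively reweighted l1, a scalar analogue of the IRLA step (33):
    reweight with the supergradient of the log penalty, then solve the
    convex subproblem in closed form."""
    x = f.copy()
    for _ in range(n_iter):
        w = 1.0 / (1.0 + rho * np.abs(x))  # supergradient of the log penalty
        x = soft_threshold(f, w)           # exact convex subproblem solution
    return x

f = np.array([-3.0, -0.3, 0.0, 0.2, 5.0])
x = irl1(f)
print(x)  # small entries shrink to 0; large entries are barely penalized
```

Note the characteristic nonconvex behavior: small coefficients receive weight close to 1 and are thresholded to zero, while large coefficients receive small weights and survive almost unshrunk, which is exactly the edge-preserving property claimed for the NTGV.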
To apply the IRLA, we first reformulate the problem (31) into the following equivalent constrained problem, using the variable splitting technique:

(35) min_{u,p,a,b,c} ⟨λ(x) g(c), 1⟩ + ⟨α_1 φ_1(|a|), 1⟩ + ⟨α_0 φ_0(|b|), 1⟩ subject to a = ∇u − p, b = E(p), c = u.

This constrained problem (35) can be rewritten in the form (32) with the setting (36), where v = (u, p, a, b, c), S_1 collects the convex data-fidelity term, and S_2 collects the concave penalties φ_1 and φ_0. Now we can apply the IRLA to the constrained model (35). The first step of the IRLA applied to (35) is the computation of the weights d_1^{k+1} = α_1 φ_1'(|a^k|) and d_0^{k+1} = α_0 φ_0'(|b^k|); since the functional S_2 is differentiable, its limiting supergradient coincides with its gradient. The second step of the IRLA applied to the model (35) is then obtained as

(37) min_{u,p,a,b,c} ⟨λ(x) g(c), 1⟩ + ⟨d_1^{k+1}, |a|⟩ + ⟨d_0^{k+1}, |b|⟩ + (ν/2) ||(u, p, a, b, c) − (u^k, p^k, a^k, b^k, c^k)||_2^2 subject to a = ∇u − p, b = E(p), c = u,

where ⟨·, ·⟩ is the componentwise inner product. The problem (37) is a convex minimization problem with linear equality constraints. However, its minimization is not easy, owing to the nonsmooth terms and the constraints. To handle these difficulties, we employ the ADMM. In the following subsection, we describe an algorithm for solving the problem (37).
Finally, based on the global convergence of the IRLA in [47], we can also show the convergence of the IRLA applied to our u-subproblem:

Theorem 3.1. Let (u^k, p^k, a^k, b^k, c^k)_{k∈N} be the sequence generated by the IRLA (37). Then the sequence converges, as k → ∞, to a critical point of (37) in the constraint set {v : Av = 0}.

3.3.2. ADMM for solving the problem (37). The ADMM [6,63] is a widely used algorithm for linearly constrained convex optimization problems. We consider the following convex minimization problem with linear constraints:

(38) min_{y,v} F_1(y) + F_2(v) subject to Cy + Dv = e,

with the associated augmented Lagrangian

(39) L_τ(y, v; z) = F_1(y) + F_2(v) + ⟨z, Cy + Dv − e⟩ + (τ/2) ||Cy + Dv − e||_2^2,

where z is the Lagrangian multiplier vector and τ > 0 is a penalty parameter. The ADMM for solving the problem (38) minimizes the augmented Lagrangian function (39) over each variable, y or v, with the other variable fixed, and then updates the Lagrange multiplier z according to the Karush-Kuhn-Tucker optimality conditions (40) for (38).

In the model (37), we let y = (a, b, c) and v = (u, p). Then the constraint in (37), i.e., Av = 0, can be written in the form Cy + Dv = e. Hence, the augmented Lagrangian function for the problem (37) is

(41) L_τ(a, b, c, u, p; γ_a, γ_b, γ_c) = ⟨λ(x) g(c), 1⟩ + ⟨d_1^{k+1}, |a|⟩ + ⟨d_0^{k+1}, |b|⟩ + ⟨γ_a, a − ∇u + p⟩ + (τ/2) ||a − ∇u + p||_2^2 + ⟨γ_b, b − E(p)⟩ + (τ/2) ||b − E(p)||_2^2 + ⟨γ_c, c − u⟩ + (τ/2) ||c − u||_2^2 + (ν/2) ||(u, p, a, b, c) − (u^k, p^k, a^k, b^k, c^k)||_2^2,

and the ADMM applied to the problem (37) yields the iteration (42). The first subproblem in (42) is decoupled over the variables a, b, and c, and can thus be separated into three subproblems (43). The subproblems for a^{i+1} and b^{i+1} in (43) can be solved exactly using the shrink operator:

shrink(a, b) = (a / ||a||_2) max(||a||_2 − b, 0).

On the other hand, there is no closed-form solution for c^{i+1}. Since the objective function in the c-subproblem is differentiable, the Euler-Lagrange equation for c^{i+1} can be obtained as in (48). This normal equation can be efficiently solved using Newton's method.
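The shrink operator stated above is the proximal map of the weighted 2-norm, i.e., the exact solution of min_x b||x||_2 + 0.5||x − a||_2^2. A minimal sketch (assuming NumPy):

```python
import numpy as np

def shrink(a, b):
    """Vector soft-thresholding:
    shrink(a, b) = (a / ||a||_2) * max(||a||_2 - b, 0),
    with the convention that shrink(0, b) = 0."""
    norm = np.linalg.norm(a)
    if norm == 0.0:
        return np.zeros_like(a)
    return (a / norm) * max(norm - b, 0.0)

a = np.array([3.0, 4.0])  # ||a||_2 = 5
print(shrink(a, 2.0))     # rescaled to norm 3: [1.8, 2.4]
print(shrink(a, 7.0))     # threshold exceeds the norm: [0., 0.]
```

In the a- and b-subproblems of (43), the threshold b is the corresponding componentwise weight (from d_1^{k+1} or d_0^{k+1}) divided by the ADMM penalty parameter, applied pixel-wise to the shifted variables.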
The minimizer (u^{i+1}, p^{i+1}) of the second subproblem in (42) satisfies the first-order optimality condition (49), which involves the adjoint operator of −E.
The formula (49) can be rewritten as the linear equation (50). Following the ideas in [27,21], we can obtain u^{i+1} and p^{i+1} by the two-dimensional fast Fourier transform (FFT) under a symmetric boundary condition.
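The reason an FFT solve is possible is that the finite-difference operators in (50) are diagonalized by the discrete Fourier basis. The sketch below assumes periodic rather than symmetric boundary conditions (the paper's symmetric case is handled with DCT/FFT techniques as in [27,21]) and solves a simplified scalar model equation (νI − τΔ)u = r of the same structure:

```python
import numpy as np

def solve_helmholtz_fft(rhs, nu, tau):
    """Solve (nu*I - tau*Laplacian) u = rhs with periodic boundaries.
    The 5-point Laplacian has Fourier eigenvalues
    (2*cos(2*pi*k1/n1) - 2) + (2*cos(2*pi*k2/n2) - 2),
    so the system is diagonal in the Fourier basis."""
    n1, n2 = rhs.shape
    k1 = 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1) - 2.0
    k2 = 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2) - 2.0
    denom = nu - tau * (k1[:, None] + k2[None, :])  # > 0 for nu, tau > 0
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

def laplacian(u):
    """Periodic 5-point discrete Laplacian."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

rng = np.random.default_rng(3)
rhs = rng.standard_normal((32, 32))
u = solve_helmholtz_fft(rhs, nu=0.001, tau=0.75)
residual = 0.001 * u - 0.75 * laplacian(u) - rhs
print(np.abs(residual).max())  # at machine-precision level
```

The actual system (50) couples u and p, but the same principle applies blockwise: each Fourier mode yields a small linear system that is solved independently.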
To sum up, we describe the algorithm for solving the u-subproblem (31) in Algorithm 2.

Remark 1. In several works [20,30,31], the primal-dual (PD) algorithm [10] is adopted to solve TGV-based minimization models instead of the ADMM. When applied to our minimization problem (31), the PD algorithm takes much more computing time than the ADMM to reach the same energy functional value. Therefore, we utilize the ADMM for solving our u-subproblem.

4. Numerical results. In this section, we compare the performance of our model with that of state-of-the-art models recently proposed for multiplicative noise removal. Specifically, we compare our proposed model with the TwL-mV model [29], the exp-SARP model [44], the SO-TGV model [20], and the DZ-TGV model [54].
We tested 10 images, as shown in Figure 1, consisting of six natural images and four SAR images. For the experiments, all test images are corrupted by multiplicative Gamma noise with M = 3, 5, or 10.
To measure the quality of the restored images, we use the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index [60]. The PSNR (in dB) is given by

PSNR = 10 log_10 ( 255^2 N_p / ||u − u*||_2^2 ),

where N_p is the total number of pixels, and u and u* denote the restored image and the clean image, respectively. All methods are terminated when the stopping condition ||u^{n+1} − u^n||_2 / ||u^n||_2 ≤ tol is satisfied; the tolerance of the SO-TGV model and the DZ-TGV model is set as tol = 3 × 10^{-5} and 4 × 10^{-4}, respectively, as provided in [20,54]. For the outer iterations of our model and the exp-SARP model, we set the maximum iteration number MaxIter = 3, owing to the computational cost. Figure 2 presents the evolution of the function λ and the denoised image ũ as the iteration proceeds. It can be seen that there is a large improvement in the denoising results after the 2nd iteration, but little difference between the two denoised images in (b) and (c). This demonstrates that 3 outer iterations are sufficient to attain satisfactory denoising results. We use the proximal linearized ADMM (PLAD) algorithm [62] to solve the exp-SARP model. For the SO-TGV and DZ-TGV models, the primal-dual algorithm [10] is utilized.
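For concreteness, the PSNR computation can be sketched as follows (assuming NumPy; the peak value 255 and the constant test images are assumptions for illustration, since the paper's exact normalization appears in the omitted formula):

```python
import numpy as np

def psnr(u, u_star, peak=255.0):
    """Peak signal-to-noise ratio in dB for images on a [0, peak] scale."""
    mse = np.mean((u.astype(np.float64) - u_star.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

u_star = np.full((16, 16), 100.0)
u = u_star + 10.0           # constant error of 10 gray levels (MSE = 100)
print(psnr(u, u_star))      # 10*log10(255^2/100) ~ 28.13 dB
```

Higher PSNR means a smaller mean-squared deviation from the clean image; the SSIM index complements it by measuring structural rather than pointwise agreement.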
All numerical experiments are performed in MATLAB R2015b running on a 64-bit Windows 7 PC with an Intel i7-3770 CPU @ 3.40 GHz and 16 GB RAM.

4.1. Selection of parameters. In the primal-dual algorithm [10] used to solve the SO-TGV and DZ-TGV models, the primal and dual proximal parameters σ, τ directly affect the speed of convergence. We set σ = τ = 1/√12 for the SO-TGV model and σ = 0.0016, τ = 50 for the DZ-TGV model, as described in [20,54], respectively. The parameter α in the DZ-TGV model has to be tuned manually to obtain satisfactory restored results; in particular, α should satisfy the condition α ≥ 2√6/9. We also manually tune the regularization parameter λ in the SO-TGV and DZ-TGV models to obtain satisfactory denoising results.
We use m = 4 for the TwL-mV model [29], which gives satisfactory denoising results with reasonable computational cost. To find a solution, the linearized proximal alternating minimization algorithm is utilized, and the parameters are chosen as in [29]. For the exp-SARP model and our model, the value of β is closely related to the noise level M; we take β ≥ 1 following the formula in [41], which involves the constants 0.1113 and 0.1109. The value of α in the exp-SARP model and our model is set heuristically to satisfy the condition (11). In fact, we use the same values of α and β in the exp-SARP model and our model. The proximal parameter ν for the IRLA is fixed at 0.001. Moreover, we fix the penalty parameter τ = 0.75 in both the ADMM and PLAD algorithms. For our model, the parameters ρ_1 and ρ_0 affect the quality of the restored images; they are fixed as ρ_1 = 0.01 and ρ_0 = 5.
Lastly, we use three local windows of sizes r_1 = 7, r_2 = 21, and r_3 = 256 for our model.
In Figure 3, we compare the denoised images obtained using two or three local windows. Figure 3(a) shows the denoised image obtained using two small windows, which still retains noise near edges. Figure 3(b) presents the denoised image obtained using one small and one large window, where the textural regions are over-smoothed. In contrast, as shown in (c), the denoised image obtained using three local windows, i.e., a combination of small, medium, and large windows, preserves the texture parts while sufficiently removing noise near edges. We also note that using four or more windows is unnecessary, since it produces denoising results similar to those obtained with three windows. Therefore, this example validates the use of three different local windows.

4.2. Image denoising. In Figure 4, to show the effectiveness of both the NTGV and the SARP, we compare the performance of our model with that of the exp-SARP model and of our model with a fixed λ (referred to as the NTGV model). Since the exp-SARP model is TV-based, its restored images contain staircasing artifacts. On the other hand, the NTGV-based models diminish the staircasing artifacts, which demonstrates the effectiveness of TGV in producing natural restored images. Besides, the nonconvexity of the NTGV enables the recovered images to better preserve edges. Comparing the NTGV model with our model, our model better conserves tiny features such as forests and houses in the SAR images. Since the face image has strong edges but does not include small features, the restored image of our model is very similar to that of the NTGV model. SAR images usually contain many small features; thus, for the removal of noise in SAR images, the SARP approach is more effective than using a fixed parameter. In Figures 5 and 6, we show the denoising results of our model for the noise level M = 10, with comparisons with the other models. It can be seen that the TwL-mV and exp-SARP models produce restored images with staircasing artifacts around the faces in both images. Comparing our model with the SO-TGV and DZ-TGV models, we observe that our model retains less noise around the cheek and preserves more details in the eyes and hat in Figure 5. Moreover, in Figure 6, the restored image of our model has clear eyes, nose, and mouth, and the noise in the hat is eliminated well, unlike with the SO-TGV and DZ-TGV models. These images indicate that the SARP λ(x) in our model performs properly, leading to better denoising results than a fixed constant λ. In Figures 7 and 8, we test all the above-mentioned denoising models for all noise levels M = 10, 5, and 3.
It can be seen that the recovered images of the TV-based models (TwL-4V, exp-SARP) exhibit staircasing artifacts in homogeneous regions for all noise levels. However, owing to its SARP approach, the exp-SARP model retains less noise in the background and preserves more details than the TwL-4V model. In the case of M = 10, the DZ-TGV model provides a satisfactory result. However, when M is small (M = 5 and 3), it produces restored images that are overly smoothed, so the restored images of the DZ-TGV model do not capture any details. Moreover, we can see some artifacts with black and white dots, which are caused by the quadratic term in the DZ-TGV model. In [19], the authors showed that the data-fidelity term of the DZ-TGV model is mainly suitable for a large value of M, owing to the statistical properties of the Gamma distribution. Our experiments also confirm that the DZ-TGV model is not suitable when the noise level M is smaller than 10. If we increase the value of λ for the DZ-TGV model, we obtain a reconstructed image with many black and white dots, while avoiding over-smoothing and conserving details. Thus, when choosing the regularization parameter λ, one must decide between retaining many such black and white points and preserving more details. In these experiments, we choose λ so that the recovered images of the DZ-TGV model involve fewer black and white points and are slightly over-smoothed when M is smaller than 10. Comparing the SO-TGV model with our proposed model, we observe that the SO-TGV model retains noise in homogeneous regions, which is not removed sufficiently. For instance, in Figure 7, the denoising result of the SO-TGV model is less clear near the stem of the peppers than that of our model, and a lot of noise remains overall in its result. In addition, our model in Figure 8 shows more details of the boat, and the edges are restored more clearly.
In Figures 11-14, we test all five models on SAR images at all noise levels M = 10, 5, and 3. It can be observed that the exp-SARP model gives more successful results than the TwL-4V model since it preserves fine features better, but both models still generate undesirable staircasing artifacts. The restored images of the DZ-TGV model improve as M increases. We can confirm that our model provides the best denoising results: for SAR images, it also yields denoised images with well-smoothed homogeneous regions and conserved edges. Overall, the proposed model produces restored images with the most natural visual quality. In Tables 1, 2, and 3, we measure the quality of the restored images for the exp-SARP, TwL-4V, SO-TGV, and DZ-TGV models and our model, reporting the PSNR and SSIM values for M = 10, 5, and 3, respectively. Our proposed model attains the highest PSNR and SSIM values for all images and all noise levels. Although some PSNR values of the proposed and exp-SARP models differ little, Figures 8 and 10 show that the denoising results of our model have better visual quality than those of the exp-SARP model.
Lastly, in Figures 15 and 16, we compare the denoised images of our model and the SAR-BM3D model [50]. The SAR-BM3D model is known to achieve the best performance for multiplicative noise removal in terms of PSNR. Indeed, it preserves textural patterns and fine details well, but it also produces undesirable artifacts, especially in the presence of heavy multiplicative noise. In contrast, our model provides clean homogeneous regions and sharp edges, unlike the SAR-BM3D model, which yields unwanted artifacts in homogeneous regions and near edges. Thus, in some cases, such as the Girl image in Figure 15 and the last two SAR images in Figure 16, our model attains higher PSNR values. Overall, our model not only outperforms the existing TV- or TGV-based variational models but is also competitive with the SAR-BM3D model.
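For concreteness, the experimental setup above can be reproduced in outline: the degradation model (1) corrupts a clean image u with multiplicative noise η drawn from a Gamma distribution with shape M and scale 1/M (so that E[η] = 1 and Var[η] = 1/M, with smaller M meaning heavier noise), and restoration quality is scored by PSNR. The snippet below is a minimal sketch assuming 8-bit intensity range; it is not the authors' code.

```python
import numpy as np

def add_gamma_noise(u, M, seed=None):
    """Corrupt a clean image u with multiplicative Gamma noise of level M.

    eta ~ Gamma(shape=M, scale=1/M), so E[eta] = 1 and Var[eta] = 1/M:
    M = 3 is the heaviest noise level tested in the experiments above.
    """
    rng = np.random.default_rng(seed)
    eta = rng.gamma(shape=M, scale=1.0 / M, size=u.shape)
    return u * eta

def psnr(u, u_hat, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the clean image and a restoration."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u_hat, float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Example: a flat gray test image corrupted at noise level M = 10.
u = np.full((64, 64), 128.0)
f = add_gamma_noise(u, M=10, seed=0)
print(f"PSNR of noisy image: {psnr(u, f):.1f} dB")
```

Lowering M to 5 or 3 in this setup visibly increases the variance of f and drops the input PSNR accordingly, matching the difficulty ordering of the experiments.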

5. Conclusions. In this paper, we have proposed a new minimization model for restoring images degraded by heavy multiplicative Gamma noise. We exploited the convex data-fidelity term in [41] to properly handle the heavy multiplicative noise. Moreover, NTGV regularization and a SARP were utilized. The NTGV allowed us to adequately denoise smooth regions without the staircasing artifacts that appear
in TV-based models, while preserving edges and details. The SARP further assisted in keeping textures and small-scale features during the denoising process. We presented an automatic selection rule for the SARP by introducing a constrained model with local constraints associated with multiple local windows. To solve the nonconvex minimization problem, we employed the IRLA, and the ADMM was adopted to solve the subproblems induced by the IRLA. This led to an efficient iterative algorithm for solving the proposed model. We tested the proposed model and algorithm on several images, including real natural and SAR images. Numerical results showed the superiority of our model over the state-of-the-art models in terms of both visual quality and quantitative measures.
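The overall algorithmic structure described above, an iteratively reweighted ℓ1 outer loop whose inner convex subproblem is handled by ADMM, can be sketched generically as follows. The penalty derivative, the forward-difference gradient, and the inner solver are all placeholders: in the actual method the weights act on the TGV terms and the subproblem is a weighted convex TGV model solved by ADMM, neither of which is reproduced here.

```python
import numpy as np

def irl1_denoise(f, phi_prime, solve_weighted_subproblem, n_outer=10, eps=1e-3):
    """Generic iteratively reweighted l1 (IRL1) outer loop.

    phi_prime : derivative of a concave nonconvex penalty phi, e.g.
        phi(t) = log(1 + t/eps) gives phi_prime(t) = 1 / (eps + t),
        so large gradients (edges) receive small weights and are
        penalized less.
    solve_weighted_subproblem(f, u, w) : user-supplied solver for the
        weighted convex subproblem (in the paper, a weighted TGV model
        solved by ADMM); here it is an abstract placeholder.
    """
    u = f.copy()
    for _ in range(n_outer):
        # Forward-difference gradient magnitude of the current iterate.
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        grad_mag = np.sqrt(gx**2 + gy**2)
        # Reweighting step: linearize phi at the current iterate.
        w = phi_prime(grad_mag + eps)
        # Inner convex solve with the frozen weights.
        u = solve_weighted_subproblem(f, u, w)
    return u
```

Each outer iteration majorizes the nonconvex penalty by a weighted ℓ1 term, so the objective value is nonincreasing whenever the inner solver decreases the weighted subproblem, which is what makes the scheme efficient in practice.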