ITERATIVE CHOICE OF THE OPTIMAL REGULARIZATION PARAMETER IN TV IMAGE RESTORATION

Abstract. We present iterative methods for choosing the optimal regularization parameter for linear inverse problems with Total Variation regularization. The approach is based on the Morozov discrepancy principle, or on a damped version of this principle, together with an approximating model function for the data term. The theoretical convergence of the parameter-choice method is demonstrated, and the selected parameter is refined with a Newton method. The efficiency of the method is illustrated on deconvolution and super-resolution experiments on different types of images. Results are provided for different levels of blur, noise and loss of spatial resolution. The damped Morozov discrepancy principle often outperforms the approaches based on the classical Morozov principle and on the Unbiased Predictive Risk Estimator. Moreover, the proposed methods are fast schemes to select the best parameter for TV regularization.


1. Introduction. Linear inverse problems are encountered in many engineering applications and they play a major role in image processing. For example, image deconvolution and restoration problems are crucial in medical imaging or image and video coding. Such problems can be formulated as: (1) g δ = Af + n, where g δ represents the measured data, f the ideal image, n an additive noise corrupting the measurements, and A a linear transform such as a blurring kernel or the composition of a smoothing kernel and an undersampling operator. This problem is ill-posed and small changes in the data can have large effects on the solutions. To obtain stable solutions in the presence of noise, a well-known approach is to include some prior information and to minimize a regularization functional combining a data fidelity term and a regularization term R(f ) that imposes an a priori constraint on the solution. A regularization parameter balances the contribution of the two terms, and the effectiveness of the regularization approach depends on the choice of this parameter. A well-known method to control the noise is Tikhonov regularization [35] with R(f ) = ‖Df ‖ 2 , where D is a differential operator. More recently, Total Variation (TV) regularization [32,38] has been studied and applied to various image processing problems [41,6,27,30]. For an image f in the first-order Sobolev space H 1 (Ω), this regularization term is given by R T V (f ) = ∫ Ω |∇f | dx. This regularization is known to preserve edges well. Convergence rates for TV regularization have been obtained under additional source conditions, with quantitative estimates for the Bregman distance induced by the regularization functional [10]. Various numerical methods have been used to solve the TV regularized deconvolution problem [36,7]. Yet, extensive numerical experiments show that algorithms based on the Alternating Direction Method of Multipliers (ADMM) are the state-of-the-art methods [41,27,43,14,1,2].
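As a concrete illustration of the forward model (1), the following sketch builds a blurred, noisy observation g δ from a Gaussian kernel of standard deviation σ blur. This is a minimal NumPy example of our own; the function names and the periodic-boundary convention are illustrative assumptions, not part of the paper.

```python
import numpy as np

def gaussian_blur_fft(f, sigma_blur):
    """Apply a periodic Gaussian blur A (a convolution) via the FFT."""
    n1, n2 = f.shape
    ky = np.fft.fftfreq(n1)[:, None]
    kx = np.fft.fftfreq(n2)[None, :]
    # Fourier transform of a Gaussian kernel with unit DC gain.
    kernel_hat = np.exp(-2.0 * (np.pi * sigma_blur) ** 2 * (kx ** 2 + ky ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(f) * kernel_hat))

def degrade(f, sigma_blur, sigma_noise, seed=0):
    """Forward model (1): g_delta = A f + n with additive Gaussian noise n."""
    rng = np.random.default_rng(seed)
    return gaussian_blur_fft(f, sigma_blur) + sigma_noise * rng.standard_normal(f.shape)
```

Since the kernel has unit DC gain, blurring preserves the mean intensity of the image; the super-resolution case would additionally subsample the output by the factor p.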
In order to obtain a good reconstructed image, it is necessary to choose an optimal regularization parameter. A very common approach is to try various values until the reconstruction solution is satisfactory. For large scale problems, an exhaustive search with a grid of points of regularization parameters values is very slow. Several methods have been studied for the optimal selection of the regularization parameter for Tikhonov regularization. Methods like the discrepancy principle and the Unbiased Predictive Risk Estimator are based on a knowledge of the noise level δ [38,26,8,12,28,37,11,29,40,39]. The L-curve method and the Generalized Cross-Validation can be used without the knowledge of this noise level [17,19]. Moreover, an iterative choice of the regularization parameters based on the Morozov discrepancy principle applied with a model function has been proposed in [22].
For TV regularization, fewer methods have been investigated. For denoising problems, Aujol et al. [4] have proposed a method where the denoised result is close to the optimal one in the SNR sense. An extension of the Unbiased Predictive Risk Estimator to TV regularization has been shown to give a good estimate of the regularization parameter [25]. A method exploiting the Generalized Cross-Validation technique has been proposed in [24]. Babacan et al. [5,6] have studied a Bayesian variational method for solving TV deconvolution and super-resolution problems. A new approach based on a proximal point method applied to the dual formulation of the TV regularization problem and the discrepancy principle has been proposed recently [42].
The goal of this paper is to propose a fast and simple method to select the optimal regularization parameter for the restoration with TV regularization of an image f from a blurred, noisy image g δ with various sub-sampling rates. Our contribution in this field is to design an iterative method following the idea of the model function for Tikhonov regularization of Kunisch et al. [22]. The outline of this paper is the following. In Section 2, we present the linear inverse problem studied, the Morozov discrepancy principle, a damped version of this principle, and some convergence results associated to our method of choice of the regularization parameter and the ADMM/TV algorithm chosen to estimate the regularized solutions. In Section 3, the new method for the selection of the regularization parameter is presented. In Section 4, we detail the UPRE method that will be used for comparison. In Section 5, numerical results are given to demonstrate the effectiveness of the approach for deconvolution or super-resolution problems. Concluding remarks are given in the last section.

2. Problem formulation.
Let Ω be a bounded open domain with Lipschitz boundary. In the following, we consider a linear operator A : L 2 (Ω) → L 2 (Ω). We assume that A is injective and continuous. We consider the regularization functional J µ,g δ , balancing the data fidelity term and the TV semi-norm through a regularization parameter µ. Total Variation regularization has been extensively studied in [34,3]. The following theorem summarizes the main convergence and stability results.
Theorem 2.1. Let A be a linear and bounded operator A : L 2 (Ω) → L 2 (Ω), satisfying Aq ≠ 0 for every nonzero constant function q ∈ L 2 (Ω). The space L 2 (Ω) is assumed to be endowed with its weak topology. Then: -the TV semi-norm is convex and lower semicontinuous with respect to the weak topology on L 2 (Ω).
-consequently, the regularization with the TV semi-norm is well-defined, stable and convergent.
Let (g k ) be a sequence converging toward g δ , and let f k be a minimizer of the regularization functional J µ,g k . The stability of the regularization method means that every convergent subsequence of (f k ) converges to a minimizer of J µ,g δ . Let (δ k ) be a sequence of positive numbers converging to 0, and (g k ) a sequence such that ‖g − g k ‖ ≤ δ k . The TV regularization method is convergent: for a good choice of the parameter µ(δ k ), the sequence (f k ) has a convergent subsequence, and the limit is a minimal norm solution [34].

2.2.
Morozov and modified Morozov principle, basic properties. In the following, we denote by f (µ) the regularized solution obtained with the regularization parameter µ. The noise free data g is assumed to be corrupted by noise in such a way that ‖g δ − g‖ ≤ δ, where δ is the noise level. The unique solution of Af = g will be denoted f * . With the Morozov discrepancy principle [13], the regularization parameter is chosen such that the discrepancy term is equal to the known noise level δ on the observed data, so that the following equation for µ holds: ‖g δ − Af (µ)‖ = δ. In some cases, the use of the Morozov discrepancy principle may not lead to an optimal regularization parameter. As suggested in [22,23], a damped Morozov equation may be more satisfactory. We have thus considered the damped equation ‖g δ − Af (µ)‖ 2 + µ γ R T V (f (µ)) = δ 2 , where the exponent γ has to be chosen. We first study the range of the functions P (f (µ)) = ‖g δ − Af (µ)‖ 2 and P γ (f (µ)) = ‖g δ − Af (µ)‖ 2 + µ γ R T V (f (µ)). We define η = inf f ‖Af − g δ ‖, and ν as the minimum of ‖Af − g δ ‖ over the functions with R T V (f ) = 0; we denote by f 0 the function for which the minimum ν is attained. We are first interested in some basic properties of P γ and P . The proofs are similar to the ones given in Kunisch et al. [23], but some changes are necessary because of the use of the TV regularization term.
The same results hold for P .
Proof. The minimum of J µ (f ) is attained at f (µ) and R T V (f 0 ) = 0, thus: As the TV semi-norm is equivalent to the BV norm for functions with zero average value [34], this result implies ‖f (µ) − ⟨f ⟩‖ BV → 0 as µ → 0. Therefore f (µ) → ⟨f ⟩ in L 2 (Ω), since BV is continuously embedded in L 2 (Ω) [34]. The operator A is continuous on L 2 (Ω), and thus Af (µ) → A⟨f ⟩. For ε > 0, we consider f ε such that ‖Af ε − g δ ‖ ≤ η + ε.
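To make the two principles concrete, the following sketch evaluates P and P γ and solves the classical Morozov equation by bisection on a toy problem with a diagonal operator and a quadratic regularizer standing in for the TV term. All names, and the convention that µ weights the data term, are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def f_mu(mu, a, g):
    """Minimizer of mu*||A f - g||^2 + ||f||^2 for a diagonal operator A = diag(a)."""
    return mu * a * g / (mu * a ** 2 + 1.0)

def discrepancy(mu, a, g):
    """P(f(mu)) = ||g - A f(mu)||^2, a decreasing function of mu."""
    return np.sum((g - a * f_mu(mu, a, g)) ** 2)

def damped_discrepancy(mu, a, g, gamma):
    """P_gamma(f(mu)) = ||g - A f(mu)||^2 + mu^gamma * R(f(mu))."""
    return discrepancy(mu, a, g) + mu ** gamma * np.sum(f_mu(mu, a, g) ** 2)

def morozov_bisection(a, g, delta2, lo=1e-6, hi=1e6, iters=200):
    """Solve P(f(mu)) = delta^2 for mu by bisection on a log scale."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)              # geometric mean: mu spans decades
        if discrepancy(mid, a, g) > delta2:
            lo = mid                        # too much regularization: increase mu
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

The bisection exploits the monotonicity of P in µ; for the TV problem, each evaluation of the discrepancy would require one run of the ADMM solver.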

2.3. Convergence results.
It is well known that for ill-posed operator equations, convergence rates of regularized solutions require additional properties of the true solution f * of the equation Af = g, such as source conditions. Convergence rates can be obtained in terms of the Bregman distance between the true solution and the regularized solution [10,31,34]. The Bregman distance is a measure of the difference between two elements in a Banach space [21,9], and its use has become a standard tool for error measure. For an element ξ ∈ ∂R T V (v), where ∂R T V (v) denotes the subdifferential of TV at v, the Bregman distance with respect to TV between u and v at ξ is defined by D ξ T V (u, v) = R T V (u) − R T V (v) − ⟨ξ, u − v⟩. In the following, we assume that the standard source condition holds, i.e. that there exists τ ∈ L 2 (Ω) such that A * τ ∈ ∂R T V (f * ). Some convergence results have been obtained for a priori choices of the regularization parameter [34,18], given by the following proposition: Assume that the former source condition holds. Then for the choice 1/µ ∼ δ, the Bregman distance between the regularized solution and the ground truth is D ξ T V (f (µ), f * ) = O(δ). This control of the error in terms of the Bregman divergence was extended to an ℓ 2 control in the discrete setting [15]. We present here some convergence rate results as the error level δ → 0. We develop an error estimate when the regularization parameter µ(g δ , δ) is chosen according to the modified Morozov principle (6). We first show that the TV norm of f (µ) is uniformly bounded if the modified Morozov principle is satisfied. Proof. From the definition of f (µ), we have: Combining the modified Morozov principle and (11), we obtain: Proposition 2.5.
If the regularization parameter is chosen according to the modified Morozov principle, the Bregman distance between the true solution and the regularized solution is given by From the proof of Proposition 3.41 in [34], it can be shown that if assumption (10) holds, then there exist β 1 ∈ [0, 1[, β 2 ≥ 0 and ξ ∈ ∂R T V (f * ) such that: With the choice of µ(g δ , δ) according to the modified Morozov principle: and thus (12), it follows that: 2.4. The ADMM approach. The iterative method developed in this paper to select the regularization parameter is independent of the algorithm used to minimize the regularization functional. Many approaches have been proposed to solve the TV regularization problem, and the recent ADMM methodology is the state-of-the-art method to obtain a stable solution with TV regularization [27,41]. This type of algorithm has attracted much attention and has also been applied to a number of image processing tasks, such as image inpainting and deblurring. The convergence rates of the Augmented Lagrangian method when Morozov's discrepancy principle is chosen as the parameter selection method have been investigated in [16]. For discrete data, in the framework of the ADMM method, an augmented Lagrangian functional is introduced, where µ is the regularization parameter, β the Lagrangian parameter, and ‖·‖ 2 is the Euclidean norm in R 2 . For the ith pixel at the location (ix, iy), the first-order finite difference operator is D i (f ) = (D x i (f ), D y i (f )), where the horizontal difference operator is given by D x i (f ) = f (ix + 1, iy) − f (ix, iy), and a similar definition holds for the vertical difference operator D y i (f ). The augmented Lagrangian depends on the auxiliary unknown g i = (g 1 i , g 2 i ) ∈ R 2 and on the dual variable λ i ∈ R 2 associated to the constraint g i = D i (f ).
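The whole ADMM loop can be sketched for the simplest case A = I (pure TV denoising). This is our own minimal NumPy illustration with periodic boundary conditions, not the paper's code; the f-update solves its linear system in closed form with the FFT.

```python
import numpy as np

def grad(f):
    """Forward differences with periodic boundaries: D f = (D_x f, D_y f)."""
    return np.stack([np.roll(f, -1, 0) - f, np.roll(f, -1, 1) - f])

def div(p):
    """Divergence, the negative adjoint of grad: D^t p = -div(p)."""
    return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

def shrink(p, t):
    """Pixelwise soft-thresholding of a 2-vector field (the g_i update)."""
    norm = np.maximum(np.sqrt(p[0] ** 2 + p[1] ** 2), 1e-12)
    return p * (np.maximum(norm - t, 0.0) / norm)

def admm_tv_denoise(g, mu, beta=2.0, iters=200):
    """ADMM for (mu/2)*||f - g||^2 + sum_i |D_i f|  (denoising case A = I)."""
    n1, n2 = g.shape
    ky = np.fft.fftfreq(n1)[:, None]
    kx = np.fft.fftfreq(n2)[None, :]
    # Eigenvalues of D^t D under periodic boundary conditions.
    dtd = 4 * np.sin(np.pi * ky) ** 2 + 4 * np.sin(np.pi * kx) ** 2
    f = g.copy()
    lam = np.zeros((2, n1, n2))
    for _ in range(iters):
        z = shrink(grad(f) + lam / beta, 1.0 / beta)   # auxiliary variable update
        rhs = mu * g - div(beta * z - lam)             # D^t q = -div(q)
        f = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + beta * dtd)))
        lam = lam + beta * (grad(f) - z)               # dual ascent step
    return f
```

For a blurring operator A, the f-update would instead solve (µ A t A + β D t D) f = µ A t g δ + D t (β z − λ), which remains FFT-diagonalizable for periodic convolutions.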
The ADMM algorithm searches for the saddle point of the augmented Lagrangian, which is obtained by iterating the following equations. With the alternating minimization algorithm, three sequences (f k , {g k i }, {λ k i }) are constructed with the following iterative scheme: For each pixel i, g i is updated with: The new iterate f k+1 is obtained from the following linear system: The Lagrange multipliers are updated with λ k+1 . The convergence of the algorithm is summarized by the following theorem [27,41,16]: the sequence of iterates converges to the solution of the problem, which corresponds to a saddle point of the Lagrangian. With the augmented Lagrangian given in (17), the Karush-Kuhn-Tucker (KKT) conditions can be written as: 3. Proposed Morozov based methods to choose the regularization parameter. In this section, we detail two numerical schemes to choose the regularization parameter. We assume that f (µ) is a minimum R T V norm solution and that it is unique. We first show, under restrictive conditions, the differentiability of the function f and of the value function E around µ, which are the functional properties on which the iterative scheme relies.
3.1. Lipschitz continuity and differentiability of f (µ). We consider here the function f : R → R N of the regularization parameter. The results in this section are formulated in a discrete setting. They can be generalized to the continuous setting if the subdifferential of the discrete TV semi-norm, D t i (∂‖D i f (µ)‖), is replaced by its continuous version. Proof. We first show that f is Lipschitz continuous around µ. Subtracting the KKT conditions for µ + t and µ yields: Taking the inner product with f (µ + t) − f (µ), and using the Schwarz inequality, the boundedness of f and the monotonicity of the subdifferential, we obtain (25), where C 1 is a positive constant and α min is the minimal singular value of A t A; thus f is Lipschitz continuous around µ.
The regularization functional for the regularization parameter µ + t is minimized by f (µ + t), and thus: Hence, there are constants C 2 , C 3 such that: where we have used (24).
From (25), we can deduce that, for all i, as t → 0, D i f (µ + t) = 0 if and only if D i f (µ) = 0, and that the components of D i f (µ) and D i f (µ + t) have the same sign. Therefore, as t → 0, there is a positive constant C 4 such that Let BV (Ω) ♦ be the set of all functions of bounded variation with zero mean; on BV (Ω) ♦ the TV semi-norm and the BV norm are equivalent [34]: This property also holds for the discrete TV semi-norm. Let us denote ∆f = f (µ + t) − f (µ), and M : L 1 (Ω) → L 1 (Ω) the averaging operator. The operator M is linear and continuous; in a continuous setting, it is defined by M f = (1/|Ω|)(∫ Ω f dx) 1 Ω , where 1 Ω is the indicator function of the set Ω. Using the operator M we have: Under the assumption that the operator A is injective on the set of constant functions, we obtain that there is a positive constant C 5 such that: Using (29) and (32), we get: We now assume that the operator A commutes with M . It can easily be shown that this is the case for the convolution operator investigated in the following.
Using the continuity of M , (24) and (28), we thus obtain that, as t → 0, there is a positive constant D such that: We now demonstrate that, under a restricted injectivity condition, the regularized solution is a differentiable function of the regularization parameter.
Proposition 3.2. Let us assume there exists a constant E > 0 such that the operator A is injective on the subset C of BV (Ω), defined by: Then, the regularized solution f is differentiable around the regularization parameter µ.
Proof. We show that the function f is differentiable around µ and that its derivative is the unique solution ω of (36). We assume that the operator A is injective on the subset C. Let ε = (f (µ + t) − f (µ)) − tω. Dividing (23) by t and subtracting (36), we obtain: The optimality condition shows that as t → 0, f (µ + t) belongs to the set C, and this is also the case for ε. By the Schwarz inequality: (38) where α min is the minimal singular value of A t A restricted to C. When t → 0, γ i → 0, since D i f (µ)/‖D i f (µ)‖ and D i f (µ + t)/‖D i f (µ + t)‖ are equal for all i as t → 0. Therefore ε → 0 and f is thus differentiable around µ.

3.2.
Differentiability of the value function E(µ). In the following, we denote by E(µ) the value obtained at the saddle point. Its second derivative satisfies E ″ (µ) = ⟨Af (µ) − g δ , Af ′ (µ)⟩, and E ′ (µ) is a decreasing function.
Proof. The differentiability of E follows from its definition and from the differentiability of f (µ). Classical differentiation rules give: Taking into account the optimality condition, the first derivative of E(µ) is given by: Thus, the second derivative of E(µ) is given by: Using equation (36), E ″ (µ) can be rewritten: We obtain that E ″ (µ) ≤ 0. The function E ′ (µ) is decreasing, and the value function E(µ) is monotonically increasing and concave.

3.3.
Iterative schemes for the proposed Morozov based regularization parameter selection methods. We present in this section the iterative schemes to choose the regularization parameter according to the Morozov or to the damped Morozov principle.

3.3.1.
Iterative scheme for the Morozov principle. In the framework of the Morozov discrepancy principle, the regularization parameter is chosen such that the discrepancy term is equal to the noise level δ, that is ‖g δ − Af (µ)‖ = δ. This condition can be written 2E ′ (µ) = δ 2 . In order to determine the regularization parameter and to solve the Morozov equation, we have used a two-parameter model function.
The approximate value of µ obtained is then refined with a Newton method. Following the model function methodology in [22], a model function can be calculated that preserves the main properties of E ′ (µ).
Proposition 3.4. The discrepancy term E ′ (µ) can be approximated by the decreasing model function m(µ) = a µ −2 + b. Proof. Upon differentiating the optimality condition with respect to µ, we obtain: Taking the inner product of this equation with f (µ) and neglecting the second member, we obtain: This equation can be rewritten: If the right member is assumed constant, using (39) and (40) we get: Solving this differential equation, we obtain a decreasing and convex model function depending on the two real parameters a and b, m(µ) = a µ −2 + b, which describes the main properties of the function E ′ (µ).
As expected, this model function is decreasing and the value function E(µ) is concave. The model function yields an approximate value, µ approx , of the regularization parameter, obtained from the equation δ 2 = 2(a (µ approx ) −2 + b). In a second step, we define the function G(µ) = 2E ′ (µ) − δ 2 . In order to solve G(µ) = 0, given two initial guesses µ 0 and µ 1 close to µ approx , a sequence of iterates (µ k ) k≥0 is generated for k ≥ 1 by the secant-type iteration: (49) µ k+1 = µ k − G(µ k )(µ k − µ k−1 )/(G(µ k ) − G(µ k−1 )).
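The two-step procedure (fit of the model function from two evaluations, then refinement with the two-point iteration) can be sketched on a small synthetic problem. Here E ′ (µ) is a Tikhonov-type surrogate for the true TV discrepancy term, and all variable names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
a_diag = rng.uniform(0.5, 1.0, 50)            # toy diagonal operator A
f_true = rng.standard_normal(50)
noise = 0.1 * rng.standard_normal(50)
g_obs = a_diag * f_true + noise
delta2 = np.sum(noise ** 2)                   # squared noise level delta^2

def Eprime(mu):
    """Surrogate discrepancy E'(mu) = (1/2)*||A f(mu) - g||^2 (Tikhonov toy)."""
    f = mu * a_diag * g_obs / (mu * a_diag ** 2 + 1.0)
    return 0.5 * np.sum((a_diag * f - g_obs) ** 2)

def fit_model(mu1, mu2):
    """Fit the model function E'(mu) ~ a*mu**-2 + b from two evaluations."""
    e1, e2 = Eprime(mu1), Eprime(mu2)
    a = (e1 - e2) / (mu1 ** -2 - mu2 ** -2)
    b = e1 - a * mu1 ** -2
    return a, b

def mu_from_model(a, b, delta2):
    """Solve delta^2 = 2*(a*mu**-2 + b) for the approximate parameter."""
    return np.sqrt(2.0 * a / (delta2 - 2.0 * b))

def secant_refine(mu0, mu1, delta2, iters=100):
    """Refine the root of G(mu) = 2*E'(mu) - delta^2 with secant steps (49)."""
    G = lambda mu: 2.0 * Eprime(mu) - delta2
    for _ in range(iters):
        g0, g1 = G(mu0), G(mu1)
        if abs(g1 - g0) < 1e-30 or abs(g1) < 1e-10 * delta2:
            break
        mu0, mu1 = mu1, mu1 - g1 * (mu1 - mu0) / (g1 - g0)
    return mu1
```

In the TV setting each evaluation of E ′ (µ) costs one ADMM reconstruction, which is why a good µ approx from the model function keeps the number of refinement steps small.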

3.3.2.
Iterative scheme for the damped Morozov principle. With the damped Morozov principle [22,23], we have to consider the modified equation ‖g δ − Af (µ)‖ 2 + µ γ R T V (f (µ)) = δ 2 for a parameter γ. The first step of the iterative scheme is the same. For the Newton refinement step, we redefine the function G(µ) as G(µ) = ‖g δ − Af (µ)‖ 2 + µ γ R T V (f (µ)) − δ 2 . From an approximate value calculated with the model function of the classical Morozov principle, the optimal regularization parameter can be obtained with the Newton method by defining a sequence (µ k ) k≥0 , given µ 0 and µ 1 , two initial guesses close to µ approx , and applying the analogous secant iteration (52) with this new function G. Taking into account the differentiability of the value function E(µ) for a smoothed version of the gradient, it is possible to demonstrate that G(µ) is decreasing for some values of µ and γ: Proposition 3.5. The function µ → G(µ) is strictly decreasing for γ = −1 and µ > 0, and for γ < −1 and µ > 2 1/(γ+1) .
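Proposition 3.5 can be checked numerically on the same kind of toy problem. The sketch below is our illustration, with a quadratic surrogate in place of R T V ; it evaluates the damped function G(µ) for γ = −1 and verifies that it is strictly decreasing.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.5, 1.0, 40)                 # toy diagonal operator
g = a * rng.standard_normal(40) + 0.1 * rng.standard_normal(40)

def G_damped(mu, gamma, delta2):
    """Damped function G(mu) = ||g - A f(mu)||^2 + mu^gamma * R(f(mu)) - delta^2,
    with a Tikhonov surrogate f(mu) and R(f) = ||f||^2 standing in for R_TV."""
    f = mu * a * g / (mu * a ** 2 + 1.0)
    return np.sum((g - a * f) ** 2) + mu ** gamma * np.sum(f ** 2) - delta2
```

For γ = −1, each component of this surrogate reduces to g i 2 /(1 + µ a i 2 ), which is strictly decreasing in µ, in agreement with the proposition.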
This result implies that, if it exists, the solution to G(µ) = 0 that solves the damped Morozov principle is unique.

4. UPRE method for Total Variation parameter selection.
In this section, we briefly summarize the Unbiased Predictive Risk Estimator (UPRE) method for TV regularization, which will be used for comparison with our iterative methods. The UPRE approach has been studied in detail for optimal parameter selection for Tikhonov problems [38,8,12,28,37]. The predictive error for the parameter µ is defined as p(µ) = Af (µ) − Af * . In the framework of the UPRE method, the optimal parameter µ is the minimizer of the predictive risk ‖p(µ)‖ 2 /N 2 , where N 2 is the total number of pixels in the image. Recently, the UPRE method has been extended to TV regularization and to large-scale problems [25]. As detailed in [25], the optimal parameter is the minimizer of the Unbiased Predictive Risk Estimator, which has the same functional form as the one for Tikhonov regularization: Theorem 4.1. The Unbiased Predictive Risk Estimator of the predictive risk for TV regularization is given by: where σ 2 is the noise variance and tr(.) denotes the trace operator. The influence matrix C T V (µ) in the TV problem can be written as: where the matrix L(f (µ)) is given by: In order to avoid divergences in the numerical tests, the norm of the gradient of the image at pixel i is approximated by ((D x i (f )) 2 + (D y i (f )) 2 + ε) 1/2 , where ε is a smoothing constant.
For large-scale problems, the Singular Value Decomposition of A is not available and it is necessary to approximate the trace value tr(C T V (µ)) with an algorithm requiring only matrix-vector products. With the approach of Hutchinson [20], the trace of a smooth function F (M ) of a symmetric positive definite matrix M can be evaluated with the unbiased trace estimator E(u t F (M )u) = tr(F (M )), where u is a white noise vector with standard deviation σ u = 1 and E denotes the expected value. The choice of u that minimizes the variance of this estimator is the one for which the components of the vector are independent and take the values 1 and −1 with equal probability [38]. The expected value is evaluated with a number n u of realizations of the random vector. In order to approximate the trace of C T V (µ), we have to solve the linear system (µA t A + L(f (µ))) y = A t u for each vector u, which can be done with a conjugate gradient algorithm. In this work, in order to obtain the best reconstruction results, UPRE(N iter , µ) was calculated as a function of the number of iterates N iter and of the regularization parameter µ.
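The Hutchinson estimator itself is easy to sketch. The helper below is our illustration: it takes an arbitrary matrix-vector product, and in the setting above `matvec` would apply C T V (µ) by solving the linear system with conjugate gradients.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_u=100, seed=0):
    """Estimate tr(M) by averaging u^t M u over Rademacher vectors u."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_u):
        u = rng.choice([-1.0, 1.0], size=n)   # +/-1 entries minimize the variance
        total += u @ matvec(u)
    return total / n_u
```

Note that for a diagonal matrix the estimator is exact for every sample, since u i 2 = 1; the averaging over n u realizations is only needed to suppress the off-diagonal contributions of a general matrix.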

5. Numerical experiments.
In order to show the effectiveness of our method for choosing the regularization parameter, we have considered two test problems: deconvolution and super-resolution. In the deconvolution case, the test image is blurred with a Gaussian kernel of standard deviation σ blur and corrupted by additive Gaussian noise of standard deviation σ. In the super-resolution case, after the blur and the noise are applied, the image is also undersampled by a factor p, increasing the ill-posedness of the problem. 5.1. Practical implementation of the classical and damped Morozov methods. We start by finding the optimal regularization parameter that maximizes the PSNR of the resulting images, defined as PSNR = 10 log 10 (255 2 N 2 /‖f * − f (µ)‖ 2 ), where f * and f (µ) represent the ground-truth image and the reconstructed image for the regularization parameter µ respectively, and N 2 is the total number of pixels. Thus, an extensive sweep of the regularization parameter values is performed. The deteriorated image is restored with the ADMM algorithm for each value of the regularization parameter µ, and the ADMM iterations are stopped when the relative change between successive iterates falls below the tolerance stop ADMM . In order to apply the classical and damped Morozov schemes described in section 3.3, we need to determine approximate values of the parameters a and b of the model function for E ′ (µ), by first choosing two values µ * and µ * * with µ * < µ * * . For each of the regularization parameters µ * and µ * * , the blurred noisy image is restored with the ADMM algorithm and the values E ′ (µ * ) and E ′ (µ * * ) are calculated. If 2E ′ (µ * ) ≥ δ 2 , µ * is not changed, otherwise it is decreased. If 2E ′ (µ * * ) ≤ δ 2 , µ * * is not changed, otherwise it is increased. An approximate value of the regularization parameter, µ approx , is obtained from the equation δ 2 = 2(a(µ approx ) −2 + b), where the noise level δ is evaluated from the standard deviation σ as δ 2 = N 2 σ 2 . Starting from this approximate solution, the Newton method is then applied to refine the solution.
Considering µ 0 = µ approx + 0.1 and µ 1 = µ approx , the iterative formula given by (49) or (52) is applied for the classical or damped Morozov principle. For each iteration of the Newton method, the ADMM algorithm is applied to obtain the solution f (µ) with the same value stop ADMM . The Newton iterations are stopped when |(‖Af (µ k ) − g δ ‖ 2 − δ 2 )/δ 2 | < stop Newton .
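For reference, the PSNR used to rank the reconstructions can be computed as follows. This is a minimal sketch assuming the standard definition with peak intensity 255; the paper's exact normalization may differ.

```python
import numpy as np

def psnr(f_star, f_rec, peak=255.0):
    """PSNR in dB between a ground-truth image and a reconstruction."""
    mse = np.mean((f_star - f_rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A grid sweep then simply evaluates psnr(f_star, restore(mu)) over a list of µ values and keeps the maximizer, which is the exhaustive baseline the iterative schemes are designed to avoid.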

5.2. Deconvolution problem.
The test image f ∈ R N 2 (N=330) to be reconstructed is displayed in Figure 1.(a). It is a slice of a bone synchrotron micro-CT sample at a voxel size of 20 µm [33]. Trabecular bone is organized in a complex network made of struts of about 100 µm (white patterns in the image), requiring images at high spatial resolution that are still difficult to acquire in-vivo.
We first consider the application of the methods to the noisy and blurred image shown in Figure 1.(b), where σ blur = 1.2 and σ = 4.3. The noise level δ is evaluated as δ 2 = N 2 σ 2 = 2 · 10 6 . The noisy blurred image has a PSNR of 24.89. The evolution of the PSNR of the image restored with the ADMM algorithm as a function of µ is displayed in Figure 2.(b). From this curve, the optimal value of the regularization parameter is µ OPT = 0.7 and the optimal PSNR of the reconstructed image is PSNR(µ OPT ) = 32.32. A value stop ADMM = 5 · 10 −5 was considered to ensure the convergence of the regularization method.
We applied the Morozov principles for selecting the regularization parameter as described in section 5.1. With µ * = 0.1 and µ * * = 10, the values obtained with (46) for a and b are 5.36 · 10 3 and 7.26 · 10 5 respectively. The approximate value of the regularization parameter was found to be µ approx = 0.14. By applying the Newton scheme for the classical Morozov principle, we observed that the convergence was fast and only four iterations were necessary. A value stop Newton = 0.001 was considered. The final value of the regularization parameter was found to be µ M = 0.34. The final image obtained with µ M is displayed in Figure 1.(c) and its PSNR value is reported in Table 1. For comparison, the UPRE method was also implemented for selecting the regularization parameter. The trace of the influence matrix was calculated with 50 white noise vectors u. The smallest value of the smoothing parameter ε that gives a positive UPRE curve was chosen. The UPRE curves are rather noisy, thus the mean and the standard deviation of µ and of the PSNR were estimated from five UPRE curves for the chosen ε. The evolution of UPRE(µ) for the smoothing parameter value ε = 0.05 is displayed in Figure 2.(d) for stop ADMM = 5 · 10 −5 . The UPRE(N iter , µ) was also calculated as a function of the number of ADMM iterations N iter for different µ values. Figure 3 shows the evolution of UPRE as a function of N iter and µ. The best mean regularization parameter obtained is µ UPRE = 1.44 with a PSNR of 31.73, and the image obtained is displayed in Figure 1.(d).
In order to have a more extensive comparison of the parameter selection methods, we have tested several noise and blur levels. The values of µ M , µ DM with γ = −1, µ UPRE , µ OPT and the corresponding PSNR values are summarized in Table 1. For low blur, the classical Morozov principle is better than the damped one. On the contrary, for a high blur level, the damped Morozov principle improves the results obtained with the classical Morozov principle. The UPRE method is efficient except for the highest noise level.

5.3.
Super-resolution problem. In this section, we investigate a more ill-posed problem: super-resolution. The standard deviation of the Gaussian blurring kernel was σ blur = 4.85 and a sub-sampling rate of p = 4 was applied to the images. Additional Gaussian noise with standard deviation σ was added to the low-resolution image. The aim of the regularization approach is to estimate the high-resolution image. Simulations were performed with several noise levels. To illustrate the method, we first detail the results obtained with σ = 6. Figures 4.(a) and 4.(b) display the high-resolution and low-resolution images. Figure 5.(b) displays the evolution of the PSNR for σ = 6. The best PSNR is 19.57, and the optimal regularization parameter is µ OPT = 6.2. The regularization parameter estimated with the classical Morozov principle is µ M = 2.22, yielding a PSNR of 19.07, far from the optimal value. The evolution of the data term for this case is displayed in Figure 5.(a). When applying the damped Morozov method with γ = −1, the obtained regularization parameter is µ DM = 6.2. The evolution of the data term weighted with the TV norm of the image f is shown in Figure 5.(c) as a function of the regularization parameter. The corresponding PSNR value is 19.57, which is very close to the optimal one. We thus obtain a very good estimation of the optimal regularization parameter.
For comparison, the evolution of the UPRE(µ) term for σ = 6 with ε = 0.05 is displayed in Figure 5.(d). The regularization parameter obtained is µ UPRE = 13.48 with a PSNR of 19.14, slightly lower than with the damped Morozov method. The images obtained with these two methods are displayed in Figures 4.(c)-(d). Table 2 summarizes the values of the regularization parameters and the PSNR obtained for different noise levels. For all these noise levels, the UPRE method gives a better estimation of the optimal regularization parameter than the classical Morozov method. Yet, the damped Morozov method is more effective than the UPRE approach for this super-resolution problem.

5.4.
Super-resolution test problem for other images. To assess the potential of the proposed method in other domains of application, we made additional experiments on the usual Cameraman (N=512) and Lena (N=512) images. The simulation parameters of the super-resolution problem were: p = 4, σ blur = 4.85 and two noise levels, σ = 4.3 and σ = 6.
The results are summarized in Table 3 and Table 4, and the resulting images are displayed in Figures 6-9. In most cases, the UPRE method provides better results than the classical Morozov principle. [Table 3: Comparison of regularization parameter selection methods for the super-resolution case - Cameraman image.]

6. Conclusion. We have presented iterative methods to select the regularization parameter for TV image restoration. A model function approximating the data term of the regularization functional is first obtained from the reconstructed images with two different regularization parameters. An approximate value of the regularization parameter is calculated with this model function approximating the discrepancy term. Then the solution is refined with a Newton iterative method on the basis of the Morozov principle or its damped version. This new approach has been compared with the one based on the UPRE. The methods have been compared for deconvolution and super-resolution test problems, for various noise and blur levels and for different images. For the deconvolution case, the damped Morozov principle is the most efficient method for the highest blur level; otherwise, the classical Morozov principle offers the best estimation of the optimal regularization parameter. For the super-resolution case, the UPRE method outperforms the classical Morozov method. Yet, a more accurate estimation of the optimal regularization parameter is obtained with the damped Morozov principle, where the data term is weighted by a TV term. The iterative schemes based on the new model function presented in this work are very fast and avoid an extensive sweep of the values of the regularization parameter.