Iterated quasi-reversibility method applied to elliptic and parabolic data completion problems

We study the iterated quasi-reversibility method to regularize ill-posed elliptic and parabolic problems: data completion problems for Poisson's equation and the heat equation. We define an abstract setting to treat both equations at once. We prove the convergence of the regularized solution to the exact one, and propose a strategy to deal with noise on the data. We present numerical experiments for both problems: a two-dimensional corrosion detection problem and the one-dimensional heat equation with lateral data. In both cases, the method proves efficient even with highly corrupted data.

The data completion problem is: Problem. For f, g_D and g_N in L²(Ω) × L²(Γ) × L²(Γ), find u ∈ H¹(Ω) such that
−∇ · (σ∇u) = f in Ω, u = g_D on Γ, σ∇u · ν = g_N on Γ.
This problem is well known to be ill-posed (see [1,2] and the references therein): it does not necessarily admit a solution for every data (f, g_D, g_N), and if a solution exists, it does not depend continuously on the data. On the other hand, if the problem admits a solution u_s, this solution is necessarily unique (see e.g. [1,4]).
Such a problem is encountered in many practical applications, among others in plasma physics [5,6] and in corrosion detection problems [8,9,7,11,10]. We will be particularly interested in the corrosion detection problem: there, u is the electrical potential inside a conductive object Ω, σ is the conductivity of the object, g_N represents a current imposed on Γ, the accessible part of the boundary of Ω, and g_D is the corresponding potential measured on Γ. The aim is to determine whether some portion of the inaccessible part of the boundary Γ_c is corroded.
Mathematically, there exists a non-negative function µ defined on Γ_c such that σ∇u · ν + µu = 0 on Γ_c, and the objective is to reconstruct µ: µ = 0 on the healthy part of Γ_c, and µ > 0 on the corroded part. In section 6.1, we test our method on this problem.
The data completion problem is known to be severely, even exponentially, ill-posed [2]. Therefore one needs regularization methods to try to reconstruct u. Several methods have been proposed to stabilize the problem: see, e.g., [12,13,14,15,16,17] and the references therein.
We are also interested in the data completion problem for the heat equation, which is quite similar to the elliptic one, except that this time u solves a parabolic equation. Such inverse problems appear naturally in thermal imaging [33] and inverse obstacle problems [34,35]. For T > 0, we define Q := (0, T) × Ω. Let f be in L²(Q), g_D and g_N in L²(0, T; L²(Γ)). The data completion problem is then Problem. Find u ∈ H^{1,1}(Q) := L²(0, T; H¹(Ω)) ∩ H¹(0, T; L²(Ω)) such that
∂_t u − ∆u = f in Q, u = g_D on (0, T) × Γ, ∇u · ν = g_N on (0, T) × Γ.
This parabolic data completion problem is also severely ill-posed (see e.g. [18]). Note that it is not mandatory to impose an initial condition u(0, ·) on Ω to obtain the uniqueness of the solution (if such a solution exists). Again, regularization methods are needed to obtain a stable reconstruction of u from the data f, g_D and g_N.
The quasi-reversibility method is such a regularization method, introduced in the pioneering work of Lattès and Lions [19] to regularize elliptic, parabolic (and even hyperbolic) data completion problems. The main idea of the method is to approximate the ill-posed data completion problem by a family of well-posed variational problems of higher order (typically fourth-order problems) depending on a (small) parameter ε. The solution of the regularized problem converges to the solution of the data completion problem as the parameter ε goes to zero. The quasi-reversibility method presents interesting features: first of all, the variational problems appearing in the method are naturally discretized using finite element methods, so the method can be used in complicated geometries, an interesting property when the method is used in an iterative algorithm with a changing domain. Furthermore, the method is independent of the dimension. Since its introduction, the quasi-reversibility method has been successfully used to reconstruct the solution of elliptic [20,21,23,24,25] and parabolic [29,30] ill-posed problems, and as a keystone in the resolution of inverse obstacle problems in the exterior approach [26,27,28].
In the present paper, we are interested in a natural extension of the quasi-reversibility method, the iterated quasi-reversibility method: it consists in solving quasi-reversibility problems iteratively, the solution of each one depending on the solution of the previous one. We therefore obtain a sequence of quasi-reversibility solutions, which converges to the exact solution of the data completion problem if exact data are provided, for any choice of the regularization parameter ε. This has interesting consequences from a numerical point of view: first of all, one can now choose a large value of the regularization parameter ε, leading to an improvement in the conditioning of the finite element problems, without lowering the quality of the reconstruction. This is not the case for the standard quasi-reversibility method, for which a small ε is mandatory to obtain a good reconstruction. Furthermore, in the presence of noisy data, we present a method to choose when to stop the iterations according to the amplitude of noise on the data, based on the Morozov discrepancy principle, which ensures both stability and convergence of the method. The main drawback of this extension, compared to the standard quasi-reversibility method, is that several problems have to be solved to obtain a good reconstruction. However, as the same variational problem appears in each iteration of the method, one can precompute a factorization of the finite element matrix. Hence, the cost of the method is not significantly higher.
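In discrete form, each iteration solves a linear system with the same symmetric positive definite matrix, so a single factorization can be reused across all iterations. The following Python sketch illustrates this point; the toy matrices, the function name and the use of numpy are our own illustration and not part of the paper.

```python
import numpy as np

def iterated_qr(A, B, y, eps, n_iter):
    """Discrete sketch of the iterated quasi-reversibility method.

    The matrix A^T A + eps*B is the same at every iteration, so its
    Cholesky factorization is computed once; each extra iteration then
    costs only two triangular solves. A and B are toy stand-ins for the
    discretized operator and bilinear form b.
    """
    L = np.linalg.cholesky(A.T @ A + eps * B)   # factor once, reuse below

    def solve(rhs):
        # forward then backward substitution with the Cholesky factor
        return np.linalg.solve(L.T, np.linalg.solve(L, rhs))

    x = np.zeros(A.shape[1])                    # X^{-1} = 0
    for _ in range(n_iter):
        # discrete analogue of one quasi-reversibility step:
        # (A X^M, A x) + eps b(X^M, x) = (y, A x) + eps b(X^{M-1}, x)
        x = solve(A.T @ y + eps * (B @ x))
    return x
```

With exact data and an injective operator, the iterates converge for any fixed ε, even ε = 1, which is precisely what makes a large, well-conditioned regularization parameter usable here.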
The paper is organized as follows. In section 2, we introduce an abstract setting to treat both data completion problems at once. In section 3, we present the standard quasi-reversibility regularization in this abstract setting, and prove some results needed to study the iterated quasi-reversibility method. In section 4, we focus on the iterated quasi-reversibility method, both for exact data and for noisy data. In section 5, we show that the abstract setting applies to both the elliptic and the parabolic data completion problems. In section 6, numerical results are presented, demonstrating the feasibility and efficiency of the method for both problems.
2. An abstract setting for data completion problems. In this section, we introduce an abstract setting corresponding to both data completion problems we are interested in.
Let X, Y be two Hilbert spaces endowed with respective scalar products (·,·)_X and (·,·)_Y, and corresponding norms denoted ‖·‖_X and ‖·‖_Y.
Let y ∈ Y. Both of our data completion problems can be written in the following way: find x ∈ X such that Ax = y, with A : X → Y a continuous linear operator with the following properties: A is one-to-one and has dense range, but A is not onto. In this setting, y plays the role of the data, and x that of the solution of our data completion problem. The problem is obviously ill-posed: indeed, as A is not onto, there exist y in Y for which the problem admits no solution. We define Y_adm := Im(A), the set of admissible data, and Y_nadm := Y \ Y_adm, the set of non-admissible ones. By definition, Y_adm is dense in Y. Actually, this is also true for Y_nadm.
Proposition 1. Y_nadm is dense in Y.
Proof. This is quite simple: suppose there exist ȳ = Ax̄ ∈ Y_adm and δ > 0 such that the ball {z ∈ Y, ‖z − ȳ‖_Y ≤ δ} is contained in Y_adm. Let y be any element of Y, y ≠ ȳ. We define ỹ := ((y − ȳ)/‖y − ȳ‖_Y) δ/2 + ȳ. Obviously, ‖ỹ − ȳ‖_Y ≤ δ. Therefore ỹ ∈ Y_adm, and there exists x̃ ∈ X such that Ax̃ = ỹ. A simple computation shows then that y = ȳ + (2‖y − ȳ‖_Y/δ)(ỹ − ȳ) = A(x̄ + (2‖y − ȳ‖_Y/δ)(x̃ − x̄)) ∈ Im(A). Hence Im(A) = Y, contradicting the assumptions on A. Therefore, for any y ∈ Y_adm and any δ > 0, there exists y_δ ∈ Y_nadm such that ‖y − y_δ‖_Y ≤ δ, which ends the proof, as Y = Y_adm ∪ Y_nadm.
In other words, for any admissible data y there exists a non-admissible one ỹ arbitrarily close to y. In particular, this leads to the high instability of the problem with respect to noise:
Proposition 2. For any y ∈ Y, there exists a sequence x_n ∈ X such that Ax_n → y in Y and ‖x_n‖_X → +∞ as n → ∞.
Proof. We start with y ∈ Y_nadm. As Im(A) is dense in Y, there exists a sequence x_n ∈ X such that Ax_n → y in Y. This sequence cannot have any bounded subsequence: indeed, if such a subsequence existed, there would be another subsequence, denoted x_m here, such that x_m weakly converges to an element x in X. The operator A being linear and strongly continuous, it is weakly continuous [38], hence Ax_m weakly converges to Ax. But by definition Ax_m strongly converges to y. By uniqueness of the limit, we have Ax = y, and y ∈ Y_adm, in contradiction with the initial assumption. Therefore, ‖x_n‖_X → +∞ as n → ∞. Now, consider y ∈ Y_adm. The previous proposition implies the existence of a sequence y_m ∈ Y_nadm such that y_m → y in Y. For each m, the first part of the proof provides x̃_m ∈ X such that ‖Ax̃_m − y_m‖_Y ≤ 1/m and ‖x̃_m‖_X ≥ m. It is then not difficult to verify that the sequence x̃_m has the required properties.
Remark 1. Actually, if y is not an admissible data, it is shown in the proof that any sequence (x_n)_{n∈ℕ} ∈ X^ℕ such that Ax_n → y in Y satisfies ‖x_n‖_X → +∞.
An important consequence of this proposition is that for any admissible data y, with corresponding solution x, one can find an admissible data ỹ, with corresponding solution x̃, such that ỹ is arbitrarily close to y and x̃ is arbitrarily far from x.
We retrieve here the well-known fact that the question of noisy data is crucial in data completion problems. Clearly, it is not sufficient to build a method that (approximately) reconstructs the solution of the data completion problem for any admissible data; it is also mandatory to propose a strategy for noisy data, as in practice data are always corrupted by some noise due to inaccurate measurements.
3. Standard quasi-reversibility method. Let b be a symmetric non-negative bilinear form on X, and denote by ‖·‖_b the induced seminorm on X. We suppose that there exist two strictly positive constants c, C such that
c ‖x‖²_X ≤ ‖Ax‖²_Y + b(x, x) ≤ C ‖x‖²_X for all x ∈ X.
Therefore, the symmetric bilinear form (·,·)_{A,b}, defined by
(x, z)_{A,b} := (Ax, Az)_Y + b(x, z),
is a scalar product on X, and X endowed with this scalar product is a Hilbert space. We denote by ‖·‖_{A,b} the corresponding norm, which is equivalent to the ‖·‖_X norm.
Obviously, there exists such a form b: it suffices to take the whole scalar product in X , b(., .) = (., .) X .
Adapting the initial idea of Jacques-Louis Lions and Robert Lattès [19], the quasi-reversibility method applied to the abstract data completion problem defined above relies on the resolution of the following regularized problem. Problem. For y ∈ Y and ε > 0, find x_ε ∈ X such that, for all x ∈ X,
(Ax_ε, Ax)_Y + ε b(x_ε, x) = (y, Ax)_Y.
The quasi-reversibility equation is the Euler-Lagrange equation corresponding to the minimization over X of the energy
E_ε(x) := ‖Ax − y‖²_Y + ε b(x, x).
In other words, it is a Tykhonov regularization of the data completion problem, ε > 0 being the regularization parameter and ‖·‖_b the penalization (semi)norm. Since its introduction in 1963 by A.N. Tykhonov [37], this regularization has been widely studied and used to solve inverse problems (for a complete study on the topic, see [36] and the references therein). There are various methods to study such a regularization: e.g., singular value decomposition if A is compact (which is not the case in our data completion problems, see section 5), or spectral theory. Here we propose another approach, based on the variational formulation of the quasi-reversibility method and on the differentiability of the approximate solution with respect to the regularization parameter, the latter being useful in the study of the iterated quasi-reversibility method.
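In a discretized setting, this Tykhonov interpretation can be sketched as follows; the matrices and the exact solution below are toy objects of our own choosing, and the normal equations are the discrete counterpart of the Euler-Lagrange equation above.

```python
import numpy as np

# Discrete sketch of the standard quasi-reversibility / Tykhonov
# regularization: x_eps minimizes ||A x - y||^2 + eps * x^T B x,
# i.e. it solves the normal equations (A^T A + eps B) x = A^T y.
A = np.diag([1.0, 2.0, 3.0])        # injective toy operator
B = np.eye(3)                        # b(.,.) taken as the whole scalar product
x_s = np.array([1.0, -1.0, 2.0])     # exact solution
y = A @ x_s                          # exact (admissible) data

def qr_solution(eps):
    return np.linalg.solve(A.T @ A + eps * B, A.T @ y)

errors, residuals = [], []
for eps in (1.0, 0.1, 0.01, 0.001):
    x_eps = qr_solution(eps)
    errors.append(np.linalg.norm(x_eps - x_s))
    residuals.append(np.linalg.norm(A @ x_eps - y))
# As eps decreases, the error ||x_eps - x_s|| decreases; conversely, for
# fixed data the residual ||A x_eps - y|| grows with eps (cf. section 3.2).
```

Note that for every ε > 0 the error is nonzero: the quasi-reversibility solution never coincides with the exact one, it only approaches it as ε goes to zero.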
First of all, let us verify that the quasi-reversibility problem is well-posed.
Proposition 3. For any y ∈ Y and ε > 0, the quasi-reversibility problem admits a unique solution x_ε, with the following estimates:
‖Ax_ε‖_Y ≤ ‖y‖_Y and √ε ‖x_ε‖_b ≤ ‖y‖_Y.
Proof. Let us define the bilinear form
a_ε(x, z) := (Ax, Az)_Y + ε b(x, z).
It is obviously continuous. Furthermore, for all x ∈ X, we have
a_ε(x, x) ≥ min(1, ε) ‖x‖²_{A,b} ≥ c min(1, ε) ‖x‖²_X,
so we obtain the existence and uniqueness of x_ε by the Lax-Milgram theorem. By definition, choosing x = x_ε as test function, we have
‖Ax_ε‖²_Y + ε ‖x_ε‖²_b = (y, Ax_ε)_Y ≤ ‖y‖_Y ‖Ax_ε‖_Y,
which gives the announced estimates.
Remark 2. In particular, we always have ‖x_ε‖_{A,b} ≤ ‖y‖_Y / √min(1, ε).
Suppose there exists x ∈ X such that Ax = y (i.e. y ∈ Y_adm). It is easily seen that x is never the solution of the quasi-reversibility problem, except in the special case y = 0 (which is always in Y_adm), for which x = 0 = x_ε. In other words, there is no ε > 0 such that the quasi-reversibility method reconstructs exactly the exact solution of the data completion problem. As seen in the following corollary, the solution of the quasi-reversibility problem is also never 0, except again in the special case y = 0.
Corollary 1. The three following properties are equivalent: (i) y = 0; (ii) there exists ε > 0 such that x_ε = 0; (iii) for all ε > 0, x_ε = 0.
Proof. Obviously, (iii) implies (ii). Furthermore, as min(1, ε) ‖x_ε‖²_{A,b} ≤ (y, Ax_ε)_Y, (i) implies (iii). Finally, suppose there exists ε > 0 such that x_ε = 0. For that particular ε and for any x ∈ X, we would have (y, Ax)_Y = 0; the density of Im(A) in Y then implies y = 0, so (ii) implies (i).
Proposition 4. Let y ∈ Y, and x_ε the solution of the corresponding quasi-reversibility problem. Then Ax_ε strongly converges to y as ε goes to zero (even if y is not an admissible data).
Proof. Let (ε_m)_{m∈ℕ} be a decreasing sequence of strictly positive real numbers such that lim_{m→∞} ε_m = 0, and write x_m := x_{ε_m}. As ‖Ax_m‖_Y ≤ ‖y‖_Y, there exists a subsequence (still denoted x_m) such that Ax_m weakly converges to ỹ ∈ Y. But, for all x ∈ X, we have
(Ax_m, Ax)_Y + ε_m b(x_m, x) = (y, Ax)_Y,
and since √ε_m ‖x_m‖_b ≤ ‖y‖_Y (proposition 3), ε_m b(x_m, x) → 0, so that (ỹ, Ax)_Y = (y, Ax)_Y for all x ∈ X; by density of Im(A), ỹ = y, and Ax_m weakly converges to y. As ‖Ax_m‖_Y ≤ ‖y‖_Y (proposition 3), the weak convergence forces ‖Ax_m‖_Y → ‖y‖_Y, hence Ax_m strongly converges to y. It is then not difficult to see that Ax_ε strongly converges to y as ε goes to zero.
We can now state the main theorem regarding the standard quasi-reversibility method.
Theorem 3.1. Suppose y ∈ Y_adm, and let x_s be the (necessarily unique) solution of the abstract data completion problem. Then x_ε converges to x_s as ε goes to zero, and we have the estimates
‖x_ε − x_s‖_b ≤ ‖x_s‖_b and ‖Ax_ε − y‖_Y ≤ √ε ‖x_s‖_b.
The theorem remains valid when the ‖·‖_b seminorm is replaced with the ‖·‖_{A,b} norm.
Proof. Suppose first that y ∈ Y_nadm. Then, as x_ε is a family in X such that Ax_ε converges to y (proposition 4), proposition 2 and remark 1 imply lim_{ε→0} ‖x_ε‖_X = +∞. Now, suppose there exists x_s such that Ax_s = y. Then, choosing x = x_ε − x_s as test function in the quasi-reversibility problem, we obtain
‖A(x_ε − x_s)‖²_Y + ε b(x_ε, x_ε − x_s) = 0. (1)
Therefore, x_ε is a bounded family in X, and up to a subsequence it weakly converges to some x̃. As A is a linear continuous operator, and hence is weakly continuous, proposition 4 implies Ax̃ = y, which implies x̃ = x_s as A is one-to-one. The uniqueness of the limit implies that the whole family weakly converges to x_s. Finally, as ‖x_ε‖_{A,b} ≤ ‖x_s‖_{A,b}, the family strongly converges to x_s. Subtracting ε b(x_s, x_ε − x_s) from equation (1), we obtain
‖A(x_ε − x_s)‖²_Y + ε ‖x_ε − x_s‖²_b = −ε b(x_s, x_ε − x_s) ≤ ε ‖x_s‖_b ‖x_ε − x_s‖_b,
hence ‖x_ε − x_s‖_b ≤ ‖x_s‖_b. Finally, equation (1) then implies ‖A(x_ε − x_s)‖²_Y ≤ ε ‖x_s‖²_b, that is ‖Ax_ε − y‖_Y ≤ √ε ‖x_s‖_b, which ends the proof.
Next, we focus on the differentiability of the solution of the quasi-reversibility method with respect to ε, a result that will be useful in the study of the iterated quasi-reversibility method.
3.1. Differentiability of the quasi-reversibility solution with respect to ε. It turns out that x_ε, the solution of the quasi-reversibility problem, depends smoothly on the regularization parameter ε. Indeed, let us define the map F : ε ∈ (0, +∞) ↦ x_ε ∈ X; we first show that F is continuous.
Proof. We choose ε > 0 and h such that ε − |h| > 0. For any x ∈ X, we have
(Ax_{ε+h}, Ax)_Y + (ε + h) b(x_{ε+h}, x) = (y, Ax)_Y and (Ax_ε, Ax)_Y + ε b(x_ε, x) = (y, Ax)_Y.
Subtracting the two equations, and choosing x = x̄_{ε,h} := x_{ε+h} − x_ε, lead to
‖Ax̄_{ε,h}‖²_Y + ε ‖x̄_{ε,h}‖²_b = −h b(x_{ε+h}, x̄_{ε,h}) ≤ |h| ‖x_{ε+h}‖_b ‖x̄_{ε,h}‖_b.
In conclusion, using the estimates of proposition 3, we have ‖x̄_{ε,h}‖_{A,b} → 0 as h → 0, i.e. F is continuous.
Remark 3. If the data completion problem admits a solution x_s, then F extends continuously to ℝ₊ by defining F(0) = x_s.
The map F is in fact differentiable, its derivative x_ε^{(1)} being characterized by: find x_ε^{(1)} ∈ X such that, for all x ∈ X,
(Ax_ε^{(1)}, Ax)_Y + ε b(x_ε^{(1)}, x) = −b(x_ε, x). (2)
Proof. By the Lax-Milgram theorem, there exists a unique x_ε^{(1)} ∈ X verifying (2), and it clearly verifies ε ‖x_ε^{(1)}‖_b ≤ ‖x_ε‖_b. It is a continuous function of ε: indeed, for ε > 0 and h ∈ ℝ s.t. ε − |h| > 0, we have, for all x ∈ X,
(Ax_{ε+h}^{(1)}, Ax)_Y + (ε + h) b(x_{ε+h}^{(1)}, x) = −b(x_{ε+h}, x).
Choosing x = x_{ε+h}^{(1)} − x_ε^{(1)} and subtracting the two equations satisfied by x_{ε+h}^{(1)} and x_ε^{(1)} lead to an estimate of ‖x_{ε+h}^{(1)} − x_ε^{(1)}‖_{A,b} of order |h|. Finally, writing the equations satisfied by x_{ε+h}, x_ε and x_ε^{(1)}, choosing x = x_{ε+h} − x_ε − h x_ε^{(1)} as test function and adding the three above relations lead to ‖x_{ε+h} − x_ε − h x_ε^{(1)}‖_{A,b} = o(h). The result follows.
A simple induction leads then to the following theorem.
Theorem 3.2. The map F is infinitely differentiable on (0, +∞), and its m-th derivative x_ε^{(m)} is characterized by: for all x ∈ X,
(Ax_ε^{(m)}, Ax)_Y + ε b(x_ε^{(m)}, x) = −m b(x_ε^{(m−1)}, x).
In particular, x_ε^{(m)} verifies the following estimate:
ε ‖x_ε^{(m)}‖_b ≤ m ‖x_ε^{(m−1)}‖_b, hence ‖x_ε^{(m)}‖_b ≤ m! ‖y‖_Y / ε^{m+1/2}.
If the data completion problem admits a solution x_s, it is not difficult to prove that the same estimates hold with ‖y‖_Y / √ε replaced by ‖x_s‖_b. Finally, we have the following generalization of corollary 1:
Corollary 2. The three following properties are equivalent: (i) y = 0; (ii) there exist ε > 0 and m ∈ ℕ such that x_ε^{(m)} = 0; (iii) for all ε > 0 and all m ∈ ℕ, x_ε^{(m)} = 0.
Proof. Obviously, (iii) implies (ii), and (i) implies (iii) since y = 0 gives x_ε = 0 for all ε. Suppose there exist ε > 0 and m ∈ ℕ such that x_ε^{(m)} = 0; one then deduces successively that the lower-order derivatives vanish, and by induction x_ε = 0, implying again y = 0 by corollary 1. Therefore (ii) implies (i).

3.2. Monotonic convergence of the quasi-reversibility method. In this section, y ≠ 0. Using the results on the derivatives of x_ε with respect to ε, it is easy to prove that if the data completion problem admits a solution x_s, then x_ε converges monotonically to x_s as ε goes to zero. This is of course not the only method to obtain such results (see for example [36], where spectral theory is used), but it has the advantage of being quite simple.
The main result of this section is the following.
Theorem 3.3. Suppose the data completion problem admits a (necessarily unique) solution x_s. Then ‖x_ε − x_s‖_{A,b} is strictly increasing with respect to ε.
We first need to prove the following two results, which are true whether or not the data completion problem admits a solution.
Proposition 7. The quantity ‖Ax_ε − y‖_Y is a strictly increasing function of ε.
Proof. Defining g : ε ↦ ‖Ax_ε − y‖²_Y, we have g′(ε) = 2 (Ax_ε^{(1)}, Ax_ε − y)_Y. Taking x = x_ε^{(1)} as test function in the quasi-reversibility problem gives (Ax_ε − y, Ax_ε^{(1)})_Y = −ε b(x_ε, x_ε^{(1)}), while taking x = x_ε^{(1)} in equation (2) gives b(x_ε, x_ε^{(1)}) = −(‖Ax_ε^{(1)}‖²_Y + ε ‖x_ε^{(1)}‖²_b). Therefore
g′(ε) = 2ε (‖Ax_ε^{(1)}‖²_Y + ε ‖x_ε^{(1)}‖²_b) ≥ 0,
and g′(ε) vanishes only if x_ε^{(1)} = 0, which would imply y = 0 (corollary 2), in contradiction with our assumption. So g is strictly increasing.
4. Iterated quasi-reversibility. As seen in the previous section, the quasi-reversibility method can be viewed as a Tykhonov regularization of our abstract data completion problem. Therefore, it seems natural to study a well-known extension of such regularization, namely the iterated Tykhonov regularization method, applied to our problem: we then obtain the iterated quasi-reversibility method.
The iterated quasi-reversibility method consists in solving quasi-reversibility problems iteratively, each one depending on the solution of the previous one. More precisely, we define a sequence of quasi-reversibility solutions by induction: X_ε^{−1} := 0, and for M ∈ ℕ, X_ε^M ∈ X is the unique solution of: for all x ∈ X,
(AX_ε^M, Ax)_Y + ε b(X_ε^M, x) = (y, Ax)_Y + ε b(X_ε^{M−1}, x).
It is not difficult to verify that the sequence is well defined. In particular, it is clear that X_ε^0 = x_ε, the solution of the quasi-reversibility problem. Our study of the iterated quasi-reversibility method is based on the following result, which highlights the link between the solutions of the iterated quasi-reversibility method (X_ε^M)_{M∈{−1}∪ℕ} and the derivatives of x_ε with respect to the regularization parameter ε:
X_ε^M = Σ_{m=0}^{M} ((−ε)^m / m!) x_ε^{(m)}.
Proof. Denote X̄_ε^M := Σ_{m=0}^{M} ((−ε)^m / m!) x_ε^{(m)}. Multiplying the variational equation characterizing x_ε^{(m)} by (−ε)^m/m!, summing for m = 1 to M + 1, and adding the equation verified by x_ε, shows that X̄_ε^{M+1} satisfies the equation defining X_ε^{M+1}, with X̄_ε^M in the right-hand side. A straightforward induction ends the proof.
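Writing A* for the adjoint of A and B for the bounded operator associated with the form b (so that (Bx, z)_X = b(x, z)), one step of this recurrence can be sketched in operator form as follows; this is a formal rewriting of our own, making the link with iterated Tykhonov regularization explicit:

```latex
(A^*A + \varepsilon B)\, X^M_\varepsilon \;=\; A^* y + \varepsilon B\, X^{M-1}_\varepsilon,
\qquad X^{-1}_\varepsilon = 0 .
```

When Ax_s = y, subtracting the same relation written for x_s yields the error recursion

```latex
X^M_\varepsilon - x_s \;=\; \varepsilon\,(A^*A + \varepsilon B)^{-1} B\,\bigl(X^{M-1}_\varepsilon - x_s\bigr),
```

which shows why each iteration reduces the error, for any fixed ε, without ε having to be small.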
From now on, we suppose y ≠ 0: if not, we have X_ε^M = 0 for all ε and M.
4.1. Some estimates on X_ε^M and AX_ε^M. We start with estimates on the M-th iterated quasi-reversibility solution, valid for any data y, admissible or not. In other words, these estimates are valid whether or not the data completion problem has a solution.
Proof. We start with estimate (a): as X_ε^{−1} = 0, it follows from lemma 3.4. Regarding estimates (b) and (c), we note that they hold for M = 0; the Cauchy-Schwarz inequality then implies estimate (b), which in turn leads to estimate (c). Finally, the case M = 0 of estimate (d) corresponds to estimate (c) for the same M. For M ∈ ℕ, it is sufficient to determine the sign of the term involving (−1)^{M+1} Ax_ε^{(M+1)} in the corresponding relations, written for m = 0 and for all m ∈ {1, . . . , M}; summing these equations for m = 0 to M, we obtain the result.
Proof. Proposition 9 is obviously true for M = −1. Let M ∈ ℕ. We consider first a sequence (x_M) satisfying the inductive inequality, from which x_{M+1} < 2(M + 1) x_1. Now, we specify the sequence, defining x_1 := ‖y‖_Y / √ε, and prove the announced bound by induction. As the result is true for M = 0, the proposition follows.

4.2. The case of exact data. From now on, we suppose that y ∈ Y_adm, and denote by x_s the solution of the abstract data completion problem. We define R_ε^M := X_ε^M − x_s, the discrepancy between the M-th iterated quasi-reversibility solution and the exact solution. Note that by definition, R_ε^{−1} = −x_s, and for all M ∈ ℕ and all x ∈ X,
(AR_ε^M, Ax)_Y + ε b(R_ε^M, x) = ε b(R_ε^{M−1}, x).
We aim to prove the following theorem.
Theorem 4.1. Suppose y ∈ Y_adm. Then for any ε > 0, lim_{M→∞} ‖R_ε^M‖_X = 0. In other words, for any ε > 0, X_ε^M converges to x_s as M goes to infinity.
As X_ε^M = Σ_{m=0}^{M} ((−ε)^m / m!) x_ε^{(m)}, this means that the series converges as M goes to infinity. In other words, if there exists x_s ∈ X solution of Ax_s = y, then
x_s = Σ_{m=0}^{∞} ((−ε)^m / m!) x_ε^{(m)};
hence the solution of the data completion problem can be seen as a series of derivatives of the quasi-reversibility solution with respect to the parameter ε.
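This series representation can be checked numerically on a toy discrete example (our own illustration, not part of the paper): with R := (A^T A + εB)^{-1}, the derivatives obey the recursion x^(m) = −m R B x^(m−1), and the partial sums of the series coincide with the iterated quasi-reversibility solutions.

```python
import numpy as np

A = np.diag([1.0, 0.5, 0.2])      # toy injective operator
B = np.eye(3)
x_s = np.array([1.0, 2.0, -1.0])   # exact solution
y = A @ x_s                        # exact data
eps = 1.0
R = np.linalg.inv(A.T @ A + eps * B)

# Partial sums of the series of derivatives:
# x^(0) = x_eps and x^(m) = -m * R @ B @ x^(m-1)
deriv = R @ (A.T @ y)              # x^(0) = x_eps
series = deriv.copy()
coeff = 1.0                        # (-eps)^m / m!
partial_sums = [series.copy()]
for m in range(1, 60):
    deriv = -m * (R @ (B @ deriv))
    coeff *= -eps / m
    series = series + coeff * deriv
    partial_sums.append(series.copy())

# Iterated quasi-reversibility solutions X^M_eps for comparison
X_list, X = [], np.zeros(3)
for M in range(60):
    X = R @ (A.T @ y + eps * (B @ X))
    X_list.append(X.copy())
```

On this example the two sequences agree term by term, and the partial sums approach x_s as more terms are added, even with the large choice ε = 1.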
Let ε > 0 be fixed. We start with the following estimates, from which the first estimate is valid. Let us now focus on the second estimate, which is directly true for M = −1. For M ∈ ℕ, let us define g : ε ↦ ‖AR_ε^M‖²_Y; we then obtain the announced bound.
Proof. For any M ∈ ℕ, we have the previous estimate; therefore, the series Σ_m ‖AR_ε^m‖²_Y converges. The property follows.
we have that (R_ε^M)_{M∈ℕ} is a bounded sequence in X. Consequently, there exists a strictly increasing map ϑ : ℕ → ℕ such that R̄_ε^M := R_ε^{ϑ(M)} weakly converges to some R_∞ in X. As A is linear and continuous, we directly obtain from proposition 11 that AR_∞ = 0, which implies R_∞ = 0 as A is one-to-one.
We have hence obtained that R̄_ε^M weakly converges to 0 in X.
4.3. The case of noisy data. In this section, we suppose that our exact data, denoted y_ex ∈ Y, for which the data completion problem admits a unique solution x_s ∈ X, is corrupted by some noise. The perturbed data, denoted y_δ ∈ Y, is supposed to verify ‖y_δ − y_ex‖_Y ≤ δ: in other words, we know the amplitude of noise on the data. On the other hand, there might or might not exist x ∈ X such that Ax = y_δ: we do not know whether y_δ is an admissible data or not. From now on, for any y ∈ Y, we will denote by X_ε^M(y) the M-th iterated quasi-reversibility solution with y as data. Our main objective in this section is to propose an admissible strategy to choose M as a function of δ, the amplitude of noise, to ensure that, as δ goes to zero, X_ε^{M(δ)} tends to the exact solution x_s. As pointed out in proposition 2 and remark 1, this is a crucial point in the study of data completion problems.
A first important remark is the following: AX_ε^M(y) always converges to y, regardless of the admissibility of y as data for the data completion problem.
Proof. As Y_adm is dense in Y, for any η > 0, there exists y_η ∈ Y_adm such that ‖y − y_η‖_Y ≤ η; the result then follows from the case of admissible data.
Proof. Obviously, we have the corresponding bound for any ε > 0 and M ∈ ℕ; furthermore, propositions 8 and 10 imply the announced convergence. The result follows.
M(δ) is chosen according to the Morozov discrepancy principle: it is the first M ∈ ℕ such that the distance between AX_ε^M and y_δ is (approximately) equal to the distance between Ax_s = y_ex and y_δ:
M(δ) := min{M ∈ ℕ, ‖AX_ε^M(y_δ) − y_δ‖_Y ≤ r δ}, with r > 1.
This way of choosing M depending on δ has two interesting characteristics: 1. with this choice, one performs the minimum number of iterations of the iterated quasi-reversibility method required to obtain an error in the residual ‖AX_ε^M − y_δ‖_Y of the same order as the error on the data; 2. such a choice is admissible, in the sense of proposition 13.
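A discrete sketch of this stopping strategy could look as follows; the toy operator, the noise direction and the function name are our own illustrative choices, not part of the paper.

```python
import numpy as np

def iterated_qr_morozov(A, B, y_delta, eps, delta, r=1.0, max_iter=2000):
    """Iterated quasi-reversibility with a Morozov-type stopping rule:
    return the first iterate X^M whose residual ||A X^M - y_delta|| falls
    below r * delta, together with the stopping index M(delta).
    A, B are toy stand-ins for the operator and the form b."""
    M_mat = A.T @ A + eps * B
    x = np.zeros(A.shape[1])                  # X^{-1} = 0
    for m in range(max_iter):
        x = np.linalg.solve(M_mat, A.T @ y_delta + eps * (B @ x))
        if np.linalg.norm(A @ x - y_delta) <= r * delta:
            return x, m                       # stop at M(delta) = m
    return x, max_iter
```

Since the residual decreases along the iterations while the noise level fixes the target, a smaller δ leads to a larger stopping index M(δ), in agreement with the theoretical discussion above.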
We now prove that M (δ) is an admissible choice.
Consequently, using proposition 8, we would directly obtain y_ex = 0, in contradiction with the hypothesis. If M_∞ = 0, we obtain x_ε = x_s, which again (corollary 1) implies y_ex = 0. Finally, if M_∞ > 0, as the variational equations hold for all x ∈ X, we obtain X_ε^{M_∞}(y_ex) = X_ε^{M_∞−1}(y_ex), or equivalently x_ε^{(M_∞)}(y_ex) = 0, which again implies y_ex = 0 by corollary 2. We obtain once again a contradiction, which ends the proof.
The result follows.

5. Quasi-reversibility methods for the data completion problems for Poisson's equation and the heat equation. We now go back to the data completion problems described in the introduction, and verify that they correspond to the abstract setting introduced in section 2.

5.1. Poisson's equation. As mentioned in the introduction, the data completion problem for Poisson's equation is: for f, g_D and g_N in L²(Ω) × L²(Γ) × L²(Γ), find u ∈ H¹(Ω) such that
−∇ · (σ∇u) = f in Ω, u = g_D on Γ, σ∇u · ν = g_N on Γ.
We could directly use this formulation of the problem to obtain a quasi-reversibility regularization. However, doing so yields a fourth-order variational problem, which is rather difficult to discretize, as we would need C¹ or non-conforming finite elements, which are seldom available in numerical solvers. Therefore, we first modify the problem by introducing the flux p := σ∇u as an additional unknown, following the idea introduced in [31]. It verifies −∇ · p = −∇ · (σ∇u) = f ∈ L²(Ω) and p · ν = σ∇u · ν = g_N ∈ L²(Γ), hence p ∈ H̃_div(Ω) := {q ∈ L²(Ω)^d, ∇ · q ∈ L²(Ω), q · ν ∈ L²(Γ)}.
H̃_div(Ω), endowed with the scalar product
(p, q)_{H̃_div} := (p, q)_{L²(Ω)^d} + (∇ · p, ∇ · q)_{L²(Ω)} + (p · ν, q · ν)_{L²(Γ)},
is a Hilbert space [32]. We modify the data completion problem in the following way: Problem. For f, g_D and g_N in respectively L²(Ω), L²(Γ) and L²(Γ), find (u, p) ∈ H¹(Ω) × H̃_div(Ω) such that
p = σ∇u and −∇ · p = f in Ω, u = g_D and p · ν = g_N on Γ.
Obviously, this is exactly the same problem as previously. However, this small modification will lead to a second-order variational quasi-reversibility regularization in the product space H¹(Ω) × H̃_div(Ω), easily discretized using standard finite elements.
To fit in our abstract setting, we introduce the operator
A : (u, p) ∈ H¹(Ω) × H̃_div(Ω) ↦ (σ∇u − p, −∇ · p, u|_Γ, p · ν|_Γ).
The spaces X := H¹(Ω) × H̃_div(Ω) and Y := L²(Ω)^d × L²(Ω) × L²(Γ) × L²(Γ), endowed with their natural scalar products, are obviously Hilbert spaces, and the data completion problem can be rewritten: find (u, p) ∈ X such that A(u, p) = (0, f, g_D, g_N) ∈ Y.
Proposition 16. A is linear, continuous, one-to-one. It is not onto but has dense range. Additionally, A is not a compact operator.
Proof. Clearly, A is linear and continuous. As the data completion problem for Poisson's equation is known to admit at most one solution, but may have no solution, A is one-to-one but not onto. Let us prove that Im(A) is dense in Y: let (F, f, g, h) ∈ Y be such that (A(u, p), (F, f, g, h))_Y = 0 for all (u, p) ∈ X. Choosing u = ϕ ∈ C_c^∞(Ω) and p = 0, we obtain −∇ · (σ^T F) = 0, and in particular σ^T F ∈ H_div(Ω). Choosing u = 0 and p = Ψ ∈ C_c^∞(Ω)^d, we obtain ∇f = F. Therefore, f ∈ H¹(Ω), and it verifies −∇ · (σ^T ∇f) = 0. Hence, taking u ∈ H¹(Ω) and p = 0, and using the Green formula, we obtain
∫_Γ u (σ^T ∇f · ν + g) ds + ∫_{Γ_c} u σ^T ∇f · ν ds = 0,
implying that σ^T ∇f · ν = −g on Γ and 0 on Γ_c. Taking u = 0 and p ∈ H̃_div(Ω), and using the divergence theorem, we obtain
∫_Γ (h − f) p · ν ds − ∫_{Γ_c} f p · ν ds = 0,
and therefore f = h on Γ and 0 on Γ_c. We have obtained that f ∈ H¹(Ω) verifies −∇ · (σ^T ∇f) = 0 in Ω and f = σ^T ∇f · ν = 0 on Γ_c: by uniqueness of the solution of the elliptic data completion problem, necessarily f = 0, which implies directly F = 0 and g = h = 0.
Finally, let us prove that A is not a compact operator. Consider (e_n) a Hilbert basis of L²(Ω), and u_n ∈ H¹(Ω) verifying ∫_Ω u_n dx = 0, −∇ · (σ∇u_n) = e_n in Ω and σ∇u_n · ν = 0 on ∂Ω. It is not difficult to show that u_n exists and is unique. Furthermore, ‖u_n‖_{H¹(Ω)} ≤ C(Ω, σ), and in particular ‖u_n‖_{L²(Γ)} ≤ C(Ω, σ). Defining p_n := σ∇u_n ∈ H̃_div(Ω), we obtain ‖p_n‖_{H̃_div} ≤ C(Ω, σ), hence (u_n, p_n) is a bounded sequence in X. But A(u_n, p_n) = (0, e_n, u_n|_Γ, 0) does not admit any convergent subsequence.
To define our quasi-reversibility approach to this data completion problem, we choose b(·,·) such that the corresponding norm ‖·‖_{A,b} is equivalent to the norm ‖·‖_X. Of course, we could choose the whole X-scalar product, but it might be interesting to use another form, to soften the regularization: here we define a weaker form b(·,·), which is a symmetric bilinear non-negative form on X (but obviously not a scalar product). Using the Poincaré inequality, it is easy to obtain the existence of c, C > 0 such that
c ‖(v, q)‖²_X ≤ ‖A(v, q)‖²_Y + b((v, q), (v, q)) ≤ C ‖(v, q)‖²_X.
As ‖(v, q)‖_b ≤ ‖(v, q)‖_X for any (v, q) ∈ X, for a fixed ε the regularization term in the quasi-reversibility method is smaller. Applying the abstract setting to this problem, we obtain the following quasi-reversibility regularization: for ε > 0, find (u_ε, p_ε) ∈ H¹(Ω) × H̃_div(Ω) such that for all (v, q) ∈ H¹(Ω) × H̃_div(Ω),
(A(u_ε, p_ε), A(v, q))_Y + ε b((u_ε, p_ε), (v, q)) = ((0, f, g_D, g_N), A(v, q))_Y.
This problem always admits a unique solution (u_ε, p_ε). We know from our study that if the data completion problem for Poisson's equation admits a solution (u_s, p_s), then (u_ε, p_ε) converges monotonically to (u_s, p_s) as ε goes to zero, with the estimate
‖A(u_ε, p_ε) − (0, f, g_D, g_N)‖_Y ≤ √ε ‖(u_s, p_s)‖_b.
If not, we know that ‖(u_ε, p_ε)‖_X tends to +∞ as ε goes to zero. The quasi-reversibility method obtained in this study is close to the one proposed in [31] to stabilize the data completion problem, which was: find (u_ε, p_ε) ∈ H¹(Ω) × H̃_div(Ω), with u_ε = g_D and p_ε · ν = g_N on Γ, such that the corresponding variational equality holds for all (v, q) ∈ H¹(Ω) × H̃_div(Ω) with v = q · ν = 0 on Γ. The two differences are, first, the use of b(·,·) instead of (·,·)_X in the regularization term, and secondly the way the boundary conditions are included in the problem. In the formulation proposed in [31], they are strongly imposed, which presents two main issues. One is theoretical: the regularized problem might not have a solution if g_D is in L²(Γ) but not in H^{1/2}(Γ), as in that case there is no v ∈ H¹(Ω) such that v = g_D on Γ, and therefore u_ε cannot exist.
The second one is practical: it is not a good idea to strongly impose data which might be extremely noisy, as in that case the noise is somehow imposed on the solution. In the quasi-reversibility regularization obtained in the present paper, the boundary conditions are weakly imposed, which solves both problems: the regularized problem always admits a solution, even in the case where g_D is not the trace on Γ of an H¹ function, and the noise is regularized directly by the formulation, leading to a more stable formulation.
Finally, the abstract iterated quasi-reversibility method applied to the elliptic data completion problem is: for ε > 0, set (u_ε^{−1}, p_ε^{−1}) = (0, 0) and, for all M ∈ ℕ, let (u_ε^M, p_ε^M) solve the quasi-reversibility problem with the additional term ε b((u_ε^{M−1}, p_ε^{M−1}), (v, q)) in the right-hand side; we directly know that u_ε^M and p_ε^M converge to u and σ∇u as M goes to infinity. In the case where noisy data f_δ, g_D^δ and g_N^δ of noise amplitude δ are available, in accordance with the results of section 4.3 we stop the iterations the first time that
‖A(u_ε^M, p_ε^M) − (0, f_δ, g_D^δ, g_N^δ)‖_Y ≤ r δ,
with r > 1 close to 1. Remark 5. Actually, in the following numerical results, we use r = 1.

5.2. Heat equation.
As for the Poisson problem, we modify the data completion problem defined in the introduction by introducing the flux p := ∇u as an additional unknown. Here, the spaces X and Y are endowed with their natural scalar products. It is then not difficult to verify the following.
Proposition 17. A is linear and continuous. It is one-to-one but not onto, and has dense range. Furthermore, it is not a compact operator.
Proof. We will just prove that A has dense range, as it is not difficult to be convinced that A is not compact, and the rest of the proposition follows directly from the definition of A, X and Y, and the ill-posedness of the corresponding data completion problem. Let F ∈ L²(0, T; L²(Ω)^d), f ∈ L²(0, T; L²(Ω)), g ∈ L²(0, T; L²(Γ)) and h ∈ L²(0, T; L²(Γ)) be such that for all v ∈ L²(0, T; H¹(Ω)) ∩ H¹(0, T; L²(Ω)) and all q ∈ L²(0, T; H̃_div(Ω)),
(A(v, q), (F, f, g, h))_Y = 0.
First of all, choosing q = Υ ∈ C_c^∞(Ω)^d, we obtain F = ∇f, and therefore f ∈ L²(0, T; H¹(Ω)). Considering then all q ∈ L²(0, T; H̃_div(Ω)), for almost all t ∈ (0, T), integration in time leads to ∂_t f + ∆f = 0 in (0, T) × Ω. We see that V := (f, ∇f) ∈ H_div(Q), and we can apply the divergence theorem: for all v ∈ H^{1,1}(Q) = L²(0, T; H¹(Ω)) ∩ H¹(0, T; L²(Ω)), this leads to ∇f · ν = −g on (0, T) × Γ and ∇f · ν = 0 on (0, T) × Γ_c; one concludes as in the elliptic case. As for the previous regularization, we introduce a symmetric bilinear non-negative form b. It is easy to check that the bilinear form (A(u, p), A(v, q))_Y + b((u, p), (v, q)) is a scalar product on X, and that there exist two constants c, C > 0 such that the corresponding norm is equivalent to the ‖·‖_X norm. The quasi-reversibility regularization we consider is therefore: for ε > 0, find (u_ε, p_ε) ∈ X such that for all (v, q) ∈ X,
(A(u_ε, p_ε), A(v, q))_Y + ε b((u_ε, p_ε), (v, q)) = ((0, f, g_D, g_N), A(v, q))_Y.
According to our study, this problem always admits a unique solution (u_ε, p_ε), which converges to (u, ∇u) as ε goes to zero. The corresponding iterated quasi-reversibility method follows as in the elliptic case.
6. Numerical results.
6.1. Two-dimensional corrosion detection problem. We consider the problem of reconstructing a Robin coefficient η on Γ_c from the knowledge of a noisy Cauchy data (g_D^δ, g_N^δ) on Γ. Mathematically, we want to reconstruct a function u ∈ H¹(Ω) and a function η ∈ C²(Γ_c) such that
−∆u = 0 in Ω, u = g_D^δ and ∂_ν u = g_N^δ on Γ, ∂_ν u + ηu = 0 on Γ_c.
The Cauchy data (g_D^δ, g_N^δ) ∈ L²(Γ) × L²(Γ) is supposed to correspond to an exact data (g_D, g_N) corrupted by some noise of amplitude δ. Our strategy to reconstruct η is therefore to compute (u_ε^{M(δ)}, p_ε^{M(δ)}), approximations of u and ∇u with the prescribed noisy Cauchy data on Γ and no data at all on Γ_c, using the iterated quasi-reversibility method for the Poisson problem. Then, we obtain an approximation η_ε of η on Γ_c by simply taking the ratio η_ε = −(p_ε^{M(δ)} · ν) / u_ε^{M(δ)}. In our experiments, η = 0.5 + 0.3 sin(2(θ − 5π/4)), θ being the polar angle of a point x on Γ_c, and g_N = 1. The corresponding Dirichlet data is obtained by solving the direct problem −∆u = 0 in Ω, ∂_ν u = 0.2 on Γ and ∂_ν u + ηu = 0 on Γ_c using a finite element method, and defining g_D := u|_Γ. Then we corrupt the Dirichlet data g_D pointwise with a normal noise having zero mean and variance one, to obtain the corrupted Dirichlet data g_D^δ. The noise is scaled so that ‖g_D^δ − g_D‖_∞ = α ‖g_D‖_∞, that is, the relative amplitude of noise in the L^∞-norm is α. In the experiments, we have chosen α = 1%, 2% and 5%. The exact Neumann data is used (i.e. g_N^δ = g_N), as in practical situations it is the imposed data (the net current), whereas g_D is the measured data (the corresponding voltages); therefore g_N is known quite precisely compared to g_D. We then compute the corresponding amplitude of noise for the L² norm, δ = δ(α, ‖g_D‖_∞), which defines our stopping criterion for the iterations of the method. The iterated quasi-reversibility problem is then solved using a conforming finite element method, with P2 Lagrange finite elements for u_ε^M and RT1 Raviart-Thomas finite elements for p_ε^M [39].
The study of convergence of the finite element approximation of the quasi-reversibility solution toward the continuous one is a slight adaptation of section 4.4 in [31], as the formulations are quite similar, and is therefore omitted in the present study. To avoid an inverse crime, the direct and inverse problems are solved on different meshes. According to our study, the choice of ε is completely arbitrary in the iterated quasi-reversibility method. Therefore, we have chosen ε = 1 in the experiments, as it leads to a good conditioning of the finite element matrices.
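The noise model described above can be sketched as follows; the boundary data profile, the random seed and the function name are arbitrary choices made for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(g_D, alpha):
    """Pointwise normal noise (zero mean, unit variance), rescaled so
    that the relative amplitude of noise in the max-norm is exactly
    alpha, as in the experiments; the corresponding discrete L^2
    amplitude delta feeds the stopping criterion."""
    noise = rng.standard_normal(g_D.shape)
    noise *= alpha * np.max(np.abs(g_D)) / np.max(np.abs(noise))
    g_D_delta = g_D + noise
    delta = np.linalg.norm(g_D_delta - g_D)   # discrete L^2 noise amplitude
    return g_D_delta, delta

g_D = np.cos(np.linspace(0.0, np.pi, 200))    # toy Dirichlet data on Gamma
g_D_delta, delta = corrupt(g_D, alpha=0.05)
```

Scaling in the max-norm keeps the worst pointwise error at exactly α‖g_D‖_∞, while δ, the quantity actually used by the Morozov criterion, is measured in the L² norm.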
First of all, we present in figure 3 the evolution of the residual until the stopping criterion is reached. As expected theoretically, the greater the noise, the smaller M(δ). Now we present the reconstruction results: in figure 4, the exact solution is compared to the reconstructed one in the whole domain of study. In figure 5, we focus on the boundary Γ: we compare the exact data g_D, the noisy data g_D^δ used in the iterated quasi-reversibility method, and finally the trace of the reconstructed solution u_ε^{M(δ)}. Note that the iterated quasi-reversibility method gives good results even with severely corrupted data.
Finally, in figure 6, we present the reconstructed Robin coefficient on Γ_c, which was our main objective. Again, the reconstruction is still acceptable for high levels of noise on the data. 6.2. One-dimensional heat equation. We now focus on the data completion problem for a one-dimensional heat equation. The problem reads: find u ∈ H^{1,1}((0, T) × (a, b)) such that
∂_t u − ∂²_{xx} u = f in (0, T) × (a, b), u(t, a) = g_D^δ, t ∈ (0, T), ∂_x u(t, a) = g_N^δ, t ∈ (0, T).
The noisy data verifies ‖g_D^δ − g_D‖_∞ = α ‖g_D‖_∞ and ‖g_N^δ − g_N‖_∞ = α ‖g_N‖_∞. In our experiments, we test our method with α = 2% and α = 5%. As in the elliptic case, we choose ε = 1, and stop the iterations of the method once the stopping criterion is reached.
In figures 8 and 9, we present the relative error over Q, defined as the ratio ‖u_ε^{M(δ)} − u‖_∞ / ‖u‖_∞, for both solutions u_1 and u_2. We see that the iterated quasi-reversibility method also gives good reconstructions for this parabolic problem, even for high levels of noise on both the Dirichlet and the Neumann data.
Finally, in figure 10, we present the evolution of the residual quantity