A convergent numerical method for a multi-frequency inverse source problem in inhomogeneous media

A new numerical method to solve an inverse source problem for the Helmholtz equation in inhomogeneous media is proposed. This method reduces the original inverse problem to a boundary value problem for a coupled system of elliptic PDEs, in which the unknown source function is not involved. The Dirichlet boundary condition is given on the entire boundary of the domain of interest and the Neumann boundary condition is given on a part of this boundary. To solve this problem, the quasi-reversibility method is applied. Uniqueness and existence of the minimizer are proven. A new Carleman estimate is established. Next, the convergence of those minimizers to the exact solution is proven using that Carleman estimate. Results of numerical tests are presented.


Introduction and the problem statement
In this paper, we propose a new numerical method to solve an inverse source problem for the Helmholtz equation in the multi-frequency regime. This is the problem of determining an unknown source from external measurements of the wave field. It is worth mentioning that the inverse source problem has numerous real-world applications in electroencephalography, biomedical imaging, etc.; see, e.g., [1,2,3,13,17,20,18].

Below x = (x_1, ..., x_{n−1}, z) ∈ R^n. Let Ω be the cube (−R, R)^n ⊂ R^n, R ≥ 1, and let Γ+ denote the part of its boundary ∂Ω on which the Neumann data are measured. For i, j = 1, ..., n, let the functions a_ij ∈ C^1(R^n), b_j ∈ C(R^n), c ∈ C(R^n) be such that:
1. For all x ∈ R^n, a_ij(x) = a_ji(x), 1 ≤ i, j ≤ n. (1.2)
2. There exist constants µ_1 and µ_2 with 0 < µ_1 ≤ µ_2 such that
µ_1 |ξ|^2 ≤ Σ_{i,j=1}^n a_ij(x) ξ_i ξ_j ≤ µ_2 |ξ|^2 for all x ∈ R^n, ξ ∈ R^n.
We introduce the uniformly elliptic operator L as
Lu = Σ_{i,j=1}^n a_ij(x) u_{x_i x_j} + Σ_{j=1}^n b_j(x) u_{x_j} + c(x) u. (1.6)
The principal part of this operator is
L_0 u = Σ_{i,j=1}^n a_ij(x) u_{x_i x_j}. (1.7)
Let k > 0 be the wave number and let u = u(x, k) be the complex valued wave field of wave number k, generated by a source function with separated variables g(k) f(x), where g ∈ C^1[0, ∞) and f ∈ C^1(R^n). The wave field u(x, k) ∈ C^2(R^n), k > 0, satisfies the equation
Lu + k^2 n^2(x) u(x, k) = g(k) f(x), x ∈ R^n, (1.8)
and the Sommerfeld radiation condition
∂_{|x|} u(x, k) − ik u(x, k) = o(|x|^{(1−n)/2}), |x| → ∞. (1.9)
Here, the function n ∈ C^1(R^n) is the refractive index. We assume that
n(x) = 1 for x ∈ R^n \ Ω. (1.10)
See [15] for the well-posedness of problem (1.8)-(1.9) in the case L = ∆. Given numbers k and k such that 0 < k < k < ∞ and assuming that the function g : [k, k] → R is known, we are interested in the following problem.
Problem 1 (Inverse source problem with Cauchy data). Assume that conditions (1.1)-(1.10) are in place. Reconstruct the function f(x) for x ∈ Ω, given the functions F and G, where F(x, k) = u(x, k), x ∈ ∂Ω, k ∈ (k, k), (1.11) and G(x, k) = ∂_z u(x, k), x ∈ Γ+, k ∈ (k, k), (1.12) and u(x, k) is the solution of problem (1.8), (1.9).

Problem 1 is somewhat over-determined due to the additional data G(x, k) measured on Γ+ × [k, k]. We need these data for the convergence theorem. However, we observe in our numerical experiments that our method works well without the additional data. More precisely, in addition to Problem 1, we also consider the following non-overdetermined problem.

Problem 2 (Inverse source problem with Dirichlet data). Assume that conditions (1.1)-(1.10) are in place. Reconstruct the function f(x) for x ∈ Ω, given only the function F(x, k) = u(x, k), x ∈ ∂Ω, k ∈ (k, k). (1.13)
Remark 1.1. In fact, the Dirichlet boundary data (1.13) implicitly contain the Neumann boundary data for the function u on the entire boundary ∂Ω. Indeed, for each k ∈ (k, k) one can uniquely solve equation (1.8) with the radiation condition (1.9) and boundary condition (1.13) in the unbounded domain R n \ Ω. The resulting solution provides the Neumann boundary condition ∂ ν u(x, k) for x ∈ ∂Ω, k ∈ (k, k), where ν is the unit outward normal vector at ∂Ω.
This and similar inverse source problems for Helmholtz-like PDEs were studied both analytically and numerically in [3,21]. In particular, in the works [19,21] uniqueness and stability results were proven for the case of constant coefficients in (1.8), and it was also shown that the stability estimate improves as the frequency grows. In [22] uniqueness was proven for nonconstant coefficients. To the best of our knowledge, past numerical methods for these problems are based on various minimizations of mismatched least squares functionals. Good quality numerical solutions were obtained in [4,5,22]. However, those minimization procedures do not allow one to establish convergence rates of minimizers to the exact solution as the noise in the data tends to zero. On the other hand, we refer here to the work [39], in which a non-iterative method, based on a fresh idea, was proposed to solve the inverse source problem for a homogeneous medium. Uniqueness and stability results were proven in [39] and good quality numerical results were presented.
In this paper, we solve the inverse source problem for inhomogeneous media. We propose a new numerical method which enables us to establish the convergence rate of minimizers of a certain functional of the Quasi-Reversibility Method (QRM) to the exact solution as the noise in the data tends to zero. Our method is based on five ingredients: 1. Elimination of the unknown source function f(x) from the original PDE via differentiation with respect to k of the function u(x, k)/g(k).
2. The use of an orthonormal basis in L^2(k, k), recently published in [28], to obtain an over-determined boundary value problem for a system of coupled elliptic PDEs of the second order.
3. The use of the QRM to find an approximate solution of that boundary value problem. 4. The formulation and the proof of a new Carleman estimate for the operator L 0 in (1.7).
5. In the case of Problem 1, the use of this Carleman estimate to establish the convergence rate of the minimizers of the QRM to the exact solution as the noise in the data tends to zero.
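Item 1 above can be written out in one line. A sketch, assuming g(k) ≠ 0 on the whole interval of wave numbers: dividing (1.8) by g(k) and setting v(x, k) = u(x, k)/g(k) makes the right hand side independent of k, so one differentiation in k removes it.

```latex
% Divide (1.8) by g(k), with v(x,k) = u(x,k)/g(k):
f(x) \;=\; L v(x,k) + k^{2} n^{2}(x)\, v(x,k).
% Differentiate in k: the left hand side is k-independent, hence
0 \;=\; L\,\partial_k v(x,k) + k^{2} n^{2}(x)\,\partial_k v(x,k) + 2k\, n^{2}(x)\, v(x,k).
```

These two relations correspond to equations (2.4) and (2.5) of Section 2.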
Recently, a similar idea was applied to develop a new numerical method for X-ray computed tomography with a special case of incomplete data [31], as well as to develop a globally convergent numerical method for a 1D coefficient inverse problem [29]. The above items 1, 4 and 5 have roots in the Bukhgeim-Klibanov method, which was originally introduced in [12]. Even though there now exists a significant number of publications on this method, we cite only a few of them [7,8,33,34], since the current paper is not about that method. The original goal of [12] was to prove uniqueness theorems for coefficient inverse problems. Nowadays, however, ideas of this method are also applied to construct numerical methods for coefficient inverse problems and other ill-posed problems; see, e.g., [28,29,30,35].
The quasi-reversibility method was first introduced by Lattès and Lions [36] for the numerical solution of ill-posed problems for partial differential equations. It has been studied intensively since then; see, e.g., [6,9,10,11,14,16,26,32,34,37]. A survey on this method can be found in [27]. The solution of the system of item 2 above obtained by the quasi-reversibility method is called the regularized solution in the theory of ill-posed problems [38]. Thus, by item 5, the new Carleman estimate allows us to prove convergence of regularized solutions to the exact one as the noise in the data tends to zero. We do this only for Problem 1. In contrast, we do not investigate the convergence of our method for Problem 2; this problem is studied only numerically below.
The paper is organized as follows. In Section 2, we present the numerical methods to solve Problems 1 and 2. Next, in Section 3, we discuss the QRM for Problem 1. We prove a new Carleman estimate in Section 4. In Section 5, we prove the convergence of the regularized solutions to the true one. In Section 6, we describe the numerical implementation for both Problems 1 and 2 and present numerical results.

Numerical Methods for Problems 1 and 2
We first recall a special basis of L^2(k, k), which was first introduced in [28].

A special orthonormal basis in L^2(k, k)
Consider the functions φ_m(k) = k^{m−1} e^k, m = 1, 2, ..., which are linearly independent on [k, k]. Applying the Gram-Schmidt orthonormalization procedure to the sequence {φ_m}_{m=1}^∞, we obtain an orthonormal basis in L^2(k, k), denoted by {Ψ_m}_{m=1}^∞. It is not hard to verify that for each m the function Ψ_m(k) has the form Ψ_m(k) = P_{m−1}(k) e^k, where P_{m−1} is a polynomial of degree m − 1. This leads to the following result, which plays an important role in our analysis.
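As an illustration (a numerical sketch, not used in the analysis), the Gram-Schmidt step can be reproduced on a grid. We assume here the generating functions φ_m(k) = k^{m−1} e^k of [28], and borrow the interval [1.5, 4.5] from the numerical section:

```python
# Numerical Gram-Schmidt sketch for the special basis of [28].
# Assumptions: phi_m(k) = k^(m-1) * exp(k); interval [1.5, 4.5] as in Section 6.
import numpy as np

k_lo, k_hi, M = 1.5, 4.5, 2001
k = np.linspace(k_lo, k_hi, M)
w = np.full(M, (k_hi - k_lo) / (M - 1))   # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

def inner(f, g):
    """Discrete L2(k_lo, k_hi) inner product."""
    return np.sum(f * g * w)

N = 5
phi = [k**m * np.exp(k) for m in range(N)]   # phi_1, ..., phi_N on the grid

Psi = []
for p in phi:
    p = p.copy()
    for q in Psi:                 # modified Gram-Schmidt: remove earlier components
        p = p - inner(p, q) * q
    Psi.append(p / np.sqrt(inner(p, p)))

# Gram matrix of the resulting functions: should be the N x N identity
G = np.array([[inner(Psi[i], Psi[j]) for j in range(N)] for i in range(N)])
```

Each Ψ_m here is, up to quadrature error, of the form P_{m−1}(k) e^k with a polynomial P_{m−1} of degree m − 1, in line with the statement above.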
Proposition 2.1 (see [28]). For m, r ≥ 1, the integrals of Ψ'_m(k) Ψ_r(k) over (k, k) can be computed explicitly. Consequently, for any integer N > 1, the N × N matrix D_N in (2.1) has determinant 1 and is therefore invertible.
This basis was first introduced in [28]. It was then successfully used to numerically solve nonlinear coefficient inverse problems [30,29] and the inverse problem of X-ray tomography with incomplete data [31].

Truncated Fourier series
Let L be the elliptic operator defined in (1.6) and set v(x, k) = u(x, k)/g(k), assuming that g does not vanish on [k, k]. Then (1.8) implies
f(x) = Lv(x, k) + k^2 n^2(x) v(x, k), x ∈ Ω. (2.4)
To eliminate the unknown right hand side f(x) from equation (2.4), we differentiate it with respect to k and obtain
L ∂_k v(x, k) + k^2 n^2(x) ∂_k v(x, k) + 2k n^2(x) v(x, k) = 0, x ∈ Ω. (2.5)
In Problem 2, only condition (2.6) holds. Fix an integer N ≥ 1. Recalling the orthonormal basis {Ψ_m}_{m=1}^∞, we approximate
v(x, k) ≈ Σ_{m=1}^N v_m(x) Ψ_m(k), (2.8)
∂_k v(x, k) ≈ Σ_{m=1}^N v_m(x) Ψ'_m(k), (2.9)
where v_m(x) = ∫ v(x, k) Ψ_m(k) dk. (2.10)
Similarly to [28,30,29,31], we assume here that the truncated Fourier series (2.8) satisfies equation (2.4) and that the truncated series (2.8) and (2.9), taken together, satisfy equation (2.5). This is our approximate mathematical model. Since we work with a numerical method, we accept this approximation scheme. Our main goal below is to find numerically the Fourier coefficients v_m(x), m = 1, 2, ..., N, of v(x, k); see (2.10). Once those Fourier coefficients are approximated, the target unknown function f(x) can be approximated as the right hand side of (2.4).
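The truncation mechanics (compute coefficients, keep N terms) can be checked on a toy example. The sketch below deliberately uses orthonormalized Legendre polynomials on [1.5, 4.5] as a stand-in for the special basis, and cos(k)/k as a smooth surrogate for v(x, ·) at one fixed x; both choices are assumptions made only for this illustration:

```python
# Toy check of the truncated-series approximation v ~ sum_{m<N} v_m * psi_m.
import numpy as np

k_lo, k_hi, M = 1.5, 4.5, 1001
k = np.linspace(k_lo, k_hi, M)
t = 2.0 * (k - k_lo) / (k_hi - k_lo) - 1.0       # map [k_lo, k_hi] -> [-1, 1]

w = np.full(M, (k_hi - k_lo) / (M - 1))          # trapezoid weights
w[0] *= 0.5
w[-1] *= 0.5

def psi(m):
    """Degree-m Legendre polynomial, orthonormal in L2(k_lo, k_hi)."""
    c = np.zeros(m + 1)
    c[m] = 1.0
    return np.polynomial.legendre.legval(t, c) * np.sqrt((2 * m + 1) / (k_hi - k_lo))

v = np.cos(k) / k                                 # smooth surrogate for v(x, .)
errs = []
for N in (2, 5, 10):
    coeffs = [np.sum(v * psi(m) * w) for m in range(N)]   # v_m = <v, psi_m>
    v_N = sum(c * psi(m) for m, c in enumerate(coeffs))   # truncated series
    errs.append(float(np.max(np.abs(v - v_N))))
# errs should drop quickly as N grows
```

For a smooth function of k the truncation error decays rapidly with N, which is the behavior the approximate model relies on.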
Remark 2.3. The number N is chosen numerically. Proving convergence of our method as N → ∞ is very challenging, and such proofs are very rare in the field of ill-posed problems; the intrinsic reason for this is the ill-posedness itself. Therefore, we omit the proof of convergence of our method as N → ∞. Nevertheless, the rich numerical experience of a number of previous publications, see, e.g., [24,25,23,35,30,29,31], indicates that this truncation technique still leads to good numerical results.
We now compare numerically the true function v(x, k) with its approximation (2.8) and observe that their difference is small; see Figure 1 for an illustration.

Figure 1: Comparison of the true function v(x, k) with its truncated series Σ_{m=1}^N v_m(·)Ψ_m(k) in Test 5; see Section 6. In this test, we consider the case n = 2 and Ω = (−2, 2)^2. On Ω, we arrange a uniform grid of 121 × 121 points, numbered from 1 to 121^2. In (a) and (b), we respectively show the real and imaginary parts of the two functions at the 300 points numbered from 7170 to 7470.

It is evident that reconstructing the first 10 Fourier coefficients of v(x, k) is sufficient to solve our inverse source problems.
Plugging (2.8) and (2.9) into equation (2.5), we obtain (2.11). For each m = 1, ..., N, we multiply both sides of (2.11) by the function Ψ_m(k) and then integrate the resulting equation with respect to k over [k, k]. We obtain (2.12) for all x ∈ Ω, m = 1, 2, ..., N. With the notation (2.13)-(2.14), relations (2.2) and (2.12)-(2.14) imply (2.15). It follows from (2.6) and (2.7) that in the case of Problem 1 the vector function V(x) satisfies the two boundary conditions (2.18) and (2.19), while in the case of Problem 2 only boundary condition (2.18) takes place. These arguments lead to Algorithms 1 and 2 to solve Problems 1 and 2, respectively.
In the next section, we briefly discuss the QRM used in Step 3 of Algorithm 1. We mention that the QRM is an efficient approach to solve partial differential equations with over-determined boundary data.
The last step of both Algorithms 1 and 2 is to compute the reconstructed function f by (2.4).

The quasi-reversibility method (QRM)
In this section, we present the QRM for the numerical solution of Problem 1. By saying below that a vector valued function belongs to a Hilbert space, we mean that each of its components belongs to this space. The norm of this vector valued function in that Hilbert space is naturally defined as the square root of the sum of squares of the norms of its components. Recall that by Proposition 2.1 the matrix D_N is invertible. Therefore, by (2.15), (2.18) and (2.19) we need to find an approximate solution of the following over-determined boundary value problem with respect to the vector function V(x). To do this, we consider the following minimization problem.

Problem (Minimization Problem). Let ε ∈ (0, 1) be the regularization parameter. Minimize the functional J_ε(V) defined in (3.4).

We assume that the set of vector functions indicated in the formulation of this problem is nonempty; i.e., we assume that there exists an N-dimensional vector valued function Φ such that the set (3.5) is nonempty. Clearly H^2_{0,#}(Ω) is a closed subspace of the space H^2(Ω). Let V_{min,ε} be any minimizer of the functional (3.4), if it exists. Denote W_{min,ε} = V_{min,ε} − Φ. By the variational principle, the following identity (3.9) holds for all P ∈ H^2_{0,#}(Ω). The left hand side of identity (3.9) generates a new scalar product {·,·} in the space H^2_{0,#}(Ω). The corresponding norm {·} is equivalent to the standard norm of H^2(Ω). Hence, (3.9) is equivalent to (3.10). On the other hand, the right hand side of (3.10) can be estimated as in (3.11), where the number C_1 = C_1(L, D_N^{−1} S_N, n^2, ε) > 0 depends only on the listed parameters. Hence, the right hand side of (3.10) can be considered as a bounded linear functional l_Φ(P) : H^2_{0,#}(Ω) → C. By the Riesz representation theorem, there exists a unique vector function Q ∈ H^2_{0,#}(Ω) such that {W_{min,ε}, P} = {Q, P} for all P ∈ H^2_{0,#}(Ω), which directly yields identity (3.10). As a consequence, W_{min,ε} exists and, indeed, W_{min,ε} = Q. Finally, by (3.8), V_{min,ε} = W_{min,ε} + Φ.
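The existence-and-uniqueness argument above has a transparent finite-dimensional analogue. In the sketch below, A and b are synthetic stand-ins (random, for illustration only) for the discretized operator and data, not the actual QRM discretization; the point is that the variational identity for a functional of the form J_ε(V) = ‖AV − b‖² + ε‖V‖² reduces to a symmetric positive definite linear system, so the minimizer exists and is unique, exactly as the Riesz theorem yields in the continuous setting:

```python
# Finite-dimensional sketch of a QRM-type minimization: J_eps(V) = |A V - b|^2 + eps |V|^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 25))   # synthetic over-determined "operator"
b = rng.standard_normal(40)         # synthetic "data"
eps = 1e-3                          # regularization parameter

# Normal equations: (A^T A + eps I) V = A^T b; the matrix is symmetric positive
# definite for any eps > 0, so the minimizer exists and is unique.
V = np.linalg.solve(A.T @ A + eps * np.eye(25), A.T @ b)

def J(x):
    r = A @ x - b
    return float(r @ r + eps * (x @ x))

# Variational identity: the derivative of J_eps vanishes at V in every direction P.
P = rng.standard_normal(25)
h = 1e-6
directional = (J(V + h * P) - J(V - h * P)) / (2 * h)
```

Since J_ε is strictly convex for ε > 0, any perturbation of V strictly increases the functional value.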
The minimizer V_{min,ε} of J_ε, subject to the constraints (3.2) and (3.3), is called the regularized solution of problem (3.1)-(3.3). In the theory of ill-posed problems, it is important to prove convergence of regularized solutions to the true one as the noise in the data tends to zero [38]. In the next section, we establish a Carleman estimate for general elliptic operators. This estimate is essential for the proof of that convergence result for our problem; see Section 5.

A Carleman estimate for general elliptic operators
For brevity, we assume that the function u in Theorem 4.1 is real valued. In fact, the theorem also holds for complex valued functions u: it suffices to apply the real valued version separately to the real and imaginary parts of u. Hence, in this section, we redefine the space H^2_{0,#}(Ω) in (3.7) as the set of all real valued functions satisfying the same constraints. Recall the uniformly elliptic operator L_0 in (1.7).

Theorem 4.1. Assume that conditions (1.2)-(1.3) hold and also that a_ij ∈ C^1(Ω). Suppose, in addition, that condition (4.1) holds. Then there exist numbers λ_0 and p_0, both of which depend only on the listed parameters, such that the Carleman estimate (4.2) holds for all λ ≥ λ_0, p ≥ p_0 and u ∈ H^2_{0,#}(Ω). Here, the constant in (4.2) depends only on the listed parameters.
Proof. Below in this proof, u ∈ C^2(Ω) ∩ H^2_{0,#}(Ω); the case u ∈ H^2_{0,#}(Ω) is then obtained via a density argument. In this proof, C_2 > 0 denotes different positive numbers depending only on the parameters listed above. On the other hand, everywhere below C_3 = C_3(µ_1, µ_2, b, R, max_ij ‖a_ij‖_{C^1(Ω)}) > 0 also denotes different positive constants depending only on the listed parameters but, unlike C_2, independent of p. Also, in this proof O(1/λ) denotes different functions belonging to C^1(Ω) and satisfying the estimate |O(1/λ)| ≤ C_3/λ. Below, U_r · ν denotes the scalar product of the vectors U_r and ν in R^n; recall that ν is the outward pointing unit normal vector on ∂Ω. In fact, it follows from the proof that the boundary integrals in (4.4) equal zero for r = 1, 2, while they are non-negative starting from r = 3.
Using straightforward calculations and (1.7), we obtain the identity (4.5). Denote the terms in the right hand side of (4.5) by y_1, y_2, y_3, y_4. It then follows from (4.5) that inequality (4.10) holds. We now estimate from below each term in the right hand side of inequality (4.10) separately. We do this in several steps.
Step 1. Estimate from below of the quantity 2y_1 y_3 (z + b)^{2−p}. By (4.6) and (4.7), we have (4.11). By the standard rules of differentiation, we extract a divergence term; see (4.4) for U_1.
Next, we estimate the second term in the right hand side of (4.11); see (4.4) for U_2. Now, U_2 · ν = 0 for x ∈ ∂Ω for two reasons: first, because v_z(x) = 0 for x_i = ±R and, second, due to condition (4.1). Hence, owing to the first reason, we do not actually use condition (4.1) here yet. Next, we estimate the term −4λp (z + b) a^2_nn v_z v_zz in (4.11), which gives (4.14). Combining this with (4.11)-(4.14), we conclude that (4.15) holds; see (4.4) for U_3. Next, we obtain (4.16); see (4.4) for U_4. It follows from (4.12)-(4.16) that (4.17) holds; see (4.4) for U_5.
In addition to the term div U_12, the right hand side of (4.30) has one negative and one positive term. However, except for divergence terms (div), one must have only positive terms in the right hand side of any Carleman estimate. Therefore, we now perform Step 5.
Step 5. Estimate from below of the negative term in the right hand side of (4.30).

3) as well as inequalities like
Step 6. This is the final step. Multiply estimate (4.33) by 4C_3 λp/µ_1 and sum the result with (4.30). We obtain (4.34); see (4.4) for U_14. We can choose p_0 so large that, in addition to (4.19), inequality (4.35) holds. We then estimate the left hand side of (4.34) from above. Combining this with the right hand side of (4.34), integrating the resulting pointwise inequality over the domain Ω and taking into account (4.4), (4.35) and Gauss' formula, we obtain the target estimate (4.2).
Corollary 4.1. There exist numbers, depending only on the listed parameters, such that the Carleman estimate of Theorem 4.1 remains valid with L_0 replaced by the full operator L. This corollary follows immediately from Theorem 4.1, as well as from the well known fact (see, e.g., Lemma 2.1 in [34]) that a Carleman estimate depends only on the principal part of a PDE operator, while the lower order terms of this operator can be absorbed into the estimate.

Convergence Analysis
While Theorem 3.1 ensures the existence and uniqueness of the solution of the Minimization Problem (Section 3), it does not claim convergence of minimizers, i.e. regularized solutions, to the exact solution as the noise in the data tends to zero. At the same time, such a convergence result is obviously important. However, the convergence theorem is much harder to prove than Theorem 3.1. Indeed, while only the variational principle and the Riesz representation theorem are used in the proof of Theorem 3.1, a different apparatus is required for the convergence analysis. This apparatus is based on the Carleman estimate of Theorem 4.1. In Section 5.1, we establish the convergence rate of minimizers.

Convergence rate
Following one of the main principles of regularization theory [38], we now assume that the vector functions F(x) and G(x) in (3.2) and (3.3) are given with noise. More precisely, let Φ(x) ∈ H^2(Ω) be the function defined in (3.5). We assume that it is given with noise of level δ ∈ (0, 1), i.e.

Numerical Implementation
In this section, we test our method in the 2-D case. The domain Ω is set to be the square Ω = (−R, R)^2 with R = 2. Let M_x = 120 and h_x = 2R/M_x. We arrange a uniform grid of (M_x + 1) × (M_x + 1) points in Ω. In this section, we set k = 1.5 and k = 4.5. The interval [k, k] is uniformly divided into M_k = 150 sub-intervals, whose end points k_0 = 1.5 < k_1 < ... < k_{M_k} = 4.5 are equally spaced with step 3/M_k = 0.02. In all numerical tests of this section, we computationally simulate the data for the inverse problem by solving equation (1.8) in the square Ω with the boundary condition on ∂Ω generated by (1.9), i.e.
Hence, we do not specify in this section the operator L and the function n^2(x) outside of Ω. For brevity, we consider only the isotropic case, i.e. L = ∆ for x ∈ Ω. To show that our method is applicable to the case of inhomogeneous media, we choose the function n^2(x) in all numerical tests below as: n^2(x) = 1 + 0.1 sin(3|x|^2)/(3|x|^2 + 1) for all x ∈ Ω.
We choose N = 10 in (2.10) by a trial and error procedure. If, for example, N = 5, then our reconstructed functions f(x) are not satisfactory, while choosing N > 10 does not further enhance the accuracy of the computed functions. We also refer here to Figure 1.
Remark 6.1 (The choice of the interval of wave numbers). The length of each side of the square Ω is 2R = 4 units. The longest wavelength λ_long = 2π/k = 2π/1.5 ≈ 4.19 is about 4 units. The upper bound of the wave number, k = 4.5, is set so that the shortest wavelength λ_short = 2π/4.5 ≈ 1.4 is in a range compatible with the maximal l_max and minimal l_min sizes of the tested inclusions. More precisely, we choose λ_short ∈ (0.7 l_max, 1.45 l_min) and λ_long/λ_short = 3.
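The arithmetic of this remark is easy to verify (a quick check; the only formula used is the standard relation λ = 2π/k):

```python
# Wavelength bookkeeping for the wave-number interval [1.5, 4.5].
import math

k_lo, k_hi = 1.5, 4.5
lam_long = 2 * math.pi / k_lo    # ~4.19, comparable with the side length 2R = 4
lam_short = 2 * math.pi / k_hi   # ~1.4
ratio = lam_long / lam_short     # equals k_hi / k_lo = 3
```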
Remark 6.2. Recall that while in Problem 1 we use both functions F (x, k) and G(x, k) in (6.4), (6.5), in Problem 2 we use only the Dirichlet boundary condition F (x, k), see (1.11)-(1.13). However, it follows from boundary condition (6.3) that the Neumann boundary condition is ∂ ν u(x, k) | ∂Ω = ikF (x, k). This explains why we computationally observe the uniqueness of our numerical solution of Problem 2.

The inverse problem
In this section, we describe the numerical implementation of the minimization procedure for the functional J. We use the form of J given in (6.6). The functional J in (6.6) is slightly different from the one in (3.4). First, we do not use here the matrix D_N^{−1}. This matrix is convenient for the theoretical results above, but it is inconvenient in computations since it contains large numbers for N = 10. Second, we replace the term ‖V‖^2_{H^2(Ω)} in (3.4) by the term ‖V‖^2_{L^2(Ω)}, because the L^2(Ω)-norm is easier to work with computationally than the H^2(Ω)-norm. We have not observed any instabilities, probably because the number 121 × 121 of grid points we use is not too large and all norms in finite dimensional spaces are equivalent. The regularization parameter in our computations was found by a trial and error procedure: ε = 10^{−5}.
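As a small sketch of the finite-difference machinery (an illustration, not the actual code of this section), the second derivatives entering the discretized functional can be approximated by the standard 3-point stencil on the grid with h_x = 2R/M_x; the test function sin(x) is chosen here only to check the stencil's accuracy:

```python
# Standard 3-point second-derivative stencil on the uniform grid of Section 6.
import numpy as np

R, Mx = 2.0, 120
hx = 2 * R / Mx
x = np.linspace(-R, R, Mx + 1)

u = np.sin(x)                       # smooth test function on the grid
uxx = np.empty_like(u)
uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / hx**2   # interior stencil
uxx[0] = uxx[1]                     # crude one-sided copies at the ends
uxx[-1] = uxx[-2]

# interior error against the exact second derivative (sin)'' = -sin
err = float(np.max(np.abs(uxx[1:-1] + np.sin(x)[1:-1])))
```

The interior error is O(h_x^2), which is sufficient at this grid resolution.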
We write the derivatives involved in (6.6) via finite differences. Next, we minimize the resulting functional with respect to the values of the vector valued function at the grid points. The finite difference approximation of the functional J(V) is (6.7), where d_mn and s_mn are the elements of the matrices D_N and S_N in (2.1) and (2.14), respectively. Introduce the "line up" version V of the set {v_m(x_i, y_j)}, where the flat index is m̄ = (i − 1)(M_x + 1)N + (j − 1)N + m. (6.8)
It is obvious that the minimizer of J satisfies the equation (6.9). Here, 0 is the (M_x + 1)^2 N dimensional zero vector. Next, we consider the "line up" version of the first condition in (2.18): the values v_m(x_i, y_j) are known at the boundary grid points, where m̄ is as in (6.8). Hence, let D be the (M_x + 1)^2 N × (M_x + 1)^2 N diagonal matrix whose m̄-th diagonal entries equal 1 for those boundary indices and 0 otherwise. The Dirichlet boundary constraint on the vector V becomes DV = F̃. (6.10) Here, the vector F̃ is the "line up" version of the data F_N, defined in the same manner as V in (6.8).
We next implement the constraint on V in (2.19). This constraint provides the information (6.11), where m̄ is as in (6.8) and m̄′ is the corresponding index, given by (6.12), for 1 ≤ i ≤ M_x + 1 and j = M_x + 1. We rewrite (6.11) as (6.13), where G̃ is the "line up" version of G_N and the matrix N is defined as follows: 1. N_{m̄ m̄} = 1/h_x and N_{m̄ m̄′} = −1/h_x for m̄ and m̄′ given by (6.8) and (6.12), respectively, 1 ≤ i ≤ M_x + 1, j = M_x + 1.
2. All other entries of N are 0.
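The "line up" indexing used throughout this subsection can be sketched as a small self-check; line_up is a hypothetical helper name mirroring the 1-based formula m̄ = (i − 1)(M_x + 1)N + (j − 1)N + m:

```python
# Self-check of the "line up" flattening of {v_m(x_i, y_j)} into one vector V.
Mx, N = 120, 10                      # grid and truncation sizes from this section

def line_up(i, j, m):
    """1-based flat index of v_m(x_i, y_j), as in (6.8)."""
    return (i - 1) * (Mx + 1) * N + (j - 1) * N + m

size = (Mx + 1) ** 2 * N             # total length of V: 121 * 121 * 10
seen = set()
for i in range(1, Mx + 2):
    for j in range(1, Mx + 2):
        for m in range(1, N + 1):
            seen.add(line_up(i, j, m))
# the map is a bijection onto {1, ..., size}
```

The bijectivity of this map is what lets the diagonal matrix D and the matrix N above act on single entries of V.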
In practice, we compute V by solving (6.14) in the case of Problem 1 and by solving (6.15) in the case of Problem 2. Having the vector V, we compute the vector V_N via (6.7). Then, we follow Steps 4 and 5 of Algorithms 1 and 2 to compute the functions v_comp via (2.8) and then f_comp by taking the real part of (2.4) at k = 1.5.
Remark 6.3 (Remark on Problem 2). We use (6.15) only for convenience, since we do not want a significant extra programming effort, given that we already have the computer code for solving (6.14).

Tests
In Tests 1 and 2 below, we apply our method to Problem 1; in Tests 3-5, we apply it to Problem 2. Whenever we discuss below the accuracy of the values of the positive and negative parts of the inclusions, we compare the maximal positive and minimal negative computed values with the true ones. Postprocessing was not applied in any of the tests presented below.
1. Test 1. Problem 1. Two inclusions with different shapes. The function f_true is supported on two inclusions, and g_true(k) = ik for k ∈ [k, k]. We test the reconstruction of the locations, shapes and positive/negative values of the function f for two different inclusions: one of them is a rectangle and the other one is a disk. In this case, the function f_true attains both positive and negative values. The numerical solution for this case is displayed in Figure 2. It is evident that, for this test, our method for Problem 1 provides good numerical results: the reconstructed locations, shapes and positive/negative values of the function f_comp are of a good quality.

2. Test 2. Problem 1. Four inclusions. The source function f = 1 in the two "upper" inclusions and f = −1 in the two "lower" inclusions.
The reconstruction is displayed in Figure 3. The source function is reconstructed well in the sense of locations, shapes and values.

3. Test 3. Problem 2. A "positive" square and a "negative" disk, with 5% noise in the data. The true f_true and computed f_comp source functions are displayed in Figure 4. We can see that the computed shapes of the "positive" square and the "negative" disk are quite acceptable. Given that the noise in the data is 5%, the errors in the values of the function f_comp are also acceptable.
4. Test 4. Problem 2. Ring. We consider a model similar to that of the previous test. The main difference is that the "outer positive" part of the true source function is a ring rather than a square. The function f_true is

f_true(x, y) = 1 if 0.52^2 < x^2 + y^2 < 1.2^2, −2 if x^2 + y^2 ≤ 0.52^2, and 0 otherwise, (6.16)

and g_true(k) = k^2 for all k ∈ [k, k].
In Figure 5, one can see that the source function is computed rather accurately. The values of both "positive" and "negative" parts of the inclusion are computed with a good accuracy.
The numerical results for this test are displayed in Figure 6. It is evident that our method works well for this interesting case.

Figure 6: Test 5. The true and reconstructed source functions and the true and reconstructed functions v(x, k) = u(x, k)/g(k) at k = 1.5. The true and reconstructed maximal positive values of the source function are 8.10 and 7.36 (relative error 9.1%), respectively. The true and reconstructed minimal negative values of the source function are −6.55 and −5.48 (relative error 16.0%), respectively.