NUMERICAL RECOVERY OF MAGNETIC DIFFUSIVITY IN A THREE DIMENSIONAL SPHERICAL DYNAMO EQUATION

Abstract. This paper is concerned with the numerical recovery of the magnetic diffusivity in a three-dimensional (3D) spherical dynamo equation. We transform the ill-posed problem into an output least-squares nonlinear minimization with an appropriately chosen Tikhonov regularization, whose regularizing effects and mathematical properties are justified. The nonlinear optimization problem is then approximated by a fully discrete finite element method, and the convergence of the approximation is rigorously established.


1. Introduction. It is well known that many astrophysical bodies possess intrinsic magnetic fields, but only in the last few decades have people begun to understand more about the origin of these fields. A widely accepted theory is the so-called mean-field dynamo theory, which is modelled by the following nonlinear spherical dynamo equation (see [3]):

(1.1)  ∂B/∂t = R_α ∇×(f B) + R_m ∇×(u × B) − σ ∇×(β ∇×B)   in Ω × (0,T),
       ∇·B = 0   in Ω × (0,T),
       B·n = 0,  (∇×B)×n = 0   on ∂Ω × (0,T),
       B(x,0) = B_0(x)   in Ω,

where Ω = B_{r_o}(0)\B_{r_i}(0) ⊂ R^3, 0 < r_i < r_o < ∞, is the physical domain of interest. Here B_{r_o}(0) and B_{r_i}(0) denote the open balls centered at the origin with radii r_o and r_i respectively. The boundary ∂Ω = Γ_1 ∪ Γ_2 consists of the inner boundary Γ_1 and the outer boundary Γ_2, and n denotes the unit outward normal vector on ∂Ω. The functions B = B(x,t) and u = u(x,t) represent the magnetic field and the fluid velocity field respectively, f(x,t) is a model-oriented function, R_α is a dynamo parameter, R_m is the magnetic Reynolds number, σ is a constant, and the parameter β(x) is the magnetic diffusivity.
When u, f, σ, β and B_0 are given, one can solve system (1.1) for the behavior of the magnetic field B in Ω; this is usually called the direct dynamo problem. For numerical simulations and mathematical analysis of system (1.1), one may refer to [2,3,7,10,11,12] and the references therein; for the numerical analysis of some stochastic interface problems, we refer to [1,8] and the references therein. In many applications, however, the inverse dynamo problems are more interesting and practically important: the magnetic properties of the physical medium occupying Ω are unknown, yet knowing them is indispensable for a good understanding of the medium and of how the magnetic field B behaves in Ω. For example, in [5] the authors make use of the asymmetric time dependence and various statistical properties of polarity reversals of the earth's magnetic field to recover some of the parameters of the geodynamo.
In this work we consider the case where u, f, σ and B_0 are known, but the magnetic diffusivity β(x) is unavailable in Ω. In order to recover β(x), we need some extra measurement data of the magnetic field B. We assume that measurement data of B are available in some small subregion inside Ω over the time interval (0,T), which leads to the following inverse problem.
Inverse Problem I. Let ω be a subregion of Ω. Given the noisy measurement data

(1.2)  B(x,t) ≈ z^δ(x,t),   (x,t) ∈ ω × (0,T),

reconstruct the magnetic diffusivity β(x) in the entire domain Ω. Here δ is the noise level.

The rest of the paper is organized as follows. In Section 2, we transform Inverse Problem I into an optimization problem with Tikhonov regularization and demonstrate its regularizing effects. In Section 3, a fully discrete finite element method for approximating the continuous minimization problem is proposed, and the convergence analysis of the numerical solutions is carried out.
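To make the structure of Inverse Problem I concrete, the following sketch sets up a toy one-dimensional analogue: a diffusion equation with an unknown spatially varying diffusivity is solved forward in time, and noisy observations z^δ are taken only on a subregion ω. Everything here (the 1D model, the grid, the names `beta_true`, `omega`, `z_delta`, the noise level) is an illustrative assumption, not the paper's 3D dynamo system.

```python
import numpy as np

def forward(beta, B0, T=0.1, M=50):
    """Implicit-Euler solve of dB/dt = d/dx(beta dB/dx), B=0 on the boundary.
    beta holds the N+1 edge values of the diffusivity on a uniform grid."""
    N = len(B0)                      # number of interior nodes
    h = 1.0 / (N + 1)
    tau = T / M
    # Stiffness matrix for -d/dx(beta d/dx) with homogeneous Dirichlet BCs
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = (beta[i] + beta[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -beta[i] / h**2
        if i < N - 1:
            A[i, i + 1] = -beta[i + 1] / h**2
    B = B0.copy()
    snaps = [B.copy()]
    I = np.eye(N)
    for _ in range(M):
        B = np.linalg.solve(I + tau * A, B)   # one backward-Euler step
        snaps.append(B.copy())
    return np.array(snaps)                    # shape (M+1, N)

N = 49
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
beta_true = 1.0 + 0.5 * np.sin(np.pi * np.linspace(0.0, 1.0, N + 1))
B0 = np.sin(np.pi * x)
B = forward(beta_true, B0)

# Observations only on the subregion omega = (0.25, 0.75), noise level delta
omega = (x > 0.25) & (x < 0.75)
rng = np.random.default_rng(0)
delta = 1e-3
z_delta = B[:, omega] + delta * rng.standard_normal(B[:, omega].shape)

def misfit(beta):
    """Output least-squares data misfit for a candidate diffusivity."""
    Bc = forward(beta, B0)
    return 0.5 * np.sum((Bc[:, omega] - z_delta) ** 2)

print(misfit(beta_true))         # dominated by the noise alone
print(misfit(np.ones(N + 1)))    # wrong diffusivity: larger residual
```

The misfit at the true diffusivity is dominated by the observation noise, while a wrong diffusivity produces a visibly larger residual; inverting this map from partial, noisy observations is exactly what makes the problem ill-posed.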
We end this section with some useful notation and lemmas. For m ∈ R, H^m(Ω) is the usual Sobolev space, and we denote the vector-valued spaces H^m(Ω)^3 and L^m(Ω)^3 by the boldface symbols H^m(Ω) and L^m(Ω) respectively. We use (·,·) and ‖·‖_{m,Ω} to denote the scalar product in L^2(Ω) or L^2(Ω), and the norm of H^m(Ω) or H^m(Ω), respectively. Throughout this work, C denotes a generic positive constant.
In the following, we give a lemma for later use.

Lemma 1.1 (Aubin-Lions Lemma, [14], p.189). Let X_0, X be two Banach spaces and X_1 be a Hilbert space with X_0 ⊂ X ⊂ X_1, the injections being continuous and the injection of X_0 into X being compact. For finite numbers α_0, α_1 > 1, set

Y(0,T; α_0, α_1; X_0, X_1) = { v ∈ L^{α_0}(0,T; X_0) : dv/dt ∈ L^{α_1}(0,T; X_1) }.

Then the injection of Y(0,T; α_0, α_1; X_0, X_1) into L^{α_0}(0,T; X) is compact.

2. Tikhonov regularization and its regularizing effects. In this section we formulate the ill-posed Inverse Problem I stated in Section 1 as a stabilized minimization system and establish the existence of its solutions as well as their stability with respect to errors in the observation data. Before considering Inverse Problem I, we refer to [3] and recall the equivalent variational formulation of system (1.1) and its well-posedness.
Lemma 2.1. System (1.1) is equivalent to the following variational problem: for a.e. t ∈ (0,T), find B(·,t) ∈ H_0 := { A ∈ H^1(Ω) : A·n = 0 on ∂Ω } and p(·,t) ∈ L_0^2(Ω) such that B(·,0) = B_0(·) and

(2.1)  (∂B/∂t, A) + σ(β ∇×B, ∇×A) + γ(∇·B, ∇·A) − (p, ∇·A) = R_α(f B, ∇×A) + R_m(u × B, ∇×A)   for all A ∈ H_0,
       (∇·B, q) = 0   for all q ∈ L_0^2(Ω),

where p(x,t) is a Lagrange multiplier and γ is a constant. Moreover, we have the following stability estimate for the solution (B,p) of system (2.1):

‖B‖_{L^∞(0,T;H^1(Ω))} + ‖B‖_{H^1(0,T;L^2(Ω))} + ‖p‖_{L^2(0,T;L^2(Ω))} ≤ C,

with C independent of β ∈ K.

For convenience, we often write the solution of system (2.1) as (B(β), p(β)) to emphasize its dependence on β. Since Inverse Problem I is mathematically ill-posed, we formulate it as a stabilized minimization system with Tikhonov regularization:

(2.2)  min_{β ∈ K} J(β) = (1/2) ∫_0^T ∫_ω |B(β) − z^δ|^2 dx dt + (λ/2) ‖β‖_{1,Ω}^2,

where the constraint set is

K = { β ∈ H^1(Ω) : β_1 ≤ β(x) ≤ β_2 a.e. in Ω }.

Here β_1 and β_2 are two positive constants and λ > 0 is the regularization parameter.
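As a hedged illustration of how a constrained Tikhonov minimization of this kind can be attacked numerically, the sketch below recovers a diffusivity in a toy steady 1D problem by projected gradient descent: the box constraints β_1 ≤ β ≤ β_2 defining K are enforced by pointwise clipping, and a discrete H^1 norm plays the role of the regularization term. The 1D model, the parameters and the step-size rule are all illustrative assumptions; the paper itself analyzes only the continuous and finite element problems.

```python
import numpy as np

N = 30
h = 1.0 / (N + 1)
f = np.ones(N)
beta1, beta2, lam = 0.5, 3.0, 1e-4   # box constraints and regularization

def solve(beta):
    """Finite-difference solve of -(beta u')' = f, u(0)=u(1)=0;
    beta holds the N+1 edge values of the diffusivity."""
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = (beta[i] + beta[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -beta[i] / h**2
        if i < N - 1:
            A[i, i + 1] = -beta[i + 1] / h**2
    return np.linalg.solve(A, f)

beta_true = np.clip(1.0 + np.linspace(0.0, 1.0, N + 1), beta1, beta2)
z = solve(beta_true)                  # noise-free observations for simplicity

def J(beta):
    """Output least-squares misfit plus a discrete H^1 Tikhonov term."""
    db = np.diff(beta) / h
    return (0.5 * np.sum((solve(beta) - z) ** 2)
            + 0.5 * lam * h * (np.sum(beta ** 2) + np.sum(db ** 2)))

def num_grad(beta, eps=1e-6):
    """Central finite-difference gradient of J (adjoints would be faster)."""
    g = np.zeros_like(beta)
    for i in range(len(beta)):
        e = np.zeros_like(beta)
        e[i] = eps
        g[i] = (J(beta + e) - J(beta - e)) / (2 * eps)
    return g

# Projected gradient descent with a simple monotone step-size control
beta = np.full(N + 1, 1.5)            # initial guess inside K
J0 = J(beta)
step = 1.0
for _ in range(100):
    cand = np.clip(beta - step * num_grad(beta), beta1, beta2)
    if J(cand) < J(beta):
        beta, step = cand, 1.2 * step   # accept and grow the step
    else:
        step *= 0.5                     # reject and shrink the step
print(J(beta) < J0, beta.min() >= beta1, beta.max() <= beta2)
```

The clipping step is exactly the projection onto the convex set K, so every iterate remains admissible; in practice one would replace the finite-difference gradient by an adjoint computation or hand the problem to a bound-constrained solver.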
We are now ready to justify the regularizing effects of the nonlinear optimization system (2.2): it always admits solutions, and its solutions are stable with respect to the noise in the observation data z^δ. The first theorem establishes the existence of solutions.

Theorem 2.2. There exists at least one minimizer of the optimization problem (2.2).

Proof. Since J(β) ≥ 0 for any β ∈ K, there exists a minimizing sequence {β_n} ⊂ K such that lim_{n→∞} J(β_n) = inf_{β∈K} J(β). In particular {J(β_n)} is bounded, so the regularization term implies that {β_n} is bounded in H^1(Ω); hence there exist a subsequence, still denoted by {β_n}, and some β* such that

(2.4)  β_n ⇀ β* weakly in H^1(Ω), and β_n → β* strongly in L^2(Ω) and a.e. in Ω.
As K is a closed convex subset of H^1(Ω), it is weakly closed, and hence β* ∈ K. For convenience, let (B_n, p_n) = (B(β_n), p(β_n)). Due to Lemma 2.1, there exist a subsequence, still denoted by {(B_n, p_n)}, and some (B*, p*) such that

(2.5)  B_n ⇀ B* weakly-* in L^∞(0,T; H^1(Ω)) and weakly in H^1(0,T; L^2(Ω)),
(2.6)  p_n ⇀ p* weakly in L^2(0,T; L_0^2(Ω)).

Next we shall show that B* = B(β*) and p* = p(β*). To do so, we multiply both sides of (2.1) (with B replaced by B_n and β replaced by β_n) by a function η(t) ∈ C^1[0,T] and integrate in time to obtain (2.7). We first claim that the last term on the right-hand side of (2.7) tends to 0 as n → ∞. Indeed, by the Cauchy-Schwarz inequality it is bounded by quantities involving β_n − β*, which converge to zero as n → ∞ by (2.4) and Lebesgue's dominated convergence theorem.
Then we show that the remaining terms tend to 0 as n → ∞, provided ‖B_n − B*‖_{L^2(0,T;L^2(Ω))} → 0 as n → ∞; this strong convergence follows by direct computation from (2.5) and the compactness in Lemma 1.1. Finally, passing to the limit on both sides of (2.7) and (2.8), and making use of (2.5)-(2.6), (2.9) and (2.11), we obtain that (B*, p*) solves the variational system associated with β*.

Further, we prove that B*(x,0) = B_0(x), which together with the definition of (B(β*), p(β*)) implies that B* = B(β*) and p* = p(β*). Choosing η(t) ∈ C^1[0,T] with η(T) = 0, we obtain by integration by parts with respect to t an identity involving the initial values.

Inverse Problems and Imaging Volume 14, No. 5 (2020), 797-818

Letting n → ∞ in this identity and using (2.5), and comparing with the identity obtained by integrating the limit equation by parts with respect to t, we conclude that B*(x,0) = B_0(x). Therefore, from (2.4), (2.5), (2.12) and the weak lower semi-continuity of norms, we derive

J(β*) ≤ lim inf_{n→∞} J(β_n) = inf_{β∈K} J(β),

which implies that β* is a minimizer of the optimization problem (2.2).
The next theorem demonstrates that the minimization system (2.2) is indeed a stabilization of the ill-posed Inverse Problem I with respect to perturbations of the observation data. Similar stability results were established for inverse elliptic and parabolic problems in [4,9,17].

Theorem 2.3. Let {z_n} be a sequence such that z_n → z^δ in L^2(0,T; L^2(ω)) as n → ∞, and let {β_n} be minimizers of problem (2.2) with z^δ replaced by z_n. Then there exists a subsequence of {β_n} that converges in H^1(Ω), and the limit of every such convergent subsequence is a minimizer of (2.2).
Proof. By the definition of {β_n}, we have J_n(β_n) ≤ J_n(β) for every β ∈ K, where J_n denotes the functional in (2.2) with z^δ replaced by z_n; together with β_n ∈ K, this implies that {β_n} is bounded in H^1(Ω). As in the proof of Theorem 2.2, there exist a subsequence, still denoted by {β_n}, and some β* ∈ K such that β_n converges to β* weakly in H^1(Ω) and B(β_n) → B(β*) in L^2(0,T; L^2(ω)). Hence, since z_n → z^δ, the data-misfit terms converge, and using the weak lower semi-continuity of norms we deduce that J(β*) ≤ lim inf_{n→∞} J_n(β_n) ≤ lim_{n→∞} J_n(β) = J(β) for every β ∈ K. This yields that β* is a minimizer of system (2.2).
3. Finite element approximation and its convergence. In this section, we propose a fully discrete finite element approximation for solving the continuous minimization problem (2.2). For the space discretization, we consider a shape-regular triangulation T_h of Ω with mesh size h, consisting of tetrahedral elements. We then introduce the finite element spaces H_0h ⊂ H_0 and Q_0h ⊂ L_0^2(Ω) proposed in [3]; their definitions involve F_h, the set of all faces of the elements in T_h, the unit normal vector n_F of a face F ∈ F_h, and the spaces P_1(A) and P_2(A) of linear and quadratic polynomials on an element A, respectively. We approximate the magnetic field B in H_0h and the Lagrange multiplier p in Q_0h. Moreover, the constraint set K is approximated by

K_h = { β_h ∈ K : β_h|_A ∈ P_1(A) for all A ∈ T_h }.

To fully discretize system (2.7)-(2.8), we also need a time discretization. To this end, we divide the time interval [0,T] into M equally spaced subintervals with the nodal points t_n = nτ, n = 0, 1, ..., M, where τ = T/M. For a continuous mapping u : [0,T] → L^2(Ω), we define u^n = u(·, t_n) for 0 ≤ n ≤ M. For a given sequence {u^n}_{n=0}^M ⊂ L^2(Ω), we define its first-order backward finite differences by

∂_τ u^n = (u^n − u^{n−1})/τ,   n = 1, 2, ..., M,

and for a time-dependent function u we write ū^n = τ^{-1} ∫_{t_{n−1}}^{t_n} u(·,t) dt for its average value over (t_{n−1}, t_n).

We then introduce two important operators for later use. The first one is the so-called modified Scott-Zhang interpolation operator S_h (see [3] or [13]), which preserves the boundary condition in H_0: for any B ∈ H_0 we have S_h B ∈ H_0h, together with the stability and approximation properties

‖S_h B‖_{1,Ω} ≤ C ‖B‖_{1,Ω},   lim_{h→0} ‖S_h B − B‖_{1,Ω} = 0   for all B ∈ H_0.

The second one is the L^2 quasi-projection operator Π_h : L^2(Ω) → Q_h, which satisfies (see [16])

‖Π_h q‖_{0,Ω} ≤ C ‖q‖_{0,Ω},   lim_{h→0} ‖Π_h q − q‖_{0,Ω} = 0   for all q ∈ L^2(Ω).

Now we are ready to formulate the finite element approximation of the continuous minimization problem (2.2): find β_h ∈ K_h minimizing

(3.2)  J_h(β_h) = (τ/2) Σ_{n=0}^{M} α_n ‖B_h^n − z^δ(·, t_n)‖_{0,ω}^2 + (λ/2) ‖β_h‖_{1,Ω}^2,

where {(B_h^n, p_h^n)} ⊂ H_0h × Q_0h solves the fully discrete system

(3.3)  (∂_τ B_h^n, A_h) + σ(β_h ∇×B_h^n, ∇×A_h) + γ(∇·B_h^n, ∇·A_h) − (p_h^n, ∇·A_h) = R_α(f̄^n B_h^n, ∇×A_h) + R_m(ū^n × B_h^n, ∇×A_h),
       (∇·B_h^n, q_h) = 0

for all (A_h, q_h) ∈ H_0h × Q_0h, n = 1, 2, ..., M. Here ū^n ∈ L^∞(Ω) and f̄^n ∈ L^∞(Ω) are the average values of u and f over (t_{n−1}, t_n), and {α_n} are the coefficients of the composite trapezoidal rule, i.e., α_0 = α_M = 1/2 and α_n = 1 for 1 ≤ n ≤ M − 1.
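The two basic time-discretization ingredients above, the backward difference (u^n − u^{n−1})/τ and the composite trapezoidal weights α_n, can be checked on scalar data. The sketch below (with arbitrary smooth test functions as an assumption) verifies their expected first- and second-order accuracy.

```python
import numpy as np

# Time grid t_n = n*tau with tau = T/M, as in Section 3.  The weights
# alpha_0 = alpha_M = 1/2 and alpha_n = 1 otherwise reproduce the
# composite trapezoidal rule tau * sum_n alpha_n g(t_n) ~ int_0^T g dt.

T, M = 1.0, 100
tau = T / M
t = np.linspace(0.0, T, M + 1)

alpha = np.ones(M + 1)
alpha[0] = alpha[M] = 0.5

g = np.exp(t)                                 # smooth test integrand (assumption)
quad = tau * np.sum(alpha * g)                # composite trapezoidal rule
exact = np.exp(T) - 1.0                       # exact value of int_0^1 e^t dt
trap_err = abs(quad - exact)                  # second-order in tau

u = np.sin(t)
du = (u[1:] - u[:-1]) / tau                   # backward differences at t_1..t_M
bdf_err = np.max(np.abs(du - np.cos(t[1:])))  # first-order in tau

print(trap_err, bdf_err)
```

Doubling M should cut `trap_err` by roughly a factor of four and `bdf_err` by roughly a factor of two, matching the orders used in the convergence analysis.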
Before analyzing the convergence, we refer to [3] and present the well-posedness and stability estimates for the solutions to the discrete system (3.3).
where χ_n(t) is the characteristic function of the interval (t_{n−1}, t_n). Based on {B_h^n} and {p_h^n} we further define, for any (x,t) ∈ Ω × (t_{n−1}, t_n), the piecewise-constant interpolants

B_{hτ}(x,t) = B_h^n(x),   p_{hτ}(x,t) = p_h^n(x),

and the piecewise-linear interpolant

B̂_{hτ}(x,t) = B_h^{n−1}(x) + (t − t_{n−1}) ∂_τ B_h^n(x).

Elementary identities and estimates relating ∂_τ B_h^n, B_{hτ} and B̂_{hτ} then follow by direct computation, the triangle inequality, (3.4) and Lemma 3.7, tested against arbitrary φ(x) ∈ H_0 and ψ(t). We then derive some important convergence results.
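The piecewise-constant and piecewise-linear time interpolants built from {B_h^n} can be illustrated on scalar nodal values. In the sketch below (cosine data as a stand-in assumption), the two interpolants agree at the nodes and their L^2(0,T) distance is of order τ, which is the mechanism exploited in the limit passage.

```python
import numpy as np

# Piecewise-constant interpolant B_{h,tau}(t) = B^n on each time interval,
# versus the piecewise-linear B^{n-1} + (t - t_{n-1}) * (B^n - B^{n-1})/tau.
# Their pointwise gap is at most max_n |B^n - B^{n-1}| = O(tau).

T, M = 1.0, 200
tau = T / M
tn = np.linspace(0.0, T, M + 1)
Bn = np.cos(tn)                      # stand-in nodal values B^n (assumption)

def interval_index(t):
    """Index n of the interval containing t, clamped to 1..M."""
    n = np.ceil(t / tau).astype(int)
    return np.clip(n, 1, M)

def pw_const(t):
    return Bn[interval_index(t)]

def pw_linear(t):
    n = interval_index(t)
    return Bn[n - 1] + (t - tn[n - 1]) * (Bn[n] - Bn[n - 1]) / tau

ts = np.linspace(1e-9, T, 5000)
gap = np.sqrt(np.mean((pw_const(ts) - pw_linear(ts)) ** 2) * T)
print(gap)                           # shrinks like tau as M grows
```

Because the gap between the two interpolants vanishes with τ, any limit obtained for one of them as h, τ → 0 is automatically shared by the other.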
Finally, we shall prove the last equation (3.8).
In the following, we prove a crucial lemma for our purpose.
Next, we show that B* = B** and C*(x,t) = ∂B*(x,t)/∂t. First, by (3.12), we obtain an identity valid for any φ(x) ∈ H^1(Ω) and ψ(t) ∈ C_0^∞(0,T). On the other hand, integrating by parts with respect to t and using (3.10), we get a second identity which, together with (3.15), gives (3.16). Then, taking any φ(x) ∈ H^1(Ω) and ψ(t) ∈ C^1[0,T] with ψ(T) = 0, integrating both sides of (3.15) by parts with respect to t, and using (3.16) and (3.10), we conclude by (3.10) and Lemma 3.1 that B*(x,0) = B_0(x). To see that B*(x,t) = B**(x,t), we note that by direct computation and Lemma 3.1, together with (3.13) and the uniqueness of limits, B* = B**.

It remains to show that B* = B(β), p* = p(β). Using Lemma 3.7 and system (3.3), letting h, τ → 0 and making use of (3.10)-(3.14) and (3.6)-(3.8), we can pass to the limit in the discrete system. Indeed, for any φ ∈ L_0^2(Ω)^3 and ψ ∈ C_0^∞(0,T), one can choose a suitable test function q̄_h ∈ Q_0h so that, by (3.3) and the divergence theorem, the divergence constraint also passes to the limit; the corresponding identity for p* can be derived similarly. Finally, we are ready to establish the main convergence theorem.
Theorem 3.10. Let {β_h*}_{h>0} be a sequence of minimizers of the discrete minimization problem (3.2), and suppose z^δ ∈ C(0,T; L^2(ω)). Then, as h and τ tend to 0, every sequence of {β_h*}_{h>0} has a subsequence converging in L^2(Ω), and the limit of every such convergent subsequence is a minimizer of the continuous optimization problem (2.2).