Imaging with electromagnetic waves in terminating waveguides

We study an inverse scattering problem for Maxwell's equations in terminating waveguides, where localized reflectors are to be imaged using a remote array of sensors. The array probes the waveguide with waves and measures the scattered returns. The mathematical formulation of the inverse scattering problem is based on the electromagnetic Lippmann-Schwinger integral equation and an explicit calculation of the Green's tensor. The image formation is carried out with reverse time migration and with $\ell_1$ optimization.

1. Introduction. We consider an inverse scattering problem for Maxwell's equations in a waveguide which contains a few unknown reflectors. The setup is illustrated in Figure 1.1, where an array of sensors probes the waveguide with waves and records the returns over the duration of some time window. The inverse problem is to reconstruct the reflectors from these measurements.
To carry out explicit calculations we assume that the waveguide has a simple geometry, with rectangular cross-section Ω = (0, L_1) × (0, L_2), and introduce the system of coordinates x = (x, x_3), with x = (x_1, x_2) ∈ Ω and x_3 ≤ 0. The waveguide terminates at x_3 = 0 and we denote its domain by W = {x = (x, x_3) : x ∈ Ω, x_3 < 0}. The electric field E(ω, x) at frequency ω satisfies

∇ × ∇ × E(ω, x) − ω² μ_o ε(ω, x) E(ω, x) = iωμ_o J(ω, x), x ∈ W, (1.1)

with the perfectly conducting boundary conditions

n(x) × E(ω, x) = 0, x ∈ ∂W, (1.2)

where ∇× is the curl operator in R³ and n(x) is the unit outer normal at ∂W. There is also a radiation condition at x_3 → −∞, which states that E(ω, x) is bounded and outgoing. The current source density J models the excitation from the array, located at distance L from the terminating boundary at x_3 = 0. The waveguide is filled with a linear, isotropic and homogeneous medium with electric permittivity ε_o and magnetic permeability µ_o, and contains a few reflectors supported in the compact domain D ⊂ W, located between the array and the terminating boundary. The reflectors are modeled as linear and possibly anisotropic dielectrics with Hermitian, positive definite relative electric permittivity matrix ε_r(ω, x). The term εE in (1.1) is the electric displacement, and ε is the electric permittivity tensor satisfying

ε(ω, x) = ε_o [1_D(x)(ε_r(ω, x) − I) + I]. (1.3)

Here I is the 3 × 3 identity matrix and 1_D(x) is the indicator function, equal to one for x ∈ D and zero otherwise. The inverse problem is to reconstruct the perturbation ε_r − I in (1.3), or at least its support D, from measurements of the electric field E(ω, x) at points x = (x, −L) in the array aperture A, a subset of Ω.
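As a concrete illustration of the permittivity model (1.3), the following sketch (with made-up inclusion geometry and permittivity values, not taken from the paper) evaluates ε(x) on either side of the reflector support D:

```python
import numpy as np

# Hypothetical setup: an isotropic inclusion with relative permittivity 2*I,
# supported in an illustrative box D = [0.4, 0.6]^3 (made-up geometry).
eps_o = 8.854e-12          # vacuum permittivity (SI)
eps_r = 2.0 * np.eye(3)    # relative permittivity matrix inside the reflector
I3 = np.eye(3)

def in_D(x):
    """Indicator function 1_D for the illustrative box D."""
    return all(0.4 <= c <= 0.6 for c in x)

def permittivity(x):
    """eps(x) = eps_o * (1_D(x) (eps_r - I) + I), cf. (1.3)."""
    return eps_o * (in_D(x) * (eps_r - I3) + I3)

# Outside D the medium is the homogeneous background; inside it is perturbed.
assert np.allclose(permittivity((0.1, 0.2, 0.3)), eps_o * I3)
assert np.allclose(permittivity((0.5, 0.5, 0.5)), eps_o * eps_r)
```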
Inverse scattering and inverse source problems in waveguides have been considered in the past in various setups relevant to applications in ocean acoustics, nondestructive evaluation and imaging in tunnels. We refer to [2, 5-9, 19, 20, 23, 24] for mathematical studies of inverse scattering problems in acoustic and elastic waveguides with straight walls, and filled with homogeneous media. Random acoustic waveguides with finite cross-section are considered in [4,11], and with unbounded cross-section, as encountered in ocean acoustics, in [3,21]. Examples of inverse scattering problems in planar electromagnetic waveguides are in [13,17,22], where the problem is reduced to one for the scalar Helmholtz equation by considering a single type of waves, transverse electric or magnetic.
In this paper we give the mathematical formulation of the electromagnetic scattering problem in terminating waveguides and study with numerical simulations two imaging methods. The first is a reverse time migration approach, where the wave field measured at the array is time reversed and propagated to the imaging region using the electromagnetic Green's tensor in the unperturbed waveguide. The second method uses ℓ 1 optimization, and is motivated by the assumption that the perturbation of the electric permittivity has small spatial support D.
The paper is organized as follows: We begin in section 2 with the formulation of the forward problem. We define the scattered electric field and show that it satisfies a Lippmann-Schwinger type equation. The solvability of this equation is analyzed using the Fredholm alternative. The data model used for inversion is given in section 3 and the imaging methods are formulated in section 4. The imaging results obtained with numerical simulations are in section 5. We end with a summary in section 6.
2. The scattering problem. In this section we formulate the scattering problem. We begin in section 2.1 with the expression of the electric field in the unperturbed (homogeneous) waveguide. Then we define in section 2.2 the scattered wave field at the unknown reflectors and derive the radiation condition at x 3 → −∞. We state the scattering problem as a Lippmann-Schwinger integral equation and prove its Fredholm property in section 2.3.

2.1. The homogeneous waveguide.
In the absence of any reflector in the waveguide, the electric field is denoted by E_o and solves the boundary value problem

∇ × ∇ × E_o(x) − k² E_o(x) = iωμ_o J(x), x ∈ W,  n(x) × E_o(x) = 0, x ∈ ∂W, (2.1)

where k = ω√(ε_o µ_o) is the wavenumber. Obviously, E_o and J depend on the frequency ω, but since we consider a constant ω we simplify notation and drop it henceforth from the arguments of all fields. The expression of the electric field in infinite homogeneous waveguides is well known. See for example [12, chapter 8]. It is a superposition of a countable set of transverse electric and magnetic waves, called modes, which are either propagating away from the source or are decaying. In the terminating waveguide we have a similar mode decomposition of E_o, as stated in Lemma 2.1, but there are both outgoing (forward propagating) and incoming (backward propagating) waves, due to the reflection at the terminating boundary at x_3 = 0, and the evanescent waves may be growing or decaying away from the source in the interval x_3 ∈ (−L, 0).
The mode decomposition in Lemma 2.1 is obtained by expanding, at each x_3 < 0, the field in the orthogonal basis {Φ_n^(s)(x)}_{n∈N_0², 1≤s≤m_n} of eigenfunctions of the vectorial Laplacian eigenvalue problem

−Δ Φ_n^(s)(x) = λ_n Φ_n^(s)(x), x ∈ Ω, (2.2)

with boundary conditions consistent with a perfectly conducting boundary. We refer to appendix A for a proof that {Φ_n^(s)(x)}_{n∈N_0², 1≤s≤m_n} is an orthogonal basis of L²(Ω)³, and to [1, section 3] for an explanation of why the basis is useful for the analysis of electromagnetic waves in waveguides with perfectly conducting boundaries. In (2.2) the Laplacian Δ and divergence ∇· are with respect to x ∈ Ω, n is the outer normal at ∂Ω, and n^⊥ is its rotation by 90 degrees, counter-clockwise.

Lemma 2.1. The solution of (2.1) has the mode decomposition stated in equations (2.3)-(2.4), where a_{o,n}^{+(s)} and b_{o,n}^{±(s)} are constant mode amplitudes determined by the current excitation J(x), and the superscripts ± remind us that the field is evaluated in the forward direction (toward the terminating boundary) or away from it. The modes are waves with wavenumber

β_n = √(k² − λ_n). (2.5)

For a finite number of indexes n ∈ N_0² the wavenumbers β_n are real valued and the waves are propagating. The remaining infinitely many waves are evanescent.
Proof. Equations (2.3)-(2.4) are obtained by solving (2.1) with separation of variables. Since the eigenfunctions of the vectorial Laplacian in (2.2) form an orthogonal basis of L²(Ω)³, as shown in appendix A, we can expand E_o in this basis for each x_3 < 0. Note that in the interval x_3 ∈ (−L, 0), between the source and the terminating boundary, there are both forward and backward propagating waves, as well as decaying and growing evanescent waves. On the other side of the source, for x_3 < −L, the propagating waves are outgoing and the evanescent waves are decaying, as imposed by the radiation condition.
The simple geometry of the waveguide, with rectangular cross-section, allows us to write explicitly the mode decomposition in (2.3)-(2.4). The eigenvalues are

λ_n = (πn_1/L_1)² + (πn_2/L_2)², n = (n_1, n_2) ∈ N_0²,

and by assuming that (L_1/L_2)² is not a rational number, we ensure that λ_n ≠ λ_{n′} if n = (n_1, n_2) ≠ n′ = (n_1′, n_2′). This limits the multiplicity m_n of the eigenvalues. For the index pairs satisfying n_1 n_2 = 0 the eigenvalues are simple, with eigenvectors satisfying the divergence free condition ∇ · Φ^(1)(x) = 0. Otherwise, the eigenvalues have triple multiplicity; two of the eigenvectors are vectors in the cross-range plane, satisfying the divergence free condition ∇ · Φ^(1)(x) = 0 and the curl free condition ∇ × Φ^(2)(x) = 0, and the third eigenvector, given in (2.11), points in the longitudinal direction.
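The count of propagating versus evanescent modes follows directly from these eigenvalues. The following sketch (with illustrative values of L_1, L_2 and k, not those of the paper) tabulates the real wavenumbers β_n = √(k² − λ_n):

```python
import numpy as np

# Illustrative parameters: cross-section L1 x L2 and wavenumber k.
L1, L2 = 1.0, np.sqrt(2.0)   # (L1/L2)^2 irrational => no repeated eigenvalues
k = 20.0

def lam(n1, n2):
    """Eigenvalue lambda_n of the transverse Laplacian on (0,L1) x (0,L2)."""
    return (np.pi * n1 / L1) ** 2 + (np.pi * n2 / L2) ** 2

# Finitely many propagating modes: lambda_n < k^2 gives real beta_n (2.5).
modes = [(n1, n2) for n1 in range(40) for n2 in range(40)
         if (n1, n2) != (0, 0) and lam(n1, n2) < k ** 2]
beta = {n: np.sqrt(k ** 2 - lam(*n)) for n in modes}

print(len(modes), "propagating modes; all other modes are evanescent")
assert all(b > 0 for b in beta.values())
```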

Equations (2.3)-(2.4) take the explicit form (2.12)-(2.13). The field E_o is a superposition of transverse electric and transverse magnetic waves. The name transverse electric refers to the fact that the third component of Φ^(1)_n(x), corresponding to the longitudinal electric field, equals zero. Similarly, the name transverse magnetic refers to the fact that the longitudinal magnetic field is zero, by Faraday's law. The transverse electric mode amplitudes are given by (2.14) for x_3 ∈ (−L, 0) and by (2.15) for x_3 < −L. Here ⟨·, ·⟩ denotes the inner product in L²(Ω)³ and ‖·‖ is the induced norm. The transverse magnetic mode amplitudes are given in (2.16)-(2.17).

2.2. The scattered field and radiation condition. The scattered field due to the reflectors supported in D ⊂ W is defined by (2.19), where V is the scattering potential defined in (2.20). The radiation condition (2.21), which states that the scattered field is bounded and outgoing away from the reflectors, determines the amplitudes of the scattered modes. These amplitudes contain the information about the reflectors supported in D, and their expression follows from the calculations in the next section.

2.3. Solvability of the forward problem.
Here we study the solvability of the forward scattering problem (2.19)-(2.21). We begin with the derivation of the Green's tensor G( x, y) and then restate the scattering problem as an electromagnetic Lippmann-Schwinger equation, for which we can prove the Fredholm property. The discussion assumes that the domain D that supports the reflectors does not touch the boundary, and that the scattering potential V is bounded, entrywise.
The Green's tensor G(x, y) ∈ C^{3×3} satisfies

∇ × ∇ × G(x, y) − k² G(x, y) = δ(x − y) I, x ∈ W,  n(x) × G(x, y) = 0, x ∈ ∂W,

where we recall that I is the 3 × 3 identity matrix, and the curl is taken columnwise. In addition, each column of G(x, y) satisfies a radiation condition similar to (2.21) for x_3 < y_3, which says that the Green's function is bounded and outgoing. The expression of G is given in the next lemma, proved in appendix C.

Lemma 2.2. Let x ≠ y, with x, y ∈ W. The Green's tensor G(·, y) is given in terms of the vectors G_j, for j = 1, . . . , 3, by (2.23), with divergence taken columnwise. The vectors G_j satisfy the equations (2.25) and a radiation condition similar to (2.21) for x_3 < y_3, which says that the components of G_j are outgoing or decaying waves.
To state the scattering problem (2.19) as a Lippmann-Schwinger equation, we follow the approach in [15]. For a finite L̃ ≥ L, we define the truncated waveguide W_L̃ = {x = (x, x_3) : x ∈ Ω, −L̃ < x_3 < 0} and introduce the space H(curl, W_L̃) of vector fields in L²(W_L̃)³ with curl in L²(W_L̃)³, equipped with the inner product

(u, v)_curl = ∫_{W_L̃} (∇ × u · ∇ × v̄ + u · v̄) dx,

where the bar denotes complex conjugate. The induced norm is ‖u‖_curl = √((u, u)_curl).
From [15] we know that the mapping defined by the free-space kernel e^{ik|x−y|}/(4π|x−y|) is bounded on H(curl, W_L̃), for all φ ∈ H(curl, W_L̃) with compact support in W_L̃. This result can be extended to our problem because the difference of Green's functions G_j(x, y) − e^{ik|x−y|}/(4π|x−y|) e_j is analytic. Thus, the mapping

L(u)(x) = k² ∫_{W_L̃} G(x, y) u(y) dy

is linear and bounded, and v = L(u) is the radiating variational solution of the equation ∇ × ∇ × v − k² v = k² u in the waveguide. We are interested in u = V E, so that L(V E) satisfies the partial differential equation (2.19). To show that this is E_sc, it remains to check that L(V E) satisfies the perfectly conducting boundary conditions. This follows from the boundary conditions in (2.25), because D does not touch the boundary. We have now shown that E_sc(x) = L(V E)(x), or equivalently, that the total field solves the Lippmann-Schwinger equation

E(x) = E_o(x) + k² ∫_D G(x, y) V(y) E(y) dy. (2.28)

The next theorem proves a Gårding inequality from which we can conclude the Fredholm property.
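To make the role of the Lippmann-Schwinger equation concrete, here is a minimal one-dimensional scalar analogue, discretized with a Nyström rule. The wavenumber, potential and geometry are made up, and the vectorial waveguide structure is not represented; the sketch only illustrates that, for a weak potential, the discretized integral equation is well conditioned and solvable:

```python
import numpy as np

# 1-D scalar analogue: E = E_o + k^2 \int G(x,y) V(y) E(y) dy, with the
# 1-D Helmholtz Green's function G(x,y) = e^{ik|x-y|} / (2ik).
k = 10.0
x = np.linspace(-1.0, 1.0, 400)
h = x[1] - x[0]
V = np.where(np.abs(x) < 0.05, 0.1, 0.0)   # weak, compactly supported potential
E_o = np.exp(1j * k * x)                    # incident field

G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)
A = np.eye(x.size) - k ** 2 * G * (V[None, :] * h)   # Nystrom discretization
E = np.linalg.solve(A, E_o)                 # total field
E_sc = E - E_o                              # scattered field

# For a weak potential, I - k^2 G V is a small perturbation of the identity,
# so the discretized Lippmann-Schwinger equation has a unique solution.
assert np.linalg.cond(A) < 100
assert np.abs(E_sc).max() > 1e-3
```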
Theorem 2.3. There exists a compact operator K : H(curl, W_L̃) → H(curl, W_L̃) and a positive constant C such that

Re (u − L(V u) − K(u), u)_curl ≥ C ‖u‖²_curl, for all u ∈ H(curl, W_L̃). (2.29)

Proof. We introduce an auxiliary operator L_o, defined like L, but with kernel built from the solutions of equations like the partial differential equations in (2.25), with k replaced by the imaginary number i. From the analysis in [15], which applies to imaginary wavenumbers like i, we obtain that L_o is a bounded linear operator satisfying the identity (2.32), where we used the integration by parts result in [18, Theorem 3.31].
Using this auxiliary operator we write L = L_o + (L − L_o), and from (2.32) with f = V u and φ = u we get a lower bound involving the expression (2.20) of V. Because ε_r is positive definite by assumption, we conclude that there exists a positive constant C such that this bound holds. Substituting in the equation above and introducing the linear operators K_1 and K_2 from H(curl, W_L̃) to H(curl, W_L̃), we obtain (2.29). The result follows once we show that K_1 and K_2 are compact operators. Since the differences G_j(x, y) − e^{ik|x−y|}/(4π|x−y|) e_j and G_j(x, y) − e^{−|x−y|}/(4π|x−y|) e_j are analytic, we conclude that the singularity of the kernel of L − L_o is as weak as that of |x − y| I. Thus, we can use the results in [15] to conclude that K_1 is a compact operator.
To prove that K_2 is compact, let us consider a neighborhood Γ of the boundary ∂W_L̃, such that Γ ⊂ W_L̃ and Γ does not intersect the support D of the scattering potential. We define the operator T from (L²(D))³ to (H^s(Γ))³, with s > 1, by restricting ∇ × L_o(f) to Γ, for all f ∈ (L²(D))³. This operator is compact because its kernel is an analytic function on Γ × D. Define also the trace space associated with ∂W_L̃, studied in [18, Section 3.5]. To show that K_2 is compact, let {u_j} be a sequence in H(curl, W_L̃) that converges weakly to 0; we prove that {K_2(u_j)} converges strongly to 0 in H(curl, W_L̃). Indeed, we have a chain of estimates in which the first line is a definition, the second line follows by duality, the third line is due to the boundedness of the mapping v → n × v × n, and the fourth line is by the definition of T. The convergence to zero is by the compactness of the mapping u → n × T(V u)|_{∂W_L̃}.
We have now proved the Gårding inequality (2.29), with K = K 1 + K 2 . We obtain from it that I − L (V ·) is the sum of the coercive operator I − L (V ·) − K and the compact operator K . Thus, I − L (V ·) is a Fredholm operator [16].
We conclude the discussion on the solvability of the forward problem with the remark that when ε_r is C¹, one can extend the results in [15] to prove uniqueness of the solution of equation (2.28). The existence of a solution then follows from the Fredholm property.
3. Data model. Since the array is far away from the support D of the scattering potential V, at coordinate x_3 = −L, the results in the previous section give

E_sc(x) ≈ k² ∫_D G_P(x, y) u(y) dy, x = (x, −L). (3.1)

Here

u(y) = V(y) E(y) (3.2)

is an effective source supported in D, representing the wave emitted by the unknown reflectors illuminated by the field E(y). The approximation in (3.1) is because we replaced the Green's tensor G defined in Lemma 2.2 by the approximation G_P, which neglects the evanescent waves. Explicitly, if we denote by P the set of indexes of the propagating modes, G_P is obtained by restricting the mode sum in the expression of G to n ∈ P, evaluated at x_3 = −L at the array. Let us denote by S_q the linear mapping from the effective source (3.2) to the q-th component of the scattered field at the array. Since the support of the source (3.2) is included in D, we may seek to reconstruct the domain D by inverting approximately S_q. However, the mapping that takes the scattering potential V to the measurements is nonlinear, because the scattered field E_sc enters the definition (3.2). Thus, we linearize it, meaning that we make the single scattering (Born) approximation E(y) ≈ E_o(y) in (3.2). We denote by B the linear mapping from the scattering potential V to the resulting effective source, B(V)(y) = V(y) E_o(y). Then, the forward map F_q from the scattering potential V to the q-th component of the electric field measured at the array is the composition of these mappings,

F_q = S_q ∘ B. (3.8)

The data are denoted by d_q(x), for components q = 1, . . . , Q, with Q ≤ 3, and x ∈ A, the aperture of the array, which is a subset of the waveguide cross-section Ω.
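The structure of the linearized forward map can be sketched in a scalar toy model: a reflectivity vector on grid points y maps linearly to array data through a Green's-function-like kernel. The geometry, the single reference source and the kernel below are illustrative assumptions, not the waveguide model of the paper:

```python
import numpy as np

# Toy Born forward map: data at receivers x from reflectivity v on grid y,
# via a single-scattering path source -> y -> receiver (made-up 2-D geometry).
k = 15.0
receivers = np.stack([np.linspace(0, 1, 20), np.full(20, -5.0)], axis=1)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                            np.linspace(-4, -3, 10)), axis=-1).reshape(-1, 2)

def green(x, y):
    """Free-space Helmholtz kernel, standing in for the waveguide G_P."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * k * r) / (4 * np.pi * r)

src = receivers[0]   # one illuminating source at the edge of the array
F = np.array([[k ** 2 * green(xr, y) * green(y, src) for y in grid]
              for xr in receivers])    # reflectivity-to-data matrix

v_true = np.zeros(grid.shape[0]); v_true[37] = 1.0   # one point reflector
d = F @ v_true                                        # noiseless Born data
assert d.shape == (20,)
assert np.abs(d).max() > 0
```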

Imaging.
Let d be the data vector, with entries given by d_q(x) for all x in A and q = 1, . . . , Q. Let also V be the reflectivity vector, consisting of the unknown components of the scattering potential V discretized in the imaging window D_I that contains the unknown support D. Then, we can state the imaging problem as finding an approximate solution V of the linear system of equations

d = FV. (4.1)

The reflectivity-to-data matrix F is defined by the discretization of the forward mapping (3.8).
The system of equations (4.1) is usually underdetermined, so to find a unique approximation we regularize the inversion by minimizing either the ℓ_2 or the ℓ_1 norm of V. The first regularization is related to the reverse time migration approach, as described in section 4.1. The imaging with ℓ_1 minimization is discussed in section 4.2.

4.1. Reverse time migration.
The minimum ℓ_2 norm solution of (4.1) is

V_{ℓ_2} = F† d, (4.2)

where F† is the pseudo-inverse of F. If F has full row rank, F† = F⋆(FF⋆)^{−1}, where the superscript ⋆ denotes the adjoint. Moreover, if the rows of F are nearly orthogonal, which requires proper placement of the receivers in the array aperture A, at distances of the order of the wavelength, the matrix FF⋆ is nearly diagonal, so by replacing F† in (4.2) with F⋆ we get a similar answer, up to multiplicative factors. This replacement does not affect the support of the reconstruction, and we denote the result by

V^{TR} = F⋆ d, (4.3)

with superscript TR for "time reversal". To explain where time reversal comes in, let us compute the adjoint of the forward mapping (3.8). Before discretizing the imaging window, we have by the definition (3.8) of the forward map the expression (4.4), which we rewrite using the Rayleigh-Carson reciprocity relation G_P(x, y) = G_P(y, x)^T of the Green's tensor. The last factor in (4.4), in the square brackets, is the electric field evaluated at points y in the imaging window D_I, due to a source at the array which emits the data recordings d_q reversed in time. The time reversal is equivalent to complex conjugation in the Fourier domain. The adjoint of the forward map follows from (4.4), where the inner product in the right hand side is defined for any complex valued matrix U. Recall that V(y) is Hermitian. Thus, F⋆(d) is a 3 × 3 complex matrix valued field, with components given in (4.5). The right hand side in the imaging formula (4.3) is the discretization of (4.5) over points y in the imaging window.
In the particular case of a diagonal scattering potential V(y), which corresponds to the coordinate axes being the same as the principal axes of the dielectric material in the support of the reflectors, the adjoint operator acts from the data space to the space of diagonal, positive definite matrices. The reconstruction is given by (4.6), where y are the discretization points in D_I. Moreover, if the material is isotropic, so that V is a multiple of the identity, the reconstruction is V^{TR} I, with V^{TR} given in (4.7). None of these formulae are quantitative approximations of V, so we may drop the factor k² and display their absolute values at points y in the imaging window D_I. The estimate of the support D of V is given by the subset of D_I where the displayed values are large.
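The time reversal interpretation of (4.3) can be checked numerically in a scalar toy model: the adjoint of the reflectivity-to-data matrix, applied to the data and with columns normalized (a diagonal scaling that does not change the support estimate), peaks at the reflector location. All parameters below are made up:

```python
import numpy as np

# Toy reverse time migration image: |F* d| over the imaging grid.
k = 15.0
receivers = np.stack([np.linspace(0, 1, 20), np.full(20, -5.0)], axis=1)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                            np.linspace(-4, -3, 10)), axis=-1).reshape(-1, 2)

def green(x, y):
    r = np.linalg.norm(x - y)
    return np.exp(1j * k * r) / (4 * np.pi * r)

src = receivers[0]
F = np.array([[k ** 2 * green(xr, y) * green(y, src) for y in grid]
              for xr in receivers])
F = F / np.linalg.norm(F, axis=0)       # unit-norm columns (diagonal scaling)

true_idx = 37
d = F[:, true_idx]                       # data from one point reflector
image = np.abs(F.conj().T @ d)           # time-reversal / adjoint image
# By Cauchy-Schwarz, the normalized correlation is maximal at the reflector.
assert np.argmax(image) == true_idx
```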

4.2. Imaging with ℓ_1 optimization. To incorporate the prior information that the reflectors have small support in the imaging window, we may reconstruct the scattering potential using ℓ_1 optimization. This means solving the optimization problem

min ‖V‖_{ℓ_1} such that d = FV. (4.8)

The equality constraint may be replaced by the inequality ‖d − FV‖²_{ℓ_2} ≤ τ, for a user defined tolerance τ, which deals better with measurement and modeling noise. The ℓ_1 optimization is carried out with the cvx package (http://cvxr.com/cvx/).

5. Numerical simulations. In the simulations we form the images using only the first arriving propagating modes. This is because in practice the sensors record over a finite time window, and only the modes that propagate fast enough to arrive at the array during the duration of the measurements contribute. The polarization vector p in (5.1) equals (0, 1, 0)^T in the simulations with isotropic permittivity and (1, 1, 1)^T in the case of anisotropic permittivity.
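For readers without access to cvx, a basic iterative soft-thresholding (ISTA) solver gives a sketch of the sparsity-promoting reconstruction in (4.8); the matrix F and data d below are a made-up toy model, not the waveguide forward map:

```python
import numpy as np

# ISTA for min 0.5 ||d - F v||^2 + mu ||v||_1, a simple stand-in for cvx.
rng = np.random.default_rng(1)
F = rng.standard_normal((30, 80)) + 1j * rng.standard_normal((30, 80))
F = F / np.linalg.norm(F, axis=0)          # unit-norm columns
v_true = np.zeros(80, dtype=complex)
v_true[12] = 1.0                            # one point-like reflector
d = F @ v_true                              # noiseless Born data

step = 1.0 / np.linalg.norm(F, 2) ** 2      # step 1/L, L = ||F||^2
mu = 0.05                                   # regularization weight

def soft(z, t):
    """Complex soft-thresholding, the proximal map of t*||.||_1."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, t)) * z, 0.0)

v = np.zeros(80, dtype=complex)
for _ in range(200):
    v = soft(v + step * F.conj().T @ (d - F @ v), step * mu)

assert np.linalg.norm(d - F @ v) < np.linalg.norm(d)   # data misfit reduced
assert np.argmax(np.abs(v)) == 12                      # support recovered
```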
In Figure 5.1 we display the reverse time migration image of a point-like reflector located at (6.95, 4.73, −10.44)λ, modeled by an isotropic scattering potential V = v(y)I supported on a single mesh cell in the imaging region. The mesh size is λ/18 in the cross-range plane and λ/6 in range. We note that the reflector is well localized in range and cross-range, and the results improve, as expected, when more modes are used to form the image. Moreover, the image at 75% aperture is almost as good as that with full aperture. Naturally, the image deteriorates for smaller apertures.
The images of the same reflector obtained with ℓ_1 optimization are shown in Figure 5.2. The images in the first two rows are obtained with 75% aperture and those in the last row with the full aperture; the first row uses the first 100 arriving modes and the other two rows the first 350 modes. We display in the left column the images in the plane y_1 = 6.95λ, and in the right column the images in the cross-range plane y_3 = −10.44λ. The axes are in units of λ.
The discretization of these images is in steps of 0.29λ in cross-range and 0.87λ in range. As expected, these images give a sharper estimate of the support, in the sense that the spurious faint peaks in Figure 5.1 are suppressed in Figure 5.2 by the sparsity promoting optimization.
In Figures 5.3 and 5.4 we show images of an extended reflector shaped like a rectangular shell, with sides equal to 1.16λ, 1.18λ and 3.9λ. The shell is modeled as an isotropic scattering potential supported in R \ R_o and zero outside, where the thicknesses of the cross-range and range walls are 0.87λ and 0.29λ, respectively. We use the first 350 arriving modes and 75% aperture; the images in the left column are in the plane y_1 = 6.96λ and in the right column in the cross-range plane at y_3 = −11.14λ. The discretization of the imaging window in Figure 5.3 is the same as in Figure 5.1. We note that the reverse time migration method estimates better the support of the reflector, especially its back, in the terminating waveguide (top left image) than in the infinite waveguide (bottom left image). The ℓ_1 optimization images are in Figure 5.4.

In the last simulations, in Figure 5.5, we present the images of an anisotropic point-like reflector, whose scattering potential is the diagonal matrix V(y) = diag(3, 1, 5) v(y), with the same v(y) as in Figure 5.1. We present only reverse time migration images and note that the estimates of the support of the components of V(y) are similar to those in Figure 5.1. Specifically, we plot the absolute value of the right hand side of equation (4.6) for l = 1, 2, 3.
6. Summary. We study imaging with electromagnetic waves in terminating waveguides, using measurements of the electric field at an array of sensors. The goal of imaging is to localize compactly supported reflectors that lie between the array and the end wall. We derive the data model using Maxwell's equations. We define the scattered electric field due to an incident wave from one sensor in the array and show that it satisfies a Lippmann-Schwinger type equation. We analyze the solvability of this equation and write explicitly the data model using a modal decomposition of the wave field in the waveguide. This model is based on the single scattering approximation at the unknown reflectors. We use it to formulate two imaging methods: The first forms an image by calculating the action of the adjoint of the forward operator on the data. It has a time reversal interpretation. The second uses ℓ_1, i.e., sparsity enhancing, optimization. We present numerical results with both imaging methods for point-like and extended reflectors.

Appendix A. The vectorial Laplacian problem.
We study the eigenvalue problem (2.2) for the vectorial Laplacian acting on u = (u, u_3). Since Δu = (Δu, Δu_3)^⊤, we have two decoupled problems. One is the standard Poisson problem for the longitudinal component u_3, whose weak solution is in H¹_0(Ω) and satisfies the variational formulation (A.4), where (·, ·)_{L²} denotes the inner product in L²(Ω). The other problem, (A.6), is for the two dimensional transverse vector u. It is studied in [14] for a more general Ω than the rectangle considered here. The results there establish the existence and uniqueness of weak solutions in the space H¹_{0t}(Ω), with the standard H¹ inner product (u, v)_{H¹}. These solutions satisfy the variational problem for all v ∈ H¹_{0t}(Ω), where ∇^⊥ is the rotated gradient operator, playing the role of the curl in two dimensions, and (·, ·)_{L²} is the inner product in L²(Ω)². The results in [14] also give a proper interpretation of ∇ · u|_{∂Ω} in terms of the curvature of the boundary. In our case the boundary is the union of four line segments ∂Ω_j, for j = 1, . . . , 4, so the curvature is zero on each segment. It is shown in [14] that ∇ · u|_{∂Ω} exists and belongs to H^{−1/2}(∂Ω_j) on each piece of the boundary, and the weak solution satisfies the estimate (A.7).

To arrive at the spectral decomposition of the vectorial Laplacian in (A.2), we study its "inverse", i.e., the solution operator L : (L²(Ω))³ → (L²(Ω))³ defined by L(f) = (u, u_3), where u solves (A.6) and u_3 solves (A.4). Obviously, L is a linear operator. It is also injective, bounded, self-adjoint and compact. The injectivity follows from the uniqueness of solutions of (A.4) and (A.6). The boundedness and compactness follow from the estimate (A.7) on u and a similar one on u_3, together with the compact embedding of H¹_{0t}(Ω) in (L²(Ω))² and of H¹_0(Ω) in L²(Ω). To see that L is self-adjoint, let f and g be arbitrary in (L²(Ω))³ and denote by (u, u_3) and (v, v_3) their images in Im(L) ⊂ (L²(Ω))³, such that L(f) = (u, u_3) and L(g) = (v, v_3). Then equations (A.4) and (A.6) give the symmetry relation (L(f), g)_{L²} = (f, L(g))_{L²}.

We conclude from the spectral theorem for self-adjoint, compact operators [10, appendix D] that there is an orthogonal basis of (L²(Ω))³ consisting of the eigenfunctions u_j of L, for eigenvalues γ_j that tend to 0 as j → ∞. The eigenvalues cannot be zero, because L is injective, so we can divide by them and obtain the eigenvalues of the vectorial Laplacian. Consequently, by estimate (A.7), and a similar one for the standard problem (A.4), we obtain that u_j = (u_j, u_{3,j}) ∈ H, the space of three dimensional vectors with components u_j ∈ H¹_{0t}(Ω) and u_{3,j} ∈ H¹_0(Ω). To finish the argument, let v = (v, v_3) ∈ H and consider the bilinear form A : H × H → C defined in the obvious way; letting u = u_j gives the variational characterization of the eigenfunctions.

Appendix B. Proof of Lemma 2.1. We expand the field in the basis of eigenfunctions, as in (B.1), for each given x_3 < 0. It remains to determine the coefficients g_j^(s)(x_3). We substitute (B.1) in (2.1) and, projecting on the eigenfunctions, obtain a system of ordinary differential equations in x_3 for the coefficients g_j^(s)(x_3), which we solve on either side of the source, where ± stands for the right and left of the source. The amplitudes a_j^{±(s)} and b_j^{±(s)} have the expression given in (2.14)-(2.17). They are derived from the jump conditions at the source.

Appendix C. Derivation of the Green's tensor. Substituting in (C.4) we get g_n^{j(1)}(0, y) = 0 and (k² − λ_n) g_n^{j(2)}(0, y) + ∂_{x_3} g_n^{j(3)}(0, y) = 0. We now have a linear system of eight equations, (C.5), (C.6) and (C.7), for nine unknowns. Thus, we can calculate the most convenient solution of the underdetermined system (C.5), (C.6) and (C.7); this gives ∂_{x_3} g_n^{j(3)}(0) = 0. The expression of G_j in Lemma 2.2 follows.