MATRIX VALUED INVERSE PROBLEMS ON GRAPHS WITH APPLICATION TO MASS-SPRING-DAMPER SYSTEMS

Abstract. We consider the inverse problem of finding matrix valued edge or node quantities in a graph from measurements made at a few boundary nodes. This is a generalization of the problem of finding resistors in a resistor network from voltage and current measurements at a few nodes, but where the voltages and currents are vector valued. The measurements come from solving a series of Dirichlet problems, i.e. finding vector valued voltages at some interior nodes from voltages prescribed at the boundary nodes. We give conditions under which the Dirichlet problem admits a unique solution and study the degenerate case where the edge weights are rank deficient. Under mild conditions, the map that associates the matrix valued parameters to boundary data is analytic. This has practical consequences to iterative methods for solving the inverse problem numerically and to local uniqueness of the inverse problem. Our results allow for complex valued weights and also give explicit formulas for the Jacobian of the parameter to data map in terms of certain products of Dirichlet problem solutions. An application to inverse problems arising in networks of springs, masses and dampers is presented.

1. Introduction. We study a class of inverse problems where the objective is to find matrix valued quantities defined on the edges or vertices (nodes) of a graph from measurements made at a few boundary nodes. The scalar case corresponds to the problem of finding resistors in a resistor network from electrical measurements made at a few nodes, see e.g. [14]. As in the scalar case, the vector potential at all the nodes can be found from its value at a few nodes by solving a Dirichlet problem, i.e. finding a vector potential satisfying a vector version of conservation of currents (Kirchhoff's node law) and having a prescribed value at certain nodes. We study in detail the Dirichlet problem and give conditions guaranteeing it admits a unique solution (sections 2 and 3). We also include cases where the matrix valued weights are rank deficient and uniqueness of the Dirichlet problem holds only up to a known subspace.
Then we present different inverse problems, where either the matrix valued weights on the edges or the vertices, or even their eigenvalues, are the unknown parameters that are sought after. All these inverse problems share a common structure that is given in section 4. Any inverse problem that fits this mold has certain desirable properties: mainly the parameter to data map (i.e. the forward map) is analytic and its Jacobian can be computed in terms of products of internal "states" (e.g. solutions to the Dirichlet problem). Analyticity can be used to guarantee local uniqueness for such inverse problems, for almost any parameter within a region of interest, provided the Jacobian is injective for one parameter (a generalization of the results in [6]). Moreover, we show that Newton's method applied to such problems is very likely to produce valid steps. Then in sections 5 and 6 we formulate inverse problems with matrix valued weights and determine conditions under which they have the structure of section 4. Some of the inverse problems we consider arise in networks of springs, masses and dampers.
1.1. Related work. The discrete conductivity inverse problem consists of finding the resistors in a resistor network from voltage and current measurements made at a few nodes, assuming the underlying graph is known. For this problem, the uniqueness results in [13,14,10,12,11] apply to circular planar graphs (i.e. planar graphs that can be embedded in a disk and where the nodes at which measurements are made can be laid on the disk boundary) and real conductivities. A different approach is taken in [9], where a monotonicity property inspired from the continuum [1] is used to show that if the conductivities satisfy a certain inequality then they can be uniquely determined from measurements, without specific assumptions on the underlying graph. The lack of uniqueness is shown for cylindrical graphs in [21]. For complex conductivities, a condition for "uniqueness almost everywhere" regardless of the underlying graph is given in [6]. Uniqueness almost everywhere means that the set of conductivities that have the same boundary data lies in a zero measure set and that the linearized problem is injective for almost all conductivities in some region.
Uniqueness for the discrete Schrödinger problem is considered in the real scalar case on circular planar graphs in [2,3,4]. This problem involves a resistor network with known underlying graph and resistors, but where every node is connected to the ground (zero voltage) via a resistor with unknown resistance. These unknown resistors are a discrete version of the Schrödinger potential in the Schrödinger equation, and the goal is to find them from measurements made at a few nodes. A discrete Liouville identity [5] can be used to relate the discrete Schrödinger inverse problem for certain Schrödinger potentials to the discrete conductivity inverse problem, also on circular planar graphs. A condition guaranteeing uniqueness almost everywhere for complex valued potentials without an assumption on the graph is given in [6].
One of the consequences of the present study is a uniqueness almost everywhere result for matrix valued inverse problems on graphs. To the best of our knowledge there are no results for uniqueness of the inverse problem with matrix valued edge or node quantities other than the characterization and synthesis results for networks of springs, masses and dampers (discussed in more detail in section 6) that are derived in [7,18,17]. These results solve an inverse problem for networks of springs, masses and dampers that assumes we are free to choose the graph topology. Indeed the constructions in [7,18,17] start from data generated by these networks (displacement to forces map) and give a network that reproduces this data. We emphasize that in the present study, the underlying graph is always assumed to be known.
2. The matrix valued conductivity and Schrödinger problems.
2.1. Notation. We use the set theory notation Y^X for the set of functions from X to Y. For example u ∈ (C^d)^X is a function u : X → C^d that to some x ∈ X associates u(x) ∈ C^d. For some matrix a ∈ C^{d×d} we write a ≻ 0 (resp. a ⪰ 0) to say that a is positive definite (resp. positive semidefinite). When the same notation is used for a ∈ (C^{d×d})^X, the generalized inequality is understood componentwise, e.g. for a ∈ (C^{d×d})^X, a ≻ 0 means a(x) ≻ 0 for all x ∈ X. When we write a ⪰ b (or a ≻ b) we mean a − b ⪰ 0 (or a − b ≻ 0). We use the notation a = Re a + ı Im a, for the real part Re a and imaginary part Im a of a. The complex conjugate is ā = Re a − ı Im a.
By ordering a finite set X, it can be identified with {1, . . ., |X|}, where |X| is the cardinality of X. Thus (C^d)^X can be identified with vectors in C^{d|X|}. Similarly, upon fixing an ordering for another finite set Y, we can identify linear operators from (C^d)^X to (C^d)^Y with matrices in C^{d|Y|×d|X|}. For A ∈ C^{m×n}, we denote by vec(A) ∈ C^{mn} the vector representation of the matrix A, i.e. the vector obtained by stacking the columns of A in their natural ordering. Similarly for a ∈ (C^{d×d})^X, the vector representation vec(a) ∈ C^{d²|X|} of a is the vector obtained by stacking the vector representations vec(a(x)) of the matrices a(x), for x ∈ X, in the predetermined ordering of X.
In addition to the usual matrix vector product, we also use a block-wise outer product (⊛), the Hadamard product (⊙) and the Kronecker product (⊗). For u, v ∈ (C^d)^X, the (block-wise) outer product u ⊛ v ∈ (C^{d×d})^X is defined by
(u ⊛ v)(x) = u(x)[v(x)]^T, x ∈ X. (1)
The Hadamard or componentwise product of two vectors a, b ∈ C^n is denoted by a ⊙ b and it is given by (a ⊙ b)(i) = a(i)b(i), i = 1, . . ., n. The Kronecker product of two matrices A ∈ C^{n×m} and B ∈ C^{p×q} is the np × mq complex matrix A ⊗ B whose (i, j) block of size p × q is A(i, j)B (see e.g. [20]). When we write u^T v for u, v ∈ (C^d)^X, we mean
u^T v = Σ_{x∈X} [u(x)]^T v(x),
and similarly for u^* v = ū^T v, where the bar denotes the complex conjugate (understood componentwise). With this in mind we can define a norm for u ∈ (C^d)^X by ‖u‖² = u^* u.
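Since the vec, outer and Kronecker conventions above are used repeatedly later (notably in the boundary/interior identities), a quick numerical sanity check may help. The sketch below is illustrative only (variable names are ours, not the paper's); it uses NumPy's column-major flatten for vec.

```python
# Sketch: check the identity x^T A y = vec(x y^T)^T vec(A) used in section 2,
# and the standard Kronecker identity vec(A X B) = (B^T kron A) vec(X).
import numpy as np

rng = np.random.default_rng(0)
d = 3
x, y = rng.standard_normal(d), rng.standard_normal(d)
A = rng.standard_normal((d, d))

vec = lambda M: M.flatten(order="F")        # stack columns, as in the text

lhs = x @ A @ y                             # x^T A y
rhs = vec(np.outer(x, y)) @ vec(A)          # vec(x y^T)^T vec(A)

X = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
kron_lhs = vec(A @ X @ B)
kron_rhs = np.kron(B.T, A) @ vec(X)
```

The outer product here is exactly the x y^T appearing in (1), applied one node or edge at a time.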

2.2. Discrete gradient, Laplacian and Schrödinger operators. We work with graphs G = (V, E), where V is the set of vertices or nodes (assumed finite) and E is the set of edges, E ⊂ {{i, j} | i, j ∈ V, i ≠ j}. All graphs we consider are undirected and with no self-edges. We partition the nodes V = B ∪ I into a (nonempty) set B of "boundary" nodes and a set I of "interior" nodes. The boundary nodes are where we can make measurements and the interior nodes are considered not accessible. By (discrete) conductivity σ we mean a symmetric matrix valued function defined on the edges, i.e. σ ∈ (C^{d×d})^E. Here symmetric means [σ(e)]^T = σ(e), for all e ∈ E. By (discrete) Schrödinger potential q we mean a symmetric matrix valued function defined on the vertices, i.e. q ∈ (C^{d×d})^V.
The d-dimensional discrete gradient is the linear operator ∇ : (C^d)^V → (C^d)^E defined for u ∈ (C^d)^V and an edge e = {i, j} ∈ E, oriented from i to j, by (∇u)(e) = u(j) − u(i). The discrete gradient assumes an edge orientation that is fixed a priori and that is irrelevant in the remainder of this paper.
The weighted graph Laplacian is the linear map
L_σ = ∇^T diag(σ) ∇, (4)
where we used the linear operator diag(σ) : (C^d)^E → (C^d)^E defined by (diag(σ)v)(e) = σ(e)v(e), for v ∈ (C^d)^E and e ∈ E. Its matrix representation is a block diagonal matrix with the σ(e) on its diagonal. The operator diag(q) : (C^d)^V → (C^d)^V is defined analogously. The discrete Schrödinger operator associated with a conductivity σ and a Schrödinger potential q is a block diagonal perturbation (with blocks of size d × d) of the weighted graph Laplacian, i.e.
L_σ + diag(q). (5)
We now give two concrete examples of problems that can be described using matrix valued conductivities and Schrödinger potentials.
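The construction of ∇ and L_σ = ∇^T diag(σ) ∇ can be sketched numerically as below; this is a minimal illustration with a made-up 3-node path and random positive definite 2 × 2 weights, assuming node and edge values are stacked in d-blocks.

```python
# Minimal sketch (assumed data layout: values stacked as d-blocks per node/edge).
# Builds the d-dimensional discrete gradient and L_sigma = grad^T diag(sigma) grad.
import numpy as np

d = 2
V = [0, 1, 2]
E = [(0, 1), (1, 2)]                        # fixed (arbitrary) edge orientation
rng = np.random.default_rng(1)

def gradient(V, E, d):
    G = np.zeros((d * len(E), d * len(V)))
    for k, (i, j) in enumerate(E):
        G[d*k:d*k+d, d*i:d*i+d] = -np.eye(d)    # (grad u)(e) = u(j) - u(i)
        G[d*k:d*k+d, d*j:d*j+d] = +np.eye(d)
    return G

sig = []                                    # symmetric positive definite weights
for _ in E:
    M = rng.standard_normal((d, d))
    sig.append(M @ M.T + np.eye(d))
D = np.zeros((d*len(E), d*len(E)))          # diag(sigma): block diagonal
for k, s in enumerate(sig):
    D[d*k:d*k+d, d*k:d*k+d] = s

grad = gradient(V, E, d)
L = grad.T @ D @ grad                       # weighted graph Laplacian

const = np.tile(rng.standard_normal(d), len(V))    # u(i) = c for all nodes i
```

As in the scalar case, constant vector potentials are annihilated by the gradient, so they lie in the nullspace of L_σ.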
Example 1. We consider a network of springs, masses and dampers based on the graph appearing in black in fig. 1. Since this is a planar network we take d = 2. We associate to each edge a spring with spring constant k_{0i} and a damper in parallel with damping constant c_{0i}, i = 1, . . ., 3. These physical quantities are defined more precisely in section 6. Each node is associated a mass m_i and also a damping constant ν_i, i = 0, . . ., 3, corresponding to the mass moving in a viscous fluid. If we subject the nodes to time harmonic external forces of the form f_i(t) = exp[ıωt] f̂_i (where t is time and ω the angular frequency) the displacement of the nodes is also time harmonic and satisfies the 8 × 8 complex linear system
(−ω²M + ıωC + K) û = f̂, (6)
where û ≡ [û_0^T, û_1^T, û_2^T, û_3^T]^T and similarly for f̂. The matrices in (6) are given as follows.
• The matrix M is the mass matrix and is given by M = diag(m), where m : V → R^{2×2} associates to each vertex a 2 × 2 matrix, namely m(i) = m_i I, for i = 0, . . ., 3, and where I denotes the identity matrix of appropriate dimension.
• The matrix K is the stiffness matrix, and can be written as
K = ∇^T diag(σ) ∇,
where ∇ is the discrete gradient written under the edge ordering of fig. 1 and the "conductivity" σ : E → R^{2×2} is given by σ({0, i}) = k_{0i} P_{0i}, i = 1, . . ., 3, where P_{0i} is the orthogonal projector onto the direction x_i − x_0 of edge {0, i}, i.e.
P_{0i} = (x_i − x_0)(x_i − x_0)^T / |x_i − x_0|².
A displaced configuration is given in blue, where the new node positions are x_i + u_i, with displacements u_i for i = 0, . . ., 3.
• The matrix C is the damping matrix and can be written as C = C_E + C_V, where C_E accounts for the edge dampers and C_V for the viscous damping at the nodes. These matrices are given by
C_E = ∇^T diag(µ_E) ∇ and C_V = diag(d),
where µ_E : E → R^{2×2} and d : V → R^{2×2} are given by µ_E({0, i}) = c_{0i} P_{0i}, i = 1, . . ., 3, and d(i) = ν_i I, i = 0, . . ., 3.
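The assembly of M, K and C can be sketched as follows. The node positions, spring constants and damping values below are made up (fig. 1 is not reproduced here), so this is only an illustrative instance of the construction, not the paper's example; it also checks that rigid translations cost no elastic energy, anticipating the floppy modes discussed later.

```python
# Hypothetical instance of the spring-mass-damper assembly of example 1.
# All numerical values (positions x, constants k, c, m, nu) are made up.
import numpy as np

d = 2
x = np.array([[0., 0.], [1., 0.], [0., 1.], [-1., -1.]])   # assumed positions
E = [(0, 1), (0, 2), (0, 3)]     # star graph: node 0 joined to nodes 1, 2, 3
k = [2.0, 3.0, 4.0]              # spring constants k_{0i}
c = [0.1, 0.2, 0.3]              # edge damping constants c_{0i}
m = [1.0, 1.5, 2.0, 2.5]         # nodal masses
nu = [0.05] * 4                  # viscous damping at the nodes

def gradient(nV, E, d):
    G = np.zeros((d*len(E), d*nV))
    for a, (i, j) in enumerate(E):
        G[d*a:d*a+d, d*i:d*i+d] = -np.eye(d)
        G[d*a:d*a+d, d*j:d*j+d] = np.eye(d)
    return G

def blockdiag(blocks):
    n = sum(b.shape[0] for b in blocks)
    out, r = np.zeros((n, n)), 0
    for b in blocks:
        out[r:r+b.shape[0], r:r+b.shape[0]] = b
        r += b.shape[0]
    return out

G = gradient(4, E, d)
P = []                           # orthogonal projectors onto edge directions
for (i, j) in E:
    e = (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
    P.append(np.outer(e, e))

K = G.T @ blockdiag([ki * Pi for ki, Pi in zip(k, P)]) @ G      # stiffness
C = G.T @ blockdiag([ci * Pi for ci, Pi in zip(c, P)]) @ G \
    + blockdiag([nui * np.eye(d) for nui in nu])                # damping
M = blockdiag([mi * np.eye(d) for mi in m])                     # mass

translation = np.tile([1.0, 0.0], 4)   # rigid translation: zero elastic energy
```

Note that K annihilates rigid translations (a floppy mode), while C does not, because of the nodal viscous term.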
The detailed derivation of these relations for general networks of springs, masses and dampers appears later in section 6. Nevertheless we can already see that
K + ıωC_E = ∇^T diag(σ + ıωµ_E) ∇
is a discrete Laplacian with complex symmetric matrix valued edge weights σ + ıωµ_E. We emphasize that these weights are in general not Hermitian. Moreover
−ω²M + ıωC_V = diag(−ω²m + ıωd)
corresponds to complex symmetric matrix valued node weights given by −ω²m + ıωd. Again we emphasize that the Schrödinger potential we obtain is in general not Hermitian. Thus we can write the matrix in the system (6) as a discrete Schrödinger operator (5) since
−ω²M + ıωC + K = L_{σ+ıωµ_E} + diag(−ω²m + ıωd).

Example 2. We show how to use the matrix valued conductivities and Schrödinger potentials to view the Laplacian of a cylindrical graph C = P_k × G with scalar weights as a matrix valued Schrödinger operator on the graph P_k, a path with k nodes with vertices V(P_k) = {1, . . ., k} and edges E(P_k) = {{1, 2}, . . ., {k − 1, k}}.
Here × denotes the Cartesian product between graphs. Such cylindrical graphs arise e.g. in a finite difference discretization of the conductivity equation on a rectangle with a Cartesian grid, as illustrated in fig. 2. Let s ∈ (0, ∞)^{E(C)} be a scalar conductivity on the cylindrical graph C. We view s as a vector and split it into the sub-vectors s_j ∈ (0, ∞)^{E(G)}, j = 1, . . ., k, and s_{j,j+1} ∈ (0, ∞)^{V(G)}, j = 1, . . ., k − 1. The sub-vector s_j represents the scalar conductivity of the j-th copy of the graph G. The sub-vector s_{j,j+1} corresponds to the conductivity linking layer j to layer j + 1. Define the matrix valued conductivity σ ∈ (R^{|V(G)|×|V(G)|})^{E(P_k)} by σ({j, j + 1}) = diag s_{j,j+1}, j = 1, . . ., k − 1, and the matrix valued Schrödinger potential q ∈ (R^{|V(G)|×|V(G)|})^{V(P_k)} by q(j) = L_{s_j}, i.e. the Laplacian of the graph induced by the vertices in the j-th copy of G, j = 1, . . ., k.

Fig. 2. The Laplacian for a scalar conductivity on the cylindrical graph C ≡ P_5 × P_3 can be seen as a matrix valued Schrödinger operator on the graph P_5, as explained in example 2. To fix ideas, s_4 ∈ R^2 represents the conductivities of C within the 4-th group (in red) and defines the matrix valued Schrödinger potential q(4). The conductivity s_{2,3} ∈ R^3 represents the conductivities of the 3 edges between the 2nd and 3rd group and is used to define the matrix valued conductivity σ({2, 3}).
Then with an appropriate ordering of the vertices we have L_s = L_σ + diag(q).

Example 3. To further motivate the symmetry assumptions we make on the matrix valued conductivities and Schrödinger potentials, notice that the Laplacian on the cylindrical graph C = P_k × G of example 2 can be used to express Kirchhoff's node law in a circuit made of resistors whose conductances are given by s ∈ (0, ∞)^{E(C)}. In other words, the equation (L_s u)_i = 0 imposes that the sum of currents at the i-th node of C must be equal to zero. If we allow the conductances to be complex, i.e. s ∈ {z ∈ C | Re z > 0}^{E(C)}, then s is now the admittance of the circuit elements, at a particular operating frequency. Proceeding as in example 2, the complex matrix valued conductivity σ, defined on the edges of P_k by σ({j, j + 1}) = diag s_{j,j+1}, must be complex symmetric (and not Hermitian). The complex matrix valued Schrödinger potential q defined on the nodes of P_k by q(j) = L_{s_j} must also be complex symmetric (and not Hermitian).
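The identification of example 2 can be verified numerically. The sketch below uses a small made-up instance (k = 3 layers, G = P_3, random conductivities) and compares the scalar cylinder Laplacian, in layer-major vertex ordering, with L_σ + diag(q).

```python
# Sketch of example 2: scalar Laplacian on the cylinder P_3 x P_3 equals the
# matrix valued Schrodinger operator on P_3.  Conductivity values are made up.
import numpy as np

nG, k = 3, 3
EG = [(0, 1), (1, 2)]                       # edges of G = P_3
rng = np.random.default_rng(2)
s_layer = rng.uniform(1, 2, size=(k, len(EG)))   # s_j: within layer j
s_link = rng.uniform(1, 2, size=(k - 1, nG))     # s_{j,j+1}: between layers

def scalar_laplacian(n, edges, w):
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
    return L

# scalar Laplacian on the cylinder, vertices ordered layer by layer
N = nG * k
edges, w = [], []
for j in range(k):
    for (a, b), wab in zip(EG, s_layer[j]):
        edges.append((j*nG + a, j*nG + b)); w.append(wab)
for j in range(k - 1):
    for v in range(nG):
        edges.append((j*nG + v, (j+1)*nG + v)); w.append(s_link[j, v])
L_cyl = scalar_laplacian(N, edges, w)

# matrix valued picture on P_k: sigma({j,j+1}) = diag(s_{j,j+1}), q(j) = L_{s_j}
L_mat = np.zeros((N, N))
for j in range(k):      # diag(q): Laplacian of the j-th copy of G
    L_mat[j*nG:(j+1)*nG, j*nG:(j+1)*nG] += scalar_laplacian(nG, EG, s_layer[j])
for j in range(k - 1):  # L_sigma: diagonal matrix weights between layers
    S = np.diag(s_link[j])
    L_mat[j*nG:(j+1)*nG, j*nG:(j+1)*nG] += S
    L_mat[(j+1)*nG:(j+2)*nG, (j+1)*nG:(j+2)*nG] += S
    L_mat[j*nG:(j+1)*nG, (j+1)*nG:(j+2)*nG] -= S
    L_mat[(j+1)*nG:(j+2)*nG, j*nG:(j+1)*nG] -= S
```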
2.3. The Dirichlet problem. For a conductivity σ ∈ (C^{d×d})^E and a Schrödinger potential q ∈ (C^{d×d})^V, the σ, q Dirichlet problem consists in finding u ∈ (C^d)^V satisfying
((L_σ + diag(q))u)_I = 0 and u_B = g, (15)
where g ∈ (C^d)^B is the Dirichlet boundary condition and the subscript I (resp. B) is used to denote restriction of a quantity in (C^d)^V to one in (C^d)^I. The Dirichlet to Neumann map, when it exists, is the linear mapping Λ_{σ,q} : (C^d)^B → (C^d)^B defined by
Λ_{σ,q} g = ((L_σ + diag(q))u)_B, (16)
where u solves the Dirichlet problem (15) with boundary condition u_B = g ∈ (C^d)^B. We call Λ_{σ,q} a Dirichlet to Neumann map since this terminology is used when q = 0 in the scalar case (d = 1), see e.g. [10,13,14]. In the scalar case with q = 0, the conditions on the interior nodes (15) correspond to conservation of currents (Kirchhoff's node law) in a resistor network. Similarly (16) represents the net currents flowing out of the boundary nodes B, when the voltage is set to g on the boundary. These currents can be interpreted as the Neumann boundary data corresponding to the Dirichlet boundary data g, hence the name for Λ_{σ,0}.
The Dirichlet to Neumann map is well defined e.g. when the solution to the Dirichlet problem is uniquely determined by the boundary condition.¹ Conditions guaranteeing Dirichlet problem uniqueness are given in the next theorem, together with a formula for the Dirichlet to Neumann map.
Theorem 2.1. The σ, q Dirichlet problem on a connected graph with connected interior admits a unique solution when σ ∈ (C^{d×d})^E and q ∈ (C^{d×d})^V are symmetric and one of the two following conditions is satisfied:
(i) Re σ ≻ 0 and Re q_I ≻ −λ_min((L_{Re σ})_{II}) I; or
(ii) (L_{Re σ})_{II} + diag(Re q_I) ≻ 0.
When any of the two conditions above holds, the Dirichlet to Neumann map can be written as
Λ_{σ,q} = L_{BB} + diag(q_B) − L_{BI} (L_{II} + diag(q_I))^{−1} L_{IB}, (17)
where we dropped the subscript σ in the blocks (L_σ)_{BB}, . . . for clarity. This is the Schur complement of the block (L_σ + diag(q))_{II} in the matrix L_σ + diag(q).
Proof.The proof appears in section 3.2.
In the previous theorem, λ_min(A) denotes the smallest eigenvalue of a real symmetric matrix A. Recall that we identify operators with matrices. We use A_{XY}, X, Y ∈ {I, B}, to denote the submatrix of A with rows (resp. columns) associated with the vertices in X (resp. Y).
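The Schur complement formula (17) is easy to test numerically. The sketch below uses a made-up 4-node graph with random positive definite 2 × 2 weights and potentials (so condition (ii) holds), boundary nodes ordered first, and compares the Neumann data of a directly computed Dirichlet solution with the Schur complement applied to the boundary data.

```python
# Sanity check of the DtN map as a Schur complement, on a made-up example:
# boundary nodes {0, 1}, interior nodes {2, 3}, d = 2.
import numpy as np

d = 2
E = [(0, 2), (0, 3), (1, 3), (2, 3)]        # connected graph, connected interior
rng = np.random.default_rng(3)

def spd():
    M = rng.standard_normal((d, d))
    return M @ M.T + np.eye(d)

G = np.zeros((d*len(E), d*4))
D = np.zeros((d*len(E),)*2)
for a, (i, j) in enumerate(E):
    G[d*a:d*a+d, d*i:d*i+d] = -np.eye(d)
    G[d*a:d*a+d, d*j:d*j+d] = np.eye(d)
    D[d*a:d*a+d, d*a:d*a+d] = spd()
L = G.T @ D @ G

q = np.zeros_like(L)                        # Schrödinger potential, q(i) pd
for i in range(4):
    q[d*i:d*i+d, d*i:d*i+d] = spd()
A = L + q                                   # discrete Schrödinger operator

B_ix = np.r_[0:4]                           # dofs of boundary nodes {0, 1}
I_ix = np.r_[4:8]                           # dofs of interior nodes {2, 3}
ABB, ABI = A[np.ix_(B_ix, B_ix)], A[np.ix_(B_ix, I_ix)]
AIB, AII = A[np.ix_(I_ix, B_ix)], A[np.ix_(I_ix, I_ix)]

g = rng.standard_normal(len(B_ix))
uI = np.linalg.solve(AII, -AIB @ g)         # Dirichlet solution on the interior
dtn_g = ABB @ g + ABI @ uI                  # Neumann data ((L + diag(q))u)_B
Schur = ABB - ABI @ np.linalg.solve(AII, AIB)   # formula (17)
```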
Unfortunately theorem 2.1 and the expression (17) of the Dirichlet to Neumann map do not apply to one of the main applications of our results: static spring networks. As we see in more detail in section 6.1, the linearization of Hooke's law we use allows for non-physical floppy modes, i.e. non-zero displacements that can be made with zero forces. A generalization of the static spring network problem is to consider symmetric conductivities with Re σ ⪰ 0. In this situation, floppy modes may also arise if there are edges e for which Re σ(e) has a non-trivial nullspace. They can be defined as follows.

Definition 2.2 (Floppy mode). A floppy mode is a z ∈ (C^d)^V satisfying
z ≠ 0, z_B = 0 and (L_σ z)_I = 0. (18)
If z is a floppy mode, then the solution to the σ, 0 Dirichlet problem cannot be unique. Indeed if u is a solution to the σ, 0 Dirichlet problem, then so is u + αz for any scalar α. The following theorem shows that even in the degenerate case Re σ ⪰ 0, q = 0, there are situations where the Dirichlet problem admits a solution that is unique up to floppy modes.
Theorem 2.3. The σ, 0 Dirichlet problem on a connected graph with connected interior and with σ(e) ≠ 0 for all e ∈ E admits a unique solution up to floppy modes when Re σ ⪰ 0 and for each e ∈ E the inclusion N(Re σ(e)) ⊂ N(Im σ(e)) holds. In this case, the Dirichlet to Neumann map can be written as
Λ_{σ,0} = L_{BB} − L_{BI} Q (Q^T L_{II} Q)^{−1} Q^T L_{IB}, (19)
where for clarity we dropped the subscript σ in the blocks (L_σ)_{BB}, . . ., and Q is a real matrix whose columns form an orthonormal basis of R((L_{Re σ})_{II}). Moreover Q depends only on the ranges of Re σ(e) for e ∈ E.
Here we denote the nullspace or kernel of a matrix A ∈ C^{n×m} by N(A) and the range or column space of A is denoted by R(A).
Proof.The proof appears in section 3.3.1.
We note that theorem 2.3 applies in particular to the case of real positive semidefinite conductivities.
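The real positive semidefinite case can be illustrated numerically. In the made-up spring-like network below (node positions are ours), the edge weights are rank-1 orthogonal projectors, the interior block (L_σ)_{II} is singular, and yet the nullspace, which consists of interior restrictions of floppy modes, is invisible from the boundary, so a Dirichlet to Neumann map built with the pseudoinverse is well defined.

```python
# Sketch of the degenerate case of theorem 2.3: rank deficient projector
# weights (as for springs).  Positions and graph are made up; boundary nodes
# are {0, 1}, interior nodes are {2, 3}.
import numpy as np

d = 2
pos = np.array([[0., 0.], [3., 0.], [1., 1.], [2., -1.]])  # assumed positions
E = [(0, 2), (1, 3), (2, 3)]     # connected graph, connected interior

G = np.zeros((d*len(E), d*4))
D = np.zeros((d*len(E),)*2)
for a, (i, j) in enumerate(E):
    G[d*a:d*a+d, d*i:d*i+d] = -np.eye(d)
    G[d*a:d*a+d, d*j:d*j+d] = np.eye(d)
    e = (pos[j] - pos[i]) / np.linalg.norm(pos[j] - pos[i])
    D[d*a:d*a+d, d*a:d*a+d] = np.outer(e, e)   # rank-1 projector weight
L = G.T @ D @ G

B_ix, I_ix = np.r_[0:4], np.r_[4:8]
LBB, LBI = L[np.ix_(B_ix, B_ix)], L[np.ix_(B_ix, I_ix)]
LIB, LII = L[np.ix_(I_ix, B_ix)], L[np.ix_(I_ix, I_ix)]

rank = np.linalg.matrix_rank(LII)               # deficient: 3 < 4
dtn = LBB - LBI @ np.linalg.pinv(LII) @ LIB     # DtN via the pseudoinverse

w, U = np.linalg.eigh(LII)
Z = U[:, np.abs(w) < 1e-10]     # interior restrictions of floppy modes
```

Here the floppy mode translates both interior nodes perpendicular to the incident edge directions, and the boundary block LBI annihilates it.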
Remark 1 (Discrete Dirichlet principle). For real σ ≻ 0 and real q ⪰ 0, it is easy to show that the Dirichlet problem (15) is equivalent to finding u ∈ (R^d)^V minimizing the energy
E(u) = ½ (∇u)^T diag(σ) (∇u) + ½ u^T diag(q) u,
subject to u_B = g. The function E(u) is the energy needed to maintain a potential u in the network and is the sum of energies associated to each edge and node. The edge terms are akin to the current-voltage product used to calculate the power dissipated by a two terminal electrical component. The node terms represent the energy leaked by an electrical component linking the node to the ground (zero potential). The conditions σ ≻ 0, q ⪰ 0 guarantee E(u) is a convex quadratic function in u. The first equality in the Dirichlet problem (15) is identical to ∇_{u_I} E(u) = 0.

2.4. Relating boundary and interior quantities.
The following lemma is a straightforward generalization to complex matrix valued conductivities and Schrödinger potentials of the interior identities [6, Lemmas 5.1 and 6.1], which are in turn inspired by the continuum interior identities used by Sylvester and Uhlmann [25] to prove uniqueness for the continuum conductivity and Schrödinger problems.
Lemma 2.4 (Boundary/Interior Identity). Let σ_1, σ_2 ∈ (C^{d×d})^E be conductivities and q_1, q_2 ∈ (C^{d×d})^V be Schrödinger potentials. Let u_1, u_2 ∈ (C^d)^V be solutions to the σ_1, q_1 and σ_2, q_2 Dirichlet problems:
((L_{σ_i} + diag(q_i)) u_i)_I = 0 and (u_i)_B = g_i, i = 1, 2,
for some boundary conditions g_1, g_2 ∈ (C^d)^B. Then if the Dirichlet to Neumann maps Λ_{σ_i,q_i}, i = 1, 2, are well defined we have the identities
g_2^T (Λ_{σ_1,q_1} − Λ_{σ_2,q_2}) g_1 = u_2^T (L_{σ_1} − L_{σ_2}) u_1 + u_2^T diag(q_1 − q_2) u_1
= (∇u_2)^T diag(σ_1 − σ_2) (∇u_1) + u_2^T diag(q_1 − q_2) u_1
= vec(∇u_2 ⊛ ∇u_1)^T vec(σ_1 − σ_2) + vec(u_2 ⊛ u_1)^T vec(q_1 − q_2), (20)
where the outer product ⊛ is as in (1).
Proof. Since u_1 solves the σ_1, q_1 Dirichlet problem we have
g_2^T Λ_{σ_1,q_1} g_1 = u_2^T (L_{σ_1} + diag(q_1)) u_1. (21)
Similarly, we have that
g_2^T Λ_{σ_2,q_2} g_1 = u_1^T (L_{σ_2} + diag(q_2)) u_2 = u_2^T (L_{σ_2} + diag(q_2)) u_1, (22)
where we used the symmetry of Λ_{σ_2,q_2} and of L_{σ_2} + diag(q_2). Subtracting (22) from (21) gives the first equality. To obtain the second equality, use the definition of the weighted graph Laplacian to see that
u_2^T (L_{σ_1} − L_{σ_2}) u_1 = (∇u_2)^T diag(σ_1 − σ_2) (∇u_1).
By applying for each e ∈ E the identity x^T A y = vec(x y^T)^T vec(A), which holds for any x, y ∈ C^d and A ∈ C^{d×d}, we get
(∇u_2)^T diag(σ_1 − σ_2) (∇u_1) = vec(∇u_2 ⊛ ∇u_1)^T vec(σ_1 − σ_2). (23)
By applying the same identity for all nodes i ∈ V we get
u_2^T diag(q_1 − q_2) u_1 = vec(u_2 ⊛ u_1)^T vec(q_1 − q_2). (24)
The third equality follows from identities (23) and (24).
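The boundary/interior identity of lemma 2.4 can be verified numerically; the sketch below uses a made-up graph and two independent draws of random positive definite weights and potentials, and compares the boundary expression with the interior one.

```python
# Numerical check of the boundary/interior identity (lemma 2.4) on a made-up
# example: same graph, two different real pd conductivity/potential pairs.
import numpy as np

d, nV = 2, 4
E = [(0, 2), (0, 3), (1, 3), (2, 3)]
rng = np.random.default_rng(4)
B_ix, I_ix = np.r_[0:4], np.r_[4:8]     # boundary {0,1}, interior {2,3}

def spd():
    M = rng.standard_normal((d, d))
    return M @ M.T + np.eye(d)

def operator():
    """Return (gradient, diag(sigma), L + diag(q), diag(q)) with fresh weights."""
    G = np.zeros((d*len(E), d*nV)); D = np.zeros((d*len(E),)*2)
    for a, (i, j) in enumerate(E):
        G[d*a:d*a+d, d*i:d*i+d] = -np.eye(d)
        G[d*a:d*a+d, d*j:d*j+d] = np.eye(d)
        D[d*a:d*a+d, d*a:d*a+d] = spd()
    Q = np.zeros((d*nV,)*2)
    for i in range(nV):
        Q[d*i:d*i+d, d*i:d*i+d] = spd()
    return G, D, G.T @ D @ G + Q, Q

def solve(A, g):
    AII, AIB = A[np.ix_(I_ix, I_ix)], A[np.ix_(I_ix, B_ix)]
    u = np.zeros(d*nV); u[B_ix] = g
    u[I_ix] = np.linalg.solve(AII, -AIB @ g)
    return u

def dtn(A):
    return (A[np.ix_(B_ix, B_ix)]
            - A[np.ix_(B_ix, I_ix)]
            @ np.linalg.solve(A[np.ix_(I_ix, I_ix)], A[np.ix_(I_ix, B_ix)]))

G1, D1, A1, Q1 = operator()
G2, D2, A2, Q2 = operator()     # same graph (same gradient), new weights
g1, g2 = rng.standard_normal(len(B_ix)), rng.standard_normal(len(B_ix))
u1, u2 = solve(A1, g1), solve(A2, g2)

lhs = g2 @ (dtn(A1) - dtn(A2)) @ g1
rhs = (G1 @ u2) @ (D1 - D2) @ (G1 @ u1) + u2 @ (Q1 - Q2) @ u1
```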
3. Dirichlet problem uniqueness proofs. We first focus in section 3.1 on proving theorem 2.1 for the particular case where the conductivity is real positive definite and the Schrödinger potential is zero (lemma 3.3). We then complete the proof of theorem 2.1 by considering either Re σ ≻ 0 or Re q_I ≻ 0 in section 3.2. In both cases the objective is to show that the conditions given in theorem 2.1 are sufficient to guarantee that the matrix (L_σ)_{II} + diag(q_I) is invertible. The case where Re σ ⪰ 0, q = 0 is dealt with in section 3.3, which includes the proof of theorem 2.3, and is more delicate because the matrix (L_σ)_{II} may no longer be invertible. However it is still possible to show that the σ, 0 Dirichlet solution is unique up to floppy modes (definition 2.2).
3.1. Dirichlet problem uniqueness for positive definite conductivities and zero Schrödinger potentials. The purpose of this section is to establish uniqueness for the σ, 0 Dirichlet problem for real σ with σ ≻ 0 (lemma 3.3). We need two intermediary results on the discrete graph Laplacian L_σ with real matrix valued symmetric conductivity σ ≻ 0. The first one is a discrete version of the first Korn inequality (lemma 3.1). The second is to show that a vector potential u ∈ N(L_σ) must be constant on all connected components of the graph G (lemma 3.2). Using these properties, we can show that when σ is a real conductivity with σ ≻ 0, we have (L_σ)_{II} ≻ 0, which is sufficient to ensure uniqueness for the corresponding σ, 0 Dirichlet problem.
The following is a discrete version of the first Korn inequality, which bounds the elastic energy stored in a body from below by a norm of the displacement gradient, see e.g. [22, §1.12].

Lemma 3.1 (Discrete Korn inequality). Let σ ∈ (R^{d×d})^E be a conductivity with σ ≻ 0. Then there is a constant C > 0 such that for any u ∈ (R^d)^V,
u^T L_σ u ≥ C ‖∇u‖².

Proof. By using Rayleigh quotients,
u^T L_σ u = (∇u)^T diag(σ) (∇u) ≥ λ_min(diag(σ)) ‖∇u‖².
Define λ_* = min_{e∈E} λ_min(σ(e)) = λ_min(diag(σ)). Clearly σ ≻ 0 implies λ_* > 0. The inequality we seek follows with C = λ_*.

The next lemma extends to matrix valued conductivities a well known characterization of the nullspace of (scalar) weighted graph Laplacians (see e.g. [8]).

Lemma 3.2 (Nullspace of graph Laplacian). For real σ ≻ 0, u ∈ N(L_σ) implies that ∇u = 0. In particular if the graph is connected then u is constant, meaning there is a constant c ∈ R^d such that u(i) = c for all i ∈ V.
Proof. If u ∈ N(L_σ) then u^T L_σ u = 0. Using the discrete Korn inequality (lemma 3.1), we get ∇u = 0. This means that for any edge {i, j} ∈ E, we must have u(i) = u(j). Therefore u must be constant on connected components of the graph.
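The Korn bound with constant C = λ_* can be checked numerically; the sketch below uses a made-up triangle graph with random positive definite 2 × 2 edge weights.

```python
# Numerical illustration of the discrete Korn inequality (lemma 3.1):
# u^T L_sigma u >= lambda_* ||grad u||^2 with lambda_* = lambda_min(diag(sigma)).
# The graph and weights are made up.
import numpy as np

d = 2
E = [(0, 1), (1, 2), (0, 2)]
rng = np.random.default_rng(7)

G = np.zeros((d*len(E), d*3))
D = np.zeros((d*len(E),)*2)
for a, (i, j) in enumerate(E):
    G[d*a:d*a+d, d*i:d*i+d] = -np.eye(d)
    G[d*a:d*a+d, d*j:d*j+d] = np.eye(d)
    M = rng.standard_normal((d, d))
    D[d*a:d*a+d, d*a:d*a+d] = M @ M.T + 0.1*np.eye(d)  # sigma(e) pd
L = G.T @ D @ G

lam_star = np.linalg.eigvalsh(D).min()   # = min over e of lambda_min(sigma(e))
u = rng.standard_normal(d*3)
energy = u @ L @ u                       # u^T L_sigma u
bound = lam_star * np.linalg.norm(G @ u)**2
```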
We can now prove the first uniqueness result for the Dirichlet problem.

Lemma 3.3 (Uniqueness for real positive definite conductivities). Assume both the graph G and its subgraph induced by the interior nodes are connected. For real conductivities σ with σ ≻ 0, the matrix (L_σ)_{II} is invertible and the σ, 0 Dirichlet problem admits a unique solution.
Proof. Our goal here is to show that (L_σ)_{II} ≻ 0, which implies invertibility and therefore uniqueness for the σ, 0 Dirichlet problem. By definition of the weighted graph Laplacian (4), the matrix L_σ must be real and symmetric. Moreover using the discrete Korn inequality (lemma 3.1), there is a constant C > 0 such that for all u ∈ (R^d)^V:
u^T L_σ u ≥ C ‖∇u‖² ≥ 0.
This implies L_σ ⪰ 0 and hence (L_σ)_{II} ⪰ 0. Now we can write
(L_σ)_{II} = L_{σ_I} + diag(f), (25)
where L_{σ_I} is the weighted graph Laplacian on the subgraph of G induced by the interior nodes I and f ∈ (R^{d×d})^I is given for i ∈ I by
f(i) = Σ_{j∈B : {i,j}∈E} σ({i, j}). (26)
Since the sum of positive definite matrices is positive definite, σ ≻ 0 implies f(i) ≻ 0 for all nodes i ∈ I that are connected via an edge to some boundary node, and f(i) = 0 otherwise. Now let v ∈ N((L_σ)_{II}). Then 0 = v^T (L_σ)_{II} v = v^T L_{σ_I} v + v^T diag(f) v, and since both terms are nonnegative we must have (a) v^T L_{σ_I} v = 0 and (b) v^T diag(f) v = 0. By using (a) and lemma 3.2 on the subgraph induced by the interior nodes (which is connected by assumption), we get that v is constant, i.e. v(i) = v(j) for any i, j ∈ I. By using (b), we get that v(i)^T σ({i, j}) v(i) = 0 for all {i, j} ∈ E where i ∈ I and j ∈ B. Since G is connected, there must be at least one such edge {i, j}, and since σ({i, j}) ≻ 0 we get v(i) = 0 for that i. Since the subgraph of G induced by the interior nodes is connected, we conclude that v = 0. This gives the desired result (L_σ)_{II} ≻ 0.

3.2. Proof of theorem 2.1. We start with the following lemma that allows us to extend the uniqueness result from lemma 3.3 to complex conductivities and Schrödinger potentials.

Lemma 3.4. Let A, B ∈ R^{n×n} be symmetric with A ≻ 0. Then the matrix M = A + ıB is invertible.

Proof. The field of values (or numerical range, see e.g. [20]) of M ∈ C^{n×n} is the complex plane region given by
F(M) = {x^* M x | x ∈ C^n, ‖x‖ = 1}.
For any x ∈ C^n with ‖x‖ = 1 we have Re(x^* M x) = x^* A x ≥ λ_min(A) > 0, and the field of values F(M) lies in the right half of the complex plane, excluding the imaginary axis. Since the spectrum of M is contained in F(M) this means that 0 is not an eigenvalue of M and that M is invertible.
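The field-of-values argument can be illustrated on random data; the sketch below (synthetic matrices, our notation) checks that every eigenvalue of A + ıB has positive real part when A is symmetric positive definite and B symmetric.

```python
# Sketch of the invertibility argument: for symmetric A > 0 and symmetric B,
# the field of values of M = A + iB lies in the open right half plane, so every
# eigenvalue has positive real part and M is invertible.  Random instance.
import numpy as np

rng = np.random.default_rng(5)
n = 6
RA = rng.standard_normal((n, n)); A = RA @ RA.T + np.eye(n)   # symmetric, pd
RB = rng.standard_normal((n, n)); B = RB + RB.T               # symmetric
M = A + 1j * B

min_real = min(ev.real for ev in np.linalg.eigvals(M))

# real part of one field-of-values point, for a random unit vector
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)
fov_real = (np.conj(x) @ M @ x).real        # equals x^* A x > 0
```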
We are now ready to prove theorem 2.1.
Proof. First assume condition (i) of theorem 2.1 holds. We want to show that (L_σ)_{II} + diag(q_I) is invertible when Re σ ≻ 0 and Re q_I ≻ −ζ I, for some ζ > 0 to be determined and depending on Re σ. By lemma 3.3 and because Re σ ≻ 0, we have that (L_{Re σ})_{II} ≻ 0. Since we assume Re q_I ≻ −λ_min((L_{Re σ})_{II}) I, we must have (L_{Re σ})_{II} + diag(Re q_I) ≻ 0. We can now use lemma 3.4 with A ≡ (L_{Re σ})_{II} + diag(Re q_I) and B ≡ (L_{Im σ})_{II} + diag(Im q_I) to conclude that (L_σ)_{II} + diag(q_I) is invertible. Uniqueness follows from the definition of the σ, q Dirichlet problem.
Then assume condition (ii) of theorem 2.1 holds. By the hypothesis, we have that (L_{Re σ})_{II} + diag(Re q_I) ≻ 0. Hence we can use lemma 3.4 with A ≡ (L_{Re σ})_{II} + diag(Re q_I) and B ≡ (L_{Im σ})_{II} + diag(Im q_I) to conclude that (L_σ)_{II} + diag(q_I) is invertible and the desired uniqueness of the σ, q Dirichlet problem follows.
It now remains to prove (17). The proof is very similar to the scalar case (see e.g. [13]), but we include it here for completeness. The first equation of (15) can be rewritten as
((L_σ)_{II} + diag(q_I)) u_I = −(L_σ)_{IB} g. (27)
Since under the hypothesis of theorem 2.1 the matrix (L_σ)_{II} + diag(q_I) is invertible, we have
u_I = −((L_σ)_{II} + diag(q_I))^{−1} (L_σ)_{IB} g. (28)
The expression (17) of the Dirichlet to Neumann map can be identified from
Λ_{σ,q} g = ((L_σ + diag(q))u)_B = ((L_σ)_{BB} + diag(q_B)) g + (L_σ)_{BI} u_I.

3.3. Conductivities with positive semidefinite real part and zero Schrödinger potential. The purpose of this section is to prove theorem 2.3, which deals with the situation Re σ ⪰ 0 and q = 0. We start with several intermediary results. The first is a characterization of floppy modes (lemma 3.5) that shows that floppy modes for such σ are entirely determined by the subspace N(diag(σ)). Then we establish some relations between the ranges and nullspaces of certain blocks of L_σ and of L_{Re σ} (lemma 3.6). One key observation is that floppy modes do not affect the boundary data (lemma 3.7). First notice that for any u ∈ (C^d)^V and real σ ⪰ 0, we have
u^* L_σ u = ‖diag(σ^{1/2}) ∇u‖², (29)
where σ^{1/2}(e) = (σ(e))^{1/2}, and the square root of a positive semidefinite matrix is defined by taking the square root of the eigenvalues in its eigendecomposition, see e.g. [20]. We use this identity to give a characterization of the floppy modes for complex conductivities with Re σ ⪰ 0 and N(Re σ(e)) ⊂ N(Im σ(e)), e ∈ E. From this characterization, we can see that floppy modes depend only on the subspaces R(Re σ(e)) (or equivalently N(Re σ(e))), for e ∈ E.

Lemma 3.5 (Characterization of floppy modes). Let Re σ ⪰ 0 with N(Re σ(e)) ⊂ N(Im σ(e)) for all e ∈ E. Then the following two statements are equivalent: (i) z is a floppy mode; (ii) z ≠ 0, z_B = 0 and
diag(Re σ) ∇z = 0. (30)

Proof. (ii) ⇒ (i). Assume that z ≠ 0 satisfies (30). Because of the inclusion N(Re σ(e)) ⊂ N(Im σ(e)), e ∈ E, we also have diag(Im σ)∇z = 0 and diag(σ)∇z = 0. Hence L_σ z = ∇^T diag(σ)∇z = 0 and z is a floppy mode.
(i) ⇒ (ii). Now we assume that z is a floppy mode, i.e. (18) holds. Clearly this means that 0 = z^* L_σ z = z^* L_{Re σ} z + ı z^* L_{Im σ} z. Since both L_{Re σ} and L_{Im σ} are real symmetric, the scalars z^* L_{Re σ} z and z^* L_{Im σ} z must be real. We can conclude that z^* L_{Re σ} z = 0. Now we use (29) to realize that ‖diag((Re σ)^{1/2})∇z‖² = 0. Since Re σ(e) and (Re σ(e))^{1/2} have identical nullspaces, we see that (30) holds as well.
Lemma 3.6. Let σ be as in theorem 2.3. Then (i) N((L_σ)_{II}) = N((L_{Re σ})_{II}), and this common nullspace consists of the restrictions z_I to the interior nodes of the floppy modes z; (ii) N((L_σ)_{II}) ⊂ N((L_σ)_{BI}); and (iii) R((L_σ)_{II}) ⊃ R((L_σ)_{IB}).

Proof of (i). We assume z ≠ 0 satisfies (18). Since z_B = 0, we have 0 = z^* L_σ z = z_I^* (L_{Re σ})_{II} z_I + ı z_I^* (L_{Im σ})_{II} z_I. But the real matrices L_{Re σ} and L_{Im σ} are symmetric so we can conclude that z_I^* (L_{Re σ})_{II} z_I = 0. Again by using (29) we see that L_{Re σ} ⪰ 0 and thus z_I ∈ N((L_{Re σ})_{II}). Now assume z_I ∈ N((L_{Re σ})_{II}). By lemma 3.5, the extension z of z_I by zeros on B is a floppy mode, i.e. it satisfies (18). In particular, because z_B = 0, we have 0 = (L_σ z)_I = (L_σ)_{II} z_I. Thus we get z_I ∈ N((L_σ)_{II}) and (i) holds.
Proof of (ii). We now proceed as in the proof of lemma 3.3, and write
(L_σ)_{II} = L_{σ_I} + diag(f), (31)
where L_{σ_I} is the weighted graph Laplacian on the subgraph G_I induced by the interior nodes and f ∈ (C^{d×d})^I is given as in (26). Take a z ∈ N((L_σ)_{II}), multiply (31) on the left by z^* and on the right by z, and take real parts to obtain
0 = Re(z^* (L_σ)_{II} z) = z^* L_{(Re σ)_I} z + z^* diag(Re f) z. (32)
By (29) applied to the subgraph G_I, we have L_{(Re σ)_I} ⪰ 0. By the assumption Re σ ⪰ 0, we also have diag(Re f) ⪰ 0. Thus the two last terms in (32) are nonnegative and must be zero by the first equality in (32). We have established that z^* diag(Re f) z = 0. But Re f(i) is a sum of positive semidefinite matrices (since Re σ ⪰ 0). Thus we must have that z(i)^* Re σ({i, j}) z(i) = 0 for all {i, j} ∈ E, with i ∈ I, j ∈ B.
Since conductivities are assumed to have symmetric positive semidefinite real parts, this means that for all {i, j} ∈ E, with i ∈ I and j ∈ B, we have Re σ({i, j}) z(i) = 0. By the inclusion in the hypothesis of the lemma we get Im σ({i, j}) z(i) = 0 and also σ({i, j}) z(i) = 0. Now from the definition of the Laplacian we have, for every j ∈ B,
((L_σ)_{BI} z)(j) = − Σ_{i∈I : {i,j}∈E} σ({i, j}) z(i) = 0.
This shows the inclusion (ii).
Proof of (iii). Apply statement (ii) to the conductivity σ̄ = Re σ − ı Im σ and the fundamental theorem of linear algebra to get R(((L_σ̄)_{II})^*) ⊃ R(((L_σ̄)_{BI})^*). Since Re σ and Im σ are symmetric we get the desired result R((L_σ)_{II}) ⊃ R((L_σ)_{IB}).
The following lemma shows that even if there are floppy modes, these do not influence Neumann (or net current) measurements at the boundary. In other words, floppy modes cannot be observed from boundary measurements.

Lemma 3.7. Let σ be as in theorem 2.3. If z is a floppy mode, then (L_σ)_{BI} z_I = 0.

3.3.1. Proof of theorem 2.3. The interior condition of the σ, 0 Dirichlet problem (15) can be rewritten as
(L_σ)_{II} u_I = −(L_σ)_{IB} g. (33)
The inclusion (iii) of lemma 3.6 guarantees that equation (33) admits a solution for all g ∈ (C^d)^B. The general solution to (33) may be written as
u_I = −((L_σ)_{II})^† (L_σ)_{IB} g + z, (34)
where z ∈ N((L_σ)_{II}) and the symbol † is the Moore-Penrose pseudoinverse. The Neumann boundary data corresponding to such a solution is:
(L_σ)_{BB} g + (L_σ)_{BI} u_I = ((L_σ)_{BB} − (L_σ)_{BI} ((L_σ)_{II})^† (L_σ)_{IB}) g + (L_σ)_{BI} z.
However the inclusion (ii) in lemma 3.6 (or lemma 3.7) guarantees that (L_σ)_{BI} z = 0. Hence the Dirichlet to Neumann map is uniquely defined and can be written as
Λ_{σ,0} = (L_σ)_{BB} − (L_σ)_{BI} ((L_σ)_{II})^† (L_σ)_{IB}.
We can always find a real Q because of lemma 3.6 (i), and it can be found from (L_{Re σ})_{II} e.g. via a QR factorization or an eigendecomposition. By the fundamental theorem of linear algebra, the space R(Q) is the orthogonal complement of the interior components of floppy modes, and thus depends only on the subspaces R(Re σ(e)), e ∈ E (see lemma 3.5, (ii)). We can use Q to write the pseudoinverse of (L_σ)_{II} as follows
((L_σ)_{II})^† = Q (Q^T (L_σ)_{II} Q)^{−1} Q^T, (35)
and we get the alternate expression (19) for the Dirichlet to Neumann map.
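The pseudoinverse formula (35) can be checked numerically in the real symmetric positive semidefinite case; the sketch below uses a synthetic rank deficient matrix and builds Q from an eigendecomposition, as suggested above.

```python
# Sketch of (35): if the columns of Q are an orthonormal basis of the range of
# a real symmetric psd matrix L_II, then pinv(L_II) = Q (Q^T L_II Q)^{-1} Q^T.
# Synthetic rank deficient example.
import numpy as np

rng = np.random.default_rng(6)
n, r = 6, 4
X = rng.standard_normal((n, r))
LII = X @ X.T                          # symmetric psd with rank r

w, U = np.linalg.eigh(LII)
Q = U[:, w > 1e-10]                    # orthonormal basis of the range
pinv_Q = Q @ np.linalg.inv(Q.T @ LII @ Q) @ Q.T   # formula (35)
pinv_np = np.linalg.pinv(LII)          # reference Moore-Penrose pseudoinverse
```

The advantage of (35) over a generic pseudoinverse is that it only inverts a small well conditioned matrix on the range of (L_{Re σ})_{II}.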
4. Common structure. The discrete inverse problems we consider share a common structure that we describe in section 4.1 and that is motivated in section 4.2 by the classic uniqueness proof for the continuum Schrödinger inverse problem [25]. Under the assumptions we make here, the linearization of the problem is readily available (section 4.3) and analyticity of the forward map is ensured. This has practical implications that are described in section 4.4.

4.1. An abstract inverse problem. We denote by p ∈ C^m the unknown parameter. As we see later in sections 5 and 6, the parameter p may represent a matrix valued quantity (or its eigenvalues) defined on the edges or nodes of a graph. The forward or parameter to data map associates to the parameter p the matrix Λ_p ∈ C^{n×n} (the data), provided the parameter p belongs to an admissible set R ⊂ C^m of parameters. The inverse problem is to find p from Λ_p. Furthermore, we assume that the discrete inverse problems we consider satisfy the following assumptions.
• Assumption 1. The parameter p belongs to an open convex set R ⊂ C^m of admissible parameters. The forward map that to a parameter p ∈ R associates the data Λ_p is well defined for p ∈ R.
• Assumption 2. For all f, g ∈ C^n and p_1, p_2 ∈ R the following boundary/interior identity holds:

f^T (Λ_{p_1} − Λ_{p_2}) g = [b(S_{p_1} f, S_{p_2} g)]^T (p_1 − p_2), (36)

where b : C^ℓ × C^ℓ → C^m is a bilinear mapping and S_p ∈ C^{ℓ×n} is a matrix, defined for p ∈ R, that associates to a boundary condition f ∈ C^n an internal "state" S_p f ∈ C^ℓ. This identity is motivated in section 4.2 by looking at the continuum Schrödinger problem.
• Assumption 3: Analyticity. The entries of S_p are analytic functions of p for p ∈ R. Here by "analytic" we mean in the sense of analyticity of several complex variables, see e.g. [19]. For completeness, we recall in appendix A all the results we use from the theory of functions of several complex variables.

4.2. Motivation of boundary/interior identity. The identity (36) in Assumption 2 is a discrete version of a similar identity that plays a key role in the Sylvester and Uhlmann [25] proof of uniqueness for the continuum Schrödinger inverse problem. To motivate (36) we make a short excursion to the continuum (fully contained within this section) and consider an open connected and bounded domain Ω ⊂ R^d with smooth boundary ∂Ω. We say u solves the Schrödinger problem if

−Δu + qu = 0 in Ω, and u = f on ∂Ω,

where Δ = ∂_1^2 + ⋯ + ∂_d^2 is the Laplacian, q is a Schrödinger potential and f is the Dirichlet boundary data. The Dirichlet to Neumann map is the linear mapping Λ_q that maps Dirichlet boundary data u|_{∂Ω} to Neumann boundary data n · ∇u|_{∂Ω}, where n is the outward pointing unit normal to ∂Ω and ∇u = [∂_1 u, …, ∂_d u]^T is the gradient of u. Now if u_1 (resp. u_2) solves the Schrödinger problem with u_1|_{∂Ω} = f (resp. u_2|_{∂Ω} = g) and Schrödinger potential q_1 (resp. q_2), then by using Green's identities one gets the following identity that can be found in [25]:

∫_{∂Ω} g (Λ_{q_1} − Λ_{q_2}) f dS = ∫_Ω (q_1 − q_2) u_1 u_2 dx.

This is called a boundary/interior identity because it relates the boundary data (difference of Dirichlet to Neumann maps for q_1 and q_2) to a linear functional of q_1 − q_2 in the interior of Ω. The linear functional itself is the product of the solutions u_1 and u_2.
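For completeness, a compact sketch of how this identity follows from Green's first identity (under the stated smoothness assumptions) is:

```latex
\int_{\partial\Omega} g\,(\Lambda_{q_1} f)\,dS
  = \int_{\partial\Omega} u_2\,(n\cdot\nabla u_1)\,dS
  = \int_{\Omega} \nabla u_1\cdot\nabla u_2 + u_2\,\Delta u_1 \,dx
  = \int_{\Omega} \nabla u_1\cdot\nabla u_2 + q_1\,u_1 u_2 \,dx ,
```

and similarly ∫_{∂Ω} f (Λ_{q_2} g) dS = ∫_Ω ∇u_1 · ∇u_2 + q_2 u_1 u_2 dx. Subtracting, and using the symmetry of the Dirichlet to Neumann map so that ∫_{∂Ω} f (Λ_{q_2} g) dS = ∫_{∂Ω} g (Λ_{q_2} f) dS, gives the boundary/interior identity.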
Analogously, in the boundary/interior identity (36) we require that the difference in the parameter to data map for two parameters p_1 and p_2 is given by a linear functional of p_1 − p_2, where the linear functional itself is a "product" of the "internal states" corresponding to p_1 and p_2. What we mean by "product" and "internal state" is left abstract for now, but many concrete examples are given in section 5.

4.3. The product of solutions matrix and the Jacobian. For a discrete inverse problem satisfying assumptions 1-3, we define the following product of solutions matrix, which is the matrix valued function W : R × R → C^{m×n²} with columns given by

[W(p_1, p_2)](:, i + (j − 1)n) = b(S_{p_1}(:, i), S_{p_2}(:, j)), i, j = 1, …, n. (39)

The next lemma shows that the parameter to data map Λ_p must be Fréchet differentiable (specialized versions of this lemma appear in [6, lemmas 5.4 and 6.3]).
Lemma 4.1 (Linearization of discrete inverse problem). Let p ∈ R. For sufficiently small δp ∈ C^m, we have

vec(Λ_{p+δp} − Λ_p) = W(p, p)^T δp + o(‖δp‖). (40)

Proof. Use the boundary/interior identity (36) with p_1 = p + εδp and p_2 = p, for some scalar ε. To conclude, divide both sides by ε and take the limit as ε → 0. Notice that assumption 3 guarantees that S_p is analytic in p; therefore we do have continuity of S_p in p and S_{p+εδp} → S_p as ε → 0.
A consequence of lemma 4.1 is that W(p, p)^T is an n² × m matrix representation of the Jacobian matrix for the parameter to data map at parameter value p. From (39), the matrix representation of the Jacobian is associated to identifying the matrix Λ_p ∈ C^{n×n} with the vector vec(Λ_p) ∈ C^{n²}. Clearly the linearized inverse problem about p is injective when N(W(p, p)^T) = {0}, i.e. when the product of solutions matrix W(p, p) has full row rank, i.e. R(W(p, p)) = C^m.
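To fix ideas, here is a minimal numerical sketch (our own toy example: the scalar case with a three-edge star network; all function and variable names are ours) of the product of solutions matrix (39) and of the injectivity check for the Jacobian:

```python
import numpy as np

# Scalar conductivity on a star graph: boundary nodes 0,1,2, interior node 3,
# edges (0,3), (1,3), (2,3). A hypothetical example; W(p, p) follows (39)
# with b the entrywise product of the edge gradients of Dirichlet solutions.
edges = [(0, 3), (1, 3), (2, 3)]
B, I = [0, 1, 2], [3]

def laplacian(sigma):
    L = np.zeros((4, 4))
    for s, (a, b) in zip(sigma, edges):
        L[a, a] += s; L[b, b] += s
        L[a, b] -= s; L[b, a] -= s
    return L

def dtn(sigma):
    """Dirichlet to Neumann map: L_BB - L_BI L_II^{-1} L_IB."""
    L = laplacian(sigma)
    return L[np.ix_(B, B)] - L[np.ix_(B, I)] @ np.linalg.solve(
        L[np.ix_(I, I)], L[np.ix_(I, B)])

def internal_state(sigma):
    """S_sigma: column i is the edge gradient of the Dirichlet solution
    with boundary data e_i (the 'internal state' of Assumption 2)."""
    L = laplacian(sigma)
    U = np.zeros((4, len(B)))
    U[B, :] = np.eye(len(B))
    U[I, :] = -np.linalg.solve(L[np.ix_(I, I)], L[np.ix_(I, B)])
    return np.array([U[a] - U[b] for a, b in edges])   # |E| x |B|

def product_of_solutions(sigma):
    """W(sigma, sigma) as in (39); column i + j*n holds b(S(:,i), S(:,j))."""
    S = internal_state(sigma)
    n = S.shape[1]
    return np.column_stack([S[:, i] * S[:, j]
                            for j in range(n) for i in range(n)])

sigma = np.array([1.0, 2.0, 3.0])
W = product_of_solutions(sigma)                  # 3 x 9
svals = np.linalg.svd(W, compute_uv=False)
jacobian_injective = svals.min() > 1e-12 * svals.max()
```

For this star network W(σ, σ) has full row rank, so the linearized problem is injective at σ; the transpose W(σ, σ)^T is, by lemma 4.1, the Jacobian of σ ↦ vec(Λ_σ).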
Another consequence of lemma 4.1 is that the Jacobian of Λ_p with respect to p must be analytic for p ∈ R (by assumption 3). Clearly the forward map Λ_p must also be analytic for p ∈ R.

4.4. Analyticity and uniqueness almost everywhere. We look at the impact of analyticity on the uniqueness question: if p_1, p_2 ∈ R are parameters with identical data Λ_{p_1} = Λ_{p_2}, can we conclude that p_1 = p_2? For inverse problems satisfying assumptions 1-3, we can only guarantee uniqueness in a weak sense that we call uniqueness almost everywhere (as in [6]). By this we mean that (a) the linearized problem is injective for almost all parameters p ∈ R and (b) for any p, the set M_p ≡ {q ∈ R | Λ_q = Λ_p} must have zero measure. The set M_p is the equivalence class of all parameters that give the same data as the parameter p. If uniqueness were guaranteed for the inverse problem, we would have M_p = {p}. In our case we can only guarantee the much weaker statement that M_p has zero measure. Both (a) and (b) follow from analyticity of the forward map and assuming that the Jacobian is injective at a single parameter ρ ∈ R.
First assume analyticity of Λ_p and that we can find (ρ_1, ρ_2) ∈ R × R such that Λ_{ρ_1} ≠ Λ_{ρ_2}. Define the function g : R × R → C by g(x, y) = [Λ_x − Λ_y]_{ij} for some i, j ∈ {1, …, n}. Clearly g is analytic on R × R and we can choose i, j such that g(ρ_1, ρ_2) ≠ 0. By analytic continuation, the set {(p_1, p_2) ∈ R × R | g(p_1, p_2) = 0} must be a zero measure set. In particular if we fix p ∈ R we have shown that M_p × M_p has zero measure and so M_p must have zero measure as well. This is a much simpler way of reaching a result similar to that in [6] and was suggested by Druskin [15].
Analyticity can also be used to deduce that if the Jacobian of the forward map is injective at a parameter ρ ∈ R, then it must be injective at almost any other parameter p ∈ R, i.e. (a). Indeed, lemma 4.1 shows that the Jacobian at p can be represented by the n² × m matrix W(p, p)^T defined in (39). If W(ρ, ρ)^T is injective for a ρ ∈ R, then there is an m × m submatrix [W(ρ, ρ)]_{:,α} of W(ρ, ρ) that is invertible, where α = (α_1, …, α_m) ∈ {1, …, n²}^m is a multi-index used to represent the particular choice of columns. Thus the function f : R → C defined by

f(p) = det([W(p, p)]_{:,α}) (41)

is analytic for p ∈ R and is such that f(ρ) ≠ 0. By analytic continuation, the zero set of f must be of measure zero. Thus the set of parameters for which the Jacobian is not injective must be a zero measure set.
Finally we note that if we can find a parameter ρ for which the Jacobian W(ρ, ρ)^T is injective, then we can use the constant rank theorem (see e.g. [24]) to show that there is no p ∈ R with p ≠ ρ in a neighborhood of ρ such that Λ_ρ = Λ_p, i.e. the inverse problem is locally uniquely solvable near ρ.

4.5. Applications of uniqueness almost everywhere. Uniqueness a.e. has several practical applications that are illustrated for the scalar discrete conductivity problem in [6]. We give an outline of these applications for completeness. The first application is a simple test to determine whether uniqueness a.e. holds for a particular discrete inverse problem and that may also indicate sensitivity to noise (section 4.5.1). Once we know uniqueness a.e. holds for a particular discrete inverse problem, we can guarantee that the situations in which Newton's method with line search fails can be easily avoided (section 4.5.2). Naturally, a statement about zero measure sets can be translated to a probabilistic setting (section 4.5.3).

4.5.1. A test for uniqueness almost everywhere. Recall from lemma 4.1 that the Jacobian of the discrete inverse problem at a parameter p can be easily computed as the product of solutions matrix (39) with p ≡ p_1 = p_2. As discussed in section 4.4, if we can find a parameter p ∈ R for which the Jacobian is injective, then uniqueness a.e. holds for the problem. A numerical test for uniqueness a.e. can be summarized as follows.
1. Pick a parameter p ∈ R.
2. Form the product of solutions matrix W(p, p) defined in (39).
3. Find the largest and smallest singular values σ_max, σ_min of W(p, p).
4. If σ_min > εσ_max, where ε is a tolerance set a priori, then uniqueness a.e. holds.
We point out that if σ_min ≤ εσ_max it is not possible to distinguish between the two following scenarios: (a) uniqueness a.e. holds but the Jacobian W(p, p)^T is not injective to tolerance ε; or (b) uniqueness a.e. does not hold for the problem. Thus the test is inconclusive. However we know that scenario (a) is very unlikely, because we would have had to pick p in the zero measure subset of R that contains all parameters for which the Jacobian is not injective. Thus the most likely outcome is (b). Finally we remark that other methods may be used instead of the Singular Value Decomposition (SVD) to find the rank of the Jacobian (e.g. the QR factorization). We prefer the SVD because the ratio σ_max/σ_min is the condition number of the linear least squares problem associated with the linearization of the discrete inverse problem, and thus measures the sensitivity to noise of the linearization of the inverse problem about the parameter p.

4.5.2. Newton's method. The discrete inverse problem of finding the parameter p from the data Λ_p is a non-linear system of equations that can be solved numerically using Newton's method (see e.g. [23]). Let us denote by DΛ_p = W(p, p)^T the Jacobian of the Dirichlet to Neumann map about the parameter p ∈ R. For our particular problem we get the following.
The first operation in the Newton iteration is to solve a linear problem for the step δp^{(k)}. This operation can fail either because the residual vec(Λ_meas − Λ_{p^{(k)}}), where Λ_meas denotes the measured data, is not in the range of DΛ_{p^{(k)}}, or because DΛ_{p^{(k)}} is not injective. A remedy to either of these situations is to solve the linear least squares problem

min_{δp} ‖DΛ_{p^{(k)}} δp − vec(Λ_meas − Λ_{p^{(k)}})‖, (42)

and pick δp^{(k)} as the minimal norm solution to (42). If uniqueness a.e. holds for the problem at hand, then clearly DΛ_{p^{(k)}} is injective except on a zero measure set. Therefore we can expect the step in Newton's method to be defined uniquely. Now assume we found a step. If we assume a particular form of analyticity for the entries of S_p (in Assumption 3), then we can guarantee that there are only finitely many choices of the step length t_k for which DΛ_{p^{(k+1)}} is not injective. In the unlikely event one encounters one of such points, the step length t_k can be reduced by a small amount to make DΛ_{p^{(k+1)}} injective. This is a consequence of the following lemma, which is a generalization of the result for the scalar discrete conductivity inverse problem in [6, Corollary 5.7].
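A minimal numerical sketch of this iteration (our own toy example: the scalar conductivity star network with three boundary nodes, a finite-difference Jacobian standing in for W(p, p)^T, and full steps t_k = 1):

```python
import numpy as np

# Hypothetical forward map: DtN map of a 3-edge star network with
# boundary nodes 0,1,2 and interior node 3; parameter p = conductivities.
edges = [(0, 3), (1, 3), (2, 3)]
B, I = [0, 1, 2], [3]

def dtn(p):
    L = np.zeros((4, 4))
    for s, (a, b) in zip(p, edges):
        L[a, a] += s; L[b, b] += s; L[a, b] -= s; L[b, a] -= s
    return L[np.ix_(B, B)] - L[np.ix_(B, I)] @ np.linalg.solve(
        L[np.ix_(I, I)], L[np.ix_(I, B)])

def jacobian(p, h=1e-7):
    """Finite-difference stand-in for DLambda_p = W(p, p)^T (n^2 x m)."""
    cols = []
    for k in range(len(p)):
        dp = np.zeros(len(p)); dp[k] = h
        cols.append((dtn(p + dp) - dtn(p)).flatten(order="F") / h)
    return np.column_stack(cols)

p_true = np.array([1.0, 2.0, 3.0])
data = dtn(p_true)                       # synthetic measured data

p = np.array([1.2, 1.8, 2.7])            # initial guess in R = (0, inf)^3
for _ in range(30):
    residual = (data - dtn(p)).flatten(order="F")
    # minimal norm least squares step, as in (42)
    step, *_ = np.linalg.lstsq(jacobian(p), residual, rcond=None)
    p = p + step                          # full step t_k = 1
```

`np.linalg.lstsq` returns the minimal norm least squares solution, which is exactly the step prescribed by (42); here the Jacobian has full column rank, so the step is unique.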
Lemma 4.2. Consider a discrete inverse problem satisfying assumptions 1-3 and further assume that all entries of S_p are rational functions of p (of the form P(p)/Q(p), where P and Q are polynomials). Let p ∈ R and δp ∈ C^m with δp ≠ 0, and assume the Jacobian of Λ_p at p is injective. Then there are at most finitely many t ∈ R for which p + tδp ∈ R and either (i) the Jacobian of Λ_p at p + tδp is not injective; or (ii) Λ_{p+tδp} = Λ_p.
Proof. Since the Jacobian of Λ_p is injective at p ∈ R, there is a multi-index α ∈ {1, …, n²}^m such that the function f defined in (41) satisfies f(p) ≠ 0. Since the admissible set is open and convex, there is an interval [a, b] containing 0 such that t ∈ [a, b] =⇒ p + tδp ∈ R. Since the entries of S_p are rational functions of p and f is defined through a determinant, we can see that the function g(t) = f(p + tδp) is a rational function of t, i.e. it can be written in the form g(t) = F(t)/G(t) where F(t) and G(t) are polynomials. Since F(t) can only have finitely many zeroes, we conclude that there are only finitely many t for which g(t) = 0, or in other words, for which the Jacobian at p + tδp is not injective. This proves (i). To prove (ii) we consider the function h(t) = det([W(p, p + tδp)]_{:,α}), with α being the same multi-index as in (i). The function h is also a rational function in t with finitely many zeroes. Notice that h(t) ≠ 0 implies the matrix W(p, p + tδp) has full row rank. Using the boundary/interior identity (36) we see that vec(Λ_p − Λ_{p+tδp}) = −t W(p, p + tδp)^T δp. Hence for t ≠ 0 with h(t) ≠ 0 we must have Λ_{p+tδp} ≠ Λ_p when δp ≠ 0. Thus there are at most finitely many t for which Λ_{p+tδp} = Λ_p.
Remark 2. The assumption on the entries of S_p being rational functions of p is satisfied by all the examples of discrete inverse problems on graphs that we consider in sections 5 and 6. This is a simple consequence of the cofactor formula for the inverse of a matrix.

4.5.3. Probabilistic interpretation of uniqueness almost everywhere. The discussion in section 4.4 has a probabilistic flavor, as was remarked for the scalar conductivity problem in [6]. To see this, consider a probability space (Ω, F, P) (i.e. a sample space Ω, a set of events F and a probability measure P) and consider a random variable P : Ω → R × R with distribution µ_P that we assume is absolutely continuous with respect to the Lebesgue measure on C^m × C^m. Note that this assumption precludes the distribution µ_P from being supported on a set of Lebesgue measure zero in R × R. We write P ≡ (P_1, P_2) when we want to distinguish the components of P. If uniqueness a.e. holds for the discrete inverse problem at hand and M ⊂ R × R is a measurable set for which P{P ∈ M} > 0, then we must have

P{Λ_{P_1} = Λ_{P_2} and P_1 ≠ P_2 | P ∈ M} = 0.

To see this, remark that uniqueness a.e. guarantees that the corresponding exceptional set has Lebesgue measure zero. A similar observation can be made regarding the injectivity of the Jacobian of the problem. Let Q : Ω → R be a random variable with distribution µ_Q that is assumed to be absolutely continuous with respect to the Lebesgue measure. If uniqueness a.e. holds and N ⊂ R is some measurable set with P{Q ∈ N} > 0, then we must have

P{the Jacobian DΛ_Q is not injective | Q ∈ N} = 0.

5. Examples of matrix valued inverse problems on graphs. Here we use the graph theoretical results from section 2 to give examples of matrix inverse problems on graphs that fit the mold of section 4.

5.1. Matrix valued conductivity inverse problem. Given a graph G = (V, E) with boundary, the problem here is to find the matrix valued conductivity σ ∈ (C^{d×d})^E from the Dirichlet to Neumann map Λ_{σ,0}. We explain below why this problem satisfies the assumptions of section 4.
• Here we take as admissible set

R = {σ ∈ (C^{d×d})^E | σ(e) = σ(e)^T and Re σ(e) ≻ 0 for all e ∈ E}.

This is an open convex set in (C^{d×d})^E which can be identified with an open convex subset of C^{d²|E|}. The forward map is the map that to σ ∈ R associates the Dirichlet to Neumann map Λ_{σ,0} ∈ C^{d|B|×d|B|}. This map is well defined for σ ∈ R because of theorem 2.1.
• By lemma 2.4 with q_i = 0, i = 1, 2, we have the boundary/interior identity

f^T (Λ_{σ_1,0} − Λ_{σ_2,0}) g = Σ_{e∈E} [(∇u_2)(e)]^T (σ_1 − σ_2)(e) (∇u_1)(e),

where u_1 solves the Dirichlet problem with conductivity σ_1 and boundary data f, and u_2 solves it with conductivity σ_2 and boundary data g. Here we define S_σ ∈ C^{d|E|×d|B|} by its action on some f ∈ C^{d|B|}: S_σ f = ∇u, where u solves the Dirichlet problem (15), with q = 0 and u_B = f. The bilinear map b : C^{d|E|} × C^{d|E|} → C^{d²|E|} is defined by b(u, v) = u ∘ v, where the outer product ∘ is defined in section 2.1 and we are implicitly identifying (C^d)^E with C^{d|E|} and similarly for (C^{d×d})^E and C^{|E|d²}.
• Analyticity assumption. From theorem 2.1 (see also lemmas 3.3 and 3.4), the solution u to the Dirichlet problem (15) with u_B = f, σ ∈ R and q = 0 is determined by u_I = −(L_{II})^{-1} L_{IB} f, where for clarity we omitted the subscript σ from the graph Laplacian L_σ. Therefore the entries of S_σ depend analytically on σ, for σ ∈ R.
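A minimal numerical sketch of the forward map for this problem (our own toy example with d = 2, four nodes, and fixed symmetric positive definite edge weights; names are ours):

```python
import numpy as np

# Matrix valued conductivity, d = 2, on a 4-node graph with boundary
# B = {0, 1} and interior I = {2, 3}. sigma(e) are fixed SPD 2x2 matrices.
d, nV, B, I = 2, 4, [0, 1], [2, 3]
edges = {(0, 2): np.array([[2.0, 1.0], [1.0, 2.0]]),
         (1, 3): np.array([[3.0, 0.0], [0.0, 1.0]]),
         (2, 3): np.array([[1.0, 0.5], [0.5, 1.0]])}

# assemble the block Laplacian: (L u)(i) = sum_j sigma({i,j}) (u(i) - u(j))
L = np.zeros((d * nV, d * nV))
for (a, b), s in edges.items():
    for i, j, w in [(a, a, s), (b, b, s), (a, b, -s), (b, a, -s)]:
        L[d*i:d*(i+1), d*j:d*(j+1)] += w

def blk(ix):
    """Degree-of-freedom indices for a list of nodes (d per node)."""
    return np.concatenate([np.arange(d*i, d*(i+1)) for i in ix])

LBB = L[np.ix_(blk(B), blk(B))]; LBI = L[np.ix_(blk(B), blk(I))]
LIB = L[np.ix_(blk(I), blk(B))]; LII = L[np.ix_(blk(I), blk(I))]

# Dirichlet to Neumann map Lambda_{sigma,0} = L_BB - L_BI L_II^{-1} L_IB
Lam = LBB - LBI @ np.linalg.solve(LII, LIB)
```

The map inherits symmetry and positive semidefiniteness from L_σ, and it annihilates constant boundary data (a constant vector field solves the Dirichlet problem with zero net forces).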

5.1.1. Relation between scalar and matrix valued conductivity problems. Here we show that if the Jacobian for the scalar conductivity problem on a graph is injective at a conductivity s ∈ (0, ∞)^E, then the Jacobian for the matrix conductivity problem on the same graph, but with the conductivity of edge e ∈ E given by s(e)I ∈ R^{d×d}, must also be injective. Because of the discussion in section 4.4, this result shows that if uniqueness a.e. holds for a scalar conductivity problem, then it must also hold for the matrix valued problem. In particular uniqueness a.e. holds on the critical circular planar graphs that are defined in [13, 14].
Lemma 5.1. Let G = (V, E) be a graph with boundary and let s ∈ (0, ∞)^E be a scalar conductivity. Define the conductivity σ ∈ (R^{d×d})^E by σ(e) = s(e)I with I being the d × d identity. Then if the Jacobian of the forward problem is injective for the scalar conductivity s, it must also be injective for the matrix valued conductivity σ.
Proof. We need to show that the product of solutions matrix W(σ, σ) for the matrix valued problem has full row rank. To this end, we first link the Laplacian L_σ for the matrix valued conductivity σ, which is a d|V| × d|V| matrix, to the Laplacian for a graph G_d = (V_d, E_d) that corresponds to having d copies of the graph G without any edges between the copies, and each copy of G having the same scalar conductivity s. To be more precise, the vertex set of G_d is V_d = V × {1, …, d} and the edge set is E_d = {{(v_1, ℓ), (v_2, ℓ)} | {v_1, v_2} ∈ E, ℓ ∈ {1, …, d}}. Then with an appropriate ordering of V_d, we have L_σ = L_{s_d}, where the conductivity s_d ∈ (0, ∞)^{E_d} is defined by s_d({(v_1, ℓ_1), (v_2, ℓ_2)}) = s({v_1, v_2}) δ_{ℓ_1,ℓ_2} for all {v_1, v_2} ∈ E, ℓ_1, ℓ_2 ∈ {1, …, d}, and with δ_{ℓ_1,ℓ_2} being the Kronecker delta. Now take a solution v ∈ R^V to the Dirichlet problem on G with scalar conductivity s and let e_i be the i-th canonical basis vector of R^d. Then up to a reordering of V_d, v ⊗ e_i solves the Dirichlet problem on G with matrix valued conductivity σ and boundary data v|_B ⊗ e_i, i = 1, …, d. Here we used the Kronecker product ⊗ which we recall for convenience in (2). Let v_j be the solution to the Dirichlet problem on G with conductivity s such that v_j|_B = e_j, j = 1, …, |B|. Then the products of such solutions are given by (43), where we used the outer product defined in (1). Since ∇(v ⊗ e_i) = (∇v) ⊗ e_i for any v ∈ R^V, we obtain the products (43) for any i, i' = 1, …, |B| and j, j' = 1, …, d. Now let us consider the subspace M ⊂ R^{d²|E|} spanned by all possible products (43) in vector form.
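The identification used in the proof can be checked numerically: with nodes ordered so that the d copies of each degree of freedom are consecutive, the block Laplacian for σ(e) = s(e)I is the Kronecker product L_s ⊗ I_d, and the Dirichlet to Neumann maps inherit the same structure. A sketch on our own toy star graph:

```python
import numpy as np

# Star graph: boundary nodes 0,1,2, interior node 3; scalar conductivities s.
edges = [(0, 3), (1, 3), (2, 3)]
B, I, d = [0, 1, 2], [3], 2

def scalar_laplacian(s):
    L = np.zeros((4, 4))
    for w, (a, b) in zip(s, edges):
        L[a, a] += w; L[b, b] += w; L[a, b] -= w; L[b, a] -= w
    return L

def schur_dtn(L, nb):
    """DtN map via Schur complement; nb = number of boundary dofs
    (nodes are ordered with all boundary dofs first)."""
    return (L[:nb, :nb]
            - L[:nb, nb:] @ np.linalg.solve(L[nb:, nb:], L[nb:, :nb]))

s = np.array([1.0, 2.0, 3.0])
Ls = scalar_laplacian(s)

# Matrix valued conductivity sigma(e) = s(e) I: with node-major ordering
# the block Laplacian is exactly the Kronecker product L_s (x) I_d.
Lsigma = np.kron(Ls, np.eye(d))

dtn_scalar = schur_dtn(Ls, len(B))
dtn_matrix = schur_dtn(Lsigma, d * len(B))
```

Since the Schur complement commutes with ⊗ I_d, the matrix valued Dirichlet to Neumann map is Λ_s ⊗ I_d, i.e. d decoupled copies of the scalar problem, as used in lemma 5.1.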

5.2. Matrix valued Schrödinger inverse problem.
Given a graph G = (V, E) with boundary, the inverse problem we consider here is to find the symmetric matrix valued Schrödinger potential q ∈ (C^{d×d})^V from the Dirichlet to Neumann map Λ_{σ,q}, where the conductivity σ ∈ (C^{d×d})^E is symmetric with Re σ(e) ≻ 0 and is assumed to be known. This problem has the structure of the abstract inverse problem of section 4, as we see next.
• The admissible set R is an open convex set in (C^{d×d})^V which can be identified with an open convex subset of C^{d²|V|}. The forward map is the map that to q ∈ R associates the Dirichlet to Neumann map Λ_{σ,q} ∈ C^{d|B|×d|B|}. This map is given by (17) and is well defined for q ∈ R because of theorem 2.1.
• The boundary/interior identity is given by applying lemma 2.4 with σ_i = σ, i = 1, 2:

f^T (Λ_{σ,q_1} − Λ_{σ,q_2}) g = Σ_{i∈V} [u_2(i)]^T (q_1 − q_2)(i) u_1(i),

where u_1 solves the Dirichlet problem with potential q_1 and boundary data f, and u_2 solves it with potential q_2 and boundary data g. Here we define S_q ∈ C^{d|V|×d|B|} by its action on some f ∈ C^{d|B|}.
• Analyticity assumption. From the proof of theorem 2.1, the solution u to the Dirichlet problem (15) with u_B = f and q ∈ R is determined by u_I = −(L_{II} + diag(q_I))^{-1} L_{IB} f, where we omitted the subscript σ from the graph Laplacian L_σ. Hence the entries of S_q are analytic for q ∈ R.
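The boundary/interior identity for this problem can be checked numerically; the following sketch (our own toy network with d = 2, a fixed conductivity and two symmetric potentials) compares the boundary side with the interior sum over the nodes:

```python
import numpy as np

# Toy network, d = 2: boundary nodes 0,1, interior nodes 2,3; fixed
# symmetric positive definite edge conductivities (our own choices).
d, nV, B, I = 2, 4, [0, 1], [2, 3]
edges = {(0, 2): np.array([[2.0, 1.0], [1.0, 2.0]]),
         (1, 3): np.array([[3.0, 0.0], [0.0, 1.0]]),
         (2, 3): np.array([[1.0, 0.5], [0.5, 1.0]])}

L = np.zeros((d * nV, d * nV))
for (a, b), s in edges.items():
    for i, j, w in [(a, a, s), (b, b, s), (a, b, -s), (b, a, -s)]:
        L[d*i:d*(i+1), d*j:d*(j+1)] += w

def blk(ix):
    return np.concatenate([np.arange(d*i, d*(i+1)) for i in ix])

def operator(q):
    """A = L_sigma + diag(q) with node-wise d x d potential blocks."""
    A = L.copy()
    for i, qi in enumerate(q):
        A[d*i:d*(i+1), d*i:d*(i+1)] += qi
    return A

def dirichlet_solution(q, f):
    A = operator(q)
    u = np.zeros(d * nV)
    u[blk(B)] = f
    u[blk(I)] = -np.linalg.solve(A[np.ix_(blk(I), blk(I))],
                                 A[np.ix_(blk(I), blk(B))] @ f)
    return u

def dtn(q):
    A = operator(q)
    return (A[np.ix_(blk(B), blk(B))]
            - A[np.ix_(blk(B), blk(I))] @ np.linalg.solve(
                A[np.ix_(blk(I), blk(I))], A[np.ix_(blk(I), blk(B))]))

# two symmetric positive semidefinite potentials (q2 = 0)
q1 = [np.diag([0.5, 0.2]), np.diag([0.1, 0.3]),
      np.array([[0.4, 0.1], [0.1, 0.4]]), np.diag([0.2, 0.2])]
q2 = [np.zeros((2, 2))] * nV

f = np.array([1.0, -0.5, 0.2, 0.7])
g = np.array([0.3, 0.8, -0.1, 0.4])

lhs = f @ (dtn(q1) - dtn(q2)) @ g
u1 = dirichlet_solution(q1, f)
u2 = dirichlet_solution(q2, g)
rhs = sum(u2[d*i:d*(i+1)] @ (q1[i] - q2[i]) @ u1[d*i:d*(i+1)]
          for i in range(nV))
```

Because everything here is symmetric, the identity holds exactly (up to roundoff), node potentials at the boundary included.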
with x(e)^T x(e) = I being the r × r identity. By the hypothesis Re σ(e) ⪰ 0 we must have Re λ > 0. We note that the assumptions we make here are more restrictive than those of theorem 2.3, but they suffice for networks of springs, masses and dampers, where r = 1, as we see later in section 6.1.
• We take as admissible set

R = {λ ∈ (C^r)^E | Re λ_i(e) > 0, i = 1, …, r, e ∈ E}.

Clearly R is an open convex set in (C^r)^E, which can be identified with an open convex subset of C^{r|E|}. The forward map associates to λ ∈ R the Dirichlet to Neumann map Λ_{σ(λ),0}, where σ(λ) has eigenvectors x and eigenvalues λ, i.e. σ(λ) satisfies (44). Theorem 2.3 guarantees that this map is well defined for λ ∈ R.
• Let λ_1, λ_2 ∈ R. By lemma 2.4 with σ_i ≡ σ(λ_i) and q_i = 0, i = 1, 2, we get the boundary/interior identity

f^T (Λ_{σ(λ_1),0} − Λ_{σ(λ_2),0}) g = [S_{λ_1} f ⊙ S_{λ_2} g]^T (λ_1 − λ_2),

where ⊙ denotes the Hadamard (componentwise) product. We define the matrix S_λ ∈ C^{r|E|×d|B|} by its action on some f ∈ C^{d|B|}: (S_λ f)(e) = x(e)^T (∇u)(e), e ∈ E, where u solves the Dirichlet problem (15) with boundary data u_B = f, conductivity σ(λ) satisfying (44) and q = 0. Recall that the Dirichlet problem solution is determined up to floppy modes. However from the floppy mode characterization in lemma 3.5, we see that the definition of S_λ is independent of the choice of floppy mode. The bilinear map b : C^{r|E|} × C^{r|E|} → C^{r|E|} is simply the Hadamard product, i.e. b(u, v) = u ⊙ v, and as before we identify (C^r)^E with C^{r|E|}.
• Analyticity assumption. From the proof of theorem 2.3 (see section 3.3.1), a solution u to the Dirichlet problem (15) with boundary data u_B = f, conductivity σ(λ) satisfying (44) and q = 0 is determined by

u_I = −Q (Q^T (L_{σ(λ)})_{II} Q)^{-1} Q^T (L_{σ(λ)})_{IB} f,

where Q is a real matrix such that Q^T Q = I and R(Q) = R((L_σ)_{II}). Since Q depends only on the graph and the (known a priori) eigenvectors x, the entries of u_I are analytic for λ ∈ R. Hence the entries of S_λ must also be analytic for λ ∈ R.
6. Application to networks of springs, masses and dampers. In this particular case the Dirichlet to Neumann map (19) is called the displacement to forces map.
6.2. Networks with springs, masses and dampers. We now consider the case where the displacements depend on time, i.e. the function u : V × R → R^d is defined such that u(i, t) is the displacement about the equilibrium position p(i) of node i ∈ V at time t ∈ R. We use the notation u̇ = du/dt and ü = d²u/dt², and we assume that all nodes have a non-zero mass, which is given by the function m ∈ (0, ∞)^V.
6.2.1. Viscous damping. We consider two kinds of viscous damping. The first is spring damping, which is proportional to the net velocity of a spring and is assumed to act along the direction of the spring at equilibrium, with proportionality constant given by a function c_E ∈ [0, ∞)^E. This corresponds to having a damper in parallel with each spring. The net forces associated with this damping are given by L_µ u̇, where µ ∈ (R^{d×d})^E is defined in (46). The second is node damping, meaning that each node is inside a small cavity containing a viscous fluid and is thus subject to a damping force proportional to the node velocity, with the proportionality constant given by a function c_V ∈ [0, ∞)^V. The forces associated with this kind of damping are diag(q_damp) u̇, where q_damp ∈ (R^{d×d})^V is defined by q_damp(i) = c_V(i)I, for i ∈ V and I being the d × d identity matrix.
6.2.2. Equations of motion in time domain. Putting everything together and applying Newton's second law, we get the equations of motion for a network of springs, masses and dampers:

diag(q_mass) ü + (diag(q_damp) + L_µ) u̇ + L_σ u = f,

where q_mass ∈ (R^{d×d})^V is defined by q_mass(i) = m(i)I for i ∈ V. The function f : V × R → R^d represents any external forces, i.e. f(i, t) is the external force exerted on node i ∈ V at time t. This second order system of ordinary differential equations can be written as

M ü + C u̇ + K u = f, (48)

where M = diag(q_mass) is the mass matrix, C = diag(q_damp) + L_µ is the damping matrix and K = L_σ is the stiffness matrix. We recall that σ is defined in (45) and µ in (46). Now consider the problem of finding the (frequency domain) displacements û_I at the interior nodes knowing the displacements û_B at the boundary nodes and that there are no external forces at the interior nodes (i.e. f̂_I = 0). We immediately see that we have another instance of the Dirichlet problem (15) with complex conductivity σ + iωµ and complex Schrödinger potential −ω²q_mass + iωq_damp. Unfortunately we cannot apply theorem 2.1 directly because we do not have Re σ(e) ≻ 0 or Re(−ω²q_mass(i)) ⪰ 0.
To remedy this, we assume there is always a small amount of damping at the nodes, i.e. c_V ∈ (0, ∞)^V, in a way reminiscent of the limiting absorption principle for the Helmholtz equation. We rewrite the equations of motion (49) by dividing through by iω. Again if f̂_I = 0, this is an instance of the Dirichlet problem (15) with complex conductivity µ + (iω)^{-1}σ and complex Schrödinger potential q_damp + iωq_mass. A positive damping at the nodes guarantees q_damp ≻ 0. Thus the Dirichlet problem admits a unique solution by theorem 2.1. Indeed the condition (L_µ)_{II} ≻ −λ_min(q_damp) always holds in this case because (L_µ)_{II} ⪰ 0. Hence the Dirichlet to Neumann map Λ_{µ+(iω)^{-1}σ, q_damp+iωq_mass} is well defined by (17), and so is the Dirichlet to Neumann map for the original problem, Λ_{σ+iωµ, −ω²q_mass+iωq_damp}, as can be seen from a homogeneity argument. Since the latter map associates the frequency domain displacements to frequency domain forces, we also call it the displacement to forces map.
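A minimal numerical sketch of the frequency domain displacement to forces map (our own toy data, with the time harmonic convention u = exp[iωt]û, node damping only, and the interior degrees of freedom eliminated by a Schur complement):

```python
import numpy as np

# Spring network in d = 2: equilibrium positions p, spring constants,
# masses m, node damping c. Boundary nodes 0,1; interior nodes 2,3.
d, nV, B, I = 2, 4, [0, 1], [2, 3]
p = np.array([[0.0, 0.0], [2.0, 0.0], [0.5, 1.0], [1.5, 1.0]])
springs = {(0, 2): 1.0, (1, 3): 2.0, (2, 3): 1.5, (0, 3): 0.8}
m = np.array([1.0, 1.0, 0.5, 0.5])
c = np.array([0.1, 0.1, 0.1, 0.1])        # positive node damping c_V

# stiffness matrix K = L_sigma with the rank-one weights (45)
K = np.zeros((d * nV, d * nV))
for (a, b), ke in springs.items():
    t = p[a] - p[b]
    sigma_e = ke * np.outer(t, t) / (t @ t)
    for i, j, w in [(a, a, sigma_e), (b, b, sigma_e),
                    (a, b, -sigma_e), (b, a, -sigma_e)]:
        K[d*i:d*(i+1), d*j:d*(j+1)] += w

M = np.kron(np.diag(m), np.eye(d))         # mass matrix
C = np.kron(np.diag(c), np.eye(d))         # node damping matrix

def blk(ix):
    return np.concatenate([np.arange(d*i, d*(i+1)) for i in ix])

def displacement_to_forces(omega):
    """Schur complement of Z(omega) = -omega^2 M + i omega C + K."""
    Z = -omega**2 * M + 1j * omega * C + K
    zbb = Z[np.ix_(blk(B), blk(B))]; zbi = Z[np.ix_(blk(B), blk(I))]
    zib = Z[np.ix_(blk(I), blk(B))]; zii = Z[np.ix_(blk(I), blk(I))]
    return zbb - zbi @ np.linalg.solve(zii, zib)

Lam = displacement_to_forces(2.0)
```

With positive node damping the interior block Z_II has strictly positive definite imaginary part for any real ω ≠ 0, so it is invertible even though K_II alone may be singular (floppy modes); the resulting map is complex symmetric (reciprocity).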

6.3. Spring constant inverse problem: static case. Let us consider the inverse problem of finding the spring constants k ∈ R^E from the static displacement to forces map Λ_{σ(k),0} of a network of springs. We assume the equilibrium positions p ∈ (R^d)^V of the nodes are known. Uniqueness a.e. for this inverse problem can be established using the result in section 5.3 for rank deficient matrix valued conductivities, which we adapt here to this particular problem. Since we are not aware of a physically relevant interpretation of complex valued spring constants in the static case, we take spring constants in the admissible set R = (0, ∞)^E.

Figure 2. The Laplacian for a scalar conductivity on the cylindrical graph C ≡ P_5 × P_3 can be seen as a matrix valued Schrödinger operator on the graph P_5, as explained in example 2. To fix ideas, s_4 ∈ R^2 represents the conductivities of C within the 4-th group (in red) and defines the matrix valued Schrödinger potential q(4). The conductivity s_{2,3} ∈ R^3 represents the conductivities of the 3 edges between the 2nd and 3rd groups and is used to define the matrix valued conductivity σ({2, 3}).

Lemma 3.4. Let A, B ∈ R^{n×n} be symmetric with A ≻ 0. Then the matrix M = A + iB is invertible.
Since z_B = 0 we have z^* L_σ z = z_I^* (L_σ)_{II} z_I = 0. But then 0 = z_I^* (L_{Re σ})_{II} z_I + i z_I^* (L_{Im σ})_{II} z_I implies that z_I^* (L_{Re σ})_{II} z_I = 0. Because of (29) we have L_{Re σ} ⪰ 0 and also (L_{Re σ})_{II} ⪰ 0. Therefore z_I ∈ N((L_{Re σ})_{II}).
(iii) =⇒ (ii). Now let z be such that z_B = 0 and z_I ∈ N((L_{Re σ})_{II}). Again since z_B = 0, we have 0 = z_I^* (L_{Re σ})_{II} z_I = z^* L_{Re σ} z. By (29) we get ‖diag((Re σ)^{1/2}) ∇z‖² = 0, which readily implies (30). Next we continue with a technical result, which is a slight generalization of the elastic network result [18, Lemma 1].
Lemma 3.6. Let σ be a conductivity with Re σ(e) ⪰ 0 and satisfying the inclusion N(Re σ(e)) ⊂ N(Im σ(e)) for e ∈ E. Assume the subgraph induced by the interior nodes is connected. Then we have the following inclusions:
(i) N((L_{Re σ})_{II}) ⊂ N((L_σ)_{II}), (ii) N((L_σ)_{II}) ⊂ N((L_σ)_{BI}), and (iii) R((L_σ)_{IB}) ⊂ R((L_σ)_{II}).

Lemma 3.7. Assume the hypotheses of lemma 3.6 hold. Then floppy modes correspond to zero boundary measurements.
Proof. We need to show that if z ∈ (C^d)^V is a floppy mode, then we have zero Neumann boundary data, i.e. (L_σ z)_B = 0. Since z is a floppy mode, we see from lemma 3.5 that z_B = 0 and z_I ∈ N((L_{Re σ})_{II}). By inclusions (i) and (ii) of lemma 3.6 we deduce that z_I ∈ N((L_σ)_{II}) ⊂ N((L_σ)_{BI}). We conclude by noticing that (L_σ z)_B = (L_σ)_{BB} z_B + (L_σ)_{BI} z_I = 0.

3.3.1. Proof of theorem 2.3. Proof. If u is a solution to the (σ, 0) Dirichlet problem with boundary condition g ∈ (C^d)^B, then u_B = g and (L_σ)_{IB} g + (L_σ)_{II} u_I = 0.

S_q f = u, where u solves the Dirichlet problem (15) with u_B = f. The bilinear map b : C^{d|V|} × C^{d|V|} → C^{|V|d²} is defined by b(u, v) = u ∘ v, where the block-wise outer product is defined in (1) and we implicitly identify (C^d)^V with C^{d|V|} and (C^{d×d})^V with C^{|V|d²}.

6.1. Spring networks. Consider a graph G = (V, E) with boundary B and let p ∈ (R^d)^V be a function representing the equilibrium position of each node in dimension d = 2 or 3. Each edge e ∈ E represents a spring with positive spring constant given by the function k ∈ (0, ∞)^E. Let u ∈ (R^d)^V denote the displacements of the nodes with respect to the equilibrium position. The quantity ∇u ∈ (R^d)^E is the net spring displacement. By Hooke's law, the force exerted by a spring is proportional to the net spring displacement, with proportionality constant k. For infinitesimally small displacements, the force exerted by spring {i, j} ∈ E is proportional to the projection of the net displacement of spring {i, j} on the direction p(i) − p(j). In other words, the forces are diag(σ)∇u, where σ ∈ (R^{d×d})^E is the positive semidefinite conductivity

σ({i, j}) = k({i, j}) [p(i) − p(j)][p(i) − p(j)]^T / ([p(i) − p(j)]^T [p(i) − p(j)]), for {i, j} ∈ E. (45)

Now assume we displace the boundary nodes by an amount g ∈ (R^d)^B. If the interior nodes are left to move freely, the net forces at the interior nodes should be zero; this condition is equivalent to (L_σ u)_I = 0. Hence finding the displacements in a spring network arising from (static) boundary displacements is the same as solving the Dirichlet problem (15) with the particular matrix valued conductivity (45) and zero Schrödinger potential. Using theorem 2.3, we see that the interior displacements are uniquely determined by the boundary displacements (up to floppy modes) and that the Dirichlet to Neumann map Λ_σ is given by (19). The spring damping weight µ ∈ (R^{d×d})^E appearing in section 6.2.1 is defined analogously by

µ({i, j}) = c_E({i, j}) [p(i) − p(j)][p(i) − p(j)]^T / ([p(i) − p(j)]^T [p(i) − p(j)]), for {i, j} ∈ E. (46)
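Two properties of (45) are easy to check numerically: each σ({i, j}) is positive semidefinite of rank one, and the resulting stiffness matrix L_σ annihilates infinitesimal rigid motions (in d = 2, translations and the infinitesimal rotation u(i) = Jp(i) with J skew symmetric). A sketch with our own toy data:

```python
import numpy as np

# Four nodes in the plane with equilibrium positions p and spring constants k.
d, nV = 2, 4
p = np.array([[0.0, 0.0], [2.0, 0.0], [0.5, 1.0], [1.5, 1.0]])
k = {(0, 2): 1.0, (1, 3): 2.0, (2, 3): 1.5, (0, 3): 0.8}

Lsig = np.zeros((d * nV, d * nV))
sigmas = {}
for (a, b), ke in k.items():
    t = p[a] - p[b]
    s = ke * np.outer(t, t) / (t @ t)             # conductivity (45)
    sigmas[(a, b)] = s
    for i, j, w in [(a, a, s), (b, b, s), (a, b, -s), (b, a, -s)]:
        Lsig[d*i:d*(i+1), d*j:d*(j+1)] += w

# infinitesimal rigid motions of the whole network
J = np.array([[0.0, -1.0], [1.0, 0.0]])           # skew symmetric
translation = np.tile(np.array([0.3, -0.7]), nV)
rotation = (p @ J.T).flatten()                    # u(i) = J p(i)
```

The rotation lies in the kernel because t^T J t = 0 for every edge direction t, so σ({i, j}) J (p(i) − p(j)) = 0 exactly; such kernel vectors are the floppy modes discussed in section 3.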

6.2.3. Frequency domain formulation and the Dirichlet problem. For a time harmonic displacement u(i, t) = exp[iωt] û(i, ω), the equations of motion (48) become

(−ω²M + iωC + K) û = f̂. (49)

The set Z ≡ {(p_1, p_2) ∈ R × R | W(p_1, p_2) is not injective} is of measure zero. Since the distribution is absolutely continuous with respect to the Lebesgue measure, this also means µ_P(Z) = 0. Roughly speaking, if we choose two admissible parameters p_1, p_2 at random, we have W(p_1, p_2) injective almost surely. Thus we can tell p_1 and p_2 apart from the data Λ_{p_1}, Λ_{p_2} almost surely.
5.3. Rank deficient matrix valued conductivity inverse problem. Here we consider the inverse problem of recovering a conductivity σ ∈ (C^{d×d})^E that is rank deficient from its Dirichlet to Neumann map Λ_{σ,0}. To simplify the discussion, we assume that the conductivities of all edges have the same rank r ≥ 1, i.e. rank σ(e) = r for all e ∈ E. The conductivities we consider here satisfy the inclusion assumption of theorem 2.3, namely that N(Re σ(e)) ⊂ N(Im σ(e)) for all e ∈ E. Moreover we assume that Re σ(e) and Im σ(e) commute, so that they can be decomposed on the same basis of real eigenvectors. Therefore the eigenvectors of σ(e) are real and we may define x ∈ (R^{d×r})^E and λ ∈ (C^r)^E to write the eigendecomposition of each of the conductivities, i.e.

σ(e) = x(e) diag(λ(e)) x(e)^T, e ∈ E, (44)