Electrical Networks with Prescribed Current and Applications to Random Walks on Graphs

We study the inverse problem of determining the conductivity matrix of an electrical network from the prescribed knowledge of the magnitude of the induced current along the edges coupled with the imposed voltage or injected current on the boundary nodes. This problem leads to a weighted $l^1$ minimization problem for the corresponding voltage potential. We also investigate the problem of determining the transition probabilities of random walks on graphs from the prescribed net number of times the walker passes along the edges of the graph. We further show that a mass preserving flow $J=(J_{i,j})$ on a network can be uniquely recovered from the knowledge of $|J|=(|J_{i,j}|)$ and the flux of the flow on the boundary nodes, where $J_{i,j}$ is the flow from node $i$ to node $j$ and $J_{i,j}=-J_{j,i}$. Convergent numerical algorithms for solving such problems are also presented.

1. Introduction. Let $G = (V, E)$ be a simple, undirected, weighted graph with $n$ vertices. We can identify $G$ with an electrical network by placing a resistor with resistance $R_{ij}$ between every two vertices $i$ and $j$, for $1 \le i, j \le n$ with $i \ne j$. We assign the weight $\sigma_{ij} = \frac{1}{R_{ij}}$ to each edge $E_{ij}$, and let $\sigma_{ij} = 0$ if $i$ and $j$ are not connected. Suppose a voltage is applied to a subset of the vertices, denoted by $\partial V$ and called the boundary of $V$; then a current $J = (J_{ij})_{n \times n}$ will be induced on the edges of the graph, where $J_{ij}$ is the current flowing from vertex $i$ to vertex $j$. In particular, $J_{ij} = -J_{ji}$, and if the current flows from $i$ to $j$, then $J_{ij} > 0$. We will also assume that $J_{ij} = 0$ if the vertices $i$ and $j$ are not connected by an edge, and that $J_{ii} = 0$. Note that $V = \partial V \cup \mathrm{int}(V) = \{1, 2, \dots, n\}$. We will view the voltage potential on $V$ as a vector $v = (v_1, v_2, \dots, v_n) \in \mathbb{R}^n$, where $v_i$ is the voltage potential at vertex $i$. We will also denote the imposed voltage potential on the boundary nodes by a function $f : \partial V \to \mathbb{R}$. By Kirchhoff's and Ohm's laws,
(1) $\sum_{j=1}^{n} \sigma_{ij}(v_i - v_j) = 0$ for $i \in \mathrm{int}(V)$,
where $\mathrm{int}(V) = V \setminus \partial V$ are the interior nodes, and $v = f$ on $\partial V$ is the imposed voltage on the boundary nodes (Dirichlet boundary condition). Assume $((\sigma_{ij})_{n \times n}, f)$ is given on $E \times \partial V$. Then (1) can be written as a system of $m = |\mathrm{int}(V)|$ linear equations with $m$ unknowns, i.e.
(2) $A_D v = b$,
where $v$ is an $m$-dimensional column vector containing the unknown voltage values at the interior nodes, $A_D$ is an $m \times m$ non-singular matrix (see Proposition 1 below) depending on the conductivities, and $b$ is an $m$-dimensional column vector depending on the conductivities and the known voltage at the boundary.
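With the conductivities known, the Dirichlet forward problem (2) is a small linear solve. The sketch below is ours in Python (the paper's simulations use MATLAB); the function and variable names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def solve_dirichlet(sigma, boundary, f):
    """Solve (1)-(2): sum_j sigma_ij (v_i - v_j) = 0 at interior nodes,
    with v = f imposed on the boundary nodes."""
    n = sigma.shape[0]
    interior = [i for i in range(n) if i not in boundary]
    v = np.zeros(n)
    v[boundary] = f
    # A_D carries the weighted degrees on the diagonal and -sigma_ij off it
    A = np.diag(sigma.sum(axis=1)[interior]) - sigma[np.ix_(interior, interior)]
    b = sigma[np.ix_(interior, boundary)] @ f
    v[interior] = np.linalg.solve(A, b)
    return v

# Path 0-1-2 with unit conductances and boundary voltages v0 = 1, v2 = 0:
sigma = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
v = solve_dirichlet(sigma, [0, 2], np.array([1.0, 0.0]))
# The interior node is the weighted average of its neighbours: v[1] = 0.5
```

The interior value illustrates the averaging property used in the proof of Proposition 1 below.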
In particular, the forward problem (1) always has a unique solution, which is indeed the voltage potential associated to the conductivity problem on the network. On the other hand, if a current $0 \ne g \in \mathbb{R}^{|\partial V|}$ is injected into the network on a subset of vertices $\partial V \subset V$ (Neumann boundary condition), then we necessarily have the compatibility condition $\sum_{i \in \partial V} g_i = 0$. The above equations can be written as
(5) $A_N v = b$,
where $A_N$ is an $n \times n$ matrix depending on the conductivity $\sigma = (\sigma_{ij})_{n \times n}$, and $b$ is an $n$-dimensional column vector depending on the injected current on the boundary $\partial V$. The system (5) has a unique solution up to adding a constant (see Propositions 13 and 14 below), and the solution of (5) is the voltage potential on the vertices of the graph. The matrix $A_N$ is in fact the well-known graph Laplacian of a weighted undirected graph.
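The Neumann forward problem can be sketched the same way: assemble the weighted graph Laplacian $A_N$ and solve up to an additive constant, e.g. via a pseudoinverse. Again a Python illustration of ours, not the paper's implementation.

```python
import numpy as np

def solve_neumann(sigma, g):
    """Solve A_N v = g, with A_N the weighted graph Laplacian.
    g must sum to zero; the solution is unique only up to a constant,
    and pinv returns the zero-mean representative."""
    A_N = np.diag(sigma.sum(axis=1)) - sigma
    assert abs(g.sum()) < 1e-12, "injected current must be balanced"
    return np.linalg.pinv(A_N) @ g

# Unit current injected at node 0 and extracted at node 2 on the path 0-1-2:
sigma = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
v = solve_neumann(sigma, np.array([1.0, 0.0, -1.0]))
# Two unit resistors in series: the potential drop v[0] - v[2] is 2
```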
As described above, the forward problems always have unique solutions up to a constant and can be easily solved via a linear system of equations. In this paper we are interested in the inverse problem of determining the conductivity matrix of an electrical network from the knowledge of the magnitude of the induced current along the edges of the network and Dirichlet or Neumann boundary conditions. This problem can also be understood as a design problem, where one aims to design an electrical network that induces a prescribed current along its edges when a voltage $f \in \mathbb{R}^{|\partial V|}$ is applied to the boundary nodes $\partial V$, or when a current $g \in \mathbb{R}^{|\partial V|}$ is injected on $\partial V$. These inverse problems are in the spirit of Current Density Imaging (CDI) and Current Density Impedance Imaging (CDII) in dimensions $n \ge 2$, which have been actively studied in recent years because of their potential applications in medical imaging; see [17, 19-24, 26-34]. In dimension $n = 3$ the induced current inside the conductive body $\Omega$ can be measured by Magnetic Resonance Imaging (MRI), see [17, 21].
Random walks arise in many mathematical and physical models in biology, economics, computer and social networks, epidemiology, and statistical mechanics. Such models have been used to model infection on graphs, such as the spread of epidemics and rumours with mobile agents, see [2,7], voting patterns [4,40], and stock market prices [11]. Random walk models have also proven to be a simple yet powerful method for extracting information from computer and social networks, such as identification of reputable entities in a network. For instance, Google's PageRank algorithm uses random walks to rank websites in its search engine results, see [18,35], and the survey papers [25] and [36] for applications of random walks on graphs in computer networks. Also see [39] for a wide variety of applications of random walks on graphs in statistical mechanics. The inverse problem we investigate here translates to intriguing questions in various contexts where a random walk model on graphs is utilized. The results could also be useful in the design of effective random walk models for achieving prescribed goals with random steps in a network. For instance, one can think of designing a random walk model with a prescribed high net number of times the walker passes along certain edges of the graph.
To the authors' best knowledge, the natural inverse problem considered in this paper has not been studied elsewhere. In [5] and [3], the authors investigate the problems of recovering the conductivities of the edges from measurements of voltages at the boundary vertices, and from measurements of the voltage, current, and conductivity on the boundary, respectively. In [5] the authors proved injectivity of this inverse problem for critical, circular, and planar graphs and provided an explicit reconstruction method. Under the assumption of monotonicity of conductivities, partial uniqueness results are established in [3]. While the general theory of inverse problems on graphs is a rich field of study with applications in various disciplines, the above results are most closely related to this work.
There is a close connection between electrical networks and random walks on graphs (see [6]). In Section 5 we exploit this connection and apply our results on electrical networks to study the inverse problem of determining transition probabilities of random walk models from the net number of times the walker passes along the edges of the graph. We will also discuss a potential application of our results in public-key encryption, a seemingly unrelated problem.
The paper is organized as follows. In Section 2 we study the problem of determining the conductivity matrix of an electrical network from the knowledge of the magnitude of the induced current with Dirichlet boundary condition, and in Section 3 we study this problem with Neumann boundary data. In Section 4 we present a numerical algorithm for finding minimizers of the $l^1$ minimization problems we obtain in Sections 2 and 3. In Section 5 the connection between random walks and electrical networks is discussed, and we apply our results on electrical networks to the inverse problem of determining transition probabilities from the net number of times a random walker passes along the edges of the graph.

2. Dirichlet Boundary Condition.
In this section we study the inverse problem of determining the conductivity matrix $\sigma = (\sigma_{ij})_{n \times n}$ from the knowledge of its induced current $J = (J_{ij})_{n \times n}$ on $E$ and the imposed voltage potential $f$ on $\partial V$ (Dirichlet boundary conditions). Let $G = (V, E)$ be an undirected, simple, connected graph with $n$ vertices, and suppose a voltage is applied to some subset of the vertices, inducing the current $J = (J_{ij})_{n \times n}$ on $E$. Throughout the paper $|J|$ denotes the matrix $|J| := (|J_{ij}|)_{n \times n}$; we will refer to $|J|$ as a measurement matrix.
We first show that the forward problem has a unique solution, i.e. $A_D$ is non-singular. One can find a proof in [5]; we present a brief proof for the sake of completeness.
Proposition 1. The matrix $A_D$ is non-singular.
Proof. For every $i \in \mathrm{int}(V)$ it follows from (1) that $v_i$ is the weighted average of the voltage potential at its neighboring nodes, i.e.
$$v_i = \frac{\sum_{j} \sigma_{ij} v_j}{\sum_{j} \sigma_{ij}}.$$
Consequently v satisfies the strong maximum principle in the sense that if v attains its maximum or minimum on an interior node, then v must be constant on V . In particular, v attains its minimum and maximum on the boundary ∂V .
Definition 3. Given $f : \partial V \to \mathbb{R}$ and a measurement matrix $a = (a_{ij})_{n \times n}$ with $a_{ij} \in [0, \infty)$, we say that a symmetric matrix $\sigma = (\sigma_{ij})_{n \times n}$ with $\sigma_{ij} \in [0, \infty]$ is a conductivity matrix associated to the data $(f, a)$ if there exists a function $v : \{1, 2, \dots, n\} \to \mathbb{R}$ with $v = f$ on $\partial V$ and a matrix $J = (J_{ij})_{n \times n}$ such that $J_{ij} = \sigma_{ij}(v_i - v_j)$ and $|J_{ij}| = a_{ij}$. When $a_{ij} \ne 0$ and $v_i = v_j$, we formally define $\sigma_{ij} = \infty$ and say that the edge between nodes $i$ and $j$ is a perfect conductor. We shall also refer to the function $v$ as a voltage potential and denote the set of all voltage potentials corresponding to the data $(f, a)$ by $V_{(f,a)}$.
For any measurement matrix $a = (a_{ij})_{n \times n}$, define the function $I : \mathbb{R}^n \to \mathbb{R}$ by
(7) $I(u) = \frac{1}{2}\sum_{i,j=1}^{n} a_{ij}\,|u_i - u_j|,$
and for $f \in \mathbb{R}^{|\partial V|}$ consider the minimization problem
(8) $\min\{I(u) : u \in \mathbb{R}^n \text{ and } u|_{\partial V} = f\}.$
We shall prove that u ∈ V (f,a) if and only if it is a minimizer of the least gradient problem. Let us first study the dual of the minimization problem above.
2.1. The Dual problem. Here we discuss the dual of the least gradient problem (8) and study the connection between these two problems.
Let $H(V)$ be the set of all real valued functions on the vertices. We shall view a function $u \in H(V)$ as a vector in $\mathbb{R}^n$. Also let $H(E)$ be the space of all functions on $E$, i.e. the space of all $n \times n$ matrices $b = (b_{ij})$, where $b_{ij}$ denotes the value of the function on the edge from vertex $i$ to $j$, with the additional convention that $b_{ij} = 0$ if the edge from $i$ to $j$ is not in $E$, and $b_{ii} = 0$.
Definition 4. Let $u, v \in H(V)$ and $a, b \in H(E)$. Then we define the inner products
$$\langle u, v \rangle := \sum_{i=1}^{n} u_i v_i \quad \text{and} \quad \langle a, b \rangle := \sum_{i,j=1}^{n} a_{ij} b_{ij}$$
on $H(V) \times H(V)$ and $H(E) \times H(E)$, respectively. The spaces $H(V)$ and $H(E)$ equipped with the above inner products are Hilbert spaces.
Next we define two linear operators D : H(V ) → H(E) and div : H(E) → H(V ) which play crucial roles in our arguments.
Definition 5. For $u \in H(V)$ we define $Du \in H(E)$ by $(Du)_{ij} = u_i - u_j$ if the edge connecting $i$ to $j$ is in $E$, and $0$ otherwise. Also for $b \in H(E)$ we define $\mathrm{div}\, b \in H(V)$ by
$$(\mathrm{div}\, b)_i = \sum_{j=1}^{n} (b_{ji} - b_{ij}).$$
Observe that if $b \in H(E)$ is anti-symmetric, that is $b_{ij} = -b_{ji}$ for all $1 \le i, j \le n$, then the divergence is simply $-2\sum_{j} b_{ij}$. We shall refer to $D$ and $\mathrm{div}$ as the gradient and divergence operators, respectively, since they play the role in our setting of the standard gradient and divergence operators on $\mathbb{R}^n$, $n \ge 2$. Note that the definitions of the gradient and divergence given here do not depend on the weights (conductivities) of the graph, as they normally would when defining these operators on a weighted graph. Since the conductivities are unknown in the inverse problems we consider in this paper, these definitions are desirable. Let us first show that $-\mathrm{div}$ is the adjoint of $D$.
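Under our reading of Definition 5 (the signs $(Du)_{ij} = u_i - u_j$ and $(\mathrm{div}\, b)_i = \sum_j (b_{ji} - b_{ij})$ are our reconstruction), both operators and the anti-symmetric identity noted above can be checked numerically, as can the adjoint relation of Proposition 6. A small sketch on a complete graph, with names of ours:

```python
import numpy as np

n = 4  # complete graph on 4 vertices, so every off-diagonal pair is an edge

def grad(u):
    """(Du)_ij = u_i - u_j."""
    return u[:, None] - u[None, :]

def div(b):
    """(div b)_i = sum_j (b_ji - b_ij)."""
    return (b.T - b).sum(axis=1)

rng = np.random.default_rng(1)
u = rng.standard_normal(n)
b = rng.standard_normal((n, n))
np.fill_diagonal(b, 0.0)          # b_ii = 0, the H(E) convention

b_anti = b - b.T                  # an anti-symmetric element of H(E)
reduced = -2.0 * b_anti.sum(axis=1)   # divergence reduces to -2 * row sums
# Adjoint relation <Du, b> = -<u, div b>, checked numerically:
lhs, rhs = np.sum(grad(u) * b), -u @ div(b)
```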
Proposition 6. Let $u \in H(V)$ and $b \in H(E)$. Then
$$\langle Du, b \rangle = -\langle u, \mathrm{div}\, b \rangle.$$
Proof. Let $u \in H(V)$ and $b \in H(E)$. Then
$$\langle Du, b \rangle = \sum_{i,j=1}^{n} (u_i - u_j)\, b_{ij} = \sum_{i=1}^{n} u_i \sum_{j=1}^{n} (b_{ij} - b_{ji}) = -\langle u, \mathrm{div}\, b \rangle.$$
Let $f \in \mathbb{R}^{|\partial V|}$ and define $H_f := \{u \in H(V) : u|_{\partial V} = f\}$. For $a \in H(E)$ we take $a \ge 0$ to mean that every entry is non-negative. Then for $0 \le a \in H(E)$ and $f \in \mathbb{R}^{|\partial V|}$, the least gradient problem (8) can be written as
(12) $\min\{F(Du) : u \in H_f\},$
where $F(p) := \frac{1}{2}\sum_{i,j=1}^{n} a_{ij}|p_{ij}|$, and where we use the notation $H_0(V)$ for the space of functions on $V$ which are equal to zero on $\partial V$. Then we can equivalently write the primal problem (12) as
(13) $\inf_{u \in H(V)} \{F(Du) + G(u)\},$
where $G$ is the indicator function of $H_f$. By Rockafellar-Fenchel duality (see [9]), this problem admits a dual problem, which can be expressed as
(15) $\sup_{b \in H(E)} \{-F^*(b) - G^*(\mathrm{div}\, b)\},$
where $F^*$ and $G^*$ denote the convex conjugates of $F$ and $G$, respectively. It is easy to see that $G^*(w) = \sum_{i \in \partial V} f_i w_i$ if $w_i = 0$ for all $i \in \mathrm{int}(V)$, and $G^*(w) = +\infty$ otherwise. Next we compute the convex conjugate of $F$.
Thus the dual problem (15) can be written as (D). Since the values of $u$ are fixed on $\partial V$, one can show that any minimizing sequence of the primal problem is uniformly bounded. Hence a convergent subsequence exists, and a minimizer of the primal problem (P) always exists. On the other hand, it follows from Theorem III.4.1 in [9] that the dual problem (D) also has a solution. Indeed, since $F$ is continuous at $p = 0$, condition (4.8) in the statement of Theorem III.4.1 in [9] is satisfied. The weighted $l^1$ minimization problem (8) does not have a unique minimizer, and thus the conductivity inducing the current $J$ on $E$ is not unique. However, we can characterize the non-uniqueness.
Theorem 8. The infimum of the primal problem (P) is equal to the supremum of the dual problem (D). Moreover, the dual problem has an optimal solution $b$, and $J = -2b$ satisfies (17) and (18) for every minimizer $v$ of (8). Conversely, if $u \in H_f$ and equations (17) and (18) hold, then $u$ is a minimizer of (8).
Proof. A solution $b$ to the dual problem always exists, and the infimum of the primal problem (P) is equal to the supremum of the dual problem (D) by Theorem III.4.1 in [9], as discussed above. Let $v$ be a minimizer of (8). Then the inequalities in (19) are indeed equalities, and therefore if we let $J = -2b$ we see that (17) and (18) hold. It is not hard to see from the above computations that the converse also holds.
Corollary 9. If u and v are two arbitrary minimizers of (8), then

2.2. Voltage Potentials Have Minimum Energy.
We are now ready to prove the following theorem.
Theorem 10. Let f be a function on ∂V and a be a measurement matrix. Then v ∈ V (f,a) if and only if it is a minimizer of the least gradient problem (8).
Proof. Suppose $v \in V_{(f,a)}$ and let $J$ be the corresponding current on $E$. Then
$$I(v) = \frac{1}{2}\sum_{i,j} |J_{ij}|\,|v_i - v_j| = \frac{1}{2}\sum_{i,j} J_{ij}(v_i - v_j) = \sum_{i \in \partial V} f_i J_i,$$
where $J_i = \sum_j J_{ij}$ is the net current leaving node $i$. Therefore the minimum of the least gradient problem (8) is equal to $\sum_{i \in \partial V} f_i J_i$. Moreover, the minimum is achieved for every $v \in V_{(f,|J|)}$. Now suppose $v$ is a minimizer of problem (8), let $b$ be a solution of the dual problem (D), and let $J = -2b$. Then by Theorem 8, equations (17) and (18) hold. Thus $v \in V_{(f,a)}$ and the proof is complete.
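The energy identity in this proof is easy to verify numerically: for the voltage potential $v$ of a known network, the weighted $l^1$ energy with weights $a = |J|$ equals the boundary pairing $\sum_{i \in \partial V} f_i J_i$. A self-contained sketch of ours (the $\frac{1}{2}$ normalization of $I$ follows our reading of (7)):

```python
import numpy as np

# A 4-node network with boundary {0, 3} and imposed voltages f = (1, 0)
sigma = np.array([[0., 2, 1, 0],
                  [2, 0, 0, 1],
                  [1, 0, 0, 3],
                  [0, 1, 3, 0]])
boundary, interior = [0, 3], [1, 2]
f = np.array([1.0, 0.0])

# Solve the Dirichlet forward problem for the voltage potential v
v = np.zeros(4)
v[boundary] = f
A = np.diag(sigma.sum(axis=1)[interior]) - sigma[np.ix_(interior, interior)]
v[interior] = np.linalg.solve(A, sigma[np.ix_(interior, boundary)] @ f)

J = sigma * (v[:, None] - v[None, :])   # Ohm's law: J_ij = sigma_ij (v_i - v_j)
energy = 0.5 * np.sum(np.abs(J) * np.abs(v[:, None] - v[None, :]))
flux = sum(f[k] * J[i].sum() for k, i in enumerate(boundary))
# energy == flux, matching the identity in the proof above
```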
Remark 1. Note that every minimizer $v$ of (8) uniquely determines a conductivity matrix $\sigma$. Corollary 9 indicates that the direction of the flow of the current along each edge is unique, despite the multiplicity of minimizers of (8). Indeed, if two conductivity matrices $\sigma^1$ and $\sigma^2$ with $0 \le \sigma^1_{ij}, \sigma^2_{ij} < \infty$ induce the currents $J^1$ and $J^2$ on a network when the voltage $f$ is imposed on $\partial V$, and $|J^1| = |J^2|$, then $J^1 = J^2$. This is a counter-intuitive result.

2.3. Multiple Measurements.
Suppose we have two data sets (f 1 , a 1 ) and (f 2 , a 2 ), and would like to find a conductivity matrix σ inducing the currents with magnitudes a 1 and a 2 , when the voltage potentials f 1 and f 2 are imposed on the boundary vertices ∂V 1 and ∂V 2 , respectively.
Let $I_1$ and $I_2$ be defined by (7) for $a_1$ and $a_2$, respectively, and for $u = (u^1, u^2) \in \mathbb{R}^n \times \mathbb{R}^n$ define the functional $\Phi$ and the associated minimization problem (23). It is easy to see that (23) always has a minimizer.
Theorem 11. Let $(u^1, u^2)$ be a minimizer of (23).
1. If there exists a conductivity matrix $\sigma$ which induces the current $J^i$ with $|J^i| = a_i$ when the voltage potential $f_i$ is imposed on the boundary, denoted $\partial_i V$, $i = 1, 2$, then $\Phi(u^1, u^2) = 0$.
2. Conversely, if $\Phi(u^1, u^2) = 0$, then there exists a conductivity matrix $\sigma$ inducing currents $J^i$ with $|J^i| = a_i$ when $f_i$ is imposed on $\partial_i V$, $i = 1, 2$.
Proof. (1) Suppose there exists a conductivity matrix $\sigma$ producing the data $(f_1, a_1)$ and $(f_2, a_2)$. It follows directly from Theorem 10 that the set of minimizers of (23) is equal to $V_{(f_1, a_1)} \times V_{(f_2, a_2)}$, so the first statement follows.
(2) Suppose $\Phi(u^1, u^2) = 0$. Then $u^1$ and $u^2$ minimize $I_1$ and $I_2$ over the appropriate spaces, and so by Theorem 10, $u^1 \in V_{(f_1, a_1)}$ and $u^2 \in V_{(f_2, a_2)}$; thus they each have corresponding conductivity matrices $\sigma^1$ and $\sigma^2$ that generate the currents $J^1$ and $J^2$, respectively. However, $\Phi(u^1, u^2) = 0$ implies that these conductivities are in fact equal.
One can similarly formulate and prove the analogous theorem for any finite number of measurements.

3. Neumann Boundary Condition.
Let $G = (V, E)$ be an undirected, simple, connected graph with $n$ vertices, and suppose the current $0 \ne g \in \mathbb{R}^{|\partial V|}$ is injected into a subset $\partial V$ of $V$, regarded as the boundary of $V$, inducing the current $J = (J_{ij})$ on $E$. Then $g$ should satisfy the compatibility assumption
(25) $\sum_{i \in \partial V} g_i = 0.$
We will again denote $|J| := (|J_{ij}|)_{n \times n}$ and refer to $|J|$ as a measurement matrix. The following proposition characterizes solutions of the forward problem (4).
Proposition 13. The kernel of $A_N$ consists of the constant vectors.
Proof. Suppose $A_N w = 0$ for some $w \in \mathbb{R}^n$. Then it follows from (4) that
$$0 = \langle A_N w, w \rangle = \frac{1}{2}\sum_{i,j=1}^{n} \sigma_{ij}(w_i - w_j)^2.$$
Hence $w_i = w_j$ for all $i$ and $j$ connected by an edge. Since $G$ is connected, the proof is complete. In particular, under the compatibility assumption (25), the forward problem (4) has a unique solution up to adding a constant. The following is the analog to Definition 3.
Definition 15. Given $0 \ne g : \partial V \to \mathbb{R}$ satisfying $\sum_{i \in \partial V} g_i = 0$ and a measurement matrix $a = (a_{ij})_{n \times n}$ with $a_{ij} \in [0, \infty)$ for all $1 \le i, j \le n$ and $a_{ij} = 0$ when $i = j$ or $E_{ij} \notin E$, we say that a symmetric matrix $\sigma = (\sigma_{ij})_{n \times n}$ with $\sigma_{ij} \in [0, \infty]$ is a conductivity matrix associated to the data $(g, a)$ if there exists a function $v : \{1, 2, \dots, n\} \to \mathbb{R}$ and a matrix $J = (J_{ij})_{n \times n}$ such that $J_{ij} = \sigma_{ij}(v_i - v_j)$, $|J_{ij}| = a_{ij}$, $\sum_{j=1}^{n} J_{ij} = g_i$ for $i \in \partial V$, and $\sum_{j=1}^{n} J_{ij} = 0$ for $i \in \mathrm{int}(V)$. When $a_{ij} \ne 0$ and $v_i = v_j$, we formally define $\sigma_{ij} = \infty$ and say that the edge between nodes $i$ and $j$ is a perfect conductor. We shall also refer to the function $v$ as a voltage potential and denote the set of all voltage potentials corresponding to the data $(g, a)$ by $V_{(g,a)}$.
For a measurement matrix $a = (a_{ij})_{n \times n}$, define the function $I : \mathbb{R}^n \to \mathbb{R}$ by
(26) $I(u) = \frac{1}{2}\sum_{i,j=1}^{n} a_{ij}\,|u_i - u_j|.$
Also, for $g \in \mathbb{R}^{|\partial V|}$ satisfying (25), define the admissible set $M_0 \subset H(V)$. We shall prove that the voltage potential is a minimizer of the $l^1$ minimization problem
(27) $\min\{I(u) : u \in M_0\}.$
Let us first study the dual of this problem.
3.1. The Dual problem. In this section we discuss the dual of the least gradient problem (27) and study its connection to the primal problem. Let $0 \ne g \in \mathbb{R}^{|\partial V|}$ satisfy (25), and choose $u_g \in H(V)$ such that $\sum_{i \in \partial V} (u_g)_i\, g_i = 1$.
Then we can equivalently write the primal problem (27) in the form (P$_N$). As before, this problem admits a dual problem (30), which we now derive.

From Lemma 7 we have $F^*(b) = 0$ if $|b| \le \frac{1}{2}a$, and $F^*(b) = +\infty$ otherwise.
Next we compute G * . Proof. First note that [16], and the result follows.
Therefore the dual problem (30) can be written as (D$_N$), where $D = \{b \in B : |b| \le \frac{1}{2} a\}$. Similar to before, one can show that (27) has a minimizer. As in the Dirichlet boundary condition case, it follows from Theorem III.4.1 in [9] that the dual problem (D$_N$) also has a solution, which characterizes the non-uniqueness of solutions of the primal problem (27).
Theorem 17. The infimum of the primal problem (P$_N$) is equal to the supremum of the dual problem (D$_N$). Moreover, the dual problem has an optimal solution $b$, and $J = -2b$ satisfies
(32) $|J_{ij}| = a_{ij}$ for every $i, j$ with $u_i \ne u_j$,
and (33), for every minimizer $u$ of (27). Conversely, if (32) and (33) hold for some $M_g$, then $u$ is a minimizer of (27).
Proof. Let $b$ be a solution to the dual problem with corresponding $\lambda \in \mathbb{R}$, and suppose $u$ is a minimizer of (27). Then the inequalities in (34) are indeed equalities, and taking $J = -2b$ we see that (32) and (33) hold. It is easy to see from the above computations that the converse also holds.
Corollary 18. If u and v are two arbitrary minimizers of (27), then

3.2. Voltage Potentials Have Minimum Energy.
We can now prove the analog to Theorem 10.
Theorem 19. Let $g \ne 0$ be a function on $\partial V$ satisfying (25), and let $a$ be a measurement matrix. If $v \in V_{(g,a)}$, then $v$ is a minimizer of the least gradient problem (27). Conversely, given any $a = (a_{i,j})$ with $a_{i,j} \ge 0$ and $g \in \mathbb{R}^{|\partial V|}$ satisfying (25), if $v$ is a minimizer of the least gradient problem (27), then $v \in V_{(\lambda g, a)}$ for some $\lambda > 0$.
Proof. Suppose $v \in V_{(g,a)}$ and let $J$ be the corresponding current on $E$. Following computations similar to those in the proof of Theorem 10, we find that the minimum of the least gradient problem (27) is equal to 1. Moreover, the minimum is achieved for every $v \in V_{(g,|J|)}$. Now suppose $v$ is a minimizer of problem (27), and let $b$ be a solution of the dual problem (D$_N$) with corresponding $\lambda \in \mathbb{R}$. Let $J = -2b$. Then by Theorem 17 we see that $v \in V_{(\lambda g, a)}$.
Remark 2. Note that Corollary 18 indicates that the direction of the flow of the current along each edge is unique, despite the multiplicity of minimizers of (27) (see also Remark 1).

3.3. Multiple Measurements.
Suppose we have two data sets $(g_1, a_1)$ and $(g_2, a_2)$, and would like to find a conductivity matrix $\sigma$ inducing the currents with magnitudes $|J^1| = a_1$ and $|J^2| = a_2$ when the currents $g_1$ and $g_2$ are injected on the boundary vertices $\partial_1 V$ and $\partial_2 V$, respectively. We can consider the minimization problem (36), where $F$ is defined by (22). The analog of Theorem 11 can be formulated and proved in this setting, and we can similarly extend to a finite number of measurements.

4. Algorithms for Finding Minimizers.
In this section we present numerical algorithms for finding minimizers of the $l^1$ minimization problems discussed in Sections 2 and 3, yielding voltage potentials for Dirichlet or Neumann boundary conditions. The primal problems (P$_D$) and (P$_N$) can both be written as
(37) $\min_{u \in H} F(Du),$
where $H = H_0(V)$ for the Dirichlet case and $H = M_0$ for the Neumann problem. Introducing the auxiliary variable $d = Du$ leads to an equivalent constrained problem in the pair $(u, d)$. To solve this minimization problem, we use and develop an algorithm in the spirit of the alternating split Bregman method, which was first introduced by Goldstein and Osher [15]. The split Bregman algorithm initializes the vectors $b^0$ and $d^0$ and produces the sequences $u^k$, $b^k$, and $d^k$ by jointly minimizing an augmented objective in $(u, d)$ in (39) and then updating $b^k$, where $\alpha > 0$. Since the joint minimization problem (39) in both $u$ and $d$ is in general expensive to solve exactly, Goldstein and Osher [15] proposed the alternating split Bregman algorithm for solving problems of type (37), in which $u$ and $d$ are updated successively rather than jointly. See [1, 10, 13, 15, 37, 38] for more details. It was pointed out by Esser [10] and Setzer [38] that this idea of alternating minimization was first presented for the augmented Lagrangian algorithm by Gabay and Mercier [13] and Glowinski and Marroco [14]. The resulting algorithm is called the alternating direction method of multipliers (ADMM) [12] and is equivalent to the alternating split Bregman algorithm. The convergence of ADMM in finite dimensional Hilbert spaces was established by Eckstein and Bertsekas [8]; this in particular implies convergence of the alternating split Bregman algorithm in finite dimensional Hilbert spaces. Cai, Osher, and Shen [1] and Setzer [37, 38] also independently presented convergence results for the alternating split Bregman algorithm in finite dimensional Hilbert spaces.
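To make the iteration concrete, here is a small Python sketch of an alternating split Bregman loop for a weighted $l^1$-of-gradient problem with fixed boundary values, written with an edge-node incidence matrix. This is an illustration of ours, under our own reconstruction of the updates (a least-squares $u$-step, a shrinkage $d$-step, and a Bregman $b$-step); it is not the paper's exact Algorithm 1 or its MATLAB code.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the proximal map of t * |.| applied component-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(D, w, fixed, fval, n, alpha=1.0, iters=500):
    """Minimize sum_e w_e |(D u)_e| subject to u[fixed] = fval,
    where D is an (edges x nodes) incidence matrix."""
    free = np.setdiff1d(np.arange(n), np.asarray(fixed))
    u = np.zeros(n)
    u[fixed] = fval
    d = np.zeros(D.shape[0])
    b = np.zeros(D.shape[0])
    M = D[:, free]
    A = M.T @ M                       # normal equations for the u-step
    for _ in range(iters):
        # u-step: least squares in the free components
        u[free] = np.linalg.solve(A, M.T @ (d - b - D[:, fixed] @ fval))
        Du = D @ u
        d = shrink(Du + b, w / alpha) # d-step: weighted shrinkage
        b = b + Du - d                # Bregman (dual) update
    return u

# Path 0-1-2-3 with unit weights and boundary values u0 = 1, u3 = 0;
# the minimal total variation is 1, attained by any monotone interpolation.
D = np.array([[-1., 1, 0, 0],
              [0, -1, 1, 0],
              [0, 0, -1, 1]])
w = np.ones(3)
u = split_bregman(D, w, [0, 3], np.array([1.0, 0.0]), 4)
```

On this toy problem the iteration settles on the linear interpolation, one of the (non-unique) minimizers, mirroring the non-uniqueness discussed in Section 2.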
In [27] and [29] the authors proved the convergence of the alternating split Bregman algorithm in infinite dimensional Hilbert spaces by showing that the alternating split Bregman algorithm corresponds to the Douglas-Rachford splitting algorithm applied to the dual problem. Indeed, the dual problems (15) and (30) can be written in the form
$$0 \in A(p) + B(p),$$
where $A := \partial G^* \circ (-\mathrm{div})$ and $B := \partial F^*$ are maximal monotone operators on $H$. For a set-valued operator $P : H \to 2^H$, let $J_P$ denote its resolvent, i.e. $J_P = (\mathrm{Id} + P)^{-1}$. The Douglas-Rachford splitting algorithm states that for any initial elements $x^0$ and $p^0$ and any $\alpha > 0$, the sequences $p^k$ and $x^k$ generated by the algorithm converge to some $x$ and $p$, respectively. Furthermore, $p = J_{\alpha B}(x)$ and $p$ satisfies $0 \in A(p) + B(p)$.
Let us introduce the sequences $b^k$ and $d^k$ associated with $p^k$ and $x^k$. Notice that both sequences $b^k$ and $d^k$ converge. The resolvents $J_{\alpha A}(2p^k - x^k)$ and $J_{\alpha B}(x^{k+1})$ can be computed by solving the minimization problems (47) and (48), where $u^{k+1}$ and $d^{k+1}$ are the minimizers of (47) and (48) over $u \in H_0(V)$ for the Dirichlet problem (over $u \in M_0$ for the Neumann problem) and over $d \in H(E)$, respectively.
In the case of the Dirichlet boundary condition, the minimizer of $I_1$ should satisfy a linear Euler-Lagrange equation, and it follows from Proposition 1 that this system is uniquely solvable.
In the case of the Neumann boundary condition, $I_1$ also has a unique minimizer in $M_0$ up to adding a constant, but identifying the solutions is more subtle. First note that if $u$ is a minimizer of $I_1$ in $M_0$, then it satisfies the Euler-Lagrange equation (50) for some $\beta \in \mathbb{R}$. Conversely, for every $\beta \in \mathbb{R}$, every solution of (50) which belongs to $M_0$ is a minimizer of $I_1$. Since $\sum_{i \in \partial V} g_i = 0$ and $\sum_{i=1}^{n} (\mathrm{div}\, c)_i = 0$ for any $c \in H(E)$, by Propositions 13 and 14 the system (50) has a unique solution in $H(V)$ for every $\beta \in \mathbb{R}$, up to adding a constant. To identify $\beta$ and find a solution of (50) in $M_0$, let $z$ be a solution of (51). Then $v = u + \beta z$ belongs to $M_0$ and satisfies equation (50), and hence $v$ is the unique minimizer of $I_1$ over $M_0$, up to adding a constant. The minimizer of $I_2$ for the Dirichlet problem can be computed directly; for the Neumann problem, $u_f$ is replaced by $v_g$.
Therefore Douglas-Rachford splitting leads to the following convergent algorithms for the Dirichlet and Neumann problems.
Algorithm 1 (Finding a minimizer of the Dirichlet Problem). The following proposition follows directly from the convergence of the Douglas-Rachford splitting algorithm and Theorem 1.2 in [27]; see also [1, 37, 38].
Proposition 20. Let $u^k$, $b^k$, and $d^k$ be the sequences produced by Algorithm 1. Then $u^k \to u$ and $b^k \to \frac{1}{2\alpha} J$, where $u$ and $J$ are solutions of (13) and its dual problem (D), respectively. In addition, $d^k \to Du$. In particular, $u$ is a voltage potential corresponding to the data $(f, a)$, and $J$ is the induced current with $|J| = a$.
Proposition 21. Let $v^k$, $b^k$, and $d^k$ be the sequences produced by Algorithm 2. Then $v^k \to v$ and $b^k \to \frac{1}{2\alpha} J$, where $v$ and $J$ are solutions of (28) and its dual problem (D$_N$), respectively. In addition, $d^k \to Dv$. In particular, $v$ is a voltage potential corresponding to the data $(\lambda g, a)$ for some $\lambda \in \mathbb{R}$, and $J$ is the induced current with $|J| = a$. Moreover, $\lambda$ is the common optimal value of the primal and dual problems (P$_N$) and (D$_N$).

4.1. Numerical Simulations.
We performed a set of numerical simulations in MATLAB to demonstrate the convergence of Algorithms 1 and 2. A simple graph with 100 vertices was generated, and edges were randomly assigned between nodes with an approximate density of 0.125. Random numbers uniformly distributed between 0 and 1 were then assigned to each edge as their conductivities. We then selected 5 boundary nodes and randomly assigned values between 0 and 1 as boundary data. For the Dirichlet boundary data, the forward problem was solved to determine the current J, generating the data a = |J|. To generate the boundary data for the Neumann problem, we found the current entering/leaving the system at each boundary vertex. The simulations for both the Dirichlet and Neumann boundary data were done on the same graph structure with the same current data |J|. The nonsingular linear systems in Algorithm 1 were solved using the MATLAB mldivide function, and the singular linear systems in Algorithm 2 were solved using the pinv function. The vector u_f was chosen to be zero on int(V) and f on ∂V. The vector v_g in Algorithm 2 was chosen using the MATLAB mldivide function. Tables 1 and 2 show the numerical errors for Algorithms 1 and 2 on the same graph for different levels of tolerance. Simulations were run on a late 2013 MacBook Pro with a 2.4 GHz Intel Core i5 processor. We used the L^2 matrix norm for error computations. While running our simulations we observed that the speed of convergence of Algorithm 1 varied widely depending on the choice of boundary data. We also observed that the speed of convergence of Algorithm 2 was always the same as or faster than that of Algorithm 1. To test this observation, we ran Algorithms 1 and 2 on the same graph used in Tables 1 and 2 for 1000 different choices of Dirichlet boundary data. The average number of iterations for each algorithm is shown in Table 3. We also remark that changing the structure of the graph also affects the speed of convergence.
It is not clear to the authors why Algorithm 2 converges faster than Algorithm 1, and an in-depth analysis of the speed of convergence of Algorithms 1 and 2 remains open.

5. Applications. In this section we discuss potential applications of our results on electrical networks to random walks on graphs and to cryptography.

5.1. Random Walks on Graphs.
Let $G = (V, E)$ be a connected, directed, simple graph with $n$ nodes, and consider a random walk on $G$. Suppose a random walker begins at node $a$ and walks until they reach node $b$; if they return to $a$ before reaching $b$, they keep walking. Let $P = (P_{ij}) \in H(E)$ be the matrix of transition probabilities, i.e. $0 \le P_{ij} \le 1$ is the probability of the walker stepping from node $i$ to node $j$. In particular, $\sum_j P_{ij} = 1$ for all $1 \le i \le n$. Let $W_{ij}$ be the expected net number of times the walker steps from node $i$ to node $j$ before exiting the graph at node $b$; note that $W_{ij} = -W_{ji}$. Can one determine the transition probabilities $P = (P_{ij})$ from the knowledge of the boundary vertices $\{a, b\}$ and $W = (W_{i,j})$? In this section, among other results, we show that the answer is yes, and describe an algorithm for determining such $P$.
There is a close connection between electrical networks and random walks on graphs [6]. Let $G = (V, E)$ be an electrical network with conductivity matrix $\sigma = (\sigma_{ij})$, $\sigma_{i,j} \in [0, \infty)$, and let $\partial V = \{a, b\}$. Suppose a current $g$ with $g(a) = 1$ and $g(b) = -1$ is injected into the network, inducing a current $J$ along the edges. Define
(58) $P_{ij} = \frac{\sigma_{ij}}{\sum_{k} \sigma_{ik}},$
and assign the transition probability matrix $P$ to the graph $G = (V, E)$. Then the expected net number of times the walker takes a step from node $i$ to node $j$ is indeed $J_{ij}$, i.e.

J = W.
Therefore, if the boundary nodes $\partial V = \{a, b\}$ and the magnitude of the expected net number of times the walker should walk along the edges of the graph are prescribed, then by the methods presented in Sections 3 and 4 one can first find a conductivity matrix $\sigma$ inducing the current $J = W$ on the network, and then compute the transition probability matrix $P$ by (58).
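The passage from conductivities to transition probabilities in (58) is a row normalization, and the Neumann forward solve produces the corresponding net crossings. A brief sketch of ours (the network and names are illustrative):

```python
import numpy as np

# Conductivity matrix of a small network; boundary {a, b} = {0, 3}
sigma = np.array([[0., 2, 1, 0],
                  [2, 0, 0, 1],
                  [1, 0, 0, 3],
                  [0, 1, 3, 0]])

# Transition probabilities as in (58): P_ij = sigma_ij / sum_k sigma_ik
P = sigma / sigma.sum(axis=1, keepdims=True)

# Inject unit current at a = 0 and extract it at b = 3 (Neumann data),
# then read off the induced current J = W, the expected net edge crossings
L = np.diag(sigma.sum(axis=1)) - sigma
v = np.linalg.pinv(L) @ np.array([1.0, 0, 0, -1.0])
J = sigma * (v[:, None] - v[None, :])
# Rows of P sum to 1; interior rows of J sum to 0 (Kirchhoff's law)
```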
The connection between random walks on graphs and electrical networks with Neumann boundary conditions can be generalized to the case where $\partial V = \Gamma_a \cup \Gamma_b$ with $\Gamma_a \cap \Gamma_b = \emptyset$ and $\Gamma_a, \Gamma_b \ne \emptyset$. Let $g \in \mathbb{R}^{|\partial V|}$ with $g|_{\Gamma_a} \ge 0$, $g|_{\Gamma_b} \le 0$, $\sum_{i \in \Gamma_a} g_i = 1$, and $\sum_{i \in \Gamma_b} g_i = -1$.
Suppose we would like to determine a transition matrix $P$ such that if a random walker enters the network at a vertex $k \in \Gamma_a$ with probability $g_k$, then
• they exit the network at a node $l \in \Gamma_b$ with probability $|g_l|$, and
• the expected net number of times they pass from node $i$ to node $j$ before exiting the network is $W_{ij}$, $1 \le i, j \le n$.
As explained above, to determine the transition matrix $P$ it suffices to find a conductivity matrix $\sigma$ inducing the current $J = W$ with Neumann data $g$ on $\partial V$. Then $P$ can be computed from (58).
Suppose $\partial V = \{a, b\}$ and consider the inverse problem of determining the transition probabilities from the relative net number of times the walker passes along the edges of the graph, i.e. $\alpha W = (\alpha W_{i,j})$, where $\alpha$ is an unknown constant. Then one can determine a transition probability matrix $P$ by finding a conductivity matrix $\sigma$ through the $l^1$ minimization problem (8) with $a = \alpha W$, $f(a) = 1$, and $f(b) = 0$. A transition matrix can also be obtained by minimizing (27) with the Neumann boundary condition $g(a) = 1$ and $g(b) = -1$.
Remark 3. Note that in this section we assume that the conductivity matrix $\sigma = (\sigma_{ij})$ satisfies $\sigma_{i,j} \in [0, \infty)$. Indeed, we do not allow perfect conductors, as otherwise the probability matrix $P$ in (58) would not be well-defined. As described in the introduction, if for a minimizer $v$ of (8) or (27) we have $v_i = v_j$ and $|J_{i,j}| \ne 0$ for some $1 \le i, j \le n$, then the edge $(i, j)$ is a perfect conductor, i.e. $\sigma_{i,j} = \infty$. If $v$ is a minimizer of (8) or (27) leading to perfect conductance on an edge, then one may look for an increasing function $F : \mathbb{R} \to \mathbb{R}$ such that $u = (u_1, u_2, \dots, u_n) := (F(v_1), F(v_2), \dots, F(v_n))$ satisfies $u_i \ne u_j$ for $i \ne j$. Note that such a $u$ will also be a minimizer of (8) or (27) and would provide a conductivity matrix $\sigma$ with $\sigma_{ij} \in [0, \infty)$, and hence the transition probabilities can be computed from (58). If such an increasing function $F$ does not exist, then there exists no transition probability matrix $P$ for which the expected net number of times the walker passes along the edges is $W$.

5.2. Applications in Cryptography.
In this section we discuss a potential application of our results on electrical networks in public-key encryption. As stated in Remark 2, Theorem 10 implies that a mass preserving flow $J = (J_{ij})$ along the edges of a graph $G = (V, E)$ can be recovered from the knowledge of $|J| = (|J_{ij}|)$ and its net flux on the boundary nodes $\partial V$. More precisely, suppose $J_{i,j}$ is the current from node $i$ to node $j$ ($J_{ij} = -J_{ji}$ for $(i, j) \in E$), and suppose $\sum_{j=1}^{n} J_{ij} = 0$ for every interior node $i \notin \partial V$, and $\sum_{j=1}^{n} J_{ij} = f_i$ for every boundary node $i \in \partial V$.
Then J can be reconstructed from the knowledge of (|J|, f, ∂V ). This counter-intuitive result has a potential application in cryptography. To see the connection, let us translate a special case of this result to the language of matrices.
Let $I_n$ be a subset of $\{1, 2, \dots, 2n+1\}$ with $n$ elements, and let $\mathcal{A}_{I_n}$ be the space of $(2n+1) \times (2n+1)$ anti-symmetric matrices $A = (a_{ij})$ satisfying the following properties:
I. $a_{ij} \in \{-1, 0, +1\}$ for $i \ne j$ and $a_{ii} = 0$, for all $1 \le i, j \le 2n+1$;
II. every row of $A$ contains an even number of non-zero entries;
III. the sum of the entries of the $i$th row is equal to zero if $i \notin I_n$;
IV. for $i \in I_n$, the sum of the entries of the $i$th row is denoted by $f_i$, and is not necessarily zero.
Note that $f \in \mathbb{R}^n$. Suppose a pair of communicators have agreed on a set of indices $I_n \subset \{1, 2, \dots, 2n+1\}$ with $n$ elements, both are aware of $I_n$, and would like to securely communicate a matrix $A \in \mathcal{A}_{I_n}$. Then the first party can simply send the key $(|A|, f)$, where $f \in \mathbb{R}^n$ is the vector of sums of the entries of the rows of $A$ that belong to $I_n$. The second party can decrypt the message and find $A$ from the knowledge of $(|A|, f, I_n)$, using the algorithm we developed in Section 4. Since $a_{ij}$ only takes integer values in $\{-1, 0, +1\}$, a few iterations of the algorithm should be enough to determine $A$. On the other hand, finding $A$ from the knowledge of $(|A|, f)$ alone would be extremely difficult for an adversary who is not aware of $I_n$. Indeed, since all rows of $|A|$ have an even number of entries equal to $1$, the adversary cannot determine the boundary nodes $I_n$ from $|A|$. To decrypt the message, the adversary faces the problem of guessing $I_n$ among the $\binom{2n+1}{n}$ subsets of $\{1, \dots, 2n+1\}$ with $n$ elements and matching it with $f$. The number of different possibilities is
$$n!\,\binom{2n+1}{n} \approx \frac{2^{2n+1}}{\sqrt{\pi n}}\, n!,$$
which grows very fast and makes decryption extremely difficult for adversaries when $n$ is large. The above application in public-key encryption and the challenges of its implementation will be further studied in a forthcoming paper.
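The size of the adversary's search space is easy to tabulate with exact integer arithmetic; the second function below implements the asymptotic estimate quoted in the text (the function names are ours):

```python
from math import comb, factorial, pi, sqrt

def possibilities(n):
    """Adversary's search space: match one of the C(2n+1, n) candidate
    index sets I_n with one of the n! orderings of f."""
    return factorial(n) * comb(2 * n + 1, n)

def asymptotic(n):
    """The estimate quoted in the text: 2^(2n+1) / sqrt(pi * n) * n!."""
    return 2 ** (2 * n + 1) / sqrt(pi * n) * factorial(n)

# The exact count and the estimate agree closely already at n = 20,
# and both grow super-exponentially in n
ratio = possibilities(20) / asymptotic(20)
```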