
# Incorporating structural prior information and sparsity into EIT using parallel level sets

* Corresponding author: Ville Kolehmainen
Abstract: EIT is a non-linear, ill-posed inverse problem that requires sophisticated regularisation techniques to achieve good results. In this paper we consider the use of structural information in the form of edge directions obtained from an auxiliary image of the same object being reconstructed. To allow for cases where the auxiliary image does not provide complete information, we additionally consider a sparsity regularisation for the edges appearing in the EIT image. The combination of these approaches is conveniently described through the parallel level sets approach. We give an overview of previous methods for structural regularisation, then provide a variational setting for our approach and explain its numerical implementation. We present results on simulated and experimental data for different cases with accurate and inaccurate prior information. The results demonstrate that structural prior information improves reconstruction accuracy, even when there is reasonable uncertainty in the prior about the location of the edges or only partial edge information is available.

Mathematics Subject Classification: Primary: 65N21, 65F22; Secondary: 65N30.


• Figure 1.  Numerical experiment (Case 1). Rows top to bottom: $\sigma_{{\rm true}}$ and reference images $p(r)$ (top row), weighting function $\gamma(r)$ (second row), reconstructions using ${\rm SH}_1$ regularisation (third row) and STV regularisation (fourth row)

Figure 2.  Plot of standard deviation with respect to bias of the reconstructed conductivity with different values of the regularisation parameter $\alpha$ for the simulation in figure 1. The left image shows the curves for the mean conductivity in the area of the true inclusion on the top in ${\sigma}_{\rm true}$, and the image on the right for the inclusion on the bottom right. The triangle denotes the point corresponding to the smallest value of $\alpha$ in the curves

Figure 3.  Numerical experiment (Case 2). Rows top to bottom: $\sigma_{{\rm true}}$ and reference images $p(r)$ (top row), weighting function $\gamma(r)$ (second row), reconstructions using ${\rm SH}_1$ regularisation (third row) and STV regularisation (fourth row)

Figure 4.  Numerical experiment (Case 3). Rows top to bottom: $\sigma_{{\rm true}}$ and reference images $p(r)$ (top row), weighting function $\gamma(r)$ (second row), reconstructions using ${\rm SH}_1$ regularisation (third row) and STV regularisation (fourth row)

Figure 5.  Numerical experiment (Case 3): Reconstructions with increasing uncertainty about the edge location in the partial edge information. Rows top to bottom: $\sigma_{{\rm true}}$ and reference images $p(r)$ (top row), weighting function $\gamma(r)$ (second row), reconstructions using ${\rm SH}_1$ regularisation (third row) and STV regularisation (fourth row)

Figure 6.  Numerical experiment (Case 4). Rows top to bottom: $\sigma_{{\rm true}}$ and reference images $p(r)$ (top row), weighting function $\gamma(r)$ (second row), reconstructions using ${\rm SH}_1$ regularisation (third row) and STV regularisation (fourth row)

Figure 7.  Physical experiment. Top section: Photograph of the target and the reference images $p(r)$. The second row shows the weighting functions $\gamma(r)$. Bottom section: Reconstructions. ${\rm SH}_1$ regularisation (third row), STV regularisation (fourth row). (Color scales of the reconstructions are arbitrary in the sense that they are reconstructed 2D values from 3D data)

Table 1.  Examples of $\psi$ for different regularisation schemes in variational form. $\psi$ is the mapping which defines the penalty for the gradient magnitude in (4) and $\kappa$ is the corresponding local diffusivity function in (6).

| Scheme | $\psi(t)$ | $\kappa(t)$ |
| --- | --- | --- |
| $1^{\rm st}$-order Tikhonov | $\frac{t^2}{2}$ | $1$ |
| TV | $\lvert t\rvert$ | $\frac{1}{\lvert t\rvert}$ |
| Smoothed TV | $T\left(t^2 + T^2\right)^{1/2} - T^2$ | $T\left(t^2 + T^2\right)^{-1/2}$ |
| Perona-Malik (1) | $\frac{T^2}{2}\log\left(1 + \frac{t^2}{T^2}\right)$ | $T^2\left(t^2 + T^2\right)^{-1}$ |
| Perona-Malik (2) | $\frac{T^2}{2}\left[1 - \exp\left(-\frac{t^2}{T^2}\right)\right]$ | $\exp\left(-\frac{t^2}{T^2}\right)$ |
| Huber | $T\lvert t\rvert - \frac{T^2}{2}$ if $\lvert t\rvert > T$, else $\frac{t^2}{2}$ | $\frac{T}{\lvert t\rvert}$ if $\lvert t\rvert > T$, else $1$ |
| Tukey | $\frac{T^2}{6}$ if $\lvert t\rvert > T$, else $\frac{T^2}{6}\left[1 - \left(1 - \frac{t^2}{T^2}\right)^3\right]$ | $0$ if $\lvert t\rvert > T$, else $\left(1 - \frac{t^2}{T^2}\right)^2$ |
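The penalty and diffusivity columns of Table 1 are linked by the standard relation $\kappa(t) = \psi'(t)/t$ (the table's equation (6) is not reproduced on this page, but every row satisfies this identity). A minimal sketch, using the illustrative choice $T = 1$, that implements the table's penalties and verifies the relation by central finite differences:

```python
import math

T = 1.0  # threshold parameter; the value is illustrative, not from the paper

# Penalty psi(t) and diffusivity kappa(t) for each scheme in Table 1.
schemes = {
    "Tikhonov":         (lambda t: t**2 / 2,
                         lambda t: 1.0),
    "TV":               (lambda t: abs(t),
                         lambda t: 1.0 / abs(t)),
    "Smoothed TV":      (lambda t: T * math.hypot(t, T) - T**2,
                         lambda t: T / math.hypot(t, T)),
    "Perona-Malik (1)": (lambda t: T**2 / 2 * math.log(1 + t**2 / T**2),
                         lambda t: T**2 / (t**2 + T**2)),
    "Perona-Malik (2)": (lambda t: T**2 / 2 * (1 - math.exp(-t**2 / T**2)),
                         lambda t: math.exp(-t**2 / T**2)),
    "Huber":            (lambda t: T * abs(t) - T**2 / 2 if abs(t) > T else t**2 / 2,
                         lambda t: T / abs(t) if abs(t) > T else 1.0),
    "Tukey":            (lambda t: T**2 / 6 if abs(t) > T
                                   else T**2 / 6 * (1 - (1 - t**2 / T**2)**3),
                         lambda t: 0.0 if abs(t) > T else (1 - t**2 / T**2)**2),
}

def check(name, t, h=1e-6):
    """Return |psi'(t)/t - kappa(t)| using a central difference for psi'."""
    psi, kappa = schemes[name]
    dpsi = (psi(t + h) - psi(t - h)) / (2 * h)
    return abs(dpsi / t - kappa(t))

for name in schemes:
    for t in (0.3, 2.5):  # one point below and one above the threshold T
        assert check(name, t) < 1e-5, name
```

The relation $\kappa = \psi'/t$ explains why, for example, TV gives the diffusivity $1/\lvert t\rvert$ and Tukey switches the diffusion off entirely beyond the threshold.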

Table 2.  Reconstruction errors (28) for the simulated test cases with varying regularisations (${\rm SH}_1$, STV) and reference images (no structure, correct, partial). Cases 1-4 refer to the reconstructions in figures 1, 3, 4 and 6, respectively. Errors are given in percentages.

| Case | ${\rm SH}_1$ no structure | ${\rm SH}_1$ correct | ${\rm SH}_1$ partial | STV no structure | STV correct | STV partial |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 12.3 | 3.6 | 8.6 | 8.8 | 3.5 | 4.9 |
| 2 | 15.6 | 5.6 | 10.9 | 13.8 | 3.3 | 10.1 |
| 3 | 10.6 | 3.5 | 6.5 | 8.0 | 2.4 | 4.9 |
| 4 | 15.3 | 11.5 | 12.9 | 14.3 | 11.1 | 12.7 |
Open Access under a Creative Commons license
