Electrical impedance tomography (EIT) plays a crucial role in non-invasive imaging, with both medical and industrial applications. In this paper, we present three data-driven reconstruction methods for EIT imaging that were submitted to the Kuopio Tomography Challenge 2023 ($ {\mathsf{KTC2023}}$). First, we introduce a post-processing method, which achieved first place at KTC2023. We then present a fully learned approach and a conditional diffusion approach. All three methods share a similar neural network backbone and were trained on a synthetically generated data set, allowing a fair comparison between these data-driven reconstruction methods.
Figure 1. An illustration of the EIT measurement tank ($\Omega$) and the electrodes $e_l$, with a sample of the injection patterns. In black we show the adjacent injections; in green all against $e_1$; in pink all against $e_9$; in magenta all against $e_{17}$; in orange all against $e_{25}$. Dashed injections are removed in the $2^{\text{nd}}$ challenge level; dotted ones in the $4^{\text{th}}$; dash-dotted ones in the $6^{\text{th}}$
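The adjacent injections shown in black in Figure 1 can be encoded as current patterns over the electrodes. The sketch below is illustrative only: the 32-electrode count is read off the figure, and the $\pm 1$ amplitudes and the restriction to adjacent pairs (omitting the against-reference patterns) are assumptions, not the official KTC2023 definition.

```python
import numpy as np

def adjacent_injections(n_electrodes=32):
    """Build one current pattern per adjacent electrode pair (e_l, e_{l+1}).

    Illustrative sketch: +1/-1 mark the source/sink electrodes; the real
    challenge amplitudes and full pattern set may differ.
    """
    patterns = np.zeros((n_electrodes - 1, n_electrodes))
    for l in range(n_electrodes - 1):
        patterns[l, l] = 1.0       # current source at e_{l+1}
        patterns[l, l + 1] = -1.0  # current sink at the neighbouring electrode
    return patterns

P = adjacent_injections()
# every injection pattern must carry zero net current
assert np.allclose(P.sum(axis=1), 0.0)
```

Dropping rows of such a matrix mimics the challenge levels, where dashed, dotted and dash-dotted injections are successively removed.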
Figure 2. Example initial reconstructions on challenge levels 1 and 6. Level 6 was chosen as it best highlights differences between linearised reconstructions. We evaluate an independent FSM prior, an independent SM prior, a joint SM+LM prior and two joint FSM+SM+LM priors with different regularisation strengths. The chosen image is a sample of the validation data
Figure 3. $ {\mathsf{FC}}\; {\mathsf{U-Net}}$ network. We first use a linear layer to map the measurements to a $ 64 \times 64 $ pixel grid, which is then bilinearly interpolated to the $ 256 \times 256 $ grid. The network is trained to output class probabilities using the categorical cross-entropy loss. The class probabilities are converted to segmentation maps by assigning the class with the highest probability
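The final step described in the captions of Figures 3 and 4, converting per-pixel class probabilities into a segmentation map, can be sketched as follows. The three-class layout (background / conductive / resistive inclusion) is an assumption made for illustration.

```python
import numpy as np

def probabilities_to_segmentation(logits):
    """Map per-pixel class logits of shape (C, H, W) to a segmentation map
    of shape (H, W) by picking the most probable class per pixel."""
    # numerically stabilised softmax over the class axis
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    return probs.argmax(axis=0)

# toy logits for 3 classes on the 256 x 256 pixel grid (assumed layout)
logits = np.random.randn(3, 256, 256)
seg = probabilities_to_segmentation(logits)
assert seg.shape == (256, 256)
```

Since argmax is invariant under softmax, the normalisation step only matters when the probabilities themselves are needed, e.g. for the cross-entropy loss during training.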
Figure 4. $ {\mathsf{Post-Processing}}$ network. The five linearised reconstructions are interpolated to the pixel grid as described in Section 2.3. The network is trained to output class probabilities using the categorical cross-entropy loss. The class probabilities are converted to segmentation maps by assigning the class with the highest probability
Figure 5. $ {\mathsf{Conditional-Diffusion}}$ network. The five linearised reconstructions are interpolated to the pixel grid as described in Section 2.3. The noisy image and the linearised reconstructions are input to the network. Using the $ {\boldsymbol{\epsilon}} $-matching loss function, the network is trained to estimate the noise. Sampling from the network then yields a segmentation map
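The $ {\boldsymbol{\epsilon}} $-matching objective mentioned in the caption of Figure 5 can be sketched for a single noise level. This is a generic DDPM-style training step, not the authors' exact implementation: the toy predictor stands in for the conditional network, which would additionally receive the five linearised reconstructions as conditioning input.

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_matching_loss(x0, eps_pred_fn, alpha_bar):
    """One epsilon-matching training step: corrupt the clean image x0 with
    Gaussian noise at level alpha_bar and penalise the squared error between
    the sampled noise and the network's noise estimate."""
    eps = rng.standard_normal(x0.shape)
    x_noisy = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return np.mean((eps_pred_fn(x_noisy) - eps) ** 2)

# clean 256 x 256 image and a placeholder predictor (assumptions for the sketch)
x0 = rng.standard_normal((256, 256))
loss = eps_matching_loss(x0, lambda x: np.zeros_like(x), alpha_bar=0.5)
```

At sampling time, iterating the learned noise estimate backwards through the noise levels produces the image from which the segmentation map is read off.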
Table 1. Number of training samples used per level
| Level | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| Training samples | $ 16527 $ | $ 16619 $ | $ 16591 $ | $ 16587 $ | $ 16604 $ | $ 12102 $ | $ 16298 $ |
Table 2. Quantitative comparison of our three submissions via the structural similarity index measure (SSIM). These are official challenge results, rounded to the nearest hundredth. Second place was achieved by Team ABC from the Federal University of ABC, Brazil; third place by Team DTU from the Technical University of Denmark. For each sample, the SSIM is averaged between the conductive and resistive inclusions. At each level the SSIM is summed across the three samples, and the overall score of a method is the sum across all levels
| Level | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Sum |
| $ {\mathsf{FC}}\; {\mathsf{U-Net}}$ | $ 2.72 $ | $ 2.64 $ | $ 2.31 $ | $ 1.80 $ | $ 2.06 $ | $ 2.07 $ | $ 1.53 $ | $ 15.13 $ |
| $ {\mathsf{Post-Processing}}$ | $ 2.76 $ | $ 2.56 $ | $ 2.54 $ | $ 1.71 $ | $ 2.06 $ | $ 1.92 $ | $ 1.69 $ | $ 15.24 $ |
| $ {\mathsf{Conditional-Diffusion}}$ | $ 2.67 $ | $ 2.49 $ | $ 2.47 $ | $ 1.61 $ | $ 1.94 $ | $ 1.76 $ | $ 1.65 $ | $ 14.60 $ |
| Team ABC | $ 2.75 $ | $ 2.37 $ | $ 2.07 $ | $ 1.74 $ | $ 1.08 $ | $ 1.53 $ | $ 1.22 $ | $ 12.75 $ |
| Team DTU | $ 2.28 $ | $ 2.30 $ | $ 1.87 $ | $ 1.55 $ | $ 1.34 $ | $ 1.44 $ | $ 1.60 $ | $ 12.45 $ |
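The aggregation described in the caption of Table 2 can be reproduced from the rounded per-level entries. Note that because the official sums were presumably computed before rounding, the row sums of the rounded values can differ from the printed totals by a hundredth or two (e.g. the $ {\mathsf{Conditional-Diffusion}}$ row sums to 14.59 against the printed 14.60).

```python
# Per-level SSIM scores (already summed over the three samples of each level),
# taken from Table 2 for the three submissions; totals are the sums over levels.
ssim_per_level = {
    "FC U-Net":              [2.72, 2.64, 2.31, 1.80, 2.06, 2.07, 1.53],
    "Post-Processing":       [2.76, 2.56, 2.54, 1.71, 2.06, 1.92, 1.69],
    "Conditional-Diffusion": [2.67, 2.49, 2.47, 1.61, 1.94, 1.76, 1.65],
}
totals = {name: round(sum(scores), 2) for name, scores in ssim_per_level.items()}
```

This confirms the printed totals of 15.13 and 15.24 for the first two methods exactly, with the small rounding discrepancy noted above for the third.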
Top: Hand-drawn training phantoms. Bottom: Randomly generated training phantoms. For the visualisation, we only show the circular water tank. However, note that all models are trained using the square
Left: The mesh provided by the organisers. Right: Our custom mesh for the forward operator
Segmentation of the three methods for sample