# American Institute of Mathematical Sciences

doi: 10.3934/ipi.2020051

## LANTERN: Learn analysis transform network for dynamic magnetic resonance imaging

1. Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, Guangdong, China
2. University of Chinese Academy of Sciences, 19 Yuquan Road, Shijingshan District, Beijing, China

* Corresponding author: Hairong Zheng

S. Wang and Y. Chen contributed equally to this work

Received: December 2019. Revised: May 2020. Published: August 2020

This paper proposes to learn an analysis transform network for dynamic magnetic resonance imaging (LANTERN). Integrating the strengths of CS-MRI and deep learning, the proposed framework is highlighted by three components: (i) the spatial and temporal domains are sparsely constrained by adaptively trained convolutional filters; (ii) an end-to-end framework learns the parameters in LANTERN, avoiding the difficult parameter selection of traditional methods; (iii) compared with existing deep-learning reconstruction methods, our experimental results show that our method has encouraging capability in exploiting the spatial and temporal redundancy of dynamic MR images. We performed quantitative and qualitative analyses of cardiac reconstructions at different acceleration factors ($2\times$-$11\times$) with different undersampling patterns. In comparison with two state-of-the-art methods, experimental results show that our method achieved encouraging performance.
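The analysis-transform idea behind LANTERN (sparsity enforced on filter responses, alternated with data consistency) can be illustrated with a minimal toy sketch. This is not the paper's network: the learned convolutional filters are replaced by a fixed finite-difference operator, the k-space sampling operator by a simple entry mask, and all names (`reconstruct`, `analysis`, etc.) are illustrative. It alternates shrinkage of the analysis coefficients with gradient steps on a penalized least-squares objective, the same alternation that LANTERN unrolls into network iterations.

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def analysis(x):
    """Finite-difference analysis transform D (stand-in for learned filters)."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def analysis_adjoint(u, n):
    """Adjoint D^T of the finite-difference operator."""
    g = [0.0] * n
    for i, ui in enumerate(u):
        g[i] -= ui
        g[i + 1] += ui
    return g

def reconstruct(y, mask, lam=0.05, rho=1.0, step=0.1, iters=1000):
    """Alternate (a) shrinking analysis coefficients z = soft(Dx) with
    (b) gradient steps on 0.5*||Mx - y||^2 + 0.5*rho*||Dx - z||^2,
    where M keeps only the sampled entries."""
    n = len(y)
    x = [yi if m else 0.0 for yi, m in zip(y, mask)]  # zero-filled init
    for _ in range(iters):
        z = [soft(d, lam / rho) for d in analysis(x)]          # sparsity step
        resid = [d - zi for d, zi in zip(analysis(x), z)]
        pen = analysis_adjoint(resid, n)
        grad = [((xi - yi) if m else 0.0) + rho * p            # data + prior
                for xi, yi, m, p in zip(x, y, mask, pen)]
        x = [xi - step * g for xi, g in zip(x, grad)]
    return x
```

On a piecewise-constant signal with one missing sample inside a constant run, the sparsity prior fills the gap from its neighbors while the sampled entries stay close to the measurements. LANTERN's contribution is to make the filters and thresholds in this loop trainable end-to-end rather than hand-picked.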

Citation: Shanshan Wang, Yanxia Chen, Taohui Xiao, Lei Zhang, Xin Liu, Hairong Zheng. LANTERN: Learn analysis transform network for dynamic magnetic resonance imaging. Inverse Problems & Imaging, doi: 10.3934/ipi.2020051
##### Figures:
The proposed LANTERN network architecture for dynamic MRI reconstruction. In (A) and (B), the blue arrows indicate the forward process and the pink arrows indicate back-propagation to update the network parameters, where $i$ denotes the $i$-th of $N_i$ total iterations, $k$ denotes the number of times the prior loop runs, and $(i, k)$ means that the prior loops $k$ times within the $i$-th iteration
Visual comparison of the sensitivity to training data size. From left to right: reconstruction results (top row) from networks trained with different amounts of data, using the proposed method with a 1D random sampling pattern at an acceleration factor of 4. PSNR values are given in the middle and reconstruction error maps are shown at the bottom
Comparison of the random Gaussian, TV, DCT and LANTERN initialization modes based on the proposed method with 1D random sampling at an acceleration factor of 4. PSNR values are given under the results
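The DCT initialization compared above can be sketched concretely. Below is a minimal, assumption-laden example (the function name `dct_filters` and the filter length are illustrative, not from the paper) that builds an orthonormal DCT-II basis whose rows could serve as initial 1-D sparsifying filters before training:

```python
import math

def dct_filters(n=8):
    """Orthonormal DCT-II basis; row k is one length-n filter.
    Row 0 is the DC filter; higher rows oscillate faster."""
    rows = []
    for k in range(n):
        s = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        rows.append([s * math.cos(math.pi * (i + 0.5) * k / n)
                     for i in range(n)])
    return rows
```

Orthonormal rows mean the initial transform preserves energy, which is one plausible reason a DCT start outperforms a random Gaussian one in the comparison above: training begins from a well-conditioned, already-sparsifying transform.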
Comparison of k-t SLR, D5C5 and the proposed method with 1D random sampling at an acceleration factor of 4. PSNR values are given under the results
Comparison of k-t SLR, D5C5 and the proposed method with 1D random sampling at an acceleration factor of 5. PSNR values are given under the results
Average quantitative indices over the 50 test data versus acceleration factor for various methods with 1D random sampling
Average quantitative indices over the 50 test data versus acceleration factor for various methods with radial sampling
The training and validation loss curves of the proposed model
Comparison of k-t SLR, D5C5 and the proposed method with 2D radial sampling at an acceleration factor of 11. PSNR values are given under the results
Acceleration factors evaluated: 2×, 3×, 4×, 5×, 7×, 9× and 11× for 1D random sampling; 2×, 3×, 4×, 5×, 7×, 9×, 11× and 15× for 2D radial sampling
Quantitative comparison of the sensitivity to training data size. Average quantitative indicator values over the 50 test data for networks trained from different amounts of data, with a 1D random sampling pattern at an acceleration factor of 4
| 1D Random 4× | NMSE | PSNR/dB | SSIM | HFEN |
|---|---|---|---|---|
| data50 | 0.0413 | 40.8047 | 0.8943 | 0.8333 |
| data60 | 0.0397 | 41.1515 | 0.9000 | 0.7939 |
| data80 | 0.0388 | 41.3589 | 0.9034 | 0.7729 |
| data100 | 0.0385 | 41.4391 | 0.9043 | 0.7633 |
| data120 | 0.0386 | 41.4402 | 0.9035 | 0.7685 |
Quantitative comparison of the sensitivity to initialization. Average quantitative indicator values over the 50 test data for networks trained with different initializations, with a 1D random sampling pattern at an acceleration factor of 4
| 1D Random 4× (average) | PSNR/dB | HFEN |
|---|---|---|
| Gaussian | 39.8089 | 0.9459 |
| TV | 40.5884 | 0.8514 |
| DCT | 40.9971 | 0.8064 |
| LANTERN | 41.4391 | 0.7633 |
Average reconstruction metrics with standard deviation over the 50 test data for various methods with 1D random sampling at different acceleration factors
| Method | PSNR/dB (7×) | SSIM (7×) | HFEN (7×) | PSNR/dB (11×) | SSIM (11×) | HFEN (11×) |
|---|---|---|---|---|---|---|
| Zero-filling | 29.14$\pm$2.20 | 0.57$\pm$0.04 | 2.73$\pm$0.65 | 27.58$\pm$2.08 | 0.51$\pm$0.04 | 3.12$\pm$0.74 |
| k-t SLR | 33.50$\pm$2.70 | 0.77$\pm$0.03 | 1.83$\pm$0.53 | 32.44$\pm$2.61 | 0.73$\pm$0.03 | 2.01$\pm$0.63 |
| D5C5 | 36.76$\pm$2.00 | 0.78$\pm$0.03 | 1.40$\pm$0.33 | 35.22$\pm$2.00 | 0.73$\pm$0.03 | 1.82$\pm$0.50 |
| Proposed | 37.48$\pm$2.45 | 0.82$\pm$0.02 | 1.31$\pm$0.36 | 35.40$\pm$2.60 | 0.77$\pm$0.03 | 1.67$\pm$0.53 |
Average reconstruction metrics with standard deviation over the 50 test data for various methods with radial sampling at different acceleration factors
| Method | PSNR/dB (11×) | SSIM (11×) | HFEN (11×) | PSNR/dB (15×) | SSIM (15×) | HFEN (15×) |
|---|---|---|---|---|---|---|
| Zero-filling | 22.269$\pm$1.37 | 0.345$\pm$0.06 | 5.198$\pm$0.72 | 20.153$\pm$1.27 | 0.275$\pm$0.05 | 5.986$\pm$0.67 |
| k-t SLR | 31.961$\pm$2.34 | 0.718$\pm$0.03 | 2.179$\pm$0.51 | 31.518$\pm$2.36 | 0.707$\pm$0.04 | 2.229$\pm$0.54 |
| D5C5 | 34.954$\pm$2.08 | 0.701$\pm$0.03 | 1.735$\pm$0.42 | 34.248$\pm$2.04 | 0.677$\pm$0.03 | 1.907$\pm$0.45 |
| Proposed | 38.874$\pm$2.28 | 0.831$\pm$0.03 | 1.019$\pm$0.26 | 38.115$\pm$2.23 | 0.808$\pm$0.03 | 1.164$\pm$0.30 |
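The NMSE and PSNR columns in the tables above follow standard definitions; a minimal sketch is given below for reference. SSIM and HFEN are omitted because they require windowed statistics and Laplacian-of-Gaussian filtering; the exact normalizations used in the paper may differ, so treat these as illustrative.

```python
import math

def nmse(ref, rec):
    """Normalized mean squared error: ||rec - ref||^2 / ||ref||^2."""
    num = sum((r - x) ** 2 for r, x in zip(rec, ref))
    den = sum(x ** 2 for x in ref)
    return num / den

def psnr(ref, rec, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the
    largest magnitude in the reference image."""
    if peak is None:
        peak = max(abs(x) for x in ref)
    mse = sum((r - x) ** 2 for r, x in zip(rec, ref)) / len(ref)
    return 10.0 * math.log10(peak ** 2 / mse)
```

Lower NMSE and HFEN and higher PSNR and SSIM indicate better reconstruction, which is how the "Proposed" rows above should be read against the baselines.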
