# American Institute of Mathematical Sciences

doi: 10.3934/ipi.2020051

## LANTERN: Learn analysis transform network for dynamic magnetic resonance imaging

1. Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, Guangdong, China
2. University of Chinese Academy of Sciences, 19 Yuquan Road, Shijingshan District, Beijing, China

* Corresponding author: Hairong Zheng

S. Wang and Y. Chen contributed equally to this work

Received: December 2019. Revised: May 2020. Early access: August 2020.

This paper proposes LANTERN, a learned analysis transform network for dynamic magnetic resonance imaging. Integrating the strengths of CS-MRI and deep learning, the proposed framework is highlighted by three components: (ⅰ) the spatial and temporal domains are sparsely constrained by adaptively trained convolutional filters; (ⅱ) an end-to-end framework learns the parameters in LANTERN, avoiding the difficult parameter selection of traditional methods; (ⅲ) compared with existing deep learning reconstruction methods, our method shows encouraging capability in exploiting the spatial and temporal redundancy of dynamic MR images. We performed quantitative and qualitative analyses of cardiac reconstructions at different acceleration factors ($2 \times$-$11 \times$) with different undersampling patterns. In comparison with two state-of-the-art methods, experimental results show that our method achieved encouraging performance.
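The analysis-transform idea above — sparsely constraining the spatial and temporal domains through convolutional filters — can be illustrated with a minimal sketch. This is not the authors' trained network: the fixed finite-difference filters below are stand-ins for the convolutional filters that LANTERN would learn end-to-end, and `analysis_step` is a hypothetical name.

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the l1 norm: sign(z) * max(|z| - lam, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def analysis_step(x, lam=0.1):
    """One sparsity-promoting step on a dynamic image x of shape (ny, nx, nt).

    Fixed finite-difference filters stand in for the learned convolutional
    filters; soft-thresholding their responses enforces sparsity of the
    analysis coefficients in both spatial and temporal domains.
    """
    # Spatial responses: vertical/horizontal differences within each frame
    dy = np.diff(x, axis=0, append=x[-1:, :, :])
    dx = np.diff(x, axis=1, append=x[:, -1:, :])
    # Temporal response: frame-to-frame difference along the time axis
    dt = np.diff(x, axis=2, append=x[:, :, -1:])
    # Soft-threshold each filter response (the sparse auxiliary update)
    return [soft_threshold(d, lam) for d in (dy, dx, dt)]
```

In the learned setting, both the filter weights and the threshold `lam` would be trainable parameters updated by back-propagation rather than hand-picked constants.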

Citation: Shanshan Wang, Yanxia Chen, Taohui Xiao, Lei Zhang, Xin Liu, Hairong Zheng. LANTERN: Learn analysis transform network for dynamic magnetic resonance imaging. Inverse Problems & Imaging, doi: 10.3934/ipi.2020051
The proposed LANTERN network architecture for dynamic MRI reconstruction. In (A) and (B), the blue arrows indicate the forward process and the pink arrows indicate back-propagation to update the network parameters, where $i$ denotes the $i$-th of $N_i$ total iterations, $k$ denotes the number of times the prior loop runs, and $(i, k)$ means that in the $i$-th iteration the prior loops $k$ times
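The iteration structure described in the caption — $N_i$ outer iterations, each running the prior loop $k$ times before a data-consistency update — can be sketched as follows. This is a minimal stand-in, not the authors' trained network: the soft-threshold replaces the learned analysis prior, and `grad_data` is a hypothetical callback returning the data-fidelity gradient.

```python
import numpy as np

def unrolled_recon(x0, grad_data, num_iters=5, num_prior=3, step=0.5, lam=0.05):
    """Sketch of the unrolled structure: num_iters outer iterations (the N_i
    of the figure), each running the prior loop num_prior times (the k of the
    figure) before a gradient-based data-consistency step.

    grad_data(x) should return the gradient of the data-fidelity term.
    """
    x = x0.copy()
    for i in range(num_iters):          # outer iteration i = 1..N_i
        for k in range(num_prior):      # prior loop runs k times
            # soft-threshold stands in for the learned analysis prior
            x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
        x = x - step * grad_data(x)     # data-consistency update
    return x
```

In LANTERN, `step`, `lam`, and the prior itself would all be trainable and updated by the back-propagation pass shown by the pink arrows.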
Visual comparison of sensitivity to the training data size. From left to right: reconstruction results (top row) from networks trained on different amounts of data with the proposed method, using a 1D random sampling pattern at an acceleration factor of 4. PSNR values are given in the middle row and reconstruction error maps at the bottom
Comparison of the Random Gaussian, TV, DCT and LANTERN initialization modes based on the proposed method with 1D random sampling at an acceleration factor of 4. PSNR values are given under the results
Comparison of k-t SLR, D5C5 and the proposed method with 1D random sampling at an acceleration factor of 4. PSNR values are given under the results
Comparison of k-t SLR, D5C5 and the proposed method with 1D random sampling at an acceleration factor of 5. PSNR values are given under the results
Average quantitative indices over the 50 test datasets versus acceleration factor for various methods, based on 1D random sampling
Average quantitative indices over the 50 test datasets versus acceleration factor for various methods, based on radial sampling
The training and validation loss curves of the proposed model
Comparison of k-t SLR, D5C5 and the proposed method with 2D radial sampling at an acceleration factor of 11. PSNR values are given under the results
Acceleration factors tested. 1D Random: 2×, 3×, 4×, 5×, 7×, 9×, 11×. 2D Radial: 2×, 3×, 4×, 5×, 7×, 9×, 11×, 15×
Quantitative comparison of sensitivity to the training data size. Average quantitative indicator values over the 50 test datasets, reconstructed with networks trained on different amounts of data, using a 1D random sampling pattern at an acceleration factor of 4
| 1D Random 4× | NMSE | PSNR/dB | SSIM | HFEN |
| --- | --- | --- | --- | --- |
| data50 | 0.0413 | 40.8047 | 0.8943 | 0.8333 |
| data60 | 0.0397 | 41.1515 | 0.9000 | 0.7939 |
| data80 | 0.0388 | 41.3589 | 0.9034 | 0.7729 |
| data100 | 0.0385 | 41.4391 | 0.9043 | 0.7633 |
| data120 | 0.0386 | 41.4402 | 0.9035 | 0.7685 |
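The NMSE and PSNR metrics reported in these tables can be sketched as below. The exact conventions are assumptions (e.g., whether the PSNR peak is the maximum of the reference image); HFEN, which additionally requires a Laplacian-of-Gaussian filter, is omitted.

```python
import numpy as np

def nmse(ref, rec):
    # Normalized mean squared error: ||rec - ref||^2 / ||ref||^2
    return np.linalg.norm(rec - ref) ** 2 / np.linalg.norm(ref) ** 2

def psnr(ref, rec, peak=None):
    # Peak signal-to-noise ratio in dB; peak defaults to the reference maximum
    peak = np.abs(ref).max() if peak is None else peak
    mse = np.mean(np.abs(rec - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Lower NMSE and HFEN and higher PSNR and SSIM indicate better reconstruction quality.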
Quantitative comparison of sensitivity to the initialization. Average quantitative indicator values over the 50 test datasets, reconstructed with networks trained with different initializations, using a 1D random sampling pattern at an acceleration factor of 4
| 1D Random 4× (average) | PSNR | HFEN |
| --- | --- | --- |
| Gaussian | 39.8089 | 0.9459 |
| TV | 40.5884 | 0.8514 |
| DCT | 40.9971 | 0.8064 |
| LANTERN | 41.4391 | 0.7633 |
Average reconstruction metrics with standard deviation over the 50 test datasets for various methods with 1D random sampling at different acceleration factors
| Methods | PSNR (7×) | SSIM (7×) | HFEN (7×) | PSNR (11×) | SSIM (11×) | HFEN (11×) |
| --- | --- | --- | --- | --- | --- | --- |
| Zero-filling | 29.14$\pm$2.20 | 0.57$\pm$0.04 | 2.73$\pm$0.65 | 27.58$\pm$2.08 | 0.51$\pm$0.04 | 3.12$\pm$0.74 |
| k-t SLR | 33.50$\pm$2.70 | 0.77$\pm$0.03 | 1.83$\pm$0.53 | 32.44$\pm$2.61 | 0.73$\pm$0.03 | 2.01$\pm$0.63 |
| D5C5 | 36.76$\pm$2.00 | 0.78$\pm$0.03 | 1.40$\pm$0.33 | 35.22$\pm$2.00 | 0.73$\pm$0.03 | 1.82$\pm$0.50 |
| Proposed | 37.48$\pm$2.45 | 0.82$\pm$0.02 | 1.31$\pm$0.36 | 35.40$\pm$2.60 | 0.77$\pm$0.03 | 1.67$\pm$0.53 |
Average reconstruction metrics with standard deviation over the 50 test datasets for various methods with radial sampling at different acceleration factors
| Methods | PSNR/dB (11×) | SSIM (11×) | HFEN (11×) | PSNR/dB (15×) | SSIM (15×) | HFEN (15×) |
| --- | --- | --- | --- | --- | --- | --- |
| Zero-filling | 22.269$\pm$1.37 | 0.345$\pm$0.06 | 5.198$\pm$0.72 | 20.153$\pm$1.27 | 0.275$\pm$0.05 | 5.986$\pm$0.67 |
| k-t SLR | 31.961$\pm$2.34 | 0.718$\pm$0.03 | 2.179$\pm$0.51 | 31.518$\pm$2.36 | 0.707$\pm$0.04 | 2.229$\pm$0.54 |
| D5C5 | 34.954$\pm$2.08 | 0.701$\pm$0.03 | 1.735$\pm$0.42 | 34.248$\pm$2.04 | 0.677$\pm$0.03 | 1.907$\pm$0.45 |
| Proposed | 38.874$\pm$2.28 | 0.831$\pm$0.03 | 1.019$\pm$0.26 | 38.115$\pm$2.23 | 0.808$\pm$0.03 | 1.164$\pm$0.30 |

