
Diffusion posterior sampling for magnetic resonance imaging

*Corresponding author: Chao Wang

The first author is supported by the 2024 Youth Talents Support Program of Capital Normal University.

Abstract
Score-based diffusion models have been used to solve inverse problems in magnetic resonance imaging (MRI) through diffusion posterior sampling. Being training-free, they are particularly promising over supervised learning when the experimental configuration of MRI involves an anatomy shift. Diffusion posterior sampling (DPS) alternates between an unconditional generation step and a measurement-guided gradient descent step, the latter derived from an approximation to the intractable measurement likelihood conditioned on the intermediate image estimate. To handle the unknown likelihood, the conventional solution predicts the clean image from the intermediate image via Tweedie's formula. The primary drawback of this approach is the computational expense of back-propagating through the network to compute the gradient. To alleviate this cost, we propose a novel gradient descent iteration that eliminates the need for back-propagation entirely. Compared to the existing annealed Langevin dynamics-based DPS for MRI, our approach offers enhanced reconstruction quality with a cleaner background, as validated by our experiments.

Mathematics Subject Classification: Primary: 68T07, 68U10; Secondary: 94A08.
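To make the distinction concrete, the sketch below contrasts the conventional Tweedie-based guidance gradient, which differentiates through the score network, with a back-propagation-free alternative in the spirit described above. This is a minimal PyTorch-style illustration only: `score_net`, `A`, and `A_adj` are hypothetical names for the trained score model and the undersampled multi-coil Fourier operator with its adjoint, and the exact iteration proposed in the paper may differ.

```python
import torch

def dps_gradient(x_t, t, y, score_net, A, A_adj, sigma_t, backprop=True):
    """Measurement-guidance gradient for one DPS step (illustrative sketch).

    x_t:       intermediate image estimate at noise level sigma_t
    y:         undersampled k-space measurements
    A, A_adj:  forward MRI operator (mask * FFT * coils) and its adjoint
    score_net: trained score network s_theta(x, t) ~ grad log p_t(x)
    """
    if backprop:
        # Conventional DPS: denoise via Tweedie's formula, then
        # differentiate the data fidelity through the score network.
        x_t = x_t.detach().requires_grad_(True)
        x0_hat = x_t + sigma_t**2 * score_net(x_t, t)       # Tweedie (VE SDE)
        loss = torch.sum((y - A(x0_hat)).abs() ** 2)
        return torch.autograd.grad(loss, x_t)[0]            # backprop through net
    else:
        # Back-propagation-free flavor: evaluate Tweedie's estimate without
        # tracking gradients and apply the data-fidelity gradient at that
        # estimate directly, so no gradient ever flows through the network.
        with torch.no_grad():
            x0_hat = x_t + sigma_t**2 * score_net(x_t, t)
            return A_adj(A(x0_hat) - y)
```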

Figure 1.  The masks tested in our experiments. (a) Mask pattern I: equispaced vertical lines with acceleration factor $ R = 4 $; (b) mask pattern II: equispaced horizontal lines with acceleration factor $ R = 4 $; (c) mask pattern III: random vertical lines with acceleration factor $ R = 4 $; (d) mask pattern I with acceleration factor $ R = 12 $; (e) mask pattern IV: the Poisson mask with acceleration factor $ R = 9.7 $ provided with the Stanford knee dataset. Mask IV is of size $ 256\times 320 $; the other masks are of size $ 384\times 384 $.
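For concreteness, line masks like patterns I-III can be generated as below. This is a minimal NumPy sketch under assumed conventions (no fully-sampled autocalibration region; acceleration $ R $ realized by keeping every $ R $-th line, or a $ 1/R $ fraction of random lines); the paper's actual masks may include calibration lines.

```python
import numpy as np

def line_mask(shape, R, vertical=True, random=False, seed=0):
    """Cartesian undersampling mask with acceleration factor R (sketch).

    vertical=True keeps whole columns (vertical lines), else whole rows;
    random=True draws line positions uniformly instead of equispacing.
    """
    H, W = shape
    n_lines = W if vertical else H
    mask_1d = np.zeros(n_lines, dtype=bool)
    if random:
        rng = np.random.default_rng(seed)
        keep = rng.choice(n_lines, size=n_lines // R, replace=False)
        mask_1d[keep] = True
    else:
        mask_1d[::R] = True  # keep every R-th line
    return np.tile(mask_1d, (H, 1)) if vertical else np.tile(mask_1d[:, None], (1, W))

mask_I   = line_mask((384, 384), R=4, vertical=True)                 # pattern I
mask_II  = line_mask((384, 384), R=4, vertical=False)                # pattern II
mask_III = line_mask((384, 384), R=4, vertical=True, random=True)    # pattern III
```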

Figure 2.  Comparison of reconstructions from different methods on in-distribution NYU fastMRI brain images. All diffusion posterior sampling methods run 500 steps for a fair comparison. The last column shows the reference image (the fully-sampled MVUE) and the colorbar. The difference map between each method and the reference is shown below its reconstruction.
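The fully-sampled MVUE reference is the sensitivity-weighted coil combination $ \hat{x} = \sum_c S_c^* x_c / \sum_c |S_c|^2 $. A minimal NumPy sketch, assuming centered k-space and coil sensitivity maps (e.g., estimated by ESPIRiT [39]); FFT centering conventions vary between pipelines.

```python
import numpy as np

def mvue(kspace, smaps, eps=1e-12):
    """Fully-sampled MVUE coil combination (sketch).

    kspace: (C, H, W) complex multi-coil k-space (centered)
    smaps:  (C, H, W) complex coil sensitivity maps
    """
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), norm="ortho"),
        axes=(-2, -1))
    num = np.sum(np.conj(smaps) * coil_imgs, axis=0)   # S^H applied coil-wise
    den = np.sum(np.abs(smaps) ** 2, axis=0)           # sum of squared maps
    return num / np.maximum(den, eps)
```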

Figure 3.  Comparison of reconstructions from different methods on out-of-distribution datasets. All diffusion posterior sampling methods run 500 steps for a fair comparison. The last column shows the reference image (the fully-sampled MVUE) and the colorbar. The difference map between each method and the reference is shown below its reconstruction.

Figure 4.  Comparison of reconstructions from different methods on noisy measurements. The difference map between each method and the reference is shown next to its reconstruction.
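The noisy setting follows the standard accelerated MRI acquisition model $ y = \mathcal{M}\mathcal{F}\mathcal{S}x + n $, where $ \mathcal{M} $ is the sampling mask, $ \mathcal{F} $ the Fourier transform, $ \mathcal{S} $ the coil sensitivities, and $ n $ complex Gaussian noise. A sketch of this forward operator (the noise level `sigma` is a free parameter here, not the value used in the experiments):

```python
import numpy as np

def forward_op(x, smaps, mask, sigma=0.0, rng=None):
    """Noisy undersampled multi-coil measurement y = M F S x + n (sketch)."""
    coils = smaps * x                                   # apply S, shape (C, H, W)
    ksp = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(coils, axes=(-2, -1)), norm="ortho"),
        axes=(-2, -1))                                  # apply F (centered)
    y = mask * ksp                                      # apply M
    if sigma > 0:
        rng = rng if rng is not None else np.random.default_rng()
        n = sigma * (rng.standard_normal(y.shape)
                     + 1j * rng.standard_normal(y.shape)) / np.sqrt(2)
        y = y + mask * n                                # complex Gaussian noise
    return y
```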

Table 1.  The statistics of the testing datasets

Dataset         # of images   Size
fastMRI brain   16            384×384
fastMRI knee    75            320×320
Stanford knee   24            256×320
Abdomen         34            320×158
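The tables below report PSNR (in dB) and SSIM against the fully-sampled MVUE reference, where $ \mathrm{PSNR} = 10\log_{10}(\mathrm{range}^2/\mathrm{MSE}) $. A minimal PSNR implementation is sketched here; the exact data-range convention used in the paper is an assumption, and SSIM is available as `skimage.metrics.structural_similarity`.

```python
import numpy as np

def psnr(ref, rec):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean(np.abs(ref - rec) ** 2)
    data_range = np.abs(ref).max() - np.abs(ref).min()  # assumed convention
    return 10.0 * np.log10(data_range ** 2 / mse)
```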

Table 2.  Quantitative results for in-distribution evaluations on the NYU fastMRI brain dataset. Settings are denoted (mask pattern, acceleration factor $ R $).

Steps  Method     (I, 4)         (I, 8)         (I, 12)        (II, 4)        (III, 4)
                  PSNR   SSIM    PSNR   SSIM    PSNR   SSIM    PSNR   SSIM    PSNR   SSIM
2311   Jalal      35.41  0.788   33.54  0.794   32.69  0.788   35.06  0.816   34.19  0.773
500    Jalal      35.31  0.789   32.49  0.783   31.28  0.775   34.84  0.809   33.53  0.767
500    Config 0   35.68  0.810   33.43  0.820   32.34  0.817   33.01  0.838   34.13  0.787
       Config 1   32.04  0.700   27.66  0.589   27.34  0.597   33.88  0.757   31.70  0.710
       Config 2   39.77  0.846   32.89  0.814   32.79  0.812   40.02  0.854   37.07  0.838
       Config 3   38.10  0.886   33.11  0.841   32.21  0.837   38.19  0.894   36.31  0.882
       Config 4   40.02  0.904   34.64  0.861   34.40  0.853   40.27  0.916   37.99  0.899

Table 3.  Quantitative results for out-of-distribution evaluations on three datasets

Steps  Method     Abdomen        Stanford knee   fastMRI knee
                  PSNR   SSIM    PSNR   SSIM     PSNR   SSIM
2311   Jalal      36.58  0.878   32.81  0.794    31.33  0.734
500    Jalal      36.84  0.883   34.21  0.834    31.38  0.738
500    Config 0   37.50  0.907   35.07  0.864    32.04  0.768
       Config 1   33.22  0.746   29.89  0.634    27.84  0.578
       Config 2   38.78  0.910   35.02  0.794    32.45  0.741
       Config 3   37.97  0.925   33.43  0.791    32.02  0.749
       Config 4   38.80  0.938   34.59  0.810    32.49  0.773

Table 4.  Quantitative results on noisy measurements for both in-distribution and out-of-distribution datasets. Settings are denoted (mask pattern, acceleration factor $ R $).

Dataset   fastMRI brain                                             fastMRI knee   Stanford knee   Abdomen
Setting   (I, 8)        (I, 12)       (II, 12)      (III, 12)       (II, 4)        (IV, 9.97)      (I, 4)
          PSNR   SSIM   PSNR   SSIM   PSNR   SSIM   PSNR   SSIM     PSNR   SSIM    PSNR   SSIM     PSNR   SSIM
          32.01  0.759  30.98  0.758  32.32  0.771  27.00  0.711    35.78  0.841   32.05  0.787    31.24  0.728
          33.79  0.835  33.22  0.826  33.28  0.827  29.78  0.793    38.29  0.918   34.09  0.769    32.41  0.756
References

[1] H. K. Aggarwal, M. P. Mani and M. Jacob, MoDL: Model-based deep learning architecture for inverse problems, IEEE Transactions on Medical Imaging, 38 (2018), 394-405. doi: 10.1109/TMI.2018.2865356.
[2] J.-F. Cai, B. Dong, S. Osher and Z. Shen, Image restoration: Total variation, wavelet frames, and beyond, Journal of the American Mathematical Society, 25 (2012), 1033-1089. doi: 10.1090/S0894-0347-2012-00740-1.
[3] A. Chambolle, V. Caselles, D. Cremers, M. Novaga and T. Pock, An introduction to total variation for image analysis, Theoretical Foundations and Numerical Methods for Sparse Recovery, 9 (2010), 236-340. doi: 10.1515/9783110226157.263.
[4] T. Chan, S. Esedoglu, F. Park and A. Yip, Total variation image restoration: Overview and recent developments, Handbook of Mathematical Models in Computer Vision, (2006), 17-31.
[5] H. Chung, J. Kim, M. T. Mccann, M. L. Klasky and J. C. Ye, Diffusion posterior sampling for general noisy inverse problems, The Eleventh International Conference on Learning Representations, (2022).
[6] H. Chung, B. Sim, D. Ryu and J. C. Ye, Improving diffusion models for inverse problems using manifold constraints, Advances in Neural Information Processing Systems, 35 (2022), 25683-25696.
[7] H. Chung and J. C. Ye, Score-based diffusion models for accelerated MRI, Medical Image Analysis, 80 (2022), 102479. doi: 10.1016/j.media.2022.102479.
[8] M. Z. Darestani, A. S. Chaudhari and R. Heckel, Measuring robustness in deep learning based compressive sensing, International Conference on Machine Learning, (2021), 2433-2444.
[9] P. Dhariwal and A. Nichol, Diffusion models beat GANs on image synthesis, Advances in Neural Information Processing Systems, 34 (2021), 8780-8794.
[10] B. T. Feng, J. Smith, M. Rubinstein, H. Chang, K. L. Bouman and W. T. Freeman, Score-based diffusion models as principled priors for inverse imaging, International Conference on Computer Vision (ICCV), (2023).
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, Generative adversarial networks, Communications of the ACM, 63 (2020), 139-144. doi: 10.1145/3422622.
[12] A. Graikos, N. Malkin, N. Jojic and D. Samaras, Diffusion models as plug-and-play priors, Advances in Neural Information Processing Systems, 35 (2022), 14715-14728.
[13] R. Heckel and P. Hand, Deep decoder: Concise image representations from untrained non-convolutional networks, International Conference on Learning Representations, (2018).
[14] J. Ho, A. Jain and P. Abbeel, Denoising diffusion probabilistic models, Advances in Neural Information Processing Systems, 33 (2020), 6840-6851.
[15] A. Jalal, M. Arvinte, G. Daras, E. Price, A. G. Dimakis and J. Tamir, Robust compressed sensing MRI with deep generative priors, Advances in Neural Information Processing Systems, 34 (2021), 14938-14954.
[16] Z. Kadkhodaie and E. Simoncelli, Stochastic solutions for linear inverse problems using the prior implicit in a denoiser, Advances in Neural Information Processing Systems, 34 (2021), 13242-13254.
[17] B. Kawar, M. Elad, S. Ermon and J. Song, Denoising diffusion restoration models, Advances in Neural Information Processing Systems, (2022).
[18] D. P. Kingma and M. Welling, Auto-encoding variational Bayes, arXiv preprint, arXiv: 1312.6114.
[19] D. P. Kingma and P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, Advances in Neural Information Processing Systems, 31 (2018).
[20] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche and A. Ashok, ReconNet: Non-iterative reconstruction of images from compressively sensed measurements, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 449-458.
[21] J. Li and C. Wang, DiffFPR: Diffusion prior for oversampled Fourier phase retrieval, Proceedings of the 41st International Conference on Machine Learning, (2024).
[22] J. Li, W. Wang and H. Ji, Self-supervised deep learning for image reconstruction: A Langevin Monte Carlo approach, SIAM Journal on Imaging Sciences, 16 (2023), 2247-2284. doi: 10.1137/23M1548025.
[23] M. Lustig, D. Donoho and J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magnetic Resonance in Medicine, 58 (2007), 1182-1195. doi: 10.1002/mrm.21391.
[24] M. Mardani, E. Gong, J. Y. Cheng, S. S. Vasanawala, G. Zaharchuk, L. Xing and J. M. Pauly, Deep generative adversarial neural networks for compressive sensing MRI, IEEE Transactions on Medical Imaging, 38 (2018), 167-179. doi: 10.1109/TMI.2018.2858752.
[25] T. Meinhardt, M. Moller, C. Hazirbas and D. Cremers, Learning proximal operators: Using denoising networks for regularizing inverse imaging problems, International Conference on Computer Vision (ICCV), (2017), 1781-1790.
[26] X. Meng and Y. Kabashima, Diffusion model based posterior sampling for noisy linear inverse problems, arXiv preprint, arXiv: 2211.12343.
[27] B. Ozturkler, M. Mardani, A. Vahdat, J. Kautz and J. M. Pauly, Regularization by denoising diffusion process for MRI reconstruction, Medical Imaging with Deep Learning, Short Paper Track, (2023).
[28] K. P. Pruessmann, M. Weiger, M. B. Scheidegger and P. Boesiger, SENSE: Sensitivity encoding for fast MRI, Magnetic Resonance in Medicine, 42 (1999), 952-962.
[29] S. Ravishankar and Y. Bresler, MR image reconstruction from highly undersampled k-space data by dictionary learning, IEEE Transactions on Medical Imaging, 30 (2010), 1028-1041. doi: 10.1109/TMI.2010.2090538.
[30] S. Ravishankar and J. A. Fessler, Data-driven models and approaches for imaging, Mathematics in Imaging, (2017), MW2C-4.
[31] E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang and W. Yin, Plug-and-play methods provably converge with properly trained denoisers, International Conference on Machine Learning, (2019), 5546-5557.
[32] D. K. Sodickson and W. J. Manning, Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays, Magnetic Resonance in Medicine, 38 (1997), 591-603. doi: 10.1002/mrm.1910380414.
[33] J. Song, A. Vahdat, M. Mardani and J. Kautz, Pseudoinverse-guided diffusion models for inverse problems, International Conference on Learning Representations, (2022).
[34] Y. Song and S. Ermon, Generative modeling by estimating gradients of the data distribution, Advances in Neural Information Processing Systems, 32 (2019).
[35] Y. Song and S. Ermon, Improved techniques for training score-based generative models, Advances in Neural Information Processing Systems, 33 (2020), 12438-12448.
[36] Y. Song, L. Shen, L. Xing and S. Ermon, Solving inverse problems in medical imaging with score-based generative models, International Conference on Learning Representations, (2021).
[37] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon and B. Poole, Score-based generative modeling through stochastic differential equations, International Conference on Learning Representations, (2020).
[38] A. Sriram, J. Zbontar, T. Murrell, C. L. Zitnick, A. Defazio and D. K. Sodickson, GrappaNet: Combining parallel imaging with deep learning for multi-coil MRI reconstruction, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2020), 14315-14322.
[39] M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M. Pauly, S. S. Vasanawala and M. Lustig, ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA, Magnetic Resonance in Medicine, 71 (2014), 990-1001. doi: 10.1002/mrm.24751.
[40] D. Ulyanov, A. Vedaldi and V. Lempitsky, Deep image prior, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), 9446-9454.
[41] P. Vincent, A connection between score matching and denoising autoencoders, Neural Computation, 23 (2011), 1661-1674. doi: 10.1162/NECO_a_00142.
[42] Y. Xu, M. Deng, X. Cheng, Y. Tian, Z. Liu and T. Jaakkola, Restart sampling for improving generative processes, Advances in Neural Information Processing Systems, 36 (2023), 76806-76838.
[43] J. Zhang and B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), 1828-1837.
[44] K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool and R. Timofte, Plug-and-play image restoration with deep denoiser prior, arXiv preprint, arXiv: 2008.13751.
[45] Y. Zhu, K. Zhang, J. Liang, J. Cao, B. Wen, R. Timofte and L. Van Gool, Denoising diffusion models for plug-and-play image restoration, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, (2023), 1219-1229.
