doi: 10.3934/ipi.2022022

Bayesian neural network priors for edge-preserving inversion

Chen Li, Matthew Dunlop and Georg Stadler
Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012, USA

Received: December 2021. Revised: March 2022. Early access: April 2022.

Fund Project: Partially supported by the US National Science Foundation under grants #1723211 and #1913129

We consider Bayesian inverse problems wherein the unknown state is assumed a priori to be a function with discontinuous structure. A class of prior distributions based on the output of neural networks with heavy-tailed weights is introduced, motivated by existing results concerning the infinite-width limit of such networks. We show theoretically that samples from such priors have desirable discontinuous-like properties even when the network width is finite, making them appropriate for edge-preserving inversion. Numerically, we consider deconvolution problems on one- and two-dimensional spatial domains to illustrate the effectiveness of these priors; MAP estimation, dimension-robust MCMC sampling, and ensemble-based approximations are used to probe the posterior distribution. The accuracy of point estimates is shown to exceed that obtained from non-heavy-tailed priors, and uncertainty estimates are shown to provide more useful qualitative information.
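To illustrate the kind of prior the abstract describes, the following minimal sketch draws one realization of a finite-width tanh network with i.i.d. Cauchy (heavy-tailed) or Gaussian weights. The function name, default widths, and the 1/width scaling are illustrative assumptions, not the paper's code; the paper's normalization for stable weights may differ.

```python
import numpy as np

def sample_prior_realization(x, widths=(80, 80, 80), weight_dist="cauchy", seed=0):
    """Draw one realization of a random tanh network u: R -> R.

    Weights/biases are i.i.d. Cauchy or Gaussian; the 1/width scaling
    is an illustrative stand-in for the usual wide-network normalization.
    """
    rng = np.random.default_rng(seed)
    draw = (lambda s: rng.standard_cauchy(s)) if weight_dist == "cauchy" \
           else (lambda s: rng.standard_normal(s))
    h = np.asarray(x, dtype=float).reshape(-1, 1)  # inputs as a column
    d_in = 1
    for d_out in widths:
        W = draw((d_in, d_out)) / d_in             # heavy-tailed or Gaussian weights
        b = draw(d_out)
        h = np.tanh(h @ W + b)                     # bounded hidden activations
        d_in = d_out
    W = draw((d_in, 1)) / d_in                     # linear output layer
    return (h @ W).ravel()

x = np.linspace(-1.0, 1.0, 200)
u_cauchy = sample_prior_realization(x, weight_dist="cauchy")
u_gauss = sample_prior_realization(x, weight_dist="gaussian")
```

Plotting `u_cauchy` against `u_gauss` reproduces the qualitative contrast of Figures 1–2: Cauchy weights yield near-piecewise-constant draws with sharp transitions, while Gaussian weights yield smooth draws.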

Citation: Chen Li, Matthew Dunlop, Georg Stadler. Bayesian neural network priors for edge-preserving inversion. Inverse Problems and Imaging, doi: 10.3934/ipi.2022022
References:
[1]

L. Ardizzone, J. Kruse, C. Rother and U. Köthe, Analyzing inverse problems with invertible neural networks, In International Conference on Learning Representations, 2019, https://openreview.net/forum?id=rJed6j0cKX.

[2]

M. Asim, M. Daniels, O. Leong, A. Ahmed and P. Hand, Invertible generative models for inverse problems: Mitigating representation error and dataset bias, In Proceedings of the 37th International Conference on Machine Learning, (eds. H. Daumé III and A. Singh), Proceedings of Machine Learning Research, PMLR, 119 (2020), 399–409.

[3]

A. Beskos, M. Girolami, S. Lan, P. E. Farrell and A. M. Stuart, Geometric MCMC for infinite-dimensional inverse problems, J. Comput. Phys., 335 (2017), 327-351.  doi: 10.1016/j.jcp.2016.12.041.

[4]

H. Bölcskei, P. Grohs, G. Kutyniok and P. Petersen, Optimal approximation with sparsely connected deep neural networks, SIAM J. Math. Data Sci., 1 (2019), 8-45.  doi: 10.1137/18M118709X.

[5]

S. Borak, W. Härdle and R. Weron, Stable distributions, in Statistical Tools for Finance and Insurance, (2005), 21–44. doi: 10.1007/3-540-27395-6_1.

[6]

T. Bui-Thanh, O. Ghattas, J. Martin and G. Stadler, A computational framework for infinite-dimensional Bayesian inverse problems Part I: The linearized case, with application to global seismic inversion, SIAM J. Sci. Comput., 35 (2013), 2494-2523.  doi: 10.1137/12089586X.

[7]

N. K. Chada, S. Lasanen and L. Roininen, Posterior convergence analysis of $\alpha$-stable sheets, 2019, arXiv: 1907.03086.

[8]

N. K. Chada, L. Roininen and J. Suuronen, Cauchy Markov random field priors for Bayesian inversion, Stat. Comput., 32 (2022), 33.  doi: 10.1007/s11222-022-10089-z.

[9]

A. Chambolle, M. Novaga, D. Cremers and T. Pock, An introduction to total variation for image analysis, In Theoretical Foundations and Numerical Methods for Sparse Recovery, 2010.

[10]

V. Chen, M. M. Dunlop, O. Papaspiliopoulos and A. M. Stuart, Dimension-robust MCMC in Bayesian inverse problems, 2019, arXiv: 1803.03344.

[11]

S. L. Cotter, M. Dashti and A. M. Stuart, Approximation of Bayesian inverse problems for PDEs, SIAM J. Numer. Anal., 48 (2010), 322-345.  doi: 10.1137/090770734.

[12]

S. L. Cotter, G. O. Roberts, A. M. Stuart and D. White, MCMC methods for functions: Modifying old algorithms to make them faster, Statist. Sci., 28 (2013), 424-446.  doi: 10.1214/13-STS421.

[13]

M. Dashti, S. Harris and A. Stuart, Besov priors for Bayesian inverse problems, Inverse Probl. Imaging, 6 (2012), 183-200.  doi: 10.3934/ipi.2012.6.183.

[14]

A. G. de G. Matthews, J. Hron, M. Rowland, R. E. Turner and Z. Ghahramani, Gaussian process behaviour in wide deep neural networks, In International Conference on Learning Representations, 2018, https://openreview.net/forum?id=H1-nGgWC-.

[15]

R. Der and D. Lee, Beyond Gaussian processes: On the distributions of infinite networks, In Advances in Neural Information Processing Systems, (eds. Y. Weiss, B. Schölkopf and J. C. Platt), MIT Press, (2006), 275–282, http://papers.nips.cc/paper/2869-beyond-gaussian-processes-on-the-distributions-of-infinite-networks.pdf.

[16]

J. N. Franklin, Well-posed stochastic extensions of ill-posed linear problems, J. Math. Anal. Appl., 31 (1970), 682-716.  doi: 10.1016/0022-247X(70)90017-X.

[17]

B. V. Gnedenko and A. N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables, Addison-Wesley Publishing Co., Inc., Cambridge, Mass., 1954.

[18]

G. González, V. Kolehmainen and A. Seppänen, Isotropic and anisotropic total variation regularization in electrical impedance tomography, Comput. Math. Appl., 74 (2017), 564-576.  doi: 10.1016/j.camwa.2017.05.004.

[19]

M. Hairer, A. M. Stuart and S. J. Vollmer, Spectral gaps for a Metropolis–Hastings algorithm in infinite dimensions, Ann. Appl. Probab., 24 (2014), 2455-2490.  doi: 10.1214/13-AAP982.

[20]

A. Immer, M. Korzepa and M. Bauer, Improving predictions of Bayesian neural nets via local linearization, In AISTATS, (2021), 703–711, http://proceedings.mlr.press/v130/immer21a.html.

[21]

J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Applied Mathematical Sciences, 160. Springer-Verlag, New York, 2005, https://cds.cern.ch/record/1338003.

[22]

J. Kaipio and E. Somersalo, Statistical inverse problems: Discretization, model reduction and inverse crimes, J. Comput. Appl. Math., 198 (2007), 493-504.  doi: 10.1016/j.cam.2005.09.027.

[23]

B. Lakshminarayanan, A. Pritzel and C. Blundell, Simple and scalable predictive uncertainty estimation using deep ensembles, In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, (2017), 6405–6416.

[24]

M. Lassas, E. Saksman and S. Siltanen, Discretization-invariant Bayesian inversion and Besov space priors, Inverse Probl. Imaging, 3 (2009), 87-122.  doi: 10.3934/ipi.2009.3.87.

[25]

M. Lassas and S. Siltanen, Can one use total variation prior for edge-preserving Bayesian inversion?, Inverse Problems, 20 (2004), 1537-1563.  doi: 10.1088/0266-5611/20/5/013.

[26]

M. Markkanen, L. Roininen, J. M. J. Huttunen and S. Lasanen, Cauchy difference priors for edge-preserving Bayesian inversion, J. Inverse Ill-Posed Probl., 27 (2019), 225-240.  doi: 10.1515/jiip-2017-0048.

[27]

R. M. Neal, Priors for infinite networks, Bayesian Learning for Neural Networks, 118 (1996), 29-53.  doi: 10.1007/978-1-4612-0745-0_2.

[28]

J. Nocedal and S. J. Wright, Numerical Optimization, 2$^{nd}$ edition, Springer Series in Operations Research and Financial Engineering. Springer, New York, 2006.

[29]

R. Rahaman and A. H. Thiery, Uncertainty quantification and deep ensembles, 2020, arXiv: 2007.08792.

[30]

C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning), MIT Press, Cambridge, MA, 2006.

[31]

V. K. Rohatgi, An Introduction to Probability and Statistics, Wiley, New York, 1976.

[32]

C. Schillings, B. Sprungk and P. Wacker, On the convergence of the Laplace approximation and noise-level-robustness of Laplace-based Monte Carlo methods for Bayesian inverse problems, Numer. Math., 145 (2020), 915-971.  doi: 10.1007/s00211-020-01131-1.

[33]

A. M. Stuart, Inverse problems: A Bayesian perspective, Acta Numer., 19 (2010), 451-559.  doi: 10.1017/S0962492910000061.

[34]

T. J. Sullivan, Well-posed Bayesian inverse problems and heavy-tailed stable quasi-Banach space priors, Inverse Probl. Imaging, 11 (2017), 857-874.  doi: 10.3934/ipi.2017040.

[35]

C. K. I. Williams, Computing with infinite networks, In Proceedings of the 9th International Conference on Neural Information Processing Systems, NIPS'96, MIT Press, Cambridge, MA, USA, (1996), 295–301.

[36]

Z.-H. Zhou, J. Wu and W. Tang, Ensembling neural networks: Many could be better than all, Artificial Intelligence, 137 (2002), 239-263.  doi: 10.1016/S0004-3702(02)00190-X.


Figure 1.  Comparison between outputs on $ [-1, 1]^2 $ of Bayesian neural network priors with three hidden layers. Shown are realizations of networks with $ \tanh(\cdot) $ as activation function and Cauchy weights (top) and Gaussian weights (bottom)
Figure 2.  Realizations of the neural network with two different weight distributions on the interval $ [-1, 1] $. Shown are realizations with Cauchy weights (a) and Gaussian weights (b)
Figure 3.  Setup for Problem 1. Shown in (a) are the truth model and synthetic observations. As a reference, shown in (b) is the result of Gaussian process regression with the covariance operator $ 0.25(I - 10 \Delta)^{-2} $
Figure 4.  Shown are reconstructions using different initializations obtained through optimization for Problem 1. The results correspond to the Gaussian (a), Cauchy-Gaussian (b), and fully Cauchy (c) weights
Figure 5.  Shown are the means and pointwise standard deviations obtained with the ensemble method (a) and the last-layer Gaussian regression method (b). Both results are for Cauchy-Gaussian priors for Problem 1
Figure 6.  Shown in (a, b, c) are samples and in (d, e, f) the uncertainty of posterior distributions with different neural network priors for Problem 1. The plots correspond to the Gaussian (a, d), Cauchy-Gaussian (b, e), and Cauchy (c, f) neural network priors
Figure 7.  Shown in the first row are the truth $ u^\dagger $ and blurred models used for Problem 2. We show minimizers obtained using the optimization method with different initializations. Results are shown for Gaussian weights (second row), Cauchy-Gaussian weights (third row), and Cauchy weights (fourth row)
Figure 8.  Results obtained with ensemble method for Problem 2. Shown are the ensemble means (top row) and standard deviation (bottom row) obtained with Gaussian weights (left), Cauchy-Gaussian weights (middle), and fully Cauchy weights (right)
Figure 9.  Results for last-layer Gaussian regression, Problem 2. Shown are the standard deviations with last-layer base functions from a pre-trained network. The figures are for networks with Gaussian (left) and Cauchy-Gaussian (right) weights
Figure 10.  MCMC sampling results for Problem 2. Shown are the means (top row) and standard deviations (bottom row) for neural network priors with Gaussian weights (left), Cauchy-Gaussian weights (middle), and fully Cauchy weights (right)
Figure 11.  Comparison between outputs on $ [-1, 1]^2 $ of Bayesian neural network priors with different widths of the last hidden layer. Shown in each column is a realization of $ [80, 80, D_3] $ networks, where $ D_3 = 5, 50, 500 $ and $ 5000 $, for Cauchy weights (top row) and Gaussian weights (bottom row). Columns use the same random seed for Cauchy and Gaussian weights
Figure 12.  Comparison between outputs on $ [-1, 1]^2 $ of Bayesian neural network priors with Cauchy weights and network widths $ [80, 80, 1000] $. The top row shows random draws from networks with full weight matrices and the bottom row from networks with $ 5 $ blocks, i.e., $ 5 $ blocks along the diagonal contain nonzero weights while the rest of the weight matrices are zero. The number of weights in the sparse network is about $ 20\% $ of the number of weights in the fully connected network
Table 1.  Relative $ L^1 $-error of reconstructions obtained using the ensemble method with Gaussian, Cauchy-Gaussian and Cauchy priors in Problem 1
Regularizations Gaussian Cauchy-Gaussian Cauchy
$ \|\mathbb E \left[ u \right] - u^{\dagger}\|_{L^1}/\|u^{\dagger}\|_{L^1} $ 8.35 5.90 5.53
$ \mathbb E [ \|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1} $ 8.44 5.91 5.56
$ {\rm{Std}} [\|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1} $ 0.10 0.11 0.13
Table 2.  Relative $ L^1 $-error of samples computed by pCN with Gaussian, Cauchy-Gaussian and Cauchy priors in Problem 1
Neural network prior Gaussian Cauchy-Gaussian Cauchy
$ \|\mathbb E \left[ u \right] - u^{\dagger}\|_{L^1}/\|u^{\dagger}\|_{L^1} $ 8.14 5.66 4.74
$ \mathbb E [ \|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1} $ 8.57 6.45 5.63
$ {\rm{Std}} [\|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1} $ 0.63 0.57 0.73
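The pCN results in Table 2 come from the dimension-robust function-space MCMC mentioned in the abstract. A generic whitened pCN accept/reject step can be sketched as follows; this is a minimal sketch under stated assumptions (function names are illustrative, and heavy-tailed priors would require pushing the Gaussian latents through a suitable transform before the forward model), not the authors' implementation.

```python
import numpy as np

def pcn_sample(neg_log_likelihood, dim, n_samples=5000, beta=0.2, seed=0):
    """Preconditioned Crank-Nicolson sampler with a standard Gaussian
    reference measure on the latent (whitened) variables.

    neg_log_likelihood maps a latent vector to Phi(u); the proposal
    v = sqrt(1 - beta^2) u + beta xi preserves the Gaussian reference,
    so the acceptance probability is min(1, exp(Phi(u) - Phi(v))).
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(dim)
    phi = neg_log_likelihood(u)
    samples, accepted = [], 0
    for _ in range(n_samples):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
        phi_v = neg_log_likelihood(v)
        if np.log(rng.uniform()) < phi - phi_v:   # accept with prob min(1, e^{Phi(u)-Phi(v)})
            u, phi = v, phi_v
            accepted += 1
        samples.append(u.copy())
    return np.array(samples), accepted / n_samples
```

Because the proposal leaves the reference measure invariant, the acceptance rate does not degenerate as `dim` grows, which is the sense in which the sampler is dimension-robust.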