Foundations of Data Science, December 2020, 2(4): 487-512. doi: 10.3934/fods.2020022

Certified and fast computations with shallow covariance kernels

Daniel Kressner 1, Jonas Latz 2,*, Stefano Massei 3 and Elisabeth Ullmann 4

1. MATH-ANCHP, École Polytechnique fédérale de Lausanne, 1015 Lausanne, Switzerland
2. Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, United Kingdom
3. Department of Mathematics and Computer Science, Eindhoven University of Technology, 5612 AZ Eindhoven, Netherlands
4. Department of Mathematics, Technical University of Munich, 85748 Garching, Germany

* Corresponding author: Jonas Latz

Received: November 2020. Revised: December 2020. Published: December 2020.

Fund Project: Jonas Latz and Elisabeth Ullmann acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) and the Technische Universität München (TUM) through the TUM International Graduate School of Science and Engineering (IGSSE) within the project 10.02 BAYES. The work of Stefano Massei has been supported by the SNSF research project "Fast algorithms from low-rank updates", grant number 200020_178806. The authors thank the anonymous referees for several comments and suggestions that helped to improve this manuscript.

Many techniques in data science and uncertainty quantification demand efficient tools to handle Gaussian random fields, which are defined in terms of their mean functions and covariance operators. Recently, parameterized Gaussian random fields have gained increased attention due to their greater flexibility. However, especially if the random field is parameterized through its covariance operator, classical random field discretization techniques fail or become inefficient. In this work we introduce and analyze a new and certified algorithm for the low-rank approximation of a parameterized family of covariance operators, which extends the adaptive cross approximation method for symmetric positive definite matrices. The algorithm relies on an affine linear expansion of the covariance operator with respect to the parameters, which needs to be computed in a preprocessing step using, e.g., the empirical interpolation method. We discuss and test our new approach for isotropic covariance kernels, such as Matérn kernels. The numerical results demonstrate the advantages of our approach in terms of computational time and confirm that the proposed algorithm provides the basis of a fast sampling procedure for parameter-dependent Gaussian random fields.
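For orientation, below is a minimal sketch of the classical adaptive cross approximation (equivalently, pivoted Cholesky with diagonal pivoting) of a single symmetric positive definite matrix, the method that the paper's certified algorithm extends to parameterized families. The function name aca_spd, the tolerance handling, and the test matrix are illustrative assumptions, not the authors' implementation; the trace of the residual diagonal serves as the computable, certified error measure.

```python
import numpy as np

def aca_spd(get_column, diag, tol=1e-8, max_rank=200):
    """Adaptive cross approximation (pivoted Cholesky) of an SPD matrix A,
    returning a factor L with A ~ L @ L.T.

    get_column(j) returns A[:, j]; diag holds np.diag(A). The sum of the
    residual diagonal equals trace(A - L @ L.T), a certified error bound."""
    d = np.array(diag, dtype=float)
    vecs = []
    while len(vecs) < max_rank and d.sum() > tol:
        j = int(np.argmax(d))              # pivot: largest residual diagonal entry
        col = np.array(get_column(j), dtype=float)
        for v in vecs:                     # subtract the previous rank-1 terms
            col -= v[j] * v
        vecs.append(col / np.sqrt(d[j]))
        d = np.maximum(d - vecs[-1] ** 2, 0.0)   # update the residual diagonal
    return np.column_stack(vecs) if vecs else np.zeros((len(d), 0))

# Usage on a small Gaussian covariance matrix on 200 points in [0, 1]:
x = np.linspace(0, 1, 200)
A = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)
L = aca_spd(lambda j: A[:, j], np.diag(A), tol=1e-10)
print(L.shape[1], np.trace(A - L @ L.T))   # modest rank, tiny trace residual
```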

Citation: Daniel Kressner, Jonas Latz, Stefano Massei, Elisabeth Ullmann. Certified and fast computations with shallow covariance kernels. Foundations of Data Science, 2020, 2(4): 487-512. doi: 10.3934/fods.2020022

Figure 1.  Samples of Gaussian processes on $ [0, 1] \times [0, 1] $. Each row contains 6 independent samples. The first two rows show mean-zero processes with a Matérn covariance kernel of smoothness $ 2.5 $ and correlation length $ 0.1 $ (first row) and $ 0.25 $ (second row). The third and fourth rows show mean-zero processes with a Gaussian covariance kernel of correlation length $ 0.1 $ (third row) and $ 0.25 $ (fourth row); see Section 4.1 for details. The processes are discretized as discussed at the beginning of Section 5.1 with $ n = 10^4 $ grid points. The sampling is carried out by computing the matrix square root of the respective covariance matrix
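The following is a minimal sketch of the brute-force sampler described in the caption of Figure 1: assemble the Matérn covariance matrix (smoothness $ \nu = 5/2 $, for which a closed form exists) on a tensor grid and multiply its matrix square root with a standard normal vector. The grid resolution n0 is kept small here because the dense square root costs $ O(n^3) $; the figure itself uses $ n = 10^4 $ grid points.

```python
import numpy as np
from scipy.linalg import sqrtm

n0 = 40                                        # grid points per direction (small demo)
t = np.linspace(0, 1, n0)
X, Y = np.meshgrid(t, t)
pts = np.column_stack([X.ravel(), Y.ravel()])  # n = n0^2 points in [0, 1] x [0, 1]

# Matérn kernel, smoothness nu = 5/2, correlation length ell:
# c(d) = (1 + sqrt(5) d/ell + 5 d^2 / (3 ell^2)) * exp(-sqrt(5) d/ell)
ell = 0.1
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
r = np.sqrt(5.0) * D / ell
C = (1.0 + r + r ** 2 / 3.0) * np.exp(-r)

S = np.real(sqrtm(C))                          # dense matrix square root, O(n^3)
sample = (S @ np.random.standard_normal(n0 * n0)).reshape(n0, n0)
```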
Figure 2.  Illustration of a parameterized Gaussian process. On the inner layer, a random variable $ \boldsymbol \theta $ with law $ \mu' $ is sampled. On the outer layer, the process $ X \sim M(\cdot|\boldsymbol \theta) $ is sampled. Then, $ X $ follows the distribution of the hierarchical Gaussian process $ \mu $
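A minimal sketch of the two-layer sampling illustrated in Figure 2, assuming for illustration a uniform hyperprior $ \mu' $ on a correlation-length parameter and a one-dimensional squared-exponential kernel $ c(d, \theta) = \exp(-d^2/\theta^2) $; the kernels and parameter ranges actually used in the paper are specified in Section 4.1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
D2 = (x[:, None] - x[None, :]) ** 2

def sample_given_theta(theta):
    """Outer layer: X ~ M(.|theta), mean-zero with kernel exp(-d^2/theta^2)."""
    C = np.exp(-D2 / theta ** 2) + 1e-8 * np.eye(x.size)  # jitter for a stable Cholesky
    return np.linalg.cholesky(C) @ rng.standard_normal(x.size)

theta = rng.uniform(0.1, np.sqrt(2))   # inner layer: theta ~ mu' (assumed uniform)
X = sample_given_theta(theta)          # marginally, X follows the hierarchical measure mu
```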
Figure 3.  Wasserstein distance $ \mathrm{W}_{2}(\kappa, \widehat{\kappa}_i) $ and trace residual $ \mathrm{trace}(C-\widehat{C}^i)^{1/2} $ (vertical axes) vs. rank $ i $ of the covariances (horizontal axes) for the Gaussian measures considered in Example 2.3 (i)-(ii). In the left figure, the Wasserstein distance and the trace residual coincide, showing tightness of the bound in Example 2.3 (i). In the right figure, Example 2.3 (ii), the bound is not tight
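The vertical axes of Figure 3 can be reproduced from the Gelbrich formula for the 2-Wasserstein distance between mean-zero Gaussian measures, $ \mathrm{W}_2^2 = \mathrm{trace}(C_1) + \mathrm{trace}(C_2) - 2\, \mathrm{trace}\big((C_1^{1/2} C_2 C_1^{1/2})^{1/2}\big) $, together with the trace residual that the paper uses as a computable upper bound. A sketch, assuming dense covariance matrices small enough for scipy's sqrtm:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(C1, C2):
    """2-Wasserstein distance between N(0, C1) and N(0, C2) (Gelbrich formula)."""
    S1 = np.real(sqrtm(C1))
    cross = np.real(sqrtm(S1 @ C2 @ S1))
    d2 = np.trace(C1) + np.trace(C2) - 2.0 * np.trace(cross)
    return np.sqrt(max(d2, 0.0))       # clip tiny negative round-off

def trace_residual(C, C_hat):
    """The upper bound trace(C - C_hat)^{1/2} plotted in Figure 3."""
    return np.sqrt(max(np.trace(C - C_hat), 0.0))
```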
Figure 4.  Gaussian covariance kernel. Error associated with the approximate covariance kernel $ \tilde c_s $ vs. length of the expansion $ s $, in the case $ \theta \in [0.1, \sqrt{2}] $ (left) and $ \theta \in [0.001, \sqrt{2}] $ (right). Each curve represents the infinity norm $ \lVert{c(\cdot, \theta)-\tilde c_s(\cdot, \theta)}\rVert_\infty $ for a fixed value of $ \theta $, approximated by taking the maximum over 500 equispaced values for $ d $ in $ [0, \sqrt{2}] $
Figure 5.  Matérn covariance kernel. Error associated with the approximate covariance kernel $ \tilde c_s $ vs. length of the expansion $ s $, for $ ( \theta_1, \theta_2)\in [0.1, \sqrt{2}]\times[2.5, 7.5] $. Each curve represents the infinity norm $ \lVert{c(\cdot, \boldsymbol \theta)-\tilde c_s(\cdot, \boldsymbol \theta)}\rVert_\infty $ for a fixed pair $ \boldsymbol \theta = ( \theta_1, \theta_2) $, approximated by taking the maximum over 500 equispaced values for $ d $ in $ [0, \sqrt{2}] $
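Figures 4 and 5 measure the sup-norm error of a short separable expansion $ \tilde c_s(d, \boldsymbol\theta) = \sum_{j=1}^{s} h_j(d)\, g_j(\boldsymbol\theta) $. One minimal way to construct such an expansion on a grid is greedy cross approximation with full pivoting, sketched below for an assumed squared-exponential kernel $ c(d, \theta) = \exp(-d^2/\theta^2) $; the paper instead obtains the expansion in a preprocessing step with, e.g., the empirical interpolation method.

```python
import numpy as np

# Snapshot matrix of the kernel on a (d, theta) tensor grid
d = np.linspace(0, np.sqrt(2), 500)
th = np.linspace(0.1, np.sqrt(2), 100)
F = np.exp(-d[:, None] ** 2 / th[None, :] ** 2)   # assumed kernel c(d, theta)

# Greedy cross approximation: c_s(d, theta) ~ sum_j h_j(d) g_j(theta)
R = F.copy()
H, G, sup_err = [], [], []
for s in range(30):
    i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
    if abs(R[i, j]) < 1e-15:
        break
    h, g = R[:, j].copy(), R[i, :] / R[i, j]
    H.append(h); G.append(g)
    R -= np.outer(h, g)                # remove the selected rank-1 term
    sup_err.append(np.abs(R).max())    # grid sup-norm error, as on the vertical axes

print(len(H), sup_err[-1])             # expansion length s and final error
```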
Figure 6.  Schematic of the definitions of the nodes $ \mathbf{x}_{1}, \ldots, \mathbf{x}_{n} $, for $ n_0 \in \{4, 8\} $
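For concreteness, one way to realize a tensor grid of nodes $ \mathbf{x}_{1}, \ldots, \mathbf{x}_{n} $ with $ n = n_0^2 $, as in the schematic of Figure 6, is sketched below. Whether the paper places nodes at cell midpoints or at cell corners is not recoverable from the caption, so midpoints are an assumption.

```python
import numpy as np

n0 = 4                                  # per-direction resolution, as in the schematic
t = (np.arange(n0) + 0.5) / n0          # assumed: midpoints of n0 equal cells in [0, 1]
nodes = np.stack(np.meshgrid(t, t), axis=-1).reshape(-1, 2)  # n = n0^2 nodes, row-major
```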
Figure 7.  Test 1. Error vs. number of iterations for Algorithm 2 applied to a discretized Gaussian covariance operator
Figure 8.  Test 2. Decimal logarithms of the largest 70 eigenvalues of $ \mathbf{C}(\theta) $ for different values of $ \theta \in [0.1, \sqrt{2}] $ and $ n_0 = 128 $
Figure 9.  Test 3. Timings in seconds (left) and ranks (right) obtained from the ACA algorithm applied to different discretizations of the covariance operator. The x-axes of both figures show the degrees of freedom $ n = n_0^2 $ of the discretized covariance operators. The y-axes show the mean elapsed time over three runs (left) and the quantity $ | \Theta_f| $ (right)
Figure 10.  Test 4. Comparison between the convergence history of Algorithm 1 (blue), the maximum nuclear norm of $ \mathbf{C}(\theta) - \mathbf{C}_{sI}(\theta) $ over $ \Theta_f $ (green), and the best trace error associated with the truncated SVDs, $ \max_{\theta\in \Theta_f} \mathrm{trace}(\mathbf{C}(\theta) - \mathbf{C}_{(k)}(\theta)) $ (red)
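The red reference curve in Figure 10 is the best trace error achievable at each rank. For a single SPD matrix this is the eigenvalue tail sum, since the truncated SVD (here, eigendecomposition) $ \mathbf{C}_{(k)} $ keeping the $ k $ largest eigenvalues leaves $ \mathrm{trace}(\mathbf{C} - \mathbf{C}_{(k)}) = \sum_{j > k} \lambda_j $; for the parameterized family one additionally takes the maximum over $ \theta \in \Theta_f $, as in the caption. A sketch for the single-matrix case:

```python
import numpy as np

def best_trace_errors(C, kmax):
    """Best rank-k trace errors trace(C - C_(k)) for k = 1, ..., kmax,
    where C_(k) keeps the k largest eigenvalues of the SPD matrix C."""
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]  # eigenvalues, descending
    tails = np.cumsum(lam[::-1])[::-1]          # tails[k] = sum of lam[k:]
    return tails[1:kmax + 1]
```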
Figure 11.  Test 5. Time consumption vs. number of samples for the sampling strategies based on Algorithm 1 and Algorithm 2, applied to a discretized Gaussian covariance operator
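The speedup reported in Figure 11 comes from reusing a low-rank factor: once $ \mathbf{C} \approx \mathbf{L} \mathbf{L}^{\top} $ with $ \mathbf{L} \in \mathbb{R}^{n \times k} $ is available, each additional sample costs $ O(nk) $ instead of the $ O(n^2) $ of applying a dense matrix square root. A sketch of this sampling step; the factor would come from a low-rank method such as the ACA sketched after the abstract:

```python
import numpy as np

def sample_low_rank(L, n_samples, rng=None):
    """Approximate mean-zero Gaussian samples with covariance L @ L.T.
    Each column of the result is one sample; cost O(n k) per sample."""
    rng = rng or np.random.default_rng()
    return L @ rng.standard_normal((L.shape[1], n_samples))
```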
Figure 12.  Error vs. number of iterations for Algorithm 2 applied to a discretized 2D Matérn kernel with $ \theta_1\in[0.1, \sqrt 2] $ and $ \theta_2 = 2.5 $