# American Institute of Mathematical Sciences

June  2021, 3(2): 251-303. doi: 10.3934/fods.2021016

## A Bayesian multiscale deep learning framework for flows in random media

Govinda Anantha Padmanabha and Nicholas Zabaras

Scientific Computing and Artificial Intelligence (SCAI) Laboratory, 311I Cushing Hall, University of Notre Dame, Notre Dame, IN 46556, USA

* Corresponding author: Nicholas Zabaras

Received: March 2021. Early access: June 2021. Published: June 2021.

Fine-scale simulation of complex systems governed by multiscale partial differential equations (PDEs) is computationally expensive, and various multiscale methods have been developed to address such problems. In addition, it is challenging to develop accurate surrogate and uncertainty quantification models for high-dimensional problems governed by stochastic multiscale PDEs using limited training data. In this work, to address these challenges, we introduce a novel hybrid deep-learning and multiscale approach for stochastic multiscale PDEs with limited training data. For demonstration purposes, we focus on a porous media flow problem. We use an image-to-image supervised deep learning model to learn the mapping between the input permeability field and the multiscale basis functions. We introduce a Bayesian approach to this hybrid framework that allows us to perform uncertainty quantification and propagation tasks. The performance of the hybrid approach is evaluated for varying intrinsic dimensionality of the permeability field. Numerical results indicate that the hybrid network predicts accurately and efficiently even for high-dimensional inputs.
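The "intrinsic dimensionality" above refers to the number of terms retained in a truncated Karhunen-Loève expansion (KLE) of the log-permeability field (100, 1000, or 16384 in the experiments reported below). As a rough, self-contained sketch of how such a field can be sampled — the grid size, covariance kernel, and length scale here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kle_log_permeability(n_grid=32, n_terms=100, length_scale=0.2, seed=0):
    """Draw one log-permeability sample from a truncated KLE:
    log K(x) = sum_i sqrt(lambda_i) * xi_i * phi_i(x), with xi_i ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    # Points of an n_grid x n_grid mesh on the unit square.
    x = np.linspace(0.0, 1.0, n_grid)
    X, Y = np.meshgrid(x, x)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    # Squared-exponential covariance (an assumed kernel).
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    C = np.exp(-d2 / (2.0 * length_scale**2))
    # Keep the n_terms largest eigenpairs (eigh returns ascending order).
    vals, vecs = np.linalg.eigh(C)
    vals = np.maximum(vals[::-1][:n_terms], 0.0)
    vecs = vecs[:, ::-1][:, :n_terms]
    xi = rng.standard_normal(n_terms)
    log_k = vecs @ (np.sqrt(vals) * xi)
    return log_k.reshape(n_grid, n_grid)

field = kle_log_permeability()  # one 32 x 32 log-permeability sample
```

Increasing `n_terms` raises the intrinsic (stochastic) dimensionality of the sampled field, which is the knob varied across the KLE$-100$/KLE$-1000$/KLE$-16384$ cases.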

Citation: Govinda Anantha Padmanabha, Nicholas Zabaras. A Bayesian multiscale deep learning framework for flows in random media. Foundations of Data Science, 2021, 3 (2) : 251-303. doi: 10.3934/fods.2021016
##### References:

[1] J. E. Aarnes, V. Kippe, K.-A. Lie and A. B. Rustad, Modelling of multiscale structures in flow simulations for petroleum reservoirs, Geometric Modelling, Numerical Simulation, and Optimization, (2007), 307-360. doi: 10.1007/978-3-540-68783-2_10.
[2] J. E. Aarnes and Y. Efendiev, Mixed multiscale finite element methods for stochastic porous media flows, SIAM Journal on Scientific Computing, 30 (2008), 2319-2339. doi: 10.1137/07070108X.
[3] M. S. Alnaes et al., The FEniCS project version 1.5, Archive of Numerical Software, 3 (2015). doi: 10.11588/ans.2015.100.20553.
[4] K. Aziz and A. Settari, Petroleum Reservoir Simulation, Blitzprint Ltd, 2002.
[5] I. Bilionis, N. Zabaras, B. A. Konomi and G. Lin, Multi-output separable Gaussian process: Towards an efficient, fully Bayesian paradigm for uncertainty quantification, Journal of Computational Physics, 241 (2013), 212-239. doi: 10.1016/j.jcp.2013.01.011.
[6] C. Blundell, J. Cornebise, K. Kavukcuoglu and D. Wierstra, Weight uncertainty in neural networks, preprint, arXiv: 1505.05424.
[7] S. Chan and A. H. Elsheikh, A machine learning approach for efficient uncertainty quantification using multiscale methods, Journal of Computational Physics, 354 (2018), 493-511. doi: 10.1016/j.jcp.2017.10.034.
[8] E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities, 3$^{rd}$ edition, Elsevier, 2005.
[9] R. W. Freund, G. H. Golub and N. M. Nachtigal, Iterative solution of linear systems, Acta Numerica, 1 (1992), 57-100.
[10] Y. Gal and Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, preprint, arXiv: 1506.02142.
[11] N. Geneva and N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks, Journal of Computational Physics, 403 (2020), 109056. doi: 10.1016/j.jcp.2019.109056.
[12] N. Geneva and N. Zabaras, Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks, Journal of Computational Physics, 394 (2019), 125-147. doi: 10.1016/j.jcp.2019.01.021.
[13] X. Glorot, A. Bordes and Y. Bengio, Deep sparse rectifier neural networks, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, (2011), 315-323. Available from: http://proceedings.mlr.press/v15/glorot11a.html.
[14] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016. Available from: http://www.deeplearningbook.org.
[15] K. He, X. Zhang, S. Ren and J. Sun, Deep residual learning for image recognition, preprint, arXiv: 1512.03385.
[16] J. Hernández-Lobato and R. Adams, Probabilistic backpropagation for scalable learning of Bayesian neural networks, preprint, arXiv: 1502.05336.
[17] G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, Densely connected convolutional networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2017). doi: 10.1109/cvpr.2017.243.
[18] S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, preprint, arXiv: 1502.03167.
[19] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero and Y. Bengio, The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2017), 11-19.
[20] P. Jenny, S. H. Lee and H. A. Tchelepi, Multi-scale finite-volume method for elliptic problems in subsurface flow simulation, Journal of Computational Physics, 187 (2003), 47-67. doi: 10.1016/s0021-9991(03)00075-5.
[21] D. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
[22] D. P. Kingma, T. Salimans and M. Welling, Variational dropout and the local reparameterization trick, preprint, arXiv: 1506.02557.
[23] A. Krizhevsky, I. Sutskever and G. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, (2012), 1097-1105. doi: 10.1145/3065386.
[24] E. Laloy, R. Hérault, D. Jacques and N. Linde, Training-image based geostatistical inversion using a spatial generative adversarial neural network, Water Resources Research, 54 (2018), 381-406.
[25] Y. LeCun, Y. Bengio and G. Hinton, Deep learning, Nature, 521 (2015), 436-444. doi: 10.1038/nature14539.
[26] Q. Liu and D. Wang, Stein variational gradient descent: A general purpose Bayesian inference algorithm, preprint, arXiv: 1608.04471.
[27] L. van der Maaten, E. Postma and J. van den Herik, Dimensionality reduction: A comparative review, Journal of Machine Learning Research, 10 (2009), 66-71. Available from: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.112.5472.
[28] S. Mo, N. Zabaras, X. Shi and J. Wu, Integration of adversarial autoencoders with residual dense convolutional networks for estimation of non-Gaussian hydraulic conductivities, Water Resources Research, 56 (2020). doi: 10.1029/2019WR026082.
[29] S. Mo, Y. Zhu, N. Zabaras, X. Shi and J. Wu, Deep convolutional encoder-decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media, Water Resources Research, 55 (2018), 703-728. doi: 10.1029/2018wr023528.
[30] O. Møyner and K. Lie, A multiscale restriction-smoothed basis method for high contrast porous media represented on unstructured grids, Journal of Computational Physics, 304 (2016), 46-71. doi: 10.1016/j.jcp.2015.10.010.
[31] A. Paszke et al., Automatic differentiation in PyTorch, Neural Information Processing Systems, (2017). Available from: https://openreview.net/forum?id=BJJsrmfCZ.
[32] A. Radford, L. Metz and S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, preprint, arXiv: 1511.06434.
[33] O. Ronneberger, P. Fischer and T. Brox, U-Net: Convolutional networks for biomedical image segmentation, preprint, arXiv: 1505.04597.
[34] S. Shah, O. Møyner, M. Tene, K. Lie and H. Hajibeygi, The multiscale restriction smoothed basis method for fractured porous media (F-MsRSB), Journal of Computational Physics, 318 (2016), 36-57. doi: 10.1016/j.jcp.2016.05.001.
[35] SINTEF MRST project web page, 2015. Available from: http://www.sintef.no/Projectweb/MRST/.
[36] N. Thuerey, K. Weissenow, H. Mehrotra, N. Mainali, L. Prantl and X. Hu, A study of deep learning methods for Reynolds-averaged Navier-Stokes simulations, preprint, arXiv: 1810.08217.
[37] R. K. Tripathy and I. Bilionis, Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification, Journal of Computational Physics, 375 (2018), 565-588. doi: 10.1016/j.jcp.2018.08.036.
[38] D. Vernon, Machine Vision: Automated Visual Inspection and Robot Vision, NASA STI/Recon Technical Report A, 92 (1991).
[39] J. Wan and N. Zabaras, A probabilistic graphical model approach to stochastic multiscale partial differential equations, Journal of Computational Physics, 250 (2013), 477-510. doi: 10.1016/j.jcp.2013.05.016.
[40] M. Wang, S. W. Cheung, E. T. Chung, Y. Efendiev, W. T. Leung and Y. Wang, Prediction of discretization of GMsFEM using deep learning, Mathematics, 7 (2019), 412. doi: 10.3390/math7050412.
[41] Y. Wang, S. W. Cheung, E. T. Chung, Y. Efendiev and M. Wang, Deep multiscale model learning, preprint, arXiv: 1806.04830.
[42] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Essen, A. A. S. Awwal and V. K. Asari, The history began from AlexNet: A comprehensive survey on deep learning approaches, preprint, arXiv: 1803.01164.
[43] M. D. Zeiler and R. Fergus, Visualizing and understanding convolutional networks, preprint, arXiv: 1311.2901.
[44] J. Zhang, S. W. Cheung, Y. Efendiev, E. Gildin and E. T. Chung, Deep model reduction-model learning for reservoir simulation, Society of Petroleum Engineers, (2019). doi: 10.2118/193912-ms.
[45] Y. Zhu and N. Zabaras, Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 366 (2018), 415-447. doi: 10.1016/j.jcp.2018.04.018.
[46] Y. Zhu, N. Zabaras, P. Koutsourelakis and P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics, 394 (2019), 56-81. doi: 10.1016/j.jcp.2019.05.024.

Schematic of the fine-scale and coarse-scale grids. Thick lines represent the primal coarse-grid $\bar{\Omega}_j$, and the blue line indicates a block of the dual coarse-grid $\Omega_k^D$. Thin lines define the fine-scale elements $f_i$ that constitute the fine-grid $\{\Omega_i\}_{i = 1}^{n_f}$
A schematic of the multiscale framework
(a) Discretization of the domain: fine-scale domain (bold black lines correspond to the coarse-grid $\bar{\Omega}_j$ and thin lines to the fine-grid $\Omega_i$), coarse blocks and fine cells; (b) local triangulation (purple) and coarse-block centers (black); (c) for one coarse block, the cells inside the support region (blue patch), the support boundary (green patch) and the coarse center node (black); and (d) the global boundary (gray) and a coarse-block center (black)
The basis functions for the interior and non-interior support regions: (a) Coarse-blocks ($3\times3$) and basis function for coarse-block $5$ (basis function for the interior support region), (b) Coarse-blocks ($3\times3$) and basis function for coarse-block $1$ (basis function for the non-interior support region) and (c) Illustration of the interior support regions (shown in the green patch) where the basis functions are computed using the Deep Learning surrogate, and non-interior support regions (shown in the blue patch) where the basis functions are computed using the multiscale solver
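Algebraically, basis functions like those above are the columns of a prolongation operator $P$: the coarse system is the Galerkin triple product $RAP$ with restriction $R = P^{T}$, and the multiscale solution is the prolongated coarse solution. The sketch below illustrates only this algebra, on a 1-D Poisson problem with simple hat-function basis columns standing in for the iteratively smoothed MsRSB basis; it is not the paper's solver:

```python
import numpy as np

def fine_laplacian(n):
    """1-D fine-scale Poisson matrix (Dirichlet boundaries eliminated)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation(n_fine, n_coarse):
    """Columns are hat-shaped basis functions: unit value at their coarse
    node, decaying linearly over the support region, normalized so that the
    basis values at every fine cell sum to one (partition of unity)."""
    xf = np.linspace(0.0, 1.0, n_fine)
    xc = np.linspace(0.0, 1.0, n_coarse)
    h = xc[1] - xc[0]
    P = np.clip(1.0 - np.abs(xf[:, None] - xc[None, :]) / h, 0.0, None)
    return P / P.sum(axis=1, keepdims=True)

n_fine, n_coarse = 64, 8
A = fine_laplacian(n_fine)
q = np.full(n_fine, 1.0 / n_fine)              # uniform source term
P = prolongation(n_fine, n_coarse)
R = P.T                                         # Galerkin restriction
p_coarse = np.linalg.solve(R @ A @ P, R @ q)    # small coarse-scale system
p_ms = P @ p_coarse                             # prolongated multiscale solution
p_ref = np.linalg.solve(A, q)                   # fine-scale reference
rel_err = np.linalg.norm(p_ms - p_ref) / np.linalg.norm(p_ref)
```

For this smooth source, even the crude 8-block coarse space reproduces the fine-scale pressure to within a few percent relative error; the hybrid framework's deep-learning surrogate replaces the expensive computation of the interior-support-region columns of $P$.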
A schematic of the DenseED network. (a) The top block shows the DenseED architecture with the encoding layers, and the bottom block shows the decoding layers containing convolution, Batch Norm, and ReLU. The convolution in the encoding layer reduces the size of the feature map, and the convolution (ConvT) in the decoding layer performs up-sampling. (b) The dense block also contains convolution, Batch Norm, and ReLU. The main difference between the encoding or decoding layer and the dense block is that the size of the feature maps is the same as the input in the dense block. Lastly, we apply the sigmoid activation function at the end of the last decoding layer
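The defining property of the dense block described above is that each internal Conv-BatchNorm-ReLU unit appends its new feature maps (the growth rate) to everything computed so far, so the channel count grows while the spatial size stays fixed. A shape-level NumPy sketch, with a random channel-mixing map standing in for the learned convolution (illustrative only):

```python
import numpy as np

def dense_block(x, n_layers=3, growth_rate=4, seed=0):
    """Dense connectivity at the shape level: each layer maps all channels
    seen so far to `growth_rate` new feature maps (a random channel-mixing
    map plus ReLU stands in for Conv-BatchNorm-ReLU) and concatenates them
    onto the running stack; the spatial size never changes in the block."""
    rng = np.random.default_rng(seed)
    for _ in range(n_layers):
        c = x.shape[0]
        w = rng.standard_normal((growth_rate, c)) / np.sqrt(c)
        new = np.maximum(np.einsum("gc,chw->ghw", w, x), 0.0)  # "conv" + ReLU
        x = np.concatenate([x, new], axis=0)                   # dense link
    return x

x = np.random.default_rng(1).standard_normal((8, 16, 16))  # (C, H, W) input
y = dense_block(x)
assert y.shape == (20, 16, 16)  # channels grew 8 -> 8 + 3 * 4; H, W unchanged
assert np.allclose(y[:8], x)    # the input is carried through unchanged
```

Down-sampling (encoding) and up-sampling (ConvT decoding) happen only between dense blocks, which is why the block itself preserves the feature-map size.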
Comparison of the standard multiscale framework with the data-driven hybrid multiscale DenseED framework
A schematic of the hybrid deep neural network and multiscale framework using DenseED. The parameters $\mathit{\boldsymbol{A}}$, $\mathit{\boldsymbol{q}}$ and $\bar{\mathit{\boldsymbol{\Phi}}}^{non-int}$ are obtained from MRST [35] (magenta dashed line), and the network is trained using PyTorch (blue dashed line)
Permeability field KLE$-100$ (top left), KLE$-1000$ (top right), KLE$-16384$ (bottom left) and channelized (bottom right)
Permeability coarse block (top) for KLE$-100$, KLE$-1000$, KLE$-16384$ and channelized field and the corresponding basis functions (bottom)
HM-DenseED model: Prediction of KLE$-100$ with $32$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
HM-DenseED model: Prediction of KLE$-100$ with $96$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
HM-DenseED model: Prediction of KLE$-1000$ with $64$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
HM-DenseED model: Prediction of KLE$-1000$ with $128$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
HM-DenseED model: Prediction of KLE$-16384$ with $96$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
HM-DenseED model: Prediction of KLE$-16384$ with $160$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
HM-DenseED model: Prediction of channelized field with $160$ training data: first row (from left to right) shows the target (pressure, $x-$velocity and $y-$velocity components), the second row shows the corresponding predictions, and the last row shows the error between the corresponding targets and predictions
The basis function for KLE$-100$. The first row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $32$ training data. The second row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $64$ training data. The last row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $96$ training data
The basis function for KLE$-1000$. The first row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $64$ training data. The second row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $96$ training data. The last row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $128$ training data
The basis function for KLE$-16384$. The first row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $96$ training data. The second row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $128$ training data. The last row shows the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $160$ training data
The basis function for the channelized field. Here, we show the ground truth basis function, HM-DenseED model predicted basis function, and the error between them for the model trained with $160$ training data
Distribution estimate for the pressure for KLE$-100$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the pressure for KLE$-1000$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the pressure for KLE$-16384$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the pressure for channelized flow at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $x-$velocity component (horizontal flux) for KLE$-100$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $x-$velocity component (horizontal flux) for KLE$-1000$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $x-$velocity component (horizontal flux) for KLE$-16384$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $x-$velocity component (horizontal flux) for channelized flow at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $y-$velocity component (vertical flux) for KLE$-100$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $y-$velocity component (vertical flux) for KLE$-1000$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $y-$velocity component (vertical flux) for KLE$-16384$ at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Distribution estimate for the $y-$velocity component (vertical flux) for channelized flow at location $(0.6,0.4)$. The Monte Carlo result is shown as a blue dashed line, the hybrid DenseED result as green circles, and the DenseED model result as a magenta dashed line
Training and testing RMSE for KLE$-100$ ($64$ training data), KLE$-1000$ ($96$ training data), KLE$-16384$ ($128$ training data) and the channelized field ($160$ training data)
Comparison of test $R^2$ scores (for pressure) for HM-DenseED (left) and DenseED (right) for KLE$-100$, $-1000$, $-16384$ and channelized permeability field and for various training data
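The test $R^2$ score compared above is the standard coefficient of determination, computed over all test samples and fine-scale cells; the captions do not give the exact implementation, so the following is the textbook definition:

```python
import numpy as np

def r2_score(target, pred):
    """Coefficient of determination over all samples and cells:
    R^2 = 1 - sum((t - p)^2) / sum((t - mean(t))^2)."""
    target = np.asarray(target, dtype=float)
    pred = np.asarray(pred, dtype=float)
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

t = np.array([[1.0, 2.0], [3.0, 4.0]])
assert r2_score(t, t) == 1.0                          # perfect prediction
assert r2_score(t, np.full_like(t, t.mean())) == 0.0  # mean predictor
```

A score of $1$ means perfect reconstruction of the fine-scale field, while a constant mean predictor scores $0$; negative values are possible for predictions worse than the mean.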
Prediction of KLE$-100$ with $32$ training data (left) and $96$ training data (right). For a single test input $\boldsymbol{K}^{*}$, the first row shows (left to right) the test output (ground truth) $t^{*}$ and the predictive mean $\mathbb{E}[\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D}]$; the second row shows (left to right) the error between the predictive mean and the ground truth, and the predictive variance $\text{Var}(\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D})$
Prediction of KLE$-1000$ with $64$ training data (left) and $128$ training data (right). For a single test input $\boldsymbol{K}^{*}$, the first row shows (left to right) the test output (ground truth) $t^{*}$ and the predictive mean $\mathbb{E}[\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D}]$; the second row shows (left to right) the error between the predictive mean and the ground truth, and the predictive variance $\text{Var}(\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D})$
Prediction of KLE$-16384$ with $96$ training data (left) and $160$ training data (right). For a single test input $\boldsymbol{K}^{*}$, the first row shows (left to right) the test output (ground truth) $t^{*}$ and the predictive mean $\mathbb{E}[\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D}]$; the second row shows (left to right) the error between the predictive mean and the ground truth, and the predictive variance $\text{Var}(\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D})$
Prediction for the channelized field. For a single test input $\boldsymbol{K}^{*}$, the first row shows (left to right) the test output (ground truth) $t^{*}$ and the predictive mean $\mathbb{E}[\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D}]$; the second row shows (left to right) the error between the predictive mean and the ground truth, and the predictive variance $\text{Var}(\widehat{\boldsymbol{P}_f}^{*}|\boldsymbol{K}^{*},\mathcal{D})$
Mean negative log-probability (MNLP) of the test data
Non-Bayesian and Bayesian test $R^2$ scores for KLE$-100$, $-1000$, $-16384$ and channelized field (Hybrid DenseED model)
Bayesian HM-DenseED: training and testing RMSE for KLE$-100$ ($64$ training data), KLE$-1000$ ($96$ training data), KLE$-16384$ ($128$ training data) and the channelized field ($160$ training data)
(Left) Uncertainty propagation for KLE$-100$ ($32$ training data). We show the Monte Carlo output mean, predictive output mean $\mathbb{E}_{\mathit{\boldsymbol{\theta}}}[\mathbb{E}[\mathit{\boldsymbol{y}}|\mathit{\boldsymbol{\theta}}]]$, the error of the above two, and two standard deviations of the conditional predictive mean $Var_{\mathit{\boldsymbol{\theta}}}[\mathbb{E}[\mathit{\boldsymbol{y}}|\mathit{\boldsymbol{\theta}}]]$. (Right) Uncertainty propagation for KLE$-100$: ($64$ training data) we show the Monte Carlo output variance, predictive output variance $\mathbb{E}_{\mathit{\boldsymbol{\theta}}}[Var(\mathit{\boldsymbol{y}} | \mathit{\boldsymbol{\theta}})]$, the error of the above two, and two standard deviations of the conditional predictive variance $\text{Var}_{\mathit{\boldsymbol{\theta}}} (\text{Var}(\mathit{\boldsymbol{y}} | \mathit{\boldsymbol{\theta}}))$
Uncertainty propagation for KLE$-100$ ($96$ training data). (Left) We show the Monte Carlo output mean, the predictive output mean $\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$, the error between the two, and two standard deviations of the conditional predictive mean $\text{Var}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$. (Right) We show the Monte Carlo output variance, the predictive output variance $\mathbb{E}_{\boldsymbol{\theta}}[\text{Var}(\boldsymbol{y}|\boldsymbol{\theta})]$, the error between the two, and two standard deviations of the conditional predictive variance $\text{Var}_{\boldsymbol{\theta}}(\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}))$
Uncertainty propagation for KLE$-1000$ ($64$ training data). (Left) We show the Monte Carlo output mean, the predictive output mean $\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$, the error between the two, and two standard deviations of the conditional predictive mean $\text{Var}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$. (Right) We show the Monte Carlo output variance, the predictive output variance $\mathbb{E}_{\boldsymbol{\theta}}[\text{Var}(\boldsymbol{y}|\boldsymbol{\theta})]$, the error between the two, and two standard deviations of the conditional predictive variance $\text{Var}_{\boldsymbol{\theta}}(\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}))$
Uncertainty propagation for KLE$-1000$ ($128$ training data). (Left) We show the Monte Carlo output mean, the predictive output mean $\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$, the error between the two, and two standard deviations of the conditional predictive mean $\text{Var}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$. (Right) We show the Monte Carlo output variance, the predictive output variance $\mathbb{E}_{\boldsymbol{\theta}}[\text{Var}(\boldsymbol{y}|\boldsymbol{\theta})]$, the error between the two, and two standard deviations of the conditional predictive variance $\text{Var}_{\boldsymbol{\theta}}(\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}))$
Uncertainty propagation for KLE$-16384$ ($96$ training data). (Left) We show the Monte Carlo output mean, the predictive output mean $\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$, the error between the two, and two standard deviations of the conditional predictive mean $\text{Var}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$. (Right) We show the Monte Carlo output variance, the predictive output variance $\mathbb{E}_{\boldsymbol{\theta}}[\text{Var}(\boldsymbol{y}|\boldsymbol{\theta})]$, the error between the two, and two standard deviations of the conditional predictive variance $\text{Var}_{\boldsymbol{\theta}}(\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}))$
Uncertainty propagation for KLE$-16384$ ($160$ training data). (Left) We show the Monte Carlo output mean, the predictive output mean $\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$, the error between the two, and two standard deviations of the conditional predictive mean $\text{Var}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$. (Right) We show the Monte Carlo output variance, the predictive output variance $\mathbb{E}_{\boldsymbol{\theta}}[\text{Var}(\boldsymbol{y}|\boldsymbol{\theta})]$, the error between the two, and two standard deviations of the conditional predictive variance $\text{Var}_{\boldsymbol{\theta}}(\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}))$
Uncertainty propagation for the channelized field ($160$ training data). (Left) We show the Monte Carlo output mean, the predictive output mean $\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$, the error between the two, and two standard deviations of the conditional predictive mean $\text{Var}_{\boldsymbol{\theta}}[\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}]]$. (Right) We show the Monte Carlo output variance, the predictive output variance $\mathbb{E}_{\boldsymbol{\theta}}[\text{Var}(\boldsymbol{y}|\boldsymbol{\theta})]$, the error between the two, and two standard deviations of the conditional predictive variance $\text{Var}_{\boldsymbol{\theta}}(\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}))$
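The four uncertainty-propagation statistics in the captions above are Monte Carlo moments taken over posterior samples $\boldsymbol{\theta}_s$: the mean and variance (over $\boldsymbol{\theta}$) of the conditional predictive mean, and the mean and variance of the conditional predictive variance. A minimal pure-Python sketch, assuming `means[s]` and `vars_[s]` hold $\mathbb{E}[\boldsymbol{y}|\boldsymbol{\theta}_s]$ and $\text{Var}(\boldsymbol{y}|\boldsymbol{\theta}_s)$ at one output location (illustrative names, not from the paper's code):

```python
from statistics import mean, pvariance

def propagation_stats(means, vars_):
    """Monte Carlo moments over posterior samples theta_s.

    means[s]: conditional predictive mean     E[y | theta_s]
    vars_[s]: conditional predictive variance Var(y | theta_s)
    """
    return {
        "E_theta[E[y|theta]]": mean(means),          # predictive output mean
        "Var_theta[E[y|theta]]": pvariance(means),   # spread of conditional means
        "E_theta[Var(y|theta)]": mean(vars_),        # predictive output variance
        "Var_theta[Var(y|theta)]": pvariance(vars_), # spread of conditional variances
    }
```

Applying this per pixel over the pressure field yields exactly the four maps shown in each panel (with the square roots of the two `Var_theta` entries giving the plotted standard deviations).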
Comparison of DenseED (first row) and fully-connected network (second row) for a test set
Distribution estimate for the pressure at location $(0.96, 0.54)$
DenseED architecture
| Layers | $C_f$ | Resolution $H_f \times W_f$ | Number of parameters |
|---|---|---|---|
| Input | $1$ | $15 \times 15$ | - |
| Convolution k7s2p3 | $48$ | $7 \times 7$ | $2352$ |
| Dense Block (1) K16L4 | $112$ | $7 \times 7$ | $42048$ |
| Encoding Layer | $56$ | $4 \times 4$ | $34888$ |
| Dense Block (2) K16L8 | $184$ | $4 \times 4$ | $130944$ |
| Decoding Layer (1) | $92$ | $8 \times 8$ | $14276$ |
| Dense Block (3) K16L4 | $156$ | $8 \times 8$ | $67808$ |
| Decoding Layer (2) | $1$ | $15 \times 15$ | $13728$ |

$k$ = kernel size, $s$ = stride, $p$ = padding, $L$ = number of layers and $K$ = growth rate.
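The parameter counts in the architecture table can be cross-checked from the layer configurations: a $k \times k$ convolution with $C_{in}$ input and $C_{out}$ output channels contributes $k^2 C_{in} C_{out}$ weights (bias-free), and each layer of a dense block with growth rate $K$ contributes a batch norm ($2C$ parameters) plus a $3\times3$ convolution $C \to K$, with $C$ growing by $K$ per layer. A pure-Python sketch that reproduces the first two table entries; the exact composition of the encoding/decoding layers is not spelled out in the table, so those counts are not reproduced here:

```python
def conv_params(k, c_in, c_out):
    # k x k convolution, no bias
    return k * k * c_in * c_out

def dense_block_params(c_in, growth_rate, num_layers):
    # Each dense layer: BatchNorm (2*C) + 3x3 conv C -> K (no bias);
    # the channel count grows by K after every layer.
    total, c = 0, c_in
    for _ in range(num_layers):
        total += 2 * c + conv_params(3, c, growth_rate)
        c += growth_rate
    return total

# First convolution k7s2p3, 1 -> 48 channels: matches the table's 2352.
# Dense Block (1) K16L4 starting at 48 channels: matches the table's 42048,
# and its output channel count 48 + 4*16 = 112 matches C_f = 112.
```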
The computational cost of the HM-DenseED, the Bayesian HM-DenseED, the fine-scale and the multiscale simulation models, in terms of wall-clock time for obtaining the basis functions for $100$ test data. Here, Matlab* indicates that Matlab is used only to generate the basis functions for the non-interior support regions, while Matlab indicates that the basis functions are generated for both the interior and non-interior support regions
| Model | Backend, Hardware | Wall-clock (s) for obtaining the basis functions |
|---|---|---|
| Fine-scale | FEniCS, Intel Xeon E5-2680 | - |
| Multiscale | Matlab, Intel Xeon E5-2680 | 654.930 |
| HM-DenseED | PyTorch, Intel Xeon E5-2680; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 19.5 (PyTorch) = 132.73 |
| HM-DenseED | PyTorch, NVIDIA Tesla V100; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 1.9 (PyTorch) = 115.13 |
| Bayesian HM-DenseED | PyTorch, Intel Xeon E5-2680; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 259.5 (PyTorch) = 372.73 |
| Bayesian HM-DenseED | PyTorch, NVIDIA Tesla V100; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 15.7 (PyTorch) = 128.93 |
The computational cost of the HM-DenseED, the Bayesian HM-DenseED, the fine-scale and the multiscale simulation models, in terms of wall-clock time for obtaining the pressure for $100$ test data. Here, Matlab* indicates that Matlab is used only to generate the basis functions for the non-interior support regions, while Matlab indicates that the basis functions are generated for both the interior and non-interior support regions
| Model | Backend, Hardware | Wall-clock (s) for obtaining the pressure |
|---|---|---|
| Fine-scale | FEniCS, Intel Xeon E5-2680 | 2300.822 |
| Multiscale | Matlab, Intel Xeon E5-2680 | 1500.611 |
| HM-DenseED | PyTorch, Intel Xeon E5-2680; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 32.5 (PyTorch) = 145.73 |
| HM-DenseED | PyTorch, NVIDIA Tesla V100; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 6.04 (PyTorch) = 119.27 |
| Bayesian HM-DenseED | PyTorch, Intel Xeon E5-2680; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 627.20 (PyTorch) = 740.43 |
| Bayesian HM-DenseED | PyTorch, NVIDIA Tesla V100; Matlab*, Intel Xeon E5-2680 | 113.23 (Matlab*) + 61.27 (PyTorch) = 174.5 |
Test $R^2$ score for different configurations and data for KLE$-100$
| Configurations $\downarrow$ / Data $\longrightarrow$ | $\mathit{\boldsymbol{32}}$ | $\mathit{\boldsymbol{64}}$ | $\mathit{\boldsymbol{96}}$ |
|---|---|---|---|
| $\mathit{\boldsymbol{1-1}}$ | $0.799$ | $0.9317$ | $0.966$ |
| $\mathit{\boldsymbol{1-1-1}}$ | $0.859$ | $0.9538$ | $0.968$ |
| $\mathit{\boldsymbol{1-1-1-1-1}}$ | $0.825$ | $0.9198$ | $0.964$ |
| $\mathit{\boldsymbol{4-8-4}}$ | $0.962$ | $0.97$ | $0.973$ |
| $\mathit{\boldsymbol{6-12-6}}$ | $0.9624$ | $0.972$ | $0.9745$ |
Comparison of the DenseED with a fully-connected network for learning the basis functions
| | Hybrid DenseED-multiscale | Hybrid fully-connected |
|---|---|---|
| Configuration | $4-8-4$ | $225 \rightarrow 144 \rightarrow 64 \rightarrow 144 \rightarrow 225$ |
| Learning rate | $10^{-5}$ | $10^{-4}$ |
| Weight decay | $10^{-6}$ | $10^{-5}$ |
| Optimizer | Adam | Adam |
| Epochs | $200$ | $200$ |