# American Institute of Mathematical Sciences

December  2020, 2(4): 391-428. doi: 10.3934/fods.2020019

## Multi-fidelity generative deep learning turbulent flows

Nicholas Geneva and Nicholas Zabaras

 Scientific Computing and Artificial Intelligence (SCAI) Laboratory, University of Notre Dame, 311 Cushing Hall, Notre Dame, IN 46556, USA

*Corresponding author: Nicholas Zabaras

Received: October 2020. Published: December 2020.

In computational fluid dynamics, there is an inevitable trade-off between accuracy and computational cost. In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields given the solution of a computationally inexpensive but inaccurate low-fidelity solver. The resulting surrogate can generate physically accurate turbulent realizations at a computational cost orders of magnitude lower than that of a high-fidelity simulation. The deep generative model is a conditional invertible neural network, built with normalizing flows, with recurrent LSTM connections that allow for stable training of transient systems with high predictive accuracy. The model is trained with a variational loss that combines both data-driven and physics-constrained learning. This deep generative model is applied to non-trivial high Reynolds number flows governed by the Navier-Stokes equations, including turbulent flow over a backward-facing step at different Reynolds numbers and the turbulent wake behind an array of bluff bodies. For both of these examples, the model is able to generate unique yet physically accurate turbulent fluid flows conditioned on an inexpensive low-fidelity solution.
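The invertibility that normalizing flows rely on can be illustrated with a single RealNVP-style affine coupling layer [7]. The sketch below is a toy illustration in plain NumPy, not the paper's TM-Glow architecture: the "conditioner" that produces the scale and shift is a hypothetical fixed function of the untouched half, standing in for a learned (and, in the conditional setting, low-fidelity-conditioned) network.

```python
import numpy as np

def affine_coupling_forward(x, split=2):
    # Split the input; one half parameterizes an affine map of the other.
    x1, x2 = x[:split], x[split:]
    # Hypothetical conditioner: a fixed toy function in place of a neural net.
    s, t = np.tanh(x1), 0.5 * x1
    y2 = x2 * np.exp(s) + t           # elementwise scale-and-shift
    log_det = np.sum(s)               # log|det J| is just the sum of log-scales
    return np.concatenate([x1, y2]), log_det

def affine_coupling_inverse(y, split=2):
    # The unchanged half y1 lets us recompute s, t and invert exactly,
    # with no iterative solve -- the key property of coupling layers.
    y1, y2 = y[:split], y[split:]
    s, t = np.tanh(y1), 0.5 * y1
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = np.array([0.3, -1.2, 0.7, 2.0])
y, log_det = affine_coupling_forward(x)
x_rec = affine_coupling_inverse(y)
```

Stacking many such layers (with the roles of the two halves alternating) yields an expressive map whose exact inverse and Jacobian determinant remain cheap, which is what makes likelihood-based training of flow models tractable.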

Citation: Nicholas Geneva, Nicholas Zabaras. Multi-fidelity generative deep learning turbulent flows. Foundations of Data Science, 2020, 2 (4) : 391-428. doi: 10.3934/fods.2020019
##### References:
[1] F. Ahmed and N. Rajaratnam, Flow around bridge piers, Journal of Hydraulic Engineering, 124 (1998), 288-300. doi: 10.1061/(ASCE)0733-9429(1998)124:3(288).
[2] L. Ardizzone, C. Lüth, J. Kruse, C. Rother and U. Köthe, Guided image generation with conditional invertible neural networks, preprint, arXiv: 1907.02392.
[3] K. Bieker, S. Peitz, S. L. Brunton, J. N. Kutz and M. Dellnitz, Deep model predictive control with online learning for complex physical systems, preprint, arXiv: 1905.10094.
[4] L. Chen, K. Asai, T. Nonomura, G. Xi and T. Liu, A review of backward-facing step (BFS) flow mechanisms, heat transfer and control, Thermal Science and Engineering Progress, 6 (2018), 194-216. doi: 10.1016/j.tsep.2018.04.004.
[5] J. Chung, S. Ahn and Y. Bengio, Hierarchical multiscale recurrent neural networks, preprint, arXiv: 1609.01704.
[6] L. Dinh, D. Krueger and Y. Bengio, NICE: Non-linear independent components estimation, preprint, arXiv: 1410.8516.
[7] L. Dinh, J. Sohl-Dickstein and S. Bengio, Density estimation using Real NVP, preprint, arXiv: 1605.08803.
[8] E. Erturk, Numerical solutions of 2-D steady incompressible flow over a backward-facing step, part I: High Reynolds number solutions, Computers and Fluids, 37 (2008), 633-655. doi: 10.1016/j.compfluid.2007.09.003.
[9] N. Geneva and N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks, Journal of Computational Physics, 109056. doi: 10.1016/j.jcp.2019.109056.
[10] N. Geneva and N. Zabaras, Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks, Journal of Computational Physics, 383 (2019), 125-147. doi: 10.1016/j.jcp.2019.01.021.
[11] X. Glorot, A. Bordes and Y. Bengio, Deep sparse rectifier neural networks, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, 315-323.
[12] J. S. González, A. G. G. Rodriguez, J. C. Mora, J. R. Santos and M. B. Payan, Optimization of wind farm turbines layout using an evolutive algorithm, Renewable Energy, 35 (2010), 1671-1681. Available from: http://www.sciencedirect.com/science/article/pii/S0960148110000145.
[13] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.
[14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, Generative adversarial nets, in Advances in Neural Information Processing Systems, 2014, 2672-2680.
[15] W. Grathwohl, R. T. Chen, J. Bettencourt, I. Sutskever and D. Duvenaud, FFJORD: Free-form continuous dynamics for scalable reversible generative models, preprint, arXiv: 1810.01367.
[16] X. Guo, W. Li and F. Iorio, Convolutional neural networks for steady flow approximation, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. doi: 10.1145/2939672.2939738.
[17] G. Haller, An objective definition of a vortex, Journal of Fluid Mechanics, 525 (2005), 1-26. doi: 10.1017/S0022112004002526.
[18] R. Han, Y. Wang, Y. Zhang and G. Chen, A novel spatial-temporal prediction method for unsteady wake flows based on hybrid deep neural network, Physics of Fluids, 31 (2019), 127101. doi: 10.1063/1.5127247.
[19] O. Hennigh, Lat-Net: Compressing lattice Boltzmann flow simulations using deep neural networks, preprint, arXiv: 1705.09036.
[20] J. Hoffman and C. Johnson, A new approach to computational turbulence modeling, Computer Methods in Applied Mechanics and Engineering, 195 (2006), 2865-2880. doi: 10.1016/j.cma.2004.09.015.
[21] J. Holgate, A. Skillen, T. Craft and A. Revell, A review of embedded large eddy simulation for internal flows, Archives of Computational Methods in Engineering, 26 (2019), 865-882. doi: 10.1007/s11831-018-9272-5.
[22] G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, Densely connected convolutional networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. doi: 10.1109/CVPR.2017.243.
[23] W. Huang, Q. Yang and H. Xiao, CFD modeling of scale effects on turbulence flow and scour around bridge piers, Computers and Fluids, 38 (2009), 1050-1058. doi: 10.1016/j.compfluid.2008.01.029.
[24] J. C. Hunt, A. A. Wray and P. Moin, Eddies, streams, and convergence zones in turbulent flows, in Center for Turbulence Research Report, CTR-S88, 1988. Available from: https://ntrs.nasa.gov/search.jsp?R=19890015184.
[25] S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, preprint, arXiv: 1502.03167.
[26] J.-H. Jacobsen, A. Smeulders and E. Oyallon, i-RevNet: Deep invertible networks, preprint, arXiv: 1802.07088.
[27] H. Jasak, A. Jemcov, Z. Tukovic, et al., OpenFOAM: A C++ library for complex physics simulations, in International Workshop on Coupled Methods in Numerical Dynamics, IUC Dubrovnik, Croatia, 2007, 1-20.
[28] B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross and B. Solenthaler, Deep fluids: A generative network for parameterized fluid simulations, Computer Graphics Forum, 38 (2019), 59-70. doi: 10.1111/cgf.13619.
[29] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
[30] D. P. Kingma and M. Welling, Auto-encoding variational Bayes, preprint, arXiv: 1312.6114.
[31] D. P. Kingma and P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, in Advances in Neural Information Processing Systems, 2018, 10215-10224.
[32] M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh and D. Kingma, VideoFlow: A flow-based generative model for video, preprint, arXiv: 1903.01434.
[33] R. Kumar, S. Ozair, A. Goyal, A. Courville and Y. Bengio, Maximum entropy generators for energy-based models, preprint, arXiv: 1901.08508.
[34] C. J. Lapeyre, A. Misdariis, N. Cazard, D. Veynante and T. Poinsot, Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates, Combustion and Flame, 203 (2019), 255-264. doi: 10.1016/j.combustflame.2019.02.019.
[35] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato and F. Huang, A tutorial on energy-based learning, Predicting Structured Data, 1 (2006), 59 pp.
[36] C. Li, J. Li, G. Wang and L. Carin, Learning to sample with adversarially learned likelihood-ratio, 2018. Available from: https://openreview.net/forum?id=S1eZGHkDM.
[37] J. Ling, A. Kurzawski and J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics, 807 (2016), 155-166. doi: 10.1017/jfm.2016.615.
[38] P. Liu, X. Qiu, X. Chen, S. Wu and X.-J. Huang, Multi-timescale long short-term memory neural network for modelling sentences and documents, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, 2326-2335. doi: 10.18653/v1/D15-1280.
[39] R. Maulik, O. San, A. Rasheed and P. Vedula, Subgrid modelling for two-dimensional turbulence using neural networks, Journal of Fluid Mechanics, 858 (2019), 122-144. doi: 10.1017/jfm.2018.770.
[40] S. M. Mitran, A Comparison of Adaptive Mesh Refinement Approaches for Large Eddy Simulation, technical report, Department of Applied Mathematics, University of Washington, Seattle, 2001.
[41] S. Mo, Y. Zhu, N. Zabaras, X. Shi and J. Wu, Deep convolutional encoder-decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media, Water Resources Research, 55 (2019), 703-728. doi: 10.1029/2018WR023528.
[42] A. Mohan, D. Daniel, M. Chertkov and D. Livescu, Compressed convolutional LSTM: An efficient deep learning framework to model high fidelity 3D turbulence, preprint, arXiv: 1903.00033.
[43] M. H. Patel, Dynamics of Offshore Structures, Butterworth-Heinemann, 2013.
[44] S. B. Pope, Turbulent Flows, Cambridge University Press, Cambridge, 2000. doi: 10.1017/CBO9780511840531.
[45] P. Quéméré and P. Sagaut, Zonal multi-domain RANS/LES simulations of turbulent flows, International Journal for Numerical Methods in Fluids, 40 (2002), 903-925. doi: 10.1002/fld.381.
[46] J. Rabault, M. Kuchta, A. Jensen, U. Réglade and N. Cerardi, Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, Journal of Fluid Mechanics, 865 (2019), 281-302. doi: 10.1017/jfm.2019.62.
[47] M. Raissi, P. Perdikaris and G. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics, 378 (2019), 686-707. doi: 10.1016/j.jcp.2018.10.045.
[48] M. Raissi, Z. Wang, M. S. Triantafyllou and G. E. Karniadakis, Deep learning of vortex-induced vibrations, Journal of Fluid Mechanics, 861 (2019), 119-137. doi: 10.1017/jfm.2018.872.
[49] P. Sagaut, Multiscale and Multiresolution Approaches in Turbulence: LES, DES and Hybrid RANS/LES Methods: Applications and Guidelines, World Scientific, 2013. doi: 10.1142/p878.
[50] M. Samorani, The wind farm layout optimization problem, in Handbook of Wind Power Systems (eds. P. M. Pardalos, S. Rebennack, M. V. F. Pereira, N. A. Iliadis and V. Pappu). doi: 10.1007/978-3-642-41080-2_2.
[51] J. U. Schlüter, H. Pitsch and P. Moin, Large-eddy simulation inflow conditions for coupling with Reynolds-averaged flow solvers, AIAA Journal, 42 (2004), 478-484. doi: 10.2514/1.3488.
[52] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong and W.-C. Woo, Convolutional LSTM network: A machine learning approach for precipitation nowcasting, in Advances in Neural Information Processing Systems 28, Curran Associates, Inc., 2015, 802-810. Available from: http://papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learning-approach-for-precipitation-nowcasting.pdf.
[53] J. Smagorinsky, General circulation experiments with the primitive equations: I. The basic experiment, Monthly Weather Review, 91 (1963), 99-164. doi: 10.1175/1520-0493(1963)091<0099:GCEWTP>2.3.CO;2.
[54] I. Sobel and G. Feldman, A 3x3 isotropic gradient operator for image processing, talk presented at the Stanford Artificial Intelligence Project, 271-272.
[55] C. G. Speziale, Computing non-equilibrium turbulent flows with time-dependent RANS and VLES, in Fifteenth International Conference on Numerical Methods in Fluid Dynamics, Springer, 1997, 123-129. doi: 10.1007/BFb0107089.
[56] A. Subramaniam, M. L. Wong, R. D. Borker, S. Nimmagadda and S. K. Lele, Turbulence enrichment using generative adversarial networks, preprint, arXiv: 2003.01907.
[57] L. Sun, H. Gao, S. Pan and J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Computer Methods in Applied Mechanics and Engineering, 361 (2020), 112732. doi: 10.1016/j.cma.2019.112732.
[58] E. G. Tabak and C. V. Turner, A family of nonparametric density estimation algorithms, Communications on Pure and Applied Mathematics, 66 (2013), 145-164. doi: 10.1002/cpa.21423.
[59] E. G. Tabak and E. Vanden-Eijnden, Density estimation by dual ascent of the log-likelihood, Communications in Mathematical Sciences, 8 (2010), 217-233. doi: 10.4310/CMS.2010.v8.n1.a11.
[60] S. Taghizadeh, F. D. Witherden and S. S. Girimaji, Turbulence closure modeling with data-driven techniques: Physical compatibility and consistency considerations, preprint, arXiv: 2004.03031.
[61] M. Terracol, E. Manoha, C. Herrero, E. Labourasse, S. Redonnet and P. Sagaut, Hybrid methods for airframe noise numerical prediction, Theoretical and Computational Fluid Dynamics, 19 (2005), 197-227. doi: 10.1007/s00162-005-0165-5.
[62] M. Terracol, P. Sagaut and C. Basdevant, A multilevel algorithm for large-eddy simulation of turbulent compressible flows, Journal of Computational Physics, 167 (2001), 439-474. doi: 10.1016/S0021-9991(02)00017-7.
[63] J. Tompson, K. Schlachter, P. Sprechmann and K. Perlin, Accelerating Eulerian fluid simulation with convolutional networks, in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, 3424-3433. Available from: http://dl.acm.org/citation.cfm?id=3305890.3306035.
[64] A. Travin, M. Shur, M. Strelets and P. R. Spalart, Physical and numerical upgrades in the detached-eddy simulation of complex turbulent flows, in Advances in LES of Complex Flows (eds. R. Friedrich and W. Rodi), Springer Netherlands, Dordrecht, 2002, 239-254. doi: 10.1007/0-306-48383-1_16.
[65] Y.-H. Tseng, C. Meneveau and M. B. Parlange, Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation, Environmental Science & Technology, 40 (2006), 2653-2662. doi: 10.1021/es051708m.
[66] J.-X. Wang, J.-L. Wu and H. Xiao, Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data, Physical Review Fluids, 2 (2017), 034603. doi: 10.1103/PhysRevFluids.2.034603.
[67] Z. Wang, K. Luo, D. Li, J. Tan and J. Fan, Investigations of data-driven closure for subgrid-scale stress in large-eddy simulation, Physics of Fluids, 30 (2018), 125101. doi: 10.1063/1.5054835.
[68] M. Werhahn, Y. Xie, M. Chu and N. Thuerey, A multi-pass GAN for fluid flow super-resolution, preprint, arXiv: 1906.01689. doi: 10.1145/3340251.
[69] S. Wiewel, M. Becher and N. Thuerey, Latent space physics: Towards learning the temporal evolution of fluid flow, Computer Graphics Forum, 38 (2019), 71-82. doi: 10.1111/cgf.13620.
[70] J. Wu, H. Xiao, R. Sun and Q. Wang, Reynolds-averaged Navier-Stokes equations with explicit data-driven Reynolds stress closure can be ill-conditioned, Journal of Fluid Mechanics, 869 (2019), 553-586. doi: 10.1017/jfm.2019.205.
[71] H. Xiao, J.-L. Wu, J.-X. Wang, R. Sun and C. Roy, Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach, Journal of Computational Physics, 324 (2016), 115-136. doi: 10.1016/j.jcp.2016.07.038.
[72] W. Xiong, W. Luo, L. Ma, W. Liu and J. Luo, Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, 2364-2373. doi: 10.1109/CVPR.2018.00251.
[73] Y. Yang and P. Perdikaris, Adversarial uncertainty quantification in physics-informed neural networks, Journal of Computational Physics, 394 (2019), 136-152. doi: 10.1016/j.jcp.2019.05.027.
[74] L. Zhao, X. Peng, Y. Tian, M. Kapadia and D. Metaxas, Learning to forecast and refine residual motion for image-to-video generation, in Proceedings of the European Conference on Computer Vision (ECCV), 2018, 387-403. doi: 10.1007/978-3-030-01267-0_24.
[75] Y. Zhu and N. Zabaras, Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 366 (2018), 415-447. doi: 10.1016/j.jcp.2018.04.018.
[76] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis and P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics, 394 (2019), 56-81. doi: 10.1016/j.jcp.2019.05.024.

show all references

##### References:
 [1] F. Ahmed and N. Rajaratnam, Flow around bridge piers, Journal of Hydraulic Engineering, 124 (1998), 288-300.  doi: 10.1061/(ASCE)0733-9429(1998)124:3(288).  Google Scholar [2] L. Ardizzone, C. Lüth, J. Kruse, C. Rother and U. Köthe, Guided image generation with conditional invertible neural networks, preprint, arXiv: 1907.02392. Google Scholar [3] K. Bieker, S. Peitz, S. L. Brunton, J. N. Kutz and M. Dellnitz, Deep model predictive control with online learning for complex physical systems, preprint, arXiv: 1905.10094. Google Scholar [4] L. Chen, K. Asai, T. Nonomura, G. Xi and T. Liu, A review of backward-facing step (BFS) flow mechanisms, heat transfer and control, Thermal Science and Engineering Progress, 6 (2018), 194-216.  doi: 10.1016/j.tsep.2018.04.004.  Google Scholar [5] J. Chung, S. Ahn and Y. Bengio, Hierarchical multiscale recurrent neural networks, preprint, arXiv: 1609.01704. Google Scholar [6] L. Dinh, D. Krueger and Y. Bengio, Nice: Non-linear independent components estimation, preprint, arXiv: 1410.8516. Google Scholar [7] L. Dinh, J. Sohl-Dickstein and S. Bengio, Density estimation using real nvp, preprint, arXiv: 1605.08803. Google Scholar [8] E. Erturk, Numerical solutions of 2-D steady incompressible flow over a backward-facing step, part I: High Reynolds number solutions, Computers and Fluids, 37 (2008), 633-655.  doi: 10.1016/j.compfluid.2007.09.003.  Google Scholar [9] N. Geneva and N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks, Journal of Computational Physics, 109056. doi: 10.1016/j.jcp.2019.109056.  Google Scholar [10] N. Geneva and N. Zabaras, Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks, Journal of Computational Physics, 383 (2019), 125-147.  doi: 10.1016/j.jcp.2019.01.021.  Google Scholar [11] X. Glorot, A. Bordes and Y. 
Bengio, Deep sparse rectifier neural networks, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011,315–323. Google Scholar [12] J. S. González, A. G. G. Rodriguez, J. C. Mora, J. R. Santos and M. B. Payan, Optimization of wind farm turbines layout using an evolutive algorithm, Renewable Energy, 35 (2010), 1671–1681. Available from: http://www.sciencedirect.com/science/article/pii/S0960148110000145. Google Scholar [13] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT press, 2016.   Google Scholar [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, Generative adversarial nets, in Advances in Neural Information Processing Systems, 2014, 2672–2680. Google Scholar [15] W. Grathwohl, R. T. Chen, J. Betterncourt, I. Sutskever and D. Duvenaud, Ffjord: Free-form continuous dynamics for scalable reversible generative models, preprint, arXiv: 1810.01367. Google Scholar [16] X. Guo, W. Li and F. Iorio, Convolutional neural networks for steady flow approximation, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining doi: 10.1145/2939672.2939738.  Google Scholar [17] G. Haller, An objective definition of a vortex, Journal of Fluid Mechanics, 525 (2005), 1-26.  doi: 10.1017/S0022112004002526.  Google Scholar [18] R. Han, Y. Wang, Y. Zhang and G. Chen, A novel spatial-temporal prediction method for unsteady wake flows based on hybrid deep neural network, Physics of Fluids, 31 (2019), 127101. doi: 10.1063/1.5127247.  Google Scholar [19] O. Hennigh, Lat-net: Compressing lattice Boltzmann flow simulations using deep neural networks, preprint, arXiv: 1705.09036. Google Scholar [20] J. Hoffman and C. Johnson, A new approach to computational turbulence modeling, Computer Methods in Applied Mechanics and Engineering, 195 (2006), 2865-2880.  doi: 10.1016/j.cma.2004.09.015.  Google Scholar [21] J. Holgate, A. 
Skillen, T. Craft and A. Revell, A review of embedded large eddy simulation for internal flows, Archives of Computational Methods in Engineering, 26 (2019), 865-882.  doi: 10.1007/s11831-018-9272-5.  Google Scholar [22] G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, Densely connected convolutional networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. doi: 10.1109/CVPR.2017.243.  Google Scholar [23] W. Huang, Q. Yang and H. Xiao, CFD modeling of scale effects on turbulence flow and scour around bridge piers, Computers and Fluids, 38 (2009), 1050-1058.  doi: 10.1016/j.compfluid.2008.01.029.  Google Scholar [24] J. C. Hunt, A. A. Wray and P. Moin, Eddies, streams, and convergence zones in turbulent flows, in Center for Turbulence Research Report, CTR-S88, 1988. Available from: https://ntrs.nasa.gov/search.jsp?R=19890015184. Google Scholar [25] S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, preprint, arXiv: 1502.03167. Google Scholar [26] J.-H. Jacobsen, A. Smeulders and E. Oyallon, i-revnet: Deep invertible networks, preprint, arXiv: 1802.07088. Google Scholar [27] H. Jasak, A. Jemcov, Z. Tukovic, et al., OpenFOAM: A C++ library for complex physics simulations, in International Workshop on Coupled Methods in Numerical Dynamics, 1000, IUC Dubrovnik, Croatia, 2007, 1–20. Google Scholar [28] B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross and B. Solenthaler, Deep fluids: A generative network for parameterized fluid simulations, Computer Graphics Forum, 38 (2019), 59-70.  doi: 10.1111/cgf.13619.  Google Scholar [29] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980. Google Scholar [30] D. P. Kingma and M. Welling, Auto-encoding variational bayes, arXiv: 1312.6114. Google Scholar [31] D. P. Kingma and P. 
Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, in Advances in Neural Information Processing Systems, 2018, 10215–10224. Google Scholar [32] M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh and D. Kingma, Videoflow: A flow-based generative model for video, preprint, arXiv: 1903.01434. Google Scholar [33] R. Kumar, S. Ozair, A. Goyal, A. Courville and Y. Bengio, Maximum entropy generators for energy-based models, preprint, arXiv: 1901.08508. Google Scholar [34] C. J. Lapeyre, A. Misdariis, N. Cazard, D. Veynante and T. Poinsot, Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates, Combustion and Flame, 203 (2019), 255-264.  doi: 10.1016/j.combustflame.2019.02.019.  Google Scholar [35] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato and F. Huang, A tutorial on energy-based learning, Predicting Structured Data, 1 (2006), 59 pp. Google Scholar [36] C. Li, J. Li, G. Wang and L. Carin, Learning to sample with adversarially learned likelihood-ratio, 2018. Available from: https://openreview.net/forum?id=S1eZGHkDM. Google Scholar [37] J. Ling, A. Kurzawski and J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics, 807 (2016), 155-166.  doi: 10.1017/jfm.2016.615.  Google Scholar [38] P. Liu, X. Qiu, X. Chen, S. Wu and X.-J. Huang, Multi-timescale long short-term memory neural network for modelling sentences and documents, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, 2326–2335. doi: 10.18653/v1/D15-1280.  Google Scholar [39] R. Maulik, O. San, A. Rasheed and P. Vedula, Subgrid modelling for two-dimensional turbulence using neural networks, Journal of Fluid Mechanics, 858 (2019), 122-144.  doi: 10.1017/jfm.2018.770.  Google Scholar [40] S. M. 
Mitran, A Comparison of Adaptive Mesh Refinement Approaches for Large Eddy Simulation, Technical report, Washington University, Seattle, Department of Applied Mathematics, 2001. Google Scholar [41] S. Mo, Y. Zhu, N. Zabaras, X. Shi and J. Wu, Deep convolutional encoder-decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media, Water Resources Research, 55 (2019), 703-728.  doi: 10.1029/2018WR023528.  Google Scholar [42] A. Mohan, D. Daniel, M. Chertkov and D. Livescu, Compressed convolutional lstm: An efficient deep learning framework to model high fidelity 3d turbulence, preprint, arXiv: 1903.00033. Google Scholar [43] M. H. Patel, Dynamics of Offshore Structures, Butterworth-Heinemann, 2013. Google Scholar [44] S. B. Pope, Turbulent Flows, Cambridge University Press, Cambridge, 2000.  doi: 10.1017/CBO9780511840531.  Google Scholar [45] P. Quéméré and P. Sagaut, Zonal multi-domain rans/les simulations of turbulent flows, International Journal for Numerical Methods in Fluids, 40 (2002), 903-925.  doi: 10.1002/fld.381.  Google Scholar [46] J. Rabault, M. Kuchta, A. Jensen, U. Réglade and N. Cerardi, Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, Journal of Fluid Mechanics, 865 (2019), 281-302.  doi: 10.1017/jfm.2019.62.  Google Scholar [47] M. Raissi, P. Perdikaris and G. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics, 378 (2019), 686-707.  doi: 10.1016/j.jcp.2018.10.045.  Google Scholar [48] M. Raissi, Z. Wang, M. S. Triantafyllou and G. E. Karniadakis, Deep learning of vortex-induced vibrations, Journal of Fluid Mechanics, 861 (2019), 119-137.  doi: 10.1017/jfm.2018.872.  Google Scholar [49] P. 
Sagaut, Multiscale and Multiresolution Approaches in Turbulence: LES, DES and Hybrid RANS/LES Methods: Applications and Guidelines, World Scientific, 2013. doi: 10.1142/p878.  Google Scholar [50] M. Samorani, The wind farm layout optimization problem, in Handbook of Wind Power Systems (eds. P. M. Pardalos, S. Rebennack, M. V. F. Pereira, N. A. Iliadis and V. Pappu) doi: 10.1007/978-3-642-41080-2_2.  Google Scholar [51] J. U. Schlüter, H. Pitsch and P. Moin, Large-eddy simulation inflow conditions for coupling with reynolds-averaged flow solvers, AIAA Journal, 42 (2004), 478–484. Available from: https://doi.org/10.2514/1.3488. Google Scholar [52] X. SHI, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong and W.-c. Woo, Convolutional LSTM network: A machine learning approach for precipitation nowcasting, in Advances in Neural Information Processing Systems 28, Curran Associates, Inc., 2015,802–810. Available from: http://papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learningapproach-for-precipitation-nowcasting.pdf Google Scholar [53] J. Smagorinsky, General circulation experiments with the primitive equations: I. The basic experiment, Monthly Weather Review, 91 (1963), 99-164.  doi: 10.1175/1520-0493(1963)091<0099:GCEWTP>2.3.CO;2.  Google Scholar [54] I. Sobel and G. Feldman, A 3x3 isotropic gradient operator for image processing, Presented at a talk at the Stanford Artificial Intelligence Project, 271–272. Google Scholar [55] C. G. Speziale, Computing non-equilibrium turbulent flows with time-dependent RANS and VLES, in Fifteenth International Conference on Numerical Methods in Fluid Dynamics, Springer, 1997,123–129. doi: 10.1007/BFb0107089.  Google Scholar [56] A. Subramaniam, M. L. Wong, R. D. Borker, S. Nimmagadda and S. K. Lele, Turbulence enrichment using generative adversarial networks, preprint, arXiv: 2003.01907. Google Scholar [57] L. Sun, H. Gao, S. Pan and J.-X. 
Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Computer Methods in Applied Mechanics and Engineering, 361 (2020), 112732. doi: 10.1016/j.cma.2019.112732.  Google Scholar [58] E. G. Tabak and C. V. Turner, A family of nonparametric density estimation algorithms, Communications on Pure and Applied Mathematics, 66 (2013), 145-164.  doi: 10.1002/cpa.21423.  Google Scholar [59] E. G. Tabak and E. Vanden-Eijnden, Density estimation by dual ascent of the log-likelihood, Communications in Mathematical Sciences, 8 (2010), 217-233.  doi: 10.4310/CMS.2010.v8.n1.a11.  Google Scholar [60] S. Taghizadeh, F. D. Witherden and S. S. Girimaji, Turbulence closure modeling with data-driven techniques: Physical compatibility and consistency considerations, preprint, arXiv: 2004.03031. Google Scholar [61] M. Terracol, E. Manoha, C. Herrero, E. Labourasse, S. Redonnet and P. Sagaut, Hybrid methods for airframe noise numerical prediction, Theoretical and Computational Fluid Dynamics, 19 (2005), 197-227.  doi: 10.1007/s00162-005-0165-5.  Google Scholar [62] M. Terracol, P. Sagaut and C. Basdevant, A multilevel algorithm for large-eddy simulation of turbulent compressible flows, Journal of Computational Physics, 167 (2001), 439-474.  doi: 10.1016/S0021-9991(02)00017-7.  Google Scholar [63] J. Tompson, K. Schlachter, P. Sprechmann and K. Perlin, Accelerating eulerian fluid simulation with convolutional networks, in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, 3424–3433. Available from: http://dl.acm.org/citation.cfm?id=3305890.3306035. Google Scholar [64] A. Travin, M. Shur, M. Strelets and P. R. Spalart, Physical and numerical upgrades in the detached-eddy simulation of complex turbulent flows, in Advances in LES of Complex Flows (eds. R. Friedrich and W. Rodi), Springer Netherlands, Dordrecht, 2002,239–254. doi: 10.1007/0-306-48383-1_16.  Google Scholar [65] Y.-H. 
Tseng, C. Meneveau and M. B. Parlange, Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation, Environmental Science & Technology, 40 (2006), 2653-2662.  doi: 10.1021/es051708m.  Google Scholar [66] J.-X. Wang, J.-L. Wu and H. Xiao, Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data, Phys. Rev. Fluids, 2 (2017), 034603. doi: 10.1103/PhysRevFluids.2.034603.  Google Scholar [67] Z. Wang, K. Luo, D. Li, J. Tan and J. Fan, Investigations of data-driven closure for subgrid-scale stress in large-eddy simulation, Physics of Fluids, 30 (2018), 125101. doi: 10.1063/1.5054835.  Google Scholar [68] M. Werhahn, Y. Xie, M. Chu and N. Thuerey, A multi-pass GAN for fluid flow super-resolution, preprint, arXiv: 1906.01689. doi: 10.1145/3340251.  Google Scholar [69] S. Wiewel, M. Becher and N. Thuerey, Latent space physics: Towards learning the temporal evolution of fluid flow, Computer Graphics Forum, 38 (2019), 71-82.  doi: 10.1111/cgf.13620.  Google Scholar [70] J. Wu, H. Xiao, R. Sun and Q. Wang, Reynolds-averaged Navier-Stokes equations with explicit data-driven Reynolds stress closure can be ill-conditioned, Journal of Fluid Mechanics, 869 (2019), 553-586.  doi: 10.1017/jfm.2019.205.  Google Scholar [71] H. Xiao, J.-L. Wu, J.-X. Wang, R. Sun and C. Roy, Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach, Journal of Computational Physics, 324 (2016), 115-136.  doi: 10.1016/j.jcp.2016.07.038.  Google Scholar [72] W. Xiong, W. Luo, L. Ma, W. Liu and J. Luo, Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, 2364–2373. doi: 10.1109/CVPR.2018.00251.  Google Scholar [73] Y. Yang and P. 
Perdikaris, Adversarial uncertainty quantification in physics-informed neural networks, Journal of Computational Physics, 394 (2019), 136-152. doi: 10.1016/j.jcp.2019.05.027. [74] L. Zhao, X. Peng, Y. Tian, M. Kapadia and D. Metaxas, Learning to forecast and refine residual motion for image-to-video generation, in Proceedings of the European Conference on Computer Vision (ECCV), 2018, 387-403. doi: 10.1007/978-3-030-01267-0_24. [75] Y. Zhu and N. Zabaras, Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 366 (2018), 415-447. doi: 10.1016/j.jcp.2018.04.018. [76] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis and P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics, 394 (2019), 56-81. doi: 10.1016/j.jcp.2019.05.024.
Comparison between traditional hybrid VLES-LES simulation (left) and the proposed multi-fidelity deep generative turbulence model (right) for studying the wake behind a wall-mounted cube
Figure 2.  Comparison of the forward and backward passes of various INN structures including (left to right) the standard INN, the conditional INN (CINN) [76] and the transient multi-fidelity Glow (TM-Glow) introduced in Section 3.2
Unfolded computational graph of a recurrent neural network model for which the arrows show functional dependence
TM-Glow model. The model comprises a low-fidelity encoder that conditions a generative flow model to produce samples of high-fidelity field snapshots. LSTM affine blocks are introduced to pass information between time-steps using recurrent connections. Boxes with rounded corners in (a) indicate a stack of the elements inside and should not be confused with plate notation. Arrows illustrate the forward pass of the INN. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)
The unrolled computational graph of the TM-Glow model for a model depth of $k_{d} = 3$
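The unrolled graph corresponds to a simple control flow: at each time step the low-fidelity snapshot is encoded into conditioning features, a fresh latent draw is mapped through the generative flow, and recurrent states carry temporal information forward. A hedged sketch of this rollout follows; all names (`encoder`, `glow_inverse`, `sample_latent`) are hypothetical stand-ins, not the authors' implementation:

```python
def rollout(encoder, glow_inverse, low_fidelity_seq, sample_latent, init_states):
    """Unroll a conditional generative model over a low-fidelity time series.

    Hypothetical sketch: encoder produces conditioning features, glow_inverse
    maps a latent draw to a high-fidelity snapshot while updating recurrent
    (LSTM-like) states that link consecutive time steps.
    """
    states = init_states
    outputs = []
    for x_lf in low_fidelity_seq:
        xi = encoder(x_lf)                          # conditioning features
        z = sample_latent()                         # latent draw per snapshot
        y_hf, states = glow_inverse(z, xi, states)  # generate + update states
        outputs.append(y_hf)
    return outputs

# Trivial numeric stand-ins, illustrating the control flow only:
demo = rollout(
    encoder=lambda x: 2 * x,
    glow_inverse=lambda z, xi, s: (z + xi + s, s + 1),
    low_fidelity_seq=[1.0, 2.0, 3.0],
    sample_latent=lambda: 0.0,
    init_states=0,
)
print(demo)  # → [2.0, 5.0, 8.0]
```

Drawing a new latent sample at every step while reusing the recurrent states is what lets the model produce distinct yet temporally coherent realizations.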
The LSTM affine block used in TM-Glow consisting of $k_{c}$ affine coupling layers including an unnormalized conditional affine block (UnNorm Block), a stack of conditional affine blocks (Conditional Block) and a conditional LSTM affine block (LSTM Block)
The two variants of affine coupling layers used in TM-Glow with an input and output denoted as $\mathit{\boldsymbol{h}}_{k-1} = \left\{\mathit{\boldsymbol{h}}_{k-1}^{1}, \mathit{\boldsymbol{h}}_{k-1}^{2}\right\}$ and $\mathit{\boldsymbol{h}}_{k} = \left\{\mathit{\boldsymbol{h}}_{k}^{1}, \mathit{\boldsymbol{h}}_{k}^{2}\right\}$, respectively. Time-step superscripts have been omitted for clarity of presentation
Squeeze and split forward operations used to manipulate the dimensionality of the features in TM-Glow. (Left) The squeeze operation compresses the input feature map $\mathit{\boldsymbol{h}}_{k-1}$ using a checkerboard pattern, halving the spatial dimensionality and increasing the number of channels by a factor of four. (Right) The split operation factors out half of the input $\mathit{\boldsymbol{h}}_{k-1}$, which is then taken to be the latent random variable $\mathit{\boldsymbol{z}}^{(i)}$. The remaining features, $\mathit{\boldsymbol{h}}_{k}$, are sent deeper into the network. Time-step superscripts have been omitted for clarity of presentation
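The squeeze operation is a pure reshaping and is therefore exactly invertible. A minimal NumPy sketch of the standard checkerboard squeeze used in RealNVP/Glow-style flows (an illustration under that assumption, not the authors' code):

```python
import numpy as np

def squeeze(h):
    """Checkerboard squeeze (c, H, W) -> (4c, H/2, W/2): halves both spatial
    dimensions and quadruples the channel count."""
    c, H, W = h.shape
    h = h.reshape(c, H // 2, 2, W // 2, 2)
    h = h.transpose(0, 2, 4, 1, 3)      # gather each 2x2 sub-pixel into channels
    return h.reshape(4 * c, H // 2, W // 2)

def unsqueeze(h):
    """Exact inverse of squeeze: (4c, H/2, W/2) -> (c, H, W)."""
    c4, Hh, Wh = h.shape
    h = h.reshape(c4 // 4, 2, 2, Hh, Wh)
    h = h.transpose(0, 3, 1, 4, 2)      # scatter channels back to 2x2 blocks
    return h.reshape(c4 // 4, 2 * Hh, 2 * Wh)

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
y = squeeze(x)
assert y.shape == (8, 2, 2)             # 4x channels, half spatial size
assert np.allclose(unsqueeze(y), x)     # lossless round trip
```

Because no information is created or destroyed, the squeeze contributes nothing to the log-Jacobian of the flow.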
Figure 9.  Dense block with a growth rate and length of $2$. Residual connections between convolutions progressively stack feature maps, resulting in $12$ output channels in this schematic. Standard batch-normalization [25] and Rectified Linear Unit (ReLU) activation functions [11] are used in conjunction with the convolutional operations. Convolutions are denoted by their kernel size $k$, stride $s$ and padding $p$
(Left to right) Velocity magnitude MSE and turbulent kinetic energy (TKE) test MSE for TM-Glow models containing $k_{d}\cdot k_{c}$ affine coupling layers
Reliability diagrams of the x-velocity, y-velocity and pressure fields predicted with TM-Glow, evaluated over $12000$ model predictions. The black dashed line indicates matching empirical distributions between the model's samples and the observed validation data
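A reliability (calibration) curve of this kind compares nominal confidence levels against empirical coverage of the observations by the model's samples. The sketch below is a generic construction under that interpretation; the function name and setup are assumptions, not taken from the paper:

```python
import numpy as np

def reliability_curve(samples, observations, levels):
    """For each confidence level p, return the empirical fraction of
    observations falling below the model's p-quantile (estimated from the
    samples). A well-calibrated model gives fraction ~= p for every p."""
    empirical = []
    for p in levels:
        q = np.quantile(samples, p, axis=0)     # per-point p-quantile
        empirical.append(np.mean(observations <= q))
    return np.array(empirical)

rng = np.random.default_rng(0)
samples = rng.standard_normal((500, 2000))      # model draws at 2000 points
observations = rng.standard_normal(2000)        # "truth" from the same density
curve = reliability_curve(samples, observations, levels=[0.1, 0.5, 0.9])
# Calibrated model: curve tracks the diagonal, i.e. approximately [0.1, 0.5, 0.9]
```

Plotting `curve` against `levels` reproduces the diagrams: the dashed diagonal in the figure is exactly the `curve == levels` line.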
Flow over a backwards step. The green region indicates the recirculation region that TM-Glow will be used to predict. All domain boundaries are no-slip with the exceptions of the uniform inlet and zero-gradient outlet. The outlet length of the simulation domain is made double that of the prediction region to negate the effect of the outlet boundary condition on this zone
Figure 13.  Computational mesh around the backwards step used for the low- and high-fidelity CFD simulations solved with OpenFOAM [27]
(Left to right) Flow over backwards step velocity magnitude and turbulent kinetic energy (TKE) error during training of TM-Glow on different data set sizes. Error values were averaged over five model samples
(Top to bottom) Velocity magnitude of the high-fidelity target, low-fidelity input, three TM-Glow samples and standard deviation for two test flows
(Top to bottom) Q-criterion of the high-fidelity target, low-fidelity input and three TM-Glow samples for two test flows
TM-Glow time-series samples of $x$-velocity, $y$-velocity and pressure fields for a backwards step test case at $Re = 7500$. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted
(Top to bottom) Time-averaged x-velocity, y-velocity and pressure profiles for two different test cases at (left to right) $Re = 7500$ and $Re = 47500$. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
(Top to bottom) Turbulent kinetic energy and Reynolds shear stress profiles for two different test cases at (left to right) $Re = 7500$ and $Re = 47500$. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
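The profiled statistics follow the standard Reynolds decomposition of the velocity into a time mean and a fluctuating part. A small sketch of these generic definitions (not the authors' post-processing code):

```python
import numpy as np

def turbulence_statistics(u, v):
    """Reynolds decomposition for 2-D velocity time series of shape
    (n_steps, ...). Returns the time-averaged turbulent kinetic energy
    0.5*(<u'^2> + <v'^2>) and the Reynolds shear stress <u'v'>
    (per unit density)."""
    u_p = u - u.mean(axis=0)        # fluctuations about the time mean
    v_p = v - v.mean(axis=0)
    tke = 0.5 * ((u_p**2).mean(axis=0) + (v_p**2).mean(axis=0))
    shear = (u_p * v_p).mean(axis=0)
    return tke, shear

# Toy check: in-phase sinusoidal fluctuations of amplitude a on a unit mean flow.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
a = 0.2
u = 1.0 + a * np.sin(t)
v = a * np.sin(t)
tke, shear = turbulence_statistics(u, v)
# <sin^2> = 1/2, so tke = 0.5*(a^2/2 + a^2/2) = 0.02 and shear = a^2/2 = 0.02
```

For the figures, `u` and `v` would be stacks of predicted snapshots at every mesh point, with the statistics then extracted along the profiled lines.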
Flow around an array of bluff bodies. The red region indicates the area in which the bodies can be placed randomly. The green region indicates the wake zone in which TM-Glow is used to predict the high-fidelity response from a low-fidelity simulation
Velocity magnitude of the low-fidelity and high-fidelity simulations for two different cylinder arrays. (Left to right) Cylinder array configuration and the corresponding (top to bottom) high-fidelity and low-fidelity finite volume simulation results at several time-steps
Figure 22.  Computational mesh around the cylinder array used for the low- and high-fidelity CFD simulations solved with OpenFOAM [27]
(Left to right) Cylinder array velocity magnitude and turbulent kinetic energy (TKE) error during training of TM-Glow on different data set sizes. Error values were averaged over five model samples
(Top to bottom) Velocity magnitude of the high-fidelity target, low-fidelity input, three TM-Glow samples and standard deviation for two test cases
TM-Glow time-series samples of $x$-velocity, $y$-velocity and pressure fields for a cylinder array test case. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted
TM-Glow time-series samples of $x$-velocity, $y$-velocity and pressure fields for a cylinder array test case. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted
Time-averaged flow profiles for two test flows. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
Turbulent statistic profiles for two test flows. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
Figure 29.  Computational requirements for training TM-Glow given training data sets of various sizes. Computation is quantified using Service Units (SU) defined in Table 6
Invertible operations used in the generative normalizing flow method of TM-Glow. Consistent with the notation in [31], we assume the inputs and outputs of each operation are of dimension $\mathit{\boldsymbol{h}}_{k-1}, \mathit{\boldsymbol{h}}_{k} \in \mathbb{R}^{c\times h \times w}$ with $c$ channels and a feature map size of $\left[h \times w\right]$. Indexes over the spatial domain of the feature map are denoted by $\mathit{\boldsymbol{h}}(x, y)\in \mathbb{R}^{c}$. The coupling neural network and convolutional LSTM are abbreviated as $NN$ and $LSTM$, respectively. Time-step superscripts have been omitted for clarity of presentation
**Conditional Affine Layer** (log Jacobian: $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$)

Forward:
$$\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} &= \boldsymbol{h}_{k-1},\\ (\log \boldsymbol{s}, \boldsymbol{t}) &= NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}),\\ \boldsymbol{h}_{k}^{2} &= \exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t},\\ \boldsymbol{h}_{k}^{1} &= \boldsymbol{h}_{k-1}^{1},\\ \boldsymbol{h}_{k} &= \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\}. \end{aligned}$$

Inverse:
$$\begin{aligned} \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} &= \boldsymbol{h}_{k},\\ (\log \boldsymbol{s}, \boldsymbol{t}) &= NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}),\\ \boldsymbol{h}_{k-1}^{2} &= \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right),\\ \boldsymbol{h}_{k-1}^{1} &= \boldsymbol{h}_{k}^{1},\\ \boldsymbol{h}_{k-1} &= \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\}. \end{aligned}$$

**LSTM Affine Layer** (log Jacobian: $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$)

Forward:
$$\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} &= \boldsymbol{h}_{k-1},\\ \boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} &= LSTM\left(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right),\\ (\log \boldsymbol{s}, \boldsymbol{t}) &= NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out}),\\ \boldsymbol{h}_{k}^{2} &= \exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t},\\ \boldsymbol{h}_{k}^{1} &= \boldsymbol{h}_{k-1}^{1},\\ \boldsymbol{h}_{k} &= \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\}. \end{aligned}$$

Inverse:
$$\begin{aligned} \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} &= \boldsymbol{h}_{k},\\ \boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} &= LSTM\left(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right),\\ (\log \boldsymbol{s}, \boldsymbol{t}) &= NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out}),\\ \boldsymbol{h}_{k-1}^{2} &= \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right),\\ \boldsymbol{h}_{k-1}^{1} &= \boldsymbol{h}_{k}^{1},\\ \boldsymbol{h}_{k-1} &= \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\}. \end{aligned}$$

**ActNorm** (log Jacobian: $h\cdot w \cdot \textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$)

Forward: $\forall x,y\quad \boldsymbol{h}_{k}(x,y)=\boldsymbol{s}\odot \boldsymbol{h}_{k-1}(x,y) + \boldsymbol{b}$; Inverse: $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y)=(\boldsymbol{h}_{k}(x,y)-\boldsymbol{b})/\boldsymbol{s}$.

**$1\times 1$ Convolution** (log Jacobian: $h\cdot w \cdot \log\left|\det \boldsymbol{W}\right|$)

Forward: $\forall x,y\quad \boldsymbol{h}_{k}(x,y)=\boldsymbol{W}\boldsymbol{h}_{k-1}(x,y)$ with $\boldsymbol{W}\in\mathbb{R}^{c\times c}$; Inverse: $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y) =\boldsymbol{W}^{-1}\boldsymbol{h}_{k}(x,y)$.

**Split** (log Jacobian: not applicable)

Forward:
$$\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} &= \boldsymbol{h}_{k-1},\\ \left(\boldsymbol{\mu},\boldsymbol{\sigma}\right) &= NN\left(\boldsymbol{h}_{k-1}^{1}\right),\\ p_{\boldsymbol{\theta}}(\boldsymbol{z}_{k}) &= \mathcal{N}\left(\boldsymbol{h}_{k-1}^{2}\,|\, \boldsymbol{\mu}, \boldsymbol{\sigma} \right),\\ \boldsymbol{h}_{k} &= \boldsymbol{h}_{k-1}^{1}. \end{aligned}$$

Inverse:
$$\begin{aligned} \boldsymbol{h}_{k-1}^{1} &= \boldsymbol{h}_{k},\\ \left(\boldsymbol{\mu},\boldsymbol{\sigma}\right) &= NN\left(\boldsymbol{h}_{k-1}^{1}\right),\\ \boldsymbol{h}_{k-1}^{2} &\sim \mathcal{N}\left(\boldsymbol{\mu},\boldsymbol{\sigma} \right),\\ \boldsymbol{h}_{k-1} &= \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\}. \end{aligned}$$
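The affine coupling operations are invertible by construction: the inverse only re-evaluates the coupling network on the half of the features it never modified, so the network itself is never inverted. A minimal NumPy sketch with a toy stand-in for $NN$ (the real coupling network is a convolutional model, so this is an illustration only):

```python
import numpy as np

def toy_nn(h1, xi):
    """Stand-in for the coupling network NN(h1, xi). Any function works here,
    since invertibility of the layer never requires inverting NN itself."""
    log_s = 0.1 * np.tanh(h1 + xi)   # bounded log-scales for stability
    t = 0.5 * (h1 - xi)
    return log_s, t

def coupling_forward(h, xi):
    h1, h2 = np.split(h, 2)
    log_s, t = toy_nn(h1, xi)
    out = np.concatenate([h1, np.exp(log_s) * h2 + t])
    return out, np.sum(log_s)        # log|det J| = sum of the log-scales

def coupling_inverse(h, xi):
    h1, h2 = np.split(h, 2)
    log_s, t = toy_nn(h1, xi)        # recomputed from the untouched half
    return np.concatenate([h1, (h2 - t) / np.exp(log_s)])

rng = np.random.default_rng(0)
h = rng.standard_normal(8)
xi = rng.standard_normal(4)
out, log_jac = coupling_forward(h, xi)
assert np.allclose(coupling_inverse(out, xi), h)   # exact invertibility
```

The triangular structure of the transform is also why the log-Jacobian reduces to a simple sum, which keeps likelihood evaluation cheap.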
TM-Glow model and training parameters used for both numerical test cases. For parameters that vary between test cases, the superscripts $\dagger$ and $\ddagger$ denote the numerical examples in Sections 5 and 6, respectively. Hyper-parameter differences are due to memory constraints imposed by the varying predictive domain sizes
| TM-Glow | | Training | |
|---|---|---|---|
| Model Depth, $k_{d}$ | $3$ | Optimizer | ADAM [29] |
| Conditional Features, $\mathit{\boldsymbol{\xi}}^{(i)}$ | $32$ | Weight Decay | $10^{-6}$ |
| Recurrent Features, $\mathit{\boldsymbol{a}}^{(i)}_{in}, \mathit{\boldsymbol{c}}^{(i)}_{in}$ | $64, 64$ | Epochs | $400$ |
| Affine Coupling Layers, $k_{c}$ | $16$ | Mini-batch Size | $32^{\dagger}, 64^{\ddagger}$ |
| Coupling NN Layers | $2$ | BPTT | $10$ time-steps |
| | | Inverse Temp., $\beta$ | $200$ |
Ablation study of the impact of different parts of the backward KL loss. As a baseline, we also train TM-Glow using the standard maximum likelihood estimation (MLE) approach. The mean square error (MSE) of various flow field quantities is listed for each loss formulation. The lowest value for each error is bolded
| MLE | $V_{Pres}$ | $V_{Div}$ | $V_{L2}$ | $V_{RMS}$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{p}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}\right)$ | $\overline{V_{Div}}$ | $\overline{V_{Pres}}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ✔ | ✘ | ✘ | ✘ | ✘ | 0.0589 | 0.0085 | **0.0135** | 0.0204 | 0.0486 | 0.0137 | 0.0019 | 0.0615 |
| ✘ | ✔ | ✔ | ✔ | ✔ | 0.0490 | 0.0115 | 0.0188 | 0.0168 | 0.0292 | 0.0125 | **0.0012** | **0.0192** |
| ✘ | ✘ | ✔ | ✔ | ✔ | **0.0390** | **0.0078** | 0.0189 | **0.0162** | **0.0251** | **0.0106** | 0.0013 | 0.0402 |
| ✘ | ✘ | ✘ | ✔ | ✔ | 0.0463 | 0.0113 | 0.0158 | 0.0166 | 0.0256 | 0.0129 | **0.0012** | 0.0424 |
| ✘ | ✘ | ✘ | ✔ | ✘ | 0.0435 | 0.0089 | 0.0140 | 0.0168 | 0.0272 | 0.0131 | **0.0012** | 0.0366 |
Backwards step test error of various normalized time-averaged flow field quantities for the low-fidelity solution interpolated onto the high-fidelity mesh and for TM-Glow trained on training data sets of various sizes. Lower is better. TM-Glow errors were averaged over $20$ samples from the model. The training wall-clock (WC) time for each data set size is also listed
| | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}/u_{0}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}/u_{0}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{p}}}/u^{2}_{0}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}/u_{0}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}/u_{0}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}/u^{2}_{0}\right)$ | WC Hrs. |
|---|---|---|---|---|---|---|---|
| Low-Fidelity | 0.1212 | 0.0224 | 0.0199 | 0.0237 | 0.0177 | 0.0124 | - |
| $8$ Flows | 0.0182 | 0.0036 | 0.0023 | 0.0053 | 0.0059 | 0.0034 | 6.5 |
| $16$ Flows | 0.0185 | 0.0031 | 0.0021 | 0.0030 | 0.0033 | 0.0023 | 10.0 |
| $32$ Flows | 0.0091 | 0.0019 | 0.0014 | 0.0022 | 0.0022 | 0.0014 | 12.1 |
| $48$ Flows | 0.0074 | 0.0017 | 0.0014 | 0.0021 | 0.0022 | 0.0013 | 16.6 |
Cylinder array test error of various time-averaged flow field quantities for the low-fidelity solution interpolated onto the high-fidelity mesh and for TM-Glow trained on training data sets of different sizes. Lower is better. TM-Glow errors were averaged over $20$ samples from the model. The training wall-clock (WC) time for each data set size is also listed
| | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{p}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}\right)$ | WC Hrs. |
|---|---|---|---|---|---|---|---|
| Low-Fidelity | 0.1033 | 0.0081 | 0.0179 | 0.0655 | 0.0981 | 0.02156 | - |
| $16$ Flows | 0.0461 | 0.0078 | 0.0292 | 0.0116 | 0.0191 | 0.00096 | 4.3 |
| $32$ Flows | 0.0461 | 0.0078 | 0.0166 | 0.0128 | 0.0185 | 0.0093 | 4.9 |
| $64$ Flows | 0.0409 | 0.0062 | 0.0118 | 0.0107 | 0.0172 | 0.0084 | 6.8 |
| $96$ Flows | 0.0386 | 0.0059 | 0.0128 | 0.0100 | 0.0152 | 0.0074 | 10.3 |
Hardware used to run the low-fidelity and high-fidelity CFD simulations as well as the training and prediction of TM-Glow for both numerical examples
| | CPU Cores | CPU Model | GPUs | GPU Model | SU/Hour |
|---|---|---|---|---|---|
| Low-Fidelity | 1 | Intel Xeon E5-2680 | - | - | 1 |
| High-Fidelity | 8 | Intel Xeon E5-2680 | - | - | 8 |
| TM-Glow | 1 | Intel Xeon Gold 6226 | 4 | NVIDIA Tesla V100 | 8 |
Prediction cost of the surrogate compared to the high-fidelity simulator for flow over a backwards step (left) and flow around a cylinder array (right).
| Backwards Step | SU Hours | Wall-clock (mins) |
|---|---|---|
| Low-Fidelity | 0.06 | 4.5 |
| TM-Glow 20 Samples | 0.03 | 0.75 |
| Surrogate Prediction | 0.09 | 5.25 |
| High-Fidelity Prediction | 5.6 | 42 |
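From the backwards-step entries of this table, the full surrogate pipeline (low-fidelity solve plus 20 TM-Glow samples) is roughly 60 times cheaper in compute and 8 times faster in wall-clock time than a single high-fidelity simulation. A quick arithmetic check of these ratios:

```python
# Cost comparison from the backwards-step table (SU hours, wall-clock minutes).
surrogate_su = 0.06 + 0.03          # low-fidelity solve + 20 TM-Glow samples
high_fidelity_su = 5.6
su_reduction = high_fidelity_su / surrogate_su      # compute-cost reduction
wall_clock_reduction = 42 / 5.25                    # wall-clock reduction
print(round(su_reduction, 1), wall_clock_reduction)  # → 62.2 8.0
```

Note that the surrogate additionally yields 20 distinct flow realizations for that cost, whereas the high-fidelity figure covers a single simulation.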
