
Multi-fidelity generative deep learning turbulent flows
Scientific Computing and Artificial Intelligence (SCAI) Laboratory, University of Notre Dame, 311 Cushing Hall, Notre Dame, IN 46556, USA
In computational fluid dynamics, there is an inevitable trade-off between accuracy and computational cost. In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields given the solution of a computationally inexpensive but inaccurate low-fidelity solver. The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost orders of magnitude lower than that of a high-fidelity simulation. The deep generative model is a conditional invertible neural network, built with normalizing flows, with recurrent LSTM connections that allow for stable training of transient systems with high predictive accuracy. The model is trained with a variational loss that combines both data-driven and physics-constrained learning. This deep generative model is applied to non-trivial high Reynolds number flows governed by the Navier-Stokes equations, including turbulent flow over a backward-facing step at different Reynolds numbers and the turbulent wake behind an array of bluff bodies. For both examples, the model generates unique yet physically accurate turbulent fluid flows conditioned on an inexpensive low-fidelity solution.
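For context, a normalizing flow assigns an exact likelihood through the standard change-of-variables identity; conditioning every layer on the low-fidelity solution makes this density conditional. In generic notation (not necessarily the paper's), with invertible layers $\boldsymbol{h}_{0} = \boldsymbol{y} \mapsto \dots \mapsto \boldsymbol{h}_{K} = \boldsymbol{z}$ and a simple latent density $p(\boldsymbol{z})$:
$$\log p_{\theta}\left(\boldsymbol{y}\mid\boldsymbol{\xi}\right) = \log p\left(\boldsymbol{z}\right) + \sum_{k=1}^{K}\log\left|\det\frac{\partial\boldsymbol{h}_{k}}{\partial\boldsymbol{h}_{k-1}}\right|, \qquad \boldsymbol{z} = f_{\theta}\left(\boldsymbol{y};\boldsymbol{\xi}\right).$$
Each invertible operation of the model contributes one of the log-Jacobian terms tabulated later on this page; sampling runs the inverse map on draws from $p(\boldsymbol{z})$.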
References:
[1] |
F. Ahmed and N. Rajaratnam,
Flow around bridge piers, Journal of Hydraulic Engineering, 124 (1998), 288-300.
doi: 10.1061/(ASCE)0733-9429(1998)124:3(288). |
[2] |
L. Ardizzone, C. Lüth, J. Kruse, C. Rother and U. Köthe, Guided image generation with conditional invertible neural networks, preprint, arXiv: 1907.02392. |
[3] |
K. Bieker, S. Peitz, S. L. Brunton, J. N. Kutz and M. Dellnitz, Deep model predictive control with online learning for complex physical systems, preprint, arXiv: 1905.10094. |
[4] |
L. Chen, K. Asai, T. Nonomura, G. Xi and T. Liu,
A review of backward-facing step (BFS) flow mechanisms, heat transfer and control, Thermal Science and Engineering Progress, 6 (2018), 194-216.
doi: 10.1016/j.tsep.2018.04.004. |
[5] |
J. Chung, S. Ahn and Y. Bengio, Hierarchical multiscale recurrent neural networks, preprint, arXiv: 1609.01704. |
[6] |
L. Dinh, D. Krueger and Y. Bengio, NICE: Non-linear independent components estimation, preprint, arXiv: 1410.8516. |
[7] |
L. Dinh, J. Sohl-Dickstein and S. Bengio, Density estimation using Real NVP, preprint, arXiv: 1605.08803. |
[8] |
E. Erturk,
Numerical solutions of 2-D steady incompressible flow over a backward-facing step, part I: High Reynolds number solutions, Computers and Fluids, 37 (2008), 633-655.
doi: 10.1016/j.compfluid.2007.09.003. |
[9] |
N. Geneva and N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks, Journal of Computational Physics, 403 (2020), 109056.
doi: 10.1016/j.jcp.2019.109056. |
[10] |
N. Geneva and N. Zabaras,
Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks, Journal of Computational Physics, 383 (2019), 125-147.
doi: 10.1016/j.jcp.2019.01.021. |
[11] |
X. Glorot, A. Bordes and Y. Bengio, Deep sparse rectifier neural networks, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, 315–323. |
[12] |
J. S. González, A. G. G. Rodriguez, J. C. Mora, J. R. Santos and M. B. Payan, Optimization of wind farm turbines layout using an evolutive algorithm, Renewable Energy, 35 (2010), 1671–1681. Available from: http://www.sciencedirect.com/science/article/pii/S0960148110000145. |
[13] |
I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016. |
[14] |
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, Generative adversarial nets, in Advances in Neural Information Processing Systems, 2014, 2672–2680. |
[15] |
W. Grathwohl, R. T. Q. Chen, J. Bettencourt, I. Sutskever and D. Duvenaud, FFJORD: Free-form continuous dynamics for scalable reversible generative models, preprint, arXiv: 1810.01367. |
[16] |
X. Guo, W. Li and F. Iorio, Convolutional neural networks for steady flow approximation, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
doi: 10.1145/2939672.2939738. |
[17] |
G. Haller,
An objective definition of a vortex, Journal of Fluid Mechanics, 525 (2005), 1-26.
doi: 10.1017/S0022112004002526. |
[18] |
R. Han, Y. Wang, Y. Zhang and G. Chen, A novel spatial-temporal prediction method for unsteady wake flows based on hybrid deep neural network, Physics of Fluids, 31 (2019), 127101.
doi: 10.1063/1.5127247. |
[19] |
O. Hennigh, Lat-Net: Compressing lattice Boltzmann flow simulations using deep neural networks, preprint, arXiv: 1705.09036. |
[20] |
J. Hoffman and C. Johnson,
A new approach to computational turbulence modeling, Computer Methods in Applied Mechanics and Engineering, 195 (2006), 2865-2880.
doi: 10.1016/j.cma.2004.09.015. |
[21] |
J. Holgate, A. Skillen, T. Craft and A. Revell,
A review of embedded large eddy simulation for internal flows, Archives of Computational Methods in Engineering, 26 (2019), 865-882.
doi: 10.1007/s11831-018-9272-5. |
[22] |
G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, Densely connected convolutional networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
doi: 10.1109/CVPR.2017.243. |
[23] |
W. Huang, Q. Yang and H. Xiao,
CFD modeling of scale effects on turbulence flow and scour around bridge piers, Computers and Fluids, 38 (2009), 1050-1058.
doi: 10.1016/j.compfluid.2008.01.029. |
[24] |
J. C. Hunt, A. A. Wray and P. Moin, Eddies, streams, and convergence zones in turbulent flows, in Center for Turbulence Research Report, CTR-S88, 1988. Available from: https://ntrs.nasa.gov/search.jsp?R=19890015184. |
[25] |
S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, preprint, arXiv: 1502.03167. |
[26] |
J.-H. Jacobsen, A. Smeulders and E. Oyallon, i-RevNet: Deep invertible networks, preprint, arXiv: 1802.07088. |
[27] |
H. Jasak, A. Jemcov and Z. Tukovic, OpenFOAM: A C++ library for complex physics simulations, in International Workshop on Coupled Methods in Numerical Dynamics, IUC Dubrovnik, Croatia, 2007, 1–20. |
[28] |
B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross and B. Solenthaler,
Deep fluids: A generative network for parameterized fluid simulations, Computer Graphics Forum, 38 (2019), 59-70.
doi: 10.1111/cgf.13619. |
[29] |
D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980. |
[30] |
D. P. Kingma and M. Welling, Auto-encoding variational Bayes, preprint, arXiv: 1312.6114. |
[31] |
D. P. Kingma and P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, in Advances in Neural Information Processing Systems, 2018, 10215–10224. |
[32] |
M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh and D. Kingma, VideoFlow: A flow-based generative model for video, preprint, arXiv: 1903.01434. |
[33] |
R. Kumar, S. Ozair, A. Goyal, A. Courville and Y. Bengio, Maximum entropy generators for energy-based models, preprint, arXiv: 1901.08508. |
[34] |
C. J. Lapeyre, A. Misdariis, N. Cazard, D. Veynante and T. Poinsot,
Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates, Combustion and Flame, 203 (2019), 255-264.
doi: 10.1016/j.combustflame.2019.02.019. |
[35] |
Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato and F. Huang, A tutorial on energy-based learning, Predicting Structured Data, 1 (2006), 59 pp. |
[36] |
C. Li, J. Li, G. Wang and L. Carin, Learning to sample with adversarially learned likelihood-ratio, 2018. Available from: https://openreview.net/forum?id=S1eZGHkDM. |
[37] |
J. Ling, A. Kurzawski and J. Templeton,
Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics, 807 (2016), 155-166.
doi: 10.1017/jfm.2016.615. |
[38] |
P. Liu, X. Qiu, X. Chen, S. Wu and X.-J. Huang, Multi-timescale long short-term memory neural network for modelling sentences and documents, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, 2326–2335.
doi: 10.18653/v1/D15-1280. |
[39] |
R. Maulik, O. San, A. Rasheed and P. Vedula,
Subgrid modelling for two-dimensional turbulence using neural networks, Journal of Fluid Mechanics, 858 (2019), 122-144.
doi: 10.1017/jfm.2018.770. |
[40] |
S. M. Mitran, A Comparison of Adaptive Mesh Refinement Approaches for Large Eddy Simulation, Technical report, University of Washington, Seattle, Department of Applied Mathematics, 2001. |
[41] |
S. Mo, Y. Zhu, N. Zabaras, X. Shi and J. Wu,
Deep convolutional encoder-decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media, Water Resources Research, 55 (2019), 703-728.
doi: 10.1029/2018WR023528. |
[42] |
A. Mohan, D. Daniel, M. Chertkov and D. Livescu, Compressed convolutional LSTM: An efficient deep learning framework to model high fidelity 3D turbulence, preprint, arXiv: 1903.00033. |
[43] |
M. H. Patel, Dynamics of Offshore Structures, Butterworth-Heinemann, 2013. |
[44] |
S. B. Pope, Turbulent Flows, Cambridge University Press, Cambridge, 2000.
doi: 10.1017/CBO9780511840531. |
[45] |
P. Quéméré and P. Sagaut,
Zonal multi-domain RANS/LES simulations of turbulent flows, International Journal for Numerical Methods in Fluids, 40 (2002), 903-925.
doi: 10.1002/fld.381. |
[46] |
J. Rabault, M. Kuchta, A. Jensen, U. Réglade and N. Cerardi,
Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, Journal of Fluid Mechanics, 865 (2019), 281-302.
doi: 10.1017/jfm.2019.62. |
[47] |
M. Raissi, P. Perdikaris and G. E. Karniadakis,
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics, 378 (2019), 686-707.
doi: 10.1016/j.jcp.2018.10.045. |
[48] |
M. Raissi, Z. Wang, M. S. Triantafyllou and G. E. Karniadakis,
Deep learning of vortex-induced vibrations, Journal of Fluid Mechanics, 861 (2019), 119-137.
doi: 10.1017/jfm.2018.872. |
[49] |
P. Sagaut, Multiscale and Multiresolution Approaches in Turbulence: LES, DES and Hybrid RANS/LES Methods: Applications and Guidelines, World Scientific, 2013.
doi: 10.1142/p878. |
[50] |
M. Samorani, The wind farm layout optimization problem, in Handbook of Wind Power Systems (eds. P. M. Pardalos, S. Rebennack, M. V. F. Pereira, N. A. Iliadis and V. Pappu), Springer.
doi: 10.1007/978-3-642-41080-2_2. |
[51] |
J. U. Schlüter, H. Pitsch and P. Moin, Large-eddy simulation inflow conditions for coupling with Reynolds-averaged flow solvers, AIAA Journal, 42 (2004), 478–484. doi: 10.2514/1.3488. |
[52] |
X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong and W.-c. Woo, Convolutional LSTM network: A machine learning approach for precipitation nowcasting, in Advances in Neural Information Processing Systems 28, Curran Associates, Inc., 2015, 802–810. Available from: http://papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learning-approach-for-precipitation-nowcasting.pdf. |
[53] |
J. Smagorinsky,
General circulation experiments with the primitive equations: I. The basic experiment, Monthly Weather Review, 91 (1963), 99-164.
doi: 10.1175/1520-0493(1963)091<0099:GCEWTP>2.3.CO;2. |
[54] |
I. Sobel and G. Feldman, A 3x3 isotropic gradient operator for image processing, presented at a talk at the Stanford Artificial Intelligence Project, 1968, 271–272. |
[55] |
C. G. Speziale, Computing non-equilibrium turbulent flows with time-dependent RANS and VLES, in Fifteenth International Conference on Numerical Methods in Fluid Dynamics, Springer, 1997, 123–129.
doi: 10.1007/BFb0107089. |
[56] |
A. Subramaniam, M. L. Wong, R. D. Borker, S. Nimmagadda and S. K. Lele, Turbulence enrichment using generative adversarial networks, preprint, arXiv: 2003.01907. |
[57] |
L. Sun, H. Gao, S. Pan and J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Computer Methods in Applied Mechanics and Engineering, 361 (2020), 112732.
doi: 10.1016/j.cma.2019.112732. |
[58] |
E. G. Tabak and C. V. Turner,
A family of nonparametric density estimation algorithms, Communications on Pure and Applied Mathematics, 66 (2013), 145-164.
doi: 10.1002/cpa.21423. |
[59] |
E. G. Tabak and E. Vanden-Eijnden,
Density estimation by dual ascent of the log-likelihood, Communications in Mathematical Sciences, 8 (2010), 217-233.
doi: 10.4310/CMS.2010.v8.n1.a11. |
[60] |
S. Taghizadeh, F. D. Witherden and S. S. Girimaji, Turbulence closure modeling with data-driven techniques: Physical compatibility and consistency considerations, preprint, arXiv: 2004.03031. |
[61] |
M. Terracol, E. Manoha, C. Herrero, E. Labourasse, S. Redonnet and P. Sagaut,
Hybrid methods for airframe noise numerical prediction, Theoretical and Computational Fluid Dynamics, 19 (2005), 197-227.
doi: 10.1007/s00162-005-0165-5. |
[62] |
M. Terracol, P. Sagaut and C. Basdevant,
A multilevel algorithm for large-eddy simulation of turbulent compressible flows, Journal of Computational Physics, 167 (2001), 439-474.
doi: 10.1016/S0021-9991(02)00017-7. |
[63] |
J. Tompson, K. Schlachter, P. Sprechmann and K. Perlin, Accelerating Eulerian fluid simulation with convolutional networks, in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, 3424–3433. Available from: http://dl.acm.org/citation.cfm?id=3305890.3306035. |
[64] |
A. Travin, M. Shur, M. Strelets and P. R. Spalart, Physical and numerical upgrades in the detached-eddy simulation of complex turbulent flows, in Advances in LES of Complex Flows (eds. R. Friedrich and W. Rodi), Springer Netherlands, Dordrecht, 2002, 239–254.
doi: 10.1007/0-306-48383-1_16. |
[65] |
Y.-H. Tseng, C. Meneveau and M. B. Parlange,
Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation, Environmental Science & Technology, 40 (2006), 2653-2662.
doi: 10.1021/es051708m. |
[66] |
J.-X. Wang, J.-L. Wu and H. Xiao, Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data, Physical Review Fluids, 2 (2017), 034603.
doi: 10.1103/PhysRevFluids.2.034603. |
[67] |
Z. Wang, K. Luo, D. Li, J. Tan and J. Fan, Investigations of data-driven closure for subgrid-scale stress in large-eddy simulation, Physics of Fluids, 30 (2018), 125101.
doi: 10.1063/1.5054835. |
[68] |
M. Werhahn, Y. Xie, M. Chu and N. Thuerey, A multi-pass GAN for fluid flow super-resolution, preprint, arXiv: 1906.01689.
doi: 10.1145/3340251. |
[69] |
S. Wiewel, M. Becher and N. Thuerey,
Latent space physics: Towards learning the temporal evolution of fluid flow, Computer Graphics Forum, 38 (2019), 71-82.
doi: 10.1111/cgf.13620. |
[70] |
J. Wu, H. Xiao, R. Sun and Q. Wang,
Reynolds-averaged Navier-Stokes equations with explicit data-driven Reynolds stress closure can be ill-conditioned, Journal of Fluid Mechanics, 869 (2019), 553-586.
doi: 10.1017/jfm.2019.205. |
[71] |
H. Xiao, J.-L. Wu, J.-X. Wang, R. Sun and C. Roy,
Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach, Journal of Computational Physics, 324 (2016), 115-136.
doi: 10.1016/j.jcp.2016.07.038. |
[72] |
W. Xiong, W. Luo, L. Ma, W. Liu and J. Luo, Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, 2364–2373.
doi: 10.1109/CVPR.2018.00251. |
[73] |
Y. Yang and P. Perdikaris,
Adversarial uncertainty quantification in physics-informed neural networks, Journal of Computational Physics, 394 (2019), 136-152.
doi: 10.1016/j.jcp.2019.05.027. |
[74] |
L. Zhao, X. Peng, Y. Tian, M. Kapadia and D. Metaxas, Learning to forecast and refine residual motion for image-to-video generation, in Proceedings of the European Conference on Computer Vision (ECCV), 2018, 387–403.
doi: 10.1007/978-3-030-01267-0_24. |
[75] |
Y. Zhu and N. Zabaras,
Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 366 (2018), 415-447.
doi: 10.1016/j.jcp.2018.04.018. |
[76] |
Y. Zhu, N. Zabaras, P.-S. Koutsourelakis and P. Perdikaris,
Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics, 394 (2019), 56-81.
doi: 10.1016/j.jcp.2019.05.024. |
Operation | Forward | Inverse | Log Jacobian |
Conditional Affine Layer | $\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}\\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)})\\ \boldsymbol{h}_{k}^{2}=\exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t}\\ \boldsymbol{h}_{k}^{1} = \boldsymbol{h}_{k-1}^{1}\\ \boldsymbol{h}_{k} = \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} \end{aligned}$ | $\begin{aligned} \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} = \boldsymbol{h}_{k} \\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)})\\ \boldsymbol{h}_{k-1}^{2}= \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right)\\ \boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}^{1}\\ \boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} \end{aligned}$ | $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$ |
LSTM Affine Layer | $\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}\\ \boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} = LSTM\left(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right)\\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out})\\ \boldsymbol{h}_{k}^{2}=\exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t}\\ \boldsymbol{h}_{k}^{1} = \boldsymbol{h}_{k-1}^{1}\\ \boldsymbol{h}_{k} = \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} \end{aligned}$ | $\begin{aligned} \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} = \boldsymbol{h}_{k} \\ \boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} = LSTM\left(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right)\\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out})\\ \boldsymbol{h}_{k-1}^{2}= \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right)\\ \boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}^{1}\\ \boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} \end{aligned}$ | $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$ |
ActNorm | $\forall x,y\quad \boldsymbol{h}_{k}(x,y)=\boldsymbol{s}\odot \boldsymbol{h}_{k-1}(x,y) + \boldsymbol{b}$ | $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y)=(\boldsymbol{h}_{k}(x,y)-\boldsymbol{b})/\boldsymbol{s}$ | $h\cdot w \cdot \textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$ |
$1\times 1$ Convolution | $\forall x,y\quad \boldsymbol{h}_{k}(x,y)=\boldsymbol{W}\boldsymbol{h}_{k-1}(x,y) \quad \boldsymbol{W}\in\mathbb{R}^{c\times c}$ | $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y) =\boldsymbol{W}^{-1}\boldsymbol{h}_{k}(x,y)$ | $h\cdot w \cdot \log\left|\det \boldsymbol{W}\right|$ |
Split | $\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1} \\ \left(\boldsymbol{\mu},\boldsymbol{\sigma}\right) = NN\left(\boldsymbol{h}_{k-1}^{1}\right) \\ p_{\boldsymbol{\theta}}(\boldsymbol{z}_{k}) = \mathcal{N}\left(\boldsymbol{h}_{k-1}^{2}\mid \boldsymbol{\mu}, \boldsymbol{\sigma} \right)\\ \boldsymbol{h}_{k} = \boldsymbol{h}_{k-1}^{1} \end{aligned}$ | $\begin{aligned} \boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k} \\ \left(\boldsymbol{\mu},\boldsymbol{\sigma}\right) = NN\left(\boldsymbol{h}_{k-1}^{1}\right)\\ \boldsymbol{h}_{k-1}^{2} \sim \mathcal{N}\left(\boldsymbol{\mu},\boldsymbol{\sigma} \right)\\ \boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} \end{aligned}$ | N/A |
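The "Conditional Affine Layer" row above translates almost line-for-line into code. The following is a minimal PyTorch sketch, not the authors' released implementation: the class name, the two-convolution network standing in for $NN$, the hidden width, and the zero initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Conditional affine coupling: split h channel-wise, then scale and
    shift one half conditioned on the other half and the low-fidelity
    features xi (all tensors are NCHW)."""
    def __init__(self, channels, cond_channels, hidden=64):
        super().__init__()
        half = channels // 2
        # NN(h1, xi) -> (log s, t); zero-initializing the last layer starts
        # the coupling at the identity map, which helps training stability.
        self.net = nn.Sequential(
            nn.Conv2d(half + cond_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 2 * half, 3, padding=1),
        )
        nn.init.zeros_(self.net[-1].weight)
        nn.init.zeros_(self.net[-1].bias)

    def forward(self, h, xi):
        h1, h2 = h.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([h1, xi], dim=1)).chunk(2, dim=1)
        h2 = torch.exp(log_s) * h2 + t               # forward affine map
        log_det = log_s.flatten(1).sum(dim=1)        # sum(log|s|) per sample
        return torch.cat([h1, h2], dim=1), log_det

    def inverse(self, h, xi):
        h1, h2 = h.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([h1, xi], dim=1)).chunk(2, dim=1)
        h2 = (h2 - t) * torch.exp(-log_s)            # exact inverse
        return torch.cat([h1, h2], dim=1)
```

Stacking such couplings with the ActNorm and $1\times 1$ convolution steps, and accumulating each `log_det`, yields the flow log-likelihood quoted after the abstract; the LSTM variant additionally feeds a recurrent hidden state into the network that predicts $(\log \boldsymbol{s}, \boldsymbol{t})$.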
TM-Glow | | Training | |
Model Depth | – | Optimizer | ADAM [29] |
Conditional Features | – | Weight Decay | – |
Recurrent Features | – | Epochs | – |
Affine Coupling Layers | – | Mini-batch Size | – |
Coupling NN Layers | – | BPTT | – |
Inverse Temp. | – | – | 200 |
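The training column pairs the ADAM optimizer [29] with back-propagation through time (BPTT). As a generic illustration of truncated BPTT, the sketch below uses a toy LSTM in place of TM-Glow's recurrent component; the shapes, window length, learning rate, and mean-squared loss are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 8)   # maps hidden state back to the field dimension
optim = torch.optim.Adam(
    list(model.parameters()) + list(head.parameters()),
    lr=1e-3, weight_decay=1e-6)

seq = torch.randn(4, 200, 8)   # (batch, time steps, features), stand-in data
window = 20                    # BPTT truncation length
state = None
for t0 in range(0, seq.size(1) - window, window):
    x = seq[:, t0:t0 + window]               # input window
    target = seq[:, t0 + 1:t0 + window + 1]  # one-step-ahead targets
    out, state = model(x, state)
    loss = nn.functional.mse_loss(head(out), target)
    optim.zero_grad()
    loss.backward()
    optim.step()
    # Detach the recurrent state so gradients stop at the window boundary.
    state = tuple(s.detach() for s in state)
```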
MLE | ||||||||||||
✔ | ✘ | ✘ | ✘ | ✘ | 0.0589 | 0.0085 | 0.0135 | 0.0204 | 0.0486 | 0.0137 | 0.0019 | 0.0615 |
✘ | ✔ | ✔ | ✔ | ✔ | 0.0490 | 0.0115 | 0.0188 | 0.0168 | 0.0292 | 0.0125 | 0.0012 | 0.0192 |
✘ | ✘ | ✔ | ✔ | ✔ | 0.0390 | 0.0078 | 0.0189 | 0.0162 | 0.0251 | 0.0106 | 0.0013 | 0.0402 |
✘ | ✘ | ✘ | ✔ | ✔ | 0.0463 | 0.0113 | 0.0158 | 0.0166 | 0.0256 | 0.0129 | 0.0012 | 0.0424 |
✘ | ✘ | ✘ | ✔ | ✘ | 0.0435 | 0.0089 | 0.0140 | 0.0168 | 0.0272 | 0.0131 | 0.0012 | 0.0366 |
WC Hrs. | |||||||
Low-Fidelity | 0.1212 | 0.0224 | 0.0199 | 0.0237 | 0.0177 | 0.0124 | - |
0.0182 | 0.0036 | 0.0023 | 0.0053 | 0.0059 | 0.0034 | 6.5 | |
0.0185 | 0.0031 | 0.0021 | 0.0030 | 0.0033 | 0.0023 | 10.0 | |
0.0091 | 0.0019 | 0.0014 | 0.0022 | 0.0022 | 0.0014 | 12.1 | |
0.0074 | 0.0017 | 0.0014 | 0.0021 | 0.0022 | 0.0013 | 16.6 |
WC Hrs. | |||||||
Low-Fidelity | 0.1033 | 0.0081 | 0.0179 | 0.0655 | 0.0981 | 0.02156 | - |
0.0461 | 0.0078 | 0.0292 | 0.0116 | 0.0191 | 0.00096 | 4.3 | |
0.0461 | 0.0078 | 0.0166 | 0.0128 | 0.0185 | 0.0093 | 4.9 | |
0.0409 | 0.0062 | 0.0118 | 0.0107 | 0.0172 | 0.0084 | 6.8 | |
0.0386 | 0.0059 | 0.0128 | 0.0100 | 0.0152 | 0.0074 | 10.3 |
CPU Cores | CPU Model | GPUs | GPU Model | SU Hour | |
Low-Fidelity | 1 | Intel Xeon E5-2680 | - | - | 1 |
High-Fidelity | 8 | Intel Xeon E5-2680 | - | - | 8 |
TM-Glow | 1 | Intel Xeon Gold 6226 | 4 | NVIDIA Tesla V100 | 8 |
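The SU Hour column is evidently the service-unit rate charged per wall-clock hour of each configuration. As a check against the timing table below: the high-fidelity solver occupies 8 CPU cores at 8 SUs per hour, so its 42-minute run costs $8 \times 42/60 = 5.6$ SU hours, matching the entry reported there.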
Backward-Facing Step | SU Hours | Wall-clock (mins) |
Low-Fidelity | 0.06 | 4.5 |
TM-Glow 20 Samples | 0.03 | 0.75 |
Surrogate Prediction | 0.09 | 5.25 |
High-Fidelity Prediction | 5.6 | 42 |
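Reading the table at face value, the surrogate prediction row is the sum of its two components ($0.06 + 0.03 = 0.09$ SU hours; $4.5 + 0.75 = 5.25$ minutes), so the full surrogate costs roughly $5.6/0.09 \approx 62\times$ fewer service units and $42/5.25 = 8\times$ less wall-clock time than the high-fidelity prediction.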