
December 2020, 2(4): 391-428. doi: 10.3934/fods.2020019

Multi-fidelity generative deep learning turbulent flows

Nicholas Geneva and Nicholas Zabaras*

Scientific Computing and Artificial Intelligence (SCAI) Laboratory, University of Notre Dame, 311 Cushing Hall, Notre Dame, IN 46556, USA

*Corresponding author: Nicholas Zabaras

Received: October 2020; Published: December 2020

In computational fluid dynamics, there is an inevitable trade-off between accuracy and computational cost. In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields given the solution of a computationally inexpensive but inaccurate low-fidelity solver. The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost orders of magnitude lower than that of a high-fidelity simulation. The deep generative model developed is a conditional invertible neural network, built with normalizing flows, with recurrent LSTM connections that allow for stable training of transient systems with high predictive accuracy. The model is trained with a variational loss that combines both data-driven and physics-constrained learning. This deep generative model is applied to non-trivial high Reynolds number flows governed by the Navier-Stokes equations, including turbulent flow over a backwards-facing step at different Reynolds numbers and the turbulent wake behind an array of bluff bodies. For both of these examples, the model is able to generate unique yet physically accurate turbulent fluid flows conditioned on an inexpensive low-fidelity solution.

Citation: Nicholas Geneva, Nicholas Zabaras. Multi-fidelity generative deep learning turbulent flows. Foundations of Data Science, 2020, 2 (4) : 391-428. doi: 10.3934/fods.2020019
Comparison between traditional hybrid VLES-LES simulation (left) and the proposed multi-fidelity deep generative turbulence model (right) for studying the wake behind a wall-mounted cube
Comparison of the forward and backward passes of various INN structures including (left to right) the standard INN, conditional INN (CINN) [76] and transient multi-fidelity Glow (TM-Glow) introduced in Section 3.2
Unfolded computational graph of a recurrent neural network model for which the arrows show functional dependence
TM-Glow model. This model is comprised of a low-fidelity encoder that conditions a generative flow model to produce samples of high-fidelity field snapshots. LSTM affine blocks are introduced to pass information between time-steps using recurrent connections. Boxes with rounded corners in (a) indicate a stack of the elements inside and should not be confused with plate notation. Arrows illustrate the forward pass of the INN. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)
The unrolled computational graph of the TM-Glow model for a model depth of $k_{d} = 3$
The LSTM affine block used in TM-Glow consisting of $k_{c}$ affine coupling layers including an unnormalized conditional affine block (UnNorm Block), a stack of conditional affine blocks (Conditional Block) and a conditional LSTM affine block (LSTM Block)
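To make the recurrence concrete, a minimal sketch of a convolutional LSTM cell of the kind used to carry the hidden and cell states $\mathit{\boldsymbol{a}}^{(i)}$ and $\mathit{\boldsymbol{c}}^{(i)}$ between time-steps is given below. This is an illustrative PyTorch implementation of the standard ConvLSTM gate equations, not the authors' code; computing all gates with a single convolution and the $3\times 3$ kernel size are assumptions, and the conditioning features $\mathit{\boldsymbol{\xi}}^{(i)}$ would simply be concatenated onto the input.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: all four gates are computed by one
    convolution over the concatenated input and hidden state."""
    def __init__(self, in_channels, hidden_channels, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + hidden_channels,
                              4 * hidden_channels, kernel, padding=kernel // 2)

    def forward(self, x, a_in, c_in):
        # x: input features; a_in / c_in: hidden and cell states from t-1
        gates = self.conv(torch.cat([x, a_in], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        c_out = torch.sigmoid(f) * c_in + torch.sigmoid(i) * torch.tanh(g)
        a_out = torch.sigmoid(o) * torch.tanh(c_out)
        return a_out, c_out
```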
The two variants of affine coupling layers used in TM-Glow with an input and output denoted as $\mathit{\boldsymbol{h}}_{k-1} = \left\{\mathit{\boldsymbol{h}}_{k-1}^{1}, \mathit{\boldsymbol{h}}_{k-1}^{2}\right\}$ and $\mathit{\boldsymbol{h}}_{k} = \left\{\mathit{\boldsymbol{h}}_{k}^{1}, \mathit{\boldsymbol{h}}_{k}^{2}\right\}$, respectively. Time-step superscripts have been omitted for clarity of presentation
Squeeze and split forward operations used to manipulate the dimensionality of the features in TM-Glow. (Left) The squeeze operation compresses the input feature map $\mathit{\boldsymbol{h}}_{k-1}$ using a checkerboard pattern, halving the spatial dimensionality and increasing the number of channels by a factor of four. (Right) The split operation factors out half of the input $\mathit{\boldsymbol{h}}_{k-1}$, which is then taken to be the latent random variable $\mathit{\boldsymbol{z}}^{(i)}$. The remaining features $\mathit{\boldsymbol{h}}_{k}$ are sent deeper into the network. Time-step superscripts have been omitted for clarity of presentation
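A minimal PyTorch sketch of these two operations is given below; it follows the standard RealNVP/Glow-style squeeze, and the exact channel ordering of the checkerboard is an assumption.

```python
import torch

def squeeze(x):
    """(b, c, h, w) -> (b, 4c, h/2, w/2): each 2x2 spatial block becomes
    four channels, halving the spatial resolution."""
    b, c, h, w = x.shape
    x = x.view(b, c, h // 2, 2, w // 2, 2)
    x = x.permute(0, 1, 3, 5, 2, 4).contiguous()
    return x.view(b, 4 * c, h // 2, w // 2)

def unsqueeze(x):
    """Inverse of squeeze: (b, 4c, h, w) -> (b, c, 2h, 2w)."""
    b, c, h, w = x.shape
    x = x.view(b, c // 4, 2, 2, h, w)
    x = x.permute(0, 1, 4, 2, 5, 3).contiguous()
    return x.view(b, c // 4, 2 * h, 2 * w)

def split(x):
    """Factor out half the channels as a latent variable z; the remaining
    features continue deeper into the network."""
    h_k, z = x.chunk(2, dim=1)
    return h_k, z
```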
Dense block with a growth rate and length of $2$. Residual connections between convolutions progressively stack feature maps resulting in $12$ output channels in this schematic. Standard batch-normalization [25] and Rectified Linear Unit (ReLU) activation functions are used [11] in conjunction with the convolutional operations. Convolutions are denoted by the kernel size $k$, stride $s$ and padding $p$
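A minimal sketch of such a dense block follows, assuming an 8-channel input so that a growth rate of 2 over 2 convolutions yields the 12 output channels of the schematic; this is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: each convolution produces `growth` new channels,
    which are concatenated onto all preceding feature maps."""
    def __init__(self, in_channels, growth=2, length=2):
        super().__init__()
        self.layers = nn.ModuleList()
        c = in_channels
        for _ in range(length):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(c),
                nn.ReLU(),
                nn.Conv2d(c, growth, kernel_size=3, stride=1, padding=1),
            ))
            c += growth
        self.out_channels = c

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

block = DenseBlock(in_channels=8, growth=2, length=2)
y = block(torch.randn(1, 8, 16, 16))
assert y.shape[1] == 12  # 8 input channels + 2 convs x growth 2, as in the schematic
```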
(Left to right) Velocity magnitude MSE and turbulent kinetic energy (TKE) test MSE for TM-Glow models containing $k_{d}\cdot k_{c}$ affine coupling layers
Reliability diagrams of the x-velocity, y-velocity and pressure fields predicted with TM-Glow, evaluated over $12000$ model predictions. The black dashed line indicates matching empirical distributions between the model's samples and the observed validation data
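A reliability diagram of this kind can be assembled by comparing predicted quantiles of the sample ensemble against empirical frequencies of the validation data; a minimal NumPy sketch is given below, with the quantile binning being an assumption since the paper's exact procedure is not reproduced here.

```python
import numpy as np

def reliability_curve(samples, observed, quantiles=np.linspace(0.05, 0.95, 19)):
    """Empirical frequency with which `observed` falls below each predicted
    quantile of the sample ensemble; a calibrated model lies on y = x.

    samples:  (n_samples, n_points) model realizations
    observed: (n_points,) validation values
    """
    pred_q = np.quantile(samples, quantiles, axis=0)   # (n_q, n_points)
    freq = (observed[None, :] <= pred_q).mean(axis=1)  # (n_q,)
    return quantiles, freq
```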
Flow over a backwards step. The green region indicates the recirculation region that TM-Glow will be used to predict. All domain boundaries are no-slip with the exception of the uniform inlet and zero-gradient outlet. The simulated length downstream is made double that of the prediction range to negate effects of the outlet boundary condition on this zone
Computational mesh around the backwards step used for the low- and high-fidelity CFD simulations solved with OpenFOAM [27]
(Left to right) Flow over backwards step velocity magnitude and turbulent kinetic energy (TKE) error during training of TM-Glow on different data set sizes. Error values were averaged over five model samples
(Top to bottom) Velocity magnitude of the high-fidelity target, low-fidelity input, three TM-Glow samples and standard deviation for two test flows
(Top to bottom) Q-criterion of the high-fidelity target, low-fidelity input and three TM-Glow samples for two test flows
TM-Glow time-series samples of $x$-velocity, $y$-velocity and pressure fields for a backwards step test case at $Re = 7500$. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted
(Top to bottom) Time averaged x-velocity, y-velocity and pressure profiles for two different test cases at (left to right) $Re = 7500$ and $Re = 47500$. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
(Top to bottom) Turbulent kinetic energy and Reynolds shear stress profiles for two different test cases at (left to right) $Re = 7500$ and $Re = 47500$. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
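The statistics in these profiles can be computed directly from sampled time series; a minimal NumPy sketch follows, assuming only the two in-plane velocity components are available, so the turbulent kinetic energy is the in-plane estimate appropriate for 2D fields.

```python
import numpy as np

def turbulence_stats(u, v):
    """Time-averaged turbulence statistics from a velocity time series.
    u, v: (n_t, ny, nx) snapshots of the two velocity components."""
    u_mean, v_mean = u.mean(0), v.mean(0)
    up, vp = u - u_mean, v - v_mean                       # fluctuations
    tke = 0.5 * ((up ** 2).mean(0) + (vp ** 2).mean(0))   # in-plane TKE
    reynolds_shear = (up * vp).mean(0)                    # u'v' shear stress
    return u_mean, v_mean, tke, reynolds_shear
```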
Flow around an array of bluff bodies. The red region indicates the area in which the bodies can be randomly placed. The green region indicates the wake zone for which TM-Glow will be used to predict a high-fidelity response from a low-fidelity simulation
Velocity magnitude of the low-fidelity and high-fidelity simulations for two different cylinder arrays. (Left to right) Cylinder array configuration and the corresponding (top to bottom) high-fidelity and low-fidelity finite volume simulation results at several time-steps
Computational mesh around the cylinder array used for the low- and high-fidelity CFD simulations solved with OpenFOAM [27]
(Left to right) Cylinder array velocity magnitude and turbulent kinetic energy (TKE) error during training of TM-Glow on different data set sizes. Error values were averaged over five model samples
(Top to bottom) Velocity magnitude of the high-fidelity target, low-fidelity input, three TM-Glow samples and standard deviation for two test cases
TM-Glow time-series samples of $x$-velocity, $y$-velocity and pressure fields for a cylinder array test case. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted
TM-Glow time-series samples of $x$-velocity, $y$-velocity and pressure fields for a second cylinder array test case. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted
Time-averaged flow profiles for two test flows. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
Turbulent statistic profiles for two test flows. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $2\sigma$) are computed using $20$ time-series samples
Computational requirements for training TM-Glow given training data sets of various sizes. Computation is quantified using Service Units (SU) as defined in Table 6
Invertible operations used in the generative normalizing flow method of TM-Glow. Consistent with the notation in [31], we assume the inputs and outputs of each operation are of dimension $\mathit{\boldsymbol{h}}_{k-1}, \mathit{\boldsymbol{h}}_{k} \in \mathbb{R}^{c\times h \times w}$ with $c$ channels and a feature map size of $\left[h \times w\right]$. Indices over the spatial domain of the feature map are denoted by $\mathit{\boldsymbol{h}}(x, y)\in \mathbb{R}^{c}$. The coupling neural network and convolutional LSTM are abbreviated as $NN$ and $LSTM$, respectively. Time-step superscripts have been omitted for clarity of presentation
Conditional Affine Layer
Forward: $\left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}$; $(\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)})$; $\boldsymbol{h}_{k}^{2} = \exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t}$; $\boldsymbol{h}_{k}^{1} = \boldsymbol{h}_{k-1}^{1}$; $\boldsymbol{h}_{k} = \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\}$
Inverse: $\left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} = \boldsymbol{h}_{k}$; $(\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)})$; $\boldsymbol{h}_{k-1}^{2} = \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right)$; $\boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}^{1}$; $\boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\}$
Log Jacobian: $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$

LSTM Affine Layer
Forward: $\left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}$; $\boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} = LSTM\left(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right)$; $(\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out})$; $\boldsymbol{h}_{k}^{2} = \exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t}$; $\boldsymbol{h}_{k}^{1} = \boldsymbol{h}_{k-1}^{1}$; $\boldsymbol{h}_{k} = \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\}$
Inverse: $\left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} = \boldsymbol{h}_{k}$; $\boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} = LSTM\left(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right)$; $(\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out})$; $\boldsymbol{h}_{k-1}^{2} = \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right)$; $\boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}^{1}$; $\boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\}$
Log Jacobian: $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$

ActNorm
Forward: $\forall x,y\quad \boldsymbol{h}_{k}(x,y) = \boldsymbol{s}\odot \boldsymbol{h}_{k-1}(x,y) + \boldsymbol{b}$
Inverse: $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y) = (\boldsymbol{h}_{k}(x,y) - \boldsymbol{b})/\boldsymbol{s}$
Log Jacobian: $h\cdot w \cdot \textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$

$1\times 1$ Convolution
Forward: $\forall x,y\quad \boldsymbol{h}_{k}(x,y) = \boldsymbol{W}\boldsymbol{h}_{k-1}(x,y)$, $\boldsymbol{W}\in\mathbb{R}^{c\times c}$
Inverse: $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y) = \boldsymbol{W}^{-1}\boldsymbol{h}_{k}(x,y)$
Log Jacobian: $h\cdot w \cdot \log\left|\det \boldsymbol{W}\right|$

Split
Forward: $\left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}$; $\left(\boldsymbol{\mu}, \boldsymbol{\sigma}\right) = NN\left(\boldsymbol{h}_{k-1}^{1}\right)$; $p_{\boldsymbol{\theta}}(\boldsymbol{z}_{k}) = \mathcal{N}\left(\boldsymbol{h}_{k-1}^{2}\,|\,\boldsymbol{\mu}, \boldsymbol{\sigma}\right)$; $\boldsymbol{h}_{k} = \boldsymbol{h}_{k-1}^{1}$
Inverse: $\boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}$; $\left(\boldsymbol{\mu}, \boldsymbol{\sigma}\right) = NN\left(\boldsymbol{h}_{k-1}^{1}\right)$; $\boldsymbol{h}_{k-1}^{2} \sim \mathcal{N}\left(\boldsymbol{\mu}, \boldsymbol{\sigma}\right)$; $\boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\}$
Log Jacobian: N/A
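As a concrete reading of the conditional affine layer above, the following is a minimal PyTorch sketch. The paper's coupling network $NN$ is a dense block; a plain two-layer convolutional stack is substituted here for brevity, and the hidden width is an assumption.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """y2 = exp(log_s) * x2 + t, with (log_s, t) = NN(x1, xi)."""
    def __init__(self, channels, cond_channels, hidden=32):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half + cond_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 2 * half, 3, padding=1),
        )

    def forward(self, x, xi):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x1, xi], dim=1)).chunk(2, dim=1)
        y2 = torch.exp(log_s) * x2 + t
        log_det = log_s.flatten(1).sum(-1)   # sum(log|s|) per sample
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y, xi):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([y1, xi], dim=1)).chunk(2, dim=1)
        x2 = (y2 - t) / torch.exp(log_s)
        return torch.cat([y1, x2], dim=1)
```

Because the first half of the features passes through unchanged, the inverse recovers the same $(\log \boldsymbol{s}, \boldsymbol{t})$ and the Jacobian is triangular, which is what makes the log-determinant the simple sum above.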
TM-Glow model and training parameters used for both numerical test cases. For the parameters that vary between test cases, the superscripts $\dagger$ and $\ddagger$ denote the numerical examples in Sections 5 and 6, respectively. Hyper-parameter differences are due to memory constraints imposed by the varying predictive domain sizes
TM-Glow:
Model Depth, $k_{d}$: $3$
Conditional Features, $\mathit{\boldsymbol{\xi}}^{(i)}$: $32$
Recurrent Features, $\mathit{\boldsymbol{a}}^{(i)}_{in}, \mathit{\boldsymbol{c}}^{(i)}_{in}$: $64, 64$
Affine Coupling Layers, $k_{c}$: $16$
Coupling NN Layers: $2$

Training:
Optimizer: ADAM [29]
Weight Decay: $10^{-6}$
Epochs: $400$
Mini-batch Size: $32^{\dagger}$, $64^{\ddagger}$
BPTT: $10$ time-steps
Inverse Temp., $\beta$: $200$
Ablation study of the impact of different parts of the backward KL loss. As a baseline, we also train TM-Glow using the standard maximum likelihood estimation (MLE) approach. The mean squared error (MSE) of various flow field quantities is listed for each loss formulation
MLE | $V_{Pres}$ | $V_{Div}$ | $V_{L2}$ | $V_{RMS}$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{p}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}\right)$ | $\overline{V_{Div}}$ | $\overline{V_{Pres}}$
✔ | ✘ | ✘ | ✘ | ✘ | 0.0589 | 0.0085 | 0.0135 | 0.0204 | 0.0486 | 0.0137 | 0.0019 | 0.0615
✘ | ✔ | ✔ | ✔ | ✔ | 0.0490 | 0.0115 | 0.0188 | 0.0168 | 0.0292 | 0.0125 | 0.0012 | 0.0192
✘ | ✘ | ✔ | ✔ | ✔ | 0.0390 | 0.0078 | 0.0189 | 0.0162 | 0.0251 | 0.0106 | 0.0013 | 0.0402
✘ | ✘ | ✘ | ✔ | ✔ | 0.0463 | 0.0113 | 0.0158 | 0.0166 | 0.0256 | 0.0129 | 0.0012 | 0.0424
✘ | ✘ | ✘ | ✔ | ✘ | 0.0435 | 0.0089 | 0.0140 | 0.0168 | 0.0272 | 0.0131 | 0.0012 | 0.0366
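For incompressible flow, the divergence constraint underlying the $V_{Div}$ term has a simple discrete form; below is a minimal NumPy sketch of one plausible central-difference formulation, noting that the paper's exact discretization may differ.

```python
import numpy as np

def divergence_residual(u, v, dx, dy):
    """Mean-square divergence of a 2D velocity field via central differences.
    A divergence-free incompressible flow drives this residual toward zero."""
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return np.mean((dudx + dvdy) ** 2)
```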
Backwards step test error of various normalized time-averaged flow field quantities, for the low-fidelity solution interpolated onto the high-fidelity mesh and for TM-Glow trained on various training data set sizes. Lower is better. TM-Glow errors were averaged over $20$ samples from the model. The training wall-clock (WC) time for each data set size is also listed
| $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}/u_{0}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}/u_{0}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{p}}}/u^{2}_{0}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}/u_{0}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}/u_{0}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}/u^{2}_{0}\right)$ | WC Hrs.
Low-Fidelity | 0.1212 | 0.0224 | 0.0199 | 0.0237 | 0.0177 | 0.0124 | -
$8$ Flows | 0.0182 | 0.0036 | 0.0023 | 0.0053 | 0.0059 | 0.0034 | 6.5
$16$ Flows | 0.0185 | 0.0031 | 0.0021 | 0.0030 | 0.0033 | 0.0023 | 10.0
$32$ Flows | 0.0091 | 0.0019 | 0.0014 | 0.0022 | 0.0022 | 0.0014 | 12.1
$48$ Flows | 0.0074 | 0.0017 | 0.0014 | 0.0021 | 0.0022 | 0.0013 | 16.6
Cylinder array test error of various time-averaged flow field quantities, for the low-fidelity solution interpolated onto the high-fidelity mesh and for TM-Glow trained on different training data set sizes. Lower is better. TM-Glow errors were averaged over $20$ samples from the model. The training wall-clock (WC) time for each data set size is also listed
| $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}\right)$ | $MSE\left(\overline{\mathit{\boldsymbol{p}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}\right)$ | $MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}\right)$ | WC Hrs.
Low-Fidelity | 0.1033 | 0.0081 | 0.0179 | 0.0655 | 0.0981 | 0.02156 | -
$16$ Flows | 0.0461 | 0.0078 | 0.0292 | 0.0116 | 0.0191 | 0.00096 | 4.3
$32$ Flows | 0.0461 | 0.0078 | 0.0166 | 0.0128 | 0.0185 | 0.0093 | 4.9
$64$ Flows | 0.0409 | 0.0062 | 0.0118 | 0.0107 | 0.0172 | 0.0084 | 6.8
$96$ Flows | 0.0386 | 0.0059 | 0.0128 | 0.0100 | 0.0152 | 0.0074 | 10.3
Hardware used to run the low-fidelity and high-fidelity CFD simulations as well as the training and prediction of TM-Glow for both numerical examples
| CPU Cores | CPU Model | GPUs | GPU Model | SU/Hour
Low-Fidelity | 1 | Intel Xeon E5-2680 | - | - | 1
High-Fidelity | 8 | Intel Xeon E5-2680 | - | - | 8
TM-Glow | 1 | Intel Xeon Gold 6226 | 4 | NVIDIA Tesla V100 | 8
Prediction cost of the surrogate compared to the high-fidelity simulator for flow over a backwards step (left) and flow around a cylinder array (right).
Backwards Step | SU Hours | Wall-clock (mins)
Low-Fidelity | 0.06 | 4.5
TM-Glow 20 Samples | 0.03 | 0.75
Surrogate Prediction | 0.09 | 5.25
High-Fidelity Prediction | 5.6 | 42
