
A model hierarchy for predicting the flow in stirred tanks with physics-informed neural networks

  • Corresponding author: Veronika Trávníková


*Equal contribution.

  • This paper explores the potential of Physics-Informed Neural Networks (PINNs) to serve as Reduced Order Models (ROMs) for simulating the flow field within stirred tank reactors (STRs). We solve the two-dimensional stationary Navier-Stokes equations within a geometrically intricate domain and explore methodologies that allow us to integrate additional physical insights into the model. These approaches include imposing the Dirichlet boundary conditions (BCs) strongly and employing domain decomposition (DD), with both overlapping and non-overlapping subdomains. We adapt the Extended Physics-Informed Neural Network (XPINN) approach to solve different sets of equations in distinct subdomains based on the diverse flow characteristics present in each region. Our exploration results in a hierarchy of models spanning various levels of complexity, where the best models exhibit $ \ell^1 $ prediction errors of less than 1% for both pressure and velocity. To illustrate the reproducibility of our approach, we track the errors over repeated independent training runs of the best identified model and show its reliability. Subsequently, by incorporating the stirring rate as a parametric input, we develop a fast-to-evaluate model of the flow capable of interpolating across a wide range of Reynolds numbers. Although we exclusively restrict ourselves to STRs in this work, we conclude that the steps taken to obtain the presented model hierarchy can be transferred to other applications.

    Mathematics Subject Classification: Primary: 76-10, 68T07; Secondary: 76D05, 35Q68.
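The abstract mentions imposing Dirichlet BCs strongly. The usual construction multiplies the network output by a function that vanishes on the boundary and adds a function satisfying the BC. A minimal one-dimensional numpy sketch of this idea follows; all function names are hypothetical, and the actual analytical boundary and distance functions in the paper are specific to the STR geometry (cf. Figure 9):

```python
import numpy as np

def g_distance(x):
    """Illustrative distance-like function: zero on the boundary x = 0, positive inside."""
    return x  # placeholder; the paper constructs an analytical function for its geometry

def v_bc(x):
    """Prescribed Dirichlet boundary value (hypothetical constant profile)."""
    return np.full_like(x, 0.5)

def network(x):
    """Stand-in for the neural network output (here: an arbitrary smooth function)."""
    return np.sin(3.0 * x)

def v_hard(x):
    # Output transform: the prediction equals v_bc exactly wherever g_distance
    # vanishes, so the Dirichlet BC holds by construction and needs no
    # boundary loss term during training.
    return v_bc(x) + g_distance(x) * network(x)
```

At the boundary point x = 0 the prediction equals the prescribed value regardless of the network's weights, which is what makes the imposition "strong" rather than a soft penalty.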

  • Figure 1.  Schematic depiction of a three-dimensional STR geometry (A) as well as the simplified two-dimensional problem domain considered in the rest of this paper (B)

    Figure 2.  Coordinate systems and geometrical dimensions related to the problem domain

    Figure 3.  High-fidelity solution for velocity (A) and pressure (B) at $ \mathrm{Re} = 4000 $ that was used to validate predictions of the PINN models

    Figure 4.  Spatial distribution of normalized errors of the baseline model presented in Section 4.1 for the velocity (A) and pressure fields (B). The velocity errors are most pronounced in the area between the stirrer and the baffles, where the PINN solely relies on the PDE residuals without information from the labeled boundary points, while the highest pressure errors are localized in the inner region around the stirrer

    Figure 5.  Spatial distribution of normalized velocity errors of the baseline model with labeled training data (A) and of the baseline model with scaled loss components (B)

    Figure 6.  Spatial distribution of normalized error in velocity magnitude of the vanilla model trained on $ \Omega_\text{sym} $ using polar coordinates

    Figure 7.  Schematic representation of the different subdomains and interfaces used in the more complex PINN configurations
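The subdomains in Figure 7 are coupled through interface terms; in the non-overlapping (XPINN-style) configurations this is a soft continuity penalty on shared interface points. A schematic numpy sketch follows; the two closed-form functions stand in for the subdomain networks and the numbers are illustrative, not the paper's values:

```python
import numpy as np

# Toy stand-ins for the two subdomain models (hypothetical closed-form functions).
def u_inner(r):
    return 2.0 * r

def u_outer(r):
    return 2.0 * r + 0.01 * (r - 0.07)  # slight mismatch away from the interface

def interface_coupling_loss(r_inter, alpha=1.0):
    # Soft continuity constraint: penalize the squared mismatch of the two
    # subdomain predictions at shared interface points, weighted by alpha
    # (cf. the alpha_coupling factors in Tables 10-12).
    mismatch = u_inner(r_inter) - u_outer(r_inter)
    return alpha * np.mean(mismatch ** 2)

r_inter = np.full(256, 0.07)  # interface radius, cf. R_inter in Table 10
loss = interface_coupling_loss(r_inter)
```

In the paper, analogous penalties are also applied to derivatives and pressure across the interface; the sketch shows only the value-continuity term.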

Figure 8.  Velocity magnitudes of the baseline models along the radial direction at $ \varphi = 0 $. These models fail to capture the kink in velocity magnitude at the tip of the stirrer. This shortcoming is addressed by enforcing the Dirichlet BC at $ \Gamma_\text{stirrer} $ in a strong manner in Sections 4.5 and 4.6

    Figure 9.  Analytical function satisfying the Dirichlet BCs in the domain (A) and spatial distribution of the normalized error of the model with strongly enforced BCs (B)

    Figure 10.  Spatial distribution of normalized error in velocity magnitude predicted by the model using hybrid imposition of Dirichlet BCs described in Section 4.6

    Figure 11.  Spatial distribution of the normalized error in velocity magnitude predicted by the models using DD

Figure 12.  Close-up of the upper right baffle for the DD models. The segment of the boundary highlighted in red, $ \Gamma_\text{baffle} $, uniquely belongs to $ \Omega^{(1)}_\text{outer} $, i.e., we need to prescribe a BC for $ v^{(1)}_{\text{outer}, \varphi} $, while the rest of the wall belongs to $ \Omega^{(2)}_\text{outer} $ and we prescribe $ v^{(2)}_{\text{outer}, \varphi} $

    Figure 13.  Profiles of velocity magnitude in radial direction $ r $ predicted by the DD models

    Figure 14.  Mean value and standard deviation of the normalized error in velocity magnitude obtained from the repeated model training, measured in radial direction along an angle of $ \varphi = 0 $. The errors obtained from the different runs only differ by a very small amount in the outer region close to the wall

    Figure 15.  Distributions of velocity and pressure errors for selected values of $ \mathrm{Re} $ of the model parameterized by $ \omega $ using soft continuity constraints on the subdomain interface. The ends of the bars indicate minimum and maximum errors, while the additional bar in between represents the mean value, which corresponds to $ \delta^{(q)}_{\ell^1} $ in Equation (19). The overall performance of the model improves with increasing $ \mathrm{Re} $
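The error metric $ \delta^{(q)}_{\ell^1} $ referenced in the caption above is defined in Equation (19), which is not reproduced on this page. A plausible reading of the normalized $ \ell^1 $ / $ \ell^2 $ errors reported in Tables 2 and 3 is a relative norm of the prediction against the high-fidelity reference, expressed in percent; the exact normalization here is an assumption:

```python
import numpy as np

def relative_error(pred, ref, order=1):
    # Normalized l^p prediction error in percent: ||pred - ref||_p / ||ref||_p.
    # order=1 and order=2 correspond to the l^1 and l^2 columns of Tables 2-3.
    num = np.linalg.norm(pred - ref, ord=order)
    den = np.linalg.norm(ref, ord=order)
    return 100.0 * num / den
```

For example, a prediction that overshoots every reference value by 1% yields a relative $ \ell^1 $ error of 1%.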

    Figure 16.  Sources of error in the velocity predictions made by the parameterized model for $ \mathrm{Re} = 1000 $. Spatial distribution of velocity error within the domain (A) and comparison of the velocity profile approximated by the boundary function $ \boldsymbol{v}_\text{BC} $ with profiles of the reference solution for $ \mathrm{Re} = 1000 $ and $ \mathrm{Re} = 4000 $ (B)

    Figure 17.  Schematic illustrating the domain overlap defined in Section 5.2 and the bounds $ \Gamma_\text{in} $ and $ \Gamma_\text{out} $ within which the distance function $ g_\text{overlap} $ is defined

    Figure 18.  Velocity and pressure error distributions for selected values of $ \mathrm{Re} $ of the model parameterized by $ \omega $ with overlapping domains

    Figure 19.  Reference velocity profile along the radius at $ \varphi = 0 $ for $ \mathrm{Re} = 4000 $ and the function $ \boldsymbol{v}_\text{BC} $ constructed to approximate it

    Table 1.  Geometrical dimensions and material properties that are considered constant throughout the paper

    (A) Geometrical dimensions of the STR depicted in Figure 2b:
    Quantity | Value [m]
    $ R_\text{stirrer} $ | 0.040
    $ R_\text{baffle} $ | 0.085
    $ R_\text{reactor} $ | 0.100
    $ t_\text{baffle} $ | 0.005

    (B) Constant material properties used for the simulations and PINN training:
    Property | Value | Unit
    $ \rho $ | 1000 | kg/m³
    $ \mu $ | 0.001 | kg/(m·s)
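From the material properties in Table 1, a Reynolds number for a given stirring rate can be computed. The paper's exact definition of $ \mathrm{Re} $ is not reproduced on this page; the snippet below uses the common impeller convention $ \mathrm{Re} = \rho N D^2 / \mu $ (with rotation rate $N$ in 1/s and impeller diameter $D$) purely as an illustrative assumption:

```python
# Illustrative only: impeller Reynolds number under the common convention
# Re = rho * N * D^2 / mu. The paper's own Re definition may differ.
rho = 1000.0            # kg/m^3, Table 1
mu = 0.001              # kg/(m s), Table 1
d_stirrer = 2 * 0.040   # m, diameter from R_stirrer in Table 1

def reynolds(n_rot):
    """Impeller Reynolds number for rotation rate n_rot in revolutions per second."""
    return rho * n_rot * d_stirrer ** 2 / mu
```

Under this convention, one revolution per second gives $ \mathrm{Re} = 6400 $ for the tabulated geometry and fluid, i.e., the $ \mathrm{Re} = 1000 $ to $ 10{,}000 $ range in Table 3 corresponds to sub-revolution to roughly 1.6 rev/s stirring rates.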

    Table 2.  Model errors and training times for the non-parameterized PINN models discussed in Section 4 with $ \mathrm{Re} = 4000 $

    Model | $ \delta^{(\boldsymbol{v})}_{\ell^1} $ [%] | $ \delta^{(p)}_{\ell^1} $ [%] | $ \delta^{(\boldsymbol{v})}_{\ell^2} $ [%] | $ \delta^{(p)}_{\ell^2} $ [%] | Time [min]
    Baseline (Section 4.1) | 10.74 | 9.48 | 16.41 | 14.62 | 8.70
    Baseline Data (Section 4.2) | 2.27 | 1.04 | 2.93 | 1.24 | 10.40
    Baseline Scaled (Section 4.3) | 3.88 | 2.49 | 6.47 | 4.63 | 8.62
    Baseline Polar (Section 4.4) | 3.66 | 1.14 | 4.89 | 1.64 | 5.80
    Strong BC (Section 4.5) | 5.11 | 2.26 | 7.44 | 2.68 | 61.93
    Hybrid BC (Section 4.6) | 3.22 | 1.20 | 4.31 | 1.43 | 65.08
    DD (Section 4.7) | 1.45 | 0.63 | 1.93 | 0.97 | 51.05
    DD with outer split (Section 4.8) | 0.97 | 0.74 | 1.41 | 1.28 | 71.10

    Table 3.  Model errors and training times for the parameterized PINN models discussed in Section 5 across different Reynolds numbers

    Model | $ \mathrm{Re} $ | $ \delta^{(\boldsymbol{v})}_{\ell^1} $ [%] | $ \delta^{(p)}_{\ell^1} $ [%] | $ \delta^{(\boldsymbol{v})}_{\ell^2} $ [%] | $ \delta^{(p)}_{\ell^2} $ [%] | Time [min]
    DD model (Section 5.1) | 1000 | 2.16 | 5.30 | 3.04 | 5.95 | 148.68
     | 4000 | 0.89 | 2.28 | 1.39 | 2.58 |
     | 6000 | 0.76 | 1.25 | 1.10 | 1.50 |
     | 8000 | 0.72 | 0.80 | 0.96 | 1.02 |
     | 10,000 | 0.67 | 0.54 | 0.87 | 0.78 |
    DD model with domain overlap (Section 5.2) | 1000 | 2.17 | 1.80 | 3.00 | 3.04 | 205.57
     | 4000 | 1.03 | 0.76 | 1.46 | 1.28 |
     | 6000 | 0.83 | 0.51 | 1.13 | 0.85 |
     | 8000 | 0.74 | 0.37 | 0.97 | 0.58 |
     | 10,000 | 0.66 | 0.32 | 0.85 | 0.46 |

    Table 4.  Hyperparameters of the baseline model presented in Section 4.1

    Hyperparameter Value
    Architecture Number of layers 2
    Number of neurons per layer 100
    Activation function tanh
    Loss function Regularization $ \ell^1+\ell^2 $
    Loss scaling $ \alpha_{\text{momentum}, x} = 1 $
    $ \alpha_{\text{momentum}, y} = 1 $
    $ \alpha_{\text{mass}} = 1 $
    $ \alpha_\text{wall} = 1 $
    $ \alpha_\text{impeller} = 1 $
    Optimization Optimizer L-BFGS-B
    Epochs 25,000
    Sampling Number of domain points 2048
    $ \hookrightarrow $ Resampled every 1000 epochs
    Number of boundary points $ 1024 \; \text{on} \; \Gamma_\text{stirrer} $
    $ 1024 \; \text{on} \; \Gamma_\text{wall} $
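The architecture rows of Table 4 (2 hidden layers of 100 tanh neurons) can be read as the following forward pass. This is a generic numpy sketch, not the paper's implementation; the framework, initialization scheme, and the $(v_x, v_y, p)$ output interpretation are assumptions here:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in=2, n_hidden=100, n_layers=2, n_out=3):
    # 2 hidden layers of 100 neurons as in Table 4; inputs (x, y), outputs
    # assumed to be (v_x, v_y, p). Scaled-normal initialization is an
    # arbitrary illustrative choice.
    sizes = [n_in] + [n_hidden] * n_layers + [n_out]
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for w, b in params[:-1]:
        x = np.tanh(x @ w + b)   # tanh activation, per Table 4
    w, b = params[-1]
    return x @ w + b             # linear output layer

params = init_mlp()
out = forward(params, rng.standard_normal((2048, 2)))  # 2048 domain points, cf. Table 4
```

In the actual training loop, such a forward pass would be differentiated through (e.g., by automatic differentiation) to form the PDE residuals, and the weights optimized with L-BFGS-B as listed under Optimization.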

    Table 5.  Additional hyperparameters of the baseline model with data presented in Section 4.2

    Hyperparameter Value
    Loss function Loss scaling $ \alpha_{\text{data}} = 1 $
    Sampling Number of data points 2000

    Table 6.  Modifications of the hyperparameters of the baseline model with loss scaling presented in Section 4.3

    Hyperparameter Value
    Loss function Loss scaling $ \alpha_{\text{momentum}, x} = 5 \times 10^1 $
    $ \alpha_{\text{momentum}, y} = 5 \times 10^1 $
    $ \alpha_{\text{mass}} = 5 \times 10^1 $
    $ \alpha_{\text{wall}} = 5 $
    $ \alpha_{\text{impeller}} = 1 $

    Table 7.  Modifications and additions of the hyperparameters of the baseline model in polar coordinates presented in Section 4.4

    Hyperparameter Value
    Loss function Loss scaling $ \alpha_{\text{momentum}, r} = 1 \times 10^6 $
    $ \alpha_{\text{momentum}, \varphi} = 1 \times 10^6 $
    $ \alpha_{\text{mass}} = 1 $
    $ \alpha_{\text{wall}, v_r} = 1 \times 10^1 $
    $ \alpha_{\text{wall}, v_\varphi} = 1 \times 10^1 $
    $ \alpha_{\text{impeller}, v_r} = 5 $
    $ \alpha_{\text{impeller}, v_\varphi} = 5 $
    $ \alpha_{\text{symmetry}, v_r} = 1 \times 10^2 $
    $ \alpha_{\text{symmetry}, v_\varphi} = 1 \times 10^2 $
    $ \alpha_{\text{symmetry}, p} = 1 $
    Optimization Epochs 12,500
    Sampling Number of domain points 4096
    $ \hookrightarrow $ Resampled every 1000 epochs
    Number of boundary points $ 512 \; \text{on} \; \Gamma_\text{stirrer} $
    $ 512 \; \text{on} \; \Gamma_\text{wall} $
    $ 1024 \; \text{on} \; \Gamma_\text{sym} $

    Table 8.  Hyperparameters of the strong BC model presented in Section 4.5

    Hyperparameter Value
    Architecture Number of layers 2
    Number of neurons per layer 100
    Activation function tanh
    Loss function Regularization $ \ell^1+\ell^2 $
    Loss scaling $ \alpha_{\text{momentum}, x} = 1 $
    $ \alpha_{\text{momentum}, y} = 1 $
    $ \alpha_{\text{mass}} = 1 $
    Optimization Optimizer L-BFGS-B
    Epochs 25,000
    Sampling Number of domain points 2048
    $ \hookrightarrow $ Resampled every 1000 epochs

    Table 9.  Hyperparameters of the hybrid BC model presented in Section 4.6

    Hyperparameter Value
    Loss function Loss scaling $ \alpha_{\text{wall}} = 1 $
    Sampling Number of boundary points $ 1024 \; \text{on} \; \Gamma_\text{wall} $

    Table 10.  Hyperparameters of the DD model with two domains presented in Section 4.7

    Hyperparameter Value
    Architecture Network for inner region
    $ \hookrightarrow $ Number of layers 2
    $ \hookrightarrow $ Number of neurons per layer 25
    Network for outer region
    $ \hookrightarrow $ Number of layers 2
    $ \hookrightarrow $ Number of neurons per layer 100
    Activation function tanh
    Loss function Regularization $ \ell^1+\ell^2 $
    Loss scaling $ \alpha_{\text{momentum}, r} = 1 \times 10^9 $ (inner region)
    $ \alpha_{\text{momentum}, \varphi} = 1 \times 10^8 $ (inner region)
    $ \alpha_{\text{momentum}, r} = 5 \times 10^{13} $ (outer region)
    $ \alpha_{\text{momentum}, \varphi} = 5 \times 10^{14} $ (outer region)
    $ \alpha_{\text{mass}} = 5 \times 10^8 $ (outer region)
    $ \alpha_{\text{coupling}, v_r} = 1 \times 10^{12} $
    $ \alpha_{\text{coupling}, v_\varphi} = 1 \times 10^{12} $
    $ \alpha_{\text{coupling}, \partial_r v_\varphi} = 1 \times 10^7 $
    $ \alpha_{\text{coupling}, p} = 1 \times 10^6 $
    $ \alpha_{\text{wall}, v_r} = 1 \times 10^{10} $
    $ \alpha_{\text{wall}, v_\varphi} = 1 \times 10^{13} $
    $ \alpha_{\text{symmetry}, v_r} = 1 \times 10^9 $
    $ \alpha_{\text{symmetry}, v_\varphi} = 1 \times 10^9 $
    $ \alpha_{\text{symmetry}, p} = 1 \times 10^5 $
    Optimization Optimizer L-BFGS-B
    Epochs 12,500
    Sampling Number of domain points 4096
    $ \hookrightarrow $ Resampled every 1000 epochs
    $ \hookrightarrow $ Position of $ R_\text{inter} $ 0.07
    $ \hookrightarrow $ Ratio of points in $ \Omega_\text{inner}/\Omega_\text{outer} $ 0.2
    Number of boundary points $ 256 \; \text{on} \; \Gamma_\text{wall} $
    $ 512 \; \text{on} \; \Gamma_\text{sym} $
    $ 256 \; \text{on} \; \Gamma_\text{inter} $

    Table 11.  Hyperparameters of the DD model with outer domain split presented in Section 4.8

    Hyperparameter Value
    Loss function Loss scaling $ \alpha_{\text{momentum}, r} = 1 \times 10^{11} $ (inner region)
    $ \alpha_{\text{momentum}, \varphi} = 1 \times 10^8 $ (inner region)
    $ \alpha_{\text{momentum}, r} = 4 \times 10^{16} $ (outer region)
    $ \alpha_{\text{momentum}, \varphi} = 4 \times 10^{16} $ (outer region)
    $ \alpha_{\text{mass}} = 4 \times 10^{10} $ (outer region)
    $ \alpha_{\text{coupling}, v_r} = 1 $
    $ \alpha_{\text{coupling}, v_\varphi} = 1 \times 10^{14} $
    $ \alpha_{\text{coupling}, \partial_r v_\varphi} = 1 \times 10^8 $
    $ \alpha_{\text{coupling}, p} = 1 \times 10^6 $
    $ \alpha_{\text{wall}, v_r} = 1 \times 10^{14} $
    $ \alpha_{\text{wall}, v_\varphi} = 1 \times 10^{15} $
    $ \alpha_{\text{symmetry}, v_r} = 1 $
    $ \alpha_{\text{symmetry}, v_\varphi} = 1 \times 10^{10} $
    $ \alpha_{\text{symmetry}, p} = 1 \times 10^5 $
    $ \alpha_{\text{baffle}} = 1 \times 10^{14} $
    $ \alpha_{\text{c}} = 1 \times 10^{13} $
    $ \alpha_{\text{d}} = 1 \times 10^8 $
    Sampling Number of domain points 4096
    $ \hookrightarrow $ Resampled every 1000 epochs
    $ \hookrightarrow $ Position of $ R_\text{inter} $ 0.08
    $ \hookrightarrow $ Ratio of points in $ \Omega_\text{inner}/\Omega_\text{outer} $ 0.2
    Number of boundary points $ 528 \; \text{on} \; \Gamma_\text{wall} $
    $ 512 \; \text{on} \; \Gamma_\text{sym} $
    $ 256 \; \text{on} \; \Gamma_\text{inter} $
    $ 256 \; \text{on} \; \Gamma_\text{c} $
    $ 512 \; \text{on} \; \Gamma_\text{baffle} $

    Table 12.  Hyperparameters of the parameterized DD model with two domains presented in Section 5.1

    Hyperparameter Value
    Architecture Network for inner region
    $ \hookrightarrow $ Number of layers 2
    $ \hookrightarrow $ Number of neurons per layer 25
    Network for outer region
    $ \hookrightarrow $ Number of layers 2
    $ \hookrightarrow $ Number of neurons per layer 100
    Activation function tanh
    Loss function Regularization $ \ell^1+\ell^2 $
    Loss scaling $ \alpha_{\text{momentum}, r} = 1 \times 10^3 $ (inner region)
    $ \alpha_{\text{momentum}, \varphi} = 1 $ (inner region)
    $ \alpha_{\text{momentum}, r} = 1 \times 10^7 $ (outer region)
    $ \alpha_{\text{momentum}, \varphi} = 1 \times 10^8 $ (outer region)
    $ \alpha_{\text{mass}} = 1 \times 10^3 $ (outer region)
    $ \alpha_{\text{coupling}, v_r} = 1 \times 10^7 $
    $ \alpha_{\text{coupling}, v_\varphi} = 1 \times 10^9 $
    $ \alpha_{\text{coupling}, \partial_r v_\varphi} = 1 \times 10^8 $
    $ \alpha_{\text{coupling}, p} = 1 \times 10^3 $
    $ \alpha_{\text{wall}, v_r} = 1 \times 10^2 $
    $ \alpha_{\text{wall}, v_\varphi} = 1 \times 10^4 $
    $ \alpha_{\text{symmetry}, v_r} = 1 $
    $ \alpha_{\text{symmetry}, v_\varphi} = 1 $
    $ \alpha_{\text{symmetry}, p} = 1 \times 10^2 $
    Optimization Optimizer L-BFGS-B
    Epochs 30,000
    Sampling Number of domain points 4096
    $ \hookrightarrow $ Resampled every 1000 epochs
    $ \hookrightarrow $ Position of $ R_\text{inter} $ 0.075
    $ \hookrightarrow $ Ratio of points in $ \Omega_\text{inner}/\Omega_\text{outer} $ 2.3
    Number of boundary points $ 256 \; \text{on} \; \Gamma_\text{wall} $
    $ 512 \; \text{on} \; \Gamma_\text{sym} $
    $ 256 \; \text{on} \; \Gamma_\text{inter} $

    Table 13.  Hyperparameters of the parameterized DD model with domain overlap presented in Section 5.2

    Hyperparameter Value
    Loss function Loss scaling $ \alpha_{\text{momentum}, r} = 1 \times 10^5 $ (inner region)
    $ \alpha_{\text{momentum}, \varphi} = 1 $ (inner region)
    $ \alpha_{\text{momentum}, r} = 1 \times 10^9 $ (outer region)
    $ \alpha_{\text{momentum}, \varphi} = 1 \times 10^9 $ (outer region)
    $ \alpha_{\text{mass}} = 1 \times 10^4 $ (outer region)
    $ \alpha_{\text{coupling}, v_r} = 1 \times 10^7 $
    $ \alpha_{\text{wall}, v_r} = 1 \times 10^4 $
    $ \alpha_{\text{wall}, v_\varphi} = 1 \times 10^4 $
    $ \alpha_{\text{symmetry}, v_r} = 1 $
    $ \alpha_{\text{symmetry}, v_\varphi} = 1 $
    $ \alpha_{\text{symmetry}, p} = 1 \times 10^2 $
    Sampling Number of domain points 4096
    $ \hookrightarrow $ Resampled every 1000 epochs
    $ \hookrightarrow $ Ratio of points in $ \Omega_\text{inner}/\Omega_\text{outer} $ 0.125
    Number of domain overlap points 1536
    $ \hookrightarrow $ Width of overlap 0.01
