A neural network closure for the Euler-Poisson system based on kinetic simulations

  • This work deals with the modeling of plasmas, which are ionized gases. Using machine learning, we construct a closure for the one-dimensional Euler-Poisson system (recalled schematically below) that is valid over a wide range of collisional regimes. This closure, based on a fully convolutional neural network called V-net, takes as input the full spatial profiles of the density, mean velocity and temperature, and predicts the full spatial profile of the heat flux as output. It is learned from data produced by kinetic simulations of the Vlasov-Poisson equations. The data generation and preprocessing are designed to ensure an almost uniform accuracy over the chosen range of Knudsen numbers (which parametrize the collisional regimes). Finally, several numerical tests are carried out to assess the validity and flexibility of the whole pipeline.

    Mathematics Subject Classification: Primary: 35Q31, 65M08, 82D10, 68T07.
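For reference, a schematic form of the closed one-dimensional Euler-Poisson system can be written as follows. This is only a sketch: the precise normalization and the sign of the electric force (electron vs. ion dynamics) are those of the paper and may differ from the conventions below.

$$
\begin{aligned}
&\partial_t \rho + \partial_x(\rho u) = 0, \\
&\partial_t(\rho u) + \partial_x\!\left(\rho u^2 + \rho T\right) = \pm\,\rho E, \\
&\partial_t w + \partial_x\!\big((w + \rho T)\,u + q\big) = \pm\,\rho u E,
\qquad w = \tfrac{1}{2}\rho u^2 + \tfrac{1}{2}\rho T, \\
&\partial_x E = \rho - 1.
\end{aligned}
$$

The system is not closed because the heat flux $ q $ is not a function of $ (\rho, u, T) $. The closure studied here supplies $ q \approx \text{NN}_\theta(\rho, u, T) $, learned from kinetic data, in place of classical choices such as $ q = 0 $ (Euler) or a Fourier-type estimate $ q \propto -\varepsilon\,\partial_x T $, which serves as the "Navier-Stokes estimation" baseline in the figures below.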

    Figure 1.  Graph of the composition of the closure. The different operations are (Re)-(Re') resampling, (P) pre-processing, (Sl) slicing, ($ \text{NN}_\theta $) neural network, (R) reconstruction, (P') post-processing and (Sm) smoothing

    Figure 2.  Visual representation of the three operations involved in the non-locality: the (Sm) smoothing, the (R) reconstruction and (Sl) slicing mechanism, and the ($ \text{NN}_\theta $) neural network. Highlighted in gray is the part of the data used to compute the output value whose position is indicated by the red line. For better readability, only one of the four vectors of the input $ X $, and only the three windows involved, are represented. These examples use the redundancy parameter $ r = 2 $. Two scenarios are illustrated: on the left, each output value of the neural network depends on relatively few input values, while on the right it depends on relatively many; the position of the output value under consideration also differs between the two scenarios

    Figure 3.  Graph of a 1D V-Net with a window of size $ N = 512 $, a depth $ k = 2 $ and $ \ell = 3 $ levels
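Based on this caption and on the reference hyper-parameters of Table 1 below, a minimal PyTorch sketch of a 1D V-Net-type network is given here. Only the window size $ N $, the number of levels $ \ell $, the per-level depth, the kernel size $ p $ and the softplus activation come from the paper; the channel counts, the circular padding, the strided up/down-sampling and the additive skip connections are illustrative assumptions, as is the choice of four input channels (Figure 2 mentions "four vectors of the input $ X $", e.g. $ \rho $, $ u $, $ T $ and a constant Knudsen-number channel).

```python
# Minimal sketch of a 1D V-Net-style network (channel counts, padding mode and
# skip-connection type are assumptions; see the lead-in above).
import torch
import torch.nn as nn

def conv_block(channels, depth, kernel_size):
    """`depth` convolutions with softplus activations, resolution preserved."""
    layers = []
    for _ in range(depth):
        layers += [nn.Conv1d(channels, channels, kernel_size,
                             padding=kernel_size // 2, padding_mode="circular"),
                   nn.Softplus()]
    return nn.Sequential(*layers)

class VNet1D(nn.Module):
    def __init__(self, in_channels=4, out_channels=1,
                 levels=5, depth=4, kernel_size=11, base_channels=16):
        super().__init__()
        self.lift = nn.Conv1d(in_channels, base_channels, 1)
        self.enc, self.down, self.up, self.dec = (nn.ModuleList() for _ in range(4))
        ch = base_channels
        for _ in range(levels - 1):
            self.enc.append(conv_block(ch, depth, kernel_size))
            self.down.append(nn.Conv1d(ch, 2 * ch, 2, stride=2))    # halve the resolution
            ch *= 2
        self.bottom = conv_block(ch, depth, kernel_size)
        for _ in range(levels - 1):
            self.up.append(nn.ConvTranspose1d(ch, ch // 2, 2, stride=2))  # double it back
            ch //= 2
            self.dec.append(conv_block(ch, depth, kernel_size))
        self.proj = nn.Conv1d(ch, out_channels, 1)

    def forward(self, x):                        # x: (batch, 4, N), e.g. N = 512
        skips = []
        x = self.lift(x)
        for enc, down in zip(self.enc, self.down):
            x = enc(x)
            skips.append(x)                      # kept for the skip connections
            x = down(x)
        x = self.bottom(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(up(x) + skip)                # additive skip (a U-Net would concatenate)
        return self.proj(x)                      # (batch, 1, N): predicted heat-flux window
```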

    Figure 4.  Scheme of the data generation process for one simulation with the kinetic model. With one Knudsen number and one initial condition, we produce 20 entries in the dataset
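The fluid quantities used as inputs, and the heat flux used as target, are velocity moments of the kinetic distribution $ f(x, v) $ produced by the Vlasov-Poisson solver. A sketch with the usual one-dimensional definitions is given below; the paper's exact normalization of the temperature and of the heat flux may differ.

```python
# Sketch: fluid moments of a kinetic distribution f(x, v) on a uniform velocity
# grid, using the usual 1D definitions (the paper's normalization may differ).
import numpy as np

def fluid_moments(f, v):
    """f: (Nx, Nv) distribution, v: (Nv,) velocity grid. Returns rho, u, T, q."""
    dv = v[1] - v[0]
    rho = f.sum(axis=1) * dv                     # density
    u = (f * v).sum(axis=1) * dv / rho           # mean velocity
    c = v[None, :] - u[:, None]                  # velocities centered on u(x)
    T = (f * c**2).sum(axis=1) * dv / rho        # temperature (1D, k_B/m = 1)
    q = 0.5 * (f * c**3).sum(axis=1) * dv        # heat flux: the training target
    return rho, u, T, q
```

Applied to several states of each kinetic run (presumably the 20 entries per simulation of Figure 4), this provides matched pairs $ (\rho, u, T) \mapsto q $ for training.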

    Figure 5.  Possible functions $ \rho $, $ u $ and $ T $ generated with the process described in Section 4.2

    Figure 6.  Scheme of the whole data processing for the training and the prediction processes, respectively. The different operations are (S) standardization, (Sl) slicing, (N) normalization, (I) inverse normalization, (R) reconstruction and (Sm) smoothing

    Figure 7.  Slicing of a 1D signal into overlapping windows with different redundancy parameters $ r $. The original signal is represented vertically on the left, and the windows are on the right. The hatched parts of each window are the margins that are ignored when reconstructing the signal. The windows that exceed the span of the original signal are completed by periodicity
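A sketch of the slicing (Sl) is given below: the periodic signal is cut into windows of size $ N $ whose starting points are offset by $ N/r $, so that each grid point belongs to $ r $ windows, and windows running past the end of the signal are wrapped around periodically. The treatment of the hatched margins belongs to the reconstruction step (see Figure 8).

```python
# Sketch of the slicing (Sl): overlapping windows of size N with redundancy r,
# completed by periodicity (assumes the step N // r divides the signal length).
import numpy as np

def slice_windows(x, N=512, r=2):
    """x: array of shape (..., Nx). Returns windows of shape (..., n_win, N)."""
    Nx = x.shape[-1]
    step = N // r                                          # offset between window starts
    starts = np.arange(0, Nx, step)
    idx = (starts[:, None] + np.arange(N)[None, :]) % Nx   # periodic completion
    return x[..., idx]
```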

    Figure 8.  Kernel used for the reconstruction of an entire signal from the windows predicted by the neural network, with the redundancy parameter $ r = 3 $
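The reconstruction (R) can be sketched as a weighted overlap-add: each predicted window is multiplied by a kernel that vanishes on the margins, the contributions are accumulated at their original positions, and the result is divided by the total weight. The hat-shaped kernel below is only a placeholder for the actual kernel shown in Figure 8.

```python
# Sketch of the reconstruction (R): weighted overlap-add of the predicted
# windows; the kernel is a placeholder, not the one of Figure 8.
import numpy as np

def reconstruction_kernel(N, r):
    """Hat-shaped weights that vanish on the margins ignored at reconstruction."""
    w = 1.0 - np.abs(np.linspace(-1.0, 1.0, N))
    margin = N // (2 * r)                        # placeholder margin width
    w[:margin] = 0.0
    w[N - margin:] = 0.0
    return w

def reconstruct(windows, Nx, r):
    """windows: (n_win, N) network outputs; assumes N <= Nx. Returns the signal (Nx,)."""
    n_win, N = windows.shape
    step = N // r
    kernel = reconstruction_kernel(N, r)
    out = np.zeros(Nx)
    weight = np.zeros(Nx)
    for k in range(n_win):
        idx = (k * step + np.arange(N)) % Nx     # same periodic indexing as (Sl)
        out[idx] += kernel * windows[k]
        weight[idx] += kernel
    return out / weight
```

At prediction time, the closure then composes roughly as in Figures 1 and 6: standardize the fluid data, slice it into windows, apply the network window by window, undo the normalization, reconstruct the full heat flux with such a kernel, and finally smooth the result.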

    Figure 9.  Three examples of predictions. Note the different scales of the y-axes: the amplitude of the heat flux tends to increase with the Knudsen number

    Figure 10.  Distribution of the relative errors of the neural network and the Navier-Stokes estimation over the test dataset ($ 10\,000 $ predictions)
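Throughout Figures 10 to 14, the "relative error" can be read as the relative $ L^2 $ discrepancy between an estimated heat flux (network prediction or Navier-Stokes estimation) and the reference kinetic one, as sketched below; the captions do not reproduce the exact definition, so this is an interpretation.

```python
# Sketch: relative L2 error of an estimated heat flux against the kinetic one.
import numpy as np

def relative_l2_error(q_est, q_ref):
    return np.linalg.norm(q_est - q_ref) / np.linalg.norm(q_ref)
```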

    Figure 11.  Error of the Navier-Stokes estimations and of the network on the test dataset, depending on the Knudsen number and the $ L^2 $ norm of the real heat flux. Each dot corresponds to one entry in the test dataset. For better clarity, only one data point in every 31 is shown here

    Figure 12.  Relative error of the Navier-Stokes estimations and of the network on the test dataset, depending on the Knudsen number. The line represents the median of the error over the 100 entries of the test dataset for each value of $ \varepsilon $, and the coloured area represents the interquartile interval

    Figure 13.  Norm of the real heat flux and relative errors of the predicted heat flux and the Navier-Stokes estimation throughout simulations. Median and interquartile interval over 50 simulations

    Figure 14.  Median relative error and number of learnable parameters for V-Nets with different hyper-parameters: ($ \ell $) number of levels, ($ d $) depth, and ($ p $) size of the kernels. Points sharing the same color represent networks that differ by only one hyper-parameter

    Figure 15.  Example of the evolution of the density, mean velocity, temperature and heat flux during the first time unit with a randomly generated initial condition and $ \varepsilon = 0.1 $, obtained with the kinetic model

    Figure 16.  Comparison of the four models on the density and heat flux at $ t = 1 $ with a randomly generated initial condition and $ \varepsilon = 0.1 $

    Figure 17.  Examples of the evolution of the electric energy with different initial conditions and different Knudsen numbers
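The electric energy used as a diagnostic in Figures 17, 18 and 25 is, up to the paper's normalization, the spatial $ L^2 $ norm of the electric field, as sketched below.

```python
# Sketch: electric energy diagnostic at one time, E_elec = 1/2 * integral E(x)^2 dx
# (up to normalization), with E the field given by the Poisson equation.
import numpy as np

def electric_energy(E, dx):
    """E: electric field on the spatial grid at a given time; dx: grid spacing."""
    return 0.5 * np.sum(E**2) * dx
```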

    Figure 18.  Distributions over 200 simulations of the relative errors of the three fluid models compared to the kinetic model, measured on the logarithm of the electric energy from $ t = 0 $ to $ t = 8 $. For better readability, the x-axis is in logarithmic scale

    Figure 19.  $ L^2 $ relative errors of the fluid models with respect to the kinetic model, depending on the Knudsen number. The first plot shows the raw data and the second shows the median and interquartile interval for 20 uniform classes of Knudsen numbers between $ 0.01 $ and $ 1 $

    Figure 20.  Example of a heat flux predicted by the neural network, with and without smoothing. The smoothing here uses $ \sigma\simeq0.05 $
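The smoothing (Sm) applied to the predicted heat flux can be sketched as a periodic convolution with a Gaussian of width $ \sigma $; here $ \sigma $ is assumed to be expressed in the spatial units of the (periodic) domain, and the actual implementation in the paper may differ, e.g. by filtering in Fourier space.

```python
# Sketch of the smoothing (Sm): circular Gaussian filtering of the predicted
# heat flux, with sigma in spatial units (assumption).
import numpy as np

def smooth_periodic(q, dx, sigma=0.05):
    Nx = q.shape[-1]
    x = np.arange(Nx) * dx
    d = np.minimum(x, Nx * dx - x)               # periodic distance to the origin
    kernel = np.exp(-0.5 * (d / sigma) ** 2)
    kernel /= kernel.sum()
    # circular convolution via the FFT (the signal is periodic in space)
    return np.real(np.fft.ifft(np.fft.fft(q) * np.fft.fft(kernel)))
```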

    Figure 21.  Relative errors of the predictions over the test dataset, depending on the amount of smoothing $ \sigma $

    Figure 22.  Proportion of simulations of the fluid model reaching $ t = 3 $ out of 30 simulations, depending on the amount of smoothing $ \sigma $

    Figure 23.  Relative errors of the fluid+network model on five different quantities, depending on the resolution used. Each point is an average over 10 simulations, always using the same 10 initial conditions and $ \varepsilon = 0.1 $. The network used was trained with data generated with $ N_x = 1\,024 $. To adapt to different resolutions, option 1 relies on the slicing mechanism to handle the different input sizes, while option 2 resamples the input to $ N_x = 1\,024 $ and resamples the output back to the original resolution. For all resolutions, the errors are computed relative to kinetic simulations with a high resolution ($ N_x = 2\,048 $, $ N_v = 141 $)
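Option 2 only requires a resolution-changing operator (Re)/(Re'). For a periodic signal, Fourier (trigonometric) resampling is a natural candidate, sketched below with `scipy.signal.resample`; the resampling actually used in the paper is not specified in the caption, and `apply_closure` is a placeholder for the full window-based prediction pipeline (slicing, network, reconstruction, smoothing).

```python
# Sketch of option 2: resample the fluid data to the training resolution,
# apply the learned closure there, and resample the heat flux back.
import numpy as np
from scipy.signal import resample   # Fourier resampling, suited to periodic data

def closure_any_resolution(apply_closure, rho, u, T, N_train=1024):
    Nx = rho.shape[-1]
    rho_t, u_t, T_t = (resample(field, N_train) for field in (rho, u, T))   # (Re)
    q_train_res = apply_closure(rho_t, u_t, T_t)   # closure at its native resolution
    return resample(q_train_res, Nx)               # (Re'): back to the original grid
```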

    Figure 24.  Example of convergence of the fluid+network model with option 2, towards what seems to be a slightly different solution from that of the kinetic model

    Figure 25.  Distributions over 100 simulations of the relative errors of the three fluid models compared to the kinetic model, measured on the logarithm of the electric energy from $ t = 0 $ to $ t = 8 $, with discontinuous initial conditions. For better readability, the x-axis is in logarithmic scale. The distributions closely resemble those of Figure 18, which suggests that the fluid+network model is not much affected by the initial discontinuities

    Figure 26.  Example of the evolution of the density and temperature up to $ t = 0.5 $ with a randomly generated discontinuous initial condition and $ \varepsilon = 0.1 $, obtained with the kinetic model. The initial discontinuities are quickly smoothed out by the dynamics of the system

    Table 1.  Hyper-parameters of the reference neural network

    Hyper-parameter                      Value
    size of the input window ($ N $)     512
    number of levels ($ \ell $)          5
    depth ($ d $)                        4
    size of the kernels ($ p $)          11
    activation function                  softplus
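As a usage note, the reference configuration of Table 1 corresponds, in the V-Net sketch given after Figure 3, to the instantiation below; the number of channels per level (`base_channels`) is not listed in the table and is an assumption, and the softplus activation is hard-coded in that sketch.

```python
# Reference hyper-parameters of Table 1 plugged into the VNet1D sketch above.
import torch

net = VNet1D(in_channels=4, out_channels=1,
             levels=5,           # number of levels (l)
             depth=4,            # depth (d)
             kernel_size=11,     # size of the kernels (p)
             base_channels=16)   # assumption: not specified in Table 1

windows = torch.randn(8, 4, 512)  # a batch of input windows of size N = 512
heat_flux = net(windows)          # shape (8, 1, 512)
```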