March 2021, 14(3): 1079-1092. doi: 10.3934/dcdss.2020352

Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable

Kengo Nakai and Yoshitaka Saiki

1. Faculty of Marine Technology, Tokyo University of Marine Science and Technology, 2-1-6 Ecchujima, Koto-ku, Tokyo 135-8533, Japan
2. Graduate School of Business Administration, Hitotsubashi University, 2-1 Naka, Kunitachi, Tokyo 186-8601, Japan
3. JST PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan
4. Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742, USA

Received January 2019; Revised November 2019; Published May 2020

We construct a data-driven dynamical system model for a macroscopic variable, the Reynolds number, of a high-dimensionally chaotic fluid flow by training its scalar time-series data. We use a machine-learning approach, reservoir computing, to construct the model; the procedure requires no knowledge of the physical processes of fluid dynamics. We confirm that a time-series inferred from the model approximates the actual one and that some characteristics of the chaotic invariant set mimic the actual ones. We investigate the appropriate choice of the delay-coordinate, especially the delay-time and the dimension, which enables us to construct a model with a relatively high-dimensional attractor at low computational cost.
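As a concrete illustration of the delay-coordinate construction, the sketch below assembles $M$-dimensional vectors $ (s(t), s(t-\Delta\tau), \cdots, s(t-(M-1)\Delta\tau)) $ from a sampled scalar series. This is a minimal NumPy sketch under our own naming and array layout; it is not the authors' code.

```python
import numpy as np

def delay_coordinates(s, M, lag):
    """Stack M-dimensional delay-coordinate vectors
        (s(t), s(t - lag), ..., s(t - (M-1)*lag))
    as rows, where `s` is a scalar time-series sampled with step dt and
    `lag` is the delay-time Delta tau expressed in samples."""
    n = len(s) - (M - 1) * lag          # number of complete vectors
    return np.column_stack(
        [s[(M - 1 - m) * lag : (M - 1 - m) * lag + n] for m in range(M)]
    )

# Example with the values of Sec. 4: M = 14, Delta tau = 4.0, Delta t = 0.5,
# so lag = Delta tau / Delta t = 8 samples.
s = np.random.randn(100000)             # placeholder for the observed series
U = delay_coordinates(s, M=14, lag=8)   # shape (n, 14); row i is the vector at time i
```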

Citation: Kengo Nakai, Yoshitaka Saiki. Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1079-1092. doi: 10.3934/dcdss.2020352
References:

[1] P. Antonik, M. Gulina, J. Pauwels and S. Massar, Using a reservoir computer to learn chaotic attractors, with applications to chaos synchronization and cryptography, Phys. Rev. E, 98 (2018). doi: 10.1103/PhysRevE.98.012215.
[2] P. C. Di Leoni, A. Mazzino and L. Biferale, Inferring flow parameters and turbulent configuration with physics-informed data assimilation and spectral nudging, Phys. Rev. Fluids, 3 (2018). doi: 10.1103/PhysRevFluids.3.104604.
[3] D. Ibáñez-Soria, J. Garcia-Ojalvo, A. Soria-Frisch and G. Ruffini, Detection of generalized synchronization using echo state networks, Chaos, 28 (2018), 7pp. doi: 10.1063/1.5010285.
[4] M. Inubushi and K. Yoshimura, Reservoir computing beyond memory-nonlinearity trade-off, Scientific Reports, 7 (2017). doi: 10.1038/s41598-017-10257-6.
[5] T. Ishihara and Y. Kaneda, High resolution DNS of incompressible homogeneous forced turbulence: Time dependence of the statistics, in Statistical Theories and Computational Approaches to Turbulence, Springer, Tokyo, 2003, 177–188. doi: 10.1007/978-4-431-67002-5_11.
[6] K. Ishioka, ispack-0.4.1, 1999. Available from: http://www.gfd-dennou.org/arch/ispack/.
[7] H. Jaeger, The "echo state" approach to analysing and training recurrent neural networks, GMD Report, 148 (2001).
[8] H. Jaeger and H. Haas, Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication, Science, 304 (2004), 78–80. doi: 10.1126/science.1091277.
[9] Z. Lu, B. R. Hunt and E. Ott, Attractor reconstruction by machine learning, Chaos, 28 (2018), 9pp. doi: 10.1063/1.5039508.
[10] Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett and E. Ott, Reservoir observers: Model-free inference of unmeasured variables in chaotic systems, Chaos, 27 (2017). doi: 10.1063/1.4979665.
[11] M. Lukoševičius and H. Jaeger, Reservoir computing approaches to recurrent neural network training, Comput. Science Rev., 3 (2009), 127–149. doi: 10.1016/j.cosrev.2009.03.005.
[12] W. Maass, T. Natschläger and H. Markram, Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Comput., 14 (2002), 2531–2560. doi: 10.1162/089976602760407955.
[13] K. Nakai and Y. Saiki, Machine-learning inference of fluid variables from data using reservoir computing, Phys. Rev. E, 98 (2018). doi: 10.1103/PhysRevE.98.023111.
[14] J. Pathak, B. Hunt, M. Girvan, Z. Lu and E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Phys. Rev. Lett., 120 (2018). doi: 10.1103/PhysRevLett.120.024102.
[15] J. Pathak, Z. Lu, B. Hunt, M. Girvan and E. Ott, Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data, Chaos, 27 (2017), 9pp. doi: 10.1063/1.5010300.
[16] T. Sauer, J. A. Yorke and M. Casdagli, Embedology, J. Statist. Phys., 65 (1991), 579–616. doi: 10.1007/BF01053745.
[17] F. Takens, Detecting strange attractors in turbulence, in Dynamical Systems and Turbulence, Lecture Notes in Math., 898, Springer, Berlin-New York, 1981, 366–381. doi: 10.1007/BFb0091924.
[18] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Scripta Series in Mathematics, V. H. Winston & Sons, Washington, D.C.; John Wiley & Sons, New York-Toronto-London, 1977.
[19] D. Verstraeten, B. Schrauwen, M. D'Haene and D. A. Stroobandt, An experimental unification of reservoir computing methods, Neural Networks, 20 (2007), 391–403. doi: 10.1016/j.neunet.2007.04.003.

Figure 1.  Inference of a time-series of the Reynolds number of a fluid flow. The time-series of $ s_1 = \tilde{R}_{\lambda} $ inferred by the reservoir model is compared with the reference data obtained by direct numerical simulation of the Navier–Stokes equation (top left). The variable $ t^\prime \; ( = t-T>0) $ denotes the time after the training phase ends at $ t = T $. The inference errors $ \varepsilon_1, \varepsilon_2 $, defined by $ \varepsilon_1(t) = | \mathbf{s}(t)-\hat{ \mathbf{s}}(t)| $ and $ \varepsilon_2(t) = |s_1(t)-\hat{s}_1(t)| = |\tilde{R}_{\lambda}(t)-\hat{\tilde{R}}_{\lambda}(t)| $, grow exponentially due to the chaotic dynamics (top right). In the bottom panel, switching between a laminar state with small-amplitude fluctuations and a bursting state with large-amplitude fluctuations appears in the inferred time-series of $ s_1 = \tilde{R}_{\lambda} $, as observed in the actual time-series.
Figure 2.  Reproduction of the delay property, which should be satisfied by a successfully inferred time-series $ \hat{\mathbf s} $. For all $ m = 2, \cdots, 14 $ and most $ t^{\prime} $ we observe $ \hat{s}_1(t^\prime)\approx\hat{s}_{m}(t^\prime+(m-1)\Delta\tau) $, although only the time-series of $ \hat{s}_1(t^\prime) $ and $ \hat{s}_{14}(t^\prime+13\Delta\tau) $ $ (7000\le t^{\prime}\le 8000) $ are shown.
Figure 3.  Poincaré points on the plane $ (s_2, s_3) $ along the trajectory $ \hat{ \mathbf{s}} $ obtained from the reservoir model (red) and $ \mathbf{s} $ from the Navier–Stokes equation (blue). The time length of each trajectory is $ 90000 $. The Poincaré section is defined by $ s_{1} = 0, \; ds_{1}/dt>0 $. The two sets of points are similar to each other, although the trajectory generated from the reservoir model does not cover some regions of the bursting states.
Figure 4.  Density distributions generated from trajectories of the variable $ s_1 $ obtained from the constructed reservoir model (reservoir output) and from the direct numerical simulation of the Navier–Stokes equation (actual). Each trajectory, with time-length 50000, has a different initial condition. The distributions are similar to each other in the sense that each peaks at $ s_1\approx0.2 $ and has relatively long tails.
Figure 5.  Inference of a time-series of the Reynolds number for $ t^{\prime}>T_{\text{out}} $ ($ T_{\text{out}} = 1000 $) using the reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Fig. 1). We use the same $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ as for the model inferring the trajectory in Fig. 1, but the time-series $ s_1(t^\prime) $ for $ T_{\text{out}}- T_1<t^\prime<T_{\text{out}} $ is used as the initial condition, where $ T_1 $ is the transient time required for the reservoir state vector $ \mathbf{r}(t) $ to converge. In the top panel, switching between laminar and bursting states is observed in the inferred trajectory. The bottom panel, an enlargement of the top panel, shows that the model has predictability for $ 1000<t^\prime<1080 $.
Figure 6.  Inference of time-series of the Reynolds number over many time-intervals $ T_{\text{out}}<t^{\prime}<T_{\text{out}}+250 $ ($ T_{\text{out}} = 500, 1000, \cdots, 6000 $) using the same reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Figs. 1 and 5). As in Fig. 5, only the initial condition changes for each case; the model is fixed once the appropriate choice of $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ has been determined from the training data for $ t^{\prime}<0 $.
Figure 7.  Auto-correlation function $ C(x) $ of a trajectory $ \{R_{\lambda}(t)\} $ as a function of the time-delay $ x $ (left), and an enlargement (right). $ C(x) $ is shown together with the horizontal lines $ \pm0.3, \pm0.5 $ (left panel) and $ 0.3, 0.7 $ (right panel). Each color represents $ C(x) $ computed from a trajectory of time-length 5000 starting from a different initial condition; the differences are mainly due to the intermittency of the dynamics. In the left panel the envelope $ C_e(x) = \exp(-x/60) $ falls below 0.5 at $ x \approx 40 $ and below 0.3 at $ x \approx 75 $. The right panel shows that $ C(x) $ first falls below $ 0.7 $ at $ x \approx 3.0 $ and first falls below $ 0.3 $ at $ x \approx 5.0 $.
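For reference, the quantity plotted in Fig. 7 can be estimated as below. This is a sketch using the usual normalized auto-correlation estimator, not the authors' code; `first_crossing` is a helper name of ours.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized auto-correlation C(k) of a scalar series; k in samples."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

def first_crossing(C, threshold, dt):
    """First delay x at which C(x) drops below `threshold`.
    Caution: np.argmax returns 0 if C never crosses the threshold."""
    k = int(np.argmax(C < threshold))
    return k * dt

# With dt = 0.5, one would expect first_crossing(C, 0.7, 0.5) near 3.0 and
# first_crossing(C, 0.3, 0.5) near 5.0, matching the right panel of Fig. 7.
```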
Table 1.  The list of variables and matrices used in the reservoir computing
variable | description
$ \mathbf{u}\; (\in \mathbf{R}^M) $ | input variable
$ \mathbf{r}\; (\in \mathbf{R}^N) $ | reservoir state vector
$ \mathbf{s}\; (\in \mathbf{R}^M) $ | actual output variable obtained from the Navier–Stokes equation
$ \hat{ \mathbf{s}}\; (\in \mathbf{R}^M) $ | inferred output variable obtained from reservoir computing
$ \mathbf{A}\; (\in \mathbf{R}^{N \times N}) $ | weighted adjacency matrix
$ \mathbf{W}_{\text{in}}\; (\in \mathbf{R}^{M \times N}) $ | linear input weight
$ \mathbf{W}_{\text{out}}\; (\in \mathbf{R}^{N \times M}) $ | matrix used for the translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $
$ \mathbf{c}\; (\in \mathbf{R}^{M}) $ | vector used for the translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $
$ \tilde{x} $ | normalized variable of $ {x} $
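The quantities of Table 1 combine in an echo-state-type update and a linear readout (cf. Jaeger [7] and Lu et al. [10]). The sketch below uses the row-vector convention implied by the matrix shapes in Table 1; the specific leaky update form with nonlinearity degree $ \alpha $ is a common choice and an assumption of ours, since the paper's equations are not reproduced on this page.

```python
import numpy as np

def reservoir_step(r, u, A, W_in, alpha):
    """One reservoir update, consistent with the shapes in Table 1
    (row-vector convention: u in R^M, r in R^N, W_in in R^{M x N}):
        r(t + dt) = (1 - alpha) r(t) + alpha * tanh(r(t) A + u(t) W_in)
    A common leaky echo-state form [7, 10]; assumed, not quoted."""
    return (1.0 - alpha) * r + alpha * np.tanh(r @ A + u @ W_in)

def readout(r, W_out, c):
    """Linear readout s_hat = r W_out + c, with W_out in R^{N x M}."""
    return r @ W_out + c
```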
Table 2.  The list of parameters and their values used in the reservoir computing in each section
parameter | Sec. 4 | Sec. 5
$ M $ dimension of input and output variables | 14 | Table 3
$ \Delta \tau $ delay-time of the delay-coordinate | 4.0 | Table 3
$ N $ dimension of the reservoir state vector | 3000 | 2000
$ D $ parameter determining $ \mathbf{A} $ | 120 | 80
$ \Delta t $ time step for the reservoir dynamics | 0.5 | 0.5
$ T_0 $ transient time for $ \mathbf{r} $ to converge | 3750 | 3750
$ T $ training time | 40000 | 40000
$ L_0 \; (=T_0/\Delta t) $ number of iterations for the transient | 7500 | 7500
$ L \; (=T/\Delta t) $ number of iterations for the training | 80000 | 80000
$ \rho $ maximal eigenvalue of $ \mathbf{A} $ | 0.7 | 0.7
$ \sigma $ scale of input weights in $ \mathbf{W}_{\text{in}} $ | 0.5 | 0.5
$ \alpha $ nonlinearity degree of reservoir dynamics | 0.6 | 0.6
$ \beta $ regularization parameter | 0.1 | 0.1
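The readout pair $ (\mathbf{W}^*_\text{out}, \mathbf{c}^*) $ is obtained by linear least squares with Tikhonov regularization [18], using the regularization parameter $ \beta $ of Table 2. The following is a sketch assuming reservoir states and targets have been collected row-wise over the training phase; it is not the authors' code, and the choice of leaving the bias unpenalized is a convention of ours.

```python
import numpy as np

def train_readout(R, S, beta=0.1):
    """Fit W_out (N x M) and c (M,) minimizing
        sum_k || r_k W_out + c - s_k ||^2 + beta * ||W_out||^2
    (Tikhonov regularization [18]).
    R: (L, N) reservoir states; S: (L, M) target output vectors."""
    L, N = R.shape
    X = np.hstack([R, np.ones((L, 1))])            # bias column realizes c
    reg = beta * np.eye(N + 1)
    reg[-1, -1] = 0.0                              # do not penalize the bias
    W = np.linalg.solve(X.T @ X + reg, X.T @ S)    # shape (N + 1, M)
    return W[:-1], W[-1]                           # W_out (N, M), c (M,)
```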
Table 3.  The number of successful trials for each choice of the delay-time $ \Delta \tau $ and the dimension $ M $ of the delay-coordinate. The matrices $ \mathbf{A} $ and $ \mathbf{W}_{\text{in}} $ are chosen randomly, and the number of successful cases is counted. See Table 2 for the parameter values. We say the inference is successful if the three conditions (ⅰ)(ⅱ)(ⅲ) in (12) hold, where the criteria $ (e_{60}, e_{90}) $ are set as (a) $ (0.14, 0.30) $ and (b) $ (0.13, 0.17) $. For each set of values $ (\Delta \tau, M) $ we tried 8160 cases of $ \mathbf{A} $ and $ \mathbf{W}_\text{in} $. For each value of $ \Delta\tau $, the best choice of $ M $ is identified by the bold number(s) (blue), and the best under each criterion is identified by the underlined bold number(s) (red).
$ {\rm{ (a) }}\left( {{e_{60}}, {e_{90}}} \right) = (0.14, 0.30)$
$ \Delta \tau $ $ \backslash $ $ M $ 10 11 12 13 14 15 16 17 18 19 20
3.0 0 0 0 0 0 1 19 24 43 37 27
3.5 0 0 0 11 20 28 57 48 21 11 7
4.0 0 3 18 43 107 59 21 14 2 4 5
4.5 3 14 43 54 21 15 8 1 1 1 0
5.0 10 24 26 19 9 1 1 1 0 0 0
$ {\rm{ (b) }}\left( {{e_{60}}, {e_{90}}} \right) = (0.13, 0.17)$
$\Delta \tau$ $\backslash$ $M$ 10 11 12 13 14 15 16 17 18 19 20
3.0 0 0 0 0 0 0 3 6 10 8 4
3.5 0 0 0 2 3 5 6 4 1 3 1
4.0 0 0 2 8 14 10 1 4 1 0 1
4.5 1 1 8 14 1 0 1 0 0 0 0
5.0 2 4 6 6 3 0 1 0 0 0 0
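Each trial in Table 3 draws a fresh random pair $ (\mathbf{A}, \mathbf{W}_\text{in}) $. Below is a sketch of one plausible construction using the parameters $ D, \rho, \sigma $ of Table 2; the sparsity pattern and the uniform distributions are assumptions of ours, since the generation recipe is not reproduced on this page.

```python
import numpy as np

def random_reservoir(N, M, D, rho, sigma, seed=None):
    """Draw one random realization of (A, W_in) as in the trials of Table 3.
    Assumed construction: A has ~D nonzero entries per row, rescaled so its
    maximal eigenvalue modulus equals rho; W_in has i.i.d. entries uniform
    in [-sigma, sigma], with the M x N shape of Table 1."""
    rng = np.random.default_rng(seed)
    mask = rng.random((N, N)) < D / N              # sparse connectivity
    A = np.where(mask, rng.uniform(-1.0, 1.0, (N, N)), 0.0)
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))
    W_in = rng.uniform(-sigma, sigma, (M, N))
    return A, W_in

# Sec. 5 values (Table 2): N = 2000, D = 80, rho = 0.7, sigma = 0.5.
# (Eigenvalues of a dense 2000 x 2000 matrix are slow but feasible to compute.)
A, W_in = random_reservoir(N=2000, M=14, D=80, rho=0.7, sigma=0.5, seed=0)
```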