doi: 10.3934/dcdss.2020352

Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable

Kengo Nakai and Yoshitaka Saiki

1. Faculty of Marine Technology, Tokyo University of Marine Science and Technology, 2-1-6 Ecchujima, Koto-ku, Tokyo 135-8533, Japan
2. Graduate School of Business Administration, Hitotsubashi University, 2-1 Naka, Kunitachi, Tokyo 186-8601, Japan
3. JST PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan
4. Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742, USA

Received: January 2019. Revised: November 2019. Published: May 2020.

We construct a data-driven dynamical-system model for a macroscopic variable, the Reynolds number, of a high-dimensionally chaotic fluid flow by training on its scalar time-series data. We use a machine-learning approach, reservoir computing, to construct the model, and no knowledge of the physical processes of fluid dynamics enters the procedure. We confirm that a time-series inferred from the model approximates the actual one and that some characteristics of the chaotic invariant set mimic the actual ones. We investigate the appropriate choice of the delay-coordinate, especially the delay-time and the dimension, which enables us to construct a model having a relatively high-dimensional attractor at low computational cost.

Citation: Kengo Nakai, Yoshitaka Saiki. Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020352
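The delay-coordinate input used throughout can be formed directly from the scalar observable. Below is a minimal Python sketch of this embedding step, assuming a uniformly sampled series; the names (delay_coordinates, R_lambda_normalized, k) are illustrative, and the ordering of components is chosen to be consistent with the delay property stated in Figure 2, $ s_m(t) = s_1(t-(m-1)\Delta\tau) $.

```python
import numpy as np

def delay_coordinates(x, M, k):
    """Build M-dimensional delay-coordinate vectors
    s(t) = (x(t), x(t - k*dt), ..., x(t - (M-1)*k*dt))
    from a scalar series x sampled at a fixed step dt,
    so that component m is the scalar delayed by (m-1)*k*dt."""
    n = len(x) - (M - 1) * k
    return np.stack(
        [x[(M - 1 - m) * k : (M - 1 - m) * k + n] for m in range(M)],
        axis=1,
    )

# e.g., delta_tau = 4.0 with sampling step 0.5 gives k = 8 and M = 14
# (the Sec. 4 values of Table 2):
# S = delay_coordinates(R_lambda_normalized, M=14, k=8)
```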

Figure 1.  Inference of a time-series of the Reynolds number of a fluid flow. The time-series of $ s_1 = \tilde{R}_{\lambda} $ inferred by the reservoir model is compared with reference data obtained by direct numerical simulation of the Navier–Stokes equations (top left). The variable $ t^\prime \; ( = t-T>0) $ denotes the time after the training phase ends at $ t = T $. The inference errors $ \varepsilon_1, \varepsilon_2 $, defined by $ \varepsilon_1(t) = | \mathbf{s}(t)-\hat{ \mathbf{s}}(t)| $ and $ \varepsilon_2(t) = |s_1(t)-\hat{s}_1(t)| = |\tilde{R}_{\lambda}(t)-\hat{\tilde{R}}_{\lambda}(t)| $, grow exponentially due to the chaotic property (top right). In the bottom panel, switching between a laminar state with small-amplitude fluctuations and a bursting state with large-amplitude fluctuations appears in the inferred time-series of $ s_1 = \tilde{R}_{\lambda} $, as observed in the actual time-series
Figure 2.  Reproduction of the delay property that a successfully inferred time-series $ \hat{\mathbf s} $ must satisfy. We observe that $ \hat{s}_1(t^\prime)\approx\hat{s}_{m}(t^\prime+(m-1)\Delta\tau) $ for all $ m = 2, \cdots, 14 $ and for most $ t^{\prime} $, although only the time-series of $ \hat{s}_1(t^\prime) $ and $ \hat{s}_{14}(t^\prime+13\Delta\tau) $ $ (7000\le t^{\prime}\le 8000) $ are shown
Figure 3.  Poincaré points on the plane $ (s_2, s_3) $ along the trajectory $ \hat{ \mathbf{s}} $ obtained from the reservoir model (red) and $ \mathbf{s} $ from the Navier–Stokes equations (blue). The time length of each trajectory is $ 90000 $. The Poincaré section is defined by $ {s}_{1} = 0, \; d{s}_{1}/dt>0 $. The two point sets are similar to each other, although the trajectory generated from the reservoir model does not cover some regions of bursting states
Figure 4.  Density distributions of the variable $ s_1 $ generated from trajectories obtained from the constructed reservoir model (reservoir output) and from direct numerical simulation of the Navier–Stokes equations (actual). Each trajectory has time length 50000 and a different initial condition. The distributions are similar in that each peaks at $ s_1\approx0.2 $ and has relatively long tails
Figure 5.  Inference of a time-series of the Reynolds number for $ t^{\prime}>T_{\text{out}} $ ($ T_{\text{out}} = 1000 $) using the reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Fig. 1). We use the same $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ as for the model inferring the trajectory in Fig. 1, but take the time-series $ s_1(t^\prime) $ for $ T_{\text{out}}- T_1<t^\prime<T_{\text{out}} $ as the initial condition, where $ T_1 $ is the transient time for the reservoir state vector $ \mathbf{r}(t) $ to converge. In the top panel, switching between laminar and bursting states is observed in the inferred trajectory. The bottom panel, an enlargement of the top panel, shows that the model has predictability for $ 1000<t^\prime<1080 $
Figure 6.  Inference of time-series of the Reynolds number over many time-intervals $ T_{\text{out}}<t^{\prime}<T_{\text{out}}+250 $ ($ T_{\text{out}} = 500, 1000, \cdots, 6000 $) using the same reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Figs. 1 and 5). As in Fig. 5, only the initial condition is changed for each case, while the model is fixed once the appropriate $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ have been determined from the training data for $ t^{\prime}<0 $
Figure 7.  Auto-correlation function $ C(x) $ of a trajectory $ \{R_{\lambda}(t)\} $ as a function of the time-delay $ x $ (left), and its enlargement (right). $ C(x) $ is shown together with the horizontal lines $ \pm0.3, \pm0.5 $ (left panel) and $ 0.3, 0.7 $ (right panel). Each color represents $ C(x) $ computed from a trajectory of time length 5000 with a different initial condition; the differences are mainly due to the intermittent property of the dynamics. In the left panel the envelope $ C_e(x) = \exp(-x/60) $ falls below 0.5 at $ x \approx 40 $ and below 0.3 at $ x \approx 75 $. The right panel shows that $ C(x) $ first falls below $ 0.7 $ at $ x \approx 3.0 $ and first falls below $ 0.3 $ at $ x \approx 5.0 $
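The threshold-crossing reading of Figure 7 is how delay-time candidates such as the $ \Delta\tau = 3.0, \ldots, 5.0 $ of Table 3 arise. A sketch of the computation, assuming a mean-removed, biased estimator of the auto-correlation (the paper's exact estimator is not reproduced here):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized auto-correlation C at integer lags 0..max_lag
    (mean removed; biased estimator, so C(0) = 1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.dot(x, x)
    return np.array(
        [np.dot(x[: len(x) - k], x[k:]) / c0 for k in range(max_lag + 1)]
    )

# With sampling step 0.5, the lag x = 5.0 of Fig. 7 corresponds to k = 10;
# a delay-time is read off where C first falls below a threshold (0.3-0.7).
```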
Table 1.  The list of variables and matrices in the reservoir computing

$ \mathbf{u}\; (\in \mathbf{R}^M) $ : input variable
$ \mathbf{r}\; (\in \mathbf{R}^N) $ : reservoir state vector
$ \mathbf{s}\; (\in \mathbf{R}^M) $ : actual output variable obtained from the Navier–Stokes equations
$ \hat{ \mathbf{s}}\; (\in \mathbf{R}^M) $ : inferred output variable obtained from reservoir computing
$ \mathbf{A}\; (\in \mathbf{R}^{N \times N}) $ : weighted adjacency matrix
$ \mathbf{W}_{\text{in}}\; (\in \mathbf{R}^{M \times N}) $ : linear input weight
$ \mathbf{W}_{\text{out}}\; (\in \mathbf{R}^{N \times M}) $ : matrix used for translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $
$ \mathbf{c}\; (\in \mathbf{R}^{M}) $ : vector used for translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $
$ \tilde{x} $ : normalized variable of $ {x} $
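With the variables of Table 1 (a row-vector convention, matching the listed shapes) and the parameters $ \rho, \sigma, \alpha, \beta, D $ of Table 2, a standard echo-state-network construction and Tikhonov-regularized training can be sketched as follows. The leaky update with nonlinearity degree $ \alpha $ and the Erdős–Rényi construction of $ \mathbf{A} $ with mean degree $ D $ are assumptions about unstated details, not the paper's verbatim procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(N, M, D, rho, sigma):
    """Random network A (mean degree D, rescaled so its maximal
    eigenvalue modulus is rho) and input weights W_in drawn uniformly
    from [-sigma, sigma]; shapes follow Table 1: W_in is M x N."""
    A = np.where(rng.random((N, N)) < D / N,
                 rng.uniform(-1.0, 1.0, (N, N)), 0.0)
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))
    W_in = rng.uniform(-sigma, sigma, (M, N))
    return A, W_in

def drive(U, A, W_in, alpha):
    """Run the reservoir over training inputs U (rows are u(t));
    a leaky update with nonlinearity degree alpha is assumed."""
    r = np.zeros(A.shape[0])
    R = np.empty((len(U), A.shape[0]))
    for i, u in enumerate(U):
        r = (1 - alpha) * r + alpha * np.tanh(r @ A + u @ W_in)
        R[i] = r
    return R

def train_readout(R, S, beta):
    """Tikhonov (ridge) regression with parameter beta for W_out and c,
    so that R @ W_out + c approximates the actual outputs S."""
    Rb = np.hstack([R, np.ones((len(R), 1))])  # append a bias column for c
    W = np.linalg.solve(Rb.T @ Rb + beta * np.eye(Rb.shape[1]), Rb.T @ S)
    return W[:-1], W[-1]                        # W_out (N x M), c (M,)
```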
Table 2.  The list of parameters and their values used in the reservoir computing in each section

parameter : description : Sec. 4 : Sec. 5
$ M $ : dimension of input and output variables : 14 : Table 3
$ \Delta \tau $ : delay-time of the delay-coordinate : 4.0 : Table 3
$ N $ : dimension of reservoir state vector : 3000 : 2000
$ D $ : parameter determining $ \mathbf{A} $ : 120 : 80
$ \Delta t $ : time step for reservoir dynamics : 0.5 : 0.5
$ T_0 $ : transient time for $ \mathbf{r} $ to converge : 3750 : 3750
$ T $ : training time : 40000 : 40000
$ L_0 \; (=T_0/\Delta t) $ : number of iterations for the transient : 7500 : 7500
$ L \; (=T/\Delta t) $ : number of iterations for the training : 80000 : 80000
$ \rho $ : maximal eigenvalue of $ \mathbf{A} $ : 0.7 : 0.7
$ \sigma $ : scale of input weights in $ \mathbf{W}_{\text{in}} $ : 0.5 : 0.5
$ \alpha $ : nonlinearity degree of reservoir dynamics : 0.6 : 0.6
$ \beta $ : regularization parameter : 0.1 : 0.1
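For the closed-loop inference of Figures 1, 5, and 6, the trained readout feeds the inferred output back as the next input. A sketch continuing the code above, under the same assumptions; re-initialization as in Fig. 5 would amount to driving the reservoir with actual data over the transient time $ T_1 $ and taking the final state as r0:

```python
def predict(u0, r0, A, W_in, W_out, c, alpha, steps):
    """Closed-loop inference: the inferred output s_hat(t) becomes
    the input u(t + Delta t); returns the inferred trajectory."""
    u, r = u0.copy(), r0.copy()
    out = []
    for _ in range(steps):
        r = (1 - alpha) * r + alpha * np.tanh(r @ A + u @ W_in)
        u = r @ W_out + c
        out.append(u.copy())
    return np.array(out)
```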
Table 3.  The number of successful trials for each choice of the delay-time $ \Delta \tau $ and the dimension $ M $ of the delay-coordinate. The matrices $ \mathbf{A} $ and $ \mathbf{W}_{\text{in}} $ are chosen randomly, and the number of successful cases is counted. See Table 2 for the parameter values. We say the inference is successful if the three conditions (i), (ii), (iii) in (12) hold, where the criteria $ (e_{60}, e_{90}) $ are set as (a) $ (0.14, 0.30) $ and (b) $ (0.13, 0.17) $. For each set of values $ (\Delta \tau, M) $ we tried 8160 cases of $ \mathbf{A} $ and $ \mathbf{W}_\text{in} $. For each value of $ \Delta\tau $, the best choice of $ M $ is identified by the bold number(s) (blue), and the best among each criterion by the underlined bold number(s) (red)

(a) $ \left( {{e_{60}}, {e_{90}}} \right) = (0.14, 0.30) $
$ \Delta \tau $ $ \backslash $ $ M $ : 10 11 12 13 14 15 16 17 18 19 20
3.0 : 0 0 0 0 0 1 19 24 43 37 27
3.5 : 0 0 0 11 20 28 57 48 21 11 7
4.0 : 0 3 18 43 107 59 21 14 2 4 5
4.5 : 3 14 43 54 21 15 8 1 1 1 0
5.0 : 10 24 26 19 9 1 1 1 0 0 0

(b) $ \left( {{e_{60}}, {e_{90}}} \right) = (0.13, 0.17) $
$ \Delta \tau $ $ \backslash $ $ M $ : 10 11 12 13 14 15 16 17 18 19 20
3.0 : 0 0 0 0 0 0 3 6 10 8 4
3.5 : 0 0 0 2 3 5 6 4 1 3 1
4.0 : 0 0 2 8 14 10 1 4 1 0 1
4.5 : 1 1 8 14 1 0 1 0 0 0 0
5.0 : 2 4 6 6 3 0 1 0 0 0 0