March 2021, 14(3): 1079-1092. doi: 10.3934/dcdss.2020352

Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable

Kengo Nakai and Yoshitaka Saiki

1. Faculty of Marine Technology, Tokyo University of Marine Science and Technology, 2-1-6 Ecchujima, Koto-ku, Tokyo 135-8533, Japan
2. Graduate School of Business Administration, Hitotsubashi University, 2-1 Naka, Kunitachi, Tokyo 186-8601, Japan
3. JST PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan
4. Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742, USA

Received January 2019; Revised November 2019; Published May 2020

We construct a data-driven dynamical-system model for a macroscopic variable, the Reynolds number of a high-dimensionally chaotic fluid flow, by training on its scalar time-series data. We use a machine-learning approach, reservoir computing, to construct the model; the procedure requires no knowledge of the physical processes of fluid dynamics. We confirm that a time-series inferred from the model approximates the actual one and that some characteristics of the chaotic invariant set mimic the actual ones. We investigate the appropriate choice of the delay-coordinate, especially the delay-time and the dimension, which enables us to construct a model having a relatively high-dimensional attractor at low computational cost.
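For concreteness, the model's input/output vector is the delay-coordinate of the scalar observable, $ \mathbf{s}(t) = (x(t), x(t-\Delta\tau), \ldots, x(t-(M-1)\Delta\tau)) $. The following Python sketch (our own illustration, not the authors' code; the function name and array layout are assumptions) forms this vector from a series sampled every $ \Delta t $:

```python
# A minimal sketch of delay-coordinate embedding of a scalar time series.
import numpy as np

def delay_coordinates(x, M, d):
    """Rows are s_1, ..., s_M with s_m(t) = x(t - (m-1)*dtau).

    x : 1-D array, scalar observable sampled every dt
    M : embedding dimension (14 in Sec. 4)
    d : delay in sampling steps, dtau/dt (e.g. 4.0/0.5 = 8)
    """
    n = len(x) - (M - 1) * d          # number of usable time points
    # row m (0-based) starts at index (M-1-m)*d, i.e. x delayed by m*dtau
    return np.stack([x[(M - 1 - m) * d : (M - 1 - m) * d + n]
                     for m in range(M)])
```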

Citation: Kengo Nakai, Yoshitaka Saiki. Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1079-1092. doi: 10.3934/dcdss.2020352
References:

[1] P. Antonik, M. Gulina, J. Pauwels and S. Massar, Using a reservoir computer to learn chaotic attractors, with applications to chaos synchronization and cryptography, Phys. Rev. E, 98 (2018). doi: 10.1103/PhysRevE.98.012215.
[2] P. C. Di Leoni, A. Mazzino and L. Biferale, Inferring flow parameters and turbulent configuration with physics-informed data assimilation and spectral nudging, Phys. Rev. Fluids, 3 (2018). doi: 10.1103/PhysRevFluids.3.104604.
[3] D. Ibáñez-Soria, J. Garcia-Ojalvo, A. Soria-Frisch and G. Ruffini, Detection of generalized synchronization using echo state networks, Chaos, 28 (2018), 7pp. doi: 10.1063/1.5010285.
[4] M. Inubushi and K. Yoshimura, Reservoir computing beyond memory-nonlinearity trade-off, Scientific Reports, 7 (2017). doi: 10.1038/s41598-017-10257-6.
[5] T. Ishihara and Y. Kaneda, High resolution DNS of incompressible homogeneous forced turbulence: Time dependence of the statistics, in Statistical Theories and Computational Approaches to Turbulence, Springer, Tokyo, 2003, 177–188. doi: 10.1007/978-4-431-67002-5_11.
[6] K. Ishioka, ispack-0.4.1, 1999. Available from: http://www.gfd-dennou.org/arch/ispack/.
[7] H. Jaeger, The "echo state" approach to analysing and training recurrent neural networks, GMD Report, 148 (2001).
[8] H. Jaeger and H. Haas, Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication, Science, 304 (2004), 78-80. doi: 10.1126/science.1091277.
[9] Z. Lu, B. R. Hunt and E. Ott, Attractor reconstruction by machine learning, Chaos, 28 (2018), 9pp. doi: 10.1063/1.5039508.
[10] Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett and E. Ott, Reservoir observers: Model-free inference of unmeasured variables in chaotic systems, Chaos, 27 (2017). doi: 10.1063/1.4979665.
[11] M. Lukoševičius and H. Jaeger, Reservoir computing approaches to recurrent neural network training, Comput. Sci. Rev., 3 (2009), 127-149. doi: 10.1016/j.cosrev.2009.03.005.
[12] W. Maass, T. Natschläger and H. Markram, Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Comput., 14 (2002), 2531-2560. doi: 10.1162/089976602760407955.
[13] K. Nakai and Y. Saiki, Machine-learning inference of fluid variables from data using reservoir computing, Phys. Rev. E, 98 (2018). doi: 10.1103/PhysRevE.98.023111.
[14] J. Pathak, B. Hunt, M. Girvan, Z. Lu and E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Phys. Rev. Lett., 120 (2018). doi: 10.1103/PhysRevLett.120.024102.
[15] J. Pathak, Z. Lu, B. Hunt, M. Girvan and E. Ott, Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data, Chaos, 27 (2017), 9pp. doi: 10.1063/1.5010300.
[16] T. Sauer, J. A. Yorke and M. Casdagli, Embedology, J. Statist. Phys., 65 (1991), 579-616. doi: 10.1007/BF01053745.
[17] F. Takens, Detecting strange attractors in turbulence, in Dynamical Systems and Turbulence, Lecture Notes in Math., 898, Springer, Berlin-New York, 1981, 366–381. doi: 10.1007/BFb0091924.
[18] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Scripta Series in Mathematics, V. H. Winston & Sons, Washington, D.C.; John Wiley & Sons, New York-Toronto-London, 1977.
[19] D. Verstraeten, B. Schrauwen, M. D'Haene and D. Stroobandt, An experimental unification of reservoir computing methods, Neural Networks, 20 (2007), 391-403. doi: 10.1016/j.neunet.2007.04.003.


Figure 1.  Inference of a time-series of the Reynolds number of a fluid flow. The time-series of $ s_1 = \tilde{R}_{\lambda} $ inferred from the reservoir model is compared with reference data obtained by a direct numerical simulation of the Navier–Stokes equation (top left). The variable $ t^\prime \; ( = t-T>0) $ denotes the time after the training phase ends at $ t = T $. The inference errors $ \varepsilon_1, \varepsilon_2 $, defined by $ \varepsilon_1(t) = | \mathbf{s}(t)-\hat{ \mathbf{s}}(t)| $ and $ \varepsilon_2(t) = |s_1(t)-\hat{s}_1(t)| = |\tilde{R}_{\lambda}(t)-\hat{\tilde{R}}_{\lambda}(t)| $, grow exponentially due to the chaotic property (top right). In the bottom panel, switching between a laminar state with small-amplitude fluctuations and a bursting state with large-amplitude fluctuations appears in an inferred time-series of $ s_1 = \tilde{R}_{\lambda} $, as is also observed in the actual time-series
Figure 2.  Reproduction of the delay property that a successfully inferred time-series $ \hat{\mathbf s} $ should satisfy. We observe that for all values of $ m = 2, \cdots, 14 $ and for most $ t^{\prime} $, $ \hat{s}_1(t^\prime)\approx\hat{s}_{m}(t^\prime+(m-1)\Delta\tau) $, although only the time-series of $ \hat{s}_1(t^\prime) $ and $ \hat{s}_{14}(t^\prime+13\Delta\tau) $ $ (7000\le t^{\prime}\le 8000) $ are shown
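With the rows ordered as in the embedding sketch after the abstract (row $ m-1 $ holding $ \hat{s}_m $), this delay property can be checked numerically. The sketch below is our own illustration, not the authors' code:

```python
# A minimal sketch checking the delay property of Fig. 2: row 0 of the
# inferred trajectory, shifted by (m-1)*dtau, should approximate row m-1.
import numpy as np

def delay_consistency(s_hat, d):
    """s_hat: inferred trajectory, shape (M, n); d = dtau/dt in steps."""
    M, n = s_hat.shape
    errs = []
    for m in range(1, M):          # compare s_1(t') with s_{m+1}(t' + m*dtau)
        shift = m * d
        errs.append(np.max(np.abs(s_hat[0, : n - shift] - s_hat[m, shift:])))
    return np.array(errs)          # small values indicate the property holds
```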
Figure 3.  Poincaré points on the plane $ (s_2, s_3) $ along the trajectory $ \hat{ \mathbf{s}} $ obtained from the reservoir model (red) and $ \mathbf{s} $ from the Navier–Stokes equation (blue). The time length of each trajectory is $ 90000 $. The Poincaré section is defined by $ s_{1} = 0, \; ds_{1}/dt>0 $. The two sections are similar to each other, although the trajectory generated from the reservoir model does not cover some regions of bursting states
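A minimal sketch of extracting such Poincaré points from a stored trajectory, assuming the same row-wise array layout as in the earlier sketches; upward crossings of $ s_1 = 0 $ are located by a sign change and refined by linear interpolation:

```python
# A hedged illustration (not the authors' code) of Poincare-section sampling.
import numpy as np

def poincare_points(s):
    """s: trajectory of shape (M, n); rows are s_1, s_2, s_3, ..."""
    s1 = s[0]
    idx = np.where((s1[:-1] < 0.0) & (s1[1:] >= 0.0))[0]  # s_1=0, ds_1/dt>0
    w = -s1[idx] / (s1[idx + 1] - s1[idx])                # interpolation weight
    s2 = s[1, idx] + w * (s[1, idx + 1] - s[1, idx])
    s3 = s[2, idx] + w * (s[2, idx + 1] - s[2, idx])
    return np.column_stack([s2, s3])                      # points on (s_2, s_3)
```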
Figure 4.  Density distributions generated from trajectories of the variable $ s_1 $ obtained from the constructed reservoir model (reservoir output) and from the direct numerical simulation of the Navier–Stokes equation (actual). Each trajectory, of time-length 50000, starts from a different initial condition. The distributions are similar to each other in the sense that each peak is attained at $ s_1\approx0.2 $ and each distribution has a relatively long tail
Figure 5.  Inference of a time-series of the Reynolds number for $ t^{\prime}>T_{\text{out}} $ ($ T_{\text{out}} = 1000 $) using the reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Fig. 1). We use the same $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ as those used for the model inferring the trajectory in Fig. 1, but we use the time-series $ s_1(t^\prime) $ for $ T_{\text{out}}- T_1<t^\prime<T_{\text{out}} $ as an initial condition, where $ T_1 $ is the transient time for the reservoir state vector $ \mathbf{r}(t) $ to converge. In the top panel, switching between laminar and bursting states is observed in the inferred trajectory. The bottom panel, an enlargement of the top panel, shows that the model has predictive skill for $ 1000<t^\prime<1080 $
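A hedged sketch of this procedure, assuming a standard leaky-tanh reservoir update (the paper's exact equations are not reproduced on this page): the reservoir is first driven by the observed time-series over the transient $ T_1 $ so that $ \mathbf{r}(t) $ converges, and the trained readout then closes the loop. Shapes follow the column-vector convention, i.e. the transposes of those listed in Table 1.

```python
# A minimal sketch, not the authors' implementation; the update rule
# r <- (1-alpha) r + alpha tanh(A r + W_in u) is an assumption.
import numpy as np

def warmup_and_predict(A, W_in, W_out, c, U_warm, n_steps, alpha):
    """U_warm: observed inputs (M, L1) for warm-up; returns (n_steps, M)."""
    r = np.zeros(A.shape[0])
    for k in range(U_warm.shape[1]):            # warm-up: drive with data
        r = (1 - alpha) * r + alpha * np.tanh(A @ r + W_in @ U_warm[:, k])
    s_hat = W_out @ r + c
    out = []
    for _ in range(n_steps):                    # free run: feed back the output
        r = (1 - alpha) * r + alpha * np.tanh(A @ r + W_in @ s_hat)
        s_hat = W_out @ r + c
        out.append(s_hat)
    return np.array(out)
```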
Figure 6.  Inference of time-series of the Reynolds number in many time-intervals $ T_{\text{out}}<t^{\prime}<T_{\text{out}}+250 $ ($ T_{\text{out}} = 500, 1000, \cdots, 6000 $) using the same reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Figs. 1 and 5). As in Fig. 5, we change only the initial condition for each case, while the model is fixed after the appropriate choice of $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ is determined from the training data for $ t^{\prime}<0 $
Figure 7.  Auto-correlation function $ C(x) $ of a trajectory $ \{R_{\lambda}(t)\} $ as a function of the time-delay $ x $ (left), and its enlargement (right). $ C(x) $ is shown together with the horizontal lines $ \pm0.3, \pm0.5 $ (left panel) and $ 0.3, 0.7 $ (right panel). Each color represents $ C(x) $ computed from a trajectory of time-length 5000 starting from a different initial condition. The differences are mainly due to the intermittent property of the dynamics. The left panel shows that the envelope $ C_e(x) = \exp(-x/60) $ goes below 0.5 when $ x \approx 40 $ and below 0.3 when $ x \approx 75 $. The right panel shows that $ C(x) $ first goes below $ 0.7 $ when $ x \approx 3.0 $ and first goes below $ 0.3 $ when $ x \approx 5.0 $
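A minimal sketch of this diagnostic (our own illustration): estimate $ C(x) $ from a scalar trajectory and report the first lag at which it drops below a given threshold, which guides the choice of the delay-time $ \Delta\tau $.

```python
# A hedged sketch of the auto-correlation diagnostic of Fig. 7.
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample auto-correlation C(k) for lags k = 0, ..., max_lag."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

def first_crossing(C, threshold, dt):
    """First time-delay at which C drops below threshold.

    Assumes C does cross the threshold within the computed lags;
    dt is the sampling step of the stored trajectory (an assumption here).
    """
    k = np.argmax(C < threshold)   # first index where C(x) < threshold
    return k * dt

# e.g. C = autocorrelation(R_lambda, 400); the text reports
# first crossings of 0.7 near x = 3.0 and of 0.3 near x = 5.0.
```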
Table 1.  The list of variables and matrices in the reservoir computing

| variable | description |
| --- | --- |
| $ \mathbf{u}\; (\in \mathbf{R}^M) $ | input variable |
| $ \mathbf{r}\; (\in \mathbf{R}^N) $ | reservoir state vector |
| $ \mathbf{s}\; (\in \mathbf{R}^M) $ | actual output variable obtained from the Navier–Stokes equation |
| $ \hat{ \mathbf{s}}\; (\in \mathbf{R}^M) $ | inferred output variable obtained from reservoir computing |
| $ \mathbf{A}\; (\in \mathbf{R}^{N \times N}) $ | weighted adjacency matrix |
| $ \mathbf{W}_{\text{in}}\; (\in \mathbf{R}^{M \times N}) $ | linear input weight |
| $ \mathbf{W}_{\text{out}}\; (\in \mathbf{R}^{N \times M}) $ | matrix used for translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $ |
| $ \mathbf{c}\; (\in \mathbf{R}^{M}) $ | vector used for translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $ |
| $ \tilde{x} $ | normalized variable of $ x $ |
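Schematically, the reservoir state $ \mathbf{r} $ is driven by the input $ \mathbf{u} $, and $ (\mathbf{W}_{\text{out}}, \mathbf{c}) $ is then fitted by Tikhonov-regularized least squares [18] so that $ \mathbf{W}_{\text{out}}\mathbf{r}+\mathbf{c} $ approximates the target $ \mathbf{s} $. The sketch below illustrates one standard realization of this pipeline; the leaky-tanh update with nonlinearity degree $ \alpha $ is an assumption, and the column-vector convention means the matrix shapes are the transposes of those listed in Table 1.

```python
# A minimal sketch, not the authors' implementation.
import numpy as np

def drive_reservoir(A, W_in, U, alpha):
    """Drive the reservoir with inputs U (shape (M, L)); return states (N, L)."""
    N, L = A.shape[0], U.shape[1]
    R = np.zeros((N, L))
    r = np.zeros(N)
    for k in range(L):
        r = (1.0 - alpha) * r + alpha * np.tanh(A @ r + W_in @ U[:, k])
        R[:, k] = r
    return R

def fit_readout(R, S, beta):
    """Tikhonov-regularized fit of (W_out, c) so that W_out r + c ~ s [18]."""
    N, L = R.shape
    X = np.vstack([R, np.ones((1, L))])   # constant row gives the offset c
    G = X @ X.T + beta * np.eye(N + 1)    # beta: regularization parameter
    W = np.linalg.solve(G, X @ S.T).T     # shape (M, N+1); c is regularized
    return W[:, :N], W[:, N]              # W_out and c
```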
Table 2.  The list of parameters and their values used in the reservoir computing in each section

| parameter | description | Sec. 4 | Sec. 5 |
| --- | --- | --- | --- |
| $ M $ | dimension of input and output variables | 14 | Table 3 |
| $ \Delta \tau $ | delay-time of the delay-coordinate | 4.0 | Table 3 |
| $ N $ | dimension of reservoir state vector | 3000 | 2000 |
| $ D $ | parameter determining $ \mathbf{A} $ | 120 | 80 |
| $ \Delta t $ | time step for reservoir dynamics | 0.5 | 0.5 |
| $ T_0 $ | transient time for $ \mathbf{r} $ to converge | 3750 | 3750 |
| $ T $ | training time | 40000 | 40000 |
| $ L_0 \; (=T_0/\Delta t) $ | number of iterations for the transient | 7500 | 7500 |
| $ L \; (=T/\Delta t) $ | number of iterations for the training | 80000 | 80000 |
| $ \rho $ | maximal eigenvalue of $ \mathbf{A} $ | 0.7 | 0.7 |
| $ \sigma $ | scale of input weights in $ \mathbf{W}_{\text{in}} $ | 0.5 | 0.5 |
| $ \alpha $ | nonlinearity degree of reservoir dynamics | 0.6 | 0.6 |
| $ \beta $ | regularization parameter | 0.1 | 0.1 |
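As a hedged illustration of how the parameters $ N, D, \rho, \sigma $ enter, the sketch below generates the two random matrices; the exact random ensembles (uniform entries, Erdős–Rényi sparsity) are assumptions, not taken from the paper, and the shapes again follow the column-vector convention.

```python
# A minimal sketch: A is a sparse random N x N matrix with average degree D,
# rescaled so its maximal eigenvalue modulus is rho; the entries of W_in are
# uniform in [-sigma, sigma].
import numpy as np

def make_reservoir_matrices(N, M, D, rho, sigma, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random((N, N)) < D / N                  # about D nonzeros per row
    A = np.where(mask, rng.uniform(-1.0, 1.0, (N, N)), 0.0)
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))    # set spectral radius rho
    W_in = rng.uniform(-sigma, sigma, (N, M))          # transpose of Table 1 shape
    return A, W_in

# e.g. with the Sec. 4 values:
# A, W_in = make_reservoir_matrices(3000, 14, 120, 0.7, 0.5)
```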
Table 3.  The number of successful trials for each choice of the delay-time $ \Delta \tau $ and the dimension $ M $ of the delay-coordinate. The matrices $ \mathbf{A} $ and $ \mathbf{W}_{\text{in}} $ are chosen randomly, and the number of successful cases is counted. See Table 2 for the parameter values. We say the inference is successful if the three conditions (i), (ii), (iii) in (12) hold, where the criteria $ (e_{60}, e_{90}) $ are set to (a) $ (0.14, 0.30) $ and (b) $ (0.13, 0.17) $. For each set of values $ (\Delta \tau, M) $ we tried 8160 cases of $ \mathbf{A} $ and $ \mathbf{W}_\text{in} $. In the original typeset table, for each value of $ \Delta\tau $ the best choice of $ M $ is marked by bold (blue) numbers, and the best under each criterion by underlined bold (red) numbers
(a) $ (e_{60}, e_{90}) = (0.14, 0.30) $

| $ \Delta \tau \backslash M $ | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3.0 | 0 | 0 | 0 | 0 | 0 | 1 | 19 | 24 | 43 | 37 | 27 |
| 3.5 | 0 | 0 | 0 | 11 | 20 | 28 | 57 | 48 | 21 | 11 | 7 |
| 4.0 | 0 | 3 | 18 | 43 | 107 | 59 | 21 | 14 | 2 | 4 | 5 |
| 4.5 | 3 | 14 | 43 | 54 | 21 | 15 | 8 | 1 | 1 | 1 | 0 |
| 5.0 | 10 | 24 | 26 | 19 | 9 | 1 | 1 | 1 | 0 | 0 | 0 |

(b) $ (e_{60}, e_{90}) = (0.13, 0.17) $

| $ \Delta \tau \backslash M $ | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3.0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 6 | 10 | 8 | 4 |
| 3.5 | 0 | 0 | 0 | 2 | 3 | 5 | 6 | 4 | 1 | 3 | 1 |
| 4.0 | 0 | 0 | 2 | 8 | 14 | 10 | 1 | 4 | 1 | 0 | 1 |
| 4.5 | 1 | 1 | 8 | 14 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
| 5.0 | 2 | 4 | 6 | 6 | 3 | 0 | 1 | 0 | 0 | 0 | 0 |
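The counting experiment behind Table 3 can be summarized as the loop below. The predicate standing in for conditions (i)-(iii) of (12) is left abstract, since those conditions are stated in the paper's Sec. 5; the function names here are our own, hypothetical placeholders.

```python
# A hedged sketch of the trial-counting protocol of Table 3.
def count_successes(delta_tau, M, n_trials, e60, e90, run_trial, is_successful):
    """run_trial(delta_tau, M, seed) -> (s_hat, s): one inference experiment
    with freshly drawn random A and W_in (8160 trials per (delta_tau, M))."""
    count = 0
    for seed in range(n_trials):
        s_hat, s = run_trial(delta_tau, M, seed)
        if is_successful(s_hat, s, e60, e90):  # conditions (i)-(iii) of (12)
            count += 1
    return count
```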