doi: 10.3934/dcdss.2021138
Online First


An out-of-distribution-aware autoencoder model for reduced chemical kinetics

Pei Zhang, Siyan Liu, Dan Lu, Ramanan Sankaran and Guannan Zhang

1. Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA

2. Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA

* Corresponding author: Guannan Zhang

Notice: This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan)

Received: August 2021. Revised: September 2021. Early access: November 2021.

While detailed chemical kinetic models have been successful in representing rates of chemical reactions in continuum-scale computational fluid dynamics (CFD) simulations, applying these models under engineering device conditions is computationally prohibitive. To reduce the cost, data-driven methods, e.g., autoencoders, have been used to construct reduced chemical kinetic models for CFD simulations. Despite their success, data-driven methods rely heavily on the training data set and can be unreliable when used in out-of-distribution (OOD) regions (i.e., when extrapolating outside the training set). In this paper, we present an enhanced autoencoder model for combustion chemical kinetics with uncertainty quantification that detects when the model is being used in OOD regions, thereby creating an OOD-aware autoencoder model that contributes to more robust CFD simulations of reacting flows. We first demonstrate the effectiveness of the method for OOD detection on two well-known datasets, MNIST and Fashion-MNIST, in comparison with the deep ensemble method, and then present the OOD-aware autoencoder for a reduced chemistry model of syngas combustion.

Citation: Pei Zhang, Siyan Liu, Dan Lu, Ramanan Sankaran, Guannan Zhang. An out-of-distribution-aware autoencoder model for reduced chemical kinetics. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021138
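
As a rough illustration of the approach summarized in the abstract, the sketch below (Python/PyTorch; the layer sizes, the ensemble-spread uncertainty, and the threshold rule are assumptions for illustration and stand in for the paper's PI3NN formulation) pairs a bottleneck autoencoder with a per-sample uncertainty score used to flag OOD inputs.

    # Sketch of an OOD-aware autoencoder: an ensemble of autoencoders whose
    # reconstruction spread serves as the per-sample uncertainty (a stand-in
    # for the PI3NN prediction-interval width used in the paper).
    import torch
    import torch.nn as nn

    class AE(nn.Module):
        def __init__(self, n_x=12, n_z=2, width=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_x, width), nn.ReLU(),
                                         nn.Linear(width, n_z))
            self.decoder = nn.Sequential(nn.Linear(n_z, width), nn.ReLU(),
                                         nn.Linear(width, n_x))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def flag_ood(models, x, threshold):
        # Samples whose ensemble spread exceeds a threshold calibrated on
        # in-distribution validation data are flagged as OOD.
        with torch.no_grad():
            recons = torch.stack([m(x) for m in models])   # (M, N, n_x)
        spread = recons.std(dim=0).mean(dim=1)             # per-sample uncertainty, (N,)
        return spread > threshold

Here, models would be a small ensemble of such autoencoders trained independently on the in-distribution training set, and threshold would be calibrated on held-out in-distribution data.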

Figure 1.  Elemental mass conservation errors. An elemental error of 20% is expected to cause large errors in temperature prediction and thus false extinction/ignition events in engine simulations. With the OOD-aware model, non-physical solutions that violate conservation laws can be detected before they lead to poor engine designs
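
For concreteness, the sketch below shows how an elemental mass conservation error of the kind plotted in Figure 1 can be computed from predicted species mass fractions; the species set, ordering, and composition matrix are illustrative assumptions, not the paper's exact bookkeeping.

    # Elemental mass fractions recovered from species mass fractions Y via the
    # element composition matrix, compared against the reference (unreduced) state.
    import numpy as np

    # Rows: elements C, H, O, N; columns: example syngas species CO, H2, O2, H2O, CO2, N2.
    W_elem = np.array([12.011, 1.008, 15.999, 14.007])            # element molar masses
    W_spec = np.array([28.010, 2.016, 31.998, 18.015, 44.009, 28.014])
    n_atoms = np.array([[1, 0, 0, 0, 1, 0],                       # C atoms per molecule
                        [0, 2, 0, 2, 0, 0],                       # H
                        [1, 0, 2, 1, 2, 0],                       # O
                        [0, 0, 0, 0, 0, 2]])                      # N

    def elemental_mass_fractions(Y):
        # Z_e = sum_k n_{e,k} * (W_e / W_k) * Y_k for each element e.
        return (n_atoms * W_elem[:, None] / W_spec[None, :]) @ Y

    def conservation_error(Y_pred, Y_true):
        Z_pred = elemental_mass_fractions(Y_pred)
        Z_true = elemental_mass_fractions(Y_true)
        return np.abs(Z_pred - Z_true) / np.maximum(Z_true, 1e-12)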
Figure 2.  Estimated 95% prediction interval (PI) for the toy regression task $ y = x^3 + \varepsilon $ with asymmetric noise $ \varepsilon $. The 95% PI produced by PI3NN captures about 95% of the training data with tight bounds within the training region, while DE produces an unnecessarily wide lower bound. Both methods produce reasonably wide PIs in the OOD region
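
The toy problem in Figure 2 can be set up along the following lines; the particular asymmetric noise distribution is an assumption, and the coverage check is the standard way to judge whether a 95% PI is well calibrated.

    # Toy regression data y = x^3 + eps with skewed noise, plus a PI coverage check.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-4.0, 4.0, size=2000)
    eps = rng.exponential(scale=3.0, size=x.size) - 1.0   # asymmetric noise (assumed form)
    y = x**3 + eps

    def pi_coverage(y, lower, upper):
        # A well-calibrated 95% PI should cover about 95% of targets with tight width.
        inside = (y >= lower) & (y <= upper)
        return inside.mean(), (upper - lower).mean()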
Figure 3.  Architecture diagrams of the AE with DE and with PI3NN. Neurons are represented by circles whose edge colors indicate the layer type, i.e., red for input and output layers, green for the bottleneck layer, and blue for other hidden layers. The red circles filled with gray are outputs for variance/standard deviation variables. The diagrams shown here are simplified for illustration only; the actual numbers of layers and neurons per layer vary across experiments
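
A minimal sketch of a decoder head with mean and standard-deviation outputs, as indicated by the gray-filled circles in Figure 3, is given below; the layer sizes are assumptions, and the Gaussian negative log-likelihood shown is the usual training loss for such heads rather than necessarily the authors' exact setup.

    # One DE-style decoder member that predicts a mean and a standard deviation
    # per state variable, trained with a heteroscedastic Gaussian loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MeanStdDecoder(nn.Module):
        def __init__(self, n_z=2, n_x=12, width=32):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_z, width), nn.ReLU())
            self.mean_head = nn.Linear(width, n_x)
            self.std_head = nn.Linear(width, n_x)

        def forward(self, z):
            h = self.body(z)
            mean = self.mean_head(h)
            std = F.softplus(self.std_head(h)) + 1e-6   # keep the standard deviation positive
            return mean, std

    def gaussian_nll(mean, std, target):
        # Negative log-likelihood of the target under the predicted Gaussian.
        return (torch.log(std) + 0.5 * ((target - mean) / std) ** 2).mean()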
Figure 4.  Comparison of OOD detection accuracy on the MNIST (ID) and Fashion-MNIST (OOD) test sets between the DE and PI3NN methods. Both methods capture the difference between ID and OOD, producing larger uncertainties for OOD samples. The confusion matrices show that the majority of test samples are correctly detected as OOD (true positive) or ID (true negative)
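
The confusion matrices in Figure 4 follow from thresholding per-sample uncertainties with OOD as the positive class; a sketch with illustrative variable names and threshold rule is shown below.

    # ID/OOD confusion matrix from uncertainty scores (e.g., PI widths).
    import numpy as np

    def ood_confusion_matrix(u_id, u_ood, threshold):
        tp = np.sum(u_ood > threshold)    # OOD correctly flagged
        fn = np.sum(u_ood <= threshold)   # OOD missed
        fp = np.sum(u_id > threshold)     # ID wrongly flagged
        tn = np.sum(u_id <= threshold)    # ID correctly accepted
        return np.array([[tp, fn], [fp, tn]])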
Figure 5.  Receiver operating characteristic curves of DE and PI3NN in OOD detection. All methods capture the OOD samples well in the image experiment. Although DE with an ensemble size of 2 shows better accuracy than PI3NN, it requires two runs, whereas PI3NN needs only a single run. DE with a single run shows lower OOD detection accuracy than PI3NN
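
The receiver operating characteristic curves in Figures 5 and 8 can be traced by sweeping that threshold; the sketch below uses scikit-learn's standard ROC utilities, which is an assumption about tooling rather than the authors' code.

    # ROC curve and AUC for OOD detection from uncertainty scores.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    def ood_roc(u_id, u_ood):
        labels = np.concatenate([np.zeros(len(u_id)), np.ones(len(u_ood))])  # OOD = positive
        scores = np.concatenate([u_id, u_ood])
        fpr, tpr, thresholds = roc_curve(labels, scores)
        return fpr, tpr, roc_auc_score(labels, scores)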
Figure 6.  Normalized joint histogram of values predicted by the AE (with $ n_z = 2 $) versus true values for the 12 thermo-chemical state variables, i.e., temperature and the mass fractions of 11 species, in the PSR test set. The dark red area along the diagonal shows that the AE can reduce the state dimension of syngas CO/H$ _2 $ combustion from $ n_x = 12 $ to $ n_z = 2 $ without much loss of accuracy
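
The normalized joint histogram in Figure 6 is a parity check between true and reconstructed state variables; a sketch for a single variable follows, with the bin count and the scaling to the unit interval taken as assumptions.

    # Normalized 2-D histogram of predicted vs. true values for one state variable.
    import numpy as np

    def parity_histogram(y_true, y_pred, bins=100):
        # y_true, y_pred: shape (N,), both scaled to [0, 1]; mass concentrated on
        # the diagonal indicates an accurate reconstruction.
        H, xedges, yedges = np.histogram2d(y_true, y_pred, bins=bins, range=[[0, 1], [0, 1]])
        return H / H.max()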
Figure 7.  Predictive uncertainty (the width of the PI) vs. predictive error for the ID (red) and OOD (blue) test sets from the PI3NN and DE methods. Scatter points are samples randomly selected from the two test sets. The filled contours show the normalized joint histogram of samples in the ID/OOD test sets, with light red/blue representing fewer samples and dark red/blue representing more samples. DE with a single run (panels (a)-(c)) fails to capture the difference in uncertainty between ID and OOD samples. DE with 10 runs (panels (d)-(f)) shows improved but still limited separation of OOD samples from ID samples and fails to produce an uncertainty-error correlation. PI3NN (panels (g)-(i)) shows a strong correlation between the uncertainty and the error and clearly demonstrates that OOD and ID samples have different uncertainty magnitudes
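
The uncertainty-error relationship in Figure 7 can be quantified, for instance, by the correlation between the per-sample PI width (or ensemble spread for DE) and the reconstruction error; a sketch with illustrative names is given below.

    # Correlation between predictive uncertainty and per-sample reconstruction error.
    import numpy as np

    def uncertainty_error_correlation(width, y_true, y_pred):
        error = np.abs(y_true - y_pred).mean(axis=1)   # per-sample mean absolute error
        return np.corrcoef(width, error)[0, 1]         # strong correlation = informative UQ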
Figure 8.  Receiver operating characteristic curves of PI3NN and DE ($ M = 1, 10, 20 $) methods in OOD detection. PI3NN shows the best OOD detection accuracy. DE shows improved detection accuracy with increasing ensemble size ($ M $)