doi: 10.3934/fods.2021013

Normalization effects on shallow neural networks and related asymptotic expansions

Jiahui Yu and Konstantinos Spiliopoulos

Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston, MA 02215

* Corresponding author: Jiahui Yu

Received: December 2020. Revised: April 2021. Published: June 2021.

Fund Project: K.S. was partially supported by the National Science Foundation (DMS 1550918) and Simons Foundation Award 672441.

We consider shallow (single hidden layer) neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units $ N $ and the number of gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the $ 1/\sqrt{N} $ and the mean-field $ 1/N $ normalizations. We develop an asymptotic expansion for the neural network's statistical output, pointwise with respect to the scaling parameter, as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that, to leading order in $ N $, there is no bias-variance trade-off: both bias and variance (both explicitly characterized) decrease as the number of hidden units increases and time grows. In addition, we show that, to leading order in $ N $, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization. Numerical studies on the MNIST and CIFAR10 datasets show that test and train accuracy improve monotonically as the neural network's normalization approaches the mean-field normalization.
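To make the scaling concrete, the sketch below (not the paper's exact training setup) implements a single-hidden-layer network whose output is normalized by $ 1/N^{\gamma} $ for a scaling parameter $ \gamma \in [1/2, 1] $, so that $ \gamma = 1/2 $ and $ \gamma = 1 $ recover the $ 1/\sqrt{N} $ and mean-field $ 1/N $ normalizations, and trains it with plain stochastic gradient descent on a toy one-dimensional regression problem. The target function and the learning-rate scaling with $ N $ and $ \gamma $ are illustrative assumptions, not necessarily the schedule analyzed in the paper.

```python
# Minimal illustrative sketch (not the paper's exact setup): a shallow network
# g^N(x) = N^{-gamma} * sum_i c_i * tanh(w_i * x),  gamma in [1/2, 1],
# trained by plain SGD on a toy 1-d regression problem.  The target function
# and the learning-rate scaling lr ~ N^(2*gamma - 2) are assumptions made
# here for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N = 1000           # number of hidden units
gamma = 0.75       # scaling exponent: 0.5 ~ 1/sqrt(N) regime, 1.0 ~ mean field
lr = N ** (2 * gamma - 2)   # assumed learning-rate scaling (illustrative)

c = rng.normal(size=N)      # outer weights
w = rng.normal(size=N)      # inner weights (scalar input for simplicity)

def network(x):
    """Scaled shallow network output at a scalar input x."""
    return N ** (-gamma) * np.sum(c * np.tanh(w * x))

target = lambda x: np.sin(2.0 * x)

for step in range(20000):
    x = rng.uniform(-2.0, 2.0)                 # fresh sample each SGD step
    err = network(x) - target(x)               # residual of the squared loss
    hidden = np.tanh(w * x)
    grad_c = err * N ** (-gamma) * hidden                        # d(0.5*err^2)/dc
    grad_w = err * N ** (-gamma) * c * (1.0 - hidden ** 2) * x   # d(0.5*err^2)/dw
    c -= lr * grad_c
    w -= lr * grad_w

grid = np.linspace(-2.0, 2.0, 101)
mse = np.mean([(network(x) - target(x)) ** 2 for x in grid])
print(f"mean squared error on a grid after training: {mse:.4f}")
```

Varying gamma while keeping N fixed gives a direct way to probe the normalization effect described above.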

Citation: Jiahui Yu, Konstantinos Spiliopoulos. Normalization effects on shallow neural networks and related asymptotic expansions. Foundations of Data Science, doi: 10.3934/fods.2021013

Figure 1. Performance of scaled neural networks on MNIST test dataset (cross-entropy loss)
Figure 2. Performance of scaled neural networks on MNIST test dataset (MSE loss)
Figure 3. Performance of scaled convolutional neural networks on CIFAR10 test dataset (cross-entropy loss)
Figure 4. Performance of scaled neural networks on MNIST training dataset (MSE loss)
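For the classification experiments summarized in Figures 1-4, a $ 1/N^{\gamma} $-scaled shallow classifier can be set up as in the sketch below. This is an illustrative sketch only: the idea of training one copy per value of $ \gamma $ and comparing accuracies follows the text above, but the width, activation, and loss shown here are assumptions rather than the paper's exact experimental configuration, and the CIFAR10 experiments additionally involve convolutional layers that are omitted here.

```python
# Illustrative sketch of a 1/N^gamma-scaled single-hidden-layer classifier of
# the kind compared across gamma in the MNIST figures.  Width, activation, and
# the choice of cross-entropy below are assumptions, not the paper's exact
# experimental configuration.
import torch
import torch.nn as nn

class ScaledShallowNet(nn.Module):
    def __init__(self, in_dim=784, hidden=1000, out_dim=10, gamma=1.0):
        super().__init__()
        self.gamma = gamma
        self.hidden = hidden
        self.inner = nn.Linear(in_dim, hidden)
        self.outer = nn.Linear(hidden, out_dim, bias=False)

    def forward(self, x):
        # Output normalized by N^{-gamma}; gamma = 0.5 and gamma = 1.0
        # correspond to the 1/sqrt(N) and mean-field normalizations.
        return self.hidden ** (-self.gamma) * self.outer(torch.tanh(self.inner(x)))

# One model per normalization; each would be trained on MNIST and its test
# accuracy tracked, as in the figures above.
models = {g: ScaledShallowNet(gamma=g) for g in (0.5, 0.75, 1.0)}
loss_fn = nn.CrossEntropyLoss()
```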
