February 2020, 3(1): 41-50. doi: 10.3934/mfc.2020004

Asymptotic expansions and Voronovskaja type theorems for the multivariate neural network operators

Department of Mathematics and Computer Science, University of Perugia, 1, Via Vanvitelli, 06123 Perugia, Italy

* Corresponding author: Danilo Costarelli

Received December 2019; Revised February 2020; Published February 2020

Fund Project: The first author has been partially supported within the 2019 GNAMPA-INdAM Project "Metodi di analisi reale per l'approssimazione attraverso operatori discreti e applicazioni", while the second author within the projects: (1) Ricerca di Base 2018 dell'Università degli Studi di Perugia - "Metodi di Teoria dell'Approssimazione, Analisi Reale, Analisi Nonlineare e loro Applicazioni", (2) Ricerca di Base 2019 dell'Università degli Studi di Perugia - "Integrazione, Approssimazione, Analisi Nonlineare e loro Applicazioni", (3) "Metodi e processi innovativi per lo sviluppo di una banca di immagini mediche per fini diagnostici" funded by the Fondazione Cassa di Risparmio di Perugia, 2018

In this paper, an asymptotic formula for the so-called multivariate neural network (NN) operators is established. As a direct consequence, first- and second-order pointwise Voronovskaja type theorems are derived. Finally, the particular case of the NN operators activated by the logistic function is treated in detail.
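For orientation, the univariate prototype of the operators under study can be sketched as follows; this follows the construction in the related works [12,13], while the paper itself treats the multivariate generalization, so take it as a hedged sketch rather than the paper's exact definition. For a sigmoidal function $\sigma$ one sets $\phi_{\sigma}(x) := \frac{1}{2}[\sigma(x+1) - \sigma(x-1)]$ and

```latex
% Univariate NN operator (sketch, after [12,13]); the paper studies the
% multivariate analogue.
F_n(f, x) \;=\;
  \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor}
          f\!\left(\tfrac{k}{n}\right)\, \phi_{\sigma}(nx - k)}
       {\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor}
          \phi_{\sigma}(nx - k)},
  \qquad x \in [a, b].
% A Voronovskaja type theorem identifies the exact leading term of the
% approximation error; the classical statement for Bernstein polynomials reads
\lim_{n \to \infty} n \left[ (B_n f)(x) - f(x) \right]
  \;=\; \frac{x(1-x)}{2}\, f''(x).
```

Here $B_n$ denotes the Bernstein polynomials; a Voronovskaja type theorem for $F_n$ is an asymptotic statement of the same shape, identifying the exact first term of the error expansion.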

Citation: Danilo Costarelli, Gianluca Vinti. Asymptotic expansions and Voronovskaja type theorems for the multivariate neural network operators. Mathematical Foundations of Computing, 2020, 3 (1) : 41-50. doi: 10.3934/mfc.2020004
References:
[1] J. A. Adell and D. Cárdenas-Morales, Quantitative generalized Voronovskaja's formulae for Bernstein polynomials, Journal of Approximation Theory, 231 (2018), 41-52. doi: 10.1016/j.jat.2018.04.007.
[2] T. Acar, Asymptotic formulas for generalized Szász-Mirakyan operators, Applied Mathematics and Computation, 263 (2015), 223-239. doi: 10.1016/j.amc.2015.04.060.
[3] T. Acar, A. Aral and I. Rasa, Approximation by k-th order modifications of Szász-Mirakyan operators, Studia Scient. Math. Hungar., 53 (2016), 379-398. doi: 10.1556/012.2016.53.3.1339.
[4] T. Acar, A. Aral and I. Rasa, The new forms of Voronovskaya's theorem in weighted spaces, Positivity, 20 (2016), 25-40. doi: 10.1007/s11117-015-0338-4.
[5] C. Bardaro and I. Mantellini, On pointwise approximation properties of multivariate semi-discrete sampling type operators, Results in Mathematics, 72 (2017), 1449-1472. doi: 10.1007/s00025-017-0667-7.
[6] F. Cao and Z. Chen, The approximation operators with sigmoidal functions, Comput. Math. Appl., 58 (2009), 758-765. doi: 10.1016/j.camwa.2009.05.001.
[7] F. Cao and Z. Chen, The construction and approximation of a class of neural networks operators with ramp functions, J. Comput. Anal. Appl., 14 (2012), 101-112.
[8] P. Cardaliaguet and G. Euvrard, Approximation of a function and its derivative with a neural network, Neural Networks, 5 (1992), 207-220. doi: 10.1016/S0893-6080(05)80020-6.
[9] G. H. L. Cheang, Approximation with neural networks activated by ramp sigmoids, J. Approx. Theory, 162 (2010), 1450-1465. doi: 10.1016/j.jat.2010.03.004.
[10] E. W. Cheney, W. A. Light and Y. Xu, Constructive methods of approximation by ridge functions and radial functions, Numerical Algorithms, 4 (1993), 205-223. doi: 10.1007/BF02144104.
[11] D. Costarelli and A. R. Sambucini, Approximation results in Orlicz spaces for sequences of Kantorovich max-product neural network operators, Results in Mathematics, 73 (2018), Art. 15, 15 pp. doi: 10.1007/s00025-018-0799-4.
[12] D. Costarelli and R. Spigler, Approximation results for neural network operators activated by sigmoidal functions, Neural Networks, 44 (2013), 101-106. doi: 10.1016/j.neunet.2013.03.015.
[13] D. Costarelli and R. Spigler, Multivariate neural network operators with sigmoidal activation functions, Neural Networks, 48 (2013), 72-77. doi: 10.1016/j.neunet.2013.07.009.
[14] D. Costarelli, R. Spigler and G. Vinti, A survey on approximation by means of neural network operators, Journal of NeuroTechnology, 1 (2016).
[15] D. Costarelli and G. Vinti, Inverse results of approximation and the saturation order for the sampling Kantorovich series, Journal of Approximation Theory, 242 (2019), 64-82. doi: 10.1016/j.jat.2019.03.001.
[16] D. Costarelli and G. Vinti, Voronovskaja type theorems and high order convergence neural network operators with sigmoidal functions, Mediterranean Journal of Mathematics, (2020).
[17] F. Cucker and D. X. Zhou, Learning Theory: An Approximation Theory Viewpoint, Cambridge University Press, Cambridge, 2007. doi: 10.1017/CBO9780511618796.
[18] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2 (1989), 303-314. doi: 10.1007/BF02551274.
[19] N. J. Guliyev and V. E. Ismailov, On the approximation by single hidden layer feedforward neural networks with fixed weights, Neural Networks, 98 (2018), 296-304. doi: 10.1016/j.neunet.2017.12.007.
[20] N. J. Guliyev and V. E. Ismailov, Approximation capability of two hidden layer feedforward neural networks with fixed weights, Neurocomputing, 316 (2018), 262-269. doi: 10.1016/j.neucom.2018.07.075.
[21] A. Iliev and N. Kyurkchiev, On the Hausdorff distance between the Heaviside function and some transmuted activation functions, Mathematical Modeling and Applications, 2 (2016), 1-5.
[22] A. Iliev, N. Kyurkchiev and S. Markov, On the approximation of the cut and step functions by logistic and Gompertz functions, BIOMATH, 4 (2015), 201510101, 12 pp. doi: 10.11145/j.biomath.2015.10.101.
[23] V. E. Ismailov, On the approximation by neural networks with bounded number of neurons in hidden layers, J. Math. Anal. Appl., 417 (2014), 963-969. doi: 10.1016/j.jmaa.2014.03.092.
[24] P. P. Korovkin, On convergence of linear positive operators in the space of continuous functions, Dokl. Akad. Nauk SSSR, 90 (1953), 961-964.
[25] J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, 61 (2015), 85-117. doi: 10.1016/j.neunet.2014.09.003.
[26] S. Smale and D. X. Zhou, Learning theory estimates via integral operators and their approximations, Constructive Approximation, 26 (2007), 153-172. doi: 10.1007/s00365-006-0659-y.
[27] Y. Zhang, J. Wu, Z. Cai, B. Du and P. S. Yu, An unsupervised parameter learning model for RVFL neural network, Neural Networks, 112 (2019), 85-97. doi: 10.1016/j.neunet.2019.01.007.

Figure 1.  The function $ \phi_{\sigma_{\ell}} $
Figure 2.  The function $ \Psi_{\sigma_{\ell}} $ of two variables
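The kernels in Figures 1 and 2 are built from the logistic function $\sigma_{\ell}(x) = (1 + e^{-x})^{-1}$. The snippet below is a minimal numerical sketch assuming the standard construction from the related literature ([12,13]): the univariate kernel $\phi_{\sigma_{\ell}}(x) = \frac{1}{2}[\sigma_{\ell}(x+1) - \sigma_{\ell}(x-1)]$, a tensor-product multivariate kernel $\Psi_{\sigma_{\ell}}$, and the resulting univariate NN operator. The paper's exact multivariate definitions may differ in detail.

```python
import math


def sigma_l(x: float) -> float:
    """Logistic (sigmoidal) activation function."""
    return 1.0 / (1.0 + math.exp(-x))


def phi_sigma(x: float) -> float:
    """Univariate density-type kernel built from sigma_l, following the
    construction used in the related literature ([12,13])."""
    return 0.5 * (sigma_l(x + 1.0) - sigma_l(x - 1.0))


def psi_sigma(xs) -> float:
    """Multivariate kernel as a product of univariate kernels
    (assumed tensor-product form, matching the bivariate plot of Figure 2)."""
    out = 1.0
    for x in xs:
        out *= phi_sigma(x)
    return out


def nn_operator(f, n: int, x: float, a: float = 0.0, b: float = 1.0) -> float:
    """Univariate NN operator: a kernel-weighted average of the samples
    f(k/n) for k = ceil(n*a), ..., floor(n*b)."""
    ks = range(math.ceil(n * a), math.floor(n * b) + 1)
    num = sum(f(k / n) * phi_sigma(n * x - k) for k in ks)
    den = sum(phi_sigma(n * x - k) for k in ks)
    return num / den
```

For instance, `nn_operator(lambda t: t, 100, 0.5)` reproduces a linear function at an interior point essentially exactly, since the kernel is even and decays exponentially away from its center.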
