Mathematical Biosciences & Engineering, 2005, 2(1): 1-23. doi: 10.3934/mbe.2005.2.1

Characterization of Neural Interaction During Learning and Adaptation from Spike-Train Data

Liqiang Zhu, Ying-Cheng Lai, Frank C. Hoppensteadt, Jiping He


Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287, United States


School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287-5706, United States


Courant Institute, New York University, New York, NY 10012, United States


Department of Bioengineering, Arizona State University, Tempe, AZ 85281, United States

Received: August 2004. Revised: October 2004. Published: November 2004.

A basic task in understanding the neural mechanisms of learning and adaptation is to detect and characterize neural interactions and their changes in response to new experiences. Recent experimental work has indicated that neural interactions in the primary motor cortex of the monkey brain tend to change their preferred directions during adaptation to an external force field. Quantifying such changes requires computational methodology for data analysis. Given that typical experimental data consist of spike trains recorded from individual neurons, probing the strength of neural interactions and their changes is extremely challenging. We recently reported in a brief communication [Zhu et al., Neural Computation 15, 2359 (2003)] a general procedure to detect and quantify causal interactions among neurons, based on the method of the directed transfer function derived from a class of multivariate, linear stochastic models. The procedure was applied to spike trains from neurons in the primary motor cortex of the monkey brain during adaptation, where monkeys were trained to learn a new skill by moving their arms to reach a target under external perturbations. Our computation and analysis indicated that adaptation tends to alter the connection topology of the underlying neural network, yet the average interaction strength in the network is approximately conserved before and after adaptation. The present paper gives a detailed account of this procedure and its applicability to spike-train data in terms of the hypotheses, theory, computational methods, control tests, and extensive analysis of experimental data.
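The directed-transfer-function approach mentioned in the abstract can be sketched in a few lines. The fragment below is an illustrative NumPy implementation of the standard normalized DTF formula for a fitted multivariate autoregressive (MVAR) model X(t) = Σ_k A_k X(t−k) + E(t), not the authors' code; the function name, coefficient values, and frequency grid are our own illustrative choices.

```python
import numpy as np

def directed_transfer_function(A, freqs, fs=1.0):
    """Normalized DTF from MVAR coefficient matrices.

    A     : array of shape (p, n, n) holding A_1 .. A_p of
            X(t) = sum_k A_k X(t-k) + E(t).
    freqs : frequencies at which to evaluate the DTF.
    Returns gamma2 of shape (len(freqs), n, n), where
    gamma2[f, i, j] is the normalized influence of channel j
    on channel i at frequency freqs[f].
    """
    p, n, _ = A.shape
    gamma2 = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        # Spectral AR matrix: A(f) = I - sum_k A_k exp(-2*pi*i*f*k/fs)
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)                 # transfer matrix H(f)
        power = np.abs(H) ** 2
        # Row-normalize so that inflows to each channel sum to 1.
        gamma2[fi] = power / power.sum(axis=1, keepdims=True)
    return gamma2

# Toy example: unidirectional coupling, channel 0 drives channel 1.
A = np.zeros((1, 2, 2))
A[0] = [[0.5, 0.0],
        [0.4, 0.5]]
g2 = directed_transfer_function(A, freqs=np.linspace(0.0, 0.5, 8))
```

With this coupling matrix the DTF from channel 1 to channel 0 vanishes at every frequency, while the DTF from channel 0 to channel 1 is strictly positive, reflecting the one-way causal influence that the method is designed to detect.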
Citation: Liqiang Zhu, Ying-Cheng Lai, Frank C. Hoppensteadt, Jiping He. Characterization of Neural Interaction During Learning and Adaptation from Spike-Train Data. Mathematical Biosciences & Engineering, 2005, 2 (1) : 1-23. doi: 10.3934/mbe.2005.2.1


