ISSN: 1551-0018 | eISSN: 1547-1063

## Mathematical Biosciences & Engineering

2014, Volume 11, Issue 1

Special issue on Computational Neuroscience


2014, 11(1): i-ii
doi: 10.3934/mbe.2014.11.1i

**Abstract:**

This Special Issue of Mathematical Biosciences and Engineering contains ten selected papers presented at the Neural Coding 2012 workshop. Neuroscience has traditionally been close to mathematics, a closeness that stems from the famous theoretical work of McCulloch--Pitts and Hodgkin--Huxley in the middle of the previous century. Great progress has been made since then, and over the decades this fruitful combination of disciplines has continued. The workshop was held in the beautiful town of Prague in the Czech Republic, September 2-7, 2012. It was the 10th in a series of international workshops on the subject, the first also held in Prague (1995), followed by Versailles (1997), Osaka (1999), Plymouth (2001), Aulla (2003), Marburg (2005), Montevideo (2007), Tainan (2009), and Limassol (2010). As in the previous workshops, this was a single-track multidisciplinary event bringing together experimental and computational neuroscientists, with ample time for informal discussions in a convivial atmosphere. The Neural Coding Workshops are traditionally biennial symposia, each lasting 5 or 6 days. They are relatively small and interdisciplinary, with a major emphasis on the search for common principles in neural coding. The workshop was conceived to bring together scientists from different disciplines for an in-depth discussion of model-building and computational strategies.


2014, 11(1): 1-10
doi: 10.3934/mbe.2014.11.1

**Abstract:**

A method to generate first passage times for a class of stochastic processes is proposed. It does not require constructing trajectories, as is usually needed in simulation studies, but is instead based on an integral equation whose unknown is the probability density function of the first passage times under study, together with an application of the hazard rate method. The proposed procedure is particularly efficient for the Ornstein-Uhlenbeck process, which is important for modeling spiking neuronal activity.
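
As a point of contrast, the trajectory-based approach that the proposed method avoids can be sketched in a few lines: Euler-Maruyama simulation of the Ornstein-Uhlenbeck process until threshold crossing. All parameter values below (drift, noise intensity, threshold) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ou_first_passage_times(n_trials, threshold=1.0, mu=1.5, tau=1.0,
                           sigma=0.5, x0=0.0, dt=1e-3, t_max=50.0, seed=0):
    """Naive Euler-Maruyama estimate of first-passage times for the
    Ornstein-Uhlenbeck process dX = (-X/tau + mu) dt + sigma dW.
    This is exactly the trajectory construction that the
    integral-equation method dispenses with."""
    rng = np.random.default_rng(seed)
    fpts = []
    n_steps = int(t_max / dt)
    for _ in range(n_trials):
        x = x0
        for k in range(n_steps):
            x += (-x / tau + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if x >= threshold:  # record the first threshold crossing
                fpts.append((k + 1) * dt)
                break
    return np.array(fpts)

fpts = ou_first_passage_times(200)
print(fpts.mean())
```

Each sample here costs a full simulated trajectory; the integral-equation/hazard-rate approach instead draws first passage times from their (numerically obtained) density directly.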

2014, 11(1): 11-25
doi: 10.3934/mbe.2014.11.11

**Abstract:**

Leaky integrate-and-fire neuronal models with reversal potentials admit a number of different diffusion approximations, each depending on the form of the amplitudes of the postsynaptic potentials. Probability distributions of the first-passage times of the membrane potential in the original model and in its diffusion approximations are compared numerically in order to determine which approximation is the most suitable. The properties of the random amplitudes of postsynaptic potentials are discussed, and a simple example shows that the quality of the approximation depends directly on them.
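
A minimal sketch of such a model with reversal potentials, assuming multiplicative postsynaptic amplitudes (each input moves the voltage a fixed fraction of the distance to the corresponding reversal potential). All rates, amplitudes, and potentials below are invented for illustration:

```python
import numpy as np

def lif_reversal(t_max=10.0, dt=1e-4, tau=0.01, v_rest=0.0, v_thr=1.0,
                 v_exc=5.0, v_inh=-1.0, rate_e=800.0, rate_i=200.0,
                 a_e=0.05, a_i=0.05, seed=1):
    """LIF model with reversal potentials: each excitatory (inhibitory)
    input moves V by a fraction a_e (a_i) of the distance to V_E (V_I),
    so postsynaptic amplitudes are state-dependent."""
    rng = np.random.default_rng(seed)
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += -(v - v_rest) / tau * dt             # leak toward rest
        if rng.random() < rate_e * dt:            # excitatory arrival
            v += a_e * (v_exc - v)
        if rng.random() < rate_i * dt:            # inhibitory arrival
            v += a_i * (v_inh - v)
        if v >= v_thr:
            spikes.append(t)
            v = v_rest                            # reset after a spike
        t += dt
    return spikes

spikes = lif_reversal()
print(len(spikes))
```

Because the jump amplitudes shrink as V approaches a reversal potential, the resulting postsynaptic potentials are random in size, which is precisely what the competing diffusion approximations treat differently.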

2014, 11(1): 27-48
doi: 10.3934/mbe.2014.11.27

**Abstract:**

A new synchrony index for neural activity is defined in this paper. The method can measure synchrony dynamics in low-firing-rate scenarios. It is based on computing the time intervals between nearest spikes of two given spike trains. Generalized additive models are proposed for the synchrony profiles obtained by this method. Two hypothesis tests are proposed to assess differences in the level of synchronization in a real-data example, with bootstrap methods used to calibrate the distributions of the tests. The synchrony expected by chance is also computed, both analytically and by simulation, to distinguish actual synchronization from chance coincidences.
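
The basic building block, the time from each spike in one train to the nearest spike in the other, can be computed efficiently with a sorted search. This sketch shows only that primitive (the full index, the additive models, and the bootstrap tests are the paper's contribution); the example spike times are arbitrary:

```python
import numpy as np

def nearest_spike_intervals(train_a, train_b):
    """For each spike in train_a, return the absolute time to the nearest
    spike in train_b (the quantity the synchrony profile is built from)."""
    a = np.asarray(train_a, dtype=float)
    b = np.sort(np.asarray(train_b, dtype=float))
    idx = np.searchsorted(b, a)              # insertion points of a into b
    lo = np.clip(idx - 1, 0, len(b) - 1)     # nearest spike at or before
    hi = np.clip(idx, 0, len(b) - 1)         # nearest spike at or after
    return np.minimum(np.abs(a - b[lo]), np.abs(a - b[hi]))

a = [0.10, 0.52, 0.91]
b = [0.12, 0.55, 1.30]
print(nearest_spike_intervals(a, b).round(3))  # [0.02 0.03 0.36]
```

Small nearest-spike intervals that persist across the recording indicate synchrony even when both trains fire rarely, which is why the method suits low-firing-rate data.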

2014, 11(1): 49-62
doi: 10.3934/mbe.2014.11.49

**Abstract:**

Because every spike of a neuron is determined by input signals, a train of spikes may contain information about the dynamics of unobserved neurons. A state-space method based on the leaky integrate-and-fire (LIF) model, describing the neuronal transformation from input signals to a spike train, has been proposed for tracking input parameters represented by their mean and fluctuation [11]. In the present paper, we make the estimation more realistic by adopting an LIF model augmented with an adaptive moving threshold. Moreover, because the direct state-space method is computationally infeasible for data sets comprising thousands of spikes, we further develop a practical method for transforming instantaneous firing characteristics back into input parameters. The instantaneous firing characteristics, represented by the firing rate and non-Poisson irregularity, can be estimated with a computationally feasible algorithm. We apply the proposed methods to synthetic data and confirm that they perform well.
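
A minimal generative sketch of an LIF neuron with an adaptive moving threshold, i.e., the forward model behind such an estimation problem. The decay constants, threshold jump, and input statistics below are invented for illustration:

```python
import numpy as np

def lif_adaptive_threshold(t_max=2.0, dt=1e-4, tau_m=0.02, tau_th=0.1,
                           theta0=1.0, d_theta=0.5, mu=60.0, sigma=3.0, seed=2):
    """LIF neuron whose threshold jumps by d_theta at each spike and
    relaxes back to theta0, producing spike-frequency adaptation."""
    rng = np.random.default_rng(seed)
    v, theta, spikes = 0.0, theta0, []
    for k in range(int(t_max / dt)):
        # membrane: leaky integration of mean input mu with fluctuation sigma
        v += (-v / tau_m + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        theta += (theta0 - theta) / tau_th * dt   # threshold decays back
        if v >= theta:
            spikes.append(k * dt)
            v = 0.0                               # reset the membrane...
            theta += d_theta                      # ...and raise the threshold
    return np.array(spikes)

spikes = lif_adaptive_threshold()
print(len(spikes))
```

Inverting this model, recovering mu and sigma from the spike times alone, is the tracking problem the state-space method addresses.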

2014, 11(1): 63-80
doi: 10.3934/mbe.2014.11.63

**Abstract:**

We investigate how much information can theoretically be gained from a variable neuronal firing rate relative to a constant average firing rate. We employ the statistical concept of information based on the Kullback-Leibler divergence, and assume rate-modulated renewal processes as a model of spike trains. We show that if the firing rate variation is sufficiently small and slow (with respect to the mean interspike interval), the information gain can be expressed through the Fisher information. Furthermore, under certain assumptions, the smallest possible information gain is provided by gamma-distributed interspike intervals. The methodology is illustrated and discussed on several different statistical models of neuronal activity.
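
The small-perturbation statement can be checked in the simplest case, Poisson spiking with exponential interspike intervals, where the KL divergence between nearby rates has a closed form and the Fisher information for the rate $\lambda$ is $1/\lambda^2$. The baseline rate and modulation size below are arbitrary:

```python
import numpy as np

def kl_exponential(lam1, lam2):
    """Exact KL divergence between Exp(lam1) and Exp(lam2) ISI densities:
    log(lam1/lam2) + lam2/lam1 - 1."""
    return np.log(lam1 / lam2) + lam2 / lam1 - 1.0

lam, d = 5.0, 0.05                  # baseline rate and a small modulation
exact = kl_exponential(lam, lam + d)
fisher = d**2 / (2 * lam**2)        # 0.5 * I(lam) * d^2 with I(lam) = 1/lam^2
print(exact, fisher)                # the two agree to leading order in d
```

Shrinking d makes the agreement tighter, which is the content of the small-and-slow-variation regime in which the information gain reduces to a Fisher-information expression.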

2014, 11(1): 81-104
doi: 10.3934/mbe.2014.11.81

**Abstract:**

The spiking statistics of a self-inhibitory neuron are considered. The neuron receives excitatory input from a Poisson stream and inhibitory impulses through a feedback line with a delay. After triggering, the neuron remains in the refractory state for a positive period of time.

Recently [35,6], it was proven that for a neuron with delayed feedback and without a refractory state, the output stream of interspike intervals (ISIs) cannot be represented as a Markov process. The presence of a refractory state limits, in a sense, the memory range of the spiking process, which might restore the Markov property to the ISI stream.

Here we check this possibility. For this purpose, we calculate the conditional probability density $P(t_{n+1}\mid t_{n},\ldots,t_1,t_{0})$ and prove exactly that it does not reduce to $P(t_{n+1}\mid t_{n},\ldots,t_1)$ for any $n\ge0$. This means that the activity of the system with a refractory state likewise cannot be represented as a Markov process of any order.

We conclude that it is precisely the presence of delayed feedback that results in the non-Markovian statistics of neuronal firing. As delayed feedback lines are common in any realistic neural network, the non-Markovian statistics of network activity should be taken into account when processing experimental data.

2014, 11(1): 105-123
doi: 10.3934/mbe.2014.11.105

**Abstract:**

The Fano factor is one of the most widely used measures of variability of spike trains. Its standard estimator is the ratio of the sample variance to the sample mean of spike counts observed in a time window, and the quality of the estimator strongly depends on the length of the window. We investigate this dependence under the assumption that the spike train behaves as an equilibrium renewal process. We show which characteristics of the spike train have a large effect on the estimator bias; in particular, the effect of the refractory period is evaluated analytically. Next, we derive an approximate asymptotic formula for the mean square error of the estimator, which can also be used to minimize the error when estimating from single spike trains. The accuracy of the Fano factor estimator is compared with that of the estimator based on the squared coefficient of variation. All results are illustrated for spike trains with gamma and inverse Gaussian probability distributions of interspike intervals. Finally, we discuss how to select a suitable observation window for Fano factor estimation.
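
The standard estimator is easy to state in code. The sketch below applies it to a Poisson surrogate train, for which the true Fano factor is 1 at every window length; the rate, duration, and window lengths are illustrative:

```python
import numpy as np

def fano_factor(spike_times, t_total, window):
    """Standard estimator: ratio of sample variance to sample mean of
    spike counts in disjoint windows of the given length."""
    edges = np.arange(0.0, t_total + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var(ddof=1) / counts.mean()

# Poisson surrogate train built from i.i.d. exponential ISIs.
rng = np.random.default_rng(0)
rate, t_total = 20.0, 2000.0
isi = rng.exponential(1.0 / rate, size=int(1.2 * rate * t_total))
spikes = np.cumsum(isi)
spikes = spikes[spikes < t_total]
for w in (0.05, 0.5, 5.0):
    print(w, fano_factor(spikes, t_total, w))
```

For a non-Poisson renewal train, e.g. gamma ISIs with a refractory period, the same estimator is biased at short windows; that window dependence is what the asymptotic mean-square-error formula quantifies.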

2014, 11(1): 125-138
doi: 10.3934/mbe.2014.11.125

**Abstract:**

To elucidate how a biological rhythm is regulated, the extended (three-dimensional) Bonhoeffer-van der Pol or FitzHugh-Nagumo equations are employed to investigate the dynamics of a population of neuronal oscillators globally coupled through a common buffer (mean field). Interesting phenomena, such as extraordinarily slow phase-locked oscillations (compared to the natural period of each neuronal oscillator) and the death of all oscillations, are observed. We demonstrate that the slow synchronization is due mainly to the existence of ``fast'' oscillators. Additionally, we examine the effect of noise on the synchronization and variability of the interspike intervals. Peculiar phenomena, such as noise-induced acceleration and deceleration, are observed. The results herein suggest that very small noise may significantly influence a biological rhythm.
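
For reference, the classical two-variable FitzHugh-Nagumo (Bonhoeffer-van der Pol) kernel that the three-dimensional, buffer-coupled model extends. The parameters are the textbook values with an illustrative constant drive; the extended model, coupling, and noise terms are not reproduced here:

```python
import numpy as np

def fhn_trajectory(t_max=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08, i_ext=0.5):
    """Euler integration of the two-variable FitzHugh-Nagumo oscillator:
        dv/dt = v - v^3/3 - w + I_ext
        dw/dt = eps (v + a - b w)
    With this drive the rest state is unstable and a relaxation
    limit cycle appears."""
    n = int(t_max / dt)
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for k in range(n - 1):
        v[k + 1] = v[k] + dt * (v[k] - v[k] ** 3 / 3 - w[k] + i_ext)
        w[k + 1] = w[k] + dt * eps * (v[k] + a - b * w[k])
    return v, w

v, w = fhn_trajectory()
print(v.min(), v.max())  # excursions between the two branches of the cubic
```

Coupling many such units (with a third variable) through a shared buffer and adding small noise yields the slow phase-locked oscillations and oscillation death discussed above.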

2014, 11(1): 139-148
doi: 10.3934/mbe.2014.11.139

**Abstract:**

A model is considered for a neural network that is a stochastic process on a random graph. The neurons are represented by ``integrate-and-fire'' processes. The structure of the graph is determined by the probabilities of the connections, and it depends on the activity in the network. The dependence between the initial level of sparseness of the connections and the dynamics of activation in the network is investigated. A balanced regime between activity (i.e., the level of excitation in the network) and inhibition is found that allows the formation of synfire chains.

2014, 11(1): 149-165
doi: 10.3934/mbe.2014.11.149

**Abstract:**

We derive learning rules for finding the connections between units in stochastic dynamical networks from the recorded history of a ``visible'' subset of the units. We consider two models. In both of them, the visible units are binary and stochastic. In one model the ``hidden'' units are continuous-valued, with sigmoidal activation functions, and in the other they are binary and stochastic like the visible ones. We derive exact learning rules for both cases. For the stochastic case, performing the exact calculation requires, in general, repeated summations over a number of configurations that grows exponentially with the size of the system and the data length, which is not feasible for large systems. We derive a mean field theory, based on a factorized ansatz for the distribution of hidden-unit states, which offers an attractive alternative for large systems. We present the results of some numerical calculations that illustrate key features of the two models and, for the stochastic case, the exact and approximate calculations.

2018 Impact Factor: 1.313
