
ISSN: 1551-0018
eISSN: 1547-1063
Mathematical Biosciences & Engineering
2014, Volume 11, Issue 2
Special issue on BIOCOMP 2012 - Mathematical Modeling and Computational Topics in Biosciences.
Dedicated to the Memory of Professor Luigi M. Ricciardi (1942-2011).
2014, 11(2): i-ii
doi: 10.3934/mbe.2014.11.2i
Abstract:
The International Conference ``BIOCOMP2012 - Mathematical Modeling and Computational Topics in Biosciences'' was held in Vietri sul Mare (Italy), June 4-8, 2012. It was dedicated to the memory of Professor Luigi M. Ricciardi (1942-2011), a visionary and tireless promoter of the three previous editions of the BIOCOMP conference series. We thought that the best way to honor his memory was to continue the BIOCOMP program. Over the years, this conference has promoted scientific activities related to his wide interests and scientific expertise, which ranged over various areas of application of mathematics, probability and statistics to biosciences and cybernetics, with particular emphasis on computational problems. We are pleased that many of his friends and colleagues, as well as many other scientists, were attracted by the goals of this recent event and contributed to its success.
2014, 11(2): 167-188
doi: 10.3934/mbe.2014.11.167
Abstract:
The aim of this paper is to consider a non-autonomous predator-prey-like system, with a Gompertz growth law for the prey. By introducing random variations in both prey birth and predator death rates, a stochastic model for the predator-prey-like system in a random environment is proposed and investigated. The corresponding Fokker-Planck equation is solved to obtain the joint probability density for the prey and predator populations and the marginal probability densities. The asymptotic behavior of the predator-prey stochastic model is also analyzed.
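The simulation approach behind such stochastic predator-prey models can be illustrated with a short Euler-Maruyama sketch. The drift and diffusion terms below (a Gompertz prey growth with noisy birth and death rates) and all parameter values are illustrative assumptions, not the paper's actual system.

```python
import numpy as np

# Euler-Maruyama sketch of a Gompertz-type predator-prey SDE with random
# variations in the prey birth and predator death rates. All constants
# below are hypothetical, chosen only to show the integration scheme.
rng = np.random.default_rng(0)
a, b, c, d, K = 1.0, 0.5, 0.5, 1.0, 10.0   # hypothetical rate constants
sigma1, sigma2 = 0.1, 0.1                  # noise intensities
dt, steps = 1e-3, 5000
x, y = 2.0, 1.0                            # prey, predator populations
for _ in range(steps):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
    dx = (a * x * np.log(K / x) - b * x * y) * dt + sigma1 * x * dW1
    dy = (c * b * x * y - d * y) * dt + sigma2 * y * dW2
    x, y = max(x + dx, 1e-9), max(y + dy, 1e-9)  # keep populations positive
print(x, y)
```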
2014, 11(2): 189-201
doi: 10.3934/mbe.2014.11.189
Abstract:
With the aim of describing the interaction between a pair of neurons, a stochastic model is proposed and formalized. Within the Leaky Integrate-and-Fire framework, the model includes a random component in the synaptic current, whose role is to shift the equilibrium point of the membrane potential of one of the two neurons; this component is switched on whenever the other neuron fires a spike. The initial and after-spike reset positions do not allow us to identify the inter-spike intervals with the corresponding first passage times. However, we are able to apply some well-known results for the first passage time problem of the Ornstein-Uhlenbeck process to obtain (i) an approximation of the probability density function of the inter-spike intervals for one-way interaction and (ii) an approximation of the tail of that density for mutual interaction. These approximations are admissible for small instantaneous firing rates of both neurons.
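The first passage time of an Ornstein-Uhlenbeck process through a firing threshold, which the paper uses as the building block for inter-spike intervals, can be estimated by simple Monte Carlo. Parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

# Monte Carlo estimate of the first-passage time of an Ornstein-Uhlenbeck
# process through a constant threshold S -- the standard surrogate for the
# inter-spike interval in leaky integrate-and-fire models.
rng = np.random.default_rng(1)
theta, mu, sigma = 1.0, 1.2, 0.5   # decay rate, equilibrium point, noise
x0, S = 0.0, 1.0                   # reset value and firing threshold
dt, tmax, n_paths = 1e-3, 50.0, 200
fpts = []
for _ in range(n_paths):
    x, t = x0, 0.0
    while x < S and t < tmax:      # Euler-Maruyama until threshold crossing
        x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    fpts.append(t)
mean_isi = float(np.mean(fpts))    # mean inter-spike interval estimate
print(mean_isi)
```

Since the equilibrium point `mu` lies above the threshold here, almost every path fires well before `tmax`.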
2014, 11(2): 203-215
doi: 10.3934/mbe.2014.11.203
Abstract:
The aim of this work is to investigate the dynamics of a neural network in which neurons, individually described by the FitzHugh-Nagumo model, are coupled by a generalized diffusive term. The formulation we exploit is based on the general framework of graph theory, in which the discrete Laplacian matrix plays a fundamental role in defining the connection structure among the excitable elements. In fact, it allows us to model the instantaneous propagation of signals between neurons, which need not be physically close to each other.
This approach enables us to address three fundamental issues. First, each neuron is described by the well-known FitzHugh-Nagumo model, which makes it possible to differentiate individual behaviour. Second, the Laplacian matrix provides a well-defined connection structure. Finally, random networks and an ensemble of excitatory and inhibitory synapses are considered.
Several simulations are performed to show graphically how the dynamics of the network evolve. An appropriate initial stimulus creates a wave that propagates in a self-sustained way through the whole set of neurons. A novel graphical representation of the dynamics is shown.
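The Laplacian coupling scheme described above can be sketched in a few lines: build a random adjacency matrix, form the discrete Laplacian, and integrate FitzHugh-Nagumo units with a diffusive term. Network size, coupling strength and FHN parameters below are illustrative assumptions.

```python
import numpy as np

# FitzHugh-Nagumo units coupled through the graph Laplacian of a random
# network, integrated with explicit Euler. An initial stimulus on one
# neuron lets activity spread through the coupling term -g * (L @ v).
rng = np.random.default_rng(2)
N, p, g = 20, 0.2, 0.5                      # nodes, edge prob., coupling
A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric adjacency, no loops
L = np.diag(A.sum(axis=1)) - A              # discrete Laplacian matrix
eps, a_par, b_par = 0.08, 0.7, 0.8          # FHN time scale and parameters
v = rng.normal(0, 0.1, N); w = np.zeros(N)  # fast and recovery variables
v[0] = 2.0                                  # initial stimulus on one neuron
dt = 0.05
for _ in range(2000):
    dv = v - v**3 / 3 - w - g * (L @ v)     # diffusive coupling via Laplacian
    dw = eps * (v + a_par - b_par * w)
    v, w = v + dt * dv, w + dt * dw
print(float(np.max(np.abs(v))))
```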
2014, 11(2): 217-231
doi: 10.3934/mbe.2014.11.217
Abstract:
We investigate an extension of the spike train stochastic model based on the conditional intensity, in which the recovery function includes an interaction between several excitatory neural units. This function is taken to depend both on the time elapsed since the last spike and on the identity of the last spiking unit. Our approach, which is somewhat related to the competing risks model, allows us to obtain the general form of the interspike distribution and of the probability of consecutive spikes from the same unit. Various results are presented for the two cases in which the free firing rate function (i) is constant and (ii) has a sinusoidal form.
2014, 11(2): 233-256
doi: 10.3934/mbe.2014.11.233
Abstract:
The flow of energy and the minimization of free energy underpin almost every naturally occurring physical mechanism. Inspired by this fact, this work establishes an energy-based framework that spans the multiple scales of biological neural systems and integrates synaptic dynamics, synchronous spiking activity and neural states into one consistent working paradigm. Following a bottom-up approach, a hypothetical energy function is proposed for dynamic synaptic models, based on thermodynamic principles and on Hopfield networks. We show that a synapse exhibits stable operating points in terms of its excitatory postsynaptic potential as a function of its synaptic strength. We postulate that synapses operating at these stable points can drive the network to an internal state of synchronous firing. The analysis is related to the widely investigated temporally coherent activities (cell assemblies) over a certain range of time scales (binding-by-synchrony). This offers a novel explanation of the observed (poly)synchronous activities within networks in terms of the synaptic (coupling) functionality. At the network level, transitions from one firing scheme to another express discrete sets of neural states, which persist as long as the network sustains the internal synaptic energy.
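The Hopfield energy function that the paper takes as a starting point can be demonstrated concretely: for a symmetric, zero-diagonal weight matrix, asynchronous updates can only decrease E(s) = -1/2 s^T W s. The pattern and weights below are illustrative, not the paper's synaptic model.

```python
import numpy as np

# Classical Hopfield energy descent: store one pattern with a Hebbian
# weight matrix, corrupt it, and watch asynchronous updates lower the
# energy while recovering the stored state.
rng = np.random.default_rng(3)
pattern = rng.choice([-1.0, 1.0], size=16)
W = np.outer(pattern, pattern)              # Hebbian weights storing pattern
np.fill_diagonal(W, 0.0)                    # no self-coupling

def energy(s, W):
    return -0.5 * s @ W @ s                 # E(s) = -1/2 s^T W s

s = pattern.copy()
s[:4] *= -1                                 # corrupt 4 of the 16 units
e0 = energy(s, W)
for i in rng.permutation(len(s)):           # one asynchronous update sweep
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
e1 = energy(s, W)
print(e0, e1)
```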
2014, 11(2): 257-283
doi: 10.3934/mbe.2014.11.257
Abstract:
The objective of this paper is to study an optimal resource management problem for some classes of tritrophic systems composed of autotrophic resources (plants), bottom-level consumers (herbivores) and top-level consumers (humans). The first class of systems we discuss are linear chains, in which biomass flows from plants to herbivores, and from herbivores to humans. In the second class, humans are omnivorous and hence compete with herbivores for plant resources. In the third class, humans are omnivorous, but the plant resources are partitioned so that humans and herbivores do not compete for the same ones. The three trophic chains are expressed as Lotka-Volterra models, which seem to be a suitable choice in contexts where there is a shortage of food for the consumers. Our model parameters are taken from the literature on agro-pastoral systems in Sub-Saharan Africa.
2014, 11(2): 285-302
doi: 10.3934/mbe.2014.11.285
Abstract:
An Ornstein-Uhlenbeck diffusion process is considered as a model for the membrane potential activity of a single neuron. We assume that the neuron is subject to a sequence of inhibitory and excitatory post-synaptic potentials that occur with time-dependent rates. The resulting process is characterized by a time-dependent drift. For this model, we construct the return process describing the membrane potential: a non-homogeneous Ornstein-Uhlenbeck process with jumps, on which the effect of random refractoriness is introduced. An asymptotic analysis of the process modeling the number of firings and of the distribution of interspike intervals is performed under the assumption of an exponential distribution for the firing time. Numerical evaluations are performed to provide quantitative information on the role of the parameters.
2014, 11(2): 303-315
doi: 10.3934/mbe.2014.11.303
Abstract:
The mean-field dynamics of a collection of stochastic agents evolving under local and nonlocal interactions in one dimension is studied via analytically solvable models. The nonlocal interactions between agents result from $(a)$ a finite extension of the agents' interaction range and $(b)$ a barycentric modulation of the interaction strength. Our modeling framework is based on a discrete two-velocity Boltzmann dynamics which can be discussed analytically. Depending on the span and the modulation of the interaction range, we analytically observe a transition from a purely diffusive regime without definite pattern to a flocking evolution represented by a solitary wave traveling with constant velocity.
2014, 11(2): 317-330
doi: 10.3934/mbe.2014.11.317
Abstract:
Mathematical models have been very useful in biological research. From the interaction of biology and mathematics, new problems have emerged that have generated advances in the theory, suggested further experimental work and motivated plausible conjectures. From our perspective, it is absolutely necessary to incorporate modeling tools in the study of circadian rhythms; without a solid mathematical framework, a real understanding of them will not be possible. Our interest is to study the main process underlying synchronization in the pacemaker of a circadian system: these mechanisms should be conserved across all living beings. Indeed, from an evolutionary perspective, it seems reasonable to assume either that they have a common origin or that they emerged from similar selection circumstances. We propose a general framework to understand the emergence of synchronization as a robust characteristic of some cooperative systems of nonlinear coupled oscillators. In a first approximation to the problem, we vary the topology of the network and the strength of the interactions among oscillators. In order to study the emergent dynamics, we carried out numerical computations. The results are consistent with experiments reported in the literature. Finally, we propose a theoretical framework for studying the phenomenon of synchronization in the context of circadian rhythms: the dissipative synchronization of nonautonomous dynamical systems.
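The emergence of synchronization in coupled oscillator networks can be illustrated with the standard Kuramoto mean-field model; the paper's actual oscillator equations are not reproduced here, so this is only a generic sketch of the mechanism. The order parameter r measures coherence: r near 0 means incoherence, r near 1 means synchrony.

```python
import numpy as np

# Mean-field Kuramoto model: phases theta_i with natural frequencies
# omega_i, coupled through the complex order parameter r * exp(i * psi).
# Above a critical coupling the population locks into synchrony.
rng = np.random.default_rng(4)
N, K_coup, dt = 50, 2.0, 0.02
omega = rng.normal(0.0, 0.3, N)             # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)        # random initial phases
for _ in range(3000):
    mean_field = np.mean(np.exp(1j * theta))
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K_coup * r * np.sin(psi - theta))
r_final = float(np.abs(np.mean(np.exp(1j * theta))))
print(r_final)
```

With this coupling strength, well above the synchronization threshold for the chosen frequency spread, the order parameter settles close to 1.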
2014, 11(2): 331-342
doi: 10.3934/mbe.2014.11.331
Abstract:
In this paper, we propose a strategy for selecting the hidden layer size in feedforward neural network models. The procedure is based on comparing different models in terms of their out-of-sample predictive ability for a specified loss function. To overcome the problem of data snooping, we extend the scheme based on the reality check with modifications suited to comparing nested models. Applications of the proposed procedure to simulated and real data sets show that it allows us to select parsimonious neural network models with the highest predictive accuracy.
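The core selection idea, comparing candidate hidden layer sizes by out-of-sample loss, can be sketched as follows. To stay self-contained, the hidden weights are drawn at random and only the output layer is fit by least squares (an extreme-learning-machine style surrogate); this is not the paper's estimation or reality-check procedure, only an illustration of model comparison by held-out predictive ability.

```python
import numpy as np

# Compare candidate hidden layer sizes by mean squared error on a held-out
# test set, and pick the size with the best out-of-sample loss.
rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)     # noisy nonlinear target
x_tr, y_tr, x_te, y_te = x[:150], y[:150], x[150:], y[150:]

def oos_mse(h):
    Wh = rng.normal(0, 1, (h, 2))           # random hidden weights + bias
    H_tr = np.tanh(np.c_[x_tr, np.ones(150)] @ Wh.T)
    H_te = np.tanh(np.c_[x_te, np.ones(50)] @ Wh.T)
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)  # fit output layer
    return float(np.mean((H_te @ beta - y_te) ** 2))    # held-out loss

losses = {h: oos_mse(h) for h in (1, 2, 4, 8, 16)}
best = min(losses, key=losses.get)
print(best, losses[best])
```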
2014, 11(2): 343-361
doi: 10.3934/mbe.2014.11.343
Abstract:
Receptive fields of retinal and other sensory neurons show a large variety of spatiotemporal, linear and nonlinear responses to local stimuli. In visual neurons, these responses present either asymmetric sensitive zones or a center-surround organization. In most cases, the nature of the responses suggests a kind of distributed computation prior to integration by the final cell, which is evidently supported by the anatomy. We describe a new kind of discrete and continuous filter to model the computations taking place in the receptive fields of retinal cells. To show their performance in the analysis of different non-trivial neuron-like structures, we use a computer tool specifically programmed by the authors for that purpose. The tool is also extended to study the effect of lesions on the overall performance of our model nets.
2014, 11(2): 363-384
doi: 10.3934/mbe.2014.11.363
Abstract:
Quantitative measurement of the timings of cell division and death, combined with mathematical models, is a standard way to estimate the kinetic parameters of cellular proliferation. On the basis of label-based measurement data, several quantitative mathematical models describing the short-term dynamics of transient cellular proliferation have been proposed and extensively studied. In the present paper, we show that existing mathematical models for cell population growth can be reformulated as a specific case of generation progression models, a variant of the parity progression models developed in mathematical demography. The generation progression ratio (GPR) is defined for a generation progression model as the expected ratio of population increase or decrease via cell division. We also apply a stochastic simulation algorithm capable of representing the population growth dynamics of transient amplifying cells for various inter-event time distributions of cell division and death. The demographic modeling and the stochastic simulation algorithm presented here can be used as a unified platform to systematically investigate the short-term dynamics of cell population growth.
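A minimal event-driven simulation of such a branching cell population can be written directly: each cell waits a random time, then either divides into two daughters with probability p or dies, so the generation progression ratio is 2p. Exponential waiting times are used here for brevity; the framework described above allows arbitrary inter-event distributions. All parameter values are illustrative.

```python
import numpy as np

# Event-driven branching simulation of transient amplifying cells:
# each pending cell has a random lifetime; at its end it divides
# (probability p, two daughters) or dies. Cells whose event time falls
# past t_end are counted as alive at the observation time.
rng = np.random.default_rng(6)
p, rate, t_end = 0.6, 1.0, 3.0             # division prob., event rate, horizon
events = [rng.exponential(1 / rate)]       # pending event times; one founder
alive = 0
while events:
    t = events.pop()
    if t >= t_end:
        alive += 1                         # still alive at observation time
    elif rng.random() < p:                 # division: schedule two daughters
        events.append(t + rng.exponential(1 / rate))
        events.append(t + rng.exponential(1 / rate))
    # else: death, nothing scheduled
gpr = 2 * p                                # generation progression ratio
print(alive, gpr)
```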
2014, 11(2): 385-401
doi: 10.3934/mbe.2014.11.385
Abstract:
The distribution of time intervals between successive spikes generated by a neuronal cell --the interspike intervals (ISI)-- may reveal interesting features of the underlying dynamics. In this study we analyze the ISI sequence --the spike train-- generated by a simple network of neurons whose output activity is modeled by a jump-diffusion process. We prove that, for specific ranges of the involved parameters, multimodal ISI distributions can be observed, revealing that the modeled network fires with more than one preferred time interval. Furthermore, the system exhibits resonance behavior, with the spike timings modulated by the noise intensity. We also show that inhibition helps signal transmission between the units of the simple network.
2018 Impact Factor: 1.313