A note on a neuron network model with diffusion

We study the dynamics of an inhomogeneous neuronal network parametrized by a real number σ and structured by the time elapsed since the last discharge. The dynamics are governed by a parabolic PDE describing the probability density of neurons with elapsed time s since their last discharge. We prove existence and uniqueness of a solution to the model. Moreover, we show that under some conditions on the connectivity and the firing rate, the network exhibits total desynchronization.


1.
Introduction. The presence of variability in neural activity (in vivo and in vitro) is well known [7]. For example, in the case of individual cortical neurons it is observed (in vivo) that their spike trains are highly irregular [32,33]. Besides the randomness observed in spontaneous neuronal activity, in vitro experiments also confirm the irregular behavior of neural activity [9]. Recent developments in theoretical neuroscience focus on understanding the essence of this variability. The random influence on the neuronal firing activity is termed 'neuronal noise' [15], and the sources of noise are broadly classified as extrinsic and intrinsic [9,15]. The extrinsic noise is due to network effects and signal transmission over cells, whereas the intrinsic noise is generated at the cell level.
In the extrinsic noise case, one adds a noise term explicitly while modeling the evolution of the state of the nerve cells, which leads to stochastic differential equations. One of the classical models in this context is the noisy leaky integrate-and-fire model, which is structured by the membrane potential [1,2,3,4,5,10,13]. To account for intrinsic variability and randomness in the firing of neurons, the main assumption in mathematical models is that a firing event can take place at any time according to a certain probability rate [8]. The implementation of intrinsic noise influences the firing time, and the process can be completely characterized by the amount of time elapsed since the last spike (action potential). Motivated by this, many neuronal models in the literature neglect the mechanisms responsible for spike generation and instead describe the dynamics of neurons structured by the discharge times [20,21,22,23]. Numerical experiments [29,30] suggest that the population models provide insight into the mechanisms which regulate various forms of activity in neuronal populations [22,23]. In [24,25,26], neuronal population dynamics models are based on occurrence times of events. In [6], analytical links were established between the Fokker-Planck equation, which arises in membrane potential-structured models, and the McKendrick-von Foerster equations, which are age-structured systems. A microscopic model, corresponding to the models described in [21,22,23,29,30] for a finite number of neurons, is given in [31].
The work presented in this paper is based on the partial differential equation models of evolution developed in [12,21,22,23]. Homogeneous neural networks are those networks in which neural activities are driven by the same dynamics, all the neurons are excitatory, and the interactions between the neurons are taken into account through a global neural activity. In [23], the authors model homogeneous neural networks through the time evolution of neurons structured by the 'time after the last discharge' (or age) s. More precisely, the dynamics of neurons are described by the probability density n(s, t) of finding a neuron in state s, i.e., time elapsed since the last discharge, at time t. Let X represent the environment due to the global neural activity in a network of connectivity J and let p(s, X) represent the firing rate of neurons in the state s and environment X. Then the density n satisfies a partial differential equation of evolution type (in particular of McKendrick-von Foerster type [27]):

n_t(s, t) + n_s(s, t) + p(s, X(t)) n(s, t) = 0,   s > 0, t > 0,
n(0, t) = ∫_0^∞ p(s, X(t)) n(s, t) ds,   t > 0,
X(t) = J ∫_0^∞ p(s, X(t)) n(s, t) ds,   t > 0,
n(s, 0) = n_0(s),   s > 0.   (1)

The boundary condition, at state s = 0, makes the neurons reenter the cycle after firing. We notice that the authors assume that the firing rate p is increasing with respect to s, increasing with respect to the global activity variable X, and converges asymptotically to 1 as s and X go to infinity. These assumptions model the biological property of neurons to discharge in response to a stimulation and to recover their excitability in time. Mathematical difficulties, such as proving existence and uniqueness of the solution and studying the long time behavior of the solution to (1), are due to the nonlinear coupling between X and n. Under some assumptions on the function p and the connectivity, the authors prove existence and uniqueness through a Banach-Picard fixed point theorem. They use an entropy method [18] to study the convergence, as time goes to infinity, of the network to a steady state.
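The conservative character of the boundary condition in (1) can be seen from a one-line formal computation (under the additional assumption, made here only for illustration, that n(s, t) vanishes as s → ∞): integrating the first equation of (1) over s gives

d/dt ∫_0^∞ n(s, t) ds = n(0, t) − ∫_0^∞ p(s, X(t)) n(s, t) ds = 0,

so the neurons that fire at time t are exactly re-injected at age s = 0 and the total mass of the network is conserved.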
Later, the authors of [12] considered an infinite family of inhomogeneous networks, parametrized by a real number σ, with different refractory periods. All these sub-networks are coupled through their mean activity. In this model, the population of neurons is described by the probability density n(s, t; σ) of finding a neuron in state s, where s is the time elapsed since the last discharge, at time t in the sub-network σ. Let p(s, X; σ) denote the firing rate of neurons in the state s and environment X in the sub-network σ. The main assumption in their model is that the synaptic weights are the same for all the networks. This implies that the density n is governed by the following partial differential equation of evolution (similar to (1)):

n_t(s, t; σ) + n_s(s, t; σ) + p(s, X(t); σ) n(s, t; σ) = 0,   s > 0, t > 0, σ ∈ R,
n(0, t; σ) = ∫_0^∞ p(s, X(t); σ) n(s, t; σ) ds,   t > 0, σ ∈ R,
X(t) = J ∫_R ∫_0^∞ p(s, X(t); σ) n(s, t; σ) ds dσ,   t > 0,
n(s, 0; σ) = n_0(s; σ),   s > 0, σ ∈ R.   (2)

The mathematical difficulties in this case are the same as those for (1), and the authors of [12] use the same tools to investigate the long time behavior of the solution to (2).
In this paper we consider an infinite family of inhomogeneous neuronal networks parametrized by a real number σ ∈ R. Moreover, we assume that each network has its own intrinsic dynamics and refractory period, and that all the networks are coupled by means of their mean activity. Further, we assume that the synaptic weights are given by the same constant J, which models the connectivity. As in the earlier papers, p (the instantaneous firing rate) and X describe the post-discharge recovery of the neural membrane and the global neural activity, respectively. We denote by n(s, t; σ) the probability density of neurons at time t in the state s (where s, as in the earlier works, is the time elapsed since the last discharge). We take into account the variability/randomness in the phenomena that cause the firing of neurons by incorporating a diffusive effect through the second order derivative (with respect to s) of the probability density of neurons n(s, t; σ) in system (2). Thus, we study the following model: for σ ∈ R,

n_t(s, t; σ) + n_s(s, t; σ) + p(s, X(t); σ) n(s, t; σ) = n_ss(s, t; σ),   s, t > 0,
n(0, t; σ) − n_s(0, t; σ) = ∫_0^∞ p(s, X(t); σ) n(s, t; σ) ds,   t > 0,
X(t) = J ∫_R ∫_0^∞ p(s, X(t); σ) n(s, t; σ) ds dσ,   t > 0,
n(s, 0; σ) = n_0(s; σ),   s > 0.   (3)

When the diffusion term is absent (diffusion constant equal to zero), the model (3) reduces to the time elapsed inhomogeneous neuron network model considered in [12]. It is worth noticing that the quantity X, which is coupled with the partial differential equation, satisfies a nonlinear integral equation; we make some technical assumptions on p to establish the existence of a solution to this problem. The role that X plays in the model is similar to that of the weighted population (see [11,28]). Now, on integrating (3) with respect to s, we get

∫_0^∞ n(s, t; σ) ds = ∫_0^∞ n_0(s; σ) ds =: g(σ),   t ≥ 0.

Here g(σ) denotes the probability density of neural networks parametrized by σ. Therefore it is natural to assume that n_0 satisfies ∫_R ∫_0^∞ n_0 ds dσ = ∫_R g(σ) dσ = 1.
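The identity ∫_0^∞ n(s, t; σ) ds = g(σ) follows from the same formal computation as for (1), now keeping track of the diffusive flux (and assuming, for the purpose of this remark, that n and n_s vanish as s → ∞): integrating the first equation of (3) in s gives

d/dt ∫_0^∞ n(s, t; σ) ds = ( n(0, t; σ) − n_s(0, t; σ) ) − ∫_0^∞ p(s, X(t); σ) n(s, t; σ) ds = 0,

so the Robin-type boundary condition in (3) exactly compensates the outgoing flux of firing neurons and each sub-network keeps its mass g(σ).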
As before, the difficulties in proving existence, uniqueness and convergence as time goes to infinity come from the nonlinear coupling between the environment (macroscopic variable) and the neuron density n (microscopic variable). In Section 2, we give assumptions on the firing rate of neurons p which ensure existence and uniqueness of the solution to (3) (Theorem 2.1) and its long time convergence to a steady state (Theorems 2.2 and 2.3). Theorem 2.1 focuses on the existence and uniqueness of the solution to (3); its proof is based on the Schauder fixed point theorem and on the L^1 stability that we establish. It is important to notice that the assumptions used to prove the existence and uniqueness result in this article are weaker than those used in [22], where the Banach fixed point theorem is employed. Theorem 2.2 provides the existence of a steady state, and the convergence of solutions to the steady state is given by Theorem 2.3. The proof of the convergence is based on entropy results and compactness. Moreover, we add Theorem 3.1, which, under stronger assumptions, gives the exponential convergence to the steady state. For the sake of readability, the proofs of Theorems 2.1 and 2.3 are given in Section 3, and the most technical parts of these proofs are given in the Annex (Section 5). Conclusions are provided in Section 4.

2.
Assumptions and main results. Throughout the article, we assume that p is a nonnegative continuous function with 0 ≤ p ≤ 1. Furthermore, we assume that, for every (x, σ), the firing rate tends to 1 as the age goes to infinity, with p(·, x; σ) − 1 ∈ L^2(R_+) (assumption (5)), and that for every (s, σ) ∈ R_+ × R the map x → p(s, x; σ) belongs to C^0(R_+) and is increasing (assumption (6)).
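For concreteness, one admissible firing rate (a purely illustrative choice; the article does not impose a specific form) is

p(s, x; σ) = 1 − exp( −(1 + x) s / (1 + σ^2) ),

which takes values in [0, 1), is continuous, is increasing in the environment x for every s > 0, tends to 1 as the age s goes to infinity, and satisfies p(·, x; σ) − 1 ∈ L^2(R_+); here the σ-dependence models a slower recovery of excitability in some sub-networks.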
We notice that assumptions (5)-(6) mean that the firing rate of neurons, i.e., the probability of firing p, in the 'state s' (age after firing) and in an environment X, increases with the environment and that, when the age after firing goes to infinity, a new discharge happens almost surely. We generalize the assumptions given in [12,21,22,23], which model the discharge behavior of neurons. In fact, (5)-(6) are enough to prove the existence of a solution to (3). Moreover, we assume the smallness condition (7), which means that the variation of the discharge probability with respect to the environment is small enough. Assumption (7) is indeed enough to prove uniqueness of the solution to (3). We now define the steady state equations corresponding to (3) as

N_s(s; σ) + p(s, X*; σ) N(s; σ) = N_ss(s; σ),   s > 0, σ ∈ R,
N(0; σ) − N_s(0; σ) = ∫_0^∞ p(s, X*; σ) N(s; σ) ds,   σ ∈ R,
X* = J ∫_R ∫_0^∞ p(s, X*; σ) N(s; σ) ds dσ,
∫_0^∞ N(s; σ) ds = g(σ),   σ ∈ R.   (8)

The steady state, defined as a solution to (8), plays a crucial role in the study of the long time behavior of the solution to (3). We begin with the statement of the existence and uniqueness result in Theorem 2.1. Then we focus on the existence of the steady state (8) in Theorem 2.2. Finally, in Theorem 2.3, we discuss the convergence of the solution of equation (3) to the steady state as t tends to infinity.

Theorem 2.1. Assume that (5) and (6) are satisfied and that the initial condition n_0 is admissible (nonnegative, square integrable and normalized as above). Then, for every T > 0, there exists a solution n to (3), belonging to L^2 on [0, T] in the functional framework made precise in Section 3. Moreover, if (7) holds, then the solution to (3) is unique.
Proof. (Outline) We first prove the existence of a solution n to an auxiliary problem (10), posed for σ ∈ R, and later show that (10) is equivalent to (3) by establishing (19). The advantage of this formulation is that we can work in the Hilbert space L^2(R_+ × R). As p − 1 ∈ L^2(R_+), we seek the solution n to (10) in a time dependent L^2-space, so that the boundary term in (10) is well defined. The proof is quite classical and uses a fixed point argument (in particular the Schauder fixed point theorem). Let T > 0, let X denote the time dependent L^2 space in which we seek the solution, and, for a given n ∈ X, let p̄ be a solution to (12); the map T associates to n the solution of the corresponding linear problem (11). We show that T has a fixed point, which is indeed a solution to (10). In order to prove uniqueness, we establish the stability estimate (24). Details of the proof are given in Section 3.1.
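Schematically, the construction is the following: given n ∈ X, the auxiliary rate p̄ solves an implicit relation of the form

p̄(s, t; σ) = p( s, J ∫_R ∫_0^∞ p̄(s', t; σ') n(s', t; σ') ds' dσ' ; σ ),

so that (11) is, in essence, system (3) with the nonlinear rate p(s, X(t); σ) replaced by the frozen rate p̄(s, t; σ); at a fixed point of T, the quantity J ∫_R ∫_0^∞ p̄ n ds dσ plays the role of the activity X(t).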
We now state the existence result for the steady state.

Theorem 2.2. Assume that (6) and (7) hold. Then there exists a unique solution N to (8).

Proof. We notice that the stationary problem (8) can be reformulated in terms of an operator Γ. The map Γ is strongly positive and compact (assumption (7) implies the uniqueness of p̄). Therefore, using the Krein-Rutman theorem, we obtain the existence of N, a positive eigenfunction of Γ. Integrating the equation Γ(N) − λN = 0 (where λ is the eigenvalue of Γ associated with N), we find that λ ∫_R ∫_0^∞ N ds dσ = 0, which means that λ = 0 (and N is a solution to (8)). Since ∫_0^∞ N(s; σ) ds = g(σ), N is uniquely defined.
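For the reader's convenience, we recall a classical form of the Krein-Rutman theorem (stated here in its standard linear version): if Γ is a compact, strongly positive linear operator on an ordered Banach space whose positive cone has nonempty interior, then its spectral radius r(Γ) is a simple eigenvalue of Γ, associated with an eigenfunction lying in the interior of the cone.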
Finally, we state the result pertaining to the asymptotic behavior of the solution to (3); in fact, we establish desynchronization. Theorem 2.3 asserts that, under (7) and a smallness condition on the connectivity J, the solution n to (3) converges, as t → ∞, to the steady state N given by Theorem 2.2. The proof is given in Section 3.2.
Moreover, under some technical assumptions on p, L and J, we prove that n converges exponentially to the steady state N (see Theorem 3.1).
3.
Proof of the main theorems. This section is subdivided into two subsections. In the first subsection we prove Theorem 2.1, whereas in the second one we prove the long time behavior result, i.e., Theorem 2.3.

3.1.
Existence and uniqueness results. The objective of this subsection is to establish the existence and uniqueness of solutions to (3). In particular, we provide the details of the proof of Theorem 2.1. The interesting parts of the proof are the existence of p̄ satisfying (12) for a given n and the L^1-stability estimate.
Proof of Theorem 2.1. The proof is divided into four main steps.
Step 1: In this step we show that T is a well defined map.
Since p is nonnegative, p̄^1 ≥ 0 = p̄^0. As p is increasing with respect to x, by induction one can show that p̄^{k+1}(s, t; σ) ≥ p̄^k(s, t; σ) for s > 0, t > 0, σ ∈ R, and that 0 ≤ p̄^k ≤ 1. Therefore the sequence (p̄^k) converges to some function, say p̄. Moreover, by the Lebesgue dominated convergence theorem, ∫_R ∫_0^∞ p̄^k(s, t; σ) n(s, t; σ) ds dσ converges to ∫_R ∫_0^∞ p̄(s, t; σ) n(s, t; σ) ds dσ, and finally, by the continuity of p, we obtain that p̄ is a solution to (12). Using standard arguments, it is straightforward to show the existence and uniqueness of a solution n ∈ X to (11) (see [11], for instance). Hence the map T is well defined.
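For definiteness, the monotone approximation used in this step can be written, schematically and assuming a nonnegative connectivity J and a nonnegative density n, as

p̄^0 ≡ 0,   p̄^{k+1}(s, t; σ) = p( s, J ∫_R ∫_0^∞ p̄^k(s', t; σ') n(s', t; σ') ds' dσ' ; σ ),   k ≥ 0,

so that the monotonicity of p with respect to its second argument propagates the ordering p̄^{k+1} ≥ p̄^k from one step to the next.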
Step 2: In this step we prove that T is a compact operator.
To prove the compactness of T, we use the Lions-Aubin lemma ([14]). Indeed, if n is a solution to (11), then the bounds (15)-(20) hold, where C_T is a constant which depends only on T, p̄ and n_0 (see Annex 5.1 for a proof). Therefore, from the estimates (15)-(20), it follows that T is a compact operator. Moreover, since the constant function equal to one is a supersolution to (10), we have the following inequality:

n(s, t; σ) ≤ 1,   s > 0, t > 0, σ ∈ R.   (21)
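For the reader's convenience, we recall the form of the Lions-Aubin lemma typically invoked in this kind of argument (a standard statement; the precise spaces are those dictated by the estimates (15)-(20)): if X_0 ⊂ X ⊂ X_1 are Banach spaces, the embedding X_0 ⊂ X is compact and X ⊂ X_1 is continuous, and if a sequence (u_k) is bounded in L^2(0, T; X_0) while (∂_t u_k) is bounded in L^2(0, T; X_1), then (u_k) is relatively compact in L^2(0, T; X).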
Step 3: In this step we prove the existence of a solution to (10).
Since the operator T is compact on X, thanks to the Schauder fixed point theorem there exists a fixed point of T, which is a solution to (10) satisfying all the estimates given in (15)-(21). This proves the existence of a solution to (10).
Step 4: We now turn our attention to the uniqueness of a solution to (3), which follows from the stability estimate (24).

3.2.
Long time behavior.
In this subsection, we present results pertaining to the asymptotic behavior of the solution to (3). The GRE inequality (see the Annex), together with the LaSalle principle, gives the convergence of the solution to (3) to the corresponding steady state.
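In the notation used below (u = n − N, and Ψ a suitable nonnegative weight with Ψ_s ≤ 0), the quantity monitored throughout this subsection, denoted H(t) here for convenience, is the weighted L^1 distance to the steady state,

H(t) := ∫_R ∫_0^∞ Ψ(s; σ) |n(s, t; σ) − N(s; σ)| ds dσ,

and inequality (26) expresses that H is nonincreasing in time.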
In view of inequalities (22) and (23) (and the decay of x → 1 − p(·, x, ·)), it follows that Ψ_s ≤ 0 for s > 0, hence Ψ ≤ 1, and we deduce inequality (26). Therefore the quantity ∫_R ∫_0^∞ |u| Ψ ds dσ decreases as long as u does not vanish. To prove the convergence, we use the LaSalle principle; the proof is similar to the one given in [19]. Let (t_k)_k be a sequence of positive numbers such that t_k → ∞, and set n_k(s, t; σ) := n(s, t + t_k; σ) and p_k(s, t; σ) := p(s, X(t + t_k); σ). Using inequality (26) and integrating in time, we find a uniform L^1 bound. We now recall that (n_k)_k belongs to a compact subset of L^2 (see the proof of Theorem 2.1). Moreover, since (1 − p_k) is uniformly bounded in L^2, the Banach-Alaoglu theorem implies that (1 − p_k)_k belongs to a weakly compact set of L^2, and there exist subsequences (still denoted by p_k, n_k) converging to limits p*, n*, with

X* = J ∫_R ∫_0^∞ p(s, X*; σ) n*(s; σ) ds dσ.
Thus we have n* = N wherever p* does not vanish, i.e., p* = p(s, X*; σ) with X* = J ∫_R ∫_0^∞ p(s, X*; σ) N(s; σ) ds dσ. By uniqueness of the stationary solution (see Theorem 2.2), we find that n* = N everywhere. Therefore, |n_k − N| converges to 0 as k → ∞. To conclude, it suffices to notice that ∫_R ∫_0^∞ |u| Ψ ds dσ is decreasing by (26), which implies the convergence of |u| to 0 as t → ∞.
In the next result, we provide a condition which ensures the convergence of the solution to (3) to the steady state at an exponential rate.
Theorem 3.1. Under additional technical assumptions on p, L and J, and assuming that there exists β > 0 such that (29) holds, the solution n to (3) converges exponentially to the steady state N.
Proof. We begin by setting u = n − N. Then u is a solution to equation (30). We next estimate |X(t) − X*|: a direct computation yields (31). Combining (31) and (30), we obtain a differential inequality for the weighted L^1 norm of u. Thanks to assumption (29), the Gronwall lemma then gives the promised result.
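Schematically, the Gronwall step takes the following form: once the estimates above are combined into a differential inequality of the type

d/dt H(t) ≤ −c H(t),   H(t) = ∫_R ∫_0^∞ |u(s, t; σ)| Ψ(s; σ) ds dσ,

for some constant c > 0 depending on β, J and the assumptions on p, one concludes that H(t) ≤ e^{−c t} H(0), which is the announced exponential convergence of n to N in the weighted L^1 norm.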

4.
Conclusions.
In this paper we have proposed a neural network model which describes the dynamics of inhomogeneous neuronal networks with a diffusive effect accounting for variability/randomness. Our model is an extension of the nonlinear renewal model proposed in [23,12], where the density of neurons follows a nonlinear renewal partial differential equation; here the randomness is modeled by a diffusive effect. We have proved that, under certain technical conditions on the firing rate and the connectivity, the problem is well posed. In fact, we have established an L^1-stability estimate for our model. The proofs are slightly different from those proposed in [23,12], as we have used compactness arguments to prove existence instead of a contraction argument. In fact, we could establish the existence of a solution to (3) under weaker assumptions on the firing rate p. Moreover, using the LaSalle principle, we have proved that the density of neurons, solution to our neural network model, converges to the steady state of the model when the synaptic weight (connectivity) is sufficiently small. In order to study the asymptotic behavior of the solution of our model, we have used the General Relative Entropy inequality, which is natural in structured population models. Finally, we have shown that the convergence of the neuron density to the steady state is exponential whenever the connectivity is sufficiently small.
In a future work, it would be interesting to extend the existence and stability results to periodic solutions of our model (3). We anticipate that the compactness argument used to prove existence and uniqueness, and the entropy result used to prove convergence in this article, are also well suited to periodic problems. Another open and interesting question is the optimality of the assumptions needed to prove the convergence (and the existence of a steady state).

5.
Annex. In this section we prove the a priori estimates stated in Section 3 and present the General Relative Entropy (GRE) inequality.

5.1.
Proof of estimates (a priori bounds). In order to prove (15), we first multiply (11) by n and integrate with respect to s; this gives (33). Using the Cauchy inequality and the Gronwall lemma, we obtain the first bound (15) (a schematic version of this computation is given at the end of this subsection). Integrating (33) with respect to time t ∈ [0, T] and using estimate (15), we find that (16) holds. We now turn our attention to (19). Let ψ ∈ H^1(R_+), to be chosen later. Multiplying (11) by ψ, integrating with respect to s and integrating by parts, we obtain the corresponding weak formulation.

A straightforward computation then simplifies the resulting expression. Again using Hölder's inequality and, in view of (15), we obtain that G_R(τ) → 0 as R → ∞, almost everywhere.
Therefore, by the Lebesgue dominated convergence theorem, we have ∫_0^t G_R(u) e^{u−t} du → 0 as R → ∞. Now, using estimate (16) and letting R → ∞ in (35), we get (19).
In order to prove (17), we multiply the first equation in (3) by s^2 and integrate with respect to s; integrating the resulting identity with respect to t proves that (17) holds. Using the estimate 2s ≤ 1 + s^2 together with (17) and (19), it is straightforward to obtain inequality (18). Finally, integrating with respect to time and using inequalities (15)-(19), we find (20).
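For orientation, the formal computation behind the first bound (15) is the classical parabolic energy estimate. Assuming enough decay as s → ∞, that (11) keeps the boundary condition of (3) with p replaced by p̄, and writing A(t; σ) := ∫_0^∞ p̄ n ds (the notation A is introduced only for this remark), multiplying (11) by n and integrating in s gives

(1/2) d/dt ∫_0^∞ n^2 ds + ∫_0^∞ n_s^2 ds + ∫_0^∞ p̄ n^2 ds = n(0, t; σ) A(t; σ) − (1/2) n(0, t; σ)^2 ≤ (1/2) A(t; σ)^2,

where the boundary terms have been simplified using n(0, t; σ) − n_s(0, t; σ) = A(t; σ); the Cauchy inequality and the Gronwall lemma then yield bounds of the type (15)-(16).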

5.2.
GRE inequality.
In this subsection we prove the GRE inequality, which plays a crucial role in the study of the long time behavior of the solution to (3). GRE inequalities for structured population models can be found in [11,16,17,18,27,28]. Using the first equation in (11) and, again, the boundary condition, we find that (36) is satisfied.