A MODEL-BASED RULE FOR SELECTING SPIKING THRESHOLDS IN NEURON MODELS

Abstract. Determining excitability thresholds in neuronal models is of high interest due to its applicability in separating spiking from non-spiking phases of neuronal membrane potential processes. However, excitability thresholds are known to depend on various auxiliary variables, including any conductance or gating variables. Such dependences are a double-edged sword: they are natural consequences of the complexity of the model, but they prove difficult to apply in practice, since gating variables are rarely measured. In this paper a technique for finding excitability thresholds, based on the local behaviour of the flow in dynamical systems, is presented. The technique incorporates the dynamics of the auxiliary variables, yet only produces thresholds for the membrane potential. The method is applied to several classical neuron models and the threshold's dependence upon external parameters is studied, along with a general evaluation of the technique.

1. Introduction. One of the most essential properties of a neuronal model is its ability to capture both the active spiking phases (fast large-amplitude oscillations) and the inactive resting phases (weakly nonlinear oscillations), [4]. The concept of excitability thresholds, which in general is not well defined, is an ad hoc characteristic separating these two "domains". Excitability thresholds are, nevertheless, essential in many applications; e.g., when studying membrane potentials, data are often separated into active and inactive phases and analysed accordingly.
A vast body of literature exists on the subject of choosing spiking thresholds in neuronal models, including both experimentally based approaches and purely theoretical constructions based on some class of models. The focus of this paper is the latter. For experimentally based solutions see, e.g., [19] for a thorough presentation and benchmarking of some of the most common methods.
Some of the theoretical, model-based approaches rely on differential geometry and prove highly relevant in the context of detecting distinctive behaviour in dynamical systems, see, e.g., [7]. A classical approach is studying the inflection sets of the system, i.e., the region of the state space in which trajectories have vanishing curvature. This approach for detecting excitability thresholds was first proposed in 1976 [15], where it was applied to the Bonhoeffer-Van der Pol model. It was later reintroduced in [16] in connection with canards in chemical systems. For a thorough treatment in relation to excitability and canards, see, e.g., [4] and [21]. As discussed in [4], inflection methods are limited to planar systems. Another type of model-based threshold characterisation relies on studying the steepest slope of the membrane potential process. This, however, crucially depends on the state of the gating variables. Such dependences are studied in [18], in which the authors provide a threshold equation that yields an instantaneous threshold value as a function of the underlying ionic channel conductance. In [20] the excitability threshold in neuronal models is successfully captured as manifolds in the state space, stressing the dependence of the threshold on activation and inactivation variables.
The technique presented in this paper focuses on multidimensional neuronal models, in which an excitability threshold solely for the membrane potential is desired. This is favourable, as the gating variables are rarely measured. Though the technique produces a threshold independent of the gating variables, it still captures the overall dynamics of the system. The paper is outlined as follows: in section 2 the general framework is established along with the fundamental assumptions of the model. Moreover, the excitability threshold technique is motivated and derived. In section 3 the threshold rule is applied to six different neuron models for varying parameter settings. Finally, the advantages, drawbacks and general evaluation of the method are considered in section 4.

2. Framework and construction. Let E and Θ denote open subsets of R^d and R^p, respectively. Let f : E × Θ → R^d be a C^1-function and consider the initial value problem:

\dot{x}(t) = f(x(t), \theta), \qquad x(0) = x_0, \tag{1}

with x_0 ∈ E and θ ∈ Θ. A solution or trajectory of (1) is a function ϕ : R × E × Θ → E satisfying

\frac{\partial}{\partial t}\phi(t, x_0, \theta) = f(\phi(t, x_0, \theta), \theta), \qquad \phi(0, x_0, \theta) = x_0. \tag{2}

We say that f constitutes a dynamical system on the state space E. Standard existence and uniqueness results for solutions to (1) can be found in, e.g., [17]. Unless relevant, θ will be dropped from the notation. We will consider any neuronal model given as a dynamical system. We assume the first coordinate of x represents the electrical potential, denoted by v, and let u denote the additional variables. We therefore have the tensor structure: x = (v, u), f = (f_v, f_u) and ϕ = (ϕ_v, ϕ_u). Additionally, we assume f is C^2.
In the following we need the manifold

N = \{ x \in E : f_v(x) = 0 \}, \tag{3}

known as the v-nullcline. Any trajectory of the system has its marginal v-stationary points on N. Consequently, all spikes occur on the manifold N, stressing the importance of N in relation to excitability thresholds. In figure 1, trajectories in reverse time of different systems are plotted (see section 3 and appendix A for details). Though they are initialised at equally spaced points on N, we observe clustering of the trajectories.
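The claim that local v-maxima lie on N can be checked numerically. The sketch below is an illustration only: it uses the FitzHugh-Nagumo model with assumed parameter values (not taken from this paper), simulates a trajectory, and verifies that f_v vanishes, up to discretisation error, at every local maximum of the voltage trace.

```python
# Illustrative check: local v-maxima of a trajectory lie on the v-nullcline N.
# Model and parameters (FitzHugh-Nagumo, I=0.5, a=0.7, b=0.8, eps=0.08) are
# assumptions for illustration, not the paper's specification.

def f(x, I=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = x
    return (v - v**3 / 3 - w + I,      # f_v: voltage dynamics
            eps * (v + a - b * w))     # f_u: recovery dynamics

def rk4_step(x, dt):
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + dt * ki for xi, ki in zip(x, k3)))
    return tuple(xi + dt / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for xi, c1, c2, c3, c4 in zip(x, k1, k2, k3, k4))

# Simulate and record the trajectory.
dt, n_steps = 0.01, 20000
x, traj = (-1.0, 1.0), []
for _ in range(n_steps):
    traj.append(x)
    x = rk4_step(x, dt)

# At every discrete local maximum of v, f_v should be (numerically) zero.
residuals = [abs(f(traj[i])[0])
             for i in range(1, n_steps - 1)
             if traj[i - 1][0] < traj[i][0] > traj[i + 1][0]]
print(max(residuals))  # small residual: spikes occur on N
```

The residual is bounded by the sampling error of the discrete maximum, so it shrinks with the step size.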
Because the flow is reversed, trajectories are highly sensitive to their initial conditions when initialised close to the clustered trajectories: trajectories to the left curve towards the inactive regime and trajectories to the right curve towards the active regime. Hence, in this figure, the clustered trajectories are closely related to the inflection sets of the state spaces, and they are also closely linked to the manifolds studied in [20]. This clustering of trajectories is therefore closely linked to the transition between active and inactive phases, and it builds the foundation of the threshold technique outlined below. Generally, the clustering is characterised by how the flow scales volumes. In figure 1, equally spaced initialisations on N (corresponding to equal volumes) are squeezed together or separated from each other by the flow. Consequently, clustering of trajectories occurs when volumes are considerably scaled down by the reverse flow, or, equivalently, scaled up by the non-reversed flow.
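The reverse-time experiment behind figure 1 can be sketched in a few lines. The model and parameter values below (FitzHugh-Nagumo) are illustrative assumptions; the sketch integrates the flow backwards from points on N and, as a sanity check on the integration, runs the endpoints forwards again to recover the initial conditions (the flow is invertible, so the roundtrip error should be at the level of the integrator's accuracy).

```python
# Sketch of reverse-time integration from points on the v-nullcline, in the
# spirit of figure 1. FitzHugh-Nagumo with illustrative parameters (assumed).

I, a, b, eps = 0.5, 0.7, 0.8, 0.08

def f(x, sign=1.0):
    """Vector field; sign=-1.0 gives the reverse-time flow."""
    v, w = x
    return (sign * (v - v**3 / 3 - w + I),
            sign * eps * (v + a - b * w))

def rk4(x, dt, n, sign=1.0):
    for _ in range(n):
        k1 = f(x, sign)
        k2 = f((x[0] + 0.5 * dt * k1[0], x[1] + 0.5 * dt * k1[1]), sign)
        k3 = f((x[0] + 0.5 * dt * k2[0], x[1] + 0.5 * dt * k2[1]), sign)
        k4 = f((x[0] + dt * k3[0], x[1] + dt * k3[1]), sign)
        x = (x[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             x[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return x

# Points on N: here the v-nullcline satisfies w = v - v^3/3 + I.
starts = [(v, v - v**3 / 3 + I) for v in (-0.3, 0.3)]

# Integrate backwards, then forwards again: the roundtrip should return to
# the starting point up to integration error, since the flow is invertible.
dt, n = 0.001, 2000
errors = []
for x0 in starts:
    back = rk4(x0, dt, n, sign=-1.0)
    roundtrip = rk4(back, dt, n, sign=1.0)
    errors.append(max(abs(roundtrip[0] - x0[0]), abs(roundtrip[1] - x0[1])))
print(errors)  # roundtrip errors are tiny
```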
Formally, the scaling of volumes is quantified by the Jacobian of the flow x ↦ ϕ(t, x),

D_x \phi(t, x). \tag{4}

In order to make this characteristic tractable, we consider the infinitesimal scaling of volumes at t = 0. Standard results yield

\det D_x \phi(t, x) = 1 + t\,\nabla f(x) + t\,\varepsilon(t), \tag{5}

for some function ε vanishing at t = 0. Consequently,

\frac{d}{dt} \det D_x \phi(t, x) \Big|_{t=0} = \nabla f(x), \tag{6}

where ∇ is the divergence operator. Motivated by the above, the proposed threshold, v_thr, is given by

v_{\mathrm{thr}} = \pi_v \Big( \operatorname*{argmax}_{x \in N} \nabla f(x) \Big), \tag{7}

where π_v is the projection onto the v-coordinate.
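The identity d/dt det D_xϕ(t, x)|_{t=0} = ∇f(x) can be verified numerically: for small t, (det D_xϕ(t, x) − 1)/t should approach the divergence. A sketch, using the FitzHugh-Nagumo field with assumed illustrative parameters and estimating the flow Jacobian by finite differences:

```python
# Numerical check that the infinitesimal volume scaling of the flow equals
# the divergence of the vector field. Example field: FitzHugh-Nagumo with
# illustrative parameters (an assumption, not the paper's values).

I, a, b, eps = 0.5, 0.7, 0.8, 0.08

def f(x):
    v, w = x
    return (v - v**3 / 3 - w + I, eps * (v + a - b * w))

def flow(x, t, n=100):
    """Approximate phi(t, x) with n RK4 steps."""
    dt = t / n
    for _ in range(n):
        k1 = f(x)
        k2 = f((x[0] + 0.5 * dt * k1[0], x[1] + 0.5 * dt * k1[1]))
        k3 = f((x[0] + 0.5 * dt * k2[0], x[1] + 0.5 * dt * k2[1]))
        k4 = f((x[0] + dt * k3[0], x[1] + dt * k3[1]))
        x = (x[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             x[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return x

def flow_jacobian_det(x, t, h=1e-6):
    """det D_x phi(t, x) via central finite differences in x."""
    cols = []
    for i in range(2):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        fp, fm = flow(tuple(xp), t), flow(tuple(xm), t)
        cols.append(((fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h)))
    return cols[0][0] * cols[1][1] - cols[0][1] * cols[1][0]

x0, t = (0.1, 0.6), 1e-3
estimate = (flow_jacobian_det(x0, t) - 1.0) / t
divergence = (1 - x0[0]**2) - eps * b   # d f_v/dv + d f_w/dw, analytically
print(estimate, divergence)  # the two values agree up to O(t)
```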
Clearly, the applicability of v_thr relies on the existence and uniqueness of a maximising argument for ∇f(x) on N. For instance, v_thr does not exist in linear systems. However, as seen in section 3, v_thr is well defined in the classical neuron models. Moreover, heat maps of ∇f for four bi-dimensional models are presented in figure 2.
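The degeneracy in linear systems is easy to see in code: for a linear field f(x) = Ax the divergence equals tr(A) everywhere, so the maximum over N is never uniquely attained and the threshold is undefined. A minimal sketch with an arbitrary illustrative matrix:

```python
# For a linear field f(x) = A x the divergence is constant (equal to tr(A)),
# so argmax over N is not uniquely attained and v_thr is undefined.
# The matrix A is an arbitrary illustrative choice.

A = [[-0.5, -1.0],
     [ 0.2, -0.3]]

def f(x):
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

def divergence(x, h=1e-6):
    """Numerical divergence of f at x via central differences."""
    total = 0.0
    for i in range(2):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (f(tuple(xp))[i] - f(tuple(xm))[i]) / (2 * h)
    return total

trace = A[0][0] + A[1][1]
samples = [(0.0, 0.0), (1.0, -2.0), (3.5, 0.7)]
print([round(divergence(x), 6) for x in samples], trace)  # all equal tr(A)
```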
As mentioned, one of the most important properties of a neuronal model is its ability to capture both active and inactive phases and their separate distinctive behaviour. Furthermore, the transition between the two regimes must be fast; otherwise the model does not sufficiently reflect the "all-or-none" principle of excitability in neurons. From the model's perspective this implies that only finely tuned initialisations exhibit local v-maxima that can be considered neither active nor inactive. Such crossings of N are exactly those having large values of ∇f and are thus captured by (7). Therefore v_thr is well defined if the model considered sufficiently mimics the "all-or-none" principle.
3. Applications. In the case of the Hodgkin-Huxley and the Connor-Stevens models, computing v_thr is especially simple, as evaluating (8) amounts to solving a linear programming (LP) problem. Apart from the FitzHugh-Nagumo model, none of the models admits a closed-form expression for v_thr, and the thresholds are therefore evaluated numerically. The results for varying input current, I, are visualised in figure 3. The excitability threshold suggested by the above technique lies at the typically proposed level for the different models. Moreover, the threshold increases with I.
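The numerical evaluation can be as simple as a grid search over the v-nullcline. The sketch below does this for the FitzHugh-Nagumo model, whose threshold admits a closed form; the parameter values are illustrative assumptions, and the divergence on the nullcline happens not to depend on them in a way that moves the maximiser, which projects to v = 0.

```python
# Grid-search evaluation of the proposed threshold v_thr for the
# FitzHugh-Nagumo model: maximise the divergence of f over the v-nullcline
# and project onto the v-coordinate. Parameters are illustrative assumptions.

I, a, b, eps = 0.5, 0.7, 0.8, 0.08

def nullcline_w(v):
    # N: f_v = v - v^3/3 - w + I = 0  =>  w = v - v^3/3 + I
    return v - v**3 / 3 + I

def divergence(v, w):
    # d f_v/dv + d f_w/dw = (1 - v^2) - eps * b; independent of w here.
    return (1 - v**2) - eps * b

grid = [-2.0 + 0.01 * k for k in range(401)]          # v-grid on [-2, 2]
v_thr = max(grid, key=lambda v: divergence(v, nullcline_w(v)))
print(v_thr)  # approximately 0, the closed-form FitzHugh-Nagumo threshold
```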
For the Hodgkin-Huxley and the Connor-Stevens model a sudden change occurs around I = 3.3 and I = −4.9, respectively. As indicated, these are not discontinuities, but the results of changing active constraints in the LP problem of evaluating (8). The sudden change is not associated with bifurcations (the closest bifurcation takes place at I = 9.78 for the Hodgkin-Huxley model, see [12]). In fact, the threshold rule seems to be unaffected by any of the bifurcations occurring when tuning I. This emphasises that v_thr is not related to whether the system promotes spiking behaviour or not, but to how local v-maxima are separated by the dynamics. Finally, we consider the FitzHugh-Nagumo model. Straightforward calculations yield v_thr = 0; hence the threshold in this particular model is independent of the parameters. Again, this stresses the interpretation of v_thr: it measures where the mimicking of the "all-or-none" principle is most prominent in the model. Hence, for varying parameters, v = 0 still acts as the separation of local v-maxima.

4. Discussion. In the above, an excitability threshold for the membrane potential in neuronal models has been presented. It applies to multidimensional neuronal models given as ODEs and is relatively easy to evaluate. The threshold rule relies on the same classic considerations as other threshold rules, e.g., [4] and [20]. While still incorporating the full dynamics of the system, it provides thresholds in v only. Additionally, the requirement of bi-dimensionality, imposed in e.g., [4], is not necessary.
A drawback of the threshold rule is that it may not fully capture the complexity of models, as more sophisticated manifold-based threshold rules do, e.g., as in [20]. However, the technique presented above applies to situations in which only the membrane potential is observable.
Finally, the main drawback of the technique is that it only applies when \operatorname*{argmax}_{x \in N} \nabla f(x) is well defined. However, as pointed out in section 2, if the neuron model captures the "all-or-none" principle sufficiently well, then ∇f will be large whenever a transition from inactive to active phases occurs.

Acknowledgments. Many thanks to Susanne Ditlevsen for supervising the project that led to these results.
Appendix A. Models.
A.1. The Hodgkin-Huxley model. The Hodgkin-Huxley model, first presented in 1952 (see [8]), is the most influential model in neuroscience. It is a four-dimensional dynamical model governed by the following dynamics:

C \dot{v} = I - \bar{g}_{Na} m^3 h (v - E_{Na}) - \bar{g}_K n^4 (v - E_K) - g_L (v - E_L),

\dot{x} = \frac{x_\infty(v) - x}{\tau_x(v)}, \qquad x = n, m, h.

Here τ_x = 1/(α_x + β_x) and x_∞ = α_x/(α_x + β_x) for x = n, m, h, where α_x and β_x are the standard voltage-dependent rate functions of [8]. The parameters are listed in table 1 and taken from [12].
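For the threshold rule, the quantity needed from this model is the divergence ∇f, which is available in closed form: ∂f_v/∂v plus −1/τ_x(v) for each gate. The sketch below implements this using the classic 1952 rate functions and parameters as an assumption (the paper takes its parameters from [12], which are not reproduced here), and cross-checks the closed form against a finite-difference divergence.

```python
import math

# Divergence of the Hodgkin-Huxley field, as needed by the threshold rule.
# Rate functions and parameters follow the classic 1952 parameterisation
# (an assumption here; the paper's parameters come from [12]).

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL, I = 115.0, -12.0, 10.6, 0.0

def rates(v):
    an = 0.01 * (10 - v) / (math.exp((10 - v) / 10) - 1)
    bn = 0.125 * math.exp(-v / 80)
    am = 0.1 * (25 - v) / (math.exp((25 - v) / 10) - 1)
    bm = 4.0 * math.exp(-v / 18)
    ah = 0.07 * math.exp(-v / 20)
    bh = 1.0 / (math.exp((30 - v) / 10) + 1)
    return an, bn, am, bm, ah, bh

def f(x):
    v, n, m, h = x
    an, bn, am, bm, ah, bh = rates(v)
    dv = (I - gNa * m**3 * h * (v - ENa) - gK * n**4 * (v - EK)
          - gL * (v - EL)) / C
    return (dv,
            an * (1 - n) - bn * n,     # equivalent to (n_inf - n)/tau_n
            am * (1 - m) - bm * m,
            ah * (1 - h) - bh * h)

def divergence(x):
    """Closed-form divergence: df_v/dv + sum over gates of -(alpha+beta)."""
    v, n, m, h = x
    an, bn, am, bm, ah, bh = rates(v)
    dfv_dv = -(gNa * m**3 * h + gK * n**4 + gL) / C
    return dfv_dv - (an + bn) - (am + bm) - (ah + bh)

def divergence_fd(x, h_step=1e-5):
    """Finite-difference cross-check of the divergence."""
    total = 0.0
    for i in range(4):
        xp = list(x); xp[i] += h_step
        xm = list(x); xm[i] -= h_step
        total += (f(tuple(xp))[i] - f(tuple(xm))[i]) / (2 * h_step)
    return total

x = (5.0, 0.35, 0.1, 0.55)  # an arbitrary illustrative state
print(divergence(x), divergence_fd(x))  # the two estimates agree
```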
A.2. The Connor-Stevens model. Here τ_x = 1/(α_x + β_x) and x_∞ = α_x/(α_x + β_x). The parameters used in figures 1 and 2 are listed in table 2 and taken from [3].

A.4. A reduced Hodgkin-Huxley model. This bi-dimensional reduction of the Hodgkin-Huxley model is taken from [11]. It relies on two reductions: m is assumed instantaneous and the sum of the slower variables h and n remains constant at level K. The dynamics therefore reduce to

C \dot{v} = I - \bar{g}_{Na} m_\infty(v)^3 (K - n)(v - E_{Na}) - \bar{g}_K n^4 (v - E_K) - g_L (v - E_L),

\dot{n} = \frac{n_\infty(v) - n}{\tau_n(v)}.

We set K = 0.8 and the rest of the specifications are as in section A.1.
A.5. The Morris-Lecar model. Another classical example of a conductance-based neuron model is the Morris-Lecar model; see [13] for details. The dynamics are as follows:

C \dot{v} = I - g_{Ca} m_\infty(v)(v - E_{Ca}) - g_K w (v - E_K) - g_L (v - E_L),

\dot{w} = \phi\,\frac{w_\infty(v) - w}{\tau_w(v)},

where

m_\infty(v) = \tfrac{1}{2}\big(1 + \tanh((v - V_1)/V_2)\big), \quad w_\infty(v) = \tfrac{1}{2}\big(1 + \tanh((v - V_3)/V_4)\big), \quad \tau_w(v) = 1/\cosh\big((v - V_3)/(2 V_4)\big).

The parameter values used in this example are taken from [5] and are given in table 3.

A.6. The Abbott-Kepler reduction. The reduction of the Hodgkin-Huxley model given below is just one of many possible reductions. They all follow the same principle proposed by Abbott and Kepler ([1], [10]). In this paper we consider one such bi-dimensional reduction of the model in section A.1; the rest is specified there.