Numerical investigation of a neural field model including dendritic processing

We consider a simple neural field model in which the state variable is dendritic voltage, and in which somas form a continuous one-dimensional layer. This neural field model with dendritic processing is formulated as an integro-differential equation. We introduce a computational method for approximating solutions to this nonlocal model, and use it to perform numerical simulations for neurobiologically realistic choices of anatomical connectivity and nonlinear firing rate function. For the time discretisation we adopt an implicit-explicit (IMEX) scheme; the space discretisation is based on a finite-difference scheme to approximate the diffusion term and uses the trapezoidal rule to approximate integrals describing the nonlocal interactions in the model. We prove that the scheme is of first order in time and second order in space, and can be efficiently implemented if the factorisation of a small, banded matrix is precomputed. By way of validation we compare the outputs of a numerical realisation to theoretical predictions for the onset of a Turing pattern, and to the speed and shape of a travelling front for a specific choice of Heaviside firing rate. We find that theory and numerical simulations are in excellent agreement.

1. Introduction. Ever since Hans Berger made the first recording of the human electroencephalogram (EEG) in 1924, there has been a tremendous interest in understanding the physiological basis of brain rhythms. This has included the development of mathematical models of cortical tissue, which are often referred to as neural field models. The formulation of these models has not changed much since the seminal work of Wilson and Cowan, Nunez, and Amari in the 1970s, as recently described in [7]. Neural field and neural mass models approximate neural activity by assuming that cortical tissue is a continuous medium. They are coarse-grained spatiotemporal models, which lack important physiological mechanisms known to be fundamental in generating brain rhythms, such as dendritic structure and cortical folding. Nonetheless their basic structure has been shown to provide a mechanistic starting point for understanding whole brain dynamics, as described by Nunez [14], and especially that of the EEG.
Modern biophysical theories assert that EEG signals from a single scalp electrode arise from the coordinated activity of ∼10^8 pyramidal cells in the cortex. These are arranged with their dendrites in parallel and perpendicular to the cortical surface. When activated by synapses at the proximal dendrites, extracellular current flows parallel to the dendrites, with a net membrane current at the synapse. For excitatory (inhibitory) synapses this creates a sink (source) with a negative (positive) extracellular potential. Because there is no accumulation of charge in the tissue, the proximal synaptic current is compensated by other currents flowing in the medium, causing a distributed source in the case of a sink, and vice versa for a synapse that acts as a source. Hence, at the population level, the potential field generated by a synchronously activated population of cortical pyramidal cells behaves like that of a dipole layer. Although the important contribution that single dendritic trees make to generating extracellular electric field potentials has been known for some time, and can be calculated using Maxwell's equations [15], they are often not accounted for in neural field models. However, with the advent of laminar electrodes to record from different cortical layers, it is now timely to build on early work by Crook and coworkers [9] and by Bressloff, reviewed in [5], and to develop neural field models that incorporate a notion of dendritic depth. This will allow a significant and important departure from present-day neural field models, and recognise the contribution of dendritic processing to macroscopic large-scale brain signals. A simple way to generalise standard neural field models is to consider the dendritic cable model of Rall as the core component in a neural field, with source terms on the cable mediating long-range synaptic interactions. These in turn can be described with the introduction of an axo-dendritic connectivity function.
Here we consider a neural field model which treats the voltage on a dendrite as the primary variable of interest in a simple model of neural tissue. The model comprises a continuum of somas (a somatic layer, see schematic in Figure 1.1(a)). Dendrites are modelled as unbranched fibres, orthogonal to the somatic layer which, for simplicity, is one-dimensional and rectilinear (see Figure 1.1(b)). At each point along the somatic layer x ∈ R we envisage a fibre with coordinate ξ ∈ R. The voltage dynamics along the fibre is described by the cable equation, with a nonlocal input current arising as an integral over the outputs from the somatic layer (where ξ = 0). Denoting the voltage by V(x, ξ, t), we have an integro-differential equation for the real-valued function V : R² × R → R of the form

(1.1)    ∂_t V(x, ξ, t) = −γ V(x, ξ, t) + ν ∂_ξξ V(x, ξ, t) + G(x, ξ, t) + ∫_{R²} W(x, ξ, y, η) S(V(y, η, t)) dy dη,

posed on (x, ξ, t) ∈ R³, for some typically sigmoidal or Heaviside-type firing rate function S, and some external input function G. Here ν is the diffusion coefficient and 1/γ the membrane time constant of the cable. As we shall see below, it will be crucial for our analysis that currents flow exclusively along the fibres, that is, the diffusive term in (1.1) contains derivatives only with respect to ξ.
The model is completed with a choice of the generalised axo-dendritic connectivity function W. The nonlocal input currents arise from the somatic layer, hence they are transferred from sources in an ε-neighbourhood of ξ = 0, 0 < ε ≪ 1, to contact points in an ε-neighbourhood of ξ = ξ_0 on the cable (see Figure 1.1(b)). In addition, the strength of interaction depends solely on the distance between the source and the contact point, measured along the somatic layer, leading to the decomposition

(1.2)    W(x, ξ, y, η) = w(|x − y|) δ_ε(ξ − ξ_0) δ_ε(η),

where w describes the strength of interaction across the somatic space and is chosen to be translationally invariant, and δ_ε is a quickly-decaying function. This work introduces a computational method for approximating solutions to (1.1), subject to suitable initial and boundary conditions, and applies it to the numerical simulation of the model with kernel given by (1.2). Numerical methods for neural fields in 2-dimensional media have been introduced recently in flat geometries [16,11,12] and on 2-manifolds embedded in a 3-dimensional space [3,21]. In addition, several available open-source codes, such as the Neural Field Simulator [13], the Brain Dynamics Toolbox [10], and NFTsim [18], perform simulations of neural field equations. Numerical schemes for models of type (1.1) have not been introduced, analysed, or implemented, and these are the main contributions of the present article.

[Figure 1.1: Sketch of the model. Equation (1.1) is posed on a 1D somatic layer, with coordinate x ∈ R, and fibre coordinate ξ ∈ R. Input currents are generated in a small neighbourhood of the somatic layer, at ξ = 0, and are delivered to a contact point in a small neighbourhood of ξ = ξ_0. The strength of interaction depends on the distance between sources and contact points, measured along the somatic layer, hence the inputs that are generated at A and transmitted to B, C, and D depend on |x_B − x_A|, |x_C − x_A|, and |x_D − x_A|, respectively (see (1.2)).]
In Section 2 we describe, analyse, and discuss implementation details of the numerical method. In Sections 3-4 we illustrate the performance of the method by means of some numerical experiments, including problems whose exact solution has known properties. The numerical results are discussed and their physical meaning is explained. We finish with some conclusions and discussion in Section 5.
2. The numerical scheme. To compute numerical solutions we truncate (1.1) to a bounded domain Ω = [−L_x, L_x) × [−L_ξ, L_ξ], imposing periodic boundary conditions in x and Neumann boundary conditions in ξ; we refer to the resulting problem as (2.1), with kernel W_Ω, where W_Ω is the restriction of W to Ω. This restriction implies that the function w in (1.2) must be substituted by its periodic extension on [−L_x, L_x). In the remainder of this paper we will omit the subscript Ω from W, and assume w to be 2L_x-periodic.
To expose our scheme we introduce a spatiotemporal discretisation using the evenly spaced grid {(x_j, ξ_i, t_n)} defined by

x_j = −L_x + (j − 1) h_x,  j ∈ N_{n_x},    ξ_i = −L_ξ + (i − 1) h_ξ,  i ∈ N_{n_ξ},    t_n = n τ,  n ∈ N_{n_t},

where we posed N_k = {1, 2, . . . , k} for k ∈ N. The scheme we propose uses the method of lines for (2.1), in conjunction with differentiation matrices for the diffusive term and a quadrature scheme for the integral operator. Collocating (2.1) at the somato-dendritic nodes we obtain

(2.2)    d/dt V(x_j, ξ_i, t) = (−γ + ν ∂_ξξ) V(x_j, ξ_i, t) + G(x_j, ξ_i, t) + ∫_Ω W(x_j, ξ_i, y, η) S(V(y, η, t)) dy dη,

where, with a slight abuse of notation, we denote by V an interpolant to the function V in (2.1) through {(x_j, ξ_i)}. To obtain a numerical solution of the problem we must choose: (i) an approximation for the linear operator (−γ + ν∂_ξξ) at the somato-dendritic nodes; (ii) an approximation for the integral operator at the same nodes; (iii) a scheme to time step the derived set of ODEs.
In the presentation of the scheme, we shall use two equivalent representations for the voltage approximation: a matricial description and a lexicographic vectorial representation, obtained by introducing the index mapping k(i, j) = n_ξ(i − 1) + j. In the latter, we will sometimes suppress the dependence of k on (i, j), for notational convenience.
2.1. Discretisation of the linear operator. A simple choice for discretising the linear differential operator (−γ + ν∂_ξξ) is to adopt differentiation matrices [19]. If a differentiation matrix D_ξξ ∈ R^{n_ξ×n_ξ} is chosen to approximate the action of the Laplacian operator ∂_ξξ on twice differentiable, univariate functions defined on [−L_ξ, L_ξ], satisfying Neumann boundary conditions, and sampled at the nodes {ξ_i}, then the action of the operator −γ + ν∂_ξξ on bivariate functions defined on [−L_x, L_x) × [−L_ξ, L_ξ], twice differentiable in ξ with Neumann boundary conditions, and sampled at the nodes {(x_j, ξ_i)} with lexicographical ordering k(i, j), is approximated by the following block-diagonal matrix

(2.3)    I_{n_x} ⊗ (−γ I_{n_ξ} + ν D_ξξ),

where I_n, n ∈ N, is the n-by-n identity matrix, and ⊗ is the Kronecker product between matrices. Since the model prescribes diffusion only along the dendritic coordinate, the corresponding matrix has a block-diagonal structure with identical blocks, which can be exploited to improve performance in numerical computations. The sparsity pattern of a block is dictated by the underlying scheme used to approximate the univariate Laplacian: we have full blocks if D_ξξ is derived from spectral schemes, and sparse blocks for finite-difference schemes.
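The block structure of (2.3) can be sketched in a few lines of NumPy (the codes accompanying this article are in Matlab [2]; the function names and parameter values below are illustrative choices of ours):

```python
import numpy as np

def laplacian_1d_neumann(n, h):
    """Second-order centred finite-difference matrix for the 1D Laplacian
    with homogeneous Neumann boundary conditions; eliminating the ghost
    nodes doubles the off-diagonal entry in the first and last rows."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i], D[i, i + 1] = 1.0, -2.0, 1.0
    D[0, 0], D[0, 1] = -2.0, 2.0
    D[-1, -2], D[-1, -1] = 2.0, -2.0
    return D / h**2

# Block-diagonal operator I_{n_x} ⊗ (-γ I_{n_ξ} + ν D_ξξ) acting on the
# lexicographically ordered voltage vector; illustrative sizes.
nu, gamma = 1.0, 0.5
n_xi, n_x, h_xi = 6, 4, 0.1
A_block = -gamma * np.eye(n_xi) + nu * laplacian_1d_neumann(n_xi, h_xi)
A_full = np.kron(np.eye(n_x), A_block)   # n_x identical diagonal blocks
```

Note that `A_full` never needs to be assembled in practice: a single copy of `A_block` applied to each "slice" of the solution is enough, which is precisely what the matrix ODE formulation below exploits.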
2.2. Discretisation of the nonlinear integral operator. The starting point to discretise the integral operator is an mth-order quadrature formula with q_m nodes {(y_l, η_l) : l ∈ N_{q_m}} and weights {σ_l : l ∈ N_{q_m}} for the integral of a bivariate function u over Ω,

∫_Ω u(y, η) dy dη ≈ Σ_{l ∈ N_{q_m}} σ_l u(y_l, η_l).

Using this formula we approximate the nonlinear operator in (2.2) by

Q_m(V)(x, ξ, t) = Σ_{l ∈ N_{q_m}} σ_l W(x, ξ, y_l, η_l) S(V(y_l, η_l, t)).

We stress that, in general, the quadrature nodes {(y_l, η_l)} and the collocation nodes {(x_{k(i,j)}, ξ_{k(i,j)})} are distinct. The former are chosen so as to approximate accurately the integral term, the latter to approximate the differential operator. When the two grids are distinct, an interpolation of V at the nodes {(y_l, η_l)} is necessary to derive a set of ODEs at the collocation nodes. In the remainder of this paper we will assume that collocation and quadrature nodes coincide, so that we can omit the interpolant, for simplicity.
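A tensor-product composite trapezium rule of this kind can be sketched as follows; the separable test integrand, grid sizes, and names are illustrative choices of ours. The periodic x-direction has no repeated endpoint and takes uniform weights, while the ξ-direction takes the usual halved end weights:

```python
import numpy as np

# Composite trapezium rule on Ω = [-L_x, L_x) × [-L_ξ, L_ξ]
L_x, L_xi, n_x, n_xi = np.pi, 1.0, 64, 33
x = -L_x + 2 * L_x * np.arange(n_x) / n_x      # periodic grid, no endpoint
xi = np.linspace(-L_xi, L_xi, n_xi)
rho = np.full(n_x, 2 * L_x / n_x)              # weights ρ_j = h_x
sigma = np.full(n_xi, xi[1] - xi[0])           # weights σ_i, halved at ends
sigma[[0, -1]] /= 2

# separable test integrand with exact integral π · (4/π) = 4
u = np.cos(np.pi * xi / 2)[:, None] * np.cos(x)[None, :] ** 2
Q = sigma @ u @ rho                            # tensor-product quadrature
```

On the periodic direction the trapezium rule is spectrally accurate for smooth integrands, so the overall second-order error is dominated by the ξ-direction.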

2.3. Matrix ODE formulation.
Combining the differentiation matrix, the quadrature rule, and the lexicographic representation (2.4), we obtain a set of n_x n_ξ ODEs, (2.5). The structure of the differentiation matrix in Section 2.1, however, suggests a rewriting of (2.5) in terms of the blocks of the linear operator, which correspond to "slices" at constant values of x: we recall the matrix representation (2.3) and obtain an equivalent matrix ODE formulation

(2.6)    V̇ = (−γ I_{n_ξ} + ν D_ξξ) V + N(V) + G(t),

where V ∈ R^{n_ξ×n_x}, N is the matrix-valued function with components N_ij(V) = Q_m(V)(x_j, ξ_i), and G(t) is the matrix with components G(x_j, ξ_i, t). In passing, we note that the linear part of the equation involves a multiplication between an n_ξ-by-n_ξ matrix and the n_ξ-by-n_x matrix V.
2.4. Time-stepping scheme. The proposed time-stepping scheme for (2.1) is obtained from (2.6) with the following choices: (i) a first-order, implicit-explicit (IMEX) time-stepping scheme [1]; (ii) a second-order, centred finite-difference scheme for the differentiation matrix D_ξξ; (iii) a second-order trapezium rule for the quadrature rule Q_m. As we shall see, these choices bring a few computational advantages, which will be outlined below.
IMEX schemes treat the linear (diffusive) part of the ODE implicitly, and the nonlinear part explicitly, so that the stiff diffusive term is integrated implicitly to avoid excessively small time steps. The simplest IMEX method uses backward Euler for the diffusive term, leading to

(2.7)    A V^n = V^{n−1} + τ (N(V^{n−1}) + G^{n−1}),

where V^n ≈ V(t_n), G^n = G(t_n), and A is the matrix

(2.8)    A = I_{n_ξ} + τ (γ I_{n_ξ} − ν D_ξξ).

In concrete calculations we use second-order centred finite differences, leading to

(2.9)    (D_ξξ)_{il} = (δ_{i,l−1} − 2 δ_{i,l} + δ_{i,l+1}) / h_ξ²,

in which Neumann boundary conditions are included in the differentiation matrix by modifying its first and last rows. Finally, we discuss the choice of the quadrature scheme. We use a composite trapezium scheme with nodes {x_j} and weights {ρ_j} in x, and nodes {ξ_i} and weights {σ_i} in ξ, respectively, hence quadrature and collocation sets coincide, and

(2.10)    N_ij(V) = Σ_{l ∈ N_{n_ξ}} Σ_{m ∈ N_{n_x}} σ_l ρ_m W(x_j, ξ_i, x_m, ξ_l) S(V_{lm}).

2.5. Properties of the IMEX scheme. In this section we collect some analytical results on the IMEX scheme (2.7)-(2.10). We work with spaces of sufficiently regular continuous functions, which provide the simplest setting for our results. We denote by C^k(D) the space of k-times continuously differentiable functions from D to R, where k is an integer and D a domain in R³. We also indicate by C^k_b(D) the space of continuous functions from D to R with bounded and continuous partial derivatives up to order k. Both spaces are endowed with the infinity norm ‖·‖_∞. We will use the symbol |·|_∞ for the standard infinity norm on matrices, induced by the corresponding vector norm. In addition, we will denote by D̄ the closure of D.
We begin with a generic assumption of boundedness on the functions in (2.1).

Hypothesis 2.1 (Boundedness). The functions S, G, and W appearing in (2.1) are bounded, with respective bounds C_S, C_G, C_W > 0.

Lemma 2.2 (Boundedness of IMEX solution). Assume Hypothesis 2.1; then there exists a unique bounded sequence (V^n)_{n∈N} satisfying the IMEX scheme (2.7)-(2.10), and the bound is uniform in n.

Proof. The matrix A in (2.8) has real, strictly positive eigenvalues, bounded below by 1 + τγ, where we have used the fact that the eigenvalues of D_ξξ are known in closed form and are nonpositive. We conclude that A is invertible, hence, for any fixed n ∈ N, the matrix V^n solving (2.7) is unique. In addition, A is strictly diagonally dominant, hence the following bound holds [20]:

(2.11)    |A^{−1}|_∞ ≤ (1 + τγ)^{−1}.

To prove boundedness of the sequence (V^n)_{n∈N} we first bound the matrices N(V^{n−1}) and G^{n−1} appearing in (2.7), using the boundedness of W, S, and G (for instance, |G^{n−1}|_∞ ≤ n_x C_G), and then combine these bounds with the bound for |A^{−1}|_∞. Induction and elementary properties of the geometric series then yield a uniform bound on |V^n|_∞, which proves the assertion.
In addition to proving boundedness of the solution, we address the convergence rate of the IMEX scheme. For this result, we assume the existence of a sufficiently regular solution to (2.1).

Lemma 2.3 (Local convergence rate of the IMEX scheme). Assume Hypothesis 2.1, W ∈ C²(Ω × Ω), S ∈ C²_b(Ω), and assume (2.1) admits a strong solution V_* whose relevant partial derivatives (second order in t and x, fourth order in ξ) are bounded. Further, let (V^n)_{n∈N} be the solution to the IMEX scheme (2.7)-(2.10), and consider the error |V^n − V^n_*|_∞, where V^n_* samples V_* on the grid at time t_n. There exist constants C_τ, C_h > 0 such that the error satisfies the estimates (2.12)-(2.14), corresponding to the cases ζ < γ, ζ = γ, and ζ > γ, respectively, where ζ is a constant proportional to the Lipschitz constant of the discretised nonlinear term.

Proof. The regularity assumptions on V_*, together with standard results on finite-difference approximation and the trapezium quadrature rule, guarantee the existence of constants C_tt, C_xx, C_ξξ, C_ξξξξ > 0 bounding, for all n ∈ {0} ∪ N_{n_t}, the local errors of the forward finite difference in t, the centred finite difference in ξ, and the trapezium rule, with constants proportional to the respective partial derivatives; these combine into the consistency estimate (2.15). We subtract (2.15) from (2.7) and obtain an error equation. Since the first derivative S′ of S is bounded, we have a Lipschitz estimate for the nonlinear term which, together with (2.11) and (2.16), gives a recursive bound for the ∞-norm matrix error |V^n − V^n_*|_∞, with a ratio r that is smaller than, equal to, or greater than 1 according to whether ζ < γ, ζ = γ, or ζ > γ. If ζ < γ, then r < 1, and summing the geometric series yields (2.12). If ζ = γ, then r = 1 and (2.13) follows directly. If ζ > γ, then r > 1 and we can bound the nth term of the sequence with an exponential, using the bound (1 + x/n)^n ≤ e^x, which combined with (2.17) gives (2.14).

The preceding lemma shows that the IMEX scheme has first-order convergence in time, and second-order convergence in space. As expected, this conclusion holds without imposing any restriction on the size of τ in relation to h, as happens, for example, in the case of explicit methods for parabolic equations.
In passing we note that if ζ < γ and V * (t) exists for all t ∈ R, the error estimate (2.12) holds for n ∈ N, that is, in an unbounded interval of time; on the other hand, the error estimates do not hold on an unbounded time interval when ζ ≥ γ, as the bounds depend on T .
2.6. Implementational aspects and efficiency. In this section we make a few remarks on the implementation of the proposed IMEX scheme, with a view to comparing its efficiency with that of an ordinary IMEX scheme, that is, an IMEX scheme applied directly to (2.5).
2.6.1. Implementation. IMEX schemes for planar semilinear problems require the inversion of a discretised Laplacian, which usually is a square matrix with the same dimension as the problem (n_ξ n_x equations in our case). The particular structure of the problem under consideration, however, implies that the matrix to be inverted is much smaller (the square matrix A has only n_ξ rows and n_ξ columns). At each time step, (2.7) requires the solution of a problem of the type AX = B, where A ∈ R^{n_ξ×n_ξ} and X, B ∈ R^{n_ξ×n_x}. This can be achieved efficiently by pre-computing a factorisation of A and then back-substituting for all columns of B. Since the matrix A is sparse and has low bandwidth, efficient implementations of the LU decomposition and back-substitution can be used to solve the n_x linear problems corresponding to the columns of X and B.
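This strategy can be sketched with SciPy's LU routines; the matrix below is the tridiagonal A of (2.8), with Neumann rows obtained by doubling the boundary off-diagonal entries, and all sizes, parameter values, and names are illustrative choices of ours:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

nu, gamma, tau, h_xi = 1.0, 0.5, 0.01, 0.1
n_xi, n_x = 32, 256

# A = I + τ(γI - νD_ξξ), tridiagonal with modified Neumann rows
main = (1 + tau * gamma + 2 * tau * nu / h_xi**2) * np.ones(n_xi)
off = -(tau * nu / h_xi**2) * np.ones(n_xi - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
A[0, 1] *= 2
A[-1, -2] *= 2

lu, piv = lu_factor(A)            # factorised once, before time stepping

def imex_step(V, N, G):
    """One step: solve A V_new = V + tau * (N + G), reusing the factors;
    lu_solve back-substitutes for each of the n_x columns at once."""
    return lu_solve((lu, piv), V + tau * (N + G))

# sanity check: with N = G = 0, a spatially uniform state decays at rate γ
V = np.ones((n_xi, n_x))
for _ in range(100):
    V = imex_step(V, np.zeros_like(V), np.zeros_like(V))
```

Since A is tridiagonal, a banded solver (e.g. `scipy.linalg.solve_banded`) or a dedicated Thomas solver would reduce the cost further; we use a dense LU here only to keep the sketch short.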
An important aspect of the numerical implementation is the evaluation of the nonlinear term (2.10): evaluating the right-hand side of (2.7) requires in general O(n_ξ² n_x²) operations, which is a bottleneck for the time stepper, in particular for large domains. However, the structure of the problem can be exploited once again to evaluate this term efficiently. We make use of the following properties: 1. The kernel W specified in (1.2) has a product structure, hence the sums over the two spatial directions factorise. In addition, w is periodic, therefore the matrix with entries w(|x_j − x_m|) is circulant, and is determined by its first row. 2. The function x → V(x, ·) is 2L_x-periodic, hence the trapezium rule has identical weights ρ_j = h_x, and the integration in x is a circular convolution, which can be performed efficiently in O(n_x log n_x) operations using the Discrete Fourier Transform (DFT); a DFT can thus be used to perform the outer sums [8,16]. Introducing the direct, F_n, and inverse, F_n^{−1}, DFTs for n-vectors, we express compactly the nonzero elements of N in the form (2.19), where α, α′, σ ∈ R^{n_ξ×1} are column vectors and ⊙ denotes the Hadamard product, that is, elementwise vector multiplication. Formula (2.19) evaluates the nonlinear term N in just O(n_x n_ξ) + O(n_x log n_x) operations.
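The FFT evaluation of the circular convolution can be sketched, and checked against the direct O(n_x²) double sum, as follows; the kernel, firing rate, voltage sample, and names are illustrative choices of ours:

```python
import numpy as np

n_x, L_x = 128, np.pi
h_x = 2 * L_x / n_x
x = -L_x + h_x * np.arange(n_x)                  # periodic somatic grid

w = lambda d: np.exp(-np.abs(d))                 # somatic footprint
S = lambda V: 1.0 / (1.0 + np.exp(-V))           # sigmoidal firing rate
V = np.sin(x)                                    # somatic voltage sample

# distance measured along the periodic somatic layer; the matrix
# w(dist(x_j, x_m)) is circulant, with first column c
d0 = np.abs(x - x[0])
c = w(np.minimum(d0, 2 * L_x - d0))

# direct O(n_x^2) evaluation of h_x * sum_m w(|x_j - x_m|) S(V_m)
direct = h_x * np.array([
    sum(w(min(abs(x[j] - x[m]), 2 * L_x - abs(x[j] - x[m]))) * S(V[m])
        for m in range(n_x)) for j in range(n_x)])

# same sum as a circular convolution, in O(n_x log n_x) via the DFT
fast = h_x * np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(S(V))))
```

The two evaluations agree to machine precision; only the FFT route is used inside the time stepper.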
We summarise our implementation with the pseudocode provided in Algorithm 1, and we will henceforth compare quantitatively its efficiency with a standard IMEX implementation, which we also provide in Algorithm 2. The matricial version, Algorithm 1, exposes row- and column-vectors, for which a very compact Matlab implementation can be derived. We give an example of such an implementation in Appendix A, and we refer the reader to [2] for a repository of the codes used in this article.
2.6.2. Efficiency estimates. We now make a few considerations about the efficiency of our algorithm. We will provide two main measures of efficiency: an estimate of the floating point operations (flops), and an estimate of the storage space (in floating point numbers) required by the algorithm, as a function of the input data which, in our case, are the number of gridpoints in each direction, n x and n ξ . We are interested in how the estimates scale for large n x , n ξ .
To estimate the number of flops, we count the number of operations required by Algorithms 1 and 2 in the initialisation step (lines 2-6) and in a single time step (lines 8-12). We base our estimates on the following facts and hypotheses: 1. The cost of multiplying an m-by-n matrix by an n-vector is 2mn − m flops. 2. If an n-by-n matrix is tridiagonal, then the matrices L and U of its LU-factorisation are bidiagonal, and L has 1 along its main diagonal. This implies that storing the LU-factorisation requires only 3 n-vectors. Calculating the LU-factorisation costs 2n + 1 flops, while solving the corresponding linear problem LUx = b, with x, b ∈ R^n, requires 2n − 2 and 3n − 2 flops for the forward- and backward-substitution, respectively. Similar considerations apply if A is not tridiagonal but still sparse, as would be obtained using a different discretisation method for the diffusive operator: estimates for the flops of the corresponding PLU-factorisation depend, in general, on the sparsity pattern of A, as well as on the permutation strategy, which is heuristic but can have an impact on the sparsity of L and U, thereby influencing the performance of the algorithm. We present calculations only in the case of a tridiagonal matrix A, for which explicit estimates are possible. 3. As stated above, it is well known that a single FFT of an n-vector costs O(n log n) operations. 4. We assume that evaluations of the functions G, S, w, δ_ε cost one flop each. This estimate is optimistic, as most function evaluations will require more than one flop, but we make this simplifying assumption for both algorithms under comparison.
Algorithm 1: IMEX time stepper in matrix form (2.6), nonlinear term computed with pseudospectral evaluation (2.19) Input : Initial condition V 0 ∈ R n ξ ×nx , time step τ , number of steps n t .
Solve for V^n the linear problem (LU)V^n = V + τ(N + G).
In Table 2.1 we count the flops required by each line of Algorithms 1 and 2. The data is grouped so as to distinguish between the initialisation phase of the algorithms and the iterations for the time steps. Algorithm 1 substantially outperforms Algorithm 2 in both phases. In the initialisation, the number of flops scales linearly for Algorithm 1, and quadratically for Algorithm 2. This is mostly owing to the LU-factorisation step, which involves the n_ξ-by-n_ξ matrix A in the former, and an n_ξn_x-by-n_ξn_x matrix in the latter.
The efficiency gain is more striking in the cost per time step: owing to the pseudospectral evaluation of the nonlinearity, only O(n_ξ n_x) + O(n_x log n_x) flops are necessary in Algorithm 1, as opposed to O(n_ξ² n_x²) flops in Algorithm 2. An important point to note is that, in the case of a 2D somatic space with, say, coordinates (x, y, ξ) and n_x = n_y = N, n_ξ grid points (see Figure 1.1(a)), the size of the matrix A in Algorithm 1 remains unaltered, while Algorithm 2 requires the factorisation and inversion of a much larger matrix, of size n_ξN²-by-n_ξN². Estimates for the efficiencies in this case can be obtained by replacing n_x by N² in the table, leading to much greater savings.
Finally, in Table 2.2 we collect the variables used by both algorithms, and count the storage requirement of each, measured in floating point numbers. The results show that Algorithm 1 requires asymptotically the same storage as Algorithm 2, namely O(n_ξ n_x). For fixed values of n_ξ and n_x, however, the latter uses almost twice as much storage space as the former.
[Table 2.1: flop counts for each line of Algorithms 1 and 2, split between initialisation and a single time step; the per-step totals are 4n_x n_ξ + 7n_ξ + 3n_x flops for Algorithm 1 versus 7n_x n_ξ + 2n_ξ + 2n_x flops for Algorithm 2.]

3. Travelling waves. We tested the algorithm on an analytically tractable neural field problem, and we report in this section our numerical experiments. For the test, we take a sigmoidal firing rate function

(3.1)    S(V) = 1 / (1 + exp(−β(V − θ))),

and a kernel specified by

(3.2)    w(x) = (κ/2) exp(−|x|),

where β, θ, κ are positive constants. If S(V) = H(V − θ), H being the Heaviside function, δ_ε is replaced by the Dirac delta distribution, and the evolution equation
is posed on R², then the model supports solutions for which V(x, 0, t) is a travelling front, V(x, 0, t) = ϕ(x − v_*t), with ϕ(±∞) = V_±, whose speed v_* satisfies an implicit equation, (3.3), derived in [17].

To test our scheme we study solutions to (2.1), (3.1), (3.2) with L_x, β ≫ 1, L_ξ ≫ √(ν/γ) (the characteristic electrotonic length), and ε ≪ 1. Since for this problem [−L_x, L_x) ≅ S¹, we expect to observe at ξ = ξ_0 two counter-propagating waves with approximate speeds ±v_*. We show an exemplary profile of this coherent structure in Figure 3.1, where we observe two counter-propagating waves, as described above.
Since the wavespeed v_* is available implicitly, we performed some tests to validate the proposed algorithm. Firstly, we compute roots of (3.3) in the variable v_*, as a function of the firing rate threshold θ. In Figure 3.2(a) we observe a good agreement with the wavespeed observed in direct simulations. The latter has been computed by post-processing data from numerical simulations: using first-order interpolants we approximate a positive function x_*(t) such that V(x_*(t), 0, t) = θ, that is, the θ-level set of V(x, 0, t) on [0, L_x] × [0, T]; after an initial transient, ẋ_*(t) is approximately constant and provides an estimate of v_*, which is computed via first-order finite differences. In Figure 3.2(a) we observe a small discrepancy, which is to be expected, as there are several sources of error, namely: the time-stepping error; the spatial discretisation error for the differential and integral operators; and the error due to the sigmoidal firing rate and to δ_ε (the theory is valid for a Heaviside firing rate and for a Dirac delta distribution). In Figures 3.2(b)-(d) we show convergence plots for these errors (except for the second-order spatial discretisation error, which is dominated in numerical simulations by the first-order time-stepping error).
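The level-set post-processing can be sketched on synthetic data, where a front of known speed makes the estimate checkable; the profile, parameters, and names below are ours, not those used in the simulations of the paper:

```python
import numpy as np

theta, v_true = 0.5, 1.7
x = np.linspace(0.0, 40.0, 801)
t = np.arange(0.0, 5.0, 0.05)

def crossing(profile, x, theta):
    """First x where a decreasing profile crosses theta, located by
    first-order (linear) interpolation between neighbouring grid nodes."""
    j = np.argmax(profile < theta)           # first index below threshold
    x0, x1, f0, f1 = x[j - 1], x[j], profile[j - 1], profile[j]
    return x0 + (theta - f0) * (x1 - x0) / (f1 - f0)

# synthetic snapshots: a logistic front translating with speed v_true
xs = np.array([crossing(1 / (1 + np.exp(x - 10 - v_true * tn)), x, theta)
               for tn in t])

# first-order finite differences of the crossing positions give the speed
v_est = np.diff(xs) / np.diff(t)
```

On real simulation data, the average of `v_est` would be taken only after the initial transient, as described above.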
4. Turing instability. The model defined by (1.1), with an appropriate choice of somatic interaction, can also support a Turing instability to spatially periodic patterns [4]. These in turn may either be independent of time or periodic in time. In the latter case this leads to periodic travelling waves. Whether the emergent patterns are static or dynamic, they provide another validation test for the numerical scheme presented here, as the bifurcation point determined analytically from a Turing analysis should agree with the onset of patterning in a direct numerical simulation. A relatively recent description of the method for determining the Turing instability in a neural field with dendritic processing can be found in [7]. Here we briefly summarise the necessary steps to arrive at a formula for the continuous spectrum, from which the Turing instability can be obtained.
In general a homogeneous steady state solution of (1.1) will only exist if either S(0) = 0 or ∫_{R²} W(x, ξ, y, η) dy dη = constant for all (x, ξ). The latter condition is not generic, and so for the purposes of this validation exercise we shall work with the choice S(0) = 0, for which V = 0 is the only homogeneous steady state. Linearising around V = 0 and using (1.2) gives an evolution equation, (4.1), for the perturbations δV(x, ξ, t). Focusing on a somatic field δV(x, 0, t), we see from (4.1) (with ξ = 0) that it has solutions of the form e^{λt} e^{ipx} for λ ∈ C and p ∈ R, where λ = λ(p) is defined by the implicit solution of E(λ, p) = 0. Here the function ψ is defined as in (3.3), and ŵ(p) is the Fourier transform of w:

ŵ(p) = ∫_R w(|x|) e^{−ipx} dx.

We note that since w(x) = w(|x|), then ŵ(p) ∈ R. Note also that if λ ∈ R with λ > −γ, then E(λ, p) ∈ R. The trivial steady state is stable to spatially periodic perturbations if ŵ(p) < w_* = 2ψ(0, ν) ν exp(ψ(0, ν) ξ_0)/S′(0) for all p ∈ R.
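As an illustration of this criterion, one can locate the peak of the Fourier transform for a kernel whose transform is known in closed form. The Mexican-hat kernel w(x) = exp(−|x|) − (A/σ)exp(−|x|/σ), with transform ŵ(p) = 2/(1 + p²) − 2A/(1 + σ²p²), and the parameter values below are our own illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Mexican-hat kernel w(x) = exp(-|x|) - (A/σ)exp(-|x|/σ): short-range
# excitation, long-range inhibition. Its Fourier transform is real and
# even; a static Turing instability requires a peak away from p = 0.
A, sigma = 0.5, 2.0
w_hat = lambda p: 2 / (1 + p**2) - 2 * A / (1 + sigma**2 * p**2)

# locate the critical wavenumber p* numerically
res = minimize_scalar(lambda p: -w_hat(p), bounds=(1e-6, 5.0),
                      method="bounded")
p_star = res.x
```

Setting dŵ/dp = 0 gives the closed form p_*² = (σ√A − 1)/(σ(σ − √A)), so a peak away from the origin exists only when σ√A > 1; the numerical maximiser can be checked against this value.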

A a B Z f Y M t w I 7 C Y K a R Q I 7 A S T u 7 n f e U K l e S w f z D R B P 6 I j y U P O q L F S 0 x 2 U K 2 7 V X Y C s E y 8 n F c j R G J S / + s O Y p R F K w w T V u u e 5 i f E z q g x n A m e l f q o x o W x C R 9 i z V N I I t Z 8 t D p 2 R S 6 s M S R g r W 9 K Q h f p 7 I q O R 1 t M o s J 0 R N W O 9 6 s 3 F / 7 x e a s J b P + M y S Q 1 K t l w U p o K Y m M y / J k O u k B k x t Y Q y x e 2 t h I 2 p o s z Y b E o 2 B G / 1 5 X X S r l W 9 6 2 q t 6 V b q F 3 k c R T i D c 7 g C D 2 6 g D v f Q g B Y w Q H i G V 3 h z H p 0 X 5 9 3 5 W L Y W n H z m F P 7 A + f w B c C G M l g = = < / l a t e x i t > (b) Hence a static instability occurs under a parameter variation when w(p * ) = w * for some p * ∈ R, the critical wavelength of the unstable pattern. Hence if w(p) has a positive peak away from the origin, at p = p * , then a static Turing instability can occur (see Figure 4.1(a)). This is possible if w(|x|) has a Mexican-hat shape, describing short range excitation and long range inhibition.

We have validated this scenario numerically: simulating a neural field, we pick the steepness of the sigmoidal firing rate as the main parameter, deduce that a Turing bifurcation occurs for a critical value β* ∈ [28, 30], perturb the trivial steady state by setting the initial condition V₀(x, ξ) = 0.01 cos(p*x) on the domain Ω = [−4π/p*, 4π/p*] × [−π/p*, π/p*], and observe the perturbations decaying for β = 28 and growing for β = 30, respectively. Note that if λ ∈ ℂ then E(λ, p) ∈ ℂ, and it is possible that a dynamic Turing instability occurs, with an emergent frequency ω_c. It is known that this case is more likely for an inverted Mexican-hat shape, describing short-range inhibition and long-range excitation [4]. We do not show computations for this case, but we briefly discuss it below. The dynamic bifurcation condition can be defined by tracking the continuous spectrum at the point where it first crosses the imaginary axis away from the real line. This is equivalent to solving P_p H_ω − H_p P_ω = 0 with P(0, ω_c, p) = 0 = H(0, ω_c, p), where the subscripts denote partial differentiation and P(ν, ω, p) = Re E(ν + iω, p), H(ν, ω, p) = Im E(ν + iω, p) [6, Chapter 1].
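In such validation runs the growth or decay of the seeded mode can be quantified by projecting the somatic field onto cos(p*x) at successive times and estimating the exponential rate. The sketch below uses synthetic snapshots in place of solver output; the values of p*, the times, and the rate are purely illustrative.

```python
import numpy as np

# Sketch: estimate the growth/decay rate of the cos(p* x) mode from two
# snapshots V(x, t0), V(x, t1) of the somatic field. Here the snapshots are
# synthetic; in practice they come from the IMEX solver.
p_star = 1.0
x = np.linspace(-4 * np.pi / p_star, 4 * np.pi / p_star, 512, endpoint=False)

def mode_amplitude(V, x, p):
    # Discrete projection of the field onto cos(p x).
    return 2.0 * np.mean(V * np.cos(p * x))

lam_true = 0.3            # illustrative growth rate used to build the snapshots
t0, t1 = 0.0, 1.0
V0 = 0.01 * np.cos(p_star * x) * np.exp(lam_true * t0)
V1 = 0.01 * np.cos(p_star * x) * np.exp(lam_true * t1)

lam_est = np.log(mode_amplitude(V1, x, p_star) /
                 mode_amplitude(V0, x, p_star)) / (t1 - t0)
# lam_est > 0 signals growth (beta above the Turing bifurcation);
# lam_est < 0 signals decay (beta below it).
```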

5. Conclusions.
In this paper we have presented an efficient scheme for the numerical solution of neural fields that incorporate dendritic processing. The model prescribes diffusivity along the dendritic coordinate, but not along the cortex; in addition, the nonlinear coupling is nonlocal on the cortex, but essentially local on the dendrites. This structure allows the formulation of a compact numerical scheme, and provides efficiency savings both in terms of operation counts, and in terms of the space required by the algorithm. Firstly, a small diffusivity differentiation matrix is decomposed at the beginning of the computation, and then used repeatedly to solve a set of linear problems in the cortical direction. This aspect of the computation is appealing, especially for high-dimensional computations where a 2D cortex is coupled to the 1D dendritic coordinate, as the decomposition is performed once, and involves only a 1D differentiation matrix. Secondly, the largest computational effort of the scheme, which is in the evaluation of the nonlinear term, can be reduced considerably using DFTs. We have also provided a basic numerical analysis of the scheme, under the assumption that a solution to the infinite-dimensional problem exists. The existence of this solution remains an open problem, which will be addressed in future publications.
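The precomputed-factorisation structure described above can be sketched as follows: a small dendritic matrix is assembled and LU-factorised once, and the factors are reused at every IMEX step for all cortical columns simultaneously. All parameters, the boundary conditions, and the omitted explicit term are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative IMEX step structure: the stiff dendritic diffusion is treated
# implicitly, so each step solves ((1 + dt*gamma) I - dt*D*Dxx) V = rhs.
# The matrix is small (n_xi x n_xi), so its LU factorisation is computed
# once and reused at every step, for every cortical column.
n_xi, n_x = 64, 256
dxi, dt, D, gamma = 0.1, 0.01, 1.0, 1.0

# Second-difference matrix along the dendrite (Neumann ends, illustrative).
Dxx = (np.diag(-2.0 * np.ones(n_xi)) + np.diag(np.ones(n_xi - 1), 1)
       + np.diag(np.ones(n_xi - 1), -1)) / dxi**2
Dxx[0, 0] = Dxx[-1, -1] = -1.0 / dxi**2

A = (1.0 + dt * gamma) * np.eye(n_xi) - dt * D * Dxx
lu, piv = lu_factor(A)                      # factorise once

rng = np.random.default_rng(0)
V = rng.standard_normal((n_xi, n_x))
for step in range(10):
    rhs = V  # + dt * (explicit nonlocal/nonlinear term), omitted in this sketch
    V = lu_solve((lu, piv), rhs)            # reuse factors; all columns at once
```

Since the matrix acts only along the dendritic coordinate, the same factors serve a 2D cortex as well: only the number of right-hand-side columns grows.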
The numerical implementation presented here does not exploit the fact that the synaptic kernel is localised via the function δ_ε. If one models the kernel using a compactly supported function, for instance

(5.1)    δ_ε(ξ) = κ exp(−ξ²/ε²) 1_{(−ε,ε)}(ξ),

which is supported in a small interval of O(ε) length, then its evaluation at the grid points is nonzero only on a small index set: the nonzero entries are indexed by sets I, I′ ⊆ ℕ_{n_ξ} with O((ε/L) n_ξ) elements, so |I|, |I′| ≪ n_ξ. Only |I| rows of N are nonzero, and the inner sum runs over only |I′| elements. The resulting formula evaluates the nonlinear term N in just (2|I| + |I′|)n_x + O(n_x log n_x) = O(n_x + n_x log n_x) operations. Numerical experiments and convergence tests have also been performed for this formula, although the results are not presented here: a synaptic kernel with the choice (5.1) is no longer in C²(Ω × Ω), hence Lemma 2.3 does not hold, and we plan to provide a convergence result for this case elsewhere. We provide, however, a Matlab implementation of this code in Appendix A.
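The index-set idea can be sketched as follows: evaluate δ_ε on the dendritic grid, keep only its nonzero rows, and assemble the nonlinear term there, with the cortical convolution done by FFT. The grid sizes, the firing rate S, the contact site ξ₀, and the placeholder kernel transform below are all assumptions for illustration.

```python
import numpy as np

# Sketch of exploiting the compact support of delta_eps in (5.1): the
# nonlinear term N is assembled only on the index set I where delta_eps
# is nonzero. Parameters are illustrative.
n_xi, n_x = 64, 256
L, eps, kappa = 6.4, 0.3, 1.0
xi = np.linspace(0.0, L, n_xi)
xi0 = 3.2                                   # synaptic contact site (illustrative)

delta_eps = kappa * np.exp(-((xi - xi0) / eps) ** 2) * (np.abs(xi - xi0) < eps)
I = np.flatnonzero(delta_eps)               # small index set, |I| = O(eps/dxi)

def S(V):                                   # illustrative sigmoidal firing rate
    return 1.0 / (1.0 + np.exp(-V))

rng = np.random.default_rng(1)
V = rng.standard_normal((n_xi, n_x))
w_hat_grid = rng.standard_normal(n_x)       # placeholder transform of the kernel

# Inner sum over the (here merged) index set, then one FFT-based cortical
# convolution in O(n_x log n_x) operations ...
u = np.real(np.fft.ifft(w_hat_grid * np.fft.fft(S(V[I].sum(axis=0)))))

# ... scattered only onto the |I| nonzero rows of N.
N = np.zeros((n_xi, n_x))
N[I] = np.outer(delta_eps[I], u)
```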
Possible extensions of the model include curved geometries [21, 3], which should benefit from a strategy similar to the one used here for the dendritic coordinate, as well as multiple-population models. We expect that the latter will induce coherent structures different from the ones reported here. The method outlined in this paper is also valid in the context of numerical bifurcation analysis, which can be employed to study the bifurcation structure of steady states and travelling waves.
The inclusion of synaptic processing in the present model is straightforward: one couples (2.1) to an equation of the type QΨ = K, where Q = (1 + α⁻¹∂_t)² is a temporal differential operator. The resulting model does not involve any additional spatial differential or integral operator, and therefore the proposed scheme can be applied by simply augmenting the discretised state variables.
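This augmentation can be sketched by reducing QΨ = K to two first-order equations via the auxiliary variable Φ = (1 + α⁻¹∂_t)Ψ, giving ∂_tΨ = α(Φ − Ψ) and ∂_tΦ = α(K − Φ). The forward-Euler stepping, the value of α, and the constant drive K below are illustrative; in the full scheme these equations would be stepped alongside the voltage variables.

```python
# Sketch: Q Psi = K with Q = (1 + (1/alpha) d/dt)^2, rewritten with the
# auxiliary variable Phi = (1 + (1/alpha) d/dt) Psi as
#   dPsi/dt = alpha * (Phi - Psi)
#   dPhi/dt = alpha * (K   - Phi)
# The state is simply augmented by Phi; no spatial operators are added.
alpha, dt, nsteps = 1.0, 0.01, 1000
K = 2.0                       # constant drive (illustrative)
psi, phi = 0.0, 0.0
for _ in range(nsteps):
    psi += dt * alpha * (phi - psi)
    phi += dt * alpha * (K - phi)
# For constant K the steady state satisfies Q Psi = K with Psi = K,
# so psi relaxes toward 2.0 here.
```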
It is also important to address the role of axonal delays in the generation of large-scale brain rhythms. A recent paper [17] has explored how this might be done in a purely PDE setting, generalising the Nunez brain-wave equation to include dendrites. A natural extension of the work in this paper is a more general numerical treatment of a model with both dendritic processing and space-dependent axonal delays in an integro-differential framework.