NEUROTRANSMITTER CONCENTRATIONS IN THE PRESENCE OF NEURAL SWITCHING IN ONE DIMENSION

In volume transmission, neurons in one brain nucleus send their axons to a second nucleus where neurotransmitter is released into the extracellular space. One would like methods to calculate the average amount of neurotransmitter at different parts of the extracellular space, depending on neural properties and the geometry of the projections and the extracellular space. This question is interesting mathematically because the neuron terminals are both the sources (when they are firing) and the sinks (when they are quiescent) of neurotransmitter. We show how to formulate the questions as boundary value problems for the heat equation with stochastically switching boundary conditions. In one space dimension, we derive explicit formulas for the average concentration in terms of the parameters of the problems in two simple prototype examples and then explain how the same methods can be used to solve the general problem. Applications of the mathematical results to the neuroscience context are discussed.


Introduction.
A fundamental mechanism by which neurons convey information is one-to-one neural transmission, in which a neuron fires an action potential that travels down its axon to a synapse that is adjacent to the cell body or a dendrite of a second neuron. The arrival of the action potential at the synapse causes biochemical changes that result in neurotransmitter being released into the synaptic cleft, where it diffuses to the post-synaptic membrane (i.e., the membrane of the second neuron), binds to receptors, and tends to make the second neuron fire or not fire depending on whether the neurotransmitter is excitatory or inhibitory. In this type of neural transmission, commonly called electrophysiological, the "purpose" is to convey the electrical signal from one neuron to the next. The role of biochemistry (in the synapse and in the synaptic cleft) is simply to facilitate the electrophysiology.
However, neurons convey information by another mechanism as well. Certain collections of neurons that have the same neurotransmitter can project to a distant volume (a nucleus or part of a nucleus) in the brain, and when they fire they increase the concentration of the neurotransmitter in the extracellular space in the distant volume. The increased concentration modulates the electrophysiological neural transmission in the distant region by binding to receptors on the cells in the target region. This kind of neural activity is called volume transmission [11,19]. It is also called neuromodulation because the effect of the neurotransmitter is the modulation of one-to-one transmission by other neurons or synapses in the projection region. There are many important examples of volume transmission, such as the dopaminergic projection from the substantia nigra to the striatum [6], the serotonergic projection from the dorsal raphe nucleus to the striatum [2,3], and projections of norepinephrine neurons from the locus coeruleus to the cortex [11]. The serotonin and dopamine projections are crucial to motor control and Parkinson's disease, and the norepinephrine projection to the initiation and maintenance of wakefulness.
The purpose of this paper is to use recently developed mathematical machinery on the stochastic switching of boundary conditions in PDEs [14,16] to understand certain aspects of volume transmission. Suppose that a large number of neurons with the same neurotransmitter project randomly to a distant volume where they release neurotransmitter into the extracellular space. Each neural terminal in the projection region is a source of neurotransmitter when the neuron fires and is a sink for neurotransmitter otherwise because transporters carry the neurotransmitter back into the terminals. Given the statistics of the stochastic firing of each neuron, how can we calculate the average neurotransmitter level over the whole extracellular space? How can we calculate the spatial dependence of expected neurotransmitter level? How do the answers to these questions depend on firing rates, amounts released, average distances between terminals, diffusion constants, and other important parameters? In this paper, we answer these questions in one space dimension. Of course, one space dimension is unphysiological, but the techniques and the answers (some of them surprising) give insight into what can be expected in higher dimensions.
In Section 2 we consider the following two simple prototype problems. Let u(x, t) be the concentration of neurotransmitter in the interval [0, L]. For the first problem, we consider the stochastic process that solves

∂u/∂t = D ∂²u/∂x², 0 < x < L, t > 0, (1)

and switches randomly between the boundary conditions

(f): ∂_x u(0, t) = 0, ∂_x u(L, t) = c; (q): ∂_x u(0, t) = 0, u(L, t) = 0. (2)

Thus, at x = 0 there is a hard wall through which neurotransmitter cannot diffuse. At x = L the boundary condition switches between firing (f), where neurotransmitter is released into the interval at a constant rate, c, and quiescence (q), where it is reabsorbed by the terminal. We are interested in computing the asymptotic behavior of Eu(x, t) as t → ∞. We will see that Eu(x, t) is asymptotically constant in x under general assumptions on the distributions of the switching times, and we compute an explicit, simple formula for this constant in terms of D, L, c, and the switching constants if the switching is at exponentially distributed times. Finally, we numerically compute the spatial dependence of the standard deviation.
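The prototype just described can be explored numerically. The sketch below simulates the switching heat equation by explicit finite differences, using the Figure 1 parameter values (L = D = r_q = 1, c = r_f = 100) and assuming exponential switching; switching is only resolved to the time step, and the comparison constant uses the form M = (cµ/η) coth(ηL) with η = √((r_f + r_q)/D), which is our reading of the exponential-switching formula discussed later in the paper.

```python
import math, random

# Finite-difference sketch of the switching heat equation (1)-(2):
# u_t = D u_xx on [0, L]; no-flux wall at x = 0; the x = L boundary switches
# at exponential times between firing (u_x(L) = c) and quiescence (u(L) = 0).
# Parameter values match Figure 1 of the paper (assumed, for illustration).
L, D, c = 1.0, 1.0, 100.0
r_q, r_f = 1.0, 100.0        # quiescent -> firing rate, firing -> quiescent rate
N = 20                       # grid intervals
dx = L / N
dt = 0.2 * dx * dx / D       # explicit-Euler stability
random.seed(1)

u = [0.0] * (N + 1)
firing = False
next_switch = random.expovariate(r_q)
t, T_burn, T_end = 0.0, 5.0, 50.0
avg0, n_samples = 0.0, 0

while t < T_end:
    if t >= next_switch:                          # toggle the x = L boundary
        firing = not firing
        next_switch = t + random.expovariate(r_f if firing else r_q)
    lap = [0.0] * (N + 1)
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2          # ghost point for u_x(0) = 0
    for i in range(1, N):
        lap[i] = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx**2
    if firing:                                    # ghost point for u_x(L) = c
        lap[N] = (2.0 * u[N - 1] + 2.0 * dx * c - 2.0 * u[N]) / dx**2
    for i in range(N + 1):
        u[i] += dt * D * lap[i]
    if not firing:
        u[N] = 0.0                                # absorbing terminal
    t += dt
    if t > T_burn:                                # time-average u at x = 0
        avg0 += u[0]
        n_samples += 1

avg0 /= n_samples
eta = math.sqrt((r_f + r_q) / D)
M = (c * (r_q / r_f) / eta) / math.tanh(eta * L)  # assumed form of the constant
print(avg0, M)  # long-run average at x = 0 should sit near the constant M
```

By ergodicity, the time average at the far wall should approach the spatially constant mean, illustrating the theorem that follows.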
For the second case, we derive explicit formulas for Eu(x, t) as t → ∞ for the problem where u(x, t) satisfies (1) and at exponentially distributed times switches between the boundary conditions

(f): u(0, t) = 0, ∂_x u(L, t) = c; (q): u(0, t) = 0, u(L, t) = 0. (3)

This models the situation where there is a neuron at x = L that switches stochastically between firing (f) and quiescence (q) while there is a glial cell at x = 0 that absorbs neurotransmitter. Again, in addition to finding a formula for the mean, we compute the standard deviation numerically.
In Section 3 we consider the much more general stochastic process where u(x, t) satisfies (1) but now there are neurons at both x = 0 and x = L. Both neurons fire stochastically and independently, with different exponential rates and different release rates, c_0 and c_L. Methods used in Section 2 and from [14] are used to show that lim_{t→∞} Eu(x, t) can be computed in terms of all the given parameters of the problem by solving a set of eight linear equations.
Section 4 is devoted to extracting information about volume transmission from the explicit formulas for lim_{t→∞} Eu(x, t) derived in Sections 2 and 3. In the Discussion we explain why the questions in two and three dimensions are much more difficult and indicate some of our preliminary results.
2. Simple prototype problems. We use two approaches to the technical calculations. The "iterated random function" method, Section 2.1, is very general and allows us to calculate the limits with very weak restrictions on the distribution of switching times. The "moment" method, Section 2.2, assumes that switching times are exponentially distributed. This allows us to use Markov methods and to compute standard deviations.

2.1. Iterated random function approach. We wish to consider the L²[0, L]-valued stochastic process that solves (1) and (2). To define the process {u(x, t)}_{t≥0} and cast it in the setting of [16], we define two self-adjoint operators: A_q g := D g″ on the domain of functions satisfying the (q) boundary conditions g′(0) = 0 and g(L) = 0, and A_f g := D g″ on the domain of functions satisfying g′(0) = 0 and g′(L) = 0. The operators A_q and A_f generate C₀-semigroups on L²[0, L], which we denote respectively by e^{A_q t} and e^{A_f t}. The solution operator Φ_q^t : L²[0, L] → L²[0, L] for the heat equation (1) with the (q) boundary conditions in (2) is given by Φ_q^t(g) := e^{A_q t} g.
The eigenvalues and eigenvectors for A_q are α_k = −D((2k − 1)π/(2L))² and a_k(x) = √(2/L) cos((2k − 1)πx/(2L)) for k ≥ 1. (5) If we let h(x) := (2Lc/π)(1 − cos(πx/(2L))), then the solution operator for the heat equation with the (f) boundary conditions in (2) is the operator Φ_f^t defined in (6). The eigenvalues and eigenvectors for A_f are β_k = −D(kπ/L)² and b_k(x) = √(2/L) cos(kπx/L) for k ≥ 1, (7) with β_0 = 0 and b_0(x) = 1/√L. In order to define the random switching times, we define the set Ω of all possible switching environments and equip it with a probability measure. Let µ_f and µ_q be two continuous probability distributions on the positive real line with finite first and second moments. We further assume that if τ_q is drawn from µ_q, then the moment condition (8) holds. We note that this is satisfied if the probability density of µ_q is bounded in a neighborhood of 0. Define each switching environment, ω ∈ Ω, as the bi-infinite sequence ω = {ω_k}_{k∈Z}, where each ω_k is a pair of non-negative real numbers, (τ_f^k, τ_q^k), drawn from µ_f × µ_q. That is, (τ_f^k, τ_q^k) is an R²-valued random variable drawn from the product measure µ_f × µ_q. We take P to be the infinite product measure generated by µ_f × µ_q and let E denote the corresponding expectation. Summarizing notation, we have that ω = (…, ω_{−1}, ω_0, ω_1, …) = (…, (τ_f^{−1}, τ_q^{−1}), (τ_f^0, τ_q^0), (τ_f^1, τ_q^1), …) ∈ Ω.
To define the stochastic process {u(x, t)}_{t≥0} we need some notation from renewal theory. For each ω ∈ Ω and natural number n, define the elapsed time after n pairs of switches by S_n := Σ_{k=1}^n (τ_f^k + τ_q^k), (10) with S_0 := 0. Define the number of pairs of switches before time t by N(t) := max{n ≥ 0 : S_n ≤ t}.
We also define the state process J(t) ∈ {0, 1}, which indicates the current boundary condition ((f) or (q)), in equation (11). Finally, for t ≥ 0, define the elapsed time since the last switch (often called the age) by A(t). For each ω ∈ Ω, g ∈ L²[0, L], and integers k ≤ n, define the map ϕ_{k,n}(g) := Φ_q^{τ_q^n} Φ_f^{τ_f^n} ⋯ Φ_q^{τ_q^k} Φ_f^{τ_f^k}(g). For u_0 ∈ L²[0, L], ω ∈ Ω, and t ≥ 0, define our continuous-time L²[0, L]-valued process {u(x, t)}_{t≥0} by evolving ϕ_{1,N(t)}(u_0) under the solution operators of the current switching pair for the remaining time t − S_{N(t)}. (12) Since the switching time distributions are continuous, they are non-arithmetic (also known as non-lattice). Thus, by Theorem 2.4 of [16], we have that u(x, t) converges in distribution as t → ∞ to an L²[0, L]-valued random variable, ū(x).
In order to describe this limit ū, we define the random variables Y_f and Y_q, where g ∈ L²[0, L] and τ_q is an independent draw from µ_q. By Proposition 2.1 in [16], the random variables Y_f and Y_q exist almost surely and are independent of g. (We remark that random variables such as Y_f and Y_q are often called random pullback attractors because they take an initial condition and pull it back to the infinite past [7,17,20].) By Theorem 2.4 in [16], we have that ū is given by equation (13), where ξ is an independent Bernoulli random variable with parameter ρ, and a_q and a_f are two independent random variables taking values on the positive real line with cumulative distribution functions given by the corresponding age distributions. Equation (13) has a simple interpretation. It means that in order to find the distribution of the solution one must do the following. First, flip a coin with parameter ρ to decide the current boundary condition (either (q) or (f)). If it is (q), then apply the map Φ_q to the pullback Y_f for time a_q, where a_q is the amount of time since the last switch given that Φ_q is currently being applied. If it is (f), then apply Φ_f to Y_q for time a_f. In order to extend some further results in [16], we must overcome two barriers. First, our process {u(x, t)}_{t≥0} defined in equation (12) does not have a deterministic bound like the processes considered in [16]. Second, we need to make statements about the spatial derivative of our process. The following few lemmas collect the necessary estimates to overcome these difficulties.
Here and throughout, we use ‖·‖ to denote the L²[0, L] norm, and 1_{‖u(t)‖ ≥ K} denotes the indicator function on the event ‖u(t)‖ ≥ K.

Lemma 2.1. The family {‖u(t)‖}_{t≥0} is uniformly integrable.
Proof. It is straightforward to see from the definitions in equations (4) and (6) that there exist constants α > 0, K_1, and K_2 such that the solution operators obey the corresponding scalar growth bounds. Thus, if we define a process z(t) ∈ R by equation (12) with Φ_q^t and Φ_f^t replaced by these scalar bounding maps, with initial condition z_0 := ‖u_0‖, then ‖u(t)‖ ≤ z(t) for all t almost surely. Hence, if {z(t)}_{t≥0} is uniformly integrable, then {‖u(t)‖}_{t≥0} is uniformly integrable. We will prove the following condition, which implies uniform integrability: sup_{t≥0} E[z²(t)] < ∞. (15) Recalling the definition of S_n from equation (10), observe that z(S_n) splits into the three terms in (16). It is easy to see that the first term in (16) satisfies the bound (17). Further, since the random times are independent, we can bound the expectation of the second term in (16), where τ_f and τ_q are drawn from µ_f and µ_q, respectively.
To bound the expectation of the third term in (16), observe that if we let C denote the constant above, then C < ∞ since τ_f has finite first and second moments by assumption. Thus, putting the bounds in equations (17), (18), and (19) together with (16), we see that E[z²(S_n)] is bounded above independently of n. A similar calculation shows that E[z(S_n)] is also bounded above independently of n. Further, it is immediate that at any time t ≥ 0, z(t) is controlled by its values at the neighboring renewal times. Equation (15) follows.
A similar argument gives the following lemma.
Lemma 2.2. The random variables ‖Y_q‖, ‖Y_f‖, and ‖ū‖ have finite expectation.
In order to make certain statements about spatial derivatives, we need the following lemma.
Lemma 2.3. The random variables ‖Y_q‖_∞ and ‖Y_f‖_∞ have finite expectation.
Proof. By Proposition 2.2 in [16], if τ_f and τ_q are independent draws from µ_f and µ_q, then we have the equality in distribution (20). Hence, recalling the eigenvalues and eigenvectors in (5), we can bound E‖Y_q‖_∞ by independence, the Cauchy-Schwarz inequality, the assumption in equation (8), Lemma 2.2, and the fact that ‖a_k‖ = 1 and ‖a_k‖_∞ = ‖a_1‖_∞ for all k. Further, by equation (20) we can then bound E‖Y_f‖_∞ by equation (21) and the assumption that τ_f has finite expectation.
In the following theorem, we prove that the mean of u(x, t) is constant in x at large time. To see intuitively why this is true, first take the expectation of (1) and interchange expectation with differentiation to show that the mean satisfies the heat equation. Next, since u(x, t) converges in distribution at large time, the time derivative of the mean vanishes at large time, and thus the mean satisfies the steady state heat equation and is therefore linear. Finally, since the boundary condition is always no flux at x = 0, the mean satisfies a no flux condition at x = 0. Combining these last two points forces the mean to be constant. The proof of the theorem makes this argument rigorous.
In the following theorem and in the remainder of this subsection, we use E to denote the Bochner integral of L²[0, L]-valued random variables and not the pointwise expectation of random functions.
Theorem 2.4. Assume that the switching time distributions, µ_q and µ_f, are continuous, have finite first and second moments, and satisfy equation (8). Then the process u(x, t) converges in distribution as t → ∞ to an L²[0, L]-valued random variable, ū(x), whose expectation is equal to a constant for almost every x ∈ [0, L].
Proof. Let g ∈ L²[0, L] and let ε > 0. By Lemma 2.1, there exists a K > 0 so that (22) holds. It follows from Theorem 2.4 in [16] and the definition of convergence in distribution that there exists a T > 0 so that (23) holds, where ⟨·, ·⟩ denotes the L²[0, L] inner product. Combining (22) and (23) with the Cauchy-Schwarz inequality, we obtain (24). Since taking the inner product against g is a bounded linear operator on L²[0, L], we can exchange expectation with the inner product in equation (24) and obtain that Eu(t) → Eū weakly in L²[0, L]. Thus, to show that s = 0, we only need to show that we can exchange the expectation with the limit in equation (28) and then apply equation (27). To do this, we need to find an integrable random variable M such that (29) holds. Recalling the eigenvalues and eigenvectors in equation (4), observe that, since ‖a_k‖_∞ = ‖a_1‖_∞ for all k, the bound (30) holds. Thus, if we let M_1 be the right-hand side of (30), then M_1 bounds ‖(d/dx) Φ_q^{a_q}(Y_f)‖_∞. We now check that M_1 has finite expectation. Using (20), (6), (5), and Lemma 2.3, it is straightforward to obtain the required bound. Since a_q and Y_f in (30) are independent, M_1 has finite expectation. A similar argument shows that there exists a random variable M_2 that almost surely bounds ‖(d/dx) Φ_f^{a_f}(Y_q)‖_∞ and has finite expectation. Recalling the definition of ū from equation (13), we have that M := M_1 + M_2 satisfies equation (29) and EM < ∞. Thus, by the dominated convergence theorem and equations (27) and (28), we have that s = 0, and the proof is complete.

Now that we have shown that the expectation of the process at large time is constant in space for very general switching time distributions, we compute this constant in the case of exponential switching times in the following theorem.
Theorem 2.5. If the switching time distributions, µ_f and µ_q, are exponential with respective rate parameters r_f and r_q, then the constant value of Eū is given by

Eū = (cµ/η) coth(ηL), (31)

where µ := r_q/r_f and η := √((r_f + r_q)/D).
Proof. By Corollary 2.5 in [16], ū is given by (32), where ξ is an independent Bernoulli random variable with parameter ρ. Hence, by Theorem 2.4 above, there exists an M ∈ R such that Eū = M almost everywhere. We will use equation (32) to find M. Let {φ_n}_{n=1}^∞ be such that φ_n ∈ C_0^∞(0, L), φ_n ≥ 0, ‖φ_n‖_1 = 1 for each n, and lim_{n→∞} ⟨φ_n, g⟩ = g(L) for each g ∈ C[0, L]. Then, by equation (32), we have for each n the identity (34), where {b_k}_{k≥0} are the eigenvectors of A_f given in equation (7). We want to take the limit as n → ∞ in equation (34). Since Y_q is almost surely smooth and Y_q(L) = 0 almost surely, using Lemma 2.3 along with Hölder's inequality and the dominated convergence theorem gives lim_{n→∞} ⟨φ_n, EY_q⟩ = 0. (35) Next, we want to show that (36) holds. In order to show this, we need to prove that the series in (37) converges uniformly in x. To do this, we now find an expression for ⟨b_k, EY_f⟩. By equation (32), we have (38) for all integers k ≥ 0. By Proposition 2.2 in [16], if τ_f is an independent draw from µ_f, then we have an equality in distribution for Y_f. Hence, recalling the definition of Φ_f in equation (6) and letting {β_k}_{k≥0} be the eigenvalues of A_f given in equation (7), we have (39) for k ≥ 1. After solving the system of equations in (38) and (39), the sum in (37) converges uniformly in x, and thus (36) is verified.
2.2. Moment approach. Suppose the switching time distributions µ_f and µ_q defined above are exponential with respective rates r_f and r_q. It follows that the state process {J(t)}_{t≥0} defined in equation (11) is a continuous time Markov jump process on {0, 1} that leaves the firing state (state 0) at rate r_f and leaves the quiescent state (state 1) at rate r_q.
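The state process just described is easy to simulate. The sketch below draws exponential holding times and checks that the long-run fraction of time spent firing approaches the invariant probability r_q/(r_f + r_q); the rate values are illustrative (they match the Figure 1 parameters), not prescribed by this subsection.

```python
import random

# Simulate the two-state jump process J(t): it leaves the firing state (0)
# at rate r_f and leaves the quiescent state (1) at rate r_q.  The long-run
# occupation fraction of state 0 should approach r_q / (r_f + r_q).
r_f, r_q = 100.0, 1.0        # illustrative rates
random.seed(0)

time_in = [0.0, 0.0]
state = 1                    # start quiescent
for _ in range(200_000):
    rate = r_f if state == 0 else r_q
    time_in[state] += random.expovariate(rate)
    state = 1 - state        # alternate between the two states

frac_firing = time_in[0] / (time_in[0] + time_in[1])
print(frac_firing)           # should be close to 1/101
```

This occupation fraction is the quantity that appears as 1 − ρ in the boundary conditions of Proposition 1 below.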
Though the stochastic process {u(x, t)}_{t≥0} defined above in equation (12) is constructed as an L²[0, L]-valued process, u(x, t) is actually smooth in x ∈ [0, L] for each t > 0 by virtue of being a solution to the heat equation. Thus, the process is well-defined pointwise in [0, L]. That is, for fixed x ∈ [0, L], the map t ↦ u(x, t) in (41) is a stochastic process taking values in R.
Hence, for t > 0, x ∈ [0, L], and j ∈ {0, 1}, we define v_j(x, t) := E[u(x, t) 1_{J(t)=j}], (42) where by E we mean the expectation of the R-valued process in (41). Throughout the remainder of the paper, we use E to denote this pointwise expectation. Assume that J(t) is initially distributed according to its invariant distribution. The following proposition follows from Theorem 1 in [14]. The PDE and boundary conditions satisfied by v_j follow from interchanging differentiation with expectation. Thus the proof in [14] amounts to checking the hypotheses of the dominated convergence theorem, which follow from standard estimates for the heat equation.
Proposition 1. The functions v_0 and v_1 defined in equation (42) satisfy the following boundary value problem:

∂_t v_0 = D ∂_x² v_0 − r_f v_0 + r_q v_1,
∂_t v_1 = D ∂_x² v_1 + r_f v_0 − r_q v_1,

with boundary conditions

∂_x v_0(0, t) = ∂_x v_1(0, t) = 0, ∂_x v_0(L, t) = (1 − ρ) c, v_1(L, t) = 0,

where ρ = r_f/(r_f + r_q) is the proportion of time in the (q) state.
We can solve the boundary value problem in Proposition 1 at steady state to yield

v_0(x) + v_1(x) = (cµ/η) coth(ηL),

where µ := r_q/r_f and η := √((r_f + r_q)/D), which recovers the result found above in Theorem 2.5.
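This steady-state calculation can be checked numerically. The sketch below relaxes the coupled moment system of Proposition 1 (our reading of it, since the displayed equations are not reproduced above) to steady state by explicit time stepping and compares v_0 + v_1 with the constant (cµ/η) coth(ηL); parameter values are the Figure 1 ones.

```python
import math

# Relax the steady-state moment system (our reading of Proposition 1):
#   dv0/dt = D v0'' - r_f v0 + r_q v1,   dv1/dt = D v1'' + r_f v0 - r_q v1,
# with v0'(0) = v1'(0) = 0, v0'(L) = (1 - rho) c, v1(L) = 0,
# and check that v0 + v1 approaches the constant (c mu / eta) coth(eta L).
L, D, c, r_q, r_f = 1.0, 1.0, 100.0, 1.0, 100.0
rho = r_f / (r_f + r_q)      # proportion of time quiescent
N = 40
dx = L / N
dt = 0.2 * dx * dx / D
v0 = [0.0] * (N + 1)
v1 = [0.0] * (N + 1)

for _ in range(int(3.0 / dt)):               # long enough to reach steady state
    lap0 = [0.0] * (N + 1)
    lap1 = [0.0] * (N + 1)
    lap0[0] = 2.0 * (v0[1] - v0[0]) / dx**2  # ghost points for no-flux at x = 0
    lap1[0] = 2.0 * (v1[1] - v1[0]) / dx**2
    for i in range(1, N):
        lap0[i] = (v0[i + 1] - 2.0 * v0[i] + v0[i - 1]) / dx**2
        lap1[i] = (v1[i + 1] - 2.0 * v1[i] + v1[i - 1]) / dx**2
    # ghost point for the flux condition v0'(L) = (1 - rho) c
    lap0[N] = (2.0 * v0[N - 1] + 2.0 * dx * (1.0 - rho) * c - 2.0 * v0[N]) / dx**2
    for i in range(N + 1):
        f0, f1 = v0[i], v1[i]
        v0[i] += dt * (D * lap0[i] - r_f * f0 + r_q * f1)
        v1[i] += dt * (D * lap1[i] + r_f * f0 - r_q * f1)
    v1[N] = 0.0                              # absorbing condition in state (q)

total = [a + b for a, b in zip(v0, v1)]
eta = math.sqrt((r_f + r_q) / D)
M = (c * (r_q / r_f) / eta) / math.tanh(eta * L)
print(total[0], total[N // 2], M)            # total should be flat and near M
```

The relaxed profile v_0 + v_1 is flat to within discretization error, which is a direct numerical illustration of Theorem 2.5.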
In addition to finding the mean, we can also find the standard deviation. For t > 0, (x, y) ∈ [0, L]², and j ∈ {0, 1}, we define the two-point correlations C_j(x, y, t) := E[u(x, t) u(y, t) 1_{J(t)=j}]. (44) The following proposition follows from Theorem 1 in [14].
Proposition 2. The functions C_0 and C_1 defined in equation (44) satisfy the following boundary value problem on the square [0, L]².
with boundary conditions that couple to the moments defined in equation (42). It is straightforward to numerically solve this boundary value problem at steady state to obtain the large-time second moment. After obtaining this function, we subtract from it the square of the steady state mean found above to obtain the variance at large time. A plot of the square root of this function (i.e., the standard deviation) is given in Figure 1 (left). While the mean is constant in space, the standard deviation is much higher near the switching boundary.

2.3. Another simple problem. Consider the same switching PDE as above, but now suppose the boundary condition at x = 0 is always an absorbing Dirichlet condition, u(0, t) = 0. That is, suppose u(x, t) satisfies the heat equation in the interval [0, L] and at exponentially distributed random times switches between the boundary conditions (3). This models the situation where there is a neuron at x = L that switches stochastically between firing (f) and quiescence (q) while there is a glial cell at x = 0 that absorbs neurotransmitter. Using the methods from subsection 2.2 above, we can calculate the expectation of the solution at large time. In particular, after solving the steady state version of a boundary value problem similar to that in Proposition 1, we find that the expected solution at large time is

Eū(x) = cµx / (µ + ηL coth(ηL)), (46)

where µ := r_q/r_f and η := √((r_f + r_q)/D). We can also derive the analog of Proposition 2 for this problem in order to find the standard deviation. The mean and standard deviation are plotted in Figure 1 (right).
3. The general problem. Suppose we now let both ends of the interval switch independently. That is, suppose u(x, t) satisfies the heat equation in the interval [0, L] and suppose the boundary condition at x = 0 switches between u(0, t) = 0 and ∂_x u(0, t) = −c_0 < 0, while the boundary condition at x = L switches between u(L, t) = 0 and ∂_x u(L, t) = c_L > 0. This models the situation where there are neurons at x = 0 and at x = L that fire independently.
Suppose two independent Markov jump processes control the states of the x = 0 and x = L boundaries. It is straightforward to combine these two independent Markov processes into a single 4-state Markov process J(t) ∈ {0, 1, 2, 3} with generator Q. For j ∈ {0, 1, 2, 3}, define the functions v_j(x, t) := E[u(x, t) 1_{J(t)=j}]. (47) Assume that J(t) is initially distributed according to its invariant distribution. The following proposition follows from Theorem 1 in [14].
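The combination of the two boundary processes can be made concrete: the generator of the joint chain is the Kronecker sum of the two 2-state generators, and the joint invariant distribution is the product of the marginal ones. The sketch below checks this with illustrative rates (the specific values are not from the paper).

```python
# Combine two independent two-state switching processes (one per boundary)
# into a single 4-state Markov chain.  The joint generator is the Kronecker
# sum Q = Q0 (+) QL, and the invariant distribution is the product of the
# two marginal invariant distributions.  Rates below are hypothetical.
def kron_sum(A, B):
    """Kronecker sum of generators A (n x n) and B (m x m)."""
    n, m = len(A), len(B)
    Q = [[0.0] * (n * m) for _ in range(n * m)]
    for i in range(n):
        for j in range(m):
            for k in range(n):
                Q[i * m + j][k * m + j] += A[i][k]   # A acts on first index
            for l in range(m):
                Q[i * m + j][i * m + l] += B[j][l]   # B acts on second index
    return Q

def two_state(rf, rq):      # state 0 = firing, state 1 = quiescent
    return [[-rf, rf], [rq, -rq]]

def invariant(rf, rq):      # invariant distribution of a two-state chain
    return [rq / (rf + rq), rf / (rf + rq)]

Q0, QL = two_state(100.0, 1.0), two_state(50.0, 2.0)   # hypothetical rates
Q = kron_sum(Q0, QL)
pi = [a * b for a in invariant(100.0, 1.0) for b in invariant(50.0, 2.0)]

# pi Q = 0 confirms that the product distribution is invariant for the chain
residual = max(abs(sum(pi[i] * Q[i][j] for i in range(4))) for j in range(4))
print(residual)
```

This is exactly the 4-state generator Q that enters Proposition 3; the product structure of the invariant distribution is what makes the eight linear equations for the steady state tractable.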
Proposition 3. The functions defined above in equation (47) satisfy a coupled system of PDEs of the same form as in Proposition 1, where r_q is the rate of switching from the quiescent state to the firing state and r_f is the rate of switching from the firing state to the quiescent state.

The intuitive reason why the expectation is constant is that if x is far away from L then it is hard for neurotransmitter to diffuse there without being reabsorbed first, but once there it is hard for it to be reabsorbed when the boundary conditions switch because of the distance to L. It is clear that M should be proportional to c, since c is the rate at which neurotransmitter is put into the interval when the neuron is firing.
If r_q gets larger and/or r_f gets smaller, then µ will increase and cause M to increase, since the neuron is spending a larger fraction of time in the firing state. Similarly, if r_q gets smaller and/or r_f gets larger, both µ and M will decrease because the neuron is spending a smaller fraction of time firing. But what if we keep µ constant and scale r_q and r_f to be large? Then η becomes large, so M approaches 0 since coth is monotone decreasing to 1. This makes sense because it is very hard for neurotransmitter to escape a small region near L: as soon as it is released, the boundary conditions switch and it is reabsorbed. More generally, fast switching between Dirichlet and Neumann always becomes pure Dirichlet if the proportion of time in each state is fixed [15]. This phenomenon can be understood in terms of the mean absorption time of a Brownian motion to a switching boundary. Indeed, for a particle starting on a boundary that switches between reflecting and absorbing with the proportion of time in each state fixed, the mean absorption time goes to zero as the switching rate increases [4,5].
On the other hand, if µ is constant then η gets small as both r_q and r_f get small, so M approaches ∞. Intuitively, this is because the input is constant in time when the neuron is firing but the absorption is (approximately) proportional to the amount in the interval, so the input dominates as the switching times become long. To see why absorption is approximately proportional to the amount in the interval, first observe from the form of the solution operator in (6) that after being in the firing state for a long time s, the solution is approximately the product sφ(x) for some function φ(x). Then, when the boundary condition switches to absorbing, the form of the solution operator in (4) implies that the amount absorbed before the next switch will be proportional to s.
The diffusion constant D has the reciprocal effect on η as the sum r_q + r_f, so when D gets small M also gets small, and when D gets large M goes to ∞, for the reasons given above. Finally, notice that M gets smaller as L increases, which makes sense because the neurotransmitter is diffusing into a larger region. What is interesting, however, is that coth(z) asymptotes to 1 as z → ∞, which means that once L is large compared to 1/η the value of M is almost independent of L: M ≈ cµ/η.
4.2. General formulas. In Section 3 we considered the general problem where there is a neuron terminal at both ends of the interval, the parameters of the neurons are different, and they switch independently. We showed that lim_{t→∞} Eu(x, t) = M, which does not depend on x, and we indicated how to compute an explicit formula for M in terms of all the parameters of the problem. We also displayed an explicit formula for M in the case where the neural parameters are identical. Although the formula in this special case remains complicated, we can take various limits that have biological significance. It is not hard to check that M → 0 as η → ∞. This makes sense because, as η gets large, the switching gets faster and faster, as we discussed above, and therefore the neurotransmitter cannot easily escape from a small region about either endpoint. Also, M → ∞ as η → 0. This makes sense because, as we saw above, as the switching gets slower and slower, input dominates over removal. Finally, M converges to a finite limit as L → ∞. In the simple case where there is a terminal at one end and a no flux condition at the other end, this is the limit that we saw in Section 4.1 above. The limit exists because as L gets very large the coth term goes to one and M becomes independent of L. The same thing happens here if the terminals at the ends are sufficiently far apart.
4.3. Real neural parameters. Many dopaminergic and serotonergic neurons fire at a basal rate of about 1 spike/sec [10,12]. The length of a typical action potential is about 1-10 milliseconds [13], and so it's reasonable to assume that the release of neurotransmitter lasts a total of about 5 milliseconds. This means that reasonable values are r_q = 1/sec, r_f = 200/sec, and µ = r_q/r_f = 1/200. The diffusion constant for dopamine is approximately 10⁻⁶ (cm)²/sec [22], so η = √((r_f + r_q)/D) ≈ 1.4 × 10⁴ cm⁻¹, i.e., about 1.4 per micron. The spacing between neural terminals varies widely, but for serotonin it has been estimated that there are about 2.6 × 10⁶ terminals per cubic millimeter [21], or a distance of about 7 µm between terminals. In [18], Figure 1, some terminals are considerably further apart than 20 µm and some are closer. If we assume that 7 µm ≤ L ≤ 20 µm, then 9.9 ≤ ηL ≤ 28.4. Thus coth(ηL) ≈ 1 and we are well within the range of L where M is approximately independent of L. Typical extracellular concentrations of dopamine are approximately 0.090 µM [1], and from this one can use the formula (51) to compute c, the one parameter for which there are no experimental measurements.
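These parameter estimates are easy to reproduce. The sketch below plugs in the stated physiological values (r_q = 1/sec, r_f = 200/sec, D = 10⁻⁶ cm²/sec) and assumes η = √((r_f + r_q)/D), confirming that ηL is large over the stated range of terminal spacings and hence that coth(ηL) ≈ 1.

```python
import math

# Physiological values stated in the text: rates in 1/sec, D in cm^2/sec.
r_q, r_f, D = 1.0, 200.0, 1.0e-6
mu = r_q / r_f                      # = 1/200
eta = math.sqrt((r_f + r_q) / D)    # in 1/cm (assumed form of eta)
eta_per_um = eta * 1.0e-4           # 1 micron = 1e-4 cm

etaL_low = eta_per_um * 7.0         # terminal spacing L = 7 microns
etaL_high = eta_per_um * 20.0       # terminal spacing L = 20 microns
coth = lambda z: 1.0 / math.tanh(z)
print(etaL_low, etaL_high, coth(etaL_low))
# etaL is roughly 10-28 over this range, so coth(etaL) = 1 to many decimal
# places and the mean is effectively independent of L, as claimed.
```

With coth(ηL) ≈ 1, the mean reduces to cµ/η, so a measured extracellular concentration determines c directly.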

4.4. Complete solution of the one-dimensional problem. In Section 3 we outlined a method to give an explicit formula for the constant mean, M, in the case where there are neurons at both 0 and L and they fire independently with different parameters. This allows us to compute the overall spatial mean in the very general situation where there is a piece of neural tissue, represented by the interval [a, b], that contains many neurons, switching between firing and quiescence, and glial cells that absorb neurotransmitter; see Figure 2. We assume that there are finitely many neural terminals and finitely many glial cells in the interval [a, b]. Since we are in one dimension, each neuron or glial cell separates the tissue on its left from the tissue on its right. Put differently, the tissue is divided into subintervals with terminals or glial cells only at the ends. If there are glial cells at each end, then the neurotransmitter mean in that interval will be 0 as t → ∞. If there is a glial cell at one end and a neuron at the other end, then the mean is linear over the interval (formula (46) above) and the spatial mean over the interval is elementary to compute. If there are terminals at both ends, then the spatial distribution of the asymptotic mean in the interval is given by the complex general formula (that we did not state explicitly) in Section 3. Finally, at the ends of the tissue, if a glial cell is nearest the end, then the mean neurotransmitter level is 0 in that interval, and if a terminal is nearest the end, then the mean is constant over the interval and given by the formula (51) above. Thus, once we know the parameters for each neural terminal, we can compute the neurotransmitter mean in each subinterval, and therefore, by elementary methods, can compute explicitly the overall mean over the whole tissue as t → ∞.
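The bookkeeping just described can be sketched in code. The toy below classifies each subinterval by its endpoint types and combines per-interval means into a tissue-wide average. The site arrangement, uniform spacing, and identical neuron parameters are all hypothetical, and for terminal-terminal intervals we substitute the one-terminal constant as an illustrative stand-in, since the paper's general Section 3 formula is not reproduced here.

```python
import math

# Toy version of the tissue-averaging procedure: sites are neural terminals
# ('N') or glial cells ('G').  Glia-glia intervals have mean 0; glia-terminal
# intervals have the linear mean (46), whose spatial average is half its value
# at the terminal end; for terminal-terminal intervals we use the one-terminal
# constant (Theorem 2.5) as a stand-in for the unstated general formula.
r_q, r_f, D, c = 1.0, 100.0, 1.0, 100.0      # hypothetical, identical neurons
mu = r_q / r_f
eta = math.sqrt((r_f + r_q) / D)
coth = lambda z: 1.0 / math.tanh(z)

def interval_mean(left, right, L):
    if left == 'G' and right == 'G':
        return 0.0                                     # absorbed at both ends
    if 'G' in (left, right):                           # linear profile
        slope = c * mu / (mu + eta * L * coth(eta * L))
        return 0.5 * slope * L                         # spatial average of s*x
    return (c * mu / eta) * coth(eta * L)              # constant mean stand-in

sites = ['N', 'G', 'N', 'N', 'G', 'G', 'N']            # hypothetical layout
spacing = 1.0                                          # uniform gaps (assumed)
means = [interval_mean(sites[i], sites[i + 1], spacing)
         for i in range(len(sites) - 1)]
overall = sum(means) / len(means)                      # equal-length average
print(means, overall)
```

Given measured per-terminal parameters, the same loop with the paper's general formula in the terminal-terminal branch would produce the explicit tissue-wide mean described in the text.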

5. Discussion. The goal of this paper was to begin the development of mathematical methods for understanding volume transmission, in which cells in one nucleus project their axons to a distant nucleus and release neurotransmitter extrasynaptically. Thus the terminals on the projections maintain (or change) the concentration of the neurotransmitter in the extracellular space in the projection nucleus. Given the properties of the neurons, one would like to compute various quantities like average neurotransmitter concentration. The problem is interesting from a mathematical point of view because the neuron terminals are both the source of neurotransmitter and the most important sink. We formulated the question as a problem of switching boundary conditions for the heat equation, where each terminal is a point at which neurotransmitter is released at a constant rate when the neuron is firing and which becomes an absorbing boundary condition when it is not. This formulation ignores some details of the biology. Surely the release rate is not constant throughout the short action potential, and some neurotransmitter may continue to be released for some milliseconds after the action potential is finished. The reuptake of neurotransmitter into the terminal is by transporters obeying Michaelis-Menten kinetics, and therefore treating reabsorption as an absorbing boundary condition is an approximation. And, we have only considered the question in one space dimension.
Nevertheless, we have found some interesting and unexpected phenomena that could be important for biological understanding if they also hold in two and three dimensions. Chief among these is the fact that in the simplest problem, a switching neuron at x = L and a no flux boundary condition at x = 0, lim_{t→∞} Eu(x, t) is a constant, M, independent of x, and the variance decays rapidly as one moves away from the terminal. Thus the whole tissue, [0, L], sees the same average concentration of neurotransmitter even though some parts are closer to the terminal and some parts are further away. And, we were able to calculate a simple explicit formula for M in terms of the parameters of the problem. In Section 3 we developed a method to solve the problem when there are independently switching neuron terminals at both ends of the interval. There we prove that the mean is constant in space if the terminals have the same parameters and is linear in x if the parameters are different.
The other simple problem that we considered was to have a switching terminal at x = L and an absorbing glial cell at x = 0. There are 10 times as many glial cells in the brain as there are neurons [10]. Some of them are known to take up neurotransmitter [8,9], though probably not as quickly or efficiently as neural terminals. In this case M(x) = lim_{t→∞} Eu(x, t) is a linear function of x, and we compute its slope explicitly in terms of the neural parameters. Recent work with a biological collaborator [24] suggests that this uptake mechanism plays an important role for serotonin.
We have begun some calculations in higher dimensions, where the analytical and geometrical issues are much more difficult. The problem can be formulated similarly to what we have done here, but closed formulas for M seem much harder to obtain, though M can be computed numerically. Numerical calculations show some properties similar to the one-dimensional results in this paper. However, the terminals can no longer be treated as points, and thus their shape matters, and it is not yet clear how the excluded volume, the tortuosity of the extracellular space, and the placement of geometric obstacles affect the spatial variation of lim_{t→∞} Eu(x, t).
It follows that Eū is a weak solution to Laplace's equation on the interval [0, L]. But by the regularity of ∆ on [0, L], we have that Eū is actually a classical solution and thus it is the affine function (Eū)(x) = sx + M, for constants s, M ∈ R. It remains to show that s = 0. Let {φ_n}_{n=1}^∞ be such that φ_n ∈ C_0^∞(0, L), φ_n ≥ 0, ‖φ_n‖_1 = 1 for each n, and lim_{n→∞} ⟨φ_n, g⟩ = g(0) (26) for each g ∈ C[0, L]. Since ū is almost surely smooth and (d/dx)ū(0) = 0 almost surely, integration by parts gives that lim_{n→∞} ⟨(d/dx)φ_n, ū⟩ = −(d/dx)ū(0) = 0 almost surely. (27) Since taking the inner product with (d/dx)φ_n is a bounded linear functional on L²[0, L], and since Eū = sx + M, integration by parts gives lim_{n→∞} ⟨(d/dx)φ_n, Eū⟩ = −s. (28)

Figure 1. Large time pointwise mean and standard deviation for the process that solves (1) and at exponential times switches between either the equation (2) conditions (left plot) or the equation (3) conditions (right plot). The means are given by (31) and (46) for the left and right plots, respectively. The standard deviation for the left plot is found by numerically solving the boundary value problem in Proposition 2 at steady state, subtracting the square of the mean, and then taking the square root. The standard deviation for the right plot is found analogously. For both plots, parameters are L = D = r_q = 1 and c = r_f = 100.

Figure 2. A line of neural tissue with glial cells and neural terminals.