RECURRENCE FOR SWITCHING DIFFUSION WITH PAST DEPENDENT SWITCHING AND COUNTABLE STATE SPACE

Abstract. This work continues and substantially extends our recent work on switching diffusions with switching processes that depend on the past states and that take values in a countable state space. That is, the discrete component of the two-component process takes values in a countably infinite set, and its switching rate at the current time depends on the value of the continuous component involving past history. This paper focuses on recurrence, positive recurrence, and weak stabilization of such systems. In particular, the paper aims to provide more verifiable conditions on recurrence, positive recurrence, and related issues. Assuming that the system is linearizable, it provides feasible conditions, focusing on the coefficients of the systems, for positive recurrence. Then linear feedback controls for weak stabilization are considered. Some illustrative examples are also given.


1. Introduction. Nowadays, because of complex real-world applications, numerous systems in engineering, social networks, biological and medical systems, and ecological systems demand more realistic models. Owing to this need, substantial effort has been devoted to developing sophisticated models for control, optimization, and stability. A wide range of applications in wireless communications, queueing networks, biology, ecology, financial engineering, and social networks require the use of hybrid models in which continuous dynamics and discrete events coexist and interact. A switching diffusion, a two-component process (X(t), α(t)) (a continuous component X(t) and a discrete component α(t) taking values in a discrete set), is one such hybrid system. When the discrete-event process α(t) = i, the continuous component X(t) evolves as a diffusion process whose drift and diffusion coefficients depend on i. Because of their importance, much effort has been devoted to such hybrid dynamic systems; see [14,24,33,32,30,34] and the references therein. For a comprehensive study of the Markov process (X(t), α(t)) in which the generator of α(t) depends on the current state X(t), we refer the reader to [28].
Switching diffusions arose from treating systems running in continuous time that stem from models based on stochastic differential equations together with random discrete events; they feature continuous states and discrete events (discrete states) that coexist and interact. Earlier considerations required the discrete states to form a continuous-time Markov chain that is independent of the driving random disturbances (the Brownian motion) for the continuous states. Subsequent studies put emphasis on the case in which the discrete states (discrete random events) depend on the continuous states [28], which substantially enlarges the class of systems that can be treated but makes the analysis much more difficult to handle.
In treating switching diffusions, most existing works considered α(t) as a process taking values in a finite set. Very recently, the finite-set restriction on α(t) has been removed, and α(t) is allowed to take values in a countable state space, with the proviso that the systems be memoryless. To be able to treat more realistic models and to broaden the applicability, we undertook the task of investigating the dynamics of (X(t), α(t)) in which α(t) has a countable state space and its switching intensities depend on the history of the continuous component X(t). As a first attempt, motivated by, for example, applications in queueing systems and control systems, among others, a systematic study of such switching diffusions was initiated in [20,22]. In [20], we gave a precise formulation of the process (X(t), α(t)) and established the existence and uniqueness of solutions together with such properties as the Markov-Feller property and the Feller property of function-valued stochastic processes associated with our processes under suitable conditions. In [22], general conditions for recurrence and ergodicity of switching diffusion processes with past-dependent switching having a countable state space were given using appropriate Lyapunov functions. In practice, it is often difficult to find a suitable Lyapunov function. With such motivations, this paper aims to provide easily verifiable conditions for positive recurrence when the systems are linearizable. As coined by Wonham, recurrence is also known as weak stability. This paper further examines weak-stabilization problems, which are of practical importance because in control and optimization, when dealing with long-term average cost problems, we often need to replace the instantaneous transition probability measures by invariant measures whenever possible. To make the replacement, we need the existence of the corresponding invariant measures and the convergence of the transition probability measures to the invariant measures.
When the systems encountered are not positive recurrent (also known as weakly stable), it will be necessary to design controls so that the controlled switching diffusions are weakly stable or positive recurrent, which guarantees the existence of stationary measures.
The rest of the paper is organized as follows. The formulation of switching diffusions with past-dependent switching and countably many possible switching locations is given in Section 2. Section 3 concentrates on recurrence and ergodicity, and provides certain sufficient conditions for positive recurrence and ergodicity of the related switching diffusions. In Section 4, we concentrate on weak stabilization of switching diffusions by linear feedback controls. To demonstrate our results, we provide a few examples in Section 5. A short summary is given in Section 6. Finally, for completeness, an appendix is provided at the end of the paper to cover some technical complements.

In what follows, we mainly work with C([−r, 0], R^{n0}), and simply denote it by C := C([−r, 0], R^{n0}). For each φ ∈ C, we use the sup-norm metric ‖φ‖ = sup{|φ(t)| : t ∈ [−r, 0]}. For t ≥ 0, we use y_t to denote the segment function (or memory segment function) y_t = {y(t + s) : −r ≤ s ≤ 0}. Denote by |x| the Euclidean norm of x for x ∈ R^{n0}. We work with (Ω, F, {F_t}_{t≥0}, P), a complete filtered probability space with the filtration {F_t}_{t≥0} satisfying the usual condition; that is, it is increasing and right continuous, and F_0 contains all P-null sets. On this probability space we can construct an R^d-valued Brownian motion W(t) and an independent Poisson measure p(dt, dz), both F_t-adapted. Let b(·, ·) : R^{n0} × Z_+ → R^{n0} and σ(·, ·) : R^{n0} × Z_+ → R^{n0 × d}, where Z_+ = {1, 2, . . . } (the set of positive integers). We work with the two-component process (X(t), α(t)), where α(t) is a pure jump process taking values in Z_+ and X(t) satisfies

dX(t) = b(X(t), α(t)) dt + σ(X(t), α(t)) dW(t). (2.1)

We assume that the jump intensity of α(t) depends on the trajectory of X(·), with q_ij(φ) ≥ 0 for j ≠ i and q_i(φ) := Σ_{j≠i} q_ij(φ) < ∞ for each i ∈ Z_+ and each φ ∈ C, and Q(φ) = (q_ij(φ)). For simplicity, we assume that q_i(φ) is uniformly bounded (for the treatment of more general classes of functions, cf. [20]) and

P{α(t + ∆) = j | α(t) = i, X_s, α(s), s ≤ t} = q_ij(X_t)∆ + o(∆), j ≠ i. (2.2)

Note that in (2.2), in contrast to [28], Q(X_t) is used in lieu of Q(X(t)). That is, the segment process X_t, rather than the current value of the solution of the stochastic differential equation, is used. Recall that a strong solution to (2.1) and (2.2) on [0, T] with initial data (ξ, i_0), a C × Z_+-valued and F_0-measurable random variable, is an F_t-adapted process (X(t), α(t)) such that
• X(t) is continuous and α(t) is cadlag (right continuous with left limits) with probability 1 (w.p.1);
• X(t) = ξ(t) for t ∈ [−r, 0] and α(0) = i_0;
• (X(t), α(t)) satisfies (2.1) and (2.2) for all t ∈ [0, T] w.p.1.
Alternatively, the switching diffusion above may also be given as follows. For each function φ : [−r, 0] → R^{n0} and each i ∈ Z_+, let ∆_ij(φ), j ≠ i, be consecutive left-closed, right-open intervals of the real line, each having length q_ij(φ). The process α(t) can then be defined as the solution to

α(t) = α(0) + ∫_0^t ∫_R h(X_s, α(s−), z) p(ds, dz),

where h(φ, i, z) = Σ_{j≠i} (j − i)1_{{z ∈ ∆_ij(φ)}}, α(t−) = lim_{s→t−} α(s), and p(dt, dz) is a Poisson random measure with intensity dt × m(dz), m being the Lebesgue measure on R, such that p(dt, dz) is independent of the Brownian motion W(·). The pair (X(t), α(t)) is therefore a solution to the system consisting of (2.1) and the jump equation above. Under some mild conditions, the process (X(t), α(t)) will satisfy the relation (2.2) (see [20, Lemma A.1]). To get some insight, we consider a couple of examples below. One is a fluid model from queueing systems, which begins with a switched ordinary differential equation, where the probability distribution of the switching process depends on the past history. The other example stems from applications in ecological systems.
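As a quick numerical illustration of the formulation above, the following sketch simulates a scalar version of (2.1)-(2.2) by an Euler-Maruyama scheme, with the switching intensity evaluated along the memory segment X_t. All coefficients (b, σ, and the rates) are hypothetical choices for illustration only, not taken from the paper.

```python
import math
import random

# Hypothetical illustration (coefficients are NOT from the paper): Euler-Maruyama
# simulation of a scalar switching diffusion whose switching intensity depends on
# the memory segment X_t = {X(t+s) : -r <= s <= 0}, in the spirit of (2.1)-(2.2).

def simulate(T=1.0, dt=1e-3, r=0.1, seed=0):
    rng = random.Random(seed)
    lag = int(r / dt)                  # grid points in the memory window [-r, 0]
    x_hist = [0.0] * (lag + 1)         # initial segment xi identically 0
    alpha = 1                          # initial discrete state, here in {1, 2}
    b = lambda x, i: -i * x            # drift depends on the discrete state i
    sigma = lambda x, i: 0.5           # constant diffusion coefficient
    for _ in range(int(T / dt)):
        x = x_hist[-1]
        # past-dependent jump intensity: uses X(t - r), the oldest segment point
        q = 1.0 / (1.0 + x_hist[0] ** 2)
        if rng.random() < q * dt:      # first-order approximation of (2.2)
            alpha = 2 if alpha == 1 else 1
        dw = rng.gauss(0.0, math.sqrt(dt))
        x_hist.append(x + b(x, alpha) * dt + sigma(x, alpha) * dw)
        x_hist.pop(0)                  # slide the memory window forward
    return x_hist[-1], alpha

x_T, alpha_T = simulate()
print(x_T, alpha_T)
```

The sliding list x_hist plays the role of the segment X_t; only its endpoint enters the rates here, but any functional of the list could be used.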
Example 2.1. In this example, we consider an extension of the Markov-modulated-rate fluid models treated in [29]. Stemming from queueing systems, this example is simple in that it does not even involve a Brownian motion part, but it explains the modeling viewpoint of past-dependent switching with a countable state space. Consider a fluid buffer model with infinite capacity. Let X(t) be the amount of fluid in the buffer at time t, known as the buffer content or buffer level. The fluid enters and leaves the buffer at random rates. The input and output of fluid are modulated by a switching process α(t) with state space Z_+ = {1, 2, . . .}, known as a stochastic external environment. Using α(t) to determine the input and output rates, we introduce a drift function f(·) (see Kulkarni [15]). Different from [29], we do not assume α(t) to be a Markov chain; rather, we assume that the transition rates satisfy (2.2). That is, the transition rates depend on the history of the process X(t).
Note that, formally, the net rate, that is, the difference of the input and output rates at time t, is given by f(α(t)). The dynamics of the buffer content {X(t) : t ≥ 0} can be described by the differential equation

(d/dt)X(t) = f(α(t)) if X(t) > 0, and (d/dt)X(t) = (f(α(t)))^+ if X(t) = 0,

where x^+ = max{x, 0}. Note that X(t) can be rewritten as

X(t) = X(0) + ∫_0^t f(α(s)) ds + L(t),

where a ∧ b = min(a, b) for two real numbers a and b, and L(t) = −∫_0^t (f(α(s)) ∧ 0) 1_{{X(s)=0}} ds measures the amount of potential output lost up to time t due to the emptiness of the buffer. Many of the current interests concerning the fluid model above include long-run average control problems and stability of the systems.
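The buffer dynamics of Example 2.1 can be sketched numerically as follows; the net-rate function f and the past-dependent switching rates below are hypothetical, chosen only to illustrate the reflection at zero and the lost-output term.

```python
import random

# Minimal sketch of the fluid buffer of Example 2.1 (all numbers hypothetical):
# the buffer content moves at net rate f(alpha(t)), is reflected at 0, and the
# switching rate out of the filling state grows with the past level X(t - r).

def fluid_buffer(T=10.0, dt=1e-3, r=0.5, seed=1):
    rng = random.Random(seed)
    lag = int(r / dt)
    hist = [0.0] * (lag + 1)
    alpha = 1
    f = {1: 1.0, 2: -2.0}              # state 1 fills the buffer, state 2 drains it
    lost = 0.0                         # potential output lost while the buffer is empty
    for _ in range(int(T / dt)):
        x = hist[-1]
        q12 = 0.5 + hist[0]            # past-dependent rate 1 -> 2, grows with X(t - r)
        q21 = 1.0                      # rate 2 -> 1
        if rng.random() < (q12 if alpha == 1 else q21) * dt:
            alpha = 3 - alpha
        x_new = x + f[alpha] * dt
        if x_new < 0.0:                # reflection at 0: the buffer cannot go negative
            lost += -x_new             # record the output lost to emptiness
            x_new = 0.0
        hist.append(x_new)
        hist.pop(0)
    return hist[-1], lost

level, lost = fluid_buffer()
print(level, lost)
```

The accumulator `lost` is the discrete analogue of L(t) above: it grows only while the buffer is empty and the net rate is negative.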
Example 2.2. Consider the evolution of a predator-prey model in which the predator is macro and the prey is micro. Denote by X(t) and α(t) the density of the predator species and the number of the prey at time t, respectively. In [6], the dynamics of the predator is given by a differential equation (2.7), and the number of the prey evolves according to a birth and death process with switching rates given by (2.8). In (2.8), β and δ represent the birth rate and the death rate of the predator, D is the death rate of the prey, C and c are the intraspecific competition rates of the prey and predator, respectively, B is the loss rate of the prey due to predation, and ρ is the intake rate of the predator. Suppose that the dynamics of X(t) is subject to environmental noise described by a Brownian motion. Since the life cycle of a micro species is usually very short, it is reasonable to assume that the dynamical equation of X(t) is past-independent.
On the other hand, for the micro species, the reproduction process of α(t) is assumed to be non-instantaneous. More precisely, suppose that the reproduction depends on the period of time from egg formation to hatching, say r. Then we can assume that the switching rate of α(t) depends on the history of X(·) from t − r to t rather than on the current state X(t). Thus, we may replace X(t) in (2.8) by a functional ∫_{−r}^{0} X(t + u)µ(du), where µ is some measure on [−r, 0]. With these assumptions, the model (2.7) and (2.8) becomes a system with past-dependent switching. We want to answer the following questions. Under which conditions will the species be permanent, and under which conditions will they become extinct at some instant? Is there an invariant measure associated with the system under consideration? These questions are related to the stability and ergodicity of the corresponding stochastic systems.
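A rough simulation sketch of this predator-prey mechanism, with entirely hypothetical parameter values and rate forms, might look as follows; the prey count α(t) is a birth-death process whose death rate involves the averaged past predator density, here with µ taken as the uniform average over [−r, 0].

```python
import math
import random

# Sketch of Example 2.2 with hypothetical parameters and rate forms: the
# predator density X(t) is a diffusion whose drift depends on the prey count
# alpha(t), while alpha(t) is a birth-death process whose death rate depends
# on the averaged past predator density over [t - r, t].

def predator_prey(T=5.0, dt=1e-3, r=0.2, seed=2):
    rng = random.Random(seed)
    lag = int(r / dt)
    hist = [1.0] * (lag + 1)            # initial predator segment
    alpha = 10                          # initial prey count
    beta, delta, c = 1.0, 0.5, 0.1      # prey birth; predator death/competition
    rho, D, C, B = 0.05, 0.3, 0.01, 0.05
    for _ in range(int(T / dt)):
        x = hist[-1]
        xbar = sum(hist) / len(hist)    # averaged past predator density
        birth = beta * alpha
        death = (D + C * alpha + B * xbar) * alpha   # predation uses the past average
        u = rng.random()
        if u < birth * dt:
            alpha += 1
        elif u < (birth + death) * dt and alpha > 0:
            alpha -= 1
        drift = x * (rho * alpha - delta - c * x)    # predator gains from the prey
        dw = rng.gauss(0.0, math.sqrt(dt))
        hist.append(max(x + drift * dt + 0.1 * x * dw, 0.0))
        hist.pop(0)
    return hist[-1], alpha

x_T, prey = predator_prey()
print(x_T, prey)
```

The clamping at zero and the competing-risks step for the birth-death part are crude first-order discretizations; they serve only to show where the delayed functional enters the rates.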
on the coefficients of the drifts and diffusion matrices. As pointed out in [26], recurrence may be called weak stability. It is well known that stability normally refers to properties of solutions near certain stationary points. Recurrence can be regarded likewise, except that the neighborhood in question is no longer that of a finite point, but rather a neighborhood of infinity. In stability analysis, we often wish to study near-linear systems, namely, systems that are locally like linear ones with higher-order terms involved. Here the situation is similar. We wish to linearize the systems about the point "∞" and then ask: if the linearized systems are weakly stable (recurrent), are the nonlinear systems as well?
Before proceeding further, let us recall the definition of recurrence.
The process (X_t, α(t)) is said to be recurrent if, for any bounded set D ⊂ C and any finite set N ⊂ Z_+, the process reaches D × N in finite time with probability one. To obtain recurrence, we need an assumption that guarantees the irreducibility of the process (X_t, α(t)). Thus, we impose the following conditions.
Assumption 3.2. Suppose that the following conditions are satisfied.
1. For any i ∈ Z_+, A(x, i) is elliptic uniformly on each compact set; that is, for each compact set K ⊂ R^{n0} there exists a κ > 0 such that ξ′A(x, i)ξ ≥ κ|ξ|² for all x ∈ K and ξ ∈ R^{n0}.
2. There is Q = (q_ij)_{Z_+ × Z_+} such that Q is irreducible, and there exist a k_0 ∈ Z_+ and a bounded non-negative sequence ensuring the irreducibility of the switching process.

Remark 3.1. This assumption stems from a condition for strong ergodicity of a Markov chain having a countable state space. We will show that, under this assumption, the Markov chain associated with the generator Q is strongly ergodic.

Remark 3.2. Let us briefly comment on the conditions.
• Assumption 3.2 consists of two parts. The first part is a condition on ellipticity, whereas the second part ensures the irreducibility of the switching process. One can view this condition as the analogue of the condition, for a continuous-time Markov chain, ensuring certain positive probabilities. However, because of the past dependence, these quantities now depend on φ.
• Assumption 3.3 essentially indicates that the drifts and diffusion matrices can be linearized in a neighborhood of infinity. The Q(φ) can also be approximated in a certain sense by the generator of a Markov chain.
• Because the switching takes values in Z_+, its state may not be bounded. Assumption 3.4 requires that the switching not act too wildly; in fact, it is "pushed in" in the sense that Assumption 3.4 is satisfied.
• It can easily be seen that under Assumption 3.3, there exists an L̂ > 0 such that |b(x, i)| + |σ(x, i)| ≤ L̂(|x| + 1) for any (x, i) ∈ R^{n0} × Z_+.
Throughout this paper, we assume that Assumptions 3.1, 3.2, 3.3, and 3.4 are satisfied. We then have some auxiliary results, whose proofs are relegated to the appendix for continuity of the flow of presentation.

DANG H. NGUYEN AND GEORGE YIN
α(t) is a strongly ergodic process; that is,

lim_{t→∞} sup_{i∈Z_+} Σ_{j∈Z_+} |p_ij(t) − ν_j| = 0,

where α(t) is the Markov chain associated with the generator Q, p_ij(t), i, j ∈ Z_+, is the transition probability of α(t), and ν = (ν_1, ν_2, . . . ) is the invariant probability measure of α(t).
where P_i and E_i are the probability and expectation conditioned on α(0) = i, respectively.
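To illustrate the notion of strong ergodicity appearing in the lemma above, one can compute sup_i Σ_j |p_ij(t) − ν_j| numerically for a birth-death generator with hypothetical rates (constant birth, linearly growing death); the truncation of the countable state space to a finite one below is only a computational device, not part of the theory.

```python
# Numerical illustration of strong ergodicity (hypothetical rates): for the
# chain generated by Q, the quantity sup_i sum_j |p_ij(t) - nu_j| should tend
# to 0 uniformly in the initial state i. We truncate the countable state space
# to {1, ..., N} and approximate P(t) = exp(Qt) by Euler factors (I + Q dt)^n.

N = 15
Q = [[0.0] * N for _ in range(N)]
for i in range(N):
    if i + 1 < N:
        Q[i][i + 1] = 1.0              # constant birth rate
    if i > 0:
        Q[i][i - 1] = float(i + 1)     # death rate grows with the state, pushing
    Q[i][i] = -sum(Q[i])               # the chain back toward small states

def transition(t, steps=800):
    dt = t / steps
    P = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
    for _ in range(steps):
        P = [[sum(P[i][k] * ((1.0 if k == j else 0.0) + dt * Q[k][j])
                  for k in range(N)) for j in range(N)] for i in range(N)]
    return P

P = transition(8.0)
nu = P[0]                              # any row approximates the invariant measure
worst = max(sum(abs(P[i][j] - nu[j]) for j in range(N)) for i in range(N))
print(worst)
```

By t = 8 all rows of the transition matrix nearly coincide, so `worst` is small uniformly over the initial state, which is exactly the content of the displayed limit.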
where a(i) = Σ_{j=1}^{d} σ_j(i)σ′_j(i). Then the process (X_t, α(t)) is positive recurrent. Moreover, the process has a unique invariant probability measure µ* such that for any (φ, i) ∈ C × Z_+,

lim_{t→∞} ‖P(t, (φ, i), ·) − µ*‖_TV = 0,

where P(t, (φ, i), ·) is the transition probability of (X_t, α(t)) and ‖·‖_TV is the total variation norm.

Remark 3.3. This result is similar to [33, Theorem 5.1], which is based on the Fredholm alternative. However, in our setting, α(t) takes values in a countable state space, and the Fredholm alternative is not applicable. Thus, we need a different approach to obtain the desired result.
Proof of Theorem 3.1. Let ε_0 > 0 be sufficiently small. Denote by L_i the generator of the diffusion when the discrete component is in state i, where ∇f(x, i) and ∇²f(x, i) are the gradient and Hessian of f(x, i) with respect to x, respectively, and z′ denotes the transpose of z. By direct computation and (3.2), and in view of (3.2) and the boundedness of σ(i), we obtain the needed bounds, where c̄ = sup_{i∈Z_+} |c(i)| < ∞. In view of Gronwall's inequality, the resulting estimate holds for any t ≥ 0. On the other hand, in light of Lemma 3.2, there exists an H_2 > H_1 such that (3.12) holds. Using these and the Markov property of (X_t, α(t)), we obtain (3.13), where the last inequality follows from T > 10c̄rλ^{−1} and (3.9). Applying (3.13) and (3.12) to (3.8), we obtain (3.14). By Lemma 3.5, it follows from (3.7) and (3.14) that for θ ∈ [0, 0.5], 0 < |x| < h_2, and i ≤ k_0, the estimate holds for some K > 0 depending on T, c̄, and M_σ. Let θ ∈ (0, 0.5] be such that θK < λT/4. Exponentiating both sides of the resulting inequality and applying the Markov property of (X_t, α(t)) to (3.17) yields (3.18); using this recursively gives the desired bound. In view of Lemma 3.3, there exists a T_3 > r such that (3.19) holds. On the other hand, by Lemma 3.1, there exists an H_3 > 0 satisfying (3.20). Combining (3.19) and (3.20) yields the required estimate. Define the stopping time ϑ = inf{t ≥ 0 : ‖X_t‖ ≤ H_3 and α(t) ≤ k_0} and the events B_k = {‖X_t‖ ≤ H_3 and α(t) ≤ k_0 for some t ∈ [τ^{(k)}, τ^{(k)} + T_3]}, k ≥ 0.
then the controlled regime-switching system is weakly stabilizable (i.e., the controlled regime-switching diffusion is positive recurrent).

5. Examples. This section is devoted to several examples. They are intended for demonstration purposes.
Example 5.1. It is well known that an Ornstein-Uhlenbeck (OU) process is a nontrivial example of a process that is simultaneously stationary, Gaussian, and Markov. Such processes have been used in many applications, for example, in finance. One of their distinct properties is that they can be used to delineate mean reversion. In this example, we consider a switched Ornstein-Uhlenbeck process

dX(t) = θ(α(t))(µ(α(t)) − X(t))dt + σ(α(t))dW(t),

where θ(i), µ(i), and σ(i) are bounded, X(t) is real-valued, and σ(i*) = 0 for some i* ∈ Z_+. Suppose that Q(φ) is given in terms of positive, bounded constants c_i. Thus, the limit at infinity of Q(φ) exists. Applying [1, Proposition 3.3], we can show that the Markov chain α(t) with this limit generator Q is strongly ergodic. Solving the associated system, we obtain the invariant probability measure ν. If Σ_i θ(i)ν(i) < 0, then the system is positive recurrent.

Example 5.2. Suppose Q(φ) ≡ Q is the generator of a strongly ergodic Markov chain with invariant probability measure ν = (ν_1, ν_2, . . . ), where ν_i > 0 for every i ∈ Z_+. Let A(i) be a 2 × 1 vector and B(i) a 2 × 2 matrix, both of them bounded in i ∈ Z_+. Consider the diffusion X(t) = (X_1(t), X_2(t)) in R² driven by these coefficients. In particular, suppose that A(i) ≠ 0 and that c_{1,i}, c_{2,i} ∈ R \ {0} are bounded. It is interesting to note that the diffusion is degenerate at 0, but the drift at 0 is non-zero. Thus, the positive recurrence of the system can still be obtained under the negativity condition of Theorem 3.1.

Example 5.3. This example is motivated by a controlled stochastic dynamic system that is linear in the continuous state variable x. To be more specific, consider the scalar switching diffusion

dX(t) = [C(α(t)) + A(α(t))X(t) + B(α(t))u(t)]dt + σ(α(t))X(t)dW_1(t) + dW_2(t),

where A(i), B(i), C(i), σ(i), and c_i are bounded, and W_1 and W_2 are two independent Brownian motions. Let Q(φ) = Q(|φ(−r)|), where the c_i, i = 1, 2, . . . , are bounded positive constants. Thus, the limit at infinity of Q(φ) exists. By [1, Proposition 3.3], it is easy to verify that the Markov chain α(t) with this limit generator is strongly ergodic.
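A simulation sketch of a two-state instance of the switched OU process of Example 5.1 is given below; the coefficients are hypothetical and are chosen so that the ν-averaged linearized drift is stabilizing even though one mode is repelling.

```python
import math
import random

# Hypothetical two-state instance of the switched OU process of Example 5.1:
# dX = theta(alpha)(mu(alpha) - X) dt + sigma(alpha) dW. State 1 is mean
# reverting, state 2 is repelling, and the chain spends most of its time in
# state 1, so the nu-averaged linearized drift is stabilizing.

def switched_ou(T=50.0, dt=1e-3, seed=3):
    rng = random.Random(seed)
    x, alpha = 5.0, 1
    theta = {1: 2.0, 2: -0.5}          # reverting vs. repelling modes
    mu = {1: 0.0, 2: 1.0}
    sig = {1: 1.0, 2: 0.0}             # sigma vanishes in one mode, as in the example
    q = {1: 0.5, 2: 2.0}               # leave-rates: nu = (0.8, 0.2) favors mode 1
    running_max = 0.0
    for _ in range(int(T / dt)):
        if rng.random() < q[alpha] * dt:
            alpha = 3 - alpha
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += theta[alpha] * (mu[alpha] - x) * dt + sig[alpha] * dw
        running_max = max(running_max, abs(x))
    return x, running_max

x_T, m = switched_ou()
print(x_T, m)
```

With the rates above the invariant measure of the two-state chain is ν = (0.8, 0.2), so the averaged reversion coefficient 0.8·2.0 + 0.2·(−0.5) = 1.5 is positive and the sample path keeps returning to a bounded region.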
Solving the associated system, we obtain the invariant measure ν of α(t). Letting u(t) = −L(α(t))X(t), suppose that we can control the system only in mode i = 1; thus, L(i) = 0 for i ≥ 2. If B(1) > 0, then choosing L(1) sufficiently large, the controlled system is weakly stabilized.
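The stabilization mechanism of Example 5.3 can be sketched as follows, again with hypothetical numbers: both modes are unstable without control, the feedback acts only in mode 1, and a large gain L(1) makes the ν-averaged linearized drift negative.

```python
import math
import random

# Sketch of the weak stabilization in Example 5.3 with hypothetical numbers:
# dX = (C + A X + B u) dt + sigma X dW1 + dW2 with feedback u = -L(alpha) X.
# Both modes are unstable without control; the control acts only in mode 1,
# and a large enough gain L(1) makes the nu-averaged linearized drift negative.

def controlled(T=20.0, dt=1e-3, L1=10.0, seed=4):
    rng = random.Random(seed)
    x, alpha = 3.0, 1
    A = {1: 1.0, 2: 0.5}               # uncontrolled linear drifts (both unstable)
    B = {1: 1.0, 2: 0.0}               # the control enters only through mode 1
    C = {1: 0.1, 2: 0.1}
    sig = {1: 0.2, 2: 0.2}
    q = {1: 1.0, 2: 3.0}               # nu = (0.75, 0.25) favors the controllable mode
    L = {1: L1, 2: 0.0}
    for _ in range(int(T / dt)):
        if rng.random() < q[alpha] * dt:
            alpha = 3 - alpha
        u = -L[alpha] * x              # linear feedback, active only in mode 1
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        x += (C[alpha] + A[alpha] * x + B[alpha] * u) * dt \
             + sig[alpha] * x * dw1 + dw2
    return x

print(controlled())
```

Here the averaged closed-loop coefficient is 0.75·(1 − 10) + 0.25·0.5 = −6.625 < 0, so the trajectory stays in a bounded region despite each mode being individually unstable or uncontrolled.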
6. Final remarks. This work developed more verifiable conditions for recurrence, positive recurrence, and related issues for switching diffusions with the switching taking values in a countably infinite set and depending on the history of the state process. It also developed feedback strategies for weak stabilization. For further reading on feedback controls of diffusions, we refer to the excellent work of Yong and Zhou [31]. For the systems under consideration, there are numerous applications and potential applications. As one particular example, we mention the recent work [12]. Extending the effort of treating stochastic population growth in spatially heterogeneous environments to include random environments modeled by stochastic switching processes is both theoretically interesting and practically useful. Moreover, effort may also be directed to treating regime-switching models in related work on other ecological and biological applications; see, for example, [7,8,9,10,19,21], among others.

Since L_i ln |x| and σ(x, i) are bounded, it is easy to prove that, for any T > 0, there exists a K_4 > 0 such that

P{ sup_{t≤T} | ∫_0^t L_{α(s)} ln |X(s)| ds + ∫_0^t σ(X(s), α(s)) dW(s) | > K_4 ε^{−1} } < ε.
Let ϑ = inf{t The lemma is therefore proved.
Proof of Lemma 3.4. This lemma concerns basic properties of the Laplace transform. It can be proved by straightforward arguments in calculus. The proof is therefore omitted.