Decay rates for stabilization of linear continuous-time systems with random switching


(Communicated by Vivek S. Borkar)
Abstract. For a class of linear switched systems in continuous time a controllability condition implies that state feedbacks allow to achieve almost sure stabilization with arbitrary exponential decay rates. This is based on the Multiplicative Ergodic Theorem applied to an associated system in discrete time. This result is related to the stabilizability problem for linear persistently excited systems.
1. Introduction. Let N be a positive integer and consider the family of N control systems

ẋ_i(t) = A_i x_i(t) + α_i(t) B_i u_i(t),  i ∈ {1, . . . , N},  (1)

where, for i ∈ {1, . . . , N}, x_i(t) ∈ R^{d_i} is the state of the subsystem i, u_i(t) ∈ R^{m_i} is the control input of the subsystem i, d_i and m_i are non-negative integers, A_i and B_i are matrices with real entries and appropriate dimensions, and α_i : R_+ → {0, 1} is a switching signal determining the activity of the control input on the i-th subsystem. We assume that at each time the control input is active in exactly one subsystem, i.e.,

Σ_{i=1}^N α_i(t) = 1 for every t ∈ R_+.  (2)

This paper analyzes the stabilizability of all subsystems in (1) by linear feedback laws u_i(t) = K_i x_i(t) under randomly generated switching signals α_1, . . . , α_N satisfying (2), and the maximal almost sure exponential decay rates that can be achieved with such feedbacks.
A closely related problem is the stabilizability of the persistently excited linear control system

ẋ(t) = A x(t) + α(t) B u(t),  (3)

with x(t) ∈ R^d, u(t) ∈ R^m, A and B matrices with real entries and appropriate dimensions, and α a (T, µ)-persistently exciting (PE) signal for some positive constants T ≥ µ, i.e., a signal α ∈ L^∞(R_+, [0, 1]) satisfying, for every t ≥ 0,

∫_t^{t+T} α(s) ds ≥ µ  (4)

(cf. Chaillet, Chitour, Loría, and Sigalotti [5], Chitour, Colonius, and Sigalotti [9], Chitour, Mazanti, and Sigalotti [10], Chitour and Sigalotti [11], Srikant and Akella [33]). Notice that, when α takes its values in {0, 1}, (3) can be seen as a particular case of (1) by adding a trivial subsystem (cf. Corollary 2). The stabilizability problem for (3) consists in investigating if, given A, B, T, and µ, one can find a linear feedback u(t) = K x(t) which stabilizes (3) exponentially for every (T, µ)-persistently exciting signal α. This problem has been considered in [11], where the authors provide sufficient conditions for stabilizability and prove that, in contrast to the situation for autonomous linear control systems, controllability does not imply stabilizability with arbitrary exponential decay rates, even if one considers only persistently exciting signals taking values in {0, 1}. The main result of our paper, Theorem 5.1, implies that, if one requires the feedback to stabilize (3) for almost every randomly generated signal α (with respect to the random model described in Section 2), then one retrieves stabilizability with arbitrary decay rates, giving thus a positive answer to an open problem stated by Chitour and Sigalotti (personal communication). Some works in the literature have addressed the stabilization of systems similar to (3) with randomly generated signals α, such as Diwadkar, Dasgupta, and Vaidya [16] and Diwadkar and Vaidya [17].
Both references provide criteria for the exponential mean square stabilization of an analogue of (3) in discrete time, with an additional nonlinear term in [16]; the signal α is a sequence of independent identically distributed Bernoulli random variables in {0, 1} in [16] and a sequence of real-valued square-integrable random variables with the same expected value and variance in [17]. With respect to that setting, apart from the fact that we consider more general systems of the form (1) and in continuous time, a major difference is that we are interested here not only in stabilizability, but also in obtaining arbitrarily large almost sure exponential decay rates.
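Before turning to the random model, the persistent excitation condition (4) is easy to check numerically on a discretized signal. The sketch below is purely illustrative and not part of the paper: the grid step, the numerical tolerance, and the sample periodic signal are arbitrary choices.

```python
import numpy as np

# Numerical check of the (T, mu)-persistent excitation condition (4) on a
# time grid: every sliding integral over a window of length T must be >= mu.
def is_persistently_exciting(alpha_vals, T, mu, dt):
    """Return True if all Riemann sums of alpha over windows [t, t+T] are >= mu."""
    window = int(round(T / dt))
    integrals = np.convolve(alpha_vals, np.full(window, dt), mode='valid')
    return bool(np.all(integrals >= mu - 1e-9))

# Example: a 1-periodic signal equal to 1 on [0, 0.5) and 0 on [0.5, 1)
dt = 0.001
t_grid = np.arange(0.0, 20.0, dt)
alpha_vals = (t_grid % 1.0 < 0.5).astype(float)
```

With these illustrative values, every window of length T = 1 carries an integral close to 0.5, so the signal is (1, µ)-PE for µ somewhat below 0.5 but not for µ above it.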
In this paper, in order to study the stabilizability by linear feedback laws of (1), we rewrite it as

ẋ(t) = A x(t) + B_{α(t)} u_{α(t)}(t),  (5)

where x(t) = (x_1(t), . . . , x_N(t)) ∈ R^d with d = d_1 + · · · + d_N, A = diag(A_1, . . . , A_N), B_i ∈ M_{d,m_i}(R) denotes, with a slight abuse of notation, the matrix whose i-th block row is B_i and whose other entries are zero, and α(t) is the unique index i ∈ {1, . . . , N} such that α_i(t) = 1. We then look for linear feedback laws of the form u_i(t) = K_i P_i x, where P_i ∈ M_{d_i,d}(R) is the matrix associated with the canonical projection onto the i-th factor of the product R^d = R^{d_1} × · · · × R^{d_N}. With such feedback laws, (5) reads

ẋ(t) = (A + B_{α(t)} K_{α(t)} P_{α(t)}) x(t).  (6)

Before considering the stabilizability of (5), we begin with the stability analysis of the linear switched system with random switching

ẋ(t) = L_{α(t)} x(t),  (7)

where L_1, . . . , L_N ∈ M_d(R) and α : R_+ → {1, . . . , N} is as before. We characterize its exponential behavior through its Lyapunov exponents, using the classical Multiplicative Ergodic Theorem due to Oseledets (cf. Arnold [1]). It turns out that a direct application of this theorem to systems in continuous time with random switching is not feasible, since in general they do not define random dynamical systems in the sense of [1] (cf. Example 1). Instead, we apply the Multiplicative Ergodic Theorem to an associated system in discrete time and then deduce results for the Lyapunov exponents of the continuous-time system (7). We remark that Lyapunov exponents for continuous-time systems with random switching are also considered by Li, Chen, Lam, and Mao in [25], but under assumptions on the random switching signal α guaranteeing that the corresponding switched system is a random dynamical system, which allows the direct use of the Multiplicative Ergodic Theorem in continuous time.
The linear equations with random switching (7) considered here form piecewise deterministic Markov processes (PDMPs). These processes were introduced in Davis [14] and have since been extensively studied in the literature. For an analysis of their invariant measures, in particular of their supports, cf. Bakhtin and Hurth [2] and Benaïm, Le Borgne, Malrieu, and Zitt [3], also for further references. An important particular case which also attracts much research interest is that of Markovian jump linear systems (MJLS), in which one assumes that the random switching signal is generated by a continuous-time Markov chain. For more details, we refer to Bolzern, Colaneri, and De Nicolao [4], Fang and Loparo [18], and to the monograph by Costa, Fragoso, and Todorov [13]. The case of nonlinear switched systems with random switching signals has also been considered in the literature, cf. e.g. Chatterjee and Liberzon [6], where multiple Lyapunov functions are used to derive a stability criterion under a slow switching condition that contains as a particular case switching signals coming from continuous-time Markov chains. We also remark that several different notions of stability for systems with random switching have been used in the literature; see, e.g., Feng, Loparo, Ji, and Chizeck [19] for a comparison between the usual notions in the context of MJLS. The one considered in this paper is almost sure stability.
The contents of this paper are as follows. Section 2 constructs the random signals α in (5) and (7). Example 1 shows that, in general, (7) endowed with such random switching signals does not define a random dynamical system, and Remark 2 discusses the relation to previous works in the literature. Section 3 introduces an associated system in discrete time, which defines a random dynamical system in discrete time. We discuss relations between the Lyapunov exponents for continuous- and discrete-time systems and state the conclusions we obtain from the Multiplicative Ergodic Theorem. Section 4 derives a formula for the maximal Lyapunov exponent, which is the main ingredient in the stability analysis of (7). Finally, Section 5 presents the main result of this paper, namely that almost sure stabilization can be achieved for (1) with arbitrary decay rate under a controllability hypothesis.

2. The random model for the switching signal. In what follows, we write N = {1, . . . , N} and denote by P the set of piecewise constant functions from R_+ to N. Recall that a piecewise constant function has only finitely many discontinuity points on any bounded interval. Given an initial condition x_0 ∈ R^d and α ∈ P, system (7) admits a unique solution defined on R_+, which we denote by ϕ_c(·; x_0, α). Furthermore, for i ∈ N, we denote by Φ^i the linear flow defined by the matrix L_i, i.e., Φ^i_t = e^{L_i t} for every t ∈ R.
In order to describe the random model for the switching signal α, let us first introduce some notation. Given a measurable space X, we denote by Pr(X) the set of all probability measures on X. The set N is assumed to be endowed with the σ-algebra P(N) containing all subsets of N, and R_+ and R_+^* are assumed to be endowed with their respective Borel σ-algebras, denoted for simplicity by B in both cases. Let Ω = (N × R_+)^{N*} and endow Ω with the standard product σ-algebra F = (P(N) ⊗ B)^{⊗N*} (see, e.g., Halmos [23, §38, §49]).
Let M ∈ M_N(R) be an irreducible right-stochastic matrix and p ∈ R^N be its unique invariant probability vector (regarded here as a row vector). For i ∈ N, let µ_i ∈ Pr(R_+^*) and assume that µ_i has finite expectation (we also regard µ_i as a Borel probability measure on R_+ whenever necessary). Consider the time-homogeneous discrete-time Markov process in N × R_+ whose transition probabilities P : N × R_+ → Pr(N × R_+) and initial law ν_1 ∈ Pr(N × R_+) are given by

P(i, t) = Σ_{j=1}^N M_{ij} δ_j ⊗ µ_j,  (8)

ν_1 = Σ_{i=1}^N p_i δ_i ⊗ µ_i.  (9)

Notice that the associated transition operator T : Pr(N × R_+) → Pr(N × R_+) of this process is given by

(T ν)(E) = ∫_{N × R_+} P(i, t)(E) dν(i, t),  (10)

and it induces a measure P ∈ Pr(Ω) defined, for n ∈ N*, i_1, . . . , i_n ∈ N, and U_1, . . . , U_n ∈ B, by

P({ω = (j_k, t_k)_{k=1}^∞ ∈ Ω : j_k = i_k and t_k ∈ U_k for k ∈ {1, . . . , n}}) = p_{i_1} µ_{i_1}(U_1) Π_{k=2}^n M_{i_{k−1} i_k} µ_{i_k}(U_k).  (11)

(For the definition of a discrete-time Markov process in an uncountable set and its transition probability, initial law, and transition operator, we refer to Hairer [22] and Meyn and Tweedie [29, Chapter 3].) To construct a random switching signal α from a certain ω = (i_n, t_n)_{n=1}^∞ ∈ Ω, we regard (i_n)_{n=1}^∞ as the sequence of states taken by α and t_n as the time spent in the state i_n, according to the following definition.
Definition 2.1. We define the map 𝛂 : Ω → P as follows: for ω = (i_n, t_n)_{n=1}^∞ ∈ Ω, we set s_0 = 0, s_n = Σ_{k=1}^n t_k for n ∈ N*, and 𝛂(ω)(t) = i_n for t ∈ [s_{n−1}, s_n), n ∈ N*.

This construction of 𝛂 amounts to choosing a random initial state according to the probability law defined by p and, at every switching event to a state i, choosing a random time to stay in this state according to the law µ_i and a random next state according to the probability law corresponding to the i-th row (M_{ij})_{j=1}^N of the matrix M. Notice that 𝛂(ω) is well-defined only when ω belongs to the subset Ω_0 ⊂ Ω defined by

Ω_0 = {ω = (i_n, t_n)_{n=1}^∞ ∈ Ω : t_n > 0 for every n ∈ N* and Σ_{n=1}^∞ t_n = +∞}.  (12)

One can easily prove by standard techniques that P(Ω_0) = 1, yielding that 𝛂(ω) is well-defined for almost every ω ∈ Ω.
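The construction of Definition 2.1 is straightforward to simulate. The following sketch is illustrative only: the matrix M, the exponential dwell-time laws µ_i, and all numerical values are hypothetical choices, and the invariant vector p is computed numerically from p M = p.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data (not from the paper): N = 2 states, a right-stochastic
# irreducible matrix M, and exponential dwell-time laws mu_i.
M = np.array([[0.3, 0.7],
              [0.6, 0.4]])

def invariant_vector(M):
    """Invariant probability (row) vector p with p M = p."""
    w, v = np.linalg.eig(M.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return p / p.sum()

def sample_omega(M, p, sample_mu, n, rng):
    """Draw omega = (i_1, t_1), ..., (i_n, t_n): i_1 ~ p, t_k ~ mu_{i_k},
    and i_{k+1} distributed as row i_k of M."""
    states, times = [], []
    i = rng.choice(M.shape[0], p=p)
    for _ in range(n):
        states.append(i)
        times.append(sample_mu(i, rng))
        i = rng.choice(M.shape[0], p=M[i])
    return np.array(states), np.array(times)

def switching_signal(states, times, t):
    """alpha(omega)(t) = i_n for t in [s_{n-1}, s_n), as in Definition 2.1."""
    s = np.concatenate(([0.0], np.cumsum(times)))
    return int(states[np.searchsorted(s, t, side='right') - 1])

p = invariant_vector(M)
sample_mu = lambda i, rng: rng.exponential(1.0 + i)   # arbitrary choice of mu_i
states, times = sample_omega(M, p, sample_mu, 1000, rng)
```

For the matrix M above the invariant vector is p = (6/13, 7/13), which the numerical computation reproduces.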
Remark 1. In general, since µ_1, . . . , µ_N are arbitrary Borel probability measures on R_+^*, 𝛂(ω) is not a continuous-time Markov process on N. On the other hand, every time-homogeneous continuous-time Markov process on N can be written using the previous definitions by a suitable choice of M, p, and µ_1, . . . , µ_N. Our more general framework covers some important practical cases that cannot be modeled by continuous-time Markov processes. For instance, one can model a deterministic switching signal which switches periodically between N subsystems with prescribed times T_1, . . . , T_N spent in each subsystem by choosing M as an appropriate irreducible permutation matrix encoding the switching sequence and µ_i = δ_{T_i} for i ∈ N, where δ_T denotes the Dirac measure at T. In practical implementations, the time spent at a state i may not be exactly equal to T_i and some random switches may occur, which can be modeled in our framework by perturbing the matrix M and choosing µ_i, e.g., as an absolutely continuous measure concentrated around T_i.
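The periodic special case of Remark 1 can be sketched as follows; the number of states, the dwell times T_i, and the cyclic permutation matrix are illustrative choices.

```python
import numpy as np

# Remark 1's periodic case with illustrative values: three states visited
# cyclically with deterministic dwell times T_i (mu_i = Dirac measure at T_i).
T_dwell = [1.0, 0.5, 2.0]
M = np.roll(np.eye(3), 1, axis=1)   # permutation matrix: state i -> i + 1 mod 3

def periodic_alpha(t, T_dwell):
    """Deterministic switching signal: state i on its dwell window, repeated
    with period T_1 + ... + T_N."""
    s = np.concatenate(([0.0], np.cumsum(T_dwell)))
    return int(np.searchsorted(s, t % s[-1], side='right') - 1)
```

Since M is a cyclic permutation, every sampled ω visits the states in the fixed order 0, 1, 2, 0, . . . , and the Dirac dwell laws remove all randomness in the times.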
In order to consider solutions of (7) for signals α chosen randomly according to the previous construction, we use the solution map ϕ_c of (7) to provide the following definition.
Definition 2.2. We define the continuous-time map

ϕ_rc : R_+ × R^d × Ω_0 → R^d,  ϕ_rc(t; x_0, ω) = ϕ_c(t; x_0, 𝛂(ω)).  (13)

For x_0 ∈ R^d \ {0} and almost every ω ∈ Ω, we define the Lyapunov exponent of the continuous-time system (13) by

λ_rc(x_0, ω) = lim sup_{t→∞} (1/t) log |ϕ_rc(t; x_0, ω)|.  (14)

The Lyapunov exponent λ_rc is used to characterize the asymptotic behavior of (13). A natural idea to obtain information on such Lyapunov exponents would be to apply the continuous-time Multiplicative Ergodic Theorem (see, e.g., Arnold [1, Theorem 3.4.1]). To do so, ϕ_rc should define a random dynamical system on R^d × Ω, i.e., one would have to provide a metric dynamical system θ on Ω (a measurable dynamical system θ : R_+ × Ω → Ω on (Ω, F, P) such that θ_t preserves P for every t ≥ 0) in such a way that ϕ_rc becomes a cocycle over θ. However, in general the natural choice for θ to obtain the cocycle property for ϕ_rc, namely the time shift, does not define such a measure-preserving map, as shown in the following example.
Example 1. For t ≥ 0, define θ_t : Ω_0 → Ω_0 as follows: for ω = (i_n, t_n)_{n=1}^∞ ∈ Ω_0 and n ∈ N* the unique integer such that t ∈ [s_{n−1}, s_n), set θ_t ω = ((i_n, s_n − t), (i_{n+1}, t_{n+1}), (i_{n+2}, t_{n+2}), . . .). One immediately verifies that θ_t corresponds to the time shift in P, i.e., for every t, s ≥ 0 and ω ∈ Ω_0, one has 𝛂(θ_t ω)(s) = 𝛂(ω)(t + s). However, the map θ_t in (Ω, F) does not preserve the measure P in general. Indeed, suppose that µ_i = δ_1 for every i ∈ N, where δ_1 denotes the Dirac measure at 1. In particular, a set E ∈ F has nonzero measure only if E contains a point (i_j, t_j)_{j=1}^∞ with t_j = 1 for every j ∈ N*. For r ∈ N* and i_1, . . . , i_r ∈ N, let

E = {ω = (i_j^*, t_j^*)_{j=1}^∞ ∈ Ω : i_k^* = i_k and t_k^* = 1 for k ∈ {1, . . . , r}},

which satisfies P(E) > 0 for a suitable choice of i_1, . . . , i_r. Let t ≥ 0, ω = (i_j^*, t_j^*)_{j=1}^∞ ∈ θ_t^{−1}(E), and n ∈ N* be the unique integer such that t ∈ [s_{n−1}^*, s_n^*). Then one has s_n^* − t = 1, t_{n+j−1}^* = 1 for j = 2, . . . , r, and i_{n+j−1}^* = i_j for j ∈ {1, . . . , r}. If t ∉ N, then s_n^* = t + 1 ∉ N, and thus there exists j ∈ {1, . . . , n} such that t_j^* ≠ 1. We have shown that, if t ∉ N, then, for every ω = (i_j^*, t_j^*)_{j=1}^∞ ∈ θ_t^{−1}(E), there exists j ∈ N* such that t_j^* ≠ 1, and thus P(θ_t^{−1}(E)) = 0, hence θ_t does not preserve the measure P.
Remark 2. For some particular choices of µ_1, . . . , µ_N, the time shift θ_t may preserve P, in which case the continuous-time Multiplicative Ergodic Theorem can be applied directly to (13). This special case falls in the framework of Li, Chen, Lam, and Mao [25]. An important particular case where θ_t preserves P is when µ_1, . . . , µ_N are chosen in such a way that 𝛂 becomes a homogeneous continuous-time Markov chain, which is the case treated, e.g., in Bolzern, Colaneri, and De Nicolao [4], and in Fang and Loparo [18]. The results we provide in Section 4 generalize the corresponding almost sure stability criteria from [4, 18, 25] to randomly switching signals constructed according to Definition 2.1.
3. Associated discrete-time system and Lyapunov exponents. Example 1 shows that in general one cannot expect to obtain a random dynamical system from ϕ_rc in order to apply the continuous-time Multiplicative Ergodic Theorem. Our strategy to study the exponential behavior of ϕ_rc relies instead on defining a suitable discrete-time map ϕ_rd associated with ϕ_rc, in such a way that ϕ_rd does define a discrete-time random dynamical system (to which the discrete-time Multiplicative Ergodic Theorem can be applied) and that the exponential behavior of ϕ_rc and ϕ_rd can be compared.
Definition 3.1. For ω = (i_n, t_n)_{n=1}^∞ ∈ Ω, we set s_n(ω) = Σ_{k=1}^n t_k for n ∈ N* and s_0(ω) = 0. We define the discrete-time map

ϕ_rd : N × R^d × Ω_0 → R^d,  ϕ_rd(n; x_0, ω) = ϕ_rc(s_n(ω); x_0, ω).  (15)

For x_0 ∈ R^d \ {0} and almost every ω ∈ Ω, we define the Lyapunov exponent of the discrete-time system (15) by

λ_rd(x_0, ω) = lim sup_{n→∞} (1/n) log |ϕ_rd(n; x_0, ω)|.  (16)

The map ϕ_rd corresponds to regarding the continuous-time map ϕ_rc only at the switching times s_n(ω). It is the solution map of the random discrete-time equation

x_n = e^{L_{i_n} t_n} x_{n−1},  n ∈ N*.  (17)

System (17) is obtained from (7) by taking the values of a continuous-time solution at the discrete times s_n(ω). The sequence (s_n(ω))_{n=0}^∞ contains all the discontinuities of 𝛂(ω) and may also contain times with trivial jumps. The Lyapunov exponent λ_rd characterizes the asymptotic behavior of ϕ_rd.
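Concretely, the recursion (17) multiplies matrix exponentials along the sampled sequence ω, so λ_rd can be estimated from a long product with renormalization. The sketch below is illustrative only: the `expm` helper is a rough scaling-and-squaring stand-in for a library routine, and the test matrices are arbitrary.

```python
import numpy as np

def expm(A, order=16):
    """Matrix exponential by scaling-and-squaring with a truncated Taylor
    series (a rough stand-in for a library routine, adequate here)."""
    nrm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0
    X = A / 2.0**s
    E, T = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, order + 1):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def lyap_discrete(Ls, states, times, x0):
    """Estimate the Lyapunov exponent (1/n) log |phi_rd(n; x0, omega)| of the
    recursion x_n = exp(L_{i_n} t_n) x_{n-1}, renormalizing to avoid overflow."""
    x = np.asarray(x0, dtype=float)
    log_growth = 0.0
    for i, t in zip(states, times):
        x = expm(Ls[i] * t) @ x
        r = np.linalg.norm(x)
        log_growth += np.log(r)
        x = x / r
    return log_growth / len(states)
```

On a commuting test case (L_1 = −I, L_2 = −2I with unit dwell times) the estimate reproduces the explicit average of the decay rates.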
Notice that the solution maps ϕ_rc and ϕ_rd satisfy, for every x_0 ∈ R^d and almost every ω ∈ Ω,

ϕ_rd(n; x_0, ω) = ϕ_rc(s_n(ω); x_0, ω) for every n ∈ N,  (18)

and

ϕ_rc(t; x_0, ω) = Φ^{i_n}_{t − s_{n−1}(ω)} ϕ_rd(n − 1; x_0, ω) for every n ∈ N* and t ∈ [s_{n−1}(ω), s_n(ω)).  (19)

We now prove that ϕ_rd defines a discrete-time random dynamical system on R^d × Ω. To do so, we must first provide a discrete-time metric dynamical system θ on (Ω, F, P), which can be chosen simply as the usual shift operator. Let θ : Ω → Ω be defined by

θ((i_n, t_n)_{n=1}^∞) = (i_{n+1}, t_{n+1})_{n=1}^∞.  (20)

One can easily verify, using (11) and the fact that pM = p, that the measure P is invariant under θ, and thus θ is a discrete-time metric dynamical system on (Ω, F, P). Moreover, since θ(Ω_0) = Ω_0, θ also defines a metric dynamical system on (Ω_0, F, P) (where F and P are understood to be restricted to Ω_0). Notice that θ is ergodic in (Ω, F, P). Indeed, given ν ∈ Pr(N × R_+), let P_ν be the probability measure on Ω associated with the discrete-time Markov process in N × R_+ with transition probabilities P given by (8) and initial law ν. One can easily check that P_ν is invariant under θ if and only if ν coincides with the initial law ν_1 defined in (9), and thus it follows from classical ergodicity results for Markov chains (see, e.g., Hairer [22, Theorem 5.7]) that θ is ergodic in (Ω, F, P).

FRITZ COLONIUS AND GUILHERME MAZANTI
Now that we have defined the random discrete-time system (15) and provided the metric dynamical system θ, we can show that the pair (θ, ϕ_rd) defines a random dynamical system.

Proposition 1. (θ, ϕ_rd) is a discrete-time random dynamical system over (Ω, F, P).
We now compare the asymptotic behavior of (13) and (15) by considering the relation between the Lyapunov exponents λ_rc(x_0, ω) and λ_rd(x_0, ω) of the continuous- and discrete-time systems. The following result can be easily obtained from the ergodicity of θ and Birkhoff's Ergodic Theorem.
Proposition 2. For almost every ω ∈ Ω, one has

lim_{n→∞} s_n(ω)/n = m := Σ_{i=1}^N p_i ∫_{R_+} t dµ_i(t) ∈ (0, +∞).  (22)

The next result provides the relation between λ_rc and λ_rd in terms of the quantity m defined in (22).
Proposition 3. For every x_0 ∈ R^d \ {0} and almost every ω ∈ Ω, one has λ_rd(x_0, ω) = m λ_rc(x_0, ω).

Proof. Notice first that λ_rd(x_0, ω) = lim sup_{n→∞} (s_n(ω)/n) (1/s_n(ω)) log |ϕ_rc(s_n(ω); x_0, ω)|. Moreover,

lim sup_{n→∞} (1/s_n(ω)) log |ϕ_rc(s_n(ω); x_0, ω)| ≤ lim sup_{t→∞} (1/t) log |ϕ_rc(t; x_0, ω)| = λ_rc(x_0, ω),

and then the conclusion λ_rd(x_0, ω) ≤ m λ_rc(x_0, ω) follows since s_n(ω)/n → m as n → ∞ for almost every ω ∈ Ω. We now turn to the proof of the inequality λ_rd(x_0, ω) ≥ m λ_rc(x_0, ω). Let C, γ > 0 be such that |Φ^i_t x| ≤ C e^{γt} |x| for every i ∈ N, x ∈ R^d, and t ≥ 0. For x_0 ∈ R^d \ {0} and t > 0, let n_t ∈ N be the unique integer such that t ∈ (s_{n_t}(ω), s_{n_t+1}(ω)], which is well-defined for almost every ω ∈ Ω. Then

(1/t) log |ϕ_rc(t; x_0, ω)| ≤ (1/t) [log C + γ (t − s_{n_t}(ω)) + log |ϕ_rd(n_t; x_0, ω)|].  (23)

Since t ∈ (s_{n_t}(ω), s_{n_t+1}(ω)], one has, for almost every ω ∈ Ω,

n_t / s_{n_t+1}(ω) ≤ n_t / t ≤ n_t / s_{n_t}(ω),  (24)

where we use (22) to obtain that s_{n_t}(ω)/n_t → m and s_{n_t+1}(ω)/n_t → m as t → ∞, and thus n_t/t → 1/m as t → ∞. Using this fact and inserting (24) into (23), one obtains the conclusion of the proposition by letting t → ∞.
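In the scalar case (d = 1) both exponents are explicit time averages, so the factor m relating λ_rc and λ_rd can be observed numerically. All data below are hypothetical; taking all rows of M equal makes the states i.i.d. with law p, which keeps the sketch short.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar sketch (d = 1) with hypothetical data: L_1 = a_1, L_2 = a_2, so
# log phi_rc(s_n) = sum_k a_{i_k} t_k and both exponents are explicit.
a = np.array([-1.0, 0.5])
means = np.array([2.0, 1.0])            # expectations of mu_1, mu_2
p = np.array([0.5, 0.5])                # invariant vector of M = [[.5,.5],[.5,.5]]

n = 50_000
states = rng.choice(2, size=n, p=p)     # i.i.d. states: all rows of M equal p
times = rng.exponential(means[states])  # t_k ~ mu_{i_k}

log_phi = float(a[states] @ times)      # log phi_rc(s_n(omega)) for x_0 = 1
s_n = float(times.sum())

lam_rc = log_phi / s_n                  # estimates lambda_rc
lam_rd = log_phi / n                    # estimates lambda_rd
m = float(p @ means)                    # = sum_i p_i E(mu_i) = 1.5
```

With these values the theoretical continuous-time exponent is (Σ_i p_i a_i E(µ_i))/m = −0.5, and the two estimates differ by the factor s_n/n, which converges to m.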
Proposition 4. For every i ∈ N and almost every ω ∈ Ω, one has

lim_{T→∞} (1/T) L({t ∈ [0, T] : 𝛂(ω)(t) = i}) = (p_i ∫_{R_+} t dµ_i(t)) / m,

where L denotes the Lebesgue measure in R.
Proof. Fix i ∈ N. Let ϕ_i : Ω → R_+ be given by ϕ_i(ω) = t_1 1_{i_1 = i} for ω = (i_n, t_n)_{n=1}^∞ ∈ Ω. Then, by Birkhoff's Ergodic Theorem, one has, for almost every ω ∈ Ω,

lim_{n→∞} (1/n) Σ_{k=0}^{n−1} ϕ_i(θ^k ω) = ∫_Ω ϕ_i dP = p_i ∫_{R_+} t dµ_i(t).  (25)

On the other hand, by definition of 𝛂, for almost every ω = (i_n, t_n)_{n=1}^∞ ∈ Ω,

L({t ∈ [0, s_n(ω)] : 𝛂(ω)(t) = i}) = Σ_{k=1}^n t_k 1_{i_k = i} = Σ_{k=0}^{n−1} ϕ_i(θ^k ω).

Hence it follows from Proposition 2 and (25) that, for almost every ω ∈ Ω,

lim_{n→∞} (1/s_n(ω)) L({t ∈ [0, s_n(ω)] : 𝛂(ω)(t) = i}) = (p_i ∫_{R_+} t dµ_i(t)) / m.  (26)

Let ω ∈ Ω be such that (26) holds and take T ∈ R_+. Choose n_T ∈ N such that s_{n_T}(ω) ≤ T < s_{n_T+1}(ω). Then

(s_{n_T}(ω)/s_{n_T+1}(ω)) (1/s_{n_T}(ω)) L({t ∈ [0, s_{n_T}(ω)] : 𝛂(ω)(t) = i}) ≤ (1/T) L({t ∈ [0, T] : 𝛂(ω)(t) = i}) ≤ (s_{n_T+1}(ω)/s_{n_T}(ω)) (1/s_{n_T+1}(ω)) L({t ∈ [0, s_{n_T+1}(ω)] : 𝛂(ω)(t) = i}).

The conclusion of the proposition then follows since, by Proposition 2, s_{n+1}(ω)/s_n(ω) → 1 as n → ∞ for almost every ω ∈ Ω.

Remark 3. Notice that the fact that (7) and (17) are linear has been used only in the proof of Proposition 3, where one uses an exponential bound on the growth of the flows Φ^i_t = e^{L_i t}, namely that there exist constants C, γ > 0 such that ‖e^{L_i t}‖ ≤ C e^{γt} for every t ≥ 0 and i ∈ N. If we consider, instead of system (7), the nonlinear switched system

ẋ(t) = f_{α(t)}(x(t)),

where f_1, . . . , f_N are complete vector fields generating flows Φ^1, . . . , Φ^N, and modify the discrete-time system (17) accordingly, all the previous results remain true, with the same proofs, under the additional assumption that there exist constants C, γ > 0 such that |Φ^i_t x| ≤ C e^{γt} |x| for every t ≥ 0, i ∈ N, and x ∈ R^d. However, the next results do not generalize to the nonlinear framework.
In order to conclude this section, we apply the discrete-time Multiplicative Ergodic Theorem (see, e.g., Arnold [1, Theorem 3.4.1]) in the one-sided invertible case to system (15) and we use Proposition 3 to obtain that several of its conclusions also hold for the continuous-time system (13).
Notice that ‖e^{−L_i t} x‖ ≤ C e^{γt} |x| for every i ∈ N, x ∈ R^d, and t ≥ 0, and hence |e^{L_i t} x| ≥ C^{−1} e^{−γt} |x|. Let t > 0 and choose n_t ∈ N such that t ∈ (s_{n_t}(ω), s_{n_t+1}(ω)]. Then, proceeding as in (23), one gets

(1/t) log |ϕ_rc(t; x_0, ω)| ≥ (1/t) [−log C − γ (t − s_{n_t}(ω)) + log |ϕ_rd(n_t; x_0, ω)|].

Using (24), we thus obtain that

lim inf_{t→∞} (1/t) log |ϕ_rc(t; x_0, ω)| ≥ λ_rd(x_0, ω)/m,

which yields the existence of the limit.

4. The maximal Lyapunov exponent. We are interested in this section in the maximal Lyapunov exponents for systems (13) and (15), i.e., the real numbers λ^c_1 and λ^d_1 from Proposition 5(iv). We denote these numbers by λ^c_max and λ^d_max, respectively. Before proving the main results of this section, we state the following lemma, which shows that the Gelfand formula for the spectral radius ρ holds uniformly over compact sets of matrices. This follows from the estimates derived in Green [20, Section 3.3]. For the reader's convenience, we provide a proof.
Proof. Let 𝒜 ⊂ M_d(R) be the given compact set, and fix A ∈ 𝒜 and ε > 0. For B ∈ M_d(R), set F(B) = (ρ(B) + ε)^{−1} B, so that ρ(F(B)) < 1 and hence ‖F(B)^n‖ → 0 as n → ∞. Since all norms on M_d(R) are equivalent, there is β_A > 0 and a neighborhood U of A such that ‖B‖ ≤ β_A for all B ∈ U. Then there is N ∈ N*, depending only on A and ε, such that for all n ≥ N and all B ∈ U,

(ρ(B) + ε)^{−1} ‖B^n‖^{1/n} = ‖F(B)^n‖^{1/n} < 1,

implying ‖B^n‖^{1/n} < ρ(B) + ε. Since this holds for every B in a neighborhood U of A and ‖B^n‖^{1/n} ≥ ρ(B) for every n ∈ N*, one obtains that the convergence in U is uniform, and the assertion follows by compactness of 𝒜.
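The Gelfand formula ρ(B) = lim_n ‖B^n‖^{1/n} behind the lemma is easy to observe numerically, including the slow convergence typical of non-normal matrices; the matrix below is an arbitrary example.

```python
import numpy as np

# Gelfand's formula rho(B) = lim ||B^n||^{1/n}, checked on an arbitrary
# non-normal matrix where ||B|| greatly overestimates rho(B).
B = np.array([[0.5, 10.0],
              [0.0, 0.4]])
rho = max(abs(np.linalg.eigvals(B)))    # spectral radius = 0.5

vals = []
P = np.eye(2)
for n in range(1, 201):
    P = P @ B
    vals.append(np.linalg.norm(P, 2) ** (1.0 / n))
```

Here ‖B‖ ≈ 10 while ρ(B) = 0.5, yet ‖B^n‖^{1/n} approaches ρ(B) from above as n grows.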
We can now prove our first result regarding the characterization of λ^c_max and λ^d_max. For ω = (i_n, t_n)_{n=1}^∞ ∈ Ω and n ∈ N*, we denote by Φ(n, ω) = e^{L_{i_n} t_n} · · · e^{L_{i_1} t_1} the matrix of the linear cocycle generated by (17).

Proposition 6. For almost every ω ∈ Ω, we have

λ^d_max = lim_{n→∞} (1/n) log ‖Φ(n, ω)‖.  (27)

Moreover,

λ^d_max ≤ inf_{n∈N*} (1/n) ∫_Ω log ‖Φ(n, ω)‖ dP(ω).  (28)
Under some extra assumptions on the probability measures µ_i, i ∈ N, one obtains that the inequality in (28) is actually an equality.

Proposition 7. Suppose there exists r > 1 such that, for every i ∈ N, one has ∫_{(0,∞)} t^r dµ_i(t) < ∞. Then λ^d_max is given by

λ^d_max = lim_{n→∞} (1/n) ∫_Ω log ‖Φ(n, ω)‖ dP(ω).  (29)

Proof. One clearly has, using (27), that

λ^d_max = ∫_Ω lim_{n→∞} (1/n) log ‖Φ(n, ω)‖ dP(ω).

The proposition is proved if we show one can exchange the limit and the integral in the above expression, which we do by applying Vitali's convergence theorem (see, e.g., Rudin [31, Chapter 6]). We are thus left to show that the sequence of functions ((1/n) log ‖Φ(n, ·)‖)_{n=1}^∞ is uniformly integrable, i.e., for every ε > 0, there exists δ > 0 such that, for every E ∈ F with P(E) < δ and every n ∈ N*, one has

(1/n) ∫_E |log ‖Φ(n, ω)‖| dP(ω) < ε.
Notice that the exponential bound on the flows yields constants c_1, c_2 > 0 such that |log ‖Φ(n, ω)‖| ≤ c_1 n + c_2 s_n(ω) for every n ∈ N* and almost every ω ∈ Ω. Hence, it suffices to show that the sequence (s_n/n)_{n=1}^∞ is uniformly integrable. For n ∈ N* and E ∈ F, we have, by Hölder's inequality,

(1/n) ∫_E s_n(ω) dP(ω) ≤ (1/n) Σ_{k=1}^n (∫_Ω t_k^r dP(ω))^{1/r} P(E)^{1/r′} ≤ K^{1/r} P(E)^{1/r′},  (33)

where r′ ∈ (1, ∞) is such that 1/r + 1/r′ = 1 and K = max_{i∈N} ∫_{(0,∞)} t^r dµ_i(t) < ∞. Equation (33) establishes the uniform integrability of (s_n/n)_{n=1}^∞, which yields the result.
As an immediate consequence of Proposition 2, Proposition 3, Proposition 6, and Proposition 7, we obtain the following result.
Corollary 1. One has

λ^c_max = λ^d_max / m ≤ (1/m) inf_{n∈N*} (1/n) ∫_Ω log ‖Φ(n, ω)‖ dP(ω).  (34)

If we have further that there exists r > 1 such that ∫_{R_+} t^r dµ_i(t) < ∞ for every i ∈ N, then the inequality in (34) is an equality and the condition

λ^c_max < 0  (35)

is equivalent to the almost sure exponential stability of (13) and to the almost sure exponential stability of (15).
We conclude this section with the following characterization of a weighted sum of the Lyapunov exponents λ^d_i.

Proposition 8. Suppose there exists r > 1 such that, for every i ∈ N, one has ∫_{(0,∞)} t^r dµ_i(t) < ∞. Then

Σ_i m_i λ^d_i = Σ_{j=1}^N p_j tr(L_j) ∫_{R_+} t dµ_j(t),  (36)

where m_i is as in Proposition 5(v).
Proof. Thanks to Proposition 5(v), one obtains that, for almost every ω = (i_n, t_n)_{n=1}^∞ ∈ Ω,

Σ_i m_i λ^d_i = lim_{n→∞} (1/n) log |det Φ(n, ω)| = lim_{n→∞} (1/n) Σ_{k=1}^n t_k tr(L_{i_k}) = Σ_{j=1}^N p_j tr(L_j) ∫_{R_+} t dµ_j(t),

where we exchange limit and integral thanks to Vitali's convergence theorem and to the fact that (s_n(ω)/n)_{n=1}^∞ is uniformly integrable, as shown in the proof of Proposition 7.
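Since log |det e^{L_i t}| = t tr(L_i), the determinant of the cocycle turns the weighted sum of Lyapunov exponents into a Birkhoff average of t_n tr(L_{i_n}). That average can be checked against its expected limit Σ_j p_j tr(L_j) E(µ_j) with a quick simulation; all data below are hypothetical, with i.i.d. states for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Birkhoff average of t_k * tr(L_{i_k}) versus its limit
# sum_j p_j tr(L_j) E(mu_j); hypothetical data, states drawn i.i.d. from p.
tr_L = np.array([-3.0, 1.0])           # traces of two hypothetical matrices L_i
means = np.array([1.0, 0.5])           # expectations of mu_1, mu_2
p = np.array([0.25, 0.75])             # invariant probability vector

n = 50_000
states = rng.choice(2, size=n, p=p)
times = rng.exponential(means[states])

birkhoff = float(tr_L[states] @ times) / n
limit = float(p @ (tr_L * means))      # 0.25*(-3)*1.0 + 0.75*1.0*0.5 = -0.375
```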
5. Main result. In this section, we use the stability criterion from Corollary 1 to study the stabilization by linear feedback laws of (1). As stated in the Introduction, we write (1) in the form (5), which is a switched control system with dynamics given by the N equations ẋ = A x + B_i u_i, i ∈ N.
We consider system (5) in a probabilistic setting by taking random signals 𝛂(ω) as in Definition 2.1, i.e., the random control system

ẋ(t) = A x(t) + B_{𝛂(ω)(t)} u_{𝛂(ω)(t)}(t).

The problem treated in this section is the arbitrary rate stabilizability of this system by linear feedback laws u_i = K_i P_i x, i ∈ N, where we recall that P_i ∈ M_{d_i,d}(R) is the projection onto the i-th factor of R^d = R^{d_1} × · · · × R^{d_N}. More precisely, we consider the closed-loop random switched system

ẋ(t) = (A + B_{𝛂(ω)(t)} K_{𝛂(ω)(t)} P_{𝛂(ω)(t)}) x(t).  (37)

We wish to know if, given λ ∈ R, there exist matrices K_i ∈ M_{m_i,d_i}(R), i ∈ N, such that the maximal Lyapunov exponent λ^c_max of the continuous-time system (37), defined as in Section 4, satisfies λ^c_max ≤ λ. Our main result is the following, which states that this is true under the controllability of (A_i, B_i) for every i ∈ N, thus implying that arbitrary decay rates are achievable.

Theorem 5.1. Suppose that (A_i, B_i) is controllable for every i ∈ N. Then, for every λ ∈ R, there exist matrices K_i ∈ M_{m_i,d_i}(R), i ∈ N, such that the maximal Lyapunov exponent of (37) satisfies λ^c_max ≤ λ.
When (i_1, . . . , i_r) = (i_1^*, . . . , i_r^*), we can obtain a sharper bound than (41). For i ∈ N, denote by N(i) the nonempty set of all indices k ∈ {1, . . . , r} such that i_k^* = i, and denote by n(i) ∈ N* the number of elements of N(i). Using (40), this yields the sharper estimate (42). Then, combining (41) and (42), we obtain from (39) the bound (43). The right-hand side of (43) tends to −∞ as γ → ∞, which can be achieved by (38). Hence it follows from Corollary 1 that the maximal Lyapunov exponent of (37) can be made arbitrarily small.
Recall that the main motivation for Theorem 5.1 comes from the stabilizability of persistently excited systems (3) under linear feedback laws. Let us now provide an application of Theorem 5.1 to (3). To do so, let µ_1, µ_2 ∈ Pr(R_+^*) have finite expectation and M ∈ M_2(R) be right-stochastic and irreducible with unique invariant probability vector p ∈ R^2. We also slightly modify Definition 2.1 for the remainder of this section by saying that, for ω = (i_n, t_n)_{n=1}^∞, one has 𝛂(ω)(t) = 2 − i_n for t ∈ [s_{n−1}, s_n) and n ∈ N*, which amounts to saying that 𝛂(ω) takes the value 0 in the state i = 2 and the value 1 in the state i = 1. As a consequence of Theorem 5.1, we obtain the following result for (3).

Corollary 2. If (A, B) is controllable, then, for every λ ∈ R, there exists K ∈ M_{m,d}(R) such that the maximal Lyapunov exponent λ^c_max of the closed-loop random switched system

ẋ(t) = (A + 𝛂(ω)(t) B K) x(t)

satisfies λ^c_max ≤ λ.

Proof. The corollary follows immediately from Theorem 5.1 by letting N = 2, A_1 = A, B_1 = B, and adding a trivial second subsystem with d_2 = m_2 = 0.
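The phenomenon behind Corollary 2 can be observed in simulation. The sketch below is a hypothetical setup, not the paper's construction: a double integrator with a feedback gain K placing both closed-loop poles at −k, a switching signal alternating between α = 0 and α = 1 with unit-mean exponential dwell times, and the decay rate estimated from a long trajectory. Larger k should produce a more negative estimated almost sure decay rate.

```python
import numpy as np

def expm(A, order=16):
    """Matrix exponential by scaling-and-squaring (rough library stand-in)."""
    nrm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0
    X = A / 2.0**s
    E, T = np.eye(A.shape[0]), np.eye(A.shape[0])
    for j in range(1, order + 1):
        T = T @ X / j
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def decay_rate(k, n=2000, seed=4):
    """Estimated decay rate of dx/dt = (A + alpha(t) B K) x for the double
    integrator, with K placing both closed-loop poles at -k (hypothetical)."""
    rng = np.random.default_rng(seed)
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[-k**2, -2.0 * k]])     # A + BK has the double eigenvalue -k
    F = [A, A + B @ K]                    # dynamics for alpha = 0 and alpha = 1
    x = np.array([1.0, 0.0])
    log_norm, total_time = 0.0, 0.0
    alpha = int(rng.integers(0, 2))
    for _ in range(n):
        t = rng.exponential(1.0)          # unit-mean exponential dwell time
        x = expm(F[alpha] * t) @ x
        r = np.linalg.norm(x)
        log_norm += np.log(r)
        x = x / r
        total_time += t
        alpha = 1 - alpha                 # alternate on/off (a simple choice of M)
    return log_norm / total_time
```

Since the feedback is active roughly half of the time, the estimated rate behaves like −k/2 up to the polynomial growth of the uncontrolled phases, so it decreases without bound as k grows, consistent with arbitrary decay rates.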
It was proved in [11, Proposition 4.5] that there are (two-dimensional) controllable systems for which the achievable decay rates under persistently exciting signals through linear feedback laws are bounded below, even when we consider only persistently exciting signals α taking values in {0, 1} instead of [0, 1]. Corollary 2 shows that, in the probabilistic setting defined above, one can get arbitrarily large (almost sure) decay rates for (3), which is in contrast to the situation for persistently excited systems. An explanation for this fact is that the probability of having a signal α with very fast switching for an infinitely long time, such as the signals used in the proof of [11, Proposition 4.5], is zero, and hence such signals do not interfere with the behavior of the (random) maximal Lyapunov exponent.