Switching controls for linear stochastic differential systems

We analyze the exact controllability problem with switching controls for stochastic control systems endowed with different actuators. The goal is to control the dynamics of the system by switching from one actuator to another so that, at each instant of time, as few actuators as possible are active. We prove that, under suitable rank conditions, switching control strategies exist and can be built in a systematic way. The proof is based on constructing, via the adjoint system, a new functional whose minimizers yield the switching controls.

Remark 1. The motivation for introducing the above notion is as follows: generally speaking, one needs very restrictive conditions on the controls to get exact controllability for the following control system: For example, in [17], Peng proved that a necessary condition for the exact controllability of (2) is that rank(D) = n and that (A, B) fulfills a Kalman-type rank condition. These conditions on the two actuators are very restrictive, but they are necessary.
To relax these conditions, we use four actuators. At every time t ∈ [0, T], only two of them act on the system. In this way, neither rank(D_1) = rank(D_2) = n nor the requirement that (A, B_1) or (A, B_2) fulfill a Kalman-type rank condition is needed. Instead, we assume that rank(D_1, D_2) = n and that (A, (B_1, B_2)) satisfies the Kalman rank condition. This relaxes the restrictions on the actuators.
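As a toy numerical illustration of these assumptions (the matrices below are hypothetical, chosen only to show that each actuator can be individually deficient while the pair satisfies both rank conditions), one may check the Kalman condition directly:

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

n = 2
A = np.zeros((n, n))                 # toy drift matrix
B1 = np.array([[1.0], [0.0]])        # first drift actuator
B2 = np.array([[0.0], [1.0]])        # second drift actuator
D1 = np.diag([1.0, 0.0])             # first diffusion actuator
D2 = np.diag([0.0, 1.0])             # second diffusion actuator

# Each actuator alone is too weak ...
assert kalman_rank(A, B1) < n and kalman_rank(A, B2) < n
assert np.linalg.matrix_rank(D1) < n and np.linalg.matrix_rank(D2) < n
# ... but the pair satisfies the assumptions of this paper.
assert kalman_rank(A, np.hstack([B1, B2])) == n
assert np.linalg.matrix_rank(np.hstack([D1, D2])) == n
```

Here rank(D_1) = rank(D_2) = 1 < n and neither (A, B_1) nor (A, B_2) is controllable, yet rank(D_1, D_2) = n and (A, (B_1, B_2)) satisfies the Kalman rank condition.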
Remark 2. In system (1), γ and γ̄ determine the switching mode of the controls. One needs to choose different γ and γ̄ for different x_0 and x_1. The way of choosing γ and γ̄ is described in Section 2.
Remark 3. In this paper, the controls in the drift term and the diffusion term are different. It is very interesting to consider the control problem in which the controls are the same. However, this problem is much more difficult and beyond the scope of this paper.
With the development of control theory, controllability problems for stochastic systems have drawn more and more attention in recent years (see [2,5,7,8,9,10,11,12,13,14,17,20,21,22,23] and the rich references therein). In particular, the controllability problem for linear stochastic differential equations is studied in [2,4,8,14,17,21]. In [17], it is proved that (2) is exactly controllable only if the rank of D is n. Moreover, a sufficient condition (in the form of a rank condition) for the exact controllability of (2) is given in [17]. Some generalizations of the results in [17] are obtained in [10]. In [14], it is proved that one can use controls in L^1_F(0, T; L^2(Ω; R^m)) to get the exact controllability of (2) when D = 0. Moreover, it is shown in [14] that if D = 0, then (2) is not exactly controllable when the control space is L^r_F(0, T; L^2(Ω; R^m)) for any r > 1. Some generalizations of the results in [14] are given in [21]. In [2,8], the approximate controllability problem for (2) is studied. In particular, a rank condition for the approximate controllability of (2) is given in [8]. In [4], a negative result for approximate controllability is given when D = 0. Furthermore, a characterization of the reachable set of (2) when D = 0 is given in [4].
Control systems in real applications are often endowed with several actuators. Switching controllers arise in many fields of application (see [1,3,6,15,16,18,19,24] for example). In this paper, we consider the problem of designing switching control strategies that guarantee that, at each instant of time, as few controls as possible are active while the state of the system is still driven to a given destination. Compared with the existing works on the exact controllability of linear stochastic differential equations, the main difference of our results is that we do not need the control matrix in the diffusion term to be of full rank. To achieve this, we construct a way to design the switching mode; more details can be found in Section 2. We borrow some ideas from [24] to design our control. However, since the solution of the adjoint system is not analytic with respect to the time variable, we cannot simply use the method in [24].

2. Main results and their proofs. In this section, we present the main results of this paper and their proofs.
Consider the following backward stochastic differential equation, where z_T ∈ L^2_{F_T}(Ω; R^n). It is well known that (3) has a unique solution and that there is a constant L > 0 such that
Theorem 2.1. Assume that rank(D_1, D_2) = n and that (A, (B_1, B_2)) satisfies the Kalman rank condition. Then, for all T > 0, the functional J_{x_0,x_1}(·) achieves at least one minimum at some minimizer z̄_T.
Proof. Denote by φ(·) the fundamental solution of (2) with u = 0 and v = 0. Without loss of generality, we may assume that x_1 ≠ φ(T)x_0. Indeed, when x_1 = φ(T)x_0, the null controls u_1 = u_2 = 0 and v_1 = v_2 = 0 suffice to drive the initial datum x_0 to the final one x_1, and they satisfy the switching condition. Thus, in the sequel, we assume that x_0 and x_1 are given such that x_1 ≠ φ(T)x_0. First, we prove that the functional J_{x_0,x_1} is continuous. Denote by (z, Z) and (ẑ, Ẑ) the solutions of (3) with the final data z_T and ẑ_T, respectively. Then
Hence, for given ε > 0, by taking
Thus, J_{x_0,x_1} is continuous. Next, we prove that J_{x_0,x_1} is convex. Let λ ∈ (0, 1) and assume that z_T ≠ ẑ_T.
Next, we prove that J_{x_0,x_1} is coercive. It suffices to show that there exists a constant L > 0 such that
Noting that
we only need to prove that there is a constant L > 0 such that
Consider the following stochastic differential equation:
In (7), y is the solution and g is the nonhomogeneous term. For a given z_T ∈ L^2_{F_T}(Ω; R^n) and the corresponding solution (z, Z), clearly z is a solution of (7) with the initial datum y(0) = z(0) and nonhomogeneous term g = Y. Hence, we only need to prove that there is a constant L > 0 such that, for any y_0 ∈ R^n and g ∈ L^2_F(0, T; R^n),
From the well-posedness result for stochastic differential equations, we know that there is an L > 0 such that, for any y_0 ∈ R^n and g ∈ L^2_F(0, T; R^n),
Since the rank of (D_1, D_2) is n, there is a constant L > 0 such that
Let ȳ = Ey. Then ȳ is the solution of
Equation (11) is an ordinary differential equation. Since (A, (B_1, B_2)) satisfies the Kalman rank condition, there is a constant L > 0 such that, for all y_0 ∈ R^n and g ∈ L^2_F(0, T; R^n),
From (9), (10) and (12), we get (8). Therefore, J_{x_0,x_1} is coercive. By the above argument, J_{x_0,x_1} is continuous, convex and coercive. Hence, it has at least one minimizer.
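The last step of the proof rests on the classical finite-dimensional fact that the Kalman rank condition yields an observability estimate for the ordinary differential equation (11), equivalently, positive definiteness of the controllability Gramian. The following sketch (with toy matrices and a simple quadrature approximation of the Gramian; all names here are illustrative, not from the paper) checks this equivalence numerically:

```python
import numpy as np
from scipy.linalg import expm

def controllability_gramian(A, B, T=1.0, steps=2000):
    """Approximate W_T = ∫_0^T e^{At} B B^T e^{A^T t} dt by a midpoint Riemann sum."""
    n = A.shape[0]
    dt = T / steps
    W = np.zeros((n, n))
    for k in range(steps):
        E = expm(A * (k + 0.5) * dt)
        W += E @ B @ B.T @ E.T * dt
    return W

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[1.0], [0.0]])                    # (A, B1) fails the Kalman condition
B12 = np.hstack([B1, np.array([[0.0], [1.0]])])  # (A, (B1, B2)) satisfies it

# The Gramian is singular without the Kalman condition,
# and positive definite (uniform observability estimate) with it.
assert min(np.linalg.eigvalsh(controllability_gramian(A, B1))) < 1e-8
assert min(np.linalg.eigvalsh(controllability_gramian(A, B12))) > 1e-8
```

Positive definiteness of W_T is exactly the quantitative estimate |y_0|^2 ≤ L ∫_0^T |(B_1, B_2)^T ȳ(t)|^2 dt used (in its inhomogeneous form) to obtain (12).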

YONG HE
Theorem 2.2. Let z̄_T be a minimizer of J_{x_0,x_1}(·) and (z̄, Z̄) the corresponding solution of (3). Put
Then, by choosing the switching controllers
we obtain a solution of (1) with
Proof. Let z̄_T be a minimizer of J_{x_0,x_1}(·) and (z̄, Z̄) the corresponding solution of (3). For any
Denote by (ψ, Ψ) the solution of (3) with the final datum ψ_T. Then
We claim that the limit on the right-hand side of (15) is
Indeed, it is easy to see that, pointwise a.e. in (0, T), the quotients with factor 1/(2h) converge as h → 0. To obtain the limit (16), it then suffices to apply the dominated convergence theorem. To do so, it suffices to show that the two quotients are dominated, as in (17) and (18), by functions f_1(·), f_2(·) ∈ L^1(0, T). We only prove (17) here; the proof of (18) is very similar.
The only difficulty in proving the uniform bound arises on the set where the two maxima are not attained at the same component. Indeed, when both maxima are attained at the same component, for instance, if
then the quotient in (17) can be bounded above by 2E(|B_1z̄||B_1ψ|), which is in L^1(0, T) since both z̄ and ψ belong to L^2(0, T). Let us then consider the remaining case where, for instance,
and
In that case, the quotient in (17) coincides with
It then suffices to obtain an upper bound on
To do this, the only difficulty is to bound
But, obviously, for (19) and (20) to hold we need
which guarantees the uniform boundedness of (22). As a consequence of this analysis, the Euler-Lagrange equation associated with the minimization of J takes the form

for all ψ_T ∈ L^2_{F_T}(Ω; R^n). In view of (23), we conclude that the following switching controllers
where 1_{I_j} (j = 1, 2, 3, 4) stands for the characteristic function of the set I_j, are such that the switching condition holds and the solution of (1) satisfies the final requirement x(T) = x_1.
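The switching sets I_1 and I_2 are obtained by comparing the sizes of B_1^*z̄ and B_2^*z̄ pointwise in time, so that at each instant only the dominant drift actuator is active. The following time-discretized sketch illustrates this selection rule; the sampled adjoint trajectory `zbar` is placeholder random data (not a solution of (3)), and the formula ũ_i = B_i^*z̄ · 1_{I_i} is stated up to sign and normalization conventions not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sampled adjoint trajectory z̄(t_k) on a time grid.
n, steps = 2, 100
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])
zbar = rng.standard_normal((steps, n))

# I1 = {t : |B1^T z̄(t)| > |B2^T z̄(t)|}, I2 its complement.
s1 = np.abs(zbar @ B1).ravel()
s2 = np.abs(zbar @ B2).ravel()
on1 = s1 > s2                                  # indicator of I1

u1 = np.where(on1[:, None], zbar @ B1, 0.0)    # ũ1 = B1^T z̄ · 1_{I1}
u2 = np.where(~on1[:, None], zbar @ B2, 0.0)   # ũ2 = B2^T z̄ · 1_{I2}

# Switching condition: at every grid point at most one drift control is active.
assert np.all((np.abs(u1) * np.abs(u2)).sum(axis=1) == 0.0)
```

This is the discrete analogue of the condition E|ũ_1(t)|^2 E|ũ_2(t)|^2 = 0 used in the proof of Theorem 2.4 below.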
Now we discuss the construction of the control when m(I_1 ∪ I_2) < T. We make the following assumption:
First, we define a functional J_{x_0,x_1}(·) : L^2_{F_T}(Ω; R^n) → R as follows: (26)
Let z̄_T be a minimizer of (26) and (z̄, Z̄) the corresponding solution of (3). Set
The following holds: we obtain a solution of (1) with
Proof. Similarly to the arguments developed in the proof of Theorem 2.2, we can find the controls on the set Q_1 ∪ Q_2 in the form (28). Thus, we only need to analyze the behavior of the functional on the set Q_3 ∪ Q_4. It is easy to see that if z̄_T is a minimizer of J_{x_0,x_1}, then it is also a minimizer of the modified functional over the set of solutions (z(·), Z(·)) such that
Thus, there exist Lagrange multipliers λ_3 and λ_4 such that the Euler-Lagrange equation reads: (30)
This implies that, with the control defined as
we get x(T) = x_1 in L^2_{F_T}(Ω; R^n). Since B_1z̄(t, ω) = B_2z̄(t, ω) in Q_3 and B_1z̄(t, ω) = −B_2z̄(t, ω) in Q_4, the controls take, on the set Q_3 ∪ Q_4, the form
This completes the proof of Theorem 2.3.
We have built switching controllers by minimizing functionals J_{x_0,x_1}(·) of the form (5) or its variants. We now show that the controls obtained in this way are also optimal in a certain sense. More precisely, we have the following result:
Theorem 2.4. Let the assumptions of Theorem 2.2 be satisfied. Then the controls (ũ_1, ũ_2, ṽ_1, ṽ_2) obtained by minimizing the functional J_{x_0,x_1} are of minimal norm. More precisely,
for every other control quadruple (u_1, u_2, v_1, v_2) satisfying the final requirement x(T) = x_1.
Proof. Let (u_1, u_2, v_1, v_2) be any other quadruple of switching controls. Denote by (z̄, Z̄) the solution of the adjoint system associated with the minimizer of J_{x_0,x_1}. By Itô's formula, and noting that x(T) = x_1 both for the controls (u_1, u_2, v_1, v_2) and for (ũ_1, ũ_2, ṽ_1, ṽ_2), we deduce that
and
For the switching controls (ũ_1, ũ_2, ṽ_1, ṽ_2), we have
and
Since E|u_1(t)|^2 E|u_2(t)|^2 = 0 and E|ũ_1(t)|^2 E|ũ_2(t)|^2 = 0 for every t ∈ [0, T], we have
Remark 5. Theorems 2.1-2.4 can be generalized to the infinite-dimensional setting. For example, the two key points in the proof of Theorem 2.1 are as follows: 1. the well-posedness of (3) and the inequality (4), which guarantee the continuity of the functional J_{x_0,x_1}; 2. the introduction of the stochastic differential equation (7), which reduces the observability estimate for the backward stochastic differential equation (3) to the well-posedness of (7) and an observability estimate for an ordinary differential equation. Both 1 and 2 can be generalized to the infinite-dimensional setting. The details are beyond the scope of this paper.
Remark 6. In this paper, we consider the exact controllability problem. It is also very interesting to study the null/approximate controllability problem with switching controls. Of course, under the conditions of Theorems 2.2 and 2.3, we know that system (1) is null/approximately controllable. However, whether one can find some weaker conditions to get the null/approximate controllability of (1) is unknown. In fact, according to some recent results in [4], the null/approximate controllability problem for stochastic differential equations is much more complex than the exact controllability problem.