Forward-Backward Evolution Equations and Applications

Well-posedness is studied for a special class of two-point boundary value problems for evolution equations, called forward-backward evolution equations (FBEEs, for short). Two approaches are introduced: a decoupling method, with some brief discussions, and a method of continuation, with some substantial discussions. For the latter, we introduce Lyapunov operators for FBEEs, whose existence leads to uniform a priori estimates for the mild solutions of FBEEs, which are sufficient for well-posedness. For some special cases, Lyapunov operators are constructed. Also, from some given Lyapunov operators, the corresponding solvable FBEEs are identified.


Introduction
In this paper, we consider the following system of evolution equations:

(1.1)  ẏ(t) = Ay(t) + b(t, y(t), ψ(t)),  ψ̇(t) = −A*ψ(t) − g(t, y(t), ψ(t)),  t ∈ [0, T],  y(0) = x,  ψ(T) = h(y(T)),

where A : D(A) ⊆ X → X generates a C₀-semigroup e^{At} on a real Hilbert space X (identified with its dual X*), with (e^{At})* = e^{A*t}, t ≥ 0, being the adjoint semigroup generated by A* (the adjoint operator of A), and b, g, and h being suitable maps. The above could be called a two-point boundary value problem, mimicking a similar notion for ordinary differential equations. We see that the equation for y(·) is an initial value problem which should be solved forward, and the equation for ψ(·) is a terminal value problem which should be solved backward. Therefore, inspired by the so-called forward-backward stochastic differential equations (FBSDEs, for short; see [17,25,26] for details), we prefer to call (1.1) a forward-backward evolution equation (FBEE, for short).
On many occasions, two-point boundary value problems are related to certain eigenvalue problems, for which the well-posedness of the problem might not be the goal; instead, one might be more interested in the existence of solutions, not necessarily the uniqueness. See, for example, [3,32,6], and see also [7] for some other relevant works. A pair (y(·), ψ(·)) is called a strong solution to FBEE (1.1) if y(t) ∈ D(A) and ψ(t) ∈ D(A*) for almost every t ∈ [0, T], and the equations are satisfied almost everywhere. A pair (y(·), ψ(·)) is called a mild solution (or a weak solution) to FBEE (1.1) if the following system of integral equations is satisfied:

(1.2)  y(t) = e^{At}x + ∫_0^t e^{A(t−s)} b(s, y(s), ψ(s)) ds,  ψ(t) = e^{A*(T−t)} h(y(T)) + ∫_t^T e^{A*(s−t)} g(s, y(s), ψ(s)) ds,  t ∈ [0, T].

Note that in the case A is bounded, (1.1) and (1.2) are actually equivalent, and thus a mild solution (y(·), ψ(·)) is actually a strong solution.
Our study of the above system is mainly motivated by optimal control theory. It is known that for a standard optimal control problem of an evolution equation with, say, a Bolza type cost functional, applying the Pontryagin maximum/minimum principle yields an optimality system of the above form, whose solution gives a candidate for the optimal trajectory and its adjoint ([14]). Therefore, solvability of the above type of system is important, at least for the optimal control theory of evolution equations.
Roughly speaking, when T is small enough, or the Lipschitz constants of the involved functions are small enough, one can show that FBEE (1.1) has a unique mild solution, by means of the contraction mapping theorem. On the other hand, if (1.1) is the optimality system (obtained via the Pontryagin maximum/minimum principle) of a corresponding optimal control problem which admits an optimal control, then this FBEE admits a mild solution, which might not be unique. Further, if the corresponding optimal control problem admits an optimal control and the optimality system admits a unique mild solution, then this solution can be used to construct the optimal control(s). Hence, under proper conditions, FBEE (1.1) could admit a (unique) mild solution, without restriction on the length of the time horizon T and/or the size of the Lipschitz constants of the involved functions. This is actually the case if the FBEE is the optimality system of a linear-quadratic (LQ, for short) optimal control problem satisfying proper conditions ([14]).
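To illustrate the contraction argument in the small-Lipschitz regime, here is a minimal numerical sketch (not from the paper; all coefficients are hypothetical) applying Picard iteration to the mild form of a scalar FBEE with A = a ∈ ℝ:

```python
import numpy as np

# Toy scalar FBEE on [0, T] (hypothetical example data):
#   y'(t)   =  a*y + b(t, y, psi),    y(0)   = x,
#   psi'(t) = -a*psi - g(t, y, psi),  psi(T) = h(y(T)).
# We iterate the mild (integral) form; the small Lipschitz constants of
# b, g, h make the iteration a contraction.

a, x, T, N = -0.5, 1.0, 1.0, 400
t = np.linspace(0.0, T, N + 1)
b = lambda tt, y, p: 0.1 * np.sin(p)   # small Lipschitz constant in psi
g = lambda tt, y, p: 0.1 * np.cos(y)   # small Lipschitz constant in y
h = lambda yT: 0.5 * yT

def trap(f, s):
    # composite trapezoid rule for the integrals in the mild form
    return float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)

def picard_step(y, p):
    fb, fg = b(t, y, p), g(t, y, p)
    yn, pn = np.empty_like(y), np.empty_like(p)
    for i in range(N + 1):
        # y(t) = e^{at} x + int_0^t e^{a(t-s)} b(s, y, psi) ds
        yn[i] = np.exp(a * t[i]) * x + trap(np.exp(a * (t[i] - t[:i+1])) * fb[:i+1], t[:i+1])
        # psi(t) = e^{a(T-t)} h(y(T)) + int_t^T e^{a(s-t)} g(s, y, psi) ds
        pn[i] = np.exp(a * (T - t[i])) * h(y[-1]) + trap(np.exp(a * (t[i:] - t[i])) * fg[i:], t[i:])
    return yn, pn

y, p = np.full(N + 1, x), np.zeros(N + 1)
for k in range(200):
    yn, pn = picard_step(y, p)
    err = max(np.abs(yn - y).max(), np.abs(pn - p).max())
    y, p = yn, pn
    if err < 1e-12:
        break
print("converged after", k + 1, "iterations")
```

At the fixed point, the boundary conditions y(0) = x and ψ(T) = h(y(T)) hold by construction of the integral equations.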
In this paper, we will study the (unique) solvability of FBEE (1.1) under some general conditions. Two approaches will be introduced: a decoupling method and a method of continuation. The former is inspired by the so-called invariant embedding, which can be traced back to [1,5,4]. Such a method was used in the study of FBSDEs (see [15,17] for details). The latter is inspired by the method of continuity for elliptic partial differential equations (see, e.g., [10]) and by its use for FBSDEs ([11,25,19,26]). Due to the nature of FBEE (1.1), some technical difficulties exist in applying either of these methods. We will briefly present the main ideas of the decoupling method and present the method of continuation in relatively more detail.
The rest of this paper is organized as follows. In Section 2, we present some preliminary results, including a main motivation from optimal control theory. Linear FBEEs are carefully discussed in Section 3. In Section 4, a brief description of the decoupling method is given. In Section 5, we introduce the so-called Lyapunov operators, adapted from [26] (for FBSDEs). The existence of Lyapunov operators leads to some uniform a priori estimates for the mild solutions of our FBEE. Well-posedness of FBEEs is established in Section 6. In Section 7, we construct some Lyapunov operators through which some well-posed FBEEs are identified. In Section 8, we briefly discuss some extensions of our main results. In Section 9, several illustrative examples are presented. Finally, some concluding remarks are made in Section 10.

Preliminaries
Throughout this paper, we let X be a separable real Hilbert space, with the norm ∥·∥ and the inner product ⟨·, ·⟩. We identify the dual X* with X. The set of all bounded linear operators from X to itself is denoted by L(X). The set of all self-adjoint operators on X is denoted by S(X), and the set of all positive semi-definite operators on X is denoted by S⁺(X). For notational simplicity, when there is no confusion, we will not distinguish between λ and λI (for any λ ∈ ℝ). For example, we use λ − A to denote λI − A. Also, if F is in S⁺(X), we denote it by F ≥ 0; if F − cI ≥ 0, we simply denote it by F ≥ c, and F ≤ c means −F ≥ −c. Next, we denote For convenience and definiteness of our presentation, we introduce the following standing assumption: (H0)′ A : D(A) ⊆ X → X generates a C₀-semigroup e^{At} on X.
In the above, f : [0, T] × X × U → X, f₀ : [0, T] × X × U → ℝ, f₁ : X → ℝ are suitable maps, with U being a separable metric space. We call x ∈ X an initial state, u(·) a control, and y(·) a state trajectory, respectively. Denote This is the set of all admissible controls. Under some mild conditions, for any x ∈ X and u(·) ∈ U, the state equation (2.5) admits a unique mild solution y(·) ≡ y(· ; x, u(·)), i.e., the solution to the following integral equation:

(2.7)  y(t) = e^{At}x + ∫_0^t e^{A(t−s)} f(s, y(s), u(s)) ds,  t ∈ [0, T],

and the cost functional J(x; u(·)) is well-defined. Then one can pose the following optimal control problem.
With the above setting, we have the following standard result. To simplify the presentation, we assume that the involved maps f, f₀, f₁ have all the required measurability and smoothness. The readers are referred to [14] for details. Proposition 2.1. (Pontryagin's Minimum Principle) Let (H0)′ hold and let (ȳ(·), ū(·)) be an optimal pair of Problem (C). Then the following minimum condition holds: where ψ(·) is the mild solution to the following adjoint equation: i.e., the following holds: Note that (2.5) and (2.10) form a system, with the minimum condition (2.9) bringing in the coupling. Suppose there exists a map ϕ : [0, T] × X × X → U such that

⟨ψ, f(t, y, ϕ(t, y, ψ))⟩ + f₀(t, y, ϕ(t, y, ψ)) = min_{u∈U} [⟨ψ, f(t, y, u)⟩ + f₀(t, y, u)].
Note that the right-hand side of the state equation is affine in u(·) and the integrand in the cost functional is up to quadratic in u(·). We therefore refer to the corresponding optimal control problem as an affine-quadratic optimal control problem (AQ problem, for short). In the finite-dimensional case, the general AQ problem was studied in [24]. In the current case, the adjoint equation reads By the minimum condition we obtain, assuming the invertibility of R(t), Therefore, the corresponding optimality system reads as follows: the corresponding optimal control problem is referred to as a linear-convex problem, which was studied in [28,29,30]. See also [31] for some investigations on finite-dimensional two-person zero-sum differential games with a linear state equation and a non-quadratic payoff/cost functional, where the convexity of y → Q(t, y) and y → G(y) was not assumed. Further, if for some Q : [0, T] → S(X) and G ∈ S(X), the problem reduces to a classical LQ problem. In this case, the optimality system becomes the following linear FBEE: It is known that under the following conditions: the map u(·) → J(x; u(·)) for the current LQ problem is uniformly convex, and the above linear FBEE (2.15) admits a unique mild solution (y(·), ψ(·)) ([14]).
Next, we note that under (H0)′, by the Hille–Yosida theorem, there exist M ≥ 1 and ω ∈ ℝ such that ∥(λ − A)⁻ⁿ∥ ≤ M/(λ − ω)ⁿ, ∀λ > ω, n ≥ 1, and the Yosida approximation A_λ of A is well-defined: A_λ = λA(λ − A)⁻¹ = λ²(λ − A)⁻¹ − λ, λ > ω. By making a shift and absorbing the relevant term into b(t, y, ψ) (see (1.1)), we may assume that ω = 0 in the above. Then by [21], we may assume the following: Proof. Under (2.1), A admits the following spectral decomposition ([8]): A = ∫_{σ(A)} µ dE_µ, where µ → E_µ is the projection-valued measure associated with A, and σ(A) ⊆ (−∞, −σ₀] is the spectrum of A. Consequently, Now, let (2.3) hold. We let 𝒳 = X + iX be the complexification of X, i.e., with the following definition of addition, scalar multiplication, and inner product: ⟨x + iy, x̃ + iỹ⟩ = ⟨x, x̃⟩ + ⟨y, ỹ⟩ + i(⟨y, x̃⟩ − ⟨x, ỹ⟩), ∀x, x̃, y, ỹ ∈ X.
Then under (2.3), we have which implies that Hence, A admits the following spectral decomposition: with σ(A) ⊆ iR. Consequently, for any λ > 0, Note that for any Hence, we must have Consequently, Likewise, Hence, (2.23) follows.
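As a quick numerical illustration of the Yosida approximation A_λ = λA(λ − A)⁻¹ introduced above, the following sketch (with a hypothetical 3×3 stable matrix standing in for a bounded generator A) checks that A_λ → A as λ → ∞:

```python
import numpy as np

# Yosida approximation A_lam = lam * A (lam - A)^{-1} of a generator A;
# here A is a hypothetical 3x3 stable matrix. One has
#   A_lam - A = A^2 (lam - A)^{-1},
# so the error decays like 1/lam as lam -> infinity.

A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
I = np.eye(3)

def yosida(A, lam):
    # A_lam = lam * A @ inv(lam*I - A) = lam^2 inv(lam*I - A) - lam*I
    return lam * A @ np.linalg.inv(lam * I - A)

errs = [np.linalg.norm(yosida(A, lam) - A) for lam in (10.0, 100.0, 1000.0)]
print(errs)
```

The error decreases roughly by a factor of 10 per decade of λ, consistent with the identity A_λ − A = A²(λ − A)⁻¹.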
To conclude this section, let us introduce some assumptions on the coefficients of FBEE (1.1).

Linear FBEEs
In this section, we consider the following linear FBEE: The above is a special case of (1.1). A pair (y(·), ψ(·)) is called a mild solution to (3.1) if the following holds: Our first result is the following.
Proof. By the variation of constants formula, we have The above is a Fredholm integral equation for ψ(·) of the second kind. By our assumption, it has a unique solution. Then our result follows.
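The reduction to a Fredholm integral equation of the second kind can be made concrete numerically. The following sketch (kernel and data are hypothetical, not the ones coming from (3.1)) solves ψ(t) = f(t) + ∫_0^T K(t, s)ψ(s) ds by Nyström discretization; unique solvability corresponds to the invertibility of I − K:

```python
import numpy as np

# Fredholm integral equation of the second kind,
#   psi(t) = f(t) + \int_0^T K(t, s) psi(s) ds,
# solved by Nystrom discretization with trapezoid weights
# (kernel and data are hypothetical example choices).

T, N = 1.0, 200
t = np.linspace(0.0, T, N + 1)
w = np.full(N + 1, T / N); w[0] = w[-1] = T / (2 * N)   # trapezoid weights
K = 0.3 * np.exp(-np.abs(t[:, None] - t[None, :]))      # kernel K(t, s), norm < 1
f = np.cos(t)

# discretized system: (I - K W) psi = f, with W = diag(w)
psi = np.linalg.solve(np.eye(N + 1) - K * w[None, :], f)
residual = psi - (f + (K * w[None, :]) @ psi)
print(np.abs(residual).max())
```

Here the kernel is small enough that I − K is invertible by a Neumann series argument; in general one needs the invertibility assumption of the proposition.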
Next, we consider a special case: A* = −A. For such a case, we have that then A + B(·) generates an evolution operator Φ(·, ·) on X × X. The following result concerns the well-posedness of the corresponding linear FBEE. ∈ L(X × X).
We note that in principle, condition (3.5) is checkable, although it might be practically complicated. We also note that, in the above, the condition that A * = −A, or e At is a group, plays an essential role. It seems that if e At is not a group, the arguments used above will not work (since Φ(· , ·) in the above might not be defined).
We now return to the general linear FBEE (3.1) (without assuming (H0)). Suppose (y(·), ψ(·)) is a strong solution to linear FBEE (3.1). Inspired by the well-known invariant imbedding idea ([4,15,27,16]), we suppose that the following relation holds:

(3.7)  ψ(t) = P(t)y(t) + p(t),  t ∈ [0, T],

for some Fréchet differentiable functions P : [0, T] → L(X) and p : [0, T] → X. Then, formally, we should have Hence, This suggests that we choose P(·) satisfying the following: Note that if A is bounded, (3.8) and (3.10) are equivalent. Further, recalling that Φ₁₁(·, ·) and Φ₂₂(·, ·) are the evolution operators generated by A + B₁₁(·) and A* + B₂₂(·), respectively, one sees that (3.10) is also equivalent to the following: Having the above derivation, we now present the following result. Proof. For any λ > 0, consider the following: where, with the mild solution P(·) of the Riccati equation (3.8), Clearly, P_λ(·) is uniformly bounded (by noting (2.19)). Moreover, for any x ∈ X, Hereafter, K > 0 represents a generic constant which can be different from line to line. Note that P_λ(·) also solves the following Lyapunov equation: Now, we let p_λ(·) be the solution of the following: We estimate Then by Gronwall's inequality, we have Now, let y_λ(·) be the solution to the following: By the convergence of P_λ(·) → P(·) and p_λ(·) → p(·), we have Then, passing to the limit, one sees that (y(·), ψ(·)) is a mild solution to FBEE (3.1).
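The decoupling procedure can be sketched numerically in the scalar case: integrate the Riccati equation backward from the terminal condition, then recover ψ(0) = P(0)y(0), and cross-check against a direct solution of the two-point boundary value problem. All coefficient values below are hypothetical:

```python
import math

# Decoupling a scalar linear FBEE (hypothetical coefficients)
#   y'   =  a*y + b12*psi,   y(0)   = x,
#   psi' = -a*psi - b21*y,   psi(T) = G*y(T),
# via the ansatz psi(t) = P(t)*y(t), where P solves the Riccati equation
#   P' + 2*a*P + b12*P^2 + b21 = 0,  P(T) = G.

a, b12, b21, G, T, x = 0.3, 0.2, 0.1, 0.5, 1.0, 1.0

def riccati_rhs(P):
    return -(2 * a * P + b12 * P**2 + b21)

# integrate the Riccati equation backward from t = T with RK4
N, dt = 2000, -T / 2000
P = G
for _ in range(N):
    k1 = riccati_rhs(P)
    k2 = riccati_rhs(P + 0.5 * dt * k1)
    k3 = riccati_rhs(P + 0.5 * dt * k2)
    k4 = riccati_rhs(P + dt * k3)
    P += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
psi0_riccati = P * x                      # psi(0) = P(0) * y(0)

# cross-check: solve the two-point BVP directly, using the closed-form
# exponential of the trace-free matrix M = [[a, b12], [-b21, -a]],
# for which M^2 = (a^2 - b12*b21) I.
w = math.sqrt(a * a - b12 * b21)
c, s = math.cosh(w * T), math.sinh(w * T) / w
F00, F01 = c + s * a, s * b12
F10, F11 = -s * b21, c - s * a
psi0_direct = (G * F00 - F10) * x / (F11 - G * F01)
print(psi0_riccati, psi0_direct)
```

The two values of ψ(0) agree, confirming that the Riccati solution decouples the forward and backward equations on this horizon (here the Riccati solution stays bounded on [0, T]; in general blow-up may occur).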

Now, a natural question is when the Riccati equation (3.8) admits a mild solution. Let us rewrite the Riccati equation as follows:
(3.14) A trivial case is B₁₂(·) = 0, for which the above equation is linear and always has a solution, under (H0)′ and (3.2). In general, we have the following result. (i) Equation (3.14) admits at most one solution P(·) ∈ C([0, T]; L(X)).

(ii) Under our conditions, we have
Hence, and (3.8) can be written as and let P n+1 (·) be the mild solution of the following Lyapunov equation: Observe the following:

This implies that
On the other hand, from (3.18), one has Hence, by [22,18], for any t ∈ [0, T], there exists a P(t) ∈ S⁺(X) such that Note that for any x ∈ X, Thus, making use of (3.19), we obtain that P(·) is a mild solution to (3.14).
Let us look at the linear FBEE (2.15) resulting from the linear-quadratic optimal control problem. We rewrite (2.15) below: which leads to the existence of R(t)⁻¹. Hence, Then in the case that

Decoupling Method -A Brief Description
We note that, in the previous section, the essence of the approach by means of the Riccati equation is to use the ansatz ψ(t) = P(t)y(t) + p(t). Inspired by this, we now look at nonlinear cases. For FBEE (1.1), suppose ψ(t) = K(t, y(t)) for some K(·, ·). Let {ζ_n}_{n≥1} be an orthonormal basis of X. Then K(t, y) = Σ_{n≥1} k^n(t, y)ζ_n, with k^n(t, y) = ⟨K(t, y), ζ_n⟩, and for any z ∈ X, This means Note that K_y(t, y) is independent of the choice of {ζ_n}_{n≥1}.
We now present the following result.
Proof. Note that as part of the requirement for K(·, ·) being a solution to (4.2), one has and Hence, our claim follows.
It is seen that, thanks to the map K(·, ·), the original FBEE (1.1) is decoupled into (4.3) and (4.4). Because of this, we introduce the following notion: Now the natural question is when one can solve equation (4.2). The linear case has been treated in the previous section. To look at the nonlinear case, let us further assume that A* = A and A has a sequence of eigenvalues {λ_n}_{n≥1} with corresponding eigenfunctions {ζ_n}_{n≥1}, which form an orthonormal basis for X. Then Hence,

0 = K_t(t, y) + K_y(t, y)[Ay + b(t, y, K(t, y))] + A*K(t, y) + g(t, y, K(t, y))
  = Σ_{n=1}^∞ [k^n_t(t, y) + ⟨k^n_y(t, y), Ay + b(t, y, K(t, y))⟩ + λ_n k^n(t, y) + ⟨g(t, y, K(t, y)), ζ_n⟩] ζ_n.
To guarantee (4.7), we assume that We note that (4.6) is a first-order Hamilton–Jacobi equation in the Hilbert space X, involving an unbounded operator A. Therefore, it is possible to study the existence of viscosity solutions of it. When the viscosity solution has certain regularity, one might be able to obtain a decoupling field K(t, y) = k¹(t, y)ζ₁ for our FBEE. Clearly, this is merely a very special case of the general FBEEs, and it already looks complicated. Hence, there is a very long way to go in this direction to establish a satisfactory theory (for nonlinear FBEEs). We hope to report further results in this direction in future publications.

Lyapunov Operators and a Priori Estimates
We now look at the solvability by another method, called the method of continuation. We first look at the following linear FBEE, in which y(0) and ψ(T) are given:
We introduce the following Lyapunov differential equation for an operator-valued function Π(·): with M, M̄, Θ : [0, T] → L(X) and Q₀, Q̄₀ : [0, T] → S(X) to be properly chosen later. We may equivalently write (5.4) as follows: Let us first look at (5.5) and (5.6). Operator-valued functions P(·) and P̄(·) are mild solutions to (5.5) and (5.6), respectively, if the following hold: We use the above definition simply because when A is bounded, (5.5) is equivalent to (5.9), and (5.6) is equivalent to (5.10). Further, if we let Φ(·, ·) and Φ̄(·, ·) be the evolution operators generated by A − M(·) and A − M̄(·), respectively, then P(·) and P̄(·) admit the following representations: This yields that if Now, let us look at (5.7) and (5.8), which are equivalent. We assume that (H0) holds. Therefore, we have two cases to discuss. Note that under (2.1), A admits a spectral decomposition Hence, (5.14) holds in this case. In particular, if we have Also, as a special case of (5.17), if for some suitable scalar functions γ(·), θ(·), m(·), m̄(·), then for which (5.14) will also hold. We point out that in the current case, (5.14) is not needed. However, if (5.17) holds and we are working in a complex Hilbert space, we will still have (5.14), and Γ(·) can also be given by (5.16).

In what follows, when we say a mild solution Π(·) of (5.4), we mean that P(·) and P̄(·) are given by (5.9) and (5.10), respectively, and Γ(·) is defined by (5.16) such that (5.14) holds in the case A* = A, and Γ(·) is defined by (5.21) in the case A* = −A (since we prefer to stay with a real Hilbert space).
The following is the main result of this section and it will play an important role below.
Next, we let For any x ∈ X and (b, g, h) ∈ G₁, (b₀, g₀, h₀) ∈ G₀, and ρ ∈ [0, 1], consider the following FBEE: It is easy to see that when It is easy to see that for ρ = 0, (5.26) is a trivial decoupled FBEE which admits a unique mild solution, and for ρ = 1, (5.26) is essentially the same as (although it looks a little more general than) FBEE (1.1). We will show that under certain conditions, there exists an absolute constant ε > 0 such that whenever (5.26) is (uniquely) solvable for some ρ ∈ [0, 1), it must be (uniquely) solvable with ρ replaced by (ρ + ε) ∧ 1. Then, by repeating the same argument, we obtain the (unique) solvability of (1.1) over [0, T]. Such an argument is called a method of continuation (see [25]). In doing so, the key is to establish an a priori estimate for the mild solutions of (5.26), uniform in ρ ∈ [0, 1]. To this end, we need to make some preparations.
We note that in the above proposition, it is only assumed that (b, g, h) ∈ G₃ (the set of all generators satisfying (H3)). Therefore, the Fréchet derivatives b_y, b_ψ, and so on are not necessarily bounded. However, it is still possible that On the other hand, in the case (b, g, h) ∈ G₄, we have Hence, when the following holds: FBEE (5.27) admits a unique mild solution (y^ρ_λ(·), ψ^ρ_λ(·)), by means of the contraction mapping theorem. It is not hard to see that condition (5.35) holds when one of the following holds: • The parameter ρ = 0; this is a trivial case, for which the FBEE is linear and decoupled.
• The time duration T is small enough.
• The coupling is weak enough in the sense that the Lipschitz constant of b(t, y, ψ) with respect to ψ (the bound of b ψ (·)), and/or the Lipschitz constants of g(t, y, ψ) and h(y) with respect to y (the bounds of g y (·) and h y (·)) are small enough. An extreme case is that b(t, y, ψ) is independent of ψ, or g(t, y, ψ) and h(y) are independent of y, which corresponds to the decoupled case.
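The continuation argument described above, in which solvability is pushed from ρ = 0 to ρ = 1 in steps of size ε with each step handled by a contraction warm-started at the previous solution, can be illustrated on a scalar toy FBEE. All coefficients below are hypothetical:

```python
import numpy as np

# Method of continuation on a scalar toy FBEE (hypothetical data):
#   y'   =  a*y + rho*b(y, psi),    y(0)   = x,
#   psi' = -a*psi - rho*g(y, psi),  psi(T) = rho*h(y(T)).
# Solvability is extended from rho = 0 (decoupled) to rho = 1 in steps
# of size eps; each step uses Picard iteration warm-started at the
# previous rho's solution.

a, x, T, N, eps = -0.2, 1.0, 1.0, 300, 0.1
t = np.linspace(0.0, T, N + 1)
b = lambda y, p: 0.5 * np.tanh(p)
g = lambda y, p: 0.5 * np.sin(y)
h = lambda yT: 0.4 * yT

def trap(f, s):
    return float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)

def solve(rho, y, p, tol=1e-11, itmax=500):
    for _ in range(itmax):
        yn, pn = np.empty_like(y), np.empty_like(p)
        fb, fg = rho * b(y, p), rho * g(y, p)
        for i in range(N + 1):
            yn[i] = np.exp(a * t[i]) * x + trap(np.exp(a * (t[i] - t[:i+1])) * fb[:i+1], t[:i+1])
            pn[i] = np.exp(a * (T - t[i])) * rho * h(y[-1]) \
                    + trap(np.exp(a * (t[i:] - t[i])) * fg[i:], t[i:])
        err = max(np.abs(yn - y).max(), np.abs(pn - p).max())
        y, p = yn, pn
        if err < tol:
            return y, p
    raise RuntimeError("Picard iteration failed to contract")

y, p = np.full(N + 1, x), np.zeros(N + 1)   # rho = 0: decoupled case
for k in range(1, 11):
    y, p = solve(k * eps, y, p)             # extend solvability by eps per step
print(abs(p[-1] - h(y[-1])))                # terminal condition at rho = 1
```

In the actual method of continuation the step size ε comes from a uniform a priori estimate rather than from a global contraction; the sketch only illustrates the warm-started stepping in ρ.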
From Proposition 5.2, we see that due to the coupling, in general, one can only obtain an estimate of y(·) in terms of ψ(·), and an estimate of ψ(·) in terms of y(·). In order to obtain an a priori estimate on the whole (y(·), ψ(·)), we need either an estimate for such that Π(·) is a mild solution to the Lyapunov differential equation and, for some constants µ, K > 0, the following are satisfied: If (5.37) is replaced by the following: then Π(·) is called a type (II) Lyapunov operator of (b, g, h).
If Π(·) is either a type (I) or a type (II) Lyapunov operator of (b, g, h), we simply call it a Lyapunov operator of (b, g, h).
On the other hand, in the case that Π(·) is a type (II) Lyapunov operator for (b, g, h), we have Hence, Then, combining the above with (5.33) and (5.34), we again obtain (5.40).

Well-Posedness of FBEEs via Lyapunov Operators
We now state and prove the following theorem concerning the well-posedness of FBEE (1.1).

ds .
Note that the constant K in front of ε above is universal. Then choose ε > 0 satisfying Kε ≤ 1/2, so that the first term on the right-hand side can be absorbed into the left-hand side, leading to the following: Continuing the above procedure, we obtain the solvability of the following coupled FBEE: with the mild solution (y(·), ψ(·)) satisfying Thus, in particular, by taking (b₀, g₀, h₀) = 0, we obtain the solvability of FBEE (1.1) with the estimate Now, let (ȳ(·), ψ̄(·)) be a mild solution to (1.1) corresponding to (b̄, ḡ, h̄) ∈ G₂ ∩ G₃. Then (ȳ(·) − y(·), ψ̄(·) − ψ(·)) satisfies a linear FBEE with the generator admitting a type (I) or type (II) Lyapunov operator Π(·), the same as that for the generator (b, g, h). Hence, applying (6.8), we obtain the following stability estimate: This proves the theorem.

Construction of Lyapunov Operators and Solvable FBEEs
In this section, we will construct some Lyapunov operators, through which we obtain well-posedness of corresponding FBEEs. First of all, we prove the following result which is practically more convenient to use than the definition.
be a mild solution to the linear Lyapunov differential equation (5.4) for some Then Π(·) is both a Lyapunov operator of types (I) and (II) for (b, g, h) if the following hold:

(7.3)  P(T) + h_y(y)*Γ(T) + Γ(T)*h_y(y) + h_y(y)*P̄(T)h_y(y) ≥ δ,  ∀y ∈ X,

and (7.4) for some δ > 0, with Note that the δ > 0 appearing in (7.2)–(7.4) does not have to be the same. But we can always make them the same by shrinking δ if necessary.
We note that condition (7.13) is more convenient to check than (7.4). Next, we look at some concrete special cases of Theorem 7.1, which will be more practically useful. We first present the following result. In the above, the following convention is adopted: In particular, if and in the case A* = −A, for t ∈ [0, T], Further, if m = m̄ = 0, then, for A* = A, t ∈ [0, T], When m̄ − m = 0, the above is understood as We now look at two cases.
In the case A* = A, (7.25) and (7.26) become In the case A* = −A, (7.25) and (7.26) become with the above understood as follows when m = 0, and with the above understood as follows when m̄ = 0. The remaining conclusions are clear.
Combining Theorem 7.1 or Corollary 7.2 with Lemma 7.3, we can present many concrete cases for which the corresponding FBEEs are well-posed. For simplicity of presentation, we only consider below the case that (7.20) holds. First, we present a simple lemma.
(i) For the case A* = A, we note that By the definition of η(·), we have that and lim_{m→−σ₀} in the sense that for some δ > 0, It is possible to cook up many other cases from Theorem 7.5 and/or Corollary 7.6 for which the corresponding FBEEs are well-posed. Let us list some of them here. Corollary 7.9. Let (H0) hold and (b, g, h) ∈ G₂ ∩ G₃. Then the corresponding FBEE is well-posed if one of the following holds: (i) For some δ, ε > 0, I + h_y(y) + h_y(y)* ≥ δ, ∀y ∈ X.
(ii) and (iii) can be proved similarly.
Inspired by the above result, one can easily prove many other results of a similar nature. We prefer not to get into exhaustive details.

More General Cases
In this section, we will briefly consider some more general cases.
First of all, we consider the case (b, g, h) ∈ G 2 , i.e., the generator (b, g, h) only satisfies (H2), and might not be Fréchet differentiable in (y, ψ). Such a situation happens in many optimal control problems. To study such a case, let us recall some results from [20].
Let f : X → X be Lipschitz continuous and ȳ ∈ X. For any linear subspace L ⊆ X, we define the L-Gâteaux-Jacobian D_L f(ȳ) ∈ L(L; X) by the following (if the limit exists): The set of all points ȳ ∈ X for which D_L f(ȳ) exists is denoted by Ω_L(f). Next, we let co{D_L f(y) : y ∈ Ω_L(f), ∥y − ȳ∥ ≤ δ} and define the generalized Jacobian of f(·) at ȳ by the following: For any y, z ∈ X, define y ⊗ z : L(X) → ℝ by (y ⊗ z)(Ψ) = ⟨Ψ(y), z⟩, ∀Ψ ∈ L(X).
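The L-Gâteaux-Jacobian can be illustrated by finite differences: for a Lipschitz (nonsmooth) map f on ℝ², the limit defining D_L f(ȳ) along a direction z ∈ L is approximated by difference quotients. The map and the point below are hypothetical examples:

```python
import numpy as np

# Finite-difference illustration of the L-Gateaux-Jacobian D_L f(ybar):
# the limit of (f(ybar + h*z) - f(ybar)) / h along directions z in a
# subspace L. Here f is a Lipschitz but nonsmooth map on R^2, and ybar
# is a point of differentiability away from the kinks of f.

f = lambda y: np.array([np.abs(y[0]) + y[1], np.maximum(y[0], y[1])])
ybar = np.array([1.0, -0.5])

def dq(z, h):
    # difference quotient along direction z with step h
    return (f(ybar + h * z) - f(ybar)) / h

z = np.array([1.0, 0.0])              # direction in L = span{e1}
approx = [dq(z, h) for h in (1e-2, 1e-4, 1e-6)]
exact = np.array([1.0, 1.0])          # sign(1) = 1 and, since y1 > y2, d max/dy1 = 1
print(approx[-1], exact)
```

At points where f is not differentiable (here, y1 = 0 or y1 = y2), the difference quotients depend on the direction of approach, which is why the generalized Jacobian is defined as a convex hull over nearby points of differentiability.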
The weak topology induced by X ⊗ X on L(X) is called the weak * -operator-topology, denoted by β(X). The following can be found in [20].
Then, all the results from the previous sections for (b, g, h) ∈ G₂ ∩ G₃ can be carried over properly to the case (b, g, h) ∈ G₂. where y → Q(y) and y → G(y) are C² and convex. Then the Pontryagin minimum principle leads to the optimality system: In this case, we have b(t, y, ψ) = −BR⁻¹B*ψ, g(t, y, ψ) = Q_y(y), h(y) = G_y(y). Thus, g_y(t, y, ψ) = Q_yy(y), g_ψ(t, y, ψ) = 0, h_y(y) = G_yy(y). Then Hence, under the conditions R ≥ δ, M ≥ G_yy(y) ≥ 0, M ≥ Q_yy(y) ≥ δ, ∀y ∈ X, for some M, δ > 0, all the conditions of Corollary 7.7 hold, and the FBEE (9.1) admits a unique mild solution. A further special case is the following: Q(y) = ½⟨Qy, y⟩, G(y) = ½⟨Gy, y⟩, for some Q, G ∈ S⁺(X). In this case, the FBEE can be written as Hence, according to the above, when for some δ > 0, the FBEE (9.2) admits a unique mild solution.
Example 9.2. (AQ Problem) For the simplicity of presentation, we let S(·) = 0, and assume that all the involved functions are time-independent. Then the optimality system reads g(t, y, ψ) = Q y (y) + F y (y) * ψ, h(y) = G y (y).
Example 9.3. (Optimal Control of a Parabolic PDE) We now consider an optimal control problem for a parabolic equation. Such a problem was studied in [23]. The controlled state equation reads: where y(t, x) is the state and u(t, x) is the control, and Ω ⊆ ℝⁿ is a bounded domain with smooth boundary ∂Ω. The cost functional is the following: We assume that f(t, x) ≥ 0, y_d(t, x) ≥ 0 for (t, x) ∈ (0, T) × Ω, and y₀(x) ≥ 0, z(x) ≥ 0 for x ∈ Ω.
According to [23], optimal control exists and the optimality system reads: Hence, Then Thus, 0 I I 0 B(t, y, ψ) + B(t, y, ψ) provided ψ is bounded (which was shown in [23]) and N is large enough.

Concluding Remarks
We have discussed the well-posedness of FBEEs, which is mainly motivated by the optimality systems of optimal control problems for infinite-dimensional evolution equations. We have presented some basic results from two approaches: the decoupling method and the method of continuation. It is seen that the theory is far from mature and many challenging questions are left open. Here is a partial list of these: • In the direction of the decoupling method, it is widely open how one can construct the decoupling field through solving a PDE in a Hilbert space.
• In the direction of the method of continuation, more careful analysis is needed to make the stated conditions easier to use.
• More general generators A, other than A* = A and A* = −A, should be considered. Also, taking PDEs into account, the generator (b, g, h) might be unbounded (involving differential operators).