Linear Openness and Feedback Stabilization of Nonlinear Control Systems

It is well known from Brockett's seminal theorem that the openness property of the mapping on the right-hand side of a given nonlinear ODE control system is a necessary condition for the existence of locally asymptotically stabilizing continuous stationary feedback laws. However, this condition fails to be sufficient for such feedback stabilization. In this paper we develop an approach of variational analysis to continuous feedback stabilization of nonlinear control systems in which openness is replaced by the linear openness property, which has been well understood and characterized in variational theory. This allows us, in particular, to obtain efficient conditions expressed via the system data that support the sufficiency in Brockett's theorem and ensure local exponential stabilization by means of continuous stationary feedback laws. Furthermore, we derive new necessary conditions for local exponential and asymptotic stabilization of continuous-time control systems by using both continuous and smooth stationary feedback laws, and we also establish counterparts of the obtained sufficient conditions for local asymptotic stabilization by continuous stationary feedback laws in the case of nonlinear discrete-time control systems.


Introduction
Consider the autonomous control system governed by nonlinear ordinary differential equations

ẋ = f(x, u),    (1)

where the vector function f : U × V → R^n on the right-hand side of (1) is sufficiently smooth on some open set U × V ⊂ R^n × R^m around the origin (0, 0), which is a given equilibrium pair. The continuous feedback stabilization property under consideration is formulated as follows; see, e.g., [12, Definition 10.11].
Definition 1.1. We say that the control system (1) is locally asymptotically stabilizable by means of continuous stationary feedback laws if there exists a continuous control u ∈ C(R^n, R^m) with u(0) = 0 such that 0 ∈ R^n is a locally asymptotically stable equilibrium point for the closed-loop system

ẋ = f(x, u(x)).    (2)

The question of whether (1) can be locally asymptotically stabilized by means of continuous stationary feedback laws has been intensively studied in the literature, with a number of necessary as well as sufficient conditions derived for it; see, e.g., [5,7,8,11,12,20,28,29,30] and the references therein for a variety of results, discussions, and applications. In particular, the fundamental theorem by Brockett [7] shows that the control system (1) cannot be locally asymptotically stabilized by continuous stationary feedback laws if the openness property of f is not satisfied at the equilibrium point (0, 0). In fact, Brockett's original result of a topological (mainly degree-theoretic) nature [7, Theorem 1] establishes the necessity of the openness property for the existence of a continuously differentiable (smooth) locally stabilizing feedback law. Its extension to merely continuous stationary feedback laws is given by Zabczyk [31] by using a different device based on a deep result by Krasnoselskii and Zabreiko [16] from geometric nonlinear analysis. On the other hand, Coron [11] and Sontag [30] construct impressive examples demonstrating that the openness property is not sufficient for the existence of such a locally asymptotically stabilizing continuous stationary feedback law.
In this paper we suggest replacing the openness property of f by its linear openness counterpart, which is a basic property of single-valued and set-valued mappings that has been well understood and completely characterized (in contrast to the openness property) in general settings of variational analysis; see Section 2 for a brief overview and the books [6,14,25,26,27] with the extended bibliographies therein for numerous applications to various issues in optimization, equilibria, control, and practical models.
We believe that the developed approach of variational analysis and generalized differentiation has strong potential for applications to a broader class of control systems, not just to the smooth system (1), and we discuss these issues in the concluding section. However, for clarity of exposition we confine ourselves here to the frameworks of smooth ODE systems of type (1) and their discrete-time analogs. In the case of (1) the developed variational approach allows us to verify that the linear openness of f around the equilibrium point, combined with a precise relationship between the exact bound of linear openness and the unstable eigenvalues of the partial Jacobian matrix of f in x, ensures the existence of a continuous stationary feedback law that locally exponentially (and hence locally asymptotically) stabilizes (1) via the closed-loop control system (2). In this way we arrive at a partial converse to Brockett's theorem. Moreover, we justify the necessity of linear openness for local exponential stabilization of (1) by means of continuous stationary feedback laws, as well as the necessity of this property for local asymptotic stabilization of (1) by means of smooth feedback laws under an additional assumption. Certain analogs of the obtained sufficient conditions for local asymptotic stabilization by means of continuous stationary feedback laws are also established for the discrete-time nonlinear control counterpart of (1).
The rest of the paper is organized as follows. Section 2 presents a brief overview of basic tools and results of variational analysis, mainly concerning the linear openness property of single-valued and set-valued mappings as well as the closely related properties of metric regularity and Lipschitzian stability, from both qualitative and quantitative viewpoints. The presented material justifies our variational approach to stabilizing nonlinear control systems, not only of the smooth type (1) and its discrete-time counterpart, but also for more general classes to be considered in future developments. Section 3 is devoted to the implementation of this variational approach to establish sufficient as well as necessary conditions for stabilizing nonlinear continuous-time control systems (1) by means of both continuous and smooth feedback laws. In Section 4 we deal with discrete-time control systems, Section 5 presents some illustrative examples, and the final Section 6 summarizes the main achievements of the paper and discusses some perspectives of the variational approach for further research in this direction.
Throughout the paper we use standard notation of variational analysis and control theory; see, e.g., [12,25,27]. Recall that B r (z) stands for the closed ball centered at z with radius r > 0, while the symbol B signifies the closed unit ball of the space in question.

Linear Openness and Related Properties of Nonlinear Mappings
We start by recalling some properties of mappings used in the sequel. Most of the definitions below are valid, or can be reformulated, in general frameworks of normed and even metric spaces, but we need them in finite dimensions and so proceed accordingly.

Definition 2.1. A single-valued mapping f : R^l → R^n is said to be open at z̄ ∈ R^l if the f-image of every neighborhood of z̄ contains/covers a neighborhood of f(z̄). This amounts to saying that for every ε > 0 there exists ν > 0 such that

B_ν(f(z̄)) ⊂ f(B_ε(z̄)).    (3)

The origin of this property goes back to the classical Banach-Schauder open mapping theorem, which says that a bounded linear operator between Banach spaces is open if and only if it is surjective. The openness property (3) of nonlinear vector functions f was used by Brockett [7] as a necessary condition for the existence of locally asymptotically stabilizing continuous stationary feedback laws in the sense of Definition 1.1 for the continuous-time control systems (1). We now recall an appropriate strengthening of (3), which has been recognized as a fundamental property in variational analysis for nonlinear (and even set-valued) mappings while allowing us to get the sufficiency in Brockett's theorem.
Definition 2.2. A mapping f : R^l → R^n is said to be linearly open (or to have the covering property) around z̄ with modulus κ > 0 if there exists a neighborhood U of z̄ such that

f(B_r(z)) ⊃ B_{κr}(f(z)) whenever B_r(z) ⊂ U, r > 0.    (4)

The supremum of all the moduli κ for which (4) holds with some neighborhood U is called the exact covering/linear openness bound of f around z̄ and is denoted by cov f(z̄).
Property (4) first appeared in [13] under the name of "covering in a neighborhood," while it had been introduced and popularized by Milyutin in his talks and personal communications long before the publication of [13]. Interest in designating and calculating the (quantitative) exact bound cov f(z̄) arose later; see [23,25,27] and the references therein. The term "linear openness" for (4) was probably first used in [27]; this property is also widespread in the literature under the name of "openness at a linear rate." It is clear that the linear openness property from Definition 2.2 yields its openness counterpart from Definition 2.1. Simple examples demonstrate that the opposite implication fails.

Example 2.3. Consider the real-valued function f : R → R given by f(z) := z^3. It is easy to check that f possesses the openness property (3) at z̄ = 0. However, the linear openness property (4) obviously fails for f around the same point. The reader can observe that the same phenomenon holds true for the power functions f(z) := z^s with any odd natural exponent s ≥ 3 and z̄ = 0.
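To see the failure quantitatively, note that f(B_r(0)) = [−r^3, r^3], so the best covering modulus over B_r(0) equals r^2 and cannot stay above any fixed κ > 0 as r → 0. The following short numerical check is our own illustration of this degeneration, not code from the paper:

```python
# For f(z) = z^3 around z̄ = 0 we have f([-r, r]) = [-r^3, r^3], so the largest
# kappa with B_{kappa*r}(f(0)) ⊂ f(B_r(0)) is kappa(r) = r^2, which -> 0 as r -> 0.
def best_covering_modulus(r: float) -> float:
    # covering requires kappa * r <= r**3, i.e. kappa <= r**2
    return r ** 2

for r in [1.0, 0.1, 0.01]:
    print(r, best_covering_modulus(r))  # the modulus degenerates near the origin
```

Thus no single modulus κ > 0 works uniformly for all small balls, which is exactly the failure of (4).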
It is important to emphasize the following two principal differences between the openness property (3) and its linear counterpart (4) from Definition 2.2: (i) Property (4) ensures the uniformity of covering around z̄.
(ii) Property (4) designates a linear rate of openness quantified by the modulus κ.

These two issues are crucial for establishing complete characterizations of linear openness/covering with a precise calculation of the exact covering bound cov f(z̄). Before proceeding in this direction, let us recall another fundamental property of variational analysis, which happens to be equivalent to linear openness with a reciprocal relationship between the exact bounds of the corresponding moduli.

Definition 2.4. A mapping f : R^l → R^n is said to be metrically regular around z̄ with modulus μ > 0 if there exist neighborhoods U of z̄ and V of f(z̄) such that

dist(z, f^{-1}(y)) ≤ μ dist(y, f(z)) for all z ∈ U and y ∈ V.    (5)

The infimum of all the moduli μ for which (5) holds with some neighborhoods U and V is called the exact regularity bound of f around z̄ and is denoted by reg f(z̄).

We refer the reader to the books [6,14,25,27] and the bibliographies therein for the genesis of this property, its characterizations, various relationships, and numerous applications. In particular, it has been realized that the metric regularity and covering properties are actually equivalent to each other and that their exact bounds are reciprocally related as

cov f(z̄) · reg f(z̄) = 1.    (6)
They can also be equivalently described by the Lipschitzian behavior of the (set-valued) inverse f^{-1} : R^n ⇉ R^l around (f(z̄), z̄). Such a property is defined for arbitrary set-valued mappings as follows.
Definition 2.5. A set-valued mapping F : R^l ⇉ R^n is said to be Lipschitz-like with modulus ℓ ≥ 0 around the pair (z̄, ȳ) belonging to its graph if there exist neighborhoods U of z̄ and V of ȳ such that

F(z) ∩ V ⊂ F(z') + ℓ‖z − z'‖B for all z, z' ∈ U.    (7)

The infimum of all the moduli ℓ for which (7) holds with some neighborhoods U and V is called the exact Lipschitzian bound of F around (z̄, ȳ) and is denoted by lip F(z̄, ȳ).
If F is single-valued, the Lipschitz-like property from Definition 2.5 reduces to the classical local Lipschitzian behavior, while in the compact-valued case with V = R^n in (7) it reduces to the Hausdorff local Lipschitzian property of multifunctions. In the general case of V in (7), this condition is also known as the pseudo-Lipschitz or Aubin property. It can be viewed as a natural graphical localization of Lipschitzian behavior for set-valued mappings. Note that in the case of single-valued mappings F = f, the exact Lipschitzian bound lip f(z̄) of f around z̄ can be easily represented in the following way:

lip f(z̄) = limsup_{z,z'→z̄} ‖f(z) − f(z')‖ / ‖z − z'‖.

It is known that the linear openness and metric regularity properties of f around z̄ from Definitions 2.2 and 2.4, respectively, are equivalent to the Lipschitz-like property of Definition 2.5 for the inverse mapping F = f^{-1} around (f(z̄), z̄) with the exact bound relationship

lip f^{-1}(f(z̄), z̄) = reg f(z̄) = 1/cov f(z̄).

Furthermore, the aforementioned equivalences and exact bound relationships hold true for general set-valued mappings between Banach spaces with appropriately extended definitions of covering and metric regularity; see [25, Subsections 1.2.2 and 1.2.3] and the commentaries therein for a complete account.
When f is of class C^1, as in our primary setting here, it was shown independently by Lyusternik [22] and Graves [18] that the surjectivity of the derivative operator ∇f(z̄), which amounts to the full rank condition on the Jacobian matrix ∇f(z̄) in finite dimensions, is sufficient for the openness property (3) in [18] and for a version of metric regularity (as a description of the tangent space to a smooth manifold) in [22]. Note that neither Lyusternik nor Graves dealt with the linear openness and metric regularity properties defined above, but their (rather similar) proofs were instrumental in verifying the sufficiency of the surjectivity condition on ∇f(z̄) for the more delicate properties from Definitions 2.2 and 2.4 in the case of smooth single-valued mappings between arbitrary Banach spaces. However, they considered neither the necessity of the surjectivity condition for the underlying properties nor the exact bound calculations.
It follows from more general results of variational analysis (and was first observed in its framework) that we actually have complete qualitative and quantitative characterizations of the basic properties defined above. The following theorem is taken from [25, Theorem 1.57], while being valid in general Banach spaces; see also the commentaries to [25, Chapter 1] for further discussions. Recall that f : R^l → R^n is strictly differentiable at z̄, with the strict derivative ∇f(z̄), if

lim_{z,z'→z̄} ( f(z) − f(z') − ∇f(z̄)(z − z') ) / ‖z − z'‖ = 0.

This property always holds if f is smooth (i.e., of class C^1) around z̄ but may be stronger than the standard Fréchet differentiability of f at this point.
Theorem 2.6. Let f : R^l → R^n with n ≤ l be strictly differentiable at z̄. Then f enjoys the equivalent linear openness and metric regularity properties around z̄ if and only if the strict derivative ∇f(z̄) is surjective, i.e., the Jacobian matrix has full rank. Furthermore, the exact bounds of linear openness and metric regularity of f at z̄ are precisely calculated, respectively, by

cov f(z̄) = min{ ‖∇f(z̄)*v‖ : ‖v‖ = 1 } and reg f(z̄) = ‖(∇f(z̄)*)^{-1}‖,    (8)

where the symbol "*" stands for the adjoint operator (matrix transposition), and where the last norm in (8) is the standard norm of linear bounded operators applied to the inverse operator (∇f(z̄)*)^{-1}, which is single-valued when the derivative operator ∇f(z̄) is surjective.
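In finite dimensions, formula (8) says that cov f(z̄) is the smallest singular value of the Jacobian matrix ∇f(z̄), and reg f(z̄) is its reciprocal. The following sketch, with a made-up surjective Jacobian of our own (not taken from the paper), computes both exact bounds:

```python
import numpy as np

# By formula (8), cov f(zbar) = min{ ||∇f(zbar)^T v|| : ||v|| = 1 }, which equals
# the smallest singular value of the Jacobian; reg f(zbar) = 1 / cov f(zbar).
def exact_covering_bound(jacobian: np.ndarray) -> float:
    # smallest singular value; positive iff the Jacobian has full row rank
    return float(np.linalg.svd(jacobian, compute_uv=False)[-1])

J = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])   # a surjective 2x3 Jacobian (n = 2 <= l = 3)
cov = exact_covering_bound(J)
print(cov, 1.0 / cov)             # linear-openness and metric-regularity bounds
```

If the Jacobian fails to have full row rank, the smallest singular value is zero, in agreement with the failure of linear openness in Theorem 2.6.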
Having in mind further possible extensions (see more in Section 6) of our approach and results to nonsmooth control systems (1) and their set-valued versions ẋ ∈ F(x), as well as to partial differential counterparts, we now formulate, for the reader's convenience, generalized differential characterizations of the aforementioned basic properties of variational analysis in the general framework of set-valued mappings F : R^l ⇉ R^n with closed graphs. This is done by using the coderivative D*F(z̄, ȳ) : R^n ⇉ R^l of F at (z̄, ȳ) ∈ gph F, which we are not going to define here, while recalling that

D*F(z̄, ȳ)(v) = {∇f(z̄)*v} for all v ∈ R^n    (9)

provided that the mapping F = f is single-valued and strictly differentiable at z̄; in particular, if it is of class C^1 as in the case studied in this paper. In general, the coderivative is a positively homogeneous mapping that enjoys comprehensive calculus rules based on variational/extremal principles of variational analysis. We refer the reader to the books [25,26,27] and the commentaries therein for exact results, discussions, extended bibliographies, and various applications.
In particular, a complete characterization of the Lipschitz-like property from Definition 2.5 for F around (z̄, ȳ), with the precise calculation of the exact bound lip F(z̄, ȳ), was obtained in [23, Theorem 5.7] and reads as follows: F is Lipschitz-like around (z̄, ȳ) if and only if D*F(z̄, ȳ)(0) = {0}, in which case

lip F(z̄, ȳ) = ‖D*F(z̄, ȳ)‖,    (10)

where ‖·‖ stands for the usual norm of a positively homogeneous mapping G : R^n ⇉ R^l defined by ‖G‖ := sup{ ‖u‖ : u ∈ G(v), ‖v‖ ≤ 1 }. Another proof of (10) is given in [27, Theorem 9.40] under the name of the "Mordukhovich criterion." The reader can find an infinite-dimensional extension of (10) in [25, Theorem 4.10]. The corresponding counterparts of (10) for the (inverse) equivalent properties of metric regularity and linear openness state that both hold around (z̄, ȳ) if and only if ker D*F(z̄, ȳ) = {0}, with the exact bound formulas

reg F(z̄, ȳ) = ‖(D*F(z̄, ȳ))^{-1}‖ = 1/cov F(z̄, ȳ),    (11)

where ker G := {v ∈ R^n | 0 ∈ G(v)}. Due to the coderivative representation (9) for strictly differentiable single-valued mappings, the kernel coderivative criterion for metric regularity and linear openness in (11) reduces to the surjectivity of ∇f(z̄), while the exact bound formulas in (11) reduce to (8) for such mappings, as stated in Theorem 2.6 above. After this brief overview of the available tools of variational analysis related to linear openness, we are now ready to proceed with the implementation of the variational approach in the case of smooth continuous-time control systems (1) as well as for their discrete-time counterparts.

Exponential and Asymptotic Stabilization of Continuous-Time Control Systems
Consider first the continuous-time control system (1). The main attention here is paid to the following notion of local exponential stabilization of (1) by means of continuous stationary feedback laws.
Definition 3.1. The control system (1) is said to be locally exponentially stabilizable if there exist a continuous stationary feedback law u(x) with u(0) = 0 and constants α > 0, M > 0, and δ > 0 such that for any starting point x_0 ∈ R^n with ‖x_0‖ < δ there exists a unique solution x(t, x_0) of the closed-loop control system (2) satisfying x(0, x_0) = x_0 and the exponential decay condition

‖x(t, x_0)‖ ≤ M e^{−αt} ‖x_0‖ for all t ≥ 0.

To formulate the following main result of the paper, consider the partial Jacobian matrices

A := ∇_x f(0, 0) and B := ∇_u f(0, 0)    (12)

for f ∈ C^1, the (complex) spectrum Λ(A) of A, and the collection of eigenvalues of A with nonnegative real parts

Λ^+(A) := { λ ∈ Λ(A) | Re(λ) ≥ 0 }    (13)

in the case of the continuous-time control system (1).
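As a small illustration with a made-up system of our own (not one from the paper), take f(x, u) = (x_2, −x_1 + u); the matrices in (12) and the eigenvalue collection in (13) can be computed directly:

```python
import numpy as np

# Illustration (our example): f(x, u) = (x2, -x1 + u). The matrices of (12) are
# the partial Jacobians at the equilibrium (0, 0), and (13) collects the
# eigenvalues of A with nonnegative real part.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # A = ∂f/∂x(0, 0)
B = np.array([[0.0],
              [1.0]])            # B = ∂f/∂u(0, 0)

spectrum = np.linalg.eigvals(A)
lambda_plus = [lam for lam in spectrum if lam.real >= 0]
print(lambda_plus)               # both eigenvalues ±i have zero real part
```

Here Λ^+(A) consists of the two purely imaginary eigenvalues ±i, so the set in (13) is nonempty even though no eigenvalue lies strictly in the right half-plane.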
Theorem 3.2. In addition to the standing assumptions on (1), suppose that the following condition holds:

(C) the mapping f is linearly open around the equilibrium (0, 0) and its exact covering bound κ := cov f(0, 0) satisfies

κ > sup{ |λ| : λ ∈ Λ^+(A) },    (14)

where sup ∅ := −∞. Then the control system (1) can be locally exponentially stabilized by means of continuous stationary feedback laws. Furthermore, the linear openness of f around (0, 0) is necessary for such a local exponential stabilization of (1) without any additional assumptions on the system data if continuously differentiable stationary feedback laws are used instead of merely continuous ones.
Proof. The verifications of both the sufficiency and the necessity parts of the theorem strongly rely on the complete linear openness characterization given in Theorem 2.6. We start with proving the sufficiency statement. Besides Theorem 2.6, it employs a stability property of linear openness with respect to small Lipschitzian perturbations, along with some results well understood in controllability and stabilization theories for smooth control systems. To proceed, for any constant ν > 0 consider the perturbed vector function

f_ν(x, u) := f(x, u) − νx = f(x, u) + g_ν(x) with g_ν(x) := −νx.

It is obvious that lip g_ν(0, 0) = ν. Taking into account the equivalence between the linear openness and metric regularity properties discussed in Section 2 with the exact bound relationship (6), we deduce from the linear openness version of [25, Theorem 4.25] that

cov f_ν(0, 0) > κ − ν > 0 whenever ν ∈ [0, κ).    (15)
Thus the derivative operator ∇f_ν(0, 0) : R^n × R^m → R^n is surjective for such ν due to the necessity of the surjectivity condition (i.e., the full rank of the Jacobian matrix ∇f_ν(0, 0)) for the linear openness property from Theorem 2.6. Furthermore, condition (14) in assumption (C) implies that cov f_λ(0, 0) > κ − |λ| > 0 for all λ ∈ Λ^+(A), and hence the operator ∇f_λ(0, 0) : R^n × R^m → R^n is surjective for all λ ∈ Λ^+(A) by Theorem 2.6. The latter condition can be equivalently written in the rank form

rank [A − λI | B] = n for all λ ∈ Λ^+(A).    (16)

Since (16) is the well-known Hautus test for asymptotic controllability of linear autonomous systems [19,29], we have that the linearized control system

ẋ = Ax + Bu,    (17)

with A and B from (12), is asymptotically controllable. Employing now [19, Theorem 4] and its proof for the case of continuous-time control systems allows us to conclude that the linearized system (17) is exponentially stabilizable, which in turn yields, as in [21, Chapter 6; see, e.g., Exercise 6 on p. 392], the local exponential stabilization of the original nonlinear system (1) and hence verifies the sufficiency part of the theorem.
To justify the remaining necessity statement, assume that the nonlinear control system (1) is exponentially stabilizable by means of smooth stationary feedback laws. This is equivalent to the exponential stabilization of the linearized system (17) (see, e.g., [31, Propositions 1 and 2]), with the possibility of using even linear feedback laws therein. The latter implies, by the equivalence between (i) and (iii) in [19, Theorem 4] for continuous-time control systems, that the linearized system (17) is asymptotically controllable. Thus the Hautus test for asymptotic controllability of linear autonomous continuous-time control systems tells us, due to the equivalence between (i) and (ii) in [19, Theorem 4], that

rank [A − λI | B] = n for all complex numbers λ with Re(λ) ≥ 0;    (18)

see, e.g., [29, Exercise 5.5.7]. Plugging λ = 0 into (18) gives us the simple rank condition

rank [A | B] = n,    (19)

which ensures, by the sufficiency part of the linear openness characterization in Theorem 2.6, that f is linearly open around (0, 0). This completes the proof of the theorem.

Remark 3.3. The necessity part of the above proof shows that under local exponential stabilizability by smooth feedback laws the simple rank condition (19) holds, which is surely different from the classical Kalman rank condition for controllability of linear autonomous continuous-time control systems; see, e.g., [21]. As follows from the above proof of Theorem 3.2, the additional assumptions in (C) allow us to justify the validity of the Hautus test for asymptotic controllability (16) of the linearized system (17). It is clear that the opposite implication does not hold, i.e., (19) does not imply (16). Indeed, there are simple examples of linear control systems that are not controllable while satisfying the underlying rank condition (19); see, e.g., ẋ_1 = x_1, ẋ_2 = u.
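The system ẋ_1 = x_1, ẋ_2 = u mentioned above can be checked numerically; the following sketch is our own illustration (not code from the paper) showing that the rank condition (19) holds while the Hautus test (16) fails at the unstable eigenvalue λ = 1:

```python
import numpy as np

# For xdot1 = x1, xdot2 = u: the simple rank condition (19), rank [A | B] = n,
# holds, yet the Hautus test (16) fails at the eigenvalue lambda = 1,
# so the system is not asymptotically controllable.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

def hautus_rank(lam: complex) -> int:
    # rank of the Hautus block matrix [A - lambda*I | B]
    return int(np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B])))

print(np.linalg.matrix_rank(np.hstack([A, B])))  # 2: condition (19) holds
print(hautus_rank(1.0))                          # 1 < 2: Hautus test fails at lambda = 1
```

The unstable mode associated with λ = 1 is thus uncontrollable, exactly the gap between (19) and (16) described in the remark.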
The following two consequences of Theorem 3.2 provide specifications of this result for particular subclasses of continuous-time control systems (1). They are similar to those observed by Brockett [7] in the case of local asymptotic stabilization of (1).

Corollary 3.4. Suppose that in the framework of Theorem 3.2 we have

f(x, u) = g_0(x) + Σ_{i=1}^m u_i g_i(x),    (20)

where span{g_0(·), . . . , g_m(·)} = R^d. If d < n, then the control system (1) cannot be locally exponentially stabilized by means of continuously differentiable stationary feedback laws.
Proof. Assume on the contrary that (1) with f of form (20) and d < n can be locally exponentially stabilized by continuously differentiable stationary feedback laws. Then the necessity part of Theorem 3.2 ensures that f is linearly open, and hence open, around (0, 0), so the image of f covers a neighborhood of the origin in R^n. This is impossible, since (20) shows that the image of f lies in the d-dimensional subspace spanned by g_0(·), . . . , g_m(·) with d < n.

Corollary 3.5. Suppose that in the framework of Theorem 3.2 the mapping f is given in form (20), where the vectors {g_i(0)}_{i=1}^m are linearly independent. Then the control system (1) can be locally exponentially stabilized by means of continuously differentiable stationary feedback laws if and only if m = n.
Proof. This follows from combining Theorem 3.2 and Corollary 3.4.
Remark 3.6. It is crucial to assume in Corollary 3.5 that the vectors {g_i(0)}_{i=1}^m are linearly independent; cf. [7] and [30] for the case of local asymptotic stabilization.
Let us now present several consequences of the sufficiency in Theorem 3.2.
Corollary 3.7. In the framework of Theorem 3.2, suppose that condition (C) therein is replaced by

cov f(0, 0) > sup{ |λ| : λ ∈ Λ(A) },

calculated along the entire spectrum Λ(A) of the matrix A instead of just its part (13) with nonnegative real parts. Then the control system (1) can be locally exponentially stabilized by means of continuous stationary feedback laws.
Proof. This follows directly from Theorem 3.2 due to the parameter choice in assumption (C) of the theorem and its counterpart imposed in this corollary. On the other hand, the conclusion of this corollary, regarding the (generally weaker) local asymptotic stabilization of (1) in the sense of Definition 1.1, can be derived from [12, Theorem 10.14] combined with the linear openness arguments employed above and the fact that the modified rank condition

rank [A − λI | B] = n for all λ ∈ Λ(A)    (21)

ensures the controllability of the linearized system ẋ = Ax + Bu by the Hautus test for controllability from [19, Theorem 1] applied to the linearized continuous-time control system (17).

Remark 3.8. It is easy to see that condition (16) in Theorem 3.2 may be significantly better than the corresponding condition (21) in the proof of Corollary 3.7. Note also that in the case where Λ^+(A) = ∅, assumption (C) of Theorem 3.2 reduces simply to the requirement that the mapping f in (1) is linearly open around the equilibrium point (0, 0); i.e., in this case the necessary and sufficient conditions of Theorem 3.2 merge.

Remark 3.9. It is worth mentioning that the exact covering bound cov f(0, 0) used in Theorem 3.2 and Corollary 3.7 is precisely calculated by formula (8) with z̄ = (0, 0) ∈ R^n × R^m therein. Thus all the conditions of these results can be described entirely in terms of the initial data of the control system (1) without involving any rank calculations.

Corollary 3.10. In the framework of Theorem 3.2, suppose that Λ^+(A) = ∅ and that f is linearly open around (0, 0). Then the control system (1) can be locally exponentially stabilized by means of continuous stationary feedback laws.

Proof. It is easy to see that condition (C) of Theorem 3.2 holds automatically under the assumptions of this corollary, and thus the conclusion follows. It is worth noting that the latter setting covers the situation of the example presented in [30] showing that the sufficiency of the openness property in Brockett's theorem fails. Indeed, in Sontag's example all the assumptions of Corollary 3.10 are satisfied except the linear openness of f around the origin.
Corollary 3.11. Let f in (1) be given by

f(x, u) := Ax + Bu    (22)

with Λ^+(A) ⊂ R, and let g : R^n × R^m → R^n be of class C^1 around the origin with g(0, 0) = 0, ∇_x g(0, 0) = 0, and ∇_u g(0, 0) = 0. Then the validity of condition (C) from Theorem 3.2 ensures that the (slightly) perturbed nonlinear control system

ẋ = Ax + Bu + g(x, u)    (23)

can be locally exponentially stabilized by means of continuous stationary feedback laws.
Proof. This follows directly from Theorem 3.2 applied to the perturbed control system (23).
Having in mind these observations and the corresponding developments presented in [12], we arrive at the following two corollaries, which are consequences of Corollary 3.7 and its proof. For the first one, the reader is referred to [12, Definition 3.2], where the notion of small-time local controllability of (1) is formulated.

Corollary 3.12. Under the assumptions of Corollary 3.7, the control system (1) is small-time locally controllable.

Proof. It is shown in [12, Theorem 3.8] that the controllability of the linearized control system (17) ensures the small-time local controllability of (1). On the other hand, the proof of Corollary 3.7 verifies that the assumptions imposed therein yield the controllability of the linearized control system (17), and thus the conclusion follows.

The next result shows that the linear openness and spectrum assumptions of Corollary 3.7, in the case of linear autonomous continuous-time control systems, actually ensure global controllability in time T (as defined, e.g., in [12]) not only of the system itself but also of its bounded perturbations.

Corollary 3.14. Let f in (1) be given by (22), and let the assumptions of Corollary 3.7 hold. Given in addition a continuous function g ∈ C^1(R^n, R^n), suppose that it is bounded on R^n. Then for each T > 0 the boundedly perturbed control system

ẋ = Ax + Bu + g(x)    (24)

is globally controllable in the fixed time T.
Proof. Similarly to the proof of Theorem 3.2, we conclude that the assumptions imposed in this corollary ensure the validity of the Hautus test (21) for controllability of the linear control system under consideration. To arrive at the claimed conclusion, it remains to apply the result of [12, Corollary 3.41].
Example 3.15. Consider the two-dimensional nonlinear control system from [12, Exercise 3.42], for which the corresponding perturbation term g in (24) is unbounded. It is easy to check that this system is not globally controllable, while all but the boundedness assumption on g in Corollary 3.14 are satisfied.
Finally in this section, we revisit Brockett's classical setting [7] concerning local asymptotic stabilization of (1) by means of smooth stationary feedback laws and derive a strengthening of his main (third) necessary condition (with the replacement of openness by linear openness) from the first one in [7, Theorem 1] under an additional assumption on the collection of eigenvalues with nonnegative real parts.
Theorem 3.16. Let f, A, and B be as in Theorem 3.2, and assume that

Re(λ) > 0 for all λ ∈ Λ^+(A)    (25)

for the set Λ^+(A) defined in (13). Then the local asymptotic stabilization of system (1) by means of continuously differentiable stationary feedback laws implies that f is linearly open around the equilibrium (0, 0).
Proof. Brockett's first necessary condition in [7, Theorem 1(i)] tells us that the local asymptotic stabilization of the control system (1) by means of smooth feedback laws implies that the linearized system (17) has no uncontrollable modes associated with eigenvalues whose real part is positive. On the other hand, condition (25) means that there are no purely imaginary eigenvalues in the collection Λ^+(A) of eigenvalues of A with nonnegative real parts. Taking into account the definition of Λ^+(A) in (13) and combining it with the imposed assumption (25) ensures that the Hautus test (16) for asymptotic controllability of (17) holds. This readily yields, as in the proof of the necessity in Theorem 3.2, that the simple rank condition (19) is satisfied. The latter amounts to the surjectivity of the derivative operator ∇f(0, 0) : R^n × R^m → R^n. Employing the sufficiency part of Theorem 2.6 verifies the linear openness of f around (0, 0) and thus completes the proof of the theorem.
Remark 3.17. Note that the aforementioned first necessary condition in [7, Theorem 1] is different from the Hautus test for asymptotic controllability, as simple examples show. Assumption (25) imposed in Theorem 3.16 merges these two conditions.

Similarly to Corollary 3.4 of Theorem 3.2 in the case of local exponential stabilization, we now arrive at the corresponding consequence of Theorem 3.16 for the local asymptotic stabilization of (1), where the form of f in (1) is given by (20).

Corollary 3.18.
Let the form of f in (1) be given by (20), and let A and B be as in Theorem 3.2. Assume in addition that the collection Λ^+(A) of eigenvalues of A with nonnegative real parts satisfies condition (25). If d < n, then the control system (1) cannot be locally asymptotically stabilized by means of continuously differentiable stationary feedback laws.
Proof. Suppose on the contrary that the system under consideration can be locally asymptotically stabilized by means of continuously differentiable feedback laws. Then Theorem 3.16 ensures that f is linearly open around (0, 0), and hence it enjoys the openness property (3) at this point. This contradicts the necessary condition for local asymptotic stabilization by smooth stationary feedback laws given by Brockett's theorem [7, Theorem 1] and the first remark to it in [7], thus justifying the conclusion of this corollary.

Asymptotic Stabilization of Discrete-Time Control Systems
In this section we implement the above techniques of variational analysis, together with known facts of control theory, in the case of nonlinear discrete-time control systems

x_{k+1} = f(x_k, u_k), k = 0, 1, 2, . . . ,    (26)

where the index k signifies discrete time. Our achievements here are more modest in comparison with the case of continuous-time control systems in Section 3. Namely, we establish only sufficient conditions (involving linear openness) for local asymptotic stabilization of (26) by means of continuous stationary feedback laws. The possibility of deriving sufficient conditions for local exponential stabilization of (26), as well as establishing advanced necessary conditions for both asymptotic and exponential versions of local stabilization by continuous and smooth stationary feedback laws, constitutes important open questions.
To proceed, recall that the notion of local asymptotic stabilization of (26) by means of continuous stationary feedback laws is formulated in a similar way to Definition 1.1, with the replacement of the differential closed-loop system (2) by its discrete counterpart

x_{k+1} = f(x_k, u(x_k)), k = 0, 1, 2, . . . .

An interesting necessary condition in this direction, using smooth stationary feedback laws, was obtained in [17] in terms of the openness property (3) for the mapping (x, u) → x − f(x, u).
The next theorem shows that the linear openness property of the mapping f, combined with an appropriate relationship between its exact bound and the eigenvalues of the matrix A lying on or outside the unit circle, ensures the local asymptotic stabilization of (26) by continuous stationary feedback laws. To formulate the result, define the partial derivative matrices A and B as in (12) and consider the collection of eigenvalues of the (complex) spectrum Λ(A) of A lying on or outside the unit circle in the case of (26), given by

Λ_1(A) := { λ ∈ Λ(A) | |λ| ≥ 1 }.    (27)

Theorem 4.1. In addition to the standing assumptions on (26), suppose that the following condition holds:

(D) the mapping f is linearly open around the equilibrium (0, 0) and its exact covering bound satisfies

cov f(0, 0) > η_d := sup{ |λ| : λ ∈ Λ_1(A) },    (28)

where the latter inequality is automatic if Λ_1(A) = ∅. Then the discrete-time nonlinear control system (26) can be locally asymptotically stabilized by means of continuous stationary feedback laws.
Proof. We basically follow the lines in the proof of Theorem 3.2 for the continuous-time control system (1) by using the same approach of variational analysis together with the corresponding results of controllability and stabilization theories known for discrete-time control systems. For any ν > 0 and the discrete index k = 0, 1, . . ., define the parametric family of vector functions g_ν analogously to the proof of Theorem 3.2. Taking into account that lip g_ν(0, 0) = ν and using the version of [25, Theorem 4.25] for the stability of linear openness under Lipschitzian perturbations, we get the same lower estimate (15) for the exact covering bound of f around (0, 0) as in the proof of Theorem 3.2. Then remembering the choice of constants in condition (D) imposed in this theorem, calculating η_d as in (28), and employing the full rank condition for the Jacobian matrix ∇f_ν(0, 0) from the necessity part of Theorem 2.6 give us the corresponding inequality with Λ_1(A) defined in (27). This yields

rank [A − λI, B] = n for all λ ∈ C with |λ| ≥ 1,

which is the Hautus test for asymptotic controllability of the linearized discrete-time control system x_{k+1} = Ax_k + Bu_k, k = 0, 1, . . ., with A and B taken from (12). Employing finally Hautus' stabilization theorem for discrete-time control systems (see [19, Theorem 4]) ensures the existence of a locally asymptotically stabilizing feedback law for (26) and thus completes the proof of the theorem.
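The discrete-time Hautus test invoked at the end of the proof can also be verified numerically. Below is a minimal sketch (the pairs (A, B) are hypothetical illustrations, not data from the paper): a pair (A, B) passes the test iff rank [A − λI, B] = n for every eigenvalue λ of A with |λ| ≥ 1; for λ outside the spectrum of A, the matrix A − λI is invertible, so only eigenvalues need checking.

```python
import numpy as np

def hautus_stabilizable_dt(A, B, tol=1e-9):
    """Discrete-time Hautus test: (A, B) is stabilizable iff
    rank([A - lambda*I, B]) = n for every eigenvalue lambda of A
    with |lambda| >= 1 (only eigenvalues of A need to be checked)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam) >= 1.0 - tol:
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M, tol=1e-8) < n:
                return False
    return True

# Hypothetical pair: the unstable mode (eigenvalue 2) enters through B
A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([[1.0], [0.0]])
print(hautus_stabilizable_dt(A, B))      # True: unstable mode controllable

B_bad = np.array([[0.0], [1.0]])
print(hautus_stabilizable_dt(A, B_bad))  # False: eigenvalue 2 uncontrollable
```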
Various consequences of Theorem 4.1 can be derived similarly to those of Theorem 3.2 in Section 3 for continuous-time control systems. Let us present just a few of them.
Corollary 4.2. Assume that in the framework of Theorem 4.1 the function f is given by (22) with Λ_1(A) ⊂ R, and let g : R^n × R^m → R^n be of class C^1 around the origin with g(0, 0) = 0, ∇_x g(0, 0) = 0, and ∇_u g(0, 0) = 0. Then the validity of condition (D) from Theorem 4.1 ensures that the (slightly) perturbed nonlinear discrete-time control system can be locally asymptotically stabilized by means of continuous stationary feedback laws.
Proof. This easily follows from Theorem 4.1.
Corollary 4.3. Then the nonlinear discrete-time control system (26) can be locally asymptotically stabilized by means of continuous stationary feedback laws.
Proof. It is a clear consequence of Theorem 4.1; cf. also the arguments in the proof of Corollary 3.7 for continuous-time control systems (1) with the reference to [12].
The last corollary of Theorem 4.1 presented here addresses discrete-time control systems (26) for which the whole spectrum Λ(A) of the matrix A in (12) consists only of zeros. This setting encompasses, in particular, an important class of discrete-time control systems (26) with a nilpotent matrix A.

Proof. This clearly follows from Corollary 4.3.

Examples
In this section we present two examples illustrating applications of the main result in Theorem 3.2. The first example deals with a two-dimensional control system, while the second treats a more involved three-dimensional one.
Example 5.1. Consider the following continuous-time control system: It is easy to verify that the pair (0, 0) is an equilibrium for (29)-(30). Calculating the matrices A and B from (12) for this system at (0, 0) gives us

A = [ 0  1 ]        B = [ 0 ]
    [ 0  0 ],           [ 1 ].
By checking the rank condition (19) or directly by Definition 2.2 we confirm that f in (29)-(30) is linearly open around (0, 0). Furthermore, we can see that all the assumptions of Corollary 3.10 are satisfied, and hence system (29)-(30) can be locally asymptotically stabilized by means of continuous stationary feedback laws. In fact, it is not hard to check that the feedback control function u(x) := −x_1 − x_2 gives us one possible continuous stationary feedback law that locally asymptotically stabilizes system (29)-(30).
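This choice of feedback can be confirmed numerically: for the linearization computed above, the closed-loop matrix A + BK with gain K = (−1, −1) must have all eigenvalues in the open left half-plane. A small sketch:

```python
import numpy as np

# Linearization of (29)-(30) at the origin, as computed in the text
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Linear part of the feedback u(x) = -x1 - x2
K = np.array([[-1.0, -1.0]])

# Closed-loop matrix A + B K; continuous-time asymptotic stability
# requires every eigenvalue to have negative real part
A_cl = A + B @ K
eigs = np.linalg.eigvals(A_cl)
print(np.all(eigs.real < 0))  # True: both eigenvalues have real part -1/2
```

Here the closed-loop characteristic polynomial is λ^2 + λ + 1, whose roots (−1 ± i√3)/2 indeed lie in the open left half-plane.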
Example 5.2. It is easy to verify that (0, 0) is an equilibrium pair for (31)-(33). Thus we calculate the matrices A and B by (12). The linear openness of f in (31)-(33) can be deduced from the rank condition (19), since rank [A | B] = 3.
Furthermore, we calculate that cov f(0, 0) = 0.6144 and η_c = 0.3162, and so assumption (C) of Theorem 3.2 is satisfied. Hence Theorem 3.2 tells us that system (31)-(33) can be locally asymptotically stabilized by means of continuous stationary feedback laws. In fact, the feedback control function u(x) := −x_1 − x_2 − x_3 is one of the possible feedback laws that locally asymptotically stabilizes the control system (31)-(33).

Concluding Remarks
The main thrust of this paper is in suggesting a new approach of variational analysis to the study of feedback stabilization of nonlinear control systems. The emphasis of this approach is on applying the widely understood and comprehensively characterized well-posedness properties of single-valued and set-valued mappings to the issues under consideration instead of conventional topological methods related to degree theory, etc. Implementing these variational ideas in the context of continuous-time nonlinear control systems (1) with smooth dynamics, we replace the topological openness property in the seminal Brockett's theorem by the variational linear openness. In this way, we are able to establish the sufficiency of the linear openness, together with a certain condition on its moduli, for the exponential stabilization of (1) by means of continuous stationary feedback laws, while Brockett's theorem and its extensions assert the necessity of the openness property for asymptotic stabilization. We also derive new conditions ensuring the necessity of linear openness for both local exponential and asymptotic stabilization of (1) by means of stationary continuous as well as smooth feedback laws. Some (but by far not all) counterparts of the obtained results are established by this approach for asymptotic feedback stabilization of nonlinear discrete-time control systems (26).
We believe that the suggested variational approach has strong potential for the study of feedback stabilization and related issues for a much broader class of systems and topics in comparison with those considered in this paper. A brief discussion of the available machinery of variational analysis and generalized differentiation given in Section 2 sheds some light on appropriate tools and results. Among such topics we mention controllability and the minimum time function in the framework of [10], discontinuous feedback laws in the vein of [4,9,24], extensions to the case of control systems governed by differential inclusions in both finite and infinite dimensions [1,2], feedback stabilization of partial differential control systems [3,12,24], etc.