Second order necessary and sufficient optimality conditions for singular solutions of partially-affine control problems

In this article we study optimal control problems for systems that are affine with respect to some of the control variables and nonlinear in relation to the others. We consider finitely many equality and inequality constraints on the initial and final values of the state. We investigate singular optimal solutions for this class of problems, for which we obtain second order necessary and sufficient conditions for weak optimality in integral form. We also derive Goh pointwise necessary optimality conditions. We show an example to illustrate the results.


Introduction
The purpose of this paper is to investigate optimal control problems governed by systems of ordinary differential equations that are affine with respect to some of the control variables. Many models that fit into this framework can be found in practice and, in particular, in the existing literature. Among these we can mention: Goddard's problem in three dimensions [24] analyzed in Bonnans et al. [11]; several models concerning the motion of rockets, such as those treated in Lawden [33], Bell and Jacobson [8], Goh [26,29], Oberle [40], Azimov [7] and Hull [31]; a hydrothermal electricity production problem studied in Bortolossi et al. [13]; the problem of atmospheric flight considered by Oberle in [41]; and the optimal production processes studied in Cho et al. [16] and Maurer et al. [36]. All the systems investigated in these cited articles are partially-affine in the sense that they have at least one affine and at least one nonlinear control.
The subject of second order optimality conditions for these partially-affine problems has been studied by Goh in [26,27,28,29], Dmitruk in [21], Dmitruk and Shishov in [22], Bernstein and Zeidan [9], Frankowska and Tonon [23], and Maurer and Osmolovskii [37]. The first works were by Goh, who introduced a change of variables in [27] and used it to obtain necessary optimality conditions in [27,26,25], always assuming normality of the optimal solution. The necessary conditions we present imply those by Goh [25] when there is only one multiplier (see Corollary 5.2). Recently, Dmitruk and Shishov [22] analyzed the quadratic functional associated with the second variation of the Lagrangian function, and provided a set of necessary conditions for the nonnegativity of this quadratic functional. Their results are a consequence of a second order necessary condition that we present (see Theorem 5.3). In [21], Dmitruk proposed, without proof, necessary and sufficient conditions for a problem having a particular structure: the affine control variable applies to a term depending only on the state variable, i.e. the affine and nonlinear controls are uncoupled or, equivalently, H_uv is identically zero, where H denotes the unmaximized Hamiltonian. This hypothesis is not used in our work. Nevertheless, the conditions established here coincide with those suggested in Dmitruk [21], when the latter are applicable. In [9], Bernstein and Zeidan derived the Riccati equation for the singular linear-quadratic regulator, which is a modification of the classical linear-quadratic regulator where only some components of the control enter quadratically in the cost function. Frankowska and Tonon proved in [23] second order necessary conditions for problems with closed control constraints and optimal controls containing arcs along which the second order derivative H_uu of the unmaximized Hamiltonian vanishes.
The necessary conditions given in [23] hold for problems either with no endpoint constraints, or with smooth endpoint constraints and additional hypotheses such as calmness and the abnormality of Pontryagin's Maximum Principle. All the articles mentioned in this paragraph use Goh's transformation to derive their optimality conditions, as is done in the current paper, while none of them proved second order sufficient conditions, which are the main contribution of this article. It is worth mentioning that sufficient conditions were shown by Maurer and Osmolovskii in [37], but for the case of a scalar control subject to bounds and bang-bang optimal solutions (i.e. no singular arc). This structure is not studied here since no closed control constraints are considered, and thus our optimal control is assumed to be singular along the whole interval.
The contributions of this article are as follows. We provide a pair of necessary and sufficient conditions in integral form for weak optimality of singular solutions of partially-affine problems (Theorems 5.3-6.2). These conditions are 'no gap' in the sense that the sufficient condition is obtained from the necessary one by strengthening an inequality. We consider fairly general endpoint constraints and we do not assume uniqueness of the multiplier. The main result is the sufficient condition of Theorem 6.2, which, to the best of our knowledge, cannot be found in the existing literature, and has important practical applications. As a by-product of the necessary condition of Theorem 5.3 we obtain the pointwise Goh conditions in Corollary 5.2, thus extending previous results (see [25,23]) to problems with general endpoint constraints, and removing the hypothesis of vanishing H_uu imposed in [23]. In order to obtain the sufficient condition we impose a regularity assumption on the optimal controls, which in some practical situations is a consequence of the generalized Legendre-Clebsch condition (see Remark 6.4). We provide a simple example to illustrate our results.
As a main application of the sufficient condition provided in this article we can mention the proof of convergence of an associated shooting algorithm, as stated in Aronna [4] and shown in detail in the technical report Aronna [5]. It is worth mentioning that, for practical interest, this shooting algorithm and its proof of convergence can also be used to solve partially-affine problems with bounds on the control and associated bang-singular solutions.
The article is organized as follows. In Section 2 we present the problem, the basic definitions and first order optimality conditions. In Section 3 we give the tools for second order analysis and establish a second order necessary condition. We introduce Goh's transformation in Section 4. In Section 5 we show a new second order necessary condition. In Section 6 we present the main result of this article that is a second order sufficient condition. We show an example to illustrate our results in Section 7, while Section 8 is devoted to the conclusions and possible extensions. Finally, we include an Appendix containing some proofs of technical results that are omitted throughout the article.
Notations. Given a function h of variable (t, x), we write D_t h or ḣ for its derivative in time, and D_x h or h_x for the differentiation with respect to the space variables. The same convention is extended to higher order derivatives. We let R^k denote the k-dimensional real space, i.e. the space of real column vectors of dimension k, and R^{k,*} its corresponding dual space, which consists of k-dimensional real row vectors. By L^p(0, T; R^k) we mean the Lebesgue space of functions defined on the interval [0, T] ⊂ R with values in R^k. The notation W^{q,s}(0, T; R^k) refers to the Sobolev spaces (see e.g. Adams [1]). Given two k × k symmetric real matrices A and B, we write A ⪰ B to indicate that A − B is positive semidefinite. Given two functions k_1 : R^N → R^M and k_2 : R^N → R^L, we say that k_1 is a big-O of k_2 around 0, and write k_1(x) = O(k_2(x)), if there exist positive constants δ and M such that |k_1(x)| ≤ M|k_2(x)| for |x| < δ. It is a small-o if M goes to 0 as |x| goes to 0, and in this case we write k_1(x) = o(k_2(x)).

Statement of the problem and assumptions
2.1. Statement of the problem. We study the optimal control problem (P) given by where the function F : R^{n+l+m} → R^n can be written as The sets U and V are open domains of R^l and R^m, respectively. The control u(·) is called the nonlinear control, while v(·) is named the affine control. We consider the function spaces U := L^∞(0, T; R^l) and V := L^∞(0, T; R^m) for the controls, and X := W^{1,∞}(0, T; R^n) for the state. When needed, we use w(·) := (x, u, v)(·) to refer to a point in W := X × U × V. We call a trajectory an element w(·) ∈ W that satisfies the state equation (2). If, in addition, the endpoint constraints (3) and (4) and the control constraint (5) hold for w(·), then we say that it is a feasible trajectory of problem (P).
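Although the display defining F is elided above, the later references to the functions f_i suggest the standard partially-affine structure, which we sketch here in our own notation (the indexing and the name f_0 are our assumptions, not a verbatim reconstruction):

```latex
\dot x(t) \;=\; F\bigl(x(t),u(t),v(t)\bigr)
\;=\; f_0\bigl(x(t),u(t)\bigr) \;+\; \sum_{i=1}^{m} v_i(t)\, f_i\bigl(x(t),u(t)\bigr),
\qquad \text{for a.a. } t \in [0,T].
```

The dynamics are thus nonlinear in (x, u) but enter linearly in each component v_i of the affine control.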
We consider the following regularity hypothesis throughout the article. Assumption 2.1. All data functions have Lipschitz-continuous second order derivatives.
In this paper we study optimality conditions for weak minima of problem (P). A feasible trajectory ŵ(·) = (x̂, û, v̂)(·) is said to be a weak minimum if there exists ε > 0 such that the cost function attains at ŵ(·) its minimum in the set of feasible trajectories w(·) = (x, u, v)(·) satisfying For the remainder of the article, we fix a nominal feasible trajectory ŵ(·) := (x̂, û, v̂)(·) for which we provide optimality conditions. We assume that the controls û(·) and v̂(·) do not accumulate at the boundaries of U and V, respectively. That is, letting B denote the closed unit ball of R^{l+m}, we impose: An element δw(·) ∈ W is termed a feasible variation for ŵ(·) if ŵ(·) + δw(·) is a feasible trajectory for (P). For λ = (α, β, p(·)) in the space R^{d_ϕ+1,*} × R^{d_η,*} × W^{1,∞}(0, T; R^{n,*}), we define the following functions: • the endpoint Lagrangian function ℓ[λ] : R^{2n} → R, • and the Lagrangian function L[λ] : W → R. We assume, for the sake of notational simplicity, that whenever some argument of F, f_i, H, ℓ, L or their derivatives is omitted, they are evaluated at ŵ(·).
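The neighborhood condition elided in the definition of weak minimum above is, in the standard form matching the L^∞ spaces X, U, V chosen for the state and the controls (a reconstruction of the usual definition, not the verbatim display):

```latex
\|x - \hat x\|_{\infty} \le \varepsilon, \qquad
\|u - \hat u\|_{\infty} \le \varepsilon, \qquad
\|v - \hat v\|_{\infty} \le \varepsilon .
```

In other words, optimality is required only against feasible trajectories uniformly close to ŵ(·) in both state and controls.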
If we further want to make explicit that they are evaluated at time t, we write, etc. The same conventions hold for other functions of the state, control and multiplier that we define throughout the article. We assume, without any loss of generality, that ϕ_i(x̂(0), x̂(T)) = 0, for all i = 1, . . . , d_ϕ.

Lagrange multipliers.
We introduce here the concept of multiplier.
The second order conditions that we prove in this article are expressed in terms of the second variation of the Lagrangian function L given in (6) and the set of Lagrange multipliers associated withŵ(·) that we define below.
The following result constitutes a first order necessary condition and yields the existence of Lagrange multipliers. Theorem 2.4. If ŵ(·) is a weak minimum for (P), then the set Λ is nonempty and compact.
Proof. The existence of a Lagrange multiplier follows from Milyutin-Osmolovskii [39, Thm. 2.1] or equivalent results proved in Alekseev et al. [3] and Kurcyusz-Zowe [32]. In order to prove the compactness, observe that Λ is closed and that p(·) may be expressed as a linear continuous mapping of (α, β). Thus, since the normalization (7) holds, Λ is necessarily a finite-dimensional compact set.
In view of Theorem 2.4, note that Λ can be identified with a compact subset of R^s, where s := d_ϕ + d_η + 1. The main results of this article are stated on a restricted subset of Λ for which the matrix D^2_{(u,v)^2} H[λ](ŵ, t) is singular and, consequently, the pairs (ŵ, λ) turn out to be singular extremals. We comment again on this fact in Remark 3.6 below.

Critical cones.
We define here the sets of critical directions associated with ŵ(·), both in the L^∞- and the L^2-norms. Although we work with control variables in L^∞, so that control perturbations are naturally taken in L^∞, the second order analysis involves quadratic mappings that require the cones to be extended continuously to L^2.
The critical cones in W 2 and W are given, respectively, by The following density result holds.
Lemma 2.5. The critical cone C is dense in C_2 with respect to the W_2-topology.
The proof of the previous lemma follows from the following technical result, due to Dmitruk [20, Lemma 1]. Lemma 2.6 (on density of cones). Consider a locally convex topological space X, a finite-faced cone Z ⊂ X, and a linear space Y dense in X. Then the cone Z ∩ Y is dense in Z.

Second order analysis
We begin this section by giving an expression of the second order derivative of the Lagrangian function L, in terms of derivatives of ℓ and H. We let Ω denote this second variation. All the second order conditions we present are established in terms of either Ω or some transformed form of Ω. The main result of the current section is the necessary condition in Theorem 3.9, which is applied in Section 5 to get the stronger condition given in Theorem 5.3.

Second variation. Let us consider the quadratic mapping
The result that follows gives an expansion of the Lagrangian L at the nominal trajectory ŵ(·). For the sake of simplicity, the time variable is omitted in the statement. Lemma 3.1 (Lagrangian expansion). Let w(·) = (x, u, v)(·) ∈ W be a trajectory and set δw(·) = (δx, δu, δv)(·) := w(·) − ŵ(·). Then, for every multiplier λ ∈ Λ, the following expansion of the Lagrangian holds where ω is a cubic mapping given by and R satisfies the estimate Proof. See Appendix A.1.
Remark 3.2. From the previous lemma one obtains the identity

3.2.
Second order necessary condition. The following result is a classical second order condition for weak minima.
Theorem 3.3 (Second order necessary condition). If ŵ(·) is a weak minimum of problem (P), then A proof of Theorem 3.3 can be found in Levitin, Milyutin and Osmolovskii [34]. Nevertheless, for the sake of completeness, we give a proof in Appendix A.2 that uses techniques of optimization in abstract spaces.
An extension of condition (20) to the cone C_2 can easily be proved and yields the following stronger second order condition.
Theorem 3.4. If ŵ(·) is a weak minimum of problem (P), then Proof. Observe first that Ω[λ] can be extended to the space W_2 since all its coefficients are essentially bounded. The result follows from the density property of Lemma 2.5 and the compactness of the set of Lagrange multipliers Λ proved in Theorem 2.4.

Strengthened second order necessary condition.
In the sequel we aim at strengthening the necessary condition of Theorem 3.4 by proving that the maximum in (21) remains nonnegative when taken over a possibly smaller set of multipliers. Let co Λ denote the convex hull of Λ. Observe that if λ = (α, β, p(·)) is in co Λ then it verifies (8)-(11) and, if ŵ(·) is a weak minimum, the second order condition (21) is also fulfilled for λ. However, λ may not verify the nontriviality condition (7), thus co Λ may contain the trivial (i.e. identically zero) multiplier. Set and consider the subset of co Λ given by Next we prove that (co Λ)^# can be characterized in a quite simple way (see Lemma 3.5 below). Theorem 3.9, stated afterwards, yields a new necessary optimality condition. Remark 3.6 (About singular solutions). From now on we restrict ourselves to the set (co Λ)^# or some subset of it and, therefore, H_uv[λ] ≡ 0 along the nominal trajectory ŵ(·). Consequently, The latter assertion, together with the stationarity condition (11), justifies the terminology that follows. In [44] they refer to singular extremals (as defined above) as totally singular, while they use the term partially singular for controls for which H_ν = 0 only on some subintervals of [0, T], which is not the class of controls studied here. The same definition is adopted in Poggiolini and Stefani [43]. On the other hand, O'Malley in [42] calls partially singular the linear-quadratic problems in which the matrix H_νν is singular but not of constant non-zero rank, a framework included in our class of problems.
In order to prove Lemma 3.5 we shall note that Ω[λ] can be written as the sum of two maps: the first one a weakly-continuous function on the space H_2 given by (24), and the second one the quadratic operator given in (25). The weak continuity of the mapping in (24) is related to Lemma 3.7 below. Remark 3.8. The fact that the matrix in (26) is positive semidefinite is known as the Legendre-Clebsch necessary optimality condition for the extremal (ŵ, λ) (see e.g. Bliss [10] in the framework of the Calculus of Variations, and Bryson-Ho [15], Agrachev-Sachkov [2] or Corollary 3.12 below for Optimal Control).
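Although display (26) is elided here, for partially-affine problems the Legendre-Clebsch matrix with respect to the full control (u, v) takes the block form below; this is a standard observation (our reconstruction, consistent with Remark 3.6), since H is affine in v and hence H_vv = 0:

```latex
D^2_{(u,v)^2} H[\lambda](\hat w(t),t)
\;=\;
\begin{pmatrix}
H_{uu}[\lambda] & H_{uv}[\lambda] \\
H_{vu}[\lambda] & 0
\end{pmatrix}
\;\succeq\; 0
\quad\Longleftrightarrow\quad
H_{uu}[\lambda] \succeq 0 \ \text{ and } \ H_{uv}[\lambda] = 0 .
```

The equivalence holds because a positive semidefinite block matrix with a zero diagonal block must have zero off-diagonal blocks, which explains why H_uv[λ] ≡ 0 along the nominal trajectory for the restricted multipliers.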
We can now prove Lemma 3.5. Proof of Lemma 3.5. It follows from the decomposition given in (24)-(25) and the characterization of weak lower semicontinuity stated in Lemma 3.7.
Theorem 3.9 (Strengthened second order necessary condition). If ŵ(·) is a weak minimum of problem (P), then (27) holds. Remark 3.10 (On unqualified solutions). Notice that it may occur that 0 ∈ (co Λ)^# and, in this case, the second order condition in Theorem 3.9 above does not provide any information. This situation may arise when the endpoint constraints are not qualified, in the sense of the constraint qualification condition (73) introduced in the Appendix, which is a natural generalization of the Mangasarian-Fromovitz condition [35] to the infinite-dimensional framework.
In order to prove Theorem 3.9, let us recall the following result on quadratic forms (taken from Dmitruk [18, Theorem 5]).
Lemma 3.11. Given a Hilbert space H and a_1, a_2, . . . , a_p in H, set Let M be a convex and compact subset of R^s, and let {Q_ψ : ψ ∈ M} be a family of continuous quadratic forms over H, the mapping ψ → Q_ψ being affine. Set M^# := {ψ ∈ M : Q_ψ is weakly-l.s.c. on H} and assume that We are now able to prove Theorem 3.9. Proof of Theorem 3.9. It is a consequence of Theorem 3.4 and Lemmas 3.5 and 3.11.
We finish this section with the following extension of the classical second order pointwise Legendre-Clebsch condition, which follows as a corollary of Theorem 3.9.

Goh Transformation
In this section we introduce the Goh transformation, a linear change of variables usually applied to a linear differential equation, and motivated by the facts explained in the sequel. In the previous section we provided a necessary condition involving the nonnegativity on C_2 of the maximum of Ω[λ] over the set (co Λ)^# (Theorem 3.9). Our next step is to find a sufficient condition. To achieve this, one would naturally try to strengthen the inequality (27) into a condition of strong positivity. However, since no quadratic term in v̄(·) appears in Ω, the latter cannot be strongly positive with respect to the norm of the controls. Thus, in order to find the desired sufficient condition, we transform Ω into a new quadratic mapping that may be strongly positive on an appropriately transformed critical cone. For historical interest, we recall that Goh introduced this change of variables in [27] and employed it to derive necessary conditions in [27,25]. Since then, many optimality conditions have been obtained by using that transformation, as already mentioned in the Introduction.
For the remainder of the article, we consider the following regularity hypothesis on the controls. This hypothesis is not restrictive since it is a consequence of the strengthened generalized Legendre-Clebsch condition, as explained in Aronna [5,4], where it is shown that, whenever this generalized condition holds, one can write the controls as smooth functions of the state and costate variables. See also Remark 6.4 below.
Consider hence the linearized state equation (12) and the Goh transformation defined by Observe that ξ(·) defined in this way satisfies the linear equation Here B is an n × m matrix whose ith column is given by Recall the definition of the linear space W_2 given in paragraph 2.3. Let Y denote the Sobolev space W^{1,∞}(0, T; R^m), and consider the cones Remark 4.2. Observe that P is the cone obtained from C via Goh's transformation (32).
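Since the defining displays are elided above, Goh's transformation can be sketched in its standard form (our notation; a reconstruction following the usual convention, not the verbatim formulas): given a linearized direction (x̄, ū, v̄), one sets

```latex
\bar y(t) \;:=\; \int_0^t \bar v(s)\,\mathrm{d}s,
\qquad
\xi(t) \;:=\; \bar x(t) \;-\; F_v\bigl(\hat x(t),\hat u(t)\bigr)\,\bar y(t).
```

The new variable ξ then satisfies a linear equation driven by (ū, ȳ) instead of (ū, v̄): the affine-control direction v̄ is replaced by its primitive ȳ, which is what later allows a genuine quadratic term in ȳ, and hence a chance of coercivity, to appear in the transformed second variation.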
The next result shows the density of P in P_2. This fact is used afterwards, when we extend a necessary condition stated on P to the bigger cone P_2 by continuity arguments, as was done for C and C_2 in Section 3.
Proof. Notice that the inclusion P ⊂ P_2 is immediate. In order to prove the density, consider the linear spaces
Remark 4.4. We can see that M is an m × n matrix whose ith row is given by the formula respectively. The components of the matrix R have a rather long expression, which is simplified for some multipliers, as detailed in equation (50) in the next section.
The identity between Ω and Ω P stated in the following lemma holds.
Finally, let us recall the strengthened necessary condition of Theorem 3.9. Observe that, applying Goh's transformation to (27), and in view of Remark 4.2, we obtain the following form of the second order necessary condition.

New second order necessary condition
We aim at removing the dependence on v̄ in the formulation of the second order necessary condition of Corollary 4.6 above. Note that in the inequality (45), v̄ = ẏ appears only in the term v̄^⊤ G[λ] ȳ. We prove in the sequel that we can restrict the maximum in (45) to the subset of (co Λ)^# consisting of the multipliers for which G[λ] vanishes.
Hence, the following optimality condition holds.
Theorem 5.1 (New necessary condition). If ŵ(·) is a weak minimum of problem (P), then Theorem 5.1 is an extension of similar results given in Dmitruk [17], Milyutin [38] and, recently, in Aronna et al. [6]. The proof given in Aronna et al. [6, Theorem 4.6] holds for Theorem 5.1 with minor modifications, and hence we do not include it in the present article.
Notice that, when ŵ(·) has a unique associated multiplier, one can deduce from Theorem 5.1 that G(co Λ)^# is nonempty and, since the latter is a singleton, the corollary below follows. This result extends the necessary conditions stated by Goh in [25] to the present framework.

Corollary 5.2 (Goh conditions).
Assume that ŵ(·) is a weak minimum having a unique associated multiplier. Then the following conditions hold.
(i) G ≡ 0 or, equivalently, the matrix H_vx F_v is symmetric, which, in view of (44), can be written as where p(·) is the unique associated adjoint state.
We now aim at stating a necessary condition that does not depend on v̄(·). Let us note that, for λ ∈ G(co Λ)^#, the quadratic form Ω[λ] does not depend on v̄(·) since the corresponding coefficients vanish. We can then consider its continuous extension to P_2 for multipliers λ ∈ G(co Λ)^#, given by where the involved matrices and the function g were defined in (40)-(43). Observe that, since G[λ] ≡ 0, the matrix H_vx[λ]F_v is symmetric and, therefore, the ij entry of R[λ] can be written as for each i, j = 1, . . . , m. From Theorem 5.1, it follows:

Theorem 5.3 (Second order necessary condition in new variables). If ŵ(·) is a weak minimum of problem (P), then
Second order sufficient condition

Theorem 6.2 (Second order sufficient condition). (i) Assume that there exists ρ > 0 such that the uniform positivity condition (55) holds. Then ŵ(·) is a weak minimum satisfying γ-growth in the weak sense. (ii) Conversely, if ŵ(·) is a weak solution satisfying γ-growth in the weak sense and such that α_0 > 0 for every λ ∈ G(co Λ)^#, then (55) holds for some positive ρ.
In the absence of the nonlinear control u, Theorem 6.2 was proved in Dmitruk [17]. In Aronna et al. [6] the same result was shown for the case of scalar control subject to bounds.
As a consequence of Theorem 6.2 and standard results on positive quadratic mappings due to Hestenes [30], we get the following pointwise condition. Corollary 6.3. If ŵ(·) satisfies the uniform positivity in (55) and has a unique associated multiplier, then the matrix in (48) is uniformly positive definite, i.e.
where I refers to the identity matrix. The remainder of this section is devoted to the proof of Theorem 6.2. Several technical lemmas used in the following proof are stated and proved in Appendix B.
With the aim of proving that (ξ(·), ū(·), v̄(·), h) belongs to P_2, it remains to check that the linearized endpoint constraints (35)-(36) are verified. Observe that, for each index 0 ≤ i ≤ d_ϕ, one has (60) In order to prove that the right-hand side of (60) is nonpositive, we consider the following first order Taylor expansion of ϕ_i around (x̂(0), x̂(T)):
Let us now pass to Part (B). Notice that from the expansion of L given in (103) of Lemma B.5 and the inequality (58), we get and thus Let us consider the subset of G(co Λ)^# defined by (65) Λ^{#,ρ} := {λ ∈ G(co Λ)^# : Ω_{P_2}[λ] − ρ γ_P is weakly l.s.c. on H_2 × R^m}.
(ii) Let us now prove the second statement of the theorem. Assume that ŵ(·) is a weak solution satisfying γ-growth in the weak sense for some constant ρ′ > 0, and such that α_0 > 0 for every multiplier λ ∈ G(co Λ)^#. Let us consider the modified problem and rewrite it in the Mayer form We next apply the second order necessary condition of Theorem 5.3 to (P̃) at the point (w(·) = ŵ(·), y(·) = ŷ(·), π_1(·) ≡ 0, π_2(·) ≡ 0). Simple computations show that at this solution each critical cone (see (37)) is the projection of the corresponding critical cone of (P̃), and that the same holds for the set of multipliers. Furthermore, the second variation of (P̃) evaluated at a multiplier λ̃ ∈ G(co Λ̃)^# is given by where λ ∈ G(co Λ)^# is the corresponding multiplier for problem (37). Hence, the necessary condition in Theorem 5.3 (see Remark 6.5 below) implies that for every (ξ(·), ū(·), v̄(·), h) ∈ P_2 there exists λ ∈ G(co Λ)^# such that Setting ρ := min_{λ ∈ G(co Λ)^#} α_0 ρ′ > 0, the desired result follows. This completes the proof of the theorem.
Remark 6.5. Since the dynamics of (P̃) are not autonomous, what we applied above is an extension of Theorem 5.3 to time-dependent dynamics. The latter follows easily by adding a state variable κ with dynamics κ̇ = 1 and κ(0) = 0.
Example

Recalling the definitions given in (40)-(43), the second variation Ω_{P_2} (defined in (49)) on the critical cone P_2 of (PE) gives: We see that Ω_{P_2} verifies the sufficient condition (55). It remains to find a feasible solution that verifies the first order optimality conditions. In Aronna [4] we used the shooting algorithm to solve problem (PE) numerically. The numerical tests converged to the optimal solution (û, v̂)(·) ≡ 0 for arbitrary guesses of the initial values of the costate variables. It is immediate to check that ŵ(·) ≡ 0 is a feasible trajectory that verifies the first order optimality conditions. Since the second variation at this ŵ(·) verifies the sufficient condition of Theorem 6.2, we conclude that ŵ(·) is a strict weak optimal trajectory that satisfies γ-growth.
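Problem (PE) itself is not reproduced here, but the shooting idea referred to above can be illustrated on a toy problem of our own choosing (everything below, including the dynamics ẋ = u, the cost ∫(x² + u²)/2 dt and the data, is a hypothetical example, not problem (PE)): Pontryagin's conditions reduce the problem to a two-point boundary value problem, which is solved by integrating forward from a guessed initial costate and updating the guess until the transversality condition p(T) = 0 holds.

```python
import math

def shoot(p0, n=1000):
    # Toy problem: minimize ∫_0^1 (x^2 + u^2)/2 dt, with dx/dt = u, x(0) = 1,
    # x(1) free. Pontryagin's principle gives u = -p, dp/dt = -x, and the
    # transversality condition p(1) = 0.
    # Integrate the Hamiltonian system x' = -p, p' = -x with RK4; return p(1).
    x, p = 1.0, p0
    h = 1.0 / n
    f = lambda x, p: (-p, -x)
    for _ in range(n):
        k1 = f(x, p)
        k2 = f(x + h / 2 * k1[0], p + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], p + h / 2 * k2[1])
        k4 = f(x + h * k3[0], p + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return p

def solve(a=0.0, b=1.0, tol=1e-12, maxit=50):
    # Secant iteration on the shooting residual p0 -> p(1).
    fa, fb = shoot(a), shoot(b)
    for _ in range(maxit):
        if abs(fb) < tol or fb == fa:
            break
        c = b - fb * (b - a) / (fb - fa)
        a, fa = b, fb
        b, fb = c, shoot(c)
    return b

p0 = solve()  # exact value for this toy problem: p(0) = tanh(1)
```

Because the Hamiltonian system of this toy problem is linear, the residual is affine in the guess and the secant iteration converges in essentially one step; for nonlinear dynamics the same loop performs a genuine quasi-Newton iteration.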

Conclusion and possible extensions
We studied optimal control problems in the Mayer form, governed by systems that are affine in some components of the control variable. A set of 'no gap' second order necessary and sufficient optimality conditions was provided. These conditions apply to weak minima, consider fairly general endpoint constraints and do not assume uniqueness of the multiplier. We further derived the Goh conditions under the assumption of a unique multiplier.
The main result of the article is Theorem 6.2. The interest of this result is that it can be applied either to prove optimality of a candidate solution of a given problem, or to show convergence of an associated shooting algorithm, as stated in Aronna [4] and proved in detail in the technical report Aronna [5]. This algorithm and its proof of convergence also apply to partially-affine problems with bounds on the control and bang-singular solutions, and hence its convergence has strong practical interest.
The results presented here can be pursued in many interesting directions. One of the most important extensions is to optimality conditions for bang-singular solutions of problems containing closed control constraints.

Acknowledgments
Part of this work was done during my Ph.D. under the supervision of Frédéric Bonnans, whom I thank for the great guidance.
I also thank the anonymous referee for the careful reading and useful remarks.

Appendix A. Proofs of technical results
We include in this part the proofs that were omitted throughout the article.
Definition A.1. We say that the endpoint equality constraints are qualified if When (73) does not hold, we say that the constraints are not qualified, or unqualified.
The proof of Theorem 3.3 is divided into two cases: qualified and unqualified endpoint equality constraints. In the latter case, condition (20) follows easily, as shown in Lemma A.2 below. The proof for the qualified case is done by means of an auxiliary linear problem and duality arguments. Set λ := (p(·), α, β) with p(·) ≡ 0 and α = 0. Then both λ and −λ are in Λ. Observe that Thus, either Ω[λ](x̄, ū, v̄) or Ω[−λ](x̄, ū, v̄) is necessarily nonnegative. The desired result follows.
(QP_w̄) Proposition A.3. Assume that ŵ(·) is a weak solution of (AP) for which the endpoint equality constraints are qualified. Let w̄(·) ∈ C be a critical direction. Then the problem (QP_w̄) is feasible and has nonnegative value.
Proof of Proposition A.3.
Step I. Let us first show feasibility. Since Dη(x̂(0), û, v̂) is onto, there exists r ∈ R^n × U × V for which the equality constraint in (QP_w̄) is satisfied. Set Then (τ, r) is feasible for (QP_w̄).
Step II. Let us now prove that (QP_w̄) has nonnegative value. Suppose, on the contrary, that there is (τ, r) ∈ R × R^n × U × V feasible for (QP_w̄) with τ < 0. We shall look for a family of feasible solutions of (AP), referred to as {r(σ)}_σ, with the following properties: it is defined for small positive values of σ and it satisfies The existence of such a family {r(σ)}_σ will contradict the local optimality of (x̂(0), û, v̂). Consider hence where the last inequality holds since (x̄, ū, v̄)(·) is a critical direction and in view of the definition of τ in (74). Analogously, one has η(r(σ)) = o(σ²).
Appendix B. Technical lemmas used in the proof of the main result (Theorem 6.2). Recall first the following classical result for ordinary differential equations.
Finally, the following lemma gives an estimate for the difference between the variation of the state variable and the linearized state.
Combining (100) and (101) yields with the remainder given by (97). The linearized equation (12), together with (102), leads to (96). In view of (97) and Lemma B.3, it can be seen that the estimates in (98) hold.
In view of Lemmas 3.1, B.2, B.3 and B.4, we can justify the following technical result, which is an essential point in the proof of the sufficient condition of Theorem 6.2.