Optimal periodic control for scalar dynamics under integral constraint on the input

This paper studies a periodic optimal control problem governed by a one-dimensional system, linear with respect to the control u, under an integral constraint on u. We give conditions under which the value of the cost function at steady state with a constant control ū can be improved by periodic controls u with average value equal to ū. This leads to the so-called "over-yielding" effect met in several applications. Using the Pontryagin Maximum Principle, we provide the optimal synthesis of periodic strategies under the integral constraint. The results are illustrated on a single population model in order to study the effect of periodic inputs on the utility of the stock of resource. Key-Words. Optimal Control, Pontryagin Maximum Principle, Periodic solutions, Over-yielding.

1. Introduction. In many applications, the control of dynamical models makes it possible to drive the state x of a system to an operating point, typically a steady state x̄ which is an equilibrium point of the dynamics under a constant control ū. When a criterion of performance is associated with the state of the system, it may happen that a periodic trajectory near the steady state gives a better averaged performance than at steady state. But such a gain in performance could come at the price of a higher effort (or cost) on the control variable. The objective of the present work is to investigate the possibility of improving the performance of a steady state with periodic solutions, while keeping the same control effort over each time period. We consider in this work that this effort is measured by the integral of the control u(·) over a period. Keeping the same effort then consists in imposing that the averaged control over a periodic solution is equal to the control ū at steady state. For this purpose, we formulate an optimal control problem over periodic solutions, under an integral constraint on the control. Periodic optimal control has already been investigated in the literature, mainly under the consideration that solutions are sought near a steady state optimizing the criterion among stationary solutions. In particular, the so-called π-criterion characterizes the existence of "best" periods. It consists first in determining an optimal steady state among constant controls, and then in checking on a linear-quadratic approximation whether there exists a frequency of a periodic signal near the nominal constant one that could improve the cost (see [6,5]). For instance, in [2,3,15], this method has been applied to the chemostat model, and it has been shown that its productivity can be improved with a periodic control when there is a delay in the dynamics.
However, there are relatively few theoretical works about global optimality of periodic controls (apart from [18] for the characterization of the value function under quite strong assumptions). Most of the existing works deal with local necessary conditions ([8,12]), second-order conditions ([7,21,13]) or approximation techniques ([11,1,4]). In [4] for instance, a local analysis is conducted in the context of age-structured systems, showing how to improve the cost function locally by considering periodic controls versus constant ones (but no integral constraint on the control is considered). It has to be underlined that, in our approach, we do not have to assume that the steady state optimizes the criterion among all stationary solutions of the system (the optimal steady-state control does not necessarily satisfy the integral constraint). To our knowledge, an integral constraint on the control has not yet been considered in problems of determining optimal periodic trajectories. Therefore, our objective is somewhat different from what has been described above.
In applications for which the control variable is a flow rate of matter (such as in continuously fed reactors, for instance [14]), this constraint amounts to considering that a given quantity of matter is available for each period of time; the problem is then to determine how to deliver this matter during the period (i.e., at a constant flow rate or not?), maintaining a periodic operation over future times and maximizing the production or the quality of a product over each period. The present problem has been mainly motivated by the modelling of exploited populations of stock (or density) x, see, e.g., [10], for which the control variable u is the harvesting effort (for instance the number of fishing boats on a lake). In our setting, for a given steady state x̄ and its associated constant control ū, we consider the set of T-periodic trajectories with periodic controls having ū as average. We say that an over-yielding occurs when the averaged utility of the stock x(·) of a T-periodic solution is larger than the utility of the stock x̄. Let us finally mention [16,17], where periodic inputs are studied in the context of population biology and fisheries management, but with different objectives (no optimization and no such integral constraint are considered).
To our knowledge, this problem has not yet been addressed theoretically in the literature. From a mathematical viewpoint, the integral constraint on the input brings two main difficulties: 1. the existence of non-constant periodic trajectories with a control satisfying the integral constraint; 2. the characterization of an optimal control under both the periodicity constraint on the trajectory and the integral constraint on the input. We propose to tackle these questions here for scalar dynamics in a general framework. The paper is organized as follows. In Section 2, we formulate the problem and give a precise definition of over-yielding. We then provide assumptions on the dynamics and the cost function that guarantee or prevent over-yielding. In particular, we show that convexity plays an important role. In Section 3, we synthesize optimal periodic controls (in particular non-constant ones) improving the cost function compared to steady state (see Theorem 3.6). In Section 4, we show how to relax the assumptions of Section 2, which are required on an invariant domain (a, b) of the dynamics, when they are fulfilled only in a neighborhood of x̄. This leads us to a result similar to the one of Section 3, but for restricted values of the period T. Finally, we illustrate the results of Sections 3 and 4 in Section 5 in the context of sustainable resource management (see, e.g., [10]). We study the impact on the stock of non-constant periodic inputs (harvesting efforts) with the same average value, and determine the worst-case scenarios with respect to a given utility of the stock.

2. Existence of over-yielding. Given two functions f, g : R → R of class C¹, we consider the control system

ẋ = f(x) + u g(x),    (1)

where u is a control variable taking values in [−1, 1]. We suppose that the system satisfies the following hypotheses:
(H1) There exists (a, b) ∈ R² with a < b such that g is positive on the interval I := (a, b).
(H2) One has f − g < 0 and f + g > 0 on I.
Hypothesis (H1) guarantees the invariance of the interval I by the dynamics (1), whereas Hypothesis (H2) is related to controllability properties of (1) (which will be used in the next section for the synthesis of non-constant periodic trajectories). In the rest of the paper, we shall consider initial conditions in I only.
Throughout the paper, we fix a point x̄ ∈ I as a nominal steady state, and we define ψ(x) := −f(x)/g(x) for x ∈ I, together with ū := ψ(x̄). Note that any such point x̄ is an equilibrium of (1) for the constant control u = ū (which belongs to [−1, 1] by (H2)). In the sequel, we shall consider T-periodic solutions of (1), where T > 0, with a T-periodic control u that satisfies the integral constraint

(1/T) ∫₀ᵀ u(t) dt = ū.    (2)

We then define the set U_T of admissible controls as

U_T := { u : [0, T] → [−1, 1] measurable such that (2) is fulfilled }.

One has the following property.
Lemma 2.1. For any T > 0, any T-periodic solution x(·) of (1) in I associated with a control u ∈ U_T satisfies

(1/T) ∫₀ᵀ ψ(x(t)) dt = ū.    (4)

Proof. On the interval I, the function g is positive and from equation (1) we get u(t) = ẋ(t)/g(x(t)) + ψ(x(t)). Define the function h(x) := ∫_{x̄}^{x} dξ/g(ξ), together with the function t ↦ y(t) := h(x(t)) for t ∈ [0, T], which satisfies ẏ = u − ψ(x). For any control function u that fulfills the constraint (2), one then has y(T) − y(0) = ūT − ∫₀ᵀ ψ(x(t)) dt. For any T-periodic solution x in I, y is also T-periodic and one obtains the property (4).
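The bookkeeping in this proof can be checked numerically. Below is a minimal sketch on a toy model of our own (not from the paper): f(x) = −0.1x², g ≡ 1, so that ψ(x) = 0.1x², h(x) = x − x̄, and the steady state x̄ = 1 corresponds to ū = ψ(1) = 0.1. Along any trajectory driven by a control u, the identity ∫u dt − ∫ψ(x) dt = y(T) − y(0) = x(T) − x(0) from the proof should hold up to discretization error:

```python
import math

# Toy model (our own illustration, not the paper's example):
#   x' = f(x) + u g(x),  f(x) = -0.1 x^2,  g(x) = 1,
# so psi(x) := -f(x)/g(x) = 0.1 x^2, and xbar = 1 gives ubar = 0.1.
f = lambda x: -0.1 * x * x
g = lambda x: 1.0
psi = lambda x: 0.1 * x * x
xbar, ubar, T = 1.0, 0.1, 4.0

def simulate(u, x0, T, n=40000):
    """RK4 integration of x' = f(x) + u(t) g(x); also accumulates
    the integrals of u and of psi(x) along the trajectory."""
    dt = T / n
    rhs = lambda s, xv: f(xv) + u(s) * g(xv)
    x, int_u, int_psi = x0, 0.0, 0.0
    for i in range(n):
        t = i * dt
        k1 = rhs(t, x)
        k2 = rhs(t + dt/2, x + dt/2 * k1)
        k3 = rhs(t + dt/2, x + dt/2 * k2)
        k4 = rhs(t + dt, x + dt * k3)
        int_u += u(t) * dt          # left-Riemann accumulation
        int_psi += psi(x) * dt
        x += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return x, int_u, int_psi

u = lambda t: ubar + 0.5 * math.sin(2 * math.pi * t / T)  # mean value ubar
xT, int_u, int_psi = simulate(u, xbar, T)
# Since y := h(x) satisfies y' = u - psi(x) (here g == 1, h(x) = x - xbar),
# the identity  int u - int psi(x) = x(T) - x(0)  must hold:
print(int_u - int_psi, xT - xbar)
```

In particular, for an exactly T-periodic trajectory the right-hand side vanishes and (4) follows.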
We now require the following hypothesis on x̄.
(H) The function ψ satisfies (ψ(x) − ψ(x̄))(x − x̄) > 0 for any x ∈ I \ {x̄}.
This hypothesis is related to the asymptotic stability of x̄ for the dynamics (1) in I with the constant control ū (as we shall see in Lemma 2.2, see (5)). For applications, it is also reasonable that the given steady state x̄ be a stable equilibrium of the system under constant control. For convenience, we denote by t ↦ x(t, u, x₀) the solution of (1), with u ∈ U_T, taking the value x₀ ∈ I at time 0. In the following, we shall consider T-periodic solutions with the initial condition x(0) = x̄ (i.e., such that x(T, u, x̄) = x̄ for u ∈ U_T). We first show that Hypothesis (H) guarantees the existence of such non-constant solutions.

Lemma 2.2. Assume that Hypotheses (H1), (H2) and (H) are fulfilled. Then, for any T > 0, there exists a non-constant T-periodic solution of (1) with x(0) = x̄ associated with a control u ∈ U_T.

Proof. Consider the constant control u = ū and its associated dynamics in I

ẋ = f̃(x) := f(x) + ū g(x) = g(x)(ū − ψ(x)) = g(x)(ψ(x̄) − ψ(x)).    (5)
As the function g is positive on I, Hypothesis (H) implies that one has f̃ < 0 on (x̄, b) and f̃ > 0 on (a, x̄). Therefore, one has the properties

x(T, ū, x₀) < x₀ for x₀ ∈ (x̄, b),  x(T, ū, x₀) > x₀ for x₀ ∈ (a, x̄).    (6)

Consider now any bounded T-periodic measurable function v : [0, +∞) → [−1, 1] satisfying ∫₀ᵀ v(t) dt = 0, and the control function u_ε(t) := ū + ε v(t), where ε ∈ R. Clearly, u_ε satisfies the constraint (2) and, for ε small enough, one has u_ε(t) ∈ [−1, 1] for any t ≥ 0. Define then the function θ(x₀, ε) := x(T, u_ε, x₀) − x₀ for (x₀, ε) ∈ I × R. By the theorem of continuous dependency of the solutions of ordinary differential equations w.r.t. initial conditions and parameters (see for instance [19]), θ is a continuous function. From (6), we deduce that θ(·, 0) is negative on (x̄, b) and positive on (a, x̄), and by continuity of θ, there exist ε ≠ 0, x₀⁺ ∈ (x̄, b) and x₀⁻ ∈ (a, x̄) such that θ(x₀⁺, ε) < 0 and θ(x₀⁻, ε) > 0. By the Mean Value Theorem, we deduce the existence of x₀ ∈ (x₀⁻, x₀⁺) such that θ(x₀, ε) = 0, that is, the existence of a T-periodic solution x of (1) with a non-constant control u that satisfies the constraint (2). From Lemma 2.1, such a solution satisfies ∫₀ᵀ (ψ(x(t)) − ψ(x̄)) dt = 0, which implies that the map t ↦ ψ(x(t)) − ψ(x̄) cannot be of constant sign on [0, T]. Hypothesis (H) then implies that x(t) − x̄ has to change sign. Therefore there exists t̄ ∈ (0, T) with x(t̄) = x̄, in such a way that the control function ũ defined by t ↦ ũ(t) := u(t + t̄) guarantees x(T, ũ, x̄) = x̄.

Now, let ℓ : R → R be a function of class C¹ and consider the cost function

J_T(u) := (1/T) ∫₀ᵀ ℓ(x_u(t)) dt,    (7)

where x_u is the unique solution of (1) such that x_u(0) = x̄, associated with a control u ∈ U_T. Our aim in this work is to address the question of finding a periodic trajectory with x(0) = x̄ that has a lower cost than the constant x̄, with a (T-periodic) control of mean value ū. For this purpose, we introduce the following terminology.
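The shooting argument of the proof (finding a zero of θ(·, ε) by the intermediate value theorem) is easy to reproduce numerically. Here is a sketch on the same toy model as above (f(x) = −0.1x², g ≡ 1, x̄ = 1, ū = 0.1, our own choice, not from the paper), with the zero-mean perturbation v(t) = sin(2πt/T) and ε = 0.3:

```python
import math

f = lambda x: -0.1 * x * x           # toy drift (our own assumption)
xbar, ubar, T, eps = 1.0, 0.1, 4.0, 0.3
v = lambda t: math.sin(2 * math.pi * t / T)   # zero-mean perturbation
u = lambda t: ubar + eps * v(t)               # u_eps of the proof

def x_of_T(x0, n=8000):
    """RK4 for x' = f(x) + u(t); returns x(T) and the orbit amplitude."""
    dt = T / n
    rhs = lambda s, xv: f(xv) + u(s)
    x, xmin, xmax = x0, x0, x0
    for i in range(n):
        t = i * dt
        k1 = rhs(t, x); k2 = rhs(t + dt/2, x + dt/2*k1)
        k3 = rhs(t + dt/2, x + dt/2*k2); k4 = rhs(t + dt, x + dt*k3)
        x += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        xmin, xmax = min(xmin, x), max(xmax, x)
    return x, xmax - xmin

theta = lambda x0: x_of_T(x0)[0] - x0   # shooting map of the proof

lo, hi = 0.2, 1.5       # for this model: theta(lo) > 0 > theta(hi)
for _ in range(60):     # bisection, as in the Mean Value Theorem step
    mid = 0.5 * (lo + hi)
    if theta(mid) > 0: lo = mid
    else: hi = mid
x0_star = 0.5 * (lo + hi)
print(x0_star, theta(x0_star))   # periodic initial condition, residual ~ 0
```

The located orbit is non-constant (its amplitude is of order εT/π), which is exactly the content of Lemma 2.2.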
Definition 2.3. Given T > 0, we say that (1) exhibits an over-yielding for the cost (7) if there exists a T-periodic solution x of (1) with x(0) = x̄, associated with a control u ∈ U_T, such that J_T(u) < ℓ(x̄).
Moreover, we aim to characterize in the next section the strategies realizing the minimum of the criterion (7) among such controls. The possibility of having an over-yielding relies on specific assumptions on the cost function and the dynamics, that we now introduce.

(H3) The function ℓ is increasing on I and there exists a strictly convex increasing function γ over ℓ(I) such that ψ = γ ∘ ℓ on I.

Proposition 2.1. If (H1), (H2) and (H3) hold true, then an over-yielding occurs for any T > 0.

Proof. By Lemma 2.2 (note that (H3) implies (H)), there exists a non-constant T-periodic solution x with x(0) = x̄ associated with a control u ∈ U_T. For a non-constant solution, we find by Jensen's inequality

(1/T) ∫₀ᵀ γ(ℓ(x(t))) dt > γ( (1/T) ∫₀ᵀ ℓ(x(t)) dt ) = γ(J_T(u)),

while Lemma 2.1 gives (1/T) ∫₀ᵀ γ(ℓ(x(t))) dt = (1/T) ∫₀ᵀ ψ(x(t)) dt = ū = γ(ℓ(x̄)). Since γ is increasing over ℓ(I), we obtain J_T(u) < ℓ(x̄). We now provide sufficient conditions preventing any over-yielding.
(H4) There exists a continuous function γ̄, increasing and concave over ℓ(I), such that ψ = γ̄ ∘ ℓ on I.

Proposition 2.2. If (H1) and (H4) hold true, then no over-yielding is possible.

Proof. We suppose by contradiction that there exists a T-periodic solution x associated with a control u ∈ U_T such that J_T(u) < ℓ(x̄). Using Jensen's inequality for the concave function γ̄, we can write

γ̄(ℓ(x̄)) = (1/T) ∫₀ᵀ ψ(x(t)) dt = (1/T) ∫₀ᵀ γ̄(ℓ(x(t))) dt ≤ γ̄( (1/T) ∫₀ᵀ ℓ(x(t)) dt ) = γ̄(J_T(u)).

The function γ̄ being increasing on ℓ(I), we deduce ℓ(x̄) ≤ J_T(u), a contradiction.

Remark 4. (i) Thanks to the previous proposition, if ℓ(x) = x for x ∈ R and ψ is strictly concave, then no over-yielding is possible. In the same way, if ℓ is increasing on I and γ̄ is strictly concave increasing over ℓ(I), then the same conclusion follows.
(ii) Under Hypotheses (H1)-(H3), we say that the over-yielding is systematic, meaning that it occurs for any T > 0 (see Proposition 2.1).
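The convex/concave dichotomy of Remark 4 boils down to Jensen's inequality: for a strictly convex increasing γ, the average of γ(ℓ(x)) strictly exceeds γ applied to the average of ℓ(x) as soon as the profile is non-constant, and the constraint (1/T)∫γ(ℓ(x)) dt = γ(ℓ(x̄)) then forces the average of ℓ(x) below ℓ(x̄). A tiny numerical illustration with γ(y) = 0.1y² (our own choice) and hypothetical sample values of ℓ(x(t)):

```python
gamma = lambda y: 0.1 * y * y   # strictly convex, increasing on (0, 2)

# Hypothetical non-constant samples of l(x(t)) along a periodic orbit:
z = [0.6, 1.0, 1.4, 1.0]        # mean value 1.0
mean_z = sum(z) / len(z)
mean_gamma = sum(gamma(v) for v in z) / len(z)

# Strict Jensen gap: mean of gamma(z) exceeds gamma of the mean.
print(mean_gamma, gamma(mean_z))
```

Here mean_gamma = 0.108 > 0.1 = γ(1); if the left-hand side were pinned to γ(ℓ(x̄)) = 0.1 by the constraint (4), the mean of the samples would have to drop below 1, which is precisely the over-yielding mechanism. With a concave γ̄ the gap reverses, which is Proposition 2.2.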
3. Determination of optimal periodic solutions. In this section, we assume that Hypotheses (H1)-(H2)-(H3) hold true, so that we know that over-yielding is possible (actually, it is systematic according to Proposition 2.1). For a given T > 0, we shall say that a solution x of (1) is T-admissible if it is T-periodic with x(0) = x̄ and u ∈ U_T. We reformulate the control constraint (2) by considering the augmented dynamics

ẋ = f(x) + u g(x),  ẏ = u,    (11)

together with the boundary conditions

x(0) = x(T) = x̄,  y(0) = 0,  y(T) = ūT.    (12)

The optimal control problem can then be stated as follows:

inf { J_T(u) ; u ∈ U such that (11)-(12) are fulfilled },    (13)

where U denotes the set of measurable control functions u over [0, T] taking values in [−1, 1]. Note that Problem (13) admits a solution by classical existence results. Indeed, Hypotheses (H1)-(H2)-(H3) imply that there exist trajectories of (11) satisfying (12). Since the system is affine w.r.t. the control and ℓ is continuous, the existence of an optimal control follows by Filippov's existence theorem [9].
3.1. Application of the Pontryagin Maximum Principle. We derive necessary optimality conditions using the Pontryagin Maximum Principle [20]. Let H be the Hamiltonian associated with Problem (13):

H(x, λ_x, λ_y, λ₀, u) := λ_x (f(x) + u g(x)) + λ_y u + λ₀ ℓ(x),

where λ := (λ_x, λ_y) denotes the adjoint vector. Let u ∈ U be an optimal control and (x, y) a solution of (11)-(12) associated with u. Then, there exist a scalar λ₀ ≤ 0 and an absolutely continuous map λ : [0, T] → R² satisfying the adjoint equations

λ̇_x = −∂_x H = −λ_x (f′(x) + u g′(x)) − λ₀ ℓ′(x),  λ̇_y = 0,    (14)

for a.e. t ∈ [0, T]. Moreover, (λ₀, λ) ≠ 0 and the Hamiltonian condition writes

u(t) ∈ argmax_{ω ∈ [−1,1]} H(x(t), λ_x(t), λ_y(t), λ₀, ω).    (15)

Since the dynamics is affine w.r.t. u, the switching function φ := λ_x g(x) + λ_y provides the following expression of the control u (thanks to (15)):

u(t) = 1 if φ(t) > 0,  u(t) = −1 if φ(t) < 0.    (16)
An extremal trajectory is a quadruple (x, λ, λ₀, u) where (x, λ) satisfies the state-adjoint equations and u the Hamiltonian condition (15). We recall that a singular arc occurs if φ vanishes on some time interval [t₁, t₂] with t₁ < t₂, and a switching time t_s ∈ (0, T) is a time at which an extremal control u is non-constant in any neighborhood of t_s (which implies φ(t_s) = 0). It is also worth mentioning that, from Hypothesis (H2), when φ > 0, resp. φ < 0, then x is increasing, resp. decreasing. We first claim that any extremal trajectory is normal, i.e., λ₀ ≠ 0.

Proof. Suppose λ₀ = 0. Then λ_x cannot vanish, from the adjoint equation: otherwise λ_x would be zero over [0, T] and the switching function would be constant, equal to λ_y. Since λ_y cannot also be equal to 0 (recall (λ₀, λ) ≠ 0), φ would be of constant sign over [0, T], implying that u = 1 or u = −1 over [0, T], in contradiction with the periodicity of x(·) (recall that f + g > 0 and f − g < 0 over I). As a consequence, λ_x is of constant sign. Now, since λ₀ = 0, one has

φ̇ = λ̇_x g(x) + λ_x g′(x) ẋ = λ_x (f(x) g′(x) − f′(x) g(x)) = λ_x g(x)² ψ′(x).

We deduce that φ̇ is of constant sign (recall that ψ′ > 0), hence φ is monotone. Consequently, the extremal trajectory has at most one switching point, so that x(·) cannot cross x̄ on (0, T). Thus, one has x(t) > x̄ for any time t ∈ (0, T), implying a contradiction with (4); if x(t) < x̄ for any time t ∈ (0, T), we conclude in the same way.
Without any loss of generality, we may assume that λ 0 = −1.

Remark 5. Considering T-periodic optimal solutions in I without requiring the initial condition x(0) = x̄, but only x(T) = x(0), provides the transversality condition λ_x(T) = λ_x(0). However, Lemma 2.1 and Hypothesis (H3) (or simply (H)) imply that any T-periodic optimal solution x(·) in I has to pass through x̄. Therefore, we can impose x(0) = x̄ without any loss of generality, and deduce that λ_x(·) is necessarily T-periodic (even though we shall not use this property in the following).

Proposition 3.1. Let x(·) be an optimal trajectory and denote by x_m, x_M its minimal and maximal values over [0, T]. Then:
1. Any switching time t_s satisfies x(t_s) ∈ {x_m, x_M}, with x_m < x̄ < x_M.
2. It has no singular arc.

Proof. 1. Let t be a time at which x(t) = x_M. If one had φ(t) > 0, then from (16) the control u would be equal to 1 in a neighborhood of t, and thus, from (H2), we would have a contradiction with the fact that x_M is the maximum of x. We proceed in the same way if φ(t) < 0; therefore φ(t) = 0, and similarly at any time where x(·) reaches x_m.
Moreover, since H is conserved along any extremal trajectory (see for instance [9]), and since at any zero of φ one has λ_x g(x) = −λ_y, hence H = λ_y ψ(x) − ℓ(x) (taking λ₀ = −1), evaluating H at times where x(·) reaches x_M and x_m gives

λ_y γ(ℓ(x_M)) − ℓ(x_M) = λ_y γ(ℓ(x_m)) − ℓ(x_m).    (17)

As γ is increasing over ℓ(I), one has λ_y > 0. Suppose now that t_s is a switching time such that x(t_s) ∈ (x_m, x_M). Using a similar computation as above, we find that

λ_y γ(ℓ(x(t_s))) − ℓ(x(t_s)) = λ_y γ(ℓ(x_m)) − ℓ(x_m).    (18)

Since γ and ℓ are respectively strictly convex and increasing on [x_m, x_M], the strictly convex function z ↦ λ_y γ(z) − z cannot take the same value at the three points ℓ(x_m) < ℓ(x(t_s)) < ℓ(x_M); thus (17) and (18) imply a contradiction, and x(t_s) ∈ {x_m, x_M}, as was to be proved.
2. Suppose now by contradiction that there exists a time interval [t₁, t₂], with t₁ < t₂, on which φ vanishes identically. Differentiating φ along this singular arc gives φ̇ = g(x)(ℓ′(x) − λ_y ψ′(x)) = 0, that is, 1/λ_y = γ′(ℓ(x(t))) for t ∈ [t₁, t₂]. Now, the extremities t₁ and t₂ of the singular arc must be switching times, so that one must have x(t₁) ∈ {x_m, x_M}. If x(t₁) = x_m, one then gets 1/λ_y = γ′(ℓ(x_m)), which is a contradiction with (17): by strict convexity of γ, the slope (γ(ℓ(x_M)) − γ(ℓ(x_m)))/(ℓ(x_M) − ℓ(x_m)) = 1/λ_y given by (17) is larger than γ′(ℓ(x_m)). We argue similarly if x(t₁) = x_M, and at t = t₂. This completes the proof.
At this stage, we have thus proved that optimal trajectories are of bang-bang type (i.e., they are concatenations of arcs with u = ±1) such that, at each switching time t_s, one has x(t_s) ∈ {x_m, x_M}. One can show that the number of switching times is finite (by a reasoning similar to the exclusion of singular arcs).
Moreover, this number is necessarily even. Suppose indeed that a T-admissible solution x(·) of (1), associated with a control u ∈ U_T, has an odd number of switching times over [0, T]. Note first that the map t ↦ x(t) − x̄ has to change sign over [0, T] in order to satisfy (4); in particular it vanishes at least once on (0, T), say at a time t̄, which already rules out trajectories without any switch. Observe then that, with an odd number of switches, the signs of ẋ(0⁺) and ẋ(T⁻) are necessarily distinct. By T-periodicity, the signs of ẋ(T⁻) and ẋ(T⁺) are then also distinct, so that the T-periodic solution, viewed over (t̄, T + t̄), switches at time t = T, which belongs to the interval (t̄, T + t̄). Since x(T) = x̄ ∉ {x_m, x_M}, we have a contradiction with point 1 of Proposition 3.1. Hence the number of switches is even.
We focus now on extremal trajectories with two switches.
3.3. Trajectories with two switches. For a given T > 0, we consider trajectories t ↦ x(t), solutions of (1) on [0, T] with x(0) = x̄, associated with a control u defined by two switching times t₁, t₂ with 0 < t₁ < t₂ < T:

u(t) = 1 on [0, t₁) ∪ (t₂, T],  u(t) = −1 on (t₁, t₂).    (19)

These trajectories, that we shall call B⁺B⁻B⁺ trajectories, will play an important role in the following. Note that, under Hypotheses (H1)-(H2), a B⁺B⁻B⁺ trajectory is uniquely characterized by its maximal and minimal values x_M = x(t₁) and x_m = x(t₂) in I. For convenience, we define on the interval I the function

η(x) := 1/(f(x) + g(x)) + 1/(g(x) − f(x)) = 2 g(x)/(g(x)² − f(x)²).

From Hypothesis (H2), note that η is a C¹ positive function on I.

One then obtains, integrating (1) separately over the three arcs,

t₁ = ∫_{x̄}^{x_M} dx/(f(x) + g(x)),  t₂ − t₁ = ∫_{x_m}^{x_M} dx/(g(x) − f(x)),  T − t₂ = ∫_{x_m}^{x̄} dx/(f(x) + g(x)),

and for a T-periodic solution, x(T) = x̄ gives exactly the property

∫_{x_m}^{x_M} η(x) dx = T.    (20)

Proceeding with the same decomposition of the interval [0, T], one can write

∫₀ᵀ u(t) dt = ∫_{x_m}^{x_M} ( 1/(f(x) + g(x)) − 1/(g(x) − f(x)) ) dx,

which gives the equality ∫_{x_m}^{x_M} ψ(x) η(x) dx = ūT when u fulfills (2). Indeed, notice that one has ψ(x) η(x) = 1/(f(x) + g(x)) − 1/(g(x) − f(x)) for x ∈ I, and thus the property

∫_{x_m}^{x_M} ψ(x) η(x) dx = ūT    (21)

is satisfied.
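The change of variables behind (20) and (21) can be verified numerically: simulate the two bang arcs of a B⁺B⁻B⁺ loop, measure the arc times, and compare with the quadratures of η and ψη. A sketch on the same toy model as before (f(x) = −0.1x², g ≡ 1, our own choice; the pair x_m = 0.6, x_M = 1.5 is picked arbitrarily, not solved from (20)-(21)):

```python
f = lambda x: -0.1 * x * x
g = lambda x: 1.0
psi = lambda x: 0.1 * x * x
eta = lambda x: 2 * g(x) / (g(x)**2 - f(x)**2)   # eta = 2g/(g^2 - f^2)

def simpson(fun, a, b, n=2000):
    """Composite Simpson quadrature of fun over [a, b]."""
    h = (b - a) / n
    s = fun(a) + fun(b)
    for i in range(1, n):
        s += fun(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def time_to_reach(x0, target, u):
    """Integrate x' = f + u*g with constant u until x crosses target."""
    dt, t, x = 1e-4, 0.0, x0
    while (x - target) * (x0 - target) > 0:
        x += dt * (f(x) + u * g(x))   # explicit Euler is enough here
        t += dt
    return t

xm, xM = 0.6, 1.5
t_up = time_to_reach(xm, xM, +1.0)    # ascending arc, u = +1
t_down = time_to_reach(xM, xm, -1.0)  # descending arc, u = -1
T = t_up + t_down
# (20): the integral of eta over [xm, xM] equals the loop period T
print(T, simpson(eta, xm, xM))
# (21)-type identity: the integral of psi*eta equals int u = t_up - t_down
print(t_up - t_down, simpson(lambda x: psi(x) * eta(x), xm, xM))
```

For a pair (x_m, x_M) actually solving (20)-(21), the second quantity would in addition equal ūT.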
We first analyze the possibility of satisfying the integral condition (20): for any T > 0, there exists an increasing one-to-one mapping β_T : [a, b] → [a, b], of class C¹ on I, such that the pair (α, β_T(α)) satisfies (20) for any α ∈ I.

Proof. The function f + g is of class C¹ and positive on I with (f + g)(b) = 0. Thus, it is easy to see that K₊ := −min_{x∈[a,b]} (f + g)′(x) > 0, and that one has the inequality (f + g)(x) ≤ K₊(b − x) for any x ∈ I. As the function η satisfies η(x) ≥ 1/(f(x) + g(x)) ≥ 1/(K₊(b − x)), one deduces that the map

χ(α, β) := ∫_α^β η(ξ) dξ

is such that, for any α ∈ I, χ(α, ·) is of class C¹, increasing, with χ(α, α) = 0 and χ(α, b) = +∞. By the Implicit Function Theorem, there exists a unique map β_T : I → I of class C¹ such that χ(α, β_T(α)) = T for any α ∈ I. Moreover, one has β_T′(α) = η(α)/η(β_T(α)) > 0. The function β_T is thus increasing, and then admits limits at the points a⁺ and b⁻. Therefore, one has β_T(a⁺) := lim_{α→a⁺} β_T(α) ≥ a and β_T(b⁻) := lim_{α→b⁻} β_T(α) ≤ b, which verify χ(a, β_T(a⁺)) = T and χ(b, β_T(b⁻)) = T, since χ is continuous. As previously, f − g < 0 on I with (f − g)(a) = 0 implies that χ(a, β) = +∞ for any β ∈ I. If β_T(a⁺) > a, one should then have χ(a, β_T(a⁺)) = +∞, which is not possible since one has χ(α, β_T(α)) = T for any α ∈ I. So, one has β_T(a⁺) = a. As the function η is positive on I, one also has β_T(α) > α for any α ∈ I, and we deduce that β_T(b⁻) = b. This proves that β_T can be extended to a one-to-one mapping from [a, b] to [a, b].
We are now ready to show that there exists a unique B + B − B + trajectory that satisfies both integral conditions (20) and (21).
Proposition 3.2. Assume that Hypotheses (H1), (H2) and (H) are fulfilled. Then, for any T > 0, there exists a unique pair (x_m, x_M) ∈ I², with x_m < x̄ < x_M, satisfying (20) and (21).

Proof. Define the function

F(α) := ∫_α^{β_T(α)} (ψ(x) − ψ(x̄)) η(x) dx,    (22)

and notice that conditions (20) and (21) are both fulfilled by a pair (x_m, x_M) = (α, β_T(α)) if and only if F(α) = 0. One has F(α) < 0 for α ≤ β_T⁻¹(x̄), since the interval of integration then lies below x̄ and ψ satisfies (H), and similarly F(α) > 0 for α ≥ x̄. As β_T is increasing and ψ satisfies (H), F is increasing on (β_T⁻¹(x̄), x̄), and we conclude to the existence and uniqueness of x_m, x_M in I, with x_m < x̄ and x_M > x̄.

Remark 6. Consider now a solution x(·) of (1) such that x(0) = x̄, with u = 1 until x(·) reaches x_M, say at a time t₁, then u = −1 from t₁ until the first time t₂ > t₁ such that x(t₂) = x_m, and finally u = 1 until x(·) reaches x̄. For any T > 0, this construction defines the unique B⁺B⁻B⁺ trajectory that is T-admissible, thanks to (20)-(21).
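The construction of this proof (the map β_T obtained from χ, then a zero of F) translates directly into a nested bisection. A sketch on the toy model f(x) = −0.1x², g ≡ 1, x̄ = 1, ū = 0.1 (our own illustration, not from the paper), with T = 1.5; the bracketing endpoints 0.25 and 1.0 for the zero of F are specific to this model:

```python
f = lambda x: -0.1 * x * x
g = lambda x: 1.0
psi = lambda x: 0.1 * x * x
xbar, ubar, T = 1.0, 0.1, 1.5
eta = lambda x: 2 * g(x) / (g(x)**2 - f(x)**2)

def simpson(fun, a, b, n=200):
    h = (b - a) / n
    s = fun(a) + fun(b)
    for i in range(1, n):
        s += fun(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

chi = lambda a, b: simpson(eta, a, b)      # chi(alpha, beta) = int eta

def beta_T(alpha):
    """Unique beta with chi(alpha, beta) = T (chi increases in beta)."""
    lo, hi = alpha, 2.0                    # 2.0 = right end of I here
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if chi(alpha, mid) < T: lo = mid
        else: hi = mid
    return 0.5 * (lo + hi)

def F(alpha):                              # the function (22)
    return simpson(lambda x: (psi(x) - ubar) * eta(x), alpha, beta_T(alpha))

lo, hi = 0.25, 1.0                         # F(lo) < 0 < F(hi) for this model
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if F(mid) < 0: lo = mid
    else: hi = mid
xm = 0.5 * (lo + hi); xM = beta_T(xm)
print(xm, xM)                              # xm < xbar < xM
```

At the computed pair, both residuals of (20) and (21) vanish up to quadrature error, which is the content of Proposition 3.2.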
It is also worth mentioning that x_m and x_M depend on the period T. In the next lemma, we provide properties of x_m and x_M as functions of T.
x m (T ) are given by the expressions From (20) and (21), one has .
Taking the limit when T tends to +∞ in both side of this inequality, one obtains lim T →+∞ x M (T ) = b. Similarly one can prove that lim T →+∞ x m (T ) = a.
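The widening of the interval [x_m(T), x_M(T)] with T can be observed numerically by solving (20)-(21) for two periods on the same toy model as before (f(x) = −0.1x², g ≡ 1, x̄ = 1, ū = 0.1; our own illustration). The bracketing endpoints below are specific to this model:

```python
f = lambda x: -0.1 * x * x
psi = lambda x: 0.1 * x * x
ubar = 0.1
eta = lambda x: 2.0 / (1.0 - f(x)**2)      # eta = 2g/(g^2 - f^2), g == 1

def simpson(fun, a, b, n=100):
    h = (b - a) / n
    s = fun(a) + fun(b)
    for i in range(1, n):
        s += fun(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def solve(T):
    """Return (x_m(T), x_M(T)) solving (20)-(21) by nested bisection."""
    def beta(alpha):                       # beta_T: chi(alpha, beta) = T
        lo, hi = alpha, 2.0
        for _ in range(45):
            mid = 0.5 * (lo + hi)
            if simpson(eta, alpha, mid) < T: lo = mid
            else: hi = mid
        return 0.5 * (lo + hi)
    def F(alpha):                          # the function (22)
        return simpson(lambda x: (psi(x) - ubar) * eta(x), alpha, beta(alpha))
    lo, hi = 0.05, 1.0                     # F(lo) < 0 < F(hi) here
    for _ in range(45):
        mid = 0.5 * (lo + hi)
        if F(mid) < 0: lo = mid
        else: hi = mid
    xm = 0.5 * (lo + hi)
    return xm, beta(xm)

xm10, xM10 = solve(1.0)
xm15, xM15 = solve(1.5)
print((xm10, xM10), (xm15, xM15))  # the T = 1.5 interval contains the T = 1.0 one
```

One indeed observes x_m(1.5) < x_m(1.0) and x_M(1.5) > x_M(1.0), consistently with the monotonicity and limit statements above.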

3.4. Optimal solutions. According to Proposition 3.2, for any T > 0, there is a unique B⁺B⁻B⁺ trajectory x̂_T(·) that is T-admissible, generated by a control that we shall denote û_T. Moreover, there exists a unique t̄ ∈ (0, T) such that x̂_T(t̄) = x̄. Therefore, there are exactly two T-admissible solutions x̂_T(·), x̌_T(·) with two switches, given by û_T and ǔ_T with

ǔ_T(t) := û_T(t + t̄), t ≥ 0,

which have the same cost. Similarly, we denote by B⁻B⁺B⁻ the trajectory x̌_T. We now study the monotonicity of the cost J_T(û_T) with respect to T. This property is crucial for the optimal synthesis (Theorem 3.6) and relies on the convexity assumptions on the data.
Lemma 3.5. The map T ↦ J_T(û_T) is decreasing: for any 0 < T′ < T, one has J_T(û_T) < J_{T′}(û_{T′}).

We now give our main result.

Theorem 3.6. Under Hypotheses (H1)-(H2)-(H3), for any T > 0, the controls û_T and ǔ_T are the only optimal controls of Problem (13).

Proof. Since (H3) implies (H), Proposition 3.2 gives the uniqueness of a T-admissible B⁺B⁻B⁺ trajectory (see Remark 6), which amounts to saying that there are exactly two extremals with two switches (corresponding to n = 1), given by the controls û_T(·) and ǔ_T(·). Recall that they have the same cost because ǔ_T(·) is obtained by a time translation of û_T(·). Now, Proposition 3.1 shows that an optimal trajectory consists of 2n (with n ≥ 1) switches, which occur exactly at the maximal and minimal values. It should be noted that any such trajectory with 2n switches (n ≥ 1) is T/n-periodic. Indeed, by construction, an extremal has to cross x̄ after its first two switches, say at t̄ > 0. From t = t̄, the control alternates the same values +1 and −1, and the switching points occur at exactly the same values of x(·), namely x_M and x_m. Therefore, using the Cauchy-Lipschitz Theorem, one gets x(t) = x(t + t̄) for any t ∈ [0, t̄], and successively on the intervals [t̄, 2t̄], …, [(n−1)t̄, nt̄]. Therefore x(·) is t̄-periodic with x(nt̄) = x(T) = x̄, thus t̄ = T/n. We deduce that an extremal with 2n switches is T/n-periodic. To conclude, suppose that an optimal trajectory has 2n switches with n > 1. Its cost is then equal to J(û_{T/n}). Applying Lemma 3.5 with T and T/n gives J(û_T) < J(û_{T/n}), which proves that the optimum is achieved for n = 1 (i.e., with two switches).
An interesting consequence of Lemma 3.5 is the monotonicity of the cost function evaluated at the optimal solution as a function of T.

4. Relaxing the assumptions. The results of the previous sections require conditions on the whole invariant interval I (Hypotheses (H) and (H3)). In the present section, we consider situations for which these conditions are not fulfilled on the whole interval I, but only in a neighborhood of x̄. Typically, there could exist other values of x satisfying ψ(x) = ū (Hypothesis (H) is thus not fulfilled on I), or γ could be only locally convex in a neighborhood of x̄ (Hypothesis (H3) is thus not fulfilled on I). The idea is then to restrict the values of the period T in order to characterize (periodic) optimal solutions remaining in a neighborhood of x̄ (and presenting over-yielding). Therefore, we can no longer expect a systematic over-yielding (see Remark 7 and Example 5.2.2 as an illustration).
We first revisit Proposition 3.2 as follows.

Proposition 4.1. Assume that ψ′ > 0 in a neighborhood of x̄. Then there exists T_max > 0 such that, for any T ∈ (0, T_max), there exists a unique pair (x_m, x_M) ∈ I² that verifies (20) and (21), with x_m < x̄ < x_M.

Proof. Consider a sub-interval J := (ã, b̃) ⊂ I with ã < x̄ < b̃ such that the property (30) is fulfilled (as ψ′ is strictly positive at x̄, we know that such an interval exists). Let us then consider a function f̃, defined on the interval [ã, b̃], coinciding with f near x̄ and such that the pair (f̃, g) fulfills Hypotheses (H1)-(H2) and (H) on J. Proposition 3.2 then provides, for some T̃ > 0, a unique pair (x_m, x_M) satisfying (20) and (21) on [ã, b̃] for the pair (f̃, g) and any T ≤ T̃. This can be done for any sub-interval J that verifies condition (30). We then consider T_max as the supremum of T̃ over all such sub-intervals J.
Given T < T_max, one may wonder if it is enough to require Hypothesis (H3) to be fulfilled on [x_m, x_M] (instead of I) in order to obtain the optimality of the controls û_T, ǔ_T as in Theorem 3.6. However, there could exist extremal trajectories taking values outside the interval [x_m, x_M], without additional assumptions on the function ψ outside this set. For this purpose, we consider the two controls u⁻ and u⁺ defined by one switching time t⁻ ∈ (0, T) (for u⁻) and t⁺ ∈ (0, T) (for u⁺), such that the corresponding trajectories x(·, u⁻, x̄) and x(·, u⁺, x̄) are T-periodic (see Fig. 2). Let us then define

Figure 2. T-periodic solutions x(·, u⁻, x̄) and x(·, u⁺, x̄).
x_T⁺ := x(t⁺, u⁺, x̄) and x_T⁻ := x(t⁻, u⁻, x̄). One can check that, under Hypotheses (H1)-(H2), any T-periodic solution x(·) of (1) with x(0) = x̄ and control u taking values in [−1, 1] verifies

x_T⁻ ≤ x(t) ≤ x_T⁺, t ∈ [0, T].    (32)

Indeed, by comparison of solutions of scalar ODEs over [0, t⁺], one obtains (since u⁺(t) = 1 on [0, t⁺] and f + ug ≤ f + g for u ∈ [−1, 1]) x(t) ≤ x(t, u⁺, x̄) for t ∈ [0, t⁺], and then x(t) ≤ x(t, u⁺, x_T⁺) for t ∈ [t⁺, T]. It follows that x(t) ≤ x(t, u⁺, x̄) for any t ∈ [0, T]. By a similar argument with the control u⁻ in place of u⁺, one concludes that x(t) ≥ x(t, u⁻, x̄) for any t ∈ [0, T], which completes the proof of property (32). It can also be observed that one has (x_T⁻, x_T⁺) → (x̄, x̄) when T → 0. We now give a result requiring condition (29) to be fulfilled on [x_T⁻, x_T⁺], which guarantees that any optimal solution stays in the interval [x_m, x_M]. Let T ∈ (0, T_max) fulfill condition (33), and let (x_m, x_M) be given by Proposition 4.1, satisfying (20) and (21). If ψ is increasing on [x_m, x_M], then any T-admissible solution x(·) takes its values in [x_m, x_M].

Proof. Fix T ∈ (0, T_max) that fulfills condition (33). Note that this is possible since ψ is increasing in a neighborhood of x̄ and (x_T⁻, x_T⁺) → (x̄, x̄) when T → 0. According to Proposition 4.1, there exists a unique pair (x_m, x_M) that verifies (20) and (21). Since there exists a T-admissible trajectory taking the values x_m and x_M, one has necessarily x_m ≥ x_T⁻ and x_M ≤ x_T⁺. Consider now any T-admissible solution x, and denote by x̂, x̌ its maximal and minimal values over [0, T]. From property (32), one has x̂ ≤ x_T⁺ and x̌ ≥ x_T⁻. Moreover, from condition (33) and Lemma 2.1, one has x̂ > x̄ > x̌. Let t̄ ∈ (0, T) be such that x(t̄) = x̂, and suppose by contradiction that one has x̂ > x_M. We can assume, without loss of generality, that x(t) ≥ x̄ is satisfied for any t ∈ [0, t̄] (if not, consider t₀ := sup{t < t̄ ; x(t) < x̄} and replace x(·) by x(· + t₀)). Let (A, B) ∈ R₊* × R₊* be defined by

A := ∫_{x̄}^{x̂} dξ/((f + g)(ξ))  and  B := ∫_{x̄}^{x̂} dξ/((g − f)(ξ)).
It can be observed that A and B are the shortest times for a solution of (1) to reach, respectively, x̂ from x̄ (with the constant control u = 1) and x̄ from x̂ (with the constant control u = −1). Clearly, one has t̄ ≥ A and T − t̄ > B.
We now construct a T-periodic solution x̃ of (1) such that x̃(0) = x̄, associated with a control ũ defined as follows:

ũ(t) = u(t) on [0, t̄), ũ(t) = −1 on (t̄, t†), ũ(t) = 1 on (t†, T],    (35)

where t† is given by t† := t̄ + ∫_{x†}^{x̂} dξ/((g − f)(ξ)), and x† is a solution of κ(x†) = T − t̄, the map κ(·) being defined by

κ(ξ) := ∫_ξ^{x̂} dζ/((g − f)(ζ)) + ∫_ξ^{x̄} dζ/((f + g)(ζ)), ξ ∈ I.
By Hypothesis (H2), the function κ is decreasing, with κ(ξ) → +∞ when ξ → a and κ(x̄) = B < T − t̄. Therefore x† is uniquely defined, with x† ∈ (x_m, x̄). Moreover, one has t† ∈ (t̄, T). Expression (35) is thus well defined. The solution x̃ is depicted on Fig. 3. Clearly, x̃ reaches x̂ at time t̄ and coincides with x on the interval [0, t̄]. On the interval [t̄, t†], x̃ has the fastest descent and therefore stays below x on this interval. At time t = t†, one has x̃(t†) = x†. Finally, the constant control u = 1 is the only one that allows to connect x† at time t† to x̄ at time T. So, any periodic solution has to be above x̃ on [t†, T]. We conclude that one has x(t) ≥ x̃(t) for any t ∈ [0, T]. As ψ(x) > ψ(x̄) for x ∈ (x̄, x̂] and ψ is increasing on [x_m, x_M], one can write ∫₀ᵀ ψ(x(t)) dt ≥ ∫₀ᵀ ψ(x̃(t)) dt. To conclude, since one has x† > x_m, x̂ > x_M and η > 0 on I, one obtains ∫₀ᵀ ψ(x̃(t)) dt > ūT, which is not possible according to Lemma 2.1. We then conclude that the inequality x̂ ≤ x_M is satisfied. In a similar manner, one can prove the other inequality x̌ ≥ x_m.

5.2. The logistic with depensation. Some populations are known to present a depensation in the first part of their growth function [10], which is also called a weak Allee effect. This is represented by the following modification of the logistic function:

f₀(x) := r x^α (1 − x/K), with α > 1.

We shall consider here the function ℓ(x) = x (i.e., the criterion is simply the level of the stock x). Let us define E⋆ := h(x⋆), where x⋆ is the point at which h reaches its maximum on (0, K).
We now distinguish two cases, depending on whether E_max is below or above E⋆.

5.2.1. Case 1: E_max < E⋆. Note first that there are two solutions λ₁(E_max) and λ₂(E_max) on the interval (0, K) of the equation h(x) = E_max, such that λ₁(E_max) < x⋆ < λ₂(E_max). One can then check that Hypotheses (H1)-(H2)-(H3) are fulfilled on the interval I := (λ₂(E_max), K). For any Ē ∈ (0, E_max), one can also show, as in the logistic model, that there exists a unique solution x̄ ∈ I of (37), which is moreover a stable steady state of (36) (see [10]). Proposition 2.1 then guarantees an over-yielding, whatever T > 0 is. Fig. 6 depicts the optimal cost value J_T(û_T) for the following parameter values: r = 0.3, K = 5, α = 2.5, x̄ = 4, Ē = 0.48, and E_max = 0.5893. Finally, in presence of depensation in the model with a maximal harvesting effort E_max < E⋆, our analysis shows that periodic solutions cause a systematic decrease of the mean value of the stock (compared to constant harvesting).

5.2.2. Case 2: E_max > E⋆. One can easily check that Hypotheses (H1)-(H2) are fulfilled on the interval (0, K), but not Hypothesis (H3). Since x̄ is a stable steady state of the dynamics, the point x̄ belongs to the interval (x⋆, K) (see [10]). Note also that ψ is increasing in a neighborhood of x̄. Proposition 4.1 then guarantees the existence of the T-periodic trajectory B⁺B⁻B⁺ (or B⁻B⁺B⁻) that satisfies the integral constraint, for T not too large. Moreover, for T small enough, the function ψ is strictly convex on [x_m(T), x_M(T)], and we can conclude about the optimality of these trajectories according to Theorem 4.1.
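It is instructive to contrast the depensation case with the borderline situation of the plain logistic (no depensation): there ψ(x) = r(1 − x/K) is affine, so Lemma 2.1 forces the mean stock over any periodic regime with mean effort Ē to equal x̄ exactly (neither over- nor under-yielding). The following sketch checks this numerically; the dynamics ẋ = f₀(x) − E(t)x and all parameter values below are our own simplified stand-in for the model (36), not the paper's:

```python
import math

# Plain logistic with proportional harvesting (our own illustrative values):
#   x' = r x (1 - x/K) - E(t) x,  so psi(x) = r (1 - x/K) is affine.
r, K, Ebar, T = 1.0, 5.0, 0.4, 2.0
xbar = K * (1 - Ebar / r)           # steady state for constant effort Ebar
E = lambda t: Ebar + 0.3 * math.sin(2 * math.pi * t / T)  # mean effort Ebar

rhs = lambda t, x: r * x * (1 - x / K) - E(t) * x
dt, n = 0.001, 60000                # integrate over [0, 60] = 30 periods
x, xs = xbar, []
for i in range(n):
    t = i * dt
    k1 = rhs(t, x); k2 = rhs(t + dt/2, x + dt/2*k1)
    k3 = rhs(t + dt/2, x + dt/2*k2); k4 = rhs(t + dt, x + dt*k3)
    x += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    xs.append(x)

# Average the stock over the last period, once transients have died out.
period_steps = int(T / dt)
mean_x = sum(xs[-period_steps:]) / period_steps
print(mean_x, xbar)   # affine psi: the mean stock equals xbar
```

With depensation (α > 1), ψ is no longer affine, and the convexity or concavity of ψ near x̄ decides in which direction the mean stock moves, as analyzed in the two cases above.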
Using the same parameter values except E_max = 0.8235, the function F defined in (22) is depicted on Fig. 7 (left) for different values of T. We recall (see the proof of Proposition 3.2) that the existence of x_m, x_M is equivalent to the existence of a zero of F. One can see on this figure that T_max, as defined in the proof of Proposition 4.1, is approximately equal to 6. For T > 6, we cannot conclude about the existence of bang-bang trajectories, nor about their optimality. On the contrary, for T < 6, the B⁺B⁻B⁺ and B⁻B⁺B⁻ strategies are admissible and optimal, and x_m, x_M, x_T⁻, x_T⁺ are plotted as functions of T on Fig. 7 (right). Note that property (34) is fulfilled for all T < 6. Note also that the equation h(x) = Ē has two solutions on (0, K).

E-mail address: terence.bayen@umontpellier.fr
E-mail address: alain.rapaport@inra.fr
E-mail address: fatima.tani@etu.umontpellier.fr