OPTIMAL CONTROL OF NORMALIZED SIMR MODELS WITH VACCINATION AND TREATMENT

Abstract. We study a model based on the so-called SIR model to control the spreading of a disease in a varying population via vaccination and treatment. Since we assume that medical treatment is not immediate, we add a new compartment, M, to the SIR model. We work with the normalized version of the proposed model, for which we consider the problem of steering the system to a specified target. We consider both a fixed time optimal control problem with L¹ cost and the minimum time problem to drive the system to the target. In contrast to the literature, we apply different techniques of optimal control to our problems of interest. Using the direct method, we first solve the fixed time problem and then validate the computed solutions using both necessary conditions and second order sufficient conditions. Notably, we perform a sensitivity analysis of the solutions with respect to some parameters in the model. We also use the Hamilton-Jacobi approach to study how the minimum time function varies with respect to perturbations of the initial conditions. Additionally, we consider a multi-objective approach to study the trade-off between the minimum time and the social costs of the control of diseases. Finally, we propose the application of Model Predictive Control to deal with uncertainties of the model.


1. Introduction. Recently, we have witnessed an increasing interest in the application of optimal control methods to the important task of controlling the spreading of infectious diseases via vaccination, medical treatment and education measures. Using compartmental models to describe the dynamic behaviour of populations exposed to a disease, various different optimal control problems have been proposed (see, for example, [27], [4], [24], [25], to name but a few). These models differ in the choice of the dynamic model, in the constraints enforced and in the cost.
In this paper, we illustrate how various optimal control techniques can help in the study of the control of the spreading of infectious diseases within a certain population. Here, we work in an abstract framework and our data is not specific to any real population or disease. While some of the analysis used in this paper is now routine in the literature, we bring some unexplored techniques of optimal control to mathematical epidemiology. Indeed, we discuss and perform a sensitivity analysis, we study minimum time problems and a multi-objective control approach, and we illustrate the application of Model Predictive Control (MPC) to the control of diseases.
Our starting point is a SIR compartmental model of a horizontally transmitted disease. Assuming that the disease under consideration can be controlled via both vaccination and medical treatment, we propose the addition of an extra compartment, denoted M, where those infected with the disease who are under medical treatment are placed for the duration of the treatment (see Section 2 for a description of our model). Our aim is to steer the number of infected individuals in a certain population to residual values. We call such a residual value the target. We consider both fixed terminal time problems and minimum time problems. Although we do not consider a constant population, we work with a normalized model to keep our model with a minimum (but meaningful) number of differential equations.
In Section 3, we first concentrate on a fixed time optimal control problem with an L¹ cost representing the economic cost of the infection and the treatment. We solve this problem numerically via the direct optimization method and we validate the computed solution by comparing it against the analysis of the necessary conditions. Additionally, and as in [25], we check second order sufficient conditions for all occurring bang-bang controls. We then go a step further and add a fresh discussion of the sensitivity of the solutions with respect to the parameters.
If we put aside any considerations of the economic costs of fighting the disease (as some wealthy societies may do), how much time do we need to drive the number of infectious individuals to the required residual values? This leads us to consider the minimum time problem to steer our system to the target. Minimum time problems are routine in Engineering but, to the best of our knowledge, they have not been treated in the context of Mathematical Epidemiology. We consider the minimum time as a function of the initial conditions and show that this function is the solution of a Hamilton-Jacobi-Bellman (HJB) equation. We use the software ROC-HJ to compute this function and then show how different trajectories can be computed by the same code. It is of interest to note that software codes based on the HJB approach profit from the fact that the number of differential equations describing the system is low.
Although the study of minimum time problems is of high relevance in Epidemiology, it does not take into account the costs of the control intervention, which may be highly relevant to some societies. We study how to balance control costs and minimum time efficiency: we propose a multi-objective framework in Section 5.
Finally, we turn to issues concerning the implementation of computed control policies. We consider a situation where public health measures to control a certain disease are implemented. In such cases it is of foremost importance to monitor the situation to make sure that all goals are achieved. Since any model used to design such measures is but a rough approximation of the real situation, it is no surprise that the real system does not respond as expected. We propose the use of a Model Predictive Control method to control the real system while maintaining the target and keeping the cost as close as possible to the cost obtained from mathematical models. We assume that the implemented measures are determined by solving the fixed time optimal control problem proposed previously. To better simulate reality we use a dynamic system where one of the parameters of the model, the incidence rate c, is perturbed by a time dependent function c_p(t).
2. SIR model. To model the progress of infectious diseases in a population, SIR models place the individuals of a population of size N into three different compartments relevant to the epidemic:
• susceptible (S) (vulnerable to catching the disease),
• infected (I) (infectious individuals who can spread the disease),
• recovered (R) (immune by vaccination or recovered after being infected).
An individual is in the S compartment if he/she is vulnerable or susceptible to catching the disease. Infected individuals are also infectious and so they can spread the disease when in contact with susceptible individuals. They are placed in compartment I. It is assumed that individuals who recover from infection become immune. They are put into the compartment R, which also contains those individuals who have become immune by vaccination.
Such models are also based on the assumption that all newborns are susceptible to the disease. The disease modeled in this way exhibits the so-called horizontal incidence, meaning that the transmission of the disease proceeds from one infected individual to a susceptible one. The contact may be direct (through touching, for example) or indirect, as in the case of coughing. Those infected who recover from the disease become immune. These assumptions cover many known diseases. In contrast with SEIR models, SIR models do not consider diseases where individuals are exposed to the disease but do not become infectious right away.
To present the SIR model, let S(t), I(t), and R(t) denote the number of individuals in the susceptible, infectious and recovered class at time t, respectively. The total population then is N(t) = S(t) + I(t) + R(t). Here, we consider that the parameters in the spreading of a disease are kept constant on a fixed time horizon. The parameters describing the effect of the disease are g1, the rate at which infectious individuals recover, and a1, the death rate caused by the disease. The rate of transmission is described by the number of contacts between susceptible and infectious individuals. If c is the incidence coefficient of horizontal transmission, such a rate is c S(t)I(t)/N(t). The population parameters of interest are the natural birth rate b and death rate d.
Let us further assume that the control of the spreading of the disease is done through a vaccination program aimed at the susceptible individuals and the medical treatment of the infected population. Let u(t) be the rate of susceptible individuals vaccinated per unit of time and let η be the efficiency rate of the vaccine, that is, the rate of those who are vaccinated and become immune. Let also v(t) be the rate of those infected who undergo medical treatment. Since the medical treatment does not have an immediate response, we introduce the new compartment
• medical treatment (M).
We define M(t) as the number of infected individuals under medical treatment. We assume that those under medical treatment recover fully and become immune at the rate g2. However, those under treatment can also die at a rate a2 due to the disease or the treatment itself. Clearly, we assume that a1 > a2 and g1 < g2, since otherwise treatment would not be an option. The above considerations lead to the dynamical system (1)-(5) with its initial conditions. Here, we consider u(t) ∈ [0, u_max], where u_max ∈ [0, 1] is the maximum rate of susceptible people that can be vaccinated per unit of time.
Similarly, we take v(t) ∈ [0, v_max], where v_max ∈ [0, 1] is fixed. Both u_max and v_max translate the maximum capability of the vaccination and treatment programs to be implemented. Note that the differential equation (4) can be removed from the system of equations (1)-(5), since R can be obtained from N(t) = S(t) + I(t) + M(t) + R(t). This gives rise to a model with four differential equations, which can be further reduced to three if it is normalized. Although the number of equations is not an issue when solving optimal control problems numerically, it may be of importance when indirect methods are considered and for the solution of the Hamilton-Jacobi-Bellman (HJB) equation; see, for example, the ROC-HJ code [6]. These observations, together with some other features of interest of the normalized models discussed in [13], lead us to carry out our forthcoming analysis with the normalized version of (1)-(5). Such a normalized model is obtained from (1)-(5) by dividing each variable by N(t) and setting s = S/N, i = I/N, m = M/N and r = R/N. In particular, we get s(t) + i(t) + m(t) + r(t) = 1 for all t.

The normalized counterpart of (1)-(5) is then the system (7)-(10) with its initial conditions. Once again we have the control constraints u(t) ∈ [0, u_max] and v(t) ∈ [0, v_max], where u_max ∈ [0, 1] and v_max ∈ [0, 1] are fixed. Since s(t) + i(t) + m(t) + r(t) = 1 and r(t) only appears in (10), equation (10) can be removed from the system. However, the system (7)-(9) does not have such an easy and simple interpretation as the system (1)-(5); for a discussion regarding the interpretation of analogous SEIR models, see [13]. One remarkable fact is that the natural death rate d disappears. Besides the reduction by one differential equation, normalized models describe the spreading of diseases in different populations with the same birth rate.
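Since the normalized equations (7)-(10) themselves are not reproduced in this text, the following Python sketch reconstructs a plausible right-hand side from the verbal description above: dividing (1)-(5) by N(t) cancels the natural death rate d and introduces correction terms driven by the disease-induced death rates. All parameter values below are placeholders for illustration, not those of Table 1.

```python
import numpy as np

# Hypothetical parameter values, for illustration only (not Table 1).
b, c = 0.01, 1.1          # birth rate, incidence coefficient
g1, g2 = 0.1, 0.5         # recovery rates (infected / under treatment)
a1, a2 = 0.08, 0.02       # disease-induced death rates (a1 > a2)
eta = 0.9                 # vaccine efficiency

def simr_normalized(x, u, v):
    """Right-hand side of the normalized SIMR dynamics, reconstructed from
    the verbal description: dividing (1)-(5) by N(t) makes the natural
    death rate d cancel, leaving the birth rate b and correction terms
    proportional to a1*i + a2*m."""
    s, i, m, r = x
    corr = a1 * i + a2 * m          # shared term coming from dN/dt
    ds = b * (1 - s) - c * s * i - eta * u * s + s * corr
    di = c * s * i - (g1 + a1 + b) * i - v * i + i * corr
    dm = v * i - (g2 + a2 + b) * m + m * corr
    dr = g1 * i + g2 * m + eta * u * s - b * r + r * corr
    return np.array([ds, di, dm, dr])

def simulate(x0, u, v, T=20.0, n=20000):
    """Explicit Euler integration with constant controls (u, v)."""
    h = T / n
    x = np.array(x0, dtype=float)
    for _ in range(n):
        x = x + h * simr_normalized(x, u, v)
    return x

x0 = [0.8, 0.1, 0.0, 0.1]           # fractions: s + i + m + r = 1
xT = simulate(x0, u=1.0, v=1.0)
print(xT, xT.sum())                  # fractions still sum to 1
```

Note that the sum of the four right-hand sides vanishes whenever s + i + m + r = 1, so the simplex is invariant; this is the property that allows equation (10) to be dropped.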
Throughout this work we use the parameters presented in Table 1. Note that we do not concentrate on any specific disease or population. Since our focus is on illustrating how various optimization methods can help to better control infectious diseases, our parameter values are not clinically validated and are mainly illustrative. The birth rate chosen is close to that of some European countries, and the incidence rate c of the disease is high (giving rise to a high reproduction number). We obviously choose the death rate of those under treatment to be much lower than that of those infected, while the opposite holds for the recovery rates of those two compartments.

3. Fixed time optimal control problem.
In what follows we concentrate on three different optimal control problems involving our model (7)-(9). The basic control problem is one with a fixed terminal time T (years). A simple examination of the solution to this basic problem leads us to consider two other problems, namely the minimum time problem and a two-objective control problem for which we determine the Pareto front; see Section 5.
The first problem depicts the situation when, in the presence of an infectious disease, the aim is to determine vaccination and treatment policies to control the spreading of the disease over a time horizon of T years. It is reasonable to seek policies guaranteeing that the percentage of infected population is driven, if not to zero, then to a small residual quantity. Let us assume that such a quantity is 5 × 10⁻⁴. In optimal control terms this means that we want to ensure that i(T) ≤ 5 × 10⁻⁴. As is well known, the design of any public health policy always takes into account the costs associated with its implementation. For our model (7)-(9) such costs include the cost of vaccination, the cost of the medical treatment and also the economic cost of the infection itself (for example, absence from work of infected individuals or how the presence of the disease affects the revenues from tourism, etc.). These considerations lead us to the optimal control problem (P1). Observe that problem (P1) is affine in the two control variables u and v. There is an extensive literature on optimal control problems in epidemiology with control-affine system dynamics. Many authors have used control-quadratic L² costs, which lead to continuous control functions that are difficult to administer in practice. For that reason, L¹ costs with an affine control are better suited to practical applications. The conditions of the Maximum Principle [29, 19] suggest that optimal controls are concatenations of bang-bang and singular arcs. This control structure will be confirmed by our computations. The piecewise constant bang-bang controls are easy to administer, while in many cases the singular controls can be closely approximated by piecewise constant controls; cf. the piecewise constant control approximation (20) and the examples in [25] and [13]. Let us turn again to the problem (P1) of interest. To characterize its solution we now apply the Maximum Principle (MP) [29, 19].
Let x = (s, i, m) denote the state variable. Note that now we are maximizing −J1(u, v). Observe that our system can then be written in the form ẋ = f(x) + g1(x)u + g2(x)v, with appropriate functions f, g1 and g2. Let (s*, i*, m*, u*, v*) be an optimal solution of (P1) and let x* = (s*, i*, m*) be the optimal state. Define the Hamiltonian accordingly. Then the MP asserts the existence of a scalar λ ≥ 0 and an absolutely continuous function p such that conditions (i)-(iv) hold for almost every t ∈ [0, T]. The condition (iv) above is a consequence of the final time constraint i(T) ≤ 5 × 10⁻⁴, which dictates that p_i(T) takes values in the normal cone to the set ]−∞, 5 × 10⁻⁴]. Because of such a constraint we cannot a priori assume λ = 1. However, since our numerical results are consistent with λ = 1, we proceed with the analysis taking λ = 1.
Define the switching functions φ1 and φ2; in terms of the data of (P1), explicit expressions are readily obtained. From (ii) we get the adjoint equations (deleting the t dependency for readability). Moreover, from (iv) we deduce that p_s(T) = 0, p_i(T) ≥ 0 and p_m(T) = 0. Let us now turn to the control law (14). It leaves out the possibility of a vanishing switching function. If φ_j(t) = 0 (j = 1, 2) and (d/dt)φ_j(t) ≠ 0, then the respective control switches between the maximum value and 0: the control is then said to be bang-bang. Singular controls appear if one switching function φ_j(t) vanishes on a time interval. Our computations show that u*(t) can have singular arcs, whereas v*(t) is always bang-bang. This prompts us to derive a formula for the singular control u*.
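The explicit formula of the control law (14) is not reproduced above; for a Hamiltonian that is affine in the controls it has the standard form below, stated here generically (the sign convention is assumed to follow the maximization of the Hamiltonian):

```latex
u^*(t) = \begin{cases}
  u_{\max} & \text{if } \phi_1(t) > 0, \\
  0        & \text{if } \phi_1(t) < 0, \\
  u_{\mathrm{sing}}(t) & \text{if } \phi_1(t) \equiv 0 \text{ on an interval},
\end{cases}
\qquad
v^*(t) = \begin{cases}
  v_{\max} & \text{if } \phi_2(t) > 0, \\
  0        & \text{if } \phi_2(t) < 0.
\end{cases}
```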
Let us assume that u*(t) is singular on an interval [t1, t2]. Then the strict Generalized Legendre-Clebsch condition (GLC) [23] must hold there. After some long calculations we obtain the expression (19) for the singular value of u*(t) in terms of the state variable x and the adjoint variable p (again we drop the t dependency). An analogous analysis of φ2 for determining a singular control v results in long and tedious calculations. Such calculations are much longer than those done in [24], where a SIR model is considered. The reasoning in [24] cannot be applied in our case, since here we place the population under treatment in an extra compartment. We therefore refrain from presenting such calculations, since all our numerical computations show that the control v* does not have singular arcs.
3.1. Numerical solutions. We now present the numerical solution of problem (P1) for T = 20 and with the parameters in Table 1. We consider two cases with different weights in the objective J1 = J1(u, v) of the control problem (P1): Case 1 (A = 10, B = 1, C = 3) and Case 2 (A = 10, B = 3, C = 1). Problem (P1) is solved by the so-called direct method; for a description of this method see, for example, [28]. We first discretize the problem on a sufficiently fine grid to obtain a nonlinear optimization problem that is implemented in the Applied Modeling Programming Language AMPL [17]. AMPL can be interfaced with several large-scale nonlinear optimization solvers such as the interior-point solver Ipopt; see [31]. We mostly use N = 10000 grid nodes and apply the Implicit Euler Scheme or the Trapezoidal Rule to compute the solution with an error tolerance eps = 10⁻⁹. In Figure 2 we present graphs of the computed state variables side by side with those of the corresponding adjoint (costate) variables.
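The transcription step of the direct method can be illustrated in a few lines. The sketch below (Python with SciPy rather than AMPL/Ipopt) builds the implicit Euler defect constraints for a reconstructed version of the reduced dynamics; the parameter values and the right-hand side are assumptions for illustration only, and the controls are held fixed instead of being optimized, since the point here is only the structure of the nonlinear program that Ipopt receives.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (not the paper's Table 1 values).
b, c, g1, g2, a1, a2, eta = 0.01, 1.1, 0.1, 0.5, 0.08, 0.02, 0.9

def f(x, u, v):
    """Reduced normalized dynamics in (s, i, m); a reconstruction from the
    text, not the paper's equations (7)-(9) verbatim."""
    s, i, m = x
    corr = a1 * i + a2 * m
    return np.array([b * (1 - s) - c * s * i - eta * u * s + s * corr,
                     c * s * i - (g1 + a1 + b) * i - v * i + i * corr,
                     v * i - (g2 + a2 + b) * m + m * corr])

# Implicit Euler transcription: the direct method turns the ODE into the
# defect constraints  x[k+1] - x[k] - h * f(x[k+1], u[k+1], v[k+1]) = 0,
# which are handed to the NLP solver together with the discretized cost.
T, N = 20.0, 200
h = T / N
u_grid = np.full(N, 1.0)        # a fixed control guess (u_max = 1)
v_grid = np.full(N, 1.0)

x = np.zeros((N + 1, 3))
x[0] = [0.8, 0.1, 0.0]
for k in range(N):
    # Solve the implicit step for x[k+1] with fsolve (Ipopt would instead
    # treat all x[k] as optimization variables simultaneously).
    g = lambda y: y - x[k] - h * f(y, u_grid[k], v_grid[k])
    x[k + 1] = fsolve(g, x[k])

# Feasibility check: the trajectory satisfies every defect constraint.
defects = np.array([x[k + 1] - x[k] - h * f(x[k + 1], u_grid[k], v_grid[k])
                    for k in range(N)])
print(np.abs(defects).max())
```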
The optimal controls are presented in Figure 4. The control u* is bang-singular-bang with two switching points t1^u and t2^u, while v* is bang-bang with a single switching point t1^v. Using AMPL and Ipopt, we verify that the values of the switching times are approximately t1^u = 4.45, t2^u = 13.55 and t1^v = 6.46.
To show that the computed solution satisfies the necessary optimality conditions precisely, we compare the singular part of the computed u* with the analytical value u_sing(x*(t), p*(t)) given in (19), where we insert the computed values of x*(t) and p*(t). Moreover, we also compare the behavior of v* with that of its switching function. The results are presented in Figure 4, where the graphs concerning u* are on the left and those of v* on the right. The graphs of u* and of u_sing in (19) coincide on the interval ]t1^u, t2^u[. It is also clear that the switching point of v* coincides with the zero of φ2.
We are not able to verify numerically that the second-order sufficient conditions (SSC) in Aronna et al. [1] hold. However, in Section 3.2 we will show that an approximative control with a nearly identical functional value satisfies the SSC. The construction of the approximative control is motivated by the fact that the computed singular u*(t) takes values around 0.188 for t ∈ ]t1^u, t2^u[ and is nearly constant. Moreover, the control law (14) for v* is precisely satisfied.
It is worth mentioning that our computations show that the percentage of infected population reaches the target i(T ) = 5 × 10 −4 at around t = 15.5. This fact will be explored later on when we consider the minimum time problem.
Case 2 (A = 10, B = 3, C = 1). We keep the parameters as in Table 1. The computed controls in Case 1 and Case 2 are compared in Figure 5. Remarkably, in Case 2 both controls are bang-bang. Figure 5 also displays the switching functions φ1 and φ2 in Case 2 (in dashed blue lines), which shows that the computed controls and switching functions perfectly match the control law (14).

3.2. Second-order sufficient conditions and sensitivity analysis. Case 1 (A = 10, B = 1, C = 3). The computed control u* in Figure 3 is a bang-singular-bang control, whereas v* is bang-bang. In principle, one could apply the second-order sufficient conditions (SSC) for bang-singular controls in Aronna et al. [1] to verify the local optimality of the controls. The second-order test in [1] requires checking whether a certain quadratic form in a Hilbert space is positive definite on an associated cone. However, so far it is not clear how to design a numerical test of the positive definiteness of the quadratic form.
Instead, we shall verify the SSC for the approximative control defined in (20), in which the nearly constant singular arc of u* in Figure 3 is replaced by a constant control value u_c. To optimize this structure, we study the so-called Induced Optimization Problem (IOP), which comprises the optimization variables t1, t2, t3, u_c and the equation i(T) = 5 × 10⁻⁴. The IOP can be solved using the arc-parametrization method [26, 28] and its implementation in the optimal control package NUDOCCCS [9]. Since the SSC hold, it follows from a standard sensitivity result in finite-dimensional optimization [16] (cf. also [11]) that the optimal solution (t1, t2, t3, u_c) of the IOP is locally a C¹-function with respect to all parameters q in the system. The code NUDOCCCS also allows us to compute the sensitivity derivatives dt_k/dq for k = 1, 2, 3 and du_c/dq at a nominal parameter value q0. Choosing, e.g., the parameter q ∈ {B, C, a1}, we get the sensitivity derivatives evaluated at the nominal values B0 = 3, C0 = 1, (a1)0 = 0.08. Note that the sensitivity derivatives dt_k/dq are equal for k = 2, 3. This follows from the construction of the approximative control (20). The sensitivity derivatives quantify our more intuitive feeling of how switching times change under parameter perturbations. As an example, let us increase the weight parameter q = B in the objective J1. We then expect that the first switching time t1 decreases, while t2 increases and the constant value u(t) = u_c decreases. The sensitivity derivatives could also be used in real-time control techniques based on Taylor expansions for computing approximations of perturbed solutions; cf. [12]. However, in a biological framework such real-time control methods are not needed.
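The idea behind the induced optimization problem and its sensitivity derivatives can be replayed on a toy example. In the sketch below, a hypothetical scalar infection model with a single bang-bang switching time t1 plays the role of the IOP; the sensitivity derivative dt1/dρ is then obtained by finite differences, mimicking what NUDOCCCS computes exactly for the full problem. All names and values are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import brentq

# Toy induced optimization problem (IOP), in the spirit of the
# arc-parametrization method: the only remaining variable is the switching
# time t1 of a bang-bang treatment control
#   v(t) = v_max for t <= t1,  v(t) = 0 for t > t1,
# applied to the scalar infection model  di/dt = (rho - v(t)) * i.
T, i0, eps_target, v_max = 20.0, 0.1, 5e-4, 1.0

def i_final(t1, rho):
    """Closed-form terminal value i(T) for the two-arc control."""
    return i0 * np.exp(rho * T - v_max * t1)

def solve_iop(rho):
    """Find the switching time enforcing the target i(T) = eps_target."""
    return brentq(lambda t1: i_final(t1, rho) - eps_target, 0.0, T)

rho0 = 0.2                      # nominal parameter value
t1_nom = solve_iop(rho0)

# Sensitivity derivative dt1/drho by central finite differences, the kind
# of quantity computed exactly for the paper's IOP.
dq = 1e-6
dt1_drho = (solve_iop(rho0 + dq) - solve_iop(rho0 - dq)) / (2 * dq)
print(t1_nom, dt1_drho)
```

For this toy problem t1(ρ) = (ρT + ln(i0/ε))/v_max is linear in ρ, so the finite-difference value agrees with the exact derivative T/v_max.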
Case 2 (A = 10, B = 3, C = 1). In this case both controls u and v are bang-bang with only one switching time t1, resp., t2. Here we can show that second-order sufficient conditions (SSC) are satisfied not only for the associated IOP but for the bang-bang control problem [28]. First, we apply the arc-parametrization method [26, 28] and the control package NUDOCCCS [9] to determine the switching times t1 and t2. The IOP with the terminal constraint i(T) = 5 × 10⁻⁴ has the solution J(u) = 15.56806, t1 = 2.252141, t2 = 11.16786.
The SSC for the IOP are satisfied, since the reduced (projected) Hessian of the Lagrangian is computed as the positive number ProjH = 0.6383 > 0. Moreover, the bang-bang controls in Figure 5, right, satisfy the strict bang-bang property with respect to the Maximum Principle. It follows from Theorem 7.1 in [28] that the controls (u*(t), v*(t)) provide a strict strong maximum. Again, it follows from the sensitivity results in finite-dimensional optimization (cf. [16, 11]) that the optimal solution (t1, t2) of the IOP is locally a C¹-function of all parameters q in the system. Moreover, the bang-bang controls with switching times t1(q) and t2(q) provide a strict strong minimum for all parameters q in a neighborhood of a nominal parameter q0. The code NUDOCCCS allows us to compute the sensitivity derivatives dt_k/dq for k = 1, 2. Choosing again the parameter q ∈ {B, C, a1}, we get the sensitivity derivatives evaluated at the nominal values B0 = 3, C0 = 1, (a1)0 = 0.08. In particular, note the high sensitivity of the switching time t2 of the control v with respect to the infection-induced death rate a1.

4. Minimum time problem. Minimum time problems, routine in Engineering, have received little attention in Epidemiology. They can, however, be of help when studying the control of infectious diseases. It is clear from the numerical solution of problem (P1) that there is no need to consider the full time horizon of T years to drive the percentage of infected population to the residual value of 5 × 10⁻⁴. Putting aside any considerations of the cost, it is reasonable to ask how long it would take to drive the system (7)-(9) to the target. The answer is given by the solution of a control problem where the cost to be minimized is the final time T needed to steer the state to the target. As one may guess, the interest in this problem does not reside in the profiles of the optimal controls. Not surprisingly, our computations show that the optimal controls are both at their maximum values throughout the interval [0, T]. It is, however, of interest to see how the minimum time t_f changes with the initial conditions.
We consider a family of control problems (P_y) parametrized by the initial position y = (s0, i0, m0). Now, consider the map T : y → T(y) that associates to each initial position y the optimal value of the problem (P_y): T(y) := inf(P_y).
Observe again that the state equation can be written in the form ẋ = f(x) + g1(x)u + g2(x)v, with appropriate functions f, g1 and g2. We denote by x_y^{u,v} the trajectory satisfying the state equation with the controls (u, v) and starting from the position y at the initial time 0.
It is known in control theory (see [5] and the references therein) that the map T satisfies the dynamic programming principle (21) for every h > 0. In general, the minimum time function is discontinuous and may take infinite values when there is no policy that can steer the system to the desired target. It is therefore of interest to introduce a new control problem (P_{θ,y}), with x_y^{u,v}(0) = y, whose cost penalizes the distance of the final state to the target via ϕ, where ϕ(y) := i − 5 × 10⁻⁴. Observe that (P_{θ,y}) has no final state constraint. The problem is parametrized by θ, the final time, and the initial condition y. For (P_{θ,y}), we define the value function ϑ(θ, y) = inf(P_{θ,y}).
The cost function in (P_{θ,y}) is the distance from the target. So the problem (P_{θ,y}) provides, at every final time θ, an indication of how far the system deviates from the desired target. In particular, at time θ > 0, the optimal value satisfies ϑ(θ, y) ≤ 0 if and only if there is an admissible control that drives i(θ) to a value at or below the threshold 5 × 10⁻⁴. As for the minimum time value, it turns out that T(y) = inf{θ ≥ 0 : ϑ(θ, y) ≤ 0} holds for every initial position y. Results in control theory assure us that the function ϑ is Lipschitz continuous and is a solution to a partial differential equation, the Hamilton-Jacobi-Bellman (HJB) equation (for y = (s0, i0, m0)): ∂_θ ϑ(θ, y) + H(y, D_y ϑ(θ, y)) = 0, where H(y, p) = −f(y) · p + max_{u,v}((−g1(y)u − g2(y)v) · p), and where ∂_θ ϑ and D_y ϑ stand for the partial derivative with respect to the time variable θ and the gradient with respect to y, respectively. Solving this equation has been the subject of a vast literature in the numerical analysis of PDEs. We refer the reader to [30, 15, 7, 2]. Here we use the software ROC-HJ [6] to compute numerically the minimum time function via the HJB equation satisfied by ϑ.
In Figure 6, we plot the minimum time as a function of i0 and s0, keeping m0 constant. Let us emphasize that the numerical computation of the minimum time function requires considerable computational effort, because we are solving a time-dependent PDE. However, the SIMR model involves only three state variables, so the HJB equation is stated in a three-dimensional space (for y), which is quite acceptable for computing the value function in a reasonable time. Once the value function is computed and the minimum time function is stored, one can reconstruct the optimal trajectories for different scenarios without solving the HJB equation again. Indeed, the trajectories can be reconstructed by appealing to the dynamic programming principle (21); see [3, 2]. Here, the structure of the optimal control is obtained from (21) without requiring an analysis of the optimality conditions of first or second order. This structure ultimately confirms the natural intuition that the vaccination rate must be maximal if the state i(t) is to be reduced in minimum time. However, the approach can also be used in situations where the structure of the control is more complicated (for example, if we add other constraints on the state). In Figure 7, we plot different trajectories corresponding to various initial data (s0, i0, m0). In this figure, the three state variables corresponding to the same trajectory are plotted in the same color.
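The fixed-point structure that HJB solvers such as ROC-HJ exploit can be demonstrated on a one-dimensional toy problem. The sketch below computes the minimum time function of ẋ = u, |u| ≤ 1, with target {|x| ≤ 0.05}, by value iteration on the dynamic programming principle; this is a semi-Lagrangian scheme in miniature, an assumption-free stand-in for the paper's three-dimensional computation.

```python
import numpy as np

# Minimum-time function of the toy 1-D system dx/dt = u, |u| <= 1, with
# target set {|x| <= 0.05}, computed by value iteration on the dynamic
# programming principle  T(x) = h + min_u T(x + h*u).
eps, h = 0.05, 0.01
xs = np.linspace(-1.0, 1.0, 401)
Tval = np.where(np.abs(xs) <= eps, 0.0, 1e6)   # 1e6 = "not yet reached"

for _ in range(500):
    Tnew = Tval.copy()
    for u in (-1.0, 1.0):
        # Semi-Lagrangian update: interpolate T at the point x + h*u.
        shifted = np.interp(xs + h * u, xs, Tval)
        Tnew = np.minimum(Tnew, h + shifted)
    Tnew[np.abs(xs) <= eps] = 0.0              # target nodes stay at zero
    if np.max(np.abs(Tnew - Tval)) < 1e-12:    # fixed point reached
        break
    Tval = Tnew

# The exact minimum time is max(|x| - eps, 0); the scheme reproduces it
# up to the time step h.
exact = np.maximum(np.abs(xs) - eps, 0.0)
err = np.max(np.abs(Tval - exact))
print(err)
```

Once this fixed point is stored, trajectories can be reconstructed by repeatedly choosing the control that realizes the minimum in the update, which is the one-dimensional analogue of the trajectory reconstruction via (21) used for Figure 7.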

5. Multi-objective optimal control. The computation of the minimum time function confirms that there is no need to consider the time horizon T = 20 years to drive the percentage of infected population to the residual value i(T) = 5 × 10⁻⁴. However, the vaccination policy associated with the minimum time function may have a high economic cost. For this reason it is interesting to treat the minimum time problem in the framework of a two-objective optimal control problem, where the first objective is the cost in problem (P1) and the second objective is the minimal time J2(u, v) = T. Hence, we minimize the vector
(J1(u, v), J2(u, v)) (22)
subject to the dynamics (7)-(9), the boundary condition i(T) = 5 × 10⁻⁴ and the control constraint (u(t), v(t)) ∈ [0, u_max] × [0, v_max] from problem (P1).
For the numerical treatment of multi-objective control problems, the reader is referred to Kaya, Maurer [22]. Multi-objective finite-dimensional optimization problems are thoroughly studied in Eichfelder [14]. The Pareto front (efficient set) of the two-objective control problem can be determined by minimizing the scalarized objective (23). Henceforth, we shall use the abbreviations J_k = J_k(u, v), k = 1, 2, and J^{(w)} = J^{(w)}(u, v). Thus we have J^{(0)} = J1 and J^{(1)} = J2 = T. The optimal solution of the scalarized control problem constitutes a compromise solution between the two objectives. Since the Pareto front turns out to be convex, there is no need to apply the Tschebycheff scalarization introduced in [22] for studying non-convex fronts. We solve the scalarized control problem for the weights w = k/100, k = 0, 1, . . . , 100, and determine the corresponding values J1 and J2 = T as well as the norm ||(J1, J2)||₂ of the Pareto point. How can we single out a compromise solution? For that purpose, we introduce a new objective which will be optimized over the Pareto front; cf. Bonnel, Kaya [8]. A good candidate for such a new objective is the norm ||(J1, J2)||₂ of the Pareto point, measuring its distance to the origin. Determining the Pareto front for other parameters in the system, we see that the Pareto fronts are always convex and look qualitatively the same as in Figure 8. Hence, we refrain from giving more numerical results.

5.2. Case 2: A = 10, B = 1, C = 3. The optimal control u(t) for the scalarized functional J^{(w)} in (23) is bang-bang for every w ∈ [0, 1] with only one switch at t1, while v(t) ≡ v_max always holds. The solutions for the limiting weights w = 0 and w = 1 correspond to the single-objective problems; in particular, the solutions for w = 1 in Case 1 and Case 2 agree by definition of the scalarized functional (23). The Pareto front, the objectives J1(u, v) and J2(u, v) = T as functions of w, and the distance of the Pareto front to the origin are shown in Figure 9.
To determine a compromise solution for the two objectives, we consider again the distance ||(J1(u, v), J2(u, v))||₂ of a point on the Pareto front to the origin. The minimal norm has the value 15.2117 and is attained at w = 0.39 (Figure 9, bottom row, left), for which we obtain the numerical results: w = 0.39: J1(u) = 11.0135, J2(u) = T = 8.25137, t1 = 2.066, v(t) ≡ v_max.
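The weight sweep and the min-norm compromise can be replayed on a toy convex bi-objective problem. In the sketch below, a single decision variable stands in for the whole control policy, and J1, J2 are assumed quadratic objectives chosen only so that the Pareto front is convex, as in Figure 8; the scalarization J^(w) = (1 − w)J1 + wJ2 matches the convention J^(0) = J1, J^(1) = J2 stated above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy convex bi-objective problem: one decision variable a in [0, 1]
# trades off a control-like cost J1 against a time-like cost J2.
J1 = lambda a: a ** 2
J2 = lambda a: (1.0 - a) ** 2

front = []
for k in range(101):
    w = k / 100.0
    # Weighted-sum scalarization: J^(w) = (1 - w) * J1 + w * J2.
    res = minimize_scalar(lambda a: (1 - w) * J1(a) + w * J2(a),
                          bounds=(0.0, 1.0), method='bounded')
    front.append((J1(res.x), J2(res.x)))

front = np.array(front)
norms = np.linalg.norm(front, axis=1)       # distance of each Pareto
k_best = int(np.argmin(norms))              # point to the origin
print(k_best / 100.0, front[k_best])
```

For these objectives the scalarized minimizer is a = w, so the front is traced exactly and the min-norm compromise falls at w = 0.5, mirroring the role of w = 0.39 in the computation above.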

6. Model predictive control applied to (P1). Up to now, our focus has been on optimal control techniques that can help to define strategies for the control of infectious diseases. We now turn to the next phase, consisting of the follow-up of the measures applied.
Since any model used to define a control strategy is but a rough approximation of reality, it is no surprise that, once health control measures are implemented, the measured values of the state variables differ from those obtained in the simulations by solving the original optimal control problem used to define the strategy. From the public health point of view, it is then important to follow up the "real" situation to make sure that the objectives defined at the outset are achieved. If the control strategies are implemented with high precision, the differences between expected and real values of the states may be due to uncertainties in the parameters. For infectious diseases, while the death and recovery rates may be constant and not difficult to determine, the incidence rate c is known to vary over time and may change due to weather conditions or to changes in the behaviour of the population.
In this section we illustrate, via simulations, how Model Predictive Control (MPC) can help guarantee that the real system performs according to the objectives first defined. We choose MPC (see [18]) because this optimization-based method has proved very successful in handling uncertainties in areas like power systems and robotics, and it has been extensively used in industry. Surprisingly, to our knowledge, there is no literature on applications of MPC to epidemiological control problems.
For our purpose, we assume that the implemented control policy is the solution of problem (P1), Case 1, with T = 20. To make sure that the gap between the expected and the "real" values of the state variables does not jeopardize the objective, we propose the application of the following MPC algorithm:
1. Divide [0, 20] into N subintervals of equal length h = 20/N. Set k = 0.
2. Calculate the solution of (P1) for t ∈ [0, 20] (meaning we take T = 20). Here the initial values and parameters used are those in Table 1. Let (u0*, v0*) denote the optimal control for this problem.
3. For t ∈ [0, h], implement the computed controls (u0*, v0*).
4. a. Set k = k + 1.
   b. Evaluate the "real" states s_real(kh), i_real(kh) and m_real(kh).
   c. Solve (P1) (again, with the parameters given in Table 1), restricting now the time interval to [kh, 20] and using the sampled states s_real(kh), i_real(kh) and m_real(kh) as initial conditions. Denote by (uk*, vk*) the optimal control for this problem.
   d. For t ∈ [kh, (k + 1)h], implement the computed controls (uk*, vk*) and go to 4.a.
Observe that at each iteration k we calculate the optimal policy by solving (P1) with t ∈ [kh, 20], ensuring the minimization of the cost over the remaining interval while still driving the percentage of infected population to a residual value.
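The structure of the loop above can be sketched as follows. This is a minimal illustration only: the dynamics are a hypothetical normalized SIR-type surrogate (not the exact SIMR model of the paper), and the optimal-control solve is replaced by a placeholder returning maximal controls, standing in for an actual solver for (P1).

```python
import numpy as np

T_FINAL, N = 20.0, 10
h = T_FINAL / N

def plant_step(state, u, v, t0, t1, c_fun, steps=100):
    """Euler-integrate a surrogate normalized model over [t0, t1].
    The rates 0.1 and 0.5 below are made-up illustrative constants."""
    s, i, m = state
    dt = (t1 - t0) / steps
    for k in range(steps):
        c = c_fun(t0 + k * dt)
        ds = -c * s * i - u * s           # vaccination u removes susceptibles
        di = c * s * i - v * i - 0.1 * i  # treatment v moves infected to M
        dm = v * i - 0.5 * m              # treated individuals leave M
        s, i, m = s + dt * ds, i + dt * di, m + dt * dm
    return np.array([s, i, m])

def solve_ocp(state, t0):
    """Placeholder for solving (P1) on [t0, 20]; here it simply returns
    the maximal controls (u_max, v_max) = (1.0, 1.0)."""
    return 1.0, 1.0

# Time-varying "real" incidence rate, as in equation (24).
c_real = lambda t: 1.8 - 0.7 * np.tanh(t + 0.5) + 0.02 * np.cos(np.pi * t)

state = np.array([0.99, 0.01, 0.0])  # hypothetical initial (s, i, m)
for k in range(N):
    u, v = solve_ocp(state, k * h)    # step 4.c: re-solve on [kh, 20]
    state = plant_step(state, u, v, k * h, (k + 1) * h, c_real)  # step 4.d
print(state)
```

Even with this crude surrogate, the shrinking-horizon structure is visible: at each sampling instant kh the controls are recomputed from the sampled "real" state, so parameter mismatch between model and plant is corrected by feedback rather than accumulating over the whole horizon.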
To simulate what we have been calling the "real" situation, where we can obtain the sampled values s_real(kh), i_real(kh) and m_real(kh) at each iteration of the MPC, we run our code assuming that all the parameters characterizing the disease and the population are as in Table 1, with the exception of the incidence rate c (and the initial conditions, of course), which we take to be the function of time

c_p(t) = 1.8 − 0.7 tanh(t + 0.5) + 0.02 cos(πt),   (24)

whose graph is shown in Figure 10. Applying the MPC method described above with N = 10, we obtain state variables of the form presented in Figure 11. For readability, we show only the first 5 iterations of the MPC method.

Figure 11. Six computed values of the state variables for the fixed-horizon MPC described above, assuming that the "real" system differs from the model of (P1) solely with respect to the incidence rate c. From left to right on the first row, we show the s and i state variables; on the second row we show the m state variable. All three graphs have different scales.
Observe that, due to the perturbation of c, the sampled values of the percentage of infected population increase considerably with respect to the expected values in the first years, although the control measures applied are at their maximum. However, as the perturbation c_p(t) settles to swing around 1.1, the sampled states tend to the initially calculated ones. To keep the exposition short, we refrain from presenting the graphs of the various controls; they do not differ greatly from those of the original problem (P1).
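That c_p(t) eventually swings around 1.1 can be read off directly from (24): for large t, tanh(t + 0.5) → 1, leaving 1.1 + 0.02 cos(πt). A quick numerical check:

```python
import math

def c_p(t):
    """Time-varying incidence rate from equation (24)."""
    return 1.8 - 0.7 * math.tanh(t + 0.5) + 0.02 * math.cos(math.pi * t)

# Early on, c_p lies well above the nominal value ...
print(c_p(0.0))   # ≈ 1.50
# ... while for large t it oscillates in the band 1.1 ± 0.02.
print(c_p(10.0))  # ≈ 1.12
print(c_p(11.0))  # ≈ 1.08
```

So during roughly the first three years the plant sees a markedly higher incidence rate than the nominal model, which explains the initial excess of infected population despite maximal controls.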
In the above example we simulate a situation where the incidence rate c_p(t), after the first three years, oscillates around the nominal value 1.1. Although this may resemble the behaviour of some diseases, there may be other cases where the uncertainties in some parameters behave in a more erratic way. It may then be of interest to apply MPC with a receding horizon. In this method, the time horizon is updated at each iteration of the MPC so as to guarantee that the percentage of infected people is driven to the target with minimum cost.