Discrete & Continuous Dynamical Systems - S
December 2018, Volume 11, Issue 6
Issue on recent advances in control and optimization
In this paper we consider a class of stochastic reaction diffusion equations with polynomial nonlinearities. We prove existence and uniqueness of weak solutions and their regularity properties. We introduce a suitable topology on the space of stochastic relaxed controls and prove continuous dependence of solutions on controls with respect to this topology and the norm topology on the natural space of solutions. We also prove that the attainable set of measures induced by the weak solutions is weakly compact. Then we consider some optimal control problems, including the Bolza problem, and some target seeking problems in terms of the attainable sets in the space of measures, and prove existence of optimal controls. In the concluding section we briefly present some extensions of the results obtained here.
A class of optimal control problems of hybrid nature governed by semilinear parabolic equations is considered. These problems involve the optimization of switching times at which the dynamics, the integral cost, and the bounds on the control may change. First- and second-order optimality conditions are derived. The analysis is based on a reformulation involving a judiciously chosen transformation of the time domains. For autonomous systems and a time-independent integral cost, we prove that the Hamiltonian is constant in time when evaluated along the optimal controls and trajectories. A numerical example is provided.
The goal of the paper is to design a constructive impulsive trajectory extension for a class of control-affine dynamical systems subject to an asymptotic mixed constraint of complementarity type. Inspiration for the addressed models comes from the framework of Lagrangian mechanical systems with impactively blockable degrees of freedom. The constraint formalizes the requirement that "control actions steer the system's state from one prescribed configuration
A control SEIR-type model describing the spread of an Ebola epidemic in a population of constant size is considered on a given time interval. This model contains four bounded control functions: three of them are distancing controls in the community, at the hospital, and during burial; the fourth is a burial control. We consider the optimal control problem of minimizing the fraction of infectious individuals in the population at a given terminal time and analyze the corresponding optimal controls with the Pontryagin maximum principle. We use values of the model parameters and control constraints for which the optimal controls are bang-bang. To estimate the number of zeros of the switching functions that determine the behavior of these controls, we derive a linear non-autonomous homogeneous system of differential equations for the switching functions and their associated auxiliary functions. A subsequent study of the properties of solutions of this system allows us to obtain analytical estimates of the number of switchings and the type of the optimal controls for the model parameters and control constraints related to all Ebola epidemics from 1995 until 2014. Corresponding numerical calculations confirming these results are presented.
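As a purely illustrative sketch (the toy switching function below is hypothetical and unrelated to the paper's Ebola model), bang-bang controls of this kind take their bound values according to the sign of a switching function, and the number of switchings is the number of its sign changes:

```python
def bang_bang(phi, u_min, u_max):
    """Bang-bang control law: pick the bound selected by the sign of the
    switching function phi (sign convention assumed here for illustration)."""
    return lambda t: u_max if phi(t) > 0 else u_min

def count_switchings(phi, t0, t1, n=10_000):
    """Estimate the number of zeros of phi on [t0, t1] by sampling sign changes."""
    ts = [t0 + (t1 - t0) * k / n for k in range(n + 1)]
    signs = [1 if phi(t) > 0 else -1 for t in ts]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Toy switching function with zeros at t = 1 and t = 2 on [0, 3]:
phi = lambda t: (t - 1.0) * (t - 2.0)
u = bang_bang(phi, 0.0, 1.0)
print(count_switchings(phi, 0.0, 3.0))  # 2 switchings
print(u(0.5), u(1.5))                   # 1.0 0.0
```

The paper's contribution is an analytical bound on this number of zeros; the sampling above is only a numerical illustration of what is being counted.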
It is well known from Brockett's seminal theorem that openness of the mapping on the right-hand side of a given nonlinear ODE control system is a necessary condition for the existence of locally asymptotically stabilizing continuous stationary feedback laws. However, this condition fails to be sufficient for such feedback stabilization. In this paper we develop an approach of variational analysis to continuous feedback stabilization of nonlinear control systems in which openness is replaced by the linear openness property, which has been well understood and characterized in variational theory. This allows us, in particular, to obtain efficient conditions on the system data that support the sufficiency in Brockett's theorem and ensure local exponential stabilization by means of continuous stationary feedback laws. Furthermore, we derive new necessary conditions for local exponential and asymptotic stabilization of continuous-time control systems using both continuous and continuously differentiable stationary feedback laws, and we establish counterparts of the obtained sufficient conditions for local asymptotic stabilization by continuous stationary feedback laws in the case of nonlinear discrete-time control systems.
This paper studies multiobjective optimal control problems in the discrete time framework and in the infinite horizon case for bounded processes. The paper generalizes to the multiobjective case results obtained for single-objective optimal control problems in that framework. The dynamics are governed by difference equations. Necessary conditions of Pareto optimality are presented, namely Pontryagin maximum principles in strong and weak form. Sufficient conditions are also provided. Other notions of Pareto optimality are defined for the case where the infinite series do not necessarily converge, and links with these unbounded cases are established.
In this paper, we show that the existence of a global solution of a standard first-order partial differential equation can be reduced to the extendability of the solution of the corresponding ordinary differential equation under differentiability and local Lipschitz assumptions. Using this result, we can recover many known existence theorems for partial differential equations. Moreover, we demonstrate that such a result can be applied to the integrability problem in consumer theory. This result holds even if the differentiability condition is dropped.
We perform a geometric study of the equilibrium locus of the flow that models the diffusion process over a circular network of cells. We prove that when considering the set of all possible values of the parameters, the equilibrium locus is a smooth manifold with corners, while for a given value of the parameters, it is an embedded smooth and connected curve. For different values of the parameters, the curves are all isomorphic.
Moreover, we show how to build a homotopy between the different curves obtained for different values of the parameter set. This procedure allows the efficient computation of the equilibrium point for each value of some first integral of the system, a point that would otherwise be difficult to compute in higher dimensions. We illustrate this construction with some numerical experiments.
Finally, we show that when considering the parameters as inputs, one can easily bring the system asymptotically to any equilibrium point in the reachable set, which we also easily characterize.
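The homotopy construction suggests a standard numerical-continuation pattern; the following minimal sketch (a hypothetical scalar equilibrium equation, not the circular-network model) warm-starts Newton's method along a path of parameter values:

```python
def newton(f, df, x0, tol=1e-12, maxit=50):
    """Scalar Newton iteration for f(x) = 0, started at x0."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continue_equilibrium(f, df, p_path, x0):
    """Numerical continuation: follow the equilibrium f(x, p) = 0 along a
    path of parameter values, warm-starting Newton at the previous solution."""
    xs, x = [], x0
    for p in p_path:
        x = newton(lambda y: f(y, p), lambda y: df(y, p), x)
        xs.append(x)
    return xs

# Toy equilibrium equation: f(x, p) = x^2 - p, so x*(p) = sqrt(p).
f  = lambda x, p: x * x - p
df = lambda x, p: 2.0 * x
path = [1.0 + 0.1 * k for k in range(11)]          # p from 1.0 to 2.0
xs = continue_equilibrium(f, df, path, x0=1.0)
print(round(xs[-1], 6))                            # ~ sqrt(2) = 1.414214
```

Warm-starting along the homotopy is what makes each Newton solve cheap, which is the computational point of the construction described above.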
We study an optimal control problem for a non-autonomous SEIRS model with incidence given by a general function of the infective, the susceptible and the total population, and with vaccination and treatment as control variables. We prove existence and uniqueness results for our problem and, for the case of mass-action incidence, we present some simulation results designed to compare the autonomous model with a corresponding periodic one, as well as the controlled versus uncontrolled models.
We consider a nonlinear control system depending on two controls
We investigate variational problems with recursive integral functionals governed by infinite-dimensional differential inclusions with an infinite horizon and present an existence result in the setting of nonreflexive Banach spaces. We find an optimal solution in a Sobolev space of functions taking values in a Banach space under a Cesari-type condition. We also investigate sufficient conditions for the existence of solutions to the initial value problem for the differential inclusion.
In this article we study optimal control problems for systems that are affine with respect to some of the control variables and nonlinear with respect to the others. We consider finitely many equality and inequality constraints on the initial and final values of the state. We investigate singular optimal solutions for this class of problems, for which we obtain second order necessary and sufficient conditions for weak optimality in integral form. We also derive Goh pointwise necessary optimality conditions. An example is provided to illustrate the results.
In this article, structure-exploiting optimisation algorithms of the sequential quadratic programming (SQP) type are considered for optimal control problems with control and state constraints. Our approach is demonstrated for a 1D mathematical model of a vehicle transporting a fluid container. The model involves a fully coupled system of ordinary differential equations (ODE) and nonlinear hyperbolic first-order partial differential equations (PDE), although the ideas for exploiting the particular structure may be applied to more general optimal control problems as well. The time-optimal control problem is solved numerically by a full discretisation approach. The corresponding nonlinear optimisation problem is solved by an SQP method that uses exact first and second derivative information. The quadratic subproblems are solved using an active-set strategy. In addition, two approaches are examined that exploit the specific structure of the problem: (A) a direct method for the KKT system, and (B) an iterative method based on combining the limited-memory BFGS method with the preconditioned conjugate gradient method. Method (A) is faster for our model problem, but can be limited by the problem size. Method (B) opens the door for a potential extension of the truck-container model to three space dimensions.
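As a hedged illustration of the building block in approach (B), the following is a minimal preconditioned conjugate gradient sketch applied to a hypothetical diagonal SPD system (not the paper's KKT system; the operator and preconditioner are stand-ins):

```python
def pcg(matvec, b, precond, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient for A x = b, with A symmetric
    positive definite. `matvec` applies A; `precond` applies M^{-1}."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x for x = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Toy SPD system: A = diag(1, 2, 3) with a Jacobi preconditioner.
diag = [1.0, 2.0, 3.0]
matvec  = lambda v: [d * vi for d, vi in zip(diag, v)]
precond = lambda v: [vi / d for d, vi in zip(diag, v)]
x = pcg(matvec, [1.0, 2.0, 3.0], precond)
print([round(v, 6) for v in x])  # [1.0, 1.0, 1.0]
```

In approach (B) the role of `matvec` would be played by Hessian-vector products assembled from limited-memory BFGS updates, which is what makes the method matrix-free and attractive for larger discretisations.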
In this paper we study the structure of approximate solutions of autonomous Bolza variational problems on large finite intervals. We show that approximate solutions are determined mainly by the integrand, and are essentially independent of the choice of time interval and data.