DCDS
Control of dynamical systems with discrete and uncertain observations
Aleksandar Zatezalo, Dušan M. Stipanović
In this paper we provide a design methodology for computing control strategies for a primary dynamical system operating in a domain shared with other dynamical systems, where the interactions between these systems and the primary one are of interest or are actively pursued. Information from the other systems is available to the primary system only at discrete time instances and is assumed to be corrupted by noise. With only this limited and noisy information, the primary system must make decisions based on the estimated behavior of the other systems, which may range from cooperative to noncooperative. These decisions are reflected in the design of the most appropriate action, that is, the control strategy of the primary system. The design is illustrated on particular collision avoidance scenarios.
keywords: stochastic processes, discrete observations, control theory, estimation, robust control, game theory
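The abstract above refers to information from the other systems that arrives only at discrete time instances and is corrupted by noise. The paper's exact measurement model is not reproduced here; a minimal sketch of a discrete-time, noisy observation model consistent with that description (the output map h_j, noise v_j, and sampling instants t_k are illustrative notation introduced here, not taken from the paper) is

\[
y_j(t_k) = h_j\bigl(x_j(t_k)\bigr) + v_j(t_k), \qquad k = 0, 1, 2, \ldots,
\]

where x_j denotes the state of the j-th other system and y_j(t_k) is the information available to the primary system at time t_k.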
NACO
Safe and reliable coverage control
Dušan M. Stipanović, Christopher Valicka, Claire J. Tomlin, Thomas R. Bewley
In this paper we consider the problem of designing control laws for multiple mobile agents trying to accomplish three objectives. The first objective is to sense a given compact domain. The second is to avoid collisions between the agents themselves as well as with obstacles. The third is to keep the communication links between the agents reliable, which requires the agents to stay relatively close to one another during the sensing operation. The design of the control laws is based on carefully constructed objective functions and on the assumption that the agents' dynamic models are nonlinear yet affine in the control inputs. As an illustration of some performance characteristics of the proposed control laws, a numerical example is provided.
keywords: proximity control, multiple objectives, coverage control, avoidance control
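The abstract above assumes agent dynamics that are nonlinear yet affine in the control inputs. As a sketch of that model class (the symbols f_i, g_i, and u_i are notation introduced here for illustration, not taken from the paper), such dynamics can be written as

\[
\dot{x}_i = f_i(x_i) + g_i(x_i)\,u_i, \qquad i = 1, \ldots, N,
\]

where x_i is the state of agent i, u_i is its control input, and f_i, g_i are nonlinear functions of the state only, so the right-hand side is affine in u_i.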
NACO
A note on monotone approximations of minimum and maximum functions and multi-objective problems
Dušan M. Stipanović, Claire J. Tomlin, George Leitmann
In paper [12], the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal, which is to accomplish one or more objectives, where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing the objectives are formulated using particular convergent approximations of minimum and maximum functions, depending on the formulation of the goals and objectives. These approximations are differentiable functions and they converge monotonically to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example of a capture of two evaders by two pursuers is provided.
    This note presents a preview of the treatment in [12].
keywords: minimum function, maximum function, multiple objectives, dynamic systems, approximations of functions
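The abstract above refers to differentiable approximations that converge monotonically to the minimum and maximum functions. As a sketch, one standard family with these properties (offered here only as an illustration, not claimed to be the exact construction of [12]) is, for positive arguments x_1, ..., x_N and a parameter \delta > 0,

\[
\Bigl(\sum_{i=1}^{N} x_i^{-\delta}\Bigr)^{-1/\delta} \;\le\; \min_{1 \le i \le N} x_i, \qquad \max_{1 \le i \le N} x_i \;\le\; \Bigl(\sum_{i=1}^{N} x_i^{\delta}\Bigr)^{1/\delta}.
\]

Both bounds are differentiable in x_1, ..., x_N, and as \delta increases to infinity the lower bound increases monotonically to the minimum while the upper bound decreases monotonically to the maximum.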