MCRF aims to publish original research as well as expository papers on mathematical control theory and related fields. The goal is to provide a complete and reliable source of mathematical methods and results in this field. The journal will also accept papers from some related fields such as differential equations, functional analysis, probability theory and stochastic analysis, inverse problems, optimization, numerical computation, mathematical finance, information theory, game theory, system theory, etc., provided that they have some intrinsic connections with control theory.
MCRF is edited by a group of leading international experts in mathematical control theory and related fields. A key feature of MCRF is rapid publication, with special emphasis on the highest scientific standards. The journal is essential reading for scientists and researchers who wish to keep abreast of the latest developments in the field.
- AIMS is a member of COPE. All AIMS journals adhere to the publication ethics and malpractice policies outlined by COPE.
- Publishes 4 issues a year in March, June, September and December.
- Publishes online only.
- Indexed in Science Citation Index-Expanded, Web of Science, ISI Alerting Services, Current Contents/Physical, Chemical & Earth Sciences (CC/PC&ES), MathSciNet, Scopus and Zentralblatt MATH.
- Archived in Portico and CLOCKSS.
- MCRF is a publication of the American Institute of Mathematical Sciences. All rights reserved.
One proves via variational techniques the existence and uniqueness of a strong solution to the stochastic differential equation
In this paper we consider regional deterministic finite-dimensional optimal control problems, in which the dynamics and the cost functional depend on the region of the state space in which the state lies, and are discontinuous at the interfaces between regions.
Under the assumption that optimal trajectories have a locally finite number of switchings (i.e., no Zeno phenomenon), we use the duplication technique to show that the value function of the regional optimal control problem is the minimum over all possible structures of trajectories of value functions associated with classical optimal control problems settled over fixed structures, each of them being the restriction to some submanifold of the value function of a classical optimal control problem in higher dimension.
The lifting duplication technique is thus seen as a kind of desingularization of the value function of the regional optimal control problem.
In turn, we establish sensitivity relations for regional optimal control problems and we prove that the regularity of the value function of such problems is the same as (i.e., not more degenerate than) that of the higher-dimensional classical optimal control problem that lifts it.
Partial and full sensitivity relations are obtained for nonautonomous optimal control problems with infinite horizon subject to state constraints, assuming the associated value function to be locally Lipschitz in the state. Sufficient structural conditions are given to ensure such Lipschitz regularity in the presence of a positive discount factor, as is typical of macroeconomic models.
An infinite-dimensional bilinear optimal control problem with infinite-time horizon is considered. The associated value function can be expanded in a Taylor series around the equilibrium, the Taylor series involving multilinear forms which are uniquely characterized by generalized Lyapunov equations. A numerical method for solving these equations is proposed. It is based on a generalization of the balanced truncation model reduction method and some techniques of tensor calculus, in order to attenuate the curse of dimensionality. Polynomial feedback laws are derived from the Taylor expansion and are numerically investigated for a control problem of the Fokker-Planck equation. Their efficiency is demonstrated for initial values which are sufficiently close to the equilibrium.
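In generic notation (illustrative only, not the paper's exact statement), such a Taylor expansion of the value function $\mathcal{V}$ around the equilibrium $\bar{x}$ reads

```latex
\mathcal{V}(x) \;\approx\; \sum_{k=2}^{p} \frac{1}{k!}\,
\mathcal{T}_k\big(x-\bar{x},\dots,x-\bar{x}\big),
```

where each $\mathcal{T}_k$ is a symmetric $k$-linear form: $\mathcal{T}_2$ is characterized by a Riccati/Lyapunov-type operator equation, the higher-order terms by generalized Lyapunov equations, and a polynomial feedback law is obtained by inserting the gradient of the truncated expansion into the pointwise optimality condition for the Hamiltonian.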
In this paper we consider the initial boundary value problem of the Korteweg-de Vries equation posed on a finite interval
subject to the nonhomogeneous boundary conditions,
In this paper, we consider optimal control problems associated with a class of quasilinear parabolic equations, where the coefficients of the elliptic part of the operator depend on the state function. We prove existence, uniqueness and regularity for the solution of the state equation. Then, we analyze the control problem. The goal is to get first and second order optimality conditions. To this aim we prove the necessary differentiability properties of the relation control-to-state and of the cost functional.
This article is concerned with the model calibration for financial assets with mean-reverting price processes, which is an important topic in mathematical finance.
The discussion focuses on the recovery of local volatility from market data for the Schwartz (1997) model. The problem is formulated as an inverse parabolic problem, and the necessary condition for determining the local volatility is derived in an optimal control framework. An iterative algorithm is provided to solve the optimality system, and a synthetic numerical example illustrates its effectiveness.
In this paper we study the weak laws of large numbers for sublinear expectation. We prove that, without any moment condition, the weak laws of large numbers hold in the sense of convergence in capacity induced by some general sublinear expectations. For some specific sublinear expectation, for instance, mean deviation functional and one-side moment coherent risk measure, we also give weak laws of large numbers for corresponding capacity.
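In the standard notation of sublinear expectation theory (a schematic statement, not necessarily the paper's exact formulation), such a weak law asserts that for i.i.d. random variables $\{X_i\}$ under a sublinear expectation $\hat{\mathbb{E}}$ with induced capacity $V$,

```latex
V\!\left( \operatorname{dist}\!\left( \frac{1}{n}\sum_{i=1}^{n} X_i,\;
[\underline{\mu}, \overline{\mu}] \right) \ge \varepsilon \right)
\;\longrightarrow\; 0
\qquad \text{for every } \varepsilon > 0,
```

where $\overline{\mu} = \hat{\mathbb{E}}[X_1]$ and $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$: the sample mean concentrates on an interval of mean values rather than on a single point.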
This paper is concerned with a dynamic game of N weakly-coupled linear backward stochastic differential equation (BSDE) systems involving mean-field interactions. The backward mean-field game (MFG) is introduced to establish the backward decentralized strategies. To this end, we introduce the notions of a Hamiltonian-type consistency condition (HCC) and a Riccati-type consistency condition (RCC) in the BSDE setup. Then, the backward MFG strategies are derived based on HCC and RCC, respectively. Under mild conditions, these two MFG solutions are shown to be equivalent. Next, the approximate Nash equilibrium property of the derived MFG strategies is also proved. In addition, the scalar-valued case of the backward MFG is solved explicitly. As an illustration, an example from quadratic hedging with relative performance is further studied.
In this paper we analyze how changes in inverse S-shaped probability weighting influence optimal portfolio choice in a rank-dependent utility model. We derive sufficient conditions for the existence of an optimal solution of the investment problem, and then define the notion of a more inverse S-shaped probability weighting function. We show that an increase in inverse S-shaped weighting typically leads to a lower allocation to the risky asset, regardless of whether the return distribution is skewed left or right, as long as it offers a non-negligible risk premium. Only for lottery stocks with poor expected returns and extremely positive skewness does an increase in inverse S-shaped probability weighting lead to larger portfolio allocations.
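For context, the rank-dependent utility of a payoff $X$ with distribution function $F_X$, utility function $u$, and probability weighting function $w$ (with $w(0)=0$, $w(1)=1$) can be written in the standard textbook form (the paper's notation may differ) as

```latex
V(X) \;=\; \int_{\mathbb{R}} u(x)\,
\mathrm{d}\!\left[\, 1 - w\big(1 - F_X(x)\big) \right],
```

i.e., the expectation of $u(X)$ under the distorted distribution $1 - w(1-F_X)$. An inverse S-shaped $w$ is concave near $0$ and convex near $1$, so it overweights both tails of the return distribution.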
We consider the infinite dimensional linear control system described by the population dynamics model of Lotka-McKendrick with spatial diffusion. Considering control functions localized with respect to the spatial variable but active for all ages, we prove that the whole population can be steered to zero in any positive time. The main novelty we bring is that, unlike the existing results in the literature, we can also control the population of ages very close to 0. Another novelty brought in is the employed methodology: as far as we know, the present work is the first one remarking that the null controllability of the considered system can be obtained by using the Lebeau-Robbiano strategy, originally developed for the null-controllability of the heat equation.
The present paper is devoted to the study of the well-posedness of BSDEs with mean reflection whenever the generator has quadratic growth in the
We study the solvability of nonlinear backward stochastic evolutionary equations driven by a space-time white noise. We first establish a novel a priori estimate for solutions of linear backward stochastic evolutionary equations, and then give an existence and uniqueness result for nonlinear backward stochastic evolutionary equations. A dual argument plays a crucial role in the proofs of these results. Finally, an example is given to illustrate the existence and uniqueness result.
This paper concerns the recursive utility maximization problem. We assume that the coefficients of the wealth equation and the recursive utility are concave; some interesting and important cases with nonlinear and nonsmooth coefficients then satisfy our assumption. After giving an equivalent backward formulation of our problem, we employ the Fenchel-Legendre transform and derive the corresponding variational formulation. By the convex duality method, the primal "sup-inf" problem is translated into a dual minimization problem and the saddle point of our problem is derived. Finally, we obtain the optimal terminal wealth. To illustrate our results, three cases for investors with ambiguity aversion are explicitly worked out under some special assumptions.
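For reference, the Fenchel-Legendre transform of a convex function $f$ is

```latex
f^{*}(y) \;=\; \sup_{x}\,\big\{ \langle x, y\rangle - f(x) \big\},
```

and weak duality $\sup\inf \le \inf\sup$ always holds; under concavity assumptions of the kind made on the coefficients here, the duality gap closes and the common value is attained at a saddle point, which is precisely what the convex duality method exploits.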
In this work controlled systems of semilinear parabolic equations are considered. Only one control is acting in both equations and it is distributed in a subdomain. Local feedback stabilization is studied. The approach is based on approximate controllability for the linearized system and the use of an appropriate norm obtained from a Lyapunov equation. Applications to reaction-diffusion systems are discussed.
In this paper, we consider the stability of a laminated beam equation, derived by Liu, Trogdon, and Yong [
This paper is concerned with some optimal control problems for equations with blowup or quenching property. We first study the existence and Pontryagin's maximum principle for optimal controls which have the minimal energy among all the controls whose corresponding solutions blow up at the right-hand time end-point of a given functional. Then, the same problem for quenching case is discussed. Finally, we establish Pontryagin's maximum principle for optimal controls of extended problems after quenching.
This paper is devoted to a study of controllability and observability problems for some stochastic coupled linear parabolic systems, by only one control and through an observer, respectively. In order to get a null controllability result, the Lebeau-Robbiano technique is adopted. The key point is to prove an observability inequality for certain stochastic coupled backward parabolic systems by an iteration, when the terminal values belong to a finite-dimensional space. Unlike for deterministic systems, Kalman-type rank conditions for the controllability of stochastic coupled parabolic systems no longer hold. Meanwhile, based on the Carleman estimates method, an observability inequality and a unique continuation property for general stochastic linear coupled parabolic systems through an observer are derived.
Higher eigenvalues of composite materials for anisotropic conductors are considered. To get the existence result for minimizing problems, relaxed problems are introduced by the homogenization method. Then, necessary conditions for minimizers are yielded. Based on the necessary conditions, it is shown that in some cases, optimal conductivities of relaxed minimizing problems can be replaced equivalently by a weighted harmonic mean of conductivities.
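For orientation, the weighted harmonic mean of conductivities $\sigma_1,\dots,\sigma_m$ with volume fractions $\theta_i \ge 0$, $\sum_{i}\theta_i = 1$, is

```latex
\sigma_h \;=\; \left( \sum_{i=1}^{m} \frac{\theta_i}{\sigma_i} \right)^{-1},
```

which in homogenization theory is the effective conductivity of a layered composite (laminate) in the direction perpendicular to the layers.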
This work continues and substantially extends our recent work on switching diffusions whose switching processes depend on the past states and take values in a countable state space. That is, the discrete component of the two-component process takes values in a countably infinite set, and its switching rate at the current time depends on the value of the continuous component involving the past history. This paper focuses on recurrence, positive recurrence, and weak stabilization of such systems. In particular, the paper aims to provide more verifiable conditions for recurrence, positive recurrence, and related issues. Assuming that the system is linearizable, it provides feasible conditions for positive recurrence, focusing on the coefficients of the system. Then linear feedback controls for weak stabilization are considered. Some illustrative examples are also given.
In this paper we investigate a new strategy combining logarithmic convexity (or the frequency function) and the Carleman commutator to obtain an observation estimate at one time for the heat equation in a bounded domain. We also consider the heat equation with an inverse square potential. Moreover, a spectral inequality for the associated eigenvalue problem is derived.
In many practical applications of control theory some constraints on the state and/or on the control need to be imposed.
In this paper, we prove controllability results for semilinear parabolic equations under positivity constraints on the control, when the time horizon is long enough. In fact, as we shall see, the minimal controllability time turns out to be strictly positive.
More precisely, we prove a global steady state constrained controllability result for a semilinear parabolic equation with $C^1$ nonlinearity, without sign or globally Lipschitz assumptions on the nonlinear term. Then, under suitable dissipativity assumptions on the system, we extend the result to any initial datum and any target trajectory.
We conclude with some numerical simulations that confirm the theoretical results and provide further information on the sparse structure of constrained controls in minimal time.
This paper is about a stock trading rule involving two stocks. The trader may have a long position in either stock or in cash. She may also switch between them any time. Her objective is to trade over time to maximize an expected return. In this paper, we reduce the problem to the optimal trading control problem under a geometric Brownian motion model with regime switching. We use a two-state Markov chain to capture the general market modes. In particular, a single market cycle consisting of a bull market followed by a bear market is considered. We also impose a fixed percentage cost on each transaction. We focus on simple threshold-type policies and study all possible combinations. We establish algebraic equations to characterize these threshold levels. We also present sufficient conditions that guarantee the optimality of these policies. Finally, some numerical examples are provided to illustrate our results.
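A minimal, hypothetical sketch of such a threshold-type rule can be simulated under a regime-switching geometric Brownian motion model. All parameter values, threshold levels, and names below are illustrative assumptions, not taken from the paper, and the sketch is simplified: it only switches between the two stocks (ignoring the cash position) and applies a fixed percentage cost per transaction.

```python
import math
import random

def simulate(T=1.0, n=1000, seed=7,
             mu=((0.15, 0.05), (-0.10, 0.02)),    # hypothetical drifts per regime (bull, bear)
             sigma=((0.25, 0.15), (0.30, 0.20)),  # hypothetical volatilities per regime
             q=(2.0, 2.0),                        # regime switching rates of the Markov chain
             buy_th=0.95, sell_th=1.05,           # hypothetical threshold levels on the price ratio
             cost=0.001):                         # fixed percentage transaction cost
    """Simulate two stocks under a two-state Markov chain and a threshold policy."""
    random.seed(seed)
    dt = T / n
    s = [1.0, 1.0]   # stock prices
    regime = 0       # 0 = bull, 1 = bear
    holding = 0      # index of the stock currently held
    wealth = 1.0
    for _ in range(n):
        # regime switches with probability q[regime] * dt per step
        if random.random() < q[regime] * dt:
            regime = 1 - regime
        # exact-in-law log-Euler step for each geometric Brownian motion
        old = s[:]
        for i in range(2):
            z = random.gauss(0.0, 1.0)
            s[i] *= math.exp((mu[regime][i] - 0.5 * sigma[regime][i] ** 2) * dt
                             + sigma[regime][i] * math.sqrt(dt) * z)
        # wealth earns the return of the stock held over this step
        wealth *= s[holding] / old[holding]
        # switch holdings when the price ratio crosses a threshold, paying the cost
        ratio = s[0] / s[1]
        if holding == 0 and ratio > sell_th:
            holding = 1
            wealth *= (1 - cost)
        elif holding == 1 and ratio < buy_th:
            holding = 0
            wealth *= (1 - cost)
    return wealth, s

wealth, prices = simulate()
print(wealth, prices)
```

Characterizing the optimal values of `buy_th` and `sell_th`, as the paper does via algebraic equations, is exactly what such a simulation cannot provide; it only evaluates a given pair of thresholds.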
In this paper, for a time optimal control problem governed by a linear time-varying ordinary differential equation, we give a criterion for checking, in finitely many steps, whether the set of admissible controls is nonempty.
Although having been developed for more than two decades, the theory of forward backward stochastic differential equations is still far from complete. In this paper, we take one step back and investigate the formulation of FBSDEs. Motivated by several considerations, both in theory and in applications, we propose to study FBSDEs in weak formulation, rather than the strong formulation in the standard literature. That is, the backward SDE is driven by the forward component, instead of by the Brownian motion. We establish the Feynman-Kac formula for FBSDEs in weak formulation, both in the classical and in the viscosity sense. Our new framework is especially efficient when the diffusion part of the forward equation involves the $Z$-component of the backward equation.
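Schematically, in generic notation (the paper's precise formulation may differ), driving the backward equation by the forward component $X$ rather than by the Brownian motion $W$ means writing

```latex
Y_t \;=\; g(X_T) \;+\; \int_t^T f\big(s, X_s, Y_s, Z_s\big)\,\mathrm{d}s
\;-\; \int_t^T Z_s\,\mathrm{d}X_s ,
```

so that $Z$ is paired with $\mathrm{d}X_s$ instead of with $\mathrm{d}W_s$ as in the strong formulation.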
In this paper, a kind of time-inconsistent recursive zero-sum stochastic differential game problems are studied by a hierarchical backward sequence of time-consistent subgames. The notion of feedback control-strategy law is adopted to constitute a closed-loop formulation. Instead of the time-inconsistent saddle points, a new concept named equilibrium saddle points is introduced and investigated, which is time-consistent and can be regarded as a local approximate saddle point in a proper sense. Moreover, a couple of equilibrium Hamilton-Jacobi-Bellman-Isaacs equations are obtained to characterize the equilibrium values and construct the equilibrium saddle points.
In this paper, we study the approximate null controllability of the stochastic heat equation with the control acting on a measurable subset, and the optimal actuator location of the minimum norm controls. We reformulate a relaxed optimization problem, over both the actuator location and its corresponding minimum norm control, as a two-person zero-sum game, and develop a necessary and sufficient condition for the optimal solution via Nash equilibrium. Finally, we prove that the relaxed optimal solution is an optimal actuator location for the classical problem.
In this paper, we establish a Hölder-type quantitative estimate of unique continuation for solutions to the heat equation with Coulomb potentials in either a bounded convex domain or a