Mathematical Control & Related Fields
December 2020 , Volume 10 , Issue 4
In this paper, we treat the problem of uniform exact boundary controllability for the finite-difference space semi-discretization of the
We establish Strichartz estimates for the regularized Schrödinger equation on a two-dimensional compact Riemannian manifold without boundary. As a consequence, we deduce global existence and uniqueness results for the Cauchy problem for the nonlinear regularized Schrödinger equation, and we prove, under the geometric control condition, the Kato smoothing effect for solutions of this equation in this particular geometry.
In 2013, Mao initiated the study of stabilization of continuous-time hybrid stochastic differential equations (SDEs) by feedback control based on discrete-time state observations. In recent years, this study has been further developed using a constant observation interval. However, time-varying observation frequencies have not yet been considered. For non-autonomous periodic systems in particular, it is more sensible, in terms of control efficiency, to account for the time-varying property and observe the system at periodic time-varying frequencies. This paper introduces a periodic observation interval sequence and investigates how to stabilize a periodic SDE by feedback control based on periodic observations, in the sense that the controlled system achieves
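The control scheme described in this abstract can be illustrated with a minimal simulation. The sketch below is a hypothetical example, not taken from the paper: it stabilizes a scalar linear SDE by a feedback law that is updated only at observation times drawn from an assumed periodic interval sequence, with all parameter values chosen for illustration.

```python
# Minimal sketch (hypothetical example): feedback stabilization of the scalar
# SDE  dX = (a*X + u) dt + sigma*X dW  where the control u(t) = -k * X(t_j)
# is recomputed only at discrete observation times t_j, whose gaps cycle
# through a periodic interval sequence, e.g. (0.05, 0.15).
import math
import random

def simulate(a=1.0, sigma=0.5, k=3.0, intervals=(0.05, 0.15),
             x0=1.0, T=10.0, dt=1e-3, seed=0):
    rng = random.Random(seed)
    x, t = x0, 0.0
    next_obs, j = 0.0, 0
    u = 0.0
    while t < T:
        if t >= next_obs:                       # observe state, update control
            u = -k * x
            next_obs += intervals[j % len(intervals)]
            j += 1
        dw = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment
        x += (a * x + u) * dt + sigma * x * dw  # Euler-Maruyama step
        t += dt
    return x

# With k = 0 the uncontrolled system grows exponentially in mean; with the
# discretely-observed feedback, |X(T)| is driven close to zero.
print(abs(simulate()))
```

The point of the sketch is that the control is held constant between observations, so stabilization hinges on the observation gaps being short enough relative to the system's growth rate.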
We consider variational discretization [
This paper studies the portfolio management problem for an individual with a non-exponential discount function and habit formation in finite time. The investor receives a deterministic income, invests in risky assets, buys insurance, and consumes continuously. The objective is to maximize the utility of excess consumption, heritage, and terminal wealth. The non-exponential discounting makes the optimal strategy adopted by a naive person time-inconsistent. The equilibrium strategy for a sophisticated person is a subgame-perfect Nash equilibrium, and the sophisticated person is time-consistent. We calculate the analytical solution for both the naive strategy and the equilibrium strategy in the CRRA case and compare the results of the two strategies. By numerical simulation, we find that the sophisticated individual spends less on consumption and insurance and saves more than the naive person. The difference between the strategies of the naive and sophisticated person decreases over time. Furthermore, if an individual of either type is more patient in the future or has a greater tendency toward habit formation, he/she will consume less and buy less insurance, and the degree of inconsistency will also increase. The sophisticated person's consumption and habit level are initially lower than those of a naive person but are higher in later periods.
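The time inconsistency caused by non-exponential discounting can be seen in a standard two-date comparison. This is a generic sketch under an assumed hyperbolic discount function, not the paper's specific model:

```latex
% Exponential discounting D(t,s) = e^{-\rho(s-t)}: the relative weight of two
% future dates does not depend on the evaluation time t, so an optimal plan
% stays optimal as time passes.
\[
\frac{D(t,s_2)}{D(t,s_1)} = e^{-\rho(s_2 - s_1)}, \qquad t \le s_1 < s_2 .
\]
% Hyperbolic discounting D(t,s) = (1+\beta(s-t))^{-\gamma/\beta}: the same
% ratio depends on t,
\[
\frac{D(t,s_2)}{D(t,s_1)}
= \left(\frac{1+\beta(s_1-t)}{1+\beta(s_2-t)}\right)^{\gamma/\beta},
\]
% and it decreases as t increases toward s_1 (present bias), so preferences
% between the two dates shift as they approach: the naive agent re-optimizes
% and deviates from earlier plans, while the sophisticated agent anticipates
% this and commits to an equilibrium strategy.
```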
In this paper, we investigate a class of time-inconsistent stochastic control problems for stochastic differential equations with deterministic coefficients. We study these problems within the game-theoretic framework and look for open-loop Nash equilibrium controls. Under suitable conditions, we derive a verification theorem for equilibrium controls via a flow of forward-backward stochastic partial differential equations. To illustrate our results, we discuss a mean-variance problem with a state-dependent trade-off between the mean and the variance.
We study an optimal control problem for a quasi-linear elliptic equation with anisotropic p-Laplace operator in its principal part and
We consider stochastic impulse control problems when the impulses cost functions depend on
We study one-parameter perturbations of finite-dimensional real Hamiltonians depending on two controls, and we show that, generically in the space of Hamiltonians, conical intersections of eigenvalues can degenerate into semi-conical intersections of eigenvalues. Then, through the use of normal forms, we study the problem of ensemble controllability between the eigenstates of a generic Hamiltonian.