# American Institute of Mathematical Sciences

ISSN: 2164-6066

eISSN: 2164-6074


## Journal of Dynamics & Games

October 2019, Volume 6, Issue 4


2019, 6(4): 259-275 doi: 10.3934/jdg.2019018
Abstract:

We construct asymptotically optimal strategies in two-player zero-sum repeated games with incomplete information on both sides in which stages have vanishing weights. Our construction, inspired by Heuer (IJGT 1992), proves the convergence of the values for these games, thus extending the results established by Mertens and Zamir (IJGT 1971) for $n$-stage games and discounted games to the case of arbitrary vanishing weights.
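In the standard formulation of weighted repeated games (notation assumed here, not taken from the abstract), a weight sequence assigns the relative importance of each stage; the sketch below records the weighted payoff and the vanishing-weights condition, together with the two classical special cases the result extends.

```latex
% Weights theta = (theta_m)_{m >= 1}, with theta_m >= 0 and sum_m theta_m = 1.
% Expected payoff under a strategy pair (sigma, tau):
\gamma_\theta(\sigma,\tau) \;=\;
  \mathbb{E}_{\sigma,\tau}\Big[\sum_{m \ge 1} \theta_m\, g(k,\ell,i_m,j_m)\Big].
% "Vanishing weights": no single stage carries non-negligible weight,
\max_{m} \theta_m \;\longrightarrow\; 0.
% Classical special cases: n-stage games (theta_m = 1/n for m <= n)
% and lambda-discounted games (theta_m = lambda (1-lambda)^{m-1}).
```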

2019, 6(4): 277-289 doi: 10.3934/jdg.2019019
Abstract:

In this paper we introduce a game whose value functions converge uniformly, as a parameter that measures the size of the steps goes to zero, to solutions of equations involving the second-order Pucci maximal operators.
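For reference, the Pucci extremal operators mentioned above are classically defined through the eigenvalues of the Hessian $D^2 u$, with ellipticity constants $0 < \lambda \le \Lambda$:

```latex
\mathcal{M}^{+}_{\lambda,\Lambda}(D^2 u)
  \;=\; \Lambda \sum_{e_i > 0} e_i \;+\; \lambda \sum_{e_i < 0} e_i,
\qquad
\mathcal{M}^{-}_{\lambda,\Lambda}(D^2 u)
  \;=\; \lambda \sum_{e_i > 0} e_i \;+\; \Lambda \sum_{e_i < 0} e_i,
```

where $e_1,\dots,e_n$ denote the eigenvalues of $D^2 u$.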

2019, 6(4): 291-314 doi: 10.3934/jdg.2019020
Abstract:

This paper builds on the work of Degond, Herty and Liu in [16] by considering $N$-player stochastic differential games. The control corresponding to a Nash equilibrium of such a game is approximated through model predictive control (MPC) techniques. In the case of a linear-quadratic running cost, considered here, the MPC method is shown to approximate the solution to the control problem by the best reply strategy (BRS) for the running cost. We then compare the MPC approach in the mean field limit with the popular mean field game (MFG) strategy. We find that our MPC approach reduces the two coupled PDEs to a single PDE, greatly improving the simplicity and tractability of the original problem. We give two examples of applications of this approach to previous literature and conclude with future perspectives for this research.
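The "two coupled PDEs" of the MFG approach are, in the standard Lasry–Lions form (generic notation assumed here, not the paper's), a backward Hamilton–Jacobi–Bellman equation for the representative player's value function coupled with a forward Fokker–Planck equation for the population density:

```latex
% Backward HJB for the value function u(t,x), with terminal condition:
-\partial_t u - \nu \Delta u + H(x, \nabla u) = F(x, m),
\qquad u(T,x) = G\big(x, m(T)\big),
% Forward Fokker--Planck for the density m(t,x), with initial condition:
\partial_t m - \nu \Delta m
  - \operatorname{div}\!\big(m\, D_p H(x, \nabla u)\big) = 0,
\qquad m(0) = m_0.
```

The forward-backward coupling is what makes the MFG system hard to solve numerically; collapsing it to a single PDE is the tractability gain the abstract refers to.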

2019, 6(4): 315-335 doi: 10.3934/jdg.2019021
Abstract:

The recently developed mean-field game models of corruption and bot-net defence in cyber-security, the evolutionary game approach to inspection and corruption, and the pressure-resistance game element can be combined into an extended model of the interaction of a large number of indistinguishable small players against a major player, with a focus on the study of security and crime prevention. In this paper we introduce such a general framework for complex interaction in network structures of many players, which incorporates individual decision making inside the environment (the mean-field game component), binary interaction (the evolutionary game component), and the interference of a principal player (the pressure-resistance game component). To perform concrete calculations with this complicated overall model, we suggest working, in sequence, in three basic asymptotic regimes: fast execution of personal decisions, small rates of binary interactions, and small payoff discounting in time.

2019, 6(4): 337-366 doi: 10.3934/jdg.2019022
Abstract:

This paper is concerned with the Markovian feedback strategies of piecewise deterministic differential games and their applications to business and management decision-making problems that involve multiple agents and continuous and impulse controls. For a class of piecewise deterministic differential games over finite or infinite horizons, we formulate conditions for the value functions in the form of quasi-variational inequalities, prove a verification theorem, and derive a criterion for the Markovian regime change in certain cases. These results are applied to a technology adoption problem involving multiple companies engaged in the extraction of an exhaustible resource with different technologies. Using the model proposed by Long et al. in [16], we show the existence of a pure Markovian strategy and develop an algorithm for computing the solutions.
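For infinite-horizon impulse control problems, the quasi-variational inequality characterizing the value function typically takes the following generic form (a sketch with assumed notation, not the paper's: $\mathcal{L}$ is the generator of the uncontrolled dynamics, $\rho$ the discount rate, $f$ the running payoff, and $\mathcal{M}$ the intervention operator encoding impulse costs $c$):

```latex
\min\Big\{ \rho V(x) - \mathcal{L}V(x) - f(x),\;\; V(x) - \mathcal{M}V(x) \Big\} = 0,
\qquad
\mathcal{M}V(x) \;:=\; \inf_{\xi}\big\{ V(x+\xi) + c(x,\xi) \big\}.
```

In the continuation region the first term vanishes (the HJB equation holds); where $V = \mathcal{M}V$, an impulse is optimal.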