COMPETITION WITH HIGH NUMBER OF AGENTS AND A MAJOR ONE

Abstract. In the framework of mean field game theory, a new optimization problem is presented by adding an additional player, called the Principal. After introducing a proper payoff for the Principal, continuity and the existence of a minimum are proved. Some considerations about uniqueness and possible ways of continuing the analysis of this problem are given.


1. Introduction. Mean Field Games is a fascinating theory developed by Lasry and Lions (see [4], [7], [8], [9], [10]) dealing with a class of games in which there is a huge number of players with symmetrical properties. It has been developed considerably in recent years, both from a theoretical viewpoint (see [1], [12]) and from a practical viewpoint (see [6]). In this paper a new approach is investigated: a major player able to choose the rules of the game is introduced. This idea is taken from the theory of Mechanism Design, which studies games in which a major player has such a capability; see [3].
Consider the following problem. We have N players and a major one, called the Principal, who can establish the rules of the game. The N players act as in the classical theory of Mean Field Games: each player chooses a control $\alpha^i$, which determines the state $X^i_t$ through an SDE of the form $dX^i_t = \alpha^i_t\,dt + \sqrt{2}\,dB^i_t$, with the aim of minimizing a payoff which, however, depends on the controls chosen by the other players; thus the players will accept the compromise of a Nash equilibrium. The Principal may choose F (this is what we mean in this work by choosing the rules of the game; other interpretations are possible) in a class of functions specified below. The Principal's payoff $J^P_N(\alpha^1,\dots,\alpha^N,F)$ depends on the states $X^i$ associated with the controls $\alpha^i$, and hence on the actions of the players; the dependence on F is expressed by general assumptions that will be clarified later. This raises a difficulty in the formulation of what we mean by an optimal choice $F^*$ of the Principal. The difficulty is similar to the classical one in game theory, with the difference that the Principal is not just another player, and a concept of Nash equilibrium between the N players and the Principal does not look suitable: the Principal acts first, choosing the rules of the game, and only then the players make their choices.
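The player dynamics just described can be sampled numerically. The sketch below is a minimal Euler-Maruyama discretization of $dX^i_t = \alpha^i_t\,dt + \sqrt{2}\,dB^i_t$; the feedback control $\alpha(x) = -x$ is an illustrative assumption, not one prescribed by the paper.

```python
import numpy as np

def simulate_player(alpha, x0, T=1.0, n_steps=1000, rng=None):
    """Euler-Maruyama simulation of dX_t = alpha(X_t) dt + sqrt(2) dB_t."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        # drift step plus Brownian increment with variance 2*dt
        x[k + 1] = x[k] + alpha(x[k]) * dt + np.sqrt(2 * dt) * rng.normal()
    return x

path = simulate_player(lambda x: -x, x0=1.0)
```

In the Mean Field setting the optimal feedback would instead come from the solution u of the Hamilton-Jacobi-Bellman equation, via $\alpha = -Du$.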
The way we resolve this ambiguity of formulation is to prescribe the strategy of the players, given F. Assume we have a map $F \mapsto (\alpha^{1,F}, \dots, \alpha^{N,F})$ which prescribes the choice $(\alpha^{1,F}, \dots, \alpha^{N,F})$ made by the players, for each given F. Clearly the Principal's payoff then depends only on F,
$J^P_N(F) := J^P_N(\alpha^{1,F}, \dots, \alpha^{N,F}, F)$,
and we may formulate a classical minimization problem. The concept of Nash equilibrium for the N players does not, in general, identify the N-tuple $(\alpha^{1,F}, \dots, \alpha^{N,F})$ uniquely, and thus we make another choice, which however requires some preliminaries.
Before we proceed, we describe the class of admissible functions F. They are functionals $F : \mathbb{R}^d \times P_1(\mathbb{R}^d) \to \mathbb{R}$, where $P_1(\mathbb{R}^d)$ is the set of probability measures on $\mathbb{R}^d$ with finite first moment, which are uniformly bounded and Lipschitz and satisfy the assumptions stated in Section 4. We call $\mathcal F$ the class of such functionals. P.-L. Lions has proved a theorem describing an approximate Nash equilibrium strategy, for large N, once F, namely the rules of the game, is established.
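A concrete example of such a functional is the convolution type $F(x,m) = \int k(x-y)\,dm(y)$ with k bounded and Lipschitz; such an F is bounded by $\sup |k|$ and Lipschitz both in x and in m with respect to the Wasserstein distance $d_1$. The sketch below evaluates an F of this type on a discrete measure; the kernel k is an illustrative choice, not taken from the paper.

```python
import numpy as np

def k(z):
    # bounded (by 1) and 1-Lipschitz kernel -- illustrative choice
    return 1.0 / (1.0 + np.abs(z))

def F(x, atoms, weights):
    # F(x, m) = int k(x - y) dm(y) for the discrete measure
    # m = sum_j weights[j] * delta_{atoms[j]}
    return float(np.sum(weights * k(x - atoms)))

atoms = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.full(4, 0.25)          # uniform empirical measure
val = F(0.5, atoms, weights)
```

Since k is 1-Lipschitz and the weights sum to 1, this F satisfies $|F(x_1,m) - F(x_2,m)| \le |x_1 - x_2|$, i.e. the Lipschitz bound holds with constant 1.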
The notion of $\varepsilon$-Nash equilibrium will be recalled in the next section. Assume now that, given F, due to the high number of players and the consequent complexity of the problem, the players adopt the Mean Field choice given by the previous theorem. This specifies the Principal's payoff in the form (1), and we may ask for an optimal choice $F \in \mathcal F$. We provide below, as our main result, a sub-optimal choice.
Given $F \in \mathcal F$, consider the Mean Field payoff $\tilde J^P$ of the Principal, where $m^F_t$ is given by Theorem 1.1 and $\langle m^F_t, \varphi \rangle = \int \varphi\, dm^F_t$. We prove two main results.
Theorem 1.2. There exists a minimum for $\tilde J^P$ over $\mathcal F$.
Theorem 1.3. If $F^*$ realizes the minimum for $\tilde J^P$ over $\mathcal F$, then there exists a sequence $\varepsilon_N \to 0$ such that $F^*$ is $\varepsilon_N$-optimal for the N-player problem.
Let us summarize the intuition behind this result. The Principal may choose F but not $\alpha^1, \dots, \alpha^N$; the Principal has no way to minimize the payoff $J^P_N(\alpha^1, \dots, \alpha^N, F)$ unconditionally. The only way is to work conditionally on a presumed strategy of the players. Here we presume the Mean Field action, which is motivated by its simplicity for a very complex game problem. Conditional on this belief, the Principal could try to minimize the functional $J^P_N(F)$ given by equation (1), but this is still a formidable task. Hence the Principal may choose F by means of the Mean Field payoff $\tilde J^P$, which is much easier to handle than $J^P_N(F)$, getting a sub-optimal result. The Principal's belief that the agents will act according to a Nash equilibrium strategy is justified by classical game theory: before choosing F, the Principal trusts the agents to act according to a Nash equilibrium (thanks to game theory) and to follow the Mean Field model (justified by their symmetrical payoffs and the theory of Mean Field Games). Supposing that the agents act according to the rules predicted by Mean Field Games, the Principal chooses the function F in order to minimize $\tilde J^P$.
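For concreteness, a form of the two payoffs consistent with the proof sketch of Section 3 (empirical averages of the bounded function $\varphi$ over the players, plus a cost $J_0(F)$ for the Principal's design) can be written as follows; this is an illustrative reconstruction, and the exact definitions are those intended in (1):

```latex
J^P_N(\alpha^1,\dots,\alpha^N,F)
  = \mathbb{E}\!\left[\int_0^T \frac{1}{N}\sum_{i=1}^N \varphi\bigl(X^i_t\bigr)\,dt\right] + J_0(F),
\qquad
\tilde J^P(F) = \int_0^T \bigl\langle m^F_t, \varphi \bigr\rangle\,dt + J_0(F).
```

With this reading, the conclusion of Theorem 1.3 takes the form $J^P_N(F^*) \le \inf_{F \in \mathcal F} J^P_N(F) + \varepsilon_N$.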
The structure of the paper is the following: in section 2 motivations for studying this problem are given; in section 3 the problem for the Principal is formalized; in section 4 the main assumptions are stated; in section 5 the continuity of the Principal's payoff is proved; in section 6 the existence of a minimum is proved; in section 7 an idea of a different possible approach to the problem, using Hamilton-Jacobi-Bellman optimization, is given; in section 8 some proofs skipped in the main text are given.

2. Motivations. The presence of a major player can be found in many competitions based on rules that are very often decided by someone. Who establishes the rules of the game? In what sense does the major player decide the rules? For which aims? Imagine the following situation: the CEO of a company running supermarkets has to decide how to distribute n supermarkets in an area. He faces the following facts: 1. people want to go to the supermarket to buy food; 2. each supermarket can receive a finite number of persons, so people don't want to be too concentrated at any point; 3. the company faces costs depending on the number of supermarkets and their locations (if they are too far from each other, costs rise); 4. the company wants to achieve minimum cost and maximum gain. In such a competition between the players, the rules of the game are represented by the distribution of the supermarkets.
We model the situation both from the players' (buyers) and from the major player's (CEO) point of view. The players have to face the following problem: each of them decides a drift $\alpha^i$ (the velocity) in order to minimize a payoff of the following form, where $X^i_t$ is the position of the i-th player at time t, $a_k$ is the position of the k-th supermarket, and K is a function growing near zero. The first term of the integral represents the energy spent by the i-th player to move, the second term is the distance between the i-th player and the supermarkets, and the last term represents the fact that two players do not want to be too crowded. The dynamics described by this game model, in some way, points 1 and 2.
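A payoff with exactly the three features just described (motion energy, distance to the supermarkets, crowding) can be sketched as follows; this is a plausible reconstruction rather than the paper's exact formula:

```latex
J^i(\alpha^1,\dots,\alpha^N)
  = \mathbb{E}\int_0^T \Bigl[\tfrac12\,\lvert\alpha^i_t\rvert^2
    + \min_{1\le k\le n}\,\lvert X^i_t - a_k\rvert
    + \frac{1}{N-1}\sum_{j\neq i} K\bigl(X^i_t - X^j_t\bigr)\Bigr]dt,
```

where K grows near zero, so that the last term penalizes two players standing close to each other.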
As previously said, the CEO wants to choose the rules of the game in order to minimize costs and maximize gains; that is, he decides the number n and the positions $a_k$ in order to minimize a suitable functional $J^P$. Is it possible for the CEO to find the number n and the positions $a_k$ minimizing his functional $J^P$? More generally, can the major player choose the rules of the game in order to minimize his functional? The paper is meant to give a first answer to this problem from a more general point of view, considering a generic competition with a huge number of players and a major one who has the capability of deciding the rules of the game.

3. A new player: problem formalization. Notice that the dynamics of the system depend on the functional F, which determines how much each player has to pay. Thus it is natural to suppose that the Principal is able to choose the functional F. But in which sense could he choose F? We need to introduce a functional for the Principal too, describing how much he has to pay. The functional involves a bounded function $\varphi$, the states $X^i$ satisfying the controlled SDE of the introduction, and a functional $J_0$ which is semicontinuous (with respect to local uniform convergence).
Computationally speaking, it is difficult to work with such a functional. We therefore introduce an auxiliary functional $\tilde J^P$, where $X^F_t$ is a random variable with law $m^F_t$ and $(u^F, m^F)$ is the solution of the Mean Field Equations with the functional F; moreover, $X^F$ satisfies the corresponding SDE. The following theorem explains why $J^P$ is related to $\tilde J^P$. Using the fact that $F^* \in \arg\min \tilde J^P(F)$, and noticing that the random variables $X^{i,F}_t$ are independent and identically distributed, the Strong Law of Large Numbers gives that the empirical averages converge almost surely, so the first term tends to 0; the second term is handled by the same computation. The idea is the following: the Principal chooses the function F, which establishes the rules of the game. Then the players choose their strategies: we expect them to act according to a Nash equilibrium, as Lions has proved. After that, the Principal receives the payoff $J^P$, depending on the strategies the players have played. Since this functional is difficult to deal with, a new functional, $\tilde J^P$, is introduced: the preceding theorem states that $\tilde J^P$ is a good approximation of $J^P$ when the number of players is huge; thus we can study $\tilde J^P$ to find information about the minimum of the real Principal payoff when dealing with games with a high number of players.
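The Strong Law of Large Numbers step can be checked numerically: for i.i.d. samples with common law m, the empirical average $\frac1N \sum_i \varphi(X^i)$ approaches $\langle m, \varphi \rangle$. The Gaussian law and the test function $\varphi = \tanh$ below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)
phi = np.tanh  # a bounded test function (illustrative choice)

# Assume the common law m of the i.i.d. states X^{i,F}_t is standard
# Gaussian; then <m, phi> = 0, because m is symmetric and phi is odd.
samples = rng.standard_normal(200_000)
empirical = phi(samples).mean()   # (1/N) sum_i phi(X^i)
gap = abs(empirical - 0.0)        # |empirical average - <m, phi>|
```

As N grows, `gap` shrinks at the rate $N^{-1/2}$, which is exactly the "first term tends to 0" step of the proof.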
The next section is devoted to proving the existence of a minimum for $\tilde J^P$.

4. Main assumptions and notations.
Before we proceed, we describe our main assumptions and notations.
As to F, we suppose that $F : \mathbb{R}^d \times P_1(\mathbb{R}^d) \to \mathbb{R}$, where $P_1(\mathbb{R}^d)$ is the set of probability measures on $\mathbb{R}^d$ with finite first moment, such that:
• Uniform boundedness and Lipschitz continuity: there exists a constant $C_0$ bounding F and its Lipschitz constants, both in x and in the measure argument with respect to $d_1$.
• Weak monotonicity.
We define $\Gamma_{C_0}$ as the set of functionals F with these properties; obviously $\mathcal F = \cup_{C_0 > 0}\, \Gamma_{C_0}$. We consider over $\Gamma_{C_0}$ the local uniform convergence. Over $C^0([0,T], P_1)$, we consider the distance $d(\mu, \nu) = \sup_{t \in [0,T]} d_1(\mu(t), \nu(t))$. As to the initial conditions for the PDEs, we suppose that the probability measure $m_0$ is absolutely continuous with respect to the Lebesgue measure, with a suitably regular density. The function $\varphi$ is continuous and bounded over $\mathbb{R}^d$. As to $J_0$, it is a semicontinuous functional (with respect to local uniform convergence).

5. Semicontinuity of $\tilde J^P$.

5.1. Preliminaries. Consider the following Mean Field Equations:
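In the quadratic-Hamiltonian setting matching the dynamics of the introduction, a standard Lasry-Lions form of these equations is the following; the terminal condition G is an assumption of this sketch:

```latex
\begin{cases}
-\partial_t u - \Delta u + \tfrac12\,\lvert Du\rvert^2 = F(x, m(t))
  & \text{in } \mathbb{R}^d \times (0,T),\\
\partial_t m - \Delta m - \operatorname{div}\bigl(m\,Du\bigr) = 0
  & \text{in } \mathbb{R}^d \times (0,T),\\
m(0) = m_0, \qquad u(x,T) = G\bigl(x, m(T)\bigr),
\end{cases}
```

The first equation is Hamilton-Jacobi-Bellman for the representative player, the second is Kolmogorov-Fokker-Planck for the distribution of the players, and the optimal feedback is $\alpha_t = -Du(X_t, t)$.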

Definition 5.1. We say that a pair (u, m) is a weak solution to the Mean Field Equations if u and m satisfy the system in the appropriate weak sense.
Lions has proved the following fact, which we are going to use (see [10]):
Theorem 5.2. Under hypotheses (1) and (2), we have strong existence and weak uniqueness.

5.2. Continuous dependence of the solutions. We want to study the continuous dependence of the solutions (u, m) on the function F.
To do so, fix a probability measure $m_0$ and a positive number $C_0$. Firstly, we prove continuity over $\Gamma_{C_0}$ for all $C_0$. Thanks to Theorem 5.2, for each $F \in \Gamma_{C_0}$ there exists a pair $(u^F, m^F)$ of classical solutions to the Mean Field Equations with F; thus we can define a function $\Phi : F \mapsto (u^F, m^F)$. We consider over $\Gamma_{C_0}$ and $C^{2,1}(\mathbb{R}^d \times [0,T])$ the local uniform convergence, and over $C^0([0,T], P_1)$ the distance $d(\mu, \nu) = \sup_{t \in [0,T]} d_1(\mu(t), \nu(t))$. We want to prove that Φ is continuous with respect to these topologies.
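The distance just introduced can be made concrete in one dimension: for two empirical measures with equally many, equally weighted atoms, $d_1$ equals the mean absolute difference of the sorted atoms (the monotone coupling is optimal), and the distance on $C^0([0,T], P_1)$ is the supremum over time. A minimal sketch on a discrete time grid, with illustrative data:

```python
import numpy as np

def d1_empirical(xs, ys):
    # Wasserstein-1 distance between two empirical measures on R with
    # equally many, equally weighted atoms: mean |sorted difference|
    return float(np.mean(np.abs(np.sort(xs) - np.sort(ys))))

def dist_paths(mu, nu):
    # sup_t d_1(mu(t), nu(t)) over a common time grid; mu and nu are
    # arrays of shape (n_times, n_atoms)
    return max(d1_empirical(m, n) for m, n in zip(mu, nu))

mu = np.array([[0.0, 1.0], [0.0, 2.0]])   # measure path at t = 0, 1
nu = np.array([[0.0, 1.0], [1.0, 2.0]])
d = dist_paths(mu, nu)   # supremum attained at t = 1
```

At t = 0 the two measures coincide, while at t = 1 one atom must be transported a distance 1 with mass 1/2, so the path distance is 0.5.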

VALERIA DE MATTEI
The proof is carried out in different steps. The proofs of many propositions and lemmas can be found in the appendix.
Let's begin by studying more accurately the image of Φ. To do that, we have to introduce two new sets, $\Delta_{C_0}$ and $C_{\tilde C_0}$. We have the following result.
Proposition 1. The image of $\Gamma_{C_0}$ is contained in the product $\Delta_{C_0} \times C_{\tilde C_0}$, where $\tilde C_0$ is a constant depending only on $C_0$ and T.
Proof. Let $F \in \Gamma_{C_0}$ and let $(u^F, m^F) = \Phi(F)$. One can easily prove that $u^F$ is an element of $\Delta_{C_0}$ using the boundedness of F and applying the maximum principle to the Mean Field Equation for u. As to $m^F$, the condition is satisfied by construction (see [4]).
Once compactness of $C_{\tilde C_0}$ is obtained, we show the local uniform convergence of $u^F$ up to a subsequence.
Proposition 3. Suppose that $F_n$ converges locally uniformly to $\bar F$ in $\Gamma_{C_0}$. Then there exists a subsequence of $(u^{F_n})$ converging locally uniformly to $u^{\bar F}$.
We now show the convergence of $m^F$ up to a subsequence.
Proof. For all F, the measures $m^F$ belong to $C_{\tilde C_0}$; by compactness, we deduce that there exist a subsequence $(m^{F_n})$ and an element $m \in C_{\tilde C_0}$ such that the former converges to the latter. We want to show that m coincides with $m^{\bar F}$. Fix $\varphi \in C^\infty_c$; passing to the limit as $n \to \infty$, it is possible to show, as already done, that m and $m^{\bar F}$ satisfy the same equation, thus they coincide by uniqueness.
Generalizing this result to $F \in \mathcal F = \cup_{C_0>0}\, \Gamma_{C_0}$ is straightforward. We get the following result:
Theorem 5.4. The function that associates to F the pair $(u^F, m^F)$ solving the MFE with F is continuous over $\mathcal F$.
The next step is showing the semicontinuity of $\tilde J^P$.
Theorem 5.5. The functional $\tilde J^P$ is semicontinuous over the set Γ.
Proof. Let us split $\tilde J^P$ into two functionals, $J_1$ and $J_2$. We start with $J_1$: we are going to prove that it is continuous. Suppose that $F_n \to F$. Fix $\gamma_n \in \Pi(m^{F_n}, m^F)$.
Taking the infimum over $\Pi(m^{F_n}, m^F)$, we conclude: in other words, $J_1$ is continuous.
6. Existence of a minimum for $\tilde J^P$. We consider the Principal's Mean Field payoff $\tilde J^P$ introduced above.
Theorem 6.1. Under our hypotheses, the minimum of the functional $\tilde J^P$ exists.
Proof. Let $(F_n)$ be a minimizing sequence. We want to show that there exists $\bar F$ such that $\tilde J^P(F_n) \to \tilde J^P(\bar F)$ as $n \to \infty$. We are going to use the following lemma.
Lemma 6.2. Given a minimizing sequence $(F_n) \subset \Gamma$, there exists a subsequence $(F_{n_k})$ converging locally uniformly.
The proof of this lemma is given in the appendix. Thanks to semicontinuity, the minimum is attained at the limit $\bar F$.

7. A Hamilton-Jacobi-Bellman approach: hints for new work. Another approach to the problem can come from the theory of Hamilton-Jacobi-Bellman equations and optimization.
Here we give just a hint of how to start this approach, which could lead to some uniqueness results for the minimum. Notice that the PDE for u resembles an HJB equation, so it is natural to think about this kind of approach. Fixing the data, one defines a suitable value functional; from there, one could use optimization theory to simplify the equation coming out of Bellman's optimality principle, obtaining a friendlier equation that could give information about the minimum.
Notice that with standard convexity methods the problem of finding the minimum is not so easy, because the set of solutions of the Mean Field Equations is not a convex set; the new approach could give information about uniqueness as well as other properties of the minimum.

8. Appendix. We write here the skipped proofs.
Proposition 6. The set $C_{\tilde C_0}$ is compact in the topology of $C^0([0,T], P_1)$.
Proof. Consider a sequence $(\mu_n) \subset C^0([0,T], P_1)$. For all t, the measures $\mu_n(t)$ belong to a compact subset of $P_1$. Thus for each t there exist a subsequence $(\mu_{n_k}(t))$ of $(\mu_n(t))$ and a probability measure $\mu(t)$ such that $\mu_{n_k}(t)$ converges to $\mu(t)$ as $k \to \infty$. First of all, we show that the function μ belongs to $C_{\tilde C_0}$. Let us start with continuity: fix $t_0 \in [0,T]$ and consider a sequence $(t_i)$ converging to $t_0$; by a diagonal argument, it is possible to find a subsequence $(\mu_{n_k})$ such that for all i, $\mu_{n_k}(t_i)$ converges to $\mu(t_i)$ as $k \to \infty$; from this we can derive the continuity of μ. Consider now, for all n, the continuous function that associates to each $t \in [0,T]$ the value $d_1(\mu_n(t), \mu(t))$. Since it is defined over a compact set, its supremum is attained, i.e. for all n there exists $t_n \in [0,T]$ such that $d(\mu_n, \mu) = d_1(\mu_n(t_n), \mu(t_n))$.
Proposition 7. Suppose that $F_n$ converges locally uniformly to $\bar F$ in $\Gamma_{C_0}$. Then there exists a subsequence of $(u^{F_n})$ converging locally uniformly to $u^{\bar F}$.
Proof. Consider the family $\{u^F\}_{F \in \Gamma_{C_0}}$. Since $u^F \in \Delta_{C_0}$ for all F, the family $\{u^F\}$ is uniformly continuous and uniformly bounded. Thus, thanks to the Ascoli-Arzelà Theorem, on each compact subset of $\mathbb{R}^d \times [0,T]$ we can deduce the existence of a uniformly converging subsequence. Fix a compact set $\bar B(0,N) \times [0,T]$, where $\bar B(0,N)$ stands for the closed d-dimensional ball in $\mathbb{R}^d$. Then for every $(u^{F_n})$ there exists a subsequence $(u^{F_{n_k}})$ converging, for all N, to a continuous function $u^N$ on that compact set. Notice that we can apply the Ascoli-Arzelà Theorem again to the derivatives of u and get that, up to a subsequence, $Du_k \to Du$ uniformly over compact sets.
As to the second term, it converges to 0 thanks to uniform convergence. Consider the first.
Passing to the limit as $k \to \infty$, we obtain the claim by dominated convergence. We need the following lemma. We can apply this lemma to our case and conclude that $F_{n_k}(x, m_k(t)) \to \bar F(x, m^{\bar F}(t))$ for all fixed (x, t). Then, applying dominated convergence to (1), we conclude. The first integral converges to 0 thanks to dominated convergence. Consider the second one. We first suppose that ψ is C-Lipschitz. Let $\gamma_n(t) \in \Pi(m_n(t), m(t))$.