LOCAL STABILITY OF STRICT EQUILIBRIA UNDER EVOLUTIONARY GAME DYNAMICS

WILLIAM H. SANDHOLM

Abstract. We consider the stability of strict equilibrium under deterministic evolutionary game dynamics. We show that if the correlation between strategies' growth rates and payoffs is positive and bounded away from zero in a neighborhood of a strict equilibrium, then this equilibrium is locally stable.


1. Introduction. Evaluating the local stability of Nash equilibria is a basic issue in the study of deterministic evolutionary game dynamics. Much of the literature on this topic has focused on the notion of an evolutionarily stable state (ESS) [12], showing that variants of this notion provide sufficient conditions for local stability under various game dynamics. The analyses of local stability, which are based either on the construction of Lyapunov functions or on linearization, depend on the particular functional forms of the dynamics under consideration.1

This paper considers the local stability of strict equilibrium. Since strict equilibria are ESSs, local stability results for the latter apply immediately to the former. Moreover, near a strict equilibrium, most agents are playing a strict best response, so for dynamics under which a popular optimal strategy must become more popular still, strict equilibria are locally stable.
It is nevertheless worth asking whether local stability of strict equilibrium can be established under weaker conditions. As we noted earlier, stability results for ESS impose conditions on dynamics' functional forms that may be difficult to verify in applications. Furthermore, when most agents are already playing a particular strategy, requiring this strategy to continue to grow in popularity is not completely innocuous, particularly when alternative strategies yield comparable if somewhat lower payoffs.
We therefore ask whether weaker conditions relating strategies' growth rates and payoffs are sufficient for stability of strict equilibrium. One possibility is to ask only that strategies' growth rates and payoffs be positively correlated.2 The weakness of this condition is reflected in its constraining all strategies' growth rates using a single inequality; even optimal strategies may become less common, provided that this loss is balanced by the growth of other strategies with above-average payoffs. In potential games, positive correlation is precisely what is needed to ensure that the value of potential increases over time, and hence that local maximizers of potential are locally stable.3 It is thus natural to ask whether some version of this condition is sufficient for local stability of strict equilibria.
In this paper, we introduce a slight strengthening of positive correlation which we call strong positive correlation. A dynamic satisfies this condition in some region of the state space if throughout this region, whenever the dynamic is not at rest, the correlation between strategies' growth rates and payoffs is bounded away from zero. Our main result shows that if a dynamic satisfies strong positive correlation in a neighborhood of a strict equilibrium, then this equilibrium is Lyapunov stable, and it is asymptotically stable as long as there is a neighborhood of the equilibrium containing no other rest point. While strong positive correlation allows the strict equilibrium strategy to decline in popularity at some states arbitrarily close to the equilibrium, local stability of the equilibrium is still assured.

2.1. Population games. For most of the paper we focus on strategic interactions in a single unit-mass population of agents. We extend our results to multipopulation games in Section 5.
In the single-population framework, we denote by S = {1, . . . , n} the finite set of strategies available to each agent. We call X = {x ∈ R^n_+ : Σ_{i∈S} x_i = 1}, the simplex in R^n, the set of population states; for each x ∈ X, x_i describes the fraction of agents playing strategy i. For each i ∈ S, we denote by e_i the ith standard basis vector in R^n, which represents both the pure population state in which all agents play strategy i and the mixed representation of pure strategy i.
Let TX = {z ∈ R^n : Σ_{i∈S} z_i = 0} denote the tangent space of the simplex. If we define the matrix Φ ∈ R^{n×n} by Φ = I − (1/n)11′, where 1 ∈ R^n denotes the vector of ones, then Φ represents the orthogonal projection of R^n onto TX. For any vector π, the projected vector Φπ = π − ((1/n) Σ_{i∈S} π_i)1 is obtained from π by reducing each component by the average of the components of π. Also, let TX(x) = {z ∈ TX : x_i = 0 ⟹ z_i ≥ 0 for all i ∈ S} denote the tangent cone of X at state x ∈ X. This set contains those directions in which the state may move from x without immediately leaving the set X.
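As an illustration of the projection Φ (our own sketch, not part of the paper), the following numpy snippet builds Φ = I − (1/n)11′ and checks that Φπ reduces each component of π by the average of the components of π:

```python
import numpy as np

n = 3
ones = np.ones((n, 1))
Phi = np.eye(n) - ones @ ones.T / n  # orthogonal projection of R^n onto TX

pi = np.array([4.0, 1.0, 1.0])       # an arbitrary payoff vector (average 2)
proj = Phi @ pi

# Phi pi subtracts the average component from each entry of pi,
# so the result has components summing to zero, i.e., it lies in TX.
print(proj)        # [ 2. -1. -1.]
print(proj.sum())  # 0 (up to rounding)
```

Since Φ is an orthogonal projection, it is also idempotent (Φ² = Φ), which the snippet's matrix can be used to confirm.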
We identify a population game with a Lipschitz continuous payoff function F : X → R^n. Here F_i(x) is the payoff obtained by strategy i players at population state x.
The simplest population games are obtained by matching a population of agents to play a symmetric two-player normal form game A ∈ R^{n×n}, in which A_{ij} is the payoff an i player receives when matched against a j player. The induced population game F is defined by F_i(x) = Σ_{j∈S} A_{ij} x_j = (Ax)_i, or equivalently, by F(x) = Ax.
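For a concrete (hypothetical) numerical example, matching agents to play a 2×2 coordination game yields the linear payoff function F(x) = Ax:

```python
import numpy as np

# Symmetric normal form game: A[i, j] is the payoff to strategy i against j.
# This particular 2x2 coordination game is a hypothetical example.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def F(x):
    """Payoffs of the induced population game at population state x."""
    return A @ x

x = np.array([0.75, 0.25])  # population state: 3/4 of agents play strategy 1
print(F(x))                 # [1.5, 0.25]
```

At this state strategy 1 earns 1.5 and strategy 2 earns 0.25, matching the componentwise formula F_i(x) = (Ax)_i.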

2.2. Evolutionary dynamics. An evolutionary dynamic is a method of assigning to each population game F a differential equation

(D)  ẋ = V^F(x)

that is defined and forward invariant on the state space X. To ensure the latter properties, we require the function V^F : X → TX to be Lipschitz continuous and to satisfy V^F(x) ∈ TX(x) for all x ∈ X. When we introduce examples of evolutionary dynamics below, we assume that the functions used to define these dynamics are Lipschitz continuous as well; these examples play no direct role in our main result.
3. Strong positive correlation. Positive correlation is a basic monotonicity condition for evolutionary game dynamics. It is expressed most simply as follows:

(PC)  V^F(x) ≠ 0  ⟹  V^F(x)′F(x) > 0.

Geometrically, condition (PC) requires that when the population is not at rest, the vector of growth rates forms an acute angle with the vector of payoffs. To obtain a game-theoretic interpretation for this condition, regard the strategy set S = {1, . . . , n} as a probability space endowed with the uniform probability measure, so that vectors in R^n can be interpreted as random variables taking values in R. Then as we explain below, condition (PC) can be formulated equivalently as

(1)  V^F(x) ≠ 0  ⟹  Corr(V^F(x), F(x)) > 0.

Thus positive correlation requires that whenever the population is not at rest, the correlation between strategies' (absolute) growth rates and payoffs is positive.4

Our local stability result requires a slightly more demanding condition. We say that a dynamic V^F for game F satisfies strong positive correlation in Y ⊆ X if

(SPC)  there exists a c > 0 such that for all x ∈ Y:  V^F(x) ≠ 0  ⟹  Corr(V^F(x), F(x)) ≥ c.

Strong positive correlation requires that on the set Y, when the population is not at rest, the correlation between growth rates and payoffs is bounded away from zero.
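The correlation appearing in these conditions is easy to compute numerically. The sketch below is our own illustration (the function names are not from the paper); it uses the fact that for a growth-rate vector in TX, the correlation with the payoff vector reduces to a cosine involving the projected payoffs:

```python
import numpy as np

def projected(payoffs):
    """Phi F(x): subtract the unweighted average payoff from each strategy."""
    return payoffs - payoffs.mean()

def correlation_cosine(growth, payoffs):
    """Corr(V^F(x), F(x)) for growth vectors in TX (components sum to zero):
    the cosine of the angle between the growth rates and projected payoffs."""
    p = projected(payoffs)
    return growth @ payoffs / (np.linalg.norm(growth) * np.linalg.norm(p))

def satisfies_spc_at(growth, payoffs, c):
    """Check the (SPC) inequality at a single state (vacuously true at rest)."""
    if np.allclose(growth, 0.0):
        return True
    return correlation_cosine(growth, payoffs) >= c

payoffs = np.array([3.0, 1.0, 2.0])
growth = projected(payoffs)  # moving straight up the projected payoff vector
print(correlation_cosine(growth, payoffs))  # 1.0: an angle of 0 degrees
```

When the growth-rate vector is proportional to the projected payoff vector, as in the example, the correlation equals 1, so (SPC) holds at that state for any c ≤ 1.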
To express (SPC) in a more primitive form, recall that ΦF(x) is the projection of the payoff vector F(x) onto TX, so that (ΦF)_i(x) = F_i(x) − (1/n) Σ_{j∈S} F_j(x) is the difference between strategy i's payoff and the unweighted average of the strategies' payoffs. In Appendix A.1, we show that

(2)  Corr(V^F(x), F(x)) = V^F(x)′F(x) / (|V^F(x)| |ΦF(x)|),

where | · | denotes the Euclidean norm on R^n. The right-hand side of (2) is the cosine of the angle between the growth rate vector V^F(x) and the projected payoff vector ΦF(x). Thus, while (PC) requires that the direction of motion form an acute angle with the projected payoff vector whenever V^F(x) ≠ 0, (SPC) requires further that the cosine of this angle be bounded away from zero, or equivalently, that the angle be bounded away from 90°.

It is well known that many of the evolutionary dynamics studied in the literature satisfy positive correlation. Families of dynamics satisfying this condition include monotone imitative dynamics,5

sign-preserving excess payoff dynamics,6 and pairwise comparison dynamics.7 Proposition 1, which is proved in Appendix A.2, verifies that all dynamics from these families also satisfy strong positive correlation near strict equilibria.
Proposition 1 shows that condition (SPC) is satisfied by commonly studied dynamics near strict equilibria. Still, this proposition is somewhat tangential to our main result, which establishes that (SPC) implies local stability of strict equilibrium regardless of the dynamic's functional form.
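For concreteness, here is a sketch of one member of the excess payoff family covered by Proposition 1, the BNN dynamic (the specification τ_i(π_i) = [π_i]_+ noted in the paper's footnotes); the 2×2 game is a hypothetical example of our own:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # a hypothetical 2x2 coordination game
F = lambda x: A @ x

def bnn(x):
    """BNN dynamic: xdot_i = [Fhat_i]_+ - x_i * sum_j [Fhat_j]_+,
    where Fhat_i(x) = F_i(x) - x'F(x) is strategy i's excess payoff."""
    excess = F(x) - x @ F(x)
    tau = np.maximum(excess, 0.0)
    return tau - x * tau.sum()

x = np.array([0.8, 0.2])
v = bnn(x)
print(v)        # growth rates point toward the optimal strategy 1
print(v.sum())  # 0: the direction of motion lies in TX
```

At this state the growth-rate vector forms an acute angle with the payoff vector (v′F(x) > 0), consistent with positive correlation.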
It is well known that all Nash equilibria, and hence all strict equilibria, are rest points of any dynamic that satisfies positive correlation (PC). 8 Our main result shows that strict equilibria are locally stable under any evolutionary dynamic satisfying strong positive correlation (SPC).
Stating this result requires us to introduce notions of local stability. State x* ∈ X is Lyapunov stable under ẋ = V(x) if for every neighborhood O of x* there exists a neighborhood O′ of x* such that every solution that starts in O′ is contained in O. State x* is attracting if there is a neighborhood Q of x* such that every solution that starts in Q converges to x*. Lastly, state x* is asymptotically stable if it is Lyapunov stable and attracting.

Theorem 4.1. Let e_k be a strict equilibrium of F, and suppose that the dynamic (D) satisfies strong positive correlation (SPC) in some neighborhood of e_k in X. Define the function L : X → R by

L(x) = (e_k − x)′F(e_k).

Then L(x) ≥ 0, with equality only when x = e_k, and there is a neighborhood of e_k on which L̇(x) ≤ 0, with equality only when V^F(x) = 0. Thus e_k is Lyapunov stable under (D), and if e_k is an isolated rest point of (D), e_k is asymptotically stable under (D).
6. Here F̂_i(x) = F_i(x) − x′F(x) is the excess payoff of strategy i over the average payoff in the population. When τ_i(π_i) = [π_i]_+, (4) is the BNN dynamic [1]. Also see [20], [22], [25], [6], and [16].
7. When ρ_{ij}(π) = [π_j − π_i]_+, (5) is the Smith dynamic [21]. See also [18].
8. See, for instance, Proposition 5.2.1 of [19]. However, since such dynamics may admit rest points that are not Nash equilibria, one must make assumptions about the locations of these rest points to obtain asymptotic stability results, as we do in Theorem 4.1.

The geometry behind Theorem 4.1 is pictured in Figure 1. In this figure, the projected payoff vector ΦF(e_1) is drawn with its tail at state e_1. That e_1 is a Nash equilibrium is represented by the payoff vector lying in the normal cone of the state space X at state e_1. The normal cone, which we denote by NX(e_1), is the set of vectors whose tails are at e_1 and that extend through points into the shaded region; by definition, these vectors form obtuse or right angles with all vectors in the tangent cone TX(e_1).9 That e_1 is a strict equilibrium is reflected in F(e_1) lying in the interior of the normal cone; the fact that it is nearly orthogonal to face e_1e_2 means that at state e_1, the payoff to strategy 1 is only slightly larger than the payoff to strategy 2.10 By construction, the level sets of the Lyapunov function L(x) = (e_1 − x)′F(e_1) are orthogonal to the vector F(e_1). In Figure 1, these level sets are represented by parallel gray lines. The proof of the theorem uses a continuity argument to show that at each state x in a neighborhood of the strict equilibrium e_1, the direction of motion V^F(x), which by (SPC) forms an acute angle with the payoff vector F(x), also forms an acute angle with the payoff vector F(e_1) that obtains at the strict equilibrium. In terms of the figure, this means that when the state is near e_1, motion under (D) leads the state to cross the level sets of L from southeast to northwest.
At any interior state, this crossing of the level sets may occur in a southwesterly direction, implying that the mass of agents playing strategy 1 is falling. But as the e_1e_2 face is approached, the only way for the state both to continue to cross the level sets of L in the correct direction and to stay in the simplex is for the vector of motion to turn sharply northeast, in the direction of the equilibrium e_1. Thus, to compensate for the weakness of condition (SPC), the geometry of the state space near e_1 plays a crucial role in ensuring the stability of strict equilibrium.
Proof. Once we establish the claims about the function L, the claims about stability follow from standard results on Lyapunov functions (see, e.g., Section 7.B of [19]).
To prove the first claim, observe that L(x) is the difference between the payoffs to pure strategy k and mixed strategy x at pure state e k . Thus, that L(x) ≥ 0, with equality only when x = e k , is immediate from the fact that e k is a strict equilibrium of F .
To prove the second claim, note first that since e_k is a strict equilibrium, F(e_k) is not a constant vector, implying that ΦF(e_k) ≠ 0. Thus, since F is continuous, there is a neighborhood O of e_k in X on which ΦF(x)/|ΦF(x)| is well defined and continuous, and in particular on which

(6)  | ΦF(x)/|ΦF(x)| − ΦF(e_k)/|ΦF(e_k)| | < c,

where the constant c > 0 comes from condition (SPC). We may suppose that O is contained in the neighborhood where the implication in condition (SPC) holds. Thus if V^F(x) ≠ 0 and x ∈ O, inequality (6) and condition (SPC) imply that

(7)  V^F(x)′ΦF(e_k)/|ΦF(e_k)| ≥ V^F(x)′ΦF(x)/|ΦF(x)| − |V^F(x)| · | ΦF(x)/|ΦF(x)| − ΦF(e_k)/|ΦF(e_k)| | > c|V^F(x)| − c|V^F(x)| = 0.

9. In other words, the normal cone is the polar of the tangent cone: NX(x) ≡ TX(x)°. In the present context, we have NX(e_k) = {v ∈ R^n : v_k ≥ v_i for all i ∈ S}. Since Figure 1 is two-dimensional, we cannot draw the entire three-dimensional set NX(e_1), but only its projection Φ(NX(e_1)) onto the two-dimensional tangent space TX. The vectors pointing along the two boundaries of Φ(NX(e_1)) are proportional to (1/2, 1/2, −1) (which is itself orthogonal to face e_1e_2) and to (1/2, −1, 1/2) (which is orthogonal to face e_1e_3). For more on normal cones and Nash equilibria, see [11] or Section 2.3 of [19].
10. To see this, note that vectors in TX orthogonal to face e_1e_2 are proportional to (1/2, 1/2, −1), while those parallel to face e_3e_1 are proportional to (1, 0, −1).
Now, the time derivative of L under (D) is

(8)  L̇(x) = ∇L(x)′V^F(x) = −F(e_k)′V^F(x) = −ΦF(e_k)′V^F(x),

where the last equality holds because V^F(x) ∈ TX and Φ is the orthogonal projection onto TX.
Together with inequality (7), this implies that L̇(x) ≤ 0 for all x ∈ O, with equality only when V^F(x) = 0. This completes the proof of the theorem.
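As a numerical illustration of the theorem (our own sketch, not part of the paper), one can Euler-integrate the replicator dynamic, a standard member of the monotone imitative family, from a state near a strict equilibrium and watch the Lyapunov function L(x) = (e_k − x)′F(e_k) decrease along the path; the 2×2 game is a hypothetical example:

```python
import numpy as np

# Hypothetical 2x2 coordination game; e_1 = (1, 0) is a strict equilibrium.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
F = lambda x: A @ x
e1 = np.array([1.0, 0.0])

def replicator(x):
    """xdot_i = x_i (F_i(x) - x'F(x)): a monotone imitative dynamic."""
    return x * (F(x) - x @ F(x))

def L(x):
    """Lyapunov function from Theorem 4.1 with k = 1."""
    return (e1 - x) @ F(e1)

# Euler integration from a state near e_1.
x = np.array([0.8, 0.2])
values = [L(x)]
for _ in range(2000):
    x = x + 0.01 * replicator(x)
    values.append(L(x))

print(x)                      # approaches e_1
print(values[0], values[-1])  # L starts at 0.4 and decays toward 0
```

Along the computed path the state converges to e_1 and L decreases monotonically, as the theorem predicts for dynamics satisfying (SPC) near a strict equilibrium.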
5. Multipopulation games. We conclude the paper by presenting the extension of Theorem 4.1 to multipopulation games.
To define a multipopulation game, we suppose that there are p > 1 populations of agents, with population p ∈ P = {1, . . . , p} having mass m^p > 0. Agents in population p choose pure strategies from the set S^p = {1, . . . , n^p}, and the total number of pure strategies available in all populations is n = Σ_{p∈P} n^p. Aggregate behavior in population p is represented by a population state in X^p = {x^p ∈ R^{n^p}_+ : Σ_{i∈S^p} x^p_i = m^p}, where x^p_i ∈ R_+ represents the mass of players in population p choosing strategy i ∈ S^p. We let e^p_k denote the kth standard basis vector in R^{n^p}, so that m^p e^p_k ∈ X^p is the state in which all members of population p choose strategy k ∈ S^p. The tangent spaces TX^p and tangent cones TX^p(x^p) are defined as in the single-population case. Elements of X = Π_{p∈P} X^p = {x = (x^1, . . . , x^p) ∈ R^n_+ : x^p ∈ X^p}, the set of social states, describe behavior in all p populations at once.
We identify a multipopulation game with its Lipschitz continuous payoff function F : X → R^n. The component F^p_i : X → R denotes the payoff function for strategy i ∈ S^p, while F^p : X → R^{n^p} denotes the payoff functions for all strategies in S^p.
An evolutionary dynamic assigns to each multipopulation game F a differential equation

(Dp)  ẋ^p = V^{F,p}(x) for all p ∈ P.

To ensure existence and uniqueness of solutions and the forward invariance of the state space X, each function V^{F,p} : X → TX^p is required to be Lipschitz continuous and to satisfy V^{F,p}(x) ∈ TX^p(x^p) for all x ∈ X.
The dynamic (Dp) satisfies strong positive correlation on Y ⊆ X if

(SPCp)  there exists a c > 0 such that for all x ∈ Y and p ∈ P:  V^{F,p}(x) ≠ 0  ⟹  Corr(V^{F,p}(x), F^p(x)) ≥ c.

That is, the earlier condition (SPC) must hold population by population.
In the multipopulation context, the pure social state x* = (m^1 e^1_{k_1}, . . . , m^p e^p_{k_p}) is a strict equilibrium of F if for each population p ∈ P, strategy k_p ∈ S^p is the unique optimal strategy at x*. The extension of Theorem 4.1 is as follows.

Theorem 5.1. Let x* be a strict equilibrium of F, and suppose that the dynamic (Dp) satisfies strong positive correlation in some neighborhood of x* in X. Define the function L : X → R by

L(x) = Σ_{p∈P} (m^p e^p_{k_p} − x^p)′F^p(x*).

Then L(x) ≥ 0, with equality only when x = x*, and there is a neighborhood of x* on which L̇(x) ≤ 0, with equality only when V^F(x) = 0. Thus x* is Lyapunov stable under (Dp), and if x* is an isolated rest point of (Dp), x* is asymptotically stable under (Dp).
Appendix A. Further derivations and proofs.
A.1. Derivation of equation (2). Let z ∈ TX and y ∈ R^n. Viewing these vectors as random variables on the probability space S with P({i}) = 1/n for all i ∈ S, we have E z = (1/n) Σ_{i∈S} z_i = 0, and hence

Cov(z, y) = E[zy] − (E z)(E y) = (1/n) z′y.

Moreover, SD(z) = √(E[z²] − (E z)²) = |z|/√n, and similarly SD(y) = |Φy|/√n, since Φy is obtained from y by subtracting the mean E y from each component. Therefore

Corr(z, y) = Cov(z, y)/(SD(z) SD(y)) = z′y/(|z| |Φy|).

Setting z = V^F(x) ∈ TX and y = F(x) yields equation (2).

A.2. Proof of Proposition 1. Let e_k be a strict equilibrium of F. Then F(e_k) is nonconstant, so ΦF(e_k) ≠ 0. Therefore, by the continuity of F there is a neighborhood O ⊆ X of e_k on which |ΦF(x)| is positive. We also have that V^F(x)′F(x) = V^F(x)′ΦF(x), since V^F(x) ∈ TX and Φ is the orthogonal projection onto TX. In light of these facts, it is enough to establish that for each class of dynamics, there is a neighborhood Q ⊆ O of e_k and a c > 0 such that

(9)  V^F(x)′F(x) ≥ c |V^F(x)| |ΦF(x)|

for all x ∈ Q with V^F(x) ≠ 0. Since e_k is a rest point of all of the dynamics we consider, it is enough to establish (9) for all x ∈ Q \ {e_k}.
We start with monotone imitative dynamics (3). Since e_k is a strict equilibrium, F_k(e_k) > F_i(e_k) for all i ≠ k, and F̂_k(e_k) = 0, where F̂_i(x) = F_i(x) − x′F(x) denotes strategy i's excess payoff; also, since e_k is a rest point, G^F_k(e_k) = 0. Thus F̂_i(e_k) < 0 for i ≠ k, and G^F_i(e_k) < 0 for i ≠ k by the monotonicity condition in (3). We can therefore choose a neighborhood Q ⊆ O of e_k and positive constants f and g such that F̂_i(x) ≤ −f and G^F_i(x) ≤ −g for all i ≠ k and x ∈ Q, and such that k is the unique optimal strategy throughout Q; this last requirement implies that F̂_k(x) ≥ 0 and G^F_k(x) ≥ 0 on Q. Moreover, since G^F is continuous and X is compact, there is a finite constant γ such that |G^F_i(x)| ≤ γ for all i ≠ k and x ∈ X. Thus, since

x_k G^F_k(x) = −Σ_{i≠k} x_i G^F_i(x), and hence |V^F(x)| ≤ Σ_{i∈S} x_i |G^F_i(x)| ≤ 2γ Σ_{i≠k} x_i

for all x ∈ X, we find that for all x ∈ Q \ {e_k},

V^F(x)′F(x) = Σ_{i∈S} x_i G^F_i(x) F̂_i(x) ≥ Σ_{i≠k} x_i G^F_i(x) F̂_i(x) ≥ fg Σ_{i≠k} x_i,

and therefore

V^F(x)′F(x) / (|V^F(x)| |ΦF(x)|) ≥ fg Σ_{i≠k} x_i / (2γ Σ_{i≠k} x_i · M) = fg/(2γM),

where M = max_{x∈X} |ΦF(x)| is finite by continuity and compactness. This establishes (9) with c = fg/(2γM).

We next consider sign-preserving excess payoff dynamics (4). Since e_k is a strict equilibrium, there is a neighborhood Q ⊆ O of e_k and a positive constant d such that F_k(x) − F_i(x) ≥ d and F̂_i(x) ≤ 0 for all i ≠ k and x ∈ Q. The second inequality and sign preservation imply that τ_i(F̂(x)) = 0 for all i ≠ k and x ∈ Q, so that V^F_k(x) = (1 − x_k) τ_k(F̂(x)) and V^F_i(x) = −x_i τ_k(F̂(x)) for all i ≠ k. So for all x ∈ Q \ {e_k},

V^F(x)′F(x) = τ_k(F̂(x)) Σ_{i≠k} x_i (F_k(x) − F_i(x)) ≥ d τ_k(F̂(x)) Σ_{i≠k} x_i

and |V^F(x)| ≤ 2 τ_k(F̂(x)) Σ_{i≠k} x_i, and hence

V^F(x)′F(x) / (|V^F(x)| |ΦF(x)|) ≥ d/(2M),

which establishes (9) with c = d/(2M).

Finally, we consider pairwise comparison dynamics (5). To begin, note that

V^F(x)′F(x) = Σ_{i∈S} Σ_{j∈S} x_i ρ_{ij}(F(x)) (F_j(x) − F_i(x)) = Σ_{i∈S} Σ_{j∈S} x_i ρ_{ij}(F(x)) [F_j(x) − F_i(x)]_+,

where the last equality follows from sign preservation. Since e_k is a strict equilibrium, there is a neighborhood Q ⊆ O of e_k and a positive constant d such that F_k(x) − F_i(x) ≥ d for all i ≠ k and x ∈ Q. Sign preservation and the continuity of ρ then imply that there is a positive constant r such that ρ_{ik}(F(x)) ≥ r and ρ_{ki}(F(x)) = 0 for all i ≠ k and x ∈ Q, and hence that for x ∈ Q,

V^F(x)′F(x) ≥ Σ_{i≠k} x_i ρ_{ik}(F(x)) (F_k(x) − F_i(x)) ≥ rd Σ_{i≠k} x_i.

They also imply that for such x, no agents switch away from strategy k, so every nonzero component of V^F(x) is driven by the masses x_i with i ≠ k. Since ρ and F are continuous and X is compact, it follows that there is a finite constant R such that for x ∈ Q,

|V^F(x)| ≤ R Σ_{i≠k} x_i.

We therefore conclude that for x ∈ Q \ {e_k},

V^F(x)′F(x) / (|V^F(x)| |ΦF(x)|) ≥ rd Σ_{i≠k} x_i / (R Σ_{i≠k} x_i · M) = rd/(RM),

which establishes (9) with c = rd/(RM). This completes the proof of the proposition.
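The correlation identity Corr(z, y) = z′y/(|z| |Φy|) derived in Appendix A.1 can be spot-checked numerically; the sketch below (our own illustration) compares it against numpy's sample Pearson correlation, whose normalizing factors cancel against those in the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Phi = np.eye(n) - np.ones((n, n)) / n  # orthogonal projection onto TX

y = rng.normal(size=n)       # an arbitrary "payoff" vector
z = Phi @ rng.normal(size=n) # an arbitrary vector in TX (components sum to 0)

# Pearson correlation of z and y, viewed as random variables on S
# with the uniform measure, versus the closed form from Appendix A.1.
corr = np.corrcoef(z, y)[0, 1]
closed_form = z @ y / (np.linalg.norm(z) * np.linalg.norm(Phi @ y))
print(abs(corr - closed_form))  # agreement up to rounding error
```

Because z has mean zero, the sample means and 1/(n−1) factors inside `np.corrcoef` cancel, so both computations give the same value.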