Continuous game

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite. In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed to exist; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

Formal definition

Define the $n$-player continuous game $G = (P, C, U)$ where

$P = \{1, 2, 3, \ldots, n\}$ is the set of $n$ players,
$C = (C_1, C_2, \ldots, C_n)$, where each $C_i$ is a compact set, in a metric space, corresponding to the $i$th player's set of pure strategies,
$U = (u_1, u_2, \ldots, u_n)$, where $u_i : C \to \mathbb{R}$ is the utility function of player $i$.
We define $\Delta_i$ to be the set of Borel probability measures on $C_i$, giving us the mixed strategy space of player $i$.
Define the strategy profile $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_n)$ where $\sigma_i \in \Delta_i$.

Let $\sigma_{-i}$ be a strategy profile of all players except for player $i$. As with discrete games, we can define a best response correspondence for player $i$, $b_i$. $b_i$ is a relation from the set of all probability distributions over opponent player profiles to a set of player $i$'s strategies, such that each element of

$b_i(\sigma_{-i})$

is a best response to $\sigma_{-i}$. Define

$b(\sigma) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).$

A strategy profile $\sigma^*$ is a Nash equilibrium if and only if $\sigma^* \in b(\sigma^*)$. The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proved using Irving Glicksberg's generalization of the Kakutani fixed point theorem.[1] In general, there may not be a solution if we allow strategy spaces $C_i$ that are not compact, or if we allow discontinuous utility functions.
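
To make the fixed-point condition concrete, here is a small Python sketch that tests whether a candidate profile of finitely supported mixed strategies is an approximate Nash equilibrium by searching a grid of pure-strategy deviations. The function names and the toy game at the end are illustrative assumptions, not part of any cited source, and a grid search is only a heuristic stand-in for checking every point of a compact strategy set.

    def expected_utility(u, mix_x, mix_y):
        """Expected value of u(x, y) when the two players independently use
        finitely supported mixed strategies, each given as a list of
        (pure strategy, probability) pairs."""
        return sum(p * q * u(x, y) for x, p in mix_x for y, q in mix_y)

    def is_approx_equilibrium(u1, u2, mix_x, mix_y, grid, tol=1e-9):
        """Approximate test of sigma* in b(sigma*): neither player may gain
        more than tol by deviating to a pure strategy drawn from `grid`,
        a finite sample of the compact strategy sets."""
        v1 = expected_utility(u1, mix_x, mix_y)
        v2 = expected_utility(u2, mix_x, mix_y)
        best1 = max(expected_utility(u1, [(x, 1.0)], mix_y) for x in grid)
        best2 = max(expected_utility(u2, mix_x, [(y, 1.0)]) for y in grid)
        return best1 <= v1 + tol and best2 <= v2 + tol

    # Toy game on [0, 1]^2: each player independently prefers 1/2, so the
    # pure profile (1/2, 1/2) should pass the check.
    u1 = lambda x, y: -(x - 0.5) ** 2
    u2 = lambda x, y: -(y - 0.5) ** 2
    grid = [k / 100 for k in range(101)]
    print(is_approx_equilibrium(u1, u2, [(0.5, 1.0)], [(0.5, 1.0)], grid))  # True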

Separable games

A separable game is a continuous game where, for any $i$, the utility function $u_i : C \to \mathbb{R}$ can be expressed in the sum-of-products form:

$u_i(s) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} a_{i, k_1 \ldots k_n} f_{1,k_1}(s_1) \cdots f_{n,k_n}(s_n)$, where $s \in C$, $s_i \in C_i$, $a_{i, k_1 \ldots k_n} \in \mathbb{R}$, and the functions $f_{i,k} : C_i \to \mathbb{R}$ are continuous.

A polynomial game is a separable game where each $C_i$ is a compact interval on $\mathbb{R}$ and each utility function can be written as a multivariate polynomial. In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

For any separable game there exists at least one Nash equilibrium where player $i$ mixes at most $m_i + 1$ pure strategies.[2]

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.
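
Because a polynomial payoff is separable, its expectation under mixed strategies depends only on finitely many moments of those strategies, which is what makes finitely supported equilibria sufficient. The sketch below is illustrative Python (the coefficient-matrix representation is an assumption of this sketch, not taken from the cited paper); it evaluates a two-player polynomial payoff directly from the players' moment vectors, using the polynomial game $H(x, y) = (x - y)^2$ analyzed in the Examples section below.

    import numpy as np

    def moments(support, probs, degree):
        """Moment vector (E[s^0], ..., E[s^degree]) of a finitely supported
        mixed strategy on the real line."""
        support = np.asarray(support, dtype=float)
        probs = np.asarray(probs, dtype=float)
        return np.array([np.sum(probs * support**k) for k in range(degree + 1)])

    def expected_polynomial_payoff(A, mu_x, mu_y):
        """Expected value of u(x, y) = sum_{j,k} A[j, k] x^j y^k under
        independent mixed strategies; by separability it depends on the
        strategies only through their moment vectors."""
        return float(mu_x @ A @ mu_y)

    # u(x, y) = (x - y)^2 = x^2 - 2*x*y + y^2, encoded as A[j, k] on x^j y^k.
    A = np.array([[0.0, 0.0, 1.0],
                  [0.0, -2.0, 0.0],
                  [1.0, 0.0, 0.0]])
    mu_x = moments([0.0, 1.0], [0.5, 0.5], 2)   # X mixes 0 and 1 equally
    mu_y = moments([0.5], [1.0], 2)             # Y plays the pure strategy 1/2
    print(expected_polynomial_payoff(A, mu_x, mu_y))   # 0.25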

Examples

Separable games

A polynomial game

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = (x - y)^2.$

The pure strategy best response relations are:

$b_X(y) = \begin{cases} 1, & \text{if } y \in [0, 1/2) \\ 0 \text{ or } 1, & \text{if } y = 1/2 \\ 0, & \text{if } y \in (1/2, 1] \end{cases}$
$b_Y(x) = x$

$b_X(y)$ and $b_Y(x)$ do not intersect, so there is no pure strategy Nash equilibrium. However, a mixed strategy equilibrium must exist by Glicksberg's theorem. To find it, express the expected value $v = \mathbb{E}[H(x, y)]$ as a linear combination of the first and second moments of the probability distributions of $X$ and $Y$:

$v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$

(where $\mu_{XN} = \mathbb{E}[x^N]$ and similarly for $Y$). The constraints on $\mu_{X1}$ and $\mu_{X2}$ (with similar constraints for $Y$) are given by Hausdorff as:

$\mu_{X1} \geq \mu_{X2}, \quad \mu_{X1}^2 \leq \mu_{X2}$
$\mu_{Y1} \geq \mu_{Y2}, \quad \mu_{Y1}^2 \leq \mu_{Y2}$

Each pair of constraints defines a compact convex subset in the plane. Since $v$ is linear, any extremum with respect to a player's first two moments will lie on the boundary of this subset. Player $i$'s equilibrium strategy will lie on

$\mu_{i1} = \mu_{i2}$ or $\mu_{i1}^2 = \mu_{i2}$

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if the best response at a certain point for player $i$ lies on $\mu_{i1} = \mu_{i2}$, it will lie on the whole line, so that both 0 and 1 are best responses. $b_Y(\mu_{X1}, \mu_{X2})$ simply gives the pure strategy $y = \mu_{X1}$, so $b_Y$ will never give both 0 and 1. However, $b_X$ gives both 0 and 1 when $y = 1/2$. A Nash equilibrium exists when:

$(\mu_{X1}^*, \mu_{X2}^*, \mu_{Y1}^*, \mu_{Y2}^*) = (1/2, 1/2, 1/2, 1/4)$

This determines the unique equilibrium, in which Player X plays a random mixture of 0 (half of the time) and 1 (the other half of the time) while Player Y plays the pure strategy 1/2. The value of the game is 1/4.
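
As a rough numerical check (illustrative code, not from the cited sources), the sketch below confirms that against Y's pure strategy 1/2 no pure strategy of X beats the value 1/4, and that against X's half-and-half mixture of 0 and 1 no pure strategy of Y pushes the payoff below 1/4; recall that $H$ is X's payoff and $-H$ is Y's.

    import numpy as np

    H = lambda x, y: (x - y) ** 2       # X receives H, Y receives -H

    grid = np.linspace(0.0, 1.0, 1001)

    # Value under the candidate equilibrium: X mixes 0 and 1 equally, Y plays 1/2.
    value = 0.5 * H(0.0, 0.5) + 0.5 * H(1.0, 0.5)
    print(value)                                          # 0.25

    # X's best pure deviation against y = 1/2 (X maximizes H):
    print(max(H(x, 0.5) for x in grid))                   # 0.25 -> no gain for X

    # Y's best pure deviation against X's mixture (Y minimizes H):
    print(min(0.5 * H(0.0, y) + 0.5 * H(1.0, y) for y in grid))   # 0.25 -> no gain for Y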

Non-separable games

A rational payoff function

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = \frac{(1 + x)(1 + y)(1 - xy)}{(1 + xy)^2}.$

This game has no pure strategy Nash equilibrium. It can be shown[3] that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:

$F^*(x) = \frac{4}{\pi} \arctan\sqrt{x}, \qquad G^*(y) = \frac{4}{\pi} \arctan\sqrt{y}.$

Or, equivalently, the following pair of probability density functions:

$f^*(x) = \frac{2}{\pi \sqrt{x}\,(1 + x)}, \qquad g^*(y) = \frac{2}{\pi \sqrt{y}\,(1 + y)}.$

The value of the game is 4/π.
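
A quick Monte Carlo check is sketched below (illustrative code; the inverse-CDF formula is obtained by solving $u = F^*(x)$ for $x$ and is an assumption of this sketch rather than part of the cited result). It samples both players from $F^*$ and $G^*$ and confirms that the expected payoff is close to $4/\pi$, and that X is indifferent among pure strategies against $G^*$.

    import numpy as np

    def H(x, y):
        return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

    def sample_equilibrium(rng, n):
        """Inverse-transform sampling from F*(t) = (4/pi) arctan(sqrt(t)):
        solving u = F*(t) gives t = tan(pi * u / 4) ** 2."""
        u = rng.random(n)
        return np.tan(np.pi * u / 4.0) ** 2

    rng = np.random.default_rng(0)
    x = sample_equilibrium(rng, 200_000)
    y = sample_equilibrium(rng, 200_000)
    print(np.mean(H(x, y)), 4 / np.pi)      # both approximately 1.2732

    # Against G*, every pure strategy of X should earn roughly the same value:
    for x0 in (0.1, 0.5, 0.9):
        print(x0, np.mean(H(x0, y)))        # each approximately 4/pi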

Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = \sum_{n=0}^{\infty} \frac{1}{2^n} \left( 2x^n - \left(1 - \frac{x}{3}\right)^n - \left(\frac{x}{3}\right)^n \right) \left( 2y^n - \left(1 - \frac{y}{3}\right)^n - \left(\frac{y}{3}\right)^n \right).$

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function.[4]
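
Under the series form given above, each factor $\varphi_n(t) = 2t^n - (1 - t/3)^n - (t/3)^n$ has mean zero under the Cantor distribution, so the value of the game is 0. The sketch below (illustrative code; the truncation depth and the sampling scheme are assumptions of this sketch) samples the Cantor distribution via random base-3 digits from {0, 2} and checks that every pure strategy of X earns roughly 0 against it.

    import numpy as np

    def phi(t, n):
        """phi_n(t) = 2 t^n - (1 - t/3)^n - (t/3)^n; each phi_n has mean zero
        under the Cantor distribution."""
        return 2 * t**n - (1 - t / 3) ** n - (t / 3) ** n

    def H(x, y, terms=40):
        """Payoff series truncated after `terms` terms; the tail is bounded
        by a geometric series in 1/2, so 40 terms are ample."""
        return sum(phi(x, n) * phi(y, n) / 2.0**n for n in range(terms))

    def sample_cantor(rng, size, digits=40):
        """Cantor-distributed samples: base-3 expansions whose digits are
        drawn uniformly from {0, 2}."""
        d = 2 * rng.integers(0, 2, size=(size, digits))
        return d @ (3.0 ** -np.arange(1, digits + 1))

    rng = np.random.default_rng(0)
    y = sample_cantor(rng, 100_000)
    for x0 in (0.2, 0.5, 0.8):
        print(x0, np.mean(H(x0, y)))    # each approximately 0, the game value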

Further reading

  • H. W. Kuhn and A. W. Tucker, eds. (1950). Contributions to the Theory of Games: Vol. II. Annals of Mathematics Studies 28. Princeton University Press. ISBN 0-691-07935-8.

References

  1. I.L. Glicksberg. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proceedings of the American Mathematical Society, 3(1):170–174, February 1952.
  2. N. Stein, A. Ozdaglar and P.A. Parrilo. "Separable and Low-Rank Continuous Games". International Journal of Game Theory, 37(4):475–504, December 2008. https://arxiv.org/abs/0707.3462
  3. Irving Leonard Glicksberg & Oliver Alfred Gross (1950). "Notes on Games over the Square." Kuhn, H.W. & Tucker, A.W. eds. Contributions to the Theory of Games: Volume II. Annals of Mathematics Studies 28, p.173–183. Princeton University Press.
  4. Gross, O. (1952). "A rational payoff characterization of the Cantor distribution." Technical Report D-1349, The RAND Corporation.