September 28, 2019
Modern game theory, the branch of applied math established by von Neumann & Nash, is the study of mathematical models of conflict & cooperation between intelligent, rational decision-makers. A tool used in a wide array of industries & fields ranging from economics, to political science, to computer science — the basics of game theory are surprisingly accessible to the average high-schooler. The concepts aren’t too advanced, yet the potential knowledge can pay off in spades. Understanding the fundamentals of game theory is an endeavor worth undertaking for anyone in a position subject to group decision-making (hint: everyone).
While last time we discussed the early days & modernization, this time around we’ll review frequently-used terminology & the basics of diagramming games.
Most branches of math, particularly branches of applied math, require their own lexicon — game theory doesn’t deviate here. To comprehend examples, it’s critical to familiarize ourselves with the appropriate language. Below we’ll find the basic vocabulary; the terms aren’t in alphabetical order, but rather in subjective order of decreasing relevance to the overall topic:
Game — Any interaction between two or more players in which each player’s payoff is affected by their decisions & the decisions made by others.
Players — The interdependent agents of the game, which might range from individuals to governments, to companies, etc…
Actions — The strictly-defined behaviors that a player has to choose between. What can players do? Enter a bid? End a strike? Bet on a coin flip?
Payoff — The specific, exact increase or decrease of “value” within a value system that maps to a player’s action.
Value System — Abstract & context-dependent, this is the range of all possible values; anything from jail-time, to market share, to land occupation.
Zero-Sum — A situation in which one player’s gain is equivalent to another’s loss; the net change in wealth or benefit to the game as a whole is zero.
Non-Zero-Sum — A situation where there is a net benefit or loss to the system based on the outcome of the game; winnings & losses of all players do not add up to zero.
Simultaneous — When players are making decisions & taking actions at approximately the same time; they don’t know the choices of other players when making their choices, such as rock-paper-scissors.
Sequential — When players are making decisions & taking actions in alternating turns such as monopoly or chess.
Non-Cooperative — The more common type of game, this is a strictly-competitive game among individual players.
Nash Equilibrium — A stable outcome of a game where no player has an incentive to deviate from their chosen strategy; there’s no incremental benefit from changing actions, assuming other players remain constant in their strategies.
Dominant Strategy — A strategy that is the most beneficial for a player among all available strategies, regardless of what other players do.
Cooperative — A type of game in which players can forge alliances & respond cooperatively to external credible threats.
Shapley Value — The average marginal contribution of a player across all possible coalitions; used for cooperative/coalition games, it’s the expected marginal contribution on a per-player basis.
Complete Information — A game in which knowledge about other players is available to all participants; the payoff functions, strategies & “types” of players are common knowledge.
Incomplete Information — A game in which players may or may not have information on the game type, player actions, player types, strategies, payoffs, etc.
Imperfect Information — A game in which players are unaware of the actions chosen by other players; however, everything else (player types, strategies, payoffs, etc.) is common knowledge.
Finito. The list of terms above covers the bare essentials of any game theory problem we’ll likely encounter. As you’ll learn below, in addition to these verbal descriptors, the best way to analyze a game is by visually mapping it out.
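Two of the terms above, Nash equilibrium & dominant strategy, become much clearer with a concrete check. Below is a minimal Python sketch; the 2x2 game & its payoff numbers are invented purely for illustration, not taken from any example in this series:

```python
# Hypothetical 2x2 game: payoffs[(row_action, col_action)] = (row_payoff, col_payoff).
# The numbers are invented for illustration (a prisoner's-dilemma-style setup).
payoffs = {
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 5),
    ("B", "A"): (5, 0),
    ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating from this cell."""
    row_payoff, col_payoff = payoffs[(row, col)]
    # Would the row player do better with a different action, column held fixed?
    if any(payoffs[(r, col)][0] > row_payoff for r in actions):
        return False
    # Would the column player do better with a different action, row held fixed?
    if any(payoffs[(row, c)][1] > col_payoff for c in actions):
        return False
    return True

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('B', 'B')]
```

Running the check over all four cells shows that only one outcome is stable: even though both players would jointly prefer (“A”, “A”), neither can improve by unilaterally deviating from (“B”, “B”).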
There are two primary ways of visualizing games in game theory: matrices & trees. In general, both models are appropriate & should yield some insight; however, which model is most appropriate for any given scenario is a function of the type of game & its rules (simultaneous vs. sequential, complete vs. incomplete vs. imperfect information, etc.).
The most basic tool of game theory is the payoff matrix. Typically, matrices are used to describe 2-player, simultaneous games. Seen in the template below, the two players’ choices line up perpendicular to each other on the outer borders of our matrix — one spans across the top (left-to-right), & one spans down the left side (top-to-bottom). Inside is the expected payoff for each player assuming they took that action. For example, let’s pretend, for some really simple game, that two players (Alice & Bob) exist, each with the capacity to pick one of only two actions (therefore four scenarios total). Each “scenario,” or box in the matrix, contains two numbers: the specific scenario payoffs.
The example above is context-less, but still serves its purpose of demonstrating the basic layout of a game theory matrix. As evidenced by the bold outer titles, this game is between Alice & Bob; additionally, it’s clear that they each have only one of two choices: “Action 1” or “Action 2.” Lastly, the ordered pairs inside the matrix represent the payoffs for both players given a specific scenario (Alice’s payoff is the left/x number, Bob’s payoff is the right/y number).
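The same layout can be sketched in code: a dictionary keyed by the pair of chosen actions mirrors the matrix cells. Note that the payoff numbers below are placeholders of my own choosing, not values from the template:

```python
# Each cell maps a (Alice's choice, Bob's choice) pair to an ordered payoff
# pair: (Alice's payoff, Bob's payoff). Numbers are placeholders for illustration.
matrix = {
    ("Action 1", "Action 1"): (1, 1),
    ("Action 1", "Action 2"): (0, 2),
    ("Action 2", "Action 1"): (2, 0),
    ("Action 2", "Action 2"): (1, 1),
}

# Looking up one scenario reads straight out of the corresponding cell.
alice_choice, bob_choice = "Action 1", "Action 2"
alice_payoff, bob_payoff = matrix[(alice_choice, bob_choice)]
print(alice_payoff, bob_payoff)  # 0 2
```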
Let’s consider a more relevant & relatable game: Rock Paper Scissors. A common game with global renown, it’s strictly a 2-person, simultaneous, non-cooperative game. The players, as is well-known, each choose one of only three strategies; additionally, we also know the payoffs & value system tied to those three strategies. We can map this game mathematically using a 3x3 matrix:
The example above should make much more intuitive sense. By following along a row (Alice’s choice) & a column (Bob’s choice), it’s clear what the payoff for both individuals is. Take note, though: by representing games in matrices we can only represent situations where players move simultaneously — there is no concept of time.
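The 3x3 matrix above translates naturally into code. Here’s a small sketch, assuming the conventional payoffs of +1 for a win, -1 for a loss & 0 for a tie (Alice’s payoff listed first):

```python
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def rps_payoff(alice, bob):
    """Return (Alice's payoff, Bob's payoff): +1 win, -1 loss, 0 tie."""
    if alice == bob:
        return (0, 0)
    return (1, -1) if BEATS[alice] == bob else (-1, 1)

# Zero-sum check: every cell of the 3x3 matrix sums to zero.
assert all(sum(rps_payoff(a, b)) == 0 for a in ACTIONS for b in ACTIONS)
print(rps_payoff("rock", "scissors"))  # (1, -1): rock crushes scissors
```

The assertion confirms what the matrix makes visible at a glance: rock-paper-scissors is zero-sum, since one player’s gain is always exactly the other’s loss.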
A game tree is a directed graph with nodes & edges. Nodes represent either player positions or exits/payoffs from the game. Typically, there is an assignment of players to the decision nodes; a group of decision nodes on the same horizontal level represents the array of choices for a single player at a single point in time. Edges, on the other hand, represent the moves, or actions, taken by the player at a node. The complete game tree for a game is the game tree starting at the initial position & containing all possible moves from each position.
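Under these definitions, a game tree can be sketched as nested nodes: each internal node names the player to move & maps actions (the edges) to child nodes, while leaves carry payoffs. The structure & numbers below are invented for illustration, not taken from this article’s diagrams:

```python
# A tiny sequential game, invented for illustration: Alice moves first,
# then Bob observes & responds. "moves" holds the outgoing edges; leaves
# carry (Alice's payoff, Bob's payoff).
tree = {
    "player": "Alice",
    "moves": {
        "left": {
            "player": "Bob",
            "moves": {
                "left": {"payoff": (2, 1)},
                "right": {"payoff": (0, 0)},
            },
        },
        "right": {
            "player": "Bob",
            "moves": {
                "left": {"payoff": (1, 2)},
                "right": {"payoff": (3, 0)},
            },
        },
    },
}

def payoff_of(node, path):
    """Follow a sequence of actions (edges) from the root down to a leaf."""
    for action in path:
        node = node["moves"][action]
    return node["payoff"]

print(payoff_of(tree, ["right", "left"]))  # (1, 2)
```

Each root-to-leaf path is one complete play of the game, which is exactly the temporal dimension a matrix cannot express.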
With trees we essentially add a temporal dimension to the matrix, allowing for the optimal representation of a sequential game over time. The topology of the game tree varies between two possible setups, depending on whether the game is simultaneous or sequential. To gain insight from a meaningful comparison, we’ll review trees by continuing our previous example of rock-paper-scissors:
Note, Alice was chosen at random for the top node/origin choice; however, the entire diagram would remain the same if we simply switched her & Bob’s positions. Like the matrix example above, this diagram provides the same payoff options.
For every game tree, there is only one way to represent it using a matrix; but for every matrix, there are multiple ways to represent it using a game tree. Given their inclusion of time, game trees are typically the tool of choice for analysis unless a game is really simple.
And there we go! With common language & visual mappings now under our belt, in addition to its history, our self-taught game theory knowledge is growing slowly but surely. We’ve evolved our perspective of games, & this is exciting because it provides us with the ability to see everyday scenarios in a completely different light.
While this time we focused on structuring our games, next time we’ll finally get to the meat on the bones — the true payoff from learning game theory: analyzing & arriving at a dominant strategy.