Solution Concept

A solution concept is a formal rule that defines how agents should play a game. As such, it may be understood as akin to an algorithm that specifies what actions the agents should take.

In game theory, a solution concept is a model or rule for predicting how a game will be played.1 These predictions are called “solutions” and describe which strategies the players will adopt and, hence, the result of the game. The most commonly used solution concepts are equilibrium concepts, in which we look for a set of choices, one for each player, such that each person’s strategy is best for them given the strategies chosen by everyone else. In other words, each player picks their best response to what the others do. The term best response refers to the strategy (or strategies) that produces the most favorable outcome for a player, taking the other players’ strategies as given: knowing what the others will do, you choose the action that serves you best. Some examples of solution concepts include dominant strategy equilibrium, Pareto optimality, Nash equilibrium, iterated deletion of strictly dominated strategies, and self-confirming equilibrium.2
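The idea of a best response can be made concrete with a small sketch. The payoff table below is purely illustrative, not taken from any particular named game.

```python
def best_responses(payoffs, opponent_action):
    """Return the player's actions that maximize their payoff,
    taking the opponent's action as given."""
    best = max(payoffs[a][opponent_action] for a in payoffs)
    return [a for a in payoffs if payoffs[a][opponent_action] == best]

# Row player's payoffs, indexed as payoffs[my_action][their_action].
# These numbers are hypothetical, chosen only to illustrate the idea.
row = {
    "top":    {"left": 3, "right": 0},
    "bottom": {"left": 1, "right": 2},
}

print(best_responses(row, "left"))   # "top" is best against "left"
print(best_responses(row, "right"))  # "bottom" is best against "right"
```

Note that a best response is defined relative to a fixed choice by the others; a different opponent action can make a different strategy best.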

Dominant Strategy

Sometimes one person’s best choice is the same no matter what the others do. This is called a “dominant strategy” for that player: a strategy is dominant if it is always better than any other strategy, for any profile of the other players’ actions. A strategy is termed strictly dominant if, regardless of what any other players do, it earns the player a strictly higher payoff than any other strategy. A player who has a strictly dominant strategy will always play it in equilibrium. A strategy is weakly dominant if, regardless of what any other players do, it earns the player a payoff at least as high as any other strategy. If some strategies in a game are better, there must also be worse strategies, and we call these worse strategies dominated. A strategy is dominated if there is some other choice available to the agent that yields a better payoff than it does.3
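The distinction between strict and weak dominance can be written as two simple predicates. The payoffs used here are again hypothetical, constructed so that one strategy strictly dominates another and weakly dominates a third.

```python
def strictly_dominates(payoffs, a, b):
    """True if action a gives a strictly higher payoff than action b
    against every possible opponent action."""
    return all(payoffs[a][o] > payoffs[b][o] for o in payoffs[a])

def weakly_dominates(payoffs, a, b):
    """True if action a is at least as good as b against every opponent
    action, and strictly better against at least one."""
    at_least = all(payoffs[a][o] >= payoffs[b][o] for o in payoffs[a])
    better = any(payoffs[a][o] > payoffs[b][o] for o in payoffs[a])
    return at_least and better

# Illustrative payoffs, payoffs[my_action][opponent_action]:
payoffs = {
    "A": {"left": 5, "right": 4},
    "B": {"left": 3, "right": 2},   # strictly dominated by A
    "C": {"left": 5, "right": 3},   # only weakly dominated by A (ties on "left")
}

print(strictly_dominates(payoffs, "A", "B"))  # True
print(weakly_dominates(payoffs, "A", "C"))    # True
print(strictly_dominates(payoffs, "A", "C"))  # False
```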

When the game is noncooperative and players are assumed to be rational, strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. Thus the search for an equilibrium typically begins by looking for dominant strategies and eliminating dominated ones.4 For example, in a single iteration of the prisoner’s dilemma, cooperation is strictly dominated by defection for both players, because either player is always better off defecting regardless of what their opponent does. In searching for the equilibrium of this game, we simply look at each cell and ask whether there is a better option for the player; if so, the cell is dominated and should not be chosen. Once we have done this for both players, we can identify the cell or cells that are optimal for each, giving us the equilibrium, or possibly a number of different equilibria.
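This elimination procedure can be sketched for the prisoner’s dilemma using the standard payoffs (prison years, written as negative numbers). The sketch works from one player’s perspective only; a full solver would shrink both players’ strategy sets together.

```python
# Prisoner's dilemma payoffs for the row player (standard values,
# expressed as negated prison years): pd[my_action][opponent_action].
pd = {
    "cooperate": {"cooperate": -1, "defect": -3},
    "defect":    {"cooperate":  0, "defect": -2},
}

def eliminate_strictly_dominated(payoffs):
    """Repeatedly delete any strategy that is strictly dominated by a
    surviving strategy, and return the set of survivors."""
    actions = set(payoffs)
    changed = True
    while changed:
        changed = False
        for b in list(actions):
            if any(a != b and
                   all(payoffs[a][o] > payoffs[b][o] for o in payoffs[b])
                   for a in actions):
                actions.discard(b)
                changed = True
    return actions

print(eliminate_strictly_dominated(pd))  # {'defect'}
```

Because the game is symmetric, the same calculation applies to the column player, so mutual defection is the unique equilibrium of the one-shot game.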


Minimax and maximin are formal rules that help us reason about a player’s highest and lowest potential payoffs, and to act so as to maximize or minimize these values.

In games of conflict and competition, we are often interested in knowing what strategy one can play to reduce one’s exposure to some negative event. For example, consider a scenario of war in which we have a number of different routes along which we could send a food supply convoy to our troops. Along any of these routes there is the possibility that the convoy will be bombed, so we try to choose the route that minimizes the damage that might be done to it. This is captured by the term minimax: a decision rule for minimizing the possible loss in a worst-case scenario. The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player’s actions.5

A minimax strategy is commonly chosen when a player cannot rely on the other party to keep any agreement, or when the other party has an interest in the player receiving the minimum payoff, such as in a zero-sum game. Calculating the minimax value of a player takes a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions, the one that gives the player the smallest value. Then, we determine which action the player can take in order to make sure that this smallest value is as large as possible.
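The worst-case calculation just described can be sketched directly. The convoy-routing payoffs below are hypothetical: rows are our routes, columns are where the opponent might bomb, and entries are how much of the supply gets through.

```python
def security_value(payoffs):
    """Worst-case analysis: for each of our actions, find the lowest
    payoff the opponent can force on us; then pick the action whose
    worst case is best. Returns (action, guaranteed_payoff)."""
    worst = {a: min(payoffs[a].values()) for a in payoffs}
    best_action = max(worst, key=worst.get)
    return best_action, worst[best_action]

# Hypothetical convoy payoffs: payoffs[our_route][opponent_bombing].
routes = {
    "north": {"bomb_north": 2, "bomb_south": 9},
    "south": {"bomb_north": 8, "bomb_south": 3},
}

print(security_value(routes))  # ('south', 3)
```

Here the southern route guarantees that at least 3 units get through no matter where the bombing occurs, whereas the northern route risks being reduced to 2.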

A maximin strategy maximizes the minimum payoff a player can guarantee: for each action we find the worst outcome it could lead to, and then choose the action whose worst outcome is best, the so-called “best of the worst”. Like minimax, it is a cautious, realistic rule that takes account of the worst-case scenario and prepares for that eventuality; when payoffs are expressed as losses rather than gains, it amounts to the same calculation as the minimax rule above. Its optimistic counterpart is the maximax strategy, in which the player attempts to earn the maximum possible benefit available, preferring the option that offers the chance of achieving the best possible outcome even if a highly unfavorable outcome is also possible under that option. Maximax, often referred to as the “best of the best”, is seen as a naive and overly optimistic strategy, in that it assumes a highly favorable environment for decision making.6
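The contrast between the cautious and the optimistic rule shows up clearly when one option is risky and another is safe. The payoff table here is hypothetical, built so the two rules disagree.

```python
def maximin(payoffs):
    """Cautious rule: pick the action whose worst-case payoff is highest."""
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

def maximax(payoffs):
    """Optimistic rule: pick the action whose best-case payoff is highest."""
    return max(payoffs, key=lambda a: max(payoffs[a].values()))

# Hypothetical payoffs: a risky action with a high upside but a bad
# downside, and a safe action with moderate outcomes either way.
table = {
    "risky": {"good": 10, "bad": -5},
    "safe":  {"good":  4, "bad":  2},
}

print(maximin(table))  # 'safe'  (worst cases: safe 2 vs risky -5)
print(maximax(table))  # 'risky' (best cases: risky 10 vs safe 4)
```

The maximin player settles for a guaranteed payoff of 2, while the maximax player gambles on the 10 and accepts the possibility of losing 5.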

1. (2017). Retrieved 13 May 2017, from

2. (2017). Retrieved 13 May 2017, from

3. (2017). Retrieved 13 May 2017, from

4. Game Theory: The Concise Encyclopedia of Economics | Library of Economics and Liberty. (2017). Retrieved 13 May 2017, from

5. (2017). Retrieved 13 May 2017, from

6. (2017). Retrieved 13 May 2017, from