
To Kristin


5.3  Admissible consistency of preferences                        62

6. RELAXING COMPLETENESS                                           69
6.1  Epistemic modeling of strategic games (cont.)                 69
6.2  Consistency of preferences (cont.)                            73
6.3  Admissible consistency of preferences (cont.)                 75

7. BACKWARD INDUCTION                                              79
7.1  Epistemic modeling of extensive games                         82
7.2  Initial belief of opponent rationality                        87
7.3  Belief in each subgame of opponent rationality                89
7.4  Discussion                                                    94

8. SEQUENTIALITY                                                   99
8.1  Epistemic modeling of extensive games (cont.)                101
8.2  Sequential consistency                                       104
8.3  Weak sequential consistency                                  107
8.4  Relation to backward induction                               113

9. QUASI-PERFECTNESS                                              115
9.1  Quasi-perfect consistency                                    116
9.2  Relating rationalizability concepts                          118

10. PROPERNESS                                                    121
10.1  An illustration                                             123
10.2  Proper consistency                                          124
10.3  Relating rationalizability concepts (cont.)                 127
10.4  Induction in a betting game                                 128

11. CAPTURING FORWARD INDUCTION THROUGH FULL PERMISSIBILITY       133
11.1  Illustrating the key features                               135
11.2  IECFA and fully permissible sets                            138
11.3  Full admissible consistency                                 142
11.4  Investigating examples                                      149
11.5  Related literature                                          152

List of Figures

2.1   G1 (battle-of-the-sexes).                                            12
2.2   G2, illustrating deductive reasoning.                                13
2.3   G3, illustrating weak dominance.                                     13
2.4   G'3 and a corresponding extensive form Γ'3 (a centipede game).       14
2.5   G'2 and a corresponding extensive form Γ'2.                          16
2.6   G'1 and a corresponding extensive form Γ'1 (battle-of-the-sexes
      with an outside option).                                             17
3.1   Γ4 and its strategic form.                                           25
4.1   The basic structure of the analysis in Chapter 4.                    39
7.1   Γ5 (a four-legged centipede game).                                   93
8.1   Γ6 and its strategic form.                                          111
8.2   Γ'6 and its pure strategy reduced strategic form.                   112
10.1  G7, illustrating common certain belief of proper consistency.       123
10.2  A betting game.                                                     129
10.3  The strategic form of the betting game.                             130
11.1  G8, illustrating that IEWDS may be problematic.                     134
11.2  G9, illustrating the key features of full admissible consistency.   136
11.3  G10, illustrating the relation between IECFA and IEWDS.             142
12.1  Γ11 and its pure strategy reduced strategic form.                   156
12.2  Reduced form of Γ12 (a 3-period prisoners' dilemma game).           166
12.3  G13 (the pure strategy reduced strategic form of "burning money").  169
12.4  Γ''1 and its pure strategy reduced strategic form.                  171
12.5  Γ14 and its pure strategy reduced strategic form.                   172

List of Tables

0.1   The main interactions between the chapters.                         xvi
2.1   Relationships between different equilibrium concepts.                18
2.2   Relationships between different rationalizability concepts.          19
3.1   Relationships between different sets of axioms and their
      representations.                                                     29
7.1   An epistemic model for G'3 with corresponding extensive form Γ'3.    89
7.2   An epistemic model for Γ5.                                           93
10.1  An epistemic model for the betting game.                            131
12.1  Applying IECFA to "burning money".                                  170

Preface

During the last decade I have explored the consequences of what I have chosen to call the "consistent preferences" approach to deductive reasoning in games. To a great extent this work has been done in cooperation with my co-authors Martin Dufwenberg, Andrés Perea, and Ylva Søvik, and it has led to a series of journal articles. This book presents the results of this research program.

Since the present format permits a more extensive motivation for and presentation of the analysis, it is my hope that the content will be of interest to a wider audience than the corresponding journal articles can reach. In addition to active researchers in the field, it is intended for graduate students and others who wish to study epistemic conditions for equilibrium and rationalizability concepts in game theory.

Structure of the book


This book consists of twelve chapters. The main interactions between the chapters are illustrated in Table 0.1.

As Table 0.1 indicates, the chapters can be organized into four different parts. Chapters 1 and 2 motivate the subsequent analysis by introducing the "consistent preferences" approach, and by presenting examples and concepts that are revisited throughout the book. Chapters 3 and 4 present the decision-theoretic framework and the belief operators that are used in later chapters. Chapters 5, 6, 10, and 11 analyze games in the strategic form, while the remaining chapters (Chapters 7, 8, 9, and 12) are concerned with games in the extensive form.

The material can, however, also be organized along the vertical axis in Table 0.1. Chapters 5, 8, 9, and 10 are concerned with players that are endowed with complete preferences over their own strategies.


Table 0.1.  The main interactions between the chapters.

[Diagram: the chapters are grouped into Motivation (Chapters 1 and 2), Preliminaries (Chapters 3 and 4), Strategic games (Chapters 5, 6, 10, and 11), and Extensive games (Chapters 7, 8, 9, and 12), with arrows indicating how later chapters build on earlier ones.]

In contrast, Chapters 4, 6, 7, 11, and 12 present analyses that allow players to have incomplete preferences, corresponding to an inability to assign subjective probabilities to the strategies of their opponents. The generalization to possibly incomplete preferences is motivated in Section 3.1, and is an essential feature of the analysis in Chapter 11. Note also that the concepts of Chapters 7, 8, 9, and 10 imply backward induction but not forward induction, while the concept of Chapters 11 and 12 promotes forward induction but not necessarily backward induction.

Notes on the history of the research program


While the arrows in Table 0.1 seek to guide the reader through the material presented here, they are not indicative of the chronological development of this work.

I started my work on non-equilibrium concepts in games in 1993 by considering the games that are illustrated in Figures 12.1-12.4. After joining forces with Martin Dufwenberg, who had independently developed the same basic intuition about what deductive reasoning could lead to in these examples, we started in 1994 to work on our joint papers "Admissibility and common belief" and "Deductive reasoning in extensive games", published in Games and Economic Behavior and the Economic Journal in 2003, and incorporated as Chapters 11 and 12 in this book.¹
¹ "Deductive reasoning in extensive games" was awarded the Royal Economic Society Prize for the best paper published in the Economic Journal in 2003.

Chapter 1
INTRODUCTION

This book presents, applies, and synthesizes what my co-authors and I have called the "consistent preferences" approach to deductive reasoning in games. Briefly described, this means that the object of the analysis is the ranking by each player of his own strategies, rather than his choice. The ranking can be required to be consistent (in different senses) with his beliefs about the opponent's ranking of her strategies. This can be contrasted to the usual "rational choice" approach where a player's strategy choice is (in different senses) rational given his beliefs about the opponent's strategy choice. Our approach has turned out to be fruitful for providing epistemic conditions for backward and forward induction, and for defining or characterizing concepts like proper, quasi-perfect and sequential rationalizability. It also facilitates the integration of game theory and epistemic analysis with the underlying decision-theoretic foundation.
The present text considers a setting where the players have preferences over their own strategies in a game, and investigates the following main question: What preferences may be viewed as "reasonable", provided that each player takes into account the rationality of the opponent, takes into account that the opponent takes into account the player's own rationality, and so forth? And in the extension of this: Can we develop formal, intuitive criteria that eventually lead to a selection of preferences for the players that may be viewed as "reasonable"?
The "consistent preferences" approach as such is not new. It is firmly rooted in a thirty-year-old game-theoretic tradition where a strategy of a player is interpreted as an expression of the belief (or the "conjecture")


of his opponent; cf., e.g., Harsanyi (1973), Aumann (1987a), and Blume et al. (1991b). What is new in this book (and the papers on which it builds) is that such a "consistent preferences" approach is used to characterize a wider set of equilibrium concepts and, in particular, to serve as a basis for various types of interactive epistemic analysis where equilibrium assumptions are not made.

Throughout this book, games are analyzed from the subjective perspective of each player. Hence, we can only make subjective statements about what a player will do, by considering the "reasonable" preferences (and the corresponding representation in terms of subjective probabilities) of his opponent. This subjective perspective is echoed by recent contributions like Feinberg (2004a) and Kaneko and Kline (2004), which however differ from the present approach in many respects.¹
To illustrate the differences between the two approaches (the "rational choice" approach on the one hand and the "consistent preferences" approach on the other) in a setting that will be familiar to most readers, Section 1.1 will be used to consider how epistemic conditions for Nash equilibrium in a strategic game can be formulated within each of these approaches.

The remaining Sections 1.2 and 1.3 will provide motivation for the "consistent preferences" approach through the following two points:

1 It facilitates the analysis of backward and forward induction.

2 It facilitates the integration of game theory and epistemic analysis with the underlying decision-theoretic foundation.

1.1  Conditions for Nash equilibrium

To fix ideas, consider a simple coordination game, where two drivers must choose what side to drive on in order to avoid colliding.

¹ In the present text, reasoning about hypothetical events will be captured by each player having an initial (interim after having become aware of his own type) system of conditional preferences; cf. Chapters 3 and 4. This system encodes how the player will update his beliefs as actual play develops. In contrast, the subjective framework of Feinberg (2004a) does not represent the reasoning from such an interim viewpoint, and beliefs are not constrained to be evolving or revised. Instead, beliefs are represented whenever there is a decision to be made, based on the presumption that beliefs should only matter when a decision is made. In Feinberg's framework, only the ex-post beliefs are present and all ex-post subjective views are equally modeled. Even though Kaneko and Kline (2004) also consider a player having a subjective view on the objective situation, their main point is the inductive derivation of this individual subjective view from individual experiences.


In an equilibrium in the "rational choice" approach, a driver chooses to drive on the right side of the road if he believes that his opponent chooses to drive on the right side of the road. This can be contrasted with an equilibrium in the "consistent preferences" approach, where a driver prefers to drive on the right side of the road if he believes that his opponent prefers to drive on the right side of the road. As mentioned, this follows a tradition in equilibrium analysis from Harsanyi (1973) to Blume et al. (1991b). This section presents, as a preliminary analysis, how these two interpretations of Nash equilibrium can be formalized.
First, introduce the concept of a strategic game. A strategic two-player game $G = (S_1, S_2, u_1, u_2)$ consists of, for each player i, a set of pure strategies, $S_i$, and a payoff function, $u_i : S_1 \times S_2 \rightarrow \mathbb{R}$.
Then, turn to the epistemic modeling. An epistemic model for a strategic game within the "rational choice" approach will typically specify, for each player i,

a finite set of types, $T_i$,

a function that assigns a strategy choice to each type, $s_i : T_i \rightarrow S_i$, and,

for each type $t_i$ in $T_i$, a probability distribution on the set of opponent types, $\mu^{t_i} \in \Delta(T_j)$,

where $\Delta(T_j)$ denotes the set of probability distributions on $T_j$.
When combined with i's payoff function, the function $s_j$ and the probability distribution $\mu^{t_i}$ determine player i's preferences at $t_i$ over his own strategies; these preferences will be denoted $\succsim^{t_i}$:
\[ s_i \succsim^{t_i} s_i' \quad\text{iff}\quad \sum_{t_j} \mu^{t_i}(t_j)\, u_i(s_i, s_j(t_j)) \;\geq\; \sum_{t_j} \mu^{t_i}(t_j)\, u_i(s_i', s_j(t_j))\,. \]
This in turn determines i's set of best responses at $t_i$, which will throughout be referred to as i's choice set at $t_i$:
\[ S_i^{t_i} := \{\, s_i \in S_i \mid \forall s_i' \in S_i,\ s_i \succsim^{t_i} s_i' \,\}\,. \]
Finally, in the context of the "rational choice" approach, we can define the set of type profiles for which player i chooses rationally:
\[ [\mathrm{rat}_i] := \{\, (t_1, t_2) \in T_1 \times T_2 \mid s_i(t_i) \in S_i^{t_i} \,\}\,. \]
Write $[\mathrm{rat}] := [\mathrm{rat}_1] \cap [\mathrm{rat}_2]$.
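For a purely illustrative numerical example of these definitions (the particular numbers are not taken from any of the games analyzed later), suppose that $T_j = \{t_j', t_j''\}$ with $s_j(t_j') = \ell$ and $s_j(t_j'') = r$, that $\mu^{t_i}(t_j') = 2/3$ and $\mu^{t_i}(t_j'') = 1/3$, and that $u_i(L, \ell) = 3$, $u_i(L, r) = 0$, $u_i(R, \ell) = 0$, $u_i(R, r) = 1$. Then
\[ \sum_{t_j} \mu^{t_i}(t_j)\, u_i(L, s_j(t_j)) = \tfrac{2}{3}\cdot 3 + \tfrac{1}{3}\cdot 0 = 2 \;>\; \tfrac{1}{3} = \tfrac{2}{3}\cdot 0 + \tfrac{1}{3}\cdot 1 = \sum_{t_j} \mu^{t_i}(t_j)\, u_i(R, s_j(t_j))\,, \]
so that $L \succ^{t_i} R$ and the choice set at $t_i$ is $S_i^{t_i} = \{L\}$.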


It is now straightforward to give sufficient epistemic conditions for a pure strategy Nash equilibrium:

    $(s_1, s_2) \in S_1 \times S_2$ is a pure strategy Nash equilibrium if there exists an epistemic model with $(t_1, t_2) \in [\mathrm{rat}]$ such that $(s_1, s_2) = (s_1(t_1), s_2(t_2))$ and, for each i, $\mu^{t_i}(t_j) = 1$.
In words, (s1 , s2 ) is a pure strategy Nash equilibrium if there is mutual
belief of a profile of types that rationally choose s1 and s2 . In fact,
we need not require mutual belief of the type profile: in line with the
insights of Aumann and Brandenburger (1995) (cf. their Preliminary
Observation) it is sufficient that there is mutual belief of the strategy
profile, as we need not be concerned with what one player believes that
the other player believes (or any higher order beliefs).
Consider next how to formulate epistemic conditions for a mixed strategy Nash equilibrium. Following, e.g., Harsanyi (1973), Armbruster and Böge (1979), Aumann (1987a), Brandenburger and Dekel (1989), Blume et al. (1991b), and Aumann and Brandenburger (1995), a mixed strategy Nash equilibrium is often interpreted as an equilibrium in beliefs.
According to this rather prominent view, a player need not randomize in a mixed strategy Nash equilibrium, but may choose some pure
strategy. However, the other player does not know which one, and the
mixed strategy of the one player is an expression of the belief (or the
conjecture) of the other.
The "consistent preferences" approach is well-suited for formulating epistemic conditions for a mixed strategy Nash equilibrium according to this interpretation. An epistemic model for a strategic game within the "consistent preferences" approach will typically specify, for each player i,

a finite set of types, $T_i$, and

for each type $t_i$ in $T_i$, a probability distribution on the set of opponent strategy-type pairs, $\mu^{t_i} \in \Delta(S_j \times T_j)$.

Hence, instead of specifying a function that assigns strategy choices to types, each type's probability distribution is extended to the Cartesian product of the opponent's strategy set and type set.
We can still determine player i's preferences at $t_i$ over his own strategies,
\[ s_i \succsim^{t_i} s_i' \quad\text{iff}\quad \sum_{s_j}\sum_{t_j} \mu^{t_i}(s_j, t_j)\, u_i(s_i, s_j) \;\geq\; \sum_{s_j}\sum_{t_j} \mu^{t_i}(s_j, t_j)\, u_i(s_i', s_j)\,, \]
and i's choice set at $t_i$:
\[ S_i^{t_i} := \{\, s_i \in S_i \mid \forall s_i' \in S_i,\ s_i \succsim^{t_i} s_i' \,\}\,. \]


However, we are now concerned with what i at $t_i$ believes that opponent types do, rather than with what i at $t_i$ does himself. Naturally, such beliefs will only be well-defined for opponent types that $t_i$ deems subjectively possible, i.e., for player j types in the set
\[ T_j^{t_i} := \bigl\{\, t_j \in T_j \;\big|\; \mu^{t_i}(S_j, t_j) > 0 \,\bigr\}\,, \]
where $\mu^{t_i}(S_j, t_j) := \sum_{s_j \in S_j} \mu^{t_i}(s_j, t_j)$. Say that the mixed strategy $p_j^{t_i|t_j}$ is induced for $t_j$ by $t_i$ if $t_j \in T_j^{t_i}$ and, for each $s_j \in S_j$,
\[ p_j^{t_i|t_j}(s_j) = \frac{\mu^{t_i}(s_j, t_j)}{\mu^{t_i}(S_j, t_j)}\,. \]
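For a purely illustrative numerical example, suppose that $\mu^{t_i}$ assigns probability $1/2$ to $(\ell, t_j)$, probability $1/4$ to $(r, t_j)$, and probability $1/4$ to $(\ell, t_j')$. Then $\mu^{t_i}(S_j, t_j) = 3/4$ and $\mu^{t_i}(S_j, t_j') = 1/4$, so both opponent types are deemed subjectively possible, and
\[ p_j^{t_i|t_j}(\ell) = \frac{1/2}{3/4} = \tfrac{2}{3}\,, \qquad p_j^{t_i|t_j}(r) = \frac{1/4}{3/4} = \tfrac{1}{3}\,, \]
while the mixed strategy induced for $t_j'$ assigns probability one to $\ell$.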

Finally, in the context of the "consistent preferences" approach, we can define the set of type profiles for which $t_i$ induces a rational mixed strategy for any subjectively possible opponent type:
\[ [\mathrm{ir}_i] := \bigl\{\, (t_1, t_2) \in T_1 \times T_2 \;\big|\; \forall t_j' \in T_j^{t_i},\ p_j^{t_i|t_j'} \in \Delta\bigl(S_j^{t_j'}\bigr) \,\bigr\}\,. \]
If the true type profile is in $[\mathrm{ir}_i]$, then player i's preferences over his strategies are consistent with the preferences of his opponent. Rather than player j actually being rational, it entails that player i believes that j is rational.
Write $[\mathrm{ir}] := [\mathrm{ir}_1] \cap [\mathrm{ir}_2]$. Through the event $[\mathrm{ir}]$ one can formulate sufficient epistemic conditions for a mixed strategy Nash equilibrium, interpreted as an equilibrium in beliefs:

    $(p_1, p_2) \in \Delta(S_1) \times \Delta(S_2)$ is a mixed strategy Nash equilibrium if there exists an epistemic model with $(t_1, t_2) \in [\mathrm{ir}]$ such that $(p_1, p_2) = \bigl(p_1^{t_2|t_1}, p_2^{t_1|t_2}\bigr)$ and, for each i, $\mu^{t_i}(S_j, t_j) = 1$.

In words, $(p_1, p_2)$ is a mixed strategy Nash equilibrium if there is mutual belief of a profile of types, where each type induces the opponent's mixed strategy for the other, and where any pure strategy in the induced mixed strategy is rational for the opponent type. Since any pure strategy Nash equilibrium can be viewed as a degenerate mixed strategy Nash equilibrium, these epistemic conditions are sufficient for pure strategy Nash equilibrium as well. Again, we need not require mutual belief of the type profile; it is sufficient that there is mutual belief of each player's belief about the strategy choice of his opponent.
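The behavioral content of this condition can also be checked numerically. The following is a minimal Python sketch, under a hypothetical dictionary encoding of payoffs that is not part of the text, of the requirement that every pure strategy to which a conjecture assigns positive probability maximizes that player's expected payoff against the player's own conjecture about the opponent; the two-driver coordination game from the beginning of this section is used for illustration, with coordination payoffs set to 1 and miscoordination payoffs to 0 purely by assumption.

def expected_payoff(u_i, s_i, p_j):
    # Expected payoff of the pure strategy s_i against the conjecture p_j on the
    # opponent's strategies; u_i is keyed by (own strategy, opponent strategy).
    return sum(prob * u_i[(s_i, s_j)] for s_j, prob in p_j.items())

def is_equilibrium_in_beliefs(S1, S2, u1, u2, p1, p2, tol=1e-9):
    # (p1, p2) is a mixed strategy Nash equilibrium in beliefs if, for each player i,
    # every pure strategy given positive probability by the conjecture p_i about i
    # is a best response to p_j, player i's conjecture about the opponent.
    for S_i, u_i, p_i, p_j in ((S1, u1, p1, p2), (S2, u2, p2, p1)):
        best = max(expected_payoff(u_i, s, p_j) for s in S_i)
        if any(p_i[s] > 0 and expected_payoff(u_i, s, p_j) < best - tol for s in S_i):
            return False
    return True

# Two drivers choosing a side of the road (hypothetical payoffs: 1 if they
# coordinate, 0 otherwise).
sides = ['Left', 'Right']
u = {(a, b): 1 if a == b else 0 for a in sides for b in sides}
print(is_equilibrium_in_beliefs(sides, sides, u, u,
                                {'Left': 0.5, 'Right': 0.5},
                                {'Left': 0.5, 'Right': 0.5}))   # True
print(is_equilibrium_in_beliefs(sides, sides, u, u,
                                {'Left': 1.0, 'Right': 0.0},
                                {'Left': 0.0, 'Right': 1.0}))   # False: the conjectures clash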
It is by no means infeasible to provide epistemic conditions for mixed strategy Nash equilibrium, interpreted as an equilibrium in beliefs, within the "rational choice" approach. Indeed, this is what Aumann and Brandenburger (1995) do through their Theorem A in the case of two-player games. One can still argue for the epistemic conditions arising within the "consistent preferences" approach. If a mixed strategy Nash equilibrium is interpreted as an expression of what each player believes his opponent will do, then one can argue, based on Occam's razor, that the epistemic conditions should specify these beliefs only, and not also what each player actually does. In particular, we need not require, as Aumann and Brandenburger (1995) do, that the players are rational.

1.2  Modeling backward and forward induction

This book is mainly concerned with the analysis of deductive reasoning in games (leading to rationalizability concepts) rather than the study of steady states where coordination problems have been solved (corresponding to equilibrium concepts). Deductive reasoning within the "consistent preferences" approach means that events like [ir] will be made subject to interactive epistemology, without assuming that there is mutual belief of the type profile.

Backward induction is a prime example of deductive reasoning in games. To capture the backward induction procedure, one must believe that each player chooses rationally at every information set of an extensive game, also at information sets that the player's own strategy precludes from being reached. As will be indicated through the analysis of Chapters 7-10, based partly on joint work with Andrés Perea, this might be easier to capture by analyzing events where each player believes that the opponent chooses rationally, rather than events where each player actually chooses rationally. The backward induction procedure can be captured by conditions on how each player revises his beliefs after "surprising" choices by the opponent. Therefore, it might be fruitful to characterize this procedure through restrictions on the belief revision policies of the players, rather than through restrictions on their behavior at all information sets (also at information sets that can only be reached if the behavioral restrictions at earlier information sets were not adhered to). As will be apparent in Chapters 7-10, the "consistent preferences" approach captures the backward induction procedure through conditions imposed directly on the players' belief revision policies.
In certain games, like the battle-of-the-sexes with an outside option game (cf. Figure 2.6), forward induction has considerable bite. To model forward induction, one must essentially assume that each player believes that any rational choice by the opponent is infinitely more likely than any choice that is not rational. Again, this might be easier to capture by analyzing events relating to the beliefs of the player, rather than events relating to the behavior of the opponent. Chapters 11 and 12 will report on joint work with Martin Dufwenberg that shows how the "consistent preferences" approach can be used to promote the forward induction outcome.
For ease of presentation only two-player games will be considered in this book. This is in part a matter of convenience, as much of the subsequent analysis can essentially be generalized to n-player games (with n > 2). In particular, this applies to the analysis of backward induction in Chapter 7, and to some extent, the analysis of forward induction in Chapters 11 and 12. On the other hand, in the equilibrium analysis of Chapters 5, 8, 9, and 10, a strategy of one player is interpreted as an expression of the belief of his opponent. This interpretation is straightforward in two-player games, but requires that the beliefs of different opponents coincide in games with more than two players; e.g., compare Theorems A and B of Aumann and Brandenburger (1995). Moreover, by only considering two-player games we can avoid the issue of whether (and if so, how) each player's beliefs about the strategy choices of his opponents are stochastically independent.

Throughout, player 1 will be referred to in the male gender (e.g., "he" chooses among "his" strategies), while player 2 will be referred to in the female gender (e.g., "she" believes that player 1 . . . ). Also, in the examples the strategies of player 1 will be denoted by upper case symbols (e.g., L and R), while the strategies of player 2 will be denoted by lower case symbols (e.g., ℓ and r).

1.3  Integrating decision theory and game theory

When a player in a two-player strategic game considers what decision to make (i.e., what strategy to choose), only his belief about the strategy choice of his opponent matters for his decision. However, in order to form a well-judged belief regarding the choice of his opponent, he should take her rationality into account. This makes it necessary for the player to consider his belief about her belief about his strategy choice. And so forth. Hence, the uncertainty faced by a player i concerns (a) the strategy choice of his opponent j, (b) j's belief about i's strategy choice, and so on; cf. Tan and Werlang (1988). A type of a player i corresponds to (a) a belief about j's strategy choice, (b) a belief about j's belief about i's strategy choice, and so on. Models of such infinite hierarchies


of beliefs (see, e.g., Böge and Eisele (1979), Mertens and Zamir (1985), Brandenburger and Dekel (1993), and Epstein and Wang (1996)) yield S1 × T1 × S2 × T2 as the belief-complete state space, where Ti is the set of all feasible types of player i. Furthermore, for each i, there is a homeomorphism between Ti and the set of beliefs on Si × Sj × Tj.

In the decision problem of any player i, i's decision is to choose one of his own strategies. For the modeling of this problem, i's belief about his own strategy choice is not relevant and can be ignored. This does not mean that player i is not aware of his own choice. It signifies that such awareness plays no role in the analysis, and is thus redundant.²

Hence, in the setting of a strategic game the belief of each type of player i can be restricted to the set of opponent strategy-type pairs, Sj × Tj. Combined with the payoff function specified by the strategic game, a belief on Sj × Tj yields preferences over player i's strategies.
As discussed in Section 5.1, the above results on belief-complete state spaces are not needed (since only finite games are treated without belief-completeness being imposed) and not always applicable in the setting of the present text (since some of the analysis, e.g. in Chapters 6, 7, 11, and 12, allows for incomplete preferences). Indeed, infinite hierarchies of beliefs can be modeled by an implicit but belief-incomplete model, with a finite type set Ti for each player i, where the belief of a player corresponds to the player's type, and where the belief of the player concerns the opponent's strategy-type pair.
If we let each player be aware of his own type (as we will assume throughout), this leads to an epistemic model where the state space of player i is Ti × Sj × Tj. For each player, this is a standard decision-theoretic formulation in the tradition of Savage (1954), Anscombe and Aumann (1963), and Blume et al. (1991a):

Player i as a decision maker is uncertain about what strategy-type pair in Sj × Tj will be realized.

Player i's type ti determines his belief on Sj × Tj.

Player i's decision is to choose a (possibly mixed) strategy pi ∈ ∆(Si); each such strategy determines the (randomized) outcome of the game as a function of the opponent strategy sj ∈ Sj.³
² Tan and Werlang (1988) in their Sections 2 and 3 characterize rationalizable strategies without specifying beliefs about one's own choice.

³ Hence, a strategy for a player corresponds to an Anscombe-Aumann act, assigning a (possibly randomized) outcome to any uncertain state; cf. Chapter 3.


The model leads, however, to a different state space for each player, which may perhaps be considered problematic.

In the framework for epistemic modeling of games proposed by Aumann (1987a), applied by Aumann and Brandenburger (1995) and illustrated in Section 1.1, it is also explicitly modeled that each player is aware of his own decision (i.e., his strategy choice). This entails that, for each player i, there is a function si from Ti to Si that assigns si(ti) to ti. Furthermore, it means that the relevant state space is T1 × T2, which is identical for both players. In spite of its prevalence, Aumann's model leads to the following potential problem: If player i is of type ti and in spite of this were to choose some strategy si different from si(ti), then the player would no longer be of type ti (since only si(ti) is assigned to ti). So what, starting with a state where player i is of type ti, would player i believe about his opponent's strategy choice if he were to choose si ≠ si(ti)?
In line with the defense by Aumann and Brandenburger (1995) on pp. 1174-1175, one may argue that Aumann's framework is purely descriptive and contains enough information to determine whether a player is rational, and that we need not be concerned about what the player would have believed if the state were different. An alternative is, however, to follow Board (2003) in arguing that ti's belief about his opponent's strategy choice should remain unchanged in the counterfactual event that he were to choose si ≠ si(ti).
The above discussion can be interpreted as support for the epistemic structure that will underlie this book, where the state space of player i is Ti × Sj × Tj. This kind of epistemic model describes the factors that are relevant for each player as a decision maker (namely, what his opponent does and who his opponent is), while being silent about the awareness of player i of his own decision. Also in this formulation, a different choice by player i changes the state, as an element of S1 × T1 × S2 × T2, but it does not influence the type of player i, as a specific strategy is not assigned to each type. Hence, a different choice by player i does not change his belief about what the opponents do.

In this setting, the epistemic analysis concerns the type profile, and not the strategy profile. As we have seen in Section 1.1, and as we will return to in Chapter 5, this is, however, sufficient to state and prove, e.g., a result that corresponds to Aumann and Brandenburger's (1995) Theorem A, provided that mutual belief of rationality is weakened to the condition that each player believes that his opponent is rational. As we will see in Chapters 5 and 6, it also facilitates the introduction of

caution, which then corresponds to players having beliefs that take into
account that opponents may make irrational choices, rather than players
trembling when they make their choice.
Chapters 3 and 4 are concerned with the decision-theoretic framework
and epistemic operators derived from this framework.
Chapter 3 spells out how the Anscombe-Aumann framework will be used as a decision-theoretic foundation. Following Blume et al. (1991a), continuity will be relaxed. Moreover, two different kinds of generalizations are presented. On the one hand, completeness will be relaxed, as this is not an integral part of the backward induction procedure, and cannot be imposed in the epistemic characterization of forward induction presented in Chapters 11 and 12. On the other hand, flexibility concerning how to specify a system of conditional beliefs will be introduced, leading to a structure that encompasses both the concept of a conditional probability system and conditionals derived from a lexicographic probability system. This flexibility turns out to be essential for the analysis of Chapters 8 and 9.

Chapter 4 reports on joint work with Ylva Søvik which derives belief operators from the preferences of decision makers and develops their semantics. These belief operators will in later chapters be used in the epistemic characterizations.

First, however, motivating examples will be presented and discussed in Chapter 2.

Chapter 2
MOTIVATING EXAMPLES

Through examples this chapter illuminates the features that distinguish the consistent preferences approach from the rational choice approach (cf. Chapter 1). The examples also illustrate issues of relevance
when capturing backward and forward induction in models of interactive
epistemology. The same examples will be revisited in later chapters.
Section 2.1 presents six different games, and contains a discussion of
how suggested outcomes in these games can be promoted by different
solution concepts. This discussion leads in Section 2.2 to an overview of
the solution concepts that will be covered in subsequent chapters. While
Section 2.1 will illustrate how various concepts work in the different
examples, Section 2.2 will relate the different concepts to each other,
and provide references to relevant literature.

2.1  Six examples

Consider the battle-of-the-sexes game, G1, illustrated in Figure 2.1. This game has two Nash equilibria in pure strategies: (L, ℓ) and (R, r). In the "rational choice" approach, the first of these Nash equilibria is interpreted as player 1 choosing L and player 2 choosing ℓ, and these choices being mutual belief. It is a Nash equilibrium since there is mutual belief of the strategy choices and each player's choice is rational, given his belief about the choice of his opponent. In the "consistent preferences" approach, in contrast, this Nash equilibrium is interpreted as player 1 believing that 2 chooses ℓ and player 2 believing that 1 chooses L, and these conjectures being mutual belief. It is a Nash equilibrium since there is mutual belief of the conjectures about opponent choice and each


        ℓ      r
  L   3, 1   0, 0
  R   0, 0   1, 3

Figure 2.1.  G1 (battle-of-the-sexes).

player believes that the opponent chooses rationally given the opponent's conjecture. The preferences of player 1 (that he ranks L above R) are consistent with the preferences of player 2 (that she ranks ℓ above r), and vice versa. More precisely, that player 1 ranks L above R is consistent with his beliefs about player 2, namely that he believes that she ranks ℓ above r and that she chooses rationally (i.e., chooses a top ranked strategy).

The "consistent preferences" interpretation of Nash equilibrium carries over to the mixed strategy equilibrium when interpreted as an equilibrium in beliefs; cf. the Harsanyi (1973) interpretation discussed in Section 1.1. If player 1 believes with probability 1/4 that 2 chooses ℓ and with probability 3/4 that 2 chooses r, and player 2 believes with probability 3/4 that 1 chooses L and with probability 1/4 that 1 chooses R, and these conjectures are common belief, then the players' beliefs constitute a mixed-strategy Nash equilibrium. It is a Nash equilibrium since there is mutual belief of the conjectures about opponent choice and each player believes that the opponent chooses rationally given the opponent's conjecture.
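That these conjectures indeed have the stated property can be verified directly from the payoffs of Figure 2.1:
\[ \tfrac{1}{4}\cdot 3 + \tfrac{3}{4}\cdot 0 = \tfrac{3}{4} = \tfrac{1}{4}\cdot 0 + \tfrac{3}{4}\cdot 1\,, \qquad \tfrac{3}{4}\cdot 1 + \tfrac{1}{4}\cdot 0 = \tfrac{3}{4} = \tfrac{3}{4}\cdot 0 + \tfrac{1}{4}\cdot 3\,, \]
so player 1 is indifferent between L and R given his conjecture about 2, and player 2 is indifferent between ℓ and r given her conjecture about 1. Hence each player believes that the opponent chooses rationally, whichever pure strategy in the support of the conjecture the opponent is believed to choose.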
Rationalizability concepts have no bite in the battle-of-the-sexes game, G1: interactive epistemology based on rationality alone cannot guide the players to one of the equilibria. Hence, to illustrate the force of deductive reasoning in games, leading to rationalizability concepts, we must consider other examples.

In game G2 of Figure 2.2, there is a unique Nash equilibrium, (L, ℓ). Furthermore, deductive reasoning will readily lead player 1 to L and player 2 to ℓ. In the "rational choice" approach this works as follows: If player 1 chooses rationally, then he chooses L. This is independent of his conjecture about 2's behavior since L strongly dominates R (as 4 > 3 and 1 > 0). Therefore, if player 2 believes that 1 chooses rationally, and 2 chooses rationally herself, then she chooses ℓ (since 1 > 0). This argument shows that L is the unique rationalizable strategy for player 1 and ℓ is the unique rationalizable strategy for player 2. In the "consistent preferences" approach, we get: Player 1 ranks L above R, independently of his conjecture about 2's behavior. If player 2 believes


        ℓ      r
  L   4, 1   1, 0
  R   3, 0   0, 3

Figure 2.2.  G2, illustrating deductive reasoning.

        ℓ      r
  L   1, 3   4, 2
  R   1, 3   3, 5

Figure 2.3.  G3, illustrating weak dominance.

that 1 chooses rationally, then she believes that 1 chooses L and ranks ℓ above r. Therefore, if player 1 believes that 2 chooses rationally, and he believes that she believes that 1 chooses rationally, then he believes that 2 chooses ℓ. As we will return to in Chapters 5 and 6, this is an alternative way to establish L and ℓ as the players' rationalizable strategies. In any case, the deductive reasoning leading to rationalizability corresponds to iterated elimination of strongly dominated strategies (IESDS).
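In G2 the two rounds of IESDS run as follows: in the first round R is eliminated, being strongly dominated by L (4 > 3 and 1 > 0), while no strategy of player 2 is strongly dominated as long as both L and R are present; in the second round, with only L remaining for player 1, r is strongly dominated by ℓ (1 > 0) and is eliminated, leaving exactly the rationalizable strategies L and ℓ.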
In game G3 of Figure 2.3, there is also a unique Nash equilibrium, (L, ℓ). However, deductive reasoning is more problematic and interesting in the case of this game. For each player, both strategies are rationalizable, meaning that rationalizability has no bite in this game. In particular, if player 1 deems it subjectively impossible that 2 may choose r, then R is a rational choice. Moreover, if player 2 believes that 1 chooses R, then r is a rational choice. Still, we might argue that 1 should not rule out the possibility that 2 might choose r, leading him to rank L above R (since L weakly dominates R) and player 2 to rank ℓ above r. Such deductive reasoning leads to permissible strategies in the terminology of Brandenburger (1992). Permissibility corresponds to one round of elimination of all weakly dominated strategies followed by iterated elimination of strongly dominated strategies (the so-called Dekel-Fudenberg procedure, after Dekel and Fudenberg, 1990). It can be formalized in two different ways.

On the one hand, within an analysis based on what players do, one can postulate that players make "almost rational" choices by, in the spirit of Selten (1975) and his "trembling hand", assuming that mistakes are made with (infinitely) small probability. Börgers (1994) shows how such


         ℓ      r
  Out   2, 0   2, 0
  InL   1, 3   4, 2
  InR   1, 3   3, 5

[Extensive form Γ'3: player 1 first chooses between Out (payoffs 2, 0) and In; after In, player 2 chooses between ℓ (payoffs 1, 3) and r; after r, player 1 chooses between L (payoffs 4, 2) and R (payoffs 3, 5).]

Figure 2.4.  G'3 and a corresponding extensive form Γ'3 (a centipede game).

an approach does indeed correspond to the Dekel-Fudenberg procedure and thus characterizes permissibility.

On the other hand, within an analysis based on what players believe, one can impose that players are cautious, in the sense of deeming no opponent strategy as subjectively impossible. This approach to permissibility, which is in the spirit of Blume et al. (1991b) and Brandenburger (1992), combines such caution with an assumption that each player believes that the opponent is rational. It is shown in Chapters 5 and 6 how this yields an alternative characterization of permissibility, where one need not consider whether players in fact are rational.
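For readers who prefer an algorithmic statement, the following is a minimal Python sketch of the Dekel-Fudenberg procedure, restricted to domination by pure strategies (the full definition also allows domination by mixed strategies) and using a hypothetical dictionary encoding of payoffs that is not part of the text. Applied to G3 of Figure 2.3, it leaves exactly L for player 1 and ℓ for player 2, in line with the discussion above.

def dominates(u_i, s, t, opp, strict):
    # Does s dominate t against every opponent strategy in opp?  u_i is keyed by
    # (own strategy, opponent strategy); strict=True for strong dominance,
    # strict=False for weak dominance.
    diffs = [u_i[(s, o)] - u_i[(t, o)] for o in opp]
    if strict:
        return all(d > 0 for d in diffs)
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

def eliminate(S1, S2, u1, u2, strict):
    # One round: simultaneously remove every dominated strategy of each player.
    keep1 = [s for s in S1 if not any(dominates(u1, r, s, S2, strict) for r in S1)]
    keep2 = [s for s in S2 if not any(dominates(u2, r, s, S1, strict) for r in S2)]
    return keep1, keep2

def dekel_fudenberg(S1, S2, u1, u2):
    # One round of elimination of weakly dominated strategies ...
    S1, S2 = eliminate(S1, S2, u1, u2, strict=False)
    # ... followed by iterated elimination of strongly dominated strategies.
    while True:
        T1, T2 = eliminate(S1, S2, u1, u2, strict=True)
        if (T1, T2) == (S1, S2):
            return S1, S2
        S1, S2 = T1, T2

# G3 of Figure 2.3; payoffs are keyed by (own strategy, opponent strategy).
u1 = {('L', 'l'): 1, ('L', 'r'): 4, ('R', 'l'): 1, ('R', 'r'): 3}
u2 = {('l', 'L'): 3, ('l', 'R'): 3, ('r', 'L'): 2, ('r', 'R'): 5}
print(dekel_fudenberg(['L', 'R'], ['l', 'r'], u1, u2))   # (['L'], ['l'])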
Let us then turn to an expanded version of G3, namely the game G'3 illustrated in Figure 2.4 with a corresponding extensive form Γ'3. Following Rosenthal (1981), Γ'3 is often called a "centipede" game. Here, (Out, ℓ) is normally suggested as a solution for this game. In the strategic form G'3, this suggestion can be obtained by iterated (maximal) elimination of weakly dominated strategies (IEWDS), and in the extensive form Γ'3, it is based on backward induction. While epistemic conditions for the procedure of IEWDS have been given by Brandenburger and Keisler (2002) (see also the related work by Battigalli and Siniscalchi, 2002), IEWDS will fall outside the class of procedures that will be characterized in this book. The procedure of backward induction, on the other hand, will play a central role in Chapters 7-10.

Permissibility, which corresponds to the Dekel-Fudenberg procedure, does not promote only (Out, ℓ) in the games of Figure 2.4. While the Dekel-Fudenberg procedure eliminates the weakly dominated strategy InR, this procedure does not allow for further rounds of weak elimination. Hence, since r is not strongly dominated by ℓ even after the elimination of InR, r will not be eliminated by the Dekel-Fudenberg procedure. Hence, InL as well as Out are permissible for player 1, and r as well as ℓ are permissible for player 2.

In the extensive game Γ'3, one can give the following intuition for how InL and r are compatible with the deductive reasoning underlying


permissibility: If player 1 believes that player 2 will choose ℓ, then he prefers Out to his two other strategies. Similarly, if player 2 assigns probability one to player 1 choosing Out, and revises her beliefs by assigning probability one to InL conditional on being asked to play, then she prefers ℓ to r. However, if player 2 assigns probability one to player 1 choosing Out, but revises her beliefs so that InL and InR are equally likely conditional on being asked to play, then she prefers r to ℓ. So if player 1 assigns sufficient probability to player 2 being of the latter type and believes, conditional on her being of this type, that she will be rational by choosing her top-ranked strategy r, then he will prefer InL to his two other strategies. Following Ben-Porath (1997), Chapter 7 demonstrates within a formal epistemic model how such interactive beliefs are consistent with the assumptions underlying permissibility.

As shown by Ben-Porath (1997), when permissibility is applied to an extensive game like Γ'3, each player must believe that her opponent chooses rationally as long as the opponent's behavior is consistent with the player's initial beliefs. However, conditional on finding herself at an information set that contradicts her previous belief about his behavior, she is allowed to believe that he will no longer choose rationally. E.g., in Γ'3 it is OK for player 2 to assign positive probability to the irrational strategy InR conditional on being asked to play, provided that she had originally assigned probability one to player 1 rationally choosing Out.

An alternative is that the player should still believe that her opponent will choose rationally, even conditionally on being informed about surprising moves. Chapters 7-9 will consider the event that each player believes that her opponent chooses rationally at all his information sets within models of interactive epistemology. Building on joint work with Andrés Perea, this provides

epistemic conditions for backward induction and

definitions for the concepts of sequential and quasi-perfect rationalizability.
Note that imposing that a player believes that her opponent chooses
rationally at all his information sets is a requirement imposed on her
belief revision policy, not on her actual behavior. It therefore fits well
within the consistent preferences approach.
If we move to an expanded version of G2, namely the game G'2 illustrated in Figure 2.5 with a corresponding extensive form Γ'2, not even the event that each player believes that the opponent chooses rationally at all his information sets will be sufficient for reaching the solution that


         ℓ      r
  Out   2, 2   2, 2
  InL   4, 1   1, 0
  InR   3, 0   0, 3

[Extensive form Γ'2: player 1 chooses among Out (payoffs 2, 2), InL, and InR; if 1 did not choose Out, player 2 chooses between ℓ and r, with payoffs (4, 1) after (InL, ℓ), (1, 0) after (InL, r), (3, 0) after (InR, ℓ), and (0, 3) after (InR, r).]

Figure 2.5.  G'2 and a corresponding extensive form Γ'2.

one would normally suggest, namely (InL, ℓ). This outcome is supported by the following deductive reasoning: Since InL strongly dominates InR, implying that player 1 prefers the former strategy to the latter, player 2 should deem InL much more likely than InR conditional on being asked to play, and hence prefer ℓ to r. This in turn would lead player 1 to prefer InL to his two other strategies if he believes that player 2 will be rational by choosing her top-ranked strategy ℓ.

However, the concepts of sequential and quasi-perfect rationalizability only preclude that player 2 unconditionally assigns positive probability to player 1 choosing InR. If player 2 assigns probability one to player 1 choosing Out, then she may, when revising her beliefs conditional on being asked to play, assign sufficient probability to InR so that r is preferred to ℓ. If player 1 assigns sufficient probability to player 2 being of such a type, then he will prefer Out to his two other strategies.

The outcome (InL, ℓ) can be promoted by considering the event that player 2 respects the preferences of her opponent by deeming one opponent strategy infinitely more likely than another if the opponent prefers the former to the latter. Respect of opponent preferences was first considered by Blume et al. (1991b) in their characterization of proper equilibrium. Being a requirement on the beliefs of players, it fits nicely into the "consistent preferences" approach. Within a model of interactive epistemology, Chapter 10 characterizes the concept of proper rationalizability by considering the event that each player respects opponent preferences. Proper rationalizability implies backward induction. However, even though it yields conclusions that coincide with IEWDS in all of the examples above, this conclusion does not hold in general, as will be shown by the next example and further discussed in Chapter 10.
Lastly, turn to an expanded version of G1, namely the game G'1 illustrated in Figure 2.6 with a corresponding extensive form Γ'1.


         ℓ      r
  Out   2, 2   2, 2
  InL   3, 1   0, 0
  InR   0, 0   1, 3

[Extensive form Γ'1: player 1 chooses among Out (payoffs 2, 2), InL, and InR; if 1 did not choose Out, player 2 chooses between ℓ and r, with payoffs (3, 1) after (InL, ℓ), (0, 0) after (InL, r), (0, 0) after (InR, ℓ), and (1, 3) after (InR, r).]

Figure 2.6.  G'1 and a corresponding extensive form Γ'1 (battle-of-the-sexes with an outside option).

The extensive game Γ'1 is referred to as the "battle-of-the-sexes with an outside option" game. This game was introduced by Kreps and Wilson (1982) (who credit Elon Kohlberg) and is often used to illustrate forward induction, namely that player 2 through deductive reasoning should figure out that player 1 has chosen InL and aims for the payoff 3 if 2 is being asked to play. Respect of preferences only requires player 2 to deem InR infinitely less likely than Out, since the latter strategy strongly dominates the former; it does not require 2 to deem InR infinitely less likely than InL and thereby prefer ℓ to r.

In contrast, IEWDS eliminates all strategies except InL for player 1 and ℓ for player 2, thereby promoting the forward induction outcome. Chapter 11 contains a critical assessment of how iterated weak dominance promotes forward induction in this and other examples. Based on joint work with Martin Dufwenberg, it will be suggested how forward induction can be promoted by strengthening the concept of permissibility to our notion of full permissibility.

Full permissibility is characterized by conditions levied on the beliefs of players, and therefore fits naturally into the "consistent preferences" approach. In the final Chapter 12 this notion will be further illustrated through a series of extensive games, illustrating how it yields forward induction, while not always supporting backward induction (indeed, Γ'3 is an example of an extensive game where full permissibility does not promote the backward induction outcome).

2.2  Overview of concepts

To provide a structure for the concepts that will be defined and characterized in the subsequent chapters, it might be useful as a roadmap to present an overview of these concepts and their relationships.

Table 2.1.  Relationships between different equilibrium concepts.

[Diagram relating the following concepts, with arrows pointing from more to less restrictive concepts: proper equilibrium (Myerson, 1978), quasi-perfect equilibrium (van Damme, 1984), strategic form perfect equilibrium (Selten, 1975), sequential equilibrium (Kreps and Wilson, 1982), weak sequential equilibrium (Reny, 1992), and Nash equilibrium.]

First, consider the equilibrium concepts of Table 2.1. Here, weak sequential equilibrium refers to the equilibrium concept, defined by Reny (1992), that results when each player only optimizes at information sets that the player's own strategy does not preclude from being reached. Moreover, quasi-perfect equilibrium is the concept defined by van Damme (1984), which differs from Selten's (1975) extensive form perfect equilibrium by having each player ignore the possibility of his own future mistakes. The arrows indicate that any proper equilibrium corresponds to a quasi-perfect equilibrium, and so forth. Nash equilibrium and (strategic form) perfect equilibrium will be characterized in Chapter 5, while sequential equilibrium, quasi-perfect equilibrium, and proper equilibrium will be characterized in Chapters 8, 9, and 10, respectively.
The non-equilibrium analogs to these equilibrium concepts are illustrated in Table 2.2. Again, the arrows indicate that proper rationalizability implies quasi-perfect rationalizability, and so forth. Of course, the notion of rationalizability due to Bernheim (1984) and Pearce (1984) is a non-equilibrium analog to Nash equilibrium. Likewise, the notion of permissibility due to Börgers (1994) and Brandenburger (1992) corresponds to Selten's (1975) strategic form perfect equilibrium, and the notion of weak sequential rationalizability due to Ben-Porath (1997), coined "weak extensive form rationalizability" by Battigalli and Bonanno (1999), is a non-equilibrium analog of weak sequential equilibrium. Furthermore, sequential rationalizability due to Dekel et al. (1999, 2002), quasi-perfect rationalizability due to Asheim and Perea (2004), and proper

Table 2.2.  Relationships between different rationalizability concepts.

Common certain belief that each player . . .

                          . . . believes the         . . . believes the        . . . believes the
                          opponent chooses           opponent chooses          opponent chooses
                          rationally only            rationally at all         rationally at all
                          initially, in the          reachable                 information sets
                          whole game                 information sets

. . . is cautious         [n.a.]                     [n.a.]                    Proper
and respects                                                                   rationalizability
preferences                                                                    Schuhmacher (1999)
                                                                               [Chapter 10]

. . . is cautious         Permissibility             [n.a.]                    Quasi-perfect
                          Börgers (1994)                                       rationalizability
                          Brandenburger (1992)                                 Asheim & Perea (2004)
                          Dekel & Fudenberg (1990)                             [Chapter 9]
                          [Chapters 5-6]

. . . is not              Rationalizability          Weak sequential           Sequential
necessarily               Bernheim (1984)            rationalizability         rationalizability
cautious                  Pearce (1984)              Ben-Porath (1997)         Dekel et al. (1999, 2002)
                          [Chapters 5-6]             [Chapter 8]               [Chapter 8]

                          Does not imply             Does not imply            Implies
                          backward induction         backward induction        backward induction

rationalizability due to Schuhmacher (1999) are non-equilibrium analogs to sequential equilibrium, quasi-perfect equilibrium, and proper equilibrium, respectively.
As indicated by Table 2.2, these concepts will be treated in Chapters
5, 6, 8, 9, and 10, and they are characterized by
on the one hand, whether each player is cautious and respects opponent preferences, and
on the other hand, whether each player believes that his opponent
chooses rationally only initially (in the whole game), or at all reachable information sets, or at all information sets.
This taxonomy defines events which are made subject to common certain
belief, where certain belief is the epistemic operator that will be used
for the interactive epistemology. This operator is defined in Chapter 4
and will have the following meaning: An event is said to be certainly
believed if the complement is deemed subjectively impossible.


Throughout this book, we will analyze assumptions about players' preferences, leading to events that are subsets of type profiles. We can still make subjective statements about what a player will do, by considering the preferences (and the corresponding representation in terms of subjective probabilities) of the other player.

For the concepts in the left and center columns of Table 2.2, we can do more than this, if we so wish. E.g., when characterizing weak sequential rationalizability, we can consider the event of rational pure choice at all reachable information sets, and assume that this event is commonly believed (where the term "belief" is used in the sense of belief with probability one). These assumptions yield subsets of strategy profiles, leading to direct behavioral implications within the model.
This does not carry over to the concepts in the right column. It is problematic to define the event of rational pure choice at all information sets, since reaching a non-reachable information set may contradict rational choice at earlier information sets. Also, if we consider the event of (any kind of) rational pure choice, then we cannot use common certain belief, since this, combined with rational choice, would prevent well-defined conditional beliefs after irrational opponent choices. However, common belief (with probability one) of the event that each player believes his opponent chooses rationally at all information sets does not yield backward induction in generic perfect information games, as shown in the counterexample illustrated in Figure 7.1. Common certain belief is essential for our analysis of the concepts in the right column of Table 2.2; this complicates obtaining direct behavioral implications.
Before defining the various belief operators that will be used in the
later chapters, the decision-theoretic framework will be presented and
analyzed in Chapter 3.

Chapter 3
DECISION-THEORETIC FRAMEWORK

In the "consistent preferences" approach to deductive reasoning in games, the object of the analysis is each player's preferences over his own strategies, rather than his choice. The preferences can be required to be consistent (in different senses) with his beliefs about the opponent's preferences over her strategies. The player's preferences depend on his belief about the strategy choice of his opponent. Furthermore, in order for the player to consider the preferences of his opponent, her belief about his strategy choice matters, and so forth. What kind of decision-theoretic framework is suited for such analysis?
This chapter spells out how the framework proposed by Anscombe and Aumann (1963) will be used as a decision-theoretic foundation. Following Blume et al. (1991a), the Archimedean property will be relaxed. Moreover, two different kinds of generalizations will be presented:

(i) Completeness will be relaxed, as this is not an integral part of the backward induction procedure (cf. the analysis of Chapter 7), and cannot be imposed in the epistemic characterization of forward induction presented in Chapters 11 and 12.

(ii) Flexibility concerning how to specify conditional preferences will be introduced, leading to a structure that encompasses both the concept of a conditional probability system and conditionals derived from a lexicographic probability system. This flexibility turns out to be essential for the analysis of Chapters 8 and 9.
Section 3.1 motivates these generalizations, as well as providing reasons for the choice of the Anscombe-Aumann framework. Section 3.2


introduces the different sets of axioms that will be considered, while the
final Section 3.3 presents the corresponding representation results.

3.1  Motivation

Standard decision theory under uncertainty concerns two different kinds of decisions.
1 In the first kind, the object of choice is lotteries. There is a given set of outcomes, and a lottery is an objective probability distribution over outcomes. If the decision maker satisfies the von Neumann-Morgenstern axioms (cf. von Neumann and Morgenstern, 1947), then one can assign utilities to outcomes, so that the decision maker prefers one lottery to another if the former has higher expected utility.

2 In the second kind, the object of choice is acts. There is a given set of outcomes and a given set of uncertain states, and an act is a function from states to outcomes. If the decision maker satisfies the Savage (1954) axioms, then one can assign utilities to outcomes and subjective probabilities to states, so that the decision maker prefers one act to another if the former has higher (subjective) expected utility.
An act in the sense of Anscombe and Aumann (1963) is a function from states to objective randomizations over outcomes.¹ By considering acts in this sense they are able to extend the von Neumann-Morgenstern theory so that the utilities assigned to outcomes are determined solely from preferences over lotteries, while the subjective probabilities assigned to states are determined when also acts are considered.

A strategy in a game is a function that, for each opponent strategy choice, determines an outcome. A pure strategy determines for each opponent strategy a deterministic outcome, while a mixed strategy determines for each opponent strategy an objective randomization over the set of outcomes. Hence, a pure strategy is an example of an act in the sense of Savage (1954), while a mixed strategy is an example of an act in the generalized sense of Anscombe and Aumann (1963).

¹ Anscombe and Aumann (1963) use the term "roulette lottery" for what we here call lotteries, "horse lottery" for acts from states to deterministic outcomes, i.e., acts in the Savage (1954) sense, and "compound horse lottery" for what we here refer to as Anscombe-Aumann acts.
Allowing for objective randomizations and using Anscombe-Aumann
acts are convenient for two reasons in the present context:
The Anscombe-Aumann framework allows a player's payoff function to be a von Neumann-Morgenstern (vNM) utility function determined from his preferences over randomized outcomes, independently of the likelihood that he assigns to the different strategies of his opponent. This is consistent with the way games are normally presented, where payoff functions for each player are provided independently of the analysis of the strategic interaction.²
When relaxing completeness, it turns out to be important to allow mixed strategies as objects of choice when determining maximal elements of a player's incomplete preferences, for similar reasons as domination by mixed strategies is needed for dominated strategies to correspond to strategies that can never be best replies.

² This argument is in line with the analysis of Aumann and Drèze (2004), who however depart from the Anscombe-Aumann framework by considering preferences not over all functions from states to randomized outcomes, but only on the subset of mixed strategies. The Anscombe-Aumann framework requires that the decision maker has access to objective probabilities; however, Machina (2004) points to how this requirement can be weakened.
We will consider three kinds of generalizations of the Anscombe-Aumann framework.

First, as mentioned in the introduction to this chapter, throughout this book we will follow Blume et al. (1991a) by imposing the conditional Archimedean property (also called conditional continuity) instead of the Archimedean property (also called continuity). This is important for modeling caution, which requires a player to take into account the possibility that the opponent makes an irrational choice, while assigning probability one to the event that the opponent makes a rational choice. I.e., even though any irrational choice is infinitely less likely than some rational choice, it is not ruled out. Such discontinuous preferences will also be useful when modeling players' preferences in extensive games.
Second, we will relax the axiom of completeness to conditional completeness. While complete preferences will normally be represented by
means of subjective probabilities (cf. Propositions 1, 2, 3, and 5 of this
chapter), incomplete preferences are insufficient to determine the relative
likelihood of the uncertain states. One possibility is, following Aumann
(1962) and Bewley (1986), to represent incomplete preferences by means
of a set of subjective probability distributions.
Subjective probabilities are not part of the most common deductive procedures in game theory, like IESDS, the Dekel-Fudenberg procedure, and the backward induction procedure. One can argue that, since they make no use of subjective probabilities, one should seek to provide

epistemic conditions for such procedures without reference to subjective


probabilities. Indeed, subjective probabilities play no role in epistemic
analysis of backward induction by Aumann (1995).
In Chapters 6 and 7 we follow Aumann in this respect and provide
epistemic conditions for IESDS, the Dekel-Fudenberg procedure, and the
backward induction procedure through modeling players endowed with
(possibly) incomplete preferences that are not represented by subjective
probabilities. Moreover, for the modeling of forward induction in Chapters 11 and 12, it is a necessary part of the analysis that preferences are
incomplete.
Third, we will allow for flexibility concerning how to specify conditional preferences. Such flexibility can be motivated in the context of
the modeling of sequentiality and quasi-perfectness in Chapters 8 and
9. Sequential rationalizability will be defined and sequential equilibrium
characterized by considering the event that each player believes that the
opponent chooses rationally at all her information sets. Adding preference for cautious behavior to this event yields the concepts of quasiperfect rationalizability and equilibrium. For these definitions and characterizations, we must describe what a player believes both conditional
on reaching his own information sets (to evaluate his rationality) and
conditional on his opponent reaching her information sets (to determine
his beliefs about her choices). In other words, we must specify a system
of conditional beliefs for each player.
There are various ways to do so. One possibility is a conditional probability system (CPS) where each conditional belief is a subjective probability distribution.3 This is sufficient to model sequentiality. Another
possibility, which is sufficient to model quasi-perfectness, is to apply a
single sequence of subjective probability distributionsa so-called lexicographic probability system (LPS) as defined by Blume et al. (1991a)
and derive the conditional beliefs as the conditionals of such an LPS.
Since each conditional LPS is found by constructing a new sequence,
which includes the well-defined conditional probability distributions of
the original sequence, each conditional belief is itself an LPS.
However, quasi-perfectness cannot always be modeled by a CPS since
the modeling of preference for cautious behavior may require lexicographic probabilities. To see this, consider 4 of Figure 3.1. In this
3 This

is the terminology introduced by Myerson (1986). In philosophical literature, related


concepts are called Popper measures. For an overview over relevant literature and analysis,
see Hammond (1994) and Halpern (2003).

25

Decision-theoretic Framework

1c
D
1
1

2s
d
1
1

Figure 3.1.

0
0

f
d
1,
1
0,
0
F
D 1, 1 1, 1

4 and its strategic form.

game, if player 1 believes that player 2 chooses rationally, then player 1


must assign probability one to player 2 choosing d. Hence, if each (conditional) belief is associated with a subjective probability distributionas
is the case with the concept of a CPSand player 1 believes that his
opponent chooses rationally, then player 1 is indifferent between his two
strategies. This is inconsistent with quasi-perfectness, which requires
players to have preference for cautious behavior, meaning that player 1
in 4 prefers D to F .
Moreover, sequentiality cannot always be modeled by means of conditionals of a single LPS since preference for cautious behavior is induced.
To see this, consider a modified version of 1 where an additional subgame is substituted for the (0, 0)payoff, with all payoffs in that subgame
being smaller than 1. If player 1s conditional beliefs over strategies for
player 2 is derived from a single LPS, then a well-defined belief conditional on reaching the added subgame entails that player 1 deems
possible the event that player 2 chooses f , and hence, player 1 prefers D
to F . This is inconsistent with sequentiality, under which F is a rational
choice.
Therefore, this chapter will present a new way of describing a system
of conditional beliefs, called a system of conditional lexicographic probabilities (SCLP), and which is based on joint work with Andres Perea;
cf. Asheim and Perea (2004). In contrast to a CPS, an SCLP may induce
conditional beliefs that are represented by LPSs rather than subjective
probability distributions. In contrast to the system of conditionals derived from a single LPS, an SCLP need not include all levels in the sequence of the original LPS when determining conditional beliefs. Thus,
an SCLP ensures well-defined conditional beliefs representing nontrivial
conditional preferences, while allowing for flexibility w.r.t. whether to
assume preference for cautious behavior.

26

3.2

CONSISTENT PREFERENCES

Axioms

Consider a decision maker under uncertainty, and let F be a finite set


of states. The decision maker is uncertain about what state in F will be
realized. Let Z be a finite set of outcomes. For each 2F \{}, the
decision maker is endowed with a binary relation (preferences) over all
functions that to each element of assign an objective randomization
on Z. Any such function is called an act on , and is the subject of
analysis in the deciscion-theoretic framework introduced by Anscombe
and Aumann (1963). Write p and q for acts on 2F \{}. (For
acts on F , write simply p and q.) A binary relation on the set of acts
on is denoted by , where p q means that p is preferred or
indifferent to q . As usual, let (preferred to) and (indifferent to)
denote the asymmetric and symmetric parts of .
Consider the following five axioms, where the numbering of axioms
follows Blume et al. (1991a).

Axiom 1 (Order) is complete and transitive.


Axiom 2 (Objective Independence) p0 (resp. ) p00 iff p0
+(1 )q (resp. ) p00 + (1 )q , whenever 0 < < 1 and q
is arbitrary.
Axiom 3 (Nontriviality) There exist p and q such that p q .
Axiom 4 (Archimedean Property) If p0 q p00 , then 0 <
< < 1 such that p0 +(1 )p00 q p0 + (1 )p00 .
Say that e F is Savage-null if p{e} {e} q{e} for all acts p{e} and
q{e} on {e}. Denote by the non-empty set of states that are not Savagenull; i.e., the set of states that the decision maker deems subjectively
possible. Write := { 2F \{}| 6= }. Refer to the collection
{ | } as a system of conditional preferences on the collection of
sets of acts from from subsets of F to outcomes.
Whenever 6= , denote by p the restriction of p to .

Axiom 5 (Non-null State Independence) p{e} {e} q{e} iff p{f }


{f } q{f } , whenever e, f , and p{e,f } and q{e,f } satisfy p{e,f } (e) =
p{e,f } (f ) and q{e,f } (e) = q{e,f } (f ).
Define the conditional binary relation | by p0 | p00 if, for some
q , (p0 , q\ ) (p00 , q\ ). By Axioms 1 and 2, this definition does not
depend on q . The following axiom states that preferences over acts on
, , equal the conditional of on , whenever 6= .

Decision-theoretic Framework

27

Axiom 6 (Conditionality) p (resp. ) q iff p | (resp.


| ) q , whenever 6= .
It is an immediate observation that Axioms 5 and 6 imply non-null
state independence as stated in Axiom 5 of Blume et al. (1991a).

Lemma 1 Assume that the system of conditional preferences { |


} satisfies Axioms 5 and 6. Then, , p |{e} q iff p |{f }
q whenever e, f , and p and q satisfy p (e) = p (f ) and
q (e) = q (f ).
Turn now the relaxation of Axioms 1, 4, and 6, as motivated in the
previous section.
Axiom 10 (Conditional Order) is reflexive and transitive and,
e , |{e} is complete.
Axiom 40 (Conditional Archimedean Property) e , if p0
|{e} q |{e} p00 , then 0 < < < 1 such that p0 +(1)p00 |{e}
q |{e} p0 + (1 )p00 .
Axiom 60 (Dynamic Consistency) p q whenever p | q
and 6= .
Since completeness implies reflexivity, Axiom 10 constitutes a weakening of Axioms 1. This weakening is substantive since, in the terminology
of Anscombe and Aumann (1963), it means that the decision maker has
complete preferences over roulette lotteries where objective probabilities are exogenously given, but not necessarily complete preferences
over horse lotteries where subjective probabilities, if determined, are
endogenously derived from the preferences of the decision maker.
Say that e is deemed infinitely more likely than f F (and write
e f ) if p{e,f } {e,f } q{e,f } whenever p{e} {e} q{e} . Consider the
following two auxiliary axioms.
Axiom 11 (Partitional priority) If e0 e00 , then f F , e0 f
or f e00 .
Axiom 16 (Compatibility) There exists a binary relation F satisfying Axioms 1, 2, and 4 0 such that p F | q whenever p q and
6= F .
While it is straightforward that Axiom 1 implies Axiom 10 , Axiom 4
implies Axiom 40 , and Axiom 6 implies Axiom 60 , it is less obvious that

28

CONSISTENT PREFERENCES

Axiom 1 together with Axioms 2, 40 , 5, and 6 imply Axiom 11, and


Axiom 6 together with Axioms 1 2, 40 , and 5, imply Axiom 16.
This is demonstrated by the following lemma.

Lemma 2 Assume that (a) satisfies Axioms 1 and 2 if 2F \{},


and Axiom 4 0 if and only if , and (b) the system of conditional
preferences { | } satisfies Axioms 5 and 6. Then { | }
satisfies Axioms 11 and 16.
Proof. Part 1: Axiom 11 is implied. We must show, under the given
premise, that if e0 e00 , then, f F , e0 f or f e00 . Clearly,
e0 e00 entails e0 , implying that e0 f or f e00 if f
/ or
e00
/ . The case where f = e0 or f = e00 is trivial. The case where
f 6= e0 , f 6= e00 , f and e00 remains. Assume that e0 f
does not hold, which by completeness (Axiom 1) entails the existence of
p0{e0 ,f } and q0{e0 ,f } such that p0{e0 ,f } {e0 ,f } q0{e0 ,f } and p0{e0 } {e0 } q0{e0 } .
It suffices to show that f e00 is obtained; i.e., p{f } {f } q{f } implies
p{e00 ,f } {e00 ,f } q{e00 ,f } . Throughout we invoke Axiom 6 and Lemma 1,
and choose so that {e0 , e00 , f } .
Let p |{f } q . Assume w.l.o.g. that p (d) = q (d) for d 6= f, e00 ,
and p0 (d) = q0 (d) for d 6= e0 , f . By transitivity (Axiom 1), p0 |{e0 ,f }
q0 and p0 |{e0 } q0 imply p0 |{f } q0 . However, since satisfies
Axioms 2 and 40 , (0, 1) such that p + (1 )p0 |{f } q +
(1 )q0 . Moreover, p (e0 ) = q (e0 ) and p0 |{e0 } q0 entail that
p + (1 )p0 |{e0 } q + (1 )q0 by Axiom 2, which implies that
p + (1 )p0 |{e0 ,e00 } q + (1 )q0 since e0 e00 . Hence, by
transitivity, p +(1)p0 |{e0 ,e00 ,f } q +(1)q0 or equivalently,
p + (1 )p0 q + (1 )q0 . Now, q0 |{e0 ,f } p0 means that
p + (1 )q0 p + (1 )p0 by Axiom 2, implying that p +
(1 )q0 q + (1 )q0 by transitivity (Axiom 1), and p q
or equivalently, p |{e00 ,f } q by Axiom 2. Thus, p |{f } q
implies p |{e00 ,f } q , meaning that f c.
Part 2: Axiom 16 is implied. We must show, under the given premise,
that here exists a binary relation F satisfying Axioms 1, 2, and 40 such
that p F | q whenever p q and 6= F . Clearly, since Axiom
6 is satisfied, F fulfil these requirements.
We end this section by stating for later use an axiom which is implied by Axiom 4 and which implies Axiom 40 . Also for this axiom the
numbering follows the one used by Blume et al. (1991a).

29

Decision-theoretic Framework

Table 3.1.

Relationships between different sets of axioms and their representations.

Complete and
continuous

123456
Prob. distr.

Complete and
partitionally continuous

1 2 3 400 5 6
LCPS

Complete and
discontinuous

1 2 3 40 5 6
LPS

Incomplete and
discontinuous

1 2 3 4 5 60 16
CPS

1 2 3 40 5 60 16
SCLP

10 11 2 3 40 5 6
Conditionality

Dynamic
consistency

Axiom 400 (Partitional Archimedean Property) There is a parti0 } of such that


tion {10 , . . . , L|
` {1, . . . , L|}, if p0 |`0 q |`0 p00 , then 0 < < < 1 such
that p0 + (1 )p00 |`0 q |`0 p0 + (1 )p00 , and
0
` {1, . . . , L| 1}, p |`0 q implies p |`0 `+1
q .

Table 3.1 illustrates the relationships between the sets of axioms that
we will consider. The arrows indicate that one set of axioms implies
another. The figure indicates what kind of representations the different
sets of axioms correspond to, as reported in the next section.

3.3

Representation results

In view of Lemma 1 and using the characterization result of Anscombe


and Aumann (1963), we obtain the following result under Axioms 1, 2,
3, 4, 5, and 6; cf. Theorem 2.1 of Blume et al. (1991a).
For the statement of this and later results, denote by : Z R
aPvNM utility function, and abuse notation slightly by writing (p) =
zZ p(z)(z) whenever p (Z) is an objective randomization. In
this and later results, is unique up to positive affine transformations.

Proposition 1 (Anscombe and Aumann, 1963) The following two


statements are equivalent.

30

CONSISTENT PREFERENCES

1 (a) satisfies Axioms 1, 2, and 4 if 2F \{}, and Axiom 3


if and only if , and (b) the system of conditional preferences
{ | } satisfies Axioms 5 and 6.
2 There exist a vNM utility function : (Z) R and a unique
subjective probability distribution on F with support that satisfies,
for any ,
X
X
| (e)(p (e))
| (e)(q (e)) ,
p q iff
e

where | is the conditional of on .


In view of Lemma 1, and using Theorem 3.1 of Blume et al. (1991a),
we obtain the following result under Axioms 1, 2, 3, 40 , 5, and 6.
For the statement of this and later results, we need to introduce
formally the concept of a lexicographic probability system. A lexicographic probability system (LPS) consists of L levels of subjective probability distributions: If L 1 and, ` {1, . . . , L}, ` (F ), then
= (1 , . . . , L ) is an LPS on F . Denote by L(F ) the set of LPSs on
F . Write supp := L
`=1 supp ` for the support of . If supp 6= ,
0
denote by | = (1 , . . . 0L| ) the conditional of on .4
Furthermore, for two utility vectors v and w, denote by v L w that,
whenever w` > v` , there exists k < ` such that vk > wk , and let >L and
=L denote the asymmetric and symmetric parts, respectively.

Proposition 2 (Blume et. al, 1991a) The following two statements


are equivalent.
1 (a) satisfies Axioms 1, 2, and 4 0 if 2F \{}, and Axiom 3
if and only if , and (b) the system of conditional preferences
{ | } satisfies Axioms 5 and 6.
2 There exist a vNM utility function : (Z) R and an LPS on
F with support that satisfies, for any ,
p q iff
X
L|
X
0` (e)(p (e))
L
e

4 I.e.,

`=1

L|
0` (e)(q (e))
,
`=1

` {1, . . . , L|}, 0` = k`| , where the indices k` are given by k0 = 0, k` =


min{k|k () > 0 and k > k`1 } for ` > 0, and {k|k () > 0 and k > kL| } = , and
where k`| is given by the usual definition of conditional probabilities; cf. Definition 4.2 of
Blume et al. (1991a).

31

Decision-theoretic Framework

where | = (01 , . . . 0L| ) is the conditional of on .


In view of Lemma 1 and using Theorem 5.3 of Blume et al. (1991a),
we obtain the following rusult under Axioms 1, 2, 3, 400 , 5, and 6.
For the statement of this results, we need to introduce the concept that
is called a lexicographic conditional probability system in the terminology that Blume et al. (1991a) use in their Definition 5.2. A lexicographic
conditional probability system (LCPS) consists of L levels of non-overlapping subjective probability distributions: If = (1 , . . . , L ) is an LPS
on F and the supports of the ` s are disjoint, then is an LCPS on F .

Proposition 3 (Blume et. al, 1991a) The following two statements


are equivalent.
1 (a) satisfies Axioms 1, 2, and 4 00 if 2F \{}, and Axiom 3
if and only if , and (b) the system of conditional preferences
{ | } satisfies Axioms 5 and 6.
2 There exist a vNM utility function : (Z) R and a unique LCPS
on F with support that satisfies, for any ,
p q iff
X
L|
X
0` (e)(p (e))
L
e

`=1

L|
0` (e)(q (e))
,
`=1

where | = (01 , . . . 0L| ) is the conditional of on (with the LCPS


| satisfying, ` {1, . . . , L|}, supp0` = `0 ).
Say that is conditionally represented by a vNM utility function
if (a) is non-trivial and (b) p |{e} q iff (p (e)) (q (e))
whenever e is deemed subjectively possible. Under Axioms 10 , 2, 3, 40 , 5,
and 6 conditional representation follows directly from the vNM theorem
of expected utility representation.

Proposition 4 Assume that (a) satisfies Axioms 1 0 , 2, and 4 0 if


2F \{}, and Axiom 3 if and only if , and (b) the system of
conditional preferences { | } satisfies Axioms 5 and 6. Then
there exists a vNM utility function : (Z) R such that, ,
p |{e} q iff (p (e)) (q (e)) whenever e .
Under Axioms 1, 2, 3, 40 , 5, 60 , and 16 we obtain the characterization
result of Asheim and Perea (2004).
For the statement of this result, we need to introduce the concept of
a system of conditional lexicographic probabilities. For this definition,

32

CONSISTENT PREFERENCES

if := (1 , . . . , L ) is an LPS and ` {1, . . . , L}, then write ` :=


(1 , . . . , ` ) for the LPS that includes only the ` top levels of the original
sequence of probability distributions.

Definition 1 A system of conditional lexicographic probabilities (SCLP)


(, `) on F with support consists of
an LPS = (1 , . . . , L ) on F with support , and
a function ` : {1, . . . , L} satisfying (i) supp `() 6= , (ii)
`() `() whenever =
6 , and (iii) `({e}) ` whenever
e supp ` .
The interpretation is that the conditional belief on is given by the
conditional on of the LPS `() , `() | = (01 , . . . 0`()| ). To determine preference between acts conditional on , first calculate expected
utilities by means of the top level probability distribution, 01 , and then,
if necessary, use the lower level probability distributions, 02 , . . . , 0`()| ,
lexicographically to resolve ties. The function ` thus determines, for every event , the number of levels of the original LPS that can be used,
provided that their supports intersect with , to resolve ties between
acts conditional on .
Condition (i) ensures well-defined conditional beliefs that represent
nontrivial conditional preferences. Condition (ii) means that the system
of conditional preferences is dynamically consistent, in the sense that
strict preference between two acts would always be maintained if new
information, ruling out states at which the two acts lead to the same
outcomes, became available. To motivate condition (iii), note that if
e supp ` and `({e}) < `, then it follows from condition (ii) that `
could as well ignore e without changing the conditional beliefs.

Proposition 5 (Asheim and Perea, 2004) The following two statements are equivalent.
1 (a) satisfies Axioms 1, 2, and 4 0 if 2F \{}, and Axiom 3
if and only if , and (b) the system of conditional preferences
{ | } satisfies Axioms 5, 6 0 , and 16 .
2 There exist a vNM utility function : (Z) R and an SCLP (, `)
on F with support that satisfies, for any ,
p q iff
X
`()|
X
0` (e)(p (e))
L
e

`=1

`()|
0` (e)(q (e))
,
`=1

33

Decision-theoretic Framework

where `() | = (01 , . . . 0`()| ) is the conditional of `() on .


Proof. 1 implies 2. Since is trivial if
/ , we may w.l.o.g. assume that Axiom 16 is satisfied with F | being trivial for any
/ .
Consider any e . Since {e} satisfies Axioms 1, 2, 3, and 40
(implying Axiom 4 since {e} has only one state), it follows from the
vNM theorem of expected utility representation that there exists a vNM
utility function {e} : (Z) R such that {e} represents {e} . By
Axiom 5, we may choose a common vNM utility function to represent
{e} for all e . Since Axiom 16 implies, for any e , F |{e}
satisfies Axioms 1, 2, 3, and 40 , and furthermore, p F |{e} q whenever
p{e} {e} q{e} , we obtain that represents F |{e} for all e . It now
follows that F satisfies Axiom 5 of Blume et al. (1991a).
By Theorem 3.1 of Blume et al. (1991a) F is represented by and
an LPS = (1 , . . . , L ) on F with support . Consider any . If
p q iff p F | q, then
p q iff
X

L|

0 (e)(p (e))
e `

`=1

0 (e)(q (e))
e `

L|
`=1

where | = (01 , . . . 0L| ) is the conditional of on , implying that we


can set `() = L. Otherwise, let `() {0, . . . , L 1} be the maximum
` for which it holds that
p q if
X
e

`|
X
0k (e)(p (e))
>L

k=1

0k (e)(q (e))

`|
k=1

where the r.h.s. is never satisfied if ` < min{k |supp k 6= }, entailing


that the implication holds for any such `. Define a set of pairs of acts
on , I, as follows:
(p , q ) I iff
X
`()|
X
0` (e)(p (e))
=L
e

`=1

`()|
0` (e)(q (e))
,
`=1

with (p , q ) I for any acts p and q on if `() < min{` |supp `


6= }. Note that I is a convex set. To show that and `() |
represent , we must establish that p q whenever (p , q ) I.
Hence, suppose there exists (p , q ) I such that p q . It follows

34

CONSISTENT PREFERENCES

from the definition of `() and the completeness of (Axiom 1) that


there exists (p0 , q0 ) I such that
p0 q0 and
X
e

`()+1 (e)(p0 (e)) <

X
e

`()+1 (e)(q0 (e)) .

Objective independence of (Axiom 2) now implies that, if 0 < < 1,


then
p + (1 )p0 q + (1 )p0 q + (1 )q0 ;
hence, by transitivity of (Axiom 1),
p + (1 )p0 q + (1 )q0 .

(3.1)

However, by choosing sufficiently small, we have that


X
`()+1 (e)(p (e) + (1 )p0 (e))
e
X
<
`()+1 (e)(q (e) + (1 )q0 (e)) .
e

Since I is convex so that (p + (1 )p0 , q + (1 )q0 ) I, this


implies that
p + (1 )p0 F | q + (1 )q0 .
(3.2)
Since (3.1) and (3.2) contradict Axiom 16, this shows that p q
whenever (p , q ) I. This implies in turn that `() min{` |supp `
6= } since is nontrivial. By Axiom 60 , `() `() whenever
6= . Finally, since, represents {e} for all e , it follows that
p{e} {e} q{e} iff p F |{e} q. Hence, we can set `({e}) = L, implying
`({e}) ` whenever e supp ` .
2 implies 1. This follows from routine arguments.
By strengthening Axiom 40 to Axiom 4, we get the following corollary. For the statement of this result, we need to introduce formally the
concept of a conditional probability system. A conditional probability
system (CPS) consists of a collection of subjective probability distributions: If, for each , is a subjective probability distribution on
, and { | } satisfies () () = () whenever
and , , then { | } is a CPS on F with support .

35

Decision-theoretic Framework

Corollary 1 The following three statements are equivalent.


1 (a) satisfies Axioms 1, 2, and 4 if 2F \{}, and Axiom 3
if and only if , and (b) the system of conditional preferences
{ | } satisfies Axioms 5, 6 0 , and 16 .
2 There exist a vNM utility function : (Z) R and a unique LCPS
= (1 , . . . , L ) on F with support that satisfies, for any ,
X
X
(e)(p (e))
(e)(q (e)) ,
p q iff
e

where is the conditional of `() on and `() = min{`| supp`


6= }.
3 There exist a vNM utility function : (Z) R and a unique CPS
{ | } on F with support that satisfies, for any ,
X
X
p q iff
(e)(p (e))
(e)(q (e)) .
e

Proof. 1 implies 2. By Proposition 5, the system of conditional


preferences is represented by an SCLP (, `) on F with support . By
the strengthening Axiom 40 to Axiom 4, it follows from the representation result of Anscombe and Aumann (1963) that only the top level
probability distribution is needed to represent each conditional preferences; i.e., for any , `() = min{`| supp` 6= }. This implies
that any overlapping supports in can be removed without changing,
for any , the conditional of `() on , turning into an LCPS.
Furthermore, the LCPS thus determined is unique.
2 implies 1. This follows from routine arguments.
2 implies 3. { | } is a CPS on F with support since ()
() = () is satisfied whenever and , . If an
alternative CPS {
| } were to satisfy, for any ,
X
X
p q iff

(e)(p (e))

(e)(q (e)) ,
e

= (
then one could construct an alternative LCPS
1 , . . . ,
L ) such

that, for any ,


is the conditional of
`()
on
,
where
`()
:=

min{`| supp
` 6= }, contradicting the uniqueness of .
3 implies 2. Construct the LCPS = (1 , . . . , L ) by the following
algorithm: (i) 1 = F , (ii) ` {2, . . . , L}, ` = , where =
a
L
F \`1
k=1 suppk 6= F \, and (iii) k=1 suppk = . Then, for any ,

36

CONSISTENT PREFERENCES

is the conditional of `() on , where `() := min{`| supp` 6= },


and is the only LCPS having this property.
A full support SCLP (i.e., an SCLP where = F ) combines the
structural implication of a full support LPSnamely that conditional
preferences are nontrivialwith flexibility w.r.t. whether to assume the
behavioral implication of any conditional of such an LPSnamely that
the conditional LPSs full support induces preference for cautious behavior. A full support SCLP is a generalization of both
(1) conditional beliefs described by a single full support LPS = (1 , . . . ,
L ) (cf. Proposition 2): Let, for all , `() = L. Then the
conditional belief on is described by the conditional of on , | .
(2) conditional beliefs described by a CPS (cf. Corollary 1): Let, for
all , `() = min{`| supp` 6= }. Then, it follows from
conditions (ii) and (iii) of Definition 1 that the full support LPS
= (1 , . . . , L ) has non-overlapping supportsi.e., is an LCPS
and the conditional belief on is described by the top level probability
distribution of the conditional of on . This corresponds to the
isomorphism between CPS and LCPS noted by Blume et al. (1991a)
on p. 72 and discussed by Hammond (1994) and Halpern (2003).
However, a full support SCLP may describe a system of conditional
beliefs that is not covered by these special cases. The following is a simple
example: Let = F = {d, e, f } and = (1 , 2 ), where 1 (d) = 1/2,
1 (e) = 1/2, and 2 (f ) = 1. If `(F ) = 1 and `() = 2 for any other
non-empty subset , then the resulting SCLP falls outside cases (1) and
(2).

Chapter 4
BELIEF OPERATORS

Belief operators play an important role in epistemic analyses of games.


For any event, a belief operator determines the set of states where this
event is (in some precise sense) believed. Belief operators may satisfy
different kinds of properties, like
if one event implies another, then belief of the former implies belief
of the latter (monotonicity),
if two events are believed, then the conjunction is also believed,
an event that is always true is always believed,
an event that is never true is never believed,
if an event is believed, then the event that the event is believed is
also believed (positive introspection), and
if an event is not believed, then the event that the event is not believed
is believed (negative introspection).
Belief operators satisfying this list are called KD45 operators.1
In epistemic analyses of games, it is common to derive belief operators
from preferences, leading to what can be called subjective belief operators. Examples of subjective KD45 operators are belief with probability one, as used by, e.g., Tan and Werlang (1988), belief with primary
probability one, as used by Brandenburger (1992), and conditional belief with probability one, as used by Ben-Porath (1997). More recently,
1A

KD45 operator satisfies that belief of an event implies that the complement is not believed,
but need not satisfy the truth axiomi.e. that a believed event is always true.

38

CONSISTENT PREFERENCES

Brandenburger and Keisler (2002), Battigalli and Siniscalchi (2002) and


Asheim and Dufwenberg (2003a) have proposed non-monotonic subjective belief operators called assumption, strong belief and full belief,
respectively. With the exception of Asheim and Dufwenbergs (2003a)
full belief, these operators have in common that they are based on
subjective probabilitiesarising from a probability distribution, a lexicographic probability system, or a conditional probability systemthat
represent the preferences of the player as a decision maker.
An alternative approach to belief operators, applied by e.g. Stalnaker
(1996, 1998), is to define belief operators by means of accessibility relations, as used in modal logic. Of particular interest is Stalnakers
non-monotonic absolutely robust belief operator.
Reproducing joint work with Ylva SvikAsheim and Svik (2004)
this chapter integrates these two approaches by showing how accessibility
relations can be derived from preferences and in turn be used to define
and characterize belief operators; see Figure 4.1 for an illustration of the
basic structure of the analysis in this chapter. These belief operators
will in later chapters be used in the epistemic analysis.
Morris (1997) observes that it is unnecessary to go via subjective
probabilities to derive subjective belief operators from the preferences
of a decision maker. This suggestion has been followed in Asheim (2002)
and Asheim and Dufwenberg (2003a), the content of which will be reproduced in Chapters 7 and 11. Epistemic conditions for backward
induction are provided in Chapter 7 without the use of subjective probabilities (since one can argue that subjective probabilities play no role
in the backward induction argument), while Chapter 11 promotes forward induction within a structure based on incomplete preferences that
cannot be represented by subjective probabilities.
When deriving belief operators from preferences, it is essential that
the preferences determine subjective possibility (so that it can be determined whether an event is subjectively impossible) as well as epistemic priority (so that one allows for non-trivial belief revision). As
we shall see, preferences need not satisfy completeness in order to determine subjective possibility and epistemic priority. This chapter shows
how belief operators corresponding to those used in the literature can
be derived from preferences that need not be complete.
We assume that preferences satisfy Axioms 10 , 11, 2, 3, 40 , 5, and
6, entailing that preferences are (possibly) incomplete, but allow conditional representation (cf. Proposition 4 of Chapter 3). Following the
structure illustrated in Figure 4.1, Section 4.1 shows how a binary acces-

39

Belief operators

Preferences over acts


(functions from states to
randomized outcomes)

.
&
Infinitely more likely
Admissibility

Q Accessibility relation
(R1 , . . . , RL ) Vector of nested
of epistemic priority
accessibility relations
.characterizes
defines&
Belief operators
s certain belief
s conditional belief
s robust belief
Figure 4.1.

The basic structure of the analysis in Chapter 4.

sibility relation of epistemic priority Q can be derived from preferences


satisfying these axioms, by means of the infinitely-more-likely relation.
The properties of this priority relation are similar to but more general
than those found, e.g., in Lamerre and Shoham (1994) and Stalnaker
(1996, 1998) in that reflexivity of Q is not required.2 Furthermore, it
is shown how preferences through admissibility give rise to a vector
of nested binary accessibility relations (R1 , . . . , RL ), where, for each `,
R` fulfills the usual properties of Kripke representations of beliefs; i.e.,
they are serial, transitive and Euclidean. Finally, we establish that the
two kinds of accessibility relations yield two equivalent representations
of subjective possibility and epistemic priority.
In Section 4.2 we first use the accessibility relation of epistemic priority
Q to define the following belief operators:
Certain belief coincides with what Morris (1997) calls Savage-belief
and means that the complement of the event is subjectively impossible.
Conditional belief generalizes conditional belief with probability one.
2 The

term epistemic priority will here be used to refer to what elsewhere is sometimes
referred to as plausibility or prejudice; see, e.g., Friedman and Halpern (1995) and Lamerre
and Shoham (1994). This is similar to preference among states (or worlds) in nonmonotonic
logiccf. Shoham (1988)leading agents towards some states and away from others. In
contrast, we use the term preferences in the decision-theoretic sense of a binary relation on
the set of functions (acts) from states to outcomes.

40

CONSISTENT PREFERENCES

Robust belief coincides with what Stalnaker (1998) calls absolutely


robust belief.
We then show how these operators can be characterized by means of
the vector of nested binary accessibility relations (R1 , . . . , RL ), thereby
showing that the concept of full belief as used by Asheim and Dufwenberg (2003a) coincides with robust belief.
Section 4.3 establishes properties of these belief operators. In particular, the robust belief operator (while poorly behaved) is bounded by
certain and conditional belief, which are KD45 operators.
Section 4.4 shows how the characterization of robust belief corresponds to the concept of assumption as used by Brandenburger and
Keisler (2002), and observes how the definition of robust belief is related
to the concept of strong belief as used by Battigalli and Siniscalchi
(2002). We thereby reconcile and compare these non-standard notions
of belief which have recently been used in epistemic analyses of games.
The proofs of the results in this chapter are included in Appendix A.

4.1

From preferences to accessibility relations

The purpose of this section is to show how two different kinds of accessibility relationssee, e.g., Lamerre and Shoham (1994) and Stalnaker
(1996, 1998)can be derived from preferences.
Consider the decision-theoretic framework of Chapter 3. However, as
motivated below, assume that the decision makers preferences may vary
between states. Hence, denote by d the preferences over acts on at
state d, and use superscript d throughout in a similar manner.
Assume that, for each d F , (a) d satisfies Axioms 10 , 2, and 40 if
2F \{}, and Axiom 3 if and only if d (recalling from Chapter
3 that d denotes { 2F \{}| d 6= }), and (b) the system of
conditional preferences {d | d } satisfies Axioms 5, 6, and 11. In
view of Axiom 6 we simplify notation and write
p d q instead of p dF | q p d q ,
and simplify further by substituting d for dF . By Proposition 4, d
is conditionally represented: There exist a vNM utility function d :
(Z) R such that p d{e} q iff d (p(e)) d (q(e)) whenever e d .
If E F , say that pE weakly dominates qE at d if, e E, d (pE (e))
d (qE (e)), with strict inequality for some f E. Say that d is

Belief operators

41

admissible on E if E is non-empty and p d q whenever pE weakly


dominates qE at d. The following connection between admissibility on
subsets and the infinitely-more-likely relation is important for relating
the two kinds of accessibility relations derived from preferences below;
the one kind is based on the infinitely-more-likely relation, while the
other is based on admissibility on subsets. Write E for F \E.

Proposition 6 Let E 6= and E 6= . d is admissible on E iff


e E and f E imply e d f .
An epistemic model. In a semantic formulation of belief operators
one can, following Aumann (1999), start with an information partition
of F , and then assume that the decision maker, for each element of the
partition, is endowed with a probability distribution that is concentrated
on this element of the partition. Since all states within one element of
the partition are indistinguishable, they are assigned the same probability distribution, which however differ from the probability distributions
assigned to states outside this element. In particular, probability distributions assigned to two states in different elements of the partition
have disjoint supports. Hence, in Aumanns (1999) formulation, the decision makers probability distribution depends on in which element of
the information partition the true state is.
This is consistent with the approach chosen here, where the probability distributionor more generally, the preferencesof the decision
maker will be different for states in different elements of the information
partition, and be the same for all states within the same element. However, in line with our subjective perspective, we will construct the information partition from the preferences of the decision maker, so that each
element of the partition is defined as a maximal set of states where the
decision makers preferences are the same, having the interpretation that
states within this set are indistinguishable. Moreover, Aumanns (1999)
assumption that the probability distribution is concentrated within the
corresponding element of the partition will in our framework be captured
by the property that all states outside (and possibly some states inside)
the element are deemed subjectively impossible.
Thus, for each d F , let d := {e F | p e q iff p d q} be the set
of states that are subjectively distinguishable, and write d e if e d .
Note that is a reflexive, transitive, and symmetric binary relation; i.e.,
is an equivalence relation that partitions F into equivalence classes
(or types).

42

CONSISTENT PREFERENCES

Moreover, d denotes the set of states that are subjectively possible


(i.e., not Savage-null) at d. In line with the above discussion, assume
that, for each d F , d d . This assumption will ensure that the
preference-based operators satisfy positive and negative introspection;
it corresponds to being aware of ones own type.
Refer to the collection {d | d F } as an epistemic model for the
decision maker.
In view of Axiom 6, it holds that p d q p d q whenever d
F ; in particular, p dd q p d q. The interpretation is the
decision makers preferences at d are not changed by ruling out states
that he can distinguish from the true state at d. Hence, we can adopt
an interim perspective where the decision maker has already become
aware of his own preferences (his own type); in particular, the decision
makers unconditional preferences are not obtained by conditioning ex
ante preferences on his type.
Accessibility relation of epistemic priority. Consider the following definition of the accessibility relation Q.

Definition 2 dQe (d does not have higher epistemic priority than


e) if (1) d e, (2) e is not Savage-null at d, and (3) d is not deemed
infinitely more likely than e at d.
Proposition 7 The relation Q is serial,3 transitive, and satisfies forward linearity4 and quasi-backward linearity.5
A vector of nested accessibility relations. Consider the collection of all sets E satisfying that d is admissible on E. Since d is
admissible on d , it follows that the collection is non-empty as it is contains d . Also, since no e E is Savage-null at d if d is admissible on
E, it follows that any set in this collection is a subset of d . Finally,
since e d f implies that f d e does not hold, it follows from Proposition 6 that E 0 E 00 or E 00 E 0 if d is admissible on both E 0 and E 00 ,
implying that the sets in the collection are nested. Hence, there exists a
vector of nested sets, (d1 , . . . , dLd ), on which d is admissible, satisfying:
6= d1 d` dLd = d d
(where denotes and 6=).
3 d,

e such that dQe.


and dQf imply eQf or f Qe.
5 If d0 F such that d0 Qe, then dQf and eQf imply dQe or eQd.
4 dQe

43

Belief operators

If we assume that d satisfies not only Axiom 10 but also Axiom 1, so


that, as reported in Proposition 2, d is represented by d and an LPS,
d = (d1 , . . . , dLd )i.e., a sequence of Ld levels of subjective probability
distributionsthen (d1 , . . . , dLd ) can in an obvious way be derived from
the supports of these probability distributions:
` {1, . . . , Ld },

d` =

[`
k=1

suppdk .

McLennan (1989a) develops an ordering of d that is related to (d1 ,


. . . , dLd ) in a context where a system of conditional probabilities is taken
as primitive. In a similar context, van Fraassen (1976) and Arlo-Costa
and Parith (2003) propose a concept of (belief/probability) cores that
correspond to the sets d1 , . . . , dLd . Grove (1988) spheres and Spohns
(1988) ordinal conditional functions are also related to these sets.
For d F with Ld < L := maxeF Le , let dLd = d` = d for ` {Ld +
1, . . . , L}. The collection of sets, {d` | d F }, defines an accessibility
relation, R` .

Definition 3 dR` e (at d, e is deemed possible at the epistemic level


`) if e d` .
Proposition 8 The vector of relations, (R1 , . . . , RL ), has the following properties: For each ` {1, . . . , L}, R` is serial, transitive, and
Euclidean.6 For each ` {1, . . . , L 1}, (i) dR` e implies dR`+1 e and
(ii) (f such that dR`+1 f and eR`+1 f ) implies (f 0 such that dR` f 0 and
eR` f 0 ).
The correspondence between Q and (R1 , . . . , RL ). That d is
not Savage-null at d can be interpreted as d being deemed subjectively
possible (at some epistemic level) at any state in the same equivalence
class. By part (i) of the following result, d being not Savage-null at
d has two equivalent representations in terms of accessibility relations:
dQd and dRL d. Likewise, e d d can be interpreted as e having higher
epistemic priority than d. By part (ii) of the following result, e d
d have two equivalent representations: (dQe and not eQd) and (`
{1, . . . , L} such that dR` e and not eR` d). Thus, both Q and (R1 , . . . , RL )
capture subjective possibility and epistemic priority as implied by the
preferences of the preference system.

6 dR e
`

and dR` f imply eR` f .

44

CONSISTENT PREFERENCES

Proposition 9 (i) dQd iff dRL d. (ii) (dQe and not eQd) iff (`
{1, . . . , L} such that dR` e and not eR` d).
If Axiom 4 is substituted for Axiom 40 so that the conditional Archimedean property is strengthened to the Archimedean propertythen e
being deemed infinitely more likely than f at d implies that f is Savagenull. Hence, L = 1, and by Definitions 2 and 3, Q = R1 . Hence, we are
left with a unique serial, transitive, and Euclidean accessibility relation
if preferences are continuous.

4.2

Defining and characterizing belief operators

In line with the basic structure illustrated in Figure 4.1, we now use
the accessibility relations of Section 4.1 to define and characterize belief
operators.
Defining certain, conditional, and robust belief. Consider the
accessibility relation of epistemic priority, Q, having the properties of
Proposition 7. In Asheim and Svik (2003) we show how equivalence
classes can be derived from Q with the properties of Proposition 7, implying that Q with such properties suffices for defining the belief operators. In particular, we show that the set of states that are subjective
indistinguishable at d is given by
d = {e F | f F such that dQf and eQf } ,
and the set of states that are deemed subjectively possible at d equals
d = {e d | f F such that f Qe} = {e d | eQe} ,
where d 6= since Q is serial, and where the last equality follows since,
by quasi-backward linearity, eQe if f Qe.
Define certain belief as follows.

Definition 4 At d the decision maker certainly believes E if d KE,


where KE := {e F |e E}.
Hence, at d an event E is certainly believed if the complement is deemed
subjectively impossible at d. This coincides with what Morris (1997)
calls Savage-belief.
Conditional belief is defined conditionally on sets that are subjectively possible at any state; i.e., sets in the following collection:
\
:=
d , where d F, d = { 2F \{}| d 6= } .
dF

45

Belief operators

Hence, a non-empty set is not in if and only if there exists d F


such that d = . Note that F and, , 6= F .
Since every is subjectively possible at any state, it follows that,
,
d () := {e d |f d , f Qe}
is non-empty, as demonstrated by the following lemma.

Lemma 3 If d 6= , then e d such that f d , f Qe.


Define conditional belief as follows.

Definition 5 At d the decision maker believes E conditional on if


d B()E, where B()E := {e F | e () E}.
Hence, at d an event E is believed conditional on if E contains any
state in d with at least as high epistemic priority as any other state
in d . This way of defining conditional belief is in the tradition of,
e.g., Grove (1988), Boutilier (1994), and Lamerre and Shoham (1994).
Let E be the collection of subjectively possible events having the
property that E is subjectively possible conditional on whenever E is
subjectively possible:
\
E :=
dE , where d F,
dF

dE := { d | E d 6= if E d 6= } .
Hence, a non-empty set is not in E if and only if (1) there exists d F
such that d = or (2) there exists d F such that E d 6= and
E d = . Note that E is a subset of that satisfies F E ;
hence, 6= E .
Define robust belief as follows.

Definition 6 At d the decision maker robustly believes E if d B0 E,


where B0 E := E B()E.
Hence, at d an event E is robustly believed in the following sense: E is
believed conditional on any event that does not make E subjectively
impossible. Indeed, B 0 coincides with what Stalnaker (1998) calls absolutely robust belief when we specialize to his setting where Q is also
reflexive. The relation between this belief operator and the operators
full belief, assumption, and strong belief, introduced by Asheim and
Dufwenberg (2003a), Brandenburger and Keisler (2002), and Battigalli
and Siniscalchi (2002), respectively, will be discussed at the end of this
section as well as in Section 4.4.

46

CONSISTENT PREFERENCES

Characterizing certain, conditional, and robust belief. Consider the vector of nested accessibility relations (R1 , . . . , RL ) having the
properties of Proposition 8 and being related to Q as in Proposition 9. In
Asheim and Svik (2003) we first derive (R1 , . . . , RL ) from Q and then
show how (R1 , . . . , RL ) characterizes the belief operators. In particular,
it holds for any ` {1, . . . , L} that
d = {e F | f F such that dR` f and eR` f } ,
and
d` = {e F | dR` e} .
Furthermore,
d = {e d | eRL e} = {e F | dRL e} .
The latter observations yield a characterization of certain belief.

Proposition 10 KE = {d F | dL E}.
Proposition 10 entails that certain belief as defined in Definition 4 corresponds to what Arlo-Costa and Parith (2003) call full belief.
Furthermore, by the next result, (unconditional) belief, B(F ), corresponds to what van Fraassen (1995) calls full belief.

Proposition 11 , B()E = {d F | ` {1, . . . , L} such that


6= d` E}.
Finally, by Proposition 9(ii) and the following result, E is robustly
believed iff any subjectively possible state in E has higher epistemic
priority than any state in the same equivalence class outside E.

Proposition 12 B0 E = {d F | ` {1, . . . , L} such that d` = E


d }.
Asheim and Dufwenberg (2003a) say that an event A is fully believed
at a if the preferences at a are admissible on the set of states in A that
are deemed subjectively possible at a. It follows from Proposition 12
that this coincides with robust belief as defined in Definition 6.

4.3

Properties of belief operators

The present section presents some properties of certain, conditional,


and robust belief operators. We do not seek to establish sound and
complete axiomatic systems for these operators; this should, however, be

47

Belief operators

standard for the certain and conditional belief operators, while harder
to establish for the robust belief operator. Rather, our main goal is to
show how the non-monotonic (and thus poorly behaved) robust belief
operator is bounded by the two KD45 operators certain and conditional
belief. While the results certain belief and conditional belief are included
as a background for the results on robust belief, the latter findings in
combination with the results of Sections 4.2 and 4.4 shed light on the
non-standard notions of belief recently used in epistemic analyses of
games.
Properties of certain and conditional belief. Note that certain
belief implies conditional belief since, by Definitions 4 and 5, d ()
d .

Proposition 13 For any , KE B()E.


Furthermore, combined with Proposition 13 the following result implies that both operators K and B() correspond to KD45 systems.

Proposition 14 For any , the following properties hold:


KE KE 0 = K(E E 0 )

B()E B()E 0 = B()(E E 0 )

KF = F

B() =

KE KKE

B()E KB()E

KE K(KE)

B()E K(B()E).

Note that K = , B()F = F , B()E B()B()E and B()E


B()(B()E) follow from Proposition 14 since KE B()E.
Since an event can be certainly believed even though the true state is
an element of the complement of the event, it follows that neither certain
belief nor conditional belief satisfies the truth axiom (i.e. KE E and
B()E E need not hold).
Belief revision. Conditional belief satisfies the usual properties for
belief revision as given by Stalnaker (1998); see also Alchourr
on et al.
d
(1985). To show this we must define the set, , that determines the
decision makers unconditional belief at the state d:
d := {e d | f d , f Qe} ,
i.e. d = d (F ). Then the following result can be established.

Proposition 15 1 d () .

48

CONSISTENT PREFERENCES

2 If d 6= , then d () = d .
3 If , then d () 6= .
4 If d () 0 6= , then d ( 0 ) = d () 0 .
Properties of robust belief. It is easy to show that certain belief
implies robust belief, which in turn implies (unconditional) belief.

Proposition 16 KE B0 E B(F )E.


Even though robust belief is thus bounded by two KD45 operators,
robust belief is not itself a KD45 operator.

Proposition 17 The following properties hold:


B0 E B0 E 0 B0 (E E 0 )
B0 E KB0 E
B0 E K(B0 E).
Note that B0 = , B0 F = F , B0 E B0 B0 E and B0 E B0 (B0 E)
follow from Propositions 14 and 17 since KE B0 E B(F )E. However,
even though the operator B 0 satisfies B 0 E B 0 E as well as positive
and negative introspection, it does not satisfy monotonicity since E E 0
does not imply B 0 E B 0 E 0 . To see this let d1 = {d} and d2 = d =
{d, e, f } for some d F . Now let E = {d} and E 0 = {d, e}. Clearly,
E E 0 , and since d1 = E d we have d B 0 E. However, since neither
d1 = E 0 d nor d2 = E 0 d , d
/ B0E0.

4.4

Relation to other non-monotonic operators

The purpose of this section is to show how robust belief corresponds


to the assumption operator of Brandenburger and Keisler (2002) and is
related to the strong belief operator of Battigalli and Siniscalchi (2002).
The assumption operator. Brandenburger and Keisler (2002)
consider an epistemic model which
is more general than the one that we consider in Section 4.1, since
the set of states need not be finite, and
is more special than ours, since, for all d F , Axioms 10 , 11, and
40 are strengthened to Axioms 1 and 400 , so that completeness and
the partitional Archimedean property are substituted for conditional
completeness, partitional priority, and the conditional Archimedean
property.

Belief operators

49

Within our setting with a finite set of states, F , it now follows, as


reported in Proposition 3, that d is represented by d and an LCPS,
d = (d1 , . . . , dLd )i.e., a sequence of Ld levels of non-overlapping subjective probability distributions. Hence, ` {1, . . . , Ld }, suppd` = `d ,
where (1d , . . . , Ld d ) is a partition of d . In their Appendix B, Brandenburger and Keisler (2002) employ an LCPS to represent preferences in
their setting with an infinite set of states.
Provided that completeness and the partitional Archimedean property are satisfied, Brandenburger and Keisler (2002) introduce the following belief operator in their Definition B1; see also Brandenburger and
Friedenberg (2003).

Definition 7 (Brandenburger and Keisler, 2002) At d the decision maker assumes E if dE is nontrivial and p dE q implies p d q.
Proposition 18 Assume that d satisfies Axioms 1 and 4 00 (in addition to the assumptions made in Section 4.1). Then E is assumed at d
iff d B 0 E.
Proposition 18 shows that the assumption operator coincides with
robust belief (and thus with Stalnakers absolutely robust belief) under
completeness and the partitional Archimedean property.
However, if the partitional Archimedean property is weakened to the
conditional Archimedean property, then this equivalence is not obtained.
To see this, let d = {d, e, f }, and let the preferences d , in addition
to the properties listed in Section 4.1, also satisfy completeness. It then
follows from Proposition 2 that a is represented by a and a LPSi.e.,
a sequence of subjective probability distributions with possibly overlapping supports. Consider the example provided by Blume et al. (1991a) in
their Section 5 of a two-level LPS, where the primary probability distribution, d1 , is given by d1 (d) = 1/2 and d1 (e) = 1/2, and the secondary
probability distribution, d2 , used to resolve ties, is given by d2 (d) = 1/2
and d2 (f ) = 1/2. Consider the acts p and q, where d (p(d)) = 2,
d (p(e)) = 0, and d (p(f )) = 0, and where d (q(d)) = 1, d (q(e)) = 1,
and d (q(f )) = 2. Even though d is admissible on {d, e}, and thus
{d, e} is robustly believed at d, it follows that {d, e} is not assumed at
d since
pd{d,e} q while pd q .
Brandenburger and Keisler (2002) do not indicate that their definition
as stated in Definition 7should be used outside the realm of preferences that satisfy the partitional Archimedean property. Hence, our

50

CONSISTENT PREFERENCES

definition of robust beliefcombined with the characterization result of


Proposition 12 and its interpretation in term of admissibilityyields a
preference-based generalization of Brandenburger and Keisler (2002) operator (in our setting with a finite set of states) to preferences that need
only satisfy the properties of Section 4.1.
The strong belief operator. In the setting of extensive form
games, Battigalli and Siniscalchi (2002) have suggested a non-monotonic
strong belief operator. We now show how their strong belief operator
is related to robust belief, and thereby, to absolutely robust belief of
Stalnaker (1998), full belief of Asheim and Dufwenberg (2003a), and
assumption of Brandenburger and Keisler (2002).
Battigalli and Siniscalchi (2002) base their strong belief operator on
a conditional belief operator derived from an epistemic model where,
at each state d F , the decision maker is endowed with a system of
conditional preferences {d | d } (with, as before, d denoting {
2F \{}| d 6= }). However, Battigalli and Siniscalchi (2002) assume
that, if the true state is d, then the decision makers system of conditional
preferences is represented by d and a CPS {d | d }. Since a CPS
does not satisfy conditionality as specified by Axiom 6, we must embed
their conditional belief operator in the framework of the present chapter.
We can do so using Corollary 1 of Chapter 3.
One the one hand, Battigalli and Siniscalchi (2002) and Ben-Porath
(1997) define conditional belief with probability one in the following
way: At d the decision maker believes E conditional on if suppd
E, where {d | d } is a CPS on F with support d .
On the other hand, according to Definition 5 of the present chapter,
at d the decision maker believes E conditional on if d () E.
If, however, Axioms 10 , 40 , and 11 are strengthened to Axioms 1 and
400 , so that by Proposition 3 d is represented by d and an LCPS,
d = (d1 , . . . , dLd ), on F with support d , then Lemma 14 of Appendix
A implies that d () = suppd` , where ` := min{k| suppdk 6= }.
Hence, by Corollary 1, conditional belief with probability one as
defined by Battigalli and Siniscalchi (2002) and Ben-Porath (1997) is
isomorphic to the conditional belief operator B() derived from an epistemic model satisfying the assumptions of Section 4.1 of the present
chapter.
Given that the conditional belief operator of Battigalli and Siniscalchi
(2002) thus coincides with the B() operator of the present paper, we

Belief operators

51

can define their strong belief operator as follows: Let H ( ) be some


non-empty subcollection of the collection of subsets that are subjectively
possible at any state; e.g., in an extensive game H may consist of the
subsets that correspond to subgames. Then H E is the collection of
subsets satisfying H and having the property that E is subjectively possible conditional on whenever E is subjectively possible.

Definition 8 (Battigalli and Siniscalchi,


2002) At d the deciT
sion maker strongly believes E if d H E B().
Hence, at d an event E is strongly believed if E is robustly believed in
the following sense: E is believed conditional on any subset in H that
does not make E subjectively impossible. Since E H E {F },
it follows that the strong belief operator is bounded by the robust belief
and (unconditional) belief operators.

Proposition 19 If d B 0 (E), then E is strongly believed at d. If E


is strongly believed at d, then d B(F )E.
As suggested by Battigalli and Bonanno (1999), the strong belief
operator may also be defined w.r.t. other subcollections of than the
collection of subsets that correspond to subgames, and may be seen as
a generalization of robust belief by not necessarily requiring belief to be
absolutely robust in the sense of Stalnaker (1998). However, provided
that F is included, Proposition 19 still holds.
In any case, the strong belief operator shares the properties of robust
belief: Also strong belief satisfies the properties of Proposition 17, but
is not monotonic.

Chapter 5
BASIC CHARACTERIZATIONS

In this chapter we present characterizations of basic game-theoretic


concepts. After presenting the concept of an epistemic model of a strategic game form in Section 5.1, we turn to the characterizations of Nash
equilibrium and rationalizability in Section 5.2 and characterizations of
(strategic form) perfect equilibrium and permissibility in the final Section 5.3.
The characterizations of Nash equilibrium and rationalizability will
be done by means of the event that each player has preferences that are
consistent with the game and the preferences of the opponent. Likewise, the characterizations of (strategic form) perfect equilibrium and
permissibility will be done by means of the event that each player has
preferences that are admissibly consistent with the game and the preferences of the opponent. Hence, the chapter illustrates the consistent
preferences approach and sets the stage for the analysis of subsequent
chapters.
Note that the results of this chapter are variants of results that can be
found in the literature. In particular, the characterizations of Nash equilibrium and (strategic form) perfect equilibrium are variants of Propositions 3 and 4 of Blume et al. (1991b).

5.1

Epistemic modeling of strategic games

The purpose of this section is to present a framework for strategic


games where each player is modeled as a decision maker under uncertainty. The analysis builds on the two previous chapters and introduces

the concept of an epistemic model for a strategic game form. In this


chapter preferences are assumed to be complete, an assumption that
will be relaxed in Chapter 6.
A strategic game form. Denote by Si player i's finite set of pure strategies, and let z : S → Z map strategy profiles into outcomes, where S = S1 × S2 is the set of strategy profiles and Z is the finite set of outcomes. Then (S1, S2, z) is a finite strategic two-player game form.
An epistemic model. For each player i, any of is strategies is
an act from strategy choices of his opponent j to outcomes. The uncertainty faced by a player i in a strategic game form concerns (a) js
strategy choice, (b) js preferences over acts from is strategy choices
to outcomes, and so on (cf. the discussion in Section 1.3). A type of a
player i corresponds to (a) preferences over acts from js strategy choices,
(b) preferences over acts from js preferences over acts from is strategy
choices, and so on.
For any player i, i's decision is to choose one of his own strategies. As the player is not uncertain of his own choice, the player's preferences over acts from his own strategy choices are not relevant and can be ignored. Hence, in line with the discussion in Section 1.3, consider an implicit model, with a finite type set Ti for each player i, where the preferences of a player correspond to the player's type, and where the preferences of the player are over acts from the opponent's strategy-type pairs to outcomes.
If we let each player be aware of his own type (as we will assume throughout), this leads to an epistemic model where the state space of player i is Ti × Sj × Tj, and where, for each ti ∈ Ti,
{ti} × Sj × Tj
constitutes an equivalence class, being the set of states that are indistinguishable for player i at ti, and a non-empty subset of {ti} × Sj × Tj, β^{ti}, is the set of states that player i deems subjectively possible at ti.

Definition 9 An epistemic model for the finite strategic two-player game form (S1, S2, z) is a structure
(S1, T1, S2, T2),
where, for each type ti of any player i, ti corresponds to a system of conditional preferences on the collection of sets of acts from elements of
Φ^{ti} := {φ ⊆ Ti × Sj × Tj | φ ∩ β^{ti} ≠ ∅}
to Δ(Z), where β^{ti} is a non-empty subset of {ti} × Sj × Tj.


An implicit model with a finite set of types for each player, as considered throughout this book, does not allow for preference-completeness,
where, for each player i, there exists some type of i for any feasible preferences that i may have.1 Still, even a finite implicit model gives rise to
infinite hierarchies of preferences, and in effect we assume that each
player as a decision maker is able to represent his subjective hierarchy
of preferences by means of a finite implicit model. Then, at the true
profile of types, the two players subjective hierarchies can be embedded
in a single implicit model that includes the types of the two players that
are needed to represent each players hierarchy. Such a construction can
fruitfully be used to analyze a wide range of game-theoretic concepts, as
will be demonstrated throughout this book.
However, when embedding the two players subjective hierarchies into
a single implicit model, it is illegitimate to require that player i deems
the true type of his opponent j subjectively possible. Rather, we cannot rule out that, at the true type profile, player js true type is not
needed to represent player is subjective hierarchy of preferences; this is
particularly relevant for the analysis of non-equilibrium game-theoretic
concepts. Hence, when applying finite implicit models for interactive
analysis of games, it is important to allowas we do in the framework
of the present textthe decision maker to hold objectively possible opponent preferences as subjectively impossible.
Throughout this book we will consider two different kinds of epistemic models that differ according to the kind of assumption imposed on the set of conditional preferences that ti determines. For the present chapter, as well as Chapters 8, 9, and 10, we will make the following assumption.

Assumption 1 For each ti of any player i, (a) ≽^{ti}_φ satisfies Axioms 1, 2, and 4′ if ∅ ≠ φ ⊆ Ti × Sj × Tj, and Axiom 3 if and only if φ ∩ β^{ti} ≠ ∅, (b) the system of conditional preferences {≽^{ti}_φ | φ ∈ Φ^{ti}} satisfies Axioms 5, 6′, and 16, and (c) there exists a non-empty subset of opponent types, Tj^{ti}, such that β^{ti} = {ti} × Sj × Tj^{ti}.

1 Preference-completeness is needed for the interactive epistemic analyses of, e.g., Brandenburger and Keisler (2002) and Battigalli and Siniscalchi (2002), but not for the analysis presented in this book. Brandenburger and Keisler (1999) show that there need not exist a preference-complete interactive epistemic model when preferences are not representable by subjective probabilities, implying that preference-completeness may be inconsistent with the analysis of Chapters 6, 7, 11, and 12, where Axiom 1 is not imposed.

In this assumption, Tj^{ti} is the non-empty set of opponent types that player i deems subjectively possible at ti. The assumption explicitly allows for preferences over acts from subsets φ of Ti × Sj × Tj, where proj_{Sj} φ may be a strict subset of Sj. This accommodates the analysis of extensive game concepts in Chapters 8 and 9 and will permit the concepts in Tables 2.1 and 2.2 to be treated in a common framework.
Write ≽^{ti} for player i's preferences conditional on being of type ti; i.e., for ≽^{ti}_φ when φ = {ti} × Sj × Tj. We will refer to ≽^{ti} as player i's unconditional preferences at ti.
Under Assumption 1 it follows from Proposition 5 that, for each type ti of any player i, i's system of conditional preferences at ti can be represented by a vNM utility function υi^{ti} : Δ(Z) → ℝ and an SCLP (λ^{ti}, ℓ^{ti}) on Ti × Sj × Tj with support β^{ti} = {ti} × Sj × Tj^{ti}. Throughout, we will adopt an interim perspective, where player i has already become aware of his own type. This entails that we can w.l.o.g. assume that, for any φ ∈ Φ^{ti}, ℓ^{ti}(φ) = ℓ^{ti}(φ ∩ ({ti} × Sj × Tj)). The interpretation is that player i's preferences at ti are not changed by ruling out states that i can distinguish from the true state at ti. Consequently, for expositional simplicity we choose to let the SCLP (λ^{ti}, ℓ^{ti}) be defined on Sj × Tj with support Sj × Tj^{ti}.
Preferences over strategies. It follows from the above assumptions that, for each type ti of any player i, player i's unconditional preferences at ti, ≽^{ti}, are a complete and transitive binary relation on the set of acts from Sj × Tj to Δ(Z) that is represented by a vNM utility function υi^{ti} and an LPS (λ1^{ti}, . . . , λℓ^{ti}), where ℓ = ℓ^{ti}(Sj × Tj). Since each pure strategy si ∈ Si is a function that assigns the deterministic outcome z(si, sj) to any (sj, tj) ∈ Sj × Tj and is thus an act from Sj × Tj to Δ(Z), we have that ≽^{ti} determines complete and transitive preferences on i's set of pure strategies, Si.
Player i's choice set at ti, Si^{ti}, is player i's set of rational pure strategies at ti:
Si^{ti} := {si ∈ Si | ∀s'_i ∈ Si, si ≽^{ti} s'_i}.
Since ≽^{ti} is complete and transitive and satisfies objective independence, and Si is finite, it follows that the choice set Si^{ti} is non-empty, and that the set of rational mixed strategies equals Δ(Si^{ti}).
A strategic game. Let, for each i, ui : S → ℝ be a vNM utility function that assigns payoff to any strategy profile. Then G = (S1, S2, u1, u2) is a finite strategic two-player game. Assume that, for each i, there exist s = (s1, s2), s' = (s'_1, s'_2) ∈ S such that ui(s) > ui(s'). The event that i plays the game G is given by
[ui] := {(t1, t2) ∈ T1 × T2 | υi^{ti} ∘ z is a positive affine transformation of ui},
while [u] := [u1] ∩ [u2] is the event that both players play G.
Denote by pi, qi ∈ Δ(Si) mixed strategies for player i, and let S'_j (⊆ Sj) be a non-empty set of opponent strategies. Say that pi strongly dominates qi on S'_j if, ∀sj ∈ S'_j, ui(pi, sj) > ui(qi, sj). Say that qi is strongly dominated on S'_j if there exists pi ∈ Δ(Si) such that pi strongly dominates qi on S'_j. Say that pi weakly dominates qi on S'_j if, ∀sj ∈ S'_j, ui(pi, sj) ≥ ui(qi, sj), with strict inequality for some s'_j ∈ S'_j. Say that qi is weakly dominated on S'_j if there exists pi ∈ Δ(Si) such that pi weakly dominates qi on S'_j.
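As a concrete illustration of these dominance relations, the following minimal sketch (not from the book; the payoff array and function names are illustrative assumptions) checks whether one mixed strategy dominates another on a given set of opponent strategies.

```python
import numpy as np

def strongly_dominates(p, q, U, opp_set):
    """p, q: mixed strategies of player i (probability vectors over S_i);
    U: payoff array with U[s_i, s_j] = u_i(s_i, s_j);
    opp_set: opponent pure strategies (column indices)."""
    return all(p @ U[:, sj] > q @ U[:, sj] for sj in opp_set)

def weakly_dominates(p, q, U, opp_set):
    diffs = [p @ U[:, sj] - q @ U[:, sj] for sj in opp_set]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

# Example: does the first pure strategy (as a degenerate mixed strategy)
# dominate the second one on the full opponent set {0, 1}?
U = np.array([[3.0, 1.0],
              [3.0, 0.0]])
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(weakly_dominates(e0, e1, U, [0, 1]))    # True: equal vs s_j = 0, better vs s_j = 1
print(strongly_dominates(e0, e1, U, [0, 1]))  # False: ties against s_j = 0
```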
The following two results will be helpful for some of the proofs.

Lemma 4 Let G = (S1, S2, u1, u2) be a finite strategic two-player game. For each i, pi ∈ Δ(Si) is strongly dominated on S'_j if and only if there does not exist µ ∈ Δ(Sj) with supp µ ⊆ S'_j such that, ∀s'_i ∈ Si,
∑_{sj ∈ S'_j} µ(sj) ui(pi, sj) ≥ ∑_{sj ∈ S'_j} µ(sj) ui(s'_i, sj).

Proof. Lemma 3 of Pearce (1984).

Lemma 5 Let G = (S1, S2, u1, u2) be a finite strategic two-player game. For each i, pi ∈ Δ(Si) is weakly dominated on S'_j if and only if there does not exist µ ∈ Δ(Sj) with supp µ = S'_j such that, ∀s'_i ∈ Si,
∑_{sj ∈ S'_j} µ(sj) ui(pi, sj) ≥ ∑_{sj ∈ S'_j} µ(sj) ui(s'_i, sj).

Proof. Lemma 4 of Pearce (1984).


Certain belief. In the present chapter, as well as in Chapters 8, 9, and
10, we will apply the certain belief operator (cf. Definition 4 of Chapter
4) for events that are subsets of the set of type profiles, T1 T2 . In
Assumption 1 we allow for the possibility that each player deems some
opponent types subjectively impossible, corresponding to an SCLP that
does not have full support along the type dimension. Therefore, certain
belief (meaning that the complement is subjectively impossible) can be

derived from the epistemic model and defined for events that are subsets
of T1 × T2. For any E ⊆ T1 × T2, say that player i certainly believes the event E at ti if ti ∈ proj_{Ti} Ki E, where
Ki E := {(t1, t2) ∈ T1 × T2 | proj_{T1×T2} β^{ti} = {ti} × Tj^{ti} ⊆ E}.
Say that there is mutual certain belief of E at (t1, t2) if (t1, t2) ∈ KE, where KE := K1E ∩ K2E. Say that there is common certain belief of E at (t1, t2) if (t1, t2) ∈ CKE, where CKE := KE ∩ KKE ∩ KKKE ∩ · · · .
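On a finite type space these operators can be evaluated by brute force; since the iterates of K are eventually constant (see Proposition 20 below), common certain belief is a fixed point of the iteration. The sketch below is a toy illustration (the type names and the encoding of Tj^{ti} are made-up assumptions, not from the book).

```python
from itertools import product

# Toy specification: Tj_poss[i][ti] encodes T_j^{t_i}, the opponent types that
# player i's type ti deems subjectively possible.
T = [("t1a", "t1b"), ("t2a", "t2b")]
Tj_poss = {0: {"t1a": {"t2a"}, "t1b": {"t2a", "t2b"}},
           1: {"t2a": {"t1a"}, "t2b": {"t1b"}}}

def K_i(i, E):
    """(t1, t2) is in K_i E iff {t_i} x T_j^{t_i} is contained in E."""
    result = set()
    for t1, t2 in product(*T):
        ti = (t1, t2)[i]
        block = {(t1, tj) if i == 0 else (tj, t2) for tj in Tj_poss[i][ti]}
        if block <= E:
            result.add((t1, t2))
    return result

def CK(E):
    """Common certain belief: iterate KE := K_1 E ∩ K_2 E until a fixed point."""
    current = K_i(0, E) & K_i(1, E)
    while True:
        nxt = K_i(0, current) & K_i(1, current)
        if nxt == current:
            return current
        current = nxt

E = {("t1a", "t2a"), ("t1b", "t2b"), ("t1a", "t2b")}
print(CK(E))  # {('t1a', 't2a')} for this toy specification
```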
As established in Proposition 14, Ki corresponds to a KD45 system. Moreover, the mutual certain belief operator, K, has the following properties, where we write K^0E := E, and for each g ≥ 1, K^gE := KK^{g−1}E.

Proposition 20 (i) For any E ⊆ T1 × T2 and all g > 1, K^gE ⊆ K^{g−1}E. If E = E^1 ∩ E^2, where, for each i, E^i = proj_{Ti}E^i × Tj, then KE ⊆ E.
(ii) For any E ⊆ T1 × T2, there exists g' ≥ 0 such that K^gE = CKE for g ≥ g', implying that CKE = KCKE.

Proof. Part (i). If E = E^1 ∩ E^2, where, for each i, E^i = proj_{Ti}E^i × Tj, then KE = K1E ∩ K2E ⊆ K1E^1 ∩ K2E^2 = E^1 ∩ E^2 = E, establishing the second half of part (i).
Since, for any E ⊆ T1 × T2, KE = K1E ∩ K2E, where, for each i, KiE = proj_{Ti}KiE × Tj, the first half of part (i) follows from the result of the second half.
Part (ii) is a consequence of part (i) and T1 × T2 being finite.

5.2

Consistency of preferences

In the present section we define the event of consistency of preferences


and show how this event can be used to provide characterizations of
mixed-strategy Nash equilibrium and mixed rationalizable strategies.
Inducing rational choice. In line with the discussion in Section 1.1, and following a tradition from Harsanyi (1973) to Blume et al. (1991b), a mixed strategy will be interpreted, not as an object of choice, but as an expression of the beliefs of the other player. Say that the mixed strategy pj^{ti|tj} is induced for tj by ti if tj ∈ Tj^{ti} and, for all sj ∈ Sj,
pj^{ti|tj}(sj) = λℓ^{ti}(sj, tj) / λℓ^{ti}(Sj, tj),
where λℓ^{ti}(Sj, tj) := ∑_{sj ∈ Sj} λℓ^{ti}(sj, tj), and where ℓ denotes the first level ℓ of λ^{ti} for which λℓ^{ti}(Sj, tj) > 0. Furthermore, define the set of type profiles for which ti induces a rational mixed strategy for any subjectively possible opponent type:
[iri] := {(t1, t2) ∈ T1 × T2 | ∀t'_j ∈ Tj^{ti}, pj^{ti|t'_j} ∈ Δ(Sj^{t'_j})}.
Write [ir] := [ir1] ∩ [ir2].
Say that at ti player i's preferences over his strategies are consistent with the game G = (S1, S2, u1, u2) and the preferences of his opponent, if ti ∈ proj_{Ti}([ui] ∩ [iri]). Refer to [u] ∩ [ir] as the event of consistency.
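The induced-strategy formula is easy to compute mechanically. The following sketch (illustrative numbers, not taken from the book) finds the first level of an LPS that puts positive mass on the opponent type in question and normalizes it.

```python
def induced_strategy(lps, S_j, t_j):
    """lps: list of dicts, lps[k][(s_j, t_j)] is the level-(k+1) probability of (s_j, t_j);
    returns p_j^{t_i|t_j}, using the first level with positive mass on S_j x {t_j}."""
    for level in lps:
        mass = sum(level.get((s_j, t_j), 0.0) for s_j in S_j)
        if mass > 0:
            return {s_j: level.get((s_j, t_j), 0.0) / mass for s_j in S_j}
    raise ValueError("t_j receives no mass at any level, i.e. it is deemed subjectively impossible")

# Toy LPS over S_j = {'L', 'R'} and two opponent types: the primary level puts
# all mass on ('L', 't'); only the secondary level touches t_prime.
lps = [{('L', 't'): 1.0},
       {('L', 't'): 0.25, ('R', 't'): 0.25, ('L', 't_prime'): 0.3, ('R', 't_prime'): 0.2}]
print(induced_strategy(lps, ['L', 'R'], 't'))        # level 1 is used: {'L': 1.0, 'R': 0.0}
print(induced_strategy(lps, ['L', 'R'], 't_prime'))  # level 2 is used: {'L': 0.6, 'R': 0.4}
```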
Characterizing Nash equilibrium. In line with the discussion in
Section 1.1, we now characterize the concept of a mixed-strategy Nash
equilibrium as profiles of induced mixed strategies at a type profile in
[u] [ir] where there is mutual certain belief of the type profile (i.e., for
each player, only the true opponent type is deemed subjectively possible). Before doing so, we define a mixed-strategy Nash equilibrium.

Definition 10 Let G = (S1, S2, u1, u2) be a finite strategic two-player game. A mixed strategy profile p = (p1, p2) is a mixed-strategy Nash equilibrium if, for each i,
ui(pi, pj) = max_{p'_i} ui(p'_i, pj).
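By linearity of expected payoffs, the maximum in Definition 10 only needs to be checked against pure deviations. A minimal verification sketch (the matching-pennies payoffs are an illustrative assumption):

```python
import numpy as np

def is_nash(p1, p2, U1, U2, tol=1e-9):
    """U1[s1, s2] = u_1(s1, s2), U2[s1, s2] = u_2(s1, s2); p1, p2 probability vectors."""
    v1 = p1 @ U1 @ p2          # u_1(p_1, p_2)
    v2 = p1 @ U2 @ p2          # u_2(p_1, p_2)
    best1 = np.max(U1 @ p2)    # best pure-deviation payoff for player 1
    best2 = np.max(p1 @ U2)    # best pure-deviation payoff for player 2
    return v1 >= best1 - tol and v2 >= best2 - tol

# Matching pennies: the unique Nash equilibrium is (1/2, 1/2) for both players.
U1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
U2 = -U1
print(is_nash(np.array([0.5, 0.5]), np.array([0.5, 0.5]), U1, U2))  # True
print(is_nash(np.array([1.0, 0.0]), np.array([0.5, 0.5]), U1, U2))  # False: player 2 can deviate
```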

The characterization result, which is a variant of Proposition 3 of Blume et al. (1991b), can now be stated.

Proposition 21 Consider a finite strategic two-player game G. A profile of mixed strategies p = (p1, p2) is a mixed-strategy Nash equilibrium if and only if there exists an epistemic model with (t1, t2) ∈ [u] ∩ [ir] such that (1) there is mutual certain belief of {(t1, t2)} at (t1, t2), and (2) for each i, pi is induced for ti by tj.
Proof. (Only if.) Let (p1 , p2 ) be a mixed-strategy Nash equilibrium.
Construct the following epistemic model. Let T1 = {t1 } and T2 = {t2 }.
Assume that, for each i,
iti satisfies that iti z = ui ,
the SCLP (ti , `ti ) has the properties that ti = (t1i , . . . , tLi ) with
support Sj {tj } satisfies that, sj Sj , t1i (sj , tj ) = pj (sj ), and `ti
satisfies that `(Sj {tj }) = 1.
Then, it is clear that (t1 , t2 ) [u], that there is mutual certain belief of
{(t1 , t2 )} at (t1 , t2 ), and that, for each i, pi is induced for ti by tj . It

remains to show that (t1 , t2 ) [ir], i.e., for each i, pi (Siti ). Since,
by Definition 10, it holds for each i that, s0i Si , ui (pi , pj ) ui (s0i , pj ),
it follows from the construction of (ti , `ti ) that pi (Siti ).
(If.) Suppose that there exists an epistemic model with (t1 , t2 )
[u] [ir] such that there is mutual certain belief of {(t1 , t2 )} at (t1 , t2 ),
and, for each i, pi is induced for ti by tj . Then, for each i, ti is
represented by iti satisfying that iti z is a positive affine transformation
of ui and an LPS t`i = (t1i , . . . , t`i ), where sj Sj , t1i (sj , tj ) = pj (sj ),
and where ` = `(Sj Tj ) 1. Suppose, for some i and p0i (Si ),
ui (pi , pj ) < ui (p0i , pj ). Then there is some si Si with pi (si ) > 0 and
some s0i Si such that ui (si , pj ) < ui (s0i , pj ), or equivalently
∑_{sj} λ1^{ti}(sj, tj) ui(si, sj) < ∑_{sj} λ1^{ti}(sj, tj) ui(s'_i, sj).

This means that si ∉ Si^{ti}, which, since pi(si) > 0, contradicts (t1, t2) ∈ [irj]. Hence, by Definition 10, (p1, p2) is a Nash equilibrium.
For the if part of Proposition 21, it is sufficient that there is mutual
certain belief of the beliefs that each player has about the strategy choice.
We do not need the stronger condition that (1) entails. Hence, higher
order certain belief plays no role in the characterization, in line with the
fundamental insights of Aumann and Brandenburger (1995).
Characterizing rationalizability. We now turn to the analysis of deductive reasoning in games and present a characterization of (ordinary) rationalizability. Since we are only concerned with two-player games, there is no difference between rationalizability, as defined by Bernheim (1984) and Pearce (1984), and correlated rationalizability, where conjectures are allowed to be correlated. Since rationalizability in two-player games thus corresponds to IESDS, we use the latter procedure as the primitive definition. For any (∅ ≠) X = X1 × X2 ⊆ S1 × S2, write c(X) := c1(X2) × c2(X1), where
ci(Xj) := Si \ {si ∈ Si | ∃pi ∈ Δ(Si) s.t. pi strongly dominates si on Xj}.

Definition 11 Let G = (S1, S2, u1, u2) be a finite strategic two-player game. Consider the sequence defined by X(0) = S1 × S2 and, ∀g ≥ 1, X(g) = c(X(g−1)). A pure strategy si is said to be rationalizable if
si ∈ Ri := ⋂_{g=0}^{∞} Xi(g).
A mixed strategy pi is said to be rationalizable if pi is not strongly dominated on Rj.
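A brute-force sketch of the iteration in Definition 11 (illustrative, not the book's own code): at each round a pure strategy is removed if some mixed strategy strongly dominates it on the surviving opponent set. The domination test is the linear program behind Lemma 4, solved here with scipy.optimize.linprog (assuming numpy and scipy are available).

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, si, cols):
    """Is row strategy si strongly dominated on the column set cols by some mixed strategy?
    Solves: maximize eps s.t. p . U[:, c] >= U[si, c] + eps for all c, p in the simplex."""
    cols = list(cols)
    n, m = U.shape[0], len(cols)
    c = np.zeros(n + 1); c[-1] = -1.0                       # minimize -eps
    A_ub = np.hstack([-U[:, cols].T, np.ones((m, 1))])
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # p sums to one
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=-U[si, cols], A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.success and -res.fun > 1e-9                  # eps > 0

def c_i(U, Xj):
    """The operator c_i of Definition 11: strategies of S_i not strongly dominated on Xj."""
    return {s for s in range(U.shape[0]) if not strictly_dominated(U, s, Xj)}

def iesds(U1, U2):
    """Iterate X(g) = c(X(g-1)) to a fixed point; returns (R1, R2) as index sets."""
    X1, X2 = set(range(U1.shape[0])), set(range(U1.shape[1]))
    while True:
        new1, new2 = c_i(U1, X2), c_i(U2.T, X1)
        if (new1, new2) == (X1, X2):
            return X1, X2
        X1, X2 = new1, new2

# Illustrative 2x2 game: player 1's second row is strongly dominated; once it is
# gone, player 2's second column becomes strongly dominated.
U1 = np.array([[3.0, 2.0], [1.0, 0.0]])
U2 = np.array([[2.0, 1.0], [0.0, 3.0]])
print(iesds(U1, U2))  # ({0}, {0}) for this payoff specification
```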

While any pure strategy in the support of a rationalizable mixed strategy is itself rationalizable (due to what Pearce calls the pure strategy
property), the mixture on a set of rationalizable pure strategies need not
be rationalizable.
The following lemma is a straightforward implication of Definition 11.

Lemma 6 (i) For each i, Ri ≠ ∅. (ii) R = c(R). (iii) For each i, si ∈ Ri if and only if there exists X = X1 × X2 with si ∈ Xi such that X ⊆ c(X).
We next characterize the concept of rationalizable mixed strategies as
induced mixed strategies under common certain belief of [u] [ir].

Proposition 22 A mixed strategy pi for i is rationalizable in a finite strategic two-player game G if and only if there exists an epistemic model with (t1, t2) ∈ CK([u] ∩ [ir]) such that pi is induced for ti by tj.
Proof. Part 1: If pi is rationalizable, then there exists an epistemic model with (t1, t2) ∈ CK([u] ∩ [ir]) such that pi is induced for ti by tj.
Step 1: Construct an epistemic model with T1 × T2 ⊆ CK([u] ∩ [ir]) such that for each si ∈ Ri of any player i, there exists ti ∈ Ti with si ∈ Si^{ti}. Construct an epistemic model with, for each i, a bijection si : Ti → Ri from the set of types to the set of rationalizable pure strategies. Assume that, for each ti ∈ Ti of any player i, υi^{ti} satisfies that
(a) υi^{ti} ∘ z = ui (so that T1 × T2 ⊆ [u]),
and the SCLP (λ^{ti}, ℓ^{ti}) on Sj × Tj has the properties that
(b) λ^{ti} = (λ1^{ti}, . . . , λL^{ti}) with support Sj × Tj^{ti} satisfies that supp λ1^{ti} ∩ (Sj × {tj}) = {(sj(tj), tj)} for all tj ∈ Tj^{ti} (so that, ∀tj ∈ Tj^{ti}, pj^{ti|tj}(sj(tj)) = 1),
(c) ℓ^{ti} satisfies ℓ^{ti}(Sj × Tj) = 1.
Property (b) entails that the support of the marginal of λ1^{ti} on Sj is included in Rj. By properties (a) and (c) and Lemmas 4 and 6(ii), we can still choose λ1^{ti} (and Tj^{ti}) so that si(ti) ∈ Si^{ti}. This combined with property (b) means that T1 × T2 ⊆ [ir]. Furthermore, T1 × T2 ⊆ CK([u] ∩ [ir]) since Tj^{ti} ⊆ Tj for each ti ∈ Ti of any player i. Since, for each player i, si is onto Ri, it follows that, for each si ∈ Ri of any player i, there exists ti ∈ Ti with si ∈ Si^{ti}.

Step 2: Add type ti to Ti . Assume that iti satisfies (a) and (ti , `ti )

satisfies (b) and (c). Then 1ti can be chosen so that pi (Siti ).

Furthermore, (Ti {ti }) Tj [u] [ir], and since Tjti Tj , (Ti


{ti }) Tj CK([u] [ir]).

Step 3: Add type tj to Tj . Assume that jtj satisfies (a) and the SCLP

(tj , `tj ) on Si (Ti {ti }) has the property that tj = (1tj , . . . , Ltj )

with support Si {ti } satisfies that, si Si , 1tj (si , ti ) = pi (si ), so that


pi is induced for ti by tj . Furthermore, (Ti {ti })(Tj {tj }) [u][ir],

and since Titj Ti {ti }, (Ti {ti })(Tj {tj }) CK([u][ir]). Hence,
(t1 , t2 ) CK([u] [ir]) and pi is induced for ti by tj .
Part 2: If there exists an epistemic model with (t1, t2) ∈ CK([u] ∩ [ir]) such that pi is induced for ti by tj, then pi is rationalizable.
Assume that there exists an epistemic model with (t1, t2) ∈ CK([u] ∩ [ir]) such that pi is induced for ti by tj. In particular, CK([u] ∩ [ir]) ≠ ∅. Let, for each i, T'_i := proj_{Ti} CK([u] ∩ [ir]) and Xi := ⋃_{ti ∈ T'_i} Si^{ti}. By Proposition 20(ii), for each ti ∈ T'_i of any player i, ti deems (sj, tj) subjectively impossible if tj ∈ Tj\T'_j, since CK([u] ∩ [ir]) = KCK([u] ∩ [ir]) ⊆ Ki CK([u] ∩ [ir]), implying Tj^{ti} ⊆ T'_j. By the definitions of [u] and [ir], it follows that, for each ti ∈ T'_i of any player i, ≽^{ti} is represented by υi^{ti} satisfying that υi^{ti} ∘ z is a positive affine transformation of ui and an LPS (λ1^{ti}, . . . , λℓ^{ti}), where ℓ = ℓ^{ti}(Sj × Tj) ≥ 1, and where supp λ1^{ti} ⊆ Xj × Tj. Hence, by Lemma 4, for each ti ∈ T'_i of any player i, if pi ∈ Δ(Si^{ti}), then no strategy in the support of pi is strongly dominated on Xj, since it follows from pi ∈ Δ(Si^{ti}) and supp λ1^{ti} ⊆ Xj × Tj that, ∀si ∈ supp pi and ∀s'_i ∈ Si,
∑_{sj ∈ Xj} ∑_{tj ∈ Tj} λ1^{ti}(sj, tj) ui(si, sj) ≥ ∑_{sj ∈ Xj} ∑_{tj ∈ Tj} λ1^{ti}(sj, tj) ui(s'_i, sj).
This implies X ⊆ c(X), entailing by Lemma 6(iii) that, for each i, Xi ⊆ Ri. Furthermore, since (t1, t2) ∈ CK([u] ∩ [ir]) and the mixed strategy induced for ti by tj, pi, satisfies pi ∈ Δ(Si^{ti}), it follows that pi is not strongly dominated on Xj ⊆ Rj. By Definition 11 this implies that pi is a rationalizable mixed strategy.

5.3

Admissible consistency of preferences

We next refine the event of consistency of preferences and show how


this leads to characterizations of (strategic form) perfect equilibrium and
mixed permissible strategies.
Caution. Player i has preference for cautious behavior at ti if he
takes into account all opponent strategies for any opponent type that is
deemed subjectively possible.

Throughout this chapter, as well as Chapters 8, 9, and 10, we assume that Assumption 1 is satisfied, so that β^{ti} = {ti} × Sj × Tj^{ti}. Under Assumption 1 player i is cautious at ti if {≽^{ti}_φ | ∅ ≠ φ ⊆ β^{ti}} satisfies Axiom 6. Because then it follows from Proposition 2 that player i's unconditional preferences at ti, ≽^{ti}, are represented by υi^{ti} and an LPS λ^{ti} with support Sj × Tj^{ti}. Since thus (sj, tj) ∈ supp λ^{ti} for any (sj, tj) satisfying tj ∈ Tj^{ti}, player i at ti takes into account all opponent strategies for any opponent type that is deemed subjectively possible. Hence, under Assumption 1, we can define the event
[caui] := {(t1, t2) ∈ T1 × T2 | {≽^{ti}_φ | ∅ ≠ φ ⊆ β^{ti}} satisfies Axiom 6}.
In terms of the representation of the system of conditional preferences, {≽^{ti}_φ | φ ∈ Φ^{ti}}, by means of a vNM utility function and an SCLP (cf. Proposition 5), caution imposes the additional requirement that for each type ti of any player i the full LPS λ^{ti} is used to form the conditional beliefs over opponent strategy-type pairs. Formally, if L denotes the number of levels in the LPS λ^{ti}, then
[caui] = {(t1, t2) ∈ T1 × T2 | ℓ^{ti}(Sj × Tj) = L}.
Since ℓ^{ti} is non-increasing w.r.t. set inclusion, ti ∈ proj_{Ti}[caui] implies that ℓ^{ti}(proj_{Sj×Tj} φ) = L for all subsets φ of {ti} × Sj × Tj with well-defined conditional beliefs. Since it follows from Assumption 1 that λ^{ti} has full support on Sj, ti ∈ proj_{Ti}[caui] means that i's choice set at ti never admits a weakly dominated strategy, thereby inducing preference for cautious behavior.
Write [cau] := [cau1] ∩ [cau2].
Say that at ti player i's preferences over his strategies are admissibly consistent with the game G = (S1, S2, u1, u2) and the preferences of his opponent, if ti ∈ proj_{Ti}([ui] ∩ [iri] ∩ [caui]). Refer to [u] ∩ [ir] ∩ [cau] as the event of admissible consistency.
Characterizing perfect equilibrium. We now characterize the
concept of a strategic form (or trembling-hand) perfect equilibrium as
profiles of induced mixed strategies at a type profile in [u] [ir] [cau]
where there is mutual certain belief of the type profile (i.e., for each
player, only the true opponent type is deemed subjectively possible).
Before doing so, we define a (strategic form) perfect equilibrium.

Definition 12 Let G = (S1, S2, u1, u2) be a finite strategic two-player game. A mixed strategy profile p = (p1, p2) is a (strategic form) perfect equilibrium if there is a sequence (p(n))_{n∈ℕ} of completely mixed strategy profiles converging to p such that, for each i and every n ∈ ℕ,
ui(pi, pj(n)) = max_{p'_i} ui(p'_i, pj(n)).

The following holds in two-player games.

Lemma 7 Let G = (S1 , S2 , u1 , u2 ) be a finite strategic two-player game.


A mixed strategy profile p = (p1 , p2 ) is a (strategic form) perfect equilibrium if and only if p is a mixed-strategy Nash equilibrium and, for each
i, pi is not weakly dominated.
Proof. Proposition 248.2 of Osborne and Rubinstein (1994).
The characterization result, which is a variant of Proposition 4 of Blume et al. (1991b), can now be stated.

Proposition 23 Consider a finite strategic two-player game G. A profile of mixed strategies p = (p1, p2) is a (strategic form) perfect equilibrium if and only if there exists an epistemic model with (t1, t2) ∈ [u] ∩ [ir] ∩ [cau] such that (1) there is mutual certain belief of {(t1, t2)} at (t1, t2), and (2) for each i, pi is induced for ti by tj.
Proof. (Only if.) Let (p1, p2) be a (strategic form) perfect equilibrium. Then, by Lemma 7, (p1, p2) is a mixed-strategy Nash equilibrium and, for each i, pi is not weakly dominated. Construct the following epistemic model. Let T1 = {t1} and T2 = {t2}. Assume that, for each i,
υi^{ti} satisfies that υi^{ti} ∘ z = ui,
the SCLP (λ^{ti}, ℓ^{ti}) has the properties that λ^{ti} = (λ1^{ti}, λ2^{ti}) with support Sj × {tj} has two levels, with the first level chosen so that, ∀sj ∈ Sj, λ1^{ti}(sj, tj) = pj(sj), and the second level chosen so that supp λ2^{ti} = Sj × {tj} and, ∀s'_i ∈ Si,
∑_{sj} λ2^{ti}(sj, tj) ui(pi, sj) ≥ ∑_{sj} λ2^{ti}(sj, tj) ui(s'_i, sj)
(which is possible by Lemma 5 since pi is not weakly dominated), and ℓ^{ti} satisfies that ℓ^{ti}(Sj × Tj) = 2.
Then, it is clear that (t1, t2) ∈ [u] ∩ [cau], that there is mutual certain belief of {(t1, t2)} at (t1, t2), and that, for each i, pi is induced for ti by tj. It remains to show that (t1, t2) ∈ [ir], i.e., for each i, pi ∈ Δ(Si^{ti}). Since, by Lemma 7, it holds for each i that, ∀s'_i ∈ Si, ui(pi, pj) ≥ ui(s'_i, pj), it follows from the construction of (λ^{ti}, ℓ^{ti}) that pi ∈ Δ(Si^{ti}).

(If.) Suppose that there exists an epistemic model with (t1, t2) ∈ [u] ∩ [ir] ∩ [cau] such that there is mutual certain belief of {(t1, t2)} at (t1, t2), and, for each i, pi is induced for ti by tj. Then, for each i, ≽^{ti} is represented by υi^{ti} satisfying that υi^{ti} ∘ z is a positive affine transformation of ui and an LPS λ^{ti} = (λ1^{ti}, . . . , λL^{ti}), where ∀sj ∈ Sj, λ1^{ti}(sj, tj) = pj(sj), and where supp λ^{ti} = Sj × {tj}. Suppose first that (p1, p2) is not a Nash equilibrium; i.e., for some i and p'_i ∈ Δ(Si), ui(pi, pj) < ui(p'_i, pj). Then there is some si ∈ Si with pi(si) > 0 and some s'_i ∈ Si such that ui(si, pj) < ui(s'_i, pj), or equivalently
∑_{sj} λ1^{ti}(sj, tj) ui(si, sj) < ∑_{sj} λ1^{ti}(sj, tj) ui(s'_i, sj).
This means that si ∉ Si^{ti}, which, since pi(si) > 0, contradicts (t1, t2) ∈ [irj]. Suppose next that, for some i, pi is weakly dominated. Since supp λ^{ti} = Sj × {tj}, this also implies that si ∉ Si^{ti} for some si ∈ Si with pi(si) > 0, again contradicting (t1, t2) ∈ [irj]. Hence, by Lemma 7, (p1, p2) is a (strategic form) perfect equilibrium.
As for Proposition 21, higher order certain belief plays no role in this
characterization.
Characterizing permissibility. We now turn to the non-equilibrium analog to (strategic form) perfect equilibrium, namely the concept of permissibility; cf. Börgers (1994) as well as Brandenburger (1992), who coined the term permissibility. To define the concept of permissible strategies, we use the equivalent Dekel-Fudenberg procedure as the primitive definition. For any (∅ ≠) X = X1 × X2 ⊆ S1 × S2, write ā(X) := ā1(X2) × ā2(X1), where
āi(Xj) := Si \ {si ∈ Si | ∃pi ∈ Δ(Si) s.t. pi strongly dominates si on Xj or pi weakly dominates si on Sj}.

Definition 13 Let G = (S1, S2, u1, u2) be a finite strategic two-player game. Consider the sequence defined by X(0) = S1 × S2 and, ∀g ≥ 1, X(g) = ā(X(g−1)). A pure strategy si is said to be permissible if
si ∈ Pi := ⋂_{g=0}^{∞} Xi(g).
A mixed strategy pi is said to be permissible if pi is not strongly dominated on Pj and not weakly dominated on Sj.
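A sketch of the Dekel-Fudenberg procedure behind Definition 13 (illustrative, assuming numpy and scipy are available): one round of elimination of weakly dominated strategies, followed by iterated elimination of strongly dominated strategies, with both dominance tests posed as the linear programs behind Lemmas 4 and 5.

```python
import numpy as np
from scipy.optimize import linprog

def strongly_dominated(U, si, cols):
    """Lemma 4 test: is row si strongly dominated on the column set cols?"""
    cols = list(cols); n, m = U.shape[0], len(cols)
    c = np.zeros(n + 1); c[-1] = -1.0                        # maximize a common margin eps
    A_ub = np.hstack([-U[:, cols].T, np.ones((m, 1))])        # p.U[:,c] >= U[si,c] + eps
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=-U[si, cols], A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.success and -res.fun > 1e-9

def weakly_dominated(U, si, cols):
    """Lemma 5 test: is row si weakly dominated on the column set cols?"""
    cols = list(cols); n, m = U.shape[0], len(cols)
    c = np.concatenate([np.zeros(n), -np.ones(m)])            # maximize the total slack
    A_ub = np.hstack([-U[:, cols].T, np.eye(m)])               # p.U[:,c] - U[si,c] >= slack_c >= 0
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, m))])
    bounds = [(0, None)] * (n + m)
    res = linprog(c, A_ub=A_ub, b_ub=-U[si, cols], A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.success and -res.fun > 1e-9

def dekel_fudenberg(U1, U2):
    """One round of weak elimination on the full game, then iterated strong elimination."""
    n1, n2 = U1.shape
    X1 = {s for s in range(n1) if not weakly_dominated(U1, s, range(n2))}
    X2 = {s for s in range(n2) if not weakly_dominated(U2.T, s, range(n1))}
    while True:
        new1 = {s for s in X1 if not strongly_dominated(U1, s, X2)}
        new2 = {s for s in X2 if not strongly_dominated(U2.T, s, X1)}
        if (new1, new2) == (X1, X2):
            return X1, X2
        X1, X2 = new1, new2

# Illustrative 3x2 game in which player 1's second row is weakly dominated.
U1 = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 2.0]])
U2 = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(dekel_fudenberg(U1, U2))  # ({0, 2}, {0, 1}) for this payoff specification
```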
While any pure strategy in the support of a permissible mixed strategy
is itself permissible, the mixture over a set of permissible pure strategies
need not be permissible.

The following lemma is a straightforward implication of Definition 13.

Lemma 8 (i) For each i, Pi ≠ ∅. (ii) P = ā(P). (iii) For each i, si ∈ Pi if and only if there exists X = X1 × X2 with si ∈ Xi such that X ⊆ ā(X).
We next characterize the concept of permissible mixed strategies as
induced mixed strategies under common certain belief of [u][ir][cau].

Proposition 24 A mixed strategy pi for i is permissible in a finite strategic two-player game G if and only if there exists an epistemic model with (t1, t2) ∈ CK([u] ∩ [ir] ∩ [cau]) such that pi is induced for ti by tj.
Proof. Part 1: If pi is permissible, then there exists an epistemic
model with (t1 , t2 ) CK([u] [ir] [cau]) such that pi is induced for ti
by tj .
Step 1: Construct an epistemic model with T1 T2 CK([u] [ir]
[cau]) such that for each si Pi of any player i, there exists ti Ti with,
si Siti . Construct an epistemic model with, for each i, a bijection
si : Ti Pi from the set of types to the the set of permissible pure
strategies. Assume that, for each ti Ti of any player i, iti satisfies that
(a) iti z = ui (so that T1 T2 [u]),
and the SCLP (ti , `ti ) on Sj Tj has the properties that
(b) ti = (t1i , t2i ) with support Sj Tjti has two levels and satisfies
that suppt11 (Sj {tj }) = {(sj (tj ), tj )} for all tj Tjti (so that,
tj Tjti , piti |tj (sj (tj )) = 1),
(c) `ti satisfies `ti (Sj Tj ) = 2 (so that T1 T2 [cau]).
Property (b) entails that the support of the marginal of t1i on Sj is
included in Pj . By properties (a) and (c) and Lemmas 4, 5 and 8(ii),
we can still choose t1i and t2i (and Ti ti ) so that si (ti ) Siti . This
combined with property (b) means that T1 T2 [ir]. Furthermore,
T1 T2 CK([u] [ir] [cau]) since Tjti Tj for each ti Ti of any
player i. Since, for each player i, si is onto Pi , it follows that, for each
si Pi of any player i, there exists ti Ti with si Siti .

Step 2: Add type ti to Ti . Assume that iti satisfies (a) and (ti , `ti )

satisfies (b) and (c). Then 1ti and 2ti can be chosen so that pi

(Siti ). Furthermore, (Ti {ti }) Tj [u] [ir] [cau], and since

Tjti Tj , (Ti {ti }) Tj CK([u] [ir] [cau]).

Step 3: Add type tj to Tj . Assume that jtj satisfies (a) and the SCLP

(tj , `tj ) on Si (Ti {ti }) has the property that tj = (1tj , . . . , Ltj )

with support Si {ti } satisfies that, si Si , 1tj (si , ti ) = pi (si ), so

that pi is induced for ti by tj , and `tj satisfies that `tj (Ti {ti }) = L.
Furthermore, (Ti {ti }) (Tj {tj }) [u] [ir] [cau], and since

Titj Ti {ti }, (Ti {ti }) (Tj {tj }) CK([u] [ir] [cau]). Hence,
(t1 , t2 ) CK([u] [ir] [cau]) and pi is induced for ti by tj .
Part 2: If there exists an epistemic model with (t1 , t2 ) CK([u]
[ir] [cau]) such that pi is induced for ti by tj , then pi is permissible.
Assume that there exists an epistemic model with (t1 , t2 ) CK([u]
[ir] [cau]) such that pi is induced for ti by tj . In particular, CK([u]
[ir] [cau])
6= . Let, for each i, Ti0 := projTi CK([u] [ir] [cau]) and
S
Xi := ti T i0 Siti . By Proposition 20(ii), for each ti Ti0 of any player
i, ti deems (sj , tj ) subjectively impossible if tj Tj \Tj0 since CK([u]
[ir] [cau]) = KCK([u] [ir] [cau]) Ki CK([u] [ir] [cau]), implying
Tjti Tj0 . By the definitions of [u], [ir], and [cau], it follows that, for
each ti Ti0 of any player i, ti is represented by iti satisfying that iti z
is a positive affine transformation of ui and an LPS ti = (t1i , . . . , tLi ),
and where suppt1i Xj Tj , and where suppti = Sj Tjti . Hence,
by Lemma 4, for each ti Ti0 of any player i, if pi (Siti ), then no
strategy in the support of pi is strongly dominated on Xj , since it follows
from pi ∈ Δ(Si^{ti}) and supp λ1^{ti} ⊆ Xj × Tj that, ∀si ∈ supp pi and ∀s'_i ∈ Si,
∑_{sj ∈ Xj} ∑_{tj ∈ Tj} λ1^{ti}(sj, tj) ui(si, sj) ≥ ∑_{sj ∈ Xj} ∑_{tj ∈ Tj} λ1^{ti}(sj, tj) ui(s'_i, sj).

Furthermore, since the projection of λ^{ti} on Sj has full support, no strategy in the support of pi is weakly dominated on Sj. This implies X ⊆ ā(X), entailing by Lemma 8(iii) that, for each i, Xi ⊆ Pi. Finally, since (t1, t2) ∈ CK([u] ∩ [ir] ∩ [cau]) and the mixed strategy induced for ti by tj, pi, satisfies pi ∈ Δ(Si^{ti}), it follows that pi is not strongly dominated on Xj ⊆ Pj and pi is not weakly dominated on Sj. By Definition 13 this implies that pi is a permissible mixed strategy.

Chapter 6
RELAXING COMPLETENESS

In the previous chapter, we have presented epistemic characterizations of rationalizability and permissibility. For these non-equilibrium deductive concepts, we have used, respectively, IESDS and the Dekel-Fudenberg procedure (one round of weak elimination followed by iterated strong domination) as the primitive definitions. Neither of these procedures relies on players having subjective probabilities over the strategy choice of the opponent. In contrast, the epistemic characterizations, by relying on Assumption 1, require that players have complete preferences that are representable by means of subjective probabilities.
In this chapter we show how rationalizability and permissibility can be
epistemically characterized without requiring that players have complete
preferences that are representable by means of subjective probabilities.
The resulting structure will also be used for the epistemic analysis of
backward induction in Chapter 7 and forward induction in Chapter 11.
Hence, even though the results of the present chapter may have limited
interest in their own right, they set the stage for later analysis.

6.1

Epistemic modeling of strategic games (cont.)

The purpose of this section is to present a framework for strategic


games where each player is modeled as a decision maker under uncertainty with preferences that are allowed to be incomplete.
An epistemic model. Consider an epistemic model for a finite strategic game form (S1, S2, z) as formalized in Definition 9, with a finite type set Ti for each player i, and where the preferences of a player correspond to the player's type. Hence, for each type ti of any player i,
{ti} × Sj × Tj is the set of states that are indistinguishable for player i at ti,
a non-empty subset of {ti} × Sj × Tj, β^{ti}, is the set of states that player i deems subjectively possible at ti, and
ti corresponds to a system of conditional preferences on the collection of sets of acts from subsets of Ti × Sj × Tj whose intersection with β^{ti} is non-empty to Δ(Z).
However, instead of Assumption 1, impose the following assumption, where Φ^{ti} still denotes {φ ⊆ Ti × Sj × Tj | φ ∩ β^{ti} ≠ ∅}.

Assumption 2 For each ti of any player i, (a) ≽^{ti}_φ satisfies Axioms 1′, 2, and 4′ if ∅ ≠ φ ⊆ Ti × Sj × Tj, and Axiom 3 if and only if φ ∩ β^{ti} ≠ ∅, and (b) the system of conditional preferences {≽^{ti}_φ | φ ∈ Φ^{ti}} satisfies Axioms 5, 6, and 11.
As before, write ≽^{ti} for player i's unconditional preferences at ti; i.e., for ≽^{ti}_φ when φ = {ti} × Sj × Tj. W.l.o.g. we may consider ≽^{ti} to be preferences over acts from Sj × Tj to Δ(Z) (instead of acts from {ti} × Sj × Tj to Δ(Z)). Under Assumption 2 it follows from Proposition 4 that, for each ti of any player i, i's unconditional preferences at ti can be conditionally represented by a vNM utility function υi^{ti} : Δ(Z) → ℝ.
Conditional representation implies that strong and weak dominance are well-defined: Let Ej ⊆ Sj × Tj. Say that one act p_{Ej} strongly dominates another act q_{Ej} at ti if,
∀(sj, tj) ∈ Ej, υi^{ti}(p_{Ej}(sj, tj)) > υi^{ti}(q_{Ej}(sj, tj)).
Say that p_{Ej} weakly dominates q_{Ej} at ti if,
∀(sj, tj) ∈ Ej, υi^{ti}(p_{Ej}(sj, tj)) ≥ υi^{ti}(q_{Ej}(sj, tj)),
with strict inequality for some (s'_j, t'_j) ∈ Ej. Say that ≽^{ti} is admissible on {ti} × Ej if Ej is non-empty and p ≻^{ti} q whenever p_{Ej} weakly dominates q_{Ej} at ti. Assumption 2 entails that ≽^{ti} is admissible on β^{ti}. Indeed, as shown in Section 4.1, there exists a vector of nested sets, (β1^{ti}, . . . , βL^{ti}), on which ≽^{ti} is admissible, satisfying:
∅ ≠ β1^{ti} ⊂ · · · ⊂ βℓ^{ti} ⊂ · · · ⊂ βL^{ti} = β^{ti} ⊆ {ti} × Sj × Tj
(where ⊂ denotes ⊆ and ≠).
Preferences over strategies. It follows from the above assumptions that, for each type ti of any player i, player i's unconditional preferences at ti, ≽^{ti}, is a reflexive and transitive binary relation on acts from Sj × Tj to Δ(Z) that is conditionally represented by a vNM utility function υi^{ti}. Since each mixed strategy pi ∈ Δ(Si) is a function that assigns the randomized outcome z(pi, sj) to any (sj, tj) ∈ Sj × Tj and is thus an act from Sj × Tj to Δ(Z), we have that ≽^{ti} determines reflexive and transitive preferences on i's set of mixed strategies, Δ(Si).
Player i's choice set at ti, Si^{ti}, is player i's set of maximal pure strategies at ti:
Si^{ti} := {si ∈ Si | ∄pi ∈ Δ(Si), pi ≻^{ti} si}.
Hence, a pure strategy, si, is in i's choice set at ti if there is no mixed strategy that is strictly preferred to si given i's (possibly incomplete) preferences at ti. If i's preferences at ti are complete, then "∄pi ∈ Δ(Si), pi ≻^{ti} si" is equivalent to "∀s'_i ∈ Si, si ≽^{ti} s'_i", and the definition of Si^{ti} coincides with the one given in Section 5.1.
Since ≽^{ti} is reflexive and transitive and satisfies objective independence, and Si is finite, it follows that the choice set Si^{ti} is non-empty and supports any maximal mixed strategies: If qi ∈ Δ(Si) and ∄pi ∈ Δ(Si) such that pi ≻^{ti} qi, then qi ∈ Δ(Si^{ti}).
On the other hand, with incomplete preferences, it is not the case that all mixed strategies in Δ(Si^{ti}) are maximal. We may have that ∃pi ∈ Δ(Si) such that pi ≻^{ti} qi even though qi ∈ Δ(Si^{ti}). As an illustration, consider the case where ≽^{ti} is defined by
p ≻^{ti} q if and only if p_{proj_{Sj×Tj} β^{ti}} weakly dominates q_{proj_{Sj×Tj} β^{ti}} at ti,
and p ∼^{ti} q if and only if υi^{ti}(p(sj, tj)) = υi^{ti}(q(sj, tj)) for all (sj, tj) ∈ proj_{Sj×Tj} β^{ti}. Since a mixed strategy qi may be weakly dominated by a pure strategy si that does not weakly dominate any pure strategy in the support of qi, this illustrates the possibility that a non-maximal mixed strategy qi is supported by maximal pure strategies.
The event that player i is rational is defined by
[rati] := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | si ∈ Si^{ti}}.
A strategic game. As before, G = (S1, S2, u1, u2) denotes a finite strategic two-player game, where S = S1 × S2 is the set of strategy profiles and, for each i, ui : S → ℝ is a vNM utility function that assigns payoff to any strategy profile. Assume that, for each i, there exist s = (s1, s2), s' = (s'_1, s'_2) ∈ S such that ui(s) > ui(s'). As in Chapter 5, but transferred to S1 × T1 × S2 × T2 space, the event that i plays the game G is given by
[ui] := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | υi^{ti} ∘ z is a positive affine transformation of ui},
while [u] := [u1] ∩ [u2] is the event that both players play G.
Belief operators. Since Assumption 2 is compatible with the framework of Chapter 4, we can in line with Section 4.2 define belief operators as follows. For these definitions, say that E ⊆ S1 × T1 × S2 × T2 does not concern player i's strategy choice if E = Si × proj_{Ti×Sj×Tj} E.
If E does not concern player i's strategy choice, say that player i certainly believes the event E at ti if ti ∈ proj_{Ti} Ki E, where
Ki E := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | β^{ti} ⊆ proj_{Ti×Sj×Tj} E}.
If E does not concern the strategy choice of either player, say that there is mutual certain belief of E at (t1, t2) if (t1, t2) ∈ proj_{T1×T2} KE, where KE := K1E ∩ K2E. If E does not concern the strategy choice of either player, say that there is common certain belief of E at (t1, t2) if (t1, t2) ∈ proj_{T1×T2} CKE, where CKE := KE ∩ KKE ∩ KKKE ∩ · · · .
If E does not concern player i's strategy choice, say that player i (unconditionally) believes the event E at ti if ti ∈ proj_{Ti} Bi E, where
Bi E := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | β1^{ti} ⊆ proj_{Ti×Sj×Tj} E},
and where β1^{ti} denotes the smallest set on which ≽^{ti} is admissible. If E does not concern the strategy choice of either player, say that there is mutual belief of E at (t1, t2) if (t1, t2) ∈ proj_{T1×T2} BE, where BE := B1E ∩ B2E. If E does not concern the strategy choice of either player, say that there is common belief of E at (t1, t2) if (t1, t2) ∈ proj_{T1×T2} CBE, where CBE := BE ∩ BBE ∩ BBBE ∩ · · · .
As established in Proposition 14, Ki and Bi correspond to KD45 systems. Moreover, the mutual certain belief and mutual belief operators, K and B, have the following properties, where we write K^0E := E and B^0E := E, and for each g ≥ 1, K^gE := KK^{g−1}E and B^gE := BB^{g−1}E.

Proposition 25 (i) For any event E that does not concern the strategy choice of either player and all g > 1, K^gE ⊆ K^{g−1}E and B^gE ⊆ B^{g−1}E. If E = E^1 ∩ E^2, where, for each i, E^i = Si × proj_{Ti}E^i × Sj × Tj, then KE ⊆ E and BE ⊆ E.
(ii) For any event E that does not concern the strategy choice of either player, there exist g' and g'' ≥ 0 such that K^gE = CKE for g ≥ g' and B^gE = CBE for g ≥ g'', implying that CKE = KCKE and CBE = BCBE.

Proof. See the proof of Proposition 20.

6.2

Consistency of preferences (cont.)

In the present section we define the event of consistency of preferences


in the case described by Assumption 2, where preferences need not be
complete, and use this event to characterize the concept of rationalizable
pure strategies.
Belief of opponent rationality. In the context of the present chapter, define as follows the event that player i's preferences over his strategies are consistent with the game G = (S1, S2, u1, u2) and the preferences of his opponent:
Ci := [ui] ∩ Bi[ratj].
Write C := C1 ∩ C2 for the event of consistency.
Characterizing rationalizability. We now characterize the concept of rationalizable pure strategies (cf. Definition 11 of Chapter 5) as
maximal pure strategies under common certain belief of consistency.

Proposition 26 A pure strategy si for i is rationalizable in a finite strategic two-player game G if and only if there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKC.
To prove Proposition 26, it is helpful to establish a variant of Lemma 6. Write, for any (∅ ≠) X = X1 × X2 ⊆ S1 × S2, c̃(X) := c̃1(X2) × c̃2(X1), where
c̃i(Xj) := {si ∈ Si | ∃(∅ ≠) Yj ⊆ Xj such that, ∀pi ∈ Δ(Si), pi does not weakly dominate si on Yj}.

Lemma 9 (i) R = c̃(R). (ii) For each i, si ∈ Ri if and only if there exists X = X1 × X2 with si ∈ Xi such that X ⊆ c̃(X).
Proof. In view of Lemma 6, it is sufficient to show that, for any (∅ ≠) Xj ⊆ Sj, ci(Xj) = c̃i(Xj).
Part 1: c̃i(Xj) ⊆ ci(Xj). If si ∉ ci(Xj), then ∃pi ∈ Δ(Si) s.t. pi strongly dominates si on Xj. From this it follows that, ∀(∅ ≠) Yj ⊆ Xj, ∃pi ∈ Δ(Si) s.t. pi weakly dominates si on Yj, implying that si ∉ c̃i(Xj).
Part 2: ci(Xj) ⊆ c̃i(Xj). If si ∈ ci(Xj), then there does not exist pi ∈ Δ(Si) s.t. pi strongly dominates si on Xj. Hence, by Lemma 4, there exists a subjective probability distribution µ ∈ Δ(Sj) with supp µ ⊆ Xj such that si is maximal in Δ(Si) w.r.t. the preferences represented by the vNM utility function ui and the subjective probability distribution µ. Then there does not exist pi ∈ Δ(Si) s.t. pi weakly dominates si on supp µ (⊆ Xj), implying that si ∈ c̃i(Xj).
Proof of Proposition 26. Part 1: If si is rationalizable, then there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKC.
It is sufficient to construct a belief system with S1 × T1 × S2 × T2 ⊆ CKC such that, for each si ∈ Ri of any player i, there exists ti ∈ Ti with si ∈ Si^{ti}. Construct a belief system with, for each i, a bijection si : Ti → Ri from the set of types to the set of rationalizable pure strategies. By Lemma 9(i) we have that, for each ti ∈ Ti of any player i, there exists Yj^{ti} ⊆ Rj such that there does not exist pi ∈ Δ(Si) such that pi weakly dominates si(ti) on Yj^{ti}. Determine the set of opponent types that ti deems subjectively possible as follows: Tj^{ti} = {tj ∈ Tj | sj(tj) ∈ Yj^{ti}}. Let, for each ti ∈ Ti of any player i, ≽^{ti} satisfy
1. υi^{ti} ∘ z = ui (so that S1 × T1 × S2 × T2 ⊆ [u]), and
2. p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Ej^{ti} := {(sj, tj) | sj = sj(tj) and tj ∈ Tj^{ti}}, which implies that β1^{ti} = β^{ti} = {ti} × Ej^{ti}.
By the construction of Ej^{ti}, this means that Si^{ti} ∋ si(ti) since, for any acts p and q on Sj × Tj satisfying that there exist mixed strategies pi, qi ∈ Δ(Si) such that, ∀(sj, tj) ∈ Sj × Tj, p(sj, tj) = z(pi, sj) and q(sj, tj) = z(qi, sj), p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Yj^{ti} × Tj. This in turn implies, for each ti ∈ Ti of any player i,
3. β^{ti} ⊆ proj_{Ti×Sj×Tj}[ratj] (so that S1 × T1 × S2 × T2 ⊆ Bi[ratj] ∩ Bj[rati]).
Furthermore, S1 × T1 × S2 × T2 ⊆ CKC since Tj^{ti} ⊆ Tj for each ti ∈ Ti of any player i. Since, for each player i, si is onto Ri, it follows that, for each si ∈ Ri of any player i, there exists ti ∈ Ti with si ∈ Si^{ti}.

Part 2: If there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKC, then si is rationalizable.
Assume that there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKC. In particular, CKC ≠ ∅. Let, for each i, T'_i := proj_{Ti} CKC and Xi := ⋃_{ti ∈ T'_i} Si^{ti}. It is sufficient to show that, for each i, Xi ⊆ Ri. By Proposition 25(ii), for each ti ∈ T'_i of any player i, β1^{ti} ⊆ β^{ti} ⊆ {ti} × Sj × T'_j since CKC = KCKC ⊆ Ki CKC. By the definition of C, it follows that, for each ti ∈ T'_i of any player i,
1. ≽^{ti} is conditionally represented by υi^{ti} satisfying that υi^{ti} ∘ z is a positive affine transformation of ui, and
2. p ≻^{ti} q if p_{Ej} weakly dominates q_{Ej} for Ej = Ej^{ti} := proj_{Sj×Tj} β1^{ti}, where β1^{ti} ⊆ proj_{Ti×Sj×Tj}[ratj].
Write Yj^{ti} := proj_{Sj} Ej^{ti} = proj_{Sj} β1^{ti}, and note that β1^{ti} ⊆ ({ti} × Sj × T'_j) ∩ proj_{Ti×Sj×Tj}[ratj] implies Yj^{ti} ⊆ Xj. It follows that, for any acts p and q on Sj × Tj satisfying that there exist mixed strategies pi, qi ∈ Δ(Si) such that, ∀(sj, tj) ∈ Sj × Tj, p(sj, tj) = z(pi, sj) and q(sj, tj) = z(qi, sj), p ≻^{ti} q if p_{Ej} weakly dominates q_{Ej} for Ej = Yj^{ti} × Tj. Hence, if si ∈ Si^{ti}, then there does not exist pi ∈ Δ(Si) such that pi weakly dominates si on Yj^{ti}. Since this holds for each ti ∈ T'_i of any player i, we have that X ⊆ c̃(X). Hence, Lemma 9(ii) entails that, for each i, Xi ⊆ Ri.
Proposition 26 is obtained also if CBC is used instead of CKC.

6.3

Admissible consistency of preferences (cont.)

In the present section we define the event of admissible consistency of


preferences in the case considered by Assumption 2, where preferences
need not be complete, and use this event to characterize the concept of
permissible pure strategies.
Caution. As in Section 5.3, player i has preference for cautious behavior at ti if he takes into account all opponent strategies for any opponent type that is deemed subjectively possible. Throughout this chapter, as well as Chapters 7 and 11, we assume that Assumption 2 is satisfied, so that the system of conditional preferences {≽^{ti}_φ | φ ∈ Φ^{ti}} satisfies Axiom 6, where Φ^{ti} denotes {φ ⊆ Ti × Sj × Tj | φ ∩ β^{ti} ≠ ∅}, and where β^{ti}, the set of states that player i deems subjectively possible at ti, satisfies ∅ ≠ β^{ti} ⊆ {ti} × Sj × Tj. Hence, Tj^{ti} := proj_{Tj} β^{ti} is the set of opponent types that player i deems subjectively possible.
Under Assumption 2, player i is cautious at ti if β^{ti} = {ti} × Sj × Tj^{ti}. Because then player i at ti takes into account all opponent strategies for any opponent type that is deemed subjectively possible. This means that i's choice set at ti never admits a weakly dominated strategy, thereby inducing preference for cautious behavior. Hence, under Assumption 2 we can define the event
[caui] := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | ∃Tj^{ti} such that β^{ti} = {ti} × Sj × Tj^{ti}}.
Write [cau] := [cau1] ∩ [cau2].
In the context of the present chapter, define as follows the event that player i's preferences over his strategies are admissibly consistent with the game G = (S1, S2, u1, u2) and the preferences of his opponent:
Ai := [ui] ∩ Bi[ratj] ∩ [caui].
Write A := A1 ∩ A2 for the event of admissible consistency.
Characterizing permissibility. We now characterize the concept
of permissible pure strategies (cf. Definition 13 of Chapter 5) as maximal
pure strategies under common certain belief of admissible consistency.

Proposition 27 A pure strategy si for i is permissible in a finite strategic two-player game G if and only if there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA.
To prove Proposition 27, it is helpful to establish a variant of Lemma 8. Define, for any (∅ ≠) Yj ⊆ Sj,
Di(Yj) := {si ∈ Si | ∃pi ∈ Δ(Si) such that pi weakly dominates si on Yj or Sj},
and write, for any (∅ ≠) X = X1 × X2 ⊆ S1 × S2, a(X) := a1(X2) × a2(X1), where
ai(Xj) := {si ∈ Si | ∃(∅ ≠) Yj ⊆ Xj such that si ∈ Si\Di(Yj)}.

Lemma 10 (i) P = a(P). (ii) For each i, si ∈ Pi if and only if there exists X = X1 × X2 with si ∈ Xi such that X ⊆ a(X).
Proof. In view of Lemma 8, it is sufficient to show that, for any (∅ ≠) Xj ⊆ Sj, ai(Xj) = āi(Xj).
Part 1: ai(Xj) ⊆ āi(Xj). If si ∉ āi(Xj), then ∃pi ∈ Δ(Si) s.t. pi strongly dominates si on Xj or pi weakly dominates si on Sj. From this it follows that, ∀(∅ ≠) Yj ⊆ Xj, ∃pi ∈ Δ(Si) s.t. pi weakly dominates si on Yj or Sj, implying that, ∀(∅ ≠) Yj ⊆ Xj, si ∈ Di(Yj). This means that si ∉ ai(Xj).
Part 2: āi(Xj) ⊆ ai(Xj). If si ∈ āi(Xj), then there does not exist pi ∈ Δ(Si) s.t. pi strongly dominates si on Xj or pi weakly dominates si on Sj. Hence, by Lemmas 4 and 5, there exists an LPS λ = (λ1, λ2) on Sj with supp λ1 ⊆ Xj and supp λ2 = Sj such that si is maximal in Δ(Si) w.r.t. the preferences represented by the vNM utility function ui and the LPS λ. Then there does not exist pi ∈ Δ(Si) s.t. pi weakly dominates si on supp λ1 (⊆ Xj) or supp λ2 (= Sj), implying that si ∉ Di(Yj) for Yj = supp λ1 ⊆ Xj. This means that si ∈ ai(Xj).

Proof of Proposition 27. Part 1: If si is permissible, then there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA.
It is sufficient to construct a belief system with S1 × T1 × S2 × T2 ⊆ CKA such that, for each si ∈ Pi of any player i, there exists ti ∈ Ti with si ∈ Si^{ti}. Construct a belief system with, for each i, a bijection si : Ti → Pi from the set of types to the set of permissible pure strategies. By Lemma 10(i) we have that, for each ti ∈ Ti of any player i, there exists Yj^{ti} ⊆ Pj such that si(ti) ∈ Si\Di(Yj^{ti}). Determine the set of opponent types that ti deems subjectively possible as follows: Tj^{ti} = {tj ∈ Tj | sj(tj) ∈ Yj^{ti}}. Let, for each ti ∈ Ti of any player i, ≽^{ti} satisfy
1. υi^{ti} ∘ z = ui (so that S1 × T1 × S2 × T2 ⊆ [u]), and
2. p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Ej^{ti} := {(sj, tj) | sj = sj(tj) and tj ∈ Tj^{ti}} or Ej = Sj × Tj^{ti}, which implies that β1^{ti} = {ti} × Ej^{ti} and β^{ti} = {ti} × Sj × Tj^{ti} (so that S1 × T1 × S2 × T2 ⊆ [cau]).
By the construction of Ej^{ti}, this means that Si^{ti} = Si\Di(Yj^{ti}) ∋ si(ti) since, for any acts p and q on Sj × Tj satisfying that there exist mixed strategies pi, qi ∈ Δ(Si) such that, ∀(sj, tj) ∈ Sj × Tj, p(sj, tj) = z(pi, sj) and q(sj, tj) = z(qi, sj), p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Yj^{ti} × Tj or Ej = Sj × Tj. This in turn implies, for each ti ∈ Ti of any player i,
3. β1^{ti} ⊆ proj_{Ti×Sj×Tj}[ratj] (so that S1 × T1 × S2 × T2 ⊆ Bi[ratj] ∩ Bj[rati]).
Furthermore, S1 × T1 × S2 × T2 ⊆ CKA since Tj^{ti} ⊆ Tj for each ti ∈ Ti of any player i. Since, for each player i, si is onto Pi, it follows that, for each si ∈ Pi of any player i, there exists ti ∈ Ti with si ∈ Si^{ti}.

Part 2: If there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA, then si is permissible.
Assume that there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA. In particular, CKA ≠ ∅. Let, for each i, T'_i := proj_{Ti} CKA and Xi := ⋃_{ti ∈ T'_i} Si^{ti}. It is sufficient to show that, for each i, Xi ⊆ Pi. By Proposition 25(ii), for each ti ∈ T'_i of any player i, β1^{ti} ⊆ β^{ti} ⊆ {ti} × Sj × T'_j since CKA = KCKA ⊆ Ki CKA. By the definition of A, it follows that, for each ti ∈ T'_i of any player i,
1. ≽^{ti} is conditionally represented by υi^{ti} satisfying that υi^{ti} ∘ z is a positive affine transformation of ui, and
2. p ≻^{ti} q if p_{Ej} weakly dominates q_{Ej} for Ej = Ej^{ti} := proj_{Sj×Tj} β1^{ti} or Ej = Sj × Tj^{ti}, where β1^{ti} ⊆ proj_{Ti×Sj×Tj}[ratj].
Write Yj^{ti} := proj_{Sj} Ej^{ti} = proj_{Sj} β1^{ti}, and note that β1^{ti} ⊆ ({ti} × Sj × T'_j) ∩ proj_{Ti×Sj×Tj}[ratj] implies Yj^{ti} ⊆ Xj. It follows that, for any acts p and q on Sj × Tj satisfying that there exist mixed strategies pi, qi ∈ Δ(Si) such that, ∀(sj, tj) ∈ Sj × Tj, p(sj, tj) = z(pi, sj) and q(sj, tj) = z(qi, sj), p ≻^{ti} q if p_{Ej} weakly dominates q_{Ej} for Ej = Yj^{ti} × Tj or Ej = Sj × Tj. Hence, Si^{ti} ⊆ Si\Di(Yj^{ti}). Since this holds for each ti ∈ T'_i of any player i, we have that X ⊆ a(X). Hence, Lemma 10(ii) entails that, for each i, Xi ⊆ Pi.
Proposition 27 is obtained also if CBA is used instead of CKA; this
is essentially the corresponding result by Brandenburger (1992). One
may argue that the result above is more complicated as it involves two
different epistemic operators. Still, it yields the insight that the essential
feature in a characterization of the Dekel-Fudenberg procedure is to let
irrational opponent choice be deemed subjectively possible. It also turns
out to be a useful benchmark for the analysis of backward induction in
Section 7.3 where the certain belief operator Ki rather than the belief
operator Bi must be used for the interactive epistemology (cf. the
analysis of Γ5 illustrated in Figure 7.1).

Chapter 7
BACKWARD INDUCTION

In recent years, two influential contributions on backward induction in


finite perfect information games have appeared, namely Aumann (1995)
and Ben-Porath (1997). These contributionsboth of which consider
generic perfect information games (where all payoffs are different)
reach opposite conclusions: While Aumann establishes that common
knowledge of rationality implies that the backward induction outcome
is reached, Ben-Porath shows that the backward induction outcome is
not the only outcome that is consistent with common certainty of rationality. The models of Aumann and Ben-Porath are different. One
such difference is that Aumann makes use of knowledge in the sense of
true knowledge, while Ben-Poraths analysis is based on certainty in
the sense of belief with probability one. Another is that the term rationality is used in different senses: Aumann imposes rationality in all
subgames, while Ben-Porath assumes rationality initially, in the whole
game, only (not after a surprise has occurred).
The present chapter, which reproduces Asheim (2002), shows how the
conclusions of Aumann and Ben-Porath can be captured by imposing requirements on the players within the same general framework. Furthermore, the interpretations of the present analysis correspond closely to
the intuitions that Aumann and Ben-Porath convey in their discussions.
Hence, the analysis of this chapter may increase our understanding of
the differences between the analyses of Aumann and Ben-Porath, and
thereby enhance our understanding of the epistemic conditions underlying backward induction. For ease of presentation, the analysis will be
limited to two-player games, as in the rest of the book. In this chap-

80

CONSISTENT PREFERENCES

ter, this is purely a matter of convenience as everything can directly be


generalized to n-player games (with n > 2).
Among the large literature on backward induction during the last
couple of decades,1 Renys (1993) impossibility result is of special importance. Reny associates a players rationality in an extensive game
with perfect (or almost perfect) information with what is called weak
sequential rationality; i.e., that a player chooses rationally in all subgames that are not precluded from being reached by the players own
strategy. He shows that there exist perfect information games where
the event that both players satisfy weak sequential rationality cannot be
commonly believed in all subgames. E.g., in the centipede game that
is illustrated in Figure 2.4, common belief of weak sequential rationality cannot be held in the subgame defined by 2s decision node. The
reason is that if 1 believes that 2 is rational in the subgame, and if 1
believes that 2 believes that 1 will be rational in the subgame defined by
1s second decision node, then 1 believes that 2 will choose `, implying
that only Out is a best response for 1. Then the fact that the subgame
defined by 2s decision node has been reached, contradicts 2s belief that
1 is rational in the whole game.
As a response, Ben-Porath (1997) imposes that common belief of weak sequential rationality is held initially, in the whole game, only. However, backward induction is not implied if weak sequential rationality is commonly believed initially, in the whole game, only. In the centipede game of Figure 2.4, the strategies Out and InL for player 1 and ℓ and r for player 2 are consistent with such common belief, while backward induction implies that down is played at any decision node.
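For concreteness, here is a generic backward-induction sketch on a finite perfect-information tree (an illustration only; the centipede-style payoffs below are made up rather than taken from Figure 2.4, and this is not the chapter's epistemic analysis).

```python
def backward_induction(node):
    """Return (payoff vector, move chosen at this node) for a perfect-information tree.
    A node is either {"payoffs": (u1, u2)} or {"player": i, "moves": {label: child}}."""
    if "payoffs" in node:
        return node["payoffs"], None
    best = None
    for label, child in node["moves"].items():
        value, _ = backward_induction(child)
        if best is None or value[node["player"]] > best[0][node["player"]]:
            best = (value, label)
    return best

# A short centipede-like game: player 0 can take (Out) or continue; then player 1; then player 0.
game = {"player": 0, "moves": {
    "Out": {"payoffs": (2, 0)},
    "In": {"player": 1, "moves": {
        "down": {"payoffs": (1, 3)},
        "right": {"player": 0, "moves": {
            "OUT": {"payoffs": (4, 2)},
            "IN": {"payoffs": (3, 5)}}}}}}}

value, move = backward_induction(game)
print(move, value)  # Out (2, 0): with these generic payoffs, taking immediately is the solution
```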
In order to obtain an epistemic characterization of backward induction, Aumann (1995) considers sequential rationality in the sense that a player chooses rationally in all subgames (see also footnote 3 of this chapter). However, the event that players satisfy sequential rationality is somewhat problematic. If, in the centipede game of Figure 2.4, 1 believes or knows that 2 chooses ℓ, then only by choosing the strategy OutL will 1 satisfy sequential rationality. However, what does it mean that 1 chooses OutL in the counterfactual event that player 2's decision
that 1 chooses OutL in the counterfactual event that player 2s decision
node were reached? It is perhaps more naturalas suggested by Stal-

1 Among

contributions that are not otherwise referred to in this chapter are Basu (1990),
Bicchieri (1989), Binmore (1987, 1995), Bonanno (1991, 2001), Clausing and Wilks (2000),
Dufwenberg and Lind
en (1996) Feinberg (2004a), Gul (1997), Kaneko (1999), Rabinowicz
(1997), and Rosenthal (1981).

Backward induction

81

naker (1998)to consider 2s belief about 1s subsequent action if 2s


decision node were reached. Since Aumann (1995) assumes knowledge
of rational choice in an S5 partition structure, such a question of belief
revision cannot be asked within Aumanns model.
By imposing a full support restriction, considering only players of types in proj_{T1×T2}[cau] (cf. the definition of [cau] in Section 6.3), the present chapter ensures that each player takes all opponent strategies into account. This has the structural implication that conditional beliefs are well-defined and the behavioral implication that a rational choice in the whole game is a rational choice in all subgames that are not precluded from being reached by the player's own strategy. Hence, by this restriction, we may consider rationality instead of weak sequential rationality (as shown by Lemma 11 and the subsequent text).
The main distinguishing feature of the present analysis is, however, to consider the event that a player believes in opponent rationality rather than the event that the player himself chooses rationally. This is of course in line with the consistent preferences approach that is the basis for this book. As shown by Proposition 27 of Chapter 6, permissible pure strategies (strategies surviving the Dekel-Fudenberg procedure, where one round of weak elimination is followed by iterated strong elimination) can be characterized as maximal strategies when there is common certain belief that each player believes initially, in the whole game, that the opponent chooses rationally ('belief of opponent rationality'). For generic perfect information games, Ben-Porath shows that the set of outcomes consistent with common belief of weak sequential rationality corresponds to the set of outcomes that survive the Dekel-Fudenberg procedure. Hence, maximal strategies when there is common certain belief of belief of opponent rationality correspond to outcomes that are promoted by Ben-Porath's analysis.
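As an illustration of the Dekel-Fudenberg procedure invoked above, the following sketch performs one round of elimination of weakly dominated strategies followed by iterated elimination of strongly dominated strategies. For brevity it only checks dominance by pure strategies; the definitions in the text also allow dominance by mixed strategies.

    # A sketch of the Dekel-Fudenberg procedure.  u1 is keyed by (s1, s2) and
    # u2 by (s2, s1); only dominance by pure strategies is checked here.

    def weakly_dominated(s, own, opp, u):
        return any(all(u[(t, o)] >= u[(s, o)] for o in opp) and
                   any(u[(t, o)] > u[(s, o)] for o in opp)
                   for t in own if t != s)

    def strongly_dominated(s, own, opp, u):
        return any(all(u[(t, o)] > u[(s, o)] for o in opp)
                   for t in own if t != s)

    def dekel_fudenberg(S1, S2, u1, u2):
        # one round of weak elimination for both players, against the full sets
        W1 = [s for s in S1 if not weakly_dominated(s, S1, S2, u1)]
        W2 = [s for s in S2 if not weakly_dominated(s, S2, S1, u2)]
        S1, S2 = W1, W2
        # iterated elimination of strongly dominated strategies
        changed = True
        while changed:
            T1 = [s for s in S1 if not strongly_dominated(s, S1, S2, u1)]
            T2 = [s for s in S2 if not strongly_dominated(s, S2, S1, u2)]
            changed = (T1, T2) != (S1, S2)
            S1, S2 = T1, T2
        return S1, S2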
An extensive game offers choice situations, not only initially, in the whole game, but also in proper subgames. In perfect information games (and, more generally, in multi-stage games) the subgames constitute an exhaustive set of such choice situations. Hence, in perfect information games one can replace belief of opponent rationality by 'belief in each subgame of opponent rationality': each player believes in each subgame that his opponent chooses rationally in the subgame. The main results of the present chapter (Propositions 28 and 29 of Section 7.3) show that, for generic perfect information games, common certain belief of belief in each subgame of opponent rationality is possible and uniquely determines the backward induction outcome. Hence, by substituting belief in each subgame of opponent rationality for belief of opponent rationality, the present analysis provides an alternative route to Aumann's conclusion, namely that common knowledge (or certain belief) of an appropriate form of (belief of) rationality implies backward induction.
This epistemic foundation for backward induction requires common certain belief of belief in each subgame of opponent rationality, where the term 'certain belief' is used in the sense that an event is certainly believed if the complement is subjectively impossible. As shown by a counterexample in Section 7.3, the characterization does not obtain if common belief is applied instead.2 Furthermore, the event of which there is common certain belief, namely belief in each subgame of opponent rationality, cannot be further restricted by taking the intersection with the event of rationality. The reason is that the full support restriction (i.e., that players are of types in proj_{T1×T2}[cau]) is inconsistent with certain belief of opponent rationality, as the latter prevents a player from taking into account irrational opponent choices and rules out a well-defined theory of belief revision.

7.1 Epistemic modeling of extensive games

The purpose of this section is to present a framework for extensive games of almost perfect information where each player is modeled as a decision maker under uncertainty, with preferences that are allowed to be incomplete.
An extensive game form. Inspired by Dubey and Kaneko (1984) and Chapter 6 of Osborne and Rubinstein (1994), a finite extensive two-person game form of almost perfect information with M − 1 stages can be described as follows. The set of histories is determined inductively: The set of histories at the beginning of the first stage is H^1 = {∅}. Let H^m denote the set of histories at the beginning of stage m. At h ∈ H^m let, for each player i, i's action set be denoted A_i(h), where i is inactive at h if A_i(h) is a singleton. Write A(h) := A_1(h) × A_2(h). Define the set of histories at the beginning of stage m+1 by H^{m+1} := {(h, a) | h ∈ H^m and a ∈ A(h)}. This concludes the induction. Denote by H := ∪_{m=1}^{M−1} H^m the set of subgames and by Z := H^M the set of outcomes.
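The inductive construction of histories can be sketched as follows; the function A below, returning each player's (possibly singleton) action set at a history, is a hypothetical stand-in for a concrete game form.

    # A sketch of the inductive construction of histories; histories are tuples
    # of action profiles.
    from itertools import product

    def build_histories(A, M):
        """Return (H, Z): the subgames H = H^1 ∪ ... ∪ H^(M-1) and the outcomes Z = H^M."""
        stages = [[()]]                                   # H^1 = {empty history}
        for m in range(1, M):                             # build H^(m+1) from H^m
            stages.append([h + (a,) for h in stages[-1] for a in product(*A(h))])
        H = [h for stage in stages[:-1] for h in stage]   # H^1, ..., H^(M-1)
        Z = stages[-1]                                    # H^M
        return H, Z

    # hypothetical action sets: both players active in stage 1, only player 2 in stage 2
    def A(h):
        return (("T", "B"), ("L", "R")) if len(h) == 0 else (("x",), ("l", "r"))

    H, Z = build_histories(A, M=3)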

2 For definitions of the certain belief operator K_i and the belief operator B_i in the current context, see Section 6.1.


A pure strategy for player i is a function s_i that assigns an action in A_i(h) to any h ∈ H. Denote by S_i player i's finite set of pure strategies, and let z : S → Z map strategy profiles into outcomes, where S := S_1 × S_2 is the set of strategy profiles.3 Then (S_1, S_2, z) is the corresponding finite strategic two-person game form. For any h ∈ H ∪ Z, let S(h) = S_1(h) × S_2(h) denote the set of strategy profiles that are consistent with h being reached. Note that S(∅) = S. For any h, h' ∈ H ∪ Z, h (weakly) precedes h' if and only if S(h) ⊇ S(h'). If s_i ∈ S_i and h ∈ H, let s_i|_h denote the strategy in S_i(h) having the following properties: (1) at subgames preceding h, s_i|_h determines the unique action leading to h, and (2) at all other subgames, s_i|_h coincides with s_i.
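A minimal sketch of the strategy s_i|_h under the representation of histories as tuples of action profiles; the dictionary-based strategy representation is an assumption made for illustration.

    # s_i|_h: at subgames (strictly) preceding h it plays the action leading
    # towards h, and elsewhere it coincides with s_i.  Strategies are dicts
    # from histories to own actions; i is the player's index in an action profile.

    def strategy_given_h(s_i, h, H, i):
        s_i_given_h = dict(s_i)
        for h_prime in H:
            if len(h_prime) < len(h) and h[:len(h_prime)] == h_prime:   # h' precedes h
                s_i_given_h[h_prime] = h[len(h_prime)][i]               # i's action towards h
        return s_i_given_h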
Epistemic modeling. Since the extensive game form determines a finite strategic game form, we may represent the strategic interaction by means of an epistemic model as defined by Definition 9 of Chapter 5. Since backward induction is a procedure that, like IESDS and the Dekel-Fudenberg procedure, does not rely on subjective probabilities, the analysis will allow for incomplete preferences. Hence, the epistemic model is combined with Assumption 2 of Chapter 6. In this respect the present analysis follows Aumann (1995), who presents a characterization of backward induction where subjective probabilities play no role.
Conditional preferences over strategies. Write ≿_h^{t_i} for player i's preferences at t_i conditional on subgame h ∈ H being reached; i.e., i's preferences at t_i conditional on the event {t_i} × S_j(h) × T_j. W.l.o.g. we may consider ≿_h^{t_i} to be preferences over acts from S_j(h) × T_j to Δ(Z) (instead of acts from {t_i} × S_j(h) × T_j to Δ(Z)). Denote by

  H^{t_i} := {h ∈ H | κ^{t_i} ∩ ({t_i} × S_j(h) × T_j) ≠ ∅}

the set of subgames that i deems subjectively possible at t_i. Under Assumption 2 it follows from Proposition 4 that, for each t_i of any player i and all h ∈ H^{t_i}, i's conditional preferences at t_i in subgame h can be conditionally represented by a vNM utility function υ_i^{t_i} : Δ(Z) → ℝ that does not depend on h.
3 A pure strategy s_i ∈ S_i can be viewed as an act on S_j that assigns z(s_i, s_j) ∈ Z to any s_j ∈ S_j. The set of pure strategies S_i is partitioned into equivalence classes of acts, since a pure strategy s_i also determines actions in subgames which s_i prevents from being reached. Each such equivalence class corresponds to a plan of action in the sense of Rubinstein (1991). As there is no need to differentiate between identical acts in the present analysis, the concept of a plan of action would have sufficed.


Hence, for each type t_i of any player i, player i's conditional preferences at t_i in subgame h, ≿_h^{t_i}, is a reflexive and transitive binary relation on acts from S_j(h) × T_j to Δ(Z) that is conditionally represented by a vNM utility function υ_i^{t_i} if h ∈ H^{t_i}. Since each mixed strategy p_i ∈ Δ(S_i(h)) is a function that assigns the randomized outcome z(p_i, s_j) to any (s_j, t_j) ∈ S_j(h) × T_j and is thus an act from S_j(h) × T_j to Δ(Z), we have that ≿_h^{t_i} determines reflexive and transitive preferences on i's set of mixed strategies, Δ(S_i(h)).

Player i's choice function at t_i is a function S_i^{t_i}(·) that assigns to every h ∈ H player i's set of maximal pure strategies at t_i in subgame h:

  S_i^{t_i}(h) := {s_i ∈ S_i(h) | there is no p_i ∈ Δ(S_i(h)) with p_i ≻_h^{t_i} s_i}.

Hence, a pure strategy s_i is in the set determined by i's choice function at t_i in subgame h if there is no mixed strategy in Δ(S_i(h)) that is strictly preferred to s_i given i's (possibly incomplete) conditional preferences at t_i in subgame h. Refer to S_i^{t_i}(h) as player i's choice set at t_i in subgame h, and write S_i^{t_i} = S_i^{t_i}(∅), thereby following the notation of Chapter 6. Since ≿_h^{t_i} is reflexive and transitive and satisfies objective independence, and S_i(h) is finite, it follows that the choice set S_i^{t_i}(h) is nonempty and supports any maximal mixed strategy: if q_i ∈ Δ(S_i(h)) and there is no p_i ∈ Δ(S_i(h)) such that p_i ≻_h^{t_i} q_i, then q_i ∈ Δ(S_i^{t_i}(h)).
By the following lemma, if s_i is maximal at t_i in subgame h, then s_i is maximal at t_i in any later subgame that s_i is consistent with.

Lemma 11 If s_i ∈ S_i^{t_i}(h), then s_i ∈ S_i^{t_i}(h') for any h' ∈ H with s_i ∈ S_i(h') ⊆ S_i(h).

Proof. The proof of this lemma is based on the concept of a strategically independent set due to Mailath et al. (1993). The set S' ⊆ S is strategically independent for player i in a strategic game G = (S_1, S_2, u_1, u_2) if S' = S_1' × S_2' and, ∀s_i, s_i' ∈ S_i', ∃s_i'' ∈ S_i' such that u_i(s_i'', s_j) = u_i(s_i', s_j) for all s_j ∈ S_j' and u_i(s_i'', s_j) = u_i(s_i, s_j) for all s_j ∈ S_j\S_j'. It follows from Mailath et al. (Definitions 2 and 3 and the 'if' part of Theorem 1) that S(h) is strategically independent for i for any subgame h in a finite extensive game of almost perfect information, and this does not depend on the vNM utility function that assigns payoff to any outcome. The argument is based on the property that ∃s_i'' ∈ S_i(h) such that z(s_i'', s_j) = z(s_i', s_j) for all s_j ∈ S_j(h) and z(s_i'', s_j) = z(s_i, s_j) for all s_j ∈ S_j\S_j(h). The point is that i's decision conditional on j choosing a strategy consistent with h and i's decision conditional on j choosing a strategy inconsistent with h can be made independently.

Suppose that s_i is not a maximal strategy at t_i in the subgame h'. Then there exists s_i' ∈ S_i(h') such that s_i' ≻_{h'}^{t_i} s_i. As noted above, S(h') is strategically independent for i. Hence, ∃s_i'' ∈ S_i(h') such that z(s_i'', s_j) = z(s_i', s_j) for all s_j ∈ S_j(h') and z(s_i'', s_j) = z(s_i, s_j) for all s_j ∈ S_j\S_j(h'). By Assumption 2 this implies that s_i'' ≻_h^{t_i} s_i, which contradicts that s_i is maximal at t_i in the subgame h.
The event that player i is rational in subgame h is defined by

  [rat_i(h)] := {(s_1, t_1, s_2, t_2) ∈ S_1 × T_1 × S_2 × T_2 | s_i ∈ S_i^{t_i}(h)}.

Write [rat_i] = [rat_i(∅)], thereby following the notation of Chapter 6. The imposition of a full support restriction by considering players of types in proj_{T1×T2}[cau] (cf. the definition of [cau] in Section 6.3) has the structural implication that, for all h, the conditional preferences ≿_h^{t_i} are nontrivial. Moreover, by Lemma 11 it has the behavioral implication that any choice s_i that is rational in h is also rational in any later subgame that s_i is consistent with. This means that rationality implies weak sequential rationality. In fact, ≿_h^{t_i} is admissible on {t_i} × S_j(h) × T_j^{t_i} (cf. Section 6.1), implying that any strategy that is weakly dominated in h cannot be rational in h. Thus, preference for cautious behavior is induced. However, in the context of generic perfect information games (cf. Section 7.2 of the present chapter) such admissibility has no cutting power beyond ensuring that rationality implies weak sequential rationality; see, e.g., Lemmas 1.1 and 1.2 of Ben-Porath (1997). Hence, in the class of games considered in our main results it is of no consequence to use rationality combined with full support rather than weak sequential rationality.
An extensive game. Consider an extensive game form, and let, for each i, υ_i : Z → ℝ be a vNM utility function that assigns a payoff to any outcome. Then the pair of the extensive game form and the vNM utility functions (υ_1, υ_2) is a finite extensive two-player game of almost perfect information, Γ. Let G = (S_1, S_2, u_1, u_2) be the corresponding finite strategic game, where for each i, the vNM utility function u_i : S → ℝ is defined by u_i = υ_i ∘ z (i.e., u_i(s) = υ_i(z(s)) for any s = (s_1, s_2) ∈ S). Assume that, for each i, there exist s = (s_1, s_2), s' = (s_1', s_2') ∈ S such that u_i(s) > u_i(s').

As before, the event that i plays the game G is given by

  [u_i] := {(s_1, t_1, s_2, t_2) ∈ S_1 × T_1 × S_2 × T_2 | υ_i^{t_i} ∘ z is a positive affine transformation of u_i},

while [u] := [u_1] ∩ [u_2] is the event that both players play G.
Conditional belief. As before, say that E ⊆ S_1 × T_1 × S_2 × T_2 does not concern player i's strategy choice if E = S_i × proj_{T_i×S_j×T_j} E. If E does not concern player i's strategy choice and h is deemed subjectively possible by i at t_i (i.e., h ∈ H^{t_i}), say that player i at t_i believes the event E conditional on subgame h if t_i ∈ proj_{T_i} B_i(h)E, where

  B_i(h)E := {(s_1, t_1, s_2, t_2) ∈ S_1 × T_1 × S_2 × T_2 | ∃ℓ ∈ {1, ..., L} such that ∅ ≠ ρ_ℓ^{t_i} ∩ (T_i × S_j(h) × T_j) ⊆ proj_{T_i×S_j×T_j} E},

and (ρ_1^{t_i}, ..., ρ_L^{t_i}) is the vector of nested sets on which ≿^{t_i} is admissible. By writing, for each h ∈ H^{t_i}, β^{t_i}(h) := ρ_ℓ^{t_i} ∩ (T_i × S_j(h) × T_j), where ℓ := min{k ∈ {1, ..., L} | ρ_k^{t_i} ∩ (T_i × S_j(h) × T_j) ≠ ∅}, we have that

  B_i(h)E = {(s_1, t_1, s_2, t_2) ∈ S_1 × T_1 × S_2 × T_2 | β^{t_i}(h) ⊆ proj_{T_i×S_j×T_j} E}.

It follows from the analysis of Chapter 4 that, for each h ∈ H^{t_i}, ≿_h^{t_i} is admissible on β^{t_i}(h), and there is no smaller subset of {t_i} × S_j(h) × T_j on which ≿_h^{t_i} is admissible.4
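The conditional belief set β^{t_i}(h) defined above can be computed by scanning the nested vector (ρ_1^{t_i}, ..., ρ_L^{t_i}) for the first set that meets S_j(h) × T_j, as in the following sketch; the set-of-pairs representation (with the {t_i} component suppressed, as in the text) is an assumption made for illustration.

    # The conditional belief set: intersect S_j(h) x T_j with the first set in
    # the nested vector rho_1 ⊆ ... ⊆ rho_L that meets it.

    def conditional_belief(rho, Sj_h, Tj):
        event_h = {(s_j, t_j) for s_j in Sj_h for t_j in Tj}
        for rho_k in rho:                        # scan levels in order
            hit = rho_k & event_h
            if hit:                              # first non-empty intersection
                return hit
        return set()                             # h is subjectively impossible at t_i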
The collection of sets {β^{t_i}(h) | h ∈ H^{t_i}} is a system of conditional filter generating sets as defined in Section 5 of Brandenburger (1998). Although completeness of preferences is not imposed under Assumption 2, ≿^{t_i} may encode more information about i's preferences at t_i than what is recoverable from such a system of conditional filter generating sets.

It follows from the full support restriction imposed by considering players of types in proj_{T1×T2}[cau] (cf. the definition of [cau] in Section 6.3) that the preferences at t_i have full support on S_j, implying in turn that H^{t_i} = H and, at t_i, i's belief conditional on the subgame h is well-defined (in the sense that the non-empty set β^{t_i}(h) is uniquely determined) for all h ∈ H. Hence, a well-defined belief conditional on h is implied by full support alone; it does not require that h is actually reached. This means that a requirement on i's belief conditional on the subgame h is a requirement on the preferences (the type) of player i only; it does not impose that i makes a strategy choice consistent with h.

Since the conditional belief operator is used only for objectively knowable events that are subjectively possible, we do not consider hypothetical events. Hence, hypothetical epistemic operators of the kind developed by Samet (1996) are not needed in the present framework.
4 The existence of such a smaller subset would contradict Propositions 6 and 9(ii).

7.2 Initial belief of opponent rationality

A finite extensive game is
... of perfect information if, at any h ∈ H, there exists at most one player that has a non-singleton action set.
... generic if, for each i, υ_i(z) ≠ υ_i(z') whenever z and z' are different outcomes.
Generic extensive games of perfect information have a unique subgame-perfect equilibrium. Moreover, in such games the procedure of backward induction yields in any subgame the unique subgame-perfect equilibrium outcome. If s* denotes the unique subgame-perfect equilibrium, then, for any subgame h, z(s*|_h) is the backward induction outcome in the subgame h, and S(z(s*|_h)) is the set of strategy vectors consistent with the backward induction outcome in the subgame h.

Both Aumann (1995) and Ben-Porath (1997) analyze generic extensive games of perfect information. As already pointed out, while Aumann establishes that common (true) knowledge of (sequential) rationality5 implies that the backward induction outcome is reached, Ben-Porath shows that the backward induction outcome is not the only outcome that is consistent with common belief (in the whole game) of (weak sequential) rationality. The purpose of the present section is to interpret the analysis of Ben-Porath by applying Proposition 27 to the class of generic perfect information games.
Applying admissible consistency to extensive games. Recall that the event of admissible consistency is defined as A := A_1 ∩ A_2, where

  A_i := [u_i] ∩ B_i[rat_j] ∩ [cau_i].

Again note that a full support restriction is imposed by considering players of types in proj_{T1×T2}[cau], ensuring that each player takes all opponent strategies into account.
In Proposition 27 of Chapter 6 we have established that the concept of permissible pure strategies can be characterized as maximal pure strategies under common certain belief of admissible consistency. Recall also that permissible strategies (cf. Definition 13 of Chapter 5) correspond to strategies surviving the Dekel-Fudenberg procedure, where one round of weak elimination is followed by iterated strong elimination. In the context of generic perfect information games, Ben-Porath (1997) establishes through his Theorem 1 that the set of outcomes consistent with common belief (initially, in the whole game) of (weak sequential) rationality corresponds to the set of outcomes that survive the Dekel-Fudenberg procedure. Hence, by Proposition 27, maximal strategies when there is common certain belief of admissible consistency correspond to the outcomes promoted by Ben-Porath's analysis.

5 Aumann (1995) uses the term 'substantive rationality', meaning that for all histories h, if a player were to reach h, then the player would choose rationally at h. See Aumann (1995, pp. 14-16) and Aumann (1998) as well as Halpern (2001) and Stalnaker (1998, Section 5).
An example. To illustrate how common certain belief of admissible consistency is consistent with outcomes other than the unique backward induction outcome, consider the strategic game G'_3, with corresponding extensive form Γ'_3; i.e., the centipede game illustrated in Figure 2.4. Here, backward induction implies that down is played at any decision node. Let T_1 = {t'_1, t''_1} and T_2 = {t'_2, t''_2}. Assume that the preferences of each type t_i of any player i are represented by a vNM utility function υ_i^{t_i} satisfying υ_i^{t_i} ∘ z = u_i and a 2-level LPS on S_j × T_j. In Table 7.1, the first numbers in the parentheses express primary probability distributions, while the second numbers express secondary probability distributions. The strategies OutL and OutR are merged as their relative likelihood does not matter; see footnote 3. Note that all types are in proj_{T1×T2}[cau], implying that players take all opponent strategies into account. With these 2-level LPSs each type's preferences over the player's own strategies are given by

  Out ≻_{t'_1} InL ≻_{t'_1} InR        InL ≻_{t''_1} Out ≻_{t''_1} InR
  ℓ ≻_{t'_2} r                         r ≻_{t''_2} ℓ

It is easy to check that both players satisfy belief of opponent rationality at each of their types; e.g., both t'_2 and t''_2 assign positive (primary) probability to an opponent strategy-type pair only if it is a maximal strategy for the opponent type (i.e., Out in the case of t'_1 and InL in the case of t''_1). Thus, A = S_1 × T_1 × S_2 × T_2. Since, for each t_i ∈ T_i of any player i, κ^{t_i} ⊆ {t_i} × S_j × T_j, it follows that CKA = S_1 × T_1 × S_2 × T_2. Hence, preferences consistent with common certain belief of admissible consistency need not reflect backward induction since InL and r are maximal strategies.
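The way a 2-level LPS ranks a player's own strategies, as used in this example, can be sketched as follows; the payoff numbers in u1 are hypothetical placeholders for the centipede game of Figure 2.4 (not reproduced here), and the probability weights are only meant to resemble type t'_1 of Table 7.1.

    # Rank own pure strategies by comparing expected utilities level by level
    # of a 2-level LPS over opponent strategy-type pairs.

    def lex_rank(own_strategies, lps, u):
        """Sort own strategies by (level-1 expected utility, level-2 expected utility)."""
        def levels(s_own):
            return tuple(sum(p * u[(s_own, s_opp)] for (s_opp, _t_opp), p in level.items())
                         for level in lps)
        return sorted(own_strategies, key=levels, reverse=True)

    # hypothetical payoffs for player 1 against l and r
    u1 = {("Out", "l"): 2, ("Out", "r"): 2,
          ("InL", "l"): 1, ("InL", "r"): 4,
          ("InR", "l"): 1, ("InR", "r"): 3}

    # a 2-level LPS resembling type t1' of Table 7.1: primary weight on player 2
    # choosing rationally, secondary weight spread over all strategy-type pairs
    lps_t1 = ({("l", "t2'"): 0.8, ("r", "t2'"): 0.2},
              {("l", "t2'"): 0.7, ("r", "t2'"): 0.1, ("l", "t2''"): 0.1, ("r", "t2''"): 0.1})

    print(lex_rank(["Out", "InL", "InR"], lps_t1, u1))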
Table 7.1. An epistemic model for G'_3 with corresponding extensive form Γ'_3.

  t'_1:        t'_2            t''_2
    ℓ       (4/5, 7/10)     (0, 1/10)
    r       (1/5, 1/10)     (0, 1/10)

  t''_1:       t'_2            t''_2
    ℓ       (3/5, 5/10)     (0, 1/10)
    r       (2/5, 3/10)     (0, 1/10)

  t'_2:        t'_1            t''_1
    Out     (1/2, 1/4)      (0, 1/8)
    InL     (0, 1/8)        (1/2, 1/4)
    InR     (0, 1/8)        (0, 1/8)

  t''_2:       t'_1            t''_1
    Out     (1, 1/2)        (0, 0)
    InL     (0, 1/4)        (0, 0)
    InR     (0, 1/4)        (0, 0)

Note that, conditional on player 2's decision node being reached (i.e., 1 choosing InL or InR), player 2 at t'_2 updates her beliefs about the type of player 1 and assigns (primary) probability one to player 1 being of type t''_1. Consequently, the conditional belief of player 2 at t'_2 assigns (primary) probability one to player 1 choosing InL. Player 2 at t''_2, on the other hand, does not admit the possibility that 1 is of another type than t'_1. Since the choice of In at 1's first decision node is not rational for player 1 at t'_1, there is no restriction concerning the conditional belief of player 2 at t''_2 about the choice at 1's second decision node. In the terminology of Ben-Porath, a 'surprise' has occurred. Subsequent to such a surprise, a player need not believe that the opponent chooses rationally among his remaining strategies.

7.3 Belief in each subgame of opponent rationality

A simultaneous game offers only one choice situation. Hence, for a game in this class, it seems reasonable that belief of opponent rationality is held in the whole game only, as formalized by the requirement 'belief of opponent rationality'. An extensive game with a nontrivial dynamic structure, however, offers such choice situations, not only initially, in the whole game, but also in proper subgames. Moreover, for extensive games of almost perfect information, the subgames constitute an exhaustive set of such choice situations. This motivates imposing belief in each subgame of opponent rationality. Hence, consider the event that i believes conditional on subgame h ∈ H^{t_i} that j is rational in h:

  B_i(h)[rat_j(h)] = {(s_1, t_1, s_2, t_2) ∈ S_1 × T_1 × S_2 × T_2 | (s_j', t_j') ∈ proj_{S_j×T_j} β^{t_i}(h) implies s_j' ∈ S_j^{t_j'}(h)}.

Since H^{t_i} = H whenever t_i ∈ proj_{T_i}[cau_i], it follows that, if t_i ∈ proj_{T_i}(∩_{h∈H^{t_i}} B_i(h)[rat_j(h)] ∩ [cau_i]), then at t_i player i believes conditional on any subgame h that j is rational in h. In other words,

  (∩_{h∈H} B_i(h)[rat_j(h)]) ∩ [cau_i]

is the event that player i believes in each subgame h that the opponent j is rational in h.6

Consider a finite extensive two-player game of almost perfect information with corresponding strategic game G. Say that at t_i player i's preferences over his strategies are admissibly subgame consistent with the game and the preferences of his opponent if t_i ∈ proj_{T_i} A*_i, where

  A*_i := [u_i] ∩ (∩_{h∈H} B_i(h)[rat_j(h)]) ∩ [cau_i].

Refer to A* := A*_1 ∩ A*_2 as the event of admissible subgame consistency. This definition of admissible subgame consistency can be applied to any finite extensive game of almost perfect information. However, in order to relate to Aumann's (1995) Theorems A and B, the following analysis is concerned with generic perfect information games.
The example revisited. In the belief system of Table 7.1, player 2 at type t''_2 does not satisfy belief in each subgame of opponent rationality. By belief in each subgame of opponent rationality, player 2 must believe, conditional on the subgame defined by 2's decision node, that 1 chooses his maximal strategy, InL, in the subgame. This means that player 2 prefers ℓ to r, implying that player 1 must prefer Out to InL if he satisfies belief in each subgame of opponent rationality. Thus, common certain belief of admissible subgame consistency entails that any types of players 1 and 2 have the preferences

  Out ≻_{t_1} InL ≻_{t_1} InR        ℓ ≻_{t_2} r

6 Note that the requirement of such belief in each subgame of opponent rationality allows a player to update his belief about the type of his opponent. Hence, there is no assumption of epistemic independence between different agents in the sense of Stalnaker (1998); cf. the remark after the proof of Proposition 28 as well as Section 7.4. Still, the requirement can be considered a non-inductive analog to forward knowledge of rationality as defined by Balkenborg and Winter (1997), and it is related to the requirement in Section 5 of Samet (1996) that each player hypothesizes that if h were reached, then the opponent would behave rationally at h.


respectively, meaning that if a player chooses a maximal strategy in a subgame, then his choice is made in accordance with backward induction. Demonstrating that this conclusion holds in general for generic perfect information games constitutes the main results of the present chapter.

Main results. In analogy with Aumann's (1995) Theorems A and B, it is established that
... any vector of maximal strategies in a subgame of a generic perfect information game, in a state where there is common certain belief of admissible subgame consistency, leads to the backward induction outcome in the subgame (Proposition 28). Hence, by substituting ∩_{h∈H} B_i(h)[rat_j(h)] for B_i[rat_j], the present analysis yields support to Aumann's conclusion, namely that if there is common knowledge (or certain belief) of an appropriate form of (belief of) rationality, then backward induction results.
... for any generic perfect information game, common certain belief of admissible subgame consistency is possible (Proposition 29). Hence, the result of Proposition 28 is not empty.

Proposition 28 Consider a finite generic extensive two-player game of perfect information with corresponding strategic game G. If, for some epistemic model, (t_1, t_2) ∈ proj_{T1×T2} CKA*, then, for each h ∈ H, S_1^{t_1}(h) × S_2^{t_2}(h) ⊆ S(z(s*|_h)), where s* denotes the unique subgame-perfect equilibrium.
Proof. In view of properties of the certain belief operator (cf. Proposition 25(ii)), it suffices to show for any g = 0, ..., M−2 that S_1^{t_1}(h) × S_2^{t_2}(h) ⊆ S(z(s*|_h)) for any h ∈ H^{M−1−g} if there exists an epistemic model with (t_1, t_2) ∈ proj_{T1×T2} K^g A*. This is established by induction.

(g = 0) Let h ∈ H^{M−1}. First, consider j with a singleton action set at h. Then trivially S_j^{t_j}(h) = S_j(h) = S_j(z(s*|_h)). Now, consider i with a non-singleton action set at h; since Γ has perfect information, there is at most one such i. Let t_i ∈ proj_{T_i} K^0 A* = proj_{T_i} A*. Then it follows that S_i^{t_i}(h) = S_i(z(s*|_h)) since Γ is generic and A* ⊆ [u_i] ∩ [cau_i].

(g = 1, ..., M−2) Suppose it has been established for g' = 0, ..., g−1 that S_1^{t_1}(h') × S_2^{t_2}(h') ⊆ S(z(s*|_{h'})) for any h' ∈ H^{M−1−g'} if there exists an epistemic model with (t_1, t_2) ∈ proj_{T1×T2} K^{g'} A*. Let h ∈ H^{M−1−g}. Part 1. Consider j with a singleton action set at h. Let t_j ∈ proj_{T_j} K^{g−1} A*. Then S_j(h) = S_j(h, a) and, by Lemma 11 and the premise, S_j^{t_j}(h) ⊆ S_j^{t_j}(h, a) ⊆ S_j(z(s*|_{(h,a)})) if a is a feasible action vector at h. This implies that

  S_j^{t_j}(h) ⊆ ∩_a S_j(z(s*|_{(h,a)})) ⊆ S_j(z(s*|_h)).

Hence, if s_j ∈ S_j^{t_j}(h), then s_j is consistent with the backward induction outcome in any subgame (h, a) immediately succeeding h.
Part 2. Consider i with a non-singleton action set at h; since Γ has perfect information, there is at most one such i. Let t_i ∈ proj_{T_i} K^g A*. The preceding argument implies that S_j^{t_j}(h) ⊆ ∩_a S_j(z(s*|_{(h,a)})) whenever t_j ∈ T_j^{t_i}, since t_i ∈ proj_{T_i} K^g A* ⊆ proj_{T_i} K_i K^{g−1} A*. Let s_i ∈ S_i(h) be a strategy that differs from s*_i|_h by assigning a different action at h (i.e., z(s_i, s*_j|_h) ≠ z(s*|_h) and s_i(h') = s*_i|_h(h') whenever S_i(h) ⊋ S_i(h')). Let p and q be acts on S_j × T_j satisfying that, ∀(s_j, t_j) ∈ S_j × T_j, p(s_j, t_j) = z(s*_i|_h, s_j) and q(s_j, t_j) = z(s_i, s_j). Then

  p restricted to (∩_a S_j(z(s*|_{(h,a)}))) × T_j strongly dominates q restricted to (∩_a S_j(z(s*|_{(h,a)}))) × T_j

by backward induction, since Γ is generic and t_i ∈ proj_{T_i} K^g A* ⊆ proj_{T_i}[u_i]. Since S_j^{t_j}(h) ⊆ ∩_a S_j(z(s*|_{(h,a)})) whenever t_j ∈ T_j^{t_i}, it follows that, ∀t_j ∈ T_j^{t_i},

  p restricted to S_j^{t_j}(h) × {t_j} strongly dominates q restricted to S_j^{t_j}(h) × {t_j},

and, thus, t_i ∈ proj_{T_i} K^g A* ⊆ proj_{T_i}(B_i(h)[rat_j(h)] ∩ [cau_i]) implies that p ≻_h^{t_i} q. It has thereby been established that s_i ∈ S_i(h)\S_i^{t_i}(h) if s_i differs from backward induction only by the action taken at h. However, by the premise that S_i^{t_i}(h, a) ⊆ S_i(z(s*|_{(h,a)})) if a is a feasible action vector at h, it follows that any s_i ∈ S_i^{t_i}(h) is consistent with the backward induction outcome in the subgame (h, (s_i(h), a_j)) immediately succeeding h when i plays the action s_i(h) at h (since s_i ∈ S_i(h, (s_i(h), a_j)) and, by Lemma 11, s_i ∈ S_i^{t_i}(h, (s_i(h), a_j))). Hence, S_i^{t_i}(h) ⊆ S_i(z(s*|_h)).
It follows from the proof of Proposition 28 that, for a generic perfect information game with M−1 stages, it is sufficient with (M−2)-order mutual certain belief of admissible subgame consistency in order to obtain backward induction. Hence, K^{M−2}A* can be substituted for CKA*.
Backward induction will not be obtained, however, if CBA* is substituted for CKA*. This can be shown by considering a counterexample that builds on the four-legged centipede game Γ_5 of Figure 7.1 and the epistemic model of Table 7.2.
93

Backward induction

1c
Out
2
0

In

Figure 7.1.

Table 7.2.
t01 :
`
r`0
rr 0
t02 :
Out
InL
InR

2s
`
1
3

1s

2s

L
4
2

`0
3
5

6
4

r0

5 (a four-legged centipede game).

An epistemic model for 5 .

t02
4 7
, ,
5 10
1
0, 10
,
0, 0,

7
12 
1
12
1
12

t01
1 1
, ,
2 3
0, 16 ,
0, 0,

1
4
1
8
1
8

t001 :

t002
1
0, 10 ,
1 1
, ,
5 10
0, 0,

t001
0, 16 ,
1 1
, ,
2 3
0, 0,

1
12 
1
12
1
12

1
8
1
4
1
8

`
r`0
rr0

t000
1
(0, 0, 0)
(0, 0, 0)
(0, 0, 0)

t02
3 5
, ,
5 10
1
0, 10
,
0, 0,

5
12 
1
12
1
12

t002 :
Out
InL
InR

t002
1
0, 10 ,
2 3
, ,
5 10
0, 0,

t000
1 :


1
12 
3
12
1
12

t01 
1, 21 , 13 
0, 41 , 16 
0, 0, 61

`
r`0
rr 0

t001
(0, 0, 0)
(0, 0, 0)
(0, 0, 0)

t02 
1
10 
1
10 
3
10

t002 
1
10 
1
10 
3
10

t000
1

1
0, 0, 12

1
0, 0, 12
1 1
0, 4 , 6

In the table the preferences of each type t_i of any player i are represented by a vNM utility function υ_i^{t_i} satisfying υ_i^{t_i} ∘ z = u_i and a 1- or 3-level LPS on S_j × T_j, where T_1 = {t'_1, t''_1, t'''_1} and T_2 = {t'_2, t''_2}. While all types are in proj_{T1×T2}[cau], implying that players take all opponent strategies into account, inspection shows that A* = S_1 × {t'_1, t''_1} × S_2 × {t'_2, t''_2}, since player 1 at t'''_1 does not satisfy belief in each subgame of opponent rationality. Furthermore, each player i believes at t'_i or t''_i that the opponent is of a type in {t'_j, t''_j}. This implies that CBA* = A*. Since InL is the maximal strategy for 1 at t''_1 and rℓ' is the maximal strategy for 2 at t''_2, it follows that preferences consistent with common belief of admissible subgame consistency need not reflect backward induction. However, 2 does not certainly believe at t''_2 that the opponent is not of type t'''_1. Therefore, KA* = S_1 × {t'_1, t''_1} × S_2 × {t'_2}, while KKA* = ∅. Hence, preferences that yield maximal strategies in contradiction with backward induction are not consistent with common certain belief of admissible subgame consistency.
The example shows that t_i ∈ proj_{T_i} A*_i is consistent with player i at t_i updating his beliefs about the preferences of his opponent conditional on a subgame being reached. I.e., 1 at t'_1 assigns initially, in the whole game, (primary) probability 4/5 to 2 being of type t'_2 with preferences ℓ ≻ rℓ' ≻ rr', while in the subgame defined by 1's second decision node 1 at t'_1 assigns (primary) probability one to 2 being of type t''_2 with preferences rℓ' ≻ ℓ ≻ rr'. This shows that Stalnaker's (1998) assumption of epistemic independence is not made; a player is in principle allowed to learn about the type of his opponent on the basis of previous play. However, in an epistemic model with CKA* ≠ ∅, t_1 ∈ proj_{T_1} CKA* implies that 1 certainly believes at t_1 that 2 is of a type with preferences ℓ ≻ rℓ' ≻ rr'. In other words, if there is common certain belief of admissible subgame consistency, there is essentially nothing to learn about the opponent.

Proposition 29 For any finite generic two-player extensive game of perfect information with corresponding strategic game G, there exists a belief system for G with CKA* ≠ ∅.
Proof. Construct an epistemic model with one type of each player: T_1 = {t_1} and T_2 = {t_2}. Write, for each player j and each m ∈ {1, ..., M−1}, S_j^m := {s*_j|_h | h ∈ H^m}, and S_j^M := S_j. Let, for each player i, λ^{t_i} = (λ_1^{t_i}, ..., λ_M^{t_i}) ∈ LΔ(S_j × {t_j}) satisfy the following requirement: ∀m ∈ {1, ..., M}, supp λ_m^{t_i} = S_j^m × {t_j}. By letting ≿^{t_i} be represented by a vNM utility function υ_i^{t_i} satisfying υ_i^{t_i} ∘ z = u_i and the LPS λ^{t_i}, we obtain (1) [u_i] ∩ [cau_i] = S_1 × T_1 × S_2 × T_2. Let, ∀h ∈ H, λ^{t_i}|_h = (λ'_1{}^{t_i}, ..., λ'_{M|h}{}^{t_i}) denote the conditional of λ^{t_i} on S_j(h) × T_j. By the properties of a subgame-perfect equilibrium, ∀h ∈ H, λ'_1{}^{t_i}(s*_j|_h, t_j) = 1 and s*_i|_h ∈ S_i^{t_i}(h). Hence, since likewise s*_j|_h ∈ S_j^{t_j}(h), we have that (2) ∩_{h∈H} B_i(h)[rat_j(h)] = S_1 × T_1 × S_2 × T_2. As (1) and (2) hold for both players, it follows that CKA* = A* = S_1 × T_1 × S_2 × T_2 ≠ ∅.
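The LPS constructed in this proof can be sketched as follows; uniform weights within each level are one admissible choice (any weights with the required supports would do), and s_star_j_given(h), returning s*_j|_h in a hashable form, is an assumed helper.

    # Level m of the LPS puts positive weight exactly on S_j^m x {t_j}, where
    # S_j^m collects the strategies s*_j|_h for subgames h at the beginning of
    # stage m, and the last level has full support on S_j.

    def proposition29_lps(s_star_j_given, H_by_stage, S_j, t_j):
        """H_by_stage = [H^1, ..., H^(M-1)]; returns the M-level LPS as a list of
        probability dicts over (strategy, type) pairs."""
        levels = []
        for H_m in H_by_stage:                                  # levels m = 1, ..., M-1
            S_j_m = {s_star_j_given(h) for h in H_m}
            levels.append({(s, t_j): 1 / len(S_j_m) for s in S_j_m})
        levels.append({(s, t_j): 1 / len(S_j) for s in S_j})    # level M: full support on S_j
        return levels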

7.4 Discussion

In this section we first interpret our analysis in view of Aumann (1995) and then present a discussion of the relationship to Battigalli (1996a).
Adding belief revision to Aumann's analysis. Consider a generic perfect information game. Say that a player's preferences (at a given type) are in accordance with backward induction if, in any subgame, a strategy is a rational choice only if it is consistent with the backward induction outcome. Using this terminology, Proposition 28 can be restated as follows: Under common certain belief of admissible subgame consistency, players are of types with preferences that are in accordance with backward induction. Furthermore, common certain belief of admissible subgame consistency implies that each player deems it subjectively impossible that the opponent is of a type with preferences not in accordance with backward induction.
However, since admissible subgame consistency is imposed on preferences, reaching 2's decision node and 1's second decision node in the centipede game of Figure 2.4 does not contradict common certain belief of admissible subgame consistency. Of course, these decision nodes will not be reached if players choose rationally. But that players satisfy belief in each subgame of opponent rationality is not a requirement concerning whether their own choice is rational; rather, it means that they believe (with probability one) in any subgame that their opponent will choose rationally. Combined with the assumption that all types are in proj_{T1×T2}[cau], which entails that each player deems any opponent strategy subjectively possible, this means that belief revision is well-defined.
Hence, on the one hand, we capture the spirit of a conclusion that can be drawn from Aumann's (1995) analysis, namely that, when made subject to epistemic modeling, backward induction corresponds to each player having knowledge (or being certain) of some essential feature of the opponent. In Aumann's case, each player deems it impossible, under common (true) knowledge of (sequential) rationality, that the opponent takes an action inconsistent with backward induction. The analogous result in the present case is that each player deems it subjectively impossible, under common certain belief of admissible subgame consistency, that the opponent has preferences not in accordance with backward induction.

On the other hand, we are still able to present an explicit analysis of how players revise their beliefs about the opponent's subsequent choice if surprising actions were to be made. As noted in the introduction to this chapter, this fundamental issue of belief revision cannot formally be raised within Aumann's framework.
Stalnaker (1998) argues, contrary to statements made by Aumann (1995, Section 5f), that an assumption of belief revision is implicit in Aumann's motivation, namely that information about different agents of the opponent is treated as epistemically independent. In the reformulation by Halpern (2001),7 this means that in a state closest to the current state when a player learns that the opponent has not followed her strategy, he believes that the opponent will follow her strategy in the remaining subgame.

7 See Halpern (2001) for an instructive discussion of the differences between Aumann (1995) and Stalnaker (1998), as well as how these relate to Samet (1996).
There is no assumption of epistemic independence in the current interpretation of Aumann's result. Instead, we have changed statements about opponents from being concerned with strategy choice to being related to preferences. While it is desirable when modeling backward induction to have an explicit theory of revision of beliefs about opponent choice, a theory of revision of beliefs about opponent preferences is inconsistent with maintaining both (a) that preferences are necessarily revealed from choice, and (b) that there is common certain belief of the game being played (i.e., consider the case where A_i(∅) is non-singleton, and a_i ∈ A_i(∅) ends the game and leads to an outcome that is preferred by i to any other outcome). Here we have kept the assumption that there is common certain belief of the game, meaning that the game is of complete information, while requiring only conditional belief in each subgame of opponent rationality, meaning that irrational opponent choices, although being probability zero events, are not subjectively impossible.

We have shown how common certain belief of admissible subgame consistency implies that each player deems it impossible that the opponent has preferences not in accordance with backward induction and thus interprets any deviation from the backward induction path as the opponent not having made a rational choice. In this way we present a model that combines a result that resembles Aumann (1995), by associating backward induction with certainty about opponent type, with an analysis that, unlike Aumann's, yields a theory of belief revision about opponent choice.
Rationality orderings. The constructive proof of Proposition 29 shows how common certain belief of admissible subgame consistency may lead player i at t_i to have preferences over i's strategies that are represented by a vNM utility function υ_i^{t_i} satisfying υ_i^{t_i} ∘ z = u_i and an LPS λ^{t_i} = (λ_1^{t_i}, ..., λ_L^{t_i}) ∈ LΔ(S_j × T_j) with more than two levels of subjective probability distributions (i.e., L > 2). E.g., in the centipede game of Figure 2.4, common certain belief of admissible subgame consistency implies that player 2 at any type t_2 has preferences that can be represented by υ_2^{t_2} satisfying υ_2^{t_2} ∘ z = u_2 and λ^{t_2} = (λ_1^{t_2}, λ_2^{t_2}, λ_3^{t_2}) satisfying proj_{S_1} supp λ_1^{t_2} = {Out}, proj_{S_1} supp λ_2^{t_2} = {Out, InL}, and proj_{S_1} supp λ_3^{t_2} = S_1. One may interpret
proj_{S_j} supp λ_1^{t_i} to be j's most rational strategies,
proj_{S_j} supp λ_L^{t_i} \ ∪_{k<L} proj_{S_j} supp λ_k^{t_i} to be j's completely irrational strategies, and
proj_{S_j} supp λ_ℓ^{t_i} \ ∪_{k<ℓ} proj_{S_j} supp λ_k^{t_i}, for ℓ = 2, ..., L−1, to consist of strategies for j that are at intermediate degrees of rationality.
This illustrates that

  (proj_{S_j} supp λ_1^{t_i}, ..., proj_{S_j} supp λ_L^{t_i} \ ∪_{k<L} proj_{S_j} supp λ_k^{t_i})

corresponds closely to what Battigalli (1996a) calls a rationality ordering for j.
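Reading such a rationality ordering off an LPS amounts to projecting the support of each level onto the opponent's strategies and removing what appeared at earlier levels, as in the following sketch.

    # Rationality ordering from an LPS: class 1 holds the "most rational"
    # opponent strategies, the last class the "completely irrational" ones.

    def rationality_ordering(lps):
        """lps is a sequence of probability dicts over (s_j, t_j) pairs."""
        ordering, seen = [], set()
        for level in lps:
            proj = {s_j for (s_j, _t_j), p in level.items() if p > 0}
            ordering.append(proj - seen)
            seen |= proj
        return ordering

    # with the 3-level LPS of player 2's types described in the text, this
    # yields [{'Out'}, {'InL'}, {'InR'}] for the centipede game of Figure 2.4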
However, the present construction of such a rationality ordering differs from the one proposed by Battigalli. This difference is along two dimensions:
1 Battigalli considers best responses in reachable subgames only (see his Definition 2.1), while here belief of opponent rationality is held in all subgames (cf. belief in each subgame of opponent rationality).
2 Battigalli considers best responses given beliefs where opponent strategies that are less than most rational are given positive probability, while here each player always believes that the opponent chooses rationally.
This difference has the following consequences:
Although Battigalli's construction of rationality orderings also yields the backward induction outcome in any generic perfect information game, his proof (cf. Battigalli, 1997) is not tied to the backward induction procedure.
Battigalli's construction promotes the forward induction outcome (InL, ℓ) in the battle-of-the-sexes with an outside option game illustrated in Figure 2.6. This conclusion is not reached in the present analysis since there is no choice situation in which 1 under all circumstances will have a particular preference between his battle-of-the-sexes strategies.8
This also indicates how the epistemic foundation for the backward induction procedure offered here differs from the epistemic foundation for backward (and forward) induction outcomes provided by Battigalli and Siniscalchi (2002).
8 Chapter 11 will, following Asheim and Dufwenberg (2003a), demonstrate how the concept of admissible consistency can be strengthened so that the forward induction outcome is promoted in the battle-of-the-sexes with an outside option game.

Chapter 8
SEQUENTIALITY

One major problem in the theory of extensive games is the following: How should a player react when he finds himself at an information set that contradicts his previous belief about the opponent's strategy choice? Different approaches have been proposed to this problem. As mentioned in Chapter 2, Ben-Porath (1997) and Reny (1992) have formulated rationalizability and equilibrium notions based on weak sequentiality, in which a player is allowed to believe, in this situation, that his opponent will no longer choose rationally. Battigalli and Siniscalchi (2002) have shown that Pearce's (1984) extensive form rationalizability can be characterized by assuming that a player, in such a situation, should look for the highest degree of strategic sophistication that is compatible with the event of reaching this information set, and stick to this degree until it is contradicted later on in the game. Perea (2002, 2003) suggests that the player, in such a situation, may revise his conjecture about the opponent's utility function in order to rationalize her surprising move, while maintaining common belief of rational choice at all information sets. The most prominent position, however, is that the player should still believe that his opponent will choose rationally in the remainder of the game; this underlies concepts that promote backward induction. Such concepts will be presented in this and the next chapter, which reproduce joint work with Andrés Perea, cf. Asheim and Perea (2004).

We define sequential rationalizability by imposing common certain belief of the event that each player believes that the opponent chooses rationally at all her information sets. Since this is a non-equilibrium concept, each player need not be certain of the beliefs that the opponent has about the player's own action choice. However, by assuming that each player is certain of the beliefs that the opponent has about the player's own action choice, we obtain an epistemic characterization of the corresponding equilibrium concept: sequential equilibrium. When applied to generic games with perfect information, sequential rationalizability yields the backward induction procedure. As elsewhere, to avoid the issue of whether (and if so, how) each player's beliefs about the action choices of his opponents are stochastically independent, all analysis is limited to two-player games. The assumption is essential in the present context, where a behavior strategy of a player will be interpreted as an expression of the belief of his opponent.
For the above-mentioned definitions and characterizations, we must describe what a player believes both conditional on reaching his own information sets (to evaluate his rationality) and conditional on his opponent reaching her information sets (to determine his beliefs about her choices). Hence, we must specify a system of conditional beliefs for each player. For reasons given in Section 3.1, this will be done by means of our concept of a system of conditional lexicographic probabilities (SCLP) as defined in Definition 1 and characterized in Proposition 5.

We embed the notion of an SCLP in an epistemic model, as defined by Definition 9 of Chapter 5, by invoking Assumption 1. For each type t_i of any player i, t_i is described by an SCLP, inducing a behavior strategy for each opponent type t_j that is deemed subjectively possible by t_i. The event that player i believes that the opponent j chooses rationally at each information set can then be defined as the event where player i is of a type t_i that, for each subjectively possible opponent type t_j, induces a behavior strategy which is sequentially rational given t_j's own SCLP.

The characterization of sequential equilibrium reported in Proposition 30 is included in order to motivate the analogous non-equilibrium concept, namely sequential rationalizability. The result may, however, be of interest in its own right and in comparison with other such epistemic characterizations; see, e.g., Theorem 2 of Feinberg (2004b).
The concept of sequential rationalizability as stated in Definition 15 is related to various other concepts proposed in the literature. Already in Bernheim (1984) there are suggestions concerning how to define non-equilibrium concepts that involve rational choice at all information sets. By requiring rationalizability in every subgame, Bernheim defines the concept of subgame rationalizability, which coincides with our definition of sequential rationalizability for games of almost perfect information, but no epistemic characterization is offered. On p. 1022 Bernheim claims that it is possible to define a concept of sequential rationalizability, but does not indicate how this can be done. After related work by Greenberg (1996), sequential rationalizability was finally defined by Dekel et al. (1999, 2002), whose concept coincides with ours in our two-player setting. Our definition of quasi-perfect rationalizability is new. Dekel et al. (1999) and Greenberg et al. (2003) also consider extensive game concepts that lie between equilibrium and rationalizability; such concepts will not be considered here.

8.1 Epistemic modeling of extensive games (cont.)

The purpose of this section is to present a framework for a general class of extensive games where each player is modeled as a decision maker under uncertainty with complete preferences.
An extensive game form. Consider a finite extensive two-player game form without chance moves. Assume that the extensive game form satisfies perfect recall. Denote by H_i the finite collection of information sets controlled by player i. For every information set h ∈ H_i, let A(h) be the set of actions available at h. A pure strategy for player i is a function s_i which assigns to every information set h ∈ H_i some action s_i(h) ∈ A(h). Denote by S_i the set of pure strategies for player i, where, in the subsequent analysis, there is no need to differentiate between pure strategies in S_i that differ only at non-reachable information sets. Write S = S_1 × S_2, denote by Z the set of outcomes (or terminal nodes), and let z : S → Z map strategy profiles into terminal nodes. Then (S_1, S_2, z) is the corresponding finite strategic two-player game form.

For any h ∈ H_1 ∪ H_2, let S_i(h) be the set of strategies s_i for which there is some strategy s_j such that (s_i, s_j) reaches h. For any h and any node x ∈ h, denote by S(x) = S_1(x) × S_2(x) the set of pure strategy profiles for which x is reached, and write S(h) := ∪_{x∈h} S(x). By perfect recall, it holds that S(h) = S_1(h) × S_2(h) for all information sets h. For any h, h' ∈ H_i, h (weakly) precedes h' if and only if S(h) ⊇ S(h'). For any h ∈ H_i and a ∈ A(h), write S_i(h, a) := {s_i ∈ S_i(h) | s_i(h) = a}.
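Given a helper reaches(s_1, s_2) returning the information sets reached by a strategy profile (an assumption made for illustration), S_i(h) and S_i(h, a) can be computed as in the following sketch.

    # S_i(h): player i's strategies consistent with information set h being
    # reached; S_i(h, a): those among them that choose action a at h.

    def S_i_of_h(S_i, S_j, reaches, h):
        """reaches(s_i, s_j) is assumed to return the set of information sets
        reached by the profile (s_i, s_j)."""
        return [s_i for s_i in S_i if any(h in reaches(s_i, s_j) for s_j in S_j)]

    def S_i_of_h_a(S_i_h, h, a):
        """Strategies in S_i(h) choosing a at h; strategies are dicts h -> action."""
        return [s_i for s_i in S_i_h if s_i[h] == a]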
A behavior strategy for player i is a function σ_i that assigns to every h ∈ H_i some randomization σ_i(h) ∈ Δ(A(h)) on the set of available actions. If h ∈ H_i, denote by σ_i|_h the behavior strategy with the following properties: (1) at player i information sets preceding h, σ_i|_h determines with probability one the unique action leading to h, and (2) at all other player i information sets, σ_i|_h coincides with σ_i. Say that σ_i is outcome-equivalent to a mixed strategy p_i ∈ Δ(S_i) if, for any s_j ∈ S_j, σ_i and p_i induce the same probability distribution over terminal nodes. For any h ∈ H_i, σ_i|_h is outcome-equivalent to some p_i ∈ Δ(S_i(h)).
Epistemic modeling. Since the extensive game form determines a finite strategic game form, we may represent the strategic interaction by means of an epistemic model as defined by Definition 9 of Chapter 5. Since a behavior strategy of a player will be interpreted as an expression of the belief of his opponent, it is essential that the analysis assumes complete preferences. Hence, the epistemic model is combined with Assumption 1 of Chapter 5.

Under Assumption 1 it follows from Proposition 5 that, for each type t_i of any player i, i's system of conditional preferences at t_i can be represented by a vNM utility function υ_i^{t_i} : Δ(Z) → ℝ and an SCLP (λ^{t_i}, ℓ^{t_i}), which for expositional simplicity is defined on S_j × T_j with support S_j × T_j^{t_i} (instead of being defined on T_i × S_j × T_j with support κ^{t_i} = {t_i} × S_j × T_j^{t_i}). Hence, writing ≿_h^{t_i} for player i's preferences at t_i conditional on player i information set h ∈ H_i being reached, we consider w.l.o.g. ≿_h^{t_i} to be preferences over acts from S_j(h) × T_j to Δ(Z) (instead of acts from {t_i} × S_j(h) × T_j to Δ(Z)).

Conditional preferences over strategies. It follows that, for each t_i of any player i and all h ∈ H_i, i's conditional preferences at t_i on h can be represented by the vNM utility function υ_i^{t_i} : Δ(Z) → ℝ, which does not depend on h, and an LPS

  λ^{t_i}_{ℓ^{t_i}(S_j(h)×T_j)} |_{S_j(h)×T_j} = (λ'_1{}^{t_i}, ..., λ'_{ℓ^{t_i}(S_j(h)×T_j)}{}^{t_i})

derived from the SCLP (λ^{t_i}, ℓ^{t_i}) on S_j × T_j with support S_j × T_j^{t_i}.


Recall from Assumption 1 that player i deems an opponent strategytype pair (sj , tj ) subjectively possible at ti if and only if sj Sj and
tj Tj ti . This means that conditional preferences are non-trivial for
an event Ej ( Sj Tj ) if and only if Ej (Sj Tjti ) 6= . Note that
{Sj (h)Tj | h Hi } is the set of events that are objectively observable by
i. Hence, conditional preferences are always non-trivial for such events
since, for any h Hi , (Sj (h) Tj ) (Sj Tjti ) = (Sj (h) Tjti ) 6= .
Since, for all h Hi , each pure strategy si Si (h) is a function that
assigns the deterministic outcome z(si , sj ) to any (sj , tj ) Sj (h) Tj ,
it follows that si Si (h) is an act from Sj (h) Tj to (Z), and we
have that thi determines complete and transitive preferences on is set
of pure strategies, Si (h), conditional on h Hi being reached.


Player i's choice function at t_i is a function S_i^{t_i}(·) that assigns to every h ∈ H_i player i's set of rational pure strategies at t_i conditional on h ∈ H_i:

  S_i^{t_i}(h) := {s_i ∈ S_i(h) | ∀s_i' ∈ S_i(h), s_i ≿_h^{t_i} s_i'}.

Refer to S_i^{t_i}(h) as player i's choice set at t_i conditional on player i information set h, and write S_i^{t_i} = S_i^{t_i}(∅), thereby following the notation of Chapter 5.

Since ≿_h^{t_i} is complete and transitive and satisfies objective independence, and S_i(h) is finite, it follows that the choice set S_i^{t_i}(h) is nonempty, and that the set of rational mixed strategies equals Δ(S_i^{t_i}(h)).

Note that Lemma 11 does not hold under Assumption 1, unless caution is also imposed (cf. Section 9.1). The assumption that player i's system of conditional preferences at t_i is representable by means of an SCLP where the set of subjectively possible opponent strategy-type pairs equals S_j × T_j^{t_i} has the structural implication that, for all h ∈ H_i, the conditional preferences ≿_h^{t_i} are nontrivial, even without imposing caution. However, representation by means of an SCLP does not have the behavioral implication that any choice s_i that is rational conditional on h is also rational at any later player i information set that s_i is consistent with. This means that rationality does not imply weak sequential rationality if caution is not imposed.
An extensive game. As in Chapter 7, a finite extensive two-player game consists of the pair of the extensive game form and the vNM utility functions (υ_1, υ_2), with G = (S_1, S_2, u_1, u_2) denoting the corresponding finite strategic game, where for each i, the vNM utility function u_i : S → ℝ is defined by u_i = υ_i ∘ z. As before, but transferred to T_1 × T_2 space, the event that i plays the game G is given by

  [u_i] := {(t_1, t_2) ∈ T_1 × T_2 | υ_i^{t_i} ∘ z is a positive affine transformation of u_i},

while [u] := [u_1] ∩ [u_2] is the event that both players play G.

Certain belief. As in Chapter 5, say for any E ⊆ T_1 × T_2 that player i certainly believes the event E at t_i if t_i ∈ proj_{T_i} K_i E, where

  K_i E := {(t_1, t_2) ∈ T_1 × T_2 | proj_{T_1×T_2} κ^{t_i} = {t_i} × T_j^{t_i} ⊆ E}.

Say that there is mutual certain belief of E at (t_1, t_2) if (t_1, t_2) ∈ KE, where KE := K_1E ∩ K_2E. Say that there is common certain belief of E at (t_1, t_2) if (t_1, t_2) ∈ CKE, where CKE := KE ∩ KKE ∩ KKKE ∩ ....
8.2 Sequential consistency

In this section, we use the epistemic model of the previous section, based on the concept of an SCLP, to formalize the requirement that each player believes that the opponent chooses rationally at each of her information sets, given her preferences at these information sets. This enables us
to characterize sequential equilibrium (Kreps and Wilson, 1982), and
to define sequential rationalizability as a non-equilibrium analog to the concept of Kreps and Wilson (1982).
Inducing sequential rationality. In our setting a behavior strategy is not an object of choice, but an expression of the system of beliefs of the other player. Say that the behavior strategy σ_j^{t_i|t_j} is induced for t_j by t_i if t_j ∈ T_j^{t_i} and, for all h ∈ H_j and a ∈ A(h),

  σ_j^{t_i|t_j}(h)(a) := λ_ℓ^{t_i}(S_j(h, a), t_j) / λ_ℓ^{t_i}(S_j(h), t_j),

where ℓ is the first level of λ^{t_i} for which λ_ℓ^{t_i}(S_j(h), t_j) > 0, implying that λ_ℓ^{t_i} restricted to S_j(h) × {t_j} is proportional to the top level probability distribution of the LPS that describes t_i's conditional belief on S_j(h) × {t_j}. Here, λ_ℓ^{t_i}(S_j(h), t_j) := Σ_{s_j∈S_j(h)} λ_ℓ^{t_i}(s_j, t_j) and λ_ℓ^{t_i}(S_j(h, a), t_j) := Σ_{s_j∈S_j(h,a)} λ_ℓ^{t_i}(s_j, t_j).
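The induced behavior strategy can be computed directly from this formula, as in the following sketch; the helpers Sj_of, A_of, and action_of are assumed accessors for S_j(h), A(h), and the action that s_j prescribes at h.

    # At each opponent information set h, find the first LPS level with positive
    # weight on S_j(h) x {t_j} and use the corresponding conditional action
    # probabilities.

    def induced_behavior_strategy(lps, t_j, Hj, Sj_of, A_of, action_of):
        """lps: the levels of lambda^{t_i}, each a probability dict over
        (s_j, t_j) pairs.  Returns {h: {a: probability}}."""
        sigma = {}
        for h in Hj:
            for level in lps:
                mass_h = sum(level.get((s_j, t_j), 0.0) for s_j in Sj_of(h))
                if mass_h > 0:                         # first level with weight on S_j(h) x {t_j}
                    sigma[h] = {a: sum(level.get((s_j, t_j), 0.0)
                                       for s_j in Sj_of(h) if action_of(s_j, h) == a) / mass_h
                                for a in A_of(h)}
                    break
        return sigma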
Say that the behavior strategy σ_i is sequentially rational for i at t_i if, ∀h ∈ H_i, σ_i|_h is outcome-equivalent to some mixed strategy in Δ(S_i^{t_i}(h)). Define the event that player i is of a type that induces a sequentially rational behavior strategy for any opponent type that is deemed subjectively possible:

  [isr_i] := {(t_1, t_2) ∈ T_1 × T_2 | ∀t_j' ∈ T_j^{t_i}, σ_j^{t_i|t_j'} is sequentially rational for t_j'}.

Write [isr] := [isr_1] ∩ [isr_2] for the event where both players are of such a type.

Say that at t_i player i's preferences over his strategies are sequentially consistent with the game and the preferences of his opponent if t_i ∈ proj_{T_i}([u_i] ∩ [isr_i]). Refer to [u] ∩ [isr] as the event of sequential consistency.
Note that the behavior strategy induced for t_j' by t_i specifies i's belief revision policy at t_i about the behavior of t_j', as it defines probability distributions also at player j information sets that are unreachable given i's initial belief at t_i about t_j''s behavior. Hence, if t_i ∈ proj_{T_i}[isr_i], then player i believes at t_i that each subjectively possible opponent type t_j' chooses rationally also at player j information sets that contradict t_i's initial belief about the behavior of t_j'. The above observation explains why we can characterize a sequential equilibrium as a profile of induced behavior strategies at a type profile in [isr] where there is mutual certain belief of the type profile (i.e., for each player, only the 'true' opponent type is deemed subjectively possible).
Characterizing sequential equilibrium. We first define sequential equilibrium. Player i's beliefs over past opponent actions at i's information sets are given by a function μ_i that to any h ∈ H_i assigns a probability distribution over the nodes in h. An assessment (σ, μ) = ((σ_1, σ_2), (μ_1, μ_2)), consisting of a pair of behavior strategies and a pair of beliefs, is consistent if there is a sequence ((σ(n), μ(n)))_{n∈N} of assessments converging to (σ, μ) such that for every n, σ(n) is completely mixed and μ(n) is induced by σ(n) using Bayes' rule. If σ_i and σ_j are any behavior strategies for i and j, and μ_i are the beliefs of i, then let, for each h ∈ H_i, u_i(σ_i, σ_j; μ_i)|_h denote i's expected payoff conditional on h, given the belief μ_i(h), and given that future behavior is determined by σ_i and σ_j.

Definition 14 An assessment (σ, μ) = ((σ_1, σ_2), (μ_1, μ_2)) is a sequential equilibrium if it is consistent and it satisfies, for each i and every h ∈ H_i,

  u_i(σ_i, σ_j; μ_i)|_h = max_{σ_i'} u_i(σ_i', σ_j; μ_i)|_h.
The characterization result can now be stated; it is proven in Appendix B.

Proposition 30 Consider a finite extensive two-player game . A profile of behavior strategies = (1 , 2 ) can be extended to a sequential
equilibrium if and only if there exists an epistemic model with (t1 , t2 )
[u] [isr] such that (1) there is mutual certain belief of {(t1 , t2 )} at
(t1 , t2 ), and (2) for each i, i is induced for ti by tj .
For the if part, it is sufficient that there is mutual certain belief of
the beliefs that each player has about the action choice of his opponent
at each of her information sets. We do not need the stronger condition
that (1) entails. Hence, higher order certain belief plays no role in the
characterization, in line with the fundamental insights of Aumann and
Brandenburger (1995).

106

CONSISTENT PREFERENCES

Defining sequential rationalizability. We next define the concept


of sequentially rationalizable behavior strategies as induced behavior
strategies under common certain belief of [isr].

Definition 15 A behavior strategy i for i is sequentially rationalizable


in a finite extensive two-player game if there exists an epistemic model
with (t1 , t2 ) CK([u] [isr]) such that i is induced for ti by tj .
It follows from Proposition 30 that a behavior strategy is sequentially
rationalizable if it is part of a profile of behavior strategies that can
be extended to a sequential equilibrium. Since a sequential equilibrium
always exists, we obtain as an immediate consequence that sequentially
rationalizable behavior strategies always exist.
For the concept of sequential rationalizabilityas indeed, throughout
the bookwe restrict our attention to games with two players. A natural
question which arises is whether, and if so how, the present analysis can
be extended to the case of three or more players. In order to illustrate
the potential difficulties of such an extension, consider a three player
game in which player 3 has an information set h with two nodes, x
and y, where x is preceded by the player 1 action a and the player 2
action c, and y is preceded by the player 1 action b and the player 2
action d. Suppose that player 3 views b and c as suboptimal choices,
and hence player 3 deems a infinitely more likely than b, and deems d
infinitely more likely than c. Then, player 3s LPS at h over player 1s
strategy choice and player 3s LPS at h over player 2s strategy choice do
not provide sufficient information to derive player 3s relative likelihoods
attached to nodes x and y, and these relative likelihoods are crucial to
assess player 3s rational behavior at h. Hence, in addition to the two
LPSs mentioned above, we need another aggregated LPS for player 3 at
h over his opponents collective strategy profiles.
The key problem would then be what restrictions to impose upon the
connection between the LPSs over individual strategies on the one hand
and the aggregate LPS over strategy profiles on the other hand. Both
classes of LPSs are needed, since the former are crucial in order to evaluate the beliefs about rationality of individual players, and the latter are
needed in order to determine the conditional preferences of each player,
as shown above. This issue is closely related to the problem of how
to characterize consistency of assessments in algebraic terms, without
the use of sequences; cf. McLennan (1989a, 1989b), Battigalli (1996b),
Kohlberg and Reny (1997), and Perea et al. (1997). In these papers,
the consistency requirement for assessments has been characterized by

107

Sequentiality

means of conditional probability systems, relative probability systems


and lexicographic probability systems, satisfying some appropriate additional conditions. Perea et al. (1997), for instance, use a refinement
of LPS in which, at every information set, not only an LPS over the
available actions is defined, but moreover the relative likelihood level
between actions is quantified by an additional parameter, whenever
one action is deemed infinitely more likely than the other. This additional parameter makes it possible to derive a unique aggregate LPS over
action profiles (and hence also over strategy profiles). A similar approach
can be found in Govindan and Klumpp (2002). Such an approach could
possibly be useful when extending our analysis of, e.g., sequentiality to
the case of more than two players. For the moment, we leave this issue
for future research.

8.3

Weak sequential consistency

In the previous section we have shown how imposing that each player
believes that the opponent chooses rationally at all her information sets
can be used to characterize sequential equilibrium and define sequential
rationalizability. Table 2.2 suggests the following claim: Imposing that
each player believes that the opponent chooses rationally only at her
reachable information sets can be used to characterize the notion of weak
sequential rationalizability, due to Ben-Porath (1997) and coined weak
extensive form rationalizablity by Battigalli and Bonanno (1999). In
this section we verify this claim and shed light on the difference between
sequentiality and weak sequentiality.
Inducing weak sequential rationality. Recall from Chapter 5 that
the mixed strategy pjti |tj is induced for tj by ti if tj Tjti and, for all
sj Sj ,
ti (sj , tj )
pjti |tj (sj ) = t`i
,
` (Sj , tj )
where ` is the first level ` of ti for which t`i (Sj , tj ) > 0.
Say that a mixed strategy pi is weak sequentially rational for i at ti
if, h Hi s.t. supppi Si (h) 6= , supppi Si (h) Siti (h), and define
the event that player i is of a type that i nduces a w eakly sequentially
r ational mixed strategy for any opponent type that is deemed subjectively possible:
[iwri ] := {(t1 , t2 ) T1 T2 | t0j Tjti ,
0

pj ti |tj is weak sequentially rational for t0j } .

108

CONSISTENT PREFERENCES

Write [iwr] := [iwr1 ] [iwr2 ].


Say that at ti player is preferences over his strategies are weak sequentially consistent with the game and the preferences of his opponent, if
ti projTi ([ui ] [iwri ]). Refer to [u] [iwr] as the event of weak sequential consistency.
Note that the mixed strategy induced for t0j by ti may be interpreted
as is initial belief at ti about the behavior of t0j . In contrast to the
behavior strategy induced for t0j by ti , as defined in the previous section,
the induced mixed strategy gives no information about how i at ti revises
his belief about the behavior of t0j at player j information sets that
are unreachable given is initial belief at ti about t0j s behavior. Hence,
if ti projTi [iwri ], then player i believes at ti that each subjectively
possible opponent type t0j chooses rationally at player j information sets
that do not contradict is initial belief at ti about the behavior of t0j .
However, and this is the crucial difference when compared to the case
where ti projTi [isri ]: ti projTi [iwri ] entails no restriction on how
i at ti revises his beliefs about t0j s behavior conditional on t0j reaching
surprising information sets. The above observation explains why weak
sequentially rationalizable mixed strategies can be shown to correspond
to induced mixed strategies under common certain belief of [u] [iwr].
Characterizing weak sequential rationalizability. We first define weak sequential rationalizability. Since weak sequential rationalizability in two-player games corresponds to iterated elimination of strategies that are strongly dominated at some reachable information set, we
use the latter procedure as the primitive definition. For any ( 6=)
X = X1 X2 S, write b(X) := b1 (X2 ) b2 (X1 ), where
bi (Xj ) := Si \ {si Si | pi (Si ) s.t. pi strongly dominates si on Xj
or h Hi with Si (h) 3 si and qi (Si (h))
s.t. qi strongly dominates si on Sj (h)} .
If pi is a mixed strategy and h Hi satisfies that supppi Si (h) 6= ,
then let pi |h be defined by
(
pi (si )
if si Si (h)
pi (Si (h))
pi |h (si ) =
0
otherwise .

Definition 16 Let be a finite extensive two-player game. Consider the sequence defined by X(0) = S1 S2 and, g 1, X(g) =
b(X(g 1)). A pure strategy si is said to be weak sequentially rational-

109

Sequentiality

izable if
si Wi :=

\
g=0

Xi (g) .

A mixed strategy pi is said to be weak sequentially rationalizable if pi


is not strongly dominated on Wj and there does not exist h Hi with
supppi Si (h) 6= such that pi |h is strongly dominated on Sj (h).
While any pure strategy in the support of a weak sequentially rationalizable mixed strategy is itself weak sequentially rationalizable, the mixture
over a set of weak sequentially rationalizable pure strategies need not be
weak sequentially rationalizable.
The following lemma is a straightforward implication of Definition 11.

Lemma 12 (i) For each i, Wi 6= . (ii) W = b(W ). (iii) For each i,


si Wi if and only if there exists X = X1 X2 with si Wi such that
X b(X).
We next characterize the concept of weak sequentially rationalizable
mixed strategies as induced mixed strategies under common certain belief of [u] [iwr].

Proposition 31 A mixed strategy pi for i is weak sequentially rationalizable in a finite extensive two-player game if and only if there exists
an epistemic model with (t1 , t2 ) CK([u] [iwr]) such that pi is induced
for ti by tj .
Proof. Part 1: If pi is weak sequentially rationalizable, then there
exists an epistemic model with (t1 , t2 ) CK([u] [iwr]) such that pi is
induced for ti by tj .
Step 1: Construct an epistemic model with T1 T2 CK([u] [iwr])
such that for each si Wi of any player i, there exists ti Ti with,
si Siti . Construct an epistemic model with, for each i, a bijection
si : Ti Wi from the set of types to the the set of weak sequentially
rationalizable pure strategies. Assume that, for each ti Ti of any player
i, iti satisfies that
(a) iti z = ui (so that T1 T2 [u]),
and the SCLP (ti , `ti ) on Sj Tj has the properties that
(b) ti = (t1i , . . . , tLi ) with support Sj Tjti satisfies that suppt11
(Sj {tj }) = {(sj (tj ), tj )} for all tj Tjti (so that, tj Tjti ,
piti |tj (sj (tj )) = 1),
(c) Ej Sj Tj such that Ej (Sj Tjti ) 6= , `ti (Ej ) = min{`| suppt`i 6=
} (so that, by Corollary 1, the SCLP corresponds to a CPS).

110

CONSISTENT PREFERENCES

Property (b) entails that the support of the marginal of t1i on Sj is


included in Wj . By properties (a) and (c) and Lemmas 4 and 12(ii),
we can still choose t1i (and Titi ) so that si (ti ) Siti . Since information
sets correspond to strategically independent sets (cf. the discussion in
connection with Lemmas 11 and 13) we have that, h Hi s.t. Si (h) 3
si (ti ) and suppt1i Sj (h) 6= , si (ti ) Siti (h), while, h Hi s.t. Si (h) 3
si (ti ) and suppt1i Sj (h) = , si (ti ) Siti (h) by choosing the lower levels
of ti appropriately (again invoking properties (a) and (c) and Lemmas 4
and 12(ii)). This combined with property (b) means that T1 T2 [iwr].
Furthermore, T1 T2 CK([u] [iwr]) since Tjti Tj for each ti Ti
of any player i. Since, for each player i, si is onto Wi , it follows that,
for each si Wi of any player i, there exists ti Ti with si Siti .

Step 2: Add type ti to Ti . Assume that iti satisfies (a) and (ti , `ti )

satisfies (b) and (c). Then 1ti can be chosen so that pi (Siti ), and

consequently, h Hi s.t. supppi Si (h) 6= and supp1ti Sj (h) 6= ,

supppi Si (h) Siti (h), while, h Hi s.t. supppi Si (h) 6= and

supp1ti Sj (h) = , supppi Si (h) Siti (h) by choosing the lower

levels of ti appropriately. Furthermore, (Ti {ti }) Tj [u] [iwr],

and since Tjti Tj , (Ti {ti }) Tj CK([u] [iwr]).

Step 3: Add type tj to Tj . Assume that jtj satisfies (a) and the SCLP

(tj , `tj ) on Si (Ti {ti }) has the property that tj = (1tj , . . . , Ltj )

with support Si {ti } satisfies that, si Si , 1tj (si , ti ) = pi (si ), so that


pi is induced for ti by tj . Furthermore, (Ti {ti }) (Tj {tj }) [u]

[iwr], and since Titj Ti {ti }, (Ti {ti })(Tj {tj }) CK([u][iwr]).
Hence, (t1 , t2 ) CK([u] [iwr]) and pi is induced for ti by tj .
Part 2: If there exists an epistemic model with (t1 , t2 ) CK([u]
[iwr]) such that pi is induced for ti by tj , then pi is weak sequentially
rationalizable.
Assume that there exists an epistemic model with (t1 , t2 ) CK([u]
[iwr]) such that pi is induced for ti by tj . In particular, CK([u][iwr]) 6=
. Let, for each i, Ti0 := projTi CK([u] [iwr]) and
[
Xi :=
{si Si |h Hi s.t. Si (h) 3 si , si Siti (h)} .
0
ti Ti

By Proposition 20(ii), for each ti Ti0 of any player i, ti deems (sj , tj )


subjectively impossible if tj Tj \Tj0 since CK([u] [iwr]) = KCK([u]
[iwr]) Ki CK([u] [iwr]), implying Tjti Tj0 . By the definitions of
[u] and [iwr], it follows that, for each ti Ti0 of any player i, ti is
represented by iti satisfying that iti z is a positive affine transformation
of ui and an LPS t`i = (t1i , . . . , t`i ), where ` = `(Sj Tj ) 1, and
where suppt1i Xj Tj . Hence, by Lemma 4, for each ti Ti0 of any

111

Sequentiality

2c
d
1
1

1s
D
0
0

Figure 8.1.

3
3

f
d
1,
1
0,
0
D
F 1, 1 3, 3

6 and its strategic form.

player i, if pi (Si ) satisfies that, h Hi s.t. supppi Si (h) 6= ,


supppi Si (h) Siti (h), then
no strategy in the support of pi is strongly dominated on Xj , since
then pi (Siti ), and it follows from pi (Siti ) and suppt1i
Xj Tj that, si supppi and s0i Si ,
X X t
X X t
1i (sj , tj )ui (s0i , sj ) ,
1i (sj , tj )ui (si , sj )
sj Xj tj Tj

sj Xj tj Tj

h Hi s.t. supppi Si (h) 6= , no strategy in supppi Si (h) is


strongly dominated on Sj (h) since supppi Si (h) Siti (h).
This implies X b(X), entailing by Lemma 12(iii) that, for each i, Xi
Wi . Furthermore, since (t1 , t2 ) CK([u] [iwr]) and the mixed strategy
induced for ti by tj , pi , satisfies that, h Hi s.t. supppi Si (h) 6= ,
supppi Si (h) Siti (h), it follows that pi is not strongly dominated on
Xj Wj and there does not exist h Hi with supppi Si (h) 6= such
that pi |h is strongly dominated on Sj (h). By Definition 16 this implies
that pi is a weak sequentially rationalizable mixed strategy.
The following observation (which is stated without proof) can now be
used to establish the relationships between the rationalizability concepts
on the lower row of Table 2.2.

Proposition 32 For any epistemic model and for each player i,


[isri ] [iwri ] [iri ] .
Since [iwri ] [iri ], Propositions 31 and 22 entail that weak sequential
rationalizability refines (ordinary) rationalizability, and since [isri ]
[iwri ], Definition 15 and Proposition 31 entail that sequential rationalizability refines weak sequential rationalizability. That the two latter inclusions can be strict, is illustrated by 6 and 06 of Figures 8.1 and 8.2,
respectively. In 6 rationalizability does not have any bite, while weak

112

CONSISTENT PREFERENCES

1c
D
2
2

2s
d
1
1

Figure 8.2.

1s
D
0
0

3
3

f
d
D 2, 2 2, 2
FD 1, 1 0, 0
FF 1, 1 3, 3

06 and its pure strategy reduced strategic form.

sequential rationalizability promotes that player 1 plays F and player 2


plays f . In 06 introduced by Reny (1992, Figure 1)weak sequential
rationalizability only precludes the play of D at 1s second decision node.
This can be established by applying the Dekel-Fudenberg procedure (i.e.,
one round of weak elimination followed by iterated strong elimination)
which eliminates a strategy if and only if it is not permissible. Since all
terminal nodes yield different payoffs, weak sequential rationalizability
leads to the same conclusion.1 However, only the play of F at both
of 1s decision nodes and the play of f at 2s single decision node are
sequentially rationalizable. This follows from Proposition 33 of the next
section, showing that the latter concepts imply the backward induction
procedure.
Extensive form rationalizability (EFR), cf. Pearce (1984) as well
as Battigalli (1997) and Battigalli and Siniscalchi (2002), is an iterative
deletion procedure where, at any information set reached by a remaining
strategy, any deleted strategy is deemed infinitely less likely than some
remaining strategy. Even though EFR only requires players to choose
rationally at reachable information sets and preference for cautious behavior is not imposed, EFR is different from weak sequential rationalizability. Unlike all concepts in Tables 2, EFR yields forward induction in
common examples like the battle-of-the-sexes with an outside option
game, see Figure 2.6.2 EFR also leads to the backward induction out-

1 To

see how the characterization in Proposition 31 of weak sequential rationalizability is


consistent with (D, d) in 06 , let T1 = {t1 } with t1 = ((1, 0), (0, 1)) (assigning probabilities
to (d, t2 ) and (f, t2 ) respectively), and T2 = {t2 } with t2 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
(assigning probabilities to (D, t1 ), (F D, t1 ), and (F F, t1 ) respectively). Then, independently
of how `t1 and `t2 are specified, (t1 , t2 ) CK([u] [iwr]), and, for each i, pi is induced for
ti by tj , where p1 (D) = 1 and p2 (d) = 1.
2 By strengthening permissibility, Asheim and Dufwenberg (2003a) define a rationalizability
concept, fully permissible sets, which is different from those of Table 2.2 as well as EFR, as
it yields forward induction, but does not always promote backward induction. This concept
will be presented in Chapters 11 and 12.

113

Sequentiality

come. However, unlike sequential rationalibility, EFR need not promote


the backward induction procedure.

8.4

Relation to backward induction

The following result shows how sequential rationalizability implies the


backward induction procedure in perfect information games. A finite
extensive game , as introduced in Section 8.1, is of perfect information
if, at any information set h H1 H2 , h = {x}; i.e., h contains only
one node. It is generic if, for each i, i (z) 6= i (z 0 ) whenever z and z 0
are different outcomes. A generic extensive game of perfect information
has a unique subgame-perfect equilibrium in pure strategies. Moreover,
in such games the backward induction procedure yields in any subgame
the unique subgame-perfect equilibrium outcome.

Proposition 33 Consider a finite generic extensive two-player game of


perfect information . If there exists an epistemic model with (t1 , t2 )
CK([u][isr]) and, for each i, i is induced for ti by tj , then = (1 , 2 )
is the subgame-perfect equilibrium.
Proof. In a perfect information game, the action a A(h) taken at
the information set h determines the immediate succeeding information
set, which can thus be denoted (h, a). Also, any information set h
H1 H2 determines a subgame. Set H 1 = Z (i.e. the set of terminal
nodes) and determine H g for g 0 by induction: h H g if and only if
h satisfies
0

max{g 0 | h0 H g and a A(h) such that h0 = (h, a)} = g 1 .


In words, h H g if and only if g is the maximal number of decision
nodes between h and a terminal node in the subgame determined by
h. If is a profile of behavior strategies and h H1 H2 , denote by
|h the strategy profile with the following properties: (1) at information
sets preceding h, |h determines with probability one the unique action
leading to h, and (2) at all other information sets, |h coincides with
. Say that 0 is outcome-equivalent to 00 if 0 and 00 induce the same
probability distribution over terminal nodes.
In view of properties of the certain belief operator (cf. Proposition 20
0
of Chapter 5), it is sufficient to show for any g = 0, . . . , max{g 0 |H g 6= }
that if there exists an epistemic model with (t1 , t2 ) Kg ([u] [isr]) and,
for each i, i is induced for ti by tj , then, h H g , |h is outcome-

114

CONSISTENT PREFERENCES

equivalent to |h , where = (1 , 2 ) denotes the subgame-perfect


equilibrium. This is established by induction.
(g = 0) Let (t1 , t2 ) K0 ([u] [isr]) = [u] [isr] and, for each i, i
be induced for ti by tj . Let h H 0 and assume w.l.o.g. that h Hi .
Since (t1 , t2 ) [ui ] [isrj ] and j takes no action at h, |h is outcome
equivalent to |h .
0
(g = 1, . . . , max{g 0 |H g 6= }) Suppose that it has been established for
g 0 = 0, . . . , g 1 that if there exists an epistemic model with (t1 , t2 )
0
0
Kg ([u] [isr]) and, for each i, i is induced for ti by tj , then, h0 H g ,
|h0 is outcome-equivalent to |h0 . Let (t1 , t2 ) Kg ([u] [isr]) and,
for each i, i be induced for ti by tj . Let h H g and assume w.l.o.g.
that h Hi . Since (t1 , t2 ) Ki Kg1 [isr], it follows from the premise of
the inductive step that ti s SCLP (ti , `ti ) satisfies, t0j Tjti , h0 Hj
succeeding h, and a0 A(h0 ),
t`i (Sj (h0 , a0 ), t0j )
t`i (Sj (h0 ), t0j )

= j (h0 )(a0 ) ,

where ` is the first level ` of ti for which t`i (Sj (h0 ), t0j ) > 0. Since
is generic, i is sequentially rational for ti only if i (h) = i (h). Since
(t1 , t2 ) [ui ] [isrj ] and j takes no action at h, it follows from the
premise that |h is outcome-equivalent to |h .
Since sequentially rationalizable strategies always exist, there is an
epistemic model with (t1 , t2 ) CK([u] [isr]), implying that the result
of Proposition 33 is not empty.

Chapter 9
QUASI-PERFECTNESS

In Chapter 5 we saw how the characterizations of Nash equilibrium


and rationalizability lead to characterizations of (strategic form) perfect
equilibrium and permissibility by adding preference for cautious behavior. In this chapter we show that the characterization of sequential
equilibrium leads to a characterization of quasi-perfect equilibrium by
adding caution. The concept of a quasi-perfect equilibrium, proposed by
van Damme (1984), differs from Seltens (1975) extensive form perfect
equilibrium by the property that, at each information set, the player
taking an action ignores the possibility of his own future mistakes.
So, parallelling Chapter 8, we define quasi-perfect rationalizability by
imposing common certain belief of the event that each player has preference for cautious behavior (i.e., at every information set, one strategy is preferred to another if the former weakly dominates the latter)
and believes that the opponent chooses rationally at all her information
sets. Moreover, by assuming that each player is certain of the beliefs
that the opponent has about the players own action choice, we obtain
an epistemic characterization of the corresponding equilibrium concept:
quasi-perfect equilibrium. Since quasi-perfect rationalizability refines sequential rationalizability, it follows from Proposition 33 that also the
former concept yields the backward induction procedure.
By embedding the notion of an SCLP in an epistemic model with a set
of epistemic types for each player, we are able to model quasi-perfectness
as a special case of sequentiality. For each type ti of any player i, ti is
described by an SCLP, which under the event that player i believes
that the opponent j chooses rationally at each information set induces,

116

CONSISTENT PREFERENCES

for each opponent type tj that is deemed subjectively possible by ti , a


behavior strategy which is sequentially rational given tj s own SCLP.
An SCLP ensures well-defined conditional beliefs representing nontrivial conditional preferences, while allowing for flexibility w.r.t. whether to
assume preference for cautious behavior. Preference for cautious behavior, as needed for quasi-perfect rationalizability, is obtained by imposing
the following additional requirement on ti s SCLP for each conditioning
event: If an opponent strategy-type pair (sj , tj ) is compatible with the
event and tj is deemed subjectively possible by ti , then (sj , tj ) is in the
support the LPS that represents type ti s conditional preferences.
This chapters definition of quasi-perfect rationalizability was proposed by Asheim and Perea (2004).

9.1

Quasi-perfect consistency

In this section, we add preference for cautious behavior to the analysis


of Chapter 8. This enables us to
characterize quasi-perfect equilibrium (van Damme, 1984), and
define quasi-perfect rationalizability as a non-equilibrium analog to
the concept of van Damme (1984).
The epistemic modeling is identical to the one given in Section 8.1; hence,
this will not be recapitulated here.
Caution. Under Assumption 1 it follows from Proposition 5 that,
for each type ti of any player i, is system of conditional preferences at
ti can be represented by a vNM utility function iti : (Z) R and an
SCLP (ti , `ti ) on Sj Tj with support Sj Tjti . Recall from Section
5.3 that caution imposes the additional requirement that for each type
ti of any player i the full LPS ti is used to form the conditional beliefs
over opponent strategy-type pairs. Formally, if L denotes the number of
levels in the LPS ti , then
[caui ] = {(t1 , t2 ) T1 T2 | `ti (Sj Tj ) = L} .
Since `ti is non-increasing w.r.t. set inclusion, ti projTi [caui ] implies
that `ti (projSj Tj ) = L for all subsets of {ti } Sj Tj with welldefined conditional beliefs. Since it follows from Assumption 1 that ti
has full support on Sj , ti projTi [caui ] means that is choice function at
ti never admits a weakly dominated strategy, thereby inducing preference
for cautious behavior.
As before, write [cau] := [cau1 ] [cau2 ].

117

Quasi-perfectness

Say that at ti player is preferences over his strategies are quasiperfectly consistent with the game and the preferences of his opponent,
if ti projTi ([ui ] [isri ] [caui ]). Refer to [u] [isr] [cau] as the event
of quasi-perfect consistency.
Characterizing quasi-perfect equilibrium. We now characterize
the concept of a quasi-perfect equilibrium as profiles of induced behavior
strategies at a type profile in [u] [isr] [cau] where there is mutual
certain belief of the type profile (i.e., for each player, only the true
opponent type is deemed subjectively possible). To state the definition of
quasi-perfect equilibrium, we need some preliminary definitions. Define
the concepts of a behavior representation of a mixed strategy and the
mixed representation of a behavior strategy in the standard way, cf., e.g.,
p. 159 of Myerson (1991). If a behavior strategy j and a mixed strategy
pj are both completely mixed, and j is a behavior representation of pj
or pj is the mixed representation of j , then, h Hj , a A(h),
j (h)(a) =

pj (Sj (h, a))


.
pj (Sj (h))

If i is any behavior strategy for i and j is a completely mixed behavior


strategy for j, then abuse notation slightly by writing, for each h Hi ,
ui (i , j )|h := ui (pi , pj |h ) ,
where pi is outcome-equivalent to i |h and pj is the mixed representation
of j .

Definition 17 A behavior strategy profile = (1 , 2 ) is a quasiperfect equilibrium if there is a sequence ((n))nN of completely mixed
behavior strategy profiles converging to such that for each i and every
n N and h Hi ,
ui (i , j (n))|h = max
ui (i0 , j (n))|h .
0
i

The characterization result can now be stated; it is proven in Appendix


B.

Proposition 34 Consider a finite extensive two-player game . A profile of behavior strategies = (1 , 2 ) is a quasi-perfect equilibrium if and
only if there exists an epistemic model with (t1 , t2 ) [u] [isr] [cau]
such that (1) there is mutual certain belief of {(t1 , t2 )} at (t1 , t2 ), and
(2) for each i, i is induced for ti by tj .

118

CONSISTENT PREFERENCES

As for Proposition 31, higher order certain belief plays no role in this
characterization.
Defining quasi-perfect rationalizability. We next define the concept of quasi-perfectly rationalizable behavior strategies as induced behavior strategies under common certain belief of [u] [isr] [cau].

Definition 18 A behavior strategy i for i is quasi-perfectly rationalizable in a finite extensive two-player game if there exists an epistemic
model with (t1 , t2 ) CK([u] [isr] [cau]) such that i is induced for
ti by tj .
It follows from Proposition 34 that a behavior strategy is quasi-perfectly
rationalizable if it is part of a quasi-perfect equilibrium. Since a quasiperfect equilibrium always exists, we obtain as an immediate consequence that quasi-perfectly rationalizable behavior strategies always exist.
Propositions 30 and 34 imply the well-known result that every quasiperfect equilibrium can be extended to a sequential equilibrium, while
Definitions 15 and 18 imply that the set of quasi-perfectly rationalizable
strategies is included in the set of sequentially rationalizable strategies.
To illustrate that this inclusion can be strict, consider 4 of Figure 3.1.
Both concepts predict that player 2 plays d with probability one. However, only quasi-perfect rationalizability predicts that player 1 plays D
with probability one. Preferring D to F amounts to preference for cautious behavior since by choosing D player 1 avoids the risk that player
2 may choose f .
Since quasi-perfect rationalizability is thus a refinement of sequential rationalizability, it follows from Proposition 33 that quasi-perfect
rationalizability implies the backward induction procedure in perfect information games.

9.2

Relating rationalizability concepts

The following result helps establishing some of the remaining relationships between the rationalizability concepts of Table 2.2.

Proposition 35 For any epistemic model and for each player i,


[iri ] Ki [cauj ] [iwri ] .
To prove Proposition 35 we need the following lemma.

119

Quasi-perfectness

Lemma 13 If ti projTi Ki [cauj ], then, for each tj Tj ti and any


h Hj , sj Sj (h)\Sjtj (h) implies that there exists s0j Sj (h) such that
s0j tj sj .
Proof. As for Lemma 11 the proof of this lemma is based on the concept
of a strategically independent set due to Mailath et al. (1993). It follows
from Mailath et al. (Definitions 2 and 3 and the if part of Thm. 1)
that S(h) is strategically independent for j at any player j information
set h in a finite extensive game, and this does not depend on the vNM
utility function that assigns payoff to any outcome.
If ti projTi Ki [cauj ], then the following holds for each tj Tj ti :
Player js system of conditional preferences at tj satisfies Axiom 6 (Conditionality). Suppose sj Sj (h)\Sjtj (h). Then there exists s0j Sj (h)
t
such that s0j hj sj . As noted above, S(h) is a strategically independent
set for j. Hence, s0j can be chosen such that z(s0j , si ) = z(sj , si ) for all
si Si \Si (h). By Axiom 6 (Conditionality), this implies s0j tj sj .
Proof of Proposition 35. Consider any epistemic model with
ti projTi ([iri ] Ki [cauj ]) .
Suppose ti
/ projTi [iwri ]; i.e., there exist tj Tjti and h Hj such that
pjti |tj (sj ) > 0 for some sj Sj (h)\Sjtj (h). Since ti projTi Ki [cauj ], it
follows from Lemma 13 that s0j Sj (h) s.t. s0j tj sj . Hence,
pjti |tj
/ (Sjtj ) ,
contradicting ti projTi [iri ]. This shows that ti projTi [iwri ].
Since [iri ] Ki [cauj ] [iwri ], the cell in Table 2.2 to the left of
permissibility is not applicable, and permissibility refines weak sequential rationalizability. Figure 3.1 shows that the inclusion can be strict:
Permissibility, but not weak sequential rationalizability, precludes that
player 1 plays F in 4 .
Since [isri ] [iri ], Definition 18 and Proposition 24 entail that quasiperfect rationalizability refines permissibility. That the latter inclusion
can be strict is illustrated by 06 of Figure 8.2. Since this is a generic extensive game, imposing preference for cautious behavior has no bite, and
the difference between permissibility and quasi-perfect rationalizability
corresponds to the difference between weak sequential rationalizability
and sequential rationalizability, as discussed in Section 8.3.

Chapter 10
PROPERNESS

Most contributions on the relation between common knowledge/belief


of rationality and backward induction in perfect information games perform the analysis in the extensive form of the game. Indeed, the analyses in Chapters 7 and 8 of this book are examples of this. An exception to this rule is Schuhmacher (1999) whobased on Myersons
(1978) concept of a proper equilibrium, but without making equilibrium assumptionsdefines the concept of proper rationalizability in the
strategic form and shows that proper rationalizable play leads to backward induction.
Schuhmacher defines the concept of -proper rationalizability by assuming that players make mistakes, but where more costly mistakes
are made with a much smaller probability than less costly ones. A
properly rationalizable strategy can then be defined as the limit of a
sequence of -properly rationalizable strategies as goes to zero. For
a given , Schuhmacher offers an epistemic foundation for -proper rationalizability. However, this does not provide an epistemic foundation
for the limiting concept, i.e. proper rationalizability. It is one purpose
of the present chapter, which reproduces Asheim (2001), to establish
how proper rationalizability can be given an epistemic characterization
in strategic two-player games, within an epistemic model where preferences are represented by a vNM utility function and an SCLP (i.e., an
epistemic model satisfying Assumption 1 of Chapter 5).
Blume et al. (1991b) characterize proper equilibrium as a property
of preferences. When doing so they represent a players preferences

122

CONSISTENT PREFERENCES

by a vNM utility function and an LPS, whereby the player may deem
one opponent strategy to be infinitely more likely than another while
still taking the latter strategy into account. In two-player games, their
characterization of proper equilibrium can be described by the following
two properties.
1 Each player is certain of the preferences of his opponent,
2 Each players preferences satisfies that the player takes all opponent
strategies into account (caution) and that the player deems one opponent strategy to be infinitely more likely than another if the opponent prefers the one to the other (respect of opponent preferences).
The present characterization of proper rationalizability in two-player
games drops property 1, which is an equilibrium assumption; instead it
will be assumed that there is common certain belief of property 2, which
will be referred to as proper consistency.
Since, in the present framework, a player is not certain of the preferences of his opponent, player is preferences must be defined on acts from
Sj Tj , where Sj denotes the set of opponent strategies and Tj denotes
the set of opponent types. Under Assumption 1, each type of player i
corresponds to a vNM utility function and an SCLP on Sj Tj . As before, a player i has preference for cautious behavior at ti if he takes into
account all strategies of any opponent type that is deemed subjectively
possible. Moreover, a player i is said to respect opponent preferences
at ti if, for any opponent type that is deemed subjectively possible, he
deems one strategy of the opponent type to be infinitely more likely than
another if the opponent type prefers the one to the other. At ti player
is preferences are said to be properly consistent with the game and the
preferences of his opponent if at ti i both has preference for cautious behavior and respects opponent preferences. Hence, the present analysis
follows the consistent preferences approach by imposing requirements
on the preferences of players rather than their choice.
In this chapter it is first shown (in Proposition 36) how the event
of proper consistency combined with mutual certain belief of the type
profile can be used to characterize the concept of proper equilibrium.
It is then established (in Proposition 37) that common certain belief
of proper consistency corresponds to Schuhmachers (1999) concept of
proper rationalizability. Furthermore, by relating respect of preferences
to inducement of sequential rationality in Proposition 38, it follows by
comparing Proposition 37 with Proposition 33 of Chapter 8 that only
strategies leading to the backward induction outcome are properly ra-

123

Properness

c
r
`
1,
1
1,
1
1,
0
U
M 1, 1 2, 2 2, 2
D 0, 1 2, 2 3, 3
Figure 10.1.

G7 , illustrating common certain belief of proper consistency.

tionalizable in the strategic form of a generic perfect information game.


Thus, Schuhmachers Theorem 2 (which shows that the backward induction outcome obtains with high probability for any given small ) is
strengthened, and an epistemic foundation for the backward induction
procedure, as an alternative to Aumanns (1995) and others, is provided.
Lastly, it is illustrated through an example how proper rationalizability
can be used to test the robustness of inductive procedures.

10.1

An illustration

The symmetric game of Figure 10.1 is an example where common


certain belief of proper consistency is sufficient to determine completely
each players preferences over his or her own strategies. The game is due
to Blume et al. (1991b, Figure 1).
In this game, caution implies that player 1 prefers M to U since
M weakly dominates U . Likewise, player 2 prefers c to `. Since 1
respects the preferences of 2 and, in addition, certainly believes that 2
has preference for cautious behavior, it follows that 1 deems c infinitely
more likely than `. This in turn implies that 1 prefers D to U . Likewise,
since 2 respects the preferences of 1 and, in addition, certainly believes
that 1 has preference for cautious behavior, it follows that 2 prefers r
to `. As a consequence, since 1 respects the preferences of 2, certainly
believes that 2 respects the preferences of 1, and certainly believes that
2 certainly believes that 1 has preference for cautious behavior, it follows
that 1 deems r infinitely more likely than `. Consequently, 1 prefers D
to M . A symmetric reasoning entails that 2 prefers r to c. Hence, if
there is common certain belief of proper consistency, it follows that the
players preferences over their own strategies are given by
1s preferences: D M U
2s preferences: r c ` .
The facts that D is the unique most preferred strategy for 1 and r is the
unique most preferred strategy for 2 mean that only D and r are properly

124

CONSISTENT PREFERENCES

rationalizable; cf. Proposition 37 of Section 10.2. By Proposition 36 of


the same section, it then follows that the pure strategy profile (D, r) is
the unique proper equilibrium, which can easily be checked. However,
note that in the argument above, each player obtains certainty about
the preferences of his opponent through deductive reasoning; i.e. such
certainty is not assumed as in the concept of proper equilibrium.
The concept of proper rationalizability yields a strict refinement of
(ordinary) rationalizability (cf. Definition 11 of Chapter 5). All strategies for both players are rationalizable, which is implied by the fact
that, in addition to (D, r), the pure strategy profiles (U, `) and (M, c)
are also Nash equilibria. The concept of proper rationalizability yields
even a strict refinement when compared permissibility (cf. Definition 13
of Chapter 5), corresponding to the Dekel-Fudenberg procedure, where
one round of weak elimination followed by iterated strong elimination.
When the Dekel-Fudenberg procedure is employed, only U is eliminated
for 1, and only ` is eliminated for 2, reflecting that also the pure strategy profile (M, c) is a strategic form perfect equilibrium. It is a general
result that proper rationalizability refines the Dekel-Fudenberg procedure; this follows from Section 10.3 as well as Theorem 4 of Herings and
Vannetelbosch (1999).

10.2

Proper consistency

In this section, we add respect for opponent preferences to the analysis


of Chapter 5. This enables us to characterize
proper equilibrium (Myerson, 1978), and
proper rationalizability (Schuhmacher, 1999).
The epistemic modeling is identical to the one given in Section 5.1; hence,
this will not be recapitulated here.
Respect of opponent preferences. Player i respects the preferences of his opponent at ti if the following holds for any opponent type that
is deemed subjectively possible: Player i deems one opponent strategy of
the opponent type to be infinitely more likely than another if the opponent type prefers the one to the other. To capture this, define the event
[respi ] := {(t1 , t2 ) T1 T2 | (sj , t0j ) ti (s0j , t0j )
0

whenever t0j Titi and sj tj s0j } ,


where the notation ti means infinitely more likely at ti , as defined
in Section 3.2.

125

Properness

Write [resp] := [resp1 ] [resp2 ].


Say that at ti player is preferences over his strategies are properly
consistent with the game G = (S1 , S2 , u1 , u2 ) and the preferences of his
opponent, if ti projTi ([ui ] [respi ] [caui ]). Refer to [u][resp][cau]
as the event of proper consistency.
Characterizing proper equilibrium. We now characterize the
concept of a proper equilibrium as profiles of induced mixed strategies
at a type profile in [u] [resp] [cau] where there is mutual certain belief
of the type profile (i.e., for each player, only the true opponent type
is deemed subjectively possible). Before doing so, we define a proper
equilibrium.

Definition 19 Let G = (S1 , S2 , u1 , u2 ) be a finite strategic two-player


game. A completely mixed strategy profile p = (p1 , p2 ) is a -proper
equilibrium if, for each i,
pi (si ) pi (s0i ) whenever ui (si , pj ) > ui (s0i , pj ) .
A mixed strategy profile p = (p1 , p2 ) is a proper equilibrium if there is
a sequence (p(n))nN of (n)-proper equilibria converging to p, where
(n) 0 as n .
The characterization resultwhich is a variant of Proposition 5 of
Blume et al. (1991b)can now be stated. For this result, recall from
Sections 5.2 and 8.3 that the mixed strategy pjti |tj is induced for tj by
ti if tj Tjti and, for all sj Sj ,
pjti |tj (sj ) =

t`i (sj , tj )

t`i (Sj , tj )

where ` is the first level ` of ti for which t`i (Sj , tj ) > 0.

Proposition 36 Consider a finite strategic two-player game G. A profile of mixed strategies p = (p1 , p2 ) is a proper equilibrium if and only if
there exists an epistemic model with (t1 , t2 ) [u] [resp] [cau] such
that (1) there is mutual certain belief of {(t1 , t2 )} at (t1 , t2 ), and (2) for
each i, pi is induced for ti by tj .
The proof is contained in Appendix B. As for similar earlier results,
higher order certain belief plays no role in this characterization.
Characterizing proper rationalizability. We now turn to the
non-equilibrium analog to proper equilibrium, namely the concept of
proper rationalizability; cf. Schuhmacher (1999). To define the concept
of properly rationalizable strategies, we must introduce the following

126

CONSISTENT PREFERENCES

variant of an epistemic model, with a mixed strategy piti being associated


to each type ti of any player i, where piti is completely mixed.

Definition 20 An -epistemic model for the finite strategic two-player


game form (S1 , S2 , z) is a structure
(S1 , T1 , S2 , T2 ) ,
where, for each type ti of any player i, ti corresponds to (1) mixed strategy piti , where supppiti = Si , and (2) a system of conditional preferences
on the collection of sets of acts from elements of
ti := { Ti Sj Tj | ti 6= }
to (Z), where ti is a non-empty subset of {ti } Sj Tj .
Moreover, Schuhmacher (1999) in effect makes the following assumption.

Assumption 3 For each ti of any player i, (a) ti satisfies Axioms 1,


2, and 4 if 6= Ti Sj Tj , and Axiom 3 if and only if ti , (b)
the system of conditional preferences {ti | ti } satisfies Axioms 5
and 6, and (c) there exists a non-empty subset of opponent types, Tjti ,
such that ti = {ti } Sj Tjti .
Under Assumption 3 it follows from Proposition 1 that, for each type
ti of any player i, is system of conditional preferences at ti can be
represented by a vNM utility function iti : (Z) R and a subjective
probability distribution ti which for expositional simplicity is defined on
Sj Tj with support Sj Tjti (instead of being defined on Ti Sj Tj with
support ti = {ti } Sj Tjti ). Hence, as before we consider w.l.o.g. is
unconditional preferences at ti , ti , to be preferences over acts from
Sj Tj to (Z) (instead of acts from {ti } Sj Tj to (Z)).
The combination of ti having full support on Si and Axiom 6 (Conditionality) being satisfied means that all opponent strategies are taken
into account for any opponent type that is deemed subjectively possible, something that is reflected by ti having full support on Sj . Hence,
preference for cautious behavior need not be explicitly imposed. Rather,
following Schuhmacher (1999) we consider the following events. First,
define the set of type profiles for which ti , for any subjectively possible
opponent type, ind uces that types mixed strategy:

n
o
0
0

[indi ] := (t1 , t2 ) T1 T2 t0j Tjti , pjti |tj = pjtj .


Write [ind] := [ind1 ][ind2 ]. Furthermore, define the set of type profiles
for which ti , according to his mixed strategy piti , plays a pure strategy

127

Properness

with much greater probability than another if player i at ti prefers the


former to the latter:

[-prop tremi ] := (t1 , t2 ) T1 T2


o
piti (si ) piti (si ) whenever si ti s0i .
If ti projTi [-prop tremi ], then player i is said to satisfy the -proper
trembling condition at ti . Schuhmachers (1999) definition of -proper
rationalizability can now be formally stated.

Definition 21 (Schuhmacher, 1999) A mixed strategy pi for i is properly rationalizable in a finite strategic two-player game G if there
exists an -epistemic model with piti = pi for some ti projTi CK([u]
[ind] [-prop trem]). A mixed strategy pi for i is properly rationalizable if there exists a sequence (pi (n))nN of (n)-properly rationalizable
strategies converging to pi , where (n) 0 as n .
We next characterize the concept of properly rationalizable strategies
as induced mixed strategies under common certain belief of [u] [resp]
[cau]. The result is proven in Appendix B.

Proposition 37 A mixed strategy pi for i is properly rationalizable in a


finite strategic two-player game G if and only if there exists an epistemic
model with (t1 , t2 ) CK([u] [resp] [cau]) such that pi is induced for
ti by tj .
It follows from Propositions 36 and 37 that any mixed strategy is properly rationalizable if it is part of a proper equilibrium. Since a proper
equilibrium always exists, we obtain as an immediate consequence that
properly rationalizable strategies always exist.

10.3

Relating rationalizability concepts (cont.)

As shown by van Damme (1984), any proper equilibrium in the strategic form corresponds to a quasi-perfect equilibrium in the extensive form.
The following result shows, by Propositions 34 and 36, this relationship
between the equilibrium concepts and establishes, by Definition 18 and
Proposition 37, the corresponding relationship between the rationalizability concepts. Furthermore, it means that the two cells in Table 2.2
to the left of proper rationalizability are not applicable.

Proposition 38 For any epistemic model and for each player i,


[respi ] Ki [cauj ] [isri ] .

128

CONSISTENT PREFERENCES

Proof. Consider any epistemic model with


ti projTi ([respi ] Ki [cauj ]) .
Suppose ti
/ projTi [isri ]; i.e., there exist tj Tj ti and h Hj such
that jti |tj |h is outcome equivalent to pj , where pj (sj ) > 0 for some
sj Sj (h)\Sjtj (h). Since ti projTi Ki [cauj ], it follows from Lemma
13 that s0j Sj (h) s.t. s0j tj sj . Since ti projTi [respi ], this means
that s0j Sj (h) s.t. (s0j , tj ) ti (sj , tj ). Furthermore, pj (sj ) > 0 implies
t`i (sj , tj ) > 0, where ` is the first level ` of ti for which t`i (Sj (h), tj ) >
0. Since then ` is also the first level ` of ti for which t`i ({sj , s0j }, tj ) > 0,
this contradicts (s0j , tj ) ti (sj , tj ) and shows that ti projTi [isri ].
Since proper rationalizability is thus a refinement of quasi-perfect rationalizability, which in turn is a refinement of sequential rationalizability, it follows from Proposition 33 that proper rationalizability implies
the backward induction procedure in perfect information games. E.g.,
in the centipede game illustrated in 03 of Figure 2.4, common certain
belief of proper consistency implies that the players preferences over
their own strategies are given by
1s preferences: Out InL OutR
2s preferences: ` r .
This property of proper rationalizability has been discussed by both
Schuhmacher (1999) and Asheim (2001).
From the proof of Proposition 1 in Mailath et al. (1997) one can
conjecture that quasi-perfect rationalizability in every extensive form
corresponding to a given strategic game coincides with proper rationalizability in that game. However, for any given extensive form the set
of proper rationalizable strategies can be a strict subset of the set of
quasi-perfect rationalizable strategies, as illustrated by 02 of Figure 2.5.
Here, quasi-perfect rationalizability only precludes the play of InR with
positive probability. However, since InL strongly dominates InR, it follows that 2 prefers ` to r if she respects 1s preferences. Hence, only `
with probability one is properly rationalizable for 2, which implies that
only InL with probability one is properly rationalizable for 1.

10.4

Induction in a betting game

The games G7 (of Figure 10.1), 03 (of Figure 2.4), and 02 (of Figure
2.5) have in common that the properly rationalizable strategies coincide
with those surviving iterated (maximal) elimination of weakly domi-

129

Properness

a
-9
9

b
6
-6

c
-3
3

1/3

1/3

1/3

Player 1
Player 2

Figure 10.2.

A betting game.

nated strategies (IEWDS). In the present section it will be shown that


this conclusion does not hold in general. Rather, the concept of proper
rationalizability can be used to test the robustness of IEWDS and other
inductive procedures.
Figure 10.2 illustrates a simplified version of a betting game introduced by Sonsino et al. (2000) for the purpose of experimental study;
Svik (2001) has subsequently repeated their experiment in alternative
designs. The two players consider to bet and have a common and uniform prior over the states that determine the outcome of the bet. If the
state is a, then 1 looses 9 and 2 wins 9 if betting occurs. If the state is
b, then 1 wins 6 and 2 looses 6 if betting occurs. Finally, if the state is
c, then 1 looses 3 and 2 wins 3 if betting occurs. Player 1 is informed of
whether the state of the bet is equal to a or in the set {b, c}. Player 2 is
informed of whether the state of the bet is in the set {a, b} or equal to c.
As a function of their information, each player can announce to accept
the bet or not. For player 1 the strategy YN means to accept the bet if
informed of a and not to accept the bet if informed of {b, c}, etc. For
player 2 the strategy yn means to accept the bet if informed of {a, b} and
not to accept the bet if informed of c, etc. Betting occurs if and only
if both players have accepted the bet. This yields the strategic game of
Figure 10.3.
An inductive procedure. If player 2 naively believes that player 1
is equally likely to accept the bet when informed of a as when informed
of {b, c}, then 2 will wish to accept the bet when informed of {a, b}.
However, the following, seemingly intuitive, inductive procedure appears
to indicate that 2 should never accept the bet if informed of {a, b}: Player
1 should not accept the bet when informed of a since he cannot win by
doing so. This eliminates his strategies YY and YN. Player 2, realizing
this, should never accept the bet when informed of {a, b}, sinceas
long as 1 never accepts the bet when informed of ashe cannot win by
doing so. This eliminates her strategies yy and yn. This in turn means

130

CONSISTENT PREFERENCES

YY
YN
NY
NN
Figure 10.3.

yy yn ny
-2, 2 -1, 1 -1, 1
-3, 3 -3, 3 0, 0
1, -1 2, -2 -1, 1
0, 0 0, 0 0, 0

nn
0, 0
0, 0
0, 0
0, 0

The strategic form of the betting game.

that player 1, realizing this, should never accept the bet when informed
of {b, c}, sinceas long as 2 never accepts the bet when informed of
{a, b}he cannot win by doing so. This eliminates his strategy NY.
This inductive argument corresponds to IEWDS, except that the latter
procedure eliminates 2s strategies yn and nn in the first round. The
argument seems to imply that player 2 should never accept the bet
if informed of {a, b} and that player 1 should never accept the bet if
informed of {b, c}. Is this a robust conclusion?
Proper rationalizability in the betting game. The strategic
game of Figure 10.3 has a set of Nash equilibria that includes the pure
strategy profiles (NN, ny) and (NN, nn), and a set of (strategic form)
perfect equilibria that includes the pure strategy profile (N N, ny). However, there is a unique proper equilibrium where player 1 plays NN with
probability one, and where player 2 mixes between yy with probability
1/5 and ny with probability 4/5. It is instructive to see why the pure
strategy profile (NN, ny) is not a proper equilibrium. If 1 assigns probability one to 2 playing ny, then he prefers YN to NY (since the more
serious mistake to avoid is to accept the bet when being informed of
{b, c}). However, if 2 respects 1s preferences and certainly believes that
1 prefers YN to NY, then she will herself prefer yy to ny, undermining
(NN, ny) as a proper equilibrium. The mixture between yy and ny in
the proper equilibrium is constructed so that 1 is indifferent between YN
and NY.
Since any mixed strategy is properly rationalizable if it is part of a
proper equilibrium, it follows that both yy and yn are properly rationalizable pure strategies for 2. Moreover, if 1 certainly believes that
2 is of a type with only yy as a most preferred strategy, then NY is
a most preferred strategy for 1, implying that NY in addition to NN
is a properly rationalizable strategy for 1. That these strategies are
in fact properly rationalizable is verified by the epistemic model of Ta-

131

Properness

Table 10.1.

t01

t02

An epistemic model for the betting game.

yy
yn
ny
nn

t02
(0, 0, 1, 0)
(0, 0, 0, 1)
(1, 0, 0, 0)
(0, 1, 0, 0)

t002
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)

t001

YY
YN
NY
NN

t01
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)

t001
(0, 0, 1, 0)
(0, 0, 0, 1)
(1, 0, 0, 0)
(0, 1, 0, 0)

t002

yy
yn
ny
nn

t02
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)

t002
(1, 0, 0, 0)
(0, 1, 0, 0)
(0, 0, 1, 0)
(0, 0, 0, 1)

YY
YN
NY
NN

t01
(0, 0, 0, 1)
(0, 1, 0, 0)
(0, 0, 1, 0)
(1, 0, 0, 0)

t001
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)

ble 10.1. In the table the preferences of any player i at each type ti
are represented by a vNM utility function iti satisfying iti z = ui
and a 4-level LPS on Sj {t0j , t00j }, with the first numbers in the parantheses expressing primary probability distributions, the second numbers
expressing secondary probability distributions, etc. It can be checked
that {t01 , t001 } {t02 , t002 } [u] [resp] [cau], which in turn implies
{t01 , t001 } {t02 , t002 } CK([u] [resp] [cau]) since, for each ti Ti of
any player i, Tjti {t0j , t00j }. Since each types preferences over his/her
own strategies are given by
0

N N t1 Y N t1 N Y t1 Y Y
00
00
00
N Y t1 N N t1 Y Y t1 Y N
0
0
0
ny t2 nn t2 yy t2 yn
00
00
00
yy t2 yn t2 ny t2 nn ,
it follows that NY and NN are properly rationalizable for player 1 and
yy and ny are properly rationalizable for player 2. Note that YY and YN
for player 1 and yn and nn for player 2 cannot be properly rationalizable
since these strategies are weakly dominated and, thus, cannot be most
preferred strategies for cautious players.
The lesson to be learned from this analysis is that is not obvious that
deductive reasoning should lead players to refrain from accepting the
bet in the betting game. The experiments by Sonsino et al. (2000)
and Svik (2001) show that some subjects do in fact accept the bet
in a slightly more complicated version of this game. By comparison to

132

CONSISTENT PREFERENCES

Propositions 33 and 38, the analysis can be used to support the argument
that backward induction in generic perfect information games is more
convincing than the inductive procedure for the betting game discussed
above.

Chapter 11
CAPTURING FORWARD INDUCTION
THROUGH FULL PERMISSIBILITY

The procedure of iterated (maximal) elimination of weakly dominated


strategies (IEWDS) has a long history and some intuitive appeal, yet it
is not as easy to interpret as iterated elimination of strongly dominated
strategies (IESDS). IESDS is known to be equivalent to common belief
of rational choice; cf. Tan and Werlang (1988) as well as Propositions 22
and 26 of this book. IEWDS would appear simply to add a requirement
of admissibility, i.e., that one strategy should be preferred to another if
the former weakly dominates the latter on a set of strategies that the
opponent may choose. However, numerous authorsin particular,
Samuelson (1992)have noted that it is not clear that we can interpret
IEWDS this way. To see this, consider the following two examples.
The left-hand side of Figure 2.6 shows G′1, the pure strategy reduced strategic form of the battle-of-the-sexes with an outside option game. Here IEWDS works by eliminating InR, r, and Out, leading to the forward induction outcome (InL, ℓ). This prediction appears consistent: if 2 believes that 1 will choose InL, then she will prefer ℓ to r, as 2's preference over her strategies depends only on the relative likelihood of InL and InR.
The situation is different in G8 of Figure 11.1, where IEWDS works by eliminating D, r, and M, leading to (U, ℓ). Since 2 is indifferent at the predicted outcome, we must here appeal to admissibility on a superset of {U}, namely {U, M}, to justify the statement that 2 must play ℓ. However, it is not clear that this is reasonable. Admissibility on {U, M} means that 2's preferences respect weak dominance on this set and implies that M is deemed infinitely more likely than D (in the sense of Blume et al., 1991a, Definition 5.1; see also Chapter 3). However, why should 2 deem M more likely than D? If 2 believes that 1 believes in the prediction that 2 plays ℓ (as IEWDS suggests), then it seems odd to assume that 2 believes that 1 considers D to be a less attractive choice than M.

         ℓ       r
  U    1, 1    1, 1
  M    0, 1    2, 0
  D    1, 0    0, 1

Figure 11.1. G8, illustrating that IEWDS may be problematic.
A sense in which D is less rational than M is simply that it was
eliminated first. This hardly seems a justification for insisting on the
belief that D is much less likely than M . Still, Stahl (1995) has shown
that IEWDS effectively assumes this: a strategy survives IEWDS if and
only if it is a best response to a belief where one strategy is infinitely less
likely than another if the former is eliminated at an earlier round than
the latter. Thus, IEWDS adds extraneous and hard-to-justify restrictions on beliefs, and may not appear to correspond to the most natural
formalization of deductive reasoning under admissibility. So what does?
Reproducing joint work with Martin Dufwenberg, cf. Asheim and Dufwenberg (2003a), this chapter presents the concept of fully permissible sets as an answer. In G′1 this concept agrees with the prediction of IEWDS, as seems natural. The procedure leading to this prediction is quite different, though, as is its interpretation. In G8, however, full permissibility predicts that 1's set of rational choices is either {U} or {U, M}, while 2's set of rational choices is either {ℓ} or {ℓ, r}. This has interesting implications. If 2 is certain that 1's set is {U}, then, absent extraneous restrictions on beliefs, one cannot conclude that 2 prefers ℓ to r or vice versa. On the other hand, if 2 considers it possible that 1's set is {U, M}, then ℓ weakly dominates r on this set and justifies {ℓ} as 2's set of rational choices. Similarly, one can justify that U is preferred to M if and only if 1 considers it impossible that 2's set is {ℓ, r}. Thus, full permissibility tells a consistent story of deductive reasoning under admissibility, without adding extraneous restrictions on beliefs.
This chapter is organized as follows. Section 11.1 illustrates the key features of the requirement, called full admissible consistency, that is imposed on players to arrive at full permissibility. Section 11.2 formally
defines the concept of fully permissible sets through an algorithm that


eliminates strategy sets under full admissible consistency. General existence as well as other properties are shown. Section 11.3 establishes
epistemic conditions for the concept of fully permissible sets, and checks
that these conditions are indeed needed and thereby relates full permissibility to other concepts. Section 11.4 investigates examples, showing
how forward induction is promoted and how multiple fully permissible
sets may arise. Section 11.5 compares our epistemic conditions to those
provided in related literature. As elsewhere in this book, the analysis
will be limited to two-player games. In this chapter (and the next), this
is for ease of presentation, as everything can essentially be generalized
to n-player games (with n > 2).

11.1 Illustrating the key features

Our modeling captures three key features:


1 Caution. A player should prefer one strategy to another if the former weakly dominates the latter. Such admissibility of a player's preferences on the set of all opponent strategies is defended, e.g., in Chapter 13 of Luce and Raiffa (1957) and is implicit in procedures that start out by eliminating all weakly dominated strategies.
2 Robust belief of opponent rationality. A player should deem any opponent strategy that is a rational choice infinitely more likely than any opponent strategy not having this property. This is equivalent to preferring one strategy to another if the former weakly dominates the latter on the set of rational choices for the opponent. Such admissibility of a player's preferences on a particular subset of opponent strategies is an ingredient of the analyses of weak dominance by Samuelson (1992) and Börgers and Samuelson (1992), and is essentially satisfied by extensive form rationalizability (EFR; cf. Pearce, 1984, and Battigalli, 1996a, 1997) and IEWDS.
3 No extraneous restrictions on beliefs. A player should prefer one strategy to another only if the former weakly dominates the latter on the set of all opponent strategies or on the set of rational choices for the opponent. Such equal treatment of opponent strategies that are all rational, or all irrational, has in principle been argued for by Samuelson (1992, p. 311), Gul (1997), and Mariotti (1997).
These features are combined as follows. A player's preferences over his own strategies leads to a choice set (i.e., a set of maximal pure strategies; cf. Section 6.1). A player's preferences is said to be fully admissibly consistent with the game and the preferences of his opponent if one strategy is preferred to another if and only if the former weakly dominates the latter

on the set of all opponent strategies, or

on the union of the choice sets that are deemed possible for the opponent.

         ℓ       r
  U    1, 1    1, 1
  M    1, 1    1, 0
  D    1, 0    0, 1

Figure 11.2. G9, illustrating the key features of full admissible consistency.
A subset of strategies is a fully permissible set if and only if it can be
a choice set when there is common certain belief of full admissible consistency. Hence, the analysis yields a solution concept that determines
a collection of choice sets for each player. This collection can be found
via a simple algorithm, introduced in the next section.
We use G9 of Fig. 11.2 to illustrate the consequences of imposing caution and robust belief of opponent rationality. Since caution means that each player takes all opponent strategies into account, it follows that player 1's preferences over his strategies will be U ∼ M ≻ D (where ∼ and ≻ denote indifference and preference, respectively). Player 1 must prefer each of the strategies U and M to the strategy D, because the former strategies weakly dominate D. Hence, U and M are maximal, implying that 1's choice set is {U, M}.
The requirement of robust belief of opponent rationality comes into effect when considering the preferences of player 2. Suppose that 2 certainly believes that 1 is cautious and therefore (as indicated above) certainly believes that {U, M} is 1's choice set. Our assumption that 2 has robust belief of 1's rationality captures that 2 deems each element of {U, M} infinitely more likely than D. Thus, 2's preferences respect weak dominance on 1's choice set {U, M}, regardless of what happens if 1 chooses D. Hence, 2's preferences over her strategies will be ℓ ≻ r. Summing up, we get to the following solution for G9:

1's preferences: U ∼ M ≻ D
2's preferences: ℓ ≻ r

Hence, {U, M} and {ℓ} are the players' fully permissible sets.
The third feature of full admissible consistency, no extraneous restrictions on beliefs, means in G9 that 2 does not assess the relative likelihood of 1's maximal strategies U and M. This does not have any bearing on the analysis of G9, but is essential for capturing forward induction in G′1 of Figure 2.6. In this case the issue is not whether a player assesses the relative likelihood of different maximal strategies, but rather whether a player assesses the relative likelihood of different non-maximal strategies. To see the significance in G′1, assume that 1 deems r infinitely more likely than ℓ, while 2 deems Out infinitely more likely than InR and InR infinitely more likely than InL. Then the players rank their strategies as follows:

1's preferences: Out ≻ InR ≻ InL
2's preferences: r ≻ ℓ

Both caution and robust belief of opponent rationality are satisfied, and still the forward induction outcome (InL, ℓ) is not promoted. However, the requirement of no extraneous restrictions on beliefs is not satisfied, since the preferences of 2 introduce extraneous restrictions on beliefs by deeming one of 1's non-maximal strategies, InR, infinitely more likely than another non-maximal strategy, InL. When we return to G′1 in Sections 11.4 and 11.5, we show how the additional imposition of no extraneous restrictions on beliefs leads to (InL, ℓ) in this game.
Several concepts with natural epistemic foundations fail to match these predictions in G′1 and G9. In the case of rationalizability (cf. Bernheim, 1984, and Pearce, 1984) this is perhaps not so surprising, since this concept in two-player games corresponds to IESDS. It can be understood as a consequence of common belief of rational choice without imposing caution, so there is no guarantee that a player prefers one strategy to another if the former weakly dominates the latter. In G9, for example, all strategies are rationalizable.
It is more surprising that the concept of permissibility does not match our solution of G9. Permissibility can be given rigorous epistemic foundations in models with cautious players (cf. Börgers, 1994, and Brandenburger, 1992, who coined the term "permissible"; see also Ben-Porath, 1997, and Gul, 1997, as well as Propositions 24 and 27 of this book). In these models players take into account all opponent strategies, while assigning more weight to a subset of those deemed to
be rational choices. As noted earlier, permissibility corresponds to the Dekel-Fudenberg procedure, where one round of elimination of all weakly dominated strategies is followed by iterated elimination of strongly dominated strategies. In G9, this means that 1 cannot choose his weakly dominated strategy D. However, while 2 prefers ℓ to r in our solution, permissibility allows that 2 chooses r. To exemplify using Brandenburger's (1992) approach, this will be the case if 2 deems U to be infinitely more likely than D, which in turn is deemed infinitely more likely than M. The problem is that robust belief of opponent rationality is not satisfied: Player 2 deems D more likely than M even though M is in 1's choice set, while D is not. In Section 11.3 we establish in Proposition 40 that the concept of fully permissible sets refines the Dekel-Fudenberg procedure.

11.2 IECFA and fully permissible sets

We present in this section an algorithm, iterated elimination of choice sets under full admissible consistency (IECFA), leading to the concept of fully permissible sets. This concept will in turn be given an epistemic characterization in Section 11.3 by imposing common certain belief of full admissible consistency. We present the algorithm before the epistemic characterization for different reasons:
IECFA is fairly accessible. By defining it early, we can apply it early,
and offer early indications of the nature of the solution concept we
wish to promote.
By defining IECFA, we point to a parallel to the concepts of rationalizable strategies and permissible strategies. These concepts are
motivated by epistemic assumptions, but turn out to be identical in
2-player games to the set of strategies surviving simple algorithms:
respectively, IESDS and the Dekel-Fudenberg procedure.
Just like IESDS and the Dekel-Fudenberg procedure, IECFA is easier to use than the corresponding epistemic characterizations. The
algorithm should be handy for applied economists, independently of
the foundational issues discussed in Section 11.3.
IESDS and the Dekel-Fudenberg procedure iteratively eliminate dominated strategies. In the corresponding epistemic models, these strategies
in turn cannot be rational choices, cannot be rational choices given that
other players do not use strategies that cannot be rational choices, etc.


IECFA is also an elimination procedure. However, the interpretation of the basic item thrown out is not that of a strategy that cannot be a rational choice, but rather that of a set of strategies that cannot be a choice set for any preferences that are in a given sense consistent with the preferences of the opponent. The specific kind of consistency involved in IECFA, which will be defined in Section 11.3 and referred to as full admissible consistency, requires that a player's preferences are characterized by the properties of caution, robust belief of opponent rationality, and no extraneous restrictions on beliefs. Thus, IECFA does not start with each player's strategy set and then iteratively eliminate strategies. Rather, IECFA starts with each player's collection of non-empty subsets of his strategy set and then iteratively eliminates subsets from this collection.
Definition. Consider a finite strategic two-player game G = (S1, S2, u1, u2), and recall the following notation from Chapter 6: For any (∅ ≠) Yj ⊆ Sj,

Di(Yj) := {si ∈ Si | ∃pi ∈ Δ(Si) such that pi weakly dominates si on Yj or Sj}.

Interpret Yj as the set of strategies that player i deems to be the set of rational choices for his opponent. Let i's choice set be equal to Si\Di(Yj), entailing that i's choice set consists of pure strategies that are not weakly dominated by any mixed strategy on Yj or Sj. In Section 11.3 we show how this corresponds to a set of maximal strategies given the player's preferences over his own strategies.
Let Σ = Σ1 × Σ2, where Σi := 2^{Si}\{∅} denotes the collection of non-empty subsets of Si. Write σi (∈ Σi) for a subset of pure strategies. For any (∅ ≠) Ξ = Ξ1 × Ξ2 ⊆ Σ, write α(Ξ) := α1(Ξ2) × α2(Ξ1), where

αi(Ξj) := {σi ∈ Σi | ∃(∅ ≠) Ξ′j ⊆ Ξj s.t. σi = Si\Di(∪σj∈Ξ′j σj)}.

Hence, αi(Ξj) is the collection of strategy subsets that can be choice sets for player i if he associates Yj, the set of rational choices for his opponent, with the union of the strategy subsets in a non-empty subcollection of Ξj.
We can now define the concept of a fully permissible set.

Definition 22 Let G = (S1 , S2 , u1 , u2 ) be a finite strategic two-player


game. Consider the sequence defined by (0) = and, g 1, (g) =
((g 1)). A non-empty strategy set i is said to be fully permissible
if

140

CONSISTENT PREFERENCES

\
g=0

i (g) .

Let Π = Π1 × Π2 denote the collection of profiles of fully permissible sets. Since ∅ ≠ αi(Ξ′j) ⊆ αi(Ξ″j) ⊆ αi(Σj) whenever ∅ ≠ Ξ′j ⊆ Ξ″j ⊆ Σj, and since the game is finite, Ξ(g) is a monotone sequence that converges to Π in a finite number of iterations. IECFA is the procedure that in round g eliminates sets in Ξ(g−1)\Ξ(g) as possible choice sets. As defined in Definition 22, IECFA eliminates maximally in each round in the sense that, for each g ≥ 1, Ξ(g) = α(Ξ(g−1)). However, it follows from the monotonicity of αi that any non-maximal procedure, where, for each g ≥ 1, Ξ(g−1) ⊇ Ξ(g) ⊇ α(Ξ(g−1)), will also converge to Π.
A strategy subset survives elimination round g if it can be a choice set when the set of rational choices for his opponent is associated with the union of some (or all) of the opponent sets that have survived the procedure up till round g−1. A fully permissible set is a set that survives in this way for any g. The analysis of Section 11.3 justifies that strategy subsets that this algorithm has not eliminated by round g be interpreted as choice sets compatible with g−1 order of mutual certain belief of full admissible consistency.
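Since IECFA only manipulates collections of strategy subsets, it can be rendered in a few lines of Python. The sketch below is our own (not the book's), reuses the weakly_dominated helper from the previous sketch, and enumerates subcollections exhaustively, which is adequate only for small examples such as those below.

from itertools import combinations

def choice_set(U, own, opp, Y):
    # Si\Di(Yj) for the row player with payoff matrix U, given the set Y of
    # opponent strategy labels; `own` and `opp` are lists of strategy labels.
    Ycols, allcols = [opp.index(s) for s in Y], list(range(len(opp)))
    dominated = {own[r] for r in range(len(own))
                 if weakly_dominated(U, r, Ycols) or weakly_dominated(U, r, allcols)}
    return frozenset(own) - dominated

def alpha(U, own, opp, Xi_opp):
    # Sets that can be choice sets when Yj is the union of a non-empty
    # subcollection of the opponent's current collection Xi_opp.
    unions = {frozenset().union(*sub)
              for k in range(1, len(Xi_opp) + 1)
              for sub in combinations(Xi_opp, k)}
    return {choice_set(U, own, opp, Y) for Y in unions}

def iecfa(U1, U2, S1, S2):
    # U1[i][j]: player 1's payoff at (S1[i], S2[j]); U2[j][i]: player 2's payoff
    # at (S1[i], S2[j]), i.e. player 2 is the row player of U2.
    def all_subsets(S):
        return {frozenset(c) for k in range(1, len(S) + 1) for c in combinations(S, k)}
    Xi1, Xi2 = all_subsets(S1), all_subsets(S2)
    while True:
        new1, new2 = alpha(U1, S1, S2, Xi2), alpha(U2, S2, S1, Xi1)
        if (new1, new2) == (Xi1, Xi2):
            return Xi1, Xi2            # the collections of fully permissible sets
        Xi1, Xi2 = new1, new2

Iterating α to a fixed point implements the maximal elimination of Definition 22; by the monotonicity noted above, a slower (non-maximal) elimination schedule would converge to the same collections.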
Applications. We illustrate IECFA by applying it. Consider G9 of Figure 11.2. We get:

Ξ(0) = Σ1 × Σ2
Ξ(1) = {{U, M}} × Σ2
Π = Ξ(2) = {{U, M}} × {{ℓ}}.

Independently of Y2, S1\D1(Y2) = {U, M}, so for 1 only {U, M} survives the first elimination round, while S2\D2({U, M}) = {ℓ}, S2\D2({D}) = {r} and S2\D2({U}) = {ℓ, r}, so that no elimination is possible for player 2. However, in the second round only {ℓ} survives, since ℓ weakly dominates r on {U, M}, implying that S2\D2({U, M}) = {ℓ}.
Next, consider G′1 of Figure 2.6. Applying IECFA we get:

Ξ(0) = Σ1 × Σ2
Ξ(1) = {{Out}, {InL}, {Out, InL}} × Σ2
Ξ(2) = {{Out}, {InL}, {Out, InL}} × {{ℓ}, {ℓ, r}}
Ξ(3) = {{InL}, {Out, InL}} × {{ℓ}, {ℓ, r}}
Ξ(4) = {{InL}, {Out, InL}} × {{ℓ}}
Π = Ξ(5) = {{InL}} × {{ℓ}}.

Again the algorithm yields a unique fully permissible set for each player.
Finally, apply IECFA to G8 of Figure 11.1:

Ξ(0) = Σ1 × Σ2
Ξ(1) = {{U}, {M}, {U, M}} × Σ2
Ξ(2) = {{U}, {M}, {U, M}} × {{ℓ}, {ℓ, r}}
Π = Ξ(3) = {{U}, {U, M}} × {{ℓ}, {ℓ, r}}.

Here we are left with two fully permissible sets for each player. There is no further elimination, as {U} = S1\D1({ℓ}), {U, M} = S1\D1({ℓ, r}), {ℓ} = S2\D2({U, M}), and {ℓ, r} = S2\D2({U}).
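Replacing the payoff matrices by those of G8 from Figure 11.1 (same hypothetical encoding as above) should instead yield the two fully permissible sets per player reported here, illustrating that the limit collection need not be a singleton:

S1, S2 = ["U", "M", "D"], ["l", "r"]
U1 = [[1, 1], [0, 2], [1, 0]]        # player 1's payoffs in G8
U2 = [[1, 1, 0], [1, 0, 1]]          # player 2's payoffs, rows l and r
print(iecfa(U1, U2, S1, S2))
# Per the text: player 1's collection is {U} and {U, M};
# player 2's collection is {l} and {l, r}.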
The elimination process for G′1 and G8 is explained and interpreted
in Section 11.4.
Results. The following proposition characterizes the strategy subsets that survive IECFA and thus are fully permissible, and is a straightforward implication of Definition 22 (keeping in mind that Σ is finite and, for each i, αi is monotone).

Proposition 39 (i) For each i, Πi ≠ ∅. (ii) Π = α(Π). (iii) For each i, σi ∈ Πi if and only if there exists Ξ = Ξ1 × Ξ2 with σi ∈ Ξi such that Ξ ⊆ α(Ξ).
Proposition 39(i) shows existence, but not uniqueness, of each player's fully permissible set(s). In addition to G2, games with multiple strict Nash equilibria illustrate the possibility of such multiplicity; by Proposition 39(iii) any strict Nash equilibrium corresponds to a profile of fully permissible sets. Proposition 39(ii) means that Π is a fixed point in terms of a collection of profiles of strategy sets, as illustrated by G2 above. By Proposition 39(iii) it is the largest such fixed point.
We close this section by recording some connections between IECFA
on the one hand, and IESDS, the Dekel-Fudenberg procedure (i.e., permissibility), and IEWDS on the other. First, we note through the following Proposition 40 that IECFA has more bite than the Dekel-Fudenberg
procedure. Both G1 and G3 illustrate that this refinement may be strict.

Proposition 40 A pure strategy si is permissible if there exists a fully permissible set σi such that si ∈ σi.

Proof. Using Proposition 39(ii), the definitions of α(·) (given above) and a(·) (given in Chapter 6) imply, for each i,

P′i := ∪σi∈Πi σi = ∪σi∈αi(Πj) σi ⊆ ai(P′j).

Since P′ ⊆ a(P′) implies P′ ⊆ P by Lemma 10(iii), it follows that, for each i, ∪σi∈Πi σi ⊆ Pi.

         ℓ       c       r
  UU   1, 1    1, 1    0, 0
  UD   1, 1    0, 1    1, 0
  DU   0, 1    0, 0    2, 0
  DD   0, 0    0, 1    0, 2

Figure 11.3. G10, illustrating the relation between IECFA and IEWDS.
It is a corollary that IECFA also has more cutting power than IESDS. However, neither IECFA nor IEWDS has more bite than the other, as demonstrated by the game G10 of Fig. 11.3. It is straightforward to verify that UU and UD for player 1 and ℓ for player 2 survive IEWDS, while {UU} for 1 and {ℓ, c} for 2 survive IECFA and are thus the fully permissible sets, as shown below:

Ξ(0) = Σ1 × Σ2
Ξ(1) = {{UU}, {DU}, {UU, UD}, {UU, DU}, {UD, DU}, {UU, UD, DU}} × {{ℓ}, {r}, {ℓ, c}, {ℓ, r}, {c, r}, {ℓ, c, r}}
Ξ(2) = {{UU}, {DU}, {UU, UD}, {UU, DU}, {UD, DU}, {UU, UD, DU}} × {{ℓ}, {ℓ, c}}
Ξ(3) = {{UU}, {UU, UD}} × {{ℓ}, {ℓ, c}}
Ξ(4) = {{UU}, {UU, UD}} × {{ℓ, c}}
Π = Ξ(5) = {{UU}} × {{ℓ, c}}.

Strategy UD survives IEWDS but does not appear in any fully permissible set. Strategy c appears in a fully permissible set but does not survive IEWDS.
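The comparison can be reproduced by running the IECFA sketch on G10 and setting it against a straightforward rendering of IEWDS (again our own code, reusing the weakly_dominated helper; the payoff matrices are transcribed from Figure 11.3):

def iewds(U1, U2, S1, S2):
    # Iterated maximal elimination of weakly dominated strategies.
    R1, R2 = set(range(len(S1))), set(range(len(S2)))
    while True:
        r1, r2 = sorted(R1), sorted(R2)
        sub1 = [[U1[i][j] for j in r2] for i in r1]
        sub2 = [[U2[j][i] for i in r1] for j in r2]
        d1 = {r1[k] for k in range(len(r1))
              if weakly_dominated(sub1, k, list(range(len(r2))))}
        d2 = {r2[k] for k in range(len(r2))
              if weakly_dominated(sub2, k, list(range(len(r1))))}
        if not d1 and not d2:
            return [S1[i] for i in r1], [S2[j] for j in r2]
        R1, R2 = R1 - d1, R2 - d2

S1, S2 = ["UU", "UD", "DU", "DD"], ["l", "c", "r"]
U1 = [[1, 1, 0], [1, 0, 1], [0, 0, 2], [0, 0, 0]]      # player 1's payoffs in G10
U2 = [[1, 1, 1, 0], [1, 1, 0, 1], [0, 0, 0, 2]]        # player 2's payoffs, rows l, c, r
print(iewds(U1, U2, S1, S2))   # per the text: UU and UD survive for 1, l for 2
print(iecfa(U1, U2, S1, S2))   # per the text: {UU} for 1 and {l, c} for 2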

11.3 Full admissible consistency

When justifying rationalizable and permissible strategies through epistemic conditions, players are usually modeled as decision makers under uncertainty. Tan and Werlang (1988) characterize rationalizable
strategies by common belief (with probability one) of the event that
each player chooses a maximal strategy given preferences that are represented by a subjective probability distribution. Hence, preferences
are both complete and continuous (cf. Proposition 1). Brandenburger
(1992) characterizes permissible strategies by common belief (with primary probability one) of the event that each player chooses a maximal strategy given preferences that are represented by an LPS with full
support on the set of opponent strategies (cf. Proposition 2). Hence,
preferences are still complete, but not continuous due to the full support requirement. Since preferences are complete and representable by
a probability distribution or an LPS, these epistemic justifications differ
significantly from the corresponding algorithms, IESDS and the DekelFudenberg procedure, neither of which makes reference to subjective
probabilities.1
When proceeding analogously for fully permissible sets, not only must continuity of preferences be relaxed to allow for caution and robust belief of opponent rationality, as discussed in Section 11.1. One must also relax completeness of preferences to accommodate no extraneous restrictions on beliefs, which is a requirement of minimal completeness and implies that preferences are expressed solely in terms of admissibility on nested sets. Hence, preferences are not in general representable by subjective probabilities (except through treating incomplete preferences as a set of complete preferences; cf. Aumann, 1962; Bewley, 1986). This means that epistemic operators must be derived directly from the underlying preferences, as observed by Morris (1997) and explored further in Chapter 4 of this book, since there is no probability distribution or LPS that represents the preferences. It also entails that the resulting characterization, given in Proposition 41, must be closely related to the algorithm used in the definition of fully permissible sets.
There is another fundamental difference. When characterizing rationalizable and permissible strategies within the rational choice approach, the event that is made subject to interactive epistemology is defined by requiring that each player's strategy choice is an element of his choice set (i.e., his set of maximal strategies) given his belief about the opponent's strategy choice.2 In contrast, in the characterization of Proposition 41, the event that is made subject to interactive epistemology is defined by imposing requirements on how each player's choice set is related to his belief about the opponent's choice set. Since a player's choice set equals the set of maximal strategies given the ranking that the player has over his strategies, the imposed requirements relate a player's ranking over his strategies to the opponent's ranking. Hence, fully permissible sets are characterized within the consistent preferences approach.

1 However, as shown by Propositions 26 and 27 of this book, epistemic characterization of rationalizability and permissibility can be provided without using subjective probabilities.
2 As illustrated in Chapters 5 and 6 of this book, it is also possible to characterize rationalizable and permissible strategies within the consistent preferences approach.
The epistemic modeling is identical to the one given in Section 6.1; hence, this will not be recapitulated here. Recall, however, that κ^{ti} (⊆ {ti} × Sj × Tj) denotes the set of states that player i deems subjectively possible at ti, that β^{ti} (⊆ κ^{ti}) denotes the smallest set of states on which player i's preferences at ti, ≽^{ti}, are admissible, and that Assumption 2 is imposed so that preferences are conditionally represented by a vNM utility function (cf. Proposition 4).
Characterizing full permissibility. To characterize the concept of fully permissible sets, consider for each i,

B̂⁰i[ratj] := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | β^{ti} = (proj_{Ti×Sj×Tj}[ratj]) ∩ κ^{ti}, and p ≻^{ti} q only if p_{Ej} weakly dominates q_{Ej} for Ej = proj_{Sj×Tj} β^{ti} or Ej = proj_{Sj×Tj} κ^{ti}}.

Define as follows the event that player i's preferences over his strategies are fully admissibly consistent with the game G = (S1, S2, u1, u2) and the preferences of his opponent:

A⁰i := [ui] ∩ B̂⁰i[ratj] ∩ [caui].

Write A⁰ := A⁰1 ∩ A⁰2 for the event of full admissible consistency.

Proposition 41 A strategy set σi for i is fully permissible in a finite strategic two-player game G if and only if there exists an epistemic model with σi = Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA⁰.
Proof. Part 1: If σi is fully permissible, then there exists an epistemic model with σi = Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA⁰. It is sufficient to construct a belief system with S1 × T1 × S2 × T2 ⊆ CKA⁰ such that, for each σi ∈ Πi of any player i, there exists ti ∈ Ti with σi = Si^{ti}. Construct a belief system with, for each i, a bijection σi(·) : Ti → Πi from the set of types to the collection of fully permissible sets. By Proposition 39(ii) we have that, for each ti ∈ Ti of any player i, there exists Ξj^{ti} ⊆ Πj such that σi(ti) = Si\Di(Yj^{ti}), where Yj^{ti} := {sj ∈ Sj | ∃σj ∈ Ξj^{ti} s.t. sj ∈ σj}. Determine the set of opponent types that ti deems subjectively possible as follows: Tj^{ti} = {tj ∈ Tj | σj(tj) ∈ Ξj^{ti}}. Let, for each ti ∈ Ti of any player i, ≽^{ti} satisfy

1. υi^{ti} ∘ z = ui (so that S1 × T1 × S2 × T2 ⊆ [u]), and
2. p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Ej^{ti} := {(sj, tj) | sj ∈ σj(tj) and tj ∈ Tj^{ti}} or Ej = Sj × Tj^{ti}, which implies that β^{ti} = {ti} × Ej^{ti} and κ^{ti} = {ti} × Sj × Tj^{ti} (so that S1 × T1 × S2 × T2 ⊆ [cau]).

By the construction of Ej^{ti}, this means that Si^{ti} = Si\Di(Yj^{ti}) = σi(ti) since, for any acts p and q on Sj × Tj satisfying that there exist mixed strategies pi, qi ∈ Δ(Si) such that, for all (sj, tj) ∈ Sj × Tj, p(sj, tj) = z(pi, sj) and q(sj, tj) = z(qi, sj), we have that p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Yj^{ti} × Tj or Ej = Sj × Tj. This in turn implies, for each ti ∈ Ti of any player i,

3. β^{ti} = (proj_{Ti×Sj×Tj}[ratj]) ∩ κ^{ti} (so that, in combination with 2., S1 × T1 × S2 × T2 ⊆ B̂⁰i[ratj] ∩ B̂⁰j[rati]).

Furthermore, S1 × T1 × S2 × T2 ⊆ CKA⁰ since Tj^{ti} ⊆ Tj for each ti ∈ Ti of any player i. Since, for each player i, σi(·) is onto Πi, it follows that, for each σi ∈ Πi of any player i, there exists ti ∈ Ti with σi = Si^{ti}.

Part 2: If there exists an epistemic model with σi = Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA⁰, then σi is fully permissible.
Assume that there exists an epistemic model with σi = Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA⁰. In particular, CKA⁰ ≠ ∅. Let, for each i, T′i := proj_{Ti} CKA⁰ and Ξi := {Si^{ti} | ti ∈ T′i}. It is sufficient to show that, for each i, Ξi ⊆ Πi. By Proposition 25(ii), for each ti ∈ T′i of any player i, β^{ti} ⊆ κ^{ti} ⊆ {ti} × Sj × T′j since CKA⁰ = KCKA⁰ ⊆ KiCKA⁰. By the definition of A⁰, it follows that, for each ti ∈ T′i of any player i,

1. ≽^{ti} is conditionally represented by υi^{ti} satisfying that υi^{ti} ∘ z is a positive affine transformation of ui, and
2. p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Ej^{ti} := proj_{Sj×Tj} β^{ti} or Ej = Sj × Tj^{ti}, where β^{ti} = (proj_{Ti×Sj×Tj}[ratj]) ∩ κ^{ti}.

Write Ξj^{ti} := {Sj^{tj} | tj ∈ Tj^{ti}} and Yj^{ti} := {sj ∈ Sj | ∃σj ∈ Ξj^{ti} s.t. sj ∈ σj}, and note that κ^{ti} ⊆ {ti} × Sj × T′j implies Ξj^{ti} ⊆ Ξj. It follows that, for any acts p and q on Sj × Tj satisfying that there exist mixed strategies pi, qi ∈ Δ(Si) such that, for all (sj, tj) ∈ Sj × Tj, p(sj, tj) = z(pi, sj) and q(sj, tj) = z(qi, sj), we have that p ≻^{ti} q iff p_{Ej} weakly dominates q_{Ej} for Ej = Yj^{ti} × Tj or Ej = Sj × Tj. Hence, Si^{ti} = Si\Di(Yj^{ti}). Since this holds for each ti ∈ T′i of any player i, we have that Ξ ⊆ α(Ξ). Hence, Proposition 39(iii) entails that, for each i, Ξi ⊆ Πi.
Interpretation. We now show how the event used to characterize fully permissible sets, full admissible consistency, can be interpreted in terms of the requirements of caution, robust belief of opponent rationality, and no extraneous restrictions on beliefs. Following a common procedure of the axiomatic method, this will in turn be used to verify that these requirements are indeed needed for the characterization
in Proposition 41 by investigating the consequences of relaxing one requirement at a time. These exercises contribute to the understanding
of fully permissible sets by showing that the concept is related to properly rationalizable, permissible, and rationalizable pure strategies in the
following manner:
When allowing extraneous restrictions on beliefs, we allow for any properly rationalizable pure strategy, implying that forward induction is no longer promoted in G′1 of Figure 2.6.3
When weakening robust belief of opponent rationality to belief of
opponent rationality, we characterize the concept of permissible pure
strategies independently of whether a requirement of no extraneous
restrictions on beliefs is retained.
When removing caution, we characterize the concept of rationalizable pure strategies independently of whether extraneous restrictions
on beliefs are allowed and robust belief of opponent rationality is
weakened.
Since it is clear that [cau] = [cau1] ∩ [cau2] corresponds to caution (cf. Section 6.3), it remains to split B̂⁰1[rat2] ∩ B̂⁰2[rat1] into robust belief of opponent rationality and no extraneous restrictions on beliefs.
To state the condition of robust belief of opponent rationality we need to recall the robust belief operator as defined and characterized in Chapter 4. Since Assumption 2 is compatible with the framework of Chapter 4, we can in line with Section 4.2 define robust belief as follows. If E does not concern player i's strategy choice (i.e., E = Si × proj_{Ti×Sj×Tj}E), say that player i robustly believes the event E at ti if ti ∈ proj_{Ti} B⁰iE, where

B⁰iE := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | ∃ℓ ∈ {1, . . . , L} s.t. β^{ti}_ℓ = proj_{Ti×Sj×Tj}E ∩ κ^{ti}},

and where (β^{ti}_1, . . . , β^{ti}_L) is the profile of nested sets on which ≽^{ti} is admissible, and which satisfies

∅ ≠ β^{ti} = β^{ti}_1 ⊂ · · · ⊂ β^{ti}_ℓ ⊂ · · · ⊂ β^{ti}_L = κ^{ti} ⊆ {ti} × Sj × Tj

(where ⊂ denotes ⊆ and ≠).

3 To relax no extraneous restrictions on beliefs we need an epistemic model, as the one introduced in Section 6.1, that is versatile enough to allow for preferences that are more complete than being determined by admissibility on two nested sets.
If ti ∈ proj_{Ti} B⁰i[ratj], then i robustly believes at ti that j is rational. By Proposition 6 this means that any (sj, tj) that is deemed subjectively possible and where sj is a rational choice by j at tj is considered infinitely more likely than any (s′j, t′j) where s′j is not a rational choice by j at t′j.
As ti ∈ proj_{Ti} B̂⁰i[ratj] entails that β^{ti} = (proj_{Ti×Sj×Tj}[ratj]) ∩ κ^{ti}, it follows that B̂⁰i[ratj] ⊆ B⁰i[ratj]. Hence, relative to B⁰1[rat2] ∩ B⁰2[rat1], B̂⁰1[rat2] ∩ B̂⁰2[rat1] is obtained by imposing minimal completeness, which in this context yields the requirement of no extraneous restrictions on beliefs.
As established in Section 4.3, robust belief B⁰i is a non-monotone operator which is bounded by the two KD45 operators, namely belief Bi and certain belief Ki. Furthermore, as shown in Chapter 4, the robust belief operator coincides with the notions of "absolutely robust belief", as introduced by Stalnaker (1998), and "assumption", as proposed by Brandenburger and Keisler (2002), and is closely related to the concept of "strong belief", as used by Battigalli and Siniscalchi (2002). However, in contrast to the use of non-monotonic operators in these contributions, our non-monotonic operator B⁰i is used only to interpret full admissible consistency, while the KD45 operator Ki is used for the interactive epistemology. The importance of this will be discussed in Section 11.5. There we also comment on how the present requirement of no extraneous restrictions on beliefs is related to Brandenburger and Keisler's and Battigalli and Siniscalchi's use of a preference-complete epistemic model.
Allowing extraneous restrictions on beliefs. In view of the previous discussion, we allow extraneous restrictions on beliefs by replacing, for each i, B̂⁰i[ratj] by B⁰i[ratj]. Hence, let for each i,

Āi := [ui] ∩ B⁰i[ratj] ∩ [caui].

The following result is proven in Appendix C and shows that any properly rationalizable pure strategy is consistent with common certain belief of Ā := Ā1 ∩ Ā2.

Proposition 42 Consider a finite strategic two-player game G. If a pure strategy si for i is properly rationalizable, then there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKĀ.

Note that both Out and r are properly rationalizable pure strategies (and, indeed, (Out, r) is a proper equilibrium) in G′1, the battle-of-the-sexes-with-an-outside-option game of Figure 2.6, while neither Out nor r is consistent with common certain belief of full admissible consistency. This demonstrates that no extraneous restrictions on beliefs is needed for the characterization in Proposition 41 of the concept of fully permissible sets, which in G′1 promotes only the forward induction outcome (InL, ℓ) (cf. the analysis of G′1 in Sections 11.2 and 11.4).
Weakening robust belief of opponent rationality. By applying the belief operator Bi, as defined in Section 6.1, we can weaken B⁰1[rat2] ∩ B⁰2[rat1] (i.e., robust belief of opponent rationality) to B1[rat2] ∩ B2[rat1] (i.e., belief of opponent rationality). Moreover, we can weaken B̂⁰1[rat2] ∩ B̂⁰2[rat1] to B̂1[rat2] ∩ B̂2[rat1], where for each i,

B̂i[ratj] := {(s1, t1, s2, t2) ∈ S1 × T1 × S2 × T2 | β^{ti} ⊆ proj_{Ti×Sj×Tj}[ratj], and p ≻^{ti} q only if p_{Ej} weakly dominates q_{Ej} for Ej = proj_{Sj×Tj} β^{ti} or Ej = proj_{Sj×Tj} κ^{ti}}.

Relative to B1[rat2] ∩ B2[rat1], B̂1[rat2] ∩ B̂2[rat1] is obtained by imposing minimal completeness, which in the context of belief of opponent rationality yields the requirement of no extraneous restrictions on beliefs.
To impose caution and belief of opponent rationality, recall from Section 6.3 that A = A1 ∩ A2 is the event of admissible consistency, where, for each i,

Ai = [ui] ∩ Bi[ratj] ∩ [caui].

To add no extraneous restrictions on beliefs, consider for each i,

Ãi := [ui] ∩ B̂i[ratj] ∩ [caui],

and write Ã := Ã1 ∩ Ã2. Since Ã ⊆ A, the following proposition implies that permissibility (i.e., the Dekel-Fudenberg procedure; see Definition 13) is characterized if robust belief of opponent rationality is weakened to belief of opponent rationality, independently of whether a requirement of no extraneous restrictions on beliefs is retained. This result, which is a strengthening of Proposition 27 and is proven in Appendix C, shows that robust belief of opponent rationality is needed for the characterization in Proposition 41 of the concept of fully permissible sets.

Proposition 43 Consider a finite strategic two-player game G. If a pure strategy si for i is permissible, then there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKÃ. A pure strategy si for i is permissible if there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKA.
Removing caution. Recall from Section 6.2 that C = C1 ∩ C2 is the event of consistency, where, for each i,

Ci = [ui] ∩ Bi[ratj].

To add no extraneous restrictions on beliefs and robust belief of opponent rationality, consider for each i,

C̃i := [ui] ∩ B̂⁰i[ratj],

and write C̃ := C̃1 ∩ C̃2. Since C̃ ⊆ C, the following strengthening of Proposition 25 means that the removal of caution leads to a characterization of rationalizability (i.e., IESDS; see Definition 11), independently of whether extraneous restrictions on beliefs are allowed and robust belief of opponent rationality is weakened. Thus, caution is necessary for the characterization in Proposition 41.

Proposition 44 Consider a finite strategic two-player game G. If a pure strategy si for i is rationalizable, then there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKC̃. A pure strategy si for i is rationalizable if there exists an epistemic model with si ∈ Si^{ti} for some (t1, t2) ∈ proj_{T1×T2} CKC.

The proof of this result is also contained in Appendix C.

11.4 Investigating examples

The present section illustrates the concept of fully permissible sets by returning to the previously discussed games G′1 and G8. Here, G′1 will serve to show how our concept captures aspects of forward induction, while G8 will be used to interpret the occurrence of multiple fully permissible sets.
The two examples will be used to shed light on the differences between, on the one hand, the approach suggested here and, on the other hand, IEWDS as characterized by Stahl (1995): A strategy survives IEWDS if and only if it is a best response to a belief where one strategy is infinitely less likely than another if the former is eliminated at an earlier round than the latter.4
Forward induction. Reconsider G′1 of Figure 2.6, and apply our algorithm IECFA to this battle-of-the-sexes with an outside option game. Since InR is a dominated strategy, InR cannot be an element of 1's choice set. This does not imply, as in the procedure of IEWDS (given Stahl's, 1995, characterization), that 2 deems InL infinitely more likely than InR. However, 2 certainly believes that only {Out}, {InL} and {Out, InL} are candidates for 1's choice set. This excludes {r} as 2's choice set, since {r} is 2's choice set only if 2 deems {InR} or {Out, InR} possible. This in turn means that 1 certainly believes that only {ℓ} and {ℓ, r} are candidates for 2's choice set, implying that {Out} cannot be 1's choice set. Certainly believing that only {InL} and {Out, InL} are candidates for 1's choice set does imply that 2 deems InL infinitely more likely than InR. Hence, 2's choice set is {ℓ} and, therefore, 1's choice set is {InL}. Thus, the forward induction outcome (InL, ℓ) is promoted.
To show how common certain belief of the event A⁰ is consistent with the fully permissible sets {InL} and {ℓ}, and thus illustrate Proposition 41, consider an epistemic model with only one type of each player; i.e., T1 × T2 = {t1} × {t2}. Let, for each i, ≽^{ti} satisfy that υi^{ti} ∘ z = ui. Also, let

β^{t1} = {t1} × {ℓ} × {t2}        κ^{t1} = {t1} × S2 × {t2}
β^{t2} = {t2} × {InL} × {t1}      κ^{t2} = {t2} × S1 × {t1}.

Finally, let for each i, p ≻^{ti} q if and only if p_{Ej} weakly dominates q_{Ej} for Ej = proj_{Sj×Tj} β^{ti} or Ej = proj_{Sj×Tj} κ^{ti}. Then

S1^{t1} = {InL}        S2^{t2} = {ℓ}.

Inspection will verify that CKA⁰ = A⁰ = S1 × T1 × S2 × T2.


Multiple fully permissible sets. Let us also return to G8 of Figure
11.1, where IEWDS eliminates D in the first round, r in the second
round, and M in the third round, so that U and ` survive. Stahls
(1995) characterization of IEWDS entails that 2 deems each of U and
M infinitely more likely than D. Hence, the procedure forces 2 to deem
4 Cf.

Brandenburger and Keisler (2002, Theorem 1) as well as Battigalli (1996a) and Rajan
(1998). See also Bicchieri and Schulte (1997), who give conceptually related interpretations
of IEWDS.

151

Capturing forward induction through full permissibility

M infinitely more likely than D for the sole reason that D is eliminated
before M , even though both M and D are eventually eliminated by the
procedure.
Applying our algorithm IECFA yields the following result. Since D is a weakly dominated strategy, D cannot be an element of 1's choice set. Hence, 2 certainly believes that only {U}, {M} and {U, M} are candidates for 1's choice set. This excludes {r} as 2's choice set, since {r} is 2's choice set only if 2 deems {D} or {U, D} possible. This in turn means that 1 certainly believes that only {ℓ} and {ℓ, r} are candidates for 2's choice set, implying that {M} cannot be 1's choice set. There is no further elimination. This means that 1's collection of fully permissible sets is {{U}, {U, M}} and 2's collection of fully permissible sets is {{ℓ}, {ℓ, r}}. Thus, common certain belief of full admissible consistency implies that 2 deems U infinitely more likely than D, since U (respectively, D) is an element of any (respectively, no) fully permissible set for 1. However, whether 2 deems M infinitely more likely than D depends on the type of player 2.
To show how common certain belief of the event A⁰ is consistent with the collections of fully permissible sets {{U}, {U, M}} and {{ℓ}, {ℓ, r}}, and thus illustrate Proposition 41 also in the case of G8, consider an epistemic model with two types of each player; i.e., T1 × T2 = {t′1, t″1} × {t′2, t″2}. Let, for each type ti of any player i, ≽^{ti} satisfy that υi^{ti} ∘ z = ui. Moreover, let

κ^{t′1} = {t′1} × S2 × {t′2}        β^{t′1} = {t′1} × {ℓ} × {t′2}
κ^{t″1} = {t″1} × S2 × T2          β^{t″1} = {t″1} × {(ℓ, t′2), (ℓ, t″2), (r, t″2)}
κ^{t′2} = {t′2} × S1 × T1          β^{t′2} = {t′2} × {(U, t′1), (U, t″1), (M, t″1)}
κ^{t″2} = {t″2} × S1 × {t′1}       β^{t″2} = {t″2} × {U} × {t′1}.

Finally, let for each type ti of any player i, p ≻^{ti} q if and only if p_{Ej} weakly dominates q_{Ej} for Ej = proj_{Sj×Tj} β^{ti} or Ej = proj_{Sj×Tj} κ^{ti}. Then

S1^{t′1} = {U}     S1^{t″1} = {U, M}     S2^{t′2} = {ℓ}     S2^{t″2} = {ℓ, r}.

Inspection will verify that CKA⁰ = A⁰ = S1 × T1 × S2 × T2.


Our analysis of G8 allows a player to deem an opponent choice set to be subjectively impossible even when it is the true choice set of the opponent. E.g., at (t′1, t″2), player 1 deems it subjectively impossible that player 2's choice set is {ℓ, r} even though this is the true choice set of player 2. Likewise, at (t″1, t″2), player 2 deems it subjectively impossible that player 1's choice set is {U, M} even though this is the true choice set of player 1. This is an unavoidable feature of this game, as there exists no pair of non-empty strategy subsets (Y1, Y2) such that Y1 = S1\D1(Y2) and Y2 = S2\D2(Y1). It implies that under full admissible consistency we cannot have in G8 that each player is certain of the true choice set of the opponent.
Multiplicity of fully permissible sets arises also in the strategic form of
certain extensive games in which the application of backward induction
is controversial, e.g., the centipede game Γ′3 illustrated in Figure 2.4.
For more on this, see Chapter 12 where the concept of fully permissible
sets is used to analyze extensive games.

11.5 Related literature

It is instructive to explain how our analysis differs from the epistemic foundations of IEWDS and EFR provided by Brandenburger and Keisler (2002) (BK) and Battigalli and Siniscalchi (2002) (BS), respectively. It is of minor importance for the comparison that EFR makes use of the extensive form, while the present analysis is performed in the strategic form. The reason is that, by caution, a rational choice in the whole game implies a rational choice at all information sets that are not precluded from being reached by the player's own strategy (cf. Lemma 11). To capture forward induction players must essentially deem any opponent strategy that is a rational choice infinitely more likely than any opponent strategy not having this property. An analysis incorporating this feature must involve a non-monotonic epistemic operator, which is called robust belief in the present analysis (cf. Section 11.3), while the corresponding operators are called "assumption" and "strong belief" by BK and BS, respectively (see Chapter 4 for an analysis of the relationship between these non-monotonic operators).
We use robust belief only to define the event that the preferences of
each player is fully admissibly consistent with the preferences of his
opponent, while the monotonic certain belief operator is used for the
interactive epistemology:
each player certainly believes (in the sense of deeming the complement
subjectively impossible) that the preferences of his opponent are fully
admissibly consistent,
each player certainly believes that his opponent certainly believes
that he himself has preferences that are fully admissibly consistent,
and so on. As the examples of Section 11.4 illustrate, it is here a central question what opponent types (choice sets) a player deems subjectively possible. Consequently, the certain belief operator is appropriate for the interactive epistemology.
In contrast, BK and BS use their non-monotonic operators for the interactive epistemology. In the process of defining higher order beliefs both BK and BS impose that lower order beliefs are maintained. This is precisely how BK obtain Stahl's (1995) characterization, which, e.g., in G8 of Figure 11.1, seems to correspond to extraneous and hard-to-justify restrictions on beliefs.
Stahl's characterization provides an interpretation of IEWDS where strategies eliminated in the first round are completely irrational, while strategies eliminated in later rounds are at intermediate degrees of rationality. Likewise, Battigalli (1996a) has shown how EFR corresponds to the "best rationalization principle", entailing that some opponent strategies are neither completely rational nor completely irrational. The present analysis, in contrast, differentiates only between whether a strategy is maximal (i.e., a rational choice) or not. As the examples of Section 11.4 illustrate, although a strategy that is weakly dominated on the set of all opponent strategies is a "stupid" choice, it need not be more "stupid" than any remaining admissible strategy, as this depends on the interactive analysis of the game.
The fact that a non-monotonic epistemic operator is involved when
capturing forward induction also means that the analysis must ensure
that all rational choices for the opponent are included in the epistemic
model. BK and BS ensure this by employing preference-complete epistemic models, where all possible epistemic types of each player are represented. Instead, the present analysis achieves this by requiring no
extraneous restrictions on beliefs, meaning that the preferences are minimally complete (cf. Section 11.3). Since an ordinary monotonic operator is used for the interactive epistemology, there is no more need for
a preference-complete epistemic model here than in usual epistemic
analyses of rationalizability and permissibility.
Our analysis has a predecessor in Samuelson (1992), who also presents an epistemic analysis of admissibility that leads to a collection of sets for each player, called a generalized consistent pair. Samuelson requires that a player's choice set equals the set of strategies that are not weakly dominated on the union of choice sets that are deemed possible for the opponent; this implies our requirements of robust belief of opponent rationality and no extraneous restrictions on beliefs (cf. Samuelson, 1992, p. 311). However, he does not require that each player deems no opponent strategy impossible, as implied by our requirement of caution. Hence, his analysis does not yield {{U, M}} × {{ℓ}} in G9 of Figure 11.2. Furthermore, he defines possibility relative to a knowledge operator that satisfies the truth axiom, while our analysis, as illustrated by the discussion of G8 in Section 11.4, allows a player to deem an opponent choice set to be subjectively impossible even when it is the true choice set of the opponent. This explains why we, in contrast to Samuelson, obtain general existence (cf. Proposition 39(i)).
If each player is certain of the true choice set of the opponent, one obtains a consistent pair as defined by Börgers and Samuelson (1992), a concept that need not exist even when a generalized consistent pair exists. Ewerhart (1998) modifies the concept of a consistent pair by adding caution. However, since he allows extraneous restrictions on beliefs to ensure general existence, his concept of a modified consistent pair does not promote forward induction in G′1. A self-admissible set in the terminology of Brandenburger and Friedenberg (2003) is a Cartesian product of strategy subsets, where each player's subset consists of strategies that are weakly dominated neither on the subset of opponent strategies nor on the set of all opponent strategies. Also Brandenburger and Friedenberg allow extraneous restrictions on beliefs. Hence, modified consistent pairs and self-admissible sets need not correspond to profiles of fully permissible sets. However, if there is a unique fully permissible set for each player, then the pair constitutes both a modified consistent pair and a self-admissible set. Basu and Weibull's (1991) "tight curb*" set is another variant of a consistent pair that ensures existence without yielding forward induction in G′1, as they impose caution but weaken robust belief of opponent rationality to belief of opponent rationality. In particular, the set of permissible strategy profiles is tight curb*.
Caution and robust belief of opponent rationality are admissibility requirements on the preferences of players, thus positioning the analysis of the present chapter in the consistent preferences approach. Moreover, by imposing no extraneous restrictions on beliefs as a requirement of minimal completeness, preferences are not in general representable by subjective probabilities, thus showing the usefulness of an analysis that relaxes completeness.5

5 By not employing subjective probabilities, the analysis is related to the filter model of beliefs presented by Brandenburger (1997, 1998).

Chapter 12
APPLYING FULL PERMISSIBILITY
TO EXTENSIVE GAMES

In many economic contexts decision makers interact and take actions


that extend through time. A bargaining party makes an offer, which
is observed by the adversary, and accepted, rejected or followed by a
counter-offer. Firms competing in markets choose prices, levels of advertisement, or investments with the intent of thereby influencing the future
behavior of competitors. One could add many examples. The standard
economic model for analyzing such situations is that of an extensive
game. Reproducing joint work with Martin Dufwenberg, cf. Asheim and
Dufwenberg (2003b), this chapter revisits a question that was already
posed in Chapters 710: What happens in an extensive game if players
reason deductively by trying to figure out one anothers moves? We have
in Asheim and Dufwenberg (2003a), incorporated in Chapter 11 of this
book, proposed a model for deductive reasoning leading to the concept
of fully permissible sets, which can be applied to many strategic situations. In the present chapter we argue that the model is appropriate for
analyzing extensive games and we apply it to several such games.

12.1 Motivation

There is already a literature exploring the implications of deductive


reasoning in extensive games, but the answers provided differ and the
issue is controversial. Much of the excitement concerns whether or not
deductive reasoning implies backward induction in games where that
principle is applicable. We next discuss this issue, since it provides a
useful backdrop against which to motivate our own approach.

Γ11 is a three-stage take-it-or-leave-it game: player 1 first chooses between D, ending the game with payoffs (1, 0), and forgoing; player 2 then chooses between d, with payoffs (0, 2), and forgoing; player 1 finally chooses between D, with payoffs (3, 0), and forgoing, with payoffs (0, 3). Its pure strategy reduced strategic form is:

          d       f
  D     1, 0    1, 0
  FD    0, 2    3, 0
  FF    0, 2    0, 3

Figure 12.1. Γ11 and its pure strategy reduced strategic form.

Consider the 3-stage take-it-or-leave-it game, introduced by Reny (1993) (a version of Rosenthal's, 1981, centipede game; see Γ′3 of Figure 2.4), shown in Figure 12.1 together with its pure strategy reduced strategic form.1 What would 2 do in Γ11 if called upon to play? Backward induction implies that 2 would choose d, which is consistent with the following idea: 2 chooses d because she figures out that 1 would choose D at the last node. Many models of deductive reasoning support this story, starting with Bernheim's concept of subgame rationalizability and Pearce's concept of extensive form rationalizability (EFR). More recently, Battigalli and Siniscalchi (2002) provide a rigorous epistemic foundation for EFR, while Chapters 7-10 of this book epistemically model rationalizability concepts that resemble subgame rationalizability.
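Purely as an illustration of the reasoning these models support, the following minimal Python sketch (our own code; the node labels "1a", "2", "1b" are hypothetical) computes the backward induction plan of Γ11 from the tree transcribed from Figure 12.1: D at 1's last node, hence d for 2, hence D at 1's first node.

def backward_induction(node):
    # A node is either a terminal payoff pair (u1, u2) or a triple
    # (name, player, actions), with player in {0, 1} and actions a dict
    # mapping action labels to successor nodes.
    if len(node) == 2:
        return node, {}
    name, player, actions = node
    values, plan = {}, {}
    for action, child in actions.items():
        values[action], subplan = backward_induction(child)
        plan.update(subplan)                 # keep choices at all subnodes
    best = max(values, key=lambda a: values[a][player])
    plan[name] = best
    return values[best], plan

GAME = ("1a", 0, {"D": (1, 0),
                  "F": ("2", 1, {"d": (0, 2),
                                 "f": ("1b", 0, {"D": (3, 0),
                                                 "F": (0, 3)})})})
print(backward_induction(GAME))
# expected: ((1, 0), {'1b': 'D', '2': 'd', '1a': 'D'})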
However, showing that backward induction can be given some kind of underpinning does not imply that the underpinning is convincing. Indeed, skepticism concerning backward induction can be expressed by means of Γ11. Suppose that each player believes the opponent will play in accordance with backward induction; i.e., 1 believes that 2 chooses d if asked to play, and 2 believes that 1 plays D at his initial node. Then player 1 prefers playing D to either of his two other strategies FD and FF. Moreover, if 2 is certain that 1 believes that 2 chooses d if she were asked to play, then 2 realizes that 1 has not chosen in accordance with his preferences if she after all is asked to play. Why then should 2 believe that 1 will make any particular choice between his two less preferred strategies, FD and FF, at his last node? So why then should 2 prefer d to f?
This kind of perspective on the take-it-or-leave-it game is much inspired by the approach proposed by Ben-Porath (1997), where similar objections against backward inductive reasoning are raised. We shall discuss his contribution in some detail, since the key features of our approach can be appreciated via a comparison to his model. Applied to Γ11, Ben-Porath's model captures the following intuition: Each player has an initial belief about the opponent's behavior. If this belief is contradicted by the play (a surprise occurs), he may subsequently entertain any belief consistent with the path of play. The only restriction imposed on updated beliefs is Bayes' rule. In Γ11, Ben-Porath's model allows player 2 to make any choice. In particular, 2 may choose f if she initially believes with probability one that player 1 will choose D, and conditionally on D not being chosen assigns sufficient probability to FF. This entails that if 2 initially believes that 1 will comply with backward induction, then 2 need not follow backward induction herself.

1 We need not consider what players plan to do at decision nodes that their own strategy precludes them from reaching (cf. Section 12.2).
In Γ11, our analysis captures much the same intuition as Ben-Porath's approach, and it has equal cutting power in this game. However, it yields a more structured solution, as it is concerned with which strategy subsets are deemed to be the set of rational choices for each player. While agreeing with Ben-Porath that deductive reasoning may lead to each of D and FD being rational for 1 and each of d and f being rational for 2, our concept of full permissibility predicts that 1's set of rational choices is either {D} or {D, FD}, and 2's set of rational choices is either {d} or {d, f}. This has appealing features. If 2 is certain that 1's set is {D}, then, unless 2 has an assessment of the relative likelihood of 1's less preferred strategies FD and FF, one cannot conclude that 2 prefers d to f or vice versa; this justifies {d, f} as 2's set of rational choices. On the other hand, if 2 considers it possible that 1's set is {D, FD}, then d weakly dominates f on this set and justifies {d} as 2's set of rational choices. Similarly, one can justify that D is preferred to FD if and only if 1 considers it impossible that 2's set is {d, f}.
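Under the same hypothetical encoding as in Chapter 11, this prediction can be checked by running the IECFA sketch on the reduced strategic form of Figure 12.1:

S1, S2 = ["D", "FD", "FF"], ["d", "f"]
U1 = [[1, 1], [0, 3], [0, 0]]        # player 1's payoffs in Figure 12.1
U2 = [[0, 2, 2], [0, 0, 3]]          # player 2's payoffs, rows d and f
print(iecfa(U1, U2, S1, S2))
# Per the text: player 1's collection is {D} and {D, FD};
# player 2's collection is {d} and {d, f}.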
This additional structure is important for the analysis of Γ′6, illustrated in Figure 8.2. This game is due to Reny (1992, Figure 1) and has appeared in many contributions. Suppose in this game that each player believes the opponent will play in accordance with backward induction by choosing FF and f, respectively. Then both players will prefer FF and f to any alternative strategy. Moreover, as will be shown in Section 12.3, our analysis implies that {FF} and {f} are the unique sets of rational choices.
Ben-Porath's approach, by contrast, does not have such cutting power in Γ′6, as it entails that deductive reasoning may lead to each of the strategies D and FF being rational for 1 and each of the strategies d and f being rational for 2. The intuition for why the strategies D and d are admitted is as follows: D is 1's unique best strategy if he believes with probability one that 2 plays d. Player 1 is justified in this belief in the sense that d is 2's best strategy if she initially believes with probability one that 1 will choose D, and if called upon to play 2 revises this belief so as to believe with sufficiently high probability (e.g., probability one) that 1 is using FD. This belief revision is consistent with Bayes' rule, and so is acceptable.
Ben-Porath's approach is a very important contribution to the literature, since it is a natural next step if one accepts the above critique of backward induction. Yet we shall argue below that it is too permissive, using Γ′6 as an illustration. Assume that 1 deems d infinitely more likely than f, while 2 deems D infinitely more likely than FD and FD infinitely more likely than FF. Then the players rank their strategies as follows:

1's preferences: D ≻ FF ≻ FD
2's preferences: d ≻ f

This is in fact precisely the justification of the strategies D and d given above when applying Ben-Porath's approach to Γ′6. Here, caution is satisfied, since all opponent strategies are taken into account; in particular, FF is preferred to FD as the former strategy weakly dominates the latter. Moreover, robust belief of opponent rationality is satisfied, since each player deems the opponent's maximal strategy infinitely more likely than any non-maximal strategy. However, the requirement of no extraneous restrictions on beliefs, as described in Chapter 11, is not satisfied, since the preferences of 2 introduce extraneous restrictions on beliefs by deeming one of 1's non-maximal strategies, FD, infinitely more likely than another non-maximal strategy, FF. When we return to Γ′6 in Section 12.3, we show how the additional imposition of no extraneous restrictions on beliefs means that deductive reasoning leads to the conclusion that {FF} and {f} are the players' choice sets in this game.
As established in Chapter 11, our concept of fully permissible sets is
characterized by caution, robust belief of opponent rationality, and
no extraneous restrictions on beliefs. In Section 12.2 we prove results
that justify the claim that interesting implications of deductive reasoning
in a given extensive game can be derived by applying this concept to the
strategic form of that game.
Sections 12.3 and 12.4 are concerned with such applications, with the
aim of showing how our solution concept gives new and economically relevant insights into the implications of deductive reasoning in extensive
games. The material is organized around two central themes: backward
and forward induction. Other support for forward induction, through
the concept of EFR and the procedure of IEWDS, precludes outcomes
in conflict with backward induction; see, e.g., Battigalli (1997). In contrast, we will show how the concept of fully permissible sets promotes
forward induction in the battle-of-the-sexes with an outside option and
burning money games as well as an economic application from organization theory, while not insisting on the backward induction outcome
in games (like Γ11 and the 3-period prisoners' dilemma) where earlier
contributions, like Basu (1990), Reny (1993) and others, have argued on
theoretical grounds that this is problematic. Still, we will show that the
backward induction outcome is obtained in Γ′6, and that our concept
has considerable bite in the 3-period prisoners' dilemma game.
Lastly, in Section 12.5 we compare our approach to related work.

12.2 Justifying extensive form application

The concept of fully permissible sets, presented and epistemically


characterized in Chapter 11 of this book, is designed to analyze the implications of deductive reasoning in strategic form games. In this chapter,
we propose that this concept can be fruitfully applied for analyzing any
extensive game through its strategic form. In fact, we propose that it
is legitimate to confine attention to the game's pure strategy reduced
strategic form (cf. Definition 23 below), which is computationally more
convenient. In this section we prove two results which, taken together,
justify such applications.
An extensive game. A finite extensive two-player game Γ (without
chance moves) includes a set of terminal nodes Z and, for each player i,
a vNM utility function υi : Z → R that assigns a payoff to any outcome.
For each player i, there is a finite collection of information sets Hi, with
a finite set of actions A(h) being associated with each h ∈ Hi. A pure
strategy for player i is a function si that to any h ∈ Hi assigns an action
in A(h). Let Si denote player i's finite set of pure strategies, and let
S = S1 × S2. As before, write si (∈ Si) for pure strategies and pi and
qi (∈ Δ(Si)) for mixed strategies. Define ui : S → R by ui = υi ∘ z, and
refer to G = (S1, S2, u1, u2) as the strategic form of the extensive game
Γ. For any h ∈ H1 ∪ H2, let S(h) = S1(h) × S2(h) denote the set of
strategy profiles that are consistent with h being reached.
Weak sequential rationality. Consider any strategy that is maximal given preferences that satisfy that one strategy is preferred to another if and only if the one weakly dominates the other on Yj (the
set of strategies that player i deems to be the set of rational choices for
his opponent) or on Sj (the set of all opponent strategies). Hence, the
strategy is maximal at the outset of a corresponding extensive game.
Corollary 2 makes the observation that this strategy is still maximal
when the preferences have been updated upon reaching any information
set that the choice of this strategy does not preclude.
Assume that player i's preferences over his own strategies satisfy that
pi is preferred to qi if and only if pi weakly dominates qi on Yj or Sj.
Let, for any h ∈ Hi, Yj(h) := Yj ∩ Sj(h) denote the set of strategies
in Yj that are consistent with the information set h being reached. If
pi, qi ∈ Δ(Si(h)), then i's preferences conditional on the information
set h ∈ Hi being reached satisfy that pi is preferred to qi if and only
if pi weakly dominates qi on Yj(h) or Sj(h) (where it follows from the
definition that weak dominance on Yj(h) is not possible if Yj(h) = ∅).
Furthermore, i's choice set conditional on h ∈ Hi, Si^{Yj}(h), is given by

Si^{Yj}(h) := Si(h) \ {si ∈ Si(h) | ∃xi ∈ Δ(Si(h)) s.t. xi weakly dominates si on Yj(h) or Sj(h)}.

Write Si^{Yj} := Si^{Yj}(∅) (= Si\Di(Yj) in earlier notation). By the result
below, if si is maximal at the outset of an extensive game, then it is also
maximal at later information sets for i that si does not preclude.

Corollary 2 Let (∅ ≠) Yj ⊆ Sj. If si ∈ Si^{Yj}, then si ∈ Si^{Yj}(h) for
any h ∈ Hi with si ∈ Si(h).
Proof. This follows from Lemma 11 by letting i's preferences (at ti)
on i's set of mixed strategies satisfy that pi is preferred to qi if and only
if pi weakly dominates qi on Yj or Sj.
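For readers who wish to compute such choice sets directly, the following sketch (ours, not part of the formal development; the payoff matrix and index sets are to be supplied by the user) tests weak dominance by a mixed strategy with a small linear program and returns Si^{Yj}(h) for given Yj(h) and Sj(h).

```python
# Minimal sketch (assumption: payoffs given as a numeric matrix U[own, opp]).
# Computes S_i^{Y_j}(h): strategies in S_i(h) not weakly dominated, within the
# mixed strategies over S_i(h), on Y_j(h) or on S_j(h).
import numpy as np
from scipy.optimize import linprog

def weakly_dominated_on(U, s, rows, cols):
    """True if some mixture of the strategies indexed by `rows` weakly dominates
    pure strategy `s` against every opponent strategy index in `cols`."""
    if len(cols) == 0:
        return False            # weak dominance on an empty set is not possible
    A = U[np.ix_(rows, cols)]                  # payoffs of candidate dominators
    b = U[s, cols].astype(float)               # payoffs of the strategy under test
    n = len(rows)
    c = -A.sum(axis=1)                         # maximise total advantage over b
    res = linprog(c, A_ub=-A.T, b_ub=-b,       # A^T x >= b componentwise
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n, method="highs")
    # dominated iff a feasible mixture attains strictly positive total slack
    return res.status == 0 and -res.fun > b.sum() + 1e-9

def conditional_choice_set(U, Si_h, Yj_h, Sj_h):
    """S_i^{Y_j}(h): remove s_i weakly dominated on Y_j(h) or on S_j(h)."""
    return [s for s in Si_h
            if not weakly_dominated_on(U, s, Si_h, Yj_h)
            and not weakly_dominated_on(U, s, Si_h, Sj_h)]

# Example with a hypothetical 3x2 payoff matrix for player i:
# U = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
# conditional_choice_set(U, [0, 1, 2], [0], [0, 1])  -> [0]
```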
By the assumption of caution, each player i takes into account the
possibility of reaching any information set for i that the player's own
strategy does not preclude from being reached. Hence, rationality implies weak sequential rationality; i.e., that a player chooses rationally
at all information sets that are not precluded from being reached by the
player's own strategy.
Reduced strategic form. It follows from Proposition 45 below that
it is in fact sufficient to consider the pure strategy reduced strategic
form when deriving the fully permissible sets of the game. The following
definition is needed.


Definition 23 Let G = (S1, S2, u1, u2) be a finite strategic two-player
game. The pure strategies si and s′i (∈ Si) are equivalent if, for each
player k, uk(s′i, sj) = uk(si, sj) for all sj ∈ Sj. The pure strategy reduced
strategic form (PRSF) of G is obtained by letting, for each player i,
each class of equivalent pure strategies be represented by exactly one
pure strategy.
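Definition 23 amounts to a mechanical reduction: group pure strategies that give both players identical payoffs against every opponent strategy, and keep one representative of each class. A minimal sketch of this reduction, assuming the payoffs are supplied as numeric matrices, is as follows.

```python
# Sketch (illustration only): collapse payoff-equivalent pure strategies of
# player 1.  u1[s1][s2] and u2[s1][s2] are the two players' payoff matrices.
import numpy as np

def prsf_rows(u1, u2):
    """Indices of one representative per equivalence class of player 1's pure
    strategies; s1 and s1' are equivalent if both players' payoffs agree
    against every s2."""
    reps = []
    for s1 in range(u1.shape[0]):
        if not any(np.array_equal(u1[s1], u1[r]) and np.array_equal(u2[s1], u2[r])
                   for r in reps):
            reps.append(s1)
    return reps

# Applying the same function to the transposed matrices reduces player 2's
# strategies; performing both reductions yields the PRSF.
```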
Since the maximality of one of two equivalent strategies implies that
the other is maximal as well, the following observation holds: If si and
s′i are equivalent and πi is a fully permissible set for i, then si ∈ πi if
and only if s′i ∈ πi. To see this formally, note that if si ∈ πi for some
fully permissible set πi, then, by Proposition 39(ii), there exists (∅ ≠)
Ψj ⊆ Πj such that si ∈ πi = Si^{Yj} for Yj = ∪_{π′j ∈ Ψj} π′j. Since si and s′i are
equivalent, s′i ∈ Si^{Yj} = πi. This observation explains why the following
result can be established.
Proposition 45 Let G̃ = (S̃1, S̃2, ũ1, ũ2) be a finite strategic two-player
game where si and s′i are two equivalent strategies for i. Consider G =
(S1, S2, u1, u2) where Si = S̃i\{s′i} and Sj = S̃j for j ≠ i, and where, for
each player k, uk is the restriction of ũk to S = S1 × S2. Let, for each
player k, Π̃k (Πk) denote the collection of fully permissible sets for k in
G̃ (G). Then Πi is obtained from Π̃i by removing s′i from any π̃i ∈ Π̃i
with si ∈ π̃i, while, for j ≠ i, Πj = Π̃j.
Proof. By Proposition 39(iii) it suffices to show that
1 If (∗) holds for G̃, then (∗∗) holds for G, where Πi is obtained from
Π̃i by removing s′i from any π̃i ∈ Π̃i with si ∈ π̃i, while, for j ≠ i,
Πj = Π̃j.
2 If (∗∗) holds for G, then (∗) holds for G̃, where Π̃i is obtained from
Πi by adding s′i to any πi ∈ Πi with si ∈ πi, while, for j ≠ i, Π̃j = Πj.
Part 1. Assume (∗). By the observation preceding Proposition
45, if π̃i ∈ Π̃i, then si ∈ π̃i if and only if s′i ∈ π̃i. Pick any player k
and any π̃k ∈ Π̃k. Let ℓ denote the other player. By the definition of
Π̃k (∗), there exists (∅ ≠) Ψ̃ℓ ⊆ Π̃ℓ such that π̃k = S̃k^{Ỹℓ} for Ỹℓ = ∪_{π̃′ℓ ∈ Ψ̃ℓ} π̃′ℓ.
Construct πi by removing s′i from any π̃i ∈ Π̃i with si ∈ π̃i and replace
S̃i by Si, while, for j ≠ i, πj = π̃j and Sj = S̃j. Let Yℓ = ∪_{π′ℓ ∈ Ψℓ} π′ℓ.
Then it follows from the definition of Sk^{Yℓ} that Sk^{Yℓ} = π̃k\{s′k} if k = i
and Sk^{Yℓ} = π̃k if k ≠ i. Since, for each player k, (∅ ≠) πk ∈ Πk, we have
that (∗∗). Part 2 is shown similarly.


Proposition 45 means that the PRSF is sufficient for analyzing common certain belief of full admissible consistency, which is the epistemic
foundation for the concept of fully permissible sets. Consequently, in the
strategic form of an extensive game, it is unnecessary to specify actions
at information sets that a strategy precludes from being reached. Hence,
instead of fully specified strategies, it is sufficient to consider what Rubinstein (1991) calls plans of action. For a generic extensive game, the
set of plans of action is identical to the strategy set in the PRSF.
In the following two sections we apply the concept of fully permissible
sets to extensive games. We organize the discussion around two themes:
backward and forward induction. Motivated by Corollary 2 and Proposition 45, we analyze each extensive game via its PRSF (cf. Definition 23),
given in conjunction with the extensive form. In each example, each plan
of action that appears in the underlying extensive game corresponds to
a distinct strategy in the PRSF.

12.3 Backward induction

Does deductive reasoning in extensive games imply backward induction? In this section we show that the answer provided by the concept
of fully permissible sets is sometimes, but not always.
Sometimes. There are many games where Ben-Porath's approach
does not capture backward induction while our approach does (and the
converse is not true). Ben-Porath (1997) assumes initial common certainty of rationality in extensive games of perfect information. As discussed in Chapter 7 he proves that in generic games (with no payoff
ties at terminal nodes for any player) the outcomes consistent with that
assumption coincide with those that survive the Dekel-Fudenberg procedure (where one round of elimination of all weakly dominated strategies
is followed by iterated elimination of strongly dominated strategies).
It is a general result that the concept of fully permissible sets refines
the Dekel-Fudenberg procedure (cf. Proposition 40). Game Γ′6 of Figure
8.2 shows that the refinement may be strict even for generic extensive
games with perfect information, and indeed that fully permissible sets
may respect backward induction where Ben-Porath's solution does not.
The strategies surviving the Dekel-Fudenberg procedure, and thus consistent with initial common certainty of rationality, are D and FF for
player 1 and d and f for player 2. In Section 12.2 we gave an intuition for


why the strategies D and d are possible. This is, however, at odds with
the implications of common certain belief of full admissible consistency.
Applying IECFA to the PRSF of Γ′6 of Figure 8.2 yields:
Ξ(0) = Σ1 × Σ2
Ξ(1) = {{D}, {FF}, {D, FF}} × Σ2
Ξ(2) = {{D}, {FF}, {D, FF}} × {{f}, {d, f}}
Ξ(3) = {{FF}, {D, FF}} × {{f}, {d, f}}
Ξ(4) = {{FF}, {D, FF}} × {{f}}
Π = Ξ(5) = {{FF}} × {{f}}
Interpretation: (1): Caution implies that FD cannot be a maximal strategy (i.e., an element of a choice set) for 1 since it is weakly
dominated (in fact, even strongly dominated). (2): Player 2 certainly
believes that only {D}, {FF} and {D, FF} are candidates for 1's choice
set. By robust belief of opponent rationality and no extraneous restrictions on beliefs this excludes {d} as 2's choice set, since d weakly
dominates f only on {FD} or {D, FD}. (3): 1 certainly believes that
only {f} and {d, f} are candidates for 2's choice set. By robust belief of
opponent rationality and no extraneous restrictions on beliefs this excludes {D} as 1's choice set, since D weakly dominates FD and FF only
on {d}. (4): Player 2 certainly believes that only {FF} and {D, FF}
are candidates for 1's choice set. By robust belief of opponent rationality this implies that 2's choice set is {f} since f weakly dominates d
on both {FF} and {D, FF}. (5): 1 certainly believes that 2's choice
set is {f}. By robust belief of opponent rationality this implies that
{FF} is 1's choice set since FF weakly dominates D on {f}. No further
elimination of choice sets is possible, so {FF} and {f} are the respective
players' unique fully permissible sets.
Not always. While fully permissible sets capture backward induction in Γ′6 and other games, the concept does not capture backward
induction in certain games where the procedure has been considered controversial.2 The background for the controversy is the following paradoxical aspect: Why should a player believe that an opponent's future
play will satisfy backward induction if the opponent's previous play is
incompatible with backward induction? A prototypical game for casting doubt on backward induction is the take-it-or-leave-it game Γ11 of
Figure 12.1, which we next analyze in detail.
Applying IECFA to the PRSF of Γ11 of Figure 12.1 yields:
Ξ(0) = Σ1 × Σ2
Ξ(1) = {{D}, {FD}, {D, FD}} × Σ2
Ξ(2) = {{D}, {FD}, {D, FD}} × {{d}, {d, f}}
Π = Ξ(3) = {{D}, {D, FD}} × {{d}, {d, f}}

2 See discussion and references in Chapter 7 of the present text.
Interpretation: (1): FF cannot be a maximal strategy for 1 since
it is strongly dominated. (2): Player 2 certainly believes that only
{D}, {FD} and {D, FD} are candidates for 1's choice set. This excludes
{f} as 2's choice set since {f} is 2's choice set only if 2 deems {FF} or
{FD, FF} subjectively possible. (3): 1 certainly believes that only {d}
and {d, f} are candidates for 2's choice set, implying that {FD} cannot
be 1's choice set. No further elimination of choice sets is possible and
the collection of profiles of fully permissible sets is as specified.
Note that backward induction is not implied. To illustrate why, we
focus on player 2 and explain why {d, f } may be a choice set for her.
Player 2 certainly believes that 1's choice set is {D} or {D, FD}. This
leaves room for two basic cases. First, suppose 2 deems {D, FD} subjectively possible. Then {d} must be her choice set, since she must consider
it infinitely more likely that 1 uses FD than that he uses FF. Second,
and more interestingly, suppose 2 does not deem {D, FD} subjectively
possible. Then, conditional on 2's node being reached, 2 certainly believes
that 1 is not choosing a maximal strategy. As player 2 does not assess
the relative likelihood of strategies that are not maximal (cf. the requirement of no extraneous restrictions on beliefs), {d, f } is her choice set
in this case. Even in the case where 2 deems {D} to be the only subjectively possible choice set for 1, she still considers it subjectively possible
that 1 may choose one of his non-maximal strategies FD and FF (cf.
the requirement of caution), although each of these strategies is in this
case deemed infinitely less likely than the unique maximal strategy D.
Applied to (the PRSF of) Γ11, our concept permits two fully permissible sets for each player. How can this multiplicity of fully permissible
sets be interpreted? The following interpretation corresponds to the underlying formalism: The concept of fully permissible sets, when applied
to Γ11, allows for two different types of each player. Consider player 2.
Either she may consider that {D, FD} is a subjectively possible choice
set for 1, in which case her choice set will be {d} so that she complies
with backward induction. Or she may consider {D} to be the only subjectively possible choice set for 1, in which case 2's choice set is {d, f}.
Intuitively, if 2 is certain that 1 is a backward inducter, then 2 need not
be a backward inducter herself! In this game, our model captures an
intuition that is very similar to that of Ben-Porath's model.
Reny (1993) defines a class of belief consistent games, and argues on
epistemic grounds that backward induction is problematic only for games
that are not in this class. It is interesting to note that the game where
our concept of fully permissible sets differs from Ben-Porath's analysis
by promoting backward induction, Γ′6, is belief-consistent. In contrast,
the game where the present concept coincides with his by not yielding
backward induction, Γ11, is not belief-consistent. There are examples of
games that are not belief consistent, where full permissibility still implies
backward induction, meaning that belief consistency is not necessary for
this conclusion. It is, however, an as yet unproven conjecture that
belief consistency is sufficient for the concept of fully permissible sets to
promote backward induction.
We now compare our results to the very different findings of Aumann
(1995), cf. also Section 5 of Stalnaker (1998) as well as Chapter 7 of this
book. In Aumann's model, where it is crucial to specify full strategies
(rather than plans of action), common knowledge of rational choice implies in Γ11 that all strategies for 1 but DD (where he takes a payoff of 1
at his first node and a payoff of 3 at his last node) are impossible. Hence,
it is impossible for 1 to play FD or FF and thereby ask 2 to play. However, in the counterfactual event that 2 is asked to play, she optimizes as
if player 1 at his last node follows his only possible strategy DD, implying that it is impossible for 2 to choose f (cf. Aumann's Sections 4b, 5b,
and 5c). Thus, in Aumann's analysis, if there is common knowledge of
rational choice, then each player chooses the backward induction strategy. By contrast, in our analysis player 2 being asked to play is seen to
be incompatible with 1 playing DD or DF. For the determination of 2's
preference over her strategies it is the relative likelihood of FD versus
FF that is important to her. As seen above, this assessment depends on
whether she deems {D, FD} as a possible candidate for 1's choice set.
Prisoners' dilemma. We close this section by considering a finitely
repeated prisoners' dilemma game. Such a game does not have perfect
information, but it can still be solved by backward induction to find
the unique subgame perfect equilibrium (no one cooperates in the last
period, given this no one cooperates in the penultimate period, etc.).

Figure 12.2. Reduced form of Γ12 (a 3-period prisoners' dilemma game).

            s2^{NT}  s2^{NV}  s2^{NE}  s2^{RT}  s2^{RV}  s2^{RE}
s1^{NT}     7, 7     4, 8     4, 8     5, 5     2, 6     2, 6
s1^{NV}     8, 4     5, 5     5, 5     4, 8     1, 9     1, 9
s1^{NE}     8, 4     5, 5     5, 5     5, 5     2, 6     2, 6
s1^{RT}     5, 5     8, 4     5, 5     3, 3     6, 2     3, 3
s1^{RV}     6, 2     9, 1     6, 2     2, 6     5, 5     2, 6
s1^{RE}     6, 2     9, 1     6, 2     3, 3     6, 2     3, 3

This solution has been taken to be counterintuitive; cf., e.g., Pettit and
Sugden (1989). We consider the case of a 3-period prisoners' dilemma
game (Γ12) and show that, again, the concept of fully permissible sets
does not capture backward induction. However, the fully permissible
sets nevertheless have considerable cutting power. Our solution refines
the Dekel-Fudenberg procedure and generates some special structure
on the choice sets that survive.
The payoffs of the stage game are given as follows, using Aumann's
(1987b, pp. 468-9) description: Each player decides whether he will
receive 1 (defect) or the other will receive 3 (cooperate). There is no
discounting. Hence, the action defect is strongly dominant in the stage
game, but still, each player is willing to cooperate in one stage if this induces the other player to cooperate instead of defect in the next stage. It
follows from Proposition 45 that we need only consider what Rubinstein
(1991) calls plans of action.
There are six plans of action for each player that survive the Dekel-Fudenberg procedure. In any of these, a player always defects in the 3rd
stage, and does not always cooperate in the 2nd stage. The six plans
of action for each player i are denoted si^{NT}, si^{NV}, si^{NE}, si^{RT}, si^{RV} and
si^{RE}, where N denotes that i is nice in the sense of cooperating in the 1st
stage, where R denotes that i is rude in the sense of defecting in the 1st
stage, where T denotes that i plays tit-for-tat in the sense of cooperating
in the 2nd stage if and only if j ≠ i has cooperated in the 1st stage, where
V denotes that i plays inverse tit-for-tat in the sense of defecting in the
2nd stage if and only if j ≠ i has cooperated in the 1st stage, and where
E denotes that i is exploitive in the sense of defecting in the 2nd stage
independently of what j ≠ i has played in the 1st stage. The strategic
form after elimination of all other plans of action is given in Figure
12.2. Note that none of these plans of action are weakly dominated in
the full strategic form.
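The matrix of Figure 12.2 can be recomputed from the stage-game description just given. The following simulation is our own illustration; the string encoding of the six plans of action is an assumption of the sketch, not the book's notation.

```python
# Sketch: recompute the reduced form of Figure 12.2 from the stage game
# ("defect" pays yourself 1, "cooperate" pays the opponent 3; no discounting).
# Plans: first letter N/R = cooperate/defect in stage 1; second letter T/V/E =
# stage-2 rule (tit-for-tat / inverse tit-for-tat / always defect); stage 3 is
# always defect.
PLANS = ["NT", "NV", "NE", "RT", "RV", "RE"]

def stage2_action(rule, opponent_cooperated_in_stage1):
    if rule == "T":
        return "C" if opponent_cooperated_in_stage1 else "D"
    if rule == "V":
        return "D" if opponent_cooperated_in_stage1 else "C"
    return "D"  # "E": exploitive, defect regardless

def stage_payoffs(a1, a2):
    # own defection pays 1 to oneself; own cooperation pays 3 to the opponent
    p1 = (1 if a1 == "D" else 0) + (3 if a2 == "C" else 0)
    p2 = (1 if a2 == "D" else 0) + (3 if a1 == "C" else 0)
    return p1, p2

def total_payoffs(plan1, plan2):
    a1 = "C" if plan1[0] == "N" else "D"
    a2 = "C" if plan2[0] == "N" else "D"
    rounds = [(a1, a2),
              (stage2_action(plan1[1], a2 == "C"), stage2_action(plan2[1], a1 == "C")),
              ("D", "D")]
    return tuple(sum(x) for x in zip(*(stage_payoffs(b1, b2) for b1, b2 in rounds)))

for p1 in PLANS:
    print(p1, [total_payoffs(p1, p2) for p2 in PLANS])
# Expected to match Figure 12.2, e.g. total_payoffs("NT", "NT") == (7, 7).
```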
Proposition 40 shows that any fully permissible set is a subset of the
set of strategies surviving the Dekel-Fudenberg procedure. Hence, only
subsets of
{si^{NT}, si^{NV}, si^{NE}, si^{RT}, si^{RV}, si^{RE}}
can be i's choice set under common certain belief of full admissible consistency. Furthermore, under common certain belief of full admissible
consistency, we have for each player i that
- any choice set that contains si^{NT} must also contain si^{NE}, since si^{NT} is
a maximal strategy only if si^{NE} is a maximal strategy,
- any choice set that contains si^{NV} must also contain si^{NE}, since si^{NV}
is a maximal strategy only if si^{NE} is a maximal strategy,
- any choice set that contains si^{RT} must also contain si^{RE}, since si^{RT} is
a maximal strategy only if si^{RE} is a maximal strategy,
- any choice set that contains si^{RV} must also contain si^{RE}, since si^{RV} is
a maximal strategy only if si^{RE} is a maximal strategy.
Given that the choice set of the opponent satisfies these conditions, this
implies that
- if si^{NE} is included in i's choice set, only the following sets are candidates for i's choice set: {si^{NT}, si^{NE}, si^{RT}, si^{RE}}, {si^{NV}, si^{NE}, si^{RV}, si^{RE}},
or {si^{NE}, si^{RE}}. The reason is that si^{NE} is a maximal strategy only
if i considers it subjectively possible that j's choice set contains sj^{NT}
(and hence, sj^{NE}) or sj^{RT} (and hence, sj^{RE}).
- if si^{RE}, but not si^{NE}, is included in i's choice set, only the following sets are candidates for i's choice set: {si^{RT}, si^{RE}}, {si^{RV}, si^{RE}}, or
{si^{RE}}. The reason is that si^{RE} is a maximal strategy only if i considers it subjectively possible that j's choice set contains sj^{NV}, sj^{NE},
sj^{RV}, or sj^{RE}.
This in turn implies that
- i's choice set does not contain si^{NV} or si^{RV}, since any candidate for j's
choice set contains sj^{RE}, implying that si^{NE} is preferred to si^{NV} and
si^{RE} is preferred to si^{RV}.
Hence, the only candidates for i's choice set under common certain belief of full admissible consistency are {si^{NT}, si^{NE}, si^{RT}, si^{RE}}, {si^{NE}, si^{RE}},
{si^{RT}, si^{RE}}, and {si^{RE}}. Moreover, it follows from Proposition 39(iii)
that all these sets are indeed fully permissible since
- {si^{NT}, si^{NE}, si^{RT}, si^{RE}} is i's choice set if he deems {sj^{RT}, sj^{RE}}, but not
{sj^{NE}, sj^{RE}} and {sj^{NT}, sj^{NE}, sj^{RT}, sj^{RE}}, as possible candidates for j's
choice set,
- {si^{NE}, si^{RE}} is i's choice set if he deems {sj^{NT}, sj^{NE}, sj^{RT}, sj^{RE}} as a
possible candidate for j's choice set,
- {si^{RT}, si^{RE}} is i's choice set if he deems {sj^{RE}} as the only possible
candidate for j's choice set,
- {si^{RE}} is i's choice set if he deems {sj^{NE}, sj^{RE}}, but not {sj^{RT}, sj^{RE}}
and {sj^{NT}, sj^{NE}, sj^{RT}, sj^{RE}}, as possible candidates for j's choice set.

While play in accordance with strategies surviving the Dekel-Fudenberg


procedure does not provide any prediction other than both players defecting in the 3rd stage, the concept of fully permissible sets has more
bite. In particular, a player cooperates in the 2nd stage only if the opponent has cooperated in the 1st stage. This implies that only the following
paths can be realized if players choose strategies in fully permissible sets:
((cooperate, cooperate), (cooperate, cooperate), (defect, defect))
((cooperate, cooperate), (cooperate, defect), (defect, defect)) and vice versa
((cooperate, defect), (defect, cooperate), (defect, defect)) and vice versa
((cooperate, cooperate), (defect, defect), (defect, defect))
((cooperate, defect), (defect, defect), (defect, defect)) and vice versa
((defect, defect), (defect, defect), (defect, defect)).
That the path ((cooperate, defect), (cooperate, defect), (defect, defect))
or vice versa cannot be realized if players choose strategies in fully permissible sets can be interpreted as an indication that the present analysis
seems to produce some element of reciprocity in the 3-period prisoners'
dilemma game.

12.4 Forward induction

In Chapter 11 we have already seen how the concept of fully permissible sets promotes the forward induction outcome, (InL, ℓ), in the
PRSF of the battle-of-the-sexes with an outside option game Γ′1, illustrated in Figure 2.6. In this section we first investigate whether this
conclusion carries over to two other variants of the battle-of-the-sexes
game, before testing the concept of fully permissible sets in an economic
application.

Figure 12.3. G13 (the pure strategy reduced strategic form of burning money).

         ℓℓ       ℓr       rℓ       rr
NU      3, 1     3, 1     0, 0     0, 0
ND      0, 0     0, 0     1, 3     1, 3
BU      2, 1    -1, 0     2, 1    -1, 0
BD     -1, 0     0, 3    -1, 0     0, 3

The battle-of-the-sexes game with variations. Consider first
the burning money game due to van Damme (1989) and Ben-Porath
and Dekel (1992). Game G13 of Figure 12.3 is the PRSF of a battle-of-the-sexes game with the addition that 1 can publicly destroy 1 unit
of payoff before the battle-of-the-sexes game starts. BU (NU) is the
strategy where 1 burns (does not burn), and then plays U, etc., while ℓr
is the strategy where 2 responds with ℓ conditional on 1 not burning and
r conditional on 1 burning, etc. The forward induction outcome (supported e.g. by IEWDS) involves implementation of 1's preferred battle-of-the-sexes outcome, with no payoff being burnt.
One might be skeptical of the use of IEWDS in the burning money
game, because it effectively requires 2 to infer that BU is infinitely more
likely than BD based on the sole premise that BD is eliminated before
BU, even though all strategies involving burning (i.e., both BU and
BD) are eventually eliminated by the procedure. On the basis of this
premise such an inference seems at best to be questionable. As shown
in Table 12.1, the application of our algorithm IECFA yields a sequence
of iterations where at no stage need 2 deem BU infinitely more likely
than BD, since {NU} is always included as a candidate for 1's choice set.
The procedure uniquely determines {NU} as 1's fully permissible set and
{ℓℓ, ℓr} as 2's fully permissible set. Even though the forward induction
outcome is obtained, 2 does not have any assessment concerning the
relative likelihood of opponent strategies conditional on burning; hence,
she need not interpret burning as a signal that 1 will play in accordance with
his preferred battle-of-the-sexes outcome.3

3 Also Battigalli (1991), Asheim (1994), and Dufwenberg (1994), as well as Hurkens (1996) in
a different context, argue that (NU, ℓr) in addition to (NU, ℓℓ) is viable in burning money.

Table 12.1. Applying IECFA to burning money.

Ξ(0) = Σ1 × Σ2
Ξ(1) = {{NU}, {ND}, {BU}, {NU, ND}, {ND, BU}, {NU, BU}, {NU, ND, BU}} × Σ2
Ξ(2) = {{NU}, {ND}, {BU}, {NU, ND}, {ND, BU}, {NU, BU}, {NU, ND, BU}}
       × {{ℓℓ}, {rℓ}, {ℓℓ, ℓr}, {rℓ, rr}, {ℓℓ, rℓ}, {ℓℓ, ℓr, rℓ, rr}}
Ξ(3) = {{NU}, {BU}, {ND, BU}, {NU, BU}, {NU, ND, BU}}
       × {{ℓℓ}, {rℓ}, {ℓℓ, ℓr}, {rℓ, rr}, {ℓℓ, rℓ}, {ℓℓ, ℓr, rℓ, rr}}
Ξ(4) = {{NU}, {BU}, {ND, BU}, {NU, BU}, {NU, ND, BU}} × {{ℓℓ}, {rℓ}, {ℓℓ, ℓr}, {ℓℓ, rℓ}}
Ξ(5) = {{NU}, {BU}, {NU, BU}} × {{ℓℓ}, {rℓ}, {ℓℓ, ℓr}, {ℓℓ, rℓ}}
Ξ(6) = {{NU}, {BU}, {NU, BU}} × {{ℓℓ}, {ℓℓ, ℓr}, {ℓℓ, rℓ}}
Ξ(7) = {{NU}, {NU, BU}} × {{ℓℓ}, {ℓℓ, ℓr}, {ℓℓ, rℓ}}
Ξ(8) = {{NU}, {NU, BU}} × {{ℓℓ}, {ℓℓ, ℓr}}
Ξ(9) = {{NU}} × {{ℓℓ}, {ℓℓ, ℓr}}
Π = Ξ(10) = {{NU}} × {{ℓℓ, ℓr}}
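As an illustration of how a table like Table 12.1 can be generated, the sketch below runs an IECFA-style iteration on the matrix of Figure 12.3. It simplifies the dominance test to pure dominators (the definition in Section 12.2 allows mixed dominators), so it should be read as an approximation of the procedure rather than a verbatim implementation.

```python
# Sketch of an IECFA-style iteration on G13 (Figure 12.3); weak dominance is
# tested against pure strategies only, so the output is illustrative.
from itertools import combinations

S1 = ["NU", "ND", "BU", "BD"]
S2 = ["ll", "lr", "rl", "rr"]          # ASCII stand-ins for the ell/r labels
pay1 = {("NU","ll"): 3, ("NU","lr"): 3, ("NU","rl"): 0, ("NU","rr"): 0,
        ("ND","ll"): 0, ("ND","lr"): 0, ("ND","rl"): 1, ("ND","rr"): 1,
        ("BU","ll"): 2, ("BU","lr"): -1, ("BU","rl"): 2, ("BU","rr"): -1,
        ("BD","ll"): -1, ("BD","lr"): 0, ("BD","rl"): -1, ("BD","rr"): 0}
pay2 = {("NU","ll"): 1, ("NU","lr"): 1, ("NU","rl"): 0, ("NU","rr"): 0,
        ("ND","ll"): 0, ("ND","lr"): 0, ("ND","rl"): 3, ("ND","rr"): 3,
        ("BU","ll"): 1, ("BU","lr"): 0, ("BU","rl"): 1, ("BU","rr"): 0,
        ("BD","ll"): 0, ("BD","lr"): 3, ("BD","rl"): 0, ("BD","rr"): 3}
v1 = pay1                                                  # keyed (own, opponent)
v2 = {(b, a): pay2[(a, b)] for a in S1 for b in S2}        # keyed (own, opponent)

def nonempty_subsets(xs):
    return [frozenset(c) for r in range(1, len(xs) + 1) for c in combinations(xs, r)]

def dominated(v, own, s, against):
    """s weakly dominated on `against` by some *pure* strategy in `own`."""
    return any(all(v[(t, o)] >= v[(s, o)] for o in against)
               and any(v[(t, o)] > v[(s, o)] for o in against)
               for t in own if t != s)

def choice_set(v, own, others, Y):
    """Strategies in `own` not weakly dominated on Y or on the full set `others`."""
    return frozenset(s for s in own
                     if not dominated(v, own, s, Y) and not dominated(v, own, s, others))

def iecfa(max_rounds=20):
    Xi1, Xi2 = set(nonempty_subsets(S1)), set(nonempty_subsets(S2))
    for _ in range(max_rounds):
        unions2 = {frozenset().union(*psi) for psi in nonempty_subsets(list(Xi2))}
        unions1 = {frozenset().union(*psi) for psi in nonempty_subsets(list(Xi1))}
        ach1 = {choice_set(v1, S1, S2, Y) for Y in unions2}
        ach2 = {choice_set(v2, S2, S1, Y) for Y in unions1}
        if (ach1, ach2) == (Xi1, Xi2):
            break
        Xi1, Xi2 = ach1, ach2
    return Xi1, Xi2

print(iecfa())  # Table 12.1 reports {{NU}} x {{ll, lr}} as the limit of the full procedure
```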

We turn next to a game introduced by Dekel and Fudenberg
(1990) (cf. their Figure 7.1) and discussed by Hammond (1993), and
which is reproduced here as Γ″1 of Figure 12.4. It is a modification of
Γ′1 which introduces an extra outside option for player 2. In this
game there may seem to be a tension between forward and backward
induction: For player 2 not to choose out may seem to suggest that 2
signals that she seeks a payoff of at least 3/2, in contrast to the payoff
of 1 that she gets when the subgame structured like Γ′1 is considered in
isolation (as seen in the analysis of Γ′1). However, this intuition is not
quite supported by the concept of fully permissible sets.
Applying our algorithm IECFA to the PRSF of Γ″1 yields:
Ξ(0) = Σ1 × Σ2
Ξ(1) = {{Out}, {InL}, {Out, InL}}
       × {{out}, {inr}, {out, inℓ}, {out, inr}, {inℓ, inr}, {out, inℓ, inr}}
Ξ(2) = {{Out}, {InL}, {Out, InL}} × {{out}, {out, inℓ}, {inℓ, inr}}
Ξ(3) = {{InL}, {Out, InL}} × {{out}, {out, inℓ}, {inℓ, inr}}
Π = Ξ(4) = {{InL}, {Out, InL}} × {{out}, {out, inℓ}}.
The only way for Out to be a maximal strategy for player 1 is that
he deems {out} as the only subjectively possible candidate for 2's choice

Figure 12.4. Γ″1 and its pure strategy reduced strategic form.

         out        inℓ      inr
Out      3/2, 3/2   2, 2     2, 2
InL      3/2, 3/2   3, 1     0, 0
InR      3/2, 3/2   0, 0     1, 3

(The extensive form, not reproduced here, gives player 2 an initial outside option
out with payoffs (3/2, 3/2); after in, play continues as in Γ′1.)

set, in which case 1's choice set is {Out, InL}. Else {InL} is 1's choice
set. Furthermore, 2 can have a choice set different from {out} only if
she deems {Out, InL} as a subjectively possible candidate for 1's choice
set. Intuitively this means that if 2's choice set differs from {out} (i.e.,
equals {out, inℓ}), then she deems it subjectively possible that 1 considers it subjectively impossible that inℓ is a maximal strategy for 2.
Since it is only under such circumstances that inℓ is a maximal element
for 2, perhaps this strategy is better thought of in terms of strategic
manipulation than in terms of forward induction. Note that the concept of fully permissible sets has more bite than the Dekel-Fudenberg
procedure; in addition to the strategies appearing in fully permissible
sets, also inr survives the Dekel-Fudenberg procedure.
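The comparison with the Dekel-Fudenberg procedure can be checked numerically on the PRSF of Figure 12.4. The sketch below is ours and uses pure-strategy dominance tests only (the procedure's first round in general allows mixed dominators); it performs one round of weak elimination followed by iterated strict elimination.

```python
# Sketch: Dekel-Fudenberg procedure on the PRSF of Figure 12.4
# (pure-dominance checks only, which suffices to illustrate this example).
S1 = ["Out", "InL", "InR"]
S2 = ["out", "inl", "inr"]
U1 = {("Out","out"): 1.5, ("Out","inl"): 2, ("Out","inr"): 2,
      ("InL","out"): 1.5, ("InL","inl"): 3, ("InL","inr"): 0,
      ("InR","out"): 1.5, ("InR","inl"): 0, ("InR","inr"): 1}
U2 = {("Out","out"): 1.5, ("Out","inl"): 2, ("Out","inr"): 2,
      ("InL","out"): 1.5, ("InL","inl"): 1, ("InL","inr"): 0,
      ("InR","out"): 1.5, ("InR","inl"): 0, ("InR","inr"): 3}

def payoff(player, s_own, s_other):
    return U1[(s_own, s_other)] if player == 1 else U2[(s_other, s_own)]

def dominated(player, s, own, other, strict):
    better = (lambda a, b: a > b) if strict else (lambda a, b: a >= b)
    for t in own:
        if t == s:
            continue
        diffs = [(payoff(player, t, o), payoff(player, s, o)) for o in other]
        if all(better(a, b) for a, b in diffs) and (strict or any(a > b for a, b in diffs)):
            return True
    return False

def dekel_fudenberg():
    # one round of (pure) weak elimination ...
    A1 = [s for s in S1 if not dominated(1, s, S1, S2, strict=False)]
    A2 = [s for s in S2 if not dominated(2, s, S2, S1, strict=False)]
    # ... followed by iterated (pure) strict elimination
    while True:
        B1 = [s for s in A1 if not dominated(1, s, A1, A2, strict=True)]
        B2 = [s for s in A2 if not dominated(2, s, A2, A1, strict=True)]
        if (B1, B2) == (A1, A2):
            return A1, A2
        A1, A2 = B1, B2

print(dekel_fudenberg())  # the text reports that inr survives alongside Out, InL, out, inl
```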
An economic application. Finally, we apply the concept of fully
permissible sets to an economic model from organization theory. Schotter (2000) discusses in his Chapter 8 incentive schemes for firms and the
moral hazard problems that may plague them. Revenue-sharing contracts, for example, often invite free-riding behavior by the workers,
and so lead to inefficient outcomes. However, Schotter points to forcing contracts, incentive schemes of a kind introduced by Holmstrom
(1982), as a possible remedy: Each worker is paid a bonus if and only
if the collective of workers achieves a certain level of total production.
If incentives are set right, then there is a symmetric and efficient Nash
equilibrium in which each worker exerts a substantial effort. Each worker
avoids shirking because he feels that his role is pivotal, believing that
any reduction in effort leads to a loss of the bonus.

Figure 12.5. Γ14 and its pure strategy reduced strategic form.

        out      ins      inh
Out     w, w     w, w     w, w
InS     w, w     0, 0     0, -c
InH     w, w     -c, 0    b-c, b-c

(The extensive form, not reproduced here, has each worker first choosing between
the outside option, worth w to each, and joining the firm; if both join, they
simultaneously choose between shirking and high effort.)

However, forcing contracts are often problematic in that there typically exists a Nash equilibrium in which no worker exerts any effort at
all. How serious is this problem? Schotter offers the following argument
in support of the forcing contract (p. 302): "While the no-work equilibrium for the forcing-contract game does indeed exist, it is unlikely that
we will ever see this equilibrium occur. If workers actually accept such a
contract and agree to work under its terms, we must conclude that they
intend to exert the necessary effort and that they expect their coworkers
to do the same. Otherwise, they would be better off obtaining a job elsewhere at their opportunity wage and not wasting their time pretending
that they will work hard."
Schotter appeals to intuition, but his argument has a forward induction flavor to it. We now show how the concept of fully permissible
sets lends support. Consider the following situation involving a forcing
contract: A firm needs two workers to operate. The workers simultaneously choose shirking at zero cost of effort, or high effort at cost c > 0.
They get a bonus b > c if and only if both workers choose high effort.
As indicated above, this situation can be modeled as a game with two
Nash equilibria (S, s) and (H, h), where (H, h) Pareto-dominates (S, s).
However, let this game be a subgame of a larger game. In line with
Schotter's intuitive discussion, add a preceding stage where each worker
simultaneously decides whether to indicate willingness to join the firm
with the forcing contract, or to work elsewhere at opportunity wage w,
0 < w < b - c. The firm with the forcing contract is established if and
only if both workers indicate willingness to join it.


This situation is depicted by the extensive game Γ14. Again, we analyze the PRSF (cf. Figure 12.5). Application of IECFA yields:
Ξ(0) = Σ1 × Σ2
Ξ(1) = {{Out}, {InH}, {Out, InH}} × {{out}, {inh}, {out, inh}}
Ξ(2) = {{InH}, {Out, InH}} × {{inh}, {out, inh}}
Π = Ξ(3) = {{InH}} × {{inh}}.
Interpretation: (1): Shirking cannot be a maximal strategy for either
worker since it is weakly dominated. (2): This excludes the possibility
that a worker's choice set contains only the outside option. (3): Since
each worker certainly believes that hard work is, while shirking is not, an
element of the opponent's choice set, it follows that each worker deems it
infinitely more likely that the opponent chooses hard work rather than
shirking. This means that, for each worker, only hard work is in his
choice set, a conclusion that supports Schotter's argument.
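The first step of this interpretation can be verified for any admissible parameter values. The sketch below (ours; the numerical values are an arbitrary choice satisfying c > 0 and 0 < w < b - c) builds the PRSF of Figure 12.5 and checks which strategies are weakly dominated.

```python
# Sketch: PRSF of Figure 12.5 for an admissible parameter choice, and a check
# that InS (join and shirk) is weakly dominated, as in step (1) of the text.
w, b, c = 1.0, 3.0, 1.0            # any values with c > 0 and 0 < w < b - c
S1 = ["Out", "InS", "InH"]
S2 = ["out", "ins", "inh"]
U1 = {("Out","out"): w, ("Out","ins"): w,   ("Out","inh"): w,
      ("InS","out"): w, ("InS","ins"): 0.0, ("InS","inh"): 0.0,
      ("InH","out"): w, ("InH","ins"): -c,  ("InH","inh"): b - c}

def weakly_dominates(t, s):
    return (all(U1[(t, o)] >= U1[(s, o)] for o in S2)
            and any(U1[(t, o)] > U1[(s, o)] for o in S2))

for s in S1:
    doms = [t for t in S1 if t != s and weakly_dominates(t, s)]
    print(s, "weakly dominated by", doms or "nothing")
# Expected: InS is weakly dominated (e.g. by Out); Out and InH are not.
# By symmetry the same holds for worker 2's strategies.
```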

12.5 Concluding remarks

In this final chapter of the book we have explored the implications of


the concept of fully permissible sets in extensive games. In Chapter 11
we have already seen, based on Asheim and Dufwenberg (2003a), that
this concept can be characterized as choice sets under common certain
belief of full admissible consistency. Full admissible consistency consists
of the requirements
caution,
robust belief of opponent rationality, and
no extraneous restrictions on beliefs,
and entails that one strategy is preferred to another if and only if the
former weakly dominates the latter on the union of the choice sets that
are deemed possible for the opponent, or on the set of all opponent
strategies.
The requirement of robust belief of opponent rationality is concerned
with strategy choices of the opponent only initially, in the whole game,
not with choices among the remaining available strategies at each and
every information set. To illustrate this point, look back at Γ11 and
consider a type of player 2 who deems {D} as the only subjectively
possible choice set for 1. Conditional on 2's node being reached it is
clear that 1 cannot be choosing a strategy that is maximal given his
preferences. Conditional on 2's node being reached, the modeling of the
current chapter imposes no constraint on 2's assessment of likelihood
concerning which non-maximal strategy, FF or FD, 1 has chosen.
This crucially presumes that 2 assesses the likelihood of different strategies as chosen by player 1 initially, in the whole game.
It is possible to model players being concerned with opponent choices
at all information sets. In Γ11 this would amount to the following when
player 2 is of a type who deems {D} as the only possible choice set
for 1: Conditional on 2's node being reached she realizes that 1 cannot
be choosing a strategy which is maximal given his preferences. Still,
2 considers it infinitely more likely that 1 at his last node chooses a
strategy that is maximal among his remaining available strategies given
his conditional preferences at that node. In Section 12.2 we argued
with Ben-Porath (1997) that this is not necessarily reasonable, a view
which permeates the working hypotheses on which the current chapter
is grounded.
Yet, research on the basis of this alternative approach is illuminating
and worthwhile. Indeed, Chapters 7-9 of this book have reproduced the
epistemic models of Asheim (2002) and Asheim and Perea (2004) where
each player believes that his opponent chooses rationally at all information sets. The former model yields an analysis that is related to Bernheim's (1984) subgame rationalizability, while the latter model demonstrates how it is possible, in accordance with Bernheim's conjecture, to
define sequential rationalizability. Moreover, Chapter 10 has considered
the closely related strategic form analyses of Schuhmacher (1999) and
Asheim (2001) that define and characterize proper rationalizability as a
non-equilibrium analog to Myerson's (1978) proper equilibrium.
Analysis that goes in this alternative direction promotes concepts that
imply backward induction without yielding forward induction. Thus,
they lead to implications that are significantly different from those of
the current final chapter, where forward induction is promoted without
insisting on backward induction in all games.
The tension between these two approaches to extensive games cannot be resolved by formal epistemic analysis alone. It is worth noting,
though, that the analysis, independently of this issue, makes use of the
consistent preferences approach to deductive reasoning in games.

Appendix A
Proofs of results in Chapter 4

Proof of Proposition 6. Only if. Assume that d is admissible on E. Let


e E and f E. It now follows directly that e is not Savage-null at ` and that
p d{e} q implies p d{e,f } q. If. Assume that e E and f E imply e d f . Let
p and q satisfy that pE weakly dominates qE at d. Then there exists e0 E such
that d (p(e0 )) > d (q(e0 )). Write A = {f1 , . . . , fn }. Let, for m {0, . . . , n},

pm (d0 ) =

8
0
n+1m
>
> n+1 p(d ) +
>
>
<p(d0 )
>
q(d )
>
>
>
:
0
0

p(d )

m
q(d0 )
n+1

if d0 = e0
if d0 E\e0
if d0 = fm0 and m0 {1, . . . , m}
if d0 = fm0 and m0 {m + 1, . . . , n}.

Then p = p0 , pm1 d pm for all m = {1, . . . , n} (since e E and f E


imply that e d f ), and p(n) d q (since p(n) weakly dominates q at d with
d (pn (e0 )) > d (q(e0 ))). By transitivity of e , it follows that p d q.
Proof of Proposition 7. (Q serial.) If d is Savage-null at d, then there exists
e d such that e is not Savage-null at d since d is nontrivial. Clearly, d is not
infinitely more likely than e at d, and dQe. If d is not Savage-null at d, then dQd
since d is not infinitely more likely than itself at d.
(Q transitive.) We must show that dQe and eQf imply dQf . Clearly, dQe and
eQf imply d e f , and that f is not Savage-null at d. It remains to be shown
that d d f does not hold if dQe and eQf . Suppose to the contrary that d d f .
It suffices to show that dQe contradicts eQf . Since f is not Savage-null at d e,
e d f is needed to contradict eQf . This follows from Axiom 11 because dQe entails
that d d e does not hold.
(Q satisfies forward linearity.) We must show that dQe and dQf imply eQf or
f Qe. From dQe and dQf it follows that d e f and that both e and f are not
Savage-null at e f . Since e e f and f f e cannot both hold, we have that eQf
or f Qe.
(Q satisfies quasi-backward linearity.) We must show that dQf and eQf imply
dQe or eQd if d0 F such that d0 Qe. From dQf and eQf it follows that d e f ,


while d0 Qe implies that e is not Savage-null at d0 d e. If d is Savage-null at d,


then d d e cannot hold, implying that dQe. If d is not Savage-null at d e, then
d d e and e e d cannot both hold, implying that dQe or eQd.
Proof of Proposition 8. (R` serial.) For all d F , d` 6= .
(R` transitive.) We must show that dR` e and eR` f imply dR` f . Since dR` e
implies that d e, we have that d` = e` . Now, eR` f (i.e., f e` ) implies dR` f (i.e.,
f d` ).
(R` Euclidean.) We must show that dR` e and dR` f imply eR` f . Since dR` e
implies that d e, we have that d` = e` . Now, dR` f (i.e., f d` ) implies eR` f (i.e.,
f e` ).
(dR` e implies dR`+1 e.) This follows from the property that d` d`+1 .
(f such that dR`+1 f and eR`+1 f ) implies (f 0 such that dR` f 0 and eR` f 0 ). Since
dR`+1 f implies that d f and eR`+1 f implies that e f , we have that d e and
d` = e` . By the non-emptiness of this set, f 0 such that dR` f 0 and eR` f 0 .
Proof of Proposition 9. (i) (dQd is equivalent to d being not Savage-null at
d.) If dQd, then it follows directly from Definition 2 that d is not Savage-null at d.
If d is not Savage-null at d, then by Definition 2 it follows that dQd since d d and
not d d d. (dRL d is equivalent to d being not Savage-null at d.) By Definition 3,
dRL d iff d dL = d , which directly establishes the result.
(ii) Only if. Assume that dQe and not eQd. From dQe it follows that d e and
e is not Savage-null at d, i.e. e d ( d ). Consider E := {e0 F | eQe0 }. Clearly,
e E d ( d ) and d d \E 6= . If e0 E and f d \E, then not e0 Qf ,
since otherwise it would follow from eQf 0 and the transitivity of Q that eQf , thereby
contradicting f
/ E. If, on the one hand, f d \E, then e0 d f since f is not
Savage-null at d e0 and e0 Qf does not hold. If, on the other hand, f
/ d , then
0
d
0
0
e f since f is Savage-null at d and e is not. Hence, e E and f E imply
e0 d f . By Proposition 6, d is admissible on E, entailing that ` {1, . . . , L} such
that d` = E. By Definition 3, dR` e and not eR` d since e E and d d \E.
If. Assume that ` {1, . . . , L} such that dR` e and not eR` d. From dR` e it
follows that d e and e d` ( d ); in particular, e is not Savage-null at d. Since
eR` d does not hold, however, d
/ e` = d` . By construction, d is admissible on d` ,
and it now follows from Proposition 6 that e d d. Furthermore, e d d implies that
d d e does not hold. Hence, dQe since d e, e is not Savage-null at d and d d e
does not hold, while not eQd since e d d.
Proof of Lemma 3. Since d = {e d |eQe}, it follows that e1 d
such that e1 Qe1 if d 6= . Either, f d , f Qe1 in which case we are
through or not. In the latter case, e2 d such that e2 Qe1 does not hold.
Since e1 , e2 d , e02 d such that e1 Qe02 and e2 Qe02 . Since e1 Qe1 and not e2 Qe1 it
now follows from quasi-backward linearity that e1 Qe2 . Moreover, not e2 Qe1 implies
e2 6= e1 . Either f d , f Qe2 in which case we are through or not. In the
latter case we can, by repeating the above argument and invoking transitivity, show
the existence of some e3 d such that e1 Qe3 , e2 Qe3 , and e3 6= e1 , e2 . Since
d is finite, this algorithm converges to some e satisfying, f d , f Qe.
To prove Proposition 11 it suffices to show the following lemma.


Lemma 14 If , then d () = d` , where ` := min{k {1, . . . , L}| dk 6=


}.
Proof. ( d () d` ) Assume that ( d )\d` 6= . Let e ( d )\d` .
Since d` 6= , f d` . Then, by Definition 3 eR` f and not f R` e, which
by Proposition 9(ii) implies eQf and not f Qe. Hence, e ( d )\ d (), and
d` = ( d ) d` ( d ) d () = d (). Assume then that ( d )\d` = .
In this case, d` = ( d ) d` = d d ().
(d` d ()) Let e d` . If f d` , then f RL f since d` dL , and
f Qf by Proposition 9(i). Since e, f d and f Qf , it follows by quasi-backward
linearity of Q that f Qf or eQf . However, since by construction, k {1, . . . ` 1},
dk = , there is no k {1, . . . ` 1} such that f Rk e and not eRk f or vice versa,
and Proposition 9(ii) implies that both f Qe and eQf must hold. In particular, f Qe.
If, on the other hand, f ( d )\d` , then by Definition 3 f R` e and not eR` f ,
implying by Proposition 9(ii) that f Qe. Thus, f d , f Qe, and e d ()
follows.
Proof of Proposition 12.
Recall that B0 E := E B()E, where E :=
d
dF E is non-empty and defined by, d F , dE := { d | E d 6=
if E d 6= }.
(If ` {1, . . . , L} such that d` = E d , then d B0 E.) Let d` = E d
and consider any E . We must show that d B()E. By the definition of E ,
E d 6= since E and E d = d` 6= . Since d` = E d , it
follows that 6= d` E, so by Proposition 11, d B()E.
(If d B0 E, then ` {1, . . . , L} such that d` = E d .) Let d B0 E; i.e.,
E , d B()E. We first show that d1 E. Consider some 0 E satisfying
d 0 = (E d ) d1 . Since d B(0 )E, k {1, . . . , L} such that 6= dk
0 = dk (E d1 ) E. Since d1 dk , d1 E. Let ` = max{k|dk E}. If
` = L, then d` = d , and d` E implies d` = E d . If ` < L, then, since
d` dL = d , d` = d` d E d . To show that d` = E d also in this
case, suppose instead that (E d )\d` 6= , and consider some 00 E satisfying
d 00 = ((Ed )d`+1 )\d` . Since, k {1, .., `}, dk d` , it follows from d` 00 =
that, k {1, .., `}, dk 00 = . Since by construction, d` E, while d`+1 E does
not hold, d`+1 00 = d`+1 \d` is not included in E. Since d1 dL , there is
no k {0, . . . , L} such that 6= dk 00 E, contradicting by Proposition 11 that
d B(00 )E. Hence, d` = E d .
Proof of Proposition 14. (KE KE 0 = K(E E 0 )) To prove KE KE 0
K(E E 0 ), let d KE and d KE 0 . Then, by Definition 4, d E and d E 0 and
hence, d E E 0 , implying that d K(E E 0 ). To prove KE KE 0 K(E E 0 ),
let d K(E E 0 ). Then d E E 0 and hence, d E and d E 0 , implying that
d KE and d KE 0 .
(B()E B()E 0 = B()(E E 0 )) Using Definition 5 the proof of conjunction for
B() is identical to the one for K except that d () is substituted for d .
(KF = F ) KF F is obvious. That KF F follows from Definition 4 since,
d F , d d F .
(B() = ) This follows from Definition 5 since, d F , d () 6= , implying
that there exists no d F such that d () .


(KE KKE) Let d KE. By Definition 4, d KE is equivalent to d E.


Since e d , e = d , it follows that d KE. Hence, d d KE, implying by
Definition 4 that d KKE.
(B()E KB()E) Let d B()E. By Definition 5, d B()E is equivalent
to d () E. Since e d , e () = d (), it follows that d B()E. Hence,
d d B()E, implying by Definition 4 that d KB()E.
(KE K(KE)) Let d KE. By Definition 4, d KE is equivalent to
d E not holding. Since e d , e = d , it follows that d KE. Hence,
d d KE, implying by Definition 4 that d K(KE).
(B()E K(B()E)) Let d B()E. By Definition 5, d B()E is
equivalent to d () E not holding. Since e d , e () = d (), it follows
that d B()E. Hence, d d B()E, implying by Definition 4 that
d K(B()E).
Proof of Proposition 15. (1.) d () follows by definition since, e
d (), e .
(2.) By Definitions 2 and 3 and Proposition 9, d = d1 . Hence, d 6= implies
d
1 6= and min{`|d` 6= } = 1. By Lemma 14, d () = d1 = d .
(3.) This follows directly from Lemma 3, since implies that, d F ,
d 6= .
(4.) Let d () 0 6= . By Lemma 14, d () = d` 6= where ` := min{k|dk
6= }. Likewise, d ( 0 ) = d`0 0 , where `0 := min{k|dk 0 6= }.
It suffices to show that ` = `0 . Obviously, ` `0 . However, 6= d () 0 =
(d` ) 0 = d` 0 implies that `0 `.
Proof of Proposition 16.
That KE B0 E follows from Definition 4 and
Propositions 9 and 12 since d E implies that dL = d = d E. That B0 E
B(F )E follows from Definition 6 since F E .
Proof of Proposition 17.
(B0 E B0 E 0 B0 (E E 0 )) Let d B0 E and
d B0 E 0 . Then, by Proposition 12, there exist k such that d` = E d and k0 such
that d`0 = E 0 d . Since d1 dL , either d` d`0 or d` d`0 , or equivalently,
E d E 0 d or E d E 0 d . Hence, either d` = E d = E E 0 d or
d`0 = E 0 d = E E 0 d , implying by Proposition 12 that a B0 (E E 0 ).
(B0 E KB0 E) Let d B0 E. By Proposition 12, d B0 E is equivalent to
` {1, . . . , L} such that d` = E d . Since e d , e` = d` and e = d , it follows
that d B0 E. Hence, d d B0 E, implying by Definition 4 that d KB0 E.
(B0 E K(B0 E)) Let d B0 E. By Proposition 12, d B0 E is equivalent to
there not existing k {1, . . . , L} such that d` = E d . Since e d , e` = d` and
e = d , it follows that d B0 E. Hence, d d B0 E, implying by Definition
4 that d K(B0 E).
To prove Proposition 18 the following lemma is helpful.

Lemma 15 Assume that d satisfies Axioms 1 and 4 00 (in addition to the assumptions made in Section 4.1), and let `, `0 {1, . . . , Ld } satisfy ` < `0 . Then pdd q
`
implies pdd d q.
`

`0

Proof. This follows from Proposition 3.


Proof of Proposition 18. (If E is assumed at d, then d B 0 E.) Let E be


assumed at d. Then it follows that dE is nontrivial; hence, E d 6= . Assume that
pEd weakly dominates qEd at d. Since E d 6= , we have that p dE q. Hence,
it follows from the premise (viz., that E is assumed at d) that p d q. This shows
that d is admissible on E d , and, by Proposition 12, d B 0 E.
(If d B 0 E, then E is assumed at d.) Let d B 0 E, so by Proposition 12 d is
admissible on E d (6= ). Hence, by Proposition 6, e E d and f (E d )
implies e d f . By Axiom 400 this in turn implies that ` such that
E d =

[`
k=1

kd ,

since the first property of Axiom 400 the Archimedean property of d within each
partitional element rules out that e and f are in the same element of the partition
d
d
{1d , . . . , L
d } if e f .
d
Assume that p E q. Then p dEd q, and, by the above argument,
p dS`

k=1

d
k

q.

By completeness and the partitional Archimedean property, Lemma 15 entails that


`0 {1, . . . , `} such that
pdd q and, k {1, . . . , `0 1}, pdd q .
`0

d
d
since L
k=1 k
d

dE

By Lemma 15, p q
= . Hence, p
q implies p d q. Moreover,
d
E is nontrivial since E 6= , and it follows from Definition 7 that E is assumed
at d.

Appendix B
Proofs of results in Chapters 8-10

For the proofs of Propositions 30, 34, 36, and 37 we need two results from Blume
et al. (1991b). To state these results, introduce the following notation. Let λ =
(λ1, ..., λL) be an LPS on a finite set F and let r = (r1, ..., r_{L-1}) ∈ (0, 1)^{L-1}. Then,
r□λ denotes the probability distribution on F given by the nested convex combination

(1 - r1)λ1 + r1[(1 - r2)λ2 + r2[(1 - r3)λ3 + r3[· · ·] · · ·]].
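The nested convex combination is straightforward to compute recursively; the following sketch (ours; the function name is arbitrary) takes the LPS as a list of probability vectors over a common finite set together with the vector r.

```python
# Sketch: the nested convex combination behind r [] lambda,
# (1 - r1)*l1 + r1*[(1 - r2)*l2 + r2*[ ... ]], for an LPS l = (l1, ..., lL)
# given as lists of probabilities over a common finite set F.
def nested_convex_combination(r, lps):
    assert len(r) == len(lps) - 1
    mix = lps[-1]                       # fold from the innermost level outwards
    for rk, lk in zip(reversed(r), reversed(lps[:-1])):
        mix = [(1 - rk) * p + rk * q for p, q in zip(lk, mix)]
    return mix

# Example with a two-level LPS on a three-element set:
print(nested_convex_combination([0.1], [[1.0, 0.0, 0.0], [0.0, 0.5, 0.5]]))
# -> [0.9, 0.05, 0.05]; as r -> 0 the combination converges to the primary level l1.
```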
The first is a restatement of Proposition 2 in Blume et al. (1991b).

Lemma 16 Let (x(n))_{n∈N} be a sequence of probability distributions on a finite set
F. Then, there exists a subsequence x(m) of (x(n))_{n∈N}, an LPS λ = (λ1, ..., λL), and
a sequence r(m) of vectors in (0, 1)^{L-1} converging to zero such that x(m) = r(m)□λ
for all m.
The second is a variant of Proposition 1 in Blume et al. (1991b).

Lemma 17 Consider a type ti of player i whose preferences over acts on Sj × Tj
are represented by υi^{ti} with υi^{ti} ∘ z = ui and λ^{ti} = (λ1^{ti}, ..., λL^{ti}) ∈ LΔ(Sj × Tj).
Then, for every sequence (r(n))_{n∈N} in (0, 1)^{L-1} converging to zero there is an n0 such
that, ∀si, s′i ∈ Si, si ≻^{ti} s′i if and only if

Σ_{sj} Σ_{tj} (r(n)□λ^{ti})(sj, tj) ui(si, sj)  >  Σ_{sj} Σ_{tj} (r(n)□λ^{ti})(sj, tj) ui(s′i, sj)

for all n ≥ n0.
Proof. Suppose that si ≻^{ti} s′i. Then, there is some ℓ ∈ {1, ..., L} such that

Σ_{sj} Σ_{tj} λk^{ti}(sj, tj) ui(si, sj) = Σ_{sj} Σ_{tj} λk^{ti}(sj, tj) ui(s′i, sj)          (B.1)

for all k < ℓ and

Σ_{sj} Σ_{tj} λℓ^{ti}(sj, tj) ui(si, sj) > Σ_{sj} Σ_{tj} λℓ^{ti}(sj, tj) ui(s′i, sj).          (B.2)

Let (r(n))_{n∈N} be a sequence in (0, 1)^{L-1} converging to zero. By (B.1) and (B.2),

Σ_{sj} Σ_{tj} (r(n)□λ^{ti})(sj, tj) ui(si, sj)  >  Σ_{sj} Σ_{tj} (r(n)□λ^{ti})(sj, tj) ui(s′i, sj)

if n is large enough. Since Si is finite, this is true if n is large enough for any si,
s′i ∈ Si satisfying si ≻^{ti} s′i. The other direction follows from the proof of Proposition
1 in Blume et al. (1991b).
For the proofs of Propositions 30 and 34 we need the following definitions. Let
the LPS i = (i1 , . . . , iL ) L(Sj ) have full support on Sj . Say that the behavior
strategy j is induced by i if for all h Hj and a A(h),
j (h)(a) :=

i` (Sj (h, a))


,
i` (Sj (h))

where ` = min{k| supp ik Sj (h) 6= }. Moreover, say that player is beliefs over past
opponent actions i are induced by i if for all h Hi and x h,
i (h)(x) :=

i` (Sj (x))
,
i` (Sj (h))

where ` = min{k| supp ik Sj (h) 6= }.


Proof of Proposition 30. (Only if.) Let (, ) be a sequential equilibrium.
Then (, ) is consistent and hence there is a sequence ((n))nN of completely mixed
behavior strategy profiles converging to such that the sequence ((n))nN of induced belief systems converges to . For each i and all n, let pi (n) (Si ) be the
mixed representation of i (n). By Lemma 16, the sequence (pj (n))nN of probability distributions on Sj contains a subsequence pj (m) such that we can find an LPS
i = (i1 , . . . , iL ) with full support on Sj and a sequence of vectors r(m) (0, 1)L1
converging to zero with
pj (m) = r(m)i
for all m. W.l.o.g., we assume that pj (n) = r(n)i for all n N.
We first show that i induces the behavior strategy j . Let
j be the behavior
strategy induced by i . By definition, h Hj , a A(h),

j (h)(a)

=
=

i` (Sj (h, a))


(r(n)i )(Sj (h, a))
= lim
i
n (r(n)i )(Sj (h))
` (Sj (h))
pj (n)(Sj (h, a))
= lim j (n)(h)(a) = j (h)(a) ,
lim
n
n pj (n)(Sj (h))

where ` = min{k| supp ik Sj (h) 6= }. For the fourth equation we used the fact that
pj (n) is the mixed representation of j (n). Hence, for each i, i induces j .
We then show that i induces the beliefs i . Let i be player is beliefs over past
opponent actions induced by i . By definition, h Hi , x h,
i (h)(x)

=
=

r(n)i (Sj (x))


i` (Sj (x))
= lim
i
n r(n)i (Sj (h))
` (Sj (h))
pj (n)(Sj (x))
= lim i (n)(h)(x) = i (h)(x),
lim
n
n pj (n)(Sj (h))


where ` = min{k| supp ik Sj (h) 6= }. For the fourth equality we used the facts that
pj (n) is the mixed representation of j (n) and i (n) is induced by j (n). Hence, for
each i, i induces i .
We now define the following epistemic model. Let T1 = {t1 } and T2 = {t2 }.
Let, for each i, iti satisfy iti z = ui , and (ti , `ti ) be the SCLP with support
Sj {tj }, where (1) ti coincides with the LPS i constructed above, and (2) `ti (Ej ) =
min{`| supp t`i Ej 6= } for all ( 6=) Ej Sj {tj }. Then, it is clear that
(t1 , t2 ) [u], there is mutual certain belief of {(t1 , t2 )} at (t1 , t2 ), and for each i, i
is induced for ti by tj . It remains to show that (t1 , t2 ) [isr].
For this, it is sufficient to show, for each i, that i is sequentially rational for ti .
Suppose not. By the choice of `ti , it then follows that there is some information set
h Hi and some mixed strategy pi (Si (h)) that is outcome-equivalent to i |h
such that there exist si Si (h) with pi (si ) > 0 and s0i Si (h) having the property
that
ui (si , t`i |Sj (h) ) < ui (s0i , t`i |Sj (h) ) ,
where ` = min{k| supp tki (Sj (h) {tj }) 6= } and t`i |Sj (h) (Sj (h)) is the
conditional probability distribution on Sj (h) induced by t`i . Recall that t`i is the `-th
level of the LPS ti . Since the beliefs i and the behavior strategy j are induced by i ,
it follows that ui (si , t`i |Sj (h) ) = ui (si , j ; i )|h and ui (s0i , t`i |Sj (h) ) = ui (s0i , j ; i )|h
and hence
ui (si , j ; i )|h < ui (s0i , j ; i )|h ,
which is a contradiction to the fact that (, ) is sequentially rational.
(If ) Suppose that there is an epistemic model with (t1 , t2 ) [u] [isr] such that
there is mutual certain belief of {(t1 , t2 )} at (t1 , t2 ), and for each i, i is induced for
ti by tj . We show that = (1 , 2 ) can be extended to a sequential equilibrium.
For each i, let i = (i1 , . . . , iL ) L(Sj ) be the LPS coinciding with ti , and let
i be player is beliefs over past opponent choices induced by i . Write = (1 , 2 ).
We first show that (, ) is consistent.
Choose sequences (r(n))nN in (0, 1)L1 converging to zero and let the sequences
(pj (n))nN of mixed strategies be given by pj (n) = r(n)i for all n. Since i has
full support on Sj for every n, pj (n) is completely mixed. For every n, let j (n)
be a behavior representation of pj (n) and let i (n) be the beliefs induced by j (n).
We show that (j (n))nN converges to j and that (i (n))nN converges to i , which
imply consistency of (, ).
Note that the inducement of $\sigma_j$ by $\lambda^{t_i}$ depends on $\lambda^{t_i}$ through, for each $h \in H_j$, $\lambda^{t_i}_\ell$, where $\ell = \min\{k \mid \mathrm{supp}\,\lambda^{t_i}_k \cap (S_j(h) \times \{t_j\}) \neq \emptyset\}$. This implies that $\sigma_j$ is induced by $\lambda^i$. Since $\sigma_j(n)$ is a behavior representation of $p_j(n)$ and $\sigma_j$ is induced by $\lambda^i$, we have, $\forall h \in H_j$, $\forall a \in A(h)$,
$$\lim_{n\to\infty} \sigma_j(n)(h)(a) = \lim_{n\to\infty} \frac{p_j(n)(S_j(h,a))}{p_j(n)(S_j(h))} = \lim_{n\to\infty} \frac{(r(n)\cdot\lambda^i)(S_j(h,a))}{(r(n)\cdot\lambda^i)(S_j(h))} = \frac{\lambda^i_\ell(S_j(h,a))}{\lambda^i_\ell(S_j(h))} = \sigma_j(h)(a)\,,$$
where $\ell = \min\{k \mid \mathrm{supp}\,\lambda^i_k \cap S_j(h) \neq \emptyset\}$. Hence, $(\sigma_j(n))_{n\in\mathbb{N}}$ converges to $\sigma_j$.


Since $\beta_i(n)$ is induced by $\sigma_j(n)$ and $\sigma_j(n)$ is a behavior representation of $p_j(n)$, and furthermore $\beta_i$ is induced by $\lambda^i$, we have, $\forall h \in H_i$, $\forall x \in h$,
$$\lim_{n\to\infty} \beta_i(n)(h)(x) = \lim_{n\to\infty} \frac{p_j(n)(S_j(x))}{p_j(n)(S_j(h))} = \lim_{n\to\infty} \frac{(r(n)\cdot\lambda^i)(S_j(x))}{(r(n)\cdot\lambda^i)(S_j(h))} = \frac{\lambda^i_\ell(S_j(x))}{\lambda^i_\ell(S_j(h))} = \beta_i(h)(x)\,,$$
where $\ell = \min\{k \mid \mathrm{supp}\,\lambda^i_k \cap S_j(h) \neq \emptyset\}$. Hence, $(\beta_i(n))_{n\in\mathbb{N}}$ converges to $\beta_i$. This establishes that $(\sigma, \beta)$ is consistent.
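For concreteness, the completely mixed conjectures $p_j(n) = r(n)\cdot\lambda^i$ can be computed as convex combinations of the LPS levels with vanishing weights. The sketch below assumes one common weighting convention (in the spirit of Blume et al., 1991a); the exact convention used in the text is not restated here, so treat this as illustrative only, with hypothetical names.

```python
from typing import Dict, List, Hashable

Strategy = Hashable
LPS = List[Dict[Strategy, float]]

def mix_lps(lps: LPS, r: List[float]) -> Dict[Strategy, float]:
    """Convex combination r . lambda of the LPS levels: level 1 gets weight (1 - r_1),
    level 2 gets r_1 (1 - r_2), ..., level L gets r_1 ... r_{L-1}
    (an assumed convention for the 'r-combination')."""
    L = len(lps)
    assert len(r) == L - 1 and all(0 < rk < 1 for rk in r)
    weights, carry = [], 1.0
    for k in range(L - 1):
        weights.append(carry * (1.0 - r[k]))
        carry *= r[k]
    weights.append(carry)                  # remaining mass goes to the last level
    mixed: Dict[Strategy, float] = {}
    for w, level in zip(weights, lps):
        for s, p in level.items():
            mixed[s] = mixed.get(s, 0.0) + w * p
    return mixed                           # completely mixed when the LPS has full support

# As r -> 0 the combination converges to the primary level lambda_1.
print(mix_lps([{"L": 1.0}, {"R": 1.0}], r=[0.01]))  # {'L': 0.99, 'R': 0.01}
```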
It remains to show that, for each $i$ and $h \in H_i$,
$$u_i(\sigma_i, \sigma_j; \beta_i)|_h = \max_{\sigma_i'} u_i(\sigma_i', \sigma_j; \beta_i)|_h\,.$$
Suppose not. Then $u_i(\sigma_i, \sigma_j; \beta_i)|_h < u_i(\sigma_i', \sigma_j; \beta_i)|_h$ for some $h \in H_i$ and some $\sigma_i'$. Let $p_i \in \Delta(S_i(h))$ be outcome-equivalent to $\sigma_i|_h$. Then there is some $s_i \in S_i(h)$ with $p_i(s_i) > 0$ and some $s_i' \in S_i(h)$ such that
$$u_i(s_i, \sigma_j; \beta_i)|_h < u_i(s_i', \sigma_j; \beta_i)|_h\,.$$
Since the beliefs $\beta_i$ and the behavior strategy $\sigma_j$ are induced by $\lambda^i$, it follows (using the notation introduced in the only-if part of this proof) that $u_i(s_i, \sigma_j; \beta_i)|_h = u_i(s_i, \lambda^{t_i}_\ell|_{S_j(h)})$ and $u_i(s_i', \sigma_j; \beta_i)|_h = u_i(s_i', \lambda^{t_i}_\ell|_{S_j(h)})$, and hence
$$u_i(s_i, \lambda^{t_i}_\ell|_{S_j(h)}) < u_i(s_i', \lambda^{t_i}_\ell|_{S_j(h)})\,,$$
which contradicts the fact that $\sigma_i$ is sequentially rational for $t_i$. This completes the proof of the proposition.
Proof of Proposition 34. (Only if.) Let $(\sigma_1, \sigma_2)$ be a quasi-perfect equilibrium. By definition, there is a sequence $(\sigma(n))_{n\in\mathbb{N}}$ of completely mixed behavior strategy profiles converging to $\sigma$ such that, for each $i$ and every $n \in \mathbb{N}$ and $h \in H_i$,
$$u_i(\sigma_i, \sigma_j(n))|_h = \max_{\sigma_i'} u_i(\sigma_i', \sigma_j(n))|_h\,.$$
For each $j$ and every $n$, let $p_j(n)$ be the mixed representation of $\sigma_j(n)$. By Lemma 16, the sequence $(p_j(n))_{n\in\mathbb{N}}$ of probability distributions on $S_j$ contains a subsequence $p_j(m)$ such that we can find an LPS $\lambda^i = (\lambda^i_1, \dots, \lambda^i_L)$ with full support on $S_j$ and a sequence of vectors $r(m) \in (0,1)^{L-1}$ converging to zero with
$$p_j(m) = r(m)\cdot\lambda^i$$
for all $m$. W.l.o.g., we assume that $p_j(n) = r(n)\cdot\lambda^i$ for all $n \in \mathbb{N}$.
By the same argument as in the proof of Proposition 30, it follows that $\lambda^i$ induces the behavior strategy $\sigma_j$. Now we define an epistemic model as follows. Let $T_1 = \{t_1\}$ and $T_2 = \{t_2\}$. Let, for each $i$, $\upsilon_i^{t_i}$ satisfy $\upsilon_i^{t_i} \circ z = u_i$, and let $(\lambda^{t_i}, \ell^{t_i})$ be the SCLP with support $S_j \times \{t_j\}$, where (1) $\lambda^{t_i}$ coincides with the LPS $\lambda^i$ constructed above, and (2) $\ell^{t_i}(S_j \times \{t_j\}) = L$. Then it is clear that $(t_1, t_2) \in [u]$, there is mutual certain belief of $\{(t_1, t_2)\}$ at $(t_1, t_2)$, and, for each $i$, $\sigma_i$ is induced for $t_i$ by $t_j$. It remains to show that $(t_1, t_2) \in [\mathit{isr}] \cap [\mathit{cau}]$.


Since, obviously, $(t_1, t_2) \in [\mathit{cau}]$, it suffices to show, for each $i$, that $\sigma_i$ is sequentially rational for $t_i$. Fix a player $i$ and let $h \in H_i$ be given. Let $p_i \ (\in \Delta(S_i(h)))$ be outcome-equivalent to $\sigma_i|_h$ and let $p_j(n)$ be the mixed representation of $\sigma_j(n)$. Then, since $(\sigma_1, \sigma_2)$ is a quasi-perfect equilibrium, it follows that
$$u_i(p_i, p_j(n)|_h) = \max_{p_i' \in \Delta(S_i(h))} u_i(p_i', p_j(n)|_h)$$
for all $n$. Hence, $p_i(s_i) > 0$ implies that
$$\sum_{s_j \in S_j(h)} p_j(n)|_h(s_j)\, u_i(s_i, s_j) = \max_{s_i' \in S_i(h)} \sum_{s_j \in S_j(h)} p_j(n)|_h(s_j)\, u_i(s_i', s_j) \qquad \text{(B.3)}$$
for all $n$. Let $\succsim^{t_i}_h$ be $i$'s preferences at $t_i$ conditional on $h$. Since $t_i \in \mathrm{proj}_{T_i}[\mathit{cau}_i]$ (so that $i$'s system of conditional preferences at $t_i$ satisfies Axiom 6 (Conditionality)) and $p_j(n) = r(n)\cdot\mathrm{proj}_{S_j}\lambda^{t_i}$ for all $n$, there exist vectors $r(n)|_h$ converging to zero such that $p_j(n)|_h = r(n)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h$ for all $n$. Together with equation (B.3) we obtain that $p_i(s_i) > 0$ implies
$$\sum_{s_j \in S_j(h)} (r(n)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h)(s_j)\, u_i(s_i, s_j) = \max_{s_i' \in S_i(h)} \sum_{s_j \in S_j(h)} (r(n)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h)(s_j)\, u_i(s_i', s_j)\,. \qquad \text{(B.4)}$$
We show that $p_i(s_i) > 0$ implies $s_i \in S_i^{t_i}(h)$. Suppose that $s_i \in S_i(h) \setminus S_i^{t_i}(h)$. Then there is some $s_i' \in S_i(h)$ with $s_i' \succ^{t_i}_h s_i$. By applying Lemma 17 in the case of acts on $S_j(h) \times \{t_j\}$, it follows that $r(n)|_h$ has a subsequence $r(m)|_h$ for which
$$\sum_{s_j \in S_j(h)} (r(m)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h)(s_j)\, u_i(s_i', s_j) > \sum_{s_j \in S_j(h)} (r(m)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h)(s_j)\, u_i(s_i, s_j)$$
for all $m$, which contradicts (B.4). Hence, $s_i \in S_i^{t_i}(h)$ whenever $p_i(s_i) > 0$, which implies that $p_i \in \Delta(S_i^{t_i}(h))$. Hence, $\sigma_i|_h$ is outcome-equivalent to some $p_i \in \Delta(S_i^{t_i}(h))$. This holds for every $h \in H_i$, and hence $\sigma_i$ is sequentially rational for $t_i$.
(If.) Suppose there is an epistemic model with $(t_1, t_2) \in [u] \cap [\mathit{isr}] \cap [\mathit{cau}]$ such that there is mutual certain belief of $\{(t_1, t_2)\}$ at $(t_1, t_2)$, and, for each $i$, $\sigma_i$ is induced for $t_i$ by $t_j$. We show that $(\sigma_1, \sigma_2)$ is a quasi-perfect equilibrium.
For each $i$, let $\lambda^i = (\lambda^i_1, \dots, \lambda^i_L) \in L\Delta(S_j)$ be the LPS coinciding with $\lambda^{t_i}$. Choose sequences $(r(n))_{n\in\mathbb{N}}$ in $(0,1)^{L-1}$ converging to zero and let the sequences $(p_j(n))_{n\in\mathbb{N}}$ of mixed strategies be given by $p_j(n) = r(n)\cdot\lambda^i$ for all $n$. Since $\lambda^i$ has full support on $S_j$, $p_j(n)$ is completely mixed for every $n$. For every $n$, let $\sigma_j(n)$ be a behavior representation of $p_j(n)$. Since $\lambda^i$ induces $\sigma_j$, it follows that $(\sigma_j(n))_{n\in\mathbb{N}}$ converges to $\sigma_j$; this is shown explicitly under the if part of Proposition 30. Hence, to establish that $(\sigma_1, \sigma_2)$ is a quasi-perfect equilibrium, we must show that, for each $i$ and $n \in \mathbb{N}$ and $h \in H_i$,
$$u_i(\sigma_i, \sigma_j(n))|_h = \max_{\sigma_i'} u_i(\sigma_i', \sigma_j(n))|_h\,. \qquad \text{(B.5)}$$
Fix a player $i$ and an information set $h \in H_i$. Let $p_i \ (\in \Delta(S_i(h)))$ be outcome-equivalent to $\sigma_i|_h$. Then equation (B.5) is equivalent to
$$u_i(p_i, p_j(n)|_h) = \max_{p_i' \in \Delta(S_i(h))} u_i(p_i', p_j(n)|_h)$$

for all $n$. Hence, we must show that $p_i(s_i) > 0$ implies that
$$\sum_{s_j \in S_j(h)} p_j(n)|_h(s_j)\, u_i(s_i, s_j) = \max_{s_i' \in S_i(h)} \sum_{s_j \in S_j(h)} p_j(n)|_h(s_j)\, u_i(s_i', s_j) \qquad \text{(B.6)}$$
for all $n$. In fact, it suffices to show this equation for infinitely many $n$, since in this case we can choose a subsequence for which the above equation holds, and this would be sufficient to show that $(\sigma_1, \sigma_2)$ is a quasi-perfect equilibrium.
Since, by assumption, $\sigma_i$ is sequentially rational for $t_i$, $\sigma_i|_h$ is outcome-equivalent to some mixed strategy in $\Delta(S_i^{t_i}(h))$. Hence, $p_i \in \Delta(S_i^{t_i}(h))$. Let $p_i(s_i) > 0$. By construction, $s_i \in S_i^{t_i}(h)$. Suppose that $s_i$ would not satisfy (B.6) for infinitely many $n$. Then there exists some $s_i' \in S_i(h)$ such that
$$\sum_{s_j \in S_j(h)} p_j(n)|_h(s_j)\, u_i(s_i, s_j) < \sum_{s_j \in S_j(h)} p_j(n)|_h(s_j)\, u_i(s_i', s_j)$$
for infinitely many $n$. Assume, w.l.o.g., that it is true for all $n$. Let $\succsim^{t_i}_h$ be $i$'s preferences at $t_i$ conditional on $h$. Since $t_i \in \mathrm{proj}_{T_i}[\mathit{cau}_i]$ (so that $i$'s system of conditional preferences at $t_i$ satisfies Axiom 6 (Conditionality)) and $p_j(n) = r(n)\cdot\mathrm{proj}_{S_j}\lambda^{t_i}$ for all $n$, there exist vectors $r(n)|_h$ converging to zero such that $p_j(n)|_h = r(n)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h$ for all $n$. This implies that
$$\sum_{s_j \in S_j(h)} (r(n)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h)(s_j)\, u_i(s_i, s_j) < \sum_{s_j \in S_j(h)} (r(n)|_h\cdot\mathrm{proj}_{S_j}\lambda^{t_i}_h)(s_j)\, u_i(s_i', s_j)$$
for all $n$. By applying Lemma 17 in the case of acts on $S_j(h) \times \{t_j\}$, it follows that $i$ at $t_i$ strictly prefers $s_i'$ to $s_i$ conditional on $h$, which contradicts the fact that $s_i \in S_i^{t_i}(h)$. Hence, $p_i(s_i) > 0$ implies (B.6) for infinitely many $n$, and as a consequence, $(\sigma_1, \sigma_2)$ is a quasi-perfect equilibrium.
Proof of Proposition 36. (Only if.) Let $(p_1, p_2)$ be a proper equilibrium. Then, by Definition 7, there is a sequence $(p(n))_{n\in\mathbb{N}}$ of $\varepsilon(n)$-proper equilibria converging to $p$, where $\varepsilon(n) \to 0$ as $n \to \infty$. By the necessity part of Proposition 5 of Blume et al. (1991b), there exists an epistemic model with $T_1 = \{t_1\}$ and $T_2 = \{t_2\}$ where, for each $i$, $\upsilon_i^{t_i}$ satisfies $\upsilon_i^{t_i} \circ z = u_i$, and the SCLP $(\lambda^{t_i}, \ell^{t_i})$ has the properties that $\lambda^{t_i} = (\lambda^{t_i}_1, \dots, \lambda^{t_i}_L)$ with support $S_j \times \{t_j\}$ satisfies, $\forall s_j \in S_j$, $\lambda^{t_i}_1(s_j, t_j) = p_j(s_j)$, and $\ell^{t_i}$ satisfies $\ell^{t_i}(S_j \times T_j) = L$, such that $(t_1, t_2) \in [\mathit{resp}]$. This argument involves Lemma 16 (which yields, for each $i$, the existence of $\lambda^{t_i}$ with full support on $S_j \times \{t_j\}$ by means of a subsequence $p_j(m)$ of $(p_j(n))_{n\in\mathbb{N}}$) and Lemma 17 (which yields that, for $m$ large enough, $i$ having the conjecture $p_j(m)$ leads to the same preferences over $i$'s strategies as $\succsim^{t_i}$). The only-if part follows since it is clear that $(t_1, t_2) \in [u] \cap [\mathit{cau}]$, that there is mutual certain belief of $\{(t_1, t_2)\}$ at $(t_1, t_2)$, and that, for each $i$, $p_i$ is induced for $t_i$ by $t_j$.
(If.) Suppose that there exists an epistemic model with $(t_1, t_2) \in [u] \cap [\mathit{resp}] \cap [\mathit{cau}]$ such that there is mutual certain belief of $\{(t_1, t_2)\}$ at $(t_1, t_2)$, and, for each $i$, $p_i$ is induced for $t_i$ by $t_j$. Then, by the sufficiency part of Proposition 5 in Blume et al. (1991b), there exists, for each $i$, a sequence of completely mixed strategies


$(p_i(n))_{n\in\mathbb{N}}$ converging to $p_i$, where, for each $n$, $(p_1(n), p_2(n))$ is an $\varepsilon(n)$-proper equilibrium and $\varepsilon(n) \to 0$ as $n \to \infty$. This argument involves Lemma 17 (which yields, for each $j$, the existence of $(x_j(n))_{n\in\mathbb{N}}$ so that, for all $n$, $i$ having the conjecture $p_j(n)$ leads to the same preferences over $i$'s strategies as $\succsim^{t_i}$).
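Both this proof and the next rely on the $\varepsilon$-proper trembling condition: a more costly "mistake" is played with at most $\varepsilon$ times the probability of a less costly one. Below is a minimal Python sketch of that check, with hypothetical helper names and assuming this standard formulation.

```python
from typing import Dict, Hashable, Callable

Strategy = Hashable

def satisfies_eps_proper_trembling(p_i: Dict[Strategy, float],
                                   p_j: Dict[Strategy, float],
                                   u_i: Callable[[Strategy, Strategy], float],
                                   eps: float) -> bool:
    """Check the epsilon-proper trembling condition for a completely mixed p_i:
    whenever s_i earns strictly more than s_i' against the conjecture p_j,
    the 'mistake' s_i' gets probability at most eps * p_i(s_i)."""
    value = {si: sum(q * u_i(si, sj) for sj, q in p_j.items()) for si in p_i}
    return all(p_i[worse] <= eps * p_i[better]
               for better in p_i for worse in p_i
               if value[better] > value[worse])
```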
Proof of Proposition 37. Part 1: If $p_i$ is properly rationalizable, then there exists an epistemic model with $(t_1, t_2) \in \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ such that $p_i$ is induced for $t_i$ by $t_j$. In the definition of proper rationalizability, $g$ in $K^g[\varepsilon\text{-prop trem}]$ goes to infinity for each $\varepsilon$, and then $\varepsilon$ converges to 0. The strategy for the proof of the only-if part of Proposition 37 is to reverse the order of $g$ and $\varepsilon$, by first noting that $\varepsilon$-proper rationalizability implies $\varepsilon$-proper $g$-rationalizability for all $g$, then showing that $\varepsilon$-proper $g$-rationalizability as $\varepsilon$ converges to 0 corresponds to the $g$th round of a finite algorithm, and finally proving that any mixed strategy surviving all rounds of the algorithm is rational under common certain belief of $[u] \cap [\mathit{resp}] \cap [\mathit{cau}]$ in some epistemic model. The algorithm eliminates preference relations on the players' strategy sets. It is related to, but differs from, Hammond's (2001) rationalizable dominance relations, which are recursively constructed by gradually extending a single incomplete binary relation on each player's strategy set.
Say that a mixed strategy $p_i$ for $i$ is $\varepsilon$-properly $g$-rationalizable if there exists an $\varepsilon$-epistemic model with $p_i^{t_i} = p_i$ for some $t_i \in \mathrm{proj}_{T_i} K^g([u] \cap [\mathit{ind}] \cap [\varepsilon\text{-prop trem}])$. Since, for all $g$,
$$\mathrm{CK}[\varepsilon\text{-prop trem}] \subseteq K^g[\varepsilon\text{-prop trem}]\,,$$
it follows from Definition 21 that if $p_i$ is an $\varepsilon$-properly rationalizable strategy, then, for all $g$, there exists an $\varepsilon$-epistemic model with $p_i^{t_i} = p_i$ for some $t_i \in \mathrm{proj}_{T_i} K^g([u] \cap [\mathit{ind}] \cap [\varepsilon\text{-prop trem}])$. Consequently, if a mixed strategy $p_i$ for $i$ is properly rationalizable, then, for all $g$, there exists a sequence $(p_i(n))_{n\in\mathbb{N}}$ of $\varepsilon(n)$-properly $g$-rationalizable strategies converging to $p_i$, where $\varepsilon(n) \to 0$ as $n \to \infty$. This means that it is sufficient to show that if $p_i$ satisfies that, for all $g$, there exists a sequence $(p_i(n))_{n\in\mathbb{N}}$ of $\varepsilon(n)$-properly $g$-rationalizable strategies converging to $p_i$ with $\varepsilon(n) \to 0$ as $n \to \infty$, then $p_i$ is rational under common certain belief of $[u] \cap [\mathit{resp}] \cap [\mathit{cau}]$ in some epistemic model. This will in turn be shown in two steps:
1 If a sequence of $\varepsilon(n)$-properly $g$-rationalizable strategies converges to $p_i$, then $p_i$ survives the $g$th round of a finite algorithm.
2 Any mixed strategy surviving all rounds of the algorithm is rational under common certain belief of $[u] \cap [\mathit{resp}] \cap [\mathit{cau}]$ in some epistemic model.
To construct the algorithm, note that any complete and transitive binary relation on $S_i$ can be represented by a vector of sets $(S_i(1), \dots, S_i(L))$ (with $L \geq 1$) that constitute a partition of $S_i$. The interpretation is that $s_i$ is preferred or indifferent to $s_i'$ if and only if $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell \leq \ell'$. Let, for each $i$, $\Sigma_i := 2^{S_i} \setminus \{\emptyset\}$ be the collection of non-empty subsets of $S_i$ and
$$\Xi_i := \{\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Sigma_i^{L_i} \mid \{S_i(1), \dots, S_i(L_i)\} \text{ is a partition of } S_i\}$$
denote the collection of vectors of sets that constitute a partition of $S_i$. Define the algorithm by, for each $i$, setting $\Xi_i^{-1} = \Xi_i$ and determining, $\forall g \geq 0$, $\Xi_i^g$ as follows: $\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Xi_i^g$ if and only if $\xi_i \in \Xi_i$ and there exists an LPS $\lambda^i \in L\Delta(S_j \times \Xi_j)$ with $\mathrm{supp}\,\lambda^i = S_j \times \Xi_j^i$ for some $\Xi_j^i \subseteq \Xi_j^{g-1}$, satisfying that
$$(s_j, \xi_j) \gg (s_j', \xi_j) \text{ according to } \lambda^i$$


if $\xi_j = (S_j(1), \dots, S_j(L_j)) \in \Xi_j^i$, $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$, and
$$s_i \succ^i s_i'$$
if and only if $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell < \ell'$, where $\succsim^i$ is represented by $\upsilon_i^i$ satisfying $\upsilon_i^i \circ z = u_i$ and $\lambda^i$.
Write $\Xi := \Xi_1 \times \Xi_2$ and, $\forall g \geq 0$, $\Xi^g = \Xi_1^g \times \Xi_2^g$. Since $\Xi^0 \subseteq \Xi$, it follows by induction that, $\forall g \geq 0$, $\Xi^g \subseteq \Xi^{g-1}$. Moreover, since the finiteness of $S = S_1 \times S_2$ implies that $\Xi$ is finite, it follows that $(\Xi^g)_{g\geq 0}$ converges to some $\Xi^\infty = \Xi_1^\infty \times \Xi_2^\infty$ in a finite number of rounds. Say that $p_i$ survives the $g$th round of the algorithm if there exists $\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Xi_i^g$ with $\Delta(S_i(1)) \ni p_i$.
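To illustrate the partition representation that the algorithm manipulates, the following Python sketch (hypothetical helper names; the LPS is assumed to be given as a list of probability distributions over opponent strategy-partition pairs) computes the vector of indifference classes $(S_i(1), S_i(2), \dots)$ induced by lexicographic expected utility, best class first.

```python
from itertools import groupby
from typing import Dict, List, Tuple, Hashable, Callable

Strategy = Hashable
LPS = List[Dict[Tuple[Strategy, Hashable], float]]   # levels over (s_j, xi_j) pairs

def preference_partition(S_i: List[Strategy],
                         lps: LPS,
                         u_i: Callable[[Strategy, Strategy], float]) -> List[List[Strategy]]:
    """Represent the complete, transitive preference induced by lexicographic
    expected utility as a vector of indifference classes, best class first."""
    def lex_value(si: Strategy) -> Tuple[float, ...]:
        # one expected payoff per LPS level; tuples compare lexicographically
        return tuple(sum(p * u_i(si, sj) for (sj, _), p in level.items())
                     for level in lps)
    ranked = sorted(S_i, key=lex_value, reverse=True)
    return [list(cls) for _, cls in groupby(ranked, key=lex_value)]
```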
Step 1. We first show that $p_i$ survives the $g$th round of the algorithm if there exists a sequence $(p_i(n))_{n\in\mathbb{N}}$ of $\varepsilon(n)$-properly $g$-rationalizable strategies converging to $p_i$, where $\varepsilon(n) \to 0$ as $n \to \infty$. Say that the probability distribution $\mu \in \Delta(S_j \times T_j)$ is an $\varepsilon$-properly $g$-rationalizable belief for $i$ if there is an $\varepsilon$-epistemic model with $\mu^{t_i} = \mu$ for some $t_i \in \mathrm{proj}_{T_i} K^g([u] \cap [\mathit{ind}] \cap [\varepsilon\text{-prop trem}])$. It is sufficient to establish the following result:
If $\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Xi_i$ satisfies that there exists a sequence $(\mu_i(n))_{n\in\mathbb{N}}$ of $\varepsilon(n)$-properly $g$-rationalizable beliefs for $i$, where $\varepsilon(n) \to 0$ as $n \to \infty$, and where, for all $n$,
$$\sum_{s_j}\sum_{t_j} \mu_i(n)(s_j, t_j)\, u_i(s_i, s_j) > \sum_{s_j}\sum_{t_j} \mu_i(n)(s_j, t_j)\, u_i(s_i', s_j) \qquad \text{(B.7)}$$
if and only if $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell < \ell'$, then $\xi_i \in \Xi_i^g$.
This result is established by induction.


If $(\mu_i(n))_{n\in\mathbb{N}}$ is a sequence of $\varepsilon(n)$-properly $g$-rationalizable beliefs for $i$, then, for each $n$, there exists an $\varepsilon$-epistemic model with $T_1(n) \times T_2(n)$ as the set of type vectors, such that $\mu_i(n) \in \Delta(S_j \times T_j(n))$. For the inductive proof we can w.l.o.g. partition $T_j(n)$ into $\Xi_j$, where $\xi_j = (S_j(1), \dots, S_j(L_j)) \in \Xi_j$ corresponds to the subset of $j$-types in $T_j(n)$ satisfying that
$$\sum_{s_i}\sum_{t_i} \mu^{t_j}(n)(s_i, t_i)\, u_j(s_j, s_i) > \sum_{s_i}\sum_{t_i} \mu^{t_j}(n)(s_i, t_i)\, u_j(s_j', s_i)$$
if and only if $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$, since $i$'s certain belief of $j$'s $\varepsilon(n)$-proper trembling only matters through the $j$-types' preferences over $j$'s pure strategies. Hence, we can w.l.o.g. assume that $\mu_i(n) \in \Delta(S_j \times \Xi_j)$.
($g = 0$) Let $(\mu_i(n))_{n\in\mathbb{N}}$ be a sequence of $\varepsilon(n)$-properly 0-rationalizable beliefs for $i$, where $\varepsilon(n) \to 0$ as $n \to \infty$, and where, for all $n$, (B.7) is satisfied. By Lemma 16, the sequence $(\mu_i(n))_{n\in\mathbb{N}}$ contains a subsequence $\mu_i(m)$ such that one can find an LPS $\lambda^i \in L\Delta(S_j \times \Xi_j)$ and a sequence of vectors $r_i(m) \in (0,1)^{L-1}$ (for some $L$) converging to 0 with
$$\mu_i(m) = r_i(m)\cdot\lambda^i$$
for all $m$. By Definition 20, $\mathrm{supp}\,\lambda^i = S_j \times \Xi_j^i$ for some $\Xi_j^i \subseteq \Xi_j$. Let $\succsim^i$ be represented by $\upsilon_i^i$ satisfying $\upsilon_i^i \circ z = u_i$ and $\lambda^i$. Since Definition 20 is the only requirement on $(\mu_i(n))_{n\in\mathbb{N}}$ for $g = 0$, we may, for each $\xi_j \in \Xi_j^i$, associate $\xi_j$ with $(S_j(1), \dots, S_j(L_j)) \in \Xi_j^{-1}$ satisfying that $(s_j, \xi_j) \gg (s_j', \xi_j)$ according to $\lambda^i$ if $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$. By Lemma 2, $\succsim^i$ yields the same preferences on $S_i$ as $\mu_i(n)$ (for any $n$). Hence, $\xi_i \in \Xi_i^0$.


($g > 0$) Suppose the result holds for $g' = 0, \dots, g-1$. Let $(\mu_i(n))_{n\in\mathbb{N}}$ be a sequence of $\varepsilon(n)$-properly $g$-rationalizable beliefs for $i$, where $\varepsilon(n) \to 0$ as $n \to \infty$, and where, for all $n$, (B.7) is satisfied. As for $g = 0$, use Lemma 16 to construct an LPS $\lambda^i \in L\Delta(S_j \times \Xi_j)$, where $\mathrm{supp}\,\lambda^i = S_j \times \Xi_j^i$ for some $\Xi_j^i \subseteq \Xi_j$, and where $\succsim^i$ is represented by $\upsilon_i^i$ satisfying $\upsilon_i^i \circ z = u_i$ and $\lambda^i$. Since
$$K^g[\varepsilon\text{-prop trem}] \subseteq K_i[\varepsilon\text{-prop trem}_j] \cap K^{g-1}[\varepsilon\text{-prop trem}]\,,$$
the induction hypothesis implies that $\Xi_j^i \subseteq \Xi_j^{g-1}$ and that $(s_j, \xi_j) \gg (s_j', \xi_j)$ according to $\lambda^i$ if $\xi_j = (S_j(1), \dots, S_j(L_j)) \in \Xi_j^i$, $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$. By Lemma 17, $\succsim^i$ yields the same preferences on $S_i$ as $\mu_i(n)$ (for any $n$). Hence, $\xi_i \in \Xi_i^g$. This concludes the induction and thereby Step 1.
Step 2. We then show that if a mixed strategy $p_i$ survives all rounds of the algorithm, then there exists an epistemic model with $p_i \in \Delta(S_i^{t_i})$ for some $t_i \in \mathrm{proj}_{T_i}\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$. It is sufficient to show that one can construct an epistemic model with $T_1 \times T_2 \subseteq \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ such that, for each $i$ and each $\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Xi_i^\infty$, there exists $t_i \in T_i$ satisfying that $s_i \succ^{t_i} s_i'$ if and only if $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell < \ell'$. Construct an epistemic model with, for each $i$, a bijection $\bar\xi_i : T_i \to \Xi_i^\infty$ from the set of types to the collection of vectors in $\Xi_i^\infty$. Since there exists $\bar g \geq 0$ such that $\Xi^g = \Xi^\infty$ for all $g \geq \bar g$, it follows from the definition of the algorithm $(\Xi^g)_{g\geq 0}$ that, for each $i$, $\Xi_i^\infty$ is characterized as follows: $\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Xi_i^\infty$ if and only if there exists $t_i \in T_i$ such that $\bar\xi_i(t_i) = \xi_i$, and an LPS $\lambda^{t_i} = (\lambda^{t_i}_1, \dots, \lambda^{t_i}_L) \in L\Delta(S_j \times T_j)$ with $\mathrm{supp}\,\lambda^{t_i} = S_j \times T_j^{t_i}$ for some $T_j^{t_i} \subseteq T_j$, satisfying for each $t_j \in T_j^{t_i}$ that
$$(s_j, t_j) \gg (s_j', t_j) \text{ according to } \lambda^{t_i}$$
if $\bar\xi_j(t_j) = (S_j(1), \dots, S_j(L_j(t_j)))$, $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$, and
$$s_i \succ^{t_i} s_i'$$
if and only if $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell < \ell'$, where $\upsilon_i^{t_i}$ satisfies $\upsilon_i^{t_i} \circ z = u_i$ and the SCLP $(\lambda^{t_i}, \ell^{t_i})$ has the property that $\ell^{t_i}$ satisfies $\ell^{t_i}(S_j \times T_j) = L$ (so that $\succsim^{t_i}$ is represented by $\upsilon_i^{t_i}$ and $\lambda^{t_i}$). Consider any $\xi_i = (S_i(1), \dots, S_i(L_i)) \in \Xi_i^\infty$. By the construction of the type sets, there exists $t_i \in T_i$ such that $\bar\xi_i(t_i) = \xi_i$, and $s_i \succ^{t_i} s_i'$ if and only if $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell < \ell'$; in particular, $S_i(1) = S_i^{t_i}$. It remains to be shown that, for each $i$, $T_1 \times T_2 \subseteq [u_i] \cap [\mathit{resp}_i] \cap [\mathit{cau}_i]$, implying that $T_1 \times T_2 \subseteq \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ since $T_j^{t_i} \subseteq T_j$ for each $t_i \in T_i$ of any player $i$.
It is clear that $T_1 \times T_2 \subseteq [u_i] \cap [\mathit{cau}_i]$. That $T_1 \times T_2 \subseteq [\mathit{resp}_i]$ follows from the property that, for any $t_i \in T_i$, $(s_j, t_j) \gg (s_j', t_j)$ according to $\lambda^{t_i}$ whenever $t_j \in T_j^{t_i}$ if $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$, while $s_j \succ^{t_j} s_j'$ if and only if $s_j \in S_j(\ell)$, $s_j' \in S_j(\ell')$ and $\ell < \ell'$ (where $\bar\xi_j(t_j) = (S_j(1), \dots, S_j(L_j(t_j)))$). This concludes Step 2.

In the construction in Step 2, let ti Ti satisfy that pi (Siti ). To conclude


Part 1 of the proof of Proposition 37, add type tj to Tj having the property that pi is

induced for ti by tj . Assume that jtj satisfies jtj z = uj and the SCLP (ti , `ti ) on

Si Tj with support Si {ti } has the property that tj = (1tj , . . . , Ltj ) satisfies,

t
t
si Si , 1 j (si , ti ) = pi (si ) and ` i satisfies ` i (Sj Tj ) = L (so that tj is

t
t
represented by j j and j ). Furthermore, assume that

(si , ti ) (s0i , ti ) according to tj


if $\bar\xi_i(t_i) = (S_i(1), \dots, S_i(L_i(t_i)))$, $s_i \in S_i(\ell)$, $s_i' \in S_i(\ell')$ and $\ell < \ell'$. Then $\tilde t_j \in \mathrm{proj}_{T_j}([u_j] \cap [\mathit{resp}_j] \cap [\mathit{cau}_j])$, and since $T_i^{\tilde t_j} \subseteq T_i$, $T_i \times (T_j \cup \{\tilde t_j\}) \subseteq \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$. Hence, $(t_i, \tilde t_j) \in \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ and $p_i$ is induced for $t_i$ by $\tilde t_j$.
Part 2: If there exists an epistemic model with $(t_1, t_2) \in \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ such that $p_1$ is induced for $t_1$ by $t_2$, then $p_1$ is properly rationalizable. Schuhmacher (1999) considers a set of type profiles $T = T_1 \times T_2$, where each type $t_i$ of either player $i$ plays a completely mixed strategy $p_i^{t_i}$ and has a subjective probability distribution on $S_j \times T_j$, for which the conditional distribution on $S_j \times \{t_j\}$ coincides with $p_j^{t_j}$ whenever the conditional distribution is defined. His formulation implies that all types of a player agree not only on the preferences but also on the relative likelihood of the strategies for any given opponent type. In contrast, the characterization given in Proposition 37 requires the types of a player only to agree on the preferences of any given opponent type. This difference implies that expanded type sets must be constructed for the if part of the proof of Proposition 37.
Assume that there exists an epistemic model with $(t_1, t_2) \in \mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ such that $p_1$ is induced for $t_1$ by $t_2$. In particular, $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \neq \emptyset$, and $p_1 \in \Delta(S_1^{t_1})$ since $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq [\mathit{resp}_2]$. Let, for each $i$, $T_i^0 := \mathrm{proj}_{T_i}\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$. Note that, for each $t_i \in T_i^0$ of any player $i$, $t_i$ deems $(s_j, t_j)$ subjectively impossible if $t_j \in T_j \setminus T_j^0$, since $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) = K\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq K_i\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$, implying $T_j^{t_i} \subseteq T_j^0$.
We first construct a sequence, indexed by $n$, of $\varepsilon$-epistemic models. By Definition 20 and Assumption 3 this involves, for each $n$ and for each player $i$, a finite set of types (which we below denote by $T_i''$ and which will not vary with $n$) and, for each $n$, for each $i$, and for each type $\tau_i \in T_i''$, a mixed strategy and a probability distribution $(p_i^{\tau_i}(n), \mu^{\tau_i}(n)) \in \Delta(S_i) \times \Delta(S_j \times T_j'')$ that will vary with $n$.
For either player $i$ and each type $t_i \in T_i^0$ of the original epistemic model, make as many clones of $t_i$ as there are members of $T_j^0$: for each $i$, $T_i'' := \{\tau_i(t_i, t_j) \mid t_i \in T_i^0 \text{ and } t_j \in T_j^0\}$, where $\tau_i(t_i, t_j)$ is the clone of $t_i$ associated with $t_j$. The term "clone" in the above statement reflects that, $\forall t_j \in T_j^0$, $\tau_i(t_i, t_j)$ is assumed to share the preferences of $t_i$ in the sense that
1 the set of opponent types that $\tau_i(t_i, t_j)$ does not deem subjectively impossible, $T_j^{\tau_i(t_i, t_j)}$, is equal to $\{\tau_j(t_j', t_i) \mid t_j' \in T_j^{t_i}\}$ ($\subseteq T_j''$ since $T_j^{t_i} \subseteq T_j^0$), and
2 the likelihood of $(s_j, \tau_j(t_j', t_i))$ according to $\lambda^{\tau_i(t_i, t_j)}$ is equal to the likelihood of $(s_j, t_j')$ according to $\lambda^{t_i}$.
Since $T_j^{\tau_i(t_i, t_j)} = \{\tau_j(t_j', t_i) \mid t_j' \in T_j^{t_i}\}$ is independent of $t_j$, but corresponds to disjoint subsets of $T_j''$ for different $t_i$'s, we obtain the following conclusion for any pair of type vectors $(t_1, t_2), (t_1', t_2') \in T_1^0 \times T_2^0$:
$$T_j^{\tau_i(t_i, t_j)} = T_j^{\tau_i(t_i', t_j')} \quad \text{if } t_i = t_i'\,, \qquad T_j^{\tau_i(t_i, t_j)} \cap T_j^{\tau_i(t_i', t_j')} = \emptyset \quad \text{if } t_i \neq t_i'\,.$$
This ends the construction of type sets in the sequence of $\varepsilon$-epistemic models.
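As a minimal illustration of this cloning step (data structures and names are hypothetical, not from the text), the expanded type sets can be built as plain products of original types with opponent types.

```python
from typing import Dict, Hashable, Set, Tuple

Type = Hashable

def make_clone_types(T1_0: Set[Type], T2_0: Set[Type]) -> Dict[int, Set[Tuple[str, Type, Type]]]:
    """Expanded type sets T_i'': one clone tau_i(t_i, t_j) of each original
    type t_i for every opponent type t_j."""
    return {
        1: {("tau1", t1, t2) for t1 in T1_0 for t2 in T2_0},
        2: {("tau2", t2, t1) for t2 in T2_0 for t1 in T1_0},
    }

# With two original types each, every player gets 2 x 2 = 4 clones.
print({i: len(ts) for i, ts in make_clone_types({"a", "b"}, {"c", "d"}).items()})  # {1: 4, 2: 4}
```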
Fix a player $i$ and consider any $\tau_i \in T_i''$. Since $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq [u_i]$, $\succsim^{\tau_i}$ can be represented by a vNM utility function $\upsilon_i^{\tau_i}$ satisfying $\upsilon_i^{\tau_i} \circ z = u_i$ and an LPS $\lambda^{\tau_i}$ on $S_j \times T_j^{\tau_i}$. Since $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq [\mathit{cau}_i]$, this LPS yields, for each $\tau_j \in T_j^{\tau_i}$, a partition $\{E_j^{\tau_i}(1), \dots, E_j^{\tau_i}(L^{\tau_i})\}$ of $S_j \times T_j^{\tau_i}$, where $(s_j, \tau_j) \gg (s_j', \tau_j')$ according to $\lambda^{\tau_i}$ if and only if $(s_j, \tau_j) \in E_j^{\tau_i}(\ell)$, $(s_j', \tau_j') \in E_j^{\tau_i}(\ell')$ and $\ell < \ell'$. Since


$\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq [\mathit{resp}_i]$, it follows that $s_j$ is a most preferred strategy for $\tau_j$ in $\{s_j' \in S_j \mid (s_j', \tau_j) \in E_j^{\tau_i}(\ell) \cup \dots \cup E_j^{\tau_i}(L^{\tau_i})\}$ if $(s_j, \tau_j) \in E_j^{\tau_i}(\ell)$.
Consider any $i$ and $\tau_i \in T_i''$. Construct the sequence $(\mu^{\tau_i}(n))_{n\in\mathbb{N}}$ as follows. Choose for $\tau_i \in \{\tau_i(t_i, t_j) \mid t_j \in T_j^0\}$ one common sequence $(r^{\tau_i}(n))_{n\in\mathbb{N}}$ in $(0,1)^{L^{\tau_i}-1}$ converging to 0 and let the sequence of probability distributions $(\mu^{\tau_i}(n))_{n\in\mathbb{N}}$ be given by $\mu^{\tau_i}(n) = r^{\tau_i}(n)\cdot\lambda^{\tau_i}$. For all $n$, $\mathrm{supp}\,\mu^{\tau_i}(n) = S_j \times T_j^{\tau_i}$. By Lemma 17, $(r^{\tau_i}(n))_{n\in\mathbb{N}}$ can be chosen such that, for all $n$,
$$\sum_{s_j}\sum_{\tau_j} \mu^{\tau_i}(n)(s_j, \tau_j)\, u_i(s_i, s_j) > \sum_{s_j}\sum_{\tau_j} \mu^{\tau_i}(n)(s_j, \tau_j)\, u_i(s_i', s_j)$$
if and only if $s_i \succ^{\tau_i} s_i'$. Hence, for all $n$, the belief $\mu^{\tau_i}(n)$ leads to the same preferences over $i$'s strategies as $\succsim^{\tau_i}$. This ends the construction of the sequences $(\mu^{\tau_i}(n))_{n\in\mathbb{N}}$ in the sequence of $\varepsilon$-epistemic models.
Consider now the construction of the sequence $(p_i^{\tau_i}(n))_{n\in\mathbb{N}}$ for any $i$ and $\tau_i \in T_i''$. There are two cases. Case 1: If there is $\tau_j \in T_j''$ such that $\tau_i \in T_i^{\tau_j}$, implying that $S_i \times \{\tau_i\} \subseteq \mathrm{supp}\,\mu^{\tau_j}(n)$, then let $p_i^{\tau_i}(n)$ be determined by
$$p_i^{\tau_i}(n)(s_i) = \frac{\mu^{\tau_j}(n)(s_i, \tau_i)}{\mu^{\tau_j}(n)(S_i, \tau_i)}\,.$$
Moreover, for each $n$, there exists $\varepsilon(n)$ such that, for each player $i$, the $\varepsilon(n)$-proper trembling condition is satisfied at all such types in $T_i''$: since
$$\frac{p_i^{\tau_i}(n)(s_i')}{p_i^{\tau_i}(n)(s_i)} = \frac{\mu^{\tau_j}(n)(s_i', \tau_i)}{\mu^{\tau_j}(n)(s_i, \tau_i)} \to 0 \quad \text{as } n \to \infty$$
if $(s_i, \tau_i) \in E_i^{\tau_j}(\ell)$, $(s_i', \tau_i) \in E_i^{\tau_j}(\ell')$ and $\ell < \ell'$, and since $s_i$ is a most preferred strategy for $\tau_i$ in $\{s_i' \in S_i \mid (s_i', \tau_i) \in E_i^{\tau_j}(\ell) \cup \dots \cup E_i^{\tau_j}(L^{\tau_j})\}$ if $(s_i, \tau_i) \in E_i^{\tau_j}(\ell)$, it follows that there exists a sequence $(\varepsilon^{\tau_i}(n))_{n\in\mathbb{N}}$ converging to 0 such that, for all $n$,
$$\varepsilon^{\tau_i}(n)\, p_i^{\tau_i}(n)(s_i) \geq p_i^{\tau_i}(n)(s_i')$$
whenever
$$\sum_{s_j}\sum_{\tau_j} \mu^{\tau_i}(n)(s_j, \tau_j)\, u_i(s_i, s_j) > \sum_{s_j}\sum_{\tau_j} \mu^{\tau_i}(n)(s_j, \tau_j)\, u_i(s_i', s_j)\,.$$
Let, for each $n$,
$$\varepsilon(n) := \max\bigl(\{\varepsilon^{\tau_1}(n) \mid \exists\tau_2 \in T_2'' \text{ s.t. } \tau_1 \in T_1^{\tau_2}\} \cup \{\varepsilon^{\tau_2}(n) \mid \exists\tau_1 \in T_1'' \text{ s.t. } \tau_2 \in T_2^{\tau_1}\}\bigr)\,.$$
Since the type sets are finite, $\varepsilon(n) \to 0$ as $n \to \infty$. Case 2: If there is no $\tau_j \in T_j''$ such that $\tau_i \in T_i^{\tau_j}$, then let $p_i^{\tau_i}(n)$ be any mixed strategy having the property that $\tau_i$ satisfies the $\varepsilon(n)$-proper trembling condition given the belief $\mu^{\tau_i}(n)$. This ends the construction of the sequences $(p_i^{\tau_i}(n))_{n\in\mathbb{N}}$ in the sequence of $\varepsilon$-epistemic models.
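The Case 1 formula is simply conditioning the opponent's belief on the event that $i$ is of type $\tau_i$. A small Python sketch under that reading (helper and variable names hypothetical):

```python
from typing import Dict, Hashable, Tuple

Strategy = Hashable
Type = Hashable

def strategy_from_opponent_belief(mu_j: Dict[Tuple[Strategy, Type], float],
                                  tau_i: Type) -> Dict[Strategy, float]:
    """p_i(s_i) = mu_j(s_i, tau_i) / mu_j(S_i, tau_i): the opponent's belief
    conditioned on i being of type tau_i."""
    mass = sum(p for (_, t), p in mu_j.items() if t == tau_i)
    if mass == 0:
        raise ValueError("tau_i gets no probability; Case 2 of the proof applies instead")
    return {s: p / mass for (s, t), p in mu_j.items() if t == tau_i}

# Hypothetical belief over (strategy, type) pairs:
mu = {("A", "tau"): 0.3, ("B", "tau"): 0.1, ("A", "other"): 0.6}
print(strategy_from_opponent_belief(mu, "tau"))  # {'A': 0.75, 'B': 0.25}
```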

We then turn to the construction of a sequence $(p_1^{\tilde\tau_1}(n))_{n\in\mathbb{N}}$ converging to $p_1$. Add a type $\tilde\tau_1$ to $T_1''$ having the property that $\mu^{\tilde\tau_1}(n) = \mu^{\tau_1(t_1, t_2)}(n)$ for some $t_2 \in T_2^0$, but where $p_1^{\tilde\tau_1}(n) = (1 - \tfrac{1}{n})\, p_1 + \tfrac{1}{n}\, p_1^{\tau_1(t_1, t_2)}(n)$. For all $n$, we have that the belief $\mu^{\tilde\tau_1}(n)$ leads to the same preferences over 1's strategies as $\succsim^{t_1}$. This in turn implies that the $\varepsilon(n)$-proper trembling condition is satisfied at $\tilde\tau_1$ since $p_1 \in \Delta(C_1^{t_1})$.
Consider the sequence, indexed by $n$, of $\varepsilon$-epistemic models,
with $T_1'' \cup \{\tilde\tau_1\}$ as the type set for 1 and $T_2''$ as the type set for 2,


with, for each type $\tau_i$ of any player $i$, $(p_i^{\tau_i}(n), \mu^{\tau_i}(n))$ as the sequence of a mixed strategy and a probability distribution,
as constructed above. Furthermore, it follows that, for all $n$, the $\varepsilon(n)$-proper trembling condition is satisfied at all types in $T_1'' \cup \{\tilde\tau_1\}$ and at all types in $T_2''$, where $\varepsilon(n) \to 0$ as $n \to \infty$. Hence, for all $n$,
$$(T_1'' \cup \{\tilde\tau_1\}) \times T_2'' \subseteq \mathrm{CK}[\varepsilon(n)\text{-prop trem}]\,;$$
in particular, $p_1^{\tilde\tau_1}(n)$ is $\varepsilon(n)$-properly rationalizable. Moreover, $(p_1^{\tilde\tau_1}(n))_{n\in\mathbb{N}}$ converges to $p_1$. By Definition 21, $p_1$ is a properly rationalizable strategy.

Appendix C
Proofs of results in Chapter 11

Proof of Proposition 42. Assume that the pure strategy $s_i$ for $i$ is properly rationalizable in a finite strategic two-player game $G$. Then there exists an epistemic model satisfying Assumption 1 with $s_i \in S_i^{t_i}$ for some $(t_1, t_2) \in \mathrm{proj}_{T_1 \times T_2}\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$ (this follows from Proposition 37 since $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) = K\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq K_j\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$). In particular, $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \neq \emptyset$.
By Proposition 20(ii), for each $i$, $\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) = K\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}]) \subseteq K_i\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$. Hence, we can construct a new epistemic model $(S_1, T_1', S_2, T_2')$ where, for each $i$, $T_i' := \mathrm{proj}_{T_i}\mathrm{CK}([u] \cap [\mathit{resp}] \cap [\mathit{cau}])$, and, for each $t_i \in T_i'$ of any player $i$, $\kappa^{t_i} = \{t_i\} \times S_j \times T_j^{t_i} \subseteq \{t_i\} \times S_j \times T_j'$. Since $T_1' \times T_2' \subseteq [\mathit{cau}]$ according to the definition of caution given in Section 5.3, it follows that the new epistemic model satisfies Axiom 6 for each $t_i \in T_i'$ of any player $i$. Therefore, the new epistemic model satisfies Assumption 2 with $S_1 \times T_1' \times S_2 \times T_2' \subseteq [\mathit{cau}]$ according to the definition of caution given in Section 6.3. Also, $S_1 \times T_1' \times S_2 \times T_2' \subseteq [u]$. It remains to be shown that, for each $i$, $S_1 \times T_1' \times S_2 \times T_2' \subseteq B^0_i[\mathit{rat}_j]$, since, by the fact that $\kappa^{t_i} \subseteq \{t_i\} \times S_j \times T_j'$ for each $t_i \in T_i'$ of any player $i$, we then have an epistemic model with $s_i \in S_i^{t_i}$ for some $(t_1, t_2) \in \mathrm{proj}_{T_1 \times T_2}\mathrm{CKA}^0$.
Since $T_1' \times T_2' \subseteq [\mathit{resp}]$, we have that, for each $t_i \in T_i'$ of any player $i$, $(s_j, t_j) \gg^{t_i} (s_j', t_j)$ whenever $t_j \in T_j^{t_i}$ and $s_j \succ^{t_j} s_j'$. In particular, for each $t_i \in T_i'$ of any player $i$, $(s_j, t_j) \gg^{t_i} (s_j', t_j)$ whenever $t_j \in T_j^{t_i}$, $s_j \in S_j^{t_j}$ and $s_j' \notin S_j^{t_j}$. By Proposition 6 this means that, for each $t_i \in T_i'$ of any player $i$, $\succsim^{t_i}$ is admissible on $\mathrm{proj}_{T_i \times S_j \times T_j}[\mathit{rat}_j] \cap \kappa^{t_i}$, showing that $S_1 \times T_1' \times S_2 \times T_2' \subseteq (B^0_1[\mathit{rat}_2] \cap B^0_2[\mathit{rat}_1])$.

Proof of Proposition 43. Part 1: If $s_i$ is permissible, then there exists an epistemic model with $s_i \in S_i^{t_i}$ for some $(t_1, t_2) \in \mathrm{proj}_{T_1 \times T_2}\mathrm{CKA}$. It is sufficient to construct a belief system with $S_1 \times T_1 \times S_2 \times T_2 \subseteq \mathrm{CKA}$ such that, for each $s_i \in P_i$ of any player $i$, there exists $t_i \in T_i$ with $s_i \in S_i^{t_i}$. Construct a belief system with, for each $i$, a bijection $s_i(\cdot) : T_i \to P_i$ from the set of types to the set of permissible pure strategies. By Lemma 10(i) we have that, for each $t_i \in T_i$ of any player $i$, there exists $Y_j^{t_i} \subseteq P_j$ such that $s_i(t_i) \in S_i \setminus D_i(Y_j^{t_i})$. Determine the set of opponent types


that $t_i$ deems subjectively possible as follows: $T_j^{t_i} = \{t_j \in T_j \mid s_j(t_j) \in Y_j^{t_i}\}$. Let, for each $t_i \in T_i$ of any player $i$, $\succsim^{t_i}$ satisfy
1. $\upsilon_i^{t_i} \circ z = u_i$ (so that $S_1 \times T_1 \times S_2 \times T_2 \subseteq [u]$), and
2. $p \succ^{t_i} q$ iff $p_{E_j}$ weakly dominates $q_{E_j}$ for $E_j = E_j^{t_i} := \{(s_j, t_j) \mid s_j = s_j(t_j) \text{ and } t_j \in T_j^{t_i}\}$ or $E_j = S_j \times T_j^{t_i}$, which implies that $\beta^{t_i} = \{t_i\} \times E_j^{t_i}$ and $\kappa^{t_i} = \{t_i\} \times S_j \times T_j^{t_i}$ (so that $S_1 \times T_1 \times S_2 \times T_2 \subseteq [\mathit{cau}]$).
By the construction of $E_j^{t_i}$, this means that $S_i^{t_i} = S_i \setminus D_i(Y_j^{t_i}) \ni s_i(t_i)$ since, for any acts $p$ and $q$ on $S_j \times T_j$ satisfying that there exist mixed strategies $p_i, q_i \in \Delta(S_i)$ such that, $\forall (s_j, t_j) \in S_j \times T_j$, $p(s_j, t_j) = z(p_i, s_j)$ and $q(s_j, t_j) = z(q_i, s_j)$, $p \succ^{t_i} q$ iff $p_{E_j}$ weakly dominates $q_{E_j}$ for $E_j = Y_j^{t_i} \times T_j$ or $E_j = S_j \times T_j$. This in turn implies, for each $t_i \in T_i$ of any player $i$,
3. $\beta^{t_i} \subseteq \mathrm{proj}_{T_i \times S_j \times T_j}[\mathit{rat}_j]$ (so that, in combination with 2., $S_1 \times T_1 \times S_2 \times T_2 \subseteq \hat B_i[\mathit{rat}_j] \cap \hat B_j[\mathit{rat}_i]$).
Furthermore, $S_1 \times T_1 \times S_2 \times T_2 \subseteq \mathrm{CKA}$ since $T_j^{t_i} \subseteq T_j$ for each $t_i \in T_i$ of any player $i$. Since, for each player $i$, $s_i(\cdot)$ is onto $P_i$, it follows that, for each $s_i \in P_i$ of any player $i$, there exists $t_i \in T_i$ with $s_i \in S_i^{t_i}$.
Part 2: If there exists an epistemic model with $s_i \in S_i^{t_i}$ for some $(t_1, t_2) \in \mathrm{proj}_{T_1 \times T_2}\mathrm{CKA}$, then $s_i$ is permissible. Part 2 of the proof of Proposition 27 applies.
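Both this construction and the next rely on a weak-dominance test on a subset $Y$ of opponent strategies (as in the sets $D_i(Y_j^{t_i})$ above; the precise definition of $D_i$ is the one accompanying Lemma 10). Below is a hedged Python sketch of that test as a linear program, with hypothetical names and using SciPy: the candidate strategy is weakly dominated on $Y$ iff some mixed strategy does weakly better against every element of $Y$ and strictly better against at least one.

```python
from typing import List
import numpy as np
from scipy.optimize import linprog

def weakly_dominated_on(U: np.ndarray, row: int, cols: List[int], tol: float = 1e-9) -> bool:
    """Is pure strategy `row` weakly dominated on the opponent subset `cols` by some
    mixed strategy over the rows of U (rows = own strategies, columns = opponent's)?"""
    A = U[:, cols]                     # payoffs restricted to the comparison set Y
    target = U[row, cols]              # payoffs of the candidate strategy on Y
    m = U.shape[0]
    # maximize sum_j (A^T p)_j  subject to  A^T p >= target componentwise, p a distribution
    res = linprog(c=-A.sum(axis=1),
                  A_ub=-A.T, b_ub=-target,
                  A_eq=np.ones((1, m)), b_eq=np.array([1.0]),
                  bounds=[(0.0, 1.0)] * m)
    # dominated iff feasible with a strictly positive total payoff gain on Y
    return bool(res.success) and (-res.fun - target.sum() > tol)

# In this 2x2 example the second row is weakly dominated on both opponent columns.
U = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(weakly_dominated_on(U, row=1, cols=[0, 1]))  # True
```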
Proof of Proposition 44. Part 1: If $s_i$ is rationalizable, then there exists an epistemic model with $s_i \in S_i^{t_i}$ for some $(t_1, t_2) \in \mathrm{proj}_{T_1 \times T_2}\mathrm{CKC}^0$. It is sufficient to construct a belief system with $S_1 \times T_1 \times S_2 \times T_2 \subseteq \mathrm{CKC}$ such that, for each $s_i \in R_i$ of any player $i$, there exists $t_i \in T_i$ with $s_i \in S_i^{t_i}$. Construct a belief system with, for each $i$, a bijection $s_i(\cdot) : T_i \to R_i$ from the set of types to the set of rationalizable pure strategies. By Lemma 9(i) we have that, for each $t_i \in T_i$ of any player $i$, there exists $Y_j^{t_i} \subseteq R_j$ such that there does not exist $p_i \in \Delta(S_i)$ such that $p_i$ weakly dominates $s_i(t_i)$ on $Y_j^{t_i}$. Determine the set of opponent types that $t_i$ deems subjectively possible as follows: $T_j^{t_i} = \{t_j \in T_j \mid s_j(t_j) \in Y_j^{t_i}\}$. Let, for each $t_i \in T_i$ of any player $i$, $\succsim^{t_i}$ satisfy
1. $\upsilon_i^{t_i} \circ z = u_i$ (so that $S_1 \times T_1 \times S_2 \times T_2 \subseteq [u]$), and
2. $p \succ^{t_i} q$ iff $p_{E_j}$ weakly dominates $q_{E_j}$ for $E_j = E_j^{t_i} := \{(s_j, t_j) \mid s_j = s_j(t_j) \text{ and } t_j \in T_j^{t_i}\}$, which implies that $\beta^{t_i} = \kappa^{t_i} = \{t_i\} \times E_j^{t_i}$.
By the construction of $E_j^{t_i}$, this means that $S_i^{t_i} \ni s_i(t_i)$ since, for any acts $p$ and $q$ on $S_j \times T_j$ satisfying that there exist mixed strategies $p_i, q_i \in \Delta(S_i)$ such that, $\forall (s_j, t_j) \in S_j \times T_j$, $p(s_j, t_j) = z(p_i, s_j)$ and $q(s_j, t_j) = z(q_i, s_j)$, $p \succ^{t_i} q$ iff $p_{E_j}$ weakly dominates $q_{E_j}$ for $E_j = Y_j^{t_i} \times T_j$. This in turn implies, for each $t_i \in T_i$ of any player $i$,
3. $\beta^{t_i} \subseteq \mathrm{proj}_{T_i \times S_j \times T_j}[\mathit{rat}_j]$ (so that, in combination with 2., $S_1 \times T_1 \times S_2 \times T_2 \subseteq B^0_i[\mathit{rat}_j] \cap B^0_j[\mathit{rat}_i]$).
Furthermore, $S_1 \times T_1 \times S_2 \times T_2 \subseteq \mathrm{CKC}^0$ since $T_j^{t_i} \subseteq T_j$ for each $t_i \in T_i$ of any player $i$. Since, for each player $i$, $s_i(\cdot)$ is onto $R_i$, it follows that, for each $s_i \in R_i$ of any player $i$, there exists $t_i \in T_i$ with $s_i \in S_i^{t_i}$.
Part 2: If there exists an epistemic model with $s_i \in S_i^{t_i}$ for some $(t_1, t_2) \in \mathrm{proj}_{T_1 \times T_2}\mathrm{CKC}$, then $s_i$ is rationalizable. Part 2 of the proof of Proposition 25 applies.

References

Alchourrón, C., P. Gärdenfors, and D. Makinson (1985), On the logic of theory change: Partial meet contraction functions and their associated revision functions, Journal of Symbolic Logic 50, 510–530.
Anscombe, F.J. and R.J. Aumann (1963), A definition of subjective probability,
Annals of Mathematical Statistics 34, 199205.
Arló-Costa, H. and R. Parikh (2003), Conditional probability and defeasible inference, CMU and CUNY.
Armbruster, W. and W. Böge (1979), Bayesian game theory. In: Game Theory and Related Topics (F. Moeschlin, D. Pallaschke, Eds.), North-Holland, Amsterdam.
Asheim, G.B. (1994), Defining rationalizability in 2-player extensive games, Memorandum No. 25/1994, Department of Economics, University of Oslo.
Asheim, G.B. (2001), Proper rationalizability in lexicographic beliefs, International
Journal of Game Theory 30, 453478.
Asheim, G.B. (2002), On the epistemic foundation for backward induction, Mathematical Social Sciences 44, 121144.
Asheim, G.B. and M. Dufwenberg (2003a), Admissibility and common belief,
Games and Economic Behavior 42, 208234.
Asheim, G.B. and M. Dufwenberg (2003b), Deductive reasoning in extensive
games, Economic Journal 113, 305325.
Asheim, G.B. and A. Perea (2004), Sequential and quasi-perfect rationalizability
in extensive games, forthcoming in Games and Economic Behavior.
Asheim, G.B. and Y. Søvik (2003), The semantics of preference-based belief operators, Memorandum No. 05/2003, Department of Economics, University of Oslo.
Asheim, G.B. and Y. Søvik (2004), Preference-based belief operators, Department of Economics, University of Oslo.
Aumann, R.J. (1962), Utility theory without the completeness axiom, Econometrica
30, 445462.
Aumann, R.J. (1987a), Correlated equilibrium as an expression of Bayesian rationality, Econometrica 55, 118.
Aumann, R.J. (1987b), Game theory. In: The New Palgrave: A Dictionary of Economics (J. Eatwell, M. Milgate, P. Newman, Eds.), Macmillan Press, London and
Basingstoke, pp. 46082.


Aumann, R.J. (1995), Backward induction and common knowledge of rationality,


Games and Economic Behavior 8, 619.
Aumann, R.J. (1998), On the centipede game, Games and Economic Behavior 23,
97105.
Aumann, R.J. (1999), Interactive epistemology II: Probability. International Journal
of Game Theory 28, 301314.
Aumann, R.J. and A. Brandenburger (1995), Epistemic conditions for Nash equilibrium, Econometrica 63, 11611180.
Aumann, R.J. and J.H. Dreze (2004), Assessing strategic risk, Center for Rationality, Hebrew University of Jerusalem.
Balkenborg, D. and E. Winter (1997), A necessary and sufficient epistemic condition for playing backward induction, Journal of Mathematical Economics 27,
325345.
Basu, K. (1990), On the non-existence of a rationality definition for extensive games,
International Journal of Game Theory 19, 3344.
Basu, K. and J.W. Weibull (1991), Strategy subsets closed under rational behavior,
Economics Letters 36, 141146.
Battigalli, P. (1991), Algorithmic solutions for extensive games. In: Decision Processes in Economics (G. Ricci, Ed.), Springer Verlag, Berlin.
Battigalli, P. (1996a), Strategic rationality orderings and the best rationalization
principle, Games and Economic Behavior 13, 178200.
Battigalli, P. (1996b), Strategic independence and perfect Bayesian equilibria, Journal of Economic Theory 70, 201–234.
Battigalli, P. (1997), On rationalizability in extensive games, Journal Economic
Theory 74, 4061.
Battigalli, P. and G. Bonanno (1999), Recent results on belief, knowledge and
the epistemic foundations of game theory, Research in Economics 53, 149225.
Battigalli, P. and M. Siniscalchi (2002), Strong belief and forward induction
reasoning, Journal of Economic Theory 106, 356391.
Ben-Porath, E. (1997), Rationality, Nash equilibrium, and backwards induction in
perfect information games. Review of Economic Studies 64, 2346.
Ben-Porath, E. and E. Dekel (1992), Coordination and the potential for selfsacrifice, Journal of Economic Theory 57, 3651.
Bernheim, D. (1984), Rationalizable strategic behavior, Econometrica 52, 1007
1028.
Bewley, T.F. (1986), Knightian decision theory: Part 1, Cowles Foundation DP 807.
Bicchieri, C. (1989), Self-refuting theories of strategic interaction: A paradox of
common knowledge, Erkenntnis 30, 6985.
Bicchieri, C. and O. Schulte (1997), Common reasoning about admissibility,
Erkenntnis 45, 299325.
Binmore, K. (1987), Modelling rational players I, Economics and Philosophy 3, 179–214.
Binmore, K. (1995), Backward induction and rationality, DP 9510, University College London.
Blume, L., A. Brandenburger, and E. Dekel (1991a), Lexicographic probabilities
and choice under uncertainty, Econometrica 59, 6179.


Blume, L., A. Brandenburger, and E. Dekel (1991b), Lexicographic probabilities


and equilibrium refinements, Econometrica 59, 8198.
Board, O. (2003), The equivalence of Bayes and causal rationality in games, mimeo.
Böge, W. and T. Eisele (1979), On the solutions of Bayesian games, International Journal of Game Theory 8, 193–215.
Bonanno, G. (1991), The logic of rational play in games of perfect information,
Economics and Philosophy 7, 3765.
Bonanno, G. (2001), Branching time logic, perfect information games and backward
induction, Games and Economic Behavior 36, 5773.
Börgers, T. (1994), Weak dominance and approximate common knowledge, Journal of Economic Theory 64, 265–276.
Börgers, T. and L. Samuelson (1992), Cautious utility maximization and iterated weak dominance, International Journal of Game Theory 21, 13–25.
Boutilier, G. (1994), Unifying default reasoning and belief revision in a model
framework, Artificial Intelligence 68, 3385.
Brandenburger, A. (1992), Lexicographic probabilities and iterated admissibility.
In: Economic Analysis of Markets and Games (P. Dasgupta, D. Gale, O. Hart,
E. Maskin, Eds.), MIT Press, Cambridge, MA, pp. 282290.
Brandenburger, A. (1997), A logic of decision. Harvard Business School Working
Paper 98-039.
Brandenburger, A. (1998), On the existence of a complete belief model. Harvard
Business School Working Paper 99-056.
Brandenburger, A. and E. Dekel (1989), The role of common knowledge assumptions in game theory. In: The Economics of Missing Markets, Information,
and Games (F. Hahn, Ed.), Basil Blackwell, Oxford, pp. 105150.
Brandenburger, A. and E. Dekel (1993), Hierarchies of beliefs and common
knowledge, Journal of Economic Theory 59, 189198.
Brandenburger, A. and A. Friedenberg (2003), Common assumption of rationality in games, NYU and Washington University.
Brandenburger, A. and H.J. Keisler (1999), An impossibility theorem on beliefs
in games, Harvard Business School Working Paper 00-010.
Brandenburger, A. and H.J. Keisler (2002), Epistemic conditions for iterated
admissibility, Harvard Business School.
Clausing, T. and A. Vilks (2000), Backward induction in general belief structures
with and without strategies, Handelhochschule Leipzig.
Dekel, E. and D. Fudenberg (1990), Rational behavior with payoff uncertainty,
Journal of Economic Theory 52, 24367.
Dekel, E., D. Fudenberg, and D.K. Levine (1999), Payoff information and selfconfirming equilibrium. Journal of Economic Theory 89, 165185.
Dekel, E., D. Fudenberg, and D.K. Levine (2002), Subjective Uncertainty Over
Behavior Strategies: A Correction, Journal of Economic Theory 104, 473478.
Dubey, P. and M. Kaneko (1984), Informational patterns and Nash equilibria in
extensive games: I, Mathematical Social Sciences 8, 111139.
Dufwenberg, M. (1994), Tie-break rationality and tie-break rationalizability, Working Paper 1994:29, Department of Economics, Uppsala University.
Dufwenberg, M. and J. Lindén (1996), Inconsistencies in extensive games: Common knowledge is not the issue, Erkenntnis 45, 103–114.


Epstein, L.G. and T. Wang (1996), Beliefs about beliefs without probabilities,
Econometrica 64, 13431373.
Ewerhart, C. (1998), Rationality and the definition of consistent pairs, International
Journal of Game Theory 27, 4959.
Feinberg, Y. (2004a), Subjective reasoningdynamic games, forthcoming in Games
and Economic Behavior.
Feinberg, Y. (2004b), Subjective reasoningsolutions, forthcoming in Games and
Economic Behavior.
Friedman, N. and J.Y. Halpern (1995), Plausibility measures: A users guide.
Proceedings of the Eleventh Conference on Uncertainty in AI, pp. 175184.
Govindan, S. and T. Klumpp (2002), Perfect Equilibrium and Lexicographic Beliefs, International Journal of Game Theory 31, 229243.
Greenberg, J. (1996), Towering over Babel: Worlds apart but acting together,
McGill University.
Greenberg, J., S. Gupta, and X. Luo (2003), Towering over Babel: Worlds apart
but acting together, McGill University.
Grove, A. (1988), Two models for theory change, Journal of Philosophical Logic 17,
157170.
Gul, F. (1997), Rationality and coherent theories of strategic behavior, Journal of
Economic Theory 70, 131.
Halpern, J.Y. (2001), Substantive rationality and backward induction, Games and
Economic Behavior 37, 425-435.
Halpern, J.Y. (2003), Lexicographic probability, conditional probability, and nonstandard probability, Cornell University.
Hammond, P.J. (1993), Aspects of rationalizable behavior. In: Frontiers of Game
Theory (K. Binmore, A. Kirman, P. Tani, Eds.), MIT Press, Cambridge, MA, pp.
277305.
Hammond, P.J. (1994), Elementary non-archimedean representations of probability
for decision theory and games. In: Patrick Suppes: Scientific Philosopher, Vol. 1,
Probability and Probabilistic Causality (P. Humphreys, Ed.), Kluwer Academic
Publishers, Dordrecht, pp. 2559.
Hammond, P.J. (2001), Utility as a tool in non-cooperative game theory. In: Handbook of Utility Theory, Vol. 2 (S. Barberà, P.J. Hammond, C. Seidl, Eds.), Kluwer Academic Publishers, Dordrecht.
Harsanyi, J. (1973), Games with randomly disturbed payoffs, International Journal
of Game Theory 2, 123.
Herings, P.J.-J. and V.J. Vannetelbosch (1999), Refinements of rationalizability for normal-form games. International Journal of Game Theory 28, 53–68.
Holmström, B. (1982), Moral hazard in teams, Bell Journal of Economics 13, 324–341.
Hurkens, S. (1996), Multi-sided pre-play communication by burning money, Journal
of Economic Theory 69, 186197.
Kaneko, M. (1999), On paradoxes in the centipede and chain-store games I:
Nonepistemic considerations, IPPS-DP 810, University of Tsukuba.
Kaneko, M. and J.J. Kline (2004), Modeling a players perspective II: Inductive
derivation of an individual view, University of Tsukuba.


Kohlberg, E. and P.J. Reny (1997), Independence on relative probability spaces


and consistent assessments in game trees, Journal of Economic Theory 75, 280
313.
Kreps, D.M. and R. Wilson (1982), Sequential equilibria, Econometrica 50, 863
894.
Lamarre, P. and Y. Shoham (1994), Knowledge, certainty, belief, and conditionalisation. In: Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning (KR94) (J. Doyle, E. Sandewall, P. Torasso,
Eds.), Morgan Kaufmann, San Francisco, pp. 415424.
Luce, D. and H. Raiffa (1957), Games and Decisions, Wiley, New York.
Machina, M. (2004), Almost-objective uncertainty, Economic Theory 24, 154.
McLennan, A. (1989a), The space of conditional systems is a ball, International
Journal of Game Theory 18, 125139.
McLennan, A. (1989b), Consistent conditional systems in noncooperative game
theory, International Journal of Game Theory 18, 141174.
Mailath, G., L. Samuelson, and J. Swinkels (1993), Extensive form reasoning
in normal form games, Econometrica 61, 273302.
Mailath, G., L. Samuelson, and J. Swinkels (1997), How proper is sequential
equilibrium? Games and Economic Behavior 18, 193218.
Mariotti, M. (1997), Decisions in games: why there should be a special exemption
from Bayesian rationality, Journal of Economic Methodology 4, 4360.
Mertens, J.-M. and S. Zamir (1985), Formulation of Bayesian analysis for games
of incomplete information, International Journal of Game Theory 14, 129.
Morris, S. (1997), Alternative notions of belief. In: Epistemic Logic and the Theory
of Games and Decisions (Bacharach, Gerard-Varet, Mongin, Shin, Eds.), Kluwer
Academic Publishers, Dordrecht, pp. 217233.
Myerson, R. (1978), Refinement of the Nash equilibrium concept, International
Journal of Game Theory 7, 7380.
Myerson, R. (1986), Multistage games with communication, Econometrica 54, 323
358.
Myerson, R. (1991), Game Theory, Harvard University Press, Cambridge, MA.
Osborne, M.J. and A. Rubinstein (1994), A Course in Game Theory, MIT Press,
Cambridge, MA.
Pearce, D.G. (1984), Rationalizable strategic behavior and the problem of perfection, Econometrica 52, 10291050.
Perea, A. (2002), Forward induction and the minimum revision principle, Meteor
research memorandum 02/010, University of Maastricht.
Perea, A. (2003), Rationalizability and minimal complexity in dynamic games, Meteor research memorandum 03/030, University of Maastricht.
Perea, A., M. Jansen, and H. Peters (1997), Characterization of consistent assessments in extensive form games, Games and Economic Behavior 21, 238252.
Pettit, P. and R. Sugden (1989), The backward induction paradox, Journal of
Philosophy 4, 169-182.
Rabinowicz, W. (1997), Grappling with the centipede: Defence of backward induction for BI-terminating games, Economics and Philosophy 14, 95126.
Rajan, U. (1998), Trembles in the Bayesian foundations of solution concepts of
games, Journal of Economic Theory 82, 248266.


Reny, P.J. (1992), Backward induction, normal form perfection and explicable equilibria, Econometrica 60, 627649.
Reny, P.J. (1993), Common belief and the theory of games with perfect information,
Journal of Economic Theory 59, 257274.
Rosenthal, R. (1981), Games of perfect information, predatory pricing and the
chain-store paradox, Journal of Economic Theory 25, 92100.
Rubinstein, A. (1991), Comments on the interpretation of game theory, Econometrica 59, 909924.
Samet, D. (1996), Hypothetical knowledge and games with perfect information,
Games and Economic Behavior 17, 230251.
Samuelson, L. (1992), Dominated strategies and common knowledge, Games and
Economic Behavior 4, 284313.
Savage, L.J. (1954), The Foundations of Statistics, Wiley, New York.
Shoham, Y. (1988), Reasoning about Change, MIT Press, Cambridge.
Schotter, A. (2000), Microeconomics: A Modern Approach, Addison Wesley Longman, Boston, 3rd edition.
Schuhmacher, F. (1999), Proper rationalizability and backward induction, International Journal of Game Theory 28, 599615.
Selten, R. (1975), Reexamination of the perfectness concept for equilibrium points
in extensive games, International Journal of Game Theory 4, 2555.
Sonsino D., I. Erev, and S. Gilat (2000), On rationality, learning and zero-sum
betting An experimental study of the no-betting conjecture, Technion.
Søvik, Y. (2001), Impossible bets: An experimental study, Department of Economics, University of Oslo.
Spohn, W. (1988), A general non-probabilistic theory of inductive inference. In: Causation in Decisions, Belief Change and Statistics (Harper, Skyrms, Eds.), Reidel,
Dordrecht, pp. 105134.
Stahl, D. (1995), Lexicographic rationality, common knowledge, and iterated admissibility, Economics Letters 47, 155159.
Stalnaker, R. (1996), Knowledge, belief and counterfactual reasoning in games,
Economics and Philosophy 12, 133163.
Stalnaker, R. (1998), Belief revision in games: forward and backward induction,
Mathematical Social Sciences 36, 5768.
Tan, T. and S.R.C. Werlang (1988), The Bayesian foundations of solution concepts
of games, Journal of Economic Theory 45, 370391.
van Damme, E. (1984), A relation between perfect equilibria in extensive form games
and proper equilibria in normal form games, International Journal of Game Theory
13, 113.
van Damme, E. (1989), Stable equilibria and forward induction, Journal of Economic
Theory 48, 476496.
van Fraassen, B.C. (1976), Representation of conditional probabilities, Journal of
Philosophical Logic 5, 417430.
van Fraassen, B.C. (1995), Fine-grained opinion, probability, and the logic of full
belief, Journal of Philosophical Logic 24, 349377.
von Neumann, J. and O. Morgenstern (1947), Theory of Games and Economic
Behavior, Princeton University Press, Princeton, 2nd edition.

Index

Accessibility relation, 3944, 46


Act
Anscombe-Aumann act, 8, 22, 26, 3233,
3940, 49, 54, 56, 7071, 7475,
7778, 8384, 92, 102, 122, 126, 145,
181, 185186, 194
Admissibility, 39, 4142, 46, 4950, 70, 72,
8586, 133135, 143144, 147,
153154, 175176, 179, 193
Backward induction, 2, 67, 1011, 1417,
2021, 2324, 38, 69, 7880, 8283,
8788, 9197, 99100, 112113, 115,
118, 121, 123, 128, 132, 152, 155159,
162166, 170, 174
Belief operators
absolutely robust belief, 38, 40, 45, 4950,
147
assumption, 38, 40, 45, 4850, 147, 152
certain belief, 19, 39, 44, 4648, 5761,
6366, 7273, 76, 78, 8182, 8788,
9096, 99, 103, 105106, 108109,
113, 115, 117118, 122123, 125,
127128, 136, 138, 140, 147148,
150153, 162163, 167, 173, 183188
conditional belief, 3940, 4445, 47, 50, 86
full belief, 38, 40, 4546, 50
robust belief, 40, 4446, 4851, 135139,
143, 145149, 152154, 158, 163, 173
strong belief, 38, 40, 45, 48, 5051, 147,
152
Caution, 10, 14, 23, 6263, 75, 103, 115116,
123, 135137, 139, 143, 145146,
148149, 152, 154, 158, 160, 163164,
173, 193
Consistency of preferences
(ordinary) consistency, 5, 12, 53, 5859,
73, 149

admissible consistency, 53, 63, 7576,


8788, 97, 148
admissible subgame consistency, 9096
full admissible consistency, 134140,
144145, 147148, 151152, 162163,
167, 173
proper consistency, 122123, 125, 128
quasi-perfect consistency, 117
sequential consistency, 104
weak sequential consistency, 108
Consistent preferences approach, 17, 1112,
1517, 21, 53, 81, 144, 154, 174
Epistemic independence, 90, 94, 96
Epistemic model, 35, 89, 15, 4142, 48,
50, 5355, 5862, 6467, 69, 7374,
7677, 83, 9192, 94, 100, 102,
104106, 109111, 113115, 117119,
121, 125128, 130, 138, 144145,
147151, 153, 174, 183190, 193194
Epistemic priority, 3839, 4246
Equilibrium
Nash equilibrium, 26, 1113, 18, 53,
5860, 6465, 115, 124, 130, 141,
171172
perfect equilibrium, 18, 53, 6265, 115,
124, 130
proper equilibrium, 16, 1819, 121122,
124125, 127, 130, 148, 174, 186
quasi-perfect equilibrium, 1819, 24,
115118, 127, 184186
sequential equilibrium, 1819, 24, 100,
104107, 115, 118, 182183
subgame-perfect equilibrium, 87, 91, 94,
113114
weak sequential equilibrium, 18
Forward induction, 2, 67, 1011, 17, 21, 24,
38, 69, 97, 112, 133, 135, 137, 146,
148150, 152154, 159, 162, 168172,

174
Game
extensive game, 6, 1415, 17, 23, 5051,
56, 8085, 87, 8991, 94, 99, 101103,
105106, 108109, 113, 117119, 152,
155, 158160, 162, 173
of perfect information, 20, 7982, 8485,
8792, 94, 97, 100101, 113, 118, 121,
123, 128, 132, 162
strategic game, 24, 78, 5354, 5657,
5961, 6366, 69, 71, 73, 76, 8385,
88, 9091, 94, 101103, 121, 125130,
139, 144, 148149, 159, 161, 193
pure strategy reduced strategic form
(PRSF), 133, 156, 159164, 168170,
173
Inducement (of rationality)
of a rational mixed strategy, 5, 58
of a sequentially rational behavior
strategy, 104
of a weak sequentially rational mixed
strategy, 107
Iterated elimination
Dekel-Fudenberg procedure, 1314, 2324,
65, 69, 78, 81, 83, 88, 112, 124, 138,
141, 143, 148, 162, 166168, 171
of choice sets under full admissible
consistency (IECFA), 138142,
150151, 163164, 169170, 173
of strongly dominated strategies (IESDS),
13, 2324, 60, 69, 83, 133, 137138,
141143, 149
of weakly dominated strategies (IEWDS),
14, 1617, 129130, 133135, 141142,
150, 152153, 159, 169
No extraneous restrictions on beliefs, 135,
137, 139, 143, 146149, 153154, 158,
163164, 173
Probability system
conditional probability system (CPS),
2425, 3436, 50, 109

lexicographic conditional probability
system (LCPS), 31, 3536, 4950
lexicographic probability system (LPS),
2425, 3033, 36, 43, 49, 56, 60,
6263, 65, 67, 76, 88, 9394, 96, 102,
104, 106107, 110, 116, 122, 131, 143,
181185, 187190
system of conditional lexicographic
probabilities (SCLP), 25, 32, 3536,
5657, 59, 6164, 66, 100, 102104,
109110, 114116, 121122, 183184,
186, 189
Rational choice approach, 13, 6, 1112, 143
Rationalizability
(ordinary) rationalizability, 8, 13, 18, 53,
60, 69, 73, 124, 137138, 142143,
146, 149, 153
extensive form rationalizability, 99,
112113, 135, 152153, 156, 159
full permissibility, 17, 112, 134146,
148152, 154155, 157173
permissibility, 1315, 1718, 53, 62, 6567,
69, 7577, 81, 87, 112, 115, 119, 124,
137138, 141143, 146, 148149,
153154, 193194
proper rationalizability, 1, 16, 1819,
121125, 127131, 146148, 174, 187,
190, 192193
quasi-perfect rationalizability, 1, 1516,
18, 24, 101, 115116, 118119, 128
sequential rationalizability, 1, 1516, 18,
99101, 104, 106107, 111115,
118119, 128, 174
weak sequential rationalizability, 18, 20,
107112, 119
Strategic manipulation, 171
Strategically independent set, 8485, 110,
119
Subjective possibility, 3839, 43

About the Author

Geir B. Asheim is Professor of Economics at the University of Oslo, Norway. In addition to investigating epistemic conditions for game-theoretic solution concepts, he does research on questions relating to intergenerational justice.
