
The Economic Crisis and New-Keynesian DSGE models:

an Evaluation
Wim Meeusen
University of Antwerp (Belgium)
June 2009

1. Introduction

Economics as a scientific discipline was severely battered in the wake of the Credit
Crunch and the ensuing worldwide economic crisis. Especially the strand of what came to
be known as ‘mainstream’ macroeconomics in the shape of the ‘New-Keynesian Dy-
namic Stochastic General Equilibrium (DSGE) model’ took a severe blow. Very few had
seen the crisis coming, and the few distinguished economists that indeed did, like Nouriel
Roubini (‘Doctor Doom’) and Robert Shiller (1), were not taken seriously.
The signs of a coming catastrophe were nevertheless ominous. To name only some of
the most blatant signs in the US:
• The appearance of an asset price boom: the S&P/Case-Shiller home price in-
dex was equal to 100 in January 2000; it had climbed to a peak of 226.29 in
June 2006;
• The increasing importance on the mortgage market of sub-prime and Alt-A
loans: in 2003 sub-prime and Alt-A loans amounted to 48% of the total, which
is of course already alarming; in 2006 this had increased to 74% (2);
• The explosion of the (OTC) market of credit default swaps (CDS), ‘weapons of financial mass destruction’ in the prophetic words of Warren Buffett (3): by the end of 2007 this market had a value of 45 trillion USD, more than 20 trillion of which was purely speculative (4);
• In the summer of 2004, the spread between short- (3 months) and long-term (20 years) interest rates was at its highest level in half a century, leading to the rapid expansion of bank credit above its long-run trend (in March 1990 the proportion of bank credits to M1 was 3.30; in March 2008 this had risen to 6.90), and to the development of a fragile debt structure of the banks, excessively dependent on liquidity;
• The growth of the US financial sector beyond what is sustainable: between
1973 and 1985 the US financial sector represented approximately 16% of do-
mestic corporate profits; in the 90s this fluctuated in a range from 21 to 30%;
in the beginning of the present century it soared to 41% (5).
• Over-consumption of US households that, from 1999 onwards, increasingly spent more than they earned, net financial investment as a percentage of dis-
posable personal income falling to a record low of -8.25 in 2005, and continu-
ing to remain negative in the years after;
• The deficit on the US current account increasing from a near equilibrium posi-
tion in 1991 to a record low of 6.15% of GDP in 2006 (about 1.8% of World
GDP), and continuing to remain in the red in the years after that; the concomi-
tant accumulation of huge dollar reserves by the Chinese government rein-
vested in US government bonds making it possible that the proportionate
value of both currencies remained stable.

(1) E.g., Roubini, 2004 and Shiller, 1989.
(2) R. Dodd, 2007.
(3) W. Buffett, Berkshire Hathaway Annual Report for 2002.
(4) R.R. Zabel, 2008.
(5) S. Johnson, 2009.

All this notwithstanding, for the large majority of so-called ‘mainstream’ macroeconomists it was ‘business as usual’. Michael Woodford, one of the most prominent present-day DSGE modellers, published an article in the very first issue of the new AEA journal American Economic Journal: Macroeconomics (2009) proclaiming the ‘new convergence’ of ideas in modern macroeconomics, stating that “while there is not
yet agreement on the degree to which the greater stability of the U.S. and other econo-
mies in recent decades (sic) can be attributed to improvements in the conduct of monetary
policy, the hypothesis that monetary policy has become conducive to stability, for reasons
argued by Taylor among others, is certainly consistent with the general view of business
fluctuations presented by current-generation empirical DSGE models” (Woodford, 2009,
p. 273) (6).
Patricia Cohen, in her New York Times afterthoughts on the most recent annual meeting of the AEA (Cohen’s title is ‘Ivory Tower Unswayed by Crashing Economy’), cites Robert
Shiller who blames ‘groupthink’, i.e. “the tendency to agree with the consensus. People
don’t deviate from the conventional wisdom for fear they won’t be taken seriously. (…)
Wander too far and you find yourself on the fringe. The pattern is self-replicating. Gradu-
ate students who stray too far from the dominant theory and methods seriously reduce
their chances of getting an academic job.” (7)

It is thus worthwhile to analyse what has happened on the academic scene in somewhat
greater detail.
The practice of ‘slicing and dicing’ debt and of ‘securitisation’, itself a result of the hubris of bankers and institutional investors, but also of the policy-makers’ urge to deregulate, was based on the so-called ‘Efficient Market Hypothesis’, which assumes – erroneously – allocative and expectational rationality, with asset prices reflecting market fundamentals (8).
In the past, a minority of distinguished scholars have heavily criticised these assumptions. There is the vast body of empirical work by R.J. Shiller (e.g. his 1989 book), but also the earlier contribution of Tobin (1984), who concluded that financial markets show neither Arrow-Debreu full insurance efficiency (a tall order anyway, because this would require that all assets and their prices are defined, not only by their obvious characteristics, but also by all the contingencies at which they can possibly be exchanged), nor information arbitrage efficiency (the impossibility to earn a profit on the basis of publicly available information), nor fundamental valuation efficiency, nor functional (i.e. macroeconomic) efficiency. Financial markets, in his view, are often not even technically efficient (i.e. such that it is possible to buy or sell large quantities with very low transaction costs) (see also Buiter, 2009c).
(6) The atmosphere of broad consensus conveyed by Woodford’s text is expressed by the frequent use of expressions like “it is now widely accepted…” (8 times in the space of the 6 pages on which he documents the ‘New Synthesis’).
(7) P. Cohen, NYT, 5/3/2009.
(8) Alan Greenspan, in a congressional testimony in October 2008, described himself as being “in a state of shocked disbelief [over the failure of the] self-interest of lending institutions to protect shareholders’ equity” (cited by M. Wolf, 2009).
The present events should now convince every reasonable economist that the EMH is a
fallacy.
The paradigmatic issue, laid bare by the present economic crisis, is however of a
wider nature.
At stake are a number of additional traditional assumptions made in the ‘mainstream’ macroeconomic literature:
- the existence of ‘representative’ agents (households and firms)
- that maximise an intertemporal utility function (households) or the present value of present and future profits (firms)
- under perfect foresight or rational expectations.
These assumptions are made by new-classical as well as new-Keynesian macroecono-
mists, and lead, if coupled with an ‘appropriate’ so-called ‘transversality’ condition, to
particularly optimistic conclusions on the inherent (saddle path) stability of the general
equilibrium involved.
New-Keynesian economists nevertheless try to keep in touch with reality, relatively speaking, by using this framework to explore the implications of imperfect or monopolistic competition in conditions of price and/or wage stickiness. New-classical economists, on the contrary, add insult to injury by moreover assuming
- perfect competition and
- flexible prices and wages.

We need only go into a discussion of the hard-line new-classical theories and policy prescriptions in so far as elements of their credo survive – which they obviously do – in present-day theorising. John Kay, a senior editorialist of the Financial Times (not exactly a left-wing publication), and one-time staunch defender of neo-liberal recipes, recently wrote: “[…] these people discredit themselves by opening their mouths” (Kay, 2009). Hard-core new-classical theories have indeed been in full retreat for some time, after having dominated academia and policy circles in the days of Margaret Thatcher and Ronald Reagan.

The ‘grand old man of economics’, Robert Solow – still active and still very critical of what is going on in the profession – already called it, in his AEA Presidential Address of 1980, “foolishly restrictive” for the new-classical economists to rule out by assumption the existence of wage and price rigidities and the possibility that markets do not clear. “I remember reading once that it is still not understood how the giraffe manages to pump an adequate blood supply all the way up to its head; but it is hard to imagine that anyone would therefore conclude that giraffes do not have long necks” (9).

(9) A. Klamer, 1984.
The ubiquitous presence of new-Keynesian DSGE models in present day ‘main-
stream’ macroeconomic research, since the publication of Obstfeld and Rogoff’s seminal
‘redux’ paper in 1995, is however an altogether different matter.
In section 2 we discuss a baseline new-Keynesian DSGE model and its variants and
extensions. In section 3 we look at the solution of these models. The next section deals
with calibration and estimation issues. In section 5 we draw conclusions for policy and
economic theory.

2. The specification of new-Keynesian DSGE models

We concentrate on a particular variant of the baseline new-Keynesian DSGE model, and consider extensions and variants along the way.
A representative household j in the continuum j ∈ [0,1] maximises the following
intertemporal utility function:


E_0 \sum_{t=0}^{\infty} \beta^t u_t^j ,   [1]

where E_0 is the expectations operator, conditional on the information available to household j in period 0, \beta < 1 is a uniform discount factor, and

u_t^j = \frac{1}{1-\sigma_c} (c_t^j)^{1-\sigma_c} - \frac{\lambda_l}{1+\sigma_l} (l_t^j)^{1+\sigma_l} + \frac{\lambda_m}{1-\sigma_m} \left( \frac{m_t^j}{P_t} \right)^{1-\sigma_m}   [2]

is the instantaneous utility of the j-th household, l_t^j being labour supply by that household, m_t^j / P_t being the real value of its money holdings, and c_t^j being aggregate consumption by the household, usually given by a Dixit-Stiglitz aggregator function (10):

c_t^j = \left[ \int_0^1 (c_{it}^j)^{\frac{\varepsilon - 1}{\varepsilon}} \, di \right]^{\frac{\varepsilon}{\varepsilon - 1}} ,   [3]

where c_{it}^j is consumption of good i by the j-th household (i ∈ [0,1]) and \varepsilon > 1 is the contemporaneous elasticity of substitution between the different goods (11).
\sigma_c is the inverse of the intertemporal elasticity of substitution w.r.t. aggregate consumption, \sigma_l is the inverse of the elasticity of labour supply w.r.t. the real wage, and \sigma_m is the inverse of the intertemporal elasticity of substitution w.r.t. the holding of money; \lambda_l and \lambda_m are (positive) relative weights of work effort and money holdings in the utility function.

(10) Instead of a continuum of different consumption goods, each produced by a single firm on a monopolistically competitive market, a number of authors have considered an economy with a single final good traded under perfect competition, but produced with a technology using a continuum of intermediate inputs, each of them produced by a single firm on a monopolistically competitive market (see e.g. Smets and Wouters (2003), Christiano, Eichenbaum and Evans (2005)).
(11) Lower-case alphabetic symbols denote variables defined at the level of households and firms. Capitals denote variables defined at the macroeconomic level.
It holds, for the labour supply of the j-th household, that

l_t^j = \int_0^1 l_{it}^j \, di ,   [4]

where l_{it}^j is the supply of labour by the j-th household to the monopolistically competitive firm producing the i-th good.
[1] is maximised over the unknown functions c_{it}^j, m_t^j / P_t and l_t^j, subject to the following period budget constraint:

\int_0^1 p_{it} c_{it}^j \, di + m_t^j = w_t^j l_t^j + m_{t-1}^j + \pi_t^j .   [5]

\pi_t^j are the profits accruing to the j-th household (it is assumed that the households are the owners of the firms and that they share the revenues from owning them in equal proportion).
The first-order conditions for c_{it}^j yield the usual demand equation:

c_{it}^j = \left( \frac{p_{it}}{P_t} \right)^{-\varepsilon} c_t^j ,   [6]

p_{it} being the price of the i-th good, and the price index P_t being given by

P_t = \left[ \int_0^1 p_{it}^{1-\varepsilon} \, di \right]^{\frac{1}{1-\varepsilon}} .   [7]

It holds that

\int_0^1 p_{it} c_{it}^j \, di = P_t c_t^j .   [8]
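
For completeness – the original takes this step as known – [6] follows from minimising the expenditure \int_0^1 p_{it} c_{it}^j \, di needed to reach a given level of the consumption index [3]. The first-order condition of that minimisation, with \Lambda_t the multiplier on the constraint, is

p_{it} = \Lambda_t \, (c_{it}^j)^{-1/\varepsilon} (c_t^j)^{1/\varepsilon} \quad \Rightarrow \quad \frac{c_{it}^j}{c_{kt}^j} = \left( \frac{p_{it}}{p_{kt}} \right)^{-\varepsilon} ,

where taking the ratio for two goods i and k eliminates \Lambda_t; substituting back into [3] and using the price index [7] then yields [6].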

The intertemporal path of aggregate real consumption and the optimal value for the real
wage rate that is set by the household are yielded by the familiar Euler conditions:

 ( −σ c ) P 
= β E t ctj+1
(−σ c )
ctj 
t
 Pt +1 
(σ l )
wtj lj
= λl t [9]
Pt ( −σ c )
ctj
mj 
−σ m
j ( −σ c )
= λm  t 
 Pt 
ct .
 

The first-order conditions in expressions [9], together with the budget constraint in [5]
and transversality conditions that are assumed to be satisfied, ruling out explosive devel-
opments, determine the optimal time-path of consumption, work time and money hold-
ings of the representative household.
In this baseline version of the new-Keynesian DSGE model we simplify the supply
side of the economy by assuming that there is a continuum of firms, each producing a
differentiated good i with the following linear technology:

\int_0^1 c_{it}^j \, dj \equiv c_{it} = \left( \frac{p_{it}}{P_t} \right)^{-\varepsilon} \int_0^1 c_t^j \, dj = \left( \frac{p_{it}}{P_t} \right)^{-\varepsilon} C_t = y_{it} = Z_t l_{it} ,   [10]

where l_{it} is a composite CES labour quantity specified by

l_{it} = \left[ \int_0^1 (l_{it}^j)^{\frac{\phi - 1}{\phi}} \, dj \right]^{\frac{\phi}{\phi - 1}} ,   [11]

and Z_t = Z_{t-1} \exp(\eta_t) is a stochastic aggregate technology index, with \eta_t being an independently distributed Gaussian process. C_t is of course the national consumption level. \phi > 1 is the elasticity of substitution between different sorts of labour. According to the usual Dixit-Stiglitz logic, the wage index can be written as follows:

W_t = \left[ \int_0^1 (w_t^j)^{1-\phi} \, dj \right]^{\frac{1}{1-\phi}} .   [12]

It again holds, as in [8], that

\int_0^1 w_t^j l_{it}^j \, dj = W_t l_{it} .   [13]

Since in this version of the model there is no physical capital (labour is the only primary
factor of production), there is no accumulation equation, and profit maximisation by
firms reduces to the static case. The pricing decision takes in this setting of monopolistic
competition the familiar simple mark-up form:

p_{it} = \frac{\varepsilon}{\varepsilon - 1} \, \frac{W_t}{Z_t} .   [14]

It also holds, for labour demand by the i-th firm for the j-th variety of labour, that

l_{it}^j = \left( \frac{w_t^j}{W_t} \right)^{-\phi} y_{it} ,   [15]

which coincides with labour supply in expressions [2] and [4]. The model implies full employment, since each household sells labour according to its own preferences.
[14], unsurprisingly in view of the uniform values of all the parameters across households and firms, implies symmetry. This allows us to write the period profits of the i-th firm as follows:

\pi_{it} = \pi_t = \frac{p_{it} y_{it}}{\varepsilon} = \frac{p_t y_t}{\varepsilon} = \pi_t^j .   [16]

\pi_t, p_t and y_t are the representative profits of individual firms, and the representative price and output of individual goods, respectively.
The symmetry that is present in the model allows us to model the money supply in a very simple way:

\int_0^1 m_t^j \, dj = m_t^j = m_t = M_t = M_{t-1} \exp(\xi_t + \gamma \eta_t) ,   [17]
[17]

with ξ t being another Gaussian white noise process and γ being a reaction parameter of
the monetary authority with respect to technological shocks.
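
To make the two driving processes concrete, the following minimal simulation sketch generates them exactly as specified above; the shock standard deviations and the value of \gamma are illustrative assumptions, not values taken from any of the papers cited:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
sigma_eta, sigma_xi, gamma = 0.007, 0.005, 0.5  # illustrative values (assumption)

eta = rng.normal(0.0, sigma_eta, T)   # technology shocks eta_t
xi = rng.normal(0.0, sigma_xi, T)     # monetary policy shocks xi_t

log_Z = np.cumsum(eta)                # log Z_t = log Z_{t-1} + eta_t
log_M = np.cumsum(xi + gamma * eta)   # log M_t = log M_{t-1} + xi_t + gamma*eta_t

# Both indices are random walks in logs: every shock has a permanent effect on
# the level, and the money supply partially accommodates technology shocks
# through the reaction parameter gamma.
Z, M = np.exp(log_Z), np.exp(log_M)
print(Z[-1], M[-1])
```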

In its above form the model exhibits flexible prices and wages and is therefore in essence
not much more than a new-classical RBC model in the tradition initiated by Kydland and
Prescott (1982) and Long and Plosser (1983), plus the assumption of monopolistically
competitive markets. The model becomes ‘new-Keynesian’ with the additional assump-
tion of price and/or wage rigidity.
A popular approach among new-Keynesian DSGE modellers is to use the Calvo specification (Calvo, 1983). Wage- and price-setters receive, as it were, ‘green’ or ‘red’ light signals enabling or preventing them from adjusting their prices. These signals arrive with a given fixed probability. Let \omega_w and \omega_p be the respective probabilities that households and firms are not able to ‘re-optimise’ their wage or price in a given period, so that each period fractions 1 - \omega_w and 1 - \omega_p of them receive a ‘green light’ (this is the convention used in [18] below).
Optimal wage-setting by the households and optimal price-setting by the firms that
are ‘permitted’ to re-optimise is now more complicated than suggested by the Euler con-
ditions in [9] and the simple mark-up pricing equation in [14], because both types of eco-
nomic agents have to consider the possibility that they may not be able to adjust
prices/wages in the future. The expected future costs that are entailed have to be ac-
counted for in the optimal decision taken today (see e.g. Erceg, Henderson and Levin
(2000) for details). This introduces additional dynamics in the model and accentuates the
role of future expectations.
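A back-of-the-envelope implication of the Calvo signal structure (a standard calculation, added here for illustration): since a price set today is still in force k periods later with probability \omega_p^k, the expected duration of a price spell is

\sum_{k=1}^{\infty} k \, (1 - \omega_p) \, \omega_p^{k-1} = \frac{1}{1 - \omega_p} ,

so that, for example, \omega_p = 0.75 per quarter means that prices are re-optimised on average once a year.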
Let \tilde{p}_{it} – as the alternative for p_{it} in expression [14] – be the optimal price obtained in this way, and let \tilde{w}_t^j be the alternative optimal real wage set by the j-th household (12). Symmetry now allows P_t and W_t to be redefined as follows:

P_t^{1-\varepsilon} = (1 - \omega_p)(\tilde{p}_t)^{1-\varepsilon} + \omega_p (P_{t-1})^{1-\varepsilon}

W_t^{1-\phi} = (1 - \omega_w)(\tilde{w}_t)^{1-\phi} + \omega_w (W_{t-1})^{1-\phi} .   [18]

(12) Smets and Wouters (2003, 2007), among others, assume that those agents that are not allowed to re-optimise in the Calvo sense adjust their price/wage to past inflation levels. This slightly changes the form of the expressions in [18].

This base-line version of the new-Keynesian DSGE model has in recent years been
adapted and extended in a number of directions. We review the most important instances.
• Physical capital as a second primary input. Capital is owned by households
and rented to the firms. The capital accumulation equation adds to the dynam-
ics of the model. Capital income is assumed to equal marginal productivity of
capital, which becomes an endogenous variable in the model (see e.g. Chris-
tiano et al., 2005). Some authors consider also fixed costs in the production
sphere (e.g. Adolfson et al., 2007).
• Households can invest part of their wealth in government bonds at an interest rate that is set by the central bank. Monetary policy is in that case modelled by means of a Taylor reaction rule (a generic example is sketched after this list). This option is chosen in many papers.
• Variable capacity use of capital and labour. Galì (1999), for instance, consid-
ers the disutility from work in the utility function as a positive function both
of hours worked and effort supplied. Christiano et al. (2005) and also Smets
and Wouters (2003, 2007) include the rate of capital utilisation, next to the in-
vestment decision, in the decision set of the representative household.
• Habit formation in the consumption function (e. g. Smets and Wouters, 2003,
2007).
• Wage stickiness modelled either through the intermediate role of a monopolis-
tic trade union or through a Nash bargaining process between a union and a
representative firm, possibly combined with Calvo-type rigidity (e.g. Smets
and Wouters, 2007), or through the use of a search friction model (Gertler et
al., 2008).
• Open economy aspects. Adolfson et al. (2005) extend the Christiano et al.
(2005) model to a small open economy. Other contributions in this field in-
clude Galì and Monacelli (2005) and Lindé et al. (2008). Two-country new-
Keynesian DSGE models are analysed in Lubik and Schorfheide (2005) and
Rabanal and Tuesta Reátegui (2006). Galì and Monacelli (2008) examine
monetary and fiscal policy in a currency union.
• In open economy models, incomplete markets are introduced by considering
transaction costs for undertaking positions in the foreign bonds market, and by
gradual exchange rate pass-through, i.e. import prices do not immediately re-
flect prices on the world market expressed in domestic currency (see e.g.
Adolfson et al. (2007), Lindé et al. (2008) and Benigno (2009)).
• Additional types of shocks. The Smets and Wouters paper of 2007 is one that
goes far along this path: they consider shocks on technology, investment rela-
tive prices, intertemporal preference, government spending (including net ex-
ports), monetary policy, the price mark-up and the wage mark-up. Rabanal
and Tuesta Reátegui (2006), in their 2-country modellisation, consider also
country-specific technology shocks and UIP shocks.
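
The Taylor reaction rule referred to in the list above typically takes a smoothed form. The following specification is a generic example (the notation and the coefficient values are illustrative, not those of any particular paper cited here):

i_t = \rho \, i_{t-1} + (1 - \rho) \left[ \bar{\imath} + \phi_\pi (\pi_t - \bar{\pi}) + \phi_y \hat{y}_t \right] + \varepsilon_t^i ,

with \rho the degree of interest rate smoothing (e.g. 0.8), \phi_\pi > 1 (e.g. 1.5, the ‘Taylor principle’) the response to deviations of inflation from target, \phi_y (e.g. 0.125) the response to the output gap \hat{y}_t, and \varepsilon_t^i a monetary policy shock.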

Three important variants/extensions of the base-line model merit specific attention: the
specification of monetary policy, the presence of a commercial banking sector and the
issue of unemployment.
Although optimal monetary policy is one of the main focuses of the large majority of (new-Keynesian) DSGE papers, the concept of money is nearly always only weakly defined. In the much-cited papers of Smets and Wouters (2003, 2007),
for instance, a so-called ‘cashless limit economy’ is considered. Money as such is absent
in the model, even if there is a central bank pursuing a monetary policy in the form of a
Taylor interest rule. The background of this modelling choice is the old Walrasian and Arrow-Debreu general equilibrium concept of an economy under perfect competition. These models, which surely had no pretence of describing reality, were insufficiently detailed to deal with the ways in which people pay for goods, other than by saying that they had to stay within the borders of a budget constraint. If these models wanted to tell something meaningful about the money supply or monetary policy, they had to make simplifying assumptions like the ‘cash-in-advance’ hypothesis, which states that each economic agent must have the necessary cash available before buying goods.
Another simplifying option is the one that Woodford chooses in his ‘neo-Wicksellian’ approach (cf. Woodford (1998) and Woodford’s magnum opus Interest and Prices, 2003). Woodford – Smets and Wouters and a number of other authors follow suit – observes that paper and metal currency is gradually losing importance, and assumes that in the limit case, where paper and metal money have disappeared and only electronic money remains, his DSGE models continue to yield a meaningful solution for the nominal price level and the nominal rate of interest. Buiter (2002) strongly objects. He states that “Woodford’s cashless limit is simply the real equilibrium solution to an accounting system of exchange to which money or credit, be it cash (in-advance or in-arrears) or electronic transfer, is an inessential addition”. Cashless limit economies in the sense of

Woodford produce an equilibrium by means of the computing power of the auctioneer in
an Arrow-Debreu auction, and should not be confused with an electronic money system
(see also Rogers, 2006). Cashless limit models in the sense of Woodford may have peda-
gogical merits, but are unable to describe what is going on in a modern, highly monetised
economy, let alone to say something meaningful about the way in which the central bank
should act.
This is not to say that DSGE models that include a money supply variable are much more realistic. The basic problem remains that in DSGE models savers and investors are united in the same economic agent, the ‘representative’ household (13). This implies frictionless financial markets, and also no hierarchy of interest rates. The single interest rate set by the central bank is at the same time the rate of return on capital, the rate of return earned by firms and households on savings, and the rate paid by borrowers. There is no place, and no need, for a commercial banking sector that acts as intermediary. Recently, in Cúrdia and Woodford (2008), an (exogenous) credit friction was introduced, allowing for a time-varying wedge between the debit and credit interest rates, but in the continuing absence of commercial banks.
If there are no commercial banks in the model, questions about insolvency and illiquidity problems caused by these banks cannot be answered. Obviously, the models do not allow such questions to be asked in the first place.
The full employment implication of, specifically, new-Keynesian DSGE models is
another sore point. The reason for this feature is of course the symmetry in the continuum
of households. Each household is ‘representative’ in its own right. If one household finds
employment, all do. No involuntary unemployment can occur, only voluntary movements
in hours of work or intensity of effort, i.e. movements on the ‘intensive’ margin. This re-
mains true regardless of the particular form taken by wage or price rigidity.
Both Blanchard and Galì (2008) and Gertler et al. (2008) provide examples of new-
Keynesian DSGE models in which there are movements in employment along the exten-
sive margin (14). They do so by redefining the representative household as consisting of
family members with and without a job, and combining this feature with a wage bargain-
ing process. Gertler et al. also consider the probability of finding a matching between un-
employed workers and vacancies. We note in passing that both models are of the ‘cash-
less limit’ type.
(13) An interesting alternative is analysed in De Graeve et al. (2008), who introduce some degree of heterogeneity by considering three different types of households: workers, shareholders and bondholders.
(14) Blanchard and Galì start their analysis by noting that the absence of involuntary unemployment was viewed as one of the main weaknesses of the RBC model (see e.g. Summers, 1991), but was then ‘exported’ to new-Keynesian DSGE models.
This brings us to the fundamental weaknesses of (new-Keynesian) DSGE models. We
discuss successively
• the issue of representative economic agents, and the symmetry it entails,
• the rationality assumption,
• imposed stability: the transversality condition,
• the ‘efficient markets’ and ‘complete markets’ paradigms.

It should be well understood that the use of representative economic agents in DSGE
models is a way to circumvent the ‘fallacy of composition’, i.e. the implications of the
Sonnenschein-Mantel-Debreu theorem that states that the properties of individual behav-
iours, after aggregation, generally do not lead to equally nice and transparent properties
of the aggregated entities (see e.g. Kirman, 1992). The only way to preserve logical consistency is therefore to assume that all agents are alike. Symmetry in this context is automatic and inevitable.
But does this make representative agents an acceptable scientific concept? The an-
swer is ‘no’ if one uses the traditional argument as voiced by Atkinson (2009) that in the
real world people have different, often conflicting, interests and aspirations and that by
neglecting these differences, one rules out the most interesting welfare economic prob-
lems.
It is certainly again ‘no’ if we realise that individual agents that are clones of each
other act on their own, and therefore do not interact. This is what is called the agent co-
ordination problem. Macroeconomics is different from microeconomics in the sense
that it should study the complex properties of the whole that emerge from the interaction
of individual agents. The whole is not equal to the sum of its parts. Representative agent
models fail to address this very basic macroeconomic question. Howitt et al. (2008)
therefore ask what is so sound about the ‘sound microfoundations’ that DSGE modellers
insist on.
The representative household then maximises an intertemporal utility function under a budget constraint. Firms maximise an intertemporal profit function under constraints, like the available production technology and the time path followed by the capital stock. The implied rationality, also with respect to the formation of expectations, and the ability of these agents to get hold of the necessary information, are taken for granted.
No one says it better than Solow (2008, pp. 243-244): “(…) basically this is the Ram-
sey model transformed from a normative account of socially optimal growth into a posi-
tive story that is supposed to describe day-to-day behavior in a modern industrial capital-
ist economy. It is taken as an advantage that the same model applies in the short run, the
long run, and every run with no awkward shifting of gears. And the whole thing is given
the honorific label of ‘dynamic stochastic general equilibrium’. No one would be driven
to accept this story because of its obvious ‘rightness’. After all, a modern economy is
populated by consumers, workers, pensioners, owners, managers, investors, entrepre-
neurs, bankers, and others, with different and sometimes conflicting desires, information,
expectations, capacities, beliefs, and rules of behavior. Their interactions in markets and
elsewhere are studied in other branches of economics; mechanisms based on those inter-
actions have been plausibly implicated in macro-economic fluctuations. To ignore all this
in principle does not seem to qualify as mere abstraction – that is setting aside inessential
details. It seems more like the arbitrary suppression of clues merely because they are in-
convenient for cherished preconceptions. I have no objection to the assumption, at least
as a first approximation, that individual agents optimize as best they can. That does not
imply – or even suggest – that the whole economy acts like a single optimizer under the
simplest possible constraints. So in what sense is this ‘dynamic stochastic general equi-
librium’ model firmly grounded in the principles of economic theory?”
Buiter (2009a) concurs and points out that Ramsey’s model actually was a model for
a social planner trying to determine the long-run optimal savings rate. The mathematical
programming problem to be solved by the central planning agency only leads to a mean-
ingful solution if this agency, at the same time, also makes sure that terminal boundary
conditions (the so-called ‘transversality conditions’), that preclude explosive time-
paths, are met. These conditions express the necessity that the influence on the present of
what happens in an infinitely distant future vanishes.
DSGE modellers transplant this social planner’s programming problem to the ‘real
life’ situation of a ‘representative’ individual, expecting to describe in this way, not only
his long-run behaviour, but also his behaviour in the short and the medium run. Only, in a

decentralised market economy, there is no such thing as a mathematical programmer that imposes the necessary terminal conditions. There is no real-life counterpart to the
transversality conditions imposed on Ramsey’s social planner. Panics, manias and
crashes do happen, and are not confined to the nearly cataclysmic events of the Credit
Crunch. Post-war economic history abounds with examples. In the period since the Stock Exchange Crash in New York of October 1987 alone, we have had, successively, the Mexican Crisis (1994), the Asian Crisis (1997), the LTCM Crisis (1998 to early 2000), the bursting of the dot-com bubble (2000-2001), and the threatening panic following 9/11/2001.
Much has been written in the last forty years on rationality and the questionable existence of the ‘homo economicus’. This is not the place to expand on the important contributions by economists and social psychologists like Tversky, Kahneman, Selten, Fehr and others, documenting, by means of controlled experiments, the systematic deviation of economic actors from rational behaviour. The rationality assumption, especially when applied to asset markets – regardless of model uncertainty, which is always present – has however recently received additional severe blows. Obvious phenomena in the Credit Boost and Bust were hubris (15) and power play by the main actors, and herd behaviour by the crowd. With respect to the latter phenomenon, it is now clear that most investors and bankers who bought the securitised mortgages did so mainly because other smart people, who were supposed to be knowledgeable, did so too.
DSGE models completely miss Keynes’ ‘animal spirits’ point (see Akerlof and Shiller
(2009), Shiller (2009), and an interesting paper by De Grauwe (2008) using a ‘heuristic’
expectations formalisation).
With respect to the formation of expectations, there is however more to it than out-
right irrationality. The issue is foremost one of the unknowability of the future as a result
of so-called ‘Knightian uncertainty’. Knight drew the distinction between ‘risk’ and ‘un-
certainty’, risk being randomness with a known probability distribution and therefore in-
surable, and (Knightian) uncertainty being randomness with an unknown or even un-
knowable probability distribution and therefore uninsurable. Phelps (2009) argued that
risk management by banks related to ‘risk’ observed as variability over some recent past.

This was understood as variability around some equilibrium path, while the volatility of
the equilibrium path itself was not considered. Stable and knowable equilibrium paths
play however a crucial role in (new-Keynesian) DSGE models (see further in section 3).

(15) Buiter (2009b) e.g. refers to the role of testosterone in traders’ rooms.
Another illuminating angle from which to approach this unknowability problem is to see that, on the micro- as well as the macro-scale, long-run dynamics are most of the time path-dependent. Examples of hysteresis have been well documented in international
trade, industrial innovation, localisation of industries, consumer behaviour, the function-
ing of labour markets and consequently in the determination of the long-run rate of eco-
nomic growth itself (see Cross (2008) on DSGE modelling and hysteresis). DSGE model-
lers seem moreover to have neglected the important insights offered by endogenous
growth theory. Instead they have regressed to the old Solow-Cass-Koopmans growth
model used by the first RBC theorists (16).

(16) Solow is very much aware of this and distances himself from the use by DSGE modellers of his own growth theory (Solow, 2008).
This brings us to what perhaps is the most crucial assumption made by DSGE model-
lers: ‘complete and efficient markets’. The ‘Complete Market Hypothesis’ refers to the
existence of markets. A ‘complete system of markets’ is one in which there is a market
for every good, in the broadest possible sense of the word. A ‘good’ is then, as in the Ar-
row-Debreu approach of general equilibrium, defined not only in terms of its immanent
physical properties, but it is also indexed by time, place and state of nature or state of the
world. It is then possible for agents to instantaneously enter into any position with respect
to whatever future state of the economy. The ‘Efficient Market Hypothesis’ refers to the
working of markets. Allocative and expectational rationality holds and market prices re-
flect market fundamentals.
Add to this the assumption made by DSGE modellers that intertemporal budget con-
straints are always satisfied, and one gets an ‘economy’ where there are no contract en-
forcement problems, no funding or market illiquidity, no insolvency, no defaults and no
bankruptcies.
The comments of Goodhart, former member of the Monetary Policy Committee of
the Bank of England, are devastating: “This makes all agents perfectly creditworthy.
Over any horizon there is only one interest rate facing all agents, i.e. no risk premia. All
transactions can be undertaken in capital markets; there is no role for banks. Since all
IOUs are perfectly creditworthy, there is no need for money. There are no credit con-
straints. Everyone is angelic; there is no fraud; and this is supposed to be properly micro-
founded!” (Goodhart, 2008).

3. The solution of new-Keynesian DSGE models

The baseline model presented in the previous section, and of course also its extensions, are highly non-linear. In order to obtain a workable and estimable version, it is common procedure to (log)linearise the model around the equilibrium path and to reduce the stochasticity in the model to well-behaved, additive, normally distributed disturbances (17). In the determination of the optimal time-paths (in
levels) of the different variables of the model it was assumed that the transversality con-
ditions were satisfied. This, in principle, should have ruled out explosive behaviour of
these variables, but, since these transversality conditions actually do not intervene in the
actual derivation of the optimal time-paths (most DSGE modellers do not even bother to
mention them), saddle path stability of the long-run equilibrium is not automatically en-
sured. The latter is however a necessary condition for the long-run equilibrium to be
meaningful in the presence of rational expectations.
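To fix ideas on what such linearised equations look like, consider the consumption Euler condition. In the variants of the model where households also hold a one-period nominal bond at interest rate i_t (the baseline of section 2 has no bond; this extension is assumed here purely for illustration), the first condition of [9] log-linearises into the familiar dynamic IS relation

\hat{c}_t = E_t \hat{c}_{t+1} - \frac{1}{\sigma_c} \left( \hat{\imath}_t - E_t \hat{\pi}_{t+1} \right) ,

where hats denote log-deviations from the steady state, \hat{\pi}_{t+1} is inflation, and 1/\sigma_c is the intertemporal elasticity of substitution.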
To this end the linearised version of the model is subjected to the so-called Blanch-
ard-Kahn test. This test requires that, in order to have a unique and stable path, the num-
ber of eigenvalues of the linearised system smaller than 1 in absolute value should be
equal to the number of predetermined endogenous variables, and the number of eigenvalues
with absolute value larger than 1 should be equal to the number of anticipated variables
(Blanchard and Kahn, 1980). The problem of course is that this test, in nearly all cases,
can only be carried out when the parameters of the model are known, either through cali-
bration of the model, or through econometric analysis (see next section).
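
The counting condition is mechanical once the linearised model is cast in the form E_t x_{t+1} = A x_t; the following sketch checks it for a made-up two-variable example (the matrix A is purely illustrative, not a calibrated model):

```python
import numpy as np

# Linearised system E_t x_{t+1} = A x_t, with x_t containing one predetermined
# variable (e.g. capital) and one non-predetermined 'jump' variable (e.g. consumption).
A = np.array([[0.9, 0.1],
              [0.3, 1.2]])  # made-up coefficients for illustration
n_predetermined = 1

eigenvalues = np.linalg.eigvals(A)
n_stable = int(np.sum(np.abs(eigenvalues) < 1))

# Blanchard-Kahn: a unique stable (saddle-path) solution requires as many stable
# eigenvalues as predetermined variables (equivalently, as many unstable ones
# as non-predetermined variables).
if n_stable == n_predetermined:
    print("Blanchard-Kahn satisfied: unique stable solution")
elif n_stable > n_predetermined:
    print("Too many stable roots: indeterminacy (multiple solutions)")
else:
    print("Too few stable roots: no stable solution")
```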
The linearisation takes place around the steady state solution of the model. But this
steady state, by its very nature, does not refer to average situations, but to the extreme
situations of full capacity use, zero average inflation, purchasing power parity (in open
economy models), etc. A good illustration of this is the point that is conceded by Chris-
tiano et al., when they make the following comment on the fact that they take zero profits
as the steady state value: “Finally, it is worth noting that since profits are stochastic, the
fact that they are zero, on average, implies that they are often negative. As a conse-
quence, our assumption that firms cannot exit is binding. Allowing for firm entry and exit
dynamics would considerably complicate our analysis” (Christiano et al., 2005, p. 14).
Perhaps zero profits are an interesting benchmark, but they can hardly be a steady state value in a monopolistically competitive environment.

(17) Some authors have started to experiment with second-order Taylor expansions as an alternative to linearisation (see e.g. Schmitt-Grohé and Uribe, 2004).
Combined with the requirement that shocks in a linearised version of a non-linear model
have to remain small, one cannot but conclude that, in the very best of cases, new-
Keynesian DSGE models can only describe what happens in the immediate neighbour-
hood of a state of blissful tranquillity.
The need to linearise around a steady state also implies that one has to limit the analy-
sis to effects of temporary shocks. Permanent shocks cannot be accommodated (see e.g.
Mancini Griffoli, 2007).
More fundamentally, stripping a non-linear model of its non-linearities may very well mean – the more so if one considers the interaction of these non-linearities with uncertainty – that one deletes from the model everything that makes the dynamics of reality interesting: threshold effects, critical mass effects, regime-switching points, etc. If
there is one thing that recent economic history has made clear, then it is that economic
systems can be tranquil (i.e. ‘stable’) for some time, but that, once in a while, unforeseen
events push the system out of the ‘corridor of stability’. Linear systems, by their very na-
ture, cannot have this corridor property.
The nature of stochasticity in linearised DSGE models (Buiter (2009a) cynically
speaks of ‘trivialising’) is another sore issue. Firstly, linear models with independently
distributed disturbances have the ‘certainty equivalence’ property. Linearising, as far as
the mean of the solved time path goes, reduces in actual fact the model to a deterministic
one. Secondly, if one assumes that the disturbances are normally distributed, as DSGE
modellers traditionally do, one dramatically misses one of the essential aspects of, in par-
ticular, movements of prices on asset markets. As an illustration of this, De Grauwe, in a

18
witty contribution, has shown that the 10.88% fall of the Dow-Jones Industrial Average
on 28/10/2008, if you would assume an underlying normal distribution, would only take
place once every 73,357,946,799,753,900,000,000 years, exceeding by far the assumed
age of the universe (De Grauwe, 2009).
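
De Grauwe’s order of magnitude is easy to reproduce. The sketch below assumes, purely for illustration, that daily DJIA returns are normal with a standard deviation of 1% (the exact waiting time depends on the sample standard deviation one plugs in, which is why the figure differs somewhat from De Grauwe’s):

```python
from scipy.stats import norm

sigma = 0.01   # assumed daily standard deviation of DJIA returns (illustrative)
move = 0.1088  # the 10.88% one-day move of 28/10/2008

p = norm.sf(move / sigma)  # one-tail probability of a move at least this large
trading_days_per_year = 252
wait_years = 1.0 / (p * trading_days_per_year)

print(f"daily tail probability: {p:.3e}")
print(f"expected waiting time:  {wait_years:.3e} years")
# With sigma = 1% this yields a waiting time of the order of 10**24 years,
# which, like De Grauwe's figure, vastly exceeds the assumed age of the universe.
```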

4. Calibration and estimation of DSGE models

In older DSGE models, in line with what was common in new-classical RBC models, parameters were chosen a priori so that the dynamic qualities of the solution, in terms of the lower-order moments of the underlying distributions, conformed with what was observed. This ‘calibration’ approach, as opposed to a traditional econometric approach,
was preferred because of the complicated, highly non-linear, nature of the models, and
presumably also because RBC theorists and early DSGE modellers – probably uncon-
sciously – did not wish to confront directly their very sketchy and unrealistic models with
the data.
Solow is very caustic on this practice. We cite: “The typical ‘test’ of the model, when
thus calibrated, seems to be a very weak one. It asks whether simulations of the model
with reasonable disturbances can reproduce a few of the low moments of observed time
series: ratios of variances or correlation coefficients, for instance. I would not know how
to assess the significance level associated with this kind of test. It seems offhand like a
rather low hurdle. What strikes me as more important, however, is the likelihood that this
kind of test has no power to speak of against reasonable alternatives. How are we to
know that there are not scores of quite different macro models that could leap the same
low hurdle or a higher one? That question verges on the rhetorical, I admit. But I am left
with the feeling that there is nothing in the empirical performance of these models that
could come close to overcoming a modest skepticism. And more certainly, there is noth-
ing to justify reliance on them for serious policy analysis” (Solow, 2008, p. 245).
In more recent DSGE models one usually follows a mixed strategy, but the inauspicious heritage of calibration lingers on. It does so in two ways. Firstly, part of the often numerous parameters are still calibrated. Secondly, another part is estimated with Bayesian procedures in which the choice of priors, whether or not inspired by calibrated values taken from previous studies, by the very nature of the Bayesian philosophy, heavily biases the ultimate estimates.
One of the reasons to opt for Bayesian estimation techniques is that likelihood func-
tions of DSGE models often show numerous local maxima and nearly flat surfaces at the
global maximum. Traditional maximum likelihood estimation strategies therefore often
fail (see Fernandez-Villaverde, 2009). But, rather than opting for the flight forward into Bayesian techniques, this should perhaps warn one that DSGE models do not marry well with real-life data.
In the frequently cited Christiano et al. paper, the estimation strategy is, to be sure, more careful, in the sense that the authors, in a preparatory step, use an unrestricted VAR procedure to estimate the impulse responses of eight key macroeconomic variables of the model to a monetary policy shock, in order, in a second step, to minimise a distance measure between these estimated IRFs and the corresponding impulse responses implied by the model. However, eight other very crucial parameters are fixed a priori (among which the discount factor, the parameters of the utility function of the households, the steady state share of capital in national income, the annual depreciation rate, the fixed cost term in the profit function, the elasticity of substitution of labour inputs in the production function, and the mean growth rate of the money supply). This implies of course that the remaining ‘free’ parameters are highly constrained and thus remain heavily biased.
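
Schematically, this two-step procedure amounts to minimum-distance estimation on impulse responses. The sketch below illustrates the idea only: model_irf is a stand-in for solving the DSGE model (here a hump-shaped two-parameter curve), and the ‘empirical’ IRF is simulated rather than taken from a VAR:

```python
import numpy as np
from scipy.optimize import minimize

H = 20  # IRF horizon

def model_irf(theta, horizon=H):
    # Stand-in for the model-implied IRF to a monetary policy shock:
    # a hump-shaped response with parameters theta = (amplitude, persistence).
    amplitude, persistence = theta
    h = np.arange(horizon)
    return amplitude * h * persistence ** h

# 'Empirical' IRF: in Christiano et al. this comes from an unrestricted VAR;
# here it is simulated around known parameter values for illustration.
rng = np.random.default_rng(1)
irf_hat = model_irf((0.4, 0.8)) + rng.normal(0.0, 0.02, H)
W = np.eye(H)  # in practice: e.g. the inverse sampling variances of the IRFs

def distance(theta):
    gap = irf_hat - model_irf(theta)
    return gap @ W @ gap

result = minimize(distance, x0=np.array([0.1, 0.5]), method="Nelder-Mead")
print("estimated (amplitude, persistence):", result.x)
```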
In the case of normality, when the variance-covariance matrix of the disturbances is
known, the posterior mean can be written as a matrix weighted average of the prior mean
and the least-squares coefficient estimates, where the weights are the inverses of the prior
and the conditional covariance matrices. If the variance-covariance matrix is not known,
as is nearly always the case, the relation between prior and posterior values of the pa-
rameters is of course more complicated, but the general picture remains valid (see e.g. Greene, 2003).
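
In formulae, this is the textbook result for the normal linear model with known disturbance variance \sigma^2 (reproduced here for reference; cf. Greene, 2003): with prior \beta \sim N(\beta_0, \Sigma_0) and least-squares estimate \hat{\beta}, the posterior mean is

\bar{\beta} = \left( \Sigma_0^{-1} + \sigma^{-2} X'X \right)^{-1} \left( \Sigma_0^{-1} \beta_0 + \sigma^{-2} X'X \hat{\beta} \right) ,

so that a tight prior (a ‘small’ \Sigma_0) pulls the posterior mean towards \beta_0 – which is exactly the hysteresis mechanism described below.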
The conclusion is that the practice of calibration is still widespread. Bayesian statisti-
cal techniques produce a particular kind of hysteresis effect. Parameter values, once fixed
by an ‘authoritative’ source, live on in the priors of subsequent studies, which in turn per-
petuate possible errors. Blanchard, although himself author of a few new-Keynesian
DSGE papers, worries that “once introduced, these assumptions [about the priors and a
priori fixed parameters used in models] can then be blamed on others. They have often
become standard, passed on from model to model with little discussion” (Blanchard,
2008).

5. Conclusions on policy and theory

New-Keynesian DSGE models and new-Keynesianism as such are not the same thing. Criticism of the former does not mean that one does not recognise the progress that has been made by economists who fall under the latter denomination. Mankiw, in a brilliant
review of the present state of economic theory (Mankiw, 2006), distinguishes four phases
in ‘modern’ Keynesian theory: 1) the Keynesian-neoclassical synthesis of Samuelson,
Modigliani and Tobin; 2) the first so-called new-Keynesian wave with Malinvaud and
Barro and Grossman’s disequilibrium approach; 3) the second new-Keynesian wave with
Akerlof, Mankiw, Summers, Stiglitz and Blanchard and Kiyotaki’s insights on sources of
wage and price rigidity; and 4) the third, new-Keynesian DSGE wave. Mankiw (as, for that matter, does Krugman (2009)) sees a scientific progression in each of the first three phases, but discerns a regression in the fourth.
It is, for that matter, an open question whether new-Keynesian DSGE models are
‘new-Keynesian’ in the original sense of the word. It is of course the case that price and
wage stickiness is one of the basic ingredients of these models, but the simple fact that a
representative household at all times maximises its intertemporal utility (even when, as in
Blanchard and Galì, not every member of that household does so), implies that new-
Keynesian DSGE models are, in actual effect, nothing more than revamped new-classical
RBC models. Goodfriend and King (1997), followed by Clarida, Galì and Gertler (1999),
Woodford (2003), but also Smets and Wouters (2007), have understood this and dubbed
this, in their view ‘consensus’, paradigm the ‘New Neoclassical Synthesis’.
Armed with this ‘consensus’ view on the economy, DSGE modellers also claim to be
able to formulate a ‘consensus’ view on optimal monetary policy. Woodford (2009, p.
273) summarises the obtained DSGE result in this way: “Monetary policy is now widely
agreed to be effective, especially as a means of inflation control. The fact that central
banks can control inflation if they want to (and are allowed to) can no longer be debated,
after the worldwide success of disinflationary policies in the 1980s and 1990s; and it is
now widely accepted as well that it is reasonable to charge central banks with responsibil-
ity for keeping the inflation rate within reasonable bounds.”
Apart from the fact that this sounds very much like an application of the ‘Coué Method’, it should not come as a surprise that inflation-targeting by the central bank comes out as a result (even in models, like Smets and Wouters’ 2007 exercise, that do not contain a money supply variable in the first place), given that most recent new-Keynesian DSGE papers model the behaviour of the central bank in the form of a Taylor rule, and given that the validation of DSGE models is at best shaky.
The record of the last decade tells however a very different story, especially if we consider the monetary policy pursued by the Fed. Instead of following an explicit inflation-targeting policy, Alan Greenspan obviously, on practically a day-to-day basis, manipulated the official discount rate with a view to stabilising financial markets. Not only was this ‘Greenspan Put’ exercised with systematic regularity; in the years 2002-2004 interest rates were also kept too long at an abnormally low level. Only on the surface could this appear as evidence of an inflation-targeting policy: inflation, as measured by the CPI, remained low in that period, the implication then being that the interest rate level was ‘right’, whatever the level of interest rates at each particular moment. According to Leijonhufvud (2008), the inflation rate could however remain low only because US consumer goods prices were stabilised through competition with (massive) price-elastic imports from countries like China, which had chosen not to let its currency appreciate, in view of the amount of US dollars it continued to accumulate as a result of the continuing deficit on the US current account (see also James K. Galbraith (2008) for a very critical account of the monetary policy of the Fed and of present-day economic theories that try to explain it).
Goodhart (2008, p. 14), a central banker himself, asks: “How on earth did central banks
get suckered into giving credence to a model which is so patently unsatisfactory?”
Mankiw (2006), for that matter, disputes, in a closely reasoned argument – countering in this way the sometimes boisterous claims by new-Keynesian DSGE modellers (e.g. Woodford and Goodfriend in many of their publications) – that central bankers have indeed based their actual policy on the results obtained by DSGE models (see Woodford (2009) for a rebuttal).
Central bank independence, i.e. the doctrine that there should be a separation of re-
sponsibility for monetary and fiscal policy, with the latter ‘flying on auto-pilot’, is an-
other principle hallowed by new-classicals and new-Keynesian DSGE modellers. It broke
down once the going got rough, especially in the US and the UK (see e.g. Buiter (2009b)
on this, and also Leijonhufvud (2008)) (18), the Fed acting, in the words of Buiter, “like
an off-balance and off-budget ‘special purpose vehicle’ of the US Treasury”.
Solow (2008) asks what accounts for the ability of new-Keynesian DSGE modelling “to win hearts and minds among bright and enterprising academic economists”. One type of answer is purely psychological: the ‘purist streak’ of young people, the search for a ‘theory of everything’, as one also witnesses in the efforts of elementary particle physicists in their quest for first principles (19).
Another answer is sociological. Streeten (2002, pp. 15-16) offers an interesting
theory: “The problem with American undergraduate education is that most American
schools (with a few notable exceptions) teach so badly that the young people have to go
through remedial training in their early university years. They are often almost illiterate
when they enter university. At the same time, these youngsters are often eager to learn,
have open minds, and are asking big questions. But while their minds are open and while
they are eager to ask these large questions, they do not have the basic training to explore
them. By the time they reach graduate studies, the groundwork has been done, but the
need to chase after credits and learn the required techniques tends to drive out time and
interest in exploring wider areas, asking interesting questions. As a result, only very few
exceptional young people are led to approach the subject with a sense of reality and vi-
sion. The majority is stuck in the mould of narrow experts.” Rodrik (2009) comes to a
similar conclusion.

(18) The ECB has of course been able, by the very structure of the EMU, to safeguard its independence, but one might perhaps at the same time question its relevance in the face of the economic crisis.
(19) It is, for instance, illustrative that some DSGE modellers, when they speak of (log)linearising their models, use the term ‘perturbation techniques’ (as used by quantum physicists) (see e.g. Fernandez-Villaverde, 2009).
As already mentioned in the introduction, ‘groupthink’ is of course the obvious third
explanation (20).
It is time to move to a final conclusion. If macroeconomics wants to regain relevance, it had better move away from formalism, even when logically consistent and therefore aesthetically appealing, towards a more engineering-like approach. This point is very forcefully
made by Mankiw (2006) and Howitt et al. (2008). There is nothing ‘unscientific’ about
this. It is indeed difficult to agree with Woodford (2009, p. 274) when he writes that “one
hears expressions of scepticism about the degree of progress in macroeconomics from
both sides of this debate – from those who complain that macroeconomics is too little
concerned with scientific rigor, and from those who complain that the field has been too
exclusively concerned with it.” One should rather say that DSGE modelling is not a mat-
ter of scientific rigor, but of formal rigor.
Better no micro foundations than bad micro foundations.
New-Keynesian DSGE models are “self-referential, inward-looking distractions at
best” (Buiter, 2009a) – toy models in the words of Blanchard (2008). Is it not sufficiently
ambitious, with the limited knowledge that macroeconomists have about the real world,
to move on (back), as Solow (2008) suggests, to small, transparent, tailored models, often
partial equilibrium?
Could it be that present-day mainstream macroeconomics is a ‘degenerative research
programme’ in the sense that Imre Lakatos gave to that term (Lakatos, 1970)?

(20) Lee Smolin (2006) tells about a similar situation of a dominating paradigm in elementary particle physics (string theory), with what he sees (rightly or wrongly) as a possible case of ‘groupthink’.
References

Adolfson, M., S. Laséen, J. Lindé and M. Villani (2005), ‘Bayesian Estimation of an Open Econ-
omy DSGE Model with Incomplete Pass-through’, Journal of International Economics, 72:
481-511.
Akerlof, G.A. and R.J. Shiller (2009), Animal Spirits: How Human Psychology Drives the Econ-
omy and Why It Matters for Global Capitalism (Princeton: Princeton University Press).
Atkinson, A.B. (2009), ‘Economics as a Moral Science’, Economica, published online at
http://www3.interscience.wiley.com/journal/122314362/abstract?CRETRY=1&SRETRY=0
Benigno, P. (2009), ‘Price Stability with Imperfect Financial Integration’, Journal of Money,
Credit and Banking, 41(suppl.): 121-149.
Blanchard, O.J. (2008), ‘The State of Macro’, NBER Working Paper 14259, August 2008.
Blanchard, O.J. and J. Galì (2008), ‘A New-Keynesian Model with Unemployment’, CEPR Dis-
cussion Paper DP 6765.
Blanchard, O.J. and C.M. Kahn (1980), ‘The Solution of Linear Difference Models under Ra-
tional Expectations’, Econometrica, 48: 1305-1313.
Buffett, W. (2002), Berkshire Hathaway Annual Report for 2002
(www.fintools.com/docs/Warren%20Buffet%20on%20Derivatives.pdf).
Buiter, W.H. (2002), ‘The Fallacy of the Fiscal Theory of the Price Level: a critique’, Economic
Journal, 112: 459-480.
Buiter, W.H. (2009a), ‘The Unfortunate Uselessness of most ‘State of the Art’ Academic Mone-
tary Economics’, FT.com/Maverecon, 3/3/2009 (http://blogs.ft.com/maverecon/2009/03/the-
unfortunate-uselessness-of-most-state-of-the-art-academic-monetary-economics/#more-667).
Buiter, W.H. (2009b), ‘The Green Shoots are Weeds Through the Rubble in the Ruins of the
Global Economy’, FT.com/Maverecon, 8/4/2009 (http://blogs.ft.com/maverecon/2009/04/the-
green-shoots-are-weeds-growing-through-the-rubble-in-the-ruins-of-the-global-
economy/#more-1276).
Buiter, W.H. (2009c), ‘Useless Finance, Harmful Finance and Useful Finance’,
FT.com/Maverecon, 12/4/2009 (http://blogs.ft.com/maverecon/2009/04/useless-finance-
harmful-finance-and-useful-finance/#more-1357)
Calvo, G.A. (1983), ‘Staggered Prices in a Utility-maximizing Framework’, Journal of Monetary
Economics, 12: 383-398.
Christiano, L.J., M. Eichenbaum and C.L. Evans (2005), ‘Nominal Rigidities and the Dynamic
Effects of a Shock to Monetary Policy’, Journal of Political Economy, 113: 1-45.
Clarida, R., J. Galì and M. Gertler (1999), ‘The Science of Monetary Policy: a new-Keynesian
perspective’, Journal of Economic Literature, 37: 1661-1707.
Cohen, P. (2009), ‘Ivory Tower Unswayed by Crashing Economy’, New York Times, 5/3/2009.
Cross, R. (2008), ‘Mach, Methodology, Hysteresis and Economics’, Journal of Physics: Confer-
ence Series, 138: 1-7.
Cúrdia, V. and M. Woodford (2008), ‘Credit Frictions and Optimal Monetary Policy’, National
Bank of Belgium Working Paper 146.
De Graeve, F., M. Dossche, H. Sneessens and R. Wouters (2008), ‘Risk Premiums and Macro-
economic Dynamics in a Heterogeneous Agent Model’, National Bank of Belgium Working
Paper 150.
De Grauwe, P. (2008), ‘DSGE-modelling when Agents are Imperfectly Informed’, ECB Working
Paper Series no. 897, May 2008.
De Grauwe, P. (2009), ‘The Banking Crisis: cause, consequences and remedies’, Itinera Institute
Memo, 19/2/2009.
Dodd, R. (2007), ‘Subprime: tentacles of a crisis’, Finance and Development, IMF, 44(4).
Erceg, C.J., D.W. Henderson and A.T. Levin (2000), ‘Optimal Monetary Policy with Staggered
Wage and Price Contracts’, Journal of Monetary Economics, 46: 281-313.

Fernandez-Villaverde, J. (2009), ‘The Econometrics of DSGE Models’, NBER Working Paper
14677, January 2009.
Galbraith, James K. (2008), ‘The Collapse of Monetarism and the Irrelevance of the New Mone-
tary Consensus’, The Levy Economics Institute at Bard College, Policy Note, 2008(1).
Galì, J. and T. Monacelli (2005), ‘Monetary Policy and Exchange Rate Volatility in a Small Open
Economy’, Review of Economic Studies, 72: 707-734.
Galì, J. and T. Monacelli (2008), ‘Optimal Monetary and Fiscal Policy in a Currency Union’,
Journal of International Economics, 76: 116-132.
Gertler, M., L. Sala and A. Trigari (2008), ‘An Estimated Monetary DSGE Model with Unem-
ployment and Staggered Wage Bargaining’, Journal of Money, Credit and Banking, 40:
1713-1764.
Goodfriend, M. and R. King (1997), ‘The New Neoclassical Synthesis and the Role of Monetary
Policy’, NBER Macroeconomics Annual, 1997: 231-283.
Goodhart, C.A.E. (2008). ‘The Continuing Muddle of Monetary Theory: a steadfast refusal to
face facts’, Financial Markets Group, London School of Economics.
Greene, W.H. (2003), Econometric Analysis, 5th ed. (Upper Saddle River, NJ: Prentice Hall).
Howitt, P., A. Kirman, A. Leijonhufvud, P. Mehrling and D. Colander (2008), ‘Beyond DSGE
Models: toward an empirically based macroeconomics’, American Economic Review: Papers
and Proceedings, 98: 236-40.
Johnson, S. (2009), ‘The Quiet Coup’, Atlantic Monthly, May 2009.
Kay, J. (2009), ‘How Economics Lost Sight of the Real World’, Financial Times, 22/4/2009.
Kirman, A. (1992), ‘Whom or What Does the Representative Individual Represent?’, Journal of
Economic Perspectives, 6: 117-136.
Klamer, A. (1984), Conversations with Economists (Totowa, NJ: Rowman and Allanheld).
Krugman, P. (2009), ‘A Dark Age of Macroeconomics’, New York Times, 27/1/2009.
Kydland, F.E. and E.C. Prescott (1982), ‘Time to Build and Aggregate Fluctuations’, Economet-
rica, 50: 1345-1370.
Lakatos, I. (1970), ‘Falsification and the Methodology of Scientific Research Programmes’, in: I.
Lakatos and A. Musgrave (eds), Criticism and the Growth of Knowledge (Cambridge: Cam-
bridge University Press), pp. 91-196.
Leijonhufvud, A. (2008), ‘Keynes and the Crisis’, CEPR Policy Insight, no. 23, May 2008.
Lindé, J., M. Nessén and U. Söderström (2008), ‘Monetary Policy in an Estimated Open-
economy Model with Imperfect Pass-through’, International Journal of Finance & Econom-
ics, published online at http://www3.interscience.wiley.com/cgi-
bin/fulltext/119877054/PDFSTART .
Long Jr., J.B. and C.I. Plosser (1983), ‘Real Business Cycles’, Journal of Political Economy, 91:
39-69.
Lubik, T. and F. Schorfheide (2005), ‘A Bayesian Look at New Open Economy Macroeconom-
ics’, NBER Macroeconomics Annual, 2005: 313-366.
Mancini Griffoli, T. (2007), Dynare v4 – User Guide.
Mankiw, N.G. (2006), ‘The Macroeconomist as Scientist and Engineer’, Journal of Economic
Perspectives, 20(4): 29-46.
Obstfeld, M. and K. Rogoff (1995), ‘Exchange Rate Dynamics Redux’, Journal of Political
Economy, 103: 624-660.
Phelps, E.S. (2009), ‘Uncertainty Bedevils the Best System’, Financial Times, 14/4/2009.
Rabanal, P. and V. Tuesta Reátegui (2006), ‘Euro-dollar Real Exchange Rate Dynamics in an
Estimated Two-Country Model: what is important and what is not’, CEPR Discussion Paper
5957.
Rodrik, D. (2009), ‘Blame the Economists, not Economics’, Harvard Kennedy School, 11/3/2009
(http://hks.harvard.edu/news-events/commentary/blame-the-economists.htm).

26
Rogers, C. (2006), ‘Doing Without Money: a critical assessment of Woodford’s analysis’, Cam-
bridge Journal of Economics, 30: 293-306.
Roubini, N. (2004), ‘The Upcoming Twin Financial Train Wrecks of the US’, RGE EconoMoni-
tor, 5/11/2004.
Schmitt-Grohé, S. and M. Uribe (2004), ‘Solving Dynamic General Equilibrium Models Using a
Second-order Approximation to the Policy Function’, Journal of Economic Dynamics and
Control, 28: 755-775.
Shiller, R.J. (1989), Market Volatility (Cambridge, Mass.: MIT Press).
Shiller, R.J. (2009), ‘A Failure to Control the Animal Spirits’, Financial Times, 8/3/2009.
Smets, F. and R. Wouters (2003), ‘An Estimated Dynamic Stochastic General Equilibrium Model
of the Euro Area’, Journal of the European Economic Association, 1: 1123-1175.
Smets, F. and R. Wouters (2007), ‘Shocks and Frictions in US Business Cycles: a Bayesian
DSGE approach’, American Economic Review, 97: 586-606.
Smolin, L. (2006), The Trouble with Physics (New York: Houghton Mifflin Harcourt).
Solow, R.M. (2008), ‘The State of Macroeconomics’, Journal of Economic Perspectives, 22:
243-249.
Streeten, P. (2002), ‘What’s Wrong with Contemporary Economics?’, Interdisciplinary Science
Reviews, 27: 13-24.
Summers, L. (1991), ‘The Scientific Illusion in Empirical Macroeconomics’, Scandinavian Jour-
nal of Economics, 93: 129-148.
Tobin, J. (1984), ‘On the Efficiency of the Financial System’, Lloyds Bank Review, no. 153, July
1984, 1-15.
Wolf, M. (2009), ‘Seeds of its own Destruction’, Financial Times, 8/3/2009.
Woodford, M. (1998), ‘Doing without Money: controlling inflation in a post-monetary world’,
Review of Economic Dynamics, 1: 173-219.
Woodford, M. (2003), Interest and Prices: Foundations of a Theory of Monetary Policy (Prince-
ton: Princeton University Press).
Woodford, M. (2009), ‘Convergence in Macroeconomics: elements of the New Synthesis’,
American Economic Journal: Macroeconomics, 1: 267-279.
Zabel, R.R. (2008), ‘Credit Default Swaps: from protection to speculation’, Pratt’s Journal of
Bankruptcy Law, September 2008.
