Are macroeconomists ignoring the research program of the last 30 years?

There are plenty of people saying that macroeconomists have simply ignored what they have researched over the last 30 years when trying to deal with the recession. In fact, many of the posts I have written could be seen as tacitly agreeing with this point of view.

However, my answer to the posted question would be NO – macroeconomists are not ignoring their recent research.

Many people may then ask: Why are economists giving Keynesian, or IS/LM, style explanations of the crisis? Why do economists differ so heavily on what they think is the right policy? Why do they differ on what they think is actually going on?

My answer would be that macroeconomists are using old-school language to EXPLAIN the conclusions they have reached with more modern methods. People are more comfortable with the idea of IS/LM etc., and so macroeconomists can use this type of language to explain what they are doing – even though it isn’t the real justification 😛 . I don’t particularly like this – but I’m sure I do it myself 🙁

Economists also disagree heavily about both PRESCRIPTIONS and DESCRIPTIONS. This is because the recent “research program” has not removed the importance of value judgments – fundamentally, people may agree on a general framework, but there is no central model for setting the values of the parameters in a given game.

Even if economists could agree on that, the rise of game theory has opened up the realm of the FOLK THEOREM and MULTIPLE EQUILIBRIA. Fundamentally, macroeconomists can now explain anything in multiple ways – making any single explanation, by itself, empirically empty. Microeconomists discovered this a long time ago – and it is still a vexing methodological issue.
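To see why multiple equilibria make any single story empirically empty, consider a toy 2x2 coordination game – the payoff numbers here are illustrative assumptions, not drawn from any actual macro model:

```python
# A minimal 2x2 coordination game: both (High, High) and (Low, Low)
# turn out to be Nash equilibria, but (High, High) Pareto-dominates.
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("High", "High"): (3, 3),
    ("High", "Low"):  (0, 1),
    ("Low",  "High"): (1, 0),
    ("Low",  "Low"):  (1, 1),
}
actions = ["High", "Low"]

def is_nash(a_row, a_col):
    """A profile is a Nash equilibrium if neither player gains
    by unilaterally switching to another action."""
    u_row, u_col = payoffs[(a_row, a_col)]
    for dev in actions:
        if payoffs[(dev, a_col)][0] > u_row:   # row player deviates
            return False
        if payoffs[(a_row, dev)][1] > u_col:   # column player deviates
            return False
    return True

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('High', 'High'), ('Low', 'Low')]
```

Both outcomes are internally consistent, so observing the economy sitting at “Low” doesn’t by itself tell you which story – or which policy – is right.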

So macroeconomists are not ignoring recent research – recent research just hasn’t put macroeconomics in a position where it is all-encompassing and all-powerful. Something some macroeconomists need to realise, methinks 😀

5 replies
  1. Greg Ransom says:

    Are you going to hold fast to this, Matt?

    Barro and Mankiw have spilled the beans on the fact that Krugman and economists in the Obama government are ignoring the empirical research of the last 30 years on fiscal policy – a central achievement of macroeconomic research over that period.

    There’s also this.

    Mario Rizzo and others have pointed out that most macroeconomists are ignoring 40 years of research in the history of economic thought on Keynesian economics and the economics of Keynes — getting Keynes wrong and ignoring what we’ve learned about the inadequacies of various brands of “Keynesian economics”.

    Finally, it’s not for no reason that Scott Sumner writes this:

    “There is a reason why modern graduate macro texts place so little emphasis on the ideas that Keynes developed in the GT, they are very hard to justify in a model with rational expectations. What puzzles me is why concepts such as the MPC, the multiplier, the paradox of thrift, and fiscal stimulus have recently become so widely debated among economists. Do these concepts help us understand movements in nominal spending? And if so, what is the model that justifies that view?”

    If we’re talking about the research program of the last 30 years, we’re talking post Lucas, right?

    So how do you square Sumner with the Obama vulgar IS-LM Keynesians?

  2. Matt Nolan says:

    Hi Greg,

    Fine questions.

    My thoughts would be as follows. The current research program is definitely focused on rational expectations. However, when economists describe theory they aren’t just focused on what the strict model says, but on where data has deviated strongly from the model.

    In order to explain these deviations we have had a hodge-podge of New Keynesian models, which build on the general methodology of New Classical economics (representative agents, rational expectations) and add rigidities.

    Now, the real issue here is that we can’t seem to come up with clear answers to policy questions – unless we first make value judgments surrounding some parameters in the model.

    The whole idea of endogenous growth theory (based on increasing returns to scale) has made the idea of a single equilibrium untenable – but if we have multiple Pareto-ranked equilibria in an economy, how do we know that we are heading to the right one?

    The ideas of “multipliers”, “the paradox of thrift” etc have all come back in this form over the last decade – fundamentally, these are a potential part of our models, and they can be found if we make appropriate value judgments.

    Now, economists like Krugman know that – they have different value judgments from much of the discipline, and they use the well-known example of IS/LM to explain themselves. However, their underlying justification for these results still stems from modern models.

    Fundamentally, I am saying that economists’ models are so broad that you can set them up to PRESCRIBE ANYTHING. I get annoyed with Krugman et al. because, by using IS/LM, they are not transparent about the value judgments they are making when they do this.


    Let me put this another way. The last 30 years of research have broadened the domain of economics. Economics now has this great framework, but by itself it is empirically empty. The benefit of the models stems from the fact that we can make our assumptions (value judgments) transparent.

    As a result, economists are doing a bad job because they aren’t making their assumptions obvious – but they are still using the last 30 years of research to reach the same conclusions they would have 30 years ago 🙂
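Matt’s point about the multiplier can be made concrete with a toy calculation – the textbook spending multiplier is 1/(1 − MPC), and the MPC value below is purely an assumption for illustration:

```python
# Toy Keynesian multiplier: with marginal propensity to consume (MPC) c,
# an initial spending injection of 1 generates 1 + c + c^2 + ... = 1/(1-c)
# of total spending. c = 0.8 is an assumed value, not an estimate.
mpc = 0.8
rounds = [mpc ** n for n in range(1000)]   # geometric series, truncated
multiplier = sum(rounds)
closed_form = 1 / (1 - mpc)
print(round(multiplier, 6), round(closed_form, 6))  # both ≈ 5.0
```

The arithmetic is trivial – the value judgment is entirely in choosing the MPC, which is exactly the kind of assumption that should be made transparent.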

Trackbacks & Pingbacks

  1. […] they haven’t. The last 30 years has seen macroeconomists define their field more strongly and begin working on a […]

Comments are closed.