The Economist’s misguided lecture to macroeconomists

In a bizarre leader article, The Economist praises microeconomists for their use of data to better predict people’s behaviour and recommends that macroeconomists do the same:

Macroeconomists are puritans, creating theoretical models before testing them against data. The new breed [of microeconomists] ignore the whiteboard, chucking numbers together and letting computers spot the patterns. And macroeconomists should get out more. The success of micro is its magpie approach, stealing ideas from psychology to artificial intelligence. If macroeconomists mimic some of this they might get some cool back. They might even make better predictions.

I’m tempted to label this as obvious baiting, but the misunderstanding is deeper than that. The newspaper appears to be suggesting that the way forward for better macroeconomic forecasts is to replace theory with data mining. Economists well remember the last time they thought that empirical models and relationships could be used to improve forecasts and set policy. The heady days of the 1960s saw economists attempting to fine-tune the economy using empirical relationships such as the Phillips curve. As the empirical relationship disintegrated in the 1970s, the developed world fell into a disastrous period of stagflation, a situation the empirical models in use had not anticipated.
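To see why the breakdown happened, here is a stylised sketch (my notation, not the specification of any particular 1960s model). The trade-off being exploited looks roughly like

\[ \pi_t = \pi^e_t - a\,(u_t - u^n), \qquad a > 0, \]

where \(\pi_t\) is inflation, \(u_t\) unemployment, \(u^n\) the natural rate and \(\pi^e_t\) expected inflation. The 1960s policy exercises in effect treated \(\pi^e_t\) as fixed, so the equation seemed to offer permanently lower unemployment in exchange for a little more inflation. Once expectations adjusted upwards, the fitted curve shifted and the apparent trade-off vanished, which is the stagflation story above.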

Enter our heroes: Milton Friedman, Robert Lucas, Finn Kydland and Ed Prescott. These intrepid macroeconomists convincingly demonstrated that nearly any purely empirical model will fail to predict the outcome of a policy change, an argument now known as the Lucas critique. The core problem is that data-driven predictive models incorporate a myriad of implicit assumptions about the relationships and interactions between people in the economy. Policy changes alter those relationships, and the models then become very poor predictors. That insight ultimately led to the development of micro-founded models such as the New Keynesian DSGE models used by most central banks today.
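The mechanics can be sketched in one line (again, illustrative notation only). Suppose the forecaster fits the reduced form

\[ y_t = \theta\, x_t + \varepsilon_t, \]

and the regression pins down \(\theta\) beautifully. The trouble is that \(\theta\) is not a primitive: theory says \(\theta = f(\gamma, \psi)\), a function of deep parameters \(\gamma\) (preferences, technology) and of the policy rule \(\psi\) that was in force while the data were generated. Change the rule to \(\psi'\) and the true coefficient becomes \(f(\gamma, \psi')\), while the fitted model carries on predicting with the old \(f(\gamma, \psi)\). The durable objects to estimate are the \(\gamma\), and you cannot even define those without a theory, which is what micro-foundations supply.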

Anyone who has worked with general equilibrium models will know that they are immensely data-hungry, requiring vast amounts of the stuff to produce even simple predictions. But they use that data in a fashion that is theoretically structured to avoid the problems of the 1960s. Better data complements better theory; it is not a substitute. The Economist’s misguided recommendation would throw out some of the greatest advances in policy-making of the past half century. Economists must resist the lure of Big Data mining and ensure that theoretical innovation keeps pace with the explosion in available data.

6 replies
  1. Feminist Optimal says:

    It’s pretty patronising to suggest that we seriously don’t already know how to get better forecasts. Of course a data-driven approach will forecast better, but that actually isn’t the performance target for macro, no matter what a misguided media would have people believe.

    More precisely, there’s an important difference between micro and macro in terms of data use – and it’s the reason why data mining can work for micro but not macro. Micro uses data mining to infer the deep parameters (and uses data appropriate to that aim). Old-school empirical macro uses data under the assumption that the deep parameters are known. That’s necessary because the data available for macro is much less descriptive. With that data, the extra step of trying to estimate the deep parameters is asking way too much of standard data mining algorithms; hence the need for theoretical structure, like a DSGE framework, to help the estimation along.
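    For a rough sketch of what that theoretical structure does in practice (assuming the Bayesian estimation most DSGE shops use, which I haven’t spelled out above): the deep parameters \(\theta\) are estimated from the posterior

    \[ p(\theta \mid Y) \propto \mathcal{L}(Y \mid \theta)\, p(\theta), \]

    where \(\mathcal{L}(Y \mid \theta)\) is the likelihood implied by the solved model and the prior \(p(\theta)\) carries the theoretical and microeconometric information that a few hundred quarterly observations cannot identify on their own. Pure data mining is, in effect, being asked to do without both the model-implied likelihood and the prior.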

    • jamesz says:

      I suppose there is some hope that better data can help us to build richer, more disaggregated models and start to build out some parts where previously we’ve had to make assumptions. If we could, for instance, model price stickiness rather than assuming a process for it, that might improve our short-run forecasts.
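      To be concrete about the ‘assuming a process’ bit (a textbook sketch, not a claim about any particular central bank’s model): the baseline New Keynesian setup assumes Calvo pricing, in which a firm gets to reset its price each period only with probability \(1-\theta\). That single assumption delivers the New Keynesian Phillips curve,

      \[ \pi_t = \beta\, \mathbb{E}_t[\pi_{t+1}] + \frac{(1-\theta)(1-\beta\theta)}{\theta}\, \widehat{mc}_t, \]

      so the whole slope of the inflation equation hangs on \(\theta\), exactly the kind of parameter that richer micro price data could in principle discipline, or replace with something measured.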

      The question of forecasts as a test of models is a tricky one, but it is clear that we’re very bad at it right now. I don’t mean that in the ‘central prediction’ sense, but rather that our confidence intervals don’t tend to represent the actual probabilities of things occurring. I’m not a forecaster and I don’t know why that is, but it’s something we could hope to improve.

  2. mike smitka says:

    So how do we get more data, and data that includes variation? Do we really believe that US data from the 1970s, from before interstate banking was permitted and before international finance and international trade mattered, should be added? So 30 years of data give us only 120 quarters, no large uptick in inflation, and only a couple of recessions — that variation thing. Going to OECD panel data doesn’t help much, unless you believe the US and Spain are fundamentally the same. So the idea of “big data” in macro is ludicrous — we aren’t able to measure macro variables at weekly or even monthly frequencies, nor would we care if we could.

    Or should we go back to the structural models (Ray Fair’s is online for all to play with) that do use higher-frequency data and do aggregate micro markets? That approach may offer more scope for “data mining” and for building in bubbles and so on, but it comes at a cost: less room for theoretical pyrotechnics and algorithmic innovation. Such models thus present no scope for grad students to do clever, novel things and get tenure… no responsible dissertation committee would allow such projects.

    • jamesz says:

      Agent-based models, right? Hungry for high-frequency data, emergent behaviour, theoretical hotness, new and shiny! Forecast performance may be lagging just a shade right now, but I think that just means there’s ‘scope for improvement’. More seriously, I think the big data stuff is more useful for helping to pin down macro models’ elasticities and the like than for improving forecast performance.

      • mike smitka says:

        Yes, there is indeed scope for this in the old-school structural models. However, the modern DSGE models all operate at a very high level of aggregation and abstraction, so it’s generally unclear what their “deep parameters” could possibly represent (or why they should be stable, since the components of the aggregates aren’t stable). Big data doesn’t help with attaching elasticities to things that can’t be defined outside the context of an abstract theoretical construct.
