Yes, there is indeed scope for this in the old-school structural models. However, the modern DSGE models all operate at a very high level of aggregation / abstraction, so it's generally unclear what their "deep parameters" could possibly represent (or why they should be stable, since the components of the aggregates aren't stable). Big data doesn't help with attaching elasticities to things that can't be defined outside the context of an abstract theoretical construct.
ABM, right? Hungry for HF data, emergent behaviour, theoretical hotness, new and shiny! Forecast performance may be lagging just a shade right now, but I think that just means there's 'scope for improvement'. More seriously, I think the big data stuff is more useful for helping determine macro models' elasticities etc. than for improving forecast performance.
Or should we go back to the structural models (Ray Fair's is online for all to play with), which do use higher-frequency data and do aggregate micro markets? That approach may offer more scope for "data mining" and building in bubbles and so on, but it comes at a cost: less room for theoretical pyrotechnics and algorithmic innovation. Such models thus present no scope for grad students to do clever, novel things and get tenure… no responsible dissertation committee would allow such projects.
I suppose there is some hope that better data can help us to build richer, more disaggregated models and start to build out some parts where previously we've had to make assumptions. If we could, for instance, model price stickiness rather than assuming a process, then that might improve our short-run forecasts.
The question of forecasts as a test of models is a tricky one, but it is clear that we're very bad at it right now. I don't mean that in the 'central prediction' sense, but rather that our confidence intervals don't tend to represent the actual probabilities of things occurring. I'm not a forecaster and I don't know why that is, but it's something we could hope to improve.
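To illustrate what "intervals don't represent the actual probabilities" means in practice, here is a minimal sketch of a coverage check: if the nominal 90% forecast intervals are honest, outcomes should fall inside them about 90% of the time. The arrays `lower`, `upper`, and `outcome` are entirely hypothetical stand-ins for a track record of past interval forecasts and the realised values.

```python
# A minimal coverage check on a (hypothetical) track record of interval
# forecasts. All numbers below are made up for illustration.
import numpy as np

lower = np.array([1.0, 0.5, 2.0, 1.5, 0.0, 1.2, 0.8, 1.9])    # interval lower bounds
upper = np.array([3.0, 2.5, 4.0, 3.5, 2.0, 3.2, 2.8, 3.9])    # interval upper bounds
outcome = np.array([2.1, 3.4, 2.5, 0.9, 1.1, 2.0, 4.5, 2.2])  # realised values

# Fraction of outcomes that landed inside their forecast interval.
hits = (outcome >= lower) & (outcome <= upper)
coverage = hits.mean()
print(f"nominal 90% interval, empirical coverage: {coverage:.0%}")
# A persistent shortfall (about 62% here) is exactly the "intervals don't
# represent actual probabilities" problem; intervals that are systematically
# too wide would overshoot the nominal rate instead.
```

A real evaluation would of course use a long forecast track record rather than eight made-up points, and would check coverage at several nominal levels, but the test itself is this simple.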
More precisely, there's an important difference between micro and macro in terms of data use – and it's the reason why data mining can work for micro but not macro. Micro uses data mining to infer the deep parameters (and uses data appropriate to that aim). Old-school empirical macro uses data under the assumption that the deep parameters are known. That's necessary because the data available for macro is much less descriptive. With that data, the extra step of trying to estimate the deep parameters is asking way too much of standard data mining algorithms; hence the need for theoretical structure, like a DSGE framework, to help the estimation along.
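To make that last point concrete, here is a minimal sketch, with made-up numbers, of why a short, collinear macro sample defeats unrestricted estimation of separate elasticities, and how imposing structure (here, a prior centred on calibrated values, in the spirit of Bayesian DSGE estimation) stabilises it. Every name and value below is illustrative, not an estimate of anything real.

```python
# A minimal sketch: short sample, nearly collinear aggregates, so the data
# pin down the sum of the two elasticities but not the split between them.
# Shrinking towards theory-calibrated values recovers a sensible split.
import numpy as np

rng = np.random.default_rng(42)
T = 40                                   # ~10 years of quarterly data
b_true = np.array([0.7, 0.3])            # the "deep" elasticities (made up)

# Two aggregates that move almost one-for-one, as macro series tend to.
z = rng.normal(size=T)
X = np.column_stack([z + 0.05 * rng.normal(size=T),
                     z + 0.05 * rng.normal(size=T)])
y = X @ b_true + 0.5 * rng.normal(size=T)

# Unrestricted least squares: the individual elasticities come out wild.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Theoretical structure": a Gaussian prior b ~ N(b0, (1/lam) I) centred on
# calibrated values b0. The posterior mode solves a penalised least squares.
b0, lam = np.array([0.6, 0.4]), 5.0
b_post = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y + lam * b0)

print("true      :", b_true)
print("OLS       :", b_ols.round(2))     # typically far from the truth
print("with prior:", b_post.round(2))    # pulled back towards calibration
```

The design choice is the whole argument in miniature: the prior is doing real work, so the answer is only as good as the theory it encodes, which is why the stability of those "deep parameters" matters so much.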