Methodology – TVHE (The Visible Hand in Economics, http://www.tvhe.co.nz)

How we talk about the nature of work
http://www.tvhe.co.nz/2022/03/24/how-we-talk-about-the-nature-of-work/ – Wed, 23 Mar 2022

Over two days Betsey Stevenson had two posts on the nature of work – both of which I agree with, and both of which sound like they may contradict each other.

So wait a second: if someone in a high-status job gets paid more for the same effort and the same contribution, then why are we talking about marginal revenue product? Shouldn’t they already be rewarded by status? Is this a product of power? Let’s have a think.

Let’s write out the three elements here:

  • Work that is separated by “social standing”, and remunerated differently, may be similar in form.
  • In a labour market represented by the standard neoclassical model – which captures a tendency in markets where there is effective competition – the wage equals the marginal revenue product.
  • Wages do not represent the value of the individual within society, and may even differ from their value in employment.

Betsey uses these items to make explicit that the descriptive assumptions of a model should NOT be confused with broader normative assumptions, and that it is the role of someone using economics to separate these as clearly as possible. Very true.

However, I think it also illustrates a general confusion about economics. Economists build models that explain tendencies, and why those tendencies occur. Once we understand the conditions within which these tendencies arise, we can use the models as a framework for real data, and in turn investigate whether the tendencies hold in the area of interest.

There are MANY other determinants of wages than marginal revenue product in many circumstances. But the neoclassical model has value insofar as it describes a tendency for the level of wages to move with marginal revenue product – and it can be used to evaluate where the incidence of this falls given a variety of market structures.

An economist is never saying that a single incentive causes everything, or that a price or wage is the sole determinant of an action. Economics evaluates the tendency for outcomes to change given an isolated change in an incentive – be it a price, a wage, or some other attribute we are able to measure. Given that, we can then use this framework to measure and to understand our measurement.

That is pretty cool.

Randomized control trials and economic models: friends or foes?
http://www.tvhe.co.nz/2019/11/21/randomized-control-trials-and-economic-models-friends-or-foes/ – Wed, 20 Nov 2019

Randomized control trial (RCT) studies have been getting more and more attention among policymakers over the last few decades. The RCT is also one of the core experimental methodologies used by the recent Nobel prize laureates in economics, Duflo, Kremer and Banerjee.

Given the excitement around these methods, the University of Chicago recently ran its IGM Economic Experts Panel, asking economic experts whether “Randomized control trials are a valuable tool for making significant progress in poverty reduction”. The results of the poll are summarized in the graph below.

The chart above shows the distribution of respondents’ agreement. What struck me most from the results was Angus Deaton’s strong disagreement with the statement – especially given that he is an expert in the field.

Why does Deaton strongly disagree? 

To answer that we need to think about what an RCT is and how it fits the policy question. Let’s shed some light on it.

What is an RCT? 

An RCT is a technique used predominantly in the medical sciences, but it has also been applied quite intensively in economics, especially over the last few decades. The technique works in the following way. Researchers randomly select a group of people to receive a clinical intervention (such as an anti-cancer pill). A comparison group (the control group) is also randomly selected, and receives a placebo intervention (such as a sugar pill).

The researchers then compare the difference between the groups to quantify the effect of the intervention (the “treatment effect”).
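The comparison of group means can be sketched in a few lines. This is a minimal, hypothetical simulation – the outcomes, sample sizes, and the true effect of 2.0 are all assumptions for illustration, not any particular study:

```python
import random
import statistics

random.seed(0)

# Hypothetical population: baseline outcomes, plus an assumed true
# treatment effect of 2.0 that we will try to recover.
population = [random.gauss(10, 3) for _ in range(10_000)]
true_effect = 2.0

# Randomisation: draw a sample and split it at random into treatment
# and control, so the two groups are comparable on average.
sample = random.sample(range(len(population)), 2_000)
treated, control = sample[:1_000], sample[1_000:]

treated_outcomes = [population[i] + true_effect for i in treated]
control_outcomes = [population[i] for i in control]

# The estimated treatment effect is simply the difference in means.
estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(round(estimate, 2))  # close to the true effect of 2.0
```

The randomisation is what does the work here: because assignment is unrelated to anything about the individuals, the difference in means is an unbiased estimate of the treatment effect rather than a confounded comparison.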

In economic research, RCTs are often applied in poverty alleviation schemes to help quantify the effect of a policy intervention. However, the method has been applied much more widely, yielding insights into the labour market, behavioural economics, health economics, taxation, and industrial economics.

So an RCT tells me what a policy does? 

An RCT gives us an empirical treatment effect under specific conditions. This is the type of thing economists will often call a stylised fact.

However, stylised facts cannot give us general policy effects – they tell us what the policy response was in a specific set of circumstances, but we need to be able to generalize that effect to apply it in other circumstances. 

This is where Deaton gets concerned, and where some of the push-back against RCT stems from. 

To get a policy effect we still need a model – simply scaling up an RCT involves imposing an implicit model about how the policy and behavioural responses work, one that assumes the scale of the policy change does not matter and that there are no general equilibrium effects.

This matters. If we provided a minimum income payment in Treviso, Italy we may find certain changes in prices and labour supply responses in that community. However, we could not then take that result and “scale it up” across Italy as a whole – Treviso is not a closed system in the way an entire country may be, and the larger scale of the policy would influence prices and labour market responses differently as a result (e.g. if a minimum income increased demand for particular goods, doing so in a small region may not change the price of those goods – while doing it for the whole country would).

How do economic models fit in here?

Economic models provide the mechanism for generating generalisability. At the same time, models and RCT results should work in a reciprocal way.

Given the same conditions as the RCT, a good economic model should be able to replicate the result – or at least key attributes of it.  Given the ability to replicate an RCT for those conditions, the model then embeds key assumptions about why that result held and a description of the systems that make up the question at hand – this allows an economist to ask counterfactual questions about what would happen if the policy introduced was much larger.

However, it isn’t all one way. Models should in turn be reevaluated if a robust body of RCT evidence suggests that – for a given set of conditions – the model’s results are false. RCTs provide the pieces of evidence that models should be able to replicate, while models provide a framework for understanding what can’t be measured and how other, counterfactual, policy changes would work.

Examples of policy implementations (treatments):

To clarify let’s talk about specific examples of how the RCT can be used. 

Minimum wage and labour market 

Let’s consider the example of a minimum wage increase and labour market outcomes. Card and Krueger (1993) found that a minimum wage increase in New Jersey led to employment increases in the state compared with a neighbouring state (Pennsylvania), where the same policy was not applied.
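Their comparison has the structure of a difference-in-differences. A sketch with made-up employment numbers – hypothetical placeholders, not Card and Krueger’s actual estimates:

```python
# Hypothetical average employment per store, before and after the NJ
# minimum wage rise (illustrative numbers only).
nj_before, nj_after = 20.4, 21.0   # New Jersey: minimum wage increased
pa_before, pa_after = 23.3, 21.2   # Pennsylvania: no policy change

# Difference-in-differences: the change in NJ minus the change in PA.
# PA's change stands in for what would have happened in NJ anyway.
did = (nj_after - nj_before) - (pa_after - pa_before)
print(round(did, 2))  # 2.7: employment rose in NJ relative to PA
```

The design only identifies the policy effect if the two states would have followed parallel trends absent the policy – which is exactly the kind of assumption the surrounding discussion says must be defended before generalising.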

Now if we want to take this result and generalise it to the population level – saying that if we increase the minimum wage it will lead to an increase in the employment rate – we are making a mistake. Why? Because the same increase in the minimum wage in all states would have different impacts due to the composition of those states, the overall change in prices in the economy, and the capital structure and industries that are viable across the US economy.

However, it showed there were real shortcomings in models that could ONLY indicate that an increase in the minimum wage would reduce employment. This helped to generate a literature that considers the role of minimum wages more carefully, given the potential for market power and strategic interaction in the market for low-wage workers.

What is the solution then? 

In Deaton’s view too much is being asked of RCTs, and indeed people need to recognise how to “transport” the results to another context:

“More generally, demonstrating that a treatment works in one situation is exceedingly weak evidence that it will work in the same way elsewhere; this is the ‘transportation’ problem: what does it take to allow us to use the results in new contexts, whether policy contexts or in the development of theory?

It can only be addressed by using previous knowledge and understanding, i.e. by interpreting the RCT within some structure, the structure that, somewhat paradoxically, the RCT gets its credibility from refusing to use. If we want to go from an RCT to policy, we need to build a bridge from the RCT to the policy.”

Deaton’s concern, which is reasonable, is that RCTs are treated as a sole source of truth. But such a focus isn’t just misleading, it would be bad science.

Card and Krueger’s paper did not tell us that a higher minimum wage would increase employment – it taught us that reality is complicated, and that the evaluation of policy must be based on trying to understand how this works, using both evidence and theory. Duflo, Kremer, and Banerjee similarly see the importance of both – in her Economist as Plumber article Duflo notes:

“However, because the economist-plumber intervenes in the real world, she has a responsibility to assess the effects of whatever manipulation she was involved with, as rigorously as possible, and help correct the course: the economist-plumber needs to persistently experiment, and repeat the cycle of trying something out, observing, tinkering, trying again”

Deaton’s concern is that people will experiment and measure without ever trying to model and understand what they are doing – thereby generating a stream of published studies but no understanding.  Those that are more positive about the RCT revolution instead see such experimentation as part of this very iterative process that helps to describe the “transport” problem that Deaton is concerned about.

To sum it up 

Predicting a policy result involves an implicit model – irrespective of the number of RCTs that have been run. However, these RCTs provide a discipline that any worthwhile predictive model needs to satisfy – they provide the true stylised facts (if done properly) that a predictive model must match to be credible.

GDP in three different charts
http://www.tvhe.co.nz/2015/02/25/gdp-in-three-different-charts/ – Tue, 24 Feb 2015

Flipchart Rick has a post up about Andy Haldane’s speech the other day and, like all Haldane’s work, it’s witty and engaging so you should definitely read it. The subject is the recent slowdown in growth in the developed world and it illustrates how different views of the same data can lead to very different conclusions.

Haldane plots the last 3000 years of GDP to show what a recent phenomenon exponential growth is:

[Figure: Haldane’s chart of GDP over the last 3,000 years]

Rick then emphasises how very recent growth has been with this chart. Note that the x-axis compresses as the dates advance, which makes the growth rate appear greater for later dates. He uses this chart to suggest that most of the growth in incomes has actually occurred since the 1950s, which is entirely true in an absolute sense.

[Figure: per-capita GDP of China, Germany, India, Japan, the UK and the USA, 1700–2008, per Angus Maddison]

But now look at how steady UK GDP growth looks if I plot it with an even x-axis and a log y-axis to account for the exponential nature of growth since the Industrial Revolution. The two World Wars cause noticeable spikes in output followed by sharp falls, as expected: war output tends to be funded by borrowing, kills millions of people, and destroys productive assets. Despite that, a prediction of 2 per cent growth back in 1800 would have been fairly accurate for the next 200 years.
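That back-of-the-envelope claim is easy to check: constant proportional growth compounds exponentially, and plots as a straight line on a log scale. A quick sketch, taking the 2 per cent rate from the text as given:

```python
import math

# 2 per cent annual growth compounded over 200 years:
# output multiplies roughly 50-fold at that rate.
factor = 1.02 ** 200
print(round(factor))  # 52

# On a log y-axis, log(GDP_t) = log(GDP_0) + t * log(1.02), so the
# slope log(1.02) is constant: steady growth is a straight line.
slope_per_year = math.log(1.02)
print(round(slope_per_year, 4))  # 0.0198
```

This is why the choice of axes matters so much in the charts above: the same steady exponential looks explosive on a linear y-axis and unremarkable on a logged one.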

Herein lies the problem with extrapolation from trends. We’ve had 200 years of fairly steady growth that encompasses multiple world wars, the fall of numerous empires, the steam engine, motorisation, computerisation, flight, and many other life-changing technologies. It’s difficult to argue that the past five years have seen greater upheaval, such that growth must be permanently changed. On the other hand, the past 300 years are a blip in human history and have seen unprecedented growth in output. It is hard to argue that these huge outliers represent a trend that must continue. Both sides of the argument can present convincing charts but no graph answers the essential question: why are the last 300 years different? Until we truly understand the changes that began in the Industrial Revolution we cannot begin to predict when our lucky streak will end.

Update: Economist points out to me in a comment that I never really excluded the possibility that the first chart demonstrates exponential growth and is just poorly plotted. I grabbed Brad DeLong’s estimates of growth since 1 million BC (confidence intervals not included) and plotted them with a logged y-axis, but it turns out that the last 300 years is such a blip that it doesn’t even show up! So here is the most misleading chart of the entire post, in which I’ve also logged the time axis to give you a fairly linear relationship in log-log space!

The Economist’s misguided lecture to macroeconomists
http://www.tvhe.co.nz/2015/01/14/the-economists-misguided-lecture-to-macroeconomists/ – Tue, 13 Jan 2015

In a bizarre leader article The Economist praises microeconomists for their use of data to better predict people’s behaviour and recommends macroeconomists do the same:

Macroeconomists are puritans, creating theoretical models before testing them against data. The new breed [of microeconomists] ignore the whiteboard, chucking numbers together and letting computers spot the patterns. And macroeconomists should get out more. The success of micro is its magpie approach, stealing ideas from psychology to artificial intelligence. If macroeconomists mimic some of this they might get some cool back. They might even make better predictions.

I’m tempted to label this as obvious baiting but the misunderstanding is deeper than that. The newspaper appears to be suggesting that the way forward for better macroeconomic forecasts is to replace theory with data mining. Economists well remember when they last thought that empirical models and relationships could be used to improve forecasts and set policy. The heady days of the 1960s saw economists attempting to fine-tune the economy using empirical relationships such as the Phillips curve. As the empirical relationship disintegrated in the 1970s the developed world fell into a disastrous period of stagflation; a situation not anticipated by the empirical models in use.

Enter our heroes: Milton Friedman, Robert Lucas, Finn Kydland and Ed Prescott. These intrepid macroeconomists convincingly demonstrated that nearly any empirical model would fail to predict the outcome of policy changes. The core problem is that data-driven predictive models incorporate a myriad of implicit assumptions about the relationships and interactions between people in the economy. Policy changes alter those relationships and the models then become very poor predictors. That insight ultimately led to the development of micro-founded models such as the New-Keynesian DSGE models used by most central banks today.

Anyone who has worked with general equilibrium models will know that they are immensely data-hungry and require vast amounts of the stuff to produce simple predictions. But they do so in a fashion that is theoretically structured to avoid the problems of the 1960s. Better data complements better theory, it is not a substitute. The Economist’s misguided recommendation would throw out some of the greatest advances in policy-making of the past half century. Economists must resist the lure of Big Data mining and ensure that theoretical innovation keeps up with the explosion in available data.

Merry Christmas from TVHE
http://www.tvhe.co.nz/2014/12/25/merry-christmas-from-tvhe/ – Wed, 24 Dec 2014

Have a great Christmas and we’ll be back in the New Year. If you’re feeling starved of economics over the next ten days The Atlantic has a selection of beautiful Christmas cards to send to your loved ones:

If you need something a little more stimulating then you can catch up on some of the debates you missed on the blogs over the past week.

The effectiveness of monetary policy at the ZLB

This debate has been going for a while but it has flared up again recently with a disagreement between Paul Krugman and Ambrose Evans-Pritchard. Krugman has a follow-up and Tony Yates, as ever, brings the theory to the debate. Scott Sumner reviews the evidence against Krugman.

Are we teaching economics the right way?

Aditya Chakrabortty’s radio show on teaching economics drew on heterodox economists such as Steve Keen and Ha-Joon Chang to criticise the current state of the economics profession and the way the subject is taught. It drew a strong reaction from Tony Yates and Diane Coyle. Karl Whelan was moved to write a long, thoughtful reflection on how economists present their work.

In support of dynamic scoring
http://www.tvhe.co.nz/2014/04/23/in-support-of-dynamic-scoring/ – Tue, 22 Apr 2014

Estimating the impact of tax cuts is a tricky business. You can fairly easily calculate how the revenue from current income and spending will change, but that’s just the beginning. The problem is that people don’t stand still: they change their earning and spending habits in response to your tax changes, which changes the revenues from the taxes. The UK government is pretty good at estimating that but economists have long known that there are a couple more stages before you have a full picture of what’s going on. That’s why HM Treasury has begun to use a dynamic, computable, general-equilibrium (CGE) model to estimate the effect of tax changes.

CGE models bring us closer to reality…

The CGE model accounts for the long-term effect on the economy of changing behaviour. In the case of cuts in fuel duty it accounts for the growth in production caused by a reduction in transport costs. Increasing production generates more road traffic, which yields more fuel duty revenue and partially offsets the cost of the cut. Using the CGE model to ‘dynamically score’ (as the jargon goes) the cost of the tax cut incorporates effects that are not part of the traditional approach.
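The feedback logic can be illustrated with a toy calculation. All numbers below are hypothetical placeholders rather than HM Treasury figures; the point is only the mechanism by which the behavioural response claws back revenue:

```python
duty_rate = 0.58      # duty per litre in pounds (hypothetical)
cut = 0.03            # a 3p-per-litre cut (hypothetical)
litres = 47e9         # annual litres sold, held fixed under static scoring

# Static scoring: behaviour is assumed unchanged, so the cost is
# just the size of the cut times the unchanged volume.
static_cost = cut * litres

# Dynamic scoring: cheaper transport raises activity and traffic, so
# volumes grow (an assumed 1.5% response) and some duty revenue is
# recovered at the new, lower rate.
litres_dynamic = litres * 1.015
dynamic_cost = duty_rate * litres - (duty_rate - cut) * litres_dynamic

print(round(static_cost / 1e9, 2))   # 1.41 (billion): static estimate
print(round(dynamic_cost / 1e9, 2))  # 1.02 (billion): smaller, after feedback
```

The gap between the two numbers is the revenue ‘clawback’ from induced activity – exactly the effect the static approach omits by construction.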

…but it doesn’t account for everything

Even the CGE model doesn’t include all known effects: commentators have been quick to point out that externalities, such as pollution and congestion, are not included in the model. Nonetheless, it is better than the previous estimates and HM Treasury should be applauded for its efforts. The current, static scoring suffers from the same problems, so dismissing the CGE estimates on those grounds would allow the perfect to be the enemy of the good.

John McDermott, in the FT, also criticises the model for ignoring the effects of monetary policy and the current slump, saying that “their absence means we should be sceptical about any attempt to simulate GDP two decades down the track”. This is a tricky question. The absence of money from a long-run model shouldn’t matter because, over a twenty year horizon, monetary policy rarely has much effect. However, a tax can have different effects in a slump than it would in a boom and CGE analysts commonly ignore these subtleties because it is difficult to know their magnitude. But these are minor quibbles since the current, static scoring method also suffers from the same problems.

The counterfactual is crucial

The most telling critique is made by Chris Giles and Simon Wren-Lewis, who claim that the Treasury has modelled the tax cut in a manner designed to make it look good. The issue is that when you ask ‘how much will it save?’, the answer is ‘compared to what?’ A reduction in the fuel duty could be matched by either a cut in expenditure or a rise in taxes elsewhere. The Treasury chose to compensate with extra taxes that change growth by the least amount possible. Chris and Simon’s point is that the overall impact would be far less rosy had the Treasury chosen to hike income taxes instead. That is true but, most likely, the Treasury analysts don’t know what the Government would do to compensate and tried to be as neutral as possible. Ideally, they would model various scenarios to give an idea of the possible range of impacts, but it is quite possible that they did not have the time to do that. Certainly, it would be good to see more scenarios in future work, but it is telling that this question is never asked of static scoring. The reason is that static analysis tends to hide these assumptions through omission, rather than making them explicit as CGE analysis requires. The discipline of having to think about, and debate, these questions is a good one and certainly not a reason to favour the current, static techniques.

Conclusion

It’s great to see the Treasury conducting more sophisticated analyses of tax policy and doing so publicly. Publishing the results allows for these discussions and can only improve the work they do in future. The analysis of fuel duties has obviously hit a political nerve and commentators have been quick to jump on the difficulties inherent in complex estimation tasks. In the hubbub we shouldn’t lose sight of the fact that the CGE model is still a great advance on what came before.

Economics, theory, and data
http://www.tvhe.co.nz/2014/03/23/economics-theory-and-data/ – Sat, 22 Mar 2014

This post was originally titled “Why data alone is not enough for economic inference”. I was all prepared to write a post on the fact that we need data and theory in order to do economic inference and create knowledge. I had links (*,*,*,*,*). Then Noah Smith wrote this really good post on the issue, so I’d suggest reading that.

On the other side there are those who are “too in love” with theory, without any reference to data or the prior literature (which is a way of building a case for inference between theory and data). A clear example comes from some of the comments made by certain physicists moving into economics – Chris House has expressed that here.

I’d note that I want physicists to come in and add tools and debate to economics – things like agent-based modeling, which they are more familiar with, are growing exponentially in popularity in sociology and economics (due to recent data availability), so pro-tips from this would actually be superb!  Also keeping economics in a silo from other disciplines would be patently ridiculous – the questions we ask are too important not to have as many ideas as possible competing to create knowledge.

But the most vocal of these econophysicists (not all of them) need to learn to accept the idea of “many models” and the lack of a general theory for economics (I am sure most physicists do in their own discipline; the people Chris mentions do not). Even more importantly, these physicists do not understand the normative–positive distinction in economic modeling – their determination to say what we “should” do based on one perspective is incredibly unsound. There are multiple ethical dimensions involved due to the existence of individual choice, and economic models are predicated not on giving a “solution” to policy questions, but on informing them of tendencies and trade-offs that need to be considered.

Examples of concerns

The two worst examples of physicists being snide at economists recently were this article and this paper – although for very different reasons.

The article frustrates me a lot more than the paper, as it is someone talking about terms they don’t have a complete appreciation of! They complain about the microfoundations endeavour, and then follow it up with a call for heterogeneity. Explanatory models that include more heterogeneity are themselves a form of microfounding a model – so this in itself is strange. He may defend it by saying “I mean the representative-agent microfounding you guys do” – ok, but then he is ignoring all the work on introducing heterogeneity, accounting for habit formation, forms of bias, etc. and attacking a strawman. Note: Attacking the political implementation of economic policies is very different from attacking the academic discipline – academics got pretty frustrated during this crisis as well.

And he makes claims about journal articles being all about rationality and a single equilibrium … does he actually read journal articles? That is not what I’ve found. Instead there is a LOT about data and stylized facts, and a bunch about trying to isolate causes from idealized models given a range of assumptions. The complaint about rational choice is an overused archetype of a concern – these issues are complicated, and economists (since Herbert Simon – but also a lot earlier, in a more ad hoc way) tend to view rational choice through a lens of limitations to cognitive efficiency, and try to work in a framework that could incorporate that – this makes us similar to psychologists, and makes neuroeconomics exciting.

I have also heard the author of that article say that economists need to study the behaviour of idealised systems/an artificial world, and then think about how that translates and can help us answer questions. This solely tells me the author doesn’t know what economists do – and is selling books attacking a straw man. Why? Because the behaviour he wants us to exhibit happens to be what the economic method is!

Update: James had a very nice post earlier in the month about some of the specific claims made by Mark; it can be found here.

The paper’s concept was reasonable enough: exchange rates exhibit Brownian motion. However, it was just weird to see scientists claiming it and sort of saying “hey, it’s like fluid and stuff and this is a new idea”. Exchange rates are asset prices, and within economics and finance asset prices are constantly modeled using Brownian motion. It just disappoints me that these claims are made in a way that makes economists sound stupid, even when there is already a well-used current literature on the topic in economics that doesn’t look like it has been read 🙁 – Note: It may be in the paper itself, I don’t have access right now – but this doesn’t change the tone of the abstract!
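For context, the workhorse model that literature already uses is geometric Brownian motion, where the log of the price follows a random walk with drift. A minimal simulation – the drift, volatility, and starting rate below are arbitrary illustrative choices:

```python
import math
import random

random.seed(42)

# Geometric Brownian motion:
#   S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
mu, sigma = 0.0, 0.10        # zero drift, 10% annual volatility (assumed)
dt = 1 / 252                 # one trading day as a fraction of a year
rate = 1.50                  # starting exchange rate (hypothetical)

path = [rate]
for _ in range(252):         # simulate one trading year of daily moves
    z = random.gauss(0, 1)
    rate *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    path.append(rate)

# Because the shock enters through exp(), prices stay strictly positive -
# one reason GBM is the standard choice for asset prices.
print(len(path), min(path) > 0)
```

This is the model underlying Black–Scholes-style pricing, which is why “exchange rates look like Brownian motion” is an old result in finance rather than a new discovery.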

The key point of House’s piece was that there is an existing literature, and many of these econophysicists don’t feel they need to reference and read it, believing they can come up with things from scratch. This is a neat way of doing research without being encumbered by others’ “priors” – but when it comes to actually publishing results, claiming things as “truth” without reference to whether the literature already exists, or whether good reasons have already been given for these things, is pretty naff.

I would usually ignore this sort of stuff – except I constantly hear comments like this:

FFS.  There are a bunch of crazy smart people who have spent their lives studying these issues, and are in competition so would jump at the chance for a new model that outperformed!  But the odd random person with their “alternative accounting” knows the real truth and is being suppressed – conspiracy!!!!!

Trust me, if you have evidence of a forecasting model that outperforms Bayesian moving-average combinations of time series and structural models then don’t tell me – get hold of your local central bank. Given they have to “target their forecast” they are the people who really, really care about this stuff, and if it’s really improving policy they’ll be first in line to try to get you your Nobel Prize.
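To give a flavour of what such forecast combinations do, here is a toy inverse-MSE weighting – a deliberately simple stand-in for the Bayesian model-averaging idea, with invented forecasts and error histories:

```python
# Hypothetical inflation forecasts from three models, and each model's
# historical mean squared error (all numbers invented for illustration).
forecasts = {"time_series": 2.1, "structural": 2.6, "judgemental": 1.8}
past_mse = {"time_series": 0.30, "structural": 0.50, "judgemental": 0.90}

# Weight each model by the inverse of its past error, then normalise,
# so historically accurate models get more say in the combination.
raw = {m: 1 / past_mse[m] for m in forecasts}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

combined = sum(weights[m] * forecasts[m] for m in forecasts)
print(round(combined, 2))  # 2.2
```

A new model doesn’t replace this machinery; to be useful it just has to earn weight in the combination by forecasting better than the incumbents – which is exactly the competition described above.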

Update:  I wrote this post as soon as House’s post came out.  It led to some arguments on Twitter – so I’m putting this up a day early, and also clearing myself up a bit.  John is right when he says this:

Yeah, that call went too far by House – that was part of the reason I’d written in this post about how much I want them to contribute, as I didn’t want that to get confused in my blog. When House says “neural networks, agent-based modeling, path-dependent equilibria” haven’t produced much I felt a bit uncomfortable – as they are areas that can offer significant marginal gains, and in the case of ABM I see a lot of economists working there (even I’m trying to build up capability in that field).

But my guess is it came from frustration. I read it as the sort of thing I feel after getting another call from a physicist/engineer telling me an economic variable is JUST X, and that I’m stupid – and economists should feel bad. I work in a consultancy, god knows why these people contact me, but anyway. I say something like “interesting, what behaviour drives that, and what does that mean?” and they’ll tend to say “it is X, are you listening!!”. This is a philosophical gap pure and simple, and we need two-way communication – instead of insults – to bridge it.

We solve that, and physicists will bring a lot to economics – not as much as psychology or biology – but still a bunch.

One extra point – often on the same day I get told economics isn’t mathematical enough (engineering, physics, drunk people in bars) and then get told economics is too mathematical (politics, sociology, drunk people in bars).  Could these people argue with each other for a little bit, since they seem to have different definitions of what economics is and of the “right” amount of math (whatever the hell that means)?  Is there a way to get them in the same room?

On economics: Germs of choice http://www.tvhe.co.nz/2014/03/14/on-economics/ Thu, 13 Mar 2014 18:30:04 +0000

Recently Alex Coleman stated on twitter that he found economics ridiculous (in his defense, I believe he is specifically talking about macroeconomics – not the other 95% of economics that is not macroeconomics.  Also, he probably heard an economist on the radio – we always sound a bit ridiculous floating in the media).  That’s cool.  A lot of tweets were written by people, most of which I won’t bother replying to as they tend to be the hogwash people throw out when they know nothing about economics but just want to attack economists – it frustrates me, and I’d rather not be frustrated right now 😉 .

However, I read these sorts of threads as sometimes people make interesting points I had not heard, or had heard before but feel like I want to consider them more.  I found this comment by Danyl from Dim Post interesting in that way:

So what does that mean?  I’ll give it a go, and hopefully my discussion frames it in the same way he was considering it!

Germ theory = choice theory?

Guessing from posts in the past, Danyl views the “theory of choice” as a key weakness in what economists do.  Down the bottom of the post you can see me quickly saying something about choice, but it is a fair issue for us to all think about.

It is a common view among the public that economists rely on “homo economicus” with excessive assumptions of “rational choice” and “perfect information”.  Hell, you even see economists saying it to each other as an insult.  But it is not true – our views on choice are far more varied than that!

However, this is not to say there is nothing in what Danyl is saying – far from it.  I think it is one of those points people “whip out” to sound smart, but done thoughtfully it is a really smart point!  Economics, as I’ve described it in the past, is similar to most social and physical scientific disciplines in that it follows scientific realism.  In this way our characterisation of our “counterfactual world” is pretty important!

The push towards neuroeconomics and behavioural economics (which are significant parts of macroeconomics and finance now) can be seen as part of the way economists are trying to incorporate and understand those ideas.  Furthermore, in papers like MaCurdy et al (1990 – Assessing Empirical Approaches for Analyzing Taxes and Labor Supply) the assumption of ‘utility maximisation’, and its impact on the domain of possible results we could find from data, is criticised – and a lot of the literature on estimating effects has tried carefully to take this into account ever since.  Note:  An interesting paper I only read recently discussing part of MaCurdy et al regarding utility maximisation is this (REPEC) – it is an area where I think debate is not just still active, but still fascinating!

Now, the limitations of collecting data, and also of perceiving the actions (and their costs and benefits) of others, makes all of this very difficult – this is part of the reason why economists hold back from looking for general or universal laws, and tend to use “many models” in order to answer specific conditional questions.  Furthermore, when answering those questions economists try to “generalise” the choice requirements as much as possible – to know what “core assumptions” the result may rely upon – the simplistic examples, with restrictive choice conditions, are not chosen because they are how economists do things, they are chosen for the simplicity of exposition!

On the limits of intuition

But here is the thing, economists more than accept these shortcomings (in general) and should as a result be humble about their knowledge.  I also agree that economists can, at times, overuse appeal to authority.  However, they still do spend a long time researching, using data, and testing different types of hypotheses about what is going on.

The fact that our knowledge is of an inexact nature implies that following a principle such as “first do no harm”, as medicine does, is incredibly valuable.  This places limits on what we should expect the government to be able to come in and do – as the choice of government policy is a treatment.  Of course, there are specific ethical roles we may want government, as a representative of all of us, to fill – an economist can help work out the trade-offs and the costs and benefits both ex-ante and ex-post, but the discipline doesn’t tell us whether a policy is a good or bad idea.  Also, we can ask where the burden of proof lies, given some assumption about where we start!

In this way, economists’ key focus is trade-offs.

Furthermore, the fact macroeconomics is complicated and our knowledge is conditional in nature does not in turn imply that the opinions stated by experts (e.g. the RBNZ) are equivalent to the opinions stated by everyone else.  It is not black and white – a discipline doesn’t provide either perfect deductive knowledge or nothing.  I find it very strange in economics when the RBNZ says something and someone random will just say “na I don’t agree, that is just their opinion” – it is the opinion of a team of some of the smartest people in the country, using data and understanding built up by thousands of other very smart people.  I would trust that more than a random person.

Finally, Alex stated a concern about empiricism in his tweets – but the counter to empiricism is intuitionism, and economists have spent a long time asking whether “intuition” and/or “mathematical logic” could allow us to find definitive results about social organisation.  Unsurprisingly, they could not.  This implies we need empiricism to test competing hypotheses.

I still don’t trust economists

That’s fine – maybe you had a bad experience with an economist when you were young, maybe you’ve met the wrong economists, maybe economists’ methods make them seem unconcerned with ethical issues you care about.  It doesn’t matter; I am not trying to tell you to trust economists.

All I’m trying to do here is say “hold up, and let’s think about our concerns”.  The conclusions are:

  1. Our concerns about the assumptions of economists are valid ones – but can also be overblown.  These are the types of issues economists spend a lot of time thinking about when discussing a characterisation of reality.
  2. Our knowledge is conditional, and for specific questions – but this also implies that “folk economists’” knowledge is much less than they often believe.  Saying that economists can’t explain everything does not immediately mean people can say whatever they want!
There is no One Model to Rule Them All http://www.tvhe.co.nz/2014/03/06/there-is-no-one-model-to-rule-them-all/ Wed, 05 Mar 2014 20:14:29 +0000

I like the influence that physicists are having on economics. Moving towards agent-based modelling in some areas of the discipline is a great idea. But, in addition to lending their novel insights, some seem to enjoy piling on economics generally. Usually you have to take the good with the bad, but Mark Buchanan’s latest article is so shockingly bad that I can’t help picking on it.

Buchanan’s article is intended as a critique of DSGE models. He first alleges that they are poor forecast models and uses their absence from Wall St as evidence of that. He then claims that they are useless for telling stories about the economy because they’re unfalsifiable.

On the first point he is half right: DSGE models have poor forecast performance. What he fails to point out is that there are no other models that do better! DSGE models are the core of the Bank of England’s COMPASS forecast model, the Reserve Bank of New Zealand’s KITT forecast model, the European Central Bank’s Smets-Wouters model, and others. Sure, they struggle to accurately forecast more than one quarter ahead, but they are just as good as anything else out there.

So why aren’t they used on Wall St if they’re at the cutting edge of central bank forecasting? Largely because time series models provide equally good forecast performance and Wall St analysts have less need for the other benefits of a DSGE framework. Those benefits are its infinitely better performance in analysing competing policy options. The idea behind DSGE is to distil the economy down to ‘deep, structural’ factors; things that don’t change when the Government changes its mind. That is why they are useful for analysing what might happen if the Government did change its mind. Of course, that structural analysis takes a lot of extra effort and isn’t going to improve your forecasts but it is crucial for Government and central bank analysis. Different tasks require different tools.
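To give a sense of how simple a competitive time-series benchmark can be, here is a toy AR(1) forecaster fitted by ordinary least squares. The data are invented for illustration, and this is a sketch of the general idea rather than any bank’s or analyst’s actual model:

```python
# Toy sketch: an AR(1) benchmark, y_t = c + phi * y_{t-1} + e_t,
# fitted by ordinary least squares on invented quarterly data.

y = [2.0, 2.3, 2.1, 2.4, 2.2, 2.5, 2.3]  # e.g. quarterly GDP growth, %

x = y[:-1]   # lagged values (regressor)
z = y[1:]    # current values (dependent variable)
n = len(x)
mean_x, mean_z = sum(x) / n, sum(z) / n

# OLS estimates of the slope (phi) and intercept (c)
phi = (sum((a - mean_x) * (b - mean_z) for a, b in zip(x, z))
       / sum((a - mean_x) ** 2 for a in x))
c = mean_z - phi * mean_x

# One-quarter-ahead forecast from the latest observation
forecast = c + phi * y[-1]
print(round(forecast, 2))  # prints 2.29
```

A benchmark this crude often forecasts one quarter ahead about as well as far richer models, which is exactly why forecast accuracy alone is a poor test of a structural model – its value lies in the policy analysis the simple benchmark cannot do.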

Economists have spent a lot of time thinking about how to analyse policy questions and construct meaningful counterfactuals. The influence of physicists may well help us build better models but we shouldn’t be so bamboozled by their technical prowess that we throw out the expertise economists have spent over a century building.

The arguments about “macroeconomic methodology” http://www.tvhe.co.nz/2014/02/13/the-arguments-about-macroeconomic-methodology/ Wed, 12 Feb 2014 22:00:53 +0000

There has been a series of posts discussing a new book, “Big Ideas in Macroeconomics”.  Ryan Decker points out a good post by Stephen Williamson that has links to the other posts.  I haven’t read the book – in fact I haven’t even ordered it yet (but intend to) – but I don’t really intend to talk about the book itself, so I think I’ll be ok.  Instead, I am going to discuss the posts, as I’ve been reading them as they have come out.

The first post was over at Uneasy Money – a blog I really enjoy and recommend if you don’t already read it 🙂

Here the book was discussed, and although David Glasner was uncomfortable with elements of it, he found it interesting.  This is cool.  From his description of the book I immediately decided I wanted to buy it, for two reasons:

  1. David notes that the book is strongly tied to GE/representative agent theory, which is a standard starting point in macro.  I wanted to see how this was delivered “in words”.
  2. David notes that co-ordination failure concepts were generally put to the side.  I would like to see why, given I see the argument for activist monetary policy as a type of co-ordination failure argument.

So this was all gravy – at this point there was nothing about trying to undermine some mythical macroeconomic methodology.  Then John Quiggin came along.  I had a lot of issues with his post.

As a starting point, I get frustrated when people say “macroeconomics began with Keynes”.  Keynes was building on contemporary concepts (in a way he wanted to sell as a type of revolution – go marketing), and even a number of ideas that are seen as archetypically Keynesian were already being discussed at the time (this type of historical context is part of the reason why I like David Glasner’s blog so much).  Keynes tied together ideas about inflation and interest rates (coming from thinkers as diverse as Hume and Fisher) with the debate between Ricardo and Malthus regarding unemployment.  In the General Theory I got the impression that a) Keynes liked to badmouth contemporaries and b) he saw himself as extending Malthus’s argument.

Activist monetary policy as a concept existed before Keynes, and outside of economics.  Activist fiscal policy was more novel, but was already taking place in countries like Sweden before Keynes released the General Theory.  Now don’t get me wrong, Keynes was amazing and contributed a lot to the discipline – but treating him like some touchstone messiah that splits good economics from bad is a mixture of historical revisionism and disingenuousness.

Sidenote:  When I was 14 my brother gave me my second book on economics (the first being a brief loan of Das Kapital from my English teacher) – it was “Keynes for Beginners”.  I did my speech that year for school on Keynesian economics, and won, so yah.  The book’s conclusion was that we had forgotten Keynesian economics and it was “due for a comeback” in 1992.  Nowadays I’m glad I don’t have a copy of my old speech, and I realise that people simply use these “names” to create easy good-and-evil narratives to argue about.

It is in this environment that John writes about the book, and honestly as a criticism of “macroeconomics” he is well off point – many of the ideas he attacks as irrelevant are important for understanding the “macroeconomy” and should be part of a mainstream research program.

He has a comment below his post which I am a lot closer to agreeing with:

 I am interested in general equilibrium theory and think that, viewed with an appropriate scepticism, it yields some useful insights. I’ve even had a go at it myself

But the idea of using Walrasian GE to understand either the Great or Lesser Depression seems to me to be self-evidently silly.

Indeed, if we want to understand what happens with a particularly large shock, we may need to use a different set of tools.  In this context if he had said “I think this book has too little on analyzing what happens in the face of large economic shocks (the type where assumptions about log-linearization become too strong)” that would be fine – instead he attacks all of macroeconomics.

We are starting to get to something here.  Do we actually have different definitions of macroeconomics floating around?

Noah Smith then goes on to discuss how it appears the book is more about macroeconomic method than economic history/hypotheses.  I wouldn’t really find that terribly surprising in a book that is about explaining the methods macroeconomists use – however, I do see his point that macroeconomists in turn need to explain “why” they use them.  Again, we need to define what we mean by “macroeconomists” here, and things are left just vague enough for me to have little to say.

Stephen Williamson then comes out and is not happy with the negative comments, specifically from Quiggin and Smith.  He notes that even if we focus on just the crisis, the methods discussed in the book allow us to actually have testable hypotheses about the crisis, and to figure out what went wrong and how we can improve policy and institutions.  Quiggin’s comments degrading the use of tools such as “mechanism design” come off as incredibly inappropriate under Williamson’s definition of macroeconomics – as mechanism design is a central tool for understanding how we can improve policies and make institutions function better (or be more robust where appropriate).

Most importantly, macroeconomics is not a “finished field” with a set of “known answers for all states of the world” – we are researching and trying to create knowledge.

I agree with Williamson’s view.  A lot of the “criticism” of macroeconomics seems to stem from loose definitions and an annoyance that they couldn’t persuade policy makers during the GFC.  Your inability to persuade someone is not a reason to degrade a scientific research programme – instead you should look more carefully at why you can’t seem to persuade people.

What is macroeconomics?

There is a kicker here as well – no-one seems willing to define macroeconomics in and of itself.  It is the study of “aggregates” rather than individual markets, sure.  But we know there is a relation between aggregates and individual choices – and so we need to think through this issue in more detail before we can really define a scope for the discipline!

It turns out that different views about this relation, and about the scope of macroeconomics that comes out of it, lead to different views on what is “good macroeconomics”.

Macroeconomics, science, methodology – what the hell do those things have in common?

I’m glad you asked.  In macroeconomics, just like in other economic, social, and physical disciplines, we use the scientific method.  That’s nice.  But in terms of the relation between methods, and thereby what is “scientific” in macroeconomics there are two broad areas to think about:

  1. Reductionism:  Can we reduce macroeconomic phenomena (aggregates, and their trends and movements) into microeconomic arguments, given perfect data?
  2. How close can we get to “perfect data”?

In Kevin Hoover’s “Is Macroeconomics for Real?” he discusses the fact that reductionism of aggregates may not be possible (in fact there is a lot of literature to suggest this already, as I note in this document), and that the “synthetic” aggregates we do measure do not bear a relation to our perfect data. [I would note here that measurement issues, and the lack of unique identification of causal mechanisms, are issues that exist within microeconomics – and in fact in pretty much everything]

However, this is not a call to just compare aggregates to each other and leave it at that, without causal relationships!  Macroeconomic events are still the result of microeconomic choices; we just may not be able to identify them uniquely – but macroeconomic variables still supervene upon microeconomic ones.  We still need to understand the structure of causal relationships in order to understand POLICY – and this is why the Lucas Critique still holds firm.

Now, as I also note at the start of this document, representative agent models do not fully satisfy the Lucas Critique – in fact we can’t make anything that does “fully satisfy it” due to data limitations (although the types of assumptions we make play a role, and should be discussed).  In that context, understanding that our knowledge of effects is conditional and partial is important.  Attacking “macroeconomists” because they take this into account, rather than making sweeping generalisations and asserting universal laws, isn’t a good idea.

Conclusion

The Lucas Critique remains important as a call to understand policy by actually understanding cause and effect in some manner.  As Mas-Colell (1989) states with regard to GE and capital markets, the fact we can’t a priori come up with tendencies that will always hold is a call for modesty and the use of empirical evidence.  That is exactly what modern macroeconomists are trying to do – create conditional slices of knowledge.

If economists are answering the “wrong” questions, and statisticians are measuring the “wrong” things, then we should chat about that – and we should ask why incentives are aligned the way they are.  In fact, I am convinced that this is exactly what thinkers like Krugman and Smith are considering when they discuss these issues – and it is in this way that this comment on Smith’s post (which he tweeted approvingly) is interesting.  Update:  Kling seems to think it is the wrong methods as well as wrong questions – I am still not convinced.

However, acting like we have universal laws (which is how Quiggin’s expression of Keynes in his post comes across), and that people that disagree are idiots (which is a common theme from a number of bloggers), may get people attention in public.  But it is bad form for economists.

Disclaimer:  I actually think all the above authors would agree in large part with the sentiments I have expressed here about methodology – but they view the discipline as more partisan than I do, and perhaps also believe that their evidence is “self evidently” more persuasive than I do.  I also think that they all discuss interesting ideas, and I am in no way dismissive of their actual work – I am just disappointed at the rhetoric they are using to discuss the discipline, a negative rhetoric I think is undeserved.

Update:  Interesting comments from Chris House.
