Randomized control trials and economic models: friends or foes?

Randomized control trial (RCT) studies have received growing attention from policymakers over the last few decades. In addition, the RCT is one of the core experimental methodologies used by the recent Nobel Prize laureates in economics, Duflo, Kremer, and Banerjee.

Given the excitement around these methods, the University of Chicago recently ran its IGM Economic Experts Panel, asking economic experts whether “Randomized control trials are a valuable tool for making significant progress in poverty reduction”. The results of the poll are summarized in the graph below.

The chart above shows the distribution of respondents’ agreement. What struck me most about the results was Angus Deaton’s strong disagreement with the statement – especially given that he is an expert in the field.

Read more

Our competitive nature is not natural

The heart of economics is an understanding of human choice. Our theories are almost all constructed on a utilitarian model of preferences and choice so understanding how we can improve that model is crucial to progress. Plenty of work has been done by behavioural empiricists on the heuristics that guide our behaviour and the anomalies in our choices. Now a paper by Leibbrandt, Gneezy, and List demonstrates that preferences also change over time in a predictable way. Read more

Nobel 2013: Fama, Shiller, and Hansen

Yah, Nobel prize.  All guys that deserved it … I just wouldn’t have expected them to get it together.  To be honest, the reasoning makes sense though – they have all added significantly to the empirical analysis of asset prices, albeit in quite different ways 🙂

Still, don’t read me.  Read Cochrane (here, here, here, here, here).  And Marginal Revolution (here, here, here, here, here, here).

Also, I enjoyed this.  And this post on why the Chicago school gets so many Nobel laureates is a good counter to all the arbitrary bile that can be thrown around on the interwebs 🙂 .  I also enjoyed this post from Noah Smith.

I have a bias towards Shiller in all of this because of my interests.  He is a big proponent of trying to view economic phenomena through a lens of history dependence (with the regulatory difficulties that entails) and has talked about how exciting neuroeconomics is – I completely agree.  However, none of this has much to do with empirical finance in and of itself, and that is not my field.  While I think some of the work is pretty cool (and I remember really liking GMM a few years back), I have nothing to say.  Hence you should be going back and clicking those links to Cochrane and Marginal Revolution 😉

Quote of the day: Pinker on changes in social sciences

Via Noah Smith comes the following quote from the Pinker vs Wieseltier debate on science and the humanities (if you have a chance, I would suggest reading the debates themselves as well).

The era in which an essayist can get away with ex cathedra pronouncements on factual questions in social science is coming to an end.

Very good, and Pinker’s co-operative vision of science working with the humanities seems appropriate to me (where we are merely asking how to deal with certain propositions using the best tools available).  I think Pinker won this debate.  I am unsure why Wieseltier felt it necessary to take such an extreme position, though – I suspect he initially believed Pinker was trying to force through a view based on the superiority of scientific authority (one that Pinker rules out in his initial article!), when Pinker was really just suggesting the use of the scientific method (namely, introducing a degree of the positivist view of theory creation) given the improvements in data availability and usability we have had.

As XKCD says:

But even within Pinker’s reasonable claims there is one area where I would be a touch careful. Read more

Quote(s) of the day: Keuzenkamp on controls and data mining

As I indicated here, I am reading Probability, Econometrics, and Truth.  A nice outline of things – I’m enjoying it at present (I was 20% of the way through when I wrote this post over five weeks ago).

Two quotes I’d like to note down here: Read more

Lies, damn lies, and statistics

Last week David Grimmond wrote (here and here):

However, despite the great informational power of statistics, bear in mind that sample based statistics are still always measured with error.

How often do we hear news items that note something like: “according to the latest political poll, the support for the Haveigotadealforyou Party has increased from 9% to 9.5%” etc, but then just before closing the item they state that the survey has a 2% margin of error.

If you are awake to this point you suddenly realise that you have just been totally misled.

With an error margin of 2 percentage points, you cannot make any inference about anything within a 2 percentage point margin.

After discussing this point he states:

One can react to this article in (at least) two ways: one could become a bit more relaxed about the significance of changes reported in statistics, or one could seek improvements in the accuracy of statistical collection.

In what areas do you think the first option is appropriate, and in what ways would it be worthwhile to increase spending to improve accuracy?
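The poll example above is easy to check yourself. Here is a minimal sketch in Python using the standard textbook approximation for the 95% margin of error of a sample proportion, 1.96·√(p(1−p)/n). The sample size of 1,000 is my assumption – the article doesn’t give one:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll of n = 1,000 respondents (the article gives no sample size)
n = 1000
before, after = 0.09, 0.095

moe = margin_of_error(after, n)
print(f"95% margin of error: +/- {moe * 100:.1f} percentage points")  # about +/- 1.8

# The reported 0.5 point "increase" is smaller than the margin of error,
# so it cannot be distinguished from sampling noise.
print((after - before) < moe)  # True
```

With these numbers the margin of error is roughly ±1.8 percentage points, so a move from 9% to 9.5% tells us essentially nothing – exactly Grimmond’s point.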