As you saw from Shamubeel’s post this morning, there was a discussion on well-being and statistics to celebrate the International Year of Statistics – an event that Shamubeel spoke at. Donal summarised the event here.
It was good times and all – well-being is important, as is measurement. All the speeches were good: Phillip Walker drove home the importance of measuring well-being, Mai Chen added that we need to be more intelligent about how we consider (and measure) social capital and culture, Shamubeel pushed everyone to think past aggregates and consider data in relation to the choices of individuals, and Campbell Roberts indicated that the reporting of statistics, and the narrative around them, are incredibly important. Another key point Roberts made was that statistics offer a lens on reality, and in this way they are very useful.
However, we already have a summary from Donal and Shamubeel’s post on his speech. Given I was in the audience trying to eat all the food, Shamubeel told me I should post something – so I thought I would point out a couple of areas where I felt a touch nervous. This isn’t to criticise anyone – it was a great day with a lot of good points raised. I just felt I should add some detail on a couple of points that were left to the side during the day – perhaps because they were too obvious, or seen as inconsequential at the time.
Data and theory
There seemed to be a presumption that a) there is an objective social reality, and b) data can be used to independently represent this social reality. The first assumption I feel duty bound to accept – although I like it to be transparent. The second assumption goes far too far for me – doubting it is what makes me a “(social) scientific realist”, like most of the economics discipline.
We require theory, and a clear recognition of where the data “fails” relative to the “ideal” data set, in order to interpret data – let alone to decide what we can and should measure! In this regard, I felt there needed to be a little more said about the limitations of data – and the limitations that should impose on policy! There was discussion about how policy has a bias toward things that are quantified, but no-one mentioned why that bias exists – it exists because what we can quantify we (often) have a better understanding of, and as a result can make choices about. Yes, we have to be super duper careful because of unquantifiable, unexpected, or even unknowable consequences. Yes, things that are unmeasurable don’t “matter less”. But we do need to quantify things in some sense to discuss relative value!
I also worried that the opposite presumption was on show – listening to the questions, and even some of the presentations, there was a presumption that, if the data was being collected properly, it would simply confirm the individual’s priors.
An acceptance that neither data nor our priors/experience is all-encompassing is important. We require STRUCTURE, and a shared language so we can compare premises, to do this. An expectation that data alone can provide sufficient justification for policy is too much – and feeds into an overconfidence about what policy can do.
Statistics NZ’s Social Indicators
Stats NZ also used this opportunity to announce the new Social Indicators section of their website – with nifty infographics and data! Good stuff. The site brings together a lot of useful information, and is a good starting point for people wanting to look at data on social indicators in New Zealand. Stats NZ’s work to make all this stuff accessible and understandable for a general audience is brilliant!
However, there is one note I’d like to make about self-assessed data – just because it was an issue I did not see raised at the time. Self-assessment tends to give a biased picture of what is going on – it does not provide an “objective” or “comparable” assessment between individuals, it just tells us how someone in a given situation, with given preferences, will respond to a survey. Any link to actual social outcomes is assumed.
And here is the kicker: people who self-report as having too little money, or as having poor health outcomes, are also more likely to self-report as unhappy – even if their actual health outcomes etc. are the same. Now, if policy is interested in maximising well-being/happiness, we need to think this through a bit. Even if all people were the same, self-reported satisfaction regressed on self-reported outcomes would likely give a biased expression of the impact of actual outcomes on actual happiness. And when people are different (heterogeneous), these relationships become even more fraught in terms of the “marginal increase” in happiness – both through the measures of actual health outcomes and through the responsiveness of subjective surveys. This suggests we must be a bit cautious when we interpret nifty new interactive tools such as this one.
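To make the bias concrete, here is a toy simulation – my own construction, not anything presented at the event. It assumes a made-up person-specific “pessimism” term that shifts both the self-reported outcome and the self-reported happiness in the same direction; a regression on the self-reports then overstates the true effect of the actual outcome on happiness.

```python
# Toy sketch: how a shared reporting tendency biases self-assessed data.
# All numbers here are illustrative assumptions, not estimates.
import random

random.seed(1)

TRUE_SLOPE = 0.5  # assumed true effect of the actual outcome on happiness

reported_x, reported_y, actual_x, actual_y = [], [], [], []
for _ in range(20000):
    outcome = random.gauss(0, 1)                       # actual outcome (e.g. health)
    happiness = TRUE_SLOPE * outcome + random.gauss(0, 1)
    pessimism = random.gauss(0, 1)                     # person-specific reporting tendency
    reported_x.append(outcome - pessimism)             # self-reported outcome
    reported_y.append(happiness - pessimism)           # self-reported happiness
    actual_x.append(outcome)
    actual_y.append(happiness)

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

self_report_slope = slope(reported_x, reported_y)
true_data_slope = slope(actual_x, actual_y)
print(f"slope from self-reports: {self_report_slope:.2f}")  # ~0.75, overstated
print(f"slope from actual data:  {true_data_slope:.2f}")    # ~0.50
```

Because pessimists under-report both the outcome and their happiness, the two reported series move together for a reason that has nothing to do with the actual outcome – so the self-report regression attributes part of the reporting tendency to the outcome itself.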
This feeds right back to the overconfidence point – if we start to convince ourselves we are measuring happiness, we will pursue policies to “target” these measures. But if the functional relationship between the measures and actual happiness is imperfect and biased, these policies can lead to harm.
There is a deeper point about whether the government should be targeting “happiness” as an outcome – or simply trying to provide the situation where everyone has the capability to produce their own happiness. This is a very very important point when things are measured imperfectly – as they always are. But this is the sort of ground Shamubeel nicely covered off earlier 😉