I’m not going to come down the middle in this, I’m going to disagree with both of them. The focus on an individual decision regarding something that affects only the individual (superannuation) misses the point of common sense beliefs – and isn’t the best way for Scott to make his claim regarding the difference between “truth” and “plausibility”. Remember, beliefs include our beliefs about others – and how this corresponds to (and can be valued as) aggregate action.
I’ve written about common sense and economics before. To quote Wikipedia:
“Common sense is a basic ability to perceive, understand, and judge things which is shared by (“common to”) nearly all people, and can be reasonably expected of nearly all people without any need for debate”
As a result, common sense is a theory of belief formation – not just beliefs regarding your own actions, but beliefs regarding the actions of those around you.
If we just take “common sense” reasoning, we are accepting the current base of beliefs – a base founded on history-contingent outcomes, and on descriptive elements that involve biased categorisation. If we pin ourselves to an argument involving beliefs that are either “common sense” or “not common sense” then we end up with two divergent camps – one that thinks common sense is worthless, and one that thinks it is everything (note that psychology and philosophy have had these debates as well). I’ll quote my post for the rest of this:
Of course, these definitions are not actually mutually exclusive – and actual economic thought about common sense includes both of them. The clearest example of this is Rubin’s work on folk economics (REPEC).
However, when it comes to thinking about what economists do (rather than just their view on what common sense entails) the requirement to sit in one of these ‘common sense camps’ becomes even weaker. The exceedingly harsh conclusions of the two views above (common sense sucks, common sense rules) don’t actually follow from the premises involved – economics describes situations where common sense beliefs are right, and when they are wrong!
And this is the kicker. Economics is a social science – it is applying the scientific method to help describe and understand social phenomena. It is not about saying whether beliefs are right or wrong; it is about trying to create knowledge to help us form beliefs.
Now, given that the formation of beliefs is an important part of describing social phenomena (since choices are based in part on beliefs, and social phenomena are the result of choices), the issue of belief formation does appear in full force. However, this does not imply that we have to make an a priori choice to state that common sense is wrong or right – instead it suggests we need to think about how beliefs, and expectations, are formed and (where possible) use data to help us find what appears to be the most appropriate assumption.
This is why criticising economics as only the study of common sense, or stating that it is ignorant of common sense, is a weak criticism.
However, it is also important to note that stating that policy conclusions are ignorant of ‘common sense beliefs’ can actually be a more poignant criticism – as it indicates simply that we do not agree with the set of beliefs and/or value judgements involved in the policy conclusion.
Summary for this debate – the economic method allows us to discuss social outcomes and the existence of social beliefs in a descriptive way that helps to frame trade-offs. The choice of the “type” of beliefs used when forming policy can be seen as another form of value judgment.
In this way “assumptions” about beliefs matter – and have to be made in a transparent way (as they are). Instead of relying on “common sense” notions alone, which often reinforce bigotry, economists allow a great degree of respect for the individual. Economics is neither reducible to common sense nor independent of the plausibility of the assumptions used.
And before you say “ok ok, but the ‘core’ assumptions are common sense” (as Bryan does at the end of his post) I would note this argument is a touch ridiculous. Common sense and scientific consensus are in fact two different things – the core assumptions of a discipline have been determined by debate, evidence, and a history-contingent process within the discipline, not by the broad “common sense” of beliefs. They are beliefs formed by people within that field due to specialisation – a small subset of all people – whereas common sense beliefs are, by definition, shared by most!
Note: None of this said “what matters is prediction, not the realism of the assumptions” a la Friedman. That argument could be made as well when considering elements of macro policy – but it is generally one I’m not a fan of in the policy space due to the Lucas Critique.
The basic inequality that plagues economies the world over may have a simple explanation—at least, according to physicists who’ve turned to economics.
A simple explanation, aye – this sounds dodgy. Let’s assume that they are talking solely about income inequality, as it looks like the article doesn’t understand the importance of the difference between income and wealth inequality. Let’s see what the “simple explanation” is.
Pick a country, they claim, and you’ll find multitudes of people who earn next to nothing, a few who rake in plenty, and a distribution between the extremes that falls exponentially as income increases (see figure). That distribution applies to all but the very rich, they say, and it arises from an analogy to the concept of entropy, a measure of disorder in a physical system such as a gas. Just as a gas evolves to a state of maximum entropy, they argue, random churning in the economy ensures that the income distribution naturally tends to this inequitable form.
…
The reasoning is “not very close to the thinking of economists, but it’s pretty persuasive,” says Thomas Lux, an economist at the University of Kiel in Germany. But Frank Cowell of the London School of Economics and Political Science says “I’m extremely skeptical” that the argument provides any insight into the economy.
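As an aside, the “gas” mechanism the article describes is easy to reproduce – it is the standard kinetic-exchange toy model. A minimal sketch, with every parameter invented purely for illustration:

```python
import random

# Toy "kinetic exchange" model of the kind the article describes:
# N agents start with equal money; at each step a random pair pools
# their money and splits it at a random fraction. Total money is
# conserved, and the stationary distribution comes out roughly
# exponential (Boltzmann-Gibbs), like energies in an ideal gas.
# All parameters here are illustrative, not taken from the paper.

def kinetic_exchange(n_agents=1000, n_steps=200_000, seed=42):
    rng = random.Random(seed)
    money = [100.0] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        share = rng.random()
        money[i], money[j] = share * pot, (1 - share) * pot
    return money

money = kinetic_exchange()
mean = sum(money) / len(money)
# For an exponential distribution, the share below the mean is 1 - 1/e (about 63%).
below_mean = sum(m < mean for m in money) / len(money)
print(f"mean: {mean:.1f}, share below mean: {below_mean:.2f}")
```

Random pairwise exchange with a conserved total does push the distribution towards an exponential – which is exactly why the interesting questions are about mechanisms and value judgments, not the shape itself.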
Two things that make me furious with Thomas Lux here are:
On the second point, the original (read: first half of the 20th century) approach to income distributions was through a power law (the Pareto distribution) and a log-normal distribution. When it came to inequality indices, one of the key families used was the generalised entropy family, specifically the Theil index and the mean log deviation. The latter two were the only indices that met all the “axioms” economists put together for rating a distribution.
In modern times parametric forms have been studied to death, but with data increasingly bimodal (and these forms unimodal) there has been a strong push towards non-parametric analysis (without functional form, just using the microdata). Teasing these things out is difficult, and any introductory book would have given these points – hell, even read Cowell’s notes on measuring inequality here. If you read Italian, just go back and read the Pareto and Gini papers from the late 19th and early 20th century and you will see a discipline that knew the points being raised (about the distribution being exponential – entropy measures came in the 1960s) and was trying to tackle them with “value judgments” and “mechanisms”.
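For concreteness, the generalised entropy measures are simple to compute – a quick sketch with made-up incomes:

```python
import math

# Two generalised-entropy inequality measures: the Theil index (GE(1))
# and the mean log deviation (GE(0)). The incomes below are invented
# purely for illustration.

def theil_index(incomes):
    """Theil T: (1/n) * sum((y_i / mean) * ln(y_i / mean))."""
    mu = sum(incomes) / len(incomes)
    return sum((y / mu) * math.log(y / mu) for y in incomes) / len(incomes)

def mean_log_deviation(incomes):
    """MLD (Theil L): (1/n) * sum(ln(mean / y_i))."""
    mu = sum(incomes) / len(incomes)
    return sum(math.log(mu / y) for y in incomes) / len(incomes)

equal = [50_000] * 5
unequal = [10_000, 20_000, 30_000, 40_000, 150_000]

print(theil_index(equal))   # 0.0 under perfect equality
print(theil_index(unequal))
print(mean_log_deviation(unequal))
```

Both measures are zero under perfect equality and grow with dispersion – and, being additively decomposable, they let you split inequality into within-group and between-group components, which is a large part of why economists adopted them.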
The “natural tendency” that can exist for the dispersion of income is a centerpiece of economic analysis – but instead of just saying the dispersion happens, we try to work out, and test, different mechanisms. Hence this post by Shaz and Piketty’s text both talk about “natural” movements in income – pro-tip: what this “natural” tendency is due to is an important element of whether leaning against it, or with it, is a good thing, and whether it is policy relevant.
If anyone thinks this is just “nit picking” and doesn’t indicate that the article is practically useless, then you don’t understand the basics of economics and policy making – that sounds strong, but it is so far away from being useful for policy or adding to knowledge that I’m comfortable saying it. Note that I know practically nothing about physics.
That is cool, the article should be informing you of those things – but as it is written by someone with no understanding of these basic points they can’t inform you. The article says virtually nothing and insults economists (directly) showing a complete lack of engagement with the literature that economists get to read in an undergraduate policy economics course.
If you want to look at economic data and say things, that is cool, do so physicists – I actually hold hope that you dudes and dudettes will add heaps to our understanding. But before saying “look at this 100-level result, economists haven’t thought of it and are morons” actually read what economists have f’ing said in the past. Arrogant sons of guns – I am utterly furious, and this was de facto published by the AAAS as it is their mag. FFS!
Economists try to answer questions about “the allocation of resources given scarcity”. Every question is quite specific and different, so economics education involves learning a broad set of skills that allow us to tackle questions. To do this economists use models. Models embody a set of assumptions – assumptions that create an “artificial world” from which we can deduce conclusions. We then use data, robustness testing, and rhetorical debate to help us inductively infer conclusions, from these artificial worlds, about the real world question we are asking.
As a result, economics is a discipline that can discuss a wide range of social questions that range from deterministic statements, to prediction, to description, to exploration – but the answers provided are always conditional on the question asked, and the assumptions we have made for answering that specific question.
Further details can be found in these (in order):
If you know of any literature I should peek at to help inform myself on the status of the accumulation of knowledge and method in the discipline (as there is A LOT of improvement I can do in my understanding here) I would really appreciate it.
His focus is explicitly on what economists actually “do”, noting that economists tend to focus on small questions we can actually go some way to answering – and that economists, through economics, in no way try to derive sweeping universal rules for society. Furthermore, the focus of economists, and the assumptions economists make, are a product of their times and the questions that “matter”.
My favourite quote though:
I said earlier that modern economics treats people with respect; it does not regard them as mere dupes and foils of Business and Government.
Note, I largely agree with the essay, even though I personally fall in a different camp – I find myself having to look at economics issues in the way he stated his father did:
My father, Amiya Dasgupta, lectured on both advanced economic theory and the history of economic thought. In later life he often told me he wouldn’t have known how to proceed on one front without keeping a look-out on the other.
I don’t think the two styles are in conflict, they just offer different attributes to “answering a question” – in fact this makes the disciplines of economic history and economics complementary, even if the individuals involved have to specialise in one or the other!
Dasgupta’s essay nicely discusses why many of the compelling criticisms we hear currently about ‘economics’, that we heard in the 1990s (when he wrote the essay), and that we heard in the 1970s, are really the product of not understanding what economists do mixed with poor historic analysis.
He also makes the point that there are many things to criticise and argue about, but appeals to the “Masters” and “paradigm shifts” are often vacuous. This supports the point made by Chris House recently.
Note: Before those on the left criticise him out of hand, look at the highly interesting (left-leaning) concepts he writes about. Read the essay, and you’ll notice him talk about how very high inequality is related to lower economic activity, and how this is the result of stratification.
Before libertarians criticise him, recognise that his essay pushes us to think about individuals, incentives, and individual choice – he is railing against the lazy, and impersonal, way that many will clump things into “government”, “business”, “GDP”, etc.
Unlike politics, in economics we are willing to admit that these issues, and trade-offs, are hard – and that there is no “silver bullet” for improving welfare in society. Incremental change to go with incremental improvements in knowledge makes sense. To pretend we know more would be the height of arrogance, and I think Dasgupta’s essay indicates that economists realise this!
Update: CPW just sent me this, it is relevant. I am surprised to hear Noah Smith state that microeconomics is a dismissive term though, the one I always hear is “microeconomics is economics, macroeconomics is just applied micro”.
To anyone who has studied microeconomics, or applied microeconomics, to any level this wouldn’t be surprising. Furthermore, it would be seen as the common view of many economists. This may seem incredibly weird to non-economists – especially since many economists and non-economists share the view that there is an ‘objective reality’, and therefore this single reality seems like it should be described by one ‘super model’. But let me explain.
Let us start with the points that we cannot intuitively know/reason everything about a social system, and that data provides an imperfect lens on any ‘objective reality’ that may exist. In fact the constraint on data can be viewed even more harshly if we include the recognition that often data can only be used when we impose a priori theoretical structure in the first place!
Now this is fine, as when it comes to answering a question we can use the scientific method to give us a bit of a hand. The hypothetico-deductive method is the way we head in this case.
However, things are never quite that neat. The questions economists ask are often ‘ceteris paribus’ questions (if this one variable changes, what is the marginal impact on another variable holding all others unchanged). Because we do not have the observations to measure and test a ‘complete’ model (one we can indicate in theory) we suffer from the Duhem-Quine issue – as a result, we can always blame failure on an auxiliary assumption instead of the one we are testing! Add to this that the conjectures we make are often probabilistic, and actual rejection is incredibly difficult!
This forces economists to go back into theory and think about “using reason”. I strongly remember reading Mill talk about this – ‘deducing outcomes through hypothetical situations in your mind’ before moving to induction for the real world – but for the life of me I have not been able to find the quote.
In this context, a theoretical model is incredibly useful for helping us to tie down assumptions about a counterfactual world, simulate what would happen in that world, measure the assumptions and outcomes against data, and then update our beliefs about the relationship between X and Y. This is, at least as I interpret it, the credible worlds view of economic modelling (and it can be seen in some sense as “Bayesian”). We discussed this here with links to papers.
The importance of assumptions
So we have simulations/models, which are sets of assumptions, some of which are accepted and some of which are not but are used for simplification purposes. Cool.
We can then ask “how do these assumptions influence the question we are asking”. In some cases the simplifying assumptions are irrelevant for our SPECIFIC QUESTION in that simulation – that is cool, we can just roll forward and ask how our simulation compares to data (which is the closest thing we have to a lens on the objective reality we are chasing around).
If the simplifying assumptions do have an impact, we can try to simulate without those assumptions. If this cannot be linked to data (as the simplified-away elements are not measurable), we do it so we know where the bias in our description lies – so we can include that in our answer to the question!
Whether an assumption is defendable, reasonable, and/or accepted, will depend strongly on the question being asked. In this case, we need to make different sets of assumptions to answer different questions. We need to create different models, to simulate different elements, in order to answer different questions! On top of this, we can ask how vulnerable our answer is to slight changes in the assumption – this gives us a way to infer whether the result is robust in the real world!
Furthermore, even for a single question using many models can be of use if they enlighten different elements – this is a way of extending the Gibbard and Varian view of models as caricatures for the fact a given question has a multiplicity of smaller questions embedded in it. Essentially, we model the elements/questions where there may be a debate, and we don’t model the elements/questions that we/our audience already agrees upon.
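To make the “how vulnerable is our answer to the assumption” check concrete, here is a deliberately toy example – the model, the numbers, and the elasticity range are all invented:

```python
# A toy robustness check: suppose our "answer" is the predicted revenue
# change from a 10% tax cut, and the contested assumption is a demand
# elasticity. We sweep the elasticity over a range and ask whether the
# *sign* of the answer survives. All numbers are made up.

def revenue_change(tax_cut=0.10, elasticity=0.5, base_revenue=100.0):
    # The rate falls by tax_cut; the taxed base expands by elasticity * tax_cut.
    new_rate_factor = 1 - tax_cut
    new_base_factor = 1 + elasticity * tax_cut
    return base_revenue * (new_rate_factor * new_base_factor - 1)

for e in [0.2, 0.5, 1.0, 2.0, 5.0]:
    print(f"elasticity {e}: revenue change {revenue_change(elasticity=e):+.1f}")
```

In this toy world the sign of the answer flips once the assumed elasticity rises past roughly 1.1 – so the “result” is really a conditional statement about the assumption, which is exactly what should be reported.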
Conclusion
If we could ‘intuit’ everything (a priori knowledge) then we would only need one model. If data were a perfect representation of objective reality (both in scope and quality) then we would only need one model.
We don’t have these things. As a result, we need to use a mix of simulation/modelling (given a set of assumptions) and estimation in order to answer specific questions. A recognition that this is the process we have to use is the default among academic and policy economists already – but perhaps this is one of those ‘limits of knowledge’ issues that isn’t very clearly communicated at large 
Note 1: This is largely incompatible with Friedman’s instrumentalism – as I am stating that our view on the usefulness of models for answering a question relies in a large part on assumptions, when in his view it is the predictive power of the model that matters.
Now, if our question is solely predictive, not used for policy, and many ‘causal’ factors are immeasurable then instrumental models are legitimate in a different sense to the one I have described – and the ‘value’ of a model is separate from the realism of assumptions. But this is due to the weak power of our tacit assumptions, and the opportunity cost of time associated with trying to do something that would add more value – in the vast majority of circumstances this is not the case.
Note 2: I have been cheeky in this post, and have acted as if models=simulation. As discussed here, a more common view is that it is models vs simulation [quick note here, the author’s inference that economists are trying to find “universal laws” is a common misconception of what economists do – one this entire post is ruling out as a starting point
]
My personal view, stemming from the idea of credible worlds, is that all simulations are models, but not all models are simulations (as models can have other purposes – such as to communicate ideas). Not only is this unpopular, it is probably a touch imprecise (especially if we try to break down the purpose of models/simulations a bit further).
However, I think my distinction bears a closer resemblance to what economists are doing and why at a broad level – it is the way we create knowledge. Economies are not just complex adaptive systems in the sense they are currently modelled – they do have significant forward looking behaviour as well, which is currently difficult to incorporate in agent based models.
This also leaves the elephant in the room: social welfare. All model types, especially the more complex, face a gap between the simulatable ‘is’ and the relevant unobservable ‘ought’ of policy making. There is a highly nonlinear, unobserved, functional relationship between the two that we must assume – and this is a large part of economists’ caution regarding the use of ABMs!
The dream is that “recursive representative agent” models will become more heterogeneous, while agent based models will allow for more complicated behavioural rules that include forward looking expectations – hence moving towards each other. As long as they remain so different, using both to help understand what is going on is useful.
Let me give an example. An agent based approach may give us a great way to describe the history dependent process of growth along the single path we have experienced – but if we were to change policy, or to aim to forecast the future, these approaches act as a “black box”. We cannot infer actual causal effects. In this sense we would like to use “other models” which specialise in this – models that represent behaviour etc etc. Thinking “a modelling form” can dominate how we answer questions of the allocation of scarce resources (economics) doesn’t make sense to me – and I think the model vs simulation fight is actually a methodological misspecification!
“Is economics a respectful and useful reality-oriented discipline or just an intellectual game that economists play in their sandbox filled with toy models?”
Variants of these questions fly around all the time. Why are economists making unrealistic assumptions? Why won’t they just defer to “common sense” about the situation? Why are they making the ‘wrong’ choice in some trade-off between looking at the real world and narrative and/or mathematical beauty?
These questions sound appealing, but in many ways they are often misspecified – not pointless, but without enough content to actually be answerable. As Maki says:
“As soon as one looks more closely, what one starts seeing is fact and fiction, in a variety of combinatory incarnations. One also begins to appreciate both of them as necessary elements in a scientific study of the social world”
The book then heads through essays by a number of prominent writers, discussing the form of models, causal ordering, constructionism (and collective beliefs – I would also note a paper “Why I’m not a constructivist” by Blaug here), and the nature and incentives of economists. I haven’t read every essay in the book yet – but I’ve read most a while back and found them very useful.
Note: I promise more substantial posts from my end will return – but just not the number I was previously writing. I am in the throes of adjusting what I write about, and this has limited my posting somewhat! Luckily it appears a bunch of more interesting people have taken up the slack here – spectacular! By the way, if any other economists want a sounding board for their views – flick me an email and we’ll work something out.
Also you may have noticed that what I have been writing has been about why economics is useful and worth studying (here and here). This is true. Hopefully this provides some counterbalance to the poorly put together criticisms floating around about ‘studying economics’ which seem determined to make us all sound like archetypical monsters from some Grimm Fairy Tale. Saying “we shouldn’t look at trade-offs because then we lose our sense of community” sounds strangely like “we shouldn’t study the natural world or we will lose our sense of faith” don’t you think – and when all the ‘evidence’ is subject to an ignorance about the basic concept of revealed preferences I can’t help but feel that there are some people out there that were bullied by an economist at grad school, and have tarred us all with that brush 
Interdisciplinary, group, social science is going to be extremely powerful in the coming decades – let us not undermine it by introducing ill informed prejudices.
Note 2: Then again, it looks like arbitrary attacks on economics from within our own discipline are nearly as wrong-headed.
Seriously, attacking the U-shaped cost curve … it is a framework to allow us to empirically test actual industries FFS. When teaching it in first year you discuss with students the idea that you can have diminishing marginal product in factors and still have economies of scale.
The understanding and framework provides a base that allows us to actually test things – the theories ARE NOT about the way the world “ought” to work, it is a series of assumptions (a theory) that we can use along with data to perform inductive inference about whether the logical results of the theory hold in the ‘real world’.
@pdmsero @Spencer_DavidA When it is actually a recognition we need theory/sets of assumptions before we can interpret data.
— TVHE (@TVHE) November 10, 2013
We are creating a series of credible worlds, and economics teaching shows us how to do this for a given question – it isn’t a f’ing set of universal rules regarding what we should do passed down from on high, and it is the people who want a discipline that does that who are overconfident in their own knowledge and are frankly dangerous.
GDP measures the value produced within a geographic entity.
Even though it seems damaging property could increase GDP, this is unlikely – and when it does, it does not do so in a way that increases wellbeing. In truth, it would be better if we didn’t have to repair what we have and could just add to it.
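A toy bit of accounting (with invented numbers) shows why:

```python
# Toy national accounts, all numbers invented. GDP counts the flow of
# new production, so repair work after a disaster shows up as output,
# but the destroyed capital never enters GDP as a minus - it only
# shows up in the wealth stock.

other_output = 100.0   # everything else produced this year
repairs = 20.0         # output used for rebuilding after the disaster

# Case 1: resources fully employed - repair work crowds out other output.
gdp_case1 = (other_output - repairs) + repairs   # unchanged
# Case 2: repairs done by otherwise-idle resources - measured GDP rises.
gdp_case2 = other_output + repairs

# In both cases the repairs only restore what we already had; output
# available for genuinely new consumption or investment is lower or,
# at best, unchanged despite the higher measured GDP.
new_uses_case1 = gdp_case1 - repairs
new_uses_case2 = gdp_case2 - repairs

print(gdp_case1, gdp_case2, new_uses_case1, new_uses_case2)
```

Measured GDP can rise in the idle-resources case, but nothing new has been added – we have merely got back to where we started.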
Although this is not possible in the case of natural disasters, it is possible when we can prevent destructive events from happening.
The moral of the story is: If it ain’t broke, don’t break it.
I agree with these in part, especially when it comes to macroeconomics (whether this is appropriate depends on your view on the nature of how we should treat macroeconomic aggregates). But as with all things it is a matter of balance.
To quote myself (as I am a touch short of time to find new words):
Even given this, in truth macroeconomics still needs to be trying to understand the behaviour and structure behind any relationships they observe (or at least the behaviour and structure they are assuming when coming up with policy advice) – and this corresponds to the central research program of macroeconomics as it stands. Many of us – me included in my weak moments
– keep trying to find relationships that seem either theory or history invariant, where we can just either use the data or the concept and say “this is what will happen”. While this is a useful heuristic, this isn’t what these macroeconomic scientists are (or should be) relying on!
This is especially important for the social sciences, as compared to the physical sciences, given that our “target” is unobservable and its relation to measurable targets can only be worked out by (implicit or explicit) reliance on some model. Transparency about our micro-macro relations which define not just the way we can predict a measurable target, but also its relation to our true unobservable target, captures an important element of why some form of “microfoundations” are very important.
A team of physicists can get us to the moon, but can’t tell us whether we should go. Economics, and social sciences, are not just constrained because they can’t prevent certain social occurrences (recessions) – but because even if they could, its relation to what we value and whether its something society should do is very unclear. This is why economists get obsessed with very specific questions, as the nature of value forces us to look at things in very specific, and conditional, ways!
In terms of monetary policy (which is different from most other economics), market monetarists would likely go as far as saying we can avoid (demand side) recessions, but the concern of economists about the related policy actions, and the inaction of analysts and politicians, is what ensures that we don’t.
Update: One thing that has occurred to me following the above Marxist piece, and this conversation between economists:
@TVHE I hope so. Agent based models seem to be pretty good at approximating the stylised facts. Is the era of analytical solutions over?
— JamesZ (@jzuccollo) October 23, 2013
Is that the question of “appropriate microfoundations” makes close to zero sense unless we can figure out what question we are asking, and trying to answer, in the first place. I suspect that much of the disagreement stems from the suspected scope and purpose of given questions in the social sciences! We touched on this idea above by stating that economists are trying to answer very specific, conditional, statements – as Arrow points out here.
The era in which an essayist can get away with ex cathedra pronouncements on factual questions in social science is coming to an end.
Very good, and Pinker’s co-operative version of science with the humanities seems appropriate to me (where instead we are merely asking about how to deal with certain propositions and using the best tools available). I think Pinker won this debate, I am unsure why Wieseltier felt it necessary to take such an extreme position though – I think he initially believed Pinker was trying to force through a view based on the superiority of scientific authority (one that Pinker rules out in his initial article!), when he was really just suggesting the use of the scientific method (namely introducing a degree of the positivist view of theory creation) given the improvements in data availability and usability we have had.
As XKCD says:

But even within Pinker’s reasonable claims there is one area where I would be a touch careful – a direction I was hoping the debate would actually go in! I would just caution against being too confident about basing beliefs on ‘empirical fact’. We should definitely use the information and update our beliefs, but the Duhem-Quine thesis is even more binding in the social sciences than in the physical sciences – due to the lack of natural experiments and the more complicated causal chains involved. [Note: I would also have enjoyed a free will vs determinism debate]
Language and rhetoric allow us to give this context, and to give alternate hypotheses and elements of heterogeneity in society a fair go – even looking straight at empirical data, their use lets us know what we can’t measure, helps establish a limit to the use of data, and helps us ‘pick’ what we should be trying to measure! We should definitely make use of empirical data, and use it to establish underlying premises – but when it comes to writing an essay or op-ed, the premise established from data could conceivably be so far in the background of the argument (due to the conditional nature of its use) that essayists may appear to be making ex cathedra pronouncements on issues that ‘on the surface’ appear factually false, but are actually appropriate.
Just to take the example in the piece: the 30-60% of Americans saying they take the Bible literally (our ‘fact’) just tells us that this is what they reported to someone – it is not a revealed preference. For that we need to see actual behaviour when making choices. We could argue that they were saying this “to sound good to the interviewer” – even unconsciously. Given this, the true number is well lower, and our view that literal views in religion are not widespread survives as a premise for whatever claim we are making. In this way, the writer really does have to appeal to authority – they do not have the space to write all this out, but they have the argument in their back pocket. Good old auxiliary hypotheses!
I’d also note I really don’t know terribly much about these things, and it would make a lot more sense to find someone who works in an interdisciplinary field which already does this. Someone who has a lot of experience with empirical data and models, but who also writes about language and works in a history-focused field. Someone with an economic history background. The answer is pretty clear for the economists out there – Deirdre McCloskey.
We need good time series data from developing countries to see whether the distributional impact is bigger there than what we find for Australia. Until then, the analysis here seems timely and relevant, not just for Australia, but for all resource-rich developing countries as the price volatility experienced by the former since the late 19th century was greater than that for the average commodity-exporting low-income country.
The distributional impact of commodity-price shocks in Australia (Canada and New Zealand) should yield important lessons for primary producers from the developmental south.
True – the idea that taxation should be more progressive the more dispersed income and wealth are is an old and widely accepted one. And this gives us another way to conceptualise it, with a relevant shock for the NZ and Australian context. However, a couple of things to keep in mind when thinking about these issues are: