I was brought up in the Soviet Union, and so this is a day that was always seen as very important – in fact it is a national holiday.
International Women’s Day was established at a time when women were treated as second-class citizens in society – either given legal status as a dependent, or potentially treated as property in marriage.
The original idea behind the day was that women should be treated equally by the law and by society, and afforded the same opportunities as men – which includes things such as equal pay.
This is not to say that the average woman should have the same outcomes as the average man – but that a woman who is otherwise the same as a man, in terms of her abilities and desires, should have the opportunity to achieve the same things. This opportunity is, furthermore, not just about getting the same pay when in a job – but having the same chance of getting the job in the first place!
So what does the data tell us about that? Mary Jo Vergara at Kiwibank has a great post on this. Her conclusion is that female economic and social status has genuinely improved over the past 50 years – but when it comes to the labour market, and the predominance of women still being second earners, there is a gap.
This raises the question: are these differences due to systematic biases (people in society not wanting to hire women) or due to women not applying for roles and not putting themselves forward? This is a tough question, but in my own life there is one thing I see as a proactive solution to both – finding women in my life and career who can inspire me to do more.
For this reason, the actions of the former World Bank Chief Economist in standing down over research aren’t important just because they were morally right – they also give other women like myself confidence to stand up in their jobs when it is needed.
Chatting with Matty on the 8th about the economic status of women in legislation through time, and the pressure WWII put on this (and on racist legislation), really reinforced to me how important experience with work – and having people similar to you doing things, so that you feel confident to do them too – is to people.
Is the former BOE governor and academic icon correct, or is this an unfair critique of the mainstream? As a summary, I have two issues with his argument:
Let me explain.
In Professor King’s view:
“Escaping from a low-growth trap sprung by radical uncertainty isn’t like climbing out of a Keynesian downturn, with temporary monetary or fiscal stimulus restoring demand to its trend path. It requires instead a reallocation of resources from one component of demand to another, from one economic sector to another, and from one company to another.”
https://www.bloomberg.com/opinion/articles/2019-10-27/economics-revolution-needed-to-fix-great-stagnation
This is quite a confusing opinion, as King is either saying nothing of consequence (e.g. that a reallocation will occur in the future), or that “people won’t reallocate unless we make them”, which is a pretty awkward statement.
I agree with Prof. King that some economic questions and stylised facts need to be changed post-GFC. But before moving into the further argument, I would like to discuss the possible causes of the Great Depression first – as King’s argument is based on a comparison to that time period.
There were lots of things happening at once during that economic downturn between 1929 and 1939 – some of them demand shocks, some of them supply shocks, some of them were technology, some were policy, some were solely about coordination.
During the Great Depression the prevailing view was that there was a “secular stagnation” – growth was lower because the easy gains had been used up.
Keynes stated that there could be a coordination issue with a lack of demand – and although active fiscal policy appeared to help, it didn’t get the economy back to trend, so it was seen as a bit of both. Then World War II happened, and output surged.
Seeing that, economists got really confused and started investigating – although the idea of a demand/investment shortfall, as against secular stagnation, remained. Friedman then came out and said it was all about monetary policy. Although, I don’t think this is as major a departure as some people make out. Instead, it is the change in tone from “supply” to “demand” in how people think about the relative shocks.
Let’s look at all the supply shocks:

1) The Dust Bowl droughts in the US destroying food-producing capacity.
2) A breakdown in lending due to bank runs.
3) The breakdown of the gold standard and a lift in uncertainty.
4) International trade retaliation and the rise of tariffs.
All these things reduced the capacity to produce of the US and global economies. But falling prices, and the fact that output could be ramped up A LOT during the war, tell us that there was a demand-based coordination failure.
The “coordination failure” idea captures a number of the same issues that were raised in debate then as essentially structural (debt deflation, weak expectations) – but Friedman’s innovation was to cut through and note that anchoring macro-variables could help to solve these very issues, and that was the role of monetary policy (eg NGDP targeting).
We can see how an understanding of the Great Depression – and the debates they had at the time – gives quite a perspective on all the things people talk about now.
Slowing population growth, a slowdown in “technological innovation” as the “easy gains” were exhausted, slowing convergence between countries. All these things explain why the “trend” in output growth may have slowed, and if demand didn’t immediately slow this would have shown up as above-trend output followed by a slump – however, it should also have shown up in rising factor prices.
In the Great Depression, unemployment was still very high when economists were saying it was “solved”. Whereas now the unemployment rate has finally been pushed down – as a result, if activity is still below trend, the supply hypothesis appears more reasonable than it was then.
But we also then have to ask whether we are measuring capacity in the labour market properly (e.g. do a bunch of countries have low labour force participation rates?). It doesn’t appear that is the issue (unlike 3 years ago).
The Great Depression didn’t require a “forced reallocation”, it just required that people expected sufficient growth in order to be willing to invest – at a level where these expectations were self-reinforcing at the aggregate level.
Imagine the government says: “I know that you should invest in perfume bottles, and if you don’t start doing it I am going to make you.”
Does the economy need a reallocation? It does! Because the economy ALWAYS needs reallocations – that is what markets and relative prices do.
Saying that there needs to be a reallocation is an empty statement, and then if we make it non-empty (eg saying there should be more investment in perfume bottles) we need to ask why private agents are not doing it.
The government has neither the information nor the right to turn around and state that people should change what they are doing because of technology (as it is not an externality); instead it SHOULD be a question about how policy can help shape expectations of “average demand” to ensure that private agents don’t face undue uncertainty.
Prof. King is saying “it isn’t a monetary policy issue, as it needs a reallocation”, but it IS a monetary policy issue.
The private sector will reallocate if they face the incentive to do so. And if their expectations were based on a consistent rate of growth in nominal GDP, then any deviation from that average provides a signal to reallocate towards or away from that activity.
Instead, because they expect average nominal growth to be low, they fall into a self-reinforcing cycle of not undertaking the high return “reallocation” investments.
Also there is another issue. If there is a required “reallocation” there HAS to be possible high return activities – this is a bit different from the “secular stagnation” and “low natural interest rates” that he is pointing to as evidence of his own point.
If there is necessary reallocation, where is the reallocation to? If private agents can’t observe the answer then it is unclear how government would be able to.
So surely the solution must be appropriate monetary conditions – not active reallocation of “something”.
250 years earlier, on August 10 1519, the first circumnavigation of the world began when Magellan and 270 others left Seville with five ships (the Victoria, Trinidad, Santiago, Concepcion, and San Antonio). The fleet departed the coast of Spain on September 20, and three years later 18 survivors led by Elcano returned in the sole remaining ship, the Victoria. Elcano is one of many people you could imagine being more famous – being the first person to sail around the world is no mean achievement. Magellan, of course, was a victim of that well-known Sicilian adage, “Never get involved in a land war in Asia,” if being killed on an island in the Philippines counts.
Elcano died in an attempt to be the first person to circumnavigate twice, an honour that belongs to von Aachen, a member of the same voyage who was captured by the Portuguese near the Moluccas and repatriated to Europe as a prisoner several years later. In fact, it was 60 years before there was another successful circumnavigation of the world, in the sense that it started and ended on the same ship, led by the same captain, Sir Francis Drake.
Twenty-five trips around the world in 250 years is slow progress. Long sea trips were expensive, difficult and dangerous. In many ways it is a similar explanation for why so few manned trips to the moon have occurred since the first landing fifty years ago. Some things just don’t seem necessary to do again and again until technology improves and substantially reduces the cost.
The reasons it was so expensive and dangerous are wonderfully described in Alfred Crosby’s brilliant book “Ecological Imperialism: The Biological Expansion of Europe, 900–1900” (1986). This book was the original “Guns, Germs, and Steel”, but better; and since Crosby spent some time in New Zealand there is even a chapter dedicated to the colonial experience of this country.
One of the real gems of the book is its explanation of the difficulties European sailors faced crossing the Atlantic and the Pacific oceans in the face of prevailing winds that blew against them. I never understood the primary technological reason for pirates in the Caribbean until reading this chapter: Spanish galleons had to sail north past Cuba to find the winds that would enable them to sail home. Nor did I understand the reasons for the rise and decline of Dunedin and Hobart.
Any visitors to Dunedin and Hobart will notice a lot of similarities: splendid harbours, beautiful old stone buildings, especially in the warehouse district; magnificent old residential houses; and some of the best students and university staff in either country.
Oh, yes, both have fairly stagnant economies, which explains why mutton-headed developers didn’t spend the 1960s, 1970s, and 1980s replacing the wonderful stone buildings with the architectural marvels that we celebrate in other New Zealand cities with such awe and wonder.
One of the reasons for the decline is the changing importance of wind power. In the 1860s and 1870s, ships from Britain sailed round South Africa and then south of Australia to take advantage of the “roaring-forties” (the winds that still plague Wellington today). Hobart and Dunedin were the first ports of call after a long voyage, and developed vibrant merchant communities.
In Dunedin you can see evidence of this if you visit Olveston House, owned by a prosperous merchant, the Bell Tea company building, or some of the buildings owned by the formerly sizeable Lebanese community. But the steamship, the Suez Canal, and later the Panama Canal put an end to all of that. Auckland was now closer to the rest of the world and, given some climatic advantages, it rapidly developed into the preeminent merchant town.
These advantages have not ceased and in fact Auckland’s advantages in the wholesaling sector are still increasing. If you delve into the census (no, not that one, the last accurate one) it shows that between 1996 and 2013 Auckland gained 4300 new jobs in the wholesale sector, whereas employment in this sector in the rest of the country declined by 1600.
Economic geography is a fascinating subject. It is the perfect mix of history, math, geography and economics, with some all-purpose complexity theory thrown in. One of its findings is that the world can become more uniform on a wide scale as transport costs fall, but it also can become more locally concentrated.
Cheap international travel means technology, political power and disease become global, with all the benefits and troubles that entails, but at the same time economic activity becomes concentrated into a smaller number of increasingly large cities. And that, in a nutshell, is a history of New Zealand in the 250 years since Cook sailed into Tūranganui-ā-Kiwa.
Let’s have a look, shall we?
Interesting! But something seems a bit off – surely this can’t be true!? Let us investigate.
It is from this book. The data sources can be found here. Cheers to Shamubeel Eaqub for the heads up.
So we have the following sources tied together:
My studies have gone back to 1988 (when the QEX conveniently started), but I have noticed the difference between the PWI and QEX in the past and wanted a correspondence. Two points come up here: the ERN was published until 1986Q1 (with index figures on the Stats site) and the QES/QEX started in 1988Q1. Why was the PWI used for such a large section in the middle? Also note the published PWI on the site is an index as well. My biggest query for this period has to be: why use the PWI and not the Employment Survey that the QES/QEX replaced?
Looking at the data, it is the middle series that looks most out of line with other data. Although the 1970s data is quite interesting in and of itself, it looks like it might be very methodologically different, being collected by the Department of Labour and not including a sex split until 1973. So let’s look at the data history on the Stats NZ site. Although the wage data is on the site under Infoshare, there is nothing on methodology – so a trip to the National Library is in order. That can be a future post.
Hey, we can tell there are issues with the data here – but is there some way we can create our own wage series? Assume that the levels are measured differently, but that the “growth rates” would be the same irrespective of the measure used. Taking that idea, we can tie the data together.
First off I am going to stick to the series tied together here, but instead of using the periods of time mentioned I will kick the series off as soon as it is released. I am also going to include the Employment Survey (ES) to give us some idea of how much using the PWI “matters” for the result.
So to understand the data I have taken each index, and once the index starts I set it to the value of the prior index (except for the ES and QES which are both set to match the PWI when they start). What do we get?
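The splicing described above can be sketched in a few lines: chain each new survey onto the prior index by rescaling it so its first value matches, which preserves the new series’ growth rates while inheriting the old series’ level. The numbers below are purely illustrative, not the actual ERN/QEX figures.

```python
def splice(base, new):
    # Rescale `new` so its first value equals the last value of `base`,
    # then append the rest -- levels differ across surveys, but the
    # spliced series keeps each survey's growth rates.
    scale = base[-1] / new[0]
    return base + [v * scale for v in new[1:]]

# Illustrative index levels only (not actual Stats NZ data).
ern = [100.0, 102.0, 104.5]   # older series, its own base
qex = [80.0, 82.4, 85.0]      # newer series, a different base
wage = splice(ern, qex)       # growth after the join matches qex's growth
```

The same function can be applied repeatedly (ERN, then ES, then QEX) to build the full chained series discussed in the post.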
Right. The PWI drives the result. Not just that, but both the ERN and ES behaved significantly differently – in real level terms – from the PWI. And in an issue that annoyed a lot of economists, the switch from the ES to the QEX saw measured average wages change – and there was no overlap between the two series to actually consider what was happening!
What would a wage index look like if it followed the ERN, then switched to the ES, then switched to the QEX (assuming the real wage was flat rather than collapsing in a single quarter)? It would look like the following:
Why ignore the PWI? What was it measuring? From discussions I’ve been told it is a measure of labour cost, like the labour cost index. It is supposed to adjust for compositional changes in the types of jobs and work, and if it is like the LCI it would have tried to remove productivity improvements. What were the other surveys measuring? The actual hourly wage paid by firms.
Why would the real PWI have been falling? If goods and services prices are rising more quickly than the product wage it implies that the labour share is falling. In the 1970s the labour share of income had shot up, and this declined back from the late-1970s through the 1980s. As a result, we would expect a labour cost index to rise more slowly than consumer prices during such a period – and if there is any technological or productivity growth (which there was) that can lead to a situation where real wages are rising while the product wage, or wage paid per unit of output produced, is falling!
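A minimal numeric sketch of how this can happen, using made-up growth rates (not actual NZ figures) and treating consumer and output prices as a single index for simplicity:

```python
# Made-up annual growth rates, purely illustrative.
wage_growth = 0.05          # nominal wages
price_growth = 0.04         # prices (one index for CPI and output deflator here)
productivity_growth = 0.02  # output per hour worked

# Real (consumption) wage: what the wage actually buys.
real_wage_growth = (1 + wage_growth) / (1 + price_growth) - 1

# Labour share = (W*L)/(P*Y) = (W/P) / (Y/L): it rises with real wages
# but falls with productivity, so it can fall while real wages rise.
labour_share_growth = (1 + wage_growth) / ((1 + price_growth) * (1 + productivity_growth)) - 1
```

With these numbers real_wage_growth is positive while labour_share_growth is negative: real wages rise even as the wage paid per unit of output produced falls.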
Real wages – in terms of people’s ability to buy goods and services from the wage they are paid for a job – are higher than they were in the 1970s.
This real wage series fits nicely with the labour income share (LIS) information that exists for New Zealand. Something was fishy about the 1970s data: the labour income share and real wage data rose so quickly in the early 1970s that I am not sure they are even believable! Even if they were, the chaos in the late 1970s and early 1980s that then led to reforms shows that whatever was happening was not sustainable.
Add to this concerns about the unemployment data (also shown in the slides). The unemployment data comes from a period when significant groups were not counted as unemployed (such as a lot of women who would have been willing to work), and government job schemes were in place (not just state sector employment, which was large, but direct job schemes for those who otherwise wouldn’t be working) which may have given the government a mechanism to ensure that unemployed people wouldn’t show up in the statistics – prior to the reforms New Zealand wasn’t short on corruption and the manipulation of data. We may believe such schemes serve a purpose – but they do change your measure of unemployment!
So what is it: bad data, or was it truly a mad economic time? Would it upset you if I said it was a bit of both? These are issues I will write about in the future – I just wanted to put the idea in your head as a starting point.
Then 10 years ago today I tried to provide some predictions. The terms of trade fell a little more than I expected (to their 2005 levels rather than to their 2007 level), but otherwise they weren’t that bad – credit rationing was predominantly in the construction sector, mortgage rates fell, and in NZ the crisis was nothing like the Great Depression. But this:
As long as the information transfer between market participants begins to improve again this crisis will be a historical point of interest in a year’s time – rather than the beginning of the end.
Glad I conditioned it on the idea that there would be a recognition of loss between debtors and creditors – because once that didn’t happen in Europe the crisis just kept on trucking. With everything calming down by mid-2009 the world was recovering. Then Greece in May 2010. Then my goodness, just look at this cluster. Finally, in 2012 there was a recognition of the need for a lender of last resort in Europe.
If you want a retrospective I did one back in 2014 
The review doesn’t break any new ground but it is eloquent and engaging. Her central themes are:
A fiscal rule is simply a set of objectives that guide and constrain the Government as it makes policy. The rule usually comprises targets for debt and the deficit, with many variations in the details. Rules were introduced to the UK in 1997 by the then-Chancellor, Gordon Brown. Since then they have had a rocky history, as the chart shows:
The first rule required the current budget to balance over the economic cycle and debt to remain below 40% of GDP. It was considered close to optimal because it excluded investment expenditure, which usually requires borrowing, and was measured over a cycle, which allows for counter-cyclical fiscal policy. Unfortunately, it turned out to be a perfect illustration of the trade-off between optimality and enforceability. The Government gamed the rules by re-classifying some spending as investment and re-dating the economic cycle to allow themselves the maximum amount of borrowing. The result was rising net debt even as the economy experienced a long period of strong growth.
Lesson 1: Complex rules need genuinely independent monitoring and enforcement.
When the financial crisis hit in 2007 debt rocketed through the 40% boundary and the rules were abandoned. The new Government set itself a new rule in 2010 but poor economic performance led to that being sidelined within two years. At the time the rule was created growth was picking up and most people thought that a recovery was imminent; that turned out to be a mirage. Unfortunately, the commitment to reduce debt as a percentage of GDP by 2015-16 relied on strong growth and it quickly became apparent that the goal would not be met.
Lesson 2: Durable rules must be resilient to changing economic conditions.
Both of the broken rules had two parts: a rolling deficit target and a fixed debt target. In both cases it is the debt target that was broken because of the change in economic circumstances. Intuitively, that makes sense. The rolling deficit target sets the trajectory of the public finances and that can be stuck to, irrespective of the starting point. But the debt target is a fixed end point and, when economic conditions shift, it can quickly fall out of reach. Consequently, it is usually the fixed, debt targets that turn out to be the most fragile element of fiscal rules.
It is worth mentioning that some countries, such as New Zealand, have long-run debt targets with no fixed date by which they must be achieved. The problem of fragility doesn’t arise because the government’s operational target remains the deficit. That deficit target is simply set such that the debt target will eventually be met.
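The arithmetic behind that kind of rule can be sketched with a toy iteration (the numbers are illustrative, not any country’s actual targets): if the overall deficit is held at a fixed share d of GDP while nominal GDP grows at rate g, the debt-to-GDP ratio converges to d/g from any starting point, so the deficit target pins down the long-run debt ratio.

```python
def debt_path(b0, deficit, growth, years):
    # Debt-to-GDP ratio when the overall deficit is a fixed share of GDP
    # and nominal GDP grows at `growth` each year: next year's debt is
    # (debt + deficit) relative to a GDP that is (1 + growth) larger.
    b = b0
    path = []
    for _ in range(years):
        b = (b + deficit) / (1 + growth)
        path.append(b)
    return path

# Illustrative: a 2%-of-GDP deficit with 4% nominal growth drifts
# towards a 50% debt ratio, whether debt starts at 20% or 90% of GDP.
low_start = debt_path(0.20, 0.02, 0.04, 300)
high_start = debt_path(0.90, 0.02, 0.04, 300)
```

The fixed point solves b = (b + d)/(1 + g), i.e. b = d/g, which is why the operational deficit target is enough to deliver the debt target eventually.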
Lesson 3: Fiscal rules should have operational deficit targets, not debt targets.
I’m not going to come down the middle in this; I’m going to disagree with both of them. The focus on an individual decision regarding something that affects only the individual (superannuation) misses the point of common sense beliefs – and isn’t the best way for Scott to make his claim regarding the difference between “truth” and “plausibility”. Remember, beliefs include our beliefs about others – and how this corresponds to (and can be valued as) aggregate action.
I’ve written about common sense and economics before. To quote Wikipedia:
“Common sense is a basic ability to perceive, understand, and judge things which is shared by (“common to”) nearly all people, and can be reasonably expected of nearly all people without any need for debate”
As a result, common sense is a theory of belief formation – not just beliefs regarding your own actions, but beliefs regarding the actions of those around you.
If we just took “common sense” reasoning, we are accepting the current base of beliefs – a base that is founded on history contingent outcomes, and descriptive elements that involve biased categorisation. If we pin ourselves to an argument being one that involves beliefs that are “common sense” or “not common sense” then we end up with two divergent camps – one that thinks common sense is worthless, and one that thinks it is everything (note psychology and philosophy have had these debates as well). I’ll quote my post for the rest of this:
Of course, these definitions are not actually mutually exclusive – and actual economic thought about common sense includes both of them. The clearest example of this is Rubin’s work on folk economics (REPEC).
However, when it comes to thinking about what economists do (rather than just their view on what common sense entails) the requirement to sit in one of these ‘common sense camps’ becomes even weaker. The exceedingly harsh conclusions of the two views above (common sense sucks, common sense rules) don’t actually follow from the premises involved – economics describes situations where common sense beliefs are right, and when they are wrong!
And this is the kicker. Economics is a social science – it is applying the scientific method to help describe and understand social phenomena. It is not about saying whether beliefs are right or wrong, it is about trying to create knowledge to help us form beliefs.
Now, given that the formation of beliefs is an important part of describing social phenomena (since choices are based in part on beliefs, and social phenomena are the result of choices), the issue of belief formation does appear in full force. However, this does not imply that we have to make an a priori choice to state that common sense is wrong or right – instead it suggests we need to think about how beliefs, and expectations, are formed and (where possible) use data to help us find what appears to be the most appropriate assumption.
This is why criticising economics as only the study of common sense, or stating that it is ignorant of common sense, is a weak criticism.
However, it is also important to note that stating that policy conclusions are ignorant of ‘common sense beliefs’ can actually be a more poignant criticism – as it indicates simply that we do not agree with the set of beliefs and/or value judgements involved in the policy conclusion.
Summary for this debate – the economic method allows us to discuss social outcomes and the existence of social beliefs in a descriptive way that helps to frame trade-offs. The choice of the “type” of beliefs used when forming policy can be seen as another form of value judgment.
In this way “assumptions” about beliefs matter – and have to be made in a transparent way (as they are). Instead of relying on “common sense” notions alone, which often reinforce bigotry, economists allow a great degree of respect for the individual. It is not reducible to common sense or independent of the plausibility of the assumptions used.
And before you say “ok ok, but the ‘core’ assumptions are common sense” (as Bryan does at the end of his post), I would note this argument is a touch ridiculous. Common sense and scientific consensus are in fact two different things – the core assumptions of a discipline have been determined by debate, evidence, and a history-contingent process within the discipline, not by the broad “common sense” of beliefs. They are beliefs formed by people within that field due to specialisation, a small subset of all people – whereas common sense beliefs are supposed to be shared by most!
Note: None of this said “what matters is prediction, not the realism of the assumptions” a la Friedman. That argument could be made as well when considering elements of macro policy – but it is generally one I’m not a fan of in the policy space due to the Lucas Critique.
I honestly don’t see the point in doing this. The Cold War is over now, and both social and physical scientists are finally getting free of the constraints inherited from that period. To say that the Cold War constrained and influenced the debate on what to research, and the rhetoric to use, would be an understatement. Just read a biography of scientists during the period (e.g. Lakatos), or listen to an interview where Piketty talks about his willingness to discuss trends in capital-to-output ratios – with the Cold War over we can have a more open and honest discussion of trade-offs.
Honestly, calling someone a Communist nowadays means about as much as calling them a Nazi – I can’t help but ignore whatever is being said. This is a pity, as inherent in the extremes of Nazism and Communism were negative attributes that can easily be underplayed in policy – ignoring the agency, and value, of the individual. Instead of name-calling, it is probably better to make arguments along this line.
Here I am not trying to say we can’t disagree with social policies, I’m just saying it is possible to do so on merit. This is why the discussions about trade-offs, and the limits to knowledge, are what matters. Furthermore, collectivist thinking is not solely the domain of obvious social policy, but other views that may seem right of centre. Thinking about trade-offs makes this clearer.
Now, my impression is that social scientists and economists had it a bit better than physical scientists – a lot of economists had Communist sympathies, and fell in love with the managerialist command and control nature of policy. Furthermore, they saw themselves as these (high status) managers. This is a trend of genuine concern, as it involves managerialism and valuing the individual as a unit (of production or health), rather than through their capability to live a good life. These arguments deserve thought, not cheeky political gamesmanship!
I was surprised that there was a chapter focused solely on general equilibrium – and not GE in general, but competitive, neo-classical GE. I was especially surprised as such a model isn’t really “built” for distributional analysis – economists often say we need a different framework to do distributional work!
It is a neat chapter though, so let’s pop it in here
To quote:
The quality that designates an economics as neo-classical is the derivation of agents’ supplies and demands from particular types of maximization problems. It is assumed that each consumer’s domain of choice, preferences, and assets, and each producer’s technology, are exogenously given, as are the institutions allowing economic interaction to occur through voluntary contracts. On this basis, every consumer maximizes utility subject only to a budget constraint, and every producer maximises profit constrained only by technology. A set of prices that makes their choices compatible is a general equilibrium.
Hold on, you may say, those assumptions are not realistic. This is true. However, remember we can’t observe the types of choices people are making – in that sense we can use an idealised world that incorporates certain types of choices as a means of comparison. The way our assumptions fail will tell us something about how our model does not fit the data. Although even here it is important to be careful, this is the chain of reasoning used by the author to help discuss distributional issues (and their interpretation) using a GE framework.
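To make the quoted definition concrete, here is a toy illustration of my own (not from the book): a two-good Cobb-Douglas exchange economy, where the general equilibrium price is the one at which the agents’ utility-maximising demands exactly absorb the endowments. The preference and endowment numbers are made up.

```python
def equilibrium_price(agents):
    # Each agent is (alpha, endow_x, endow_y) with utility x**alpha * y**(1-alpha).
    # Good y is the numeraire (price 1); Cobb-Douglas demand for x is
    # alpha * income / p, with income = p*endow_x + endow_y.
    # Market clearing sum(alpha*(p*ex + ey)/p) = sum(ex) solves in closed form:
    num = sum(a * ey for a, ex, ey in agents)
    den = sum((1 - a) * ex for a, ex, ey in agents)
    return num / den

agents = [(0.3, 2.0, 1.0), (0.6, 1.0, 3.0)]   # illustrative agents
p = equilibrium_price(agents)
total_demand_x = sum(a * (p * ex + ey) / p for a, ex, ey in agents)
# total_demand_x equals the total endowment of x: choices are compatible.
```

At this price every agent is maximising subject to their budget constraint, yet markets clear – exactly the “set of prices that makes their choices compatible” in the quoted passage.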
Eg:
Nevertheless, there are some definite conclusions on distributional matters. For example, the theory implies that in an equilibrium pure profits will be zero.
….
[However,] it means that any observed profits over and above interest must derive from those phenomena from which Arrow-Debreu theory abstracts.
In a similar light to last time’s discussion of Neo-Ricardian models, with their high level of abstraction and lack of concrete conclusions, we have the following note in the essay:
Obviously, the less abstract the assumptions are, the more concrete the propositions will be that can be derived
However, it isn’t all positive: there may be institutional features we treat as exogenous which are in fact an endogenous part of the model. This leads to the point:
Neo-classicism provides nothing by which this problem may be resolved; not surprisingly, the theory we have considered remains at a high level of abstraction
…
This means, among other things, that Walrasian analysis cannot shed much light upon the distribution of income and wealth in any actual capitalist economy.
Nevertheless, it is also true to say that, for those who take the neo-classical perspective at all seriously, it provides an invaluable standard by which clear thinking can be enhanced and alternative theories evaluated.
Conclusion
I would note here that the discussion on GE models, and their modern use as CGE models, corresponds to my own view. We ask small conditional questions, given a transparent set of assumptions, and a view on the inductive relationship between this idealised system and reality.
In order to answer questions about “distribution” we need to tackle specific issues relative to our analysis, asking specific question about distribution.
It is for this reason I strictly favour analysis of household level data for discussing points on distribution, rather than analysis of factor shares. If I was to build up larger “groups” than households to discuss, I can tie them together on shared characteristics in this framework – compared to the arbitrary and loaded combinations of “parts of individuals” that occur with factor work.
This has been fun, thanks for putting up with me rattling off a bunch of points I wrote up going through this book in a single day – I will use the comments you’ve put in your posts to challenge my own priors, which no doubt have come through in my writing here 