A lot of attention (mine included) has recently been put on the top 1% of income and wealth. However, there is also substantial inequality within the other 99% that is worth exploring. To get an idea: if we took all the top 1% income growth between 1979 and 2012 and distributed it among the other 99%, each of us (I assume you also belong to the other 99%...) would earn around $7,000 more. Yet the increase in the earnings gap between a college-educated and a high-school-educated household over the same period is four times that. Hence, here we will focus on wage inequality among the other 99%, particularly between the 10th and 90th percentiles, so as to exclude the very extreme cases (which deserve separate attention). But first, Figure 1 shows how wages changed between 1963 and 2005 by wage percentile. Here we see that, in general, wages increased much more in the top half of the distribution than in the bottom half.
Figure 1: Change in real wages by percentile, 1963-2005.
A common measure of overall inequality is the ratio of wages at the 90th and 10th percentiles. A typical issue is that the structure of the population might be changing, with more people getting educated or accumulating work experience. As this happens, the typical person at either of these percentiles changes too, complicating the standard interpretation of rising inequality. Another take is between-group inequality, where the typical comparison is between those with a college degree and those with a high school degree. This tries to avoid the issue of other population characteristics changing, which affects the overall measure. Yet another alternative is to look within groups, evaluating how much variation there is among narrowly defined groups (for example: college-educated, 25-30 years old, male). These three measures of inequality are displayed in Figure 2, where we see that even though all three have increased over the long haul, they have done so at different paces and through different paths. The college premium in particular follows a strange path, increasing in the 1960s, decreasing in the 1970s and increasing very fast since the 1980s. This suggests that a simple, unique explanation for the recent increase in inequality is unlikely to work.
Figure 2: Three measures of Income Inequality.
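For readers who like to see the definitions in code, here is a sketch of the three measures on synthetic data. All variable names, magnitudes and group cells are made up for illustration; this is not the actual data behind Figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-wage sample with education and age (illustration only)
n = 10_000
college = rng.random(n) < 0.3
age = rng.integers(25, 60, n)
log_wage = 2.5 + 0.5 * college + 0.02 * (age - 25) + rng.normal(0, 0.4, n)
wage = np.exp(log_wage)

# 1) Overall inequality: ratio of the 90th to the 10th percentile
p90, p10 = np.percentile(wage, [90, 10])
overall = p90 / p10

# 2) Between-group inequality: the college premium (mean log-wage gap)
premium = log_wage[college].mean() - log_wage[~college].mean()

# 3) Within-group inequality: variance of log wages inside narrow cells
#    (education x 5-year age band), averaged across cells
cells = {}
for c in (True, False):
    for a0 in range(25, 60, 5):
        mask = (college == c) & (age >= a0) & (age < a0 + 5)
        if mask.sum() > 1:
            cells[(c, a0)] = log_wage[mask].var()
within = np.mean(list(cells.values()))

print(f"90/10 ratio: {overall:.2f}, college premium: {premium:.2f}, "
      f"mean within-cell variance: {within:.2f}")
```

The point of the sketch is just that the three measures can move independently: the 90/10 ratio mixes everything, the premium only compares group means, and the within-cell variance ignores group means entirely.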
But has inequality changed more at the top or the bottom? An easy way to look at this is to compare the 90th and 50th percentiles (upper-tail inequality) and, separately, the 50th and 10th percentiles (lower-tail inequality). Note this still excludes the very bottom and very top. Figure 3 shows that even though lower-tail inequality grew in the 1980s, it has not grown since then. Upper-tail inequality, on the other hand, has increased continuously.
Figure 3: Upper- and Lower-Tail Inequality.
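Under the hood these are just percentile ratios. A minimal sketch, on a made-up log-normal wage distribution rather than the actual data:

```python
import numpy as np

# Hypothetical wage distribution (log-normal), for illustration only
rng = np.random.default_rng(1)
wage = np.exp(rng.normal(3.0, 0.6, 50_000))

p90, p50, p10 = np.percentile(wage, [90, 50, 10])
upper_tail = p90 / p50   # upper-tail inequality
lower_tail = p50 / p10   # lower-tail inequality
print(f"90/50: {upper_tail:.2f}, 50/10: {lower_tail:.2f}")
```

For a symmetric (in logs) distribution like this one the two ratios come out nearly equal; Figure 3 shows that in the real wage data they have diverged since the 1980s.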
So what is behind this monumental increase in inequality? Identifying the ultimate cause of this change is very hard, probably impossible, but what we can do is identify the proximate causes, meaning what seems to be closely associated with the change, even if we do not fully understand what set it off. And this is where the skills of economists Autor, Katz and Kearney come into play. They argue that we cannot simply think of people as belonging to one of two groups - skilled and unskilled - where the top one is associated with higher education. Figure 4 shows that from 1979 to 2005 the wages of those with a post-college education grew by a lot more than those of college graduates. Moreover, the growth of the gap between those with exactly a college degree and those with a high school degree slowed down significantly after the 1980s. And finally, the gap between those with a high school degree and those without one has flattened or even decreased since the mid-1990s. All this suggests that since the 1990s the incomes of the very high- and very low-skilled workers have increased relative to those in the middle. Income has polarized.
Figure 4: Changes in wages by Education.
What explains this? The main hypothesis is that computerization has changed the demand for job tasks, and thereby the demand for skills, in a way that explains this polarization of income. Computers are good at routine tasks that are codifiable, like bookkeeping, clerical work or repetitive production tasks. (If you have interacted with a call center lately, you will have noticed how computers have improved at voice recognition and seem to have taken over the tasks that involve gathering the same information every time.) On the other hand, abstract tasks like those performed by "high-skill" managers or educated professionals are hard to automate, since they require cognitive and interpersonal skills and adaptability. Similarly, manual tasks in many "low-skill" jobs like security guards, cleaners and servers are hard to computerize and hence have not been much affected by the advance of computers. Figure 5 confirms this intuition: low-skill jobs (ranking occupations by the average education of those performing them) are intensive in manual tasks, while at the other end high-skill jobs are mainly filled with abstract tasks. Routine tasks, in contrast, are concentrated between the 20th and 60th percentiles.
Figure 5: Task intensity by Occupational Skill.
The conclusion is that the change in wage inequality may be substantially explained by changes in the demand for skills, which the introduction of computers has lately polarized. As the demand for these jobs increased, so did their wages. But why haven't workers matched the increase in demand by getting more education? Most likely this change was very hard to predict, so not enough people found higher education to be "worth it." However, recent trends in educational attainment suggest that young people are catching up to this increased demand.
Based on an article by Autor, Katz and Kearney.
After a long, long time devoted to education, economists do need to look for a job. But they (generally) do not do it the standard way: calling, sending CVs and so on. There is something called the Job Market that takes place every year in early January. Obsessed with efficiency, the economics Job Market has a particular advantage: applications and initial interviews are centralized. Most of them take place at the American Economic Association annual meeting. After some very stressful days, interested employers call back and schedule fly-outs for February-March. After meeting them, going out for drinks and dinner, and presenting your research, job offers are determined. But the question I have is: what determines the outcome of this stressful process? As someone who will hopefully go through this ordeal eventually, I wondered if there was any data about it.
Even though all economists have experienced this, I unfortunately wasn't able to find much research about the job market. But I did find one paper asking what aspects of education are associated with good outcomes in the job market. The authors collected data on graduates from top departments (Harvard, MIT, Princeton, Stanford and Chicago) and checked what was associated with the best job outcomes. Obviously, the sample of graduates from those departments is not representative of all economics graduates. Since they were accepted into such departments, they most likely represent the very top of the distribution of PhD applicants. Another caveat, since the study was done by academics, is that job placements in the business sector were generally assigned a much lower ranking than university ones (a good business-sector job was ranked similarly to a university around the 200-250 range). But well, the questions are:
0) What does the typical PhD graduate (1990-1999) look like?
He (only 25% female) is a foreigner (63% non-US) who may come from a foreign undergrad school (49%). He most likely does not come from a top undergrad school (22% from a top-15) nor does he hold a masters degree (24% with masters). Three out of four admitted students finish the PhD. And around 26% of (this very selective group of) graduates end up in a top-20 school. The sample is unfortunately a bit old and selective, and some things might have changed. One thing definitely has not: the students are still mainly male.
1) Do admissions requirements matter for grades?
Before entering, a standardized exam called the GRE is required. It has three parts: math, verbal and analytical. GRE math and analytical scores - even within this group of people with really high ones - are highly positively associated with good core grades in the PhD program. I always thought the GRE was more of a filter: once above the bar, all students would be pretty similar. But it seems not.
Coming from a top-15 US university is not associated with better grades. A masters degree helps slightly. And coming from a foreign school is correlated with better grades. But this may be due to a much more selective admissions process for students coming from abroad. Or to their being more devoted, since they are willing to leave their home countries.
2) Do grades matter for graduation?
First, grades are highly correlated: if you do well in one course, you also do well in the others. I find this very interesting, since we are looking at people who will later focus on a very, very tiny part of the world of economics, so we might have expected that people doing great in Micro would not do well in Macro, or vice versa. Let me clarify that grades are only a small part of the PhD. Most of it is actually doing research, which is what most graduates will do afterwards in their careers. But core micro and macro - sorry, econometrics! - grades certainly seem to matter for graduation. Even (sort of) when restricting to those who passed the course requirements, good grades were associated with graduation.
3) And finally, what matters for job placement?
A) Observable before starting the PhD.
Once again, coming from a foreign university is positively associated with landing a top-20 job. Coming from a top school in the US is also good. The GRE, not so much anymore. (Being a man or a woman does not seem important either, so maybe there is hope.)
B) Observable after starting the PhD.
Micro and Macro core grades are good predictors of job placement - sorry again, econometrics. Admissions rank does not seem relevant, which might call into question departments' capacity to rank students. Conditional on grades, coming from a foreign school does not seem to matter as much. But coming from a top US school still does. I wonder if a language or cultural bias could be behind this...
The questions that remain are: why are some characteristics much stronger predictors of grades than of job placements? If what really matters for the outcome of PhD students, and for the evaluation of the department, is placement, why does the admissions procedure seem so ineffective at predicting it? And, finally, what's wrong with econometrics?
Based on an article from the AEA.
Deep question. And you might think it completely out of reach of economics. But economics is fundamentally about how to allocate scarce resources in a world of seemingly unlimited wants. Many would say life is invaluable. But health research, among other fields, forces us to think deeper. Suppose a drug extends the life of cancer patients by a month on average but costs around 30 thousand dollars. Is it worth it? Assuming for simplicity there are no alternative drugs, the underlying question is: how much is a month of life worth? This is not a hypothetical philosophical question. It was an actual case made public by doctors from the Memorial Sloan-Kettering Cancer Center.
If you think life is invaluable, you should then think that one month of life is worth at least those 30 thousand dollars. But think again. Resources are scarce. There isn't an infinite amount of money available. Suppose that money comes from public funds: what if it were used to help other people with simpler or cheaper health issues, which can also extend or improve life? What if it were used in education? Now suppose that money comes from your own pocket. Would you rather use it to travel around the world? Or to buy a house for your children? Or, even, would you be willing to leave a 30-thousand-dollar debt to your family for that extra month of life? If life is thought of as invaluable, none of these comparisons can be made.
Doctors from the Sloan-Kettering Cancer Center pondered this because of a combination of new cancer treatments that were estimated to cost around 600 thousand dollars per year of life extended. Since the treatments had been approved by the FDA, most insurers had to cover them. But doctors at that hospital decided those drugs were not worth the price; other alternatives were better. If those new treatments were used by everyone, resources available for health would run out very fast. That money could be better used elsewhere. And so they decided to boycott them by coming out publicly against some of these treatments.
Avastin, $5,000/month; Zaltrap, $11,000/month; Yervoy, $39,000/month; Provenge, $93,000/course of treatment; Erbitux, $8,400/month; Gleevec, $92,000/year; Tasigna, $115,000/year; Sprycel, $123,000/year. (Photo: NYMag, Illustrations by Remie Geoffroi)
OK, so those treatments may not have been very good, and other alternatives were available. But what if the drug in question is really good and no alternatives exist? One such drug has been suggested to be Sovaldi, for hepatitis C. Let me clarify that I know nothing about this drug, so let's treat the example as a thinking exercise about how life sometimes needs to be given a dollar value, not as a health study. Sovaldi is suggested to have smaller side effects and a cure rate of as much as 95% in the US - an impressive drug. Its alternatives were suggested to perform much worse, possibly not curing hepatitis C but just managing it temporarily. However, Sovaldi costs one thousand dollars a pill and is taken daily for 12 weeks, or around 84 thousand dollars per treatment. Around 75 thousand people took it last year, totaling a cost of around 5 billion dollars in the US.
Given the costs, states have limited coverage to some special cases. But the bigger picture is that this drug might actually cure you, reducing future costs and allowing patients to get back to their lives faster and with fewer problems down the road. So it might not be fair to just compare the price tags of the different hepatitis C treatments. Assume away all other possible life improvements besides work, and just suppose that the average patient is able to produce for one more year of life than those who take the other, cheaper drugs. Moreover, say this person produces the average GDP per capita of the US: over 50 thousand dollars. This extra year of production can be considered to reduce the actual "cost" of this drug by more than half. Then add all the other aspects of life that may improve with such a drug: not having to take more medicine later in life, enjoying more time with family and friends, and so on.
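To make the back-of-the-envelope arithmetic explicit, here is a tiny sketch using only the rough figures quoted above. This is a thinking exercise, not a health-economics calculation:

```python
# Back-of-the-envelope "effective cost" of the treatment described above.
# All figures are the rough ones quoted in the text, not a health study.
pill_price = 1_000          # dollars per pill
days = 12 * 7               # one pill a day for 12 weeks
treatment_cost = pill_price * days          # ~84,000 dollars

gdp_per_capita = 50_000     # rough US GDP per capita, dollars per year
extra_productive_years = 1  # assumed extra year of production vs. alternatives

effective_cost = treatment_cost - extra_productive_years * gdp_per_capita
print(treatment_cost, effective_cost)  # 84000 34000
```

The 50 thousand dollars of extra production offsets more than half of the sticker price, which is the sense in which the "cost" falls by more than half.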
The first drug was too expensive for what it provided. The second (supposing it actually works) might be worth the price. What is the cut-off? In other words, we return to the same question: what is a month of life worth? Neither zero nor infinity. Many numbers are actually out there. For example, the World Health Organization typically places the value of one year of life between one and three times the GDP per capita of the country, i.e. one to three times what the average person in that country produces. I believe this number should depend on the person, with the quality of that month determined by, for example, age, other health issues and general happiness: a month of life for an average 20-year-old should be worth more than an extra month for an average 90-year-old. But it gets very complicated to go into these details. However abstract this may seem, this number that values life is supposed to start being used to evaluate cardiology treatments. Whatever the final number, if you thought life was invaluable, think again.
Based on Radiolab's podcast.
A funnier example of scarcity, to finish off laughing...
How much are we investing in our industries? Most Argentinians nowadays complain that the government's policies are hurting their investments because they cannot import capital goods. This idea can be exploited to evaluate the capital investments of most countries. Even though direct measures of the equipment installed in a country by type are not available, Eaton and Kortum have shown that most of the world's capital equipment is produced in a small number of R&D-intensive countries; the rest of the world generally imports its equipment. Two other economists, Caselli and Wilson, have since concluded that, for most countries, imports of capital of a certain type are an adequate proxy for overall investment in that type of equipment. Before leaving Argentina, I thought it was a good time to see how capital imports (and hence investments) compare across a selection of Latin American countries.
The United Nations maintains a dataset, COMTRADE, which holds information on all trade for each country at a quite disaggregated level. (Notice that for some countries one big benefit of this source is that the information is collected independently of their governments.) For example, we can check the number, price and weight of men's linen suits exported from Egypt to Germany each year. And this holds for almost any good, year and set of countries you can imagine. Then, by identifying which of these imports are capital goods, we can measure the value of the capital imports every Latin American country has made over the last 20 years. Figure 1 shows this as a share of GDP. It is clear that Bolivia (really, Evo?), Chile and Paraguay are among the highest capital investors/importers. The rest of Latin America, on the other hand, seems pretty similar, importing/investing below 10% of GDP (I was surprised by Brazil's low numbers as well). Just to get an idea, Asia has had a stable average ratio of capital imports to GDP of close to 16% over the same period.
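To fix ideas, here is a sketch of the aggregation on a toy COMTRADE-style table. The column names, product codes and GDP figures are all made up for illustration and do not reflect the real COMTRADE schema:

```python
import pandas as pd

# Hypothetical, simplified trade extract: one row per importer, year and
# product code, with the import value in dollars. Illustration only.
trade = pd.DataFrame({
    "importer": ["ARG", "ARG", "CHL", "CHL"],
    "year":     [2010, 2010, 2010, 2010],
    "code":     ["751", "841", "751", "764"],   # made-up product codes
    "value":    [2.0e9, 1.0e9, 3.0e9, 1.5e9],
})
capital_codes = {"751", "764"}   # assumed set of capital-goods codes

gdp = pd.Series({"ARG": 4.0e11, "CHL": 2.0e11})  # hypothetical GDP, dollars

# Keep only capital goods, sum by importer, divide by GDP
cap_imports = (trade[trade["code"].isin(capital_codes)]
               .groupby("importer")["value"].sum())
share = cap_imports / gdp  # capital imports as a share of GDP
print(share)
```

The real exercise is the same operation at scale: classify each product code as capital or not, sum the import values by country and year, and divide by GDP to get the series plotted in Figure 1.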
But not all capital is the same when looking at investments. Aircraft or computers are not the same as cars, or even more basic electrical equipment. Hence, we can use the COMTRADE data to evaluate the composition of each country's capital investments. For the sake of clarity, in Figures 2, High R&D refers mainly to aircraft, computers and communication equipment; Medium R&D refers to other electrical and non-electrical equipment (including professional goods like photographic equipment); and Low R&D refers to motor vehicles and fabricated metal products. And now we can see how each Latin American country has invested over the last two decades.
Figure 1: Capital Imports as a share of GDP.
Figures 2: Capital Imports Composition, by country.
Let me start with my own country. Argentina saw a big switch in the early 2000s from High R&D towards Low R&D. Imports of motor vehicles and fabricated metals seem to have taken the place of equipment with more advanced uses like computers and other office goods. (This seems to go against what the Argentinean government keeps saying about building our own motor industry.) A similar pattern, in much milder terms, can be seen in Chile. Paraguay is at the opposite end, switching away from Low R&D towards more Medium and High R&D goods. Interestingly, some of the countries with the best recent reputation (like Brazil, Peru or Uruguay) have had much more stable paths. Figures 2 have many stories to tell, which probably depend a lot on each country's policies, of which I am mostly unaware. Nevertheless, they do let us test whether the claims our politicians make about industrial and other forms of investment hold up in the data. So pick your favorite country and check it out!
As I spend my vacation back home in Argentina, many people have been asking what I think about the Argentinean economy. My answer to most of them was that, beyond what the newspapers say, it is very hard to comment. Data is necessary to properly evaluate the state of an economy, and reliable data on my country's situation is hard to get.
A lot has been said about the inflation index in Argentina. The figure published by INDEC, the national statistics agency, has been called unreliable by almost every media source. The government is accused of cherry-picking the stores and goods it follows so that reported inflation comes out low. And every Argentinean you meet on the street will tell you the index is a lie: they go to the supermarket or any store and can "estimate" the index in their own heads, and things do not add up to what is published. MIT's Billion Prices Project includes an index that follows a - albeit smaller - selection of goods online and reports on them. This MIT inflation index is not perfect and probably would not exist if people trusted the official inflation figures. The orange line in Figure 1 shows their index, while the blue one is official inflation. Clearly, the official one is usually below half of the one built from online prices.
Figure 1: Argentina's official versus independent inflation indices.
Figure 2: Argentina's life cycle of income (monthly), 2012.
This will not come as a surprise to any Argentinean. They all know they cannot trust the official inflation index. However, I am always surprised to notice that they do not comment on the reliability of other reports. It is relatively easy for everyone on the street to partially check the inflation index while going to the supermarket. But it is absolutely impossible for a regular person to check whether GDP has grown. If the government or the statistics agency is willing to lie about inflation - which we can all easily check - imagine what they can be doing with reports on things like GDP, unemployment, reserves or government spending and revenue, which are very hard to measure.
What can we do about it? Probably not much. But to try to get around some of these issues I decided to look at the available micro data. The Encuesta Permanente de Hogares is a regular household survey (also run by INDEC) in which people are asked several questions on, among other things, their education and economic status. My hope was that by looking at something not directly published from this survey, I might extract some more reliable information. I am not certain about this, but here we go.
Looking at the life cycle of income, the pattern found is surprisingly similar to that of the US in the 1970s. (Are we really 40 years behind?) Using the EPH for 2012, the peak is observed around age 40 - a lot earlier than currently in the US, but similar to the US in the 1970s. The level of income, however, is shockingly low, with an average monthly income at the peak age of below 4,000 Argentinean pesos (between 300 and 500 dollars, depending on the exchange rate used). Similar calculations for 1995 give a similar shape but an average monthly income at the peak of twice as much, around 900 dollars.
Going from an average at the peak age of, say, 500 dollars a month to the published GDP per capita of over 14 thousand dollars seems a bit complicated to me (12 months x 500 = 6,000 dollars a year, which is a lot less than 14 thousand...). Remember that the average of 500 dollars was based on people who have jobs; unemployed people were not included. The sample is small and the results may not be very accurate because of that. But even then, you still need to add all the people who are not working but still count in the "per capita" part of those 14 thousand dollars. This is the issue with Argentinean data nowadays: we cannot trust them, and it is hard to find out whether we researchers are making a mistake in our calculations or things simply do not add up.
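The sanity check is simple enough to write out, using only the rough figures quoted above:

```python
# Orders-of-magnitude check on the Argentinean income figures in the text
monthly_peak_income = 500          # dollars, survey average at peak age (employed only)
annual_peak_income = 12 * monthly_peak_income   # dollars a year
official_gdp_per_capita = 14_000   # dollars, published figure

print(annual_peak_income, official_gdp_per_capita)  # 6000 14000
# Even the employed at their peak earn well under half of GDP per capita,
# before adding the non-working population to the "per capita" denominator.
```

GDP per capita and average labor income are of course not the same object, but a gap this large between the survey and the published aggregate is what makes the published figures hard to reconcile.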
Figure 1: Live First-Birth Rates by Age of Mothers
The 1960s were revolutionary times. As Bob Dylan - one of my favorite musicians and probably one of the most famous characters of that time - said, "there is nothing so stable as change". This was certainly true in the US at the time: the Civil Rights Movement, social unrest over the Vietnam War, the invention of the microchip, antidiscrimination legislation, the women's movement. And the invention of Enovid, the first contraceptive pill. Yes, you read that right. The contraceptive pill was a revolutionary element. And as such, it has also been studied by an economist (and, by the way, published in the Quarterly Journal of Economics, one of the top three economics journals). Martha Bailey evaluated the effect that the release of this little pill in 1960 had on female labor force participation. Gary Becker had previously said that "the contraceptive revolution [...] has probably not been a major cause of the sharp drop in fertility". However, Bailey shows that even if the pill did not decrease fertility, it did delay it, allowing women to get more education and improve their labor outcomes.
Figure 1 shows trends in first-birth rates by age group since 1940. A marked decline in childbearing among young women (focus on 20-24 year olds) is seen after the pill was introduced. This lasted until 1976, when all unmarried minors became legally allowed to obtain contraceptives. Early-access laws allowed women between 18 and 21 to get the pill, and hence the largest decline is seen for those 18-19 years old. A first robustness check comes from those 15-17 years old: since they were too young to benefit from early access, we should observe - and do observe - no effect for them. This gives us confidence we are not just seeing a spurious result.
As the diffusion of the pill increased, the distribution of age at first birth also changed. Figure 2 plots the fraction of women first giving birth, by age group and cohort. Among women born before 1940, who were too old to benefit from early access to the pill, around 62% report having children before age 22. For those born around 1955, this had dropped by 25%. Notice that both figures suggest these effects were not due to preexisting trends. Also, no changes are seen between the 1955 and 1960 cohorts, by which point all women would already have had access to the pill.
Figure 2: Distribution of Age at First-Birth, by Cohorts.
And where does the economics come in? Early access to the pill was reflected in female labor force participation. For cohorts born before 1940, the increase in women's participation had been driven by married women over 30, who returned to work after their children had grown. For those born in 1955, on the other hand, the "fertility dip" in participation is no longer observed, and participation rates were 25% higher at age 25.
Figure 3: Labor Force Participation, by Age and Cohort.
But how can we disentangle the effect of the pill from all the other things going on in the 1960s that I mentioned above? Here is where econometric tools come in. The expansion of the pill differed across states, which individually changed the legal rights of individuals aged 18 to 21. Indirectly, this empowered women to get early access to the pill without parental consent.* This exogenous variation allows Bailey to estimate the effect of the pill on women's life-cycle labor force participation. Just to fix ideas, the methodology is like taking two states that were previously equal. One state decides to extend legal rights to younger individuals and the other does not. Consequently, only one state allows young women to get access to the pill. The difference in the labor force participation of women between the two states can then be attributed to the pill. More than two states and more controls are used to obtain the results, but the intuition of the technique is in this simple example.
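The two-state intuition can be written down as a toy difference-in-differences computation. The participation numbers below are made up; only the logic mirrors the comparison in the text:

```python
# Minimal difference-in-differences sketch with made-up numbers, mirroring
# the two-state story: one state grants early legal access ("treated"),
# the other does not ("control").
# Labor force participation rates (percent), before and after the law change:
lfp = {
    ("treated", "before"): 40.0,
    ("treated", "after"):  48.0,
    ("control", "before"): 41.0,
    ("control", "after"):  44.0,
}

change_treated = lfp[("treated", "after")] - lfp[("treated", "before")]  # 8.0
change_control = lfp[("control", "after")] - lfp[("control", "before")]  # 3.0

# The control state's change proxies for everything else going on in the
# 1960s; the difference in the differences is attributed to the pill.
did = change_treated - change_control
print(did)  # 5.0
```

In the paper this logic is run with many states and additional controls in a regression, but the identifying comparison is exactly this double difference.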
A first thing to check is whether early access to the pill had an effect on fertility. Table 1 shows the baseline estimate (column 2): early access reduced the probability of giving birth by age 22 by 14%. Interestingly, early access to abortion does not seem to drive the results (column 3). As expected, early access did not reduce the number of children born before age 19, since women did not have legal access to the pill without parental consent before that age. Finally, as others had reported, the pill did not reduce the total number of children women had, suggesting it just delayed childbearing.
Table 1: The effect of early legal access to the pill on fertility.
What effect did this have on labor outcomes? Bailey shows that early access to the pill increased the labor force participation of women aged 26-30 by 7%, and also increased that of women aged 31-35. They also seem to work more hours, getting closer to male labor outcome averages. For women under 25, the results suggest that the pill increased their school enrollment. Changing career trajectories - resulting from delayed childbearing - was the primary mechanism through which this little pill increased female labor force participation.
* Bailey goes into some detail to justify that this extension of rights was not related to state characteristics that could be directly related to the variables of interest. Most of the changes are suggested to stem from the discrepancy, under federal law, of being old enough to be drafted into the Vietnam War at age 18 but not old enough to vote. At the state level, legislation was extending rights to 18-year-old men and women.
A few weeks ago I wrote about the life cycle of earnings, where Guvenen, Karahan, Ozkan and Song had used over 200 million tax records from the Social Security Administration (between 1978 and 2010) to see how income moves with age. With that amazing data they showed that mean income peaks around age 50, though for the median person income peaks earlier. Given their findings, I decided to look (though with worse data) at how the life cycle of earnings has changed over time.
Using Census data from the US (available through IPUMS for any other data addicts reading this), I looked at average income by age for each decade. The caveat of using this information is that if there are cohort effects (meaning earnings are changing differently for young people than for older people within a decade), I will not capture them directly, possibly leading to some confusion in the analysis. Nevertheless, the patterns are quite striking.
Figure 1 shows that average labor income used to peak a lot earlier than it does nowadays. Back in the 1960s, it peaked around age 35, after which income was expected to start going down. Decade by decade, however, this peak has been moving later. By the 1990s the peak seemed closer to 45, and nowadays it can be as high as 50. Given the results in Guvenen's research, it might be that nowadays the median worker's labor experience differs much more from the mean worker's than it used to. But why?
Figure 1: Average Income by Age, over time.
Figure 2: Relative variance of Income by Age, over time.
Source: IPUMS Census USA. Family labor income by age of head, excluding people in school or with no income.
One possibility is the increase in the share of people going to school and looking for skill-demanding jobs. Back in the 1960s, around 53% of young people were high school graduates, and another 13% had graduated from college as well, leaving 34% as high school dropouts. Nowadays, there are only 10% high school dropouts, while the share of college graduates has increased to around 34%. I believe this might be pushing the peak later. For example, an engineer or lawyer probably needs to go through some lower-paying job training (or internship) and needs to try many different offices until he finds the one that suits him best. Hence, he starts with quite low pay but sees a high increase over time. On the other hand, a construction worker's income will probably not change much over his life: most companies will pay similarly, and his wage will not change as much over his life as it will for the lawyer or engineer.
This is consistent with what is found for the variance of income. Figure 2 shows variance relative to the variance at age 26, so that we can see how it moves over the life cycle. Another interesting pattern emerges here. It used to be that income differences were quite constant until the age of 40. However, since the 2000s, differences seem to show up earlier: nowadays we find a steady increase starting at age 25.
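As a minimal sketch of the statistic plotted in Figure 2 (with made-up incomes, since I am not reproducing the IPUMS extract here):

```python
import math
import statistics

# Hypothetical family labor incomes for a handful of household heads,
# at age 26 and at age 40 (the age-40 incomes are more spread out).
incomes_26 = [30_000, 35_000, 40_000, 45_000]
incomes_40 = [30_000, 45_000, 70_000, 120_000]

def var_log(incomes):
    """Variance of log income across households."""
    logs = [math.log(y) for y in incomes]
    return statistics.pvariance(logs)

# Figure 2 plots the variance at each age divided by the variance at
# age 26, so every series starts at 1.
base = var_log(incomes_26)
print(var_log(incomes_26) / base)  # 1.0 by construction
print(var_log(incomes_40) / base)  # > 1: income differences widen with age
```

Working in logs keeps the measure about proportional differences, so a doubling of everyone's income leaves the statistic unchanged.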
Source: IPUMS Census USA. Variance of log family labor income by age of head, excluding people in school or with no income, relative to variance at age 26.
Once again, I believe this probably has to do with education. Back in the 1960s, more than 80% of the population would start looking for jobs around the age of 18, leaving many years until the age of 25 - where my plots start - for them to find the appropriate job.* Moreover, these types of jobs probably did not feature large wage differentials between employers. Nowadays, on the other hand, 34% of young people graduate from college (and others attempt it but fail to graduate), leaving them fewer years to find a job by age 25. Moreover, once again, their skills probably need more time to find the appropriate employer ("matching" issues in the economics jargon).** Hence, incomes vary a lot more nowadays from earlier stages of life. And my income peak is getting farther and farther away...
* Plots start at age 25, so as to avoid selection issues with people who go to college and only show up in the sample after they graduate (say, at age 22). For example, if plots started at age 18, the data until age 22 would include only people who did not go to college. Starting at age 22, the pool of people would change a lot as college graduates come in. Mean income might change significantly, but not because of the life cycle of workers' earnings, just because the pool of people in the data changed. Starting the analysis at age 25 reduces this problem.
** An interesting way to evaluate this would be to look at the same data but focusing only on college graduates. Maybe another week.
Given the recent events in Ferguson - where a white policeman was not indicted for shooting a young black man - which led to protests around the US against racial discrimination, I thought it was an (unfortunately) perfect moment to see what economics has to say about this. What is the status of inequality between blacks and whites? Using research from the Urban Institute, Figure 1 suggests that white people have 6 times more wealth than black people, a gap that has increased almost threefold since 1983. So it seems the situation has not gotten better over time.
Figure 1: The wealth gap in the last three decades
Source: The Urban Institute.
Moreover, whites accumulate more wealth over their lives than blacks (or Hispanics) do. Focusing on those born between 1943 and 1950, Figure 2 shows that this wealth gap increases over the life cycle. In 1983, whites between 32 and 40 had an average family wealth of $184,000, rising to over a million dollars by ages 59 to 67. However, blacks' wealth goes from $54,000 to only $161,000 between the same ages. So whites have about three and a half times more wealth than blacks when they are young, but over seven times more when they are old.
Figure 2: The life cycle of wealth by race
Source: The Urban Institute.
On top of this wealth gap, even an over-simplistic look at the data suggests that blacks receive worse criminal sentences and are more likely to be suspended in school. Finally, Figure 3 shows they are twice as likely to be unemployed.
Figure 3: Unemployment rate by race.
Source: Bureau of Labor Statistics.
What is behind such big gaps? Econ 101 teaches us that a properly working market system should hire and pay people according to their value: discrimination makes no sense in a competitive environment. Suppose every employer discriminates against blacks, paying them a lower wage even though they are as productive as whites. This would allow any unbiased person to take over the market. She could hire the discriminated workers at a wage somewhere inside the gap (i.e. between the black and white wages) and earn a higher profit than everyone else, possibly driving all the racist businessmen out of the market. This Econ 101 logic is definitely too simplistic, but it should help us frame our thoughts and see what it is missing.
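To make the arbitrage argument concrete, here is a toy version in a few lines of Python (all wage and productivity numbers are hypothetical, chosen just to show the mechanism):

```python
# Every worker produces the same value, but biased incumbents pay
# discriminated workers less than favored ones.
productivity = 100.0        # value produced per worker, regardless of race
wage_favored = 95.0         # wage biased employers pay favored workers
wage_discriminated = 80.0   # wage biased employers pay discriminated workers

# An unbiased entrant offers a wage inside the gap...
entrant_wage = (wage_favored + wage_discriminated) / 2  # 87.5

# ...which attracts the discriminated workers (they earn more than before)
# while leaving the entrant a larger margin than the incumbents get.
entrant_profit = productivity - entrant_wage       # 12.5 per worker
incumbent_profit = productivity - wage_favored     # 5.0 per worker

assert entrant_profit > incumbent_profit
```

Any wage strictly between 80 and 95 works the same way, which is exactly why competition should, in theory, erode the gap.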
A possible issue is that blacks and whites differ in characteristics beyond their color. For example, white applicants could be more educated. Focusing on unemployment, the question then is: when faced with observably equivalent (i.e. in education, experience, etc.) black and white job applicants, do employers favor the white one? Evidence goes both ways. Some studies suggest they do not, claiming the black-white gap stems from supply factors: African-Americans lack many skills when entering the labor market, so they perform worse. Others suggest that employers do discriminate, either out of prejudice ("taste-based" in economics jargon) or, more usually, through what economists call "statistical discrimination": race is used as a signal for unobservable characteristics. For example, if blacks tend to be raised in worse environments (which could lead to worse productivity), then employers who care about productivity (but not about race) and cannot observe it perfectly would use race (or ZIP codes) as a signal for it. Hence, black people would be discriminated against, but not (directly) because of their color.
Data limitations make it difficult to test these views. Researchers possess far less data than employers do, so even if applicants appear similar to researchers, they may not to employers. Employers can observe social skills during interviews and assess the quality behind what is stated in a typical resume, and any racial difference in labor outcomes could easily be attributed to that. That would be a highly unsatisfactory open ending to this post.
Fortunately, Bertrand and Mullainathan designed a field experiment to circumvent this problem. They sent close to 5,000 resumes to more than 1,300 help-wanted ads and measured the interview callback rate for each resume. Since race cannot be written explicitly on a resume, they manipulated the perception of race by (randomly) assigning names to the resumes. Half the names used are white-sounding (e.g. Emily Walsh) while the other half are black-sounding (e.g. Lakisha Washington). A side experiment showed that the names used are associated with their respective races by more than 90% of people. They also varied the quality of the resumes, in order to see whether callbacks for blacks are more responsive to quality than for whites (as statistical discrimination might suggest). Approximately four resumes were sent to each ad: Black-High (quality), White-High, Black-Low, and White-Low. Even though this does not go beyond the callback stage (i.e. it does not go all the way to employment), the methodology guarantees that the researcher and the employer have the same information.
Table 1 shows the callback rate for both groups: whites are 50% more likely to be called back. A white applicant would need to send about 10 resumes to receive one callback, while a black one would have to send 15. Using the data on resume quality (Table 5 in the paper), the return to a white name is equivalent to as much as eight additional years of experience. Moreover, there seems to be no difference across industries or occupation categories: they all show gaps of this sort.
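A quick back-of-the-envelope check of those numbers, using the approximate 1-in-10 and 1-in-15 callback rates quoted above:

```python
# Approximate callback rates implied by the text above.
callback_white = 1 / 10   # ~10% for white-sounding names
callback_black = 1 / 15   # ~6.7% for black-sounding names

# Relative gap: whites are about 50% more likely to be called back.
relative_gap = callback_white / callback_black - 1
print(round(relative_gap, 2))  # 0.5

# Expected number of resumes per callback is 1 / rate.
print(round(1 / callback_white), round(1 / callback_black))  # 10 15
```

The "50% more likely" and "10 vs. 15 resumes" statements are two ways of describing the same ratio of callback rates.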
Table 1: Callback rates by race.
A possible issue with this strategy is that when employers read a name like Lakisha, they may infer more than just skin color; they could interpret that the applicant comes from a disadvantaged background. In that case, signals of quality like experience or special skills should matter more for black applicants. Similarly, ZIP codes could be used to get an idea of applicants' social background. If statistical discrimination were behind the gap, we should expect black applicants' callbacks to respond more to either of these. However, the study suggests they don't: higher resume quality improves callbacks for white applicants but not so much for black ones, and ZIP codes don't seem to matter much either. Finally, a way of looking at this directly is to examine the average social background (proxied by mother's education) associated with each name used. Table 2 shows the first names used, together with their callback rates and the average rate of mothers' high-school completion. The social-background hypothesis would predict higher callback rates for higher mother's education. However, no such evidence is found.
Table 2: Social background and callback rate for each name used.
If statistical discrimination is not behind this, what is? "Taste-based" discrimination, where people consciously think worse of blacks, seems to contradict other studies in the literature. In a second paper, the same authors (together with Chugh) suggest that a possible explanation is that we might be discriminating unintentionally. Using a tool popular in neuroscience and sociology, the Implicit Association Test (IAT), they show that people unconsciously have more difficulty associating black persons with positive words. And this is harder to control in environments with time pressure or considerable ambiguity (like screening job applicants).
What is the best way to improve on unconscious discrimination? Is emphasizing differences between skin colors, to the point of avoiding any topic that refers to colors that are as obviously there as any other part of our bodies, really the way to improve our unconscious mind? If we are raised with these concerns about what is politically correct to say, we might be doomed to keep unconsciously drawing such an unfair and damaging distinction between people's skin colors.
Figure 1: Composition of US Income Inequality.
In the last few years, substantial research by Piketty, Saez, Atkinson, and others has brought the topic of inequality back to the front page of economics. They use extensive data, including tax records in some cases, to analyze the evolution of (mainly top) income inequality over a long period of time. Charles Jones has updated and summarized some of these studies, which is the basis of this article. The starting question is: how much inequality is there?
Figure 1 shows the share (and composition) of income held by the top 0.1% of the population. The first striking finding is a long U-shaped pattern: (top) inequality was very high before the Great Depression (with the top 0.1% holding as much as 10% of total income); it was lower and steady after WWII; and it has been rising since the 1970s (reaching pre-1930 levels).
Taking into account that GDP can theoretically be split into labor income (e.g. wages, salaries, and business income) and capital income (e.g. rents, dividends, and capital gains), we can divide the analysis of inequality in a similar fashion. This shows that most of the initial decline is due to a reduction in capital income, while most of the subsequent increase is due to labor income (and possibly capital gains). The returns on capital seem to have become relatively less important for the top 0.1% of the population, while wages and business income have become more important. (A big driver of this might be the importance of land rents in the income of this part of the population.)
If you have read about Piketty's book, you may have heard about the magnitude of wealth inequality. Wealth inequality is much greater than income inequality. While the top 1% of the population holds about 17% of income, their share of wealth in the US is estimated to be above 40%. The cutoff to be in the top 1% of income is 330 thousand dollars a year, while 4 million dollars are needed to be among the wealthiest 1%. Figure 2 shows the path of wealth inequality for France, the US, and the UK. Wealth inequality was much higher before WWI than it is today. However, this hides the fact that it started to increase again in the 1960s. On the positive side (at least for the UK and France), it still remains smaller than in the 19th century.
Figure 2: Wealth Inequality.
So far we have discussed how inequality has behaved within labor income and within wealth. Given the importance of wealth inequality, the remaining question is how the share of income taken by capital has evolved over time. Since most capital income is captured by a small number of people, a tiny change in the share taken by capital (instead of labor) can have substantial effects on overall inequality. While most of the previous plots focused on the top 1%, this is now more about the top 10% (which holds 3/4 of the wealth in the US) versus the bottom 90% (which holds the other quarter, most of it actually within the 50-90% range). Figure 3 shows that the share of income taken by capital had either decreased or remained stable until the 1980s. Since then, however, the capital share (think of it as the share of revenues taken by capital and property owners) has increased in all three countries.
Figure 3: Capital share of payments.
Inequality is a big concern. However, its causes and consequences remain a puzzle. On the causation side, much research remains to be done. On the consequences side, many views are possible. At the individual level, inequality might affect some people's chances of making progress, for example through access to education: if children lack basic needs (like food), they most likely won't attend school. At the aggregate level, inequality might also hinder general economic growth; for example, through reduced access to education, innovation might be damaged. However, it has also been claimed that inequality might be necessary for growth. For example, in a very poor country where wealth is split equally, no one might be able to invest, whereas higher inequality might allow the richest people to use their extra resources to invest and generate growth. Later, opportunities for the poor might flourish, leading to lower inequality. This is known as the Kuznets curve. Whatever your hypothesis, careful thinking and proper research are probably necessary.
Based on a working paper by Charles Jones.
How do individual labor earnings evolve over the course of a person's life? If you have ever asked yourself "Should I expect my income to increase this year?" and "By how much?", this post might interest you. In a very elegant study, Guvenen, Karahan, Ozkan and Song have tried to answer these questions and more, using over 200 million tax observations from the Social Security Administration (between 1978 and 2010). If you thought tax data was off-limits to researchers, this (and my last post) might suggest otherwise. Don't worry: only a few people are allowed to use this information, and even then they never see the name of the person behind each income observation.
Looking at employed people between the ages of 25 and 60, they focus on how much earnings grow each year on average. A first look at the data is provided in Figure 1. Averaging over the whole population, yearly income peaks around age 50, with a cumulative increase as high as 127% from age 25.
Figure 1: Average (Log) Earnings by Age.
If you are past age 50 and have not seen such an increase, you might be wondering what's wrong with you. Before entering such a depressing state of mind, please read a few more lines. This average income path hides a lot of variation across people. More importantly, it is strongly influenced by the very top earners. Figure 2 shows that the median worker sees only a 38% increase in his earnings between ages 25 and 55. It is the very top earners who drive the 127% number above. For example, the top 1% show a 1,500% increase in their earnings over the same period: roughly 40 times the median increase in earnings...
Figure 2: Earnings Growth (25 to 55) by Lifetime Earnings.
Another interesting finding is that income does not peak at the same age for everyone. Even though the average person's income peaks around the age of 50, this is not the case for most people. Figure 3 shows that the median worker has almost no income growth between 35 and 45, and only the top 2% actually experience earnings growth after 45.* I hope these depressing findings for the median worker might help your self-confidence. The average numbers shown in Figure 1 are not the appropriate ones to question your life. (Figures 2 and 3 might be...)
Figure 3: Earnings Growth by Decade of Life.
Some other interesting findings in the article are that the dispersion of income growth (i.e. how much income growth differs across individuals) has a U-shape: it decreases with age until around 50, when it spikes up again. Top earners are the exception once more, since their income dispersion grows every year of their lives.
How about asymmetries? Are you more likely to be below or above the average increase in income? The data suggests that as people get older or richer, negative income shocks become more likely. And this seems to be because there is more room to fall (not less room to move up): the higher your income, the more you can lose (remember most people are not willing to pay to work, so you cannot have negative wages).
Finally, let me end with a happy note. Suppose you just saw your income go down. You might be worried that it will remain like this for a long time. The data suggests otherwise. If the decrease was very strong, it is most likely that the persistence will be very short (unless you were a very high earner). In less than a year you should see your income recover most of its previous value.
(Very Small Print Note: this does not mean you should just lay down and wait for this fact to bring your salary back to normal. No complaints are accepted if incomes do not go up.)
* Remember that if a distribution has a few outliers with extremely high income growth, the average growth we observe will be much higher than the median worker's. Hence, focusing on the median worker might be more illustrative in these cases.
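The footnote's point can be seen in a tiny simulated example (the distribution below is made up, loosely inspired by the 38% median and 1,500% top-1% figures):

```python
import statistics

# 99 workers with 38% lifetime earnings growth and 1 worker with 1,500%:
# the median reflects the typical worker, while the mean is pulled up
# by the single outlier.
growth = [0.38] * 99 + [15.0]

print(statistics.median(growth))          # 0.38: the typical experience
print(round(statistics.mean(growth), 2))  # 0.53: inflated by one outlier
```

A single extreme observation raises the mean by about 15 percentage points here, while the median does not move at all.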
Based on an article by Guvenen, Karahan, Ozkan and Song.
Follow me on twitter at @diedaruich