Yet Another Reason Why Minimum Wage Studies Might “Fail”
Last year, I wrote about why empirical minimum wage studies might fail to find a disemploying effect. In this post, I want to explore yet another reason why it might seem like raising the minimum wage is a free lunch. The reason concerns economics going “beyond p’s and q’s”—a theme I’m fond of promoting on Marginalia.
Here’s my idea in a nutshell. In 2023, there are more available “margins of adjustment” than there were in 1938 (the year of the Fair Labor Standards Act). With more margins available, employers can respond to minimum wage hikes by trimming non-wage perks from the total compensation package while leaving employment untouched.
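To make the mechanism concrete, here’s a minimal sketch in Python with made-up numbers (the function and the $10/$2 split are my own illustration, not figures from any study): an employer meets a new wage floor by trimming non-wage perks, so total hourly labor cost, and therefore the hiring decision, stays the same even as the measured money wage rises.

```python
# Minimal sketch with made-up numbers: a minimum wage hike absorbed on the
# non-wage margin, leaving total hourly labor cost (and hence employment)
# untouched while the measured money wage rises.

def adjust_to_minimum_wage(wage, benefits, new_minimum):
    """Meet the new wage floor by trimming non-wage benefits,
    holding total hourly compensation fixed."""
    total = wage + benefits
    if new_minimum <= wage:
        return wage, benefits  # floor doesn't bind; nothing changes
    if new_minimum > total:
        return None  # benefits alone can't absorb it; other margins (hours, jobs) come into play
    return new_minimum, total - new_minimum

# Hypothetical worker: $10/hour in cash plus $2/hour in perks (AC, coffee, safety).
print(adjust_to_minimum_wage(10.00, 2.00, new_minimum=11.50))
# (11.5, 0.5) -- total labor cost is still $12/hour, so the hiring decision is unchanged
```

A study that tracks only wages and employment would record a raise and no job losses, even though part of the compensation package quietly disappeared.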
What are some possible reasons for there being more margins of adjustment now as compared with 1938?
• Real wages have risen dramatically since the passage of the first federal minimum wage in the United States. As wages rise, the marginal utility of money falls. Thus, employers might be able to attract workers by offering benefits other than simply higher money wages. To an employer, a dollar is a dollar: the employer doesn’t care whether an extra hourly dollar goes into your take-home pay or into additional safety equipment on the factory floor. At some point, that additional dollar is more likely to attract a worker when it’s spent on a safer working environment rather than on a higher money wage. Non-money margins of adjustment proliferate: nicer offices, more safety on the job, coffee in the break room, health insurance, personal time off, and more.
• Employer-provided health insurance, an example of non-money compensation, arose in the United States during World War II. Interestingly, this story itself involves margins of adjustment. Wartime wage controls kept employers from bidding up money wages for scarce labor, so they competed on another margin by offering health insurance, a workaround that didn’t count against the controls. After WWII, the favorable tax treatment of employer-provided health insurance was written into the tax code, so the arrangement persists.
• The regular process of innovation has expanded the set of things that can be offered as compensation. As I understand it, central air conditioning wasn’t widespread until the 1970s. Yet the ability to “turn off the AC” plays a role in Gordon Tullock’s famous thought experiment about the minimum wage: Tullock suggested that employers could shut off the air conditioning to restore profitability in the face of a minimum wage hike. Doing so doesn’t send workers scurrying, because the above-market wage creates a surplus of willing workers anyway.
We know fringe benefits, as a share of compensation, have risen. For instance, Boudreaux and Palagashvili write, “Because fringe benefits today make up a larger share of the typical employee’s pay than they did 40 years ago (about 19% today compared with 10% back then), excluding them fosters the illusion that the workers’ slice of the (bigger) pie is shrinking.”
For the purposes of minimum wage analysis, this “10% to 19%” jump may understate the matter: measured fringe benefits don’t include amenities like air conditioning on a hot factory floor or a reduced threat of on-the-job hazards.
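Still, the measured share gives a rough sense of how much room employers have on this margin. The back-of-the-envelope sketch below is an illustration of my own (nothing from Boudreaux and Palagashvili beyond the quoted shares): it asks how large a wage-floor increase could, in principle, be absorbed by cutting benefits to zero while holding total compensation fixed.

```python
# Back-of-the-envelope: how big a minimum wage hike could be offset purely by
# cutting fringe benefits, given the benefits' share of total compensation.
# Shares are the 10% and 19% figures quoted above; the "fully absorbable hike"
# framing is my own illustrative assumption.

def max_absorbable_hike(benefit_share):
    """Largest proportional increase in the wage floor (over the current money
    wage) that cutting benefits alone could offset, holding total pay fixed."""
    wage_share = 1 - benefit_share
    return 1 / wage_share - 1

for label, share in [("~40 years ago", 0.10), ("today", 0.19)]:
    print(f"{label}: benefits {share:.0%} of pay -> "
          f"hike of up to {max_absorbable_hike(share):.0%} absorbable on this margin")
# ~40 years ago: benefits 10% of pay -> hike of up to 11% absorbable on this margin
# today: benefits 19% of pay -> hike of up to 23% absorbable on this margin
```

The larger the non-wage slice, the larger the hike an employer can offset before employment itself becomes the only remaining margin of adjustment.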
There is also preliminary (more, please!) work showing that employers yank benefits in the face of the minimum wage—for instance, health insurance.
People have made the “margins of adjustment” point a million times as it pertains to the minimum wage. But I’ve never seen anyone note that such adjustments are probably more likely now than when the minimum wage was first introduced.
So, the real questions are: How empirically significant is employers’ ability to adjust on all these other margins? Has that ability played a role in dampening the effects of the minimum wage on employment? Has the importance of this effect grown over time?
I think the logic and theory are sound. But only careful empirical work—work not myopically focused on “p’s” and “q’s”—can provide us with a sense of magnitude.
This article was adapted from Marginalia. You can read the original here.