The Dismal Science

Illustration by Simin Lim

BY NAHUEL FEFER

In 1931, John Maynard Keynes observed that, “If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid.” If anything, however, the public’s regard for economics has fallen since the days of Keynes, and economists have no one to blame but themselves.

Unrealistic Models
Most economists are theorists. They use models predicated on assumptions and conditions that are difficult, if not impossible, to meet in the real world. This isn’t inherently bad, but just as most people wouldn’t trust a physicist to build a bridge, the American people don’t trust economists to steer the economy. Instead, bridges are built by architects and engineers who receive a specialized, largely separate education. Today’s economic policymakers, however, began as theorists, and this lack of specialization has blurred the line between economic theory and economic policy. Policy is too often based on unrealistic, theoretical models that simply do not hold up in the real world.

Economists get so caught up in the logic and beauty of their models that they forget that they are seeing only a simplified picture of what is happening. Most models, for example, rely on the assumptions that agents are omniscient and rational, ideals that do not hold at all in the real world. Other assumptions, such as the presence of perfect competition, are very rarely observed. And thus these theoretically sound models often fail the most basic test: they neither accurately reflect our reality nor accurately predict our future.

Unfalsifiable Models
To make matters worse, economists are bitterly divided over which of the Neo-Keynesian, New Keynesian (yes, they are different), and Real Business Cycle schools of macroeconomic thought (amongst others) provides the most useful theoretical framework. While honest disagreement is not inherently bad, its persistence is not encouraging, as it suggests that some of these disputes may be impossible to resolve.

Part of the problem comes down to practical limitations: key factors like risk aversion or how much value individuals place on future consumption are difficult, if not impossible, to measure. Models often include such unobserved variables, and though there is little doubt that they belong in our economic models and can be debated theoretically, they cannot be tested empirically without making questionable assumptions.

There is certainly hope for the science of economics in the long term – physicists run into the limits of human measurement abilities regularly and have been pushing the boundaries of their science for decades. In the short term, however, economists have no way to resolve subjective differences definitively. In the meantime, just as engineers step away from questions of bosons and neutrinos, policy makers should not concern themselves with grand macroeconomic theories. Instead they should use less ambitious, falsifiable models based on realistic assumptions.

Unstable Models
That economists disagree shouldn’t come as any surprise, but what is worrying is how profoundly even slight changes within the same broad model can change policy prescriptions. Two papers authored by Steve Williamson, an economist here at Washington University, provide a good example. The papers tackle the same issue of quantitative easing with slightly different New Monetarist models. The results were striking: while the first concluded that quantitative easing was inflationary and recommended reducing it, the second found that it was deflationary and recommended ramping it up.

What was the theoretical change responsible for the profoundly altered policy prescription? The models that the central bankers within the model were themselves assumed to be using. Now, tinkering with models is all well and good for theorists, and an important part of the scientific process, but it is worrying that some current policy is based on similarly fragile models. It was this same frustration with ever-changing economic predictions that caused President Truman to famously proclaim, “Give me a one-handed economist! All my economists say, ‘On the one hand… on the other…’” While this may be unavoidable, perhaps policymakers should focus on tackling simpler problems and providing concrete solutions.

Consequences
When people see flip-flopping economists, models that make little intuitive sense, and multiple economists each claiming a corner on economic truth, they either side with the argument closest to their ideological leanings or conclude that economics has little to offer. Unfortunately, as I’ve alluded to above, they may have a point. The theoretical foundations for policy making are often either counterproductive or redundant.

It should be clear by this point that models that assume perfect markets are unrealistic and deeply counterproductive, as they are by definition irrelevant to most economic policy. Even the most realistic model, however, remains redundant so long as it is unfalsifiable and an equally valid model exists that produces contradictory policy recommendations. The existence of multiple theoretically defensible models results in early self-selection amongst economists: those with liberal inclinations learn more and more about the models with progressive implications, while those with conservative tendencies learn more and more about the models with conservative implications. This means that, although we may be getting better and better at defending our policies, most of them remain fundamentally subjective. The justifications economics provides us are no more falsifiable than our political beliefs. They are comforting but ultimately redundant.

Market Failures
While economic theorists should continue pushing the limits of our knowledge, and Professor Williamson should continue his exploration of the impact of quantitative easing on interest rates within New Monetarist models, policy makers should take action with respect to the topics that almost all economists can agree on. Market failures are one such area.

It is no exaggeration to say that all economists agree that perfect markets are the very model of economic health. Markets in which consumers and producers have equal price-setting power, are omniscient and rational, produce no positive or negative externalities, and enjoy well-defined and protected property rights are not to be altered. The presence of market failures, however, implies the possibility of an efficiency gain if the cost of minimizing the failure through government action is less than the potential benefit.

Where market failures exist, we can use both theoretical refinement and quantitative research to craft effective policy and finally talk dentistry, or, for the sake of this metaphor, medicine. Unfortunately, as a quick scan of the U.S. economy indicates, the existence of significant market failures is not up for debate.

Stagnating Wages
Real average compensation growth (wages + benefits) has trailed average productivity growth in the United States since the early 1980s. This is worrying because, given perfect markets, a worker’s compensation should equal the value of the marginal output that the worker produces. The decoupling of compensation and productivity growth indicates that the U.S. labor market is hindered by market failure and inefficiency.
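To make that benchmark concrete, here is a minimal sketch, a toy Cobb-Douglas example with made-up parameters rather than an estimate of the actual U.S. economy, of the textbook condition at work: a profit-maximizing firm in a competitive labor market hires until the value of the marginal product equals the wage, which is exactly why compensation should track productivity.

```python
# A toy illustration of the perfect-markets benchmark: compensation equals
# the value of the marginal product. The technology and numbers below are
# illustrative assumptions, not estimates of the real U.S. economy.

ALPHA, SCALE, PRICE = 0.7, 100.0, 1.0  # output = SCALE * labor**ALPHA, sold at PRICE


def marginal_product(labor):
    """Marginal product of labor, with capital held fixed."""
    return SCALE * ALPHA * labor ** (ALPHA - 1)


def labor_demand(wage):
    """Employment at which PRICE * marginal product equals the wage."""
    return (PRICE * SCALE * ALPHA / wage) ** (1 / (1 - ALPHA))


wage = 20.0
employment = labor_demand(wage)
# At this employment level pay equals the value of the marginal product,
# the benchmark against which the pay-productivity gap is measured.
print(employment, PRICE * marginal_product(employment))  # second value is about 20.0
```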

Many suggest that inequality is increasing, a fear validated by the U.S. Gini coefficient, which hit 0.477 in 2012, its highest value since the Great Depression. Symptoms of stagnating compensation growth and rising inequality include depressed demand and anemic growth, as well as reduced socioeconomic mobility.
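For readers unfamiliar with the statistic, the Gini coefficient summarizes how far an income distribution departs from perfect equality, from 0 (everyone earns the same) to 1 (one household earns everything). The short sketch below shows how it is computed; the five incomes are made-up numbers chosen for illustration, not Census data.

```python
# A minimal Gini coefficient calculation from a list of incomes.
# The incomes below are invented for illustration, not Census figures.

def gini(incomes):
    """Gini coefficient via the mean absolute difference formula:
    G = (sum over all pairs of |x_i - x_j|) / (2 * n**2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)


print(gini([10_000, 25_000, 40_000, 80_000, 250_000]))  # higher means more unequal
```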

Bargaining Power
Here we arrive at some partisan disagreements, but they are not as fundamental as they first appear. Liberal economists such as Lawrence Mishel and Jared Bernstein argue that compensation has been decoupled from productivity as a result of workers’ loss of bargaining power. They cite three trends: deregulation, which has reduced the costs companies face when lowering wages or firing workers, making threats of both more credible and strengthening their negotiating hand; reduced benefits, which have raised the costs of unemployment, effectively disincentivizing workers from risking unemployment by striking or employing other tough negotiation tactics; and globalization, which has given businesses cheap alternatives to U.S. workers, forcing many employees to either accept wage stagnation or lose their jobs.

These economists go on to argue that this is, in effect, a loss of wage-setting power, and that labor markets – particularly those populated by replaceable, low-skilled workers with minimal negotiating power – are suffering from market failure (monopsony). Given these assumptions, many advocate for a minimum wage to place some constraints on negotiation. They argue that if labor markets are suffering from market failure, raising the minimum wage would increase efficiency and, paradoxically, employment.
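Because the monopsony argument is easy to lose in prose, the sketch below works through a toy numerical example, with linear supply and marginal-product curves chosen purely for illustration, in which a minimum wage set between the monopsony wage and the competitive wage raises employment rather than reducing it.

```python
# Toy monopsony labor market: an upward-sloping labor supply curve means the
# marginal cost of hiring exceeds the wage, so a single employer hires fewer
# workers at a lower wage than a competitive market would. All curves and
# numbers are illustrative assumptions, not estimates of any real market.

A, B = 5.0, 0.5    # inverse labor supply: wage workers require = A + B * L
C, D = 25.0, 0.5   # marginal revenue product of labor: MRP = C - D * L

# Competitive benchmark: the supply price of labor equals its MRP.
L_comp = (C - A) / (B + D)
w_comp = A + B * L_comp

# Monopsony: marginal cost of labor is A + 2 * B * L; set it equal to MRP.
L_mono = (C - A) / (2 * B + D)
w_mono = A + B * L_mono

# A binding minimum wage between the monopsony and competitive wages: the firm
# can hire anyone willing to work at that wage, so employment is the quantity
# of labor supplied at w_min (which is still below labor demand here).
w_min = (w_mono + w_comp) / 2
L_min = (w_min - A) / B

print(f"competitive: L = {L_comp:.1f}, w = {w_comp:.1f}")
print(f"monopsony:   L = {L_mono:.1f}, w = {w_mono:.1f}")
print(f"minimum wage {w_min:.1f}: L = {L_min:.1f} (employment rises)")
```

In the competitive parts of the labor market, of course, the same floor would bind above the market-clearing wage and cut employment, which is precisely the objection discussed next.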

Unsurprisingly, conservative economists disagree. They argue that labor markets are currently operating efficiently, and that a minimum wage represents a tax on the very companies hiring low-skilled workers. They expect that raising the minimum wage would produce job loss and inefficiency. Some economists who acknowledge inequality as a problem, like Gregory Mankiw, propose the earned income tax credit as a solution instead.

Industry Specific Minimum Wages
The disagreement over the existence of market failure is a superficial impasse: since both parties agree that government action is called for if a market failure exists, the question is an empirical one that can be resolved with data. Unlike the subjective value of a model, market failure is a verifiable fact, and one that economists have good reason to disagree about, given that the United States contains many distinct labor markets, some of which are competitive while others are monopsonistic.

Both sides need to stop cherry-picking the data to prove their ideological point. Embracing a more nuanced reality allows us to embrace more nuanced policy, such as industry- and region-specific minimum wage laws. An approach pioneered by Germany upon the integration of East Germany, this strategy acknowledges that productivity varies by industry and geographic region, while ensuring that wages do not fall below the competitive market level anywhere.

Such a policy would avoid both the pitfall of failing to raise the minimum wage (leaving inefficient labor markets broken) and that of raising it across the board (introducing inefficiency into currently competitive markets), and would magnify the benefits by reducing market failure at all income levels. While making such a policy a reality in the United States would be politically difficult, I find the existence of an apolitical policy that, by sheer economic logic, increases efficiency across the board comforting.

 
